Dataset schema: Unnamed: 0 (int64, values 0 to 1.83k); Clean_Title (string, lengths 8 to 153); Clean_Text (string, lengths 330 to 2.26k); Clean_Summary (string, lengths 53 to 295).
1,400
Variational Autoencoder with Arbitrary Conditioning
We propose a single neural probabilistic model based on a variational autoencoder that can be conditioned on an arbitrary subset of observed features and then sample the remaining features in "one shot". The features may be both real-valued and categorical. Training of the model is performed by stochastic variational Bayes. The experimental evaluation on synthetic data, as well as on feature imputation and image inpainting problems, shows the effectiveness of the proposed approach and the diversity of the generated samples.
We propose an extension of conditional variational autoencoder that allows conditioning on an arbitrary subset of the features and sampling the remaining ones.
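A minimal sketch of the conditioning mechanism described above, assuming a mask-concatenation scheme and illustrative layer sizes (this is a reading of the abstract, not the authors' code):

```python
# Hypothetical sketch: condition a VAE-style model on an arbitrary observed
# subset by zeroing unobserved features and appending the binary mask.
import torch
import torch.nn as nn

class ArbitraryConditioningVAE(nn.Module):
    def __init__(self, n_features=10, latent=8, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(2 * n_features, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * latent))
        self.decoder = nn.Sequential(nn.Linear(latent + 2 * n_features, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_features))

    def sample_unobserved(self, x, mask):
        # mask[i] = 1 where feature i is observed
        cond = torch.cat([x * mask, mask], dim=-1)
        mu, logvar = self.encoder(cond).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        x_hat = self.decoder(torch.cat([z, cond], dim=-1))
        # keep observed values, fill in the rest "in one shot"
        return x * mask + x_hat * (1 - mask)

model = ArbitraryConditioningVAE()
x, mask = torch.randn(4, 10), (torch.rand(4, 10) > 0.5).float()
imputed = model.sample_unobserved(x, mask)
```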
1,401
Hierarchical Representations for Efficient Architecture Search
We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches. We also present results using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the search time from 36 hours down to 1 hour.
In this paper we propose a hierarchical architecture representation in which doing random or evolutionary architecture search yields highly competitive results using fewer computational resources than the prior art.
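The evolutionary search loop the abstract alludes to can be as simple as tournament selection plus mutation; a generic sketch with placeholder genotype and fitness functions (the hierarchical representation itself is the paper's contribution and is not reproduced here):

```python
# Illustrative tournament-selection evolution loop; random_genotype, mutate,
# and fitness are user-supplied placeholders, not the paper's operators.
import random

def evolve(random_genotype, mutate, fitness, pop_size=50, steps=200, tournament=5):
    population = [random_genotype() for _ in range(pop_size)]
    scores = [fitness(g) for g in population]
    for _ in range(steps):
        idxs = random.sample(range(pop_size), tournament)
        parent = max(idxs, key=lambda i: scores[i])   # best of the tournament
        child = mutate(population[parent])
        worst = min(idxs, key=lambda i: scores[i])    # replace the tournament's worst
        population[worst], scores[worst] = child, fitness(child)
    best = max(range(pop_size), key=lambda i: scores[i])
    return population[best], scores[best]
```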
1,402
Hallucinative Topological Memory for Zero-Shot Visual Planning
In visual planning (VP), an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline, e.g., images obtained from self-supervised robot interaction. VP algorithms essentially combine data-driven perception and planning, and are important for robotic manipulation and navigation domains, among others. A recent and promising approach to VP is the semi-parametric topological memory (SPTM) method, where image samples are treated as nodes in a graph, and the connectivity in the graph is learned using deep image classification. Thus, the learned graph represents the topological connectivity of the data, and planning can be performed using conventional graph search methods. However, training SPTM necessitates a suitable loss function for the connectivity classifier, which requires non-trivial manual tuning. More importantly, SPTM is constricted in its ability to generalize to changes in the domain, as its graph is constructed from direct observations and thus requires collecting new samples for planning. In this paper, we propose Hallucinative Topological Memory (HTM), which overcomes these shortcomings. In HTM, instead of training a discriminative classifier we train an energy function using contrastive predictive coding. In addition, we learn a conditional VAE model that generates samples given a context image of the domain, and use these hallucinated samples for building the connectivity graph, allowing for zero-shot generalization to domain changes. In simulated domains, HTM outperforms conventional SPTM and visual foresight methods in terms of both plan quality and success in long-horizon planning.
We propose Hallucinative Topological Memory (HTM), a visual planning algorithm that can perform zero-shot long horizon planning in new environments.
1,403
Effective and Efficient Batch Normalization Using Few Uncorrelated Data for Statistics' Estimation
Deep Neural Networks (DNNs) have thrived in recent years, with Batch Normalization (BN) playing an indispensable role. However, it has been observed that BN is costly due to its reduction operations. In this paper, we propose alleviating BN's cost by using only a small fraction of data for mean and variance estimation at each iteration. The key challenge is to achieve a satisfactory balance between normalization effectiveness and execution efficiency. We identify that effectiveness expects less data correlation while efficiency expects a regular execution pattern. To this end, we propose two categories of approach: sampling or creating a few uncorrelated data points for statistics' estimation under certain strategy constraints. The former includes “Batch Sampling”, which randomly selects a few samples from each batch, and “Feature Sampling”, which randomly selects a small patch from each feature map of all samples; the latter is “Virtual Dataset Normalization”, which generates a few synthetic random samples. Accordingly, multi-way strategies are designed to reduce the data correlation for accurate estimation and optimize the execution pattern for running acceleration at the same time. All the proposed methods are comprehensively evaluated on various DNN models, where an overall training speedup of up to 21.7% on modern GPUs can be practically achieved without the support of any specialized libraries, and the loss of model accuracy and convergence rate is negligible. Furthermore, our methods demonstrate powerful performance when solving the well-known “micro-batch normalization” problem in the case of tiny batch sizes.
We propose accelerating Batch Normalization (BN) through sampling less correlated data for reduction operations with regular execution pattern, which achieves up to 2x and 20% speedup for BN itself and the overall training, respectively.
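A rough sketch of the "Batch Sampling" variant as we read it: estimate the per-channel statistics from a random fraction of the batch, then normalize the full batch with them (hypothetical code, not the authors' implementation):

```python
import torch

def batch_sampling_bn(x, gamma, beta, sample_ratio=0.25, eps=1e-5):
    # x: (N, C, H, W); statistics come from k << N randomly chosen samples
    n = x.shape[0]
    k = max(1, int(n * sample_ratio))
    sample = x[torch.randperm(n)[:k]]
    mean = sample.mean(dim=(0, 2, 3), keepdim=True)   # cheap reduction
    var = sample.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma.view(1, -1, 1, 1) * x_hat + beta.view(1, -1, 1, 1)

y = batch_sampling_bn(torch.randn(32, 16, 8, 8), torch.ones(16), torch.zeros(16))
```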
1,404
SHE2: Stochastic Hamiltonian Exploration and Exploitation for Derivative-Free Optimization
Derivative-free optimization using trust region methods is frequently used for machine learning applications, such as parameter optimization when the derivatives of objective functions are not known. Inspired by recent work on continuous-time minimizers, our work models common trust region methods with exploration-exploitation using a dynamical system coupling a pair of dynamical processes. While the first, exploration, process searches for the minimum of the black-box function by minimizing a time-evolving surrogate function, the second, exploitation, process updates the surrogate function from time to time using the points traversed by the exploration process. The efficiency of derivative-free optimization thus depends on the way the two processes couple. In this paper, we propose a novel dynamical system, SHE2 (Stochastic Hamiltonian Exploration and Exploitation), that surrogates subregions of the black-box function using a time-evolving quadratic function, then explores and tracks the minimum of the quadratic functions using a fast-converging Hamiltonian system. The SHE2 algorithm is then provided as a discrete-time numerical approximation to this system. To further accelerate optimization, we present a parallel extension that runs multiple SHE2 threads for concurrent exploration and exploitation. Experimental results on a wide range of machine learning applications show that the parallel variant outperforms a broad range of derivative-free optimization algorithms with faster convergence speed under the same settings.
A new derivative-free optimization algorithm derived from Nesterov's accelerated gradient methods and Hamiltonian dynamics.
1,405
Binarized Back-Propagation: Training Binarized Neural Networks with Binarized Gradients
Binarized Neural Networks (BNNs) have been shown to be effective in improving network efficiency during the inference phase, after the network has been trained. However, BNNs only binarize the model parameters and activations during propagation. Therefore, BNNs do not offer significant efficiency improvements during training, since the gradients are still propagated and used with high precision. We show there is no inherent difficulty in training BNNs using "Binarized Back-Propagation" (BBP), in which we also binarize the gradients. To avoid significant degradation in test accuracy, we simply increase the number of filter maps in each convolution layer. Using BBP on dedicated hardware can potentially significantly improve execution efficiency and speed up the training process, even after such an increase in network size. Moreover, our method is ideal for distributed learning as it reduces communication costs significantly. Using this method, we demonstrate a minimal loss in classification accuracy on several datasets and topologies.
Binarized Back-Propagation: all you need for completely binarized training is to inflate the size of the network.
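The core idea, as we understand it, in a few lines: quantize each gradient tensor to its sign, optionally scaled so update magnitudes stay reasonable (the scaling is an assumption; the paper's exact scheme may differ):

```python
import torch

def binarized_gradient_step(params, lr=0.01):
    # Hypothetical sketch: each gradient element carries only its sign (1 bit),
    # with a per-tensor scale taken from the mean gradient magnitude.
    for p in params:
        if p.grad is not None:
            scale = p.grad.abs().mean()
            p.data -= lr * scale * p.grad.sign()
```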
1,406
On the Selection of Initialization and Activation Function for Deep Neural Networks
The weight initialization and the activation function of deep neural networks have a crucial impact on the performance of the training procedure. An inappropriate selection can lead to the loss of information of the input during forward propagation and the exponential vanishing/exploding of gradients during back-propagation. Understanding the theoretical properties of untrained random networks is key to identifying which deep networks may be trained successfully, as recently demonstrated by Schoenholz et al., who showed that for deep feedforward neural networks only a specific choice of hyperparameters known as the 'edge of chaos' can lead to good performance. We complete this analysis by providing quantitative results showing that, for a class of ReLU-like activation functions, the information indeed propagates deeper for an initialization at the edge of chaos. By further extending this analysis, we identify a class of activation functions that improve the information propagation over ReLU-like functions. This class includes the Swish activation, used in Hendrycks & Gimpel, Elfwing et al., and Ramachandran et al. This provides a theoretical grounding for the excellent empirical performance observed in these contributions. We complement these previous results by illustrating the benefit of using a random initialization on the edge of chaos in this context.
How to effectively choose the initialization and activation function for deep neural networks.
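For ReLU networks, the edge-of-chaos condition reduces to variance-preserving Gaussian weights (sigma_w^2 = 2/fan_in) with zero bias variance, i.e. He initialization; a small numerical check under that assumption:

```python
import numpy as np

def edge_of_chaos_init(fan_in, fan_out, rng):
    # sigma_w^2 = 2 / fan_in keeps the pre-activation variance fixed with depth
    W = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))
    return W, np.zeros(fan_out)   # sigma_b = 0 on the ReLU edge of chaos

x = np.random.default_rng(1).normal(size=(512,))
for depth in range(50):
    W, b = edge_of_chaos_init(512, 512, np.random.default_rng(depth))
    x = np.maximum(W @ x + b, 0.0)
print(np.var(x))   # stays O(1) instead of vanishing or exploding
```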
1,407
Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs
The driving force behind the recent success of LSTMs has been their ability to learn complex and non-linear relationships. Consequently, our inability to describe these relationships has led to LSTMs being characterized as black boxes. To this end, we introduce contextual decomposition (CD), an interpretation algorithm for analysing individual predictions made by standard LSTMs, without any changes to the underlying model. By decomposing the output of an LSTM, CD captures the contributions of combinations of words or variables to the final prediction of an LSTM. On the task of sentiment analysis with the Yelp and SST data sets, we show that CD is able to reliably identify words and phrases of contrasting sentiment, and how they are combined to yield the LSTM's final prediction. Using the phrase-level labels in SST, we also demonstrate that CD is able to successfully extract positive and negative negations from an LSTM, something which has not previously been done.
We introduce contextual decomposition, an interpretation algorithm for LSTMs capable of extracting word-, phrase- and interaction-level importance scores.
1,408
Phase-Aware Speech Enhancement with Deep Complex U-Net
Most deep learning-based models for speech enhancement have mainly focused on estimating the magnitude of the spectrogram while reusing the phase from noisy speech for reconstruction. This is due to the difficulty of estimating the phase of clean speech. To improve speech enhancement performance, we tackle the phase estimation problem in three ways. First, we propose Deep Complex U-Net, an advanced U-Net structured model incorporating well-defined complex-valued building blocks to deal with complex-valued spectrograms. Second, we propose a polar coordinate-wise complex-valued masking method to reflect the distribution of complex ideal ratio masks. Third, we define a novel loss function, the weighted source-to-distortion ratio (wSDR) loss, which is designed to directly correlate with a quantitative evaluation measure. Our model was evaluated on a mixture of the Voice Bank corpus and the DEMAND database, which has been widely used by many deep learning models for speech enhancement. Ablation experiments conducted on the mixed dataset show that all three proposed approaches are empirically valid. Experimental results show that the proposed method achieves state-of-the-art performance in all metrics, outperforming previous approaches by a large margin.
This paper proposes a novel complex masking method for speech enhancement along with a loss function for efficient phase estimation.
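A hedged sketch of a weighted source-to-distortion-ratio style loss in the spirit of the abstract: a negative-cosine SDR term on the speech estimate, weighted against the same term on the residual noise (the weighting and epsilon are our assumptions):

```python
import torch

def neg_sdr(x, x_hat, eps=1e-8):
    return -torch.sum(x * x_hat) / (x.norm() * x_hat.norm() + eps)

def weighted_sdr_loss(noisy, clean, estimate, eps=1e-8):
    # weight by the relative energy of speech vs. noise in the mixture
    noise, noise_hat = noisy - clean, noisy - estimate
    alpha = clean.pow(2).sum() / (clean.pow(2).sum() + noise.pow(2).sum() + eps)
    return alpha * neg_sdr(clean, estimate) + (1 - alpha) * neg_sdr(noise, noise_hat)
```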
1,409
SMiRL: Surprise Minimizing RL in Entropic Environments
All living organisms struggle against the forces of nature to carve out niches where they can maintain relative stasis. We propose that such a search for order amidst chaos might offer a unifying principle for the emergence of useful behaviors in artificial agents. We formalize this idea into an unsupervised reinforcement learning method called surprise minimizing RL (SMiRL). SMiRL trains an agent with the objective of maximizing the probability of observed states under a model trained on all previously seen states. The resulting agents acquire several proactive behaviors to seek and maintain stable states, such as balancing and damage avoidance, that are closely tied to the affordances of the environment and its prevailing sources of entropy, such as winds, earthquakes, and other agents. We demonstrate that our surprise minimizing agents can successfully play Tetris, Doom, and control a humanoid to avoid falls, without any task-specific reward supervision. We further show that SMiRL can be used as an unsupervised pre-training objective that substantially accelerates subsequent reward-driven learning.
Learning emergent behavior by minimizing Bayesian surprise with RL in natural environments with entropy.
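A sketch of the surprise-minimizing reward under one concrete model choice (a diagonal Gaussian over states, which is our assumption): the agent is rewarded for the log-likelihood of each new state under a density fitted to all previously seen states.

```python
import numpy as np

class SurpriseMinimizingReward:
    def __init__(self, dim):
        self.n, self.mean, self.m2 = 0, np.zeros(dim), np.ones(dim)

    def update_and_reward(self, state):
        # reward = log p(state) under the running diagonal Gaussian
        var = self.m2 / max(self.n, 1) + 1e-6
        reward = -0.5 * np.sum(np.log(2 * np.pi * var) + (state - self.mean) ** 2 / var)
        # then fold the state into the running statistics (Welford update)
        self.n += 1
        delta = state - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (state - self.mean)
        return reward
```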
1,410
Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm
Learning to learn is a powerful paradigm for enabling models to learn from data more effectively and efficiently. A popular approach to meta-learning is to train a recurrent model to read in a training dataset as input and output the parameters of a learned model, or output predictions for new test inputs. Alternatively, a more recent approach to meta-learning aims to acquire deep representations that can be effectively fine-tuned, via standard gradient descent, to new tasks. In this paper, we consider the meta-learning problem from the perspective of universality, formalizing the notion of learning algorithm approximation and comparing the expressive power of the aforementioned recurrent models to the more recent approaches that embed gradient descent into the meta-learner. In particular, we seek to answer the following question: does deep representation combined with standard gradient descent have sufficient capacity to approximate any learning algorithm? We find that this is indeed true, and further find, in our experiments, that gradient-based meta-learning consistently leads to learning strategies that generalize more widely compared to those represented by recurrent models.
Deep representations combined with gradient descent can approximate any learning algorithm.
1,411
Translating neural signals to text using a Brain-Computer Interface
Brain-Computer Interfaces (BCIs) may help patients with faltering communication abilities due to neurodegenerative diseases produce text or speech by direct neural processing. However, their practical realization has proven difficult due to limitations in speed, accuracy, and generalizability of existing interfaces. To this end, we aim to create a BCI that decodes text directly from neural signals. We implement a framework that initially isolates frequency bands in the input signal encapsulating differential information regarding production of various phonemic classes. These bands form a feature set that feeds into an LSTM which discerns at each time point probability distributions across all phonemes uttered by a subject. Finally, a particle filtering algorithm temporally smooths these probabilities, incorporating prior knowledge of the English language, to output text corresponding to the decoded word. Further, in producing an output, we abstain from constraining the reconstructed word to be from a given bag-of-words, unlike previous studies. The empirical success of our proposed approach offers promise for the employment of such an interface by patients in unfettered, naturalistic environments.
We present an open-loop brain-machine interface whose performance is not constrained by the traditionally used bag-of-words approach.
1,412
An Out-of-the-box Full-network Embedding for Convolutional Neural Networks
Transfer learning for feature extraction can be used to exploit deep representations in contexts where there is very little training data, where there are limited computational resources, or when tuning the hyper-parameters needed for training is not an option. While previous contributions to feature extraction propose embeddings based on a single layer of the network, in this paper we propose a full-network embedding which successfully integrates convolutional and fully connected features, coming from all layers of a deep convolutional neural network. To do so, the embedding normalizes features in the context of the problem, and discretizes their values to reduce noise and regularize the embedding space. Significantly, this also reduces the computational cost of processing the resultant representations. The proposed method is shown to outperform single-layer embeddings on several image classification tasks, while also being more robust to the choice of the pre-trained model used for obtaining the initial features. The performance gap in classification accuracy between thoroughly tuned solutions and the full-network embedding is also reduced, which makes the proposed approach a competitive solution for a large set of applications.
We present a full-network embedding of a CNN which outperforms single-layer embeddings for transfer learning tasks.
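A minimal sketch of the described pipeline as we read it: concatenate activations from every layer, standardize each feature in the context of the target dataset, then discretize to a ternary code (the thresholds are illustrative assumptions):

```python
import numpy as np

def full_network_embedding(per_layer_feats, low=-0.25, high=0.25):
    # per_layer_feats: list of (n_images, n_features_of_layer) arrays
    feats = np.concatenate(per_layer_feats, axis=1)
    z = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
    emb = np.zeros_like(z, dtype=np.int8)
    emb[z > high] = 1    # characteristically high feature
    emb[z < low] = -1    # characteristically low feature
    return emb           # sparse ternary embedding, cheap to process downstream
```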
1,413
Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds. These thresholds can have fine-grained layer-wise adjustments dynamically via backpropagation. We demonstrate that our dynamic sparse training algorithm can easily train very sparse neural network models with little performance loss using the same number of training epochs as dense models. Dynamic Sparse Training matches state-of-the-art performance compared with other sparse training algorithms on various network architectures. Additionally, we have several surprising observations that provide strong evidence for the effectiveness and efficiency of our algorithm. These observations reveal the underlying problems of traditional three-stage pruning algorithms and present the potential guidance our algorithm provides for the design of more compact network architectures.
We present a novel network pruning method that can find the optimal sparse structure during the training process with trainable pruning thresholds.
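A sketch of a trainable masked layer in the spirit of the abstract: a layer-wise threshold is learned jointly with the weights, with a straight-through estimator passing gradients through the binary pruning step (simplifications relative to the paper are likely):

```python
import torch
import torch.nn as nn

class BinaryStep(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return (x > 0).float()
    @staticmethod
    def backward(ctx, g):
        return g  # straight-through: gradients pass unchanged

class MaskedLinear(nn.Module):
    def __init__(self, in_f, out_f):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) * 0.01)
        self.threshold = nn.Parameter(torch.zeros(out_f, 1))  # trainable, per layer

    def forward(self, x):
        mask = BinaryStep.apply(self.weight.abs() - self.threshold)
        return x @ (self.weight * mask).t()
```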
1,414
Evaluating biological plausibility of learning algorithms the lazy way
To what extent can successful machine learning inform our understanding of biological learning? One popular avenue of inquiry in recent years has been to directly map such algorithms into a realistic circuit implementation. Here we focus on learning in recurrent networks and investigate a range of learning algorithms. Our approach decomposes them into their computational building blocks and discusses their abstract potential as biological operations. This alternative strategy provides a “lazy” but principled way of evaluating ML ideas in terms of their biological plausibility.
We evaluate new ML learning algorithms' biological plausibility in the abstract, based on the mathematical operations needed.
1,415
Provable robustness against all adversarial $l_p$-perturbations for $p\geq 1$
In recent years several adversarial attacks and defenses have been proposed. Often seemingly robust models turn out to be non-robust when more sophisticated attacks are used. One way out of this dilemma is provable robustness guarantees. While provably robust models for specific $l_p$-perturbation models have been developed, we show that they do not come with any guarantee against other $l_q$-perturbations. We propose a new regularization scheme, MMR-Universal, for ReLU networks which enforces robustness wrt $l_1$- and $l_\infty$-perturbations, and show how that leads to the first provably robust models wrt any $l_p$-norm for $p\geq 1$.
We introduce a method to train models with provable robustness wrt all $l_p$-norms for $p\geq 1$ simultaneously.
1,416
Dynamically Unfolding Recurrent Restorer: A Moving Endpoint Control Method for Image Restoration
In this paper, we propose a new control framework called moving endpoint control to restore images corrupted by different degradation levels in one model. The proposed control problem contains a restoration dynamics which is modeled by an RNN. The moving endpoint, which is essentially the terminal time of the associated dynamics, is determined by a policy network. We call the proposed model the dynamically unfolding recurrent restorer (DURR). Numerical experiments show that DURR is able to achieve state-of-the-art performance on blind image denoising and JPEG image deblocking. Furthermore, DURR generalizes well to images with higher degradation levels that are not included in the training stage.
We propose a novel method to handle image degradations of different levels by learning a diffusion terminal time. Our model can generalize to unseen degradation levels and different noise statistics.
1,417
Learning Wasserstein Embeddings
The Wasserstein distance has received a lot of attention recently in the machine learning community, especially for its principled way of comparing distributions. It has found numerous applications in several hard problems, such as domain adaptation, dimensionality reduction, and generative models. However, its use is still limited by a heavy computational cost. Our goal is to alleviate this problem by providing an approximation mechanism that breaks its inherent complexity. It relies on the search for an embedding where the Euclidean distance mimics the Wasserstein distance. We show that such an embedding can be found with a siamese architecture associated with a decoder network that allows moving from the embedding space back to the original input space. Once this embedding has been found, optimization problems in the Wasserstein space can be solved extremely fast. Numerical experiments supporting this idea are conducted on image datasets, and show the wide potential benefits of our method.
We show that it is possible to quickly approximate Wasserstein distance computation by finding an appropriate embedding where the Euclidean distance emulates the Wasserstein distance.
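The siamese regression described above fits in a few lines; a sketch with placeholder architecture and data, assuming ground-truth Wasserstein distances are precomputed for training pairs:

```python
import torch
import torch.nn as nn

phi = nn.Sequential(nn.Linear(100, 128), nn.ReLU(), nn.Linear(128, 32))
opt = torch.optim.Adam(phi.parameters(), lr=1e-3)

def train_step(x1, x2, w_dist):
    # x1, x2: (batch, 100) histograms; w_dist: (batch,) precomputed distances
    d_hat = (phi(x1) - phi(x2)).norm(dim=1)   # Euclidean distance in embedding
    loss = ((d_hat - w_dist) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```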
1,418
CBOW Is Not All You Need: Combining CBOW with the Compositional Matrix Space Model
Continuous Bag of Words (CBOW) is a powerful text embedding method. Due to its strong capabilities to encode word content, CBOW embeddings perform well on a wide range of downstream tasks while being efficient to compute. However, CBOW is not capable of capturing word order. The reason is that the computation of CBOW's word embeddings is commutative, i.e., the embeddings of XYZ and ZYX are the same. In order to address this shortcoming, we propose a learning algorithm for the Compositional Matrix Space Model, which we call Continual Multiplication of Words (CMOW). Our algorithm is an adaptation of word2vec, so that it can be trained on large quantities of unlabeled text. We empirically show that CMOW better captures linguistic properties, but it is inferior to CBOW in memorizing word content. Motivated by these findings, we propose a hybrid model that combines the strengths of CBOW and CMOW. Our results show that the hybrid CBOW-CMOW model retains CBOW's strong ability to memorize word content while at the same time substantially improving its ability to encode other linguistic information by 8%. As a result, the hybrid also performs better on 8 out of 11 supervised downstream tasks, with an average improvement of 1.2%.
We present a novel training scheme for efficiently obtaining order-aware sentence representations.
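The order sensitivity of the matrix space model is easy to see in code: each word is a d x d matrix and a sentence is the ordered matrix product. A toy sketch (dimensions and near-identity initialization are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 16, 1000
# initialize near the identity so long products neither vanish nor explode
word_matrices = np.stack([np.eye(d) + 0.01 * rng.normal(size=(d, d))
                          for _ in range(vocab)])

def encode(token_ids):
    s = np.eye(d)
    for t in token_ids:
        s = s @ word_matrices[t]   # order-sensitive, unlike CBOW's sum
    return s.flatten()             # d*d sentence embedding

print(np.allclose(encode([1, 2, 3]), encode([3, 2, 1])))  # False in general
```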
1,419
Generative Models from the perspective of Continual Learning
Which generative model is the most suitable for Continual Learning? This paper aims at evaluating and comparing generative models on disjoint sequential image generation tasks. We investigate how several models learn and forget, considering various strategies: rehearsal, regularization, generative replay and fine-tuning. We use two quantitative metrics to estimate generation quality and memory ability. We experiment with sequential tasks on three commonly used benchmarks for Continual Learning. We find that among all models, the original GAN performs best and, among Continual Learning strategies, generative replay outperforms all other methods. Even though we found satisfactory combinations on MNIST and Fashion MNIST, training generative models sequentially on CIFAR10 is particularly unstable, and remains a challenge.
A comparative study of generative models on Continual Learning scenarios.
1,420
Supervised Policy Update for Deep Reinforcement Learning
We propose a new sample-efficient methodology, called Supervised Policy Update (SPU), for deep reinforcement learning. Starting with data generated by the current policy, SPU formulates and solves a constrained optimization problem in the non-parameterized proximal policy space. Using supervised regression, it then converts the optimal non-parameterized policy to a parameterized policy, from which it draws new samples. The methodology is general in that it applies to both discrete and continuous action spaces, and can handle a wide variety of proximity constraints for the non-parameterized optimization problem. We show how the Natural Policy Gradient, Trust Region Policy Optimization (TRPO), and Proximal Policy Optimization (PPO) problems can be addressed by this methodology. The SPU implementation is much simpler than TRPO. In terms of sample efficiency, our extensive experiments show SPU outperforms TRPO in Mujoco simulated robotic tasks and outperforms PPO in Atari video game tasks.
first posing and solving the sample efficiency optimization problem in the non-parameterized policy space, and then solving a supervised regression problem to find a parameterized policy that is near the optimal non-parameterized policy.
1,421
How well do deep neural networks trained on object recognition characterize the mouse visual system?
Recent work on modeling neural responses in the primate visual system has benefited from deep neural networks trained on large-scale object recognition, and found a hierarchical correspondence between layers of the artificial neural network and brain areas along the ventral visual stream. However, we neither know whether such task-optimized networks enable equally good models of the rodent visual system, nor if a similar hierarchical correspondence exists. Here, we address these questions in the mouse visual system by extracting features at several layers of a convolutional neural network (CNN) trained on ImageNet to predict the responses of thousands of neurons in four visual areas to natural images. We found that the CNN features outperform classical subunit energy models, but found no evidence for an order of the areas we recorded via a correspondence to the hierarchy of CNN layers. Moreover, the same CNN but with random weights provided an equivalently useful feature space for predicting neural responses. Our results suggest that object recognition as a high-level task does not provide more discriminative features to characterize the mouse visual system than a random network. Unlike in the primate, training on ethologically relevant visually guided behaviors -- beyond static object recognition -- may be needed to unveil the functional organization of the mouse visual cortex.
A goal-driven approach to modeling four mouse visual areas (V1, LM, AL, RL) based on deep neural networks trained on static object recognition does not unveil a functional organization of the mouse visual cortex, unlike in primates.
1,422
Recurrent Hierarchical Topic-Guided Neural Language Models
To simultaneously capture syntax and semantics from a text corpus, we propose a new larger-context language model that extracts recurrent hierarchical semantic structure via a dynamic deep topic model to guide natural language generation. Moving beyond a conventional language model that ignores long-range word dependencies and sentence order, the proposed model captures not only intra-sentence word dependencies, but also temporal transitions between sentences and inter-sentence topic dependencies. For inference, we develop a hybrid of stochastic-gradient MCMC and recurrent autoencoding variational Bayes. Experimental results on a variety of real-world text corpora demonstrate that the proposed model not only outperforms state-of-the-art larger-context language models, but also learns interpretable recurrent multilayer topics and generates diverse sentences and paragraphs that are syntactically correct and semantically coherent.
We introduce a novel larger-context language model that simultaneously captures syntax and semantics, making it capable of generating highly interpretable sentences and paragraphs.
1,423
Training a Constrained Natural Media Painting Agent using Reinforcement Learning
We present a novel approach to train a natural media painting agent using reinforcement learning. Given a reference image, our formulation is based on stroke-based rendering that imitates human drawing and can be learned from scratch without supervision. Our painting agent computes a sequence of actions that represent the primitive painting strokes. In order to ensure that the generated policy is predictable and controllable, we use a constrained learning method and train the painting agent with an environment model so that it follows the commands encoded in an observation. We have applied our approach to many benchmarks, and our results demonstrate that our constrained agent can handle different painting media and different constraints in the action space to collaborate with humans or other agents.
We train a natural media painting agent using an environment model. Based on our painting agent, we present a novel approach to train a constrained painting agent that follows the commands encoded in the observation.
1,424
ConQUR: Mitigating Delusional Bias in Deep Q-Learning
Delusional bias is a fundamental source of error in approximate Q-learning. To date, the only techniques that explicitly address delusion require comprehensive search using tabular value estimates. In this paper, we develop efficient methods to mitigate delusional bias by training Q-approximators with labels that are "consistent" with the underlying greedy policy class. We introduce a simple penalization scheme that encourages Q-labels used across training batches to remain consistent with the expressible policy class. We also propose a search framework that allows multiple Q-approximators to be generated and tracked, thus mitigating the effect of premature policy commitments. Experimental results demonstrate that these methods can improve the performance of Q-learning in a variety of Atari games, sometimes dramatically.
We developed a search framework and consistency penalty to mitigate delusional bias.
1,425
APPLICATION OF DEEP CONVOLUTIONAL NEURAL NETWORK TO PREVENT ATM FRAUD BY FACIAL DISGUISE IDENTIFICATION
The paper proposes and demonstrates a Deep Convolutional Neural Network (DCNN) architecture to identify users with disguised faces attempting a fraudulent ATM transaction. The recent introduction of the Disguised Face Identification framework proves the applicability of deep neural networks to this very problem. ATMs nowadays incorporate a hidden camera and capture footage of their users. However, it is impossible for the police to track down impersonators with disguised faces from ATM footage. The proposed deep convolutional neural network is trained to identify, in real time, whether the user in the captured image is trying to cloak his identity or not. The output of the DCNN is then reported to the ATM to take appropriate steps and prevent the swindler from completing the transaction. The network is trained using a dataset of images captured in situations similar to those of an ATM. The comparatively low background clutter in the images enables the network to demonstrate high accuracy in feature extraction and classification for all the different disguises.
The proposed system can prevent impersonators with facial disguises from completing a fraudulent transaction using a pre-trained DCNN.
1,426
Learning Multi-Level Hierarchies with Hindsight
Hierarchical agents have the potential to solve sequential decision making tasks with greater sample efficiency than their non-hierarchical counterparts because hierarchical agents can break down tasks into sets of subtasks that only require short sequences of decisions. In order to realize this potential of faster learning, hierarchical agents need to be able to learn their multiple levels of policies in parallel so these simpler subproblems can be solved simultaneously. Yet, learning multiple levels of policies in parallel is hard because it is inherently unstable: changes in a policy at one level of the hierarchy may cause changes in the transition and reward functions at higher levels in the hierarchy, making it difficult to jointly learn multiple levels of policies. In this paper, we introduce a new Hierarchical Reinforcement Learning framework, Hierarchical Actor-Critic (HAC), that can overcome the instability issues that arise when agents try to jointly learn multiple levels of policies. The main idea behind HAC is to train each level of the hierarchy independently of the lower levels by training each level as if the lower level policies are already optimal. We demonstrate experimentally in both grid world and simulated robotics domains that our approach can significantly accelerate learning relative to other non-hierarchical and hierarchical methods. Indeed, our framework is the first to successfully learn 3-level hierarchies in parallel in tasks with continuous state and action spaces.
We introduce the first Hierarchical RL approach to successfully learn 3-level hierarchies in parallel in tasks with continuous state and action spaces.
1,427
Classification from Positive, Unlabeled and Biased Negative Data
Positive-unlabeled (PU) learning addresses the problem of learning a binary classifier from positive and unlabeled data. It is often applied to situations where negative (N) data are difficult to fully label. However, collecting a non-representative N set that contains only a small portion of all possible N data can be much easier in many practical situations. This paper studies a novel classification framework which incorporates such biased N data in PU learning. The fact that the training N data are biased also makes our work very different from standard semi-supervised learning. We provide an empirical risk minimization-based method to address this PUbN classification problem. Our approach can be regarded as a variant of traditional example-reweighting algorithms, with the weight of each example computed through a preliminary step that draws inspiration from PU learning. We also derive an estimation error bound for the proposed method. Experimental results demonstrate the effectiveness of our algorithm in not only PUbN learning scenarios but also ordinary PU learning scenarios on several benchmark datasets.
This paper studied the PUbN classification problem, where we incorporate biased negative (bN) data, i.e., negative data that is not fully representative of the true underlying negative distribution, into positive-unlabeled (PU) learning.
1,428
On the Weaknesses of Reinforcement Learning for Neural Machine Translation
Reinforcement learning (RL) is frequently used to increase performance in text generation tasks, including machine translation (MT), notably through the use of Minimum Risk Training and Generative Adversarial Networks. However, little is known about what and how these methods learn in the context of MT. We prove that one of the most common RL methods for MT does not optimize the expected reward, and show that other methods take an infeasibly long time to converge. In fact, our results suggest that RL practices in MT are likely to improve performance only where the pre-trained parameters are already close to yielding the correct translation. Our findings further suggest that observed gains may be due to effects unrelated to the training signal, concretely, changes in the shape of the distribution curve.
Reinforcement learning practices for machine translation may yield performance gains that do not come from better predictions.
1,429
Transformer to CNN: Label-scarce distillation for efficient text classification
Significant advances have been made in Natural Language Processing modelling since the beginning of 2018. The new approaches allow for accurate results, even when there is little labelled data, because these NLP models can benefit from training on both task-agnostic and task-specific unlabelled data. However, these advantages come with significant size and computational costs. This workshop paper outlines how our proposed convolutional student architecture, having been trained by a distillation process from a large-scale model, can achieve a 300x inference speedup and a 39x reduction in parameter count. In some cases, the student model performance surpasses its teacher on the studied tasks.
We train a small, efficient CNN with the same performance as the OpenAI Transformer on text classification tasks
1,430
Temporal Difference Weighted Ensemble For Reinforcement Learning
Combining multiple function approximators in machine learning models typically leads to better performance and robustness compared with a single function. In reinforcement learning, ensemble algorithms such as averaging and majority voting are not always optimal, because each function can learn fundamentally different optimal trajectories from exploration. In this paper, we propose the Temporal Difference Weighted (TDW) algorithm, an ensemble method that adjusts the weight of each contribution based on accumulated temporal difference errors. The advantage of this algorithm is that it improves ensemble performance by reducing the weights of Q-functions unfamiliar with current trajectories. We provide experimental results for Gridworld tasks and Atari tasks that show significant performance improvements compared with baseline algorithms.
Ensemble method for reinforcement learning that weights Q-functions based on accumulated TD errors.
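A sketch of the described weighting: each ensemble member accumulates its TD errors along the current trajectory, and action values are combined with softmax weights favoring low-error members (the temperature is our assumption):

```python
import numpy as np

def td_weighted_q(q_values, accumulated_td_errors, temperature=1.0):
    # q_values: (n_members, n_actions); accumulated_td_errors: (n_members,)
    logits = -np.asarray(accumulated_td_errors) / temperature
    w = np.exp(logits - logits.max())
    w /= w.sum()                     # members unfamiliar with the trajectory get low weight
    return w @ np.asarray(q_values)  # combined (n_actions,) value estimate

q = np.array([[1.0, 2.0], [3.0, 0.0]])
print(td_weighted_q(q, accumulated_td_errors=[0.1, 5.0]))  # dominated by member 0
```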
1,431
Behaviour Suite for Reinforcement Learning
This paper introduces the Behaviour Suite for Reinforcement Learning, or bsuite for short. bsuite is a collection of carefully-designed experiments that investigate core capabilities of reinforcement learning (RL) agents with two objectives. First, to collect clear, informative and scalable problems that capture key issues in the design of general and efficient learning algorithms. Second, to study agent behaviour through their performance on these shared benchmarks. To complement this effort, we open source a library which automates evaluation and analysis of any agent on bsuite. This library facilitates reproducible and accessible research on the core issues in RL, and ultimately the design of superior learning algorithms. Our code is in Python, and easy to use within existing projects. We include examples with OpenAI Baselines and Dopamine, as well as new reference implementations. Going forward, we hope to incorporate more excellent experiments from the research community, and commit to a periodic review of bsuite from a committee of prominent researchers.
Bsuite is a collection of carefully-designed experiments that investigate the core capabilities of RL agents.
1,432
The Adaptive Stress Testing Formulation
Validation is a key challenge in the search for safe autonomy. Simulations are often either too simple to provide robust validation, or too complex to tractably compute. Therefore, approximate validation methods are needed to tractably find failures without unsafe simplifications. This paper presents the theory behind one such black-box approach: adaptive stress testing (AST). We also provide three examples of validation problems formulated to work with AST.
A formulation for a black-box, reinforcement learning method to find the most-likely failure of a system acting in complex scenarios.
1,433
Multi-step Greedy Policies in Model-Free Deep Reinforcement Learning
Multi-step greedy policies have been extensively used in model-based Reinforcement Learning (RL) when a model of the environment is available. In this work, we explore the benefits of multi-step greedy policies in model-free RL when employed in the framework of multi-step Dynamic Programming (DP): multi-step Policy and Value Iteration. These algorithms iteratively solve short-horizon decision problems and converge to the optimal solution of the original one. By using model-free algorithms as solvers of the short-horizon problems, we derive fully model-free algorithms which are instances of the multi-step DP framework. As model-free algorithms are prone to instabilities w.r.t. the decision problem horizon, this simple approach can help mitigate these instabilities and results in improved model-free algorithms. We test this approach and show results on both discrete and continuous control problems.
Use model-free algorithms like DQN/TRPO to iteratively solve short-horizon problems in a Policy/Value Iteration fashion.
1,434
The Limitations of Adversarial Training and the Blind-Spot Attack
The adversarial training procedure proposed by Madry et al. is one of the most effective methods to defend against adversarial examples in deep neural networks. In our paper, we shed some light on the practicality and the hardness of adversarial training by showing that the effectiveness of adversarial training has a strong correlation with the distance between a test point and the manifold of training data embedded by the network. Test examples that are relatively far away from this manifold are more likely to be vulnerable to adversarial attacks. Consequently, an adversarial training based defense is susceptible to a new class of attacks, the “blind-spot attack”, where the input images reside in “blind-spots” of the empirical distribution of training data but are still on the ground-truth data manifold. For MNIST, we found that these blind-spots can be easily found by simply scaling and shifting image pixel values. Most importantly, for large datasets with high dimensional and complex data manifolds, the existence of blind-spots in adversarial training makes defending against attacks on valid test examples difficult due to the curse of dimensionality and the scarcity of training data. Additionally, we find that blind-spots also exist in provable defenses, because these trainable robustness certificates can only be practically optimized on a limited set of training data.
We show that even the strongest adversarial training methods cannot defend against adversarial examples crafted on slightly scaled and shifted test images.
1,435
Random Bias Initialization Improving Binary Neural Network Training
Edge intelligence, especially binary neural networks (BNNs), has attracted considerable attention from the artificial intelligence community recently. BNNs significantly reduce computational cost, model size, and memory footprint. However, there is still a performance gap between successful full-precision neural networks with ReLU activation and BNNs. We argue that the accuracy drop of BNNs is due to their geometry. We analyze the behaviour of the full-precision neural network with ReLU activation and compare it with its binarized counterpart. This comparison suggests random bias initialization as a remedy to activation saturation in full-precision networks and leads us towards improved BNN training. Our numerical experiments confirm our geometric intuition.
Improving saturating activations (sigmoid, tanh, htanh, etc.) and binarized neural network training with random bias initialization.
1,436
Capturing Human Category Representations by Sampling in Deep Feature Spaces
Understanding how people represent categories is a core problem in cognitive science, with the flexibility of human learning remaining a gold standard to which modern artificial intelligence and machine learning aspire. Decades of psychological research have yielded a variety of formal theories of categories, yet validating these theories with naturalistic stimuli remains a challenge. The problem is that human category representations cannot be directly observed and running informative experiments with naturalistic stimuli such as images requires having a workable representation of these stimuli. Deep neural networks have recently been successful in a range of computer vision tasks and provide a way to represent the features of images. In this paper, we introduce a method for estimating the structure of human categories that draws on ideas from both cognitive science and machine learning, blending human-based algorithms with state-of-the-art deep representation learners. We provide qualitative and quantitative results as a proof of concept for the feasibility of the method. Samples drawn from human distributions rival the quality of current state-of-the-art generative models and outperform alternative methods for estimating the structure of human categories.
using deep neural networks and clever algorithms to capture human mental visual concepts
1,437
Targeted Adversarial Examples for Black Box Audio Systems
The application of deep recurrent networks to audio transcription has led to impressive gains in automatic speech recognition (ASR) systems. Many have demonstrated that small adversarial perturbations can fool deep neural networks into incorrectly predicting a specified target with high confidence. Current work on fooling ASR systems has focused on white-box attacks, in which the model architecture and parameters are known. In this paper, we adopt a black-box approach to adversarial generation, combining the approaches of both genetic algorithms and gradient estimation to solve the task. We achieve 89.25% targeted attack similarity after 3000 generations while maintaining 94.6% audio file similarity.
We present a novel black-box targeted attack that is able to fool state of the art speech to text transcription.
1,438
Learning to Sit: Synthesizing Human-Chair Interactions via Hierarchical Control
Recent progress on physics-based character animation has shown impressive breakthroughs on human motion synthesis, through imitating motion capture data via deep reinforcement learning. However, results have mostly been demonstrated on imitating a single distinct motion pattern, and do not generalize to interactive tasks that require flexible motion patterns due to varying human-object spatial configurations. To bridge this gap, we focus on one class of interactive tasks: sitting onto a chair. We propose a hierarchical reinforcement learning framework which relies on a collection of subtask controllers trained to imitate simple, reusable mocap motions, and a meta controller trained to execute the subtasks properly to complete the main task. We experimentally demonstrate the strength of our approach over different single-level and hierarchical baselines. We also show that our approach can be applied to motion prediction given an image input. A video highlight can be found at https://youtu.be/XWU3wzz1ip8/.
Synthesizing human motions on interactive tasks using mocap data and hierarchical RL.
1,439
On Convergence and Stability of GANs
We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions. We analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We hypothesize the existence of undesirable local equilibria in this non-convex game to be responsible for mode collapse. We observe that these local equilibria often exhibit sharp gradients of the discriminator function around some real data points. We demonstrate that these degenerate local equilibria can be avoided with a gradient penalty scheme called DRAGAN. We show that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions.
Analysis of convergence and mode collapse by studying GAN training process as regret minimization
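A sketch of a DRAGAN-style gradient penalty: penalize sharp discriminator gradients in a noise neighborhood of the real data. The coefficients follow common usage (lambda = 10, target norm 1) and may differ from the paper's exact settings:

```python
import torch

def dragan_penalty(discriminator, real, lambda_=10.0):
    noise = 0.5 * real.std() * torch.rand_like(real)
    x = (real + noise).requires_grad_(True)   # perturbed real samples
    grad, = torch.autograd.grad(discriminator(x).sum(), x, create_graph=True)
    return lambda_ * ((grad.flatten(1).norm(dim=1) - 1.0) ** 2).mean()
```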
1,440
Interpretability Evaluation Framework for Deep Neural Networks
Deep neural networks (DNNs) have attained surprising achievements during the last decade due to the advantages of automatic feature learning and freedom of expressiveness. However, their interpretability remains mysterious because DNNs are complex combinations of linear and nonlinear transformations. Even though many models have been proposed to explore the interpretability of DNNs, several challenges remain unsolved: 1) the lack of interpretability quantity measures for DNNs, 2) the lack of theory for the stability of DNNs, and 3) the difficulty of solving nonconvex DNN problems with interpretability constraints. To address these challenges simultaneously, this paper presents a novel intrinsic interpretability evaluation framework for DNNs. Specifically, four independent properties of interpretability are defined based on existing work. Moreover, we investigate the theory for the stability of DNNs, which is an important aspect of interpretability, and prove that DNNs are generally stable given different activation functions. Finally, an extended version of the deep learning Alternating Direction Method of Multipliers is proposed to solve DNN problems with interpretability constraints efficiently and accurately. Extensive experiments on several benchmark datasets validate several DNNs under our proposed interpretability framework.
We propose a novel framework to evaluate the interpretability of neural networks.
1,441
Hyperbolic Discounting and Learning Over Multiple Horizons
Reinforcement learning typically defines a discount factor as part of the Markov Decision Process. The discount factor values future rewards by an exponential scheme that leads to theoretical convergence guarantees of the Bellman equation. However, evidence from psychology, economics and neuroscience suggests that humans and animals instead have hyperbolic time-preferences. Here we extend the earlier work of Kurth-Nelson and Redish and propose an efficient deep reinforcement learning agent that acts via hyperbolic discounting and other non-exponential discount mechanisms. We demonstrate that a simple approach approximates hyperbolic discount functions while still using familiar temporal-difference learning techniques in RL. Additionally, and independent of hyperbolic discounting, we make the surprising discovery that simultaneously learning value functions over multiple time-horizons is an effective auxiliary task which often improves over state-of-the-art methods.
A deep RL agent that learns hyperbolic (and other non-exponential) Q-values and a new multi-horizon auxiliary task.
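The identity that makes exponential machinery reusable here: a hyperbolic discount 1/(1 + kt) is an average of exponential discounts, since the integral of gamma^(kt) for gamma from 0 to 1 equals 1/(1 + kt). A quick Riemann-sum check:

```python
import numpy as np

def hyperbolic_via_exponentials(t, k=1.0, n_gammas=1000):
    # midpoint grid over gamma in (0, 1); the mean approximates the integral
    gammas = np.linspace(0.0, 1.0, n_gammas, endpoint=False) + 0.5 / n_gammas
    return np.mean(gammas ** (k * t))

for t in [0, 1, 5, 20]:
    print(t, 1.0 / (1.0 + t), hyperbolic_via_exponentials(t))  # near-identical
```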
1,442
EEG based Emotion Recognition of Image Stimuli
Emotion plays a great role in our daily lives, and the necessity and importance of automatic emotion recognition systems is increasing. Traditional approaches to emotion recognition are based on facial images, measurements of heart rate, blood pressure, temperature, tone of voice/speech, etc. However, these features can potentially be faked. Features that are hidden, real, and not under the person's control can instead be measured from brain signals. There are various ways of measuring brain waves: EEG, MEG, fMRI, etc. On the basis of cost effectiveness and performance trade-offs, EEG is chosen for emotion recognition in this work. The main aim of this study is to detect emotion based on analysis of EEG signals recorded from the brain in response to visual stimuli. In our approach, selected visual stimuli were presented to 11 healthy subjects and EEG signals were recorded in a controlled setting to minimize artefacts. The signals were filtered and the type of frequency band was computed and detected. The proposed method predicts an emotion type in response to the presented stimuli. Finally, the performance of the proposed approach was tested. The average accuracies of the machine learning algorithms used are 78.86%, 74.76%, 77.82% and 82.46%, respectively. In this study, we also applied EEG in the context of neuro-marketing. The results empirically demonstrated the detection of customers' favourite colour preferences in response to the logo colour of an organization or service.
This paper presents EEG-based emotion detection of a person towards image stimuli and its applicability to neuromarketing.
1,443
Latent Variable Session-Based Recommendation
We present a probabilistic framework for session-based recommendation. A latent variable for the user state is updated as the user views more items and we learn more about their interests. We provide computational solutions using both the re-parameterization trick and the Bouchard bound for the softmax function, and we further explore employing a variational auto-encoder and a variational Expectation-Maximization algorithm for tightening the variational bound. Finally, we show that the Bouchard bound causes the denominator of the softmax to decompose into a sum, enabling fast noisy gradients of the bound and giving a fully probabilistic algorithm reminiscent of word2vec and a fast online EM algorithm.
Fast variational approximations for inferring a user state and learning product embeddings.
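For reference, the bound in question is (to the best of our understanding) Bouchard's upper bound on the softmax log-denominator; a sketch in LaTeX, where alpha is a free variational parameter optimized to tighten the bound:

```latex
% Bouchard's bound on the softmax log-denominator (our reading; \alpha is a
% free variational parameter):
\[
\log \sum_{k=1}^{K} e^{x_k} \;\le\; \alpha + \sum_{k=1}^{K} \log\!\left(1 + e^{x_k - \alpha}\right)
\]
```

Note how the right-hand side decomposes into a sum over k, which is what enables the fast noisy gradients mentioned above.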
1,444
An Information-Theoretic Metric of Transferability for Task Transfer Learning
An important question in task transfer learning is to determine task transferability, i.e. given a common input domain, estimating to what extent representations learned from a source task can help in learning a target task. Typically, transferability is either measured experimentally or inferred through task relatedness, which is often defined without a clear operational meaning. In this paper, we present a novel metric, H-score, an easily-computable evaluation function that estimates the performance of transferred representations from one task to another in classification problems. Inspired by a principled information theoretic approach, H-score has a direct connection to the asymptotic error probability of the decision function based on the transferred feature. This formulation of transferability can further be used to select a suitable set of source tasks in task transfer learning problems or to devise efficient transfer learning policies. Experiments using both synthetic and real image data show that not only is our formulation of transferability meaningful in practice, but also it can generalize to inference problems beyond classification, such as recognition tasks for 3D indoor-scene understanding.
We present a provable and easily-computable evaluation function that estimates the performance of transferred representations from one learning task to another in task transfer learning.
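A sketch of an H-score-style computation as we read the abstract: the trace of the total feature covariance (pseudo-)inverse times the between-class covariance of the features. Treat it as an interpretation, not a verified reproduction of the paper's exact estimator:

```python
import numpy as np

def h_score(features, labels):
    f = features - features.mean(axis=0)
    cov_f = np.cov(f, rowvar=False)
    classes = np.unique(labels)
    means = np.stack([f[labels == c].mean(axis=0) for c in classes])
    freqs = np.array([(labels == c).mean() for c in classes])
    cov_g = (means * freqs[:, None]).T @ means   # covariance of E[f | y]
    return float(np.trace(np.linalg.pinv(cov_f) @ cov_g))
```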
1,445
PROTOTYPE-ASSISTED ADVERSARIAL LEARNING FOR UNSUPERVISED DOMAIN ADAPTATION
This paper presents a generic framework to tackle the crucial class mismatch problem in unsupervised domain adaptation (UDA) for multi-class distributions. Previous adversarial learning methods condition domain alignment only on pseudo labels, but noisy and inaccurate pseudo labels may perturb the multi-class distribution embedded in probabilistic predictions, hence bringing insufficient alleviation to the latent mismatch problem. Compared with pseudo labels, class prototypes are more accurate and reliable since they summarize over all the instances and are able to represent the inherent semantic distribution shared across domains. Therefore, we propose a novel Prototype-Assisted Adversarial Learning (PAAL) scheme, which incorporates instance probabilistic predictions and class prototypes together to provide reliable indicators for adversarial domain alignment. With the PAAL scheme, we align both the instance feature representations and class prototype representations to alleviate the mismatch among semantically different classes. Also, we exploit the class prototypes as a proxy to minimize the within-class variance in the target domain to mitigate the mismatch among semantically similar classes. With these novelties, we constitute a Prototype-Assisted Conditional Domain Adaptation (PACDA) framework which effectively tackles the class mismatch problem. We demonstrate the good performance and generalization ability of the PAAL scheme and the PACDA framework on two UDA tasks, i.e., object recognition and synthetic-to-real semantic segmentation.
We propose a reliable conditional adversarial learning scheme along with a simple, generic yet effective framework for UDA tasks.
1,446
D2KE: From Distance to Kernel and Embedding via Random Features For Structured Inputs
We present a new methodology that constructs a family of kernels from any given dissimilarity measure on structured inputs whose elements are either real-valued time series or discrete structures such as strings, histograms, and graphs. Our approach, which we call D2KE (from Distance to Kernel and Embedding), draws from the literature of Random Features. However, instead of deriving random feature maps from a user-defined kernel to approximate kernel machines, we build a kernel from a random feature map, which we specify given the distance measure. We further propose the use of a finite number of random objects to produce a random feature embedding of each instance. We provide a theoretical analysis showing that D2KE enjoys better generalizability than universal Nearest-Neighbor estimates. On one hand, D2KE subsumes the widely-used as a special case, and relates to the well-known in a limiting case. On the other hand, D2KE generalizes existing random feature methods applicable only to vector input representations to complex structured inputs of variable sizes. We conduct classification experiments over such disparate domains as time series, strings, and histograms, for which our proposed framework compares favorably to existing distance-based learning methods in terms of both testing accuracy and computational time.
From Distance to Kernel and Embedding via Random Features For Structured Inputs
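A hedged sketch of the core construction: draw R random structured objects and map each input through a function of its distance to them, so the induced kernel is k(x, y) = E_w[phi_w(x) phi_w(y)]. The exponential form, gamma, and the landmark set below are illustrative assumptions, not the paper's exact map:

import numpy as np

def d2ke_features(objects, random_objects, dist, gamma=1.0):
    # phi_r(x) = exp(-gamma * d(x, w_r)) for R randomly drawn objects w_r;
    # a linear model on these features approximates the distance-induced kernel.
    R = len(random_objects)
    feats = np.array([[np.exp(-gamma * dist(x, w)) for w in random_objects]
                      for x in objects])
    return feats / np.sqrt(R)

def hamming(a, b):
    # Any dissimilarity on structured inputs works; strings are used here.
    return sum(c1 != c2 for c1, c2 in zip(a, b)) + abs(len(a) - len(b))

X = ["kernel", "kernal", "embed"]
W = ["kernl", "embeds"]                 # the random "landmark" objects
Phi = d2ke_features(X, W, hamming)      # (3, 2) embedding; Phi @ Phi.T ~ kernel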
1,447
Towards Quantum Inspired Convolution Networks
Deep Convolution Neural Networks, rooted in pioneering work and summarized in the literature, have been shown to be very useful in a variety of fields. The state-of-the-art CNN machines, such as ResNet for images, are described by real-valued inputs and kernel convolutions followed by local, non-linear rectified linear outputs. Understanding the role of these layers, their accuracy and limitations, as well as making them more efficient, are all ongoing research questions. Inspired by quantum theory, we propose the use of complex-valued kernel functions, followed by the local non-linear absolute-value-squared operator. We argue that an advantage of quantum-inspired complex kernels is robustness to realistic unpredictable scenarios. We study a concrete problem of shape detection and show that when multiple overlapping shapes are deformed and/or clutter noise is added, a convolution layer with quantum-inspired complex kernels outperforms the statistical/classical kernel counterpart and a "Bayesian shape estimator". The superior performance is due to the quantum phenomenon of interference, not present in classical CNNs.
A quantum-inspired kernel for convolution networks, exhibiting interference phenomena, can be very useful; we compare it with its real-valued counterpart.
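A minimal sketch of such a layer (illustrative shapes and names): convolve with the real and imaginary parts of a complex kernel and take the squared modulus, so overlapping contributions can interfere before the non-linearity:

import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 32, 32)            # real-valued input image
k_re = torch.randn(1, 1, 5, 5)           # real part of the complex kernel
k_im = torch.randn(1, 1, 5, 5)           # imaginary part of the complex kernel
re = F.conv2d(x, k_re, padding=2)        # Re{k * x}
im = F.conv2d(x, k_im, padding=2)        # Im{k * x}
out = re ** 2 + im ** 2                  # |k * x|^2: responses may cancel or
                                         # reinforce (interference) before this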
1,448
Neural MMO: A massively multiplayer game environment for intelligent agents
We present an artificial intelligence research platform inspired by the human game genre of MMORPGs. We demonstrate how this platform can be used to study behavior and learning in large populations of neural agents. Unlike currently popular game environments, our platform supports persistent environments, with a variable number of agents, and open-ended task descriptions. The emergence of complex life on Earth is often attributed to the arms race that ensued from a huge number of organisms all competing for finite resources. Our platform aims to simulate this setting in microcosm: we conduct a series of experiments to test how large-scale multiagent competition can incentivize the development of skillful behavior. We find that population size magnifies the complexity of the behaviors that emerge and results in agents that out-compete agents trained in smaller populations.
An MMO-inspired research game platform for studying emergent behaviors of large populations in a complex environment
1,449
Improving Sample-based Evaluation for Generative Adversarial Networks
In this paper, we propose an improved quantitative evaluation framework for Generative Adversarial Networks (GANs) on generating domain-specific images, where we improve conventional evaluation methods on two levels: the feature representation and the evaluation metric. Unlike most existing evaluation frameworks, which transfer the representation of an ImageNet inception model to map images onto the feature space, our framework uses a specialized encoder to acquire fine-grained domain-specific representation. Moreover, for datasets with multiple classes, we propose Class-Aware Frechet Distance, which employs a Gaussian mixture model on the feature space to better fit the multi-manifold feature distribution. Experiments and analysis on both the feature level and the image level were conducted to demonstrate improvements of our proposed framework over the recently proposed state-of-the-art FID method. To the best of our knowledge, we are the first to provide counter-examples where FID gives results inconsistent with human judgments. It is shown in the experiments that our framework is able to overcome the shortcomings of FID and improves robustness. Code will be made available.
This paper improves existing sample-based evaluation for GANs and presents insightful experiments.
1,450
ResBinNet: Residual Binary Neural Network
Recent efforts on training light-weight binary neural networks offer promising execution/memory efficiency.This paper introduces ResBinNet, which is a composition of two interlinked methodologies aiming to address the slow convergence speed and limited accuracy of binary convolutional neural networks.The first method, called residual binarization, learns a multi-level binary representation for the features within a certain neural network layer.The second method, called temperature adjustment, gradually binarizes the weights of a particular layer.The two methods jointly learn a set of soft-binarized parameters that improve the convergence rate and accuracy of binary neural networks.We corroborate the applicability and scalability of ResBinNet by implementing a prototype hardware accelerator.The accelerator is reconfigurable in terms of the numerical precision of the binarized features, offering a trade-off between runtime and inference accuracy.
Residual Binary Neural Networks significantly improve the convergence rate and inference accuracy of the binary neural networks.
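A hedged sketch of the residual-binarization idea: greedily approximate a weight tensor as a sum of scaled sign tensors, binarizing the residual at each level (the paper's exact scheme and scaling may differ):

import torch

def residual_binarize(w, levels=2):
    residual = w.clone()
    approx = torch.zeros_like(w)
    for _ in range(levels):
        alpha = residual.abs().mean()    # per-level scale
        b = torch.sign(residual)         # one-bit tensor
        approx = approx + alpha * b
        residual = residual - alpha * b  # next level binarizes what is left
    return approx

w = torch.randn(64, 64)
for L in (1, 2, 3):
    err = (w - residual_binarize(w, L)).norm() / w.norm()
    print(L, float(err))                 # error shrinks as levels are added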
1,451
Anomalous Pattern Detection in Activations and Reconstruction Error of Autoencoders
In real-world machine learning applications, large outliers and pervasive noise are commonplace, and access to clean training data as required by standard deep autoencoders is unlikely. Reliably detecting anomalies in a given set of images is a task of high practical relevance for visual quality inspection, surveillance, or medical image analysis. Autoencoder neural networks learn to reconstruct normal images, and hence can classify those images as anomalous if the reconstruction error exceeds some threshold. In this paper, we propose an unsupervised method based on subset scanning over autoencoder activations. The contributions of our work are threefold. First, we propose a novel method combining detection with reconstruction error and subset scanning scores to improve the anomaly score of current autoencoders without requiring any retraining. Second, we provide the ability to inspect and visualize the set of anomalous nodes in the reconstruction error space that make a sample noisy. Third, we show that subset scanning can be used for anomaly detection in the inner layers of the autoencoder. We provide detection power results for several untargeted adversarial noise models on standard datasets.
Unsupervised method to detect adversarial samples in autoencoder's activations and reconstruction error space
1,452
Pretrain-KGEs: Learning Knowledge Representation from Pretrained Models for Knowledge Graph Embeddings
Learning knowledge graph embeddings (KGEs) is an efficient approach to knowledge graph completion. Conventional KGEs often suffer from limited knowledge representation, which causes lower accuracy especially when training on sparse knowledge graphs. To remedy this, we present Pretrain-KGEs, a training framework for learning better knowledgeable entity and relation embeddings, leveraging the abundant linguistic knowledge from pretrained language models. Specifically, we propose a unified approach in which we first learn entity and relation representations via pretrained language models and use the representations to initialize entity and relation embeddings for training KGE models. Our proposed method is model-agnostic in the sense that it can be applied to any variant of KGE models. Experimental results show that our method can consistently improve results and achieve state-of-the-art performance using different KGE models such as TransE and QuatE, across four benchmark KG datasets in link prediction and triplet classification tasks.
We propose to learn knowledgeable entity and relation representations from BERT for knowledge graph embeddings.
1,453
Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base
We describe a novel way of representing a symbolic knowledge base called a sparse-matrix reified KB. This representation enables neural modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs.The sparse-matrix reified KB can be distributed across multiple GPUs, can scale to tens of millions of entities and facts, and is orders of magnitude faster than naive sparse-matrix implementations. The reified KB enables very simple end-to-end architectures to obtain competitive performance on several benchmarks representing two families of tasks: KB completion, and learning semantic parsers from denotations.
A scalable differentiable neural module that implements reasoning on symbolic KBs.
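The "follow a relation" primitive at the heart of such systems can be sketched with a sparse relation matrix (a toy illustration; the paper's reified representation encodes all relations and facts in shared sparse matrices):

import numpy as np
import scipy.sparse as sp

# M_r[i, j] = 1 iff the KB contains the fact (entity_i, r, entity_j).
n = 5
rows, cols = [0, 1], [2, 3]                       # toy facts: 0 -r-> 2, 1 -r-> 3
M_r = sp.csr_matrix((np.ones(2), (rows, cols)), shape=(n, n))

x = np.zeros(n); x[0] = 1.0                       # a (soft) set of entities
hop1 = M_r.T @ x                                  # entities one r-hop away
hop2 = M_r.T @ hop1                               # multi-hop = repeated matmul
# Both products are sparse, differentiable operations, so they can sit inside
# a neural module and be distributed across GPUs for very large KBs.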
1,454
The Differentiable Cross-Entropy Method
We study the Cross-Entropy Method (CEM) for the non-convex optimization of a continuous and parameterized objective function and introduce a differentiable variant (DCEM) that enables us to differentiate the output of CEM with respect to the objective function's parameters. In the machine learning setting this brings CEM inside of the end-to-end learning pipeline in cases where this has otherwise been impossible. We show applications in a synthetic energy-based structured prediction task and in non-convex continuous control. In the control setting we show on the simulated cheetah and walker tasks that we can embed their optimal action sequences with DCEM and then use policy optimization to fine-tune components of the controller as a step towards combining model-based and model-free RL.
DCEM learns latent domains for optimization problems and helps bridge the gap between model-based and model-free RL --- we create a differentiable controller and fine-tune parts of it with PPO
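For context, a minimal vanilla CEM loop is below; the hard top-k selection marked in the comments is the non-differentiable step that, per the abstract, DCEM replaces with a differentiable relaxation so gradients can flow to the objective's parameters (all names illustrative):

import torch

def cem(f, dim, iters=10, n=100, k=10):
    # Minimize f over R^dim by iteratively refitting a Gaussian to the elites.
    mu, sigma = torch.zeros(dim), torch.ones(dim)
    for _ in range(iters):
        x = mu + sigma * torch.randn(n, dim)          # sample candidates
        idx = f(x).topk(k, largest=False).indices     # hard top-k: the
        elite = x[idx]                                # non-differentiable step
        mu, sigma = elite.mean(0), elite.std(0)
    return mu

theta = torch.tensor([3.0, -2.0])
x_star = cem(lambda x: ((x - theta) ** 2).sum(-1), dim=2)
print(x_star)                                         # close to theta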
1,455
A General Upper Bound for Unsupervised Domain Adaptation
In this work, we present a novel upper bound on the target error to address the problem of unsupervised domain adaptation. Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks. Furthermore, Ben-David et al. provide an upper bound for the target error when transferring the knowledge, which can be summarized as minimizing the source error and the distance between marginal distributions simultaneously. However, common methods based on the theory usually ignore the joint error, such that samples from different classes might be mixed together when matching the marginal distributions. In such a case, no matter how we minimize the marginal discrepancy, the target error is not bounded due to an increasing joint error. To address this problem, we propose a general upper bound taking the joint error into account, such that the undesirable case can be properly penalized. In addition, we utilize a constrained hypothesis space to further formalize a tighter bound as well as a novel cross-margin discrepancy to measure the dissimilarity between hypotheses, which alleviates instability during adversarial learning. Extensive empirical evidence shows that our proposal outperforms related approaches in image classification error rates on standard domain adaptation benchmarks.
Joint error matters for unsupervised domain adaptation, especially when the domain shift is huge.
1,456
Knowledge Flow: Improve Upon Your Teachers
A zoo of deep nets is available these days for almost any given task, and it is increasingly unclear which net to start with when addressing a new task, or which net to use as an initialization for fine-tuning a new model.To address this issue, in this paper, we develop knowledge flow which moves ‘knowledge’ from multiple deep nets, referred to as teachers, to a new deep net model, called the student.The structure of the teachers and the student can differ arbitrarily and they can be trained on entirely different tasks with different output spaces too.Upon training with knowledge flow the student is independent of the teachers.We demonstrate our approach on a variety of supervised and reinforcement learning tasks, outperforming fine-tuning and other ‘knowledge exchange’ methods.
‘Knowledge Flow’ trains a deep net (student) by injecting information from multiple nets (teachers). The student is independent upon training and performs very well on learned tasks irrespective of the setting (reinforcement or supervised learning).
1,457
Analytical Moment Regularizer for Training Robust Networks
Despite the impressive performance of deep neural networks on numerous learning tasks, they still exhibit uncouth behaviours. One puzzling behaviour is the subtle sensitivity of DNNs to various noise attacks. Such a nuisance has strengthened the line of research around developing and training noise-robust networks. In this work, we propose a new training regularizer that aims to minimize the probabilistic expected training loss of a DNN subject to a generic Gaussian input. We provide an efficient and simple approach to approximate such a regularizer for arbitrarily deep networks. This is done by leveraging the analytic expression of the output mean of a shallow neural network, avoiding the need for memory- and computation-expensive data augmentation. We conduct extensive experiments on LeNet and AlexNet on various datasets, including MNIST, CIFAR10, and CIFAR100, to demonstrate the effectiveness of our proposed regularizer. In particular, we show that networks trained with the proposed regularizer benefit from a boost in robustness against Gaussian noise equivalent to performing 3-21 folds of noisy data augmentation. Moreover, we empirically show on several architectures and datasets that improving robustness against Gaussian noise, by using the new regularizer, can improve the overall robustness against 6 other types of attacks by two orders of magnitude.
An efficient estimate to the Gaussian first moment of DNNs as a regularizer to training robust networks.
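The analytic ingredient such an approach rests on can be illustrated for a single ReLU: the first moment of max(0, z) under Gaussian input has a closed form, so no sampled noisy copies of the data are needed (a sketch; how the paper chains this through deep networks is not shown here):

import numpy as np
from scipy.stats import norm

def relu_gaussian_mean(mu, sigma):
    # E[max(0, z)] = mu * Phi(mu/sigma) + sigma * phi(mu/sigma), z ~ N(mu, sigma^2)
    a = mu / sigma
    return mu * norm.cdf(a) + sigma * norm.pdf(a)

# Monte-Carlo check of the closed form:
mu, sigma = 0.3, 1.2
z = np.random.default_rng(0).normal(mu, sigma, 1_000_000)
assert abs(np.maximum(0.0, z).mean() - relu_gaussian_mean(mu, sigma)) < 1e-2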
1,458
Multi-Mention Learning for Reading Comprehension with Neural Cascades
Reading comprehension is a challenging task, especially when executed across longer documents or across multiple evidence documents, where the answer is likely to reoccur. Existing neural architectures typically do not scale to the entire evidence, and hence resort to selecting a single passage in the document and carefully searching for the answer within that passage. However, in some cases, this strategy can be suboptimal, since by focusing on a specific passage, it becomes difficult to leverage multiple mentions of the same answer throughout the document. In this work, we take a different approach by constructing lightweight models that are combined in a cascade to find the answer. Each submodel consists only of feed-forward networks equipped with an attention mechanism, making it trivially parallelizable. We show that our approach can scale to approximately an order of magnitude larger evidence documents and can aggregate information from multiple mentions of each answer candidate across the document. Empirically, our approach achieves state-of-the-art performance on both the Wikipedia and web domains of the TriviaQA dataset, outperforming more complex, recurrent architectures.
We propose neural cascades, a simple and trivially parallelizable approach to reading comprehension, consisting only of feed-forward nets and attention that achieves state-of-the-art performance on the TriviaQA dataset.
1,459
Atomic Compression Networks
Compressed forms of deep neural networks are essential in deploying large-scale computational models on resource-constrained devices. Contrary to analogous domains where large-scale systems are built as a hierarchical repetition of small-scale units, the current practice in Machine Learning largely relies on models with non-repetitive components. In the spirit of molecular composition with repeating atoms, we advance the state-of-the-art in model compression by proposing Atomic Compression Networks (ACNs), a novel architecture that is constructed by recursive repetition of a small set of neurons. In other words, the same neurons with the same weights are stochastically re-positioned in subsequent layers of the network. Empirical evidence suggests that ACNs achieve compression rates of up to three orders of magnitude compared to fine-tuned fully-connected neural networks with only a fractional deterioration of classification accuracy. Moreover, our method can yield sub-linear model complexities and permits learning deep ACNs with fewer parameters than a logistic regression with no decline in classification accuracy.
We advance the state-of-the-art in model compression by proposing Atomic Compression Networks (ACNs), a novel architecture that is constructed by recursive repetition of a small set of neurons.
1,460
VideoFlow: A Conditional Flow-Based Model for Stochastic Video Generation
Generative models that can model and predict sequences of future events can, in principle, learn to capture complex real-world phenomena, such as physical interactions.However, a central challenge in video prediction is that the future is highly uncertain: a sequence of past observations of events can imply many possible futures.Although a number of recent works have studied probabilistic models that can represent uncertain futures, such models are either extremely expensive computationally as in the case of pixel-level autoregressive models, or do not directly optimize the likelihood of the data.To our knowledge, our work is the first to propose multi-frame video prediction with normalizing flows, which allows for direct optimization of the data likelihood, and produces high-quality stochastic predictions.We describe an approach for modeling the latent space dynamics, and demonstrate that flow-based generative models offer a viable and competitive approach to generative modeling of video.
We demonstrate that flow-based generative models offer a viable and competitive approach to generative modeling of video.
1,461
The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Minima and Regularization Effects
Understanding the behavior of stochastic gradient descent (SGD) in the context of deep neural networks has attracted much attention recently. Along this line, we theoretically study a general form of gradient-based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics. Through investigating this general optimization dynamics, we analyze the behavior of SGD on escaping from minima and its regularization effects. A novel indicator is derived to characterize the efficiency of escaping from minima through measuring the alignment of noise covariance and the curvature of the loss function. Based on this indicator, two conditions are established to show which type of noise structure is superior to isotropic noise in terms of escaping efficiency. We further show that the anisotropic noise in SGD satisfies the two conditions, and thus helps to escape from sharp and poor minima effectively, towards more stable and flat minima that typically generalize well. We verify our understanding through comparing this anisotropic diffusion with full gradient descent plus isotropic diffusion and other types of position-dependent noise.
We provide theoretical and empirical analysis on the role of anisotropic noise introduced by stochastic gradient on escaping from minima.
1,462
Model-Augmented Actor-Critic: Backpropagating through Paths
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator to augment the data for policy optimization or value function learning.In this paper, we show how to make more effective use of the model by exploiting its differentiability.We construct a policy optimization algorithm that uses the pathwise derivative of the learned model and policy across future timesteps.Instabilities of learning across many timesteps are prevented by using a terminal value function, learning the policy in an actor-critic fashion.Furthermore, we present a derivation on the monotonic improvement of our objective in terms of the gradient error in the model and value function.We show that our approach is consistently more sample efficient than existing state-of-the-art model-based algorithms, matches the asymptotic performance of model-free algorithms, and scales to long horizons, a regime where typically past model-based approaches have struggled.
Policy gradient through backpropagation through time using learned models and Q-functions. SOTA results in reinforcement learning benchmark environments.
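A hedged sketch of the pathwise objective: roll a differentiable learned model forward H steps under the policy, cap the horizon with a terminal critic, and backpropagate through the whole path (all modules, shapes, and the toy reward below are illustrative):

import torch

d_s, d_a, H, gamma = 3, 2, 5, 0.99
pi = torch.nn.Linear(d_s, d_a)            # policy
f = torch.nn.Linear(d_s + d_a, d_s)       # learned dynamics model
V = torch.nn.Linear(d_s, 1)               # terminal value function (critic)

s = torch.randn(1, d_s)
J = 0.0
for t in range(H):
    a = pi(s)                             # action stays on the autodiff graph
    J = J + (gamma ** t) * (-(s ** 2).sum() - (a ** 2).sum())  # toy reward
    s = f(torch.cat([s, a], dim=-1))      # differentiable transition
J = J + (gamma ** H) * V(s).sum()         # value caps the unrolled horizon
(-J).backward()                           # pathwise gradient across timesteps
print(pi.weight.grad.norm())              # policy receives model-based gradient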
1,463
Unsupervised Meta-Learning for Reinforcement Learning
Meta-learning algorithms learn to acquire new tasks more quickly from past experience.In the context of reinforcement learning, meta-learning algorithms can acquire reinforcement learning procedures to solve new problems more efficiently by utilizing experience from prior tasks.The performance of meta-learning algorithms depends on the tasks available for meta-training: in the same way that supervised learning generalizes best to test points drawn from the same distribution as the training points, meta-learning methods generalize best to tasks from the same distribution as the meta-training tasks.In effect, meta-reinforcement learning offloads the design burden from algorithm design to task design.If we can automate the process of task design as well, we can devise a meta-learning algorithm that is truly automated.In this work, we take a step in this direction, proposing a family of unsupervised meta-learning algorithms for reinforcement learning.We motivate and describe a general recipe for unsupervised meta-reinforcement learning, and present an instantiation of this approach.Our conceptual and theoretical contributions consist of formulating the unsupervised meta-reinforcement learning problem and describing how task proposals based on mutual information can in principle be used to train optimal meta-learners.Our experimental results indicate that unsupervised meta-reinforcement learning effectively acquires accelerated reinforcement learning procedures without the need for manual task design and significantly exceeds the performance of learning from scratch.
Meta-learning on self-proposed task distributions to speed up reinforcement learning without human specified task distributions
1,464
K for the Price of 1: Parameter-efficient Multi-task and Transfer Learning
We introduce a novel method that enables parameter-efficient transfer and multi-task learning with deep neural networks. The basic approach is to learn a model patch - a small set of parameters - that will specialize to each task, instead of fine-tuning the last layer or the entire network. For instance, we show that learning a set of scales and biases is sufficient to convert a pretrained network to perform well on qualitatively different problems (e.g., converting an object detection model into a 1000-class image classification model while reusing 98% of the parameters of the SSD feature extractor). Similarly, we show that re-learning existing low-parameter layers while keeping the rest of the network frozen also improves transfer-learning accuracy significantly. Our approach allows both simultaneous as well as sequential transfer learning. In several multi-task learning problems, despite using much fewer parameters than traditional logits-only fine-tuning, we match single-task performance.
A novel and practically effective method to adapt pretrained neural networks to new tasks by retraining a minimal (e.g., less than 2%) number of parameters
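A minimal PyTorch sketch of the patch idea under one plausible reading: freeze everything except the normalization scales/biases and the task head (the backbone, patch choice, and numbers are illustrative, not the paper's exact setup):

import torch.nn as nn
from torchvision.models import resnet18

model = resnet18()                                  # load pretrained weights here
model.fc = nn.Linear(model.fc.in_features, 10)      # new task: 10 classes

for p in model.parameters():
    p.requires_grad = False                         # freeze the whole network
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):               # the scale-and-bias "patch"
        for p in m.parameters():
            p.requires_grad = True
for p in model.fc.parameters():                     # plus the task head
    p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"training {trainable}/{total} parameters")   # a small fraction of the net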
1,465
One Bit Matters: Understanding Adversarial Examples as the Abuse of Redundancy
Adversarial examples have somewhat disrupted the enormous success of machine learning and are causing concern with regard to its trustworthiness: a small perturbation of an input results in an arbitrary failure of an otherwise seemingly well-trained ML system. While studies are being conducted to discover the intrinsic properties of adversarial examples, such as their transferability and universality, there is insufficient theoretical analysis to help understand the phenomenon in a way that can influence the design process of ML experiments. In this paper, we deduce an information-theoretic model which explains adversarial attacks universally as the abuse of feature redundancies in ML algorithms. We prove that feature redundancy is a necessary condition for the existence of adversarial examples. Our model helps to explain the major questions raised in many anecdotal studies on adversarial examples. Our theory is backed up by empirical measurements of the information content of benign and adversarial examples on both image and text datasets. Our measurements show that typical adversarial examples introduce just enough redundancy to overflow the decision making of a machine learner trained on corresponding benign examples. We conclude with actionable recommendations to improve the robustness of machine learners against adversarial examples.
A new theoretical explanation for the existence of adversarial examples
1,466
Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Imbalanced Data
We propose a novel unsupervised generative model, Elastic-InfoGAN, that learns to disentangle object identity from other low-level aspects in class-imbalanced datasets. We first investigate the issues surrounding the assumptions about uniformity made by InfoGAN, and demonstrate its ineffectiveness in properly disentangling object identity in imbalanced data. Our key idea is to make the discovery of the discrete latent factor of variation invariant to identity-preserving transformations in real images, and use that as the signal to learn the latent distribution's parameters. Experiments on both artificial and real-world datasets demonstrate the effectiveness of our approach in imbalanced data by: better disentanglement of object identity as a latent factor of variation; and better approximation of class imbalance in the data, as reflected in the learned parameters of the latent distribution.
Elastic-InfoGAN is a modification of InfoGAN that learns, without any supervision, disentangled representations in class imbalanced data
1,467
Generalizing Deep Multi-task Learning with Heterogeneous Structured Networks
Many real applications show a great deal of interest in learning multiple tasks from different data sources/modalities with unbalanced samples and dimensions. Unfortunately, existing cutting-edge deep multi-task learning approaches cannot be directly applied to these settings, due to either heterogeneous input dimensions or the heterogeneity in the optimal network architectures of different tasks. It is thus demanding to develop a knowledge-sharing mechanism to handle the intrinsic discrepancies among network architectures across tasks. To this end, we propose a flexible knowledge-sharing framework for jointly learning multiple tasks from distinct data sources/modalities. The proposed framework allows each task to own its task-specific network design, via utilizing a compact tensor representation, while the sharing is achieved through the partially shared latent cores. By providing more elaborate sharing control with latent cores, our framework is effective in transferring task-invariant knowledge, yet is also efficient in learning task-specific features. Experiments on both single and multiple data sources/modalities settings display the promising results of the proposed method, especially favourable in insufficient-data scenarios.
a distributed latent-space based knowledge-sharing framework for deep multi-task learning
1,468
Game Changer: Accessible Audio and Tactile Guidance for Board and Card Games
Board games often rely on visual information such as the location of the game pieces and textual information on cards.Due to this reliance on visual feedback, blind players are at a disadvantage because they cannot read the cards or see the location of the game pieces and may be unable to play a game without sighted help.We present Game Changer, an augmented workspace that provides both audio descriptions and tactile additions to make the state of the board game accessible to blind and visually impaired players.In this paper, we describe the design of Game Changer and present findings from a user study in which 7 blind participants used Game Changer to play against a sighted partner.Most players stated the game was more accessible with the additions from Game Changer and felt that Game Changer could be used to augment other games.
Game Changer is a system that provides both audio descriptions and tactile additions to make the state of the board game accessible to blind and visually impaired players.
1,469
Learning from Rules Generalizing Labeled Exemplars
In many applications labeled data is not readily available, and needs to be collected via painstaking human supervision. We propose a rule-exemplar model for collecting human supervision to combine the scalability of rules with the quality of instance labels. The supervision is coupled such that it is both natural for humans and synergistic for learning. We propose a training algorithm that jointly denoises rules via latent coverage variables, and trains the model through a soft implication loss over the coverage and label variables. Empirical evaluation on five different tasks shows that our algorithm is more accurate than several existing methods of learning from a mix of clean and noisy supervision, and the coupled rule-exemplar supervision is effective in denoising rules.
Coupled rule-exemplar supervision and an implication loss help to jointly learn to denoise rules and imply labels.
1,470
Adversarial Imitation via Variational Inverse Reinforcement Learning
We consider a problem of learning the reward and policy from expert examples under unknown dynamics.Our proposed method builds on the framework of generative adversarial networks and introduces the empowerment-regularized maximum-entropy inverse reinforcement learning to learn near-optimal rewards and policies.Empowerment-based regularization prevents the policy from overfitting to expert demonstrations, which advantageously leads to more generalized behaviors that result in learning near-optimal rewards.Our method simultaneously learns empowerment through variational information maximization along with the reward and policy under the adversarial learning formulation.We evaluate our approach on various high-dimensional complex control tasks.We also test our learned rewards in challenging transfer learning problems where training and testing environments are made to be different from each other in terms of dynamics or structure.The results show that our proposed method not only learns near-optimal rewards and policies that are matching expert behavior but also performs significantly better than state-of-the-art inverse reinforcement learning algorithms.
Our method introduces the empowerment-regularized maximum-entropy inverse reinforcement learning to learn near-optimal rewards and policies from expert demonstrations.
1,471
RNNs implicitly implement tensor-product representations
Recurrent neural networks can learn continuous vector representations of symbolic structures such as sequences and sentences; these representations often exhibit linear regularities. Such regularities motivate our hypothesis that RNNs that show such regularities implicitly compile symbolic structures into tensor product representations (TPRs), which additively combine tensor products of vectors representing roles and vectors representing fillers. To test this hypothesis, we introduce Tensor Product Decomposition Networks (TPDNs), which use TPRs to approximate existing vector representations. We demonstrate using synthetic data that TPDNs can successfully approximate linear and tree-based RNN autoencoder representations, suggesting that these representations exhibit interpretable compositional structure; we explore the settings that lead RNNs to induce such structure-sensitive representations. By contrast, further TPDN experiments show that the representations of four models trained to encode naturally-occurring sentences can be largely approximated with a bag of words, with only marginal improvements from more sophisticated structures. We conclude that TPDNs provide a powerful method for interpreting vector representations, and that standard RNNs can induce compositional sequence representations that are remarkably well approximated by TPRs; at the same time, existing training tasks for sentence representation learning may not be sufficient for inducing robust structural representations.
RNNs implicitly implement tensor-product representations, a principled and interpretable method for representing symbolic structures in continuous space.
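A small numpy sketch of the binding/unbinding arithmetic behind TPRs (dimensions and vectors here are arbitrary illustrations):

import numpy as np

rng = np.random.default_rng(0)
d_f, d_r, vocab, length = 8, 6, 20, 5
fillers = rng.normal(size=(vocab, d_f))     # one vector per symbol
roles = rng.normal(size=(length, d_r))      # one vector per position
seq = [3, 17, 3, 0, 9]                      # a sequence of symbol ids

# Binding: the TPR is the sum of filler-role outer products.
tpr = sum(np.outer(fillers[s], roles[i]) for i, s in enumerate(seq))

# Unbinding: recover the filler at position 2 via dual role vectors.
duals = np.linalg.pinv(roles)               # roles @ duals = I (full row rank)
f_hat = tpr @ duals[:, 2]
assert np.allclose(f_hat, fillers[seq[2]])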
1,472
Data for free: Fewer-shot algorithm learning with parametricity data augmentation
We address the problem of teaching an RNN to approximate list-processing algorithms given a small number of input-output training examples.Our approach is to generalize the idea of parametricity from programming language theory to formulate a semantic property that distinguishes common algorithms from arbitrary non-algorithmic functions.This characterization leads naturally to a learned data augmentation scheme that encourages RNNs to learn algorithmic behavior and enables small-sample learning in a variety of list-processing tasks.
Learned data augmentation instills algorithm-favoring inductive biases that let RNNs learn list-processing algorithms from fewer examples.
1,473
A Pseudo-Label Method for Coarse-to-Fine Multi-Label Learning with Limited Supervision
The goal of multi-label learning (MLL) is to associate a given instance with its relevant labels from a set of concepts. Previous works on MLL mainly focused on the setting where the concept set is assumed to be fixed, while many real-world applications require introducing new concepts into the set to meet new demands. One common need is to refine the original coarse concepts and split them into finer-grained ones, where the refinement process typically begins with limited labeled data for the finer-grained concepts. To address the need, we propose a special weakly supervised MLL problem that not only focuses on the situation of limited fine-grained supervision but also leverages the hierarchical relationship between the coarse concepts and the fine-grained ones. The problem can be reduced to a multi-label version of the negative-unlabeled learning problem using the hierarchical relationship. We tackle the reduced problem with a meta-learning approach that learns to assign pseudo-labels to the unlabeled entries. Experimental results demonstrate that our proposed method is able to assign accurate pseudo-labels, and in turn achieves superior classification performance when compared with other existing methods.
We propose a special weakly-supervised multi-label learning problem along with a newly tailored algorithm that learns the underlying classifier by learning to assign pseudo-labels.
1,474
Improving Search Through A3C Reinforcement Learning Based Conversational Agent
We develop a reinforcement learning based search assistant which can assist users through a set of actions and a sequence of interactions to enable them to realize their intent. Our approach caters to subjective search, where the user is seeking digital assets such as images, which is fundamentally different from tasks which have objective and limited search modalities. Labeled conversational data is generally not available in such search tasks, and training the agent through human interactions can be time-consuming. We propose a stochastic virtual user which impersonates a real user and can be used to sample user behavior efficiently to train the agent, which accelerates the bootstrapping of the agent. We develop an A3C-algorithm-based context-preserving architecture which enables the agent to provide contextual assistance to the user. We compare the A3C agent with Q-learning and evaluate its performance on the average rewards and state values it obtains with the virtual user in validation episodes. Our experiments show that the agent learns to achieve higher rewards and better states.
A Reinforcement Learning based conversational search assistant which provides contextual assistance in subjective search (like digital assets).
1,475
Informed Temporal Modeling via Logical Specification of Factorial LSTMs
Consider a world in which events occur that involve various entities.Learning how to predict future events from patterns of past events becomes more difficult as we consider more types of events.Many of the patterns detected in the dataset by an ordinary LSTM will be spurious since the number of potential pairwise correlations, for example, grows quadratically with the number of events.We propose a type of factorial LSTM architecture where different blocks of LSTM cells are responsible for capturing different aspects of the world state.We use Datalog rules to specify how to derive the LSTM structure from a database of facts about the entities in the world.This is analogous to how a probabilistic relational model specifies a recipe for deriving a graphical model structure from a database.In both cases, the goal is to obtain useful inductive biases by encoding informed independence assumptions into the model.We specifically consider the neural Hawkes process, which uses an LSTM to modulate the rate of instantaneous events in continuous time.In both synthetic and real-world domains, we show that we obtain better generalization by using appropriate factorial designs specified by simple Datalog programs.
Factorize LSTM states and zero-out/tie LSTM weight matrices according to real-world structural biases expressed by Datalog programs.
1,476
Model Inversion Networks for Model-Based Optimization
In this work, we aim to solve data-driven optimization problems, where the goal is to find an input that maximizes an unknown score function given access to a dataset of (input, score) pairs. Inputs may lie on extremely thin manifolds in high-dimensional spaces, making the optimization prone to falling off the manifold. Further, evaluating the unknown function may be expensive, so the algorithm should be able to exploit static, offline data. We propose model inversion networks (MINs) as an approach to solve such problems. Unlike prior work, MINs scale to extremely high-dimensional input spaces and can efficiently leverage offline logged datasets for optimization in both contextual and non-contextual settings. We show that MINs can also be extended to the active setting, commonly studied in prior work, via a simple, novel and effective scheme for active data collection. Our experiments show that MINs act as powerful optimizers on a range of contextual/non-contextual, static/active problems, including optimization over images and protein designs and learning from logged bandit feedback.
We propose a novel approach to solve data-driven model-based optimization problems in both passive and active settings that can scale to high-dimensional input spaces.
1,477
Natural- to formal-language generation using Tensor Product Representations
Generating formal language represented by relational tuples, such as Lisp programs or mathematical expressions, from a natural-language input is an extremely challenging task because it requires explicitly capturing discrete symbolic structural information from the input to generate the output. Most state-of-the-art neural sequence models do not explicitly capture such structural information, and thus do not perform well on these tasks. In this paper, we propose a new encoder-decoder model based on Tensor Product Representations (TPRs) for Natural- to Formal-language generation, called TP-N2F. The encoder of TP-N2F employs TPR 'binding' to encode natural-language symbolic structure in vector space, and the decoder uses TPR 'unbinding' to generate, in symbolic space, a sequence of relational tuples, each consisting of a relation and a number of arguments. TP-N2F considerably outperforms LSTM-based Seq2Seq models, creating new state-of-the-art results on two benchmarks: the MathQA dataset for math problem solving, and the AlgoList dataset for program synthesis. Ablation studies show that improvements are mainly attributed to the use of TPRs in both the encoder and decoder to explicitly capture relational structure information for symbolic reasoning.
In this paper, we propose a new encoder-decoder model based on Tensor Product Representations for Natural- to Formal-language generation, called TP-N2F.
1,478
Forward Modeling for Partial Observation Strategy Games - A StarCraft Defogger
In this paper we present a defogger, a model that learns to predict future hidden information from partial observations. We formulate this model in the context of forward modeling and leverage spatial and sequential constraints and correlations via convolutional neural networks and long short-term memory networks, respectively. We evaluate our approach on a large dataset of human games of StarCraft: Brood War, a real-time strategy video game. Our models consistently beat strong rule-based baselines and qualitatively produce sensible future game states.
This paper presents a defogger, a model that learns to predict future hidden information from partial observations, applied to a StarCraft dataset.
1,479
Towards Understanding Generalization in Gradient-Based Meta-Learning
In this work we study generalization of neural networks in gradient-based meta-learning by analyzing various properties of the objective landscapes. We experimentally demonstrate that as meta-training progresses, the meta-test solutions, obtained by adapting the meta-train solution of the model to new tasks via few steps of gradient-based fine-tuning, become flatter, lower in loss, and further away from the meta-train solution. We also show that those meta-test solutions become flatter even as generalization starts to degrade, thus providing experimental evidence against the correlation between generalization and flat minima in the paradigm of gradient-based meta-learning. Furthermore, we provide empirical evidence that generalization to new tasks is correlated with the coherence between their adaptation trajectories in parameter space, measured by the average cosine similarity between task-specific trajectory directions, starting from the same meta-train solution. We also show that coherence of meta-test gradients, measured by the average inner product between the task-specific gradient vectors evaluated at the meta-train solution, is also correlated with generalization.
We study generalization of neural networks in gradient-based meta-learning by analyzing various properties of the objective landscape.
1,480
Feature Map Variational Auto-Encoders
There have been multiple attempts with variational auto-encoders (VAEs) to learn powerful global representations of complex data using a combination of latent stochastic variables and an autoregressive model over the dimensions of the data. However, for the most challenging natural image tasks the purely autoregressive model with stochastic variables still outperforms the combined stochastic autoregressive models. In this paper, we present simple additions to the VAE framework that generalize to natural images by embedding spatial information in the stochastic layers. We significantly improve the state-of-the-art results on MNIST, OMNIGLOT, CIFAR10 and ImageNet when the feature map parameterization of the stochastic variables is combined with the autoregressive PixelCNN approach. Interestingly, we also observe close to state-of-the-art results without the autoregressive part. This opens the possibility for high quality image generation with only one forward-pass.
We present a generative model that proves state-of-the-art results on gray-scale and natural images.
1,481
Overcoming the vanishing gradient problem in plain recurrent networks
Plain recurrent networks greatly suffer from the vanishing gradient problem while Gated Neural Networks (GNNs) such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) deliver promising results in many sequence learning tasks through sophisticated network designs. This paper shows how we can address this problem in a plain recurrent network by analyzing the gating mechanisms in GNNs. We propose a novel network called the Recurrent Identity Network (RIN) which allows a plain recurrent network to overcome the vanishing gradient problem while training very deep models without the use of gates. We compare this model with IRNNs and LSTMs on multiple sequence modeling benchmarks. The RINs demonstrate competitive performance and converge faster in all tasks. Notably, small RIN models produce 12%-67% higher accuracy on the Sequential and Permuted MNIST datasets and reach state-of-the-art performance on the bAbI question answering dataset.
We propose a novel network called the Recurrent Identity Network (RIN) which allows a plain recurrent network to overcome the vanishing gradient problem while training very deep models without the use of gates.
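One plausible reading of the gate-free recurrence, sketched below, adds a non-trainable identity path to the hidden-to-hidden connection, h_t = relu(W x_t + (U + I) h_{t-1}); everything beyond the abstract (exact nonlinearity, initialization) is an assumption:

import torch

class RecurrentIdentityCell(torch.nn.Module):
    def __init__(self, d_in, d_h):
        super().__init__()
        self.W = torch.nn.Linear(d_in, d_h)
        self.U = torch.nn.Linear(d_h, d_h, bias=False)

    def forward(self, x_t, h):
        # "+ h" is the identity shortcut: the state is carried forward by
        # default, which mitigates vanishing gradients without any gates.
        return torch.relu(self.W(x_t) + self.U(h) + h)

cell = RecurrentIdentityCell(4, 16)
h = torch.zeros(1, 16)
for x_t in torch.randn(10, 1, 4):       # unroll over a 10-step sequence
    h = cell(x_t, h)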
1,482
Fault Tolerant Reinforcement Learning via A Markov Game of Control and Stopping
Recently, there has been a surge in interest in safe and robust techniques within reinforcement learning (RL). Current notions of risk in RL fail to capture the potential for systemic failures such as abrupt stoppages from system failures or the surpassing of safety thresholds, and the appropriate responsive controls in such instances. We propose a novel approach to fault-tolerance within RL in which the controller learns a policy that can cope with adversarial attacks and random stoppages that lead to failures of the system subcomponents. The results of the paper also cover fault-tolerant control so that the controller learns to avoid states that carry risk of system failures. By demonstrating that the class of problems is represented by a variant of stochastic games (SGs), we prove the existence of a solution which is a unique fixed point equilibrium of the game and characterise the optimal controller behaviour. We then introduce a value function approximation algorithm that converges to the solution through simulation in unknown environments.
The paper tackles fault-tolerance under random and adversarial stoppages.
1,483
Restoration of Video Frames from a Single Blurred Image with Motion Understanding
We propose a novel framework to generate clean video frames from a single motion-blurred image. While a broad range of literature focuses on recovering a single image from a blurred image, in this work, we tackle a more challenging task, i.e., video restoration from a blurred image. We formulate video restoration from a single blurred image as an inverse problem by setting the clean image sequence and its respective motion as latent factors, and the blurred image as an observation. Our framework is based on an encoder-decoder structure with spatial transformer network modules to restore a video sequence and its underlying motion in an end-to-end manner. We design a loss function and regularizers with complementary properties to stabilize the training and analyze variant models of the proposed network. The effectiveness and transferability of our network are highlighted through a large set of experiments on two different types of datasets: camera rotation blurs generated from panorama scenes and dynamic motion blurs in high speed videos. Our code and models will be publicly available.
We present a novel unified architecture that restores video frames from a single motion-blurred image in an end-to-end manner.
1,484
Pruning at a Glance: A Structured Class-Blind Pruning Technique for Model Compression
High performance of deep learning models typically comes at the cost of considerable model size and computation time. These factors limit applicability for deployment on memory- and battery-constrained devices such as mobile phones or embedded systems. In this work we propose a novel pruning technique that eliminates entire filters and neurons according to their relative L1-norm as compared to the rest of the network, yielding more compression and decreased redundancy in the parameters. The resulting network is non-sparse, however, much more compact and requires no special infrastructure for its deployment. We prove the viability of our method by achieving 97.4%, 47.8% and 53% compression of LeNet-5, ResNet-56 and ResNet-110 respectively, exceeding state-of-the-art compression results reported on ResNet without losing any performance compared to the baseline. Our approach not only exhibits good performance, but is also easy to implement on many architectures.
We propose a novel structured class-blind pruning technique to produce highly compressed neural networks.
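A hedged sketch of the class-blind ranking step: score every convolutional filter by its L1 norm relative to its own layer, then pool scores network-wide and prune the globally smallest (the relative normalization and Conv2d-only scope are illustrative assumptions):

import torch

def class_blind_filter_scores(model):
    scores = []
    for name, m in model.named_modules():
        if isinstance(m, torch.nn.Conv2d):
            l1 = m.weight.detach().abs().sum(dim=(1, 2, 3))  # per output filter
            rel = l1 / l1.sum()                              # relative to layer
            scores += [(name, i, v.item()) for i, v in enumerate(rel)]
    return sorted(scores, key=lambda t: t[2])                # prune from front

net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3), torch.nn.ReLU(), torch.nn.Conv2d(16, 32, 3))
print(class_blind_filter_scores(net)[:5])    # global candidates for removal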
1,485
Low Rank Training of Deep Neural Networks for Emerging Memory Technology
The recent success of neural networks for solving difficult decision tasks has incentivized incorporating smart decision making "at the edge."However, this work has traditionally focused on neural network inference, rather than training, due to memory and compute limitations, especially in emerging non-volatile memory systems, where writes are energetically costly and reduce lifespan.Yet, the ability to train at the edge is becoming increasingly important as it enables applications such as real-time adaptability to device drift and environmental variation, user customization, and federated learning across devices.In this work, we address four key challenges for training on edge devices with non-volatile memory: low weight update density, weight quantization, low auxiliary memory, and online learning.We present a low-rank training scheme that addresses these four challenges while maintaining computational efficiency.We then demonstrate the technique on a representative convolutional neural network across several adaptation problems, where it out-performs standard SGD both in accuracy and in number of weight updates.
We use Kronecker sum approximations for low-rank training to address challenges in training neural networks on edge devices that utilize emerging memory technologies.
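The memory argument can be illustrated for a single linear layer: its batch gradient is a sum of outer products and therefore low-rank, so updates can be stored as thin factors. This only illustrates the principle; the paper's Kronecker-sum scheme is more elaborate:

import numpy as np

d_out, d_in, B, r = 32, 64, 8, 4
G = np.random.randn(B, d_out)        # per-example output gradients g_b
X = np.random.randn(B, d_in)         # per-example inputs x_b
dW = G.T @ X                         # full gradient: sum_b g_b x_b^T, rank <= B

U, s, Vt = np.linalg.svd(dW, full_matrices=False)
dW_r = (U[:, :r] * s[:r]) @ Vt[:r]   # rank-r update: two thin factors instead
                                     # of a dense d_out x d_in write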
1,486
Characterizing the Accuracy/Complexity Landscape of Explanations of Deep Networks through Knowledge Extraction
Knowledge extraction techniques are used to convert neural networks into symbolic descriptions with the objective of producing more comprehensible learning models. The central challenge is to find an explanation which is more comprehensible than the original model while still representing that model faithfully. The distributed nature of deep networks has led many to believe that the hidden features of a neural network cannot be explained by logical descriptions simple enough to be understood by humans, and that decompositional knowledge extraction should be abandoned in favour of other methods. In this paper we examine this question systematically by proposing a knowledge extraction method using rules which allows us to map the complexity/accuracy landscape of rules describing hidden features in a Convolutional Neural Network. Experiments reported in this paper show that the shape of this landscape reveals an optimal trade-off between comprehensibility and accuracy, showing that each latent variable has an optimal rule to describe its behaviour. We find that the rules with the optimal trade-off in the first and final layers have a high degree of explainability, whereas the rules with the optimal trade-off in the second and third layers are less explainable. The results shed light on the feasibility of rule extraction from deep networks, and point to the value of decompositional knowledge extraction as a method of explainability.
Systematically examines how well we can explain the hidden features of a deep network in terms of logical rules.
1,487
Seeing the whole picture instead of a single point: Self-supervised likelihood learning for deep generative models
Recent findings show that deep generative models can judge out-of-distribution samples as more likely than those drawn from the same distribution as the training data.In this work, we focus on variational autoencoders and address the problem of misaligned likelihood estimates on image data.We develop a novel likelihood function that is based not only on the parameters returned by the VAE but also on the features of the data learned in a self-supervised fashion.In this way, the model additionally captures the semantic information that is disregarded by the usual VAE likelihood function.We demonstrate the improvements in reliability of the estimates with experiments on the FashionMNIST and MNIST datasets.
Improved likelihood estimates in variational autoencoders using self-supervised feature learning
1,488
Global Convergence of Policy Gradient Methods for Linearized Control Problems
Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model; 2) they are an "end-to-end" approach, directly optimizing the performance metric of interest; 3) they inherently allow for richly parameterized policies. A notable drawback is that even in the most basic continuous control problem, these methods must solve a non-convex optimization problem, where little is understood about their efficiency from both computational and statistical perspectives. In contrast, system identification and model-based planning in optimal control theory have a much more solid theoretical footing, where much is known with regard to their computational and statistical properties. This work bridges this gap, showing that policy gradient methods globally converge to the optimal solution and are efficient with regard to their sample and computational complexities.
This paper shows that model-free policy gradient methods can converge to the global optimal solution for non-convex linearized control problems.
1,489
A Compressed Sensing View of Unsupervised Text Embeddings, Bag-of-n-Grams, and LSTMs
Low-dimensional vector embeddings, computed using LSTMs or simpler techniques, are a popular approach for capturing the “meaning” of text and a form of unsupervised learning useful for downstream tasks.However, their power is not theoretically understood.The current paper derives formal understanding by looking at the subcase of linear embedding schemes.Using the theory of compressed sensing we show that representations combining the constituent word vectors are essentially information-preserving linear measurements of Bag-of-n-Grams representations of text.This leads to a new theoretical result about LSTMs: low-dimensional embeddings derived from a low-memory LSTM are provably at least as powerful on classification tasks, up to small error, as a linear classifier over BonG vectors, a result that extensive empirical work has thus far been unable to show.Our experiments support these theoretical findings and establish strong, simple, and unsupervised baselines on standard benchmarks that in some cases are state of the art among word-level methods.We also show a surprising new property of embeddings such as GloVe and word2vec: they form a good sensing matrix for text that is more efficient than random matrices, the standard sparse recovery tool, which may explain why they lead to better representations in practice.
We use the theory of compressed sensing to prove that LSTMs can do at least as well on linear text classification as Bag-of-n-Grams.
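The linear-measurement view is easy to verify: a summed word-vector embedding is exactly the word-embedding matrix applied to the Bag-of-Words count vector (a toy check with random vectors; n-grams work the same way over an n-gram vocabulary):

import numpy as np

V, d = 1000, 50                         # vocabulary size, embedding dimension
A = np.random.randn(d, V)               # columns are word vectors: the sensing matrix
doc = [3, 42, 42, 7]                    # a document as word ids
bow = np.bincount(doc, minlength=V)     # (sparse) Bag-of-Words counts

embedding = sum(A[:, w] for w in doc)   # the usual "sum of word vectors"
assert np.allclose(embedding, A @ bow)  # i.e., a d-dim linear measurement of BoW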
1,490
A NEW POINTWISE CONVOLUTION IN DEEP NEURAL NETWORKS THROUGH EXTREMELY FAST AND NON PARAMETRIC TRANSFORMS
Some conventional transforms such as the Discrete Walsh-Hadamard Transform (DWHT) and the Discrete Cosine Transform (DCT) have been widely used as feature extractors in image processing but rarely applied in neural networks. However, we found that these conventional transforms have the ability to capture the cross-channel correlations without any learnable parameters in DNNs. This paper first proposes to apply conventional transforms to pointwise convolution, showing that such transforms significantly reduce the computational complexity of neural networks without degrading accuracy. Especially for DWHT, it requires no floating point multiplications but only additions and subtractions, which can considerably reduce computation overheads. In addition, its fast algorithm further reduces the complexity of floating point additions from O(n^2) to O(n log n). These non-parametric and low-computation properties construct extremely efficient networks in the number of parameters and operations, enjoying accuracy gain. Our proposed DWHT-based model gained 1.49% accuracy increase with 79.4% reduced parameters and 48.4% reduced FLOPs compared with its baseline model on the CIFAR-100 dataset.
We introduce new pointwise convolution layers equipped with extremely fast conventional transforms in deep neural networks.
1,491
Lattice Representation Learning
We introduce the notion of lattice representation learning, in which the representation for some object of interest is a lattice point in a Euclidean space. Our main contribution is a result for replacing an objective function which employs lattice quantization with an objective function in which quantization is absent, thus allowing optimization techniques based on gradient descent to apply; we call the resulting algorithms local algorithms, as they are designed explicitly to allow for an optimization procedure where only local information is employed. We also argue that a technique commonly used in Variational Auto-Encoders is tightly connected with the idea of lattice representations, as the quantization error in good high-dimensional lattices can be modeled as a Gaussian distribution. We use a traditional encoder/decoder architecture to explore the idea of lattice-valued representations, and provide experimental evidence of the potential of using lattice representations by modifying a generic architecture so that it can implement not only Gaussian dithering of representations, but also the well-known straight-through estimator and its application to vector quantization.
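A minimal sketch, in PyTorch as an illustrative choice, of the two differentiable surrogates the abstract mentions for non-differentiable quantization: Gaussian dithering of a representation and the straight-through estimator; shapes and the noise scale are toy values.

import torch

z = torch.randn(8, 16, requires_grad=True)   # encoder output (toy shapes)

# (1) Gaussian dithering: additive noise stands in for the quantization
# error of a good high-dimensional lattice, so gradients pass through.
z_dithered = z + 0.1 * torch.randn_like(z)

# (2) Straight-through estimator: round to the integer lattice on the
# forward pass, but make the backward pass treat rounding as the identity.
z_quantized = z + (torch.round(z) - z).detach()

loss = z_dithered.sum() + z_quantized.sum()
loss.backward()
print(z.grad.abs().mean())   # gradients reach the encoder through both paths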
We propose to use lattices to represent objects and prove a fundamental result on how to train networks that use them.
1,492
Finding a human-like classifier
Many attempts have been made to explain the trade-off between accuracy and adversarial robustness. However, there is no clear understanding of the behavior of a robust classifier which has human-like robustness. We argue why we need to consider adversarial robustness against varying magnitudes of perturbations rather than focusing only on a fixed perturbation threshold, why we need to use a different method to generate the adversarially perturbed samples that are used to train a robust classifier and to measure the robustness of classifiers, and why we need to prioritize adversarial accuracies at different magnitudes. We introduce Lexicographical Genuine Robustness of classifiers, which combines the above requirements. We also suggest a candidate oracle classifier called the "Optimal Lexicographically Genuinely Robust Classifier" (OLGRC) that prioritizes accuracy on meaningful adversarially perturbed examples generated by smaller-magnitude perturbations. The training algorithm for estimating the OLGRC requires lexicographical optimization, unlike existing adversarial training methods. To apply lexicographical optimization to neural networks, we utilize Gradient Episodic Memory, which was originally developed for continual learning to prevent catastrophic forgetting.
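As a hedged sketch of how GEM-style updates can enforce priorities, here is the closed-form single-constraint projection (the A-GEM variant, shown here instead of GEM's full quadratic program): the gradient of a lower-priority objective is projected so it cannot increase a higher-priority one.

import numpy as np

def project(g_low, g_high):
    # If the low-priority gradient conflicts with the high-priority one
    # (negative inner product), remove the conflicting component.
    dot = g_low @ g_high
    if dot < 0:
        g_low = g_low - (dot / (g_high @ g_high)) * g_high
    return g_low

g_high = np.array([1.0, 0.0])    # gradient of the prioritized objective
g_low = np.array([-1.0, 1.0])    # gradient of the secondary objective
print(project(g_low, g_high))    # -> [0., 1.], no longer conflicts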
We design and train a classifier whose adversarial robustness more closely resembles the robustness of humans.
1,493
Discrete Wasserstein Generative Adversarial Networks (DWGAN)
Generating complex discrete distributions remains one of the challenging problems in machine learning. Existing techniques for generating complex distributions with high degrees of freedom depend on standard generative models like Generative Adversarial Networks, Wasserstein GAN, and associated variations. Such models are based on an optimization involving the distance between two continuous distributions. We introduce a Discrete Wasserstein GAN model which is based on a dual formulation of the Wasserstein distance between two discrete distributions. We derive a novel training algorithm and corresponding network architecture based on the formulation. Experimental results are provided for both synthetic discrete data, and real discretized data from MNIST handwritten digits.
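For intuition, here is the dual linear program underlying the Wasserstein distance between two discrete distributions, solved exactly for a tiny example (in the model itself the dual variables would be parameterized by networks; the distributions and cost matrix below are toy choices).

import numpy as np
from scipy.optimize import linprog

# Dual of W1(p, q): max_{f,g}  p.f + q.g   s.t.  f_i + g_j <= C_ij.
p = np.array([0.5, 0.4, 0.1])
q = np.array([0.1, 0.4, 0.5])
C = np.abs(np.subtract.outer(np.arange(3), np.arange(3))).astype(float)

n, m = len(p), len(q)
# Variables: [f_1..f_n, g_1..g_m]; linprog minimizes, so negate the objective.
c = -np.concatenate([p, q])
A_ub, b_ub = [], []
for i in range(n):
    for j in range(m):
        row = np.zeros(n + m)
        row[i] = 1.0
        row[n + j] = 1.0
        A_ub.append(row)
        b_ub.append(C[i, j])
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(None, None)] * (n + m))
print("W1(p, q) =", -res.fun)   # expected 0.8 for this example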
We propose a Discrete Wasserstein GAN (DWGAN) model which is based on a dual formulation of the Wasserstein distance between two discrete distributions.
1,494
Omega: An Architecture for AI Unification
We introduce the open-ended, modular, self-improving Omega AI unification architecture, which is a refinement of Solomonoff's Alpha architecture, as considered from first principles. The architecture embodies several crucial principles of general intelligence including diversity of representations, diversity of data types, integrated memory, modularity, and higher-order cognition. We retain the basic design of a fundamental algorithmic substrate called an AI kernel for problem solving and basic cognitive functions like memory, and a larger, modular architecture that re-uses the kernel in many ways. Omega includes eight representation languages and six classes of neural networks, which are briefly introduced. The architecture is intended to initially address data science automation, hence it includes many problem-solving methods for statistical tasks. We review the broad software architecture, higher-order cognition, self-improvement, modular neural architectures, intelligent agents, the process and memory hierarchy, hardware abstraction, peer-to-peer computing, and the data abstraction facility.
It's a new AGI architecture for trans-sapient performance. This is a high-level overview of the Omega AGI architecture, which is the basis of a data science automation system. Submitted to a workshop.
1,495
FlowQA: Grasping Flow in History for Conversational Machine Comprehension
Conversational machine comprehension requires a deep understanding of the conversation history. To enable traditional, single-turn models to encode the history comprehensively, we introduce Flow, a mechanism that can incorporate intermediate representations generated during the process of answering previous questions, through an alternating parallel processing structure. Compared to shallow approaches that concatenate previous questions/answers as input, Flow integrates the latent semantics of the conversation history more deeply. Our model, FlowQA, shows superior performance on two recently proposed conversational challenges. The effectiveness of Flow also shows in other tasks: by reducing sequential instruction understanding to conversational machine comprehension, FlowQA outperforms the best models on all three domains in SCONE, with +1.8% to +4.4% improvement in accuracy.
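One plausible minimal reading of the Flow mechanism, with illustrative dimensions and a GRU as a stand-in recurrence: at every token position of the context, run a recurrence across dialogue turns, so reasoning state from earlier questions reaches later ones.

import torch
import torch.nn as nn

turns, ctx_len, hidden = 4, 50, 64
reps = torch.randn(turns, ctx_len, hidden)   # per-turn context representations

flow = nn.GRU(hidden, hidden, batch_first=True)
# Treat each token position as a "batch" element and each turn as a time step.
per_position = reps.permute(1, 0, 2)          # (ctx_len, turns, hidden)
flowed, _ = flow(per_position)                # recurrence along the turn axis
flowed = flowed.permute(1, 0, 2)              # back to (turns, ctx_len, hidden)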
We propose the Flow mechanism and an end-to-end architecture, FlowQA, that achieves SotA on two conversational QA datasets and a sequential instruction understanding task.
1,496
Residual Loss Prediction: Reinforcement Learning With No Incremental Feedback
We consider reinforcement learning and bandit structured prediction problems with very sparse loss feedback: only at the end of an episode. We introduce a novel algorithm, RESIDUAL LOSS PREDICTION (RESLOPE), that solves such problems by automatically learning an internal representation of a denser reward function. RESLOPE operates as a reduction to contextual bandits, using its learned loss representation to solve the credit assignment problem, and a contextual bandit oracle to trade off exploration and exploitation. RESLOPE enjoys a no-regret reduction-style theoretical guarantee and outperforms state-of-the-art reinforcement learning algorithms in both MDP environments and bandit structured prediction settings.
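A heavily simplified, hypothetical illustration of the residual credit-assignment idea (tabular predictor, squared-loss updates, no contextual bandit oracle): each step is regressed toward the episodic loss minus the predicted losses of the other steps.

import numpy as np

horizon, lr = 5, 0.1
pred = np.zeros(horizon)                     # per-step loss predictions

def episode_loss(rng):
    # Unknown true per-step losses; only their sum is ever observed.
    true = np.array([1.0, 0.0, 2.0, 0.0, 1.0])
    return true.sum() + 0.1 * rng.standard_normal()

rng = np.random.default_rng(0)
for _ in range(1000):
    total = episode_loss(rng)
    for t in range(horizon):
        residual = total - (pred.sum() - pred[t])   # credit for step t
        pred[t] += lr * (residual - pred[t])        # regress toward residual
print(pred)   # per-step predictions sum to roughly the episodic loss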
We present a novel algorithm for solving reinforcement learning and bandit structured prediction problems with very sparse loss feedback.
1,497
Stabilizing Adversarial Nets with Prediction Methods
Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function. The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen it is often highly sensitive to learning rates. We propose a simple modification of stochastic gradient descent that stabilizes adversarial networks. We show, both in theory and practice, that the proposed method reliably converges to saddle points. This makes adversarial networks less likely to "collapse," and enables faster training with larger learning rates.
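A minimal sketch of a prediction step on the classic bilinear saddle problem min_x max_y x*y, where plain simultaneous gradient steps spiral outward; the step size and iteration count are illustrative.

x, y = 1.0, 1.0
lr = 0.1
for _ in range(2000):
    x_old = x
    x = x - lr * y            # minimize x*y over x (gradient is y)
    x_bar = x + (x - x_old)   # prediction step: extrapolate x's trajectory
    y = y + lr * x_bar        # maximize x*y over y, against the predicted x
print(x, y)                   # both spiral slowly in toward the saddle (0, 0)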
We present a simple modification to the alternating SGD method, called a prediction step, that improves the stability of adversarial networks.
1,498
Towards Model-Based Contrastive Explanations for Explainable Planning
An important type of question that arises in Explainable Planning is a contrastive question, of the form "Why action A instead of action B?". These kinds of questions can be answered with a contrastive explanation that compares properties of the original plan containing A against the contrastive plan containing B. An effective explanation of this type serves to highlight the differences between the decisions that have been made by the planner and what the user would expect, as well as to provide further insight into the model and the planning process. Producing this kind of explanation requires the generation of the contrastive plan. This paper introduces domain-independent compilations of user questions into constraints. These constraints are added to the planning model, so that a solution to the new model represents the contrastive plan. We introduce a formal description of the compilation from user question to constraints in a temporal and numeric PDDL2.1 planning setting.
This paper introduces domain-independent compilations of user questions into constraints for contrastive explanations.
1,499
Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking
Methods that learn representations of nodes in a graph play a critical role in network analysis since they enable many downstream learning tasks. We propose Graph2Gauss - an approach that can efficiently learn versatile node embeddings on large-scale graphs that show strong performance on tasks such as link prediction and node classification. Unlike most approaches that represent nodes as point vectors in a low-dimensional continuous space, we embed each node as a Gaussian distribution, allowing us to capture uncertainty about the representation. Furthermore, we propose an unsupervised method that handles inductive learning scenarios and is applicable to different types of graphs: plain/attributed, directed/undirected. By leveraging both the network structure and the associated node attributes, we are able to generalize to unseen nodes without additional training. To learn the embeddings we adopt a personalized ranking formulation w.r.t. the node distances that exploits the natural ordering of the nodes imposed by the network structure. Experiments on real-world networks demonstrate the high performance of our approach, outperforming state-of-the-art network embedding methods on several different tasks. Additionally, we demonstrate the benefits of modeling uncertainty - by analyzing it we can estimate neighborhood diversity and detect the intrinsic latent dimensionality of a graph.
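A minimal sketch of the building blocks, with toy numbers throughout: an asymmetric dissimilarity given by the KL divergence between diagonal-covariance Gaussian embeddings, and a square-exponential ranking penalty encouraging a 1-hop neighbor to be closer than a 2-hop one.

import numpy as np

def kl_gauss(mu1, var1, mu2, var2):
    # KL( N(mu1, var1) || N(mu2, var2) ) for diagonal covariances.
    return 0.5 * np.sum(
        var1 / var2 + (mu2 - mu1) ** 2 / var2 - 1.0 + np.log(var2 / var1)
    )

mu_a, var_a = np.zeros(4), np.ones(4)          # embedding of a node
mu_1hop, var_1hop = 0.1 * np.ones(4), np.ones(4)
mu_2hop, var_2hop = 1.0 * np.ones(4), np.ones(4)

d1 = kl_gauss(mu_a, var_a, mu_1hop, var_1hop)
d2 = kl_gauss(mu_a, var_a, mu_2hop, var_2hop)
# Square-exponential ranking loss penalizing violations of d1 < d2:
loss = d1 ** 2 + np.exp(-d2)
print(d1 < d2, loss)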
We embed nodes in a graph as Gaussian distributions allowing us to capture uncertainty about their representation.