Dataset columns: Unnamed: 0 (int64, range 0–1.83k); Clean_Title (string, lengths 8–153); Clean_Text (string, lengths 330–2.26k); Clean_Summary (string, lengths 53–295).
1,800
Trace norm regularization and faster inference for embedded speech recognition RNNs
We propose and evaluate new techniques for compressing and speeding up dense matrix multiplications as found in the fully connected and recurrent layers of neural networks for embedded large vocabulary continuous speech recognition. For compression, we introduce and study a trace norm regularization technique for training low-rank factored versions of matrix multiplications. Compared to standard low-rank training, we show that our method leads to good accuracy-versus-parameter-count trade-offs and can be used to speed up training of large models. For speedup, we enable faster inference on ARM processors through new open-sourced kernels optimized for small batch sizes, resulting in 3x to 7x speedups over the widely used gemmlowp library. Beyond LVCSR, we expect our techniques and kernels to be more generally applicable to embedded neural networks with large fully connected or recurrent layers.
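The key identity behind trace norm regularization is the variational characterization of the nuclear norm, which lets a low-rank penalty be trained directly on the factors of a factored weight matrix. A minimal PyTorch sketch (the factor names U and V and the weighting lam are illustrative, not from the paper):

```python
import torch

def trace_norm_surrogate(U: torch.Tensor, V: torch.Tensor) -> torch.Tensor:
    # For W = U @ V, the trace (nuclear) norm satisfies
    # ||W||_* = min over factorizations of (||U||_F^2 + ||V||_F^2) / 2,
    # so penalizing the factors' Frobenius norms encourages low effective rank.
    return 0.5 * (U.pow(2).sum() + V.pow(2).sum())

# usage sketch: total_loss = task_loss + lam * trace_norm_surrogate(U, V)
```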
We compress and speed up speech recognition models on embedded devices through a trace norm regularization technique and optimized kernels.
1,801
Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets
Training activation quantized neural networks involves minimizing a piecewise constant training loss whose gradient vanishes almost everywhere, which is undesirable for the standard back-propagation or chain rule. An empirical way around this issue is to use a straight-through estimator (STE) in the backward pass only, so that the "gradient" through the modified chain rule becomes non-trivial. Since this unusual "gradient" is certainly not the gradient of the loss function, the following question arises: why does searching in its negative direction minimize the training loss? In this paper, we provide the theoretical justification of the concept of STE by answering this question. We consider the problem of learning a two-linear-layer network with binarized ReLU activation and Gaussian input data. We shall refer to the unusual "gradient" given by the STE-modified chain rule as the coarse gradient. The choice of STE is not unique. We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient, and its negation is a descent direction for minimizing the population loss. We further show the associated coarse gradient descent algorithm converges to a critical point of the population loss minimization problem. Moreover, we show that a poor choice of STE leads to instability of the training algorithm near certain local minima, which is verified with CIFAR-10 experiments.
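As context, an STE simply substitutes a surrogate derivative for the zero-almost-everywhere derivative of the quantizer in the backward pass. A minimal PyTorch sketch of a binarized activation with the clipped-ReLU STE (one of several possible STE choices; the class name is illustrative):

```python
import torch

class BinarizedReLU(torch.autograd.Function):
    # Forward: hard 0/1 activation, whose true derivative is zero a.e.
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).to(x.dtype)

    # Backward: straight-through estimator; here the clipped-ReLU choice,
    # which passes the gradient only where 0 <= x <= 1.
    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * ((x >= 0) & (x <= 1)).to(x.dtype)
```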
We provide theoretical justification for the concept of the straight-through estimator.
1,802
GumbelClip: Off-Policy Actor-Critic Using Experience Replay
This paper presents GumbelClip, a set of modifications to the actor-critic algorithm, for off-policy reinforcement learning. GumbelClip uses the concepts of truncated importance sampling along with additive noise to produce a loss function enabling the use of off-policy samples. The modified algorithm achieves an increase in convergence speed and sample efficiency compared to on-policy algorithms and is competitive with existing off-policy policy gradient methods while being significantly simpler to implement. The effectiveness of GumbelClip is demonstrated against existing on-policy and off-policy actor-critic algorithms on a subset of the Atari domain.
With a set of modifications (under 10 LOC) to A2C, you get an off-policy actor-critic that outperforms A2C and performs similarly to ACER. The modifications are large batch sizes, aggressive clamping, and policy "forcing" with Gumbel noise.
1,803
Skip-Thought GAN: Generating Text through Adversarial Training using Skip-Thought Vectors
In the past few years, various advancements have been made in generative models owing to the formulation of Generative Adversarial Networks. GANs have been shown to perform exceedingly well on a wide variety of tasks pertaining to image generation and style transfer. In the field of Natural Language Processing, word embeddings such as word2vec and GloVe are state-of-the-art methods for applying neural network models to textual data. Attempts have been made to utilize GANs with word embeddings for text generation. This work presents an approach to text generation using Skip-Thought sentence embeddings in conjunction with GANs based on gradient penalty functions and f-measures. The results of using sentence embeddings with GANs for generating text conditioned on input information are comparable to the approaches where word embeddings are used.
Generating text using sentence embeddings from Skip-Thought Vectors with the help of Generative Adversarial Networks.
1,804
Neural Subgraph Isomorphism Counting
In this paper, we study a new graph learning problem: learning to count subgraph isomorphisms. Although the learning-based approach is inexact, we are able to generalize to count large patterns and data graphs in polynomial time, compared to the exponential time of the original NP-complete problem. Different from other traditional graph learning problems such as node classification and link prediction, subgraph isomorphism counting requires more global inference to oversee the whole graph. To tackle this problem, we propose a dynamic intermedium attention memory network (DIAMNet) which augments different representation learning architectures and iteratively attends pattern and target data graphs to memorize different subgraph isomorphisms for the global counting. We develop evaluation sets of both small and large graphs to evaluate different models. Experimental results show that learning-based subgraph isomorphism counting can help reduce the time complexity with acceptable accuracy. Our DIAMNet can further improve existing representation learning models for this more global problem.
In this paper, we study a new graph learning problem: learning to count subgraph isomorphisms.
1,805
Zero-Shot Policy Transfer with Disentangled Attention
Domain adaptation is an open problem in deep reinforcement learning. Often, agents are asked to perform in environments where data is difficult to obtain. In such settings, agents are trained in similar environments, such as simulators, and are then transferred to the original environment. The gap between visual observations of the source and target environments often causes the agent to fail in the target environment. We present a new RL agent, SADALA. SADALA first learns a compressed state representation. It then jointly learns to ignore distracting features and solve the task presented. SADALA's separation of important and unimportant visual features leads to robust domain transfer. SADALA outperforms both prior disentangled-representation-based RL and domain randomization approaches across RL environments.
We present an agent that uses a beta-VAE to extract visual features and an attention mechanism to ignore irrelevant features from visual observations, enabling robust transfer between visual domains.
1,806
Robustness Verification for Transformers
Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding the behavior of a given model and for obtaining safety guarantees. However, previous methods are usually limited to relatively simple neural networks. In this paper, we consider the robustness verification problem for Transformers. Transformers have complex self-attention layers that pose many challenges for verification, including cross-nonlinearity and cross-position dependency, which have not been discussed in previous work. We resolve these challenges and develop the first verification algorithm for Transformers. The certified robustness bounds computed by our method are significantly tighter than those by naive Interval Bound Propagation. These bounds also shed light on interpreting Transformers, as they consistently reflect the importance of words in sentiment analysis.
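For reference, the naive Interval Bound Propagation baseline that the paper's bounds are compared against propagates an elementwise box through each layer. A minimal sketch for an affine layer (standard IBP, not the authors' tighter method):

```python
import torch

def ibp_linear(W: torch.Tensor, b: torch.Tensor,
               lo: torch.Tensor, hi: torch.Tensor):
    # Propagate the box [lo, hi] through x -> W @ x + b.
    center = (lo + hi) / 2
    radius = (hi - lo) / 2
    out_center = W @ center + b
    out_radius = W.abs() @ radius  # worst case over the box
    return out_center - out_radius, out_center + out_radius
```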
We propose the first algorithm for verifying the robustness of Transformers.
1,807
Generalization Puzzles in Deep Networks
In the last few years, deep learning has been tremendously successful in many applications. However, our theoretical understanding of deep learning, and thus the ability of providing principled improvements, seems to lag behind. A theoretical puzzle concerns the ability of deep networks to predict well despite their intriguing apparent lack of generalization: their classification accuracy on the training set is not a proxy for their performance on a test set. How is it possible that training performance is independent of testing performance? Do deep networks indeed require a drastically new theory of generalization? Or are there measurements based on the training data that are predictive of the network performance on future data? Here we show that when performance is measured appropriately, the training performance is in fact predictive of expected performance, consistent with classical machine learning theory.
Contrary to previous beliefs, the training performance of deep networks, when measured appropriately, is predictive of test performance, consistent with classical machine learning theory.
1,808
Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control
We propose a "plan online and learn offline" framework for the setting where an agent, with an internal model, needs to continually act and learn in the world.Our work builds on the synergistic relationship between local model-based control, global value function learning, and exploration.We study how local trajectory optimization can cope with approximation errors in the value function, and can stabilize and accelerate value function learning.Conversely, we also study how approximate value functions can help reduce the planning horizon and allow for better policies beyond local solutions.Finally, we also demonstrate how trajectory optimization can be used to perform temporally coordinated exploration in conjunction with estimating uncertainty in value function approximation.This exploration is critical for fast and stable learning of the value function.Combining these components enable solutions to complex control tasks, like humanoid locomotion and dexterous in-hand manipulation, in the equivalent of a few minutes of experience in the real world.
We propose a framework that incorporates planning for efficient exploration and learning in complex environments.
1,809
On the Generalization Effects of DenseNet Model Structures
Modern neural network architectures take advantage of increasingly deeper layers and various advances in their structure to achieve better performance. While traditional explicit regularization techniques like dropout, weight decay, and data augmentation are still being used in these new models, little about the regularization and generalization effects of these new structures has been studied. Besides being deeper than their predecessors, could newer architectures like ResNet and DenseNet also benefit from their structures' implicit regularization properties? In this work, we investigate the skip connection's effect on a network's generalization. Through experiments, we show that certain neural network architectures contribute to their generalization abilities. Specifically, we study the effect that low-level features have on generalization performance when they are introduced to deeper layers in DenseNet, ResNet, as well as networks with skip connections. We show that these low-level representations do help with generalization in multiple settings when both the quality and quantity of training data are decreased.
Our paper analyses the representational power of networks, especially those with skip connections, which may be used as a method for better generalization.
1,810
Bridging ELBO objective and MMD
One of the challenges in training generative models such as the variational auto-encoder is avoiding posterior collapse. When the generator has too much capacity, it is prone to ignoring the latent code. This problem is exacerbated when the dataset is small and the latent dimension is high. The root of the problem is the ELBO objective, specifically the Kullback–Leibler divergence term in the objective function. This paper proposes a new objective function to replace the KL term with one that emulates the maximum mean discrepancy (MMD) objective. It also introduces a new technique, named latent clipping, that is used to control the distance between samples in latent space. A probabilistic autoencoder model is designed and trained on the MNIST and MNIST Fashion datasets using the new objective function, and is shown to outperform models trained with the ELBO and β-VAE objectives. The proposed model is less prone to posterior collapse, and can generate reconstructions and new samples of good quality. Latent representations learned by the model are shown to be good and can be used for downstream tasks such as classification.
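As background, a minimal sketch of the (biased) empirical MMD^2 estimate under an RBF kernel, the kind of discrepancy the proposed objective emulates (the kernel and bandwidth choice are illustrative, not from the paper):

```python
import torch

def mmd_rbf(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Biased empirical estimate of MMD^2 between sample sets x and y.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```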
This paper proposes a new objective function to replace the KL term with one that emulates the maximum mean discrepancy (MMD) objective.
1,811
Input Complexity and Out-of-distribution Detection with Likelihood-based Generative Models
Likelihood-based generative models are a promising resource to detect out-of-distribution inputs which could compromise the robustness or reliability of a machine learning system. However, likelihoods derived from such models have been shown to be problematic for detecting certain types of inputs that significantly differ from training data. In this paper, we posit that this problem is due to the excessive influence that input complexity has on generative models' likelihoods. We report a set of experiments supporting this hypothesis, and use an estimate of input complexity to derive an efficient and parameter-free OOD score, which can be seen as a likelihood-ratio, akin to Bayesian model comparison. We find this score to perform comparably to, or even better than, existing OOD detection approaches under a wide range of data sets, models, model sizes, and complexity estimates.
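One way to instantiate such a complexity-compensated score is to subtract a compressor-based complexity estimate from the model's negative log-likelihood, both measured in bits. A hedged sketch (the use of zlib and the sign convention that higher means more out-of-distribution are assumptions for illustration):

```python
import zlib

def ood_score(nll_bits: float, raw_bytes: bytes) -> float:
    # Complexity estimate: length of the losslessly compressed input, in bits.
    complexity_bits = 8 * len(zlib.compress(raw_bytes))
    # Likelihood-ratio-like score: model surprisal minus input complexity.
    return nll_bits - complexity_bits
```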
We posit that generative models' likelihoods are excessively influenced by the input's complexity, and propose a way to compensate for it when detecting out-of-distribution inputs.
1,812
On the Convergence of Adam and Beyond
Several recently proposed stochastic optimization methods that have been successfully used in training deep networks, such as RMSProp, Adam, Adadelta, and Nadam, are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution. We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where Adam does not converge to the optimal solution, and describe the precise problems with the previous analysis of the Adam algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with long-term memory of past gradients, and we propose new variants of the Adam algorithm which not only fix the convergence issues but often also lead to improved empirical performance.
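The long-term-memory variant commonly associated with this paper is AMSGrad, which replaces the second-moment estimate in the update with its running maximum, so the effective per-coordinate step size never grows. A minimal PyTorch-style sketch of one update step (bias correction omitted for brevity; hyperparameters illustrative):

```python
import torch

def amsgrad_step(p, grad, m, v, v_max, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Standard Adam moment updates.
    m.mul_(b1).add_(grad, alpha=1 - b1)
    v.mul_(b2).addcmul_(grad, grad, value=1 - b2)
    # Long-term memory: keep the running max of the second moment.
    torch.maximum(v_max, v, out=v_max)
    # Parameter update uses v_max, not v.
    p.addcdiv_(m, v_max.sqrt().add_(eps), value=-lr)
```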
We investigate the convergence of popular optimization algorithms like Adam and RMSProp, and propose new variants of these methods which provably converge to the optimal solution in convex settings.
1,813
Abstract Diagrammatic Reasoning with Multiplex Graph Networks
Abstract reasoning, particularly in the visual domain, is a complex human ability, but it remains a challenging problem for artificial neural learning systems. In this work we propose MXGNet, a multilayer graph neural network for multi-panel diagrammatic reasoning tasks. MXGNet combines three powerful concepts, namely, object-level representation, graph neural networks and multiplex graphs, for solving visual reasoning tasks. MXGNet first extracts object-level representations for each element in all panels of the diagrams, and then forms a multi-layer multiplex graph capturing multiple relations between objects across different diagram panels. MXGNet summarises the multiple graphs extracted from the diagrams of the task, and uses this summarisation to pick the most probable answer from the given candidates. We have tested MXGNet on two types of diagrammatic reasoning tasks, namely Diagram Syllogisms and Raven Progressive Matrices (RPM). For a Euler Diagram Syllogism task, MXGNet achieves state-of-the-art accuracy of 99.8%. For PGM and RAVEN, two comprehensive datasets for RPM reasoning, MXGNet outperforms the state-of-the-art models by a considerable margin.
MXGNet is a multilayer, multiplex graph based architecture which achieves good performance on various diagrammatic reasoning tasks.
1,814
Semantic Structure Extraction for Spreadsheet Tables with a Multi-task Learning Architecture
Semantic structure extraction for spreadsheets includes detecting table regions, recognizing structural components and classifying cell types. Automatic semantic structure extraction is key to automatic data transformation from various table structures into canonical schema so as to enable data analysis and knowledge discovery. However, these tasks are challenged by the diverse table structures and the spatially-correlated semantics on cell grids. To learn spatial correlations and capture semantics on spreadsheets, we have developed a novel learning-based framework for spreadsheet semantic structure extraction. First, we propose a multi-task framework that learns table region, structural components and cell types jointly; second, we leverage the advances of the recent language model to capture semantics in each cell value; third, we build a large human-labeled dataset with broad coverage of table structures. Our evaluation shows that our proposed multi-task framework is highly effective, outperforming the results of training each task separately.
We propose a novel multi-task framework that learns table detection, semantic component recognition and cell type classification for spreadsheet tables with promising results.
1,815
Towards Holistic and Automatic Evaluation of Open-Domain Dialogue Generation
Open-domain dialogue generation has gained increasing attention in Natural Language Processing. Comparing these methods requires a holistic means of dialogue evaluation. Human ratings are deemed as the gold standard. As human evaluation is inefficient and costly, an automated substitute is desirable. In this paper, we propose holistic evaluation metrics which capture both the quality and diversity of dialogues. Our metrics consist of GPT-2 based context coherence between sentences in a dialogue, GPT-2 based fluency in phrasing, and n-gram based diversity in responses to augmented queries. The empirical validity of our metrics is demonstrated by strong correlation with human judgments. We provide the associated code, datasets and human ratings.
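For the diversity component, one standard instantiation of an n-gram based diversity measure is the distinct-n ratio of unique to total n-grams across generated responses; a minimal sketch (the paper's exact formulation may differ):

```python
def distinct_n(responses, n=2):
    # Ratio of unique n-grams to total n-grams across a set of responses.
    ngrams, total = set(), 0
    for r in responses:
        toks = r.split()
        for i in range(len(toks) - n + 1):
            ngrams.add(tuple(toks[i:i + n]))
            total += 1
    return len(ngrams) / max(total, 1)

# e.g. distinct_n(["how are you", "how are things"], n=2) -> 3/4
```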
We propose automatic metrics to holistically evaluate open-domain dialogue generation, and they strongly correlate with human evaluation.
1,816
Learning Graph Convolution Filters from Data Manifold
Convolutional Neural Networks have gained tremendous success in computer vision tasks with their outstanding ability to capture the local latent features. Recently, there has been an increasing interest in extending CNNs to the general spatial domain. Although various types of graph convolution and geometric convolution methods have been proposed, their connections to traditional 2D-convolution are not well understood. In this paper, we show that depthwise separable convolution is a path to unify the two kinds of convolution methods in one mathematical view, based on which we derive a novel Depthwise Separable Graph Convolution that subsumes existing graph convolution methods as special cases of our formulation. Experiments show that the proposed approach consistently outperforms other graph convolution and geometric convolution baselines on benchmark datasets in multiple domains.
We devise a novel Depthwise Separable Graph Convolution (DSGC) for the generic spatial domain data, which is highly compatible with depthwise separable convolution.
1,817
Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset
Generating musical audio directly with neural networks is notoriously difficult because it requires coherently modeling structure at many different timescales. Fortunately, most music is also highly structured and can be represented as discrete note events played on musical instruments. Herein, we show that by using notes as an intermediate representation, we can train a suite of models capable of transcribing, composing, and synthesizing audio waveforms with coherent musical structure on timescales spanning six orders of magnitude, a process we call Wave2Midi2Wave. This large advance in the state of the art is enabled by our release of the new MAESTRO dataset, composed of over 172 hours of virtuosic piano performances captured with fine alignment between note labels and audio waveforms. The networks and the dataset together present a promising approach toward creating new expressive and interpretable neural models of music.
We train a suite of models capable of transcribing, composing, and synthesizing audio waveforms with coherent musical structure, enabled by the new MAESTRO dataset.
1,818
Challenges in Computing and Optimizing Upper Bounds of Marginal Likelihood based on Chi-Square Divergences
Variational inference based on chi-square divergence minimization (CHIVI) provides a way to approximate a model's posterior while obtaining an upper bound on the marginal likelihood. However, in practice CHIVI relies on Monte Carlo estimates of an upper bound objective that at modest sample sizes are not guaranteed to be true bounds on the marginal likelihood. This paper provides an empirical study of CHIVI performance on a series of synthetic inference tasks. We show that CHIVI is far more sensitive to initialization than classic VI based on KL minimization, often needs a very large number of samples, and may not be a reliable upper bound. We also suggest possible ways to detect and alleviate some of these pathologies, including diagnostic bounds and initialization strategies.
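For reference, the chi-square upper bound (CUBO) that CHIVI minimizes brackets the marginal likelihood from above, just as the ELBO brackets it from below; in a standard formulation (shown here for order n = 2):

```latex
\mathrm{CUBO}_2(q) = \tfrac{1}{2}\log \mathbb{E}_{q(z)}\!\Big[\Big(\tfrac{p(x,z)}{q(z)}\Big)^{2}\Big]
\;\ge\; \log p(x) \;\ge\;
\mathbb{E}_{q(z)}\!\Big[\log \tfrac{p(x,z)}{q(z)}\Big] = \mathrm{ELBO}(q)
```

Because the log sits outside the expectation in the CUBO, a Monte Carlo estimate of it is biased downward by Jensen's inequality and so is not itself guaranteed to be an upper bound, which is exactly the pathology the paper studies.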
An empirical study of variational inference based on chi-square divergence minimization, showing that minimizing the CUBO is trickier than maximizing the ELBO
1,819
Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
It has been widely recognized that adversarial examples can be easily crafted to fool deep networks, a vulnerability that mainly roots in the locally non-linear behavior near input examples. Applying mixup in training provides an effective mechanism to improve generalization performance and model robustness against adversarial perturbations by introducing globally linear behavior in between training examples. However, in previous work, mixup-trained models only passively defend against adversarial attacks at inference by directly classifying the inputs, so the induced global linearity is not well exploited. Namely, given the locality of the adversarial perturbations, it would be more efficient to actively break the locality via the globality of the model predictions. Inspired by simple geometric intuition, we develop an inference principle, named mixup inference (MI), for mixup-trained models. MI mixes the input with other random clean samples, which can shrink and transfer the equivalent perturbation if the input is adversarial. Our experiments on CIFAR-10 and CIFAR-100 demonstrate that MI can further improve the adversarial robustness of models trained with mixup and its variants.
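A minimal sketch of the inference-time procedure described above: mix the (possibly adversarial) input with random clean samples and aggregate the predictions. The mixing weight lam, the number of draws k, and the averaging of probabilities are illustrative assumptions, not the paper's exact settings:

```python
import torch

def mixup_inference(model, x, clean_pool, lam=0.5, k=10):
    # Mix the input with k random clean samples and average the predictions;
    # mixing shrinks any adversarial perturbation carried by x.
    preds = []
    for _ in range(k):
        j = torch.randint(len(clean_pool), (1,)).item()
        preds.append(model(lam * x + (1 - lam) * clean_pool[j]).softmax(dim=-1))
    return torch.stack(preds).mean(dim=0)
```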
We exploit the global linearity of the mixup-trained models in inference to break the locality of the adversarial perturbations.
1,820
BERT Goes to Law School: Quantifying the Competitive Advantage of Access to Large Legal Corpora in Contract Understanding
Fine-tuning language models, such as BERT, on domain-specific corpora has proven to be valuable in domains like scientific papers and biomedical text. In this paper, we show that fine-tuning BERT on legal documents similarly provides valuable improvements on NLP tasks in the legal domain. Demonstrating this outcome is significant for analyzing commercial agreements, because obtaining large legal corpora is challenging due to their confidential nature. As such, we show that having access to large legal corpora is a competitive advantage for commercial applications, and academic research on analyzing contracts.
Fine-tuning BERT on legal corpora provides marginal, but valuable, improvements on NLP tasks in the legal domain.
1,821
A Probabilistic Formulation of Unsupervised Text Style Transfer
We present a deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques. Our probabilistic approach models non-parallel data from two domains as a partially observed parallel corpus. By hypothesizing a parallel latent sequence that generates each observed sequence, our model learns to transform sequences from one domain to another in a completely unsupervised fashion. In contrast with traditional generative sequence models, our model makes few assumptions about the data it generates: it uses a recurrent language model as a prior and an encoder-decoder as a transduction distribution. While computation of marginal data likelihood is intractable in this model class, we show that amortized variational inference admits a practical surrogate. Further, by drawing connections between our variational objective and other recent unsupervised style transfer and machine translation techniques, we show how our probabilistic view can unify some known non-generative objectives such as backtranslation and adversarial loss. Finally, we demonstrate the effectiveness of our method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related language translation. Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes. Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art.
We formulate a probabilistic latent sequence model to tackle unsupervised text style transfer, and show its effectiveness across a suite of unsupervised text style transfer tasks.
1,822
Compressed Sensing and Overparametrized Networks: Overfitting Peaks in a Model of Misparametrized Sparse Regression in the Interpolation Limit
Current practice in machine learning is to employ deep nets in an overparametrized limit, with the nominal number of parameters typically exceeding the number of measurements. This resembles the situation in compressed sensing, or in sparse regression with penalty terms, and provides a theoretical avenue for understanding phenomena that arise in the context of deep nets. One such phenomenon is the success of deep nets in providing good generalization in an interpolating regime with zero training error. Traditional statistical practice calls for regularization or smoothing to prevent "overfitting". However, recent work shows that there exist data interpolation procedures which are statistically consistent and provide good generalization performance. In this context, it has been suggested that "classical" and "modern" regimes for machine learning are separated by a peak in the generalization error (GE) curve, a phenomenon dubbed "double descent". While such overfitting peaks do exist and arise from ill-conditioned design matrices, here we challenge the interpretation of the overfitting peak as demarcating the regime where good generalization occurs under overparametrization. We propose a model of Misparametrized Sparse Regression and analytically compute the GE curves for L_1 and L_2 penalties. We show that the overfitting peak arising in the interpolation limit is dissociated from the regime of good generalization. The analytical expressions are obtained in the so-called "thermodynamic" limit. We find an additional interesting phenomenon: increasing overparametrization in the fitting model increases sparsity, which should intuitively improve performance of L_1-penalized regression. However, at the same time, the relative number of measurements decreases compared to the number of fitting parameters, and eventually overparametrization does lead to poor generalization. Nevertheless, L_1-penalized regression can show good generalization performance under conditions of data interpolation even with a large amount of overparametrization. These results provide a theoretical avenue into studying inverse problems in the interpolating regime using overparametrized fitting functions such as deep nets.
Proposes an analytically tractable model and inference procedure (misparametrized sparse regression, inferred using L_1 penalty and studied in the data-interpolation limit) to study deep-net related phenomena in the context of inverse problems.
1,823
Variational Hashing-based Collaborative Filtering with Self-Masking
Hashing-based collaborative filtering learns binary vector representations of users and items, such that recommendations can be computed very efficiently using the Hamming distance, which is simply the sum of differing bits between two hash codes. A problem with hashing-based collaborative filtering using the Hamming distance is that each bit is equally weighted in the distance computation, but in practice some bits might encode more important properties than other bits, where the importance depends on the user. To this end, we propose an end-to-end trainable variational hashing-based collaborative filtering approach that uses the novel concept of self-masking: the user hash code acts as a mask on the items, such that it learns to encode which bits are important to the user, rather than the user's preference towards the underlying item property that the bits represent. This allows a binary user-level importance weighting of each item without the need to store additional weights for each user. We experimentally evaluate our approach against state-of-the-art baselines on 4 datasets, and obtain significant gains of up to 12% in NDCG. We also make available an efficient implementation of self-masking, which experimentally yields <4% runtime overhead compared to the standard Hamming distance.
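To make the distance computation concrete: the standard Hamming distance is a popcount of the XOR of two codes, and self-masking can be read as additionally AND-ing with the user code so that only user-marked bits contribute. The masked variant below is a hedged reading of the abstract, not necessarily the paper's exact definition:

```python
def hamming(u: int, v: int) -> int:
    # Standard Hamming distance: number of differing bits between two codes.
    return bin(u ^ v).count("1")

def self_masked_hamming(user: int, item: int) -> int:
    # Hedged sketch: the user code doubles as a bit mask, so only bits the
    # user code marks as important enter the distance computation.
    return bin(user & (user ^ item)).count("1")
```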
We propose a new variational hashing-based collaborative filtering approach optimized for a novel self-mask variant of the Hamming distance, which outperforms state-of-the-art by up to 12% on NDCG.
1,824
A Resizable Mini-batch Gradient Descent based on a Multi-Armed Bandit
Determining the appropriate batch size for mini-batch gradient descent is always time-consuming, as it often relies on grid search. This paper considers a resizable mini-batch gradient descent (RMGD) algorithm based on a multi-armed bandit that achieves performance equivalent to that of the best fixed batch size. At each epoch, the RMGD samples a batch size according to a certain probability distribution proportional to how successful a batch size has been in reducing the loss function. Sampling from this probability provides a mechanism for exploring different batch sizes and exploiting batch sizes with a history of success. After obtaining the validation loss at each epoch with the sampled batch size, the probability distribution is updated to incorporate the effectiveness of the sampled batch size. Experimental results show that the RMGD achieves performance better than the best performing single batch size. It is surprising that the RMGD achieves better performance than grid search. Furthermore, it attains this performance in a shorter amount of time than grid search.
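A minimal sketch of such a bandit over candidate batch sizes, using an EXP3-style multiplicative-weights update on a per-epoch reward; the candidate sizes, reward definition, and learning rate eta are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

sizes = [32, 64, 128, 256]   # candidate batch sizes (illustrative)
w = np.ones(len(sizes))      # one weight per arm
rng = np.random.default_rng(0)

def sample_size():
    p = w / w.sum()
    k = rng.choice(len(sizes), p=p)
    return k, sizes[k], p[k]

def update(k, p_k, reward, eta=0.1):
    # reward could be 1.0 if the validation loss decreased this epoch, else 0.0
    w[k] *= np.exp(eta * reward / p_k)
```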
An optimization algorithm that probabilistically explores various batch sizes and automatically exploits successful batch sizes that minimize validation loss.
1,825
NoiGAN: NOISE AWARE KNOWLEDGE GRAPH EMBEDDING WITH GAN
Knowledge graphs have gained increasing attention in recent years for their successful application to numerous tasks. Despite the rapid growth of knowledge construction, knowledge graphs still suffer from severe incompleteness and inevitably involve various kinds of errors. Several attempts have been made to complete knowledge graphs as well as to detect noise. However, none of them considers unifying these two tasks, even though they are inter-dependent and can mutually boost each other's performance. In this paper, we propose to jointly combine these two tasks with a unified Generative Adversarial Network (GAN) framework to learn noise-aware knowledge graph embedding. Extensive experiments have demonstrated that our approach is superior to existing state-of-the-art algorithms, both in regard to knowledge graph completion and error detection.
We propose a unified Generative Adversarial Network (GAN) framework to learn noise-aware knowledge graph embedding.
1,826
Residual EBMs: Does Real vs. Fake Text Discrimination Generalize?
Energy-based models (EBMs), a.k.a. un-normalized models, have had recent successes in continuous spaces. However, they have not been successfully applied to model text sequences. While decreasing the energy at training samples is straightforward, mining samples where the energy should be increased is difficult. In part, this is because standard gradient-based methods are not readily applicable when the input is high-dimensional and discrete. Here, we side-step this issue by generating negatives using pre-trained auto-regressive language models. The EBM then works in the residual of the language model, and is trained to discriminate real text from text generated by the auto-regressive models. We investigate the generalization ability of residual EBMs, a prerequisite for using them in other applications. We extensively analyze generalization for the task of classifying whether an input is machine or human generated, a natural task given the training loss and how we mine negatives. Overall, we observe that EBMs can generalize remarkably well to changes in the architecture of the generators producing negatives. However, EBMs exhibit more sensitivity to the training set used by such generators.
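In the standard residual formulation, the EBM does not model text from scratch; it reweights the pretrained language model's distribution:

```latex
p_\theta(x) \;\propto\; p_{\mathrm{LM}}(x)\, e^{-E_\theta(x)}
```

Training the energy E_theta to be low on real text and high on generated text is then exactly a real-vs-fake discrimination task, which is why the paper frames generalization as generalization of that discriminator.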
A residual EBM for text whose formulation is equivalent to discriminating between human and machine generated text. We study its generalization behavior.
1,827
Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning
Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances in learning from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias, and demonstrate that mixup augmentation and setting a minimum number of labeled samples per mini-batch are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results in CIFAR-10/100 and Mini-ImageNet, despite being much simpler than other state-of-the-art methods. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, while the opposite was supposed in previous work. Code will be made available.
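The mixup regularizer extends naturally to soft pseudo-labels, since they mix linearly just like one-hot labels. A minimal sketch of the mixed soft cross-entropy loss (alpha and the pairing of samples are illustrative assumptions):

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_pseudo_label_loss(model, x1, y1, x2, y2, alpha=1.0):
    # y1, y2 are soft (pseudo-)label distributions over classes.
    lam = np.random.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2            # soft labels mix linearly
    log_p = F.log_softmax(model(x), dim=-1)
    return -(y * log_p).sum(dim=-1).mean()   # soft cross-entropy
```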
Pseudo-labeling has been shown to be a weak alternative for semi-supervised learning. We, conversely, demonstrate that dealing with confirmation bias through several regularizations makes pseudo-labeling a suitable approach.
1,828
Temporal Difference Models: Model-Free Deep RL for Model-Based Control
Model-free reinforcement learning has been proven to be a powerful, general tool for learning complex behaviors. However, its sample complexity is often impractically large for solving challenging real-world problems, even for off-policy algorithms such as Q-learning. A limiting factor in classic model-free RL is that the learning signal consists only of scalar rewards, ignoring much of the rich information contained in state transition tuples. Model-based RL uses this information by training a predictive model, but often does not achieve the same asymptotic performance as model-free RL due to model bias. We introduce temporal difference models (TDMs), a family of goal-conditioned value functions that can be trained with model-free learning and used for model-based control. TDMs combine the benefits of model-free and model-based RL: they leverage the rich information in state transitions to learn very efficiently, while still attaining asymptotic performance that exceeds that of direct model-based RL methods. Our experimental results show that, on a range of continuous control tasks, TDMs provide a substantial improvement in efficiency compared to state-of-the-art model-based and model-free methods.
We show that a special goal-conditioned value function trained with model-free methods can be used within model-based control, resulting in substantially better sample efficiency and performance.
1,829
Neural Permutation Processes
We introduce a neural architecture to perform amortized approximate Bayesian inference over latent random permutations of two sets of objects. The method involves approximating permanents of matrices of pairwise probabilities using recent ideas on functions defined over sets. Each sampled permutation comes with a probability estimate, a quantity unavailable in MCMC approaches. We illustrate the method in sets of 2D points and MNIST images.
A novel neural architecture for efficient amortized inference over latent permutations
1,830
Improving Relevance Prediction with Transfer Learning in Large-scale Retrieval Systems
Machine-learned large-scale retrieval systems require a large amount of training data representing query-item relevance. However, collecting users' explicit feedback is costly. In this paper, we propose to leverage user logs and implicit feedback as auxiliary objectives to improve relevance modeling in retrieval systems. Specifically, we adopt a two-tower neural net architecture to model query-item relevance given both collaborative and content information. By introducing auxiliary tasks trained with much richer implicit user feedback data, we improve the quality and resolution of the learned representations of queries and items. Applying these learned representations to an industrial retrieval system has delivered significant improvements.
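For concreteness, a minimal PyTorch sketch of a two-tower relevance model: each tower embeds its side into a shared space, and relevance is the dot product of the two embeddings. Layer sizes and feature dimensions are illustrative, and the shared-bottom auxiliary heads mentioned in the summary are omitted:

```python
import torch
import torch.nn as nn

class TwoTower(nn.Module):
    def __init__(self, q_dim: int, i_dim: int, d: int = 64):
        super().__init__()
        self.q_tower = nn.Sequential(nn.Linear(q_dim, 128), nn.ReLU(), nn.Linear(128, d))
        self.i_tower = nn.Sequential(nn.Linear(i_dim, 128), nn.ReLU(), nn.Linear(128, d))

    def forward(self, q_feats: torch.Tensor, i_feats: torch.Tensor) -> torch.Tensor:
        # Relevance score: dot product of query and item embeddings.
        return (self.q_tower(q_feats) * self.i_tower(i_feats)).sum(dim=-1)
```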
We propose a novel two-tower, shared-bottom model architecture for transferring knowledge from rich implicit feedback to predict relevance in large-scale retrieval systems.
1,831
Learning to Move with Affordance Maps
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent, from household robotic vacuums to autonomous vehicles. Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry, but fail to model dynamic objects or semantic constraints. Learning-based RL agents are an attractive alternative because they can incorporate both semantic and geometric information, but are notoriously sample inefficient, difficult to generalize to novel settings, and difficult to interpret. In this paper, we combine the best of both worlds with a modular approach that learns a spatial representation of a scene that is trained to be effective when coupled with traditional geometric planners. Specifically, we design an agent that learns to predict a spatial affordance map that elucidates what parts of a scene are navigable through active self-supervised experience gathering. In contrast to most simulation environments that assume a static world, we evaluate our approach in the VizDoom simulator, using large-scale randomly-generated maps containing a variety of dynamic actors and hazards. We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
We address the task of autonomous exploration and navigation using spatial affordance maps that can be learned in a self-supervised manner; these outperform classic geometric baselines while being more sample efficient than contemporary RL algorithms.