source: sequence
source_labels: sequence
rouge_scores: sequence
paper_id: string (9–11 characters)
target: sequence
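Each record below pairs the sentences of a paper abstract (source) with a per-sentence binary label (source_labels), a per-sentence ROUGE score against the target summary (rouge_scores), a paper_id string, and a one-sentence target. Below is a minimal sketch of how such records could be loaded and sanity-checked, assuming the data is stored as JSON lines with the field names above; the file name data.jsonl and the helper names are placeholders, not part of the dataset.

```python
import json

def load_records(path):
    """Yield one record per line from a JSON-lines file with the fields listed above."""
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

def label_matches_best_rouge(rec):
    """Return True if the sentence flagged 1 in source_labels is also the sentence
    with the highest rouge_scores entry; this appears to hold for the records shown
    here, but it is an assumption rather than a documented guarantee."""
    scores = rec["rouge_scores"]
    labels = rec["source_labels"]
    if 1 not in labels or not scores:
        return False
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels.index(1) == best

if __name__ == "__main__":
    # "data.jsonl" is a hypothetical file name; substitute the actual dataset file.
    for rec in load_records("data.jsonl"):
        print(rec["paper_id"], label_matches_best_rouge(rec))
```

In the examples shown below, the sentence flagged 1 coincides with the highest-scoring sentence, so the check should pass for those records.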
[ "Generative Adversarial Networks (GANs) have proven to be a powerful framework for learning to draw samples from complex distributions.", "However, GANs are also notoriously difficult to train, with mode collapse and oscillations a common problem.", "We hypothesize that this is at least in part due to the evolution of the generator distribution and the catastrophic forgetting tendency of neural networks, which leads to the discriminator losing the ability to remember synthesized samples from previous instantiations of the generator.", "Recognizing this, our contributions are twofold.", "First, we show that GAN training makes for a more interesting and realistic benchmark for continual learning methods evaluation than some of the more canonical datasets.", "Second, we propose leveraging continual learning techniques to augment the discriminator, preserving its ability to recognize previous generator samples.", "We show that the resulting methods add only a light amount of computation, involve minimal changes to the model, and result in better overall performance on the examined image and text generation tasks." ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.2222222238779068, 0.07999999821186066, 0.0476190447807312, 0, 0.060606054961681366, 0, 0.05128204822540283 ]
SJzuHiA9tQ
[ "Generative Adversarial Network Training is a Continual Learning Problem." ]
[ "Many problems with large-scale labeled training data have been impressively solved by deep learning.", "However, Unseen Class Categorization (UCC) with minimal information provided about target classes is the most commonly encountered setting in industry, which remains a challenging research problem in machine learning.", "Previous approaches to UCC either fail to generate a powerful discriminative feature extractor or fail to learn a flexible classifier that can be easily adapted to unseen classes.", "In this paper, we propose to address these issues through network reparameterization, \\textit{i.e.}, reparametrizing the learnable weights of a network as a function of other variables, by which we decouple the feature extraction part and the classification part of a deep classification model to suit the special setting of UCC, securing both strong discriminability and excellent adaptability.", "Extensive experiments for UCC on several widely-used benchmark datasets in the settings of zero-shot and few-shot learning demonstrate that, our method with network reparameterization achieves state-of-the-art performance." ]
[ 0, 0, 0, 0, 1 ]
[ 0.07407406717538834, 0.04878048226237297, 0, 0.10344827175140381, 0.4000000059604645 ]
rJeyV2AcKX
[ "A unified frame for both few-shot learning and zero-shot learning based on network reparameterization" ]
[ "Proteins are ubiquitous molecules whose function in biological processes is determined by their 3D structure.\n", "Experimental identification of a protein's structure can be time-consuming, prohibitively expensive, and not always possible. \n", "Alternatively, protein folding can be modeled using computational methods, which however are not guaranteed to always produce optimal results.\n", "GraphQA is a graph-based method to estimate the quality of protein models, that possesses favorable properties such as representation learning, explicit modeling of both sequential and 3D structure, geometric invariance and computational efficiency. \n", "In this work, we demonstrate significant improvements of the state-of-the-art for both hand-engineered and representation-learning approaches, as well as carefully evaluating the individual contributions of GraphQA." ]
[ 0, 0, 0, 1, 0 ]
[ 0.05882352590560913, 0.11764705181121826, 0.052631575614213943, 0.4000000059604645, 0.39024388790130615 ]
HyxgBerKwB
[ "GraphQA is a graph-based method for protein Quality Assessment that improves the state-of-the-art for both hand-engineered and representation-learning approaches" ]
[ "We study the problem of training machine learning models incrementally using active learning with access to imperfect or noisy oracles.", "We specifically consider the setting of batch active learning, in which multiple samples are selected as opposed to a single sample as in classical settings so as to reduce the training overhead.", "Our approach bridges between uniform randomness and score based importance sampling of clusters when selecting a batch of new samples.", "Experiments on\n", "benchmark image classification datasets (MNIST, SVHN, and CIFAR10) shows improvement over existing active learning strategies.", "We introduce an extra denoising layer to deep networks to make active learning robust to label noises and show significant improvements.\n" ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.4285714328289032, 0.3199999928474426, 0.1428571343421936, 0.15789473056793213, 0.23255813121795654 ]
SJxIkkSKwB
[ "We address the active learning in batch setting with noisy oracles and use model uncertainty to encode the decision quality of active learning algorithm during acquisition." ]
[ "Artistic style transfer is the problem of synthesizing an image with content similar to a given image and style similar to another.", "Although recent feed-forward neural networks can generate stylized images in real-time, these models produce a single stylization given a pair of style/content images, and the user doesn't have control over the synthesized output.", "Moreover, the style transfer depends on the hyper-parameters of the model with varying ``optimum\" for different input images.", "Therefore, if the stylized output is not appealing to the user, she/he has to try multiple models or retrain one with different hyper-parameters to get a favorite stylization.", "In this paper, we address these issues by proposing a novel method which allows adjustment of crucial hyper-parameters, after the training and in real-time, through a set of manually adjustable parameters.", "These parameters enable the user to modify the synthesized outputs from the same pair of style/content images, in search of a favorite stylized image.", "Our quantitative and qualitative experiments indicate how adjusting these parameters is comparable to retraining the model with different hyper-parameters.", "We also demonstrate how these parameters can be randomized to generate results which are diverse but still very similar in style and content." ]
[ 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.23999999463558197, 0, 0.260869562625885, 0.0624999962747097, 0.0555555522441864, 0, 0.07692307233810425, 0.06666666269302368 ]
HJg4E8IFdE
[ "Stochastic style transfer with adjustable features. " ]
[ "Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards.", "However, this places on environment designers the onus of designing language-conditional reward functions which may not be easily or tractably implemented as the complexity of the environment and the language scales.", "To overcome this limitation, we present a framework within which instruction-conditional RL agents are trained using rewards obtained not from the environment, but from reward models which are jointly trained from expert examples. ", "As reward models improve, they learn to accurately reward agents for completing tasks for environment configurations---and for instructions---not present amongst the expert data.", "This framework effectively separates the representation of what instructions require from how they can be executed.\n", "In a simple grid world, it enables an agent to learn a range of commands requiring interaction with blocks and understanding of spatial relations and underspecified abstract arrangements.", "We further show the method allows our agent to adapt to changes in the environment without requiring new expert examples." ]
[ 0, 0, 0, 0, 1, 0, 0 ]
[ 0.23529411852359772, 0.0476190410554409, 0.2222222238779068, 0.1666666567325592, 0.24242423474788666, 0.1463414579629898, 0.1764705777168274 ]
H1xsSjC9Ym
[ "We propose AGILE, a framework for training agents to perform instructions from examples of respective goal-states." ]
[ "We present Multitask Soft Option Learning (MSOL), a hierarchical multi-task framework based on Planning-as-Inference.", "MSOL extends the concept of Options, using separate variational posteriors for each task, regularized by a shared prior.", "The learned soft-options are temporally extended, allowing a higher-level master policy to train faster on new tasks by making decisions with lower frequency.", "Additionally, MSOL allows fine-tuning of soft-options for new tasks without unlearning previously useful behavior, and avoids problems with local minima in multitask training.", "We demonstrate empirically that MSOL significantly outperforms both hierarchical and flat transfer-learning baselines in challenging multi-task environments." ]
[ 0, 0, 0, 1, 0 ]
[ 0.05405404791235924, 0.1463414579629898, 0.04347825422883034, 0.17391303181648254, 0.14999999105930328 ]
BkeDGJBKvB
[ "In Hierarchical RL, we introduce the notion of a 'soft', i.e. adaptable, option and show that this helps learning in multitask settings." ]
[ "We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning.", "The learning objective is achieved by providing signal to the latent encoding/embedding in VAE without changing its main backbone architecture, hence retaining the desirable properties of the VAE.", "We design an unsupervised and a supervised strategy in Guided-VAE and observe enhanced modeling and controlling capability over the vanilla VAE.", "In the unsupervised strategy, we guide the VAE learning by introducing a lightweight decoder that learns latent geometric transformation and principal components; in the supervised strategy, we use an adversarial excitation and inhibition mechanism to encourage the disentanglement of the latent variables.", "Guided-VAE enjoys its transparency and simplicity for the general representation learning task, as well as disentanglement learning.", "On a number of experiments for representation learning, improved synthesis/sampling, better disentanglement for classification, and reduced classification errors in meta learning have been observed." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.5882353186607361, 0.1666666567325592, 0.06666666269302368, 0.2222222238779068, 0.23076923191547394, 0.23529411852359772 ]
SygaYANFPr
[ "Learning a controllable generative model by performing latent representation disentanglement learning." ]
[ "Neural language models (NLMs) are generative, and they model the distribution of grammatical sentences.", "Trained on huge corpus, NLMs are pushing the limit of modeling accuracy.", "Besides, they have also been applied to supervised learning tasks that decode text, e.g., automatic speech recognition (ASR).", "By re-scoring the n-best list, NLM can select grammatically more correct candidate among the list, and significantly reduce word/char error rate.", "However, the generative nature of NLM may not guarantee a discrimination between “good” and “bad” (in a task-specific sense) sentences, resulting in suboptimal performance.", "This work proposes an approach to adapt a generative NLM to a discriminative one.", "Different from the commonly used maximum likelihood objective, the proposed method aims at enlarging the margin between the “good” and “bad” sentences.", "It is trained end-to-end and can be widely applied to tasks that involve the re-scoring of the decoded text.", "Significant gains are observed in both ASR and statistical machine translation (SMT) tasks." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.27272728085517883, 0.09999999403953552, 0.1428571343421936, 0.07407406717538834, 0.06451612710952759, 0, 0.07407406717538834, 0.07692307233810425, 0 ]
H1g-gk5EuQ
[ "Enhance the language model for supervised learning task " ]
[ "Conventionally, convolutional neural networks (CNNs) process different images with the same set of filters.", "However, the variations in images pose a challenge to this fashion.", "In this paper, we propose to generate sample-specific filters for convolutional layers in the forward pass.", "Since the filters are generated on-the-fly, the model becomes more flexible and can better fit the training data compared to traditional CNNs.", "In order to obtain sample-specific features, we extract the intermediate feature maps from an autoencoder.", "As filters are usually high dimensional, we propose to learn a set of coefficients instead of a set of filters.", "These coefficients are used to linearly combine the base filters from a filter repository to generate the final filters for a CNN.", "The proposed method is evaluated on MNIST, MTFL and CIFAR10 datasets.", "Experiment results demonstrate that the classification accuracy of the baseline model can be improved by using the proposed filter generation method." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.1428571343421936, 0.1599999964237213, 0.46666666865348816, 0.1764705777168274, 0.06896550953388214, 0.06896550953388214, 0.25, 0.07999999821186066, 0.060606054961681366 ]
rJa90ceAb
[ "dynamically generate filters conditioned on the input image for CNNs in each forward pass " ]
[ "We propose a new anytime neural network which allows partial evaluation by subnetworks with different widths as well as depths.", "Compared to conventional anytime networks only with the depth controllability, the increased architectural diversity leads to higher resource utilization and consequent performance improvement under various and dynamic resource budgets.", "We highlight architectural features to make our scheme feasible as well as efficient, and show its effectiveness in image classification tasks." ]
[ 1, 0, 0 ]
[ 1, 0.09090908616781235, 0.1538461446762085 ]
SygAlRLvoX
[ "We propose a new anytime neural network which allows partial evaluation by subnetworks with different widths as well as depths." ]
[ "We propose a new model for making generalizable and diverse retrosynthetic reaction predictions.", "Given a target compound, the task is to predict the likely chemical reactants to produce the target.", "This generative task can be framed as a sequence-to-sequence problem by using the SMILES representations of the molecules.", "Building on top of the popular Transformer architecture, we propose two novel pre-training methods that construct relevant auxiliary tasks (plausible reactions) for our problem.", "Furthermore, we incorporate a discrete latent variable model into the architecture to encourage the model to produce a diverse set of alternative predictions.", "On the 50k subset of reaction examples from the United States patent literature (USPTO-50k) benchmark dataset, our model greatly improves performance over the baseline, while also generating predictions that are more diverse." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 1, 0.07692307233810425, 0.06666666269302368, 0.10810810327529907, 0.25, 0.1860465109348297 ]
BygfrANKvB
[ "We propose a new model for making generalizable and diverse retrosynthetic reaction predictions." ]
[ "Unsupervised learning of disentangled representations is an open problem in machine learning.", "The Disentanglement-PyTorch library is developed to facilitate research, implementation, and testing of new variational algorithms.", "In this modular library, neural architectures, dimensionality of the latent space, and the training algorithms are fully decoupled, allowing for independent and consistent experiments across variational methods.", "The library handles the training scheduling, logging, and visualizations of reconstructions and latent space traversals.", "It also evaluates the encodings based on various disentanglement metrics.", "The library, so far, includes implementations of the following unsupervised algorithms VAE, Beta-VAE, Factor-VAE, DIP-I-VAE, DIP-II-VAE, Info-VAE, and Beta-TCVAE, as well as conditional approaches such as CVAE and IFCVAE.", "The library is compatible with the Disentanglement Challenge of NeurIPS 2019, hosted on AICrowd and was used to compete in the first and second stages of the challenge, where it was ranked among the best few participants." ]
[ 0, 1, 0, 0, 0, 0, 0 ]
[ 0.21052631735801697, 0.3478260934352875, 0.12121211737394333, 0.09090908616781235, 0, 0, 0.10256409645080566 ]
rJgUsFYnir
[ "Disentanglement-PyTorch is a library for variational representation learning" ]
[ "Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL).", "While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment.", "To address this problem, we propose an off-policy trust region method, Trust-PCL, which exploits an observation that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path.", "The introduction of relative entropy regularization allows Trust-PCL to maintain optimization stability while exploiting off-policy data to improve sample efficiency.", "When evaluated on a number of continuous control tasks, Trust-PCL significantly improves the solution quality and sample efficiency of TRPO." ]
[ 1, 0, 0, 0, 0 ]
[ 0.1249999925494194, 0.0555555522441864, 0, 0.060606054961681366, 0.12121211737394333 ]
HyrCWeWCb
[ "We extend recent insights related to softmax consistency to achieve state-of-the-art results in continuous control." ]
[ "Deep reinforcement learning has achieved many recent successes, but our understanding of its strengths and limitations is hampered by the lack of rich environments in which we can fully characterize optimal behavior, and correspondingly diagnose individual actions against such a characterization. \n\n", "Here we consider a family of combinatorial games, arising from work of Erdos, Selfridge, and Spencer, and we propose their use as environments for evaluating and comparing different approaches to reinforcement learning.", "These games have a number of appealing features: they are challenging for current learning approaches, but they form", "(i) a low-dimensional, simply parametrized environment where", "(ii) there is a linear closed form solution for optimal behavior from any state, and", "(iii) the difficulty of the game can be tuned by changing environment parameters in an interpretable way.", "We use these Erdos-Selfridge-Spencer games not only to compare different algorithms, but also to compare approaches based on supervised and reinforcement learning, to analyze the power of multi-agent approaches in improving performance, and to evaluate generalization to environments outside the training set." ]
[ 0, 1, 0, 0, 0, 0, 0 ]
[ 0.23999999463558197, 0.2857142686843872, 0.19230768084526062, 0.0952380895614624, 0.20000000298023224, 0.19607841968536377, 0.260869562625885 ]
HkCnm-bAb
[ "We adapt a family of combinatorial games with tunable difficulty and an optimal policy expressible as linear network, developing it as a rich environment for reinforcement learning, showing contrasts in performance with supervised learning, and analyzing multiagent learning and generalization. " ]
[ "Adoption of deep learning in safety-critical systems raise the need for understanding what deep neural networks do not understand.", "Several methodologies to estimate model uncertainty have been proposed, but these methodologies constrain either how the neural network is trained or constructed.", "We present Outlier Detection In Neural networks (ODIN), an assumption-free method for detecting outlier observations during prediction, based on principles widely used in manufacturing process monitoring.", "By using a linear approximation of the hidden layer manifold, we add prediction-time outlier detection to models after training without altering architecture or training.", "We demonstrate that ODIN efficiently detect outliers during prediction on Fashion-MNIST, ImageNet-synsets and speech command recognition." ]
[ 0, 0, 0, 0, 1 ]
[ 0.20689654350280762, 0.0624999962747097, 0.1621621549129486, 0.11764705181121826, 0.2222222238779068 ]
rkGqLoR5tX
[ "An add-on method for deep learning to detect outliers during prediction-time" ]
[ "This work introduces a simple network for producing character aware word embeddings.", "Position agnostic and position aware character embeddings are combined to produce an embedding vector for each word.", "The learned word representations are shown to be very sparse and facilitate improved results on language modeling tasks, despite using markedly fewer parameters, and without the need to apply dropout.", "A final experiment suggests that weight sharing contributes to sparsity, increases performance, and prevents overfitting." ]
[ 0, 1, 0, 0 ]
[ 0.1818181723356247, 0.31578946113586426, 0.16326530277729034, 0.1666666567325592 ]
rJ8rHkWRb
[ "A fully connected architecture is used to produce word embeddings from character representations, outperforms traditional embeddings and provides insight into sparsity and dropout." ]
[ "Neural networks with low-precision weights and activations offer compelling\n", "efficiency advantages over their full-precision equivalents.", "The two most\n", "frequently discussed benefits of quantization are reduced memory consumption,\n", "and a faster forward pass when implemented with efficient bitwise\n", "operations.", "We propose a third benefit of very low-precision neural networks:\n", "improved robustness against some adversarial attacks, and in the worst case,\n", "performance that is on par with full-precision models.", "We focus on the very\n", "low-precision case where weights and activations are both quantized to $\\pm$1,\n", "and note that stochastically quantizing weights in just one layer can sharply\n", "reduce the impact of iterative attacks.", "We observe that non-scaled binary neural\n", "networks exhibit a similar effect to the original \\emph{defensive distillation}\n", "procedure that led to \\emph{gradient masking}, and a false notion of security.\n", "We address this by conducting both black-box and white-box experiments with\n", "binary models that do not artificially mask gradients." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11428570747375488, 0, 0, 0.05714285373687744, 0.1111111044883728, 0.2222222238779068, 0.3243243098258972, 0.05882352590560913, 0.12903225421905518, 0.054054051637649536, 0.15789473056793213, 0.3125, 0.1875, 0.1666666567325592, 0.20512820780277252, 0.1621621549129486, 0.05882352590560913 ]
HkTEFfZRb
[ "We conduct adversarial attacks against binarized neural networks and show that we reduce the impact of the strongest attacks, while maintaining comparable accuracy in a black-box setting" ]
[ "Unsupervised bilingual dictionary induction (UBDI) is useful for unsupervised machine translation and for cross-lingual transfer of models into low-resource languages.", "One approach to UBDI is to align word vector spaces in different languages using Generative adversarial networks (GANs) with linear generators, achieving state-of-the-art performance for several language pairs.", "For some pairs, however, GAN-based induction is unstable or completely fails to align the vector spaces.", "We focus on cases where linear transformations provably exist, but the performance of GAN-based UBDI depends heavily on the model initialization.", "We show that the instability depends on the shape and density of the vector sets, but not on noise; it is the result of local optima, but neither over-parameterization nor changing the batch size or the learning rate consistently reduces instability.", "Nevertheless, we can stabilize GAN-based UBDI through best-of-N model selection, based on an unsupervised stopping criterion." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.09999999403953552, 0.1666666567325592, 0.21621620655059814, 0.44999998807907104, 0.18867923319339752, 0.10810810327529907 ]
SJxbps09K7
[ "An empirical investigation of GAN-based alignment of word vector spaces, focusing on cases, where linear transformations provably exist, but training is unstable." ]
[ "Owing to the ubiquity of computer software, software vulnerability detection (SVD) has become an important problem in the software industry and in the field of computer security.", "One of the most crucial issues in SVD is coping with the scarcity of labeled vulnerabilities in projects that require the laborious manual labeling of code by software security experts.", "One possible way to address is to employ deep domain adaptation which has recently witnessed enormous success in transferring learning from structural labeled to unlabeled data sources.", "The general idea is to map both source and target data into a joint feature space and close the discrepancy gap of those data in this joint feature space.", "Generative adversarial network (GAN) is a technique that attempts to bridge the discrepancy gap and also emerges as a building block to develop deep domain adaptation approaches with state-of-the-art performance.", "However, deep domain adaptation approaches using the GAN principle to close the discrepancy gap are subject to the mode collapsing problem that negatively impacts the predictive performance.", "Our aim in this paper is to propose Dual Generator-Discriminator Deep Code Domain Adaptation Network (Dual-GD-DDAN) for tackling the problem of transfer learning from labeled to unlabeled software projects in the context of SVD in order to resolve the mode collapsing problem faced in previous approaches.", "The experimental results on real-world software projects show that our proposed method outperforms state-of-the-art baselines by a wide margin." ]
[ 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.22641508281230927, 0.28070175647735596, 0.24561403691768646, 0.25, 0.1666666567325592, 0.2181818187236786, 0.8405796885490417, 0.11764705181121826 ]
BkglepEFDS
[ "Our aim in this paper is to propose a new approach for tackling the problem of transfer learning from labeled to unlabeled software projects in the context of SVD in order to resolve the mode collapsing problem faced in previous approaches." ]
[ "Representation learning is one of the foundations of Deep Learning and allowed important improvements on several Machine Learning tasks, such as Neural Machine Translation, Question Answering and Speech Recognition.", "Recent works have proposed new methods for learning representations for nodes and edges in graphs.", "Several of these methods are based on the SkipGram algorithm, and they usually process a large number of multi-hop neighbors in order to produce the context from which node representations are learned.", "In this paper, we propose an effective and also efficient method for generating node embeddings in graphs that employs a restricted number of permutations over the immediate neighborhood of a node as context to generate its representation, thus ego-centric representations.", "We present a thorough evaluation showing that our method outperforms state-of-the-art methods in six different datasets related to the problems of link prediction and node classification, being one to three orders of magnitude faster than baselines when generating node embeddings for very large graphs." ]
[ 0, 0, 0, 1, 0 ]
[ 0.0833333283662796, 0.05405404791235924, 0.23076923191547394, 0.6333333253860474, 0.3125 ]
SJyfrl-0b
[ "A faster method for generating node embeddings that employs a number of permutations over a node's immediate neighborhood as context to generate its representation." ]
[ "Orthogonal recurrent neural networks address the vanishing gradient problem by parameterizing the recurrent connections using an orthogonal matrix.", "This class of models is particularly effective to solve tasks that require the memorization of long sequences.", "We propose an alternative solution based on explicit memorization using linear autoencoders for sequences.", "We show how a recently proposed recurrent architecture, the Linear Memory Network, composed of a nonlinear feedforward layer and a separate linear recurrence, can be used to solve hard memorization tasks.", "We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution.", "The initialization schema can be easily adapted to any recurrent architecture.\n ", "We argue that this approach is superior to a random orthogonal initialization due to the autoencoder, which allows the memorization of long sequences even before training.", "The empirical analysis show that our approach achieves competitive results against alternative orthogonal models, and the LSTM, on sequential MNIST, permuted MNIST and TIMIT." ]
[ 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.1538461446762085, 0.20512819290161133, 0.2702702581882477, 0.3461538553237915, 0.44897958636283875, 0.11428570747375488, 0.3829787075519562, 0.17391303181648254 ]
BkgM7xHYwH
[ "We show how to initialize recurrent architectures with the closed-form solution of a linear autoencoder for sequences. We show the advantages of this approach compared to orthogonal RNNs." ]
[ "This paper improves upon the line of research that formulates named entity recognition (NER) as a sequence-labeling problem.", "We use so-called black-box long short-term memory (LSTM) encoders to achieve state-of-the-art results while providing insightful understanding of what the auto-regressive model learns with a parallel self-attention mechanism.", "Specifically, we decouple the sequence-labeling problem of NER into entity chunking, e.g., Barack_B Obama_E was_O elected_O, and entity typing, e.g., Barack_PERSON Obama_PERSON was_NONE elected_NONE, and analyze how the model learns to, or has difficulties in, capturing text patterns for each of the subtasks.", "The insights we gain then lead us to explore a more sophisticated deep cross-Bi-LSTM encoder, which proves better at capturing global interactions given both empirical results and a theoretical justification." ]
[ 0, 1, 0, 0 ]
[ 0.10256409645080566, 0.2448979616165161, 0.13333332538604736, 0.23999999463558197 ]
rklNwjCcYm
[ "We provide insightful understanding of sequence-labeling NER and propose to use two types of cross structures, both of which bring theoretical and empirical improvements." ]
[ "Knowledge Graph Embedding (KGE) is the task of jointly learning entity and relation embeddings for a given knowledge graph.", "Existing methods for learning KGEs can be seen as a two-stage process where", "(a) entities and relations in the knowledge graph are represented using some linear algebraic structures (embeddings), and", "(b) a scoring function is defined that evaluates the strength of a relation that holds between two entities using the corresponding relation and entity embeddings.", "Unfortunately, prior proposals for the scoring functions in the first step have been heuristically motivated, and it is unclear as to how the scoring functions in KGEs relate to the generation process of the underlying knowledge graph.", "To address this issue, we propose a generative account of the KGE learning task.", "Specifically, given a knowledge graph represented by a set of relational triples (h, R, t), where the semantic relation R holds between the two entities h (head) and t (tail), we extend the random walk model (Arora et al., 2016a) of word embeddings to KGE.", "We derive a theoretical relationship between the joint probability p(h, R, t) and the embeddings of h, R and t.", "Moreover, we show that marginal loss minimisation, a popular objective used by much prior work in KGE, follows naturally from the log-likelihood ratio maximisation under the probabilities estimated from the KGEs according to our theoretical relationship.", "We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph.", "The KGEs learnt by our proposed method obtain state-of-the-art performance on FB15K237 and WN18RR benchmark datasets, providing empirical evidence in support of the theory.\n" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.25806450843811035, 0.07999999821186066, 0.1428571343421936, 0.12121211737394333, 0.1463414579629898, 0.23076923191547394, 0.18518517911434174, 0.19999998807907104, 0.04444444179534912, 0.27586206793785095, 0.10810810327529907 ]
SkxbDsR9Ym
[ "We present a theoretically proven generative model of knowledge graph embedding. " ]
[ "Currently the only techniques for sharing governance of a deep learning model are homomorphic encryption and secure multiparty computation.", "Unfortunately, neither of these techniques is applicable to the training of large neural networks due to their large computational and communication overheads.", "As a scalable technique for shared model governance, we propose splitting deep learning model between multiple parties.", "This paper empirically investigates the security guarantee of this technique, which is introduced as the problem of model completion: Given the entire training data set or an environment simulator, and a subset of the parameters of a trained deep learning model, how much training is required to recover the model’s original performance? ", "We define a metric for evaluating the hardness of the model completion problem and study it empirically in both supervised learning on ImageNet and reinforcement learning on Atari and DeepMind Lab.", "Our experiments show that (1) the model completion problem is harder in reinforcement learning than in supervised learning because of the unavailability of the trained agent’s trajectories, and (2) its hardness depends not primarily on the number of parameters of the missing part, but more so on their type and location. ", "Our results suggest that model splitting might be a feasible technique for shared model governance in some settings where training is very expensive." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0.060606054961681366, 0.1818181723356247, 0, 0.25, 0.25, 0.145454540848732, 0.0555555522441864 ]
H1xEtoRqtQ
[ "We study empirically how hard it is to recover missing parts of trained models" ]
[ "This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference.", "Unlike the existing methods on domain transfer through deep generative models, such as StarGAN (Choi et al., 2017) and UFDN (Liu et al., 2018), the variational domain adaptation has three advantages.", "Firstly, the samples from the target are not required.", "Instead, the framework requires one known source as a prior $p(x)$ and binary discriminators, $p(\\mathcal{D}_i|x)$, discriminating the target domain $\\mathcal{D}_i$ from others.", "Consequently, the framework regards a target as a posterior that can be explicitly formulated through the Bayesian inference, $p(x|\\mathcal{D}_i) \\propto p(\\mathcal{D}_i|x)p(x)$, as exhibited by a further proposed model of dual variational autoencoder (DualVAE).", "Secondly, the framework is scablable to large-scale domains.", "As well as VAE encodes a sample $x$ as a mode on a latent space: $\\mu(x) \\in \\mathcal{Z}$, DualVAE encodes a domain $\\mathcal{D}_i$ as a mode on the dual latent space $\\mu^*(\\mathcal{D}_i) \\in \\mathcal{Z}^*$, named domain embedding.", "It reformulates the posterior with a natural paring $\\langle, \\rangle: \\mathcal{Z} \\times \\mathcal{Z}^* \\rightarrow \\Real$, which can be expanded to uncountable infinite domains such as continuous domains as well as interpolation.", "Thirdly, DualVAE fastly converges without sophisticated automatic/manual hyperparameter search in comparison to GANs as it requires only one additional parameter to VAE.", "Through the numerical experiment, we demonstrate the three benefits with multi-domain image generation task on CelebA with up to 60 domains, and exhibits that DualVAE records the state-of-the-art performance outperforming StarGAN and UFDN." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.9411764740943909, 0.13333332538604736, 0, 0.15789473056793213, 0.17391303181648254, 0.07999999821186066, 0.0952380895614624, 0.04444443807005882, 0, 0 ]
ByeLmn0qtX
[ "This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference" ]
[ "We propose a new method to train neural networks based on a novel combination of adversarial training and provable defenses.", "The key idea is to model training as a procedure which includes both, the verifier and the adversary.", "In every iteration, the verifier aims to certify the network using convex relaxation while the adversary tries to find inputs inside that convex relaxation which cause verification to fail.", "We experimentally show that this training method is promising and achieves the best of both worlds – it produces a model with state-of-the-art accuracy (74.8%) and certified robustness (55.9%) on the challenging CIFAR-10 dataset with a 2/255 L-infinity perturbation.", "This is a significant improvement over the currently known best results of 68.3% accuracy and 53.9% certified robustness, achieved using a 5 times larger network than our work." ]
[ 1, 0, 0, 0, 0 ]
[ 0.5853658318519592, 0.25641024112701416, 0.04444443807005882, 0.47457626461982727, 0.19607841968536377 ]
SJxSDxrKDr
[ "We propose a novel combination of adversarial training and provable defenses which produces a model with state-of-the-art accuracy and certified robustness on CIFAR-10. " ]
[ "Learning tasks on source code (i.e., formal languages) have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code's known syntax.", "For example, long-range dependencies induced by using the same variable or function in distant locations are often not considered.", "We propose to use graphs to represent both the syntactic and semantic structure of code and use graph-based deep learning methods to learn to reason over program structures.\n\n", "In this work, we present how to construct graphs from source code and how to scale Gated Graph Neural Networks training to such large graphs.", "We evaluate our method on two tasks: VarNaming, in which a network attempts to predict the name of a variable given its usage, and VarMisuse, in which the network learns to reason about selecting the correct variable that should be used at a given program location.", "Our comparison to methods that use less structured program representations shows the advantages of modeling known structure, and suggests that our models learn to infer meaningful names and to solve the VarMisuse task in many cases.", "Additionally, our testing showed that VarMisuse identifies a number of bugs in mature open-source projects." ]
[ 0, 0, 1, 0, 0, 0, 0 ]
[ 0.1428571343421936, 0, 0.22727271914482117, 0.19512194395065308, 0.178571417927742, 0.15686273574829102, 0.11428570747375488 ]
BJOFETxR-
[ "Programs have structure that can be represented as graphs, and graph neural networks can learn to find bugs on such graphs" ]
[ "Overfitting is an ubiquitous problem in neural network training and usually mitigated using a holdout data set.\n", "Here we challenge this rationale and investigate criteria for overfitting without using a holdout data set.\n", "Specifically, we train a model for a fixed number of epochs multiple times with varying fractions of randomized labels and for a range of regularization strengths. \n", "A properly trained model should not be able to attain an accuracy greater than the fraction of properly labeled data points.", "Otherwise the model overfits. \n", "We introduce two criteria for detecting overfitting and one to detect underfitting.", "We analyze early stopping, the regularization factor, and network depth.\n", "In safety critical applications we are interested in models and parameter settings which perform well and are not likely to overfit.", "The methods of this paper allow characterizing and identifying such models." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.07407406717538834, 0.307692289352417, 0.12903225421905518, 0, 0, 0.6666666865348816, 0.29999998211860657, 0.0714285671710968, 0.09999999403953552 ]
B1lKtjA9FQ
[ "We introduce and analyze several criteria for detecting overfitting." ]
[ "Partially observable Markov decision processes (POMDPs) are a widely-used framework to model decision-making with uncertainty about the environment and under stochastic outcome.", "In conventional POMDP models, the observations that the agent receives originate from fixed known distribution.", "However, in a variety of real-world scenarios the agent has an active role in its perception by selecting which observations to receive.", "Due to combinatorial nature of such selection process, it is computationally intractable to integrate the perception decision with the planning decision.", "To prevent such expansion of the action space, we propose a greedy strategy for observation selection that aims to minimize the uncertainty in state. \n", "We develop a novel point-based value iteration algorithm that incorporates the greedy strategy to achieve near-optimal uncertainty reduction for sampled belief points.", "This in turn enables the solver to efficiently approximate the reachable subspace of belief simplex by essentially separating computations related to perception from planning.\n", "Lastly, we implement the proposed solver and demonstrate its performance and computational advantage in a range of robotic scenarios where the robot simultaneously performs active perception and planning." ]
[ 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.1621621549129486, 0, 0.1666666567325592, 0.1818181723356247, 0.10256409645080566, 0.37837836146354675, 0.15789473056793213, 0.29999998211860657 ]
S1lTg3RcFm
[ "We develop a point-based value iteration solver for POMDPs with active perception and planning tasks." ]
[ "Deep neural networks, in particular convolutional neural networks, have become highly effective tools for compressing images and solving inverse problems including denoising, inpainting, and reconstruction from few and noisy measurements.", "This success can be attributed in part to their ability to represent and generate natural images well.", "Contrary to classical tools such as wavelets, image-generating deep neural networks have a large number of parameters---typically a multiple of their output dimension---and need to be trained on large datasets. \n", "In this paper, we propose an untrained simple image model, called the deep decoder, which is a deep neural network that can generate natural images from very few weight parameters.\n", "The deep decoder has a simple architecture with no convolutions and fewer weight parameters than the output dimensionality.", "This underparameterization enables the deep decoder to compress images into a concise set of network weights, which we show is on par with wavelet-based thresholding.", "Further, underparameterization provides a barrier to overfitting, allowing the deep decoder to have state-of-the-art performance for denoising.", "The deep decoder is simple in the sense that each layer has an identical structure that consists of only one upsampling unit, pixel-wise linear combination of channels, ReLU activation, and channelwise normalization.", "This simplicity makes the network amenable to theoretical analysis, and it sheds light on the aspects of neural networks that enable them to form effective signal representations." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.11538460850715637, 0.1904761791229248, 0.07547169178724289, 0.3214285671710968, 0.13636362552642822, 0.11764705181121826, 0.0952380895614624, 0.178571417927742, 0.15686273574829102 ]
rylV-2C9KQ
[ "We introduce an underparameterized, nonconvolutional, and simple deep neural network that can, without training, effectively represent natural images and solve image processing tasks like compression and denoising competitively." ]
[ "In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU).", "We give an algorithm to train a ReLU DNN with one hidden layer to {\\em global optimality} with runtime polynomial in the data size albeit exponential in the input dimension.", "Further, we improve on the known lower bounds on size (from exponential to super exponential) for approximating a ReLU deep net function by a shallower ReLU net.", "Our gap theorems hold for smoothly parametrized families of ``hard'' functions, contrary to countable, discrete families known in the literature. ", "An example consequence of our gap theorems is the following: for every natural number $k$ there exists a function representable by a ReLU DNN with $k^2$ hidden layers and total size $k^3$, such that any ReLU DNN with at most $k$ hidden layers will require at least $\\frac12k^{k+1}-1$ total nodes.", "Finally, for the family of $\\R^n\\to \\R$ DNNs with ReLU activations, we show a new lowerbound on the number of affine pieces, which is larger than previous constructions in certain regimes of the network architecture and most distinctively our lowerbound is demonstrated by an explicit construction of a \\emph{smoothly parameterized} family of functions attaining this scaling.", "Our construction utilizes the theory of zonotopes from polyhedral theory." ]
[ 0, 1, 0, 0, 0, 0, 0 ]
[ 0.2222222238779068, 0.2666666507720947, 0.17543859779834747, 0.18518517911434174, 0.18666666746139526, 0.19999998807907104, 0.09302325546741486 ]
B1J_rgWRW
[ "This paper 1) characterizes functions representable by ReLU DNNs, 2) formally studies the benefit of depth in such architectures, 3) gives an algorithm to implement empirical risk minimization to global optimality for two layer ReLU nets." ]
[ "The backpropagation of error algorithm (BP) is often said to be impossible to implement in a real brain.", "The recent success of deep networks in machine learning and AI, however, has inspired a number of proposals for understanding how the brain might learn across multiple layers, and hence how it might implement or approximate BP.", "As of yet, none of these proposals have been rigorously evaluated on tasks where BP-guided deep learning has proved critical, or in architectures more structured than simple fully-connected networks.", "Here we present the first results on scaling up a biologically motivated model of deep learning to datasets which need deep networks with appropriate architectures to achieve good performance.", "We present results on CIFAR-10 and ImageNet. ", "For CIFAR-10 we show that our algorithm, a straightforward, weight-transport-free variant of difference target-propagation (DTP) modified to remove backpropagation from the penultimate layer, is competitive with BP in training deep networks with locally defined receptive fields that have untied weights. ", "For ImageNet we find that both DTP and our algorithm perform significantly worse than BP, opening questions about whether different architectures or algorithms are required to scale these approaches.", "Our results and implementation details help establish baselines for biologically motivated deep learning schemes going forward." ]
[ 0, 0, 0, 0, 0, 0, 0, 1 ]
[ 0, 0.13636362552642822, 0.1538461446762085, 0.2631579041481018, 0.21052631735801697, 0, 0.14999999105930328, 0.29629629850387573 ]
BypdvewVM
[ "Benchmarks for biologically plausible learning algorithms on complex datasets and architectures" ]
[ "Deep neural networks (DNNs) usually contain millions, maybe billions, of parameters/weights, making both storage and computation very expensive.", "This has motivated a large body of work to reduce the complexity of the neural network by using sparsity-inducing regularizers. ", "Another well-known approach for controlling the complexity of DNNs is parameter sharing/tying, where certain sets of weights are forced to share a common value.", "Some forms of weight sharing are hard-wired to express certain in- variances, with a notable example being the shift-invariance of convolutional layers.", "However, there may be other groups of weights that may be tied together during the learning process, thus further re- ducing the complexity of the network.", "In this paper, we adopt a recently proposed sparsity-inducing regularizer, named GrOWL (group ordered weighted l1), which encourages sparsity and, simulta- neously, learns which groups of parameters should share a common value.", "GrOWL has been proven effective in linear regression, being able to identify and cope with strongly correlated covariates.", "Unlike standard sparsity-inducing regularizers (e.g., l1 a.k.a.", "Lasso), GrOWL not only eliminates unimportant neurons by setting all the corresponding weights to zero, but also explicitly identifies strongly correlated neurons by tying the corresponding weights to a common value.", "This ability of GrOWL motivates the following two-stage procedure: (i) use GrOWL regularization in the training process to simultaneously identify significant neurons and groups of parameter that should be tied together; (ii) retrain the network, enforcing the structure that was unveiled in the previous phase, i.e., keeping only the significant neurons and enforcing the learned tying structure.", "We evaluate the proposed approach on several benchmark datasets, showing that it can dramatically compress the network with slight or even no loss on generalization performance.\n" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.0555555522441864, 0.1621621549129486, 0.1463414579629898, 0.051282044500112534, 0.10256409645080566, 0.1249999925494194, 0.1666666567325592, 0, 0.1395348757505417, 0.19354838132858276, 0.1860465109348297 ]
rypT3fb0b
[ "We have proposed using the recent GrOWL regularizer for simultaneous parameter sparsity and tying in DNN learning. " ]
[ "Weight-sharing—the simultaneous optimization of multiple neural networks using the same parameters—has emerged as a key component of state-of-the-art neural architecture search.", "However, its success is poorly understood and often found to be surprising.", "We argue that, rather than just being an optimization trick, the weight-sharing approach is induced by the relaxation of a structured hypothesis space, and introduces new algorithmic and theoretical challenges as well as applications beyond neural architecture search.", "Algorithmically, we show how the geometry of ERM for weight-sharing requires greater care when designing gradient- based minimization methods and apply tools from non-convex non-Euclidean optimization to give general-purpose algorithms that adapt to the underlying structure.", "We further analyze the learning-theoretic behavior of the bilevel optimization solved by practical weight-sharing methods.", "Next, using kernel configuration and NLP feature selection as case studies, we demonstrate how weight-sharing applies to the architecture search generalization of NAS and effectively optimizes the resulting bilevel objective.", "Finally, we use our optimization analysis to develop a simple exponentiated gradient method for NAS that aligns with the underlying optimization geometry and matches state-of-the-art approaches on CIFAR-10." ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.42424240708351135, 0.07692307233810425, 0.3265306055545807, 0.1666666567325592, 0.2142857164144516, 0.2380952388048172, 0.19512194395065308 ]
HJgRCyHFDr
[ "An analysis of the learning and optimization structures of architecture search in neural networks and beyond." ]
[ "Deep latent variable models have seen recent success in many data domains.", "Lossless compression is an application of these models which, despite having the potential to be highly useful, has yet to be implemented in a practical manner.", "We present '`Bits Back with ANS' (BB-ANS), a scheme to perform lossless compression with latent variable models at a near optimal rate.", "We demonstrate this scheme by using it to compress the MNIST dataset with a variational auto-encoder model (VAE), achieving compression rates superior to standard methods with only a simple VAE.", "Given that the scheme is highly amenable to parallelization, we conclude that with a sufficiently high quality generative model this scheme could be used to achieve substantial improvements in compression rate with acceptable running time.", "We make our implementation available open source at https://github.com/bits-back/bits-back ." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0, 0.15789473056793213, 0.23529411852359772, 0.19512194395065308, 0.08888888359069824, 0.0833333283662796 ]
ryE98iR5tm
[ "We do lossless compression of large image datasets using a VAE, beat existing compression algorithms." ]
[ "Hyperparameter tuning is arguably the most important ingredient for obtaining state of art performance in deep networks. ", "We focus on hyperparameters that are related to the optimization algorithm, e.g. learning rates, which have a large impact on the training speed and the resulting accuracy.", "Typically, fixed learning rate schedules are employed during training.", "We propose Hyperdyn a dynamic hyperparameter optimization method that selects new learning rates on the fly at the end of each epoch.", "Our explore-exploit framework combines Bayesian optimization (BO) with a rejection strategy, based on a simple probabilistic wait and watch test. ", "We obtain state of art accuracy results on CIFAR and Imagenet datasets, but with significantly faster training, when compared with the best manually tuned networks." ]
[ 0, 0, 0, 0, 1, 0 ]
[ 0, 0.06666666269302368, 0, 0.1538461446762085, 0.23999999463558197, 0 ]
HJtPtdqQG
[ "Bayesian optimization based online hyperparameter optimization." ]
[ "Multi-hop text-based question-answering is a current challenge in machine comprehension. \n", "This task requires to sequentially integrate facts from multiple passages to answer complex natural language questions.\n", "In this paper, we propose a novel architecture, called the Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require reasoning capabilities.\n", "LQR-net is composed of an association of \\textbf{reading modules} and \\textbf{reformulation modules}.\n", "The purpose of the reading module is to produce a question-aware representation of the document.\n", "From this document representation, the reformulation module extracts essential elements to calculate an updated representation of the question.\n", "This updated question is then passed to the following hop.\n", "We evaluate our architecture on the \\hotpotqa question-answering dataset designed to assess multi-hop reasoning capabilities.\n", "Our model achieves competitive results on the public leaderboard and outperforms the best current \\textit{published} models in terms of Exact Match (EM) and $F_1$ score.\n", "Finally, we show that an analysis of the sequential reformulations can provide interpretable reasoning paths." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1111111044883728, 0, 0.9259259104728699, 0.0555555522441864, 0.10256409645080566, 0.09302324801683426, 0.0555555522441864, 0.2926829159259796, 0.08163265138864517, 0.19999998807907104 ]
S1x63TEYvr
[ "In this paper, we propose the Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require reasoning capabilities." ]
[ "We propose a method to automatically compute the importance of features at every observation in time series, by simulating counterfactual trajectories given previous observations.", "We define the importance of each observation as the change in the model output caused by replacing the observation with a generated one.", "Our method can be applied to arbitrarily complex time series models.", "We compare the generated feature importance to existing methods like sensitivity analyses, feature occlusion, and other explanation baselines to show that our approach generates more precise explanations and is less sensitive to noise in the input signals." ]
[ 1, 0, 0, 0 ]
[ 0.21621620655059814, 0.1249999925494194, 0.0833333283662796, 0.04444444179534912 ]
HygDF1rYDB
[ "Explaining Multivariate Time Series Models by finding important observations in time using Counterfactuals" ]
[ "This paper addresses unsupervised domain adaptation, the setting where labeled training data is available on a source domain, but the goal is to have good performance on a target domain with only unlabeled data.", "Like much of previous work, we seek to align the learned representations of the source and target domains while preserving discriminability.", "The way we accomplish alignment is by learning to perform auxiliary self-supervised task(s) on both domains simultaneously. ", "Each self-supervised task brings the two domains closer together along the direction relevant to that task.", "Training this jointly with the main task classifier on the source domain is shown to successfully generalize to the unlabeled target domain. ", "The presented objective is straightforward to implement and easy to optimize.", "We achieve state-of-the-art results on four out of seven standard benchmarks, and competitive results on segmentation adaptation.", "We also demonstrate that our method composes well with another popular pixel-level adaptation method." ]
[ 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.20000000298023224, 0.12903225421905518, 0.19999998807907104, 0.07692307233810425, 0.19354838132858276, 0.09090908616781235, 0.2222222238779068, 0.1599999964237213 ]
S1lF8xHYwS
[ "We use self-supervision on both domain to align them for unsupervised domain adaptation." ]
[ "We introduce simple, efficient algorithms for computing a MinHash of a probability distribution, suitable for both sparse and dense data, with equivalent running times to the state of the art for both cases.", "The collision probability of these algorithms is a new measure of the similarity of positive vectors which we investigate in detail.", "We describe the sense in which this collision probability is optimal for any Locality Sensitive Hash based on sampling.", "We argue that this similarity measure is more useful for probability distributions than the similarity pursued by other algorithms for weighted MinHash, and is the natural generalization of the Jaccard index." ]
[ 0, 1, 0, 0 ]
[ 0.21276594698429108, 0.307692289352417, 0.1538461446762085, 0.30434781312942505 ]
BkOswnc5z
[ "The minimum of a set of exponentially distributed hashes has a very useful collision probability that generalizes the Jaccard Index to probability distributions." ]
[ "Recently, progress has been made towards improving relational reasoning in machine learning field.", "Among existing models, graph neural networks (GNNs) is one of the most effective approaches for multi-hop relational reasoning.", "In fact, multi-hop relational reasoning is indispensable in many natural language processing tasks such as relation extraction.", "In this paper, we propose to generate the parameters of graph neural networks (GP-GNNs) according to natural language sentences, which enables GNNs to process relational reasoning on unstructured text inputs.", "We verify GP-GNNs in relation extraction from text.", "Experimental results on a human-annotated dataset and two distantly supervised datasets show that our model achieves significant improvements compared to the baselines.", "We also perform a qualitative analysis to demonstrate that our model could discover more accurate relations by multi-hop relational reasoning." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0.06666666269302368, 0.22857142984867096, 0.1764705777168274, 0.2666666507720947, 0.07999999821186066, 0.05128204822540283, 0.21621620655059814 ]
SkgzYiRqtX
[ "A graph neural network model with parameters generated from natural languages, which can perform multi-hop reasoning. " ]
[ "Off-Policy Actor-Critic (Off-PAC) methods have proven successful in a variety of continuous control tasks.", "Normally, the critic’s action-value function is updated using temporal-difference, and the critic in turn provides a loss for the actor that trains it to take actions with higher expected return.", "In this paper, we introduce a novel and flexible meta-critic that observes the learning process and meta-learns an additional loss for the actor that accelerates and improves actor-critic learning.", "Compared to the vanilla critic, the meta-critic network is explicitly trained to accelerate the learning process; and compared to existing meta-learning algorithms, meta-critic is rapidly learned online for a single task, rather than slowly over a family of tasks.", "Crucially, our meta-critic framework is designed for off-policy based learners, which currently provide state-of-the-art reinforcement learning sample efficiency.", "We demonstrate that online meta-critic learning leads to improvements in a variety of continuous control environments when combined with contemporary Off-PAC methods DDPG, TD3 and the state-of-the-art SAC." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.05882352590560913, 0.1249999925494194, 0.22727271914482117, 0.1538461446762085, 0.15789473056793213, 0.2083333283662796 ]
H1lKd6NYPS
[ "We present Meta-Critic, an auxiliary critic module for off-policy actor-critic methods that can be meta-learned online during single task learning." ]
[ "Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data.", "Nevertheless, these networks often generalize well in practice.", "It has also been observed that trained networks can often be ``compressed to much smaller representations.", "The purpose of this paper is to connect these two empirical observations.", "Our main technical result is a generalization bound for compressed networks based on the compressed size that, combined with off-the-shelf compression algorithms, leads to state-of-the-art generalization guarantees.", "In particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem.", "Additionally, we show that compressibility of models that tend to overfit is limited.", "Empirical results show that an increase in overfitting increases the number of bits required to describe a trained network." ]
[ 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.11764705181121826, 0.06896550953388214, 0.05405404791235924, 0, 0.260869562625885, 0.10526315122842789, 0, 0.09999999403953552 ]
BJgqqsAct7
[ "We obtain non-vacuous generalization bounds on ImageNet-scale deep neural networks by combining an original PAC-Bayes bound and an off-the-shelf neural network compression method." ]
[ "Adversarial examples can be defined as inputs to a model which induce a mistake -- where the model output is different than that of an oracle, perhaps in surprising or malicious ways.", "Original models of adversarial attacks are primarily studied in the context of classification and computer vision tasks.", "While several attacks have been proposed in natural language processing (NLP) settings, they often vary in defining the parameters of an attack and what a successful attack would look like.", "The goal of this work is to propose a unifying model of adversarial examples suitable for NLP tasks in both generative and classification settings.", "We define the notion of adversarial gain: based in control theory, it is a measure of the change in the output of a system relative to the perturbation of the input (caused by the so-called adversary) presented to the learner.", "This definition, as we show, can be used under different feature spaces and distance conditions to determine attack or defense effectiveness across different intuitive manifolds.", "This notion of adversarial gain not only provides a useful way for evaluating adversaries and defenses, but can act as a building block for future work in robustness under adversaries due to its rooted nature in stability and manifold theory." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0.1818181723356247, 0.24390242993831635, 0.22641508281230927, 0.3333333134651184, 0.30188679695129395, 0.12244897335767746, 0.23333333432674408 ]
HkgGWM3som
[ "We propose an alternative measure for determining effectiveness of adversarial attacks in NLP models according to a distance measure-based method like incremental L2-gain in control theory." ]
[ "We propose a Warped Residual Network (WarpNet) using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network.", "We apply a perturbation theory on residual networks and decouple the interactions between residual units.", "The resulting warp operator is a first order approximation of the output over multiple layers.", "The first order perturbation theory exhibits properties such as binomial path lengths and exponential gradient scaling found experimentally by Veit et al (2016). \n", "We demonstrate through an extensive performance study that the proposed network achieves comparable predictive performance to the original residual network with the same number of parameters, while achieving a significant speed-up on the total training time.", "As WarpNet performs model parallelism in residual network training in which weights are distributed over different GPUs, it offers speed-up and capability to train larger networks compared to original residual networks." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.9642857313156128, 0.2380952388048172, 0.23255813121795654, 0.07692307233810425, 0.2711864411830902, 0.1818181723356247 ]
SyMvJrdaW
[ "We propose the Warped Residual Network using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network. " ]
[ "A plethora of methods attempting to explain predictions of black-box models have been proposed by the Explainable Artificial Intelligence (XAI) community.", "Yet, measuring the quality of the generated explanations is largely unexplored, making quantitative comparisons non-trivial.", "In this work, we propose a suite of multifaceted metrics that enables us to objectively compare explainers based on the correctness, consistency, as well as the confidence of the generated explanations.", "These metrics are computationally inexpensive, do not require model-retraining and can be used across different data modalities.", "We evaluate them on common explainers such as Grad-CAM, SmoothGrad, LIME and Integrated Gradients.", "Our experiments show that the proposed metrics reflect qualitative observations reported in earlier works." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.1463414579629898, 0.05714285373687744, 0.375, 0.10526315122842789, 0.22857142984867096, 0.11428570747375488 ]
B1xBAA4FwH
[ "We propose a suite of metrics that capture desired properties of explainability algorithms and use it to objectively compare and evaluate such methods" ]
[ "Neural networks are known to produce unexpected results on inputs that are far from the training distribution.", "One approach to tackle this problem is to detect the samples on which the trained network can not answer reliably.", "ODIN is a recently proposed method for out-of-distribution detection that does not modify the trained network and achieves good performance for various image classification tasks.", "In this paper we adapt ODIN for sentence classification and word tagging tasks.", "We show that the scores produced by ODIN can be used as a confidence measure for the predictions on both in-distribution and out-of-distribution datasets." ]
[ 0, 0, 0, 0, 1 ]
[ 0.12121211737394333, 0.11428570747375488, 0.2926829159259796, 0.13333332538604736, 0.29999998211860657 ]
HJf2ds2ssm
[ "A recent out-of-distribution detection method helps to measure the confidence of RNN predictions for some NLP tasks" ]
[ "Some recent work has shown separation between the expressive power of depth-2 and depth-3 neural networks.", "These separation results are shown by constructing functions and input distributions, so that the function is well-approximable by a depth-3 neural network of polynomial size but it cannot be well-approximated under the chosen input distribution by any depth-2 neural network of polynomial size.", "These results are not robust and require carefully chosen functions as well as input distributions.\n\n", "We show a similar separation between the expressive power of depth-2 and depth-3 sigmoidal neural networks over a large class of input distributions, as long as the weights are polynomially bounded.", "While doing so, we also show that depth-2 sigmoidal neural networks with small width and small weights can be well-approximated by low-degree multivariate polynomials." ]
[ 0, 0, 0, 1, 0 ]
[ 0.23999999463558197, 0.09302325546741486, 0.0833333283662796, 0.277777761220932, 0.1875 ]
SJICXeWAb
[ "depth-2-vs-3 separation for sigmoidal neural networks over general distributions" ]
[ "The smallest eigenvectors of the graph Laplacian are well-known to provide a succinct representation of the geometry of a weighted graph.", "In reinforcement learning (RL), where the weighted graph may be interpreted as the state transition process induced by a behavior policy acting on the environment, approximating the eigenvectors of the Laplacian provides a promising approach to state representation learning.", "However, existing methods for performing this approximation are ill-suited in general RL settings for two main reasons: First, they are computationally expensive, often requiring operations on large matrices.", "Second, these methods lack adequate justification beyond simple, tabular, finite-state settings.", "In this paper, we present a fully general and scalable method for approximating the eigenvectors of the Laplacian in a model-free RL context.", "We systematically evaluate our approach and empirically show that it generalizes beyond the tabular, finite-state setting.", "Even in tabular, finite-state settings, its ability to approximate the eigenvectors outperforms previous proposals.", "Finally, we show the potential benefits of using a Laplacian representation learned using our method in goal-achieving RL tasks, providing evidence that our technique can be used to significantly improve the performance of an RL agent." ]
[ 0, 0, 0, 0, 0, 0, 0, 1 ]
[ 0.2790697515010834, 0.2711864411830902, 0.07547169178724289, 0, 0.5, 0.23255813121795654, 0.24390242993831635, 0.5862069129943848 ]
HJlNpoA5YQ
[ "We propose a scalable method to approximate the eigenvectors of the Laplacian in the reinforcement learning context and we show that the learned representations can improve the performance of an RL agent." ]
[ "Our work offers a new method for domain translation from semantic label maps\n", "and Computer Graphic (CG) simulation edge map images to photo-realistic im-\n", "ages.", "We train a Generative Adversarial Network (GAN) in a conditional way to\n", "generate a photo-realistic version of a given CG scene.", "Existing architectures of\n", "GANs still lack the photo-realism capabilities needed to train DNNs for computer\n", "vision tasks, we address this issue by embedding edge maps, and training it in an\n", "adversarial mode.", "We also offer an extension to our model that uses our GAN\n", "architecture to create visually appealing and temporally coherent videos." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0952380895614624, 0.31578946113586426, 0.10526315122842789, 0, 0, 0.09999999403953552, 0.08695651590824127, 0.10526315122842789, 0.23529411852359772 ]
BJxqohNFPB
[ "Simulation to real images translation and video generation" ]
[ "Deep neural networks are widely used in various domains, but the prohibitive computational complexity prevents their deployment on mobile devices.", "Numerous model compression algorithms have been proposed, however, it is often difficult and time-consuming to choose proper hyper-parameters to obtain an efficient compressed model.", "In this paper, we propose an automated framework for model compression and acceleration, namely PocketFlow.", "This is an easy-to-use toolkit that integrates a series of model compression algorithms and embeds a hyper-parameter optimization module to automatically search for the optimal combination of hyper-parameters.", "Furthermore, the compressed model can be converted into the TensorFlow Lite format and easily deployed on mobile devices to speed-up the inference.", "PocketFlow is now open-source and publicly available at https://github.com/Tencent/PocketFlow." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.19999998807907104, 0.2380952388048172, 0.514285683631897, 0.260869562625885, 0.29999998211860657, 0.06666666269302368 ]
H1fWoYhdim
[ "We propose PocketFlow, an automated framework for model compression and acceleration, to facilitate deep learning models' deployment on mobile devices." ]
[ "Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest.", "However, current techniques for training generative models require access to fully-observed samples.", "In many settings, it is expensive or even impossible to obtain fully-observed samples, but economical to obtain partial, noisy observations.", "We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest.", "We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models.", "Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN.", "On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements.", "Generative models trained with our method can obtain $2$-$4$x higher inception scores than the baselines." ]
[ 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.0624999962747097, 0.0952380895614624, 0.14814814925193787, 0.07407406717538834, 0, 0, 0, 0 ]
Hy7fDog0b
[ "How to learn GANs from noisy, distorted, partial observations" ]
[ "Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural Networks (DNNs), including both production quality, pre-trained models such as AlexNet and Inception, and smaller models trained from scratch, such as LeNet5 and a miniature-AlexNet. ", "Empirical and theoretical results clearly indicate that the empirical spectral density (ESD) of DNN layer matrices displays signatures of traditionally-regularized statistical models, even in the absence of exogenously specifying traditional forms of regularization, such as Dropout or Weight Norm constraints. ", "Building on recent results in RMT, most notably its extension to Universality classes of Heavy-Tailed matrices, we develop a theory to identify 5+1 Phases of Training, corresponding to increasing amounts of Implicit Self-Regularization. ", "For smaller and/or older DNNs, this Implicit Self-Regularization is like traditional Tikhonov regularization, in that there is a \"size scale\" separating signal from noise. ", "For state-of-the-art DNNs, however, we identify a novel form of Heavy-Tailed Self-Regularization, similar to the self-organization seen in the statistical physics of disordered systems. ", "This implicit Self-Regularization can depend strongly on the many knobs of the training process. ", "By exploiting the generalization gap phenomena, we demonstrate that we can cause a small model to exhibit all 5+1 phases of training simply by changing the batch size." ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.1666666567325592, 0.09677419066429138, 0.072727270424366, 0.08163265138864517, 0.1249999925494194, 0.1538461446762085, 0.15686273574829102 ]
SJeFNoRcFQ
[ "See the abstract. (For the revision, the paper is identical, except for a 59 page Supplementary Material, which can serve as a stand-along technical report version of the paper.)" ]
[ "We introduce an attention mechanism to improve feature extraction for deep active learning (AL) in the semi-supervised setting.", "The proposed attention mechanism is based on recent methods to visually explain predictions made by DNNs.", "We apply the proposed explanation-based attention to MNIST and SVHN classification.", "The conducted experiments show accuracy improvements for the original and class-imbalanced datasets with the same number of training examples and faster long-tail convergence compared to uncertainty-based methods." ]
[ 1, 0, 0, 0 ]
[ 1, 0.1764705777168274, 0.27586206793785095, 0.1395348757505417 ]
SyxKiVmedV
[ "We introduce an attention mechanism to improve feature extraction for deep active learning (AL) in the semi-supervised setting." ]
[ "We apply canonical forms of gradient complexes (barcodes) to explore neural networks loss surfaces.", "We present an algorithm for calculations of the objective function's barcodes of minima. ", "Our experiments confirm two principal observations: (1) the barcodes of minima are located in a small lower part of the range of values of objective function and (2) increase of the neural network's depth brings down the minima's barcodes.", "This has natural implications for the neural network learning and the ability to generalize." ]
[ 1, 0, 0, 0 ]
[ 1, 0.14814814925193787, 0.08888888359069824, 0.14814814925193787 ]
S1gwC1StwS
[ "We apply canonical forms of gradient complexes (barcodes) to explore neural networks loss surfaces." ]
[ "\nNew types of compute hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs.", "However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon.", "In particular, models that exploit structured input via complex and instance-dependent control flow are difficult to accelerate using existing algorithms and hardware that typically rely on minibatching.", "We present an asynchronous model-parallel (AMP) training algorithm that is specifically motivated by training on networks of interconnected devices.", "Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently, even for small minibatch sizes, resulting in shorter overall training times.", "Our framework opens the door for scaling up a new class of deep learning models that cannot be efficiently trained today." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0, 0.1249999925494194, 0.11428570747375488, 0.1428571343421936, 0.07843136787414551, 0 ]
HJnQJXbC-
[ "Using asynchronous gradient updates to accelerate dynamic neural network training" ]
[ "In cooperative multi-agent reinforcement learning (MARL), how to design a suitable reward signal to accelerate learning and stabilize convergence is a critical problem.", "The global reward signal assigns the same global reward to all agents without distinguishing their contributions, while the local reward signal provides different local rewards to each agent based solely on individual behavior.", "Both of the two reward assignment approaches have some shortcomings: the former might encourage lazy agents, while the latter might produce selfish agents.\n\n", "In this paper, we study reward design problem in cooperative MARL based on packet routing environments.", "Firstly, we show that the above two reward signals are prone to produce suboptimal policies.", "Then, inspired by some observations and considerations, we design some mixed reward signals, which are off-the-shelf to learn better policies.", "Finally, we turn the mixed reward signals into the adaptive counterparts, which achieve best results in our experiments.", "Other reward signals are also discussed in this paper.", "As reward design is a very fundamental problem in RL and especially in MARL, we hope that MARL researchers can rethink the rewards used in their systems." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.19607841968536377, 0.28070175647735596, 0.07692307233810425, 0.5106382966041565, 0.17391303181648254, 0.1599999964237213, 0.1666666567325592, 0.14999999105930328, 0.2142857164144516 ]
r15kjpHa-
[ "We study reward design problem in cooperative MARL based on packet routing environments. The experimental results remind us to be careful to design the rewards, as they are really important to guide the agent behavior." ]
[ "Recent advances have illustrated that it is often possible to learn to solve linear inverse problems in imaging using training data that can outperform more traditional regularized least squares solutions.", "Along these lines, we present some extensions of the Neumann network, a recently introduced end-to-end learned architecture inspired by a truncated Neumann series expansion of the solution map to a regularized least squares problem.", "Here we summarize the Neumann network approach, and show that it has a form compatible with the optimal reconstruction function for a given inverse problem.", "We also investigate an extension of the Neumann network that incorporates a more sample efficient patch-based regularization approach." ]
[ 0, 0, 1, 0 ]
[ 0.2641509473323822, 0.1111111044883728, 0.3333333134651184, 0.3255814015865326 ]
SyxYnQ398H
[ "Neumann networks are an end-to-end, sample-efficient learning approach to solving linear inverse problems in imaging that are compatible with the MSE optimal approach and admit an extension to patch-based learning." ]
[ "End-to-end task-oriented dialogue is challenging since knowledge bases are usually large, dynamic and hard to incorporate into a learning framework.", "We propose the global-to-local memory pointer (GLMP) networks to address this issue.", "In our model, a global memory encoder and a local memory decoder are proposed to share external knowledge.", "The encoder encodes dialogue history, modifies global contextual representation, and generates a global memory pointer.", "The decoder first generates a sketch response with unfilled slots.", "Next, it passes the global memory pointer to filter the external knowledge for relevant information, then instantiates the slots via the local memory pointers.", "We empirically show that our model can improve copy accuracy and mitigate the common out-of-vocabulary problem.", "As a result, GLMP is able to improve over the previous state-of-the-art models in both simulated bAbI Dialogue dataset and human-human Stanford Multi-domain Dialogue dataset on automatic and human evaluation." ]
[ 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.260869562625885, 0.10526315122842789, 0.5714285373687744, 0.25, 0.1111111044883728, 0.260869562625885, 0.0952380895614624, 0.11320754140615463 ]
ryxnHhRqFm
[ "GLMP: Global memory encoder (context RNN, global pointer) and local memory decoder (sketch RNN, local pointer) that share external knowledge (MemNN) are proposed to strengthen response generation in task-oriented dialogue." ]
[ "The checkerboard phenomenon is one of the well-known visual artifacts in the computer vision field.", "The origins and solutions of checkerboard artifacts in the pixel space have been studied for a long time, but their effects on the gradient space have rarely been investigated.", "In this paper, we revisit the checkerboard artifacts in the gradient space which turn out to be the weak point of a network architecture.", "We explore image-agnostic property of gradient checkerboard artifacts and propose a simple yet effective defense method by utilizing the artifacts.", "We introduce our defense module, dubbed Artificial Checkerboard Enhancer (ACE), which induces adversarial attacks on designated pixels.", "This enables the model to deflect attacks by shifting only a single pixel in the image with a remarkable defense rate.", "We provide extensive experiments to support the effectiveness of our work for various attack scenarios using state-of-the-art attack methods.", "Furthermore, we show that ACE is even applicable to large-scale datasets including ImageNet dataset and can be easily transferred to various pretrained networks." ]
[ 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.052631575614213943, 0.20408162474632263, 0.21739129722118378, 0.2790697515010834, 0.1463414579629898, 0.23255813121795654, 0.0952380895614624, 0.08695651590824127 ]
BJlc6iA5YX
[ "We propose a novel aritificial checkerboard enhancer (ACE) module which guides attacks to a pre-specified pixel space and successfully defends it with a simple padding operation." ]
[ "Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, and understanding the dynamics of (stochastic) gradient algorithms for solving such formulations has been a grand challenge.", "As a first step, we restrict to bilinear zero-sum games and give a systematic analysis of popular gradient updates, for both simultaneous and alternating versions.", "We provide exact conditions for their convergence and find the optimal parameter setup and convergence rates.", "In particular, our results offer formal evidence that alternating updates converge \"better\" than simultaneous ones." ]
[ 0, 1, 0, 0 ]
[ 0.2641509473323822, 0.41860464215278625, 0.29411762952804565, 0.17142856121063232 ]
SJlVY04FwH
[ "We systematically analyze the convergence behaviour of popular gradient algorithms for solving bilinear games, with both simultaneous and alternating updates." ]
[ "Most approaches in generalized zero-shot learning rely on cross-modal mapping between an image feature space and a class embedding space or on generating artificial image features.", "However, learning a shared cross-modal embedding by aligning the latent spaces of modality-specific autoencoders is shown to be promising in (generalized) zero-shot learning.", "While following the same direction, we also take artificial feature generation one step further and propose a model where a shared latent space of image features and class embeddings is learned by aligned variational autoencoders, for the purpose of generating latent features to train a softmax classifier.", "We evaluate our learned latent features on conventional benchmark datasets and establish a new state of the art on generalized zero-shot as well as on few-shot learning.", "Moreover, our results on ImageNet with various zero-shot splits show that our latent features generalize well in large-scale settings." ]
[ 1, 0, 0, 0, 0 ]
[ 0.47826087474823, 0.35555556416511536, 0.2539682388305664, 0.3404255211353302, 0.24390242993831635 ]
BkghJoRNO4
[ "We use VAEs to learn a shared latent space embedding between image features and attributes and thereby achieve state-of-the-art results in generalized zero-shot learning." ]
[ "Intuitively, image classification should profit from using spatial information.", "Recent work, however, suggests that this might be overrated in standard CNNs.", "In this paper, we are pushing the envelope and aim to further investigate the reliance on and necessity of spatial information.", "We propose and analyze three methods, namely Shuffle Conv, GAP+FC and 1x1 Conv, that destroy spatial information during both training and testing phases.", "We extensively evaluate these methods on several object recognition datasets (CIFAR100, Small-ImageNet, ImageNet) with a wide range of CNN architectures (VGG16, ResNet50, ResNet152, MobileNet, SqueezeNet).", "Interestingly, we consistently observe that spatial information can be completely deleted from a significant number of layers with no or only small performance drops." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.1818181723356247, 0, 0.0624999962747097, 0.060606054961681366, 0.052631575614213943, 0.1621621549129486 ]
H1l7AkrFPS
[ "Spatial information at last layers is not necessary for a good classification accuracy." ]
[ "Disentangling underlying generative factors of a data distribution is important for interpretability and generalizable representations.", "In this paper, we introduce two novel disentangling methods.", "Our first method, Unlabeled Disentangling GAN (UD-GAN, unsupervised), decomposes the latent noise by generating similar/dissimilar image pairs and it learns a distance metric on these pairs with siamese networks and a contrastive loss.", "This pairwise approach provides consistent representations for similar data points.", "Our second method (UD-GAN-G, weakly supervised) modifies the UD-GAN with user-defined guidance functions, which restrict the information that goes into the siamese networks.", "This constraint helps UD-GAN-G to focus on the desired semantic variations in the data.", "We show that both our methods outperform existing unsupervised approaches in quantitative metrics that measure semantic accuracy of the learned representations.", "In addition, we illustrate that simple guidance functions we use in UD-GAN-G allow us to directly capture the desired variations in the data." ]
[ 0, 0, 0, 0, 0, 0, 0, 1 ]
[ 0.12903225421905518, 0, 0.08695651590824127, 0.07692307233810425, 0.05405404791235924, 0.27586206793785095, 0.1666666567325592, 0.277777761220932 ]
H1e0-30qKm
[ "We use Siamese Networks to guide and disentangle the generation process in GANs without labeled data." ]
[ "We present Predicted Variables, an approach to making machine learning (ML) a first class citizen in programming languages.\n", "There is a growing divide in approaches to building systems: using human experts (e.g. programming) on the one hand, and using behavior learned from data (e.g. ML) on the other hand.", "PVars aim to make using ML in programming easier by hybridizing the two.", "We leverage the existing concept of variables and create a new type, a predicted variable.", "PVars are akin to native variables with one important distinction: PVars determine their value using ML when evaluated.", "We describe PVars and their interface, how they can be used in programming, and demonstrate the feasibility of our approach on three algorithmic problems: binary search, QuickSort, and caches.\n", "We show experimentally that PVars are able to improve over the commonly used heuristics and lead to a better performance than the original algorithms.\n", "As opposed to previous work applying ML to algorithmic problems, PVars have the advantage that they can be used within the existing frameworks and do not require the existing domain knowledge to be replaced.", "PVars allow for a seamless integration of ML into existing systems and algorithms.\n", "Our PVars implementation currently relies on standard Reinforcement Learning (RL) methods.", "To learn faster, PVars use the heuristic function, which they are replacing, as an initial function.", "We show that PVars quickly pick up the behavior of the initial function and then improve performance beyond that without ever performing substantially worse -- allowing for a safe deployment in critical applications." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.9444444179534912, 0.13333332538604736, 0.19999998807907104, 0.12903225421905518, 0.05882352590560913, 0.13333332538604736, 0.14999999105930328, 0.04444443807005882, 0.06451612710952759, 0, 0.060606054961681366, 0.1249999925494194 ]
B1epooR5FX
[ "We present Predicted Variables, an approach to making machine learning a first class citizen in programming languages." ]
[ "Much recent research has been devoted to video prediction and generation, but mostly for short-scale time horizons.", "The hierarchical video prediction method by Villegas et al. (2017) is an example of a state of the art method for long term video prediction. ", "However, their method has limited applicability in practical settings as it requires a ground truth pose (e.g., poses of joints of a human) at training time. ", "This paper presents a long term hierarchical video prediction model that does not have such a restriction.", "We show that the network learns its own higher-level structure (e.g., pose equivalent hidden variables) that works better in cases where the ground truth pose does not fully capture all of the information needed to predict the next frame. ", "This method gives sharper results than other video prediction methods which do not require a ground truth pose, and its efficiency is shown on the Humans 3.6M and Robot Pushing datasets." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.19354838132858276, 0.2222222238779068, 0.09756097197532654, 0.3333333134651184, 0.15686273574829102, 0.13333332538604736 ]
rkmtTJZCb
[ "We show ways to train a hierarchical video prediction model without needing pose labels." ]
[ "Combining information from different sensory modalities to execute goal directed actions is a key aspect of human intelligence.", "Specifically, human agents are very easily able to translate the task communicated in one sensory domain (say vision) into a representation that enables them to complete this task when they can only sense their environment using a separate sensory modality (say touch).", "In order to build agents with similar capabilities, in this work we consider the problem of a retrieving a target object from a drawer.", "The agent is provided with an image of a previously unseen object and it explores objects in the drawer using only tactile sensing to retrieve the object that was shown in the image without receiving any visual feedback.", "Success at this task requires close integration of visual and tactile sensing.", "We present a method for performing this task in a simulated environment using an anthropomorphic hand.", "We hope that future research in the direction of combining sensory signals for acting will find the object retrieval from a drawer to be a useful benchmark problem" ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0.12765957415103912, 0.21212120354175568, 0.31372547149658203, 0.3870967626571655, 0.19512194395065308, 0.13636362552642822, 0.2181818187236786 ]
B1lXGnRctX
[ "In this work, we study the problem of learning representations to identify novel objects by exploring objects using tactile sensing. Key point here is that the query is provided in image domain." ]
[ "Locality sensitive hashing schemes such as \\simhash provide compact representations of multisets from which similarity can be estimated.", "However, in certain applications, we need to estimate the similarity of dynamically changing sets. ", "In this case, we need the representation to be a homomorphism so that the hash of unions and differences of sets can be computed directly from the hashes of operands. ", "We propose two representations that have this property for cosine similarity (an extension of \\simhash and angle-preserving random projections), and make substantial progress on a third representation for Jaccard similarity (an extension of \\minhash).", "We employ these hashes to compress the sufficient statistics of a conditional random field (CRF) coreference model and study how this compression affects our ability to compute similarities as entities are split and merged during inference.", "\\cut{We study these hashes in a conditional random field (CRF) hierarchical coreference model in order to compute the similarity of entities as they are merged and split during inference.}", "We also provide novel statistical analysis of \\simhash to help justify it as an estimator inside a CRF, showing that the bias and variance reduce quickly with the number of bits.", "On a problem of author coreference, we find that our \\simhash scheme allows scaling the hierarchical coreference algorithm by an order of magnitude without degrading its statistical performance or the model's coreference accuracy, as long as we employ at least 128 or 256 bits. ", "Angle-preserving random projections further improve the coreference quality, potentially allowing even fewer dimensions to be used." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.0833333283662796, 0.13333332538604736, 0.2142857164144516, 0.20689654350280762, 0.53125, 0.4067796468734741, 0.23728813230991364, 0.28985506296157837, 0.21739129722118378 ]
H1gwRx5T6Q
[ "We employ linear homomorphic compression schemes to represent the sufficient statistics of a conditional random field model of coreference and this allows us to scale inference and improve speed by an order of magnitude." ]
[ "Motivated by applications to unsupervised learning, we consider the problem of measuring mutual information.", "Recent analysis has shown that naive kNN estimators of mutual information have serious statistical limitations motivating more refined methods.", "In this paper we prove that serious statistical limitations are inherent to any measurement method.", "More specifically, we show that any distribution-free high-confidence lower bound on mutual information cannot be larger than $O(\\ln N)$ where $N$ is the size of the data sample.", "We also analyze the Donsker-Varadhan lower bound on KL divergence in particular and show that, when simple statistical considerations are taken into account, this bound can never produce a high-confidence value larger than $\\ln N$.", "While large high-confidence lower bounds are impossible, in practice one can use estimators without formal guarantees.", "We suggest expressing mutual information as a difference of entropies and using cross entropy as an entropy estimator.", " We observe that, although cross entropy is only an upper bound on entropy, cross-entropy estimates converge to the true cross entropy at the rate of $1/\\sqrt{N}$." ]
[ 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.307692289352417, 0.25806450843811035, 0.07407406717538834, 0.20512820780277252, 0.17391304671764374, 0, 0.4285714328289032, 0.17142856121063232 ]
BkedwoC5t7
[ "We give a theoretical analysis of the measurement and optimization of mutual information." ]
[ "In this paper, we propose a neural network framework called neuron hierarchical network (NHN), that evolves beyond the hierarchy in layers, and concentrates on the hierarchy of neurons.", "We observe mass redundancy in the weights of both handcrafted and randomly searched architectures.", "Inspired by the development of human brains, we prune low-sensitivity neurons in the model and add new neurons to the graph, and the relation between individual neurons are emphasized and the existence of layers weakened.", "We propose a process to discover the best base model by random architecture search, and discover the best locations and connections of the added neurons by evolutionary search.", "Experiment results show that the NHN achieves higher test accuracy on Cifar-10 than state-of-the-art handcrafted and randomly searched architectures, while requiring much fewer parameters and less searching time." ]
[ 1, 0, 0, 0, 0 ]
[ 0.30188679695129395, 0.1428571343421936, 0.18518517911434174, 0.23999999463558197, 0.2545454502105713 ]
rylxrsR9Fm
[ "By breaking the layer hierarchy, we propose a 3-step approach to the construction of neuron-hierarchy networks that outperform NAS, SMASH and hierarchical representation with fewer parameters and shorter searching time." ]
[ "Simulation is a useful tool in situations where training data for machine learning models is costly to annotate or even hard to acquire.", "In this work, we propose a reinforcement learning-based method for automatically adjusting the parameters of any (non-differentiable) simulator, thereby controlling the distribution of synthesized data in order to maximize the accuracy of a model trained on that data.", "In contrast to prior art that hand-crafts these simulation parameters or adjusts only parts of the available parameters, our approach fully controls the simulator with the actual underlying goal of maximizing accuracy, rather than mimicking the real data distribution or randomly generating a large volume of data.", "We find that our approach", "(i) quickly converges to the optimal simulation parameters in controlled experiments and", "(ii) can indeed discover good sets of parameters for an image rendering simulator in actual computer vision applications." ]
[ 0, 1, 0, 0, 0, 0 ]
[ 0.2666666507720947, 0.3571428656578064, 0.25, 0.13793103396892548, 0.1666666567325592, 0.1904761791229248 ]
HJgkx2Aqt7
[ "We propose an algorithm that automatically adjusts parameters of a simulation engine to generate training data for a neural network such that validation accuracy is maximized." ]
[ "Modelling statistical relationships beyond the conditional mean is crucial in many settings.", "Conditional density estimation (CDE) aims to learn the full conditional probability density from data.", "Though highly expressive, neural network based CDE models can suffer from severe over-fitting when trained with the maximum likelihood objective.", "Due to the inherent structure of such models, classical regularization approaches in the parameter space are rendered ineffective.", "To address this issue, we develop a model-agnostic noise regularization method for CDE that adds random perturbations to the data during training.", "We demonstrate that the proposed approach corresponds to a smoothness regularization and prove its asymptotic consistency.", "In our experiments, noise regularization significantly and consistently outperforms other regularization methods across seven data sets and three CDE models.", "The effectiveness of noise regularization makes neural network based CDE the preferable method over previous non- and semi-parametric approaches, even when training data is scarce." ]
[ 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.09090908616781235, 0.260869562625885, 0.06666666269302368, 0.07407406717538834, 0.1875, 0.07692307233810425, 0.0714285671710968, 0.11428570747375488 ]
rygtPhVtDS
[ "A model-agnostic regularization scheme for neural network-based conditional density estimation." ]
[ "Unsupervised representation learning holds the promise of exploiting large amount of available unlabeled data to learn general representations.", "A promising technique for unsupervised learning is the framework of Variational Auto-encoders (VAEs).", "However, unsupervised representations learned by VAEs are significantly outperformed by those learned by supervising for recognition.", "Our hypothesis is that to learn useful representations for recognition the model needs to be encouraged to learn about repeating and consistent patterns in data.", "Drawing inspiration from the mid-level representation discovery work, we propose PatchVAE, that reasons about images at patch level.", "Our key contribution is a bottleneck formulation in a VAE framework that encourages mid-level style representations.", "Our experiments demonstrate that representations learned by our method perform much better on the recognition tasks compared to those learned by vanilla VAEs." ]
[ 0, 0, 0, 0, 0, 1, 0 ]
[ 0.05882352590560913, 0.2666666507720947, 0.2666666507720947, 0.25641024112701416, 0.05714285373687744, 0.5, 0.21052631735801697 ]
r1x1kJHKDH
[ "A patch-based bottleneck formulation in a VAE framework that learns unsupervised representations better suited for visual recognition." ]
[ "Vanishing and exploding gradients are two of the main obstacles in training deep neural networks, especially in capturing long range dependencies in recurrent neural networks (RNNs).", "In this paper, we present an efficient parametrization of the transition matrix of an RNN that allows us to stabilize the gradients that arise in its training.", "Specifically, we parameterize the transition matrix by its singular value decomposition (SVD), which allows us to explicitly track and control its singular values.", "We attain efficiency by using tools that are common in numerical linear algebra, namely Householder reflectors for representing the orthogonal matrices that arise in the SVD.", "By explicitly controlling the singular values, our proposed svdRNN method allows us to easily solve the exploding gradient problem and we observe that it empirically solves the vanishing gradient issue to a large extent.", "We note that the SVD parameterization can be used for any rectangular weight matrix, hence it can be easily extended to any deep neural network, such as a multi-layer perceptron.", "Theoretically, we demonstrate that our parameterization does not lose any expressive power, and show how it potentially makes the optimization process easier.", "Our extensive experimental results also demonstrate that the proposed framework converges faster, and has good generalization, especially when the depth is large. \n" ]
[ 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.12244897335767746, 0.40816324949264526, 0.21276594698429108, 0.08163265138864517, 0.2142857164144516, 0.07547169178724289, 0.25, 0.25 ]
SyL9u-WA-
[ "To solve the gradient vanishing/exploding problems, we proprose an efficient parametrization of the transition matrix of RNN that loses no expressive power, converges faster and has good generalization." ]
[ "Recent image style transferring methods achieved arbitrary stylization with input content and style images.", "To transfer the style of an arbitrary image to a content image, these methods used a feed-forward network with a lowest-scaled feature transformer or a cascade of the networks with a feature transformer of a corresponding scale.", "However, their approaches did not consider either multi-scaled style in their single-scale feature transformer or dependency between the transformed feature statistics across the cascade networks.", "This shortcoming resulted in generating partially and inexactly transferred style in the generated images.\n", "To overcome this limitation of partial style transfer, we propose a total style transferring method which transfers multi-scaled feature statistics through a single feed-forward process.", "First, our method transforms multi-scaled feature maps of a content image into those of a target style image by considering both inter-channel correlations in each single scaled feature map and inter-scale correlations between multi-scaled feature maps.", "Second, each transformed feature map is inserted into the decoder layer of the corresponding scale using skip-connection.", "Finally, the skip-connected multi-scaled feature maps are decoded into a stylized image through our trained decoder network." ]
[ 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.1428571343421936, 0.2926829159259796, 0.1621621549129486, 0.20689654350280762, 0.21052631735801697, 0.1860465109348297, 0.19354838132858276, 0.1249999925494194 ]
BJ4AFsRcFQ
[ "A paper suggesting a method to transform the style of images using deep neural networks." ]
[ "Beyond understanding what is being discussed, human communication requires an awareness of what someone is feeling.", "One challenge for dialogue agents is recognizing feelings in the conversation partner and replying accordingly, a key communicative skill that is trivial for humans.", "Research in this area is made difficult by the paucity of suitable publicly available datasets both for emotion and dialogues.", "This work proposes a new task for empathetic dialogue generation and EmpatheticDialogues, a dataset of 25k conversations grounded in emotional situations to facilitate training and evaluating dialogue systems.", "Our experiments indicate that dialogue models that use our dataset are perceived to be more empathetic by human evaluators, while improving on other metrics as well (e.g. perceived relevance of responses, BLEU scores), compared to models merely trained on large-scale Internet conversation data.", "We also present empirical comparisons of several ways to improve the performance of a given model by leveraging existing models or datasets without requiring lengthy re-training of the full model." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.04999999701976776, 0.1666666567325592, 0.21739129722118378, 0.3921568691730499, 0.1538461446762085, 0.26923075318336487 ]
HyesW2C9YQ
[ "We improve existing dialogue systems for responding to people sharing personal stories, incorporating emotion prediction representations and also release a new benchmark and dataset of empathetic dialogues." ]
[ "Granger causality is a widely-used criterion for analyzing interactions in large-scale networks.", "As most physical interactions are inherently nonlinear, we consider the problem of inferring the existence of pairwise Granger causality between nonlinearly interacting stochastic processes from their time series measurements.", "Our proposed approach relies on modeling the embedded nonlinearities in the measurements using a component-wise time series prediction model based on Statistical Recurrent Units (SRUs).", "We make a case that the network topology of Granger causal relations is directly inferrable from a structured sparse estimate of the internal parameters of the SRU networks trained to predict the processes’ time series measurements.", "We propose a variant of SRU, called economy-SRU, which, by design has considerably fewer trainable parameters, and therefore less prone to overfitting.", "The economy-SRU computes a low-dimensional sketch of its high-dimensional hidden state in the form of random projections to generate the feedback for its recurrent processing.", "Additionally, the internal weight parameters of the economy-SRU are strategically regularized in a group-wise manner to facilitate the proposed network in extracting meaningful predictive features that are highly time-localized to mimic real-world causal events.", "Extensive experiments are carried out to demonstrate that the proposed economy-SRU based time series prediction model outperforms the MLP, LSTM and attention-gated CNN-based time series models considered previously for inferring Granger causality." ]
[ 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.20689654350280762, 0.3636363446712494, 0.09999999403953552, 0.1702127605676651, 0, 0.10256409645080566, 0.043478257954120636, 0.21739129722118378 ]
SyxV9ANFDH
[ "A new recurrent neural network architecture for detecting pairwise Granger causality between nonlinearly interacting time series. " ]
[ "Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data.", "However, GCN computes nodes' representation recursively from their neighbors, making the receptive field size grow exponentially with the number of layers. ", "Previous attempts on reducing the receptive field size by subsampling neighbors do not have any convergence guarantee, and their receptive field size per node is still in the order of hundreds.", "In this paper, we develop a preprocessing strategy and two control variate based algorithms to further reduce the receptive field size.", "Our algorithms are guaranteed to converge to GCN's local optimum regardless of the neighbor sampling size.", "Empirical results show that our algorithms have a similar convergence speed per epoch with the exact algorithm even using only two neighbors per node.", "The time consumption of our algorithm on the Reddit dataset is only one fifth of previous neighbor sampling algorithms." ]
[ 0, 0, 0, 0, 0, 1, 0 ]
[ 0.1818181723356247, 0.1395348757505417, 0.2448979616165161, 0.3255814015865326, 0.05405404791235924, 0.35555556416511536, 0.14999999105930328 ]
rylejExC-
[ "A control variate based stochastic training algorithm for graph convolutional networks that the receptive field can be only two neighbors per node." ]
[ "Low bit-width integer weights and activations are very important for efficient inference, especially with respect to lower power consumption.", "We propose to apply Monte Carlo methods and importance sampling to sparsify and quantize pre-trained neural networks without any retraining.", "We obtain sparse, low bit-width integer representations that approximate the full precision weights and activations.", "The precision, sparsity, and complexity are easily configurable by the amount of sampling performed.", "Our approach, called Monte Carlo Quantization (MCQ), is linear in both time and space, while the resulting quantized sparse networks show minimal accuracy loss compared to the original full-precision networks.", "Our method either outperforms or achieves results competitive with methods that do require additional training on a variety of challenging tasks." ]
[ 0, 1, 0, 0, 0, 0 ]
[ 0.06666666269302368, 0.41379308700561523, 0, 0, 0.10256409645080566, 0.1875 ]
B1e5NySKwH
[ "Monte Carlo methods for quantizing pre-trained models without any additional training." ]
[ "We propose the Information Maximization Autoencoder (IMAE), an information theoretic approach to simultaneously learn continuous and discrete representations in an unsupervised setting.", "Unlike the Variational Autoencoder framework, IMAE starts from a stochastic encoder that seeks to map each input data to a hybrid discrete and continuous representation with the objective of maximizing the mutual information between the data and their representations.", "A decoder is included to approximate the posterior distribution of the data given their representations, where a high fidelity approximation can be achieved by leveraging the informative representations. \n", "We show that the proposed objective is theoretically valid and provides a principled framework for understanding the tradeoffs regarding informativeness of each representation factor, disentanglement of representations, and decoding quality." ]
[ 1, 0, 0, 0 ]
[ 0.3529411852359772, 0.2666666507720947, 0.14999999105930328, 0.25 ]
SyVpB2RqFX
[ "Information theoretical approach for unsupervised learning of unsupervised learning of a hybrid of discrete and continuous representations, " ]
[ "Learning rules for neural networks necessarily include some form of regularization.", "Most regularization techniques are conceptualized and implemented in the space of parameters.", "However, it is also possible to regularize in the space of functions.", "Here, we propose to measure networks in an $L^2$ Hilbert space, and test a learning rule that regularizes the distance a network can travel through $L^2$-space each update. ", "This approach is inspired by the slow movement of gradient descent through parameter space as well as by the natural gradient, which can be derived from a regularization term upon functional change.", "The resulting learning rule, which we call Hilbert-constrained gradient descent (HCGD), is thus closely related to the natural gradient but regularizes a different and more calculable metric over the space of functions.", "Experiments show that the HCGD is efficient and leads to considerably better generalization." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0, 0.1111111044883728, 0.1666666567325592, 0.307692289352417, 0.11320754140615463, 0.14814814925193787, 0.10810810327529907 ]
H1l8sz-AW
[ "It's important to consider optimization in function space, not just parameter space. We introduce a learning rule that reduces distance traveled in function space, just like SGD limits distance traveled in parameter space." ]
[ "Stochastic gradient descent (SGD), which dates back to the 1950s, is one of the most popular and effective approaches for performing stochastic optimization.", "Research on SGD resurged recently in machine learning for optimizing convex loss functions and training nonconvex deep neural networks.", "The theory assumes that one can easily compute an unbiased gradient estimator, which is usually the case due to the sample average nature of empirical risk minimization.", "There exist, however, many scenarios (e.g., graphs) where an unbiased estimator may be as expensive to compute as the full gradient because training examples are interconnected.", "Recently, Chen et al. (2018) proposed using a consistent gradient estimator as an economic alternative.", "Encouraged by empirical success, we show, in a general setting, that consistent estimators result in the same convergence behavior as do unbiased ones.", "Our analysis covers strongly convex, convex, and nonconvex objectives.", "We verify the results with illustrative experiments on synthetic and real-world data.", "This work opens several new research directions, including the development of more efficient SGD updates with consistent estimators and the design of efficient training algorithms for large-scale graphs.\n" ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.307692289352417, 0.2222222238779068, 0.1395348757505417, 0.09090908616781235, 0.0624999962747097, 0.10256409645080566, 0.07999999821186066, 0.06896550953388214, 0.1395348757505417 ]
rygMWT4twS
[ "Convergence theory for biased (but consistent) gradient estimators in stochastic optimization and application to graph convolutional networks" ]
[ "We consider the problem of uncertainty estimation in the context of (non-Bayesian) deep neural classification.", "In this context, all known methods are based on extracting uncertainty signals from a trained network optimized to solve the classification problem at hand.", "We demonstrate that such techniques tend to introduce biased estimates for instances whose predictions are supposed to be highly confident.", "We argue that this deficiency is an artifact of the dynamics of training with SGD-like optimizers, and it has some properties similar to overfitting.", "Based on this observation, we develop an uncertainty estimation algorithm that selectively estimates the uncertainty of highly confident points, using earlier snapshots of the trained model, before their estimates are jittered (and way before they are ready for actual classification).", "We present extensive experiments indicating that the proposed algorithm provides uncertainty estimates that are consistently better than all known methods." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.3333333432674408, 0.24390242993831635, 0.1111111044883728, 0.25, 0.19607841968536377, 0.1666666567325592 ]
SJfb5jCqKm
[ "We use snapshots from the training process to improve any uncertainty estimation method of a DNN classifier." ]
[ "Existing public face image datasets are strongly biased toward Caucasian faces, and other races (e.g., Latino) are significantly underrepresented.", "The models trained from such datasets suffer from inconsistent classification accuracy, which limits the applicability of face analytic systems to non-White race groups.", "To mitigate the race bias problem in these datasets, we constructed a novel face image dataset containing 108,501 images which is balanced on race.", "We define 7 race groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino.", "Images were collected from the YFCC-100M Flickr dataset and labeled with race, gender, and age groups.", "Evaluations were performed on existing face attribute datasets as well as novel image datasets to measure the generalization performance.", "We find that the model trained from our dataset is substantially more accurate on novel datasets and the accuracy is consistent across race and gender groups.", "We also compare several commercial computer vision APIs and report their balanced accuracy across gender, race, and age groups." ]
[ 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.15789473056793213, 0.09999999403953552, 0.2926829159259796, 0.060606054961681366, 0.3030303120613098, 0.11428570747375488, 0.09756097197532654, 0.277777761220932 ]
S1xSSTNKDB
[ "A new face image dataset for balanced race, gender, and age which can be used for bias measurement and mitigation" ]
[ "Dramatic advances in generative models have resulted in near photographic quality for artificially rendered faces, animals and other objects in the natural world.", "In spite of such advances, a higher level understanding of vision and imagery does not arise from exhaustively modeling an object, but instead identifying higher-level attributes that best summarize the aspects of an object. ", "In this work we attempt to model the drawing process of fonts by building sequential generative models of vector graphics. ", "This model has the benefit of providing a scale-invariant representation for imagery whose latent representation may be systematically manipulated and exploited to perform style propagation.", "We demonstrate these results on a large dataset of fonts and highlight how such a model captures the statistical dependencies and richness of this dataset.", "We envision that our model can find use as a tool for designers to facilitate font design." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.13636362552642822, 0.1090909019112587, 0.6976743936538696, 0.25531914830207825, 0.27272728085517883, 0.25 ]
rklf4IUtOE
[ "We attempt to model the drawing process of fonts by building sequential generative models of vector graphics (SVGs), a highly structured representation of font characters." ]
[ "What can we learn about the functional organization of cortical microcircuits from large-scale recordings of neural activity? ", "To obtain an explicit and interpretable model of time-dependent functional connections between neurons and to establish the dynamics of the cortical information flow, we develop 'dynamic neural relational inference' (dNRI).", "We study both synthetic and real-world neural spiking data and demonstrate that the developed method is able to uncover the dynamic relations between neurons more reliably than existing baselines." ]
[ 0, 1, 0 ]
[ 0.15789473056793213, 0.36734694242477417, 0.36734694242477417 ]
S1leV7t8IB
[ "We develop 'dynamic neural relational inference', a variational autoencoder model that can explicitly and interpretably represent the hidden dynamic relations between neurons." ]
[ "DeePa is a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training process of convolutional neural networks.", "DeePa optimizes parallelism at the granularity of each individual layer in the network.", "We present an elimination-based algorithm that finds an optimal parallelism configuration for every layer.", "Our evaluation shows that DeePa achieves up to 6.5× speedup compared to state-of-the-art deep learning frameworks and reduces data transfers by up to 23×." ]
[ 1, 0, 0, 0 ]
[ 0.5416666865348816, 0.5263158082962036, 0.1538461446762085, 0.2083333283662796 ]
SJCPLLpaW
[ "To the best of our knowledge, DeePa is the first deep learning framework that controls and optimizes the parallelism of CNNs in all parallelizable dimensions at the granularity of each layer." ]
[ "One can substitute each neuron in any neural network with a kernel machine and obtain a counterpart powered by kernel machines.", "The new network inherits the expressive power and architecture of the original but works in a more intuitive way since each node enjoys the simple interpretation as a hyperplane (in a reproducing kernel Hilbert space).", "Further, using the kernel multilayer perceptron as an example, we prove that in classification, an optimal representation that minimizes the risk of the network can be characterized for each hidden layer.", "This result removes the need of backpropagation in learning the model and can be generalized to any feedforward kernel network.", "Moreover, unlike backpropagation, which turns models into black boxes, the optimal hidden representation enjoys an intuitive geometric interpretation, making the dynamics of learning in a deep kernel network simple to understand.", "Empirical results are provided to validate our theory." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.1860465109348297, 0.145454540848732, 0.19607841968536377, 0.2790697515010834, 0.2222222238779068, 0 ]
H1GLm2R9Km
[ "We combine kernel method with connectionist models and show that the resulting deep architectures can be trained layer-wise and have more transparent learning dynamics. " ]
[ "Many notions of fairness may be expressed as linear constraints, and the resulting constrained objective is often optimized by transforming the problem into its Lagrangian dual with additive linear penalties.", "In non-convex settings, the resulting problem may be difficult to solve as the Lagrangian is not guaranteed to have a deterministic saddle-point equilibrium. ", "In this paper, we propose to modify the linear penalties to second-order ones, and we argue that this results in a more practical training procedure in non-convex, large-data settings.", "For one, the use of second-order penalties allows training the penalized objective with a fixed value of the penalty coefficient, thus avoiding the instability and potential lack of convergence associated with two-player min-max games.", "Secondly, we derive a method for efficiently computing the gradients associated with the second-order penalties in stochastic mini-batch settings.", "Our resulting algorithm performs well empirically, learning an appropriately fair classifier on a number of standard benchmarks." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.1304347813129425, 0.14999999105930328, 0.3720930218696594, 0.21739129722118378, 0.2222222238779068, 0.05714285373687744 ]
Bke0rjR5F7
[ "We propose a method to stochastically optimize second-order penalties and show how this may apply to training fairness-aware classifiers." ]
[ "Training methods for deep networks are primarily variants on stochastic gradient descent. ", "Techniques that use (approximate) second-order information are rarely used because of the computational cost and noise associated with those approaches in deep learning contexts. ", "However, in this paper, we show how feedforward deep networks exhibit a low-rank derivative structure. ", "This low-rank structure makes it possible to use second-order information without needing approximations and without incurring a significantly greater computational cost than gradient descent. ", "To demonstrate this capability, we implement Cubic Regularization (CR) on a feedforward deep network with stochastic gradient descent and two of its variants. ", "There, we use CR to calculate learning rates on a per-iteration basis while training on the MNIST and CIFAR-10 datasets. ", "CR proved particularly successful in escaping plateau regions of the objective function. ", "We also found that this approach requires less problem-specific information (e.g. an optimal initial learning rate) than other first-order methods in order to perform well." ]
[ 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.04878048226237297, 0.30188679695129395, 0.3636363446712494, 0.307692289352417, 0.19230768084526062, 0.2916666567325592, 0.04878048226237297, 0.25925925374031067 ]
ByJ7obb0b
[ "We show that deep learning network derivatives have a low-rank structure, and this structure allows us to use second-order derivative information to calculate learning rates adaptively and in a computationally feasible manner." ]
[ "The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball bounded input perturbations.", "Nonetheless, min-max optimization beyond the purpose of AT has not been rigorously explored in the research of adversarial attack and defense.", "In particular, given a set of risk sources (domains), minimizing the maximal loss induced from the domain set can be reformulated as a general min-max problem that is different from AT.", "Examples of this general formulation include attacking model ensembles, devising universal perturbation under multiple inputs or data transformations, and generalized AT over different types of attack models.", "We show that these problems can be solved under a unified and theoretically principled min-max optimization framework. ", "We also show that the self-adjusted domain weights learned from our method provides a means to explain the difficulty level of attack and defense over multiple domains.", "Extensive experiments show that our approach leads to substantial performance improvement over the conventional averaging strategy." ]
[ 0, 1, 0, 0, 0, 0, 0 ]
[ 0.10256409645080566, 0.41379308700561523, 0.054054051637649536, 0.1111111044883728, 0.3571428656578064, 0.1666666567325592, 0 ]
S1eik6EtPB
[ "A unified min-max optimization framework for adversarial attack and defense" ]
[ "Most deep learning models rely on expressive high-dimensional representations to achieve good performance on tasks such as classification.", "However, the high dimensionality of these representations makes them difficult to interpret and prone to over-fitting.", "We propose a simple, intuitive and scalable dimension reduction framework that takes into account the soft probabilistic interpretation of standard deep models for classification.", "When applying our framework to visualization, our representations more accurately reflect inter-class distances than standard visualization techniques such as t-SNE.", "We show experimentally that our framework improves generalization performance to unseen categories in zero-shot learning.", "We also provide a finite sample error upper bound guarantee for the method." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.06666666269302368, 0.0714285671710968, 0.1621621549129486, 0.0624999962747097, 0, 0.07692307233810425 ]
SygD-hCcF7
[ "dimensionality reduction for cases where examples can be represented as soft probability distributions" ]
[ "Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments.", "These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state and action spaces.", "However, they have so far assumed that self-generated goals are sampled in a specifically engineered feature space, limiting their autonomy.", "In this work, we propose an approach using deep representation learning algorithms to learn an adequate goal space.", "This is a developmental 2-stage approach: first, in a perceptual learning stage, deep learning algorithms use passive raw sensor observations of world changes to learn a corresponding latent space; then goal exploration happens in a second stage by sampling goals in this latent space.", "We present experiments with a simulated robot arm interacting with an object, and we show that exploration algorithms using such learned representations can closely match, and even sometimes improve, the performance obtained using engineered representations." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.30434781312942505, 0.03999999538064003, 0.04347825422883034, 0.1860465109348297, 0.158730149269104, 0.17543859779834747 ]
S1DWPP1A-
[ "We propose a novel Intrinsically Motivated Goal Exploration architecture with unsupervised learning of goal space representations, and evaluate how various implementations enable the discovery of a diversity of policies." ]
[ "One of the main challenges of deep learning methods is the choice of an appropriate training strategy.", "In particular, additional steps, such as unsupervised pre-training, have been shown to greatly improve the performances of deep structures.", "In this article, we propose an extra training step, called post-training, which only optimizes the last layer of the network.", "We show that this procedure can be analyzed in the context of kernel theory, with the first layers computing an embedding of the data and the last layer a statistical model to solve the task based on this embedding.", "This step makes sure that the embedding, or representation, of the data is used in the best possible way for the considered task.", "This idea is then tested on multiple architectures with various data sets, showing that it consistently provides a boost in performance." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.25, 0.1621621549129486, 0.6486486196517944, 0.23999999463558197, 0.15789473056793213, 0 ]
H1O0KGC6b
[ "We propose an additional training step, called post-training, which computes optimal weights for the last layer of the network." ]
[ "Natural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint.", "Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance.", "For this purpose, we propose to construct the embeddings with few basis vectors.", "For each word, the composition of basis vectors is determined by a hash code.", "To maximize the compression rate, we adopt the multi-codebook quantization approach instead of binary coding scheme.", "Each code is composed of multiple discrete numbers, such as (3, 2, 1, 8), where the value of each component is limited to a fixed range.", "We propose to directly learn the discrete codes in an end-to-end neural network by applying the Gumbel-softmax trick.", "Experiments show the compression rate achieves 98% in a sentiment analysis task and 94% ~ 99% in machine translation tasks without performance loss.", "In both tasks, the proposed method can improve the model performance by slightly lowering the compression rate.", "Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.06451612710952759, 0.37037035822868347, 0.1818181723356247, 0.08695651590824127, 0.0833333283662796, 0.060606054961681366, 0.07692307233810425, 0.25806450843811035, 0.1666666567325592, 0.06896550953388214 ]
BJRZzFlRb
[ "Compressing the word embeddings over 94% without hurting the performance." ]
[ "It is important to detect anomalous inputs when deploying machine learning systems.", "The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples.", "At the same time, diverse image and text data are available in enormous quantities.", "We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE).", "This enables anomaly detectors to generalize and detect unseen anomalies.", "In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance.", "We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue.", "We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance." ]
[ 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.04444444179534912, 0.07692307233810425, 0.12765957415103912, 0.13793103396892548, 0.23255813121795654, 0.18518517911434174, 0.13793103396892548, 0.03999999538064003 ]
HyxCxhRcY7
[ "OE teaches anomaly detectors to learn heuristics for detecting unseen anomalies; experiments are in classification, density estimation, and calibration in NLP and vision settings; we do not tune on test distribution samples, unlike previous work" ]
[ "While generative neural networks can learn to transform a specific input dataset into a specific target dataset, they require having just such a paired set of input/output datasets.", "For instance, to fool the discriminator, a generative adversarial network (GAN) exclusively trained to transform images of black-haired *men* to blond-haired *men* would need to change gender-related characteristics as well as hair color when given images of black-haired *women* as input.", "This is problematic, as often it is possible to obtain *a* pair of (source, target) distributions but then have a second source distribution where the target distribution is unknown.", "The computational challenge is that generative models are good at generation within the manifold of the data that they are trained on.", "However, generating new samples outside of the manifold or extrapolating \"out-of-sample\" is a much harder problem that has been less well studied.", "To address this, we introduce a technique called *neuron editing* that learns how neurons encode an edit for a particular transformation in a latent space.", "We use an autoencoder to decompose the variation within the dataset into activations of different neurons and generate transformed data by defining an editing transformation on those neurons.", "By performing the transformation in a latent trained space, we encode fairly complex and non-linear transformations to the data with much simpler distribution shifts to the neuron's activations.", "Our technique is general and works on a wide variety of data domains and applications.", "We first demonstrate it on image transformations and then move to our two main biological applications: removal of batch artifacts representing unwanted noise and modeling the effect of drug treatments to predict synergy between drugs." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2083333283662796, 0.072727270424366, 0.2857142686843872, 0.0952380895614624, 0.13333332538604736, 0.1304347813129425, 0.1666666567325592, 0.1249999925494194, 0.21621620655059814, 0.145454540848732 ]
H1lDSCEYPH
[ "A method for learning a transformation between one pair of source/target datasets and applying it a separate source dataset for which there is no target dataset" ]
[ "In this paper, we propose to combine imitation and reinforcement learning via the idea of reward shaping using an oracle.", "We study the effectiveness of the near- optimal cost-to-go oracle on the planning horizon and demonstrate that the cost- to-go oracle shortens the learner’s planning horizon as function of its accuracy: a globally optimal oracle can shorten the planning horizon to one, leading to a one- step greedy Markov Decision Process which is much easier to optimize, while an oracle that is far away from the optimality requires planning over a longer horizon to achieve near-optimal performance.", "Hence our new insight bridges the gap and interpolates between imitation learning and reinforcement learning.", "Motivated by the above mentioned insights, we propose Truncated HORizon Policy Search (THOR), a method that focuses on searching for policies that maximize the total reshaped reward over a finite planning horizon when the oracle is sub-optimal.", "We experimentally demonstrate that a gradient-based implementation of THOR can achieve superior performance compared to RL baselines and IL baselines even when the oracle is sub-optimal." ]
[ 1, 0, 0, 0, 0 ]
[ 0.20000000298023224, 0.0952380895614624, 0.17391303181648254, 0.04651162400841713, 0.17142856121063232 ]
ryUlhzWCZ
[ "Combining Imitation Learning and Reinforcement Learning to learn to outperform the expert" ]
[ "Recently, Generative Adversarial Networks (GANs) have emerged as a popular alternative for modeling complex high dimensional distributions.", "Most of the existing works implicitly assume that the clean samples from the target distribution are easily available.", "However, in many applications, this assumption is violated.", "In this paper, we consider the observation setting in which the samples from a target distribution are given by the superposition of two structured components, and leverage GANs for learning of the structure of the components.", "We propose a novel framework, demixing-GAN, which learns the distribution of two components at the same time.", "Through extensive numerical experiments, we demonstrate that the proposed framework can generate clean samples from unknown distributions, which further can be used in demixing of the unseen test images." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.06896550953388214, 0.0714285671710968, 0, 0.2857142686843872, 0.0714285671710968, 0.05128204822540283 ]
BygbVL8KO4
[ "An unsupervised learning approach for separating two structured signals from their superposition" ]