source: sequence
source_labels: sequence
rouge_scores: sequence
paper_id: string
target: sequence
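The five fields above repeat for every record below. As a minimal parsing sketch (assuming the records are stored one JSON object per line with exactly these field names; the file path and loader function are hypothetical, not part of this dataset's tooling), the records can be read and the labeled sentences pulled out like so:

```python
import json

def read_records(path):
    # Hypothetical JSON-lines loader for records with the schema above:
    # source (list of sentences), source_labels (0/1 per sentence),
    # rouge_scores (float per sentence), paper_id (string), target (list).
    with open(path) as f:
        for line in f:
            yield json.loads(line)

# Print each paper's id and the sentence(s) flagged by source_labels.
for rec in read_records("records.jsonl"):  # path is an assumption
    for sent, label in zip(rec["source"], rec["source_labels"]):
        if label == 1:
            print(rec["paper_id"], sent.strip())
```

Note that source_labels and rouge_scores are aligned with source sentence by sentence, although a few records in this dump carry lists of slightly different lengths, so a length check before indexing is worthwhile.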
[ "This paper explores the scenarios under which\n", "an attacker can claim that ‘Noise and access to\n", "the softmax layer of the model is all you need’\n", "to steal the weights of a convolutional neural network\n", "whose architecture is already known.", "We\n", "were able to achieve 96% test accuracy using\n", "the stolen MNIST model and 82% accuracy using\n", "stolen KMNIST model learned using only\n", "i.i.d. Bernoulli noise inputs.", "We posit that this\n", "theft-susceptibility of the weights is indicative\n", "of the complexity of the dataset and propose a\n", "new metric that captures the same.", "The goal of\n", "this dissemination is to not just showcase how far\n", "knowing the architecture can take you in terms of\n", "model stealing, but to also draw attention to this\n", "rather idiosyncratic weight learnability aspects of\n", "CNNs spurred by i.i.d. noise input.", "We also disseminate\n", "some initial results obtained with using\n", "the Ising probability distribution in lieu of the i.i.d.\n", "Bernoulli distribution" ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11764705181121826, 0, 0.21052631735801697, 0.31578946113586426, 0, 0, 0.1111111044883728, 0.1249999925494194, 0.13333332538604736, 0, 0.25, 0.11764705181121826, 0.1249999925494194, 0, 0, 0.10526315122842789, 0, 0, 0.11764705181121826, 0, 0, 0.09999999403953552 ]
H1le3y356N
[ "Input only noise , glean the softmax outputs, steal the weights" ]
[ "We propose a new form of an autoencoding model which incorporates the best properties of variational autoencoders (VAE) and generative adversarial networks (GAN).", "It is known that GAN can produce very realistic samples while VAE does not suffer from mode collapsing problem.", "Our model optimizes λ-Jeffreys divergence between the model distribution and the true data distribution.", "We show that it takes the best properties of VAE and GAN objectives.", "It consists of two parts.", "One of these parts can be optimized by using the standard adversarial training, and the second one is the very objective of the VAE model.", "However, the straightforward way of substituting the VAE loss does not work well if we use an explicit likelihood such as Gaussian or Laplace which have limited flexibility in high dimensions and are unnatural for modelling images in the space of pixels.", "To tackle this problem we propose a novel approach to train the VAE model with an implicit likelihood by an adversarially trained discriminator.", "In an extensive set of experiments on CIFAR-10 and TinyImagent datasets, we show that our model achieves the state-of-the-art generation and reconstruction quality and demonstrate how we can balance between mode-seeking and mode-covering behaviour of our model by adjusting the weight λ in our objective." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 1, 0, 0.1818181723356247, 0.34285715222358704, 0.07407407462596893, 0.23255813121795654, 0.1666666567325592, 0.22727271914482117, 0.17241378128528595 ]
Syxc1yrKvr
[ "We propose a new form of an autoencoding model which incorporates the best properties of variational autoencoders (VAE) and generative adversarial networks (GAN)" ]
[ "Learning-to-learn or meta-learning leverages data-driven inductive bias to increase the efficiency of learning on a novel task.", "This approach encounters difficulty when transfer is not mutually beneficial, for instance, when tasks are sufficiently dissimilar or change over time.", "Here, we use the connection between gradient-based meta-learning and hierarchical Bayes to propose a mixture of hierarchical Bayesian models over the parameters of an arbitrary function approximator such as a neural network.", "Generalizing the model-agnostic meta-learning (MAML) algorithm, we present a stochastic expectation maximization procedure to jointly estimate parameter initializations for gradient descent as well as a latent assignment of tasks to initializations.", "This approach better captures the diversity of training tasks as opposed to consolidating inductive biases into a single set of hyperparameters.", "Our experiments demonstrate better generalization on the standard miniImageNet benchmark for 1-shot classification.", "We further derive a novel and scalable non-parametric variant of our method that captures the evolution of a task distribution over time as demonstrated on a set of few-shot regression tasks." ]
[ 0, 0, 1, 0, 0, 0, 0 ]
[ 0.2926829159259796, 0.09090908616781235, 0.5, 0.23529411852359772, 0.1818181723356247, 0.10810810327529907, 0.31372547149658203 ]
HyxpNnRcFX
[ "We use the connection between gradient-based meta-learning and hierarchical Bayes to learn a mixture of meta-learners that is appropriate for a heterogeneous and evolving task distribution." ]
[ "We introduce a new routing algorithm for capsule networks, in which a child capsule is routed to a parent based only on agreement between the parent's state and the child's vote.", "Unlike previously proposed routing algorithms, the parent's ability to reconstruct the child is not explicitly taken into account to update the routing probabilities.", "This simplifies the routing procedure and improves performance on benchmark datasets such as CIFAR-10 and CIFAR-100.", "The new mechanism 1) designs routing via inverted dot-product attention; 2) imposes Layer Normalization as normalization; and 3) replaces sequential iterative routing with concurrent iterative routing.", "Besides outperforming existing capsule networks, our model performs at-par with a powerful CNN (ResNet-18), using less than 25% of the parameters. ", "On a different task of recognizing digits from overlayed digit images, the proposed capsule model performs favorably against CNNs given the same number of layers and neurons per layer. ", "We believe that our work raises the possibility of applying capsule networks to complex real-world tasks." ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.35555556416511536, 0.05405404791235924, 0.24242423474788666, 0.19512194395065308, 0.25, 0.1304347813129425, 0.05882352590560913 ]
HJe6uANtwH
[ "We present a new routing method for Capsule networks, and it performs at-par with ResNet-18 on CIFAR-10/ CIFAR-100." ]
[ " We introduce Doc2Dial, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing.", "Such data can be used to train automated dialogue agents performing customer care tasks for the enterprises or organizations.", "In particular, the framework takes the documents as input and generates the tasks for obtaining the annotations for simulating dialog flows.", "The dialog flows are used to guide the collection of utterances produced by crowd workers.", "The outcomes include dialogue data grounded in the given documents, as well as various types of annotations that help ensure the quality of the data and the flexibility to (re)composite dialogues." ]
[ 1, 0, 0, 0, 0 ]
[ 0.8888888955116272, 0.307692289352417, 0.1621621549129486, 0, 0.17777776718139648 ]
S1eTMp59LB
[ "We introduce Doc2Dial, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing for train automated dialogue agents" ]
[ "Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps. ", "While long-range dependencies are difficult to model directly in the time domain, we show that they can be more tractably modelled in two-dimensional time-frequency representations such as spectrograms. ", "By leveraging this representational advantage, in conjunction with a highly expressive probabilistic model and a multiscale generation procedure, we design a model capable of generating high-fidelity audio samples which capture structure at timescales which time-domain models have yet to achieve. ", "We demonstrate that our model captures longer-range dependencies than time-domain models such as WaveNet across a diverse set of unconditional generation tasks, including single-speaker speech generation, multi-speaker speech generation, and music generation." ]
[ 0, 0, 0, 1 ]
[ 0, 0.1395348757505417, 0.1538461446762085, 0.3181818127632141 ]
r1gIa0NtDH
[ "We introduce an autoregressive generative model for spectrograms and demonstrate applications to speech and music generation" ]
[ "Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations.", "In this paper we show that modern CNNs (VGG16, ResNet50, and InceptionResNetV2) can drastically change their output when an image is translated in the image plane by a few pixels, and that this failure of generalization also happens with other realistic small image transformations.", "Furthermore, we see these failures to generalize more frequently in more modern networks.", "We show that these failures are related to the fact that the architecture of modern CNNs ignores the classical sampling theorem so that generalization is not guaranteed.", "We also show that biases in the statistics of commonly used image datasets makes it unlikely that CNNs will learn to be invariant to these transformations.", "Taken together our results suggest that the performance of CNNs in object recognition falls far short of the generalization capabilities of humans." ]
[ 0, 0, 0, 0, 1, 0 ]
[ 0.1860465109348297, 0.3030303120613098, 0.10256409645080566, 0.3199999928474426, 0.3529411852359772, 0.17391303181648254 ]
HJxYwiC5tm
[ "Modern deep CNNs are not invariant to translations, scalings and other realistic image transformations, and this lack of invariance is related to the subsampling operation and the biases contained in image datasets." ]
[ "We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN).", "Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network.", "Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running.", "In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion.", "The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training.", "Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.29411762952804565, 0.1304347813129425, 0.06666666269302368, 0.1249999925494194, 0, 0.10256409645080566 ]
r11Q2SlRW
[ "Synthesize complex and extended human motions using an auto-conditioned LSTM network" ]
[ " {\\em Saliency methods} attempt to explain a deep net's decision by assigning a {\\em score} to each feature/pixel in the input, often doing this credit-assignment via the gradient of the output with respect to input. \n", "Recently \\citet{adebayosan} questioned the validity of many of these methods since they do not pass simple {\\em sanity checks}, which test whether the scores shift/vanish when layers of the trained net are randomized, or when the net is retrained using random labels for inputs.", "% for the inputs.", " %Surprisingly, the tested methods did not pass these checks: the explanations were relatively unchanged", ". \n\nWe propose a simple fix to existing saliency methods that helps them pass sanity checks, which we call {\\em competition for pixels}.", "This involves computing saliency maps for all possible labels in the classification task, and using a simple competition among them to identify and remove less relevant pixels from the map.", "Some theoretical justification is provided for it and its performance is empirically demonstrated on several popular methods." ]
[ 0, 0, 0, 0, 1, 0, 0 ]
[ 0.12244897335767746, 0.1428571343421936, 0.08695651590824127, 0.1875, 0.4390243887901306, 0.2978723347187042, 0.05714285373687744 ]
BJeGZxrFvS
[ "We devise a mechanism called competition among pixels that allows (approximately) complete saliency methods to pass the sanity checks." ]
[ "Classification systems typically act in isolation, meaning they are required to implicitly memorize the characteristics of all candidate classes in order to classify.", "The cost of this is increased memory usage and poor sample efficiency.", "We propose a model which instead verifies using reference images during the classification process, reducing the burden of memorization.", "The model uses iterative non-differentiable queries in order to classify an image.", "We demonstrate that such a model is feasible to train and can match baseline accuracy while being more parameter efficient.", "However, we show that finding the correct balance between image recognition and verification is essential to pushing the model towards desired behavior, suggesting that a pipeline of recognition followed by verification is a more promising approach towards designing more powerful networks with simpler architectures." ]
[ 0, 0, 0, 0, 0, 1 ]
[ 0.1428571343421936, 0.060606054961681366, 0.20512819290161133, 0.12121211737394333, 0.1463414579629898, 0.21052631735801697 ]
HygF59JVo7
[ "Image classification via iteratively querying for reference image from a candidate class with a RNN and use CNN to compare to the input image" ]
[ "To reduce memory footprint and run-time latency, techniques such as neural net-work pruning and binarization have been explored separately.", " However, it is un-clear how to combine the best of the two worlds to get extremely small and efficient models", ". In this paper, we, for the first time, define the filter-level pruning problem for binary neural networks, which cannot be solved by simply migrating existing structural pruning methods for full-precision models", ". A novel learning-based approach is proposed to prune filters in our main/subsidiary network frame-work, where the main network is responsible for learning representative features to optimize the prediction performance, and the subsidiary component works as a filter selector on the main network", ". To avoid gradient mismatch when training the subsidiary component, we propose a layer-wise and bottom-up scheme", ". We also provide the theoretical and experimental comparison between our learning-based and greedy rule-based methods", ". Finally, we empirically demonstrate the effectiveness of our approach applied on several binary models, including binarizedNIN, VGG-11, and ResNet-18, on various image classification datasets", ". For bi-nary ResNet-18 on ImageNet, we use 78.6% filters but can achieve slightly better test error 49.87% (50.02%-0.15%) than the original model" ]
[ 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.1666666567325592, 0.2222222238779068, 0.4000000059604645, 0.1538461446762085, 0.23529411852359772, 0.1249999925494194, 0.19512194395065308, 0.09090908616781235 ]
ryxfHnCctX
[ "we define the filter-level pruning problem for binary neural networks for the first time and propose method to solve it." ]
[ "Wide adoption of complex RNN based models is hindered by their inference performance, cost and memory requirements.", "To address this issue, we develop AntMan, combining structured sparsity with low-rank decomposition synergistically, to reduce model computation, size and execution time of RNNs while attaining desired accuracy.", "AntMan extends knowledge distillation based training to learn the compressed models efficiently.", "Our evaluation shows that AntMan offers up to 100x computation reduction with less than 1pt accuracy drop for language and machine reading comprehension models.", "Our evaluation also shows that for a given accuracy target, AntMan produces 5x smaller models than the state-of-art.", "Lastly, we show that AntMan offers super-linear speed gains compared to theoretical speedup, demonstrating its practical value on commodity hardware." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.31578946113586426, 0.16326530277729034, 0.24242423474788666, 0.2666666507720947, 0.051282044500112534, 0.04878048226237297 ]
BJgsN3R9Km
[ "Reducing computation and memory complexity of RNN models by up to 100x using sparse low-rank compression modules, trained via knowledge distillation." ]
[ "Graph-structured data such as social networks, functional brain networks, gene regulatory networks, communications networks have brought the interest in generalizing deep learning techniques to graph domains.", "In this paper, we are interested to design neural networks for graphs with variable length in order to solve learning problems such as vertex classification, graph classification, graph regression, and graph generative tasks.", "Most existing works have focused on recurrent neural networks (RNNs) to learn meaningful representations of graphs, and more recently new convolutional neural networks (ConvNets) have been introduced.", "In this work, we want to compare rigorously these two fundamental families of architectures to solve graph learning tasks.", "We review existing graph RNN and ConvNet architectures, and propose natural extension of LSTM and ConvNet to graphs with arbitrary size.", "Then, we design a set of analytically controlled experiments on two basic graph problems, i.e. subgraph matching and graph clustering, to test the different architectures. ", "Numerical results show that the proposed graph ConvNets are 3-17% more accurate and 1.5-4x faster than graph RNNs.", "Graph ConvNets are also 36% more accurate than variational (non-learning) techniques.", "Finally, the most effective graph ConvNet architecture uses gated edges and residuality.", "Residuality plays an essential role to learn multi-layer architectures as they provide a 10% gain of performance." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.09999999403953552, 0.17777776718139648, 0.09999999403953552, 0.23529411852359772, 0.29411762952804565, 0.2380952388048172, 0.29411762952804565, 0.07407406717538834, 0.3571428656578064, 0.060606054961681366 ]
HyXBcYg0b
[ "We compare graph RNNs and graph ConvNets, and we consider the most generic class of graph ConvNets with residuality." ]
[ "Complex-value neural networks are not a new concept, however, the use of real-values has often been favoured over complex-values due to difficulties in training and accuracy of results.", "Existing literature ignores the number of parameters used.", "We compared complex- and real-valued neural networks using five activation functions.", "We found that when real and complex neural networks are compared using simple classification tasks, complex neural networks perform equal to or slightly worse than real-value neural networks.", "However, when specialised architecture is used, complex-valued neural networks outperform real-valued neural networks.", "Therefore, complex–valued neural networks should be used when the input data is also complex or it can be meaningfully to the complex plane, or when the network architecture uses the structure defined by using complex numbers." ]
[ 0, 1, 0, 0, 0, 0 ]
[ 0.20000000298023224, 0.380952388048172, 0.25, 0.1111111044883728, 0.0833333283662796, 0.09756097197532654 ]
HkCy2uqQM
[ "Comparison of complex- and real-valued multi-layer perceptron with respect to the number of real-valued parameters." ]
[ "The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. \n", "In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs.", "The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest.", "Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely-connected graphs, and can handle different constructions of Laplacian operators.", "Extensive experimental results show the superior performance of our approach on spectral image classification, community detection, vertex classification and matrix completion tasks." ]
[ 0, 1, 0, 0, 0 ]
[ 0.04651162400841713, 0.1599999964237213, 0.052631575614213943, 0.10526315122842789, 0.06451612710952759 ]
S1680_1Rb
[ "A spectral graph convolutional neural network with spectral zoom properties." ]
[ "We present FasterSeg, an automatically designed semantic segmentation network with not only state-of-the-art performance but also faster speed than current methods.", "Utilizing neural architecture search (NAS), FasterSeg is discovered from a novel and broader search space integrating multi-resolution branches, that has been recently found to be vital in manually designed segmentation models.", "To better calibrate the balance between the goals of high accuracy and low latency, we propose a decoupled and fine-grained latency regularization, that effectively overcomes our observed phenomenons that the searched networks are prone to \"collapsing\" to low-latency yet poor-accuracy models.", "Moreover, we seamlessly extend FasterSeg to a new collaborative search (co-searching) framework, simultaneously searching for a teacher and a student network in the same single run.", "The teacher-student distillation further boosts the student model’s accuracy.", "Experiments on popular segmentation benchmarks demonstrate the competency of FasterSeg.", "For example, FasterSeg can run over 30% faster than the closest manually designed competitor on Cityscapes, while maintaining comparable accuracy." ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.3589743673801422, 0.1666666567325592, 0.07407406717538834, 0.0952380895614624, 0, 0.0714285671710968, 0.15789473056793213 ]
BJgqQ6NYvB
[ "We present a real-time segmentation model automatically discovered by a multi-scale NAS framework, achieving 30% faster than state-of-the-art models." ]
[ "This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions.", "We also introduce a suite of eight tasks that combine these three properties, and show that R2D3 can solve several of the tasks where other state of the art methods (both with and without demonstrations) fail to see even a single successful trajectory after tens of billions of steps of exploration." ]
[ 1, 0 ]
[ 0.9019607901573181, 0.24242423474788666 ]
SygKyeHKDH
[ "We introduce R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions." ]
[ "We investigate the learned dynamical landscape of a recurrent neural network solving a simple task requiring the interaction of two memory mechanisms: long- and short-term.", "Our results show that while long-term memory is implemented by asymptotic attractors, sequential recall is now additionally implemented by oscillatory dynamics in a transverse subspace to the basins of attraction of these stable steady states.", "Based on our observations, we propose how different types of memory mechanisms can coexist and work together in a single neural network, and discuss possible applications to the fields of artificial intelligence and neuroscience." ]
[ 1, 0, 0 ]
[ 0.4736841917037964, 0.21276594698429108, 0.21276594698429108 ]
SJevPNShnV
[ "We investigate how a recurrent neural network successfully learns a task combining long-term memory and sequential recall." ]
[ "The problem of exploration in reinforcement learning is well-understood in the tabular case and many sample-efficient algorithms are known.", "Nevertheless, it is often unclear how the algorithms in the tabular setting can be extended to tasks with large state-spaces where generalization is required.", "Recent promising developments generally depend on problem-specific density models or handcrafted features.", "In this paper we introduce a simple approach for exploration that allows us to develop theoretically justified algorithms in the tabular case but that also give us intuitions for new algorithms applicable to settings where function approximation is required.", "Our approach and its underlying theory is based on the substochastic successor representation, a concept we develop here.", "While the traditional successor representation is a representation that defines state generalization by the similarity of successor states, the substochastic successor representation is also able to implicitly count the number of times each state (or feature) has been observed.", "This extension connects two until now disjoint areas of research.", "We show in traditional tabular domains (RiverSwim and SixArms) that our algorithm empirically performs as well as other sample-efficient algorithms.", "We then describe a deep reinforcement learning algorithm inspired by these ideas and show that it matches the performance of recent pseudo-count-based methods in hard exploration Atari 2600 games." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ 0.260869562625885, 0.07999999821186066, 0, 0.12903225421905518, 0.08695651590824127, 0.14035087823867798, 0.052631575614213943, 0.1702127605676651, 0.5614035129547119 ]
S1giVsRcYm
[ "We propose the idea of using the norm of the successor representation an exploration bonus in reinforcement learning. In hard exploration Atari games, our the deep RL algorithm matches the performance of recent pseudo-count-based methods." ]
[ "Deep generative modeling using flows has gained popularity owing to the tractable exact log-likelihood estimation with efficient training and synthesis process.", "However, flow models suffer from the challenge of having high dimensional latent space, same in dimension as the input space.", "An effective solution to the above challenge as proposed by Dinh et al. (2016) is a multi-scale architecture, which is based on iterative early factorization of a part of the total dimensions at regular intervals.", "Prior works on generative flows involving a multi-scale architecture perform the dimension factorization based on a static masking.", "We propose a novel multi-scale architecture that performs data dependent factorization to decide which dimensions should pass through more flow layers.", "To facilitate the same, we introduce a heuristic based on the contribution of each dimension to the total log-likelihood which encodes the importance of the dimensions.", "Our proposed heuristic is readily obtained as part of the flow training process, enabling versatile implementation of our likelihood contribution based multi-scale architecture for generic flow models.", "We present such an implementation for the original flow introduced in Dinh et al. (2016), and demonstrate improvements in log-likelihood score and sampling quality on standard image benchmarks.", "We also conduct ablation studies to compare proposed method with other options for dimension factorization." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.1666666567325592, 0.1764705777168274, 0.43478259444236755, 0.4516128897666931, 0.3333333432674408, 0.555555522441864, 0.29999998211860657, 0.19512194395065308, 0.13333332538604736 ]
H1eRI04KPB
[ "Data-dependent factorization of dimensions in a multi-scale architecture based on contribution to the total log-likelihood" ]
[ "Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gain increasing attention over the last few years.", "While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective.", "What is more, no exhaustive empirical comparison has been performed in the past.", "In this work we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them.", "By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation.", "Finally, we propose a novel evaluation metric, called Sensitivity-n and test the gradient-based attribution methods alongside with a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0, 0.1249999925494194, 0, 0.13333332538604736, 0, 0.08888888359069824 ]
Sy21R9JAW
[ "Four existing backpropagation-based attribution methods are fundamentally similar. How to assess it?" ]
[ "We study SGD and Adam for estimating a rank one signal planted in matrix or tensor noise.", "The extreme simplicity of the problem setup allows us to isolate the effects of various factors: signal to noise ratio, density of critical points, stochasticity and initialization.", "We observe a surprising phenomenon: Adam seems to get stuck in local minima as soon as polynomially many critical points appear (matrix case), while SGD escapes those.", "However, when the number of critical points degenerates to exponentials (tensor case), then both algorithms get trapped.", "Theory tells us that at fixed SNR the problem becomes intractable for large $d$ and in our experiments SGD does not escape this.", "We exhibit the benefits of warm starting in those situations.", "We conclude that in this class of problems, warm starting cannot be replaced by stochasticity in gradients to find the basin of attraction." ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.37037035822868347, 0.060606054961681366, 0.1111111044883728, 0, 0.1818181723356247, 0, 0 ]
rkl8DES3nE
[ "SGD and Adam under single spiked model for tensor PCA" ]
[ "This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically.", "How to analyze the specific rationale of each prediction made by the CNN presents one of key issues of understanding neural networks, but it is also of significant practical values in certain applications.", "In this study, we propose to distill knowledge from the CNN into an explainable additive model, so that we can use the explainable model to provide a quantitative explanation for the CNN prediction.", "We analyze the typical bias-interpreting problem of the explainable model and develop prior losses to guide the learning of the explainable additive model.", "Experimental results have demonstrated the effectiveness of our method." ]
[ 1, 0, 0, 0, 0 ]
[ 1, 0.21276594698429108, 0.17777776718139648, 0.17142856121063232, 0.14814814925193787 ]
SJfWKsC5K7
[ "This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically." ]
[ "We present methodology for using dynamic evaluation to improve neural sequence models.", "Models are adapted to recent history via a gradient descent based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns.", "Dynamic evaluation outperforms existing adaptation approaches in our comparisons.", "Dynamic evaluation improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies on the text8 and Hutter Prize datasets to 1.19 bits/char and 1.08 bits/char respectively." ]
[ 1, 0, 0, 0 ]
[ 0.4761904776096344, 0, 0.1111111044883728, 0.054054051637649536 ]
rkdU7tCaZ
[ "Paper presents dynamic evaluation methodology for adaptive sequence modelling" ]
[ "We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. \n", "This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. \n", "With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. \n", "We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. \n", "This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator." ]
[ 1, 0, 0, 0, 0 ]
[ 0.978723406791687, 0.2857142686843872, 0.2142857164144516, 0.19999998807907104, 0.22727271914482117 ]
ByS1VpgRZ
[ "We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model." ]
[ "Learning theory tells us that more data is better when minimizing the generalization error of identically distributed training and test sets.", "However, when training and test distribution differ, this distribution shift can have a significant effect.", "With a novel perspective on function transfer learning, we are able to lower bound the change of performance when transferring from training to test set with the Wasserstein distance between the embedded training and test set distribution.", "We find that there is a trade-off affecting performance between how invariant a function is to changes in training and test distribution and how large this shift in distribution is.", "Empirically across several data domains, we substantiate this viewpoint by showing that test performance correlates strongly with the distance in data distributions between training and test set.", "Complementary to the popular belief that more data is always better, our results highlight the utility of also choosing a training data distribution that is close to the test data distribution when the learned function is not invariant to such changes." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.1860465109348297, 0.2222222238779068, 0.37735849618911743, 0.4444444477558136, 0.3829787075519562, 0.26923075318336487 ]
SJgSflHKDr
[ "The Frechet Distance between train and test distribution correlates with the change in performance for functions that are not invariant to the shift." ]
[ "We introduce a new procedural dynamic system that can generate a variety of shapes that often appear as curves, but technically, the figures are plots of many points.", "We name them spiroplots and show how this new system relates to other procedures or processes that generate figures.", "Spiroplots are an extremely simple process but with a surprising visual variety.", "We prove some fundamental properties and analyze some instances to see how the geometry or topology of the input determines the generated figures.", "We show that some spiroplots have a finite cycle and return to the initial situation, whereas others will produce new points infinitely often.", "This paper is accompanied by a JavaScript app that allows anyone to generate spiroplots." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.1860465109348297, 0.1621621549129486, 0.13333332538604736, 0.10526315122842789, 0.09756097197532654, 0.1249999925494194 ]
8PAFHtYh17
[ "A new, very simple dynamic system is introduced that generates pretty patterns; properties are proved and possibilities are explored" ]
[ "Unsupervised image-to-image translation aims to learn a mapping between several visual domains by using unpaired training pairs.", "Recent studies have shown remarkable success in image-to-image translation for multiple domains but they suffer from two main limitations: they are either built from several two-domain mappings that are required to be learned independently and/or they generate low-diversity results, a phenomenon known as model collapse.", "To overcome these limitations, we propose a method named GMM-UNIT based on a content-attribute disentangled representation, where the attribute space is fitted with a GMM.", "Each GMM component represents a domain, and this simple assumption has two prominent advantages.", "First, the dimension of the attribute space does not grow linearly with the number of domains, as it is the case in the literature.", "Second, the continuous domain encoding allows for interpolation between domains and for extrapolation to unseen domains.", "Additionally, we show how GMM-UNIT can be constrained down to different methods in the literature, meaning that GMM-UNIT is a unifying framework for unsupervised image-to-image translation." ]
[ 0, 0, 0, 0, 0, 0, 1 ]
[ 0.3030303120613098, 0.31578946113586426, 0.1538461446762085, 0.06666666269302368, 0.11428570747375488, 0.13333332538604736, 0.39024388790130615 ]
HkeFQgrFDr
[ "GMM-UNIT is an image-to-image translation model that maps an image to multiple domains in a stochastic fashion." ]
[ "We present Compositional Attention Networks, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning.", "While many types of neural networks are effective at learning and generalizing from massive quantities of data, this model moves away from monolithic black-box architectures towards a design that provides a strong prior for iterative reasoning, enabling it to support explainable and structured learning, as well as generalization from a modest amount of data.", "The model builds on the great success of existing recurrent cells such as LSTMs: It sequences a single recurrent Memory, Attention, and Control (MAC) cell, and by careful design imposes structural constraints on the operation of each cell and the interactions between them, incorporating explicit control and soft attention mechanisms into their interfaces.", "We demonstrate the model's strength and robustness on the challenging CLEVR dataset for visual reasoning, achieving a new state-of-the-art 98.9% accuracy, halving the error rate of the previous best model.", "More importantly, we show that the new model is more computationally efficient, data-efficient, and requires an order of magnitude less time and/or data to achieve good results." ]
[ 1, 0, 0, 0, 0 ]
[ 0.37837836146354675, 0.1249999925494194, 0.1904761791229248, 0.30434781312942505, 0.13333332538604736 ]
S1Euwz-Rb
[ "We present a novel architecture, based on dynamic memory, attention and composition for the task of machine reasoning." ]
[ " Variational Auto-Encoders (VAEs) are designed to capture compressible information about a dataset. ", "As a consequence the information stored in the latent space is seldom sufficient to reconstruct a particular image. ", "To help understand the type of information stored in the latent space we train a GAN-style decoder constrained to produce images that the VAE encoder will map to the same region of latent space.", "This allows us to ''imagine'' the information captured in the latent space. ", "We argue that this is necessary to make a VAE into a truly generative model. ", "We use our GAN to visualise the latent space of a standard VAE and of a $\\beta$-VAE." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.1538461446762085, 0.3720930218696594, 0.9433962106704712, 0.31578946113586426, 0.19512194395065308, 0.3414634168148041 ]
BJe4PyrFvB
[ "To understand the information stored in the latent space, we train a GAN-style decoder constrained to produce images that the VAE encoder will map to the same region of latent space." ]
[ "Human brain function as measured by functional magnetic resonance imaging\n", "(fMRI), exhibits a rich diversity.", "In response, understanding the individual variability\n", "of brain function and its association with behavior has become one of the\n", "major concerns in modern cognitive neuroscience.", "Our work is motivated by the\n", "view that generative models provide a useful tool for understanding this variability.\n", "To this end, this manuscript presents two novel generative models trained\n", "on real neuroimaging data which synthesize task-dependent functional brain images.\n", "Brain images are high dimensional tensors which exhibit structured spatial\n", "correlations.", "Thus, both models are 3D conditional Generative Adversarial networks\n", "(GANs) which apply Convolutional Neural Networks (CNNs) to learn an\n", "abstraction of brain image representations.", "Our results show that the generated\n", "brain images are diverse, yet task dependent.", "In addition to qualitative evaluation,\n", "we utilize the generated synthetic brain volumes as additional training data to improve\n", "downstream fMRI classifiers (also known as decoding, or brain reading).\n", "Our approach achieves significant improvements for a variety of datasets, classifi-\n", "cation tasks and evaluation scores.", "Our classification results provide a quantitative\n", "evaluation of the quality of the generated images, and also serve as an additional\n", "contribution of this manuscript." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.06666666269302368, 0, 0, 0.1249999925494194, 0, 0, 0, 0.06666666269302368, 0.12903225421905518, 0.13333332538604736, 0.13793103396892548, 0.06666666269302368, 0.07999999821186066, 0, 0.2222222238779068, 0.07999999821186066, 0.24242423474788666, 0.19354838132858276, 0, 0.1599999964237213, 0.07692307233810425, 0.0624999962747097, 0 ]
BJaU__eCZ
[ "Two novel GANs are constructed to generate high-quality 3D fMRI brain images and synthetic brain images greatly help to improve downstream classification tasks." ]
[ "Transferring representations from large-scale supervised tasks to downstream tasks have shown outstanding results in Machine Learning in both Computer Vision and natural language processing (NLP).", "One particular example can be sequence-to-sequence models for Machine Translation (Neural Machine Translation - NMT).", "It is because, once trained in a multilingual setup, NMT systems can translate between multiple languages and are also capable of performing zero-shot translation between unseen source-target pairs at test time.", "In this paper, we first investigate if we can extend the zero-shot transfer capability of multilingual NMT systems to cross-lingual NLP tasks (tasks other than MT, e.g. sentiment classification and natural language inference).", "We demonstrate a simple framework by reusing the encoder from a multilingual NMT system, a multilingual Encoder-Classifier, achieves remarkable zero-shot cross-lingual classification performance, almost out-of-the-box on three downstream benchmark tasks - Amazon Reviews, Stanford sentiment treebank (SST) and Stanford natural language inference (SNLI).", "In order to understand the underlying factors contributing to this finding, we conducted a series of analyses on the effect of the shared vocabulary, the training data type for NMT models, classifier complexity, encoder representation power, and model generalization on zero-shot performance.", "Our results provide strong evidence that the representations learned from multilingual NMT systems are widely applicable across languages and tasks, and the high, out-of-the-box classification performance is correlated with the generalization capability of such systems." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0, 0, 0.10256409645080566, 0.1428571343421936, 0.125, 0, 0.04999999701976776 ]
H1gni9-ojX
[ "Zero-shot cross-lingual transfer by using multilingual neural machine translation " ]
[ "This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner.", "Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent.", "Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques." ]
[ 1, 0, 0 ]
[ 0.2790697515010834, 0.24137930572032928, 0.25806450843811035 ]
S1eYHoC5FX
[ "We propose a differentiable architecture search algorithm for both convolutional and recurrent networks, achieving competitive performance with the state of the art using orders of magnitude less computation resources." ]
[ "Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). ", "The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration — output text that is bland, incoherent, or gets stuck in repetitive loops.\n\n", "To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher quality text out of neural language models.", "Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass.\n\n", "To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition.", "Our results show that (1) maximization is an inappropriate decoding objective for open-ended text generation, (2) the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation and (3) Nucleus Sampling is the best decoding strategy for generating long-form text that is both high-quality — as measured by human evaluation — and as diverse as human-written text." ]
[ 0, 0, 0, 0, 0, 1 ]
[ 0.14035087823867798, 0.18918918073177338, 0.19999998807907104, 0.03999999538064003, 0.10526315122842789, 0.20512820780277252 ]
rygGQyrFvH
[ "Current language generation systems either aim for high likelihood and devolve into generic repetition or miscalibrate their stochasticity—we provide evidence of both and propose a solution: Nucleus Sampling." ]
[ "Up until very recently, inspired by a mass of researches on adversarial examples for computer vision, there has been a growing interest in designing adversarial attacks for Natural Language Processing (NLP) tasks, followed by very few works of adversarial defenses for NLP.", "To our knowledge, there exists no defense method against the successful synonym substitution based attacks that aim to satisfy all the lexical, grammatical, semantic constraints and thus are hard to perceived by humans.", "We contribute to fill this gap and propose a novel adversarial defense method called Synonym Encoding Method (SEM), which inserts an encoder before the input layer of the model and then trains the model to eliminate adversarial perturbations.", "Extensive experiments demonstrate that SEM can efficiently defend current best synonym substitution based adversarial attacks with little decay on the accuracy for benign examples.", "To better evaluate SEM, we also design a strong attack method called Improved Genetic Algorithm (IGA) that adopts the genetic metaheuristic for synonym substitution based attacks.", "Compared with existing genetic based adversarial attack, IGA can achieve higher attack success rate while maintaining the transferability of the adversarial examples." ]
[ 0, 1, 0, 0, 0, 0 ]
[ 0.11320754140615463, 0.3199999928474426, 0.19607841968536377, 0.23255813121795654, 0.2666666507720947, 0.20512819290161133 ]
BJl_a2VYPH
[ "The first text adversarial defense method in word level, and the improved generic based attack method against synonyms substitution based attacks." ]
[ "A major drawback of backpropagation through time (BPTT) is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation.", "This makes BPTT both computationally impractical and biologically implausible.", "For this reason, full backpropagation through time is rarely used on long sequences, and truncated backpropagation through time is used as a heuristic. ", "However, this usually leads to biased estimates of the gradient in which longer term dependencies are ignored. ", "Addressing this issue, we propose an alternative algorithm, Sparse Attentive Backtracking, which might also be related to principles used by brains to learn long-term dependencies.", "Sparse Attentive Backtracking learns an attention mechanism over the hidden states of the past and selectively backpropagates through paths with high attention weights. ", "This allows the model to learn long term dependencies while only backtracking for a small number of time steps, not just from the recent past but also from attended relevant past states. " ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0, 0, 0, 0.06896550953388214, 0, 0, 0 ]
SJCq_fZ0Z
[ "Towards Efficient Credit Assignment in Recurrent Networks without Backpropagation Through Time" ]
[ "We propose a novel adversarial learning framework in this work.", "Existing adversarial learning methods involve two separate networks, i.e., the structured prediction models and the discriminative models, in the training.", "The information captured by discriminative models complements that in the structured prediction models, but few existing researches have studied on utilizing such information to improve structured prediction models at the inference stage.", "In this work, we propose to refine the predictions of structured prediction models by effectively integrating discriminative models into the prediction.", "Discriminative models are treated as energy-based models.", "Similar to the adversarial learning, discriminative models are trained to estimate scores which measure the quality of predicted outputs, while structured prediction models are trained to predict contrastive outputs with maximal energy scores.", "In this way, the gradient vanishing problem is ameliorated, and thus we are able to perform inference by following the ascent gradient directions of discriminative models to refine structured prediction models.", "The proposed method is able to handle a range of tasks, \\emph{e.g.}, multi-label classification and image segmentation. ", "Empirical results on these two tasks validate the effectiveness of our learning method." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.4571428596973419, 0.35555556416511536, 0.38461539149284363, 0.3720930218696594, 0.06451612710952759, 0.31372547149658203, 0.307692289352417, 0.13333332538604736, 0.10526315122842789 ]
By40DoAqtX
[ "We propose a novel adversarial learning framework for structured prediction, in which discriminative models can be used to refine structured prediction models at the inference stage. " ]
[ "We propose RaPP, a new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder.\n", "Precisely, RaPP compares input and its autoencoder reconstruction not only in the input space but also in the hidden spaces.\n", "We show that if we feed a reconstructed input to the same autoencoder again, its activated values in a hidden space are equivalent to the corresponding reconstruction in that hidden space given the original input.\n", "In order to aggregate the hidden space activation values, we propose two metrics, which enhance the novelty detection performance.\n", "Through extensive experiments using diverse datasets, we validate that RaPP improves novelty detection performances of autoencoder-based approaches.\n", "Besides, we show that RaPP outperforms recent novelty detection methods evaluated on popular benchmarks.\n" ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.8648648858070374, 0.17142856121063232, 0.22727271914482117, 0.277777761220932, 0.11428570747375488, 0.1249999925494194 ]
HkgeGeBYDB
[ "A new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder." ]
[ "Learning preferences of users over plan traces can be a challenging task given a large number of features and limited queries that we can ask a single user.", "Additionally, the preference function itself can be quite convoluted and non-linear.", "Our approach uses feature-directed active learning to gather the necessary information about plan trace preferences.", "This data is used to train a simple feedforward neural network to learn preferences over the sequential data.", "We evaluate the impact of active learning on the number of traces that are needed to train a model that is accurate and interpretable.", "This evaluation is done by comparing the aforementioned feedforward network to a more complex neural network model that uses LSTMs and is trained with a larger dataset without active learning." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.3125, 0, 0.3478260934352875, 0.1666666567325592, 0.20689654350280762, 0.11428570747375488 ]
BkGeETnQcE
[ "Learning preferences over plan traces using active learning." ]
[ "Reinforcement learning is a promising framework for solving control problems, but its use in practical situations is hampered by the fact that reward functions are often difficult to engineer.", "Specifying goals and tasks for autonomous machines, such as robots, is a significant challenge: conventionally, reward functions and goal states have been used to communicate objectives.", "But people can communicate objectives to each other simply by describing or demonstrating them.", "How can we build learning algorithms that will allow us to tell machines what we want them to do?", "In this work, we investigate the problem of grounding language commands as reward functions using inverse reinforcement learning, and argue that language-conditioned rewards are more transferable than language-conditioned policies to new environments.", "We propose language-conditioned reward learning (LC-RL), which grounds language commands as a reward function represented by a deep neural network.", "We demonstrate that our model learns rewards that transfer to novel tasks and environments on realistic, high-dimensional visual environments with natural language commands, whereas directly learning a language-conditioned policy leads to poor performance." ]
[ 0, 0, 0, 0, 0, 1, 0 ]
[ 0.1818181723356247, 0.04878048226237297, 0.06666666269302368, 0.060606054961681366, 0.2978723347187042, 0.4117647111415863, 0.3478260934352875 ]
r1lq1hRqYQ
[ "We ground language commands in a high-dimensional visual environment by learning language-conditioned rewards using inverse reinforcement learning." ]
[ "Many biological learning systems such as the mushroom body, hippocampus, and cerebellum are built from sparsely connected networks of neurons.", "For a new understanding of such networks, we study the function spaces induced by sparse random features and characterize what functions may and may not be learned.", "A network with d inputs per neuron is found to be equivalent to an additive model of order d, whereas with a degree distribution the network combines additive terms of different orders.", "We identify three specific advantages of sparsity: additive function approximation is a powerful inductive bias that limits the curse of dimensionality, sparse networks are stable to outlier noise in the inputs, and sparse random features are scalable.", "Thus, even simple brain architectures can be powerful function approximators.", "Finally, we hope that this work helps popularize kernel theories of networks among computational neuroscientists." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.3243243098258972, 0.2380952388048172, 0.09090908616781235, 0.23999999463558197, 0, 0.1249999925494194 ]
rylt7mFU8S
[ "We advocate for random features as a theory of biological neural networks, focusing on sparsely connected networks" ]
[ "We propose a new application of embedding techniques to problem retrieval in adaptive tutoring.", "The objective is to retrieve problems similar in mathematical concepts.", "There are two challenges: First, like sentences, problems helpful to tutoring are never exactly the same in terms of the underlying concepts.", "Instead, good problems mix concepts in innovative ways, while still displaying continuity in their relationships.", "Second, it is difficult for humans to determine a similarity score consistent across a large enough training set.", "We propose a hierarchical problem embedding algorithm, called Prob2Vec, that consists of an abstraction and an embedding step.", "Prob2Vec achieves 96.88\\% accuracy on a problem similarity test, in contrast to 75\\% from directly applying state-of-the-art sentence embedding methods.", "It is surprising that Prob2Vec is able to distinguish very fine-grained differences among problems, an ability humans need time and effort to acquire.", "In addition, the sub-problem of concept labeling with imbalanced training data set is interesting in its own right.", "It is a multi-label problem suffering from dimensionality explosion, which we propose ways to ameliorate.", "We propose the novel negative pre-training algorithm that dramatically reduces false negative and positive ratios for classification, using an imbalanced training data set." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ 0.3255814015865326, 0.1538461446762085, 0.12244897335767746, 0.04651162400841713, 0.260869562625885, 0.2666666507720947, 0.23999999463558197, 0.11999999731779099, 0.2978723347187042, 0.22727271914482117, 0.3529411852359772 ]
SJl8gnAqtX
[ "We propose the Prob2Vec method for problem embedding used in a personalized e-learning tool in addition to a data level classification method, called negative pre-training, for cases where the training data set is imbalanced." ]
[ "We introduce a new deep convolutional neural network, CrescendoNet, by stacking simple building blocks without residual connections.", "Each Crescendo block contains independent convolution paths with increased depths.", "The numbers of convolution layers and parameters are only increased linearly in Crescendo blocks.", "In experiments, CrescendoNet with only 15 layers outperforms almost all networks without residual connections on benchmark datasets, CIFAR10, CIFAR100, and SVHN.", "Given sufficient amount of data as in SVHN dataset, CrescendoNet with 15 layers and 4.1M parameters can match the performance of DenseNet-BC with 250 layers and 15.3M parameters.", "CrescendoNet provides a new way to construct high performance deep convolutional neural networks without residual connections.", "Moreover, through investigating the behavior and performance of subnetworks in CrescendoNet, we note that the high performance of CrescendoNet may come from its implicit ensemble behavior, which differs from the FractalNet that is also a deep convolutional neural network without residual connections.", "Furthermore, the independence between paths in CrescendoNet allows us to introduce a new path-wise training procedure, which can reduce the memory needed for training." ]
[ 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.8125, 0, 0.06896550953388214, 0.1666666567325592, 0, 0.32258063554763794, 0.23529411852359772, 0.10810810327529907 ]
HJdXGy1RW
[ "We introduce CrescendoNet, a deep CNN architecture by stacking simple building blocks without residual connections." ]
[ "Gaussian processes are the leading class of distributions on random functions, but they suffer from well known issues including difficulty scaling and inflexibility with respect to certain shape constraints (such as nonnegativity).", "Here we propose Deep Random Splines, a flexible class of random functions obtained by transforming Gaussian noise through a deep neural network whose output are the parameters of a spline.", "Unlike Gaussian processes, Deep Random Splines allow us to readily enforce shape constraints while inheriting the richness and tractability of deep generative models.", "We also present an observational model for point process data which uses Deep Random Splines to model the intensity function of each point process and apply it to neuroscience data to obtain a low-dimensional representation of spiking activity.", "Inference is performed via a variational autoencoder that uses a novel recurrent encoder architecture that can handle multiple point processes as input." ]
[ 0, 0, 0, 1, 0 ]
[ 0.18867923319339752, 0.1666666567325592, 0.13636362552642822, 0.38461539149284363, 0.19512194395065308 ]
rJl97IIt_E
[ "We combine splines with neural networks to obtain a novel distribution over functions and use it to model intensity functions of point processes." ]
[ "The recent development of Natural Language Processing (NLP) has achieved great success using large pre-trained models with hundreds of millions of parameters.", "However, these models suffer from the heavy model size and high latency such that we cannot directly deploy them to resource-limited mobile devices.", "In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model.", "Like BERT, MobileBERT is task-agnostic; that is, it can be universally applied to various downstream NLP tasks via fine-tuning.", "MobileBERT is a slimmed version of BERT-LARGE augmented with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.", "To train MobileBERT, we use a bottom-to-top progressive scheme to transfer the intrinsic knowledge of a specially designed Inverted Bottleneck BERT-LARGE teacher to it.", "Empirical studies show that MobileBERT is 4.3x smaller and 4.0x faster than original BERT-BASE while achieving competitive results on well-known NLP benchmarks.", "On the natural language inference tasks of GLUE, MobileBERT achieves 0.6 GLUE score performance degradation, and 367 ms latency on a Pixel 3 phone.", "On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a 90.0/79.2 dev F1 score, which is 1.5/2.1 higher than BERT-BASE." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0, 0.04347825422883034, 0.05405404791235924, 0.0952380895614624, 0.1428571343421936, 0.04444443807005882, 0.5652173757553101, 0.2083333283662796, 0.25 ]
SJxjVaNKwB
[ "We develop a task-agnosticlly compressed BERT, which is 4.3x smaller and 4.0x faster than BERT-BASE while achieving competitive performance on GLUE and SQuAD." ]
[ "The importance weighted autoencoder (IWAE) (Burda et al., 2016) is a popular variational-inference method which achieves a tighter evidence bound (and hence a lower bias) than standard variational autoencoders by optimising a multi-sample objective, i.e. an objective that is expressible as an integral over $K > 1$ Monte Carlo samples.", "Unfortunately, IWAE crucially relies on the availability of reparametrisations and even if these exist, the multi-sample objective leads to inference-network gradients which break down as $K$ is increased (Rainforth et al., 2018).", "This breakdown can only be circumvented by removing high-variance score-function terms, either by heuristically ignoring them (which yields the 'sticking-the-landing' IWAE (IWAE-STL) gradient from Roeder et al. (2017)) or through an identity from Tucker et al. (2019) (which yields the 'doubly-reparametrised' IWAE (IWAE-DREG) gradient).", "In this work, we argue that directly optimising the proposal distribution in importance sampling as in the reweighted wake-sleep (RWS) algorithm from Bornschein & Bengio (2015) is preferable to optimising IWAE-type multi-sample objectives.", "To formalise this argument, we introduce an adaptive-importance sampling framework termed adaptive importance sampling for learning (AISLE) which slightly generalises the RWS algorithm.", "We then show that AISLE admits IWAE-STL and IWAE-DREG (i.e. the IWAE-gradients which avoid breakdown) as special cases." ]
[ 0, 0, 0, 0, 0, 1 ]
[ 0.10810810327529907, 0.10169491171836853, 0.0952380895614624, 0.17543859779834747, 0.12244897335767746, 0.30434781312942505 ]
ryg7jhEtPB
[ "We show that most variants of importance-weighted autoencoders can be derived in a more principled manner as special cases of adaptive importance-sampling approaches like the reweighted-wake sleep algorithm." ]
[ "As the size and complexity of models and datasets grow, so does the need for communication-efficient variants of stochastic gradient descent that can be deployed on clusters to perform model fitting in parallel.", "Alistarh et al. (2017) describe two variants of data-parallel SGD that quantize and encode gradients to lessen communication costs.", "For the first variant, QSGD, they provide strong theoretical guarantees.", "For the second variant, which we call QSGDinf, they demonstrate impressive empirical gains for distributed training of large neural networks.", "Building on their work, we propose an alternative scheme for quantizing gradients and show that it yields stronger theoretical guarantees than exist for QSGD while matching the empirical performance of QSGDinf." ]
[ 0, 0, 0, 0, 1 ]
[ 0.1395348757505417, 0.1249999925494194, 0.260869562625885, 0.1818181723356247, 0.41860464215278625 ]
HyeJmlrFvH
[ "NUQSGD closes the gap between the theoretical guarantees of QSGD and the empirical performance of QSGDinf." ]
[ "The impressive lifelong learning in animal brains is primarily enabled by plastic changes in synaptic connectivity.", "Importantly, these changes are not passive, but are actively controlled by neuromodulation, which is itself under the control of the brain.", "The resulting self-modifying abilities of the brain play an important role in learning and adaptation, and are a major basis for biological reinforcement learning.", "Here we show for the first time that artificial neural networks with such neuromodulated plasticity can be trained with gradient descent.", "Extending previous work on differentiable Hebbian plasticity, we propose a differentiable formulation for the neuromodulation of plasticity.", "We show that neuromodulated plasticity improves the performance of neural networks on both reinforcement learning and supervised learning tasks.", "In one task, neuromodulated plastic LSTMs with millions of parameters outperform standard LSTMs on a benchmark language modeling task (controlling for the number of parameters).", "We conclude that differentiable neuromodulation of plasticity offers a powerful new framework for training neural networks." ]
[ 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.0624999962747097, 0, 0.05128204822540283, 0.21621620655059814, 0.060606054961681366, 0.2857142686843872, 0.04999999329447746, 0.060606054961681366 ]
r1lrAiA5Ym
[ "Neural networks can be trained to modify their own connectivity, improving their online learning performance on challenging tasks." ]
[ "Deep learning has made remarkable achievement in many fields.", "However, learning\n", "the parameters of neural networks usually demands a large amount of labeled\n", "data.", "The algorithms of deep learning, therefore, encounter difficulties when applied\n", "to supervised learning where only little data are available.", "This specific task\n", "is called few-shot learning.", "To address it, we propose a novel algorithm for fewshot\n", "learning using discrete geometry, in the sense that the samples in a class are\n", "modeled as a reduced simplex.", "The volume of the simplex is used for the measurement\n", "of class scatter.", "During testing, combined with the test sample and the\n", "points in the class, a new simplex is formed.", "Then the similarity between the test\n", "sample and the class can be quantized with the ratio of volumes of the new simplex\n", "to the original class simplex.", "Moreover, we present an approach to constructing\n", "simplices using local regions of feature maps yielded by convolutional neural networks.\n", "Experiments on Omniglot and miniImageNet verify the effectiveness of\n", "our simplex algorithm on few-shot learning." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0952380895614624, 0, 0, 0.1904761791229248, 0, 0.375, 0, 0.0833333283662796, 0, 0.0952380895614624, 0, 0.09999999403953552, 0.0952380895614624, 0, 0.07999999821186066, 0.11764705181121826, 0.10526315122842789, 0, 0, 0.2222222238779068 ]
H1x5K0mSnQ
[ "A simplex-based geometric method is proposed to cope with few-shot learning problems." ]
[ "Reservoir computing is a powerful tool to explain how the brain learns temporal sequences, such as movements, but existing learning schemes are either biologically implausible or too inefficient to explain animal performance.", "We show that a network can learn complicated sequences with a reward-modulated Hebbian learning rule if the network of reservoir neurons is combined with a second network that serves as a dynamic working memory and provides a spatio-temporal backbone signal to the reservoir.", "In combination with the working memory, reward-modulated Hebbian learning of the readout neurons performs as well as FORCE learning, but with the advantage of a biologically plausible interpretation of both the learning rule and the learning paradigm." ]
[ 0, 1, 0 ]
[ 0.11538460850715637, 0.4727272689342499, 0.2857142686843872 ]
B1g0QmtIIS
[ "We show that a working memory input to a reservoir network makes a local reward-modulated Hebbian rule perform as well as recursive least-squares (aka FORCE)" ]
[ "Convolutional architectures have recently been shown to be competitive on many\n", "sequence modelling tasks when compared to the de-facto standard of recurrent neural networks (RNNs) while providing computational and modelling advantages due to inherent parallelism.", "However, currently, there remains a performance\n", "gap to more expressive stochastic RNN variants, especially those with several layers of dependent random variables.", "In this work, we propose stochastic temporal convolutional networks (STCNs), a novel architecture that combines the computational advantages of temporal convolutional networks (TCN) with the representational power and robustness of stochastic latent spaces.", "In particular, we propose a hierarchy of stochastic latent variables that captures temporal dependencies at different time-scales.", "The architecture is modular and flexible due to the decoupling of the deterministic and stochastic layers.", "We show that the proposed architecture achieves state of the art log-likelihoods across several tasks.", "Finally, the model is capable of predicting high-quality synthetic samples over a long-range temporal horizon in modelling of handwritten text." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.07999999821186066, 0.2222222238779068, 0, 0.2666666507720947, 0.4390243887901306, 0.32258063554763794, 0.2142857164144516, 0.2142857164144516, 0.1818181723356247 ]
HkzSQhCcK7
[ "We combine the computational advantages of temporal convolutional architectures with the expressiveness of stochastic latent variables." ]
[ "The weak contraction mapping is a self mapping that the range is always a subset of the domain, which admits a unique fixed-point.", "The iteration of weak contraction mapping is a Cauchy sequence that yields the unique fixed-point.", "A gradient-free optimization method as an application of weak contraction mapping is proposed to achieve global minimum convergence.", "The optimization method is robust to local minima and initial point position." ]
[ 0, 0, 1, 0 ]
[ 0.07407406717538834, 0.0833333283662796, 0.4444444477558136, 0.2857142686843872 ]
SygJSiA5YQ
[ "A gradient-free method is proposed for non-convex optimization problem " ]
[ "Over the last decade, two competing control strategies have emerged for solving complex control tasks with high efficacy.", "Model-based control algorithms, such as model-predictive control (MPC) and trajectory optimization, peer into the gradients of underlying system dynamics in order to solve control tasks with high sample efficiency. ", "However, like all gradient-based numerical optimization methods,model-based control methods are sensitive to intializations and are prone to becoming trapped in local minima.", "Deep reinforcement learning (DRL), on the other hand, can somewhat alleviate these issues by exploring the solution space through sampling — at the expense of computational cost.", "In this paper, we present a hybrid method that combines the best aspects of gradient-based methods and DRL.", "We base our algorithm on the deep deterministic policy gradients (DDPG) algorithm and propose a simple modification that uses true gradients from a differentiable physical simulator to increase the convergence rate of both the actor and the critic. ", "We demonstrate our algorithm on seven 2D robot control tasks, with the most complex one being a differentiable half cheetah with hard contact constraints.", "Empirical results show that our method boosts the performance of DDPGwithout sacrificing its robustness to local minima." ]
[ 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.1621621549129486, 0.2083333283662796, 0.09999999403953552, 0.08888888359069824, 0.2631579041481018, 0.38461539149284363, 0.23255813121795654, 0.3243243098258972 ]
rkxZCJrtwS
[ "We propose a novel method that leverages the gradients from differentiable simulators to improve the performance of RL for robotics control" ]
[ "We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks.", "A Bayesian hypernetwork, h, is a neural network which learns to transform a simple noise distribution, p(e) = N(0,I), to a distribution q(t) := q(h(e)) over the parameters t of another neural network (the ``primary network).", "We train q with variational inference, using an invertible h to enable efficient estimation of the variational lower bound on the posterior p(t | D) via sampling.", "In contrast to most methods for Bayesian deep learning, Bayesian hypernets can represent a complex multimodal approximate posterior with correlations between parameters, while enabling cheap iid sampling of q(t). ", "In practice, Bayesian hypernets provide a better defense against adversarial examples than dropout, and also exhibit competitive performance on a suite of tasks which evaluate model uncertainty, including regularization, active learning, and anomaly detection.\n" ]
[ 1, 0, 0, 0, 0 ]
[ 1, 0.1395348757505417, 0.05405404791235924, 0.19512194395065308, 0.08888888359069824 ]
S1fcY-Z0-
[ "We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks." ]
[ "Evolutionary-based optimization approaches have recently shown promising results in domains such as Atari and robot locomotion but less so in solving 3D tasks directly from pixels.", "This paper presents a method called Deep Innovation Protection (DIP) that allows training complex world models end-to-end for such 3D environments.", "The main idea behind the approach is to employ multiobjective optimization to temporally reduce the selection pressure on specific components in a world model, allowing other components to adapt.", "We investigate the emergent representations of these evolved networks, which learn a model of the world without the need for a specific forward-prediction loss." ]
[ 0, 1, 0, 0 ]
[ 0.10810810327529907, 0.6060606241226196, 0.05405404791235924, 0.1249999925494194 ]
SygLu0VtPH
[ "Deep Innovation Protection allows evolving complex world models end-to-end for 3D tasks." ]
[ "Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly.", "In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses.", "Recent work shows that randomized smoothing can be used to provide certified l2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius (MACER).", "The attack-free characteristic makes MACER faster to train and easier to optimize.", "In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN.", "For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve larger average certified radius." ]
[ 0, 1, 0, 0, 0, 0 ]
[ 0.2083333283662796, 0.5283018946647644, 0.24137930572032928, 0, 0.0714285671710968, 0.3199999928474426 ]
rJx1Na4Fwr
[ "We propose MACER: a provable defense algorithm that trains robust models by maximizing the certified radius. It does not use adversarial training but performs better than all existing provable l2-defenses." ]
[ "Actor-critic methods solve reinforcement learning problems by updating a parameterized policy known as an actor in a direction that increases an estimate of the expected return known as a critic.", "However, existing actor-critic methods only use values or gradients of the critic to update the policy parameter.", "In this paper, we propose a novel actor-critic method called the guide actor-critic (GAC).", "GAC firstly learns a guide actor that locally maximizes the critic and then it updates the policy parameter based on the guide actor by supervised learning.", "Our main theoretical contributions are two folds.", "First, we show that GAC updates the guide actor by performing second-order optimization in the action space where the curvature matrix is based on the Hessians of the critic.", "Second, we show that the deterministic policy gradient method is a special case of GAC when the Hessians are ignored.", "Through experiments, we show that our method is a promising reinforcement learning method for continuous controls.\n" ]
[ 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.2926829159259796, 0.3125, 0.27586206793785095, 0.21052631735801697, 0, 0.24390242993831635, 0.2857142686843872, 0.1875 ]
BJk59JZ0b
[ "This paper proposes a novel actor-critic method that uses Hessians of a critic to update an actor." ]
[ "Deep Infomax~(DIM) is an unsupervised representation learning framework by maximizing the mutual information between the inputs and the outputs of an encoder, while probabilistic constraints are imposed on the outputs.", "In this paper, we propose Supervised Deep InfoMax~(SDIM), which introduces supervised probabilistic constraints to the encoder outputs.", "The supervised probabilistic constraints are equivalent to a generative classifier on high-level data representations, where class conditional log-likelihoods of samples can be evaluated.", "Unlike other works building generative classifiers with conditional generative models, SDIMs scale on complex datasets, and can achieve comparable performance with discriminative counterparts. ", "With SDIM, we could perform \\emph{classification with rejection}.\nInstead of always reporting a class label, SDIM only makes predictions when test samples' largest logits surpass some pre-chosen thresholds, otherwise they will be deemed as out of the data distributions, and be rejected. ", "Our experiments show that SDIM with rejection policy can effectively reject illegal inputs including out-of-distribution samples and adversarial examples." ]
[ 0, 0, 0, 0, 0, 1 ]
[ 0.13636362552642822, 0.0555555522441864, 0.1904761791229248, 0.3414634168148041, 0.03333332762122154, 0.4736841917037964 ]
rkg98yBFDr
[ "scale generative classifiers on complex datasets, and evaluate their effectiveness to reject illegal inputs including out-of-distribution samples and adversarial examples." ]
[ "Abstract In this work, we describe a set of rules for the design and initialization of well-conditioned neural networks, guided by the goal of naturally balancing the diagonal blocks of the Hessian at the start of training.", "We show how our measure of conditioning of a block relates to another natural measure of conditioning, the ratio of weight gradients to the weights.", "We prove that for a ReLU-based deep multilayer perceptron, a simple initialization scheme using the geometric mean of the fan-in and fan-out satisfies our scaling rule.", "For more sophisticated architectures, we show how our scaling principle can be used to guide design choices to produce well-conditioned neural networks, reducing guess-work." ]
[ 0, 0, 1, 0 ]
[ 0.25, 0.06666666269302368, 0.2857142686843872, 0.11764705181121826 ]
BJedt6VKPS
[ "A theory for initialization and scaling of ReLU neural network layers" ]
[ "We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model.", "Assuming that both the adversary and victim model fine-tune a large pretrained language model such as BERT (Devlin et al., 2019), we show that the adversary does not need any real training data to successfully mount the attack.", "In fact, the attacker need not even use grammatical or semantically meaningful queries: we show that random sequences of words coupled with task-specific heuristics form effective queries for model extraction on a diverse set of NLP tasks including natural language inference and question answering.", "Our work thus highlights an exploit only made feasible by the shift towards transfer learning methods within the NLP community: for a query budget of a few hundred dollars, an attacker can extract a model that performs only slightly worse than the victim model.", "Finally, we study two defense strategies against model extraction—membership classification and API watermarking—which while successful against some adversaries can also be circumvented by more clever ones." ]
[ 1, 0, 0, 0, 0 ]
[ 0.17777776718139648, 0.11320754140615463, 0.16129031777381897, 0.1428571343421936, 0.09090908616781235 ]
Byl5NREFDr
[ "Outputs of modern NLP APIs on nonsensical text provide strong signals about model internals, allowing adversaries to steal the APIs." ]
[ "We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the \"learning to search\" (L2S) approach to structured prediction.", "RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE).", "Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses.", "Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance.", "Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error.", "We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction.", "Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes.", "This allows us to validate the benefits of our approach on a machine translation task." ]
[ 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.5106382966041565, 0.08163265138864517, 0.2545454502105713, 0.0476190410554409, 0.1904761791229248, 0.09999999403953552, 0.10526315122842789, 0.25 ]
HkUR_y-RZ
[ "We introduce SeaRNN, a novel algorithm for RNN training, inspired by the learning to search approach to structured prediction, in order to avoid the limitations of MLE training." ]
[ "Deep reinforcement learning has demonstrated increasing capabilities for continuous control problems,\n", "including agents that can move with skill and agility through their environment. \n", "An open problem in this setting is that of developing good strategies for integrating or merging policies\n", "for multiple skills, where each individual skill is a specialist in a specific skill and its associated state distribution. \n", "We extend policy distillation methods to the continuous action setting and leverage this technique to combine \\expert policies,\n", "as evaluated in the domain of simulated bipedal locomotion across different classes of terrain.\n", "We also introduce an input injection method for augmenting an existing policy network to exploit new input features.\n", "Lastly, our method uses transfer learning to assist in the efficient acquisition of new skills.\n", "The combination of these methods allows a policy to be incrementally augmented with new skills.\n", "We compare our progressive learning and integration via distillation (PLAID) method\n", "against three alternative baselines." ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.07407406717538834, 0.13793103396892548, 0.12121211737394333, 0.05882352590560913, 0.24242423474788666, 0, 0.1818181723356247, 0.4375, 0.1875, 0.29629629850387573, 0 ]
B13njo1R-
[ "A continual learning method that uses distillation to combine expert policies and transfer learning to accelerate learning new skills." ]
[ "Deep Reinforcement Learning (DRL) has led to many recent breakthroughs on complex control tasks, such as defeating the best human player in the game of Go.", "However, decisions made by the DRL agent are not explainable, hindering its applicability in safety-critical settings.", "Viper, a recently proposed technique, constructs a decision tree policy by mimicking the DRL agent.", "Decision trees are interpretable as each action made can be traced back to the decision rule path that lead to it.", "However, one global decision tree approximating the DRL policy has significant limitations with respect to the geometry of decision boundaries.", "We propose MoET, a more expressive, yet still interpretable model based on Mixture of Experts, consisting of a gating function that partitions the state space, and multiple decision tree experts that specialize on different partitions.", "We propose a training procedure to support non-differentiable decision tree experts and integrate it into imitation learning procedure of Viper.", "We evaluate our algorithm on four OpenAI gym environments, and show that the policy constructed in such a way is more performant and better mimics the DRL agent by lowering mispredictions and increasing the reward.", "We also show that MoET policies are amenable for verification using off-the-shelf automated theorem provers such as Z3." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.05128204822540283, 0, 0.1428571343421936, 0.05882352590560913, 0.25, 0.22727271914482117, 0.3636363446712494, 0, 0.0624999962747097 ]
BJlxdCVKDB
[ "Explainable reinforcement learning model using novel combination of mixture of experts with non-differentiable decision tree experts." ]
[ "We consider the problem of unconstrained minimization of a smooth objective\n", "function in $\\mathbb{R}^d$ in setting where only function evaluations are possible.", "We propose and analyze stochastic zeroth-order method with heavy ball momentum.", "In particular, we propose, SMTP, a momentum version of the stochastic three-point method (STP) Bergou et al. (2019).", "We show new complexity results for non-convex, convex and strongly convex functions.", "We test our method on a collection of learning to continuous control tasks on several MuJoCo Todorov et al. (2012) environments with varying difficulty and compare against STP, other state-of-the-art derivative-free optimization algorithms and against policy gradient methods.", "SMTP significantly outperforms STP and all other methods that we considered in our numerical experiments.", "Our second contribution is SMTP with importance sampling which we call SMTP_IS.", "We provide convergence analysis of this method for non-convex, convex and strongly convex objectives." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.1428571343421936, 0, 0.3448275923728943, 0.1111111044883728, 0.20689654350280762, 0.30188679695129395, 0.060606054961681366, 0.19999998807907104, 0.12903225421905518 ]
HylAoJSKvH
[ "We develop and analyze a new derivative free optimization algorithm with momentum and importance sampling with applications to continuous control." ]
[ "Using class labels to represent class similarity is a typical approach to training deep hashing systems for retrieval; samples from the same or different classes take binary 1 or 0 similarity values.", "This similarity does not model the full rich knowledge of semantic relations that may be present between data points.", "In this work we build upon the idea of using semantic hierarchies to form distance metrics between all available sample labels; for example cat to dog has a smaller distance than cat to guitar.", "We combine this type of semantic distance into a loss function to promote similar distances between the deep neural network embeddings.", "We also introduce an empirical Kullback-Leibler divergence loss term to promote binarization and uniformity of the embeddings.", "We test the resulting SHREWD method and demonstrate improvements in hierarchical retrieval scores using compact, binary hash codes instead of real valued ones, and show that in a weakly supervised hashing setting we are able to learn competitively without explicitly relying on class labels, but instead on similarities between labels." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.260869562625885, 0.05405404791235924, 0.2083333283662796, 0.25641024112701416, 0.05714285373687744, 0.21875 ]
rJgqFi5rOV
[ "We propose a new method for training deep hashing for image retrieval using only a relational distance metric between samples" ]
[ "In this paper, we propose a novel approach to improve a given surface mapping through local refinement.", "The approach\n", "receives an established mapping between two surfaces and follows four phases:", "(i) inspection of the mapping and creation of a sparse\nset of landmarks in mismatching regions;", "(ii) segmentation with a low-distortion region-growing process based on flattening the\nsegmented parts;", "(iii) optimization of the deformation of segmented parts to align the landmarks in the planar parameterization domain;\nand", "(iv) aggregation of the mappings from segments to update the surface mapping.", "In addition, we propose a new method to deform the\n", "mesh in order to meet constraints (in our case, the landmark alignment of phase", "(iii)).", "We incrementally adjust the cotangent weights for\n", "the constraints and apply the deformation in a fashion that guarantees that the deformed mesh will be free of flipped faces and will have\n", "low conformal distortion.", "Our new deformation approach, Iterative Least Squares Conformal Mapping (ILSCM), outperforms other\n", "low-distortion deformation methods.", "The approach is general, and we tested it by improving the mappings from different existing surface\n", "mapping methods.", "We also tested its effectiveness by editing the mappings for a variety of 3D objects." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.5365853905677795, 0.0555555522441864, 0.20512820780277252, 0.15789473056793213, 0.14999999105930328, 0.1666666567325592, 0.4000000059604645, 0.3589743673801422, 0.125, 0.22727271914482117, 0, 0.05405404791235924, 0, 0.09756097197532654, 0.14999999105930328 ]
DNj0cjDNsP
[ "We propose a novel approach to improve a given cross-surface mapping through local refinement with a new iterative method to deform the mesh in order to meet user constraints." ]
[ "Understanding the groundbreaking performance of Deep Neural Networks is one\n", "of the greatest challenges to the scientific community today.", "In this work, we\n", "introduce an information theoretic viewpoint on the behavior of deep networks\n", "optimization processes and their generalization abilities.", "By studying the Information\n", "Plane, the plane of the mutual information between the input variable and\n", "the desired label, for each hidden layer.", "Specifically, we show that the training of\n", "the network is characterized by a rapid increase in the mutual information (MI)\n", "between the layers and the target label, followed by a longer decrease in the MI\n", "between the layers and the input variable.", "Further, we explicitly show that these\n", "two fundamental information-theoretic quantities correspond to the generalization\n", "error of the network, as a result of introducing a new generalization bound that is\n", "exponential in the representation compression.", "The analysis focuses on typical\n", "patterns of large-scale problems.", "For this purpose, we introduce a novel analytic\n", "bound on the mutual information between consecutive layers in the network.\n", "An important consequence of our analysis is a super-linear boost in training time\n", "with the number of non-degenerate hidden layers, demonstrating the computational\n", "benefit of the hidden layers." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.14814814925193787, 0.1599999964237213, 0, 0.7142857313156128, 0.52173912525177, 0.0952380895614624, 0.29629629850387573, 0.0833333283662796, 0.1666666567325592, 0.13793103396892548, 0.13333332538604736, 0.17391304671764374, 0, 0.1599999964237213, 0.19999998807907104, 0.09090908616781235, 0.09090908616781235, 0.0952380895614624, 0, 0.2142857164144516, 0.06666666269302368, 0.1538461446762085, 0.1818181723356247 ]
SkeL6sCqK7
[ "Introduce an information theoretic viewpoint on the behavior of deep networks optimization processes and their generalization abilities" ]
[ "We propose and study a method for learning interpretable representations for the task of regression.", "Features are represented as networks of multi-type expression trees comprised of activation functions common in neural networks in addition to other elementary functions.", "Differentiable features are trained via gradient descent, and the performance of features in a linear model is used to weight the rate of change among subcomponents of each representation.", "The search process maintains an archive of representations with accuracy-complexity trade-offs to assist in generalization and interpretation.", "We compare several stochastic optimization approaches within this framework.", "We benchmark these variants on 100 open-source regression problems in comparison to state-of-the-art machine learning approaches.", "Our main finding is that this approach produces the highest average test scores across problems while producing representations that are orders of magnitude smaller than the next best performing method (gradient boosting).", "We also report a negative result in which attempts to directly optimize the disentanglement of the representation result in more highly correlated features." ]
[ 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2857142686843872, 0.19999998807907104, 0.21739129722118378, 0.15789473056793213, 0, 0.10810810327529907, 0.07843136787414551, 0.19512194395065308 ]
Hke-JhA9Y7
[ "Representing the network architecture as a set of syntax trees and optimizing their structure leads to accurate and concise regression models. " ]
[ "Most distributed machine learning (ML) systems store a copy of the model parameters locally on each machine to minimize network communication.", "In practice, in order to reduce synchronization waiting time, these copies of the model are not necessarily updated in lock-step, and can become stale.", "Despite much development in large-scale ML, the effect of staleness on the learning efficiency is inconclusive, mainly because it is challenging to control or monitor the staleness in complex distributed environments.", "In this work, we study the convergence behaviors of a wide array of ML models and algorithms under delayed updates.", "Our extensive experiments reveal the rich diversity of the effects of staleness on the convergence of ML algorithms and offer insights into seemingly contradictory reports in the literature.", "The empirical findings also inspire a new convergence analysis of SGD in non-convex optimization under staleness, matching the best-known convergence rate of O(1/\\sqrt{T})." ]
[ 0, 0, 0, 0, 1, 0 ]
[ 0.2857142686843872, 0.21052631735801697, 0.2926829159259796, 0.29411762952804565, 0.42105263471603394, 0.1666666567325592 ]
BylQV305YQ
[ "Empirical and theoretical study of the effects of staleness in non-synchronous execution on machine learning algorithms." ]
[ "System identification is the process of building a mathematical model of an unknown system from measurements of its inputs and outputs.", "It is a key step for model-based control, estimator design, and output prediction.", "This work presents an algorithm for non-linear offline system identification from partial observations, i.e. situations in which the system's full-state is not directly observable.", "The algorithm presented, called SISL, iteratively infers the system's full state through non-linear optimization and then updates the model parameters.", "We test our algorithm on a simulated system of coupled Lorenz attractors, showing our algorithm's ability to identify high-dimensional systems that prove intractable for particle-based approaches.", "We also use SISL to identify the dynamics of an aerobatic helicopter.", "By augmenting the state with unobserved fluid states, we learn a model that predicts the acceleration of the helicopter better than state-of-the-art approaches." ]
[ 0, 0, 1, 0, 0, 0, 0 ]
[ 0.24242423474788666, 0.14814814925193787, 0.5641025304794312, 0.12121211737394333, 0.20512820780277252, 0, 0.05714285373687744 ]
B1gR3ANFPS
[ "This work presents a scalable algorithm for non-linear offline system identification from partial observations." ]
[ "Various gradient compression schemes have been proposed to mitigate the communication cost in distributed training of large scale machine learning models.", "Sign-based methods, such as signSGD (Bernstein et al., 2018), have recently been gaining popularity because of their simple compression rule and connection to adaptive gradient methods, like ADAM.", "In this paper, we perform a general analysis of sign-based methods for non-convex optimization.", "Our analysis is built on intuitive bounds on success probabilities and does not rely on special noise distributions nor on the boundedness of the variance of stochastic gradients.", "Extending the theory to distributed setting within a parameter server framework, we assure exponentially fast variance reduction with respect to number of nodes, maintaining 1-bit compression in both directions and using small mini-batch sizes.", "We validate our theoretical findings experimentally." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.052631575614213943, 0.04444443807005882, 0.3870967626571655, 0.4000000059604645, 0.03999999538064003, 0 ]
rkxNelrKPB
[ "General analysis of sign-based methods (e.g. signSGD) for non-convex optimization, built on intuitive bounds on success probabilities." ]
[ "Off-policy learning, the task of evaluating and improving policies using historic data collected from a logging policy, is important because on-policy evaluation is usually expensive and has adverse impacts.", "One of the major challenge of off-policy learning is to derive counterfactual estimators that also has low variance and thus low generalization error. \n", "In this work, inspired by learning bounds for importance sampling problems, we present a new counterfactual learning principle for off-policy learning with bandit feedbacks.Our method regularizes the generalization error by minimizing the distribution divergence between the logging policy and the new policy, and removes the need for iterating through all training samples to compute sample variance regularization in prior work.", "With neural network policies, our end-to-end training algorithms using variational divergence minimization showed significant improvement over conventional baseline algorithms and is also consistent with our theoretical results." ]
[ 0, 0, 1, 0 ]
[ 0.11999999731779099, 0.2666666507720947, 0.27397260069847107, 0.1249999925494194 ]
SyPMT6gAb
[ "For off-policy learning with bandit feedbacks, we propose a new variance regularized counterfactual learning algorithm, which has both theoretical foundations and superior empirical performance." ]
[ "We outline new approaches to incorporate ideas from deep learning into wave-based least-squares imaging.", "The aim, and main contribution of this work, is the combination of handcrafted constraints with deep convolutional neural networks, as a way to harness their remarkable ease of generating natural images.", "The mathematical basis underlying our method is the expectation-maximization framework, where data are divided in batches and coupled to additional \"latent\" unknowns.", "These unknowns are pairs of elements from the original unknown space (but now coupled to a specific data batch) and network inputs.", "In this setting, the neural network controls the similarity between these additional parameters, acting as a \"center\" variable.", "The resulting problem amounts to a maximum-likelihood estimation of the network parameters when the augmented data model is marginalized over the latent variables." ]
[ 0, 1, 0, 0, 0, 0 ]
[ 0.19999998807907104, 0.290909081697464, 0.2083333283662796, 0.2083333283662796, 0.09302324801683426, 0.1702127605676651 ]
Hyet2Q29IS
[ "We combine hard handcrafted constraints with a deep prior weak constraint to perform seismic imaging and reap information on the \"posterior\" distribution leveraging multiplicity in the data." ]
[ "When translating natural language questions into SQL queries to answer questions from a database, contemporary semantic parsing models struggle to generalize to unseen database schemas. ", "The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modeling alignment between database columns and their mentions in a given query. ", "We present a unified framework, based on the relation-aware self-attention mechanism,to address schema encoding, schema linking, and feature representation within a text-to-SQL encoder.", "On the challenging Spider dataset this framework boosts the exact match accuracy to 53.7%, compared to 47.4% for the previous state-of-the-art model unaugmented with BERT embeddings.", "In addition, we observe qualitative improvements in the model’s understanding of schema linking and alignment." ]
[ 0, 0, 0, 0, 1 ]
[ 0.04999999329447746, 0.17777776718139648, 0.15789473056793213, 0.0476190410554409, 0.25 ]
H1egcgHtvB
[ "State of the art in complex text-to-SQL parsing by combining hard and soft relational reasoning in schema/question encoding." ]
[ "As our experience shows, humans can learn and deploy a myriad of different skills to tackle the situations they encounter daily.", "Neural networks, in contrast, have a fixed memory capacity that prevents them from learning more than a few sets of skills before starting to forget them. \n", "In this work, we make a step to bridge neural networks with human-like learning capabilities.", "For this, we propose a model with a growing and open-bounded memory capacity that can be accessed based on the model’s current demands.", "To test this system, we introduce a continual learning task based on language modelling where the model is exposed to multiple languages and domains in sequence, without providing any explicit signal on the type of input it is currently dealing with.", "The proposed system exhibits improved adaptation skills in that it can recover faster than comparable baselines after a switch in the input language or domain." ]
[ 0, 0, 0, 0, 1, 0 ]
[ 0.1538461446762085, 0.1428571343421936, 0.21739129722118378, 0.3396226465702057, 0.52173912525177, 0.1090909019112587 ]
rJxoi1HtPr
[ "We introduce a continual learning setup based on language modelling where no explicit task segmentation signal is given and propose a neural network model with growing long term memory to tackle it." ]
[ "We propose to tackle a time series regression problem by computing temporal evolution of a probability density function to provide a probabilistic forecast.", "A Recurrent Neural Network (RNN) based model is employed to learn a nonlinear operator for temporal evolution of a probability density function.", "We use a softmax layer for a numerical discretization of a smooth probability density functions, which transforms a function approximation problem to a classification task.", "Explicit and implicit regularization strategies are introduced to impose a smoothness condition on the estimated probability distribution.", "A Monte Carlo procedure to compute the temporal evolution of the distribution for a multiple-step forecast is presented.", "The evaluation of the proposed algorithm on three synthetic and two real data sets shows advantage over the compared baselines." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.1666666567325592, 0.05405404791235924, 0.05405404791235924, 0.1818181723356247, 0.12121211737394333, 0.11428570747375488 ]
BkDB51WR-
[ "Proposed RNN-based algorithm to estimate predictive distribution in one- and multi-step forecasts in time series prediction problems" ]
[ "In cognitive systems, the role of a working memory is crucial for visual reasoning and decision making.", "Tremendous progress has been made in understanding the mechanisms of the human/animal working memory, as well as in formulating different frameworks of artificial neural networks. ", "In the case of humans, the visual working memory (VWM) task is a standard one in which the subjects are presented with a sequence of images, each of which needs to be identified as to whether it was already seen or not. \n\n", "Our work is a study of multiple ways to learn a working memory model using recurrent neural networks that learn to remember input images across timesteps.", "We train these neural networks to solve the working memory task by training them with a sequence of images in supervised and reinforcement learning settings.", "The supervised setting uses image sequences with their corresponding labels.", "The reinforcement learning setting is inspired by the popular view in neuroscience that the working memory in the prefrontal cortex is modulated by a dopaminergic mechanism.", "We consider the VWM task as an environment that rewards the agent when it remembers past information and penalizes it for forgetting. \n \n", "We quantitatively estimate the performance of these models on sequences of images from a standard image dataset (CIFAR-100).", "Further, we evaluate their ability to remember and recall as they are increasingly trained over episodes.", "Based on our analysis, we establish that a gated recurrent neural network model with long short-term memory units trained using reinforcement learning is powerful and more efficient in temporally consolidating the input spatial information. \n\n", "This work is an initial analysis as a part of our ultimate goal to use artificial neural networks to model the behavior and information processing of the working memory of the brain and to use brain imaging data captured from human subjects during the VWM cognitive task to understand various memory mechanisms of the brain. \n" ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.1463414579629898, 0.1304347813129425, 0.1666666567325592, 0.21276594698429108, 0.20408162474632263, 0, 0.35555556416511536, 0.08888888359069824, 0.04878048226237297, 0.09999999403953552, 0.2711864411830902, 0.1230769157409668 ]
Syl0NmtLIr
[ "LSTMs can more effectively model the working memory if they are learned using reinforcement learning, much like the dopamine system that modulates the memory in the prefrontal cortex " ]
[ "Nonlinearity is crucial to the performance of a deep (neural) network (DN).\n", "To date there has been little progress understanding the menagerie of available nonlinearities, but recently progress has been made on understanding the r\\^{o}le played by piecewise affine and convex nonlinearities like the ReLU and absolute value activation functions and max-pooling.\n", "In particular, DN layers constructed from these operations can be interpreted as {\\em max-affine spline operators} (MASOs) that have an elegant link to vector quantization (VQ) and $K$-means.\n", "While this is good theoretical progress, the entire MASO approach is predicated on the requirement that the nonlinearities be piecewise affine and convex, which precludes important activation functions like the sigmoid, hyperbolic tangent, and softmax.\n", "{\\em This paper extends the MASO framework to these and an infinitely large class of new nonlinearities by linking deterministic MASOs with probabilistic Gaussian Mixture Models (GMMs).", "}\n", "We show that, under a GMM, piecewise affine, convex nonlinearities like ReLU, absolute value, and max-pooling can be interpreted as solutions to certain natural ``hard'' VQ inference problems, while sigmoid, hyperbolic tangent, and softmax can be interpreted as solutions to corresponding ``soft'' VQ inference problems.\n", "We further extend the framework by hybridizing the hard and soft VQ optimizations to create a $\\beta$-VQ inference that interpolates between hard, soft, and linear VQ inference.\n", "A prime example of a $\\beta$-VQ DN nonlinearity is the {\\em swish} nonlinearity, which offers state-of-the-art performance in a range of computer vision tasks but was developed ad hoc by experimentation.\n", "Finally, we validate with experiments an important assertion of our theory, namely that DN performance can be significantly improved by enforcing orthogonality in its linear filters.\n" ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.14814814925193787, 0.08510638028383255, 0.1860465109348297, 0.08888888359069824, 0.09756097197532654, 0.11764705181121826, 0.10526315122842789, 0.045454539358615875, 0 ]
Syxt2jC5FX
[ "Reformulate deep networks nonlinearities from a vector quantization scope and bridge most known nonlinearities together." ]
[ "Engineered proteins offer the potential to solve many problems in biomedicine, energy, and materials science, but creating designs that succeed is difficult in practice.", "A significant aspect of this challenge is the complex coupling between protein sequence and 3D structure, and the task of finding a viable design is often referred to as the inverse protein folding problem.", "We develop generative models for protein sequences conditioned on a graph-structured specification of the design target.", "Our approach efficiently captures the complex dependencies in proteins by focusing on those that are long-range in sequence but local in 3D space.", "Our framework significantly improves upon prior parametric models of protein sequences given structure, and takes a step toward rapid and targeted biomolecular design with the aid of deep generative models." ]
[ 0, 0, 1, 0, 0 ]
[ 0.09999999403953552, 0.13333332538604736, 0.24242423474788666, 0.21052631735801697, 0.22727271914482117 ]
SJgxrLLKOE
[ "We learn to conditionally generate protein sequences given structures with a model that captures sparse, long-range dependencies." ]
[ "We provide a novel perspective on the forward pass through a block of layers in a deep network.", "In particular, we show that a forward pass through a standard dropout layer followed by a linear layer and a non-linear activation is equivalent to optimizing a convex objective with a single iteration of a $\\tau$-nice Proximal Stochastic Gradient method.", "We further show that replacing standard Bernoulli dropout with additive dropout is equivalent to optimizing the same convex objective with a variance-reduced proximal method.", "By expressing both fully-connected and convolutional layers as special cases of a high-order tensor product, we unify the underlying convex optimization problem in the tensor setting and derive a formula for the Lipschitz constant $L$ used to determine the optimal step size of the above proximal methods.", "We conduct experiments with standard convolutional networks applied to the CIFAR-10 and CIFAR-100 datasets and show that replacing a block of layers with multiple iterations of the corresponding solver, with step size set via $L$, consistently improves classification accuracy." ]
[ 0, 0, 0, 0, 1 ]
[ 0.1666666567325592, 0.11320754140615463, 0.0952380895614624, 0.16949151456356049, 0.18518517911434174 ]
ryxxCiRqYX
[ "A framework that links deep network layers to stochastic optimization algorithms; can be used to improve model accuracy and inform network design." ]
[ "Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases.", "Here, we present a method for training such networks, Learned Step Size Quantization, that achieves the highest accuracy to date on the ImageNet dataset when using models, from a variety of architectures, with weights and activations quantized to 2-, 3- or 4-bits of precision, and that can train 3-bit models that reach full precision baseline accuracy.", "Our approach builds upon existing methods for learning weights in quantized networks by improving how the quantizer itself is configured.", "Specifically, we introduce a novel means to estimate and scale the task loss gradient at each weight and activation layer's quantizer step size, such that it can be learned in conjunction with other network parameters.", "This approach works using different levels of precision as needed for a given system and requires only a simple modification of existing training code." ]
[ 0, 0, 1, 0, 0 ]
[ 0.21739129722118378, 0.2461538463830948, 0.2702702581882477, 0.07843136787414551, 0.1538461446762085 ]
rkgO66VKDS
[ "A method for learning quantization configuration for low precision networks that achieves state of the art performance for quantized networks." ]
[ "There have been several studies recently showing that strong natural language understanding (NLU) models are prone to relying on unwanted dataset biases without learning the underlying task, resulting in models which fail to generalize to out-of-domain datasets, and are likely to perform poorly in real-world scenarios.", "We propose several learning strategies to train neural models which are more robust to such biases and transfer better to out-of-domain datasets.", "We introduce an additional lightweight bias-only model which learns dataset biases and uses its prediction to adjust the loss of the base model to reduce the biases.", "In other words, our methods down-weight the importance of the biased examples, and focus training on hard examples, i.e. examples that cannot be correctly classified by only relying on biases.", "Our approaches are model agnostic and simple to implement. ", "We experiment on large-scale natural language inference and fact verification datasets and their out-of-domain datasets and show that our debiased models significantly improve the robustness in all settings, including gaining 9.76 points on the FEVER symmetric evaluation dataset, 5.45 on the HANS dataset and 4.78 points on the SNLI hard set. ", "These datasets are specifically designed to assess the robustness of models in the out-of-domain setting where typical biases in the training data do not exist in the evaluation set.\n" ]
[ 0, 1, 0, 0, 0, 0, 0 ]
[ 0.19354838132858276, 0.4285714328289032, 0.1818181723356247, 0.07999999821186066, 0.1249999925494194, 0.1818181723356247, 0.21276594698429108 ]
SJlCK1rYwB
[ "We propose several general debiasing strategies to address common biases seen in different datasets and obtain substantial improved out-of-domain performance in all settings." ]
[ "Reconstruction of few-view x-ray Computed Tomography (CT) data is a highly ill-posed problem.", "It is often used in applications that require low radiation dose in clinical CT, rapid industrial scanning, or fixed-gantry CT.", "Existing analytic or iterative algorithms generally produce poorly reconstructed images, severely deteriorated by artifacts and noise, especially when the number of x-ray projections is considerably low.", "This paper presents a deep network-driven approach to address extreme few-view CT by incorporating convolutional neural network-based inference into state-of-the-art iterative reconstruction.", "The proposed method interprets few-view sinogram data using attention-based deep networks to infer the reconstructed image.", "The predicted image is then used as prior knowledge in the iterative algorithm for final reconstruction.", "We demonstrate effectiveness of the proposed approach by performing reconstruction experiments on a chest CT dataset." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0.1538461446762085, 0.0624999962747097, 0, 0.34285715222358704, 0.13793103396892548, 0.13793103396892548, 0.27586206793785095 ]
B1g-h7398H
[ "We present a CNN inference-based reconstruction algorithm to address extremely few-view CT. " ]
[ "In open-domain dialogue intelligent agents should exhibit the use of knowledge, however there are few convincing demonstrations of this to date.", "The most popular sequence to sequence models typically “generate and hope” generic utterances that can be memorized in the weights of the model when mapping from input utterance(s) to output, rather than employing recalled knowledge as context.", "Use of knowledge has so far proved difficult, in part because of the lack of a supervised learning benchmark task which exhibits knowledgeable open dialogue with clear grounding.", "To that end we collect and release a large dataset with conversations directly grounded with knowledge retrieved from Wikipedia. ", "We then design architectures capable of retrieving knowledge, reading and conditioning on it, and finally generating natural responses.", "Our best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while our new benchmark allows for measuring further improvements in this important research direction." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.05882352590560913, 0, 0.19999998807907104, 0.12121211737394333, 0.19354838132858276, 0.1599999964237213 ]
r1l73iRqKm
[ "We build knowledgeable conversational agents by conditioning on Wikipedia + a new supervised task." ]
[ "We formulate a new problem at the intersection of semi-supervised learning and contextual bandits, motivated by several applications including clinical trials and dialog systems.", "We demonstrate how contextual bandit and graph convolutional networks can be adjusted to the new problem formulation.", "We then take the best of both approaches to develop multi-GCN embedded contextual bandit.", "Our algorithms are verified on several real world datasets." ]
[ 1, 0, 0, 0 ]
[ 0.17142856121063232, 0.06896550953388214, 0.07692307233810425, 0.0952380895614624 ]
HkgrQE7luV
[ "Synthesis of GCN and LINUCB algorithms for online learning with missing feedbacks" ]
[ " A collection of scientific papers is often accompanied by tags:\n keywords, topics, concepts etc., associated with each paper.\n ", "Sometimes these tags are human-generated, sometimes they are\n machine-generated. ", "We propose a simple measure of the consistency\n of the tagging of scientific papers: whether these tags are\n predictive for the citation graph links. ", "Since the authors tend to\n cite papers about the topics close to those of their publications, a\n consistent tagging system could predict citations. ", "We present an\n algorithm to calculate consistency, and experiments with human- and\n machine-generated tags. ", "We show that augmentation, i.e. the combination\n of the manual tags with the machine-generated ones, can enhance the\n consistency of the tags. ", "We further introduce cross-consistency,\n the ability to predict citation links between papers tagged by\n different taggers, e.g. manually and by a machine.\n ", "Cross-consistency can be used to evaluate the tagging quality when\n the amount of labeled data is limited." ]
[ 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.17142856121063232, 0.0833333283662796, 0.1666666567325592, 0.21621620655059814, 0.20689654350280762, 0.12121211737394333, 0.2631579041481018, 0.12903225421905518 ]
SyeD-b9T6m
[ "A good tagger gives similar tags to a given paper and the papers it cites" ]
[ "Recent research has intensively revealed the vulnerability of deep neural networks, especially for convolutional neural networks (CNNs) on the task of image recognition, through creating adversarial samples which `\"slightly\" differ from legitimate samples.", "This vulnerability indicates that these powerful models are sensitive to specific perturbations and cannot filter out these adversarial perturbations.", "In this work, we propose a quantization-based method which enables a CNN to filter out adversarial perturbations effectively.", "Notably, different from prior work on input quantization, we apply the quantization in the intermediate layers of a CNN.", "Our approach is naturally aligned with the clustering of the coarse-grained semantic information learned by a CNN.", "Furthermore, to compensate for the loss of information which is inevitably caused by the quantization, we propose the multi-head quantization, where we project data points to different sub-spaces and perform quantization within each sub-space.", "We enclose our design in a quantization layer named as the Q-Layer.", "The results obtained on MNIST and Fashion-MNSIT datasets demonstrate that only adding one Q-Layer into a CNN could significantly improve its robustness against both white-box and black-box attacks." ]
[ 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.07547169178724289, 0.19512194395065308, 0.4878048598766327, 0.0476190410554409, 0.19999998807907104, 0.11320754140615463, 0.1111111044883728, 0.039215680211782455 ]
SJe7mC4twH
[ "We propose a quantization-based method which regularizes a CNN's learned representations to be automatically aligned with trainable concept matrix hence effectively filtering out adversarial perturbations." ]
[ "Invariant and equivariant networks have been successfully used for learning images, sets, point clouds, and graphs.", "A basic challenge in developing such networks is finding the maximal collection of invariant and equivariant \\emph{linear} layers.", "Although this question is answered for the first three examples (for popular transformations, at-least), a full characterization of invariant and equivariant linear layers for graphs is not known. \n\n", "In this paper we provide a characterization of all permutation invariant and equivariant linear layers for (hyper-)graph data, and show that their dimension, in case of edge-value graph data, is $2$ and $15$, respectively.", "More generally, for graph data defined on $k$-tuples of nodes, the dimension is the $k$-th and $2k$-th Bell numbers.", "Orthogonal bases for the layers are computed, including generalization to multi-graph data.", "The constant number of basis elements and their characteristics allow successfully applying the networks to different size graphs.", "From the theoretical point of view, our results generalize and unify recent advancement in equivariant deep learning.", "In particular, we show that our model is capable of approximating any message passing neural network.\n\n", "Applying these new linear layers in a simple deep neural network framework is shown to achieve comparable results to state-of-the-art and to have better expressivity than previous invariant and equivariant bases.\n" ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.19354838132858276, 0.29411762952804565, 0.4651162624359131, 0.52173912525177, 0.29411762952804565, 0.2142857164144516, 0.1764705777168274, 0.1818181723356247, 0.060606054961681366, 0.2666666507720947 ]
Syx72jC9tm
[ "The paper provides a full characterization of permutation invariant and equivariant linear layers for graph data." ]
[ "In reinforcement learning, we can learn a model of future observations and rewards, and use it to plan the agent's next actions.", "However, jointly modeling future observations can be computationally expensive or even intractable if the observations are high-dimensional (e.g. images).", "For this reason, previous works have considered partial models, which model only part of the observation.", "In this paper, we show that partial models can be causally incorrect: they are confounded by the observations they don't model, and can therefore lead to incorrect planning.", "To address this, we introduce a general family of partial models that are provably causally correct, but avoid the need to fully model future observations." ]
[ 0, 0, 1, 0, 0 ]
[ 0.10526315122842789, 0.0555555522441864, 0.24242423474788666, 0.23255813121795654, 0.2380952388048172 ]
HyeG9yHKPr
[ "Causally correct partial models do not have to generate the whole observation to remain causally correct in stochastic environments." ]
[ "In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task.", "In this work, we investigate the efficiency of current lifelong approaches, in terms of sample complexity, computational and memory cost.", "Towards this end, we first introduce a new and a more realistic evaluation protocol, whereby learners observe each example only once and hyper-parameter selection is done on a small and disjoint set of tasks, which is not used for the actual learning experience and evaluation.", "Second, we introduce a new metric measuring how quickly a learner acquires a new skill.", "Third, we propose an improved version of GEM (Lopez-Paz & Ranzato, 2017), dubbed Averaged GEM (A-GEM), which enjoys the same or even better performance as GEM, while being almost as computationally and memory efficient as EWC (Kirkpatrick et al., 2016) and other regularization-based methods.", "Finally, we show that all algorithms including A-GEM can learn even more quickly if they are provided with task descriptors specifying the classification tasks under consideration.", "Our experiments on several standard lifelong learning benchmarks demonstrate that A-GEM has the best trade-off between accuracy and efficiency" ]
[ 0, 0, 0, 0, 0, 0, 1 ]
[ 0.1702127605676651, 0.14999999105930328, 0.10169491171836853, 0.060606054961681366, 0.16129031777381897, 0.08510638028383255, 0.3499999940395355 ]
Hkf2_sC5FX
[ "An efficient lifelong learning algorithm that provides a better trade-off between accuracy and time/ memory complexity compared to other algorithms. " ]
[ "Reduced precision computation is one of the key areas addressing the widening’compute gap’, driven by an exponential growth in deep learning applications.", "In recent years, deep neural network training has largely migrated to 16-bit precision,with significant gains in performance and energy efficiency.", "However, attempts to train DNNs at 8-bit precision have met with significant challenges, because of the higher precision and dynamic range requirements of back-propagation. ", "In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. ", "We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16)and a broader set of workloads (Resnet-18/34/50, GNMT, and Transformer) than previously reported. ", "We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point, to improve error propagation.", "We also examine the impact of quantization noise on generalization, and propose a stochastic rounding technique to address gradient noise.", "As a result of applying all these techniques, we report slightly higher validation accuracy compared to full precision baseline." ]
[ 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0, 0.05882352590560913, 0.05405404791235924, 0.21052631735801697, 0.21621620655059814, 0.1764705777168274, 0.060606054961681366, 0 ]
HJe88xBKPr
[ "We demonstrated state-of-the-art training results using 8-bit floating point representation, across Resnet, GNMT, Transformer." ]
[ "Loss functions play a crucial role in deep metric learning thus a variety of them have been proposed.", "Some supervise the learning process by pairwise or tripletwise similarity constraints while others take the advantage of structured similarity information among multiple data points.", "In this work, we approach deep metric learning from a novel perspective.", "We propose instance cross entropy (ICE) which measures the difference between an estimated instance-level matching distribution and its ground-truth one.", "ICE has three main appealing properties.", "Firstly, similar to categorical cross entropy (CCE), ICE has clear probabilistic interpretation and exploits structured semantic similarity information for learning supervision.", "Secondly, ICE is scalable to infinite training data as it learns on mini-batches iteratively and is independent of the training set size.", "Thirdly, motivated by our relative weight analysis, seamless sample reweighting is incorporated.", "It rescales samples’ gradients to control the differentiation degree over training examples instead of truncating them by sample mining.", "In addition to its simplicity and intuitiveness, extensive experiments on three real-world benchmarks demonstrate the superiority of ICE." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.04651162400841713, 0, 0.9756097793579102, 0, 0.1428571343421936, 0.09756097197532654, 0, 0.04999999329447746, 0.1538461446762085 ]
BJeguTEKDB
[ "We propose instance cross entropy (ICE) which measures the difference between an estimated instance-level matching distribution and its ground-truth one. " ]
[ "In model-based reinforcement learning, the agent interleaves between model learning and planning. ", "These two components are inextricably intertwined.", "If the model is not able to provide sensible long-term prediction, the executed planer would exploit model flaws, which can yield catastrophic failures.", "This paper focuses on building a model that reasons about the long-term future and demonstrates how to use this for efficient planning and exploration.", "To this end, we build a latent-variable autoregressive model by leveraging recent ideas in variational inference.", "We argue that forcing latent variables to carry future information through an auxiliary task substantially improves long-term predictions.", "Moreover, by planning in the latent space, the planner's solution is ensured to be within regions where the model is valid.", "An exploration strategy can be devised by searching for unlikely trajectories under the model.", "Our methods achieves higher reward faster compared to baselines on a variety of tasks and environments in both the imitation learning and model-based reinforcement learning settings." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.17142856121063232, 0, 0.1860465109348297, 0.2666666507720947, 0.052631575614213943, 0.29999998211860657, 0.25, 0.1111111044883728, 0.1304347813129425 ]
SkgQBn0cF7
[ "incorporating, in the model, latent variables that encode future content improves the long-term prediction accuracy, which is critical for better planning in model-based RL." ]
[ "Detecting anomalies is of growing importance for various industrial applications and mission-critical infrastructures, including satellite systems.", "Although there have been several studies in detecting anomalies based on rule-based or machine learning-based approaches for satellite systems, a tensor-based decomposition method has not been extensively explored for anomaly detection.", "In this work, we introduce an Integrative Tensor-based Anomaly Detection (ITAD) framework to detect anomalies in a satellite system.", "Because of the high risk and cost, detecting anomalies in a satellite system is crucial.", "We construct 3rd-order tensors with telemetry data collected from Korea Multi-Purpose Satellite-2 (KOMPSAT-2) and calculate the anomaly score using one of the component matrices obtained by applying CANDECOMP/PARAFAC decomposition to detect anomalies.", "Our result shows that our tensor-based approach can be effective in achieving higher accuracy and reducing false positives in detecting anomalies as compared to other existing approaches." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.1599999964237213, 0.15789473056793213, 0.5, 0.25, 0, 0 ]
HJeg46EKPr
[ "Integrative Tensor-based Anomaly Detection(ITAD) framework for a satellite system." ]
[ "Many real world tasks exhibit rich structure that is repeated across different parts of the state space or in time.", "In this work we study the possibility of leveraging such repeated structure to speed up and regularize learning.", "We start from the KL regularized expected reward objective which introduces an additional component, a default policy.", "Instead of relying on a fixed default policy, we learn it from data.", "But crucially, we restrict the amount of information the default policy receives, forcing it to learn reusable behaviors that help the policy learn faster.", "We formalize this strategy and discuss connections to information bottleneck approaches and to the variational EM algorithm.", "We present empirical results in both discrete and continuous action domains and demonstrate that, for certain tasks, learning a default policy alongside the policy can significantly speed up and improve learning.\n", "Please watch the video demonstrating learned experts and default policies on several continuous control tasks ( https://youtu.be/U2qA3llzus8 )." ]
[ 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.1428571343421936, 0.09999999403953552, 0.20512819290161133, 0.11428570747375488, 0.1904761791229248, 0.1621621549129486, 0.35999998450279236, 0.1463414579629898 ]
S1lqMn05Ym
[ "Limiting state information for the default policy can improvement performance, in a KL-regularized RL framework where both agent and default policy are optimized together" ]
[ "When an image classifier makes a prediction, which parts of the image are relevant and why?", "We can rephrase this question to ask: which parts of the image, if they were not seen by the classifier, would most change its decision?", "Producing an answer requires marginalizing over images that could have been seen but weren't.", "We can sample plausible image in-fills by conditioning a generative model on the rest of the image.", "We then optimize to find the image regions that most change the classifier's decision after in-fill.", "Our approach contrasts with ad-hoc in-filling approaches, such as blurring or injecting noise, which generate inputs far from the data distribution, and ignore informative relationships between different parts of the image.", "Our method produces more compact and relevant saliency maps, with fewer artifacts compared to previous methods." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0.05128204822540283, 0.1249999925494194, 0.10526315122842789, 0.307692289352417, 0.1538461446762085, 0, 0.09999999403953552 ]
B1MXz20cYQ
[ "We compute saliency by using a strong generative model to efficiently marginalize over plausible alternative inputs, revealing concentrated pixel areas that preserve label information." ]
[ "This paper presents the Variation Network (VarNet), a generative model providing means to manipulate the high-level attributes of a given input.", "The originality of our approach is that VarNet is not only capable of handling pre-defined attributes but can also learn the relevant attributes of the dataset by itself. ", "These two settings can be easily combined which makes VarNet applicable for a wide variety of tasks.", "Further, VarNet has a sound probabilistic interpretation which grants us with a novel way to navigate in the latent spaces as well as means to control how the attributes are learned.", "We demonstrate experimentally that this model is capable of performing interesting input manipulation and that the learned attributes are relevant and interpretable." ]
[ 1, 0, 0, 0, 0 ]
[ 0.4285714328289032, 0.25531914830207825, 0.19999998807907104, 0.11999999731779099, 0.2790697515010834 ]
ryfaViR9YX
[ "The Variation Network is a generative model able to learn high-level attributes without supervision that can then be used for controlled input manipulation." ]
[ "Despite alarm over the reliance of machine learning systems on so-called spurious patterns in training data, the term lacks coherent meaning in standard statistical frameworks.", "However, the language of causality offers clarity: spurious associations are those due to a common cause (confounding) vs direct or indirect effects.", "In this paper, we focus on NLP, introducing methods and resources for training models insensitive to spurious patterns.", "Given documents and their initial labels, we task humans with revise each document to accord with a counterfactual target label, asking that the revised documents be internally coherent while avoiding any gratuitous changes.", "Interestingly, on sentiment analysis and natural language inference tasks, classifiers trained on original data fail on their counterfactually-revised counterparts and vice versa.", "Classifiers trained on combined datasets perform remarkably well, just shy of those specialized to either domain.", "While classifiers trained on either original or manipulated data alone are sensitive to spurious features (e.g., mentions of genre), models trained on the combined data are insensitive to this signal.", "We will publicly release both datasets." ]
[ 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.2380952388048172, 0.19512194395065308, 0.1621621549129486, 0.3199999928474426, 0.052631575614213943, 0.11428570747375488, 0.17391303181648254, 0 ]
Sklgs0NFvr
[ "Humans in the loop revise documents to accord with counterfactual labels, resulting resource helps to reduce reliance on spurious associations." ]
[ "Among multiple ways of interpreting a machine learning model, measuring the importance of a set of features tied to a prediction is probably one of the most intuitive way to explain a model.", "In this paper, we establish the link between a set of features to a prediction with a new evaluation criteria, robustness analysis, which measures the minimum tolerance of adversarial perturbation.", "By measuring the tolerance level for an adversarial attack, we can extract a set of features that provides most robust support for a current prediction, and also can extract a set of features that contrasts the current prediction to a target class by setting a targeted adversarial attack.", "By applying this methodology to various prediction tasks across multiple domains, we observed the derived explanations are indeed capturing the significant feature set qualitatively and quantitatively." ]
[ 0, 1, 0, 0 ]
[ 0.145454540848732, 0.3214285671710968, 0.21875, 0.2181818187236786 ]
Hye4KeSYDr
[ "We propose new objective measurement for evaluating explanations based on the notion of adversarial robustness. The evaluation criteria further allows us to derive new explanations which capture pertinent features qualitatively and quantitatively." ]
[ "Generative adversarial networks (GANs) have been shown to provide an effective way to model complex distributions and have obtained impressive results on various challenging tasks.", "However, typical GANs require fully-observed data during training.", "In this paper, we present a GAN-based framework for learning from complex, high-dimensional incomplete data.", "The proposed framework learns a complete data generator along with a mask generator that models the missing data distribution.", "We further demonstrate how to impute missing data by equipping our framework with an adversarially trained imputer.", "We evaluate the proposed framework using a series of experiments with several types of missing data processes under the missing completely at random assumption." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0, 0.09090908616781235, 0.6206896305084229, 0.3333333134651184, 0.12903225421905518, 0.22857142984867096 ]
S1lDV3RcKm
[ "This paper presents a GAN-based framework for learning the distribution from high-dimensional incomplete data." ]