,Clean_Title,Clean_Text,Clean_Summary 0,Critical Points of Linear Neural Networks: Analytical Forms and Landscape Properties,"Due to the success of deep learning in solving a variety of challenging machine learning tasks, there is a rising interest in understanding loss functions for training neural networks from a theoretical perspective. Particularly, the properties of critical points and the landscape around them are important for determining the convergence performance of optimization algorithms. In this paper, we provide a necessary and sufficient characterization of the analytical forms for the critical points of the square loss functions for linear neural networks. We show that the analytical forms of the critical points characterize the values of the corresponding loss functions as well as the necessary and sufficient conditions to achieve the global minimum. Furthermore, we exploit the analytical forms of the critical points to characterize the landscape properties for the loss functions of linear neural networks and shallow ReLU networks. One particular conclusion is that while the loss function of linear networks has no spurious local minimum, the loss function of one-hidden-layer nonlinear networks with ReLU activation function does have a local minimum that is not a global minimum.","We provide necessary and sufficient analytical forms for the critical points of the square loss functions for various neural networks, and exploit the analytical forms to characterize the landscape properties for the loss functions of these neural networks." 1,Biologically-Plausible Learning Algorithms Can Scale to Large Datasets,"The backpropagation algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem”, two biologically-plausible algorithms, proposed by Liao et al. and Lillicrap et al., relax BP’s weight symmetry requirements and demonstrate comparable learning capabilities to those of BP on small datasets. However, a recent study by Bartunov et al. finds that although feedback alignment and some variants of target-propagation perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet. Here, we additionally evaluate the sign-symmetry algorithm, which differs from both BP and FA in that the feedback and feedforward weights do not share magnitudes but share signs. We examined the performance of sign-symmetry and feedback alignment on the ImageNet and MS COCO datasets using different network architectures. Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks. These results complement the study by Bartunov et al. 
and establish a new benchmark for future biologically-plausible learning algorithms on more difficult datasets and more complex architectures.","Biologically plausible learning algorithms, particularly sign-symmetry, work well on ImageNet" 2,Logic and the 2-Simplicial Transformer,"We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors.We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning.",We introduce the 2-simplicial Transformer and show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning. 3,Long-term Forecasting using Tensor-Train RNNs,"We present Tensor-Train RNN, a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics.Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations and sensitivity to error propagation.Our proposed tensor recurrent architecture addresses these issues by learning the nonlinear dynamics directly using higher order moments and high-order state transition functions.Furthermore, we decompose the higher-order structure using the tensor-train decomposition to reduce the number of parameters while preserving the model performance.We theoretically establish the approximation properties of Tensor-Train RNNs for general sequence inputs, and such guarantees are not available for usual RNNs.We also demonstrate significant long-term prediction improvements over general RNN and LSTM architectures on a range of simulated environments with nonlinear dynamics, as well on real-world climate and traffic data.",Accurate forecasting over very long time horizons using tensor-train RNNs 4,Variational Message Passing with Structured Inference Networks,"Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret.We propose a variational message-passing algorithm for variational inference in such models.We make three contributions.First, we propose structured inference networks that incorporate the structure of the graphical model in the inference network of variational auto-encoders.Second, we establish conditions under which such inference networks enable fast amortized inference similar to VAE.Finally, we derive a variational message passing algorithm to perform efficient natural-gradient inference while retaining the efficiency of the amortized inference.By simultaneously enabling structured, amortized, and natural-gradient inference for deep structured models, our method simplifies and generalizes existing methods.",We propose a variational message-passing algorithm for models that contain both the deep model and probabilistic graphical model. 
5,Adaptive Mixture of Low-Rank Factorizations for Compact Neural Modeling,"Modern deep neural networks have a large amount of weights, which make them difficult to deploy on computation constrained devices such as mobile phones.One common approach to reduce the model size and computational cost is to use low-rank factorization to approximate a weight matrix.However, performing standard low-rank factorization with a small rank can hurt the model expressiveness and significantly decrease the performance.In this work, we propose to use a mixture of multiple low-rank factorizations to model a large weight matrix, and the mixture coefficients are computed dynamically depending on its input.We demonstrate the effectiveness of the proposed approach on both language modeling and image classification tasks.Experiments show that our method not only improves the computation efficiency but also maintains its accuracy compared with the full-rank counterparts.",A simple modification to low-rank factorization that improves performances (in both image and language tasks) while still being compact. 6,Progressive Compressed Records: Taking a Byte Out of Deep Learning Data,"Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices.We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records.PCRs deviate from previous formats by leveraging progressive compression to split each training example into multiple examples of increasingly higher fidelity, without adding to the total data size.Training examples of similar fidelity are grouped together, which reduces both the system overhead and data bandwidth needed to train a model.We show that models can be trained on aggressively compressed representations of the training data and still retain high accuracy, and that PCRs can enable a 2x speedup on average over baseline formats using JPEG compression.Our results hold across deep learning architectures for a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ.","We propose a simple, general, and space-efficient data format to accelerate deep learning training by allowing sample fidelity to be dynamically selected at training time" 7,ROBUST DISCRIMINATIVE REPRESENTATION LEARNING VIA GRADIENT RESCALING: AN EMPHASIS REGULARISATION PERSPECTIVE,"It is fundamental and challenging to train robust and accurate Deep Neural Networks when semantically abnormal examples exist.Although great progress has been made, there is still one crucial research question which is not thoroughly explored yet: What training examples should be focused and how much more should they be emphasised to achieve robust learning?In this work, we study this question and propose gradient rescaling to solve it.GR modifies the magnitude of logit vector’s gradient to emphasise on relatively easier training data points when noise becomes more severe, which functions as explicit emphasis regularisation to improve the generalisation performance of DNNs.Apart from regularisation, we connect GR to examples weighting and designing robust loss functions.We empirically demonstrate that GR is highly anomaly-robust and outperforms the state-of-the-art by a large margin, e.g., increasing 7% on CIFAR100 with 40% noisy labels.It is also significantly superior to standard regularisers in both clean and abnormal settings.Furthermore, we present comprehensive 
ablation studies to explore the behaviours of GR under different cases, which is informative for applying GR in real-world scenarios.",ROBUST DISCRIMINATIVE REPRESENTATION LEARNING VIA GRADIENT RESCALING: AN EMPHASIS REGULARISATION PERSPECTIVE 8,Optimizing the Latent Space of Generative Networks,"Generative Adversarial Networks have achieved remarkable results in the task of generating realistic natural images.In most applications, GAN models share two aspects in common.On the one hand, GANs training involves solving a challenging saddle point optimization problem, interpreted as an adversarial game between a generator and a discriminator functions.On the other hand, the generator and the discriminator are parametrized in terms of deep convolutional neural networks.The goal of this paper is to disentangle the contribution of these two factors to the success of GANs.In particular, we introduce Generative Latent Optimization, a framework to train deep convolutional generators without using discriminators, thus avoiding the instability of adversarial optimization problems.Throughout a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: learning from large data, synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors.",Are GANs successful because of adversarial training or the use of ConvNets? We show a ConvNet generator trained with a simple reconstruction loss and learnable noise vectors leads many of the desirable properties of a GAN. 9,Dynamically Balanced Value Estimates for Actor-Critic Methods,"Reinforcement learning in an actor-critic setting relies on accurate value estimates of the critic.However, the combination of function approximation, temporal difference learning and off-policy training can lead to an overestimating value function.A solution is to use Clipped Double Q-learning, which is used in the TD3 algorithm and computes the minimum of two critics in the TD-target.We show that CDQ induces an underestimation bias and propose a new algorithm that accounts for this by using a weighted average of the target from CDQ and the target coming from a single critic.The weighting parameter is adjusted during training such that the value estimates match the actual discounted return on the most recent episodes and by that it balances over- and underestimation.Empirically, we obtain more accurate value estimates and demonstrate state of the art results on several OpenAI gym tasks.",A method for more accurate critic estimates in reinforcement learning. 
10,A Systematic Framework for Natural Perturbations from Videos,"We introduce a systematic framework for quantifying the robustness of classifiers to naturally occurring perturbations of images found in videos.As part of this framework, we construct ImageNet-Vid-Robust, a human-expert--reviewed dataset of 22,668 images grouped into 1,145 sets of perceptually similar images derived from frames in the ImageNet Video Object Detection dataset.We evaluate a diverse array of classifiers trained on ImageNet, including models trained for robustness, and show a median classification accuracy drop of 16%.Additionally, we evaluate the Faster R-CNN and R-FCN models for detection, and show that natural perturbations induce both classification as well as localization errors, leading to a median drop in detection mAP of 14 points.Our analysis shows that natural perturbations in the real world are heavily problematic for current CNNs, posing a significant challenge to their deployment in safety-critical environments that require reliable, low-latency predictions.",We introduce a systematic framework for quantifying the robustness of classifiers to naturally occurring perturbations of images found in videos. 11,SuperTML: Two-Dimensional Word Embedding and Transfer Learning Using ImageNet Pretrained CNN Models for the Classifications on Tabular Data,"Structured tabular data is the most commonly used form of data in industry according to a Kaggle ML and DS Survey.Gradient Boosting Trees, Support Vector Machine, Random Forest, and Logistic Regression are typically used for classification tasks on tabular data.The recent work of Super Characters method using two-dimensional word embedding achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach.In this paper, we propose the SuperTML method, which borrows the idea of Super Characters method and two-dimensional embedding to address the problem of classification on tabular data.For each input of tabular data, the features are first projected into two-dimensional embedding like an image, and then this image is fed into fine-tuned ImageNet CNN models for classification.Experimental results have shown that the proposed SuperTML method have achieved state-of-the-art results on both large and small datasets.",Deep learning for structured tabular data machine learning using pre-trained CNN model from ImageNet. 
12,PatchFormer: A neural architecture for self-supervised representation learning on images,"Learning rich representations from predictive learning without labels has been a longstanding challenge in the field of machine learning. Generative pre-training has so far not been as successful as contrastive methods in modeling representations of raw images. In this paper, we propose a neural architecture for self-supervised representation learning on raw images called the PatchFormer, which learns to model spatial dependencies across patches in a raw image. Our method learns to model the conditional probability distribution of missing patches given the context of surrounding patches. We evaluate the utility of the learned representations by fine-tuning the pre-trained model on low data-regime classification tasks. Specifically, we benchmark our model on semi-supervised ImageNet classification, which has recently become a popular benchmark for semi-supervised and self-supervised learning methods. Our model is able to achieve 30.3% and 65.5% top-1 accuracies when trained using only 1% and 10% of the labels on ImageNet, showing the promise of generative pre-training methods.",Decoding pixels can still work for representation learning on images 13,The Case for Full-Matrix Adaptive Regularization,"Adaptive regularization methods pre-multiply a descent direction by a preconditioning matrix. Due to the large number of parameters of machine learning problems, full-matrix preconditioning methods are prohibitively expensive. We show how to modify full-matrix adaptive regularization in order to make it practical and effective. We also provide novel theoretical analysis for adaptive regularization in non-convex optimization settings. The core of our algorithm, termed GGT, consists of efficient inverse computation of square roots of low-rank matrices. Our preliminary experiments underscore the improved convergence rate of GGT across a variety of synthetic tasks and standard deep learning benchmarks.","fast, truly scalable full-matrix AdaGrad/Adam, with theory for adaptive stochastic non-convex optimization" 14,Attention over Parameters for Dialogue Systems,"Dialogue systems require a great deal of different but complementary expertise to assist, inform, and entertain humans. For example, different domains of goal-oriented dialogue systems can be viewed as different skills, as can the ordinary chatting abilities of chit-chat dialogue systems. In this paper, we propose to learn a dialogue system that independently parameterizes different dialogue skills, and learns to select and combine each of them through Attention over Parameters. The experimental results show that this approach achieves competitive performance on a combined dataset of MultiWOZ, In-Car Assistant, and Persona-Chat. Finally, we demonstrate that each dialogue skill is effectively learned and can be combined with other skills to produce selective responses.","In this paper, we propose to learn a dialogue system that independently parameterizes different dialogue skills, and learns to select and combine each of them through Attention over Parameters (AoP). 
" 15,Dataset Distillation,"Model distillation aims to distill the knowledge of a complex model into a simpler one.In this paper, we consider an alternative formulation called dataset distillation: we keep the model fixed and instead attempt to distill the knowledge from a large training dataset into a small one.The idea is to synthesize a small number of data points that do not need to come from the correct data distribution, but will, when given to the learning algorithm as training data, approximate the model trained on the original data.For example, we show that it is possible to compress 60,000 MNIST training images into just 10 synthetic distilled images and achieve close to the original performance, given a fixed network initialization.We evaluate our method in various initialization settings. Experiments on multiple datasets, MNIST, CIFAR10, PASCAL-VOC, and CUB-200, demonstrate the ad-vantage of our approach compared to alternative methods. Finally, we include a real-world application of dataset distillation to the continual learning setting: we show that storing distilled images as episodic memory of previous tasks can alleviate forgetting more effectively than real images.",We propose to distill a large dataset into a small set of synthetic data that can train networks close to original performance. 16,TRAINING GENERATIVE ADVERSARIAL NETWORKS VIA PRIMAL-DUAL SUBGRADIENT METHODS: A LAGRANGIAN PERSPECTIVE ON GAN,"We relate the minimax game of generative adversarial networks to finding the saddle points of the Lagrangian function for a convex optimization problem, where the discriminator outputs and the distribution of generator outputs play the roles of primal variables and dual variables, respectively.This formulation shows the connection between the standard GAN training process and the primal-dual subgradient methods for convex optimization.The inherent connection does not only provide a theoretical convergence proof for training GANs in the function space, but also inspires a novel objective function for training.The modified objective function forces the distribution of generator outputs to be updated along the direction according to the primal-dual subgradient methods.A toy example shows that the proposed method is able to resolve mode collapse, which in this case cannot be avoided by the standard GAN or Wasserstein GAN.Experiments on both Gaussian mixture synthetic data and real-world image datasets demonstrate the performance of the proposed method on generating diverse samples.",We propose a primal-dual subgradient method for training GANs and this method effectively alleviates mode collapse. 
17,Irrationality can help reward inference,"Specifying reward functions is difficult, which motivates the area of reward inference: learning rewards from human behavior. The starting assumption in the area is that human behavior is optimal given the desired reward function, but in reality people have many different forms of irrationality, from noise to myopia to risk aversion and beyond. This fact seems like it will be strictly harmful to reward inference: it is already hard to infer the reward from rational behavior, and noise and systematic biases make actions bear a less direct relationship to the reward. Our insight in this work is that, contrary to expectations, irrationality can actually help rather than hinder reward inference. For some types and amounts of irrationality, the expert now produces more varied policies compared to rational behavior, which help disambiguate among different reward parameters -- those that otherwise correspond to the same rational behavior. We put this to the test in a systematic analysis of the effect of irrationality on reward inference. We start by covering the space of irrationalities as deviations from the Bellman update, simulate expert behavior, and measure the accuracy of inference to contrast the different types and study the gains and losses. We provide a mutual information-based analysis of our findings, and wrap up by discussing the need to accurately model irrationality, as well as to what extent we might expect real people to exhibit helpful irrationalities when teaching rewards to learners.",We find that irrationality from an expert demonstrator can help a learner infer their preferences. 18,Models in the Wild: On Corruption Robustness of NLP Systems,"Natural Language Processing models lack a unified approach to robustness testing. In this paper we introduce WildNLP - a framework for testing model stability in a natural setting where text corruptions such as keyboard errors or misspellings occur. We compare the robustness of models from 4 popular NLP tasks: Q&A, NLI, NER and Sentiment Analysis by testing their performance on aspects introduced in the framework. In particular, we focus on a comparison between recent state-of-the-art text representations and non-contextualized word embeddings. In order to improve robustness, we perform adversarial training on selected aspects and check its transferability to the improvement of models with various corruption types. We find that the high performance of models does not ensure sufficient robustness, although modern embedding techniques help to improve it. We release corrupted datasets and code for the WildNLP framework for the community.","We compare robustness of models from 4 popular NLP tasks: Q&A, NLI, NER and Sentiment Analysis by testing their performance on perturbed inputs." 19,Curriculum Learning for Deep Generative Models with Clustering,"Training generative models like Generative Adversarial Networks is challenging for noisy data. A novel curriculum learning algorithm pertaining to clustering is proposed to address this issue in this paper. The curriculum construction is based on the centrality of underlying clusters in data points. 
Data points of high centrality take priority in being fed into generative models during training. To make our algorithm scalable to large-scale data, an active set is devised, in the sense that every round of training proceeds only on an active subset containing a small fraction of already trained data and the incremental data of lower centrality. Moreover, a geometric analysis is presented to interpret the necessity of the cluster curriculum for generative models. The experiments on cat and human-face data validate that our algorithm is able to learn the optimal generative models with respect to specified quality metrics for noisy data. An interesting finding is that the optimal cluster curriculum is closely related to the critical point of the geometric percolation process formulated in the paper.",A novel cluster-based algorithm of curriculum learning is proposed to solve the robust training of generative models. 20,DBA: Distributed Backdoor Attacks against Federated Learning,"Backdoor attacks aim to manipulate a subset of training data by injecting adversarial triggers such that machine learning models trained on the tampered dataset will make arbitrarily incorrect predictions on the test set with the same trigger embedded. While federated learning is capable of aggregating information provided by different parties for training a better model, its distributed learning methodology and inherently heterogeneous data distribution across parties may bring new vulnerabilities. In addition to recent centralized backdoor attacks on FL where each party embeds the same global trigger during training, we propose the distributed backdoor attack --- a novel threat assessment framework developed by fully exploiting the distributed nature of FL. DBA decomposes a global trigger pattern into separate local patterns and embeds them into the training sets of different adversarial parties respectively. Compared to standard centralized backdoors, we show that DBA is substantially more persistent and stealthy against FL on diverse datasets such as finance and image data. We conduct extensive experiments to show that the attack success rate of DBA is significantly higher than that of centralized backdoors under different settings. Moreover, we find that distributed attacks are indeed more insidious, as DBA can evade two state-of-the-art robust FL algorithms against centralized backdoors. We also provide explanations for the effectiveness of DBA via feature visual interpretation and feature importance ranking. To further explore the properties of DBA, we test the attack performance by varying different trigger factors, including local trigger variations, scaling factor in FL, data distribution, and poison ratio and interval. Our proposed DBA and thorough evaluation results shed light on characterizing the robustness of FL.","We propose a novel distributed backdoor attack on federated learning and show that it is not only more effective compared with standard centralized attacks, but also harder to defend against with existing robust FL methods" 21,Label Propagation Networks,"Graph networks have recently attracted considerable interest, particularly in the context of semi-supervised learning. These methods typically work by generating node representations that are propagated throughout a given weighted graph. Here we argue that for semi-supervised learning, it is more natural to consider propagating labels in the graph instead. Towards this end, we propose a differentiable neural version of the classic Label Propagation 
algorithm. This formulation can be used for learning edge weights, unlike other methods where weights are set heuristically. Starting from a layer implementing a single iteration of LP, we proceed by adding several important non-linear steps that significantly enhance the label-propagating mechanism. Experiments in two distinct settings demonstrate the utility of our approach.",Neural net for graph-based semi-supervised learning; revisits the classics and propagates *labels* rather than feature representations 22,Neural Architecture Search for Natural Language Understanding,"Neural architecture search has made rapid progress in computer vision, whereby new state-of-the-art results have been achieved in a series of tasks with automatically searched neural network architectures. In contrast, NAS has not made comparable advances in natural language understanding. Corresponding to the encoder-aggregator meta architecture of typical neural network models for NLU tasks, we re-define the search space by splitting it into two parts: an encoder search space and an aggregator search space. The encoder search space contains basic operations such as convolutions, RNNs, multi-head attention and its sparse variants, and star-transformers. Dynamic routing is included in the aggregator search space, along with max pooling and self-attention pooling. Our search is then carried out via DARTS, a differentiable neural architecture search framework. We progressively reduce the search space every few epochs, which further reduces the search time and resource costs. Experiments on five benchmark datasets show that the new neural networks we generate can achieve performance comparable to state-of-the-art models that do not involve language model pre-training.",Neural Architecture Search for a series of Natural Language Understanding tasks. We design the search space for NLU tasks and apply differentiable architecture search to discover new models 23,EvalNE: A Framework for Evaluating Network Embeddings on Link Prediction,"Network embedding methods aim to learn low-dimensional representations of network nodes as vectors, typically in Euclidean space. These representations are then used for a variety of downstream prediction tasks. Link prediction is one of the most popular choices for assessing the performance of NE methods. However, the complexity of link prediction requires a carefully designed evaluation pipeline to provide consistent, reproducible and comparable results. We argue this has not been considered sufficiently in recent works. The main goal of this paper is to overcome difficulties associated with evaluation pipelines and reproducibility of results. We introduce EvalNE, an evaluation framework to transparently assess and compare the performance of NE methods on link prediction. EvalNE provides automation and abstraction for tasks such as hyper-parameter tuning, model validation, edge sampling, and computation of edge embeddings. The framework integrates efficient procedures for edge and non-edge sampling and can be used to easily evaluate any off-the-shelf embedding method. The framework is freely available as a Python toolbox. Finally, demonstrating the usefulness of EvalNE in practice, we conduct an empirical study in which we try to replicate and analyse experimental sections of several influential papers.","In this paper we introduce EvalNE, a Python toolbox for automating the evaluation of network embedding methods on link prediction and ensuring the reproducibility of results." 
24,No Spurious Local Minima in a Two Hidden Unit ReLU Network,"Deep learning models can be efficiently optimized via stochastic gradient descent, but there is little theoretical evidence to support this. A key question in optimization is to understand when the optimization landscape of a neural network is amenable to gradient-based optimization. We focus on a simple two-layer ReLU network with two hidden units, and show that all local minimizers are global. This, combined with recent work of Lee et al. and Lee et al., shows that gradient descent converges to the global minimizer.","Recovery guarantee of stochastic gradient descent with random initialization for learning a two-layer neural network with two hidden nodes, unit-norm weights, ReLU activation functions and Gaussian inputs." 25,Jumpout: Improved Dropout for Deep Neural Networks with Rectified Linear Units,"Dropout is a simple yet effective technique to improve generalization performance and prevent overfitting in deep neural networks. In this paper, we discuss three novel observations about dropout to better understand the generalization of DNNs with rectified linear unit activations: 1) dropout is a smoothing technique that encourages each local linear model of a DNN to be trained on data points from nearby regions; 2) a constant dropout rate can result in effective neural-deactivation rates that are significantly different for layers with different fractions of activated neurons; and 3) the rescaling factor of dropout causes an inconsistency to occur between the normalization during training and testing conditions when batch normalization is also used. The above leads to three simple but nontrivial improvements to dropout, resulting in our proposed method ""Jumpout"". Jumpout samples the dropout rate using a monotone decreasing distribution, so the local linear model at each data point is trained, with high probability, to work better for data points from nearby than from more distant regions. Instead of tuning a dropout rate for each layer and applying it to all samples, jumpout moreover adaptively normalizes the dropout rate at each layer and every training sample/batch, so the effective dropout rate applied to the activated neurons is kept the same. Moreover, we rescale the outputs of jumpout for a better trade-off that keeps both the variance and mean of neurons more consistent between training and test phases, which mitigates the incompatibility between dropout and batch normalization. Compared to the original dropout, jumpout shows significantly improved performance on CIFAR10, CIFAR100, Fashion-MNIST, STL10, SVHN, ImageNet-1k, etc., while introducing negligible additional memory and computation costs.","Jumpout applies three simple yet effective modifications to dropout, based on novel understandings about the generalization performance of DNNs with ReLU in local regions." 
26,Sparsity Emerges Naturally in Neural Language Models,"Concerns about interpretability, computational resources, and principled inductive priors have motivated efforts to engineer sparse neural models for NLP tasks. If sparsity is important for NLP, might well-trained neural models naturally become roughly sparse? Using the Taxi-Euclidean norm to measure sparsity, we find that frequent input words are associated with concentrated or sparse activations, while frequent target words are associated with dispersed activations but concentrated gradients. We find that gradients associated with function words are more concentrated than the gradients of content words, even controlling for word frequency.","We study the natural emergence of sparsity in the activations and gradients for some layers of a dense LSTM language model, over the course of training." 27,Aging Memories Generate More Fluent Dialogue Responses with Memory Networks,"The integration of a Knowledge Base into a neural dialogue agent is one of the key challenges in Conversational AI. Memory networks have proven effective at encoding KB information into an external memory and thus generating more fluent and informed responses. Unfortunately, such memory becomes full of latent representations during training, so the most common strategy is to overwrite old memory entries randomly. In this paper, we question this approach and provide experimental evidence showing that conventional memory networks generate many redundant latent vectors, resulting in overfitting and the need for larger memories. We introduce memory dropout as an automatic technique that encourages diversity in the latent space by 1) aging redundant memories to increase their probability of being overwritten during training and 2) sampling new memories that summarize the knowledge acquired by redundant memories. This technique allows us to incorporate Knowledge Bases to achieve state-of-the-art dialogue generation on the Stanford Multi-Turn Dialogue dataset. Considering the same architecture, its use provides an improvement of +2.2 BLEU points for the automatic generation of responses and an increase of +8.1% in the recognition of named entities.",Conventional memory networks generate many redundant latent vectors resulting in overfitting and the need for larger memories. We introduce memory dropout as an automatic technique that encourages diversity in the latent space. 28,Nesterov's method is the discretization of a differential equation with Hessian damping,"Su-Boyd-Candes made a connection between Nesterov's method and an ordinary differential equation. We show that if a Hessian damping term is added to the ODE from Su-Boyd-Candes, then Nesterov's method arises as a straightforward discretization of the modified ODE. Analogously, in the strongly convex case, a Hessian damping term is added to Polyak's ODE, which is then discretized to yield Nesterov's method for strongly convex functions. Despite the Hessian term, both second order ODEs can be represented as first order systems. Established Liapunov analysis is used to recover the accelerated rates of convergence in both continuous and discrete time. Moreover, the Liapunov analysis can be extended to the case of stochastic gradients, which allows the full gradient case to be considered as a special case of the stochastic case. 
The result is a unified approach to convex acceleration in both continuous and discrete time and in both the stochastic and full gradient cases.",We show that Nesterov's method arises as a straightforward discretization of an ODE different from the one in Su-Boyd-Candes and prove acceleration in the stochastic case 29,Learning to Transfer Learn,"We propose learning to transfer learn to improve transfer learning on a target dataset by judicious extraction of information from a source dataset. L2TL considers joint optimization of vastly-shared weights between models for source and target tasks, and employs adaptive weights for scaling of constituent losses. The adaptation of the weights is based on reinforcement learning, guided by a performance metric on the target validation set. We demonstrate state-of-the-art performance of L2TL given fixed models, consistently outperforming fine-tuning baselines on various datasets. In the regimes of small-scale target datasets and significant label mismatch between source and target datasets, L2TL outperforms previous work by an even larger margin.",We propose learning to transfer learn (L2TL) to improve transfer learning on a target dataset by judicious extraction of information from a source dataset. 30,AMRL: Aggregated Memory For Reinforcement Learning,"In many partially observable scenarios, Reinforcement Learning agents must rely on long-term memory in order to learn an optimal policy. We demonstrate that using techniques from NLP and supervised learning fails at RL tasks due to stochasticity from the environment and from exploration. Utilizing our insights on the limitations of traditional memory methods in RL, we propose AMRL, a class of models that can learn better policies with greater sample efficiency and are resilient to noisy inputs. Specifically, our models use a standard memory module to summarize short-term context, and then aggregate all prior states from the standard model without respect to order. We show that this provides advantages both in terms of gradient decay and signal-to-noise ratio over time. Evaluating in Minecraft and maze environments that test long-term memory, we find that our model improves average return by 19% over a baseline that has the same number of parameters and by 9% over a stronger baseline that has far more parameters.","In Deep RL, order-invariant functions can be used in conjunction with standard memory modules to improve gradient decay and resilience to noise." 
31,Optimization on Multiple Manifolds,"Optimization on manifolds has been widely used in machine learning to handle optimization problems with constraints. Most previous works focus on the case with a single manifold. However, in practice it is quite common that the optimization problem involves more than one constraint. It is not clear in general how to optimize on multiple manifolds effectively and provably, especially when the intersection of multiple manifolds is not a manifold or cannot be easily calculated. We propose a unified algorithmic framework to handle optimization on multiple manifolds. Specifically, we integrate information from multiple manifolds and move along an ensemble direction by viewing the information from each manifold as a drift and adding them together. We prove the convergence properties of the proposed algorithms. We also apply the algorithms to training neural networks with batch normalization layers and achieve preferable empirical results.",This paper introduces an algorithm to handle optimization problems with multiple constraints from a manifold viewpoint. 32,Discrete Sequential Prediction of Continuous Actions for Deep RL,"It has long been assumed that high dimensional continuous control problems cannot be solved effectively by discretizing individual dimensions of the action space due to the exponentially large number of bins over which policies would have to be learned. In this paper, we draw inspiration from the recent success of sequence-to-sequence models for structured prediction problems to develop policies over discretized spaces. Central to this method is the realization that complex functions over high dimensional spaces can be modeled by neural networks that predict one dimension at a time. Specifically, we show how Q-values and policies over continuous spaces can be modeled using a next step prediction model over discretized dimensions. With this parameterization, it is possible to both leverage the compositional structure of action spaces during learning, as well as compute maxima over action spaces. On a simple example task we demonstrate empirically that our method can perform global search, which effectively gets around the local optimization issues that plague DDPG. We apply the technique to off-policy methods and show that our method can achieve the state-of-the-art for off-policy methods on several continuous control tasks.",A method to do Q-learning on continuous action spaces by predicting a sequence of discretized 1-D actions. 
33,Model Imitation for Model-Based Reinforcement Learning,"Model-based reinforcement learning aims to learn a dynamic model to reduce the number of interactions with real-world environments. However, due to estimation error, rollouts in the learned model, especially those of long horizon, fail to match the ones in real-world environments. This mismatch has seriously impacted the sample complexity of MBRL. The phenomenon can be attributed to the fact that previous works employ supervised learning to learn the one-step transition models, which has inherent difficulty ensuring the matching of distributions from multi-step rollouts. Based on this claim, we propose to learn the synthesized model by matching the distributions of multi-step rollouts sampled from the synthesized model and the real ones via WGAN. We theoretically show that matching the two can minimize the difference of cumulative rewards between the real transition and the learned one. Our experiments also show that the proposed model imitation method outperforms the state-of-the-art in terms of sample complexity and average return.",Our method incorporates WGAN to achieve occupancy measure matching for transition learning. 34,Normalization Gradients are Least-squares Residuals,"Batch Normalization and its variants have seen widespread adoption in the deep learning community because they improve the training of deep neural networks. Discussions of why this normalization works so well remain unsettled. We make explicit the relationship between ordinary least squares and partial derivatives computed when back-propagating through BN. We recast the back-propagation of BN as a least squares fit, which zero-centers and decorrelates partial derivatives from normalized activations. This view, which we term, is an extensible and arithmetically accurate description of BN. To further explore this perspective, we motivate, interpret, and evaluate two adjustments to BN.","Gaussian normalization performs a least-squares fit during back-propagation, which zero-centers and decorrelates partial derivatives from normalized activations." 35,Theoretical Analysis of Auto Rate-Tuning by Batch Normalization,"Batch Normalization has become a cornerstone of deep learning across diverse architectures, appearing to help optimization as well as generalization. While the idea makes intuitive sense, theoretical analysis of its effectiveness has been lacking. Here theoretical support is provided for one of its conjectured properties, namely, the ability to allow gradient descent to succeed with less tuning of learning rates. It is shown that even if we fix the learning rate of scale-invariant parameters to a constant, gradient descent still approaches a stationary point at the rate of T^{-1/2} in T iterations, asymptotically matching the best bound for gradient descent with well-tuned learning rates. A similar result with convergence rate T^{-1/4} is also shown for stochastic gradient descent.","We give a theoretical analysis of the ability of batch normalization to automatically tune learning rates, in the context of finding stationary points for a deep learning objective." 
36,Adversarial Video Generation on Complex Datasets,"Generative models of natural images have progressed towards high fidelity samples by the strong leveraging of scale.We attempt to carry this success to the field of video modeling by showing that large Generative Adversarial Networks trained on the complex Kinetics-600 dataset are able to produce video samples of substantially higher complexity and fidelity than previous work. Our proposed model, Dual Video Discriminator GAN, scales to longer and higher resolution videos by leveraging a computationally efficient decomposition of its discriminator.We evaluate on the related tasks of video synthesis and video prediction, and achieve new state-of-the-art Fréchet Inception Distance for prediction for Kinetics-600, as well as state-of-the-art Inception Score for synthesis on the UCF-101 dataset, alongside establishing a strong baseline for synthesis on Kinetics-600.","We propose DVD-GAN, a large video generative model that is state of the art on several tasks and produces highly complex videos when trained on large real world datasets." 37,Simulating Action Dynamics with Neural Process Networks,"Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated.In this work, we introduce Neural Process Networks to understand procedural text through simulation of action dynamics. Our model complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers.The model updates the states of the entities by executing learned action operators.Empirical results demonstrate that our proposed model can reason about the unstated causal effects of actions, allowing it to provide more accurate contextual information for understanding and generating procedural text, all while offering more interpretable internal representations than existing alternatives.",We propose a new recurrent memory architecture that can track common sense state changes of entities by simulating the causal effects of actions. 38,Meta-Learning Neural Bloom Filters,"There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with an aim for faster execution, better accuracy, or greater compression. In this setting, a neural data structure is instantiated by training a network over many epochs of its inputs until convergence.In many applications this expensive initialization is not practical, for example streaming algorithms --- where inputs are ephemeral and can only be inspected a small number of times. In this paper we explore the learning of approximate set membership over a stream of data in one-shot via meta-learning.We propose a novel memory architecture, the Neural Bloom Filter, which we show to be more compressive than Bloom Filters and several existing memory-augmented neural networks in scenarios of skewed data or structured sets.",We investigate the space efficiency of memory-augmented neural nets when learning set membership. 
39,A Scalable Laplace Approximation for Neural Networks,"We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network. Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them. We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network. We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks. Our approach only requires calculating two square curvature factor matrices for each layer. Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage. We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture.",We construct a Kronecker factored Laplace approximation for neural networks that leads to an efficient matrix normal distribution over the weights. 40,Spectral Embedding of Regularized Block Models,"Spectral embedding is a popular technique for the representation of graph data. Several regularization techniques have been proposed to improve the quality of the embedding with respect to downstream tasks like clustering. In this paper, we explain on a simple block model the impact of the complete graph regularization, whereby a constant is added to all entries of the adjacency matrix. Specifically, we show that the regularization forces the spectral embedding to focus on the largest blocks, making the representation less sensitive to noise or outliers. We illustrate these results on both synthetic and real data, showing how regularization improves standard clustering scores.","Graph regularization forces spectral embedding to focus on the largest clusters, making the representation less sensitive to noise. " 41,Quantifying Exposure Bias for Neural Language Generation,"The exposure bias problem refers to the training-inference discrepancy caused by teacher forcing in maximum likelihood estimation training for auto-regressive neural network language models. It has been regarded as a central problem for natural language generation model training. Although a lot of algorithms have been proposed to avoid teacher forcing and therefore to alleviate exposure bias, there is little work showing how serious the exposure bias problem is. In this work, we first identify the auto-recovery ability of MLE-trained LM, which casts doubt on the seriousness of exposure bias. We then develop a precise, quantifiable definition for exposure bias. However, according to our measurements in controlled experiments, there's only around 3% performance gain when the training-inference discrepancy is completely removed. Our results suggest the exposure bias problem could be much less serious than it is currently assumed to be.",We show that exposure bias could be much less serious than it is currently assumed to be for MLE LM training. 
42,Emergence of Linguistic Communication from Referential Games with Symbolic and Pixel Input,"The ability of algorithms to evolve or learn communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks. Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents on referential communication games. We extend previous work, in which agents were trained in symbolic environments, by developing agents which are able to learn from raw pixel data, a more challenging and realistic input representation. We find that the degree of structure found in the input data affects the nature of the emerged protocols, and thereby corroborate the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured. ",A controlled study of the role of environments with respect to properties in emergent communication protocols. 43,BERTgrid: Contextualized Embedding for 2D Document Representation and Understanding,"For understanding generic documents, information like font sizes, column layout, and generally the positioning of words may carry semantic information that is crucial for solving a downstream document intelligence task. Our novel BERTgrid, which is based on Chargrid by Katti et al., represents a document as a grid of contextualized word piece embedding vectors, thereby making its spatial structure and semantics accessible to the processing neural network. The contextualized embedding vectors are retrieved from a BERT language model. We use BERTgrid in combination with a fully convolutional network on a semantic instance segmentation task for extracting fields from invoices. We demonstrate its performance on tabulated line item and document header field extraction.",Grid-based document representation with contextualized embedding vectors for documents with 2D layouts 44,Adversarial Policies: Attacking Deep Reinforcement Learning,"Deep reinforcement learning policies are known to be vulnerable to adversarial perturbations to their observations, similar to adversarial examples for classifiers. However, an attacker is not usually able to directly modify another agent's observations. This might lead one to wonder: is it possible to attack an RL agent simply by choosing an adversarial policy acting in a multi-agent environment so as to create natural observations that are adversarial? We demonstrate the existence of adversarial policies in zero-sum games between simulated humanoid robots with proprioceptive observations, against state-of-the-art victims trained via self-play to be robust to opponents. The adversarial policies reliably win against the victims but generate seemingly random and uncoordinated behavior. We find that these policies are more successful in high-dimensional environments, and induce substantially different activations in the victim policy network than when the victim plays against a normal opponent. Videos are available at https://attackingrl.github.io.",Deep RL policies can be attacked by other agents taking actions so as to create natural observations that are adversarial. 
45,Exponential Family Word Embeddings: An Iterative Approach for Learning Word Vectors,"GloVe and Skip-gram word embedding methods learn word vectors by decomposing a denoised matrix of word co-occurrences into a product of low-rank matrices. In this work, we propose an iterative algorithm for computing word vectors based on modeling word co-occurrence matrices with Generalized Low Rank Models. Our algorithm generalizes both Skip-gram and GloVe as well as giving rise to other embedding methods based on the specified co-occurrence matrix, distribution of co-occurrences, and the number of iterations in the iterative algorithm. For example, using a Tweedie distribution with one iteration results in GloVe and using a Multinomial distribution with full-convergence mode results in Skip-gram. Experimental results demonstrate that multiple iterations of our algorithm improve results over the GloVe method on the Google word analogy similarity task.",We present a novel iterative algorithm based on generalized low rank models for computing and interpreting word embedding models. 46,Coping With Simulators That Don’t Always Return,"Deterministic models are approximations of reality that are often easier to build and interpret than stochastic alternatives. Unfortunately, as nature is capricious, observational data can never be fully explained by deterministic models in practice. Observation and process noise need to be added to adapt deterministic models to behave stochastically, such that they are capable of explaining and extrapolating from noisy data. Adding process noise to deterministic simulators can induce a failure in the simulator resulting in no return value for certain inputs -- a property we describe as brittle. We investigate and address the wasted computation that arises from these failures, and the effect of such failures on downstream inference tasks. We show that performing inference in this space can be viewed as rejection sampling, and train a conditional normalizing flow as a proposal over noise values such that there is a low probability that the simulator crashes, increasing computational efficiency and inference fidelity for a fixed sample budget when used as the proposal in an approximate inference algorithm.","We learn a conditional autoregressive flow to propose perturbations that don't induce simulator failure, improving inference performance." 47,Multi-hop Question Answering via Reasoning Chains,"Multi-hop question answering requires models to gather information from different parts of a text to answer a question. Most current approaches learn to address this task in an end-to-end way with neural networks, without maintaining an explicit representation of the reasoning process. We propose a method to extract a discrete reasoning chain over the text, which consists of a series of sentences leading to the answer. We then feed the extracted chains to a BERT-based QA model to do final answer prediction. Critically, we do not rely on gold annotated chains or supporting facts: at training time, we derive pseudogold reasoning chains using heuristics based on named entity recognition and coreference resolution. Nor do we rely on these annotations at test time, as our model learns to extract chains from raw text alone. 
We test our approach on two recently proposed large multi-hop question answering datasets: WikiHop and HotpotQA, and achieve state-of-art performance on WikiHop and strong performance on HotpotQA.Our analysis shows the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is dealing with each candidate sentence in a context-aware way.Furthermore, human evaluation shows that our extracted chains allow humans to give answers with high confidence, indicating that these are a strong intermediate abstraction for this task.",We improve answering of questions that require multi-hop reasoning extracting an intermediate chain of sentences. 48,Normalizing Constant Estimation with Gaussianized Bridge Sampling,"Normalizing constant is one of the central goals of Bayesian inference, yet most of the existing methods are both expensive and inaccurate.Here we develop a new approach, starting from posterior samples obtained with a standard Markov Chain Monte Carlo.We apply a novel Normalizing Flow approach to obtain an analytic density estimator from these samples, followed by Optimal Bridge Sampling to obtain the normalizing constant.We compare our method which we call Gaussianized Bridge Sampling to existing methods such as Nested Sampling and Annealed Importance Sampling on several examples, showing our method is both significantly faster and substantially more accurate than these methods, and comes with a reliable error estimation.","We develop a new method for normalization constant (Bayesian evidence) estimation using Optimal Bridge Sampling and a novel Normalizing Flow, which is shown to outperform existing methods in terms of accuracy and computational time." 49,"A comprehensive, application-oriented study of catastrophic forgetting in DNNs","We present a large-scale empirical study of catastrophic forgetting in modern Deep Neural Network models that perform sequential learning.A new experimental protocol is proposed that takes into account typical constraints encountered in application scenarios.As the investigation is empirical, we evaluate CF behavior on the hitherto largest number of visual classification datasets, from each of which we construct a representative number of Sequential Learning Tasks in close alignment to previous works on CF.Our results clearly indicate that there is no model that avoids CF for all investigated datasets and SLTs under application conditions.We conclude with a discussion of potential solutions and workarounds to CF, notably for the EWC and IMM models.","We check DNN models for catastrophic forgetting using a new evaluation scheme that reflects typical application conditions, with surprising results." 
50,Improving Federated Learning Personalization via Model Agnostic Meta Learning,"Federated Learning refers to learning a high quality global model based on decentralized data storage, without ever copying the raw data.A natural scenario arises with data created on mobile phones by the activity of their users.Given the typical data heterogeneity in such situations, it is natural to ask how can the global model be personalized for every such device, individually.In this work, we point out that the setting of Model Agnostic Meta Learning, where one optimizes for a fast, gradient-based, few-shot adaptation to a heterogeneous distribution of tasks, has a number of similarities with the objective of personalization for FL.We present FL as a natural source of practical applications for MAML algorithms, and make the following observations.1) The popular FL algorithm, Federated Averaging, can be interpreted as a meta learning algorithm.2) Careful fine-tuning can yield a global model with higher accuracy, which is at the same time easier to personalize.However, solely optimizing for the global model accuracy yields a weaker personalization result.3) A model trained using a standard datacenter optimization method is much harder to personalize, compared to one trained using Federated Averaging, supporting the first claim.These results raise new questions for FL, MAML, and broader ML research.","Federated Averaging already is a Meta Learning algorithm, while datacenter-trained methods are significantly harder to personalize." 51,Downsampling leads to Image Memorization in Convolutional Autoencoders,"Memorization of data in deep neural networks has become a subject of significant research interest.In this paper, we link memorization of images in deep convolutional autoencoders to downsampling through strided convolution. To analyze this mechanism in a simpler setting, we train linear convolutional autoencoders and show that linear combinations of training data are stored as eigenvectors in the linear operator corresponding to the network when downsampling is used. On the other hand, networks without downsampling do not memorize training data. We provide further evidence that the same effect happens in nonlinear networks. Moreover, downsampling in nonlinear networks causes the model to not only memorize just linear combinations of images, but individual training images. Since convolutional autoencoder components are building blocks of deep convolutional networks, we envision that our findings will shed light on the important phenomenon of memorization in over-parameterized deep networks. ",We identify downsampling as a mechansim for memorization in convolutional autoencoders. 
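The Federated Learning entry (50) above interprets Federated Averaging as a meta-learning algorithm whose global model can then be personalized by local fine-tuning. A minimal sketch of one FedAvg round, assuming a simple PyTorch classifier and pre-built per-client data loaders (`client_loaders` and the hyperparameters are placeholders, not from the paper):

```python
import copy
import torch
import torch.nn as nn

def local_update(global_model, loader, lr=0.01, epochs=1):
    """Copy the global model and run a few local SGD steps on one client's data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fedavg_round(global_model, client_loaders):
    """One round: every client adapts locally, then the server averages the weights."""
    states = [local_update(global_model, dl) for dl in client_loaders]
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model

# Personalization then amounts to a few more local_update steps starting from the
# averaged model, i.e. fine-tuning the "meta-initialization" on one client's data.
```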
52,Learning Robust Rewards with Adversarial Inverse Reinforcement Learning,"Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still require a manually specified reward function. Inverse reinforcement learning holds the promise of automatic reward acquisition, but has proven exceptionally difficult to apply to large, high-dimensional problems with unknown dynamics. In this work, we propose AIRL, a practical and scalable inverse reinforcement learning algorithm based on an adversarial reward learning formulation that is competitive with direct imitation learning algorithms. Additionally, we show that AIRL is able to recover portable reward functions that are robust to changes in dynamics, enabling us to learn policies even under significant variation in the environment seen during training.","We propose an adversarial inverse reinforcement learning algorithm capable of learning reward functions which can transfer to new, unseen environments." 53,A Bayesian Perspective on Generalization and Stochastic Gradient Descent,"We consider two questions at the heart of machine learning; how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work responds to recent work which showed deep neural networks can easily memorize randomly labeled training data, despite generalizing well on real labels of the same inputs. We show that the same phenomenon occurs in small linear models. These observations are explained by the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization. We also demonstrate that, when one holds the learning rate fixed, there is an optimum batch size which maximizes the test set accuracy. We propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large. Interpreting stochastic gradient descent as a stochastic differential equation, we identify the noise scale g ≈ εN/B, where ε is the learning rate, N the training set size and B the batch size. Consequently the optimum batch size is proportional to both the learning rate and the size of the training set, B_opt ∝ εN. We verify these predictions empirically.","Generalization is strongly correlated with the Bayesian evidence, and gradient noise drives SGD towards minima whose evidence is large."
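The noise-scale relation in entry 53 can be stated compactly; a short restatement of the reconstructed formula (following the paper's stochastic-differential-equation view of SGD):

```latex
g \;\approx\; \frac{\epsilon N}{B},
\qquad
B_{\mathrm{opt}} \;\propto\; \epsilon N
```

Here ε is the learning rate, N the training-set size and B the mini-batch size; holding the noise scale g near its optimal value is what makes the best batch size grow linearly with both the learning rate and the dataset size.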
54,Generative Adversarial Networks For Data Scarcity Industrial Positron Images With Attention,"In industrial settings, positron annihilation is unaffected by complex environments and gamma-ray photons have strong penetration, which makes nondestructive detection of industrial parts possible. Because image quality is poor due to gamma-ray photon scattering, attenuation and short sampling times in the positron imaging process, we propose to use deep learning to generate positron images with good quality and clear details via adversarial nets. The structure of the paper is as follows: firstly, we encode medical CT images into hidden vectors based on transfer learning, and use PCA to extract positron image features. Secondly, we construct a positron image memory based on an attention mechanism as the input to the adversarial nets, which uses the medical hidden variables as a query. Finally, we train the whole model jointly and update the input parameters until convergence. Experiments demonstrate the possibility of generating rare positron images for industrial non-destructive testing using adversarial networks, and good imaging results are achieved.","adversarial nets, attention mechanism, positron images, data scarcity" 55,Revisit Recurrent Attention Model from an Active Sampling Perspective,"We revisit the Recurrent Attention Model (RAM), a recurrent neural network for visual attention, from an active information sampling perspective. We borrow ideas from neuroscience research on the role of active information sampling in the context of visual attention and gaze, where the author suggested three types of motives for active information sampling strategies. We find the original RAM model only implements one of them. We identify three key weaknesses of the original RAM and provide a simple solution by adding two extra terms to the objective function. The modified RAM 1) achieves faster convergence, 2) allows dynamic decision making per sample without loss of accuracy, and 3) generalizes much better to longer sequences of glimpses, which it was not trained for, compared with the original RAM."," Inspired by neuroscience research, solve three key weaknesses of the widely-cited recurrent attention model by simply adding two terms to the objective function." 56,Active Learning Graph Neural Networks via Node Feature Propagation,"Graph Neural Networks for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning from graphically structured data. However, a large quantity of labeled graphs is difficult to obtain, which significantly limits the true success of GNNs. Although active learning has been widely studied for addressing label-sparse issues with other data types like text, images, etc., how to make it effective over graphs is an open question for research. In this paper, we present an investigation on active learning with GNNs for node classification tasks. Specifically, we propose a new method, which uses node feature propagation followed by K-Medoids clustering of the nodes for instance selection in active learning. With a theoretical bound analysis we justify the design choice of our approach. In our experiments on four benchmark datasets, the proposed method outperforms other representative baseline methods consistently and significantly.",This paper introduces a clustering-based active learning algorithm on graphs.
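Entry 56 above selects nodes to label by first propagating node features over the graph and then clustering the propagated features. A minimal sketch, assuming a dense adjacency matrix, and using KMeans centroids plus a nearest-real-node step as a simple stand-in for the K-Medoids selection the abstract describes:

```python
import numpy as np
from sklearn.cluster import KMeans

def propagate(features, adj, k=2):
    """k steps of symmetric-normalized feature propagation: X <- (D^-1/2 A D^-1/2) X."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(1))
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    x = features
    for _ in range(k):
        x = norm @ x
    return x

def select_queries(features, adj, budget=5):
    """Cluster propagated features; query the node closest to each cluster center."""
    x = propagate(features, adj)
    km = KMeans(n_clusters=budget, n_init=10).fit(x)
    return [int(np.argmin(np.linalg.norm(x - c, axis=1))) for c in km.cluster_centers_]

# Toy usage: 20 nodes, random symmetric graph and random features.
rng = np.random.default_rng(0)
A = (rng.random((20, 20)) < 0.2).astype(float); A = np.maximum(A, A.T)
X = rng.normal(size=(20, 16))
print(select_queries(X, A, budget=3))
```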
57,InfoCNF: Efficient Conditional Continuous Normalizing Flow Using Adaptive Solvers,"Continuous Normalizing Flows have emerged as promising deep generative models for a wide range of tasks thanks to their invertibility and exact likelihood estimation.However, conditioning CNFs on signals of interest for conditional image generation and downstream predictive tasks is inefficient due to the high-dimensional latent code generated by the model, which needs to be of the same size as the input data.In this paper, we propose InfoCNF, an efficient conditional CNF that partitions the latent space into a class-specific supervised code and an unsupervised code that shared among all classes for efficient use of labeled information.Since the partitioning strategy increases the number of function evaluations, InfoCNF also employs gating networks to learn the error tolerances of its ordinary differential equation solvers for better speed and performance.We show empirically that InfoCNF improves the test accuracy over the baseline while yielding comparable likelihood scores and reducing the NFEs on CIFAR10.Furthermore, applying the same partitioning strategy in InfoCNF on time-series data helps improve extrapolation performance.","We propose the InfoCNF, an efficient conditional CNF that employs gating networks to learn the error tolerances of the ODE solvers " 58,Unsupervised Learning via Meta-Learning,"A central goal of unsupervised learning is to acquire representations from unlabeled data or experience that can be used for more effective learning of downstream tasks from modest amounts of labeled data.Many prior unsupervised learning works aim to do so by developing proxy objectives based on reconstruction, disentanglement, prediction, and other metrics.Instead, we develop an unsupervised meta-learning method that explicitly optimizes for the ability to learn a variety of tasks from small amounts of data.To do so, we construct tasks from unlabeled data in an automatic way and run meta-learning over the constructed tasks.Surprisingly, we find that, when integrated with meta-learning, relatively simple task construction mechanisms, such as clustering embeddings, lead to good performance on a variety of downstream, human-specified tasks.Our experiments across four image datasets indicate that our unsupervised meta-learning approach acquires a learning algorithm without any labeled data that is applicable to a wide range of downstream classification tasks, improving upon the embedding learned by four prior unsupervised learning methods.","An unsupervised learning method that uses meta-learning to enable efficient learning of downstream image classification tasks, outperforming state-of-the-art methods." 
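Entry 58 above constructs meta-learning tasks from unlabeled data by clustering embeddings and treating cluster assignments as pseudo-labels. A minimal sketch of that task-construction step, using random stand-in embeddings (in the paper the embeddings come from a prior unsupervised learner; the sampling details below are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def make_task(embeddings, n_way=3, k_shot=2, n_clusters=20, seed=0):
    """Cluster embeddings, then sample an n_way / k_shot classification task
    whose 'classes' are randomly chosen clusters."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(embeddings)
    # Keep only clusters with enough members to supply k_shot support examples.
    valid = [c for c in range(n_clusters) if (labels == c).sum() >= k_shot]
    chosen = rng.choice(valid, size=n_way, replace=False)
    support = []
    for task_label, c in enumerate(chosen):
        idx = rng.choice(np.where(labels == c)[0], size=k_shot, replace=False)
        support += [(int(i), task_label) for i in idx]
    return support  # (example index, pseudo-label) pairs for one meta-training task

emb = np.random.randn(500, 64)   # stand-in for learned embeddings
print(make_task(emb)[:6])
```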
59,Latent Domain Transfer: Crossing modalities with Bridging Autoencoders,"Domain transfer is a exciting and challenging branch of machine learning because models must learn to smoothly transfer between domains, preserving local variations and capturing many aspects of variation without labels.However, most successful applications to date require the two domains to be closely related,utilizing similar or shared networks to transform domain specific properties like texture, coloring, and line shapes.Here, we demonstrate that it is possible to transfer across modalities by first abstracting the data with latent generative models and then learning transformations between latent spaces.We find that a simple variational autoencoder is able to learn a shared latent space to bridge between two generative models in an unsupervised fashion, and even between different types of models.We can further impose desired semantic alignment of attributes with a linear classifier in the shared latent space.The proposed variation autoencoder enables preserving both locality and semantic alignment through the transfer process, as shown in the qualitative and quantitative evaluations.Finally, the hierarchical structure decouples the cost of training the base generative models and semantic alignments, enabling computationally efficient and data efficient retraining of personalized mapping functions.",Conditional VAE on top of latent spaces of pre-trained generative models that enables transfer between drastically different domains while preserving locality and semantic alignment. 60,Adversarial Inductive Transfer Learning with input and output space adaptation,"We propose Adversarial Inductive Transfer Learning, a method for addressing discrepancies in input and output spaces between source and target domains.AITL utilizes adversarial domain adaptation and multi-task learning to address these discrepancies.Our motivating application is pharmacogenomics where the goal is to predict drug response in patients using their genomic information.The challenge is that clinical data with drug response outcome is very limited, creating a need for transfer learning to bridge the gap between large pre-clinical pharmacogenomics datasets and clinical datasets.Discrepancies exist between1) the genomic data of pre-clinical and clinical datasets, and2) the different measures of the drug response.To the best of our knowledge, AITL is the first adversarial inductive transfer learning method to address both input and output discrepancies.Experimental results indicate that AITL outperforms state-of-the-art pharmacogenomics and transfer learning baselines and may guide precision oncology more accurately.",A novel method of inductive transfer learning that employs adversarial learning and multi-task learning to address the discrepancy in input and output space 61,End-to-end named entity recognition and relation extraction using pre-trained language models,"Named entity recognition and relation extraction are two important tasks in information extraction and retrieval.Recent work has demonstrated that it is beneficial to learn these tasks jointly, which avoids the propagation of error inherent in pipeline-based systems and improves performance.However, state-of-the-art joint models typically rely on external natural language processing tools, such as dependency parsers, limiting their usefulness to domains where those tools perform well.The few neural, end-to-end models that have been proposed are trained almost completely from 
scratch. In this paper, we propose a neural, end-to-end model for jointly extracting entities and their relations which does not rely on external NLP tools and which integrates a large, pre-trained language model. Because the bulk of our model's parameters are pre-trained and we eschew recurrence for self-attention, our model is fast to train. On 5 datasets across 3 domains, our model matches or exceeds state-of-the-art performance, sometimes by a large margin.","A novel, high-performing architecture for end-to-end named entity recognition and relation extraction that is fast to train." 62,Music Transformer: Generating Music with Long-Term Structure,"Music relies heavily on repetition to build structure and meaning. Self-reference occurs on multiple timescales, from motifs to phrases to reusing of entire sections of music, such as in pieces with ABA structure. The Transformer, a sequence model based on self-attention, has achieved compelling results in many generation tasks that require maintaining long-range coherence. This suggests that self-attention might also be well-suited to modeling music. In musical composition and performance, however, relative timing is critically important. Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance. This is impractical for long sequences such as musical compositions since their memory complexity is quadratic in the sequence length. We propose an algorithm that reduces the intermediate memory requirements to linear in the sequence length. This enables us to demonstrate that a Transformer with our modified relative attention mechanism can generate minute-long compositions with compelling structure, generate continuations that coherently elaborate on a given motif, and in a seq2seq setup generate accompaniments conditioned on melodies. We evaluate the Transformer with our relative attention mechanism on two datasets, JSB Chorales and Piano-e-competition, and obtain state-of-the-art results on the latter.",We show the first successful use of Transformer in generating music that exhibits long-term structure. 63,Width-Based Lookaheads Augmented with Base Policies for Stochastic Shortest Paths,"Sequential decision problems for real-world applications often need to be solved in real-time, requiring algorithms to perform well with a restricted computational budget. Width-based lookaheads have shown state-of-the-art performance in classical planning problems as well as over the Atari games with tight budgets. In this work we investigate width-based lookaheads over Stochastic Shortest Paths. We analyse why width-based algorithms perform poorly over SSP problems, and overcome these pitfalls by proposing a method to estimate costs-to-go. We formalize width-based lookaheads as an instance of the rollout algorithm, give a definition of width for SSP problems and explain its sample complexity. Our experimental results over a variety of SSP benchmarks show the algorithm to outperform other state-of-the-art rollout algorithms such as UCT and RTDP.",We propose a new Monte Carlo Tree Search / rollout algorithm that relies on width-based search to construct a lookahead.
64,Deep Within-Class Covariance Analysis for Robust Deep Audio Representation Learning,"Deep Neural Networks are known for excellent performance in supervised tasks such as classification. Convolutional Neural Networks, in particular, can learn effective features and build high-level representations that can be used for classification, but also for querying and nearest neighbor search. However, CNNs have also been shown to suffer from a performance drop when the distribution of the data changes from training to test data. In this paper we analyze the internal representations of CNNs and observe that the representations of unseen data in each class spread more in the embedding space of the CNN compared to representations of the training data. More importantly, this difference is more extreme if the unseen data comes from a shifted distribution. Based on this observation, we objectively evaluate the degree of the representation’s variance in each class by applying eigenvalue decomposition on the within-class covariance of the internal representations of CNNs and observe the same behaviour. This can be problematic as larger variances might lead to mis-classification if the sample crosses the decision boundary of its class. We apply nearest neighbor classification on the representations and empirically show that the embeddings with the high variance actually have significantly worse KNN classification performances, although this could not be foreseen from their end-to-end classification results. To tackle this problem, we propose Deep Within-Class Covariance Analysis, a deep neural network layer that significantly reduces the within-class covariance of a DNN’s representation, improving performance on unseen test data from a shifted distribution. We empirically evaluate DWCCA on two datasets for Acoustic Scene Classification. We demonstrate that not only does DWCCA significantly improve the network’s internal representation, it also increases the end-to-end classification accuracy, especially when the test set exhibits a slight distribution shift. By adding DWCCA to a VGG neural network, we achieve around 6 percentage points improvement in the case of a distribution mismatch.",We propose a novel deep neural network layer for normalising within-class covariance of an internal representation in a neural network that results in significantly improving the generalisation of the learned representations.
65,Learning Diverse Generations using Determinantal Point Processes,"Generative models have proven to be an outstanding tool for representing high-dimensional probability distributions and generating realistic looking images.A fundamental characteristic of generative models is their ability to produce multi-modal outputs.However, while training, they are often susceptible to mode collapse, which means that the model is limited in mapping the input noise to only a few modes of the true data distribution.In this paper, we draw inspiration from Determinantal Point Process to devise a generative model that alleviates mode collapse while producing higher quality samples.DPP is an elegant probabilistic measure used to model negative correlations within a subset and hence quantify its diversity.We use DPP kernel to model the diversity in real data as well as in synthetic data.Then, we devise a generation penalty term that encourages the generator to synthesize data with a similar diversity to real data.In contrast to previous state-of-the-art generative models that tend to use additional trainable parameters or complex training paradigms, our method does not change the original training scheme.Embedded in an adversarial training and variational autoencoder, our Generative DPP approach shows a consistent resistance to mode-collapse on a wide-variety of synthetic data and natural image datasets including MNIST, CIFAR10, and CelebA, while outperforming state-of-the-art methods for data-efficiency, convergence-time, and generation quality.Our code will be made publicly available.",The addition of a diversity criterion inspired from DPP in the GAN objective avoids mode collapse and leads to better generations. 66,The role of over-parametrization in generalization of neural networks,"Despite existing work on ensuring generalization of neural networks in terms of scale sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization.In this work we suggest a novel complexity measure based on unit-wise capacities resulting in a tighter generalization bound for two layer ReLU networks.Our capacity bound correlates with the behavior of test error with increasing network sizes, and could partly explain the improvement in generalization with over-parametrization.We further present a matching lower bound for the Rademacher complexity that improves over previous capacity lower bounds for neural networks.",We suggest a generalization bound that could partly explain the improvement in generalization with over-parametrization. 
67,Going Deeper with Lean Point Networks,"We introduce three generic point cloud processing blocks that improve both accuracy and memory consumption of multiple state-of-the-art networks, thus allowing to design deeper and more accurate networks.The novel processing blocks that facilitate efficient information flow are a convolution-type operation block for point sets that blends neighborhood information in a memory-efficient manner; a multi-resolution point cloud processing block; and a crosslink block that efficiently shares information across low- and high-resolution processing branches.Combining these blocks, we design significantly wider and deeper architectures.We extensively evaluate the proposed architectures on multiple point segmentation benchmarks and report systematic improvements in terms of both accuracy and memory consumption by using our generic modules in conjunction with multiple recent architectures.We report a 9.7% increase in IoU on the PartNet dataset, which is the most complex, while decreasing memory footprint by 57%.","We introduce three generic point cloud processing blocks that improve both accuracy and memory consumption of multiple state-of-the-art networks, thus allowing to design deeper and more accurate networks." 68,Learned in Speech Recognition: Contextual Acoustic Word Embeddings,"End-to-end acoustic-to-word speech recognition models have recently gained popularity because they are easy to train, scale well to large amounts of training data, and do not require a lexicon.In addition, word models may also be easier to integrate with downstream tasks such as spoken language understanding, because inference is much simplified compared to phoneme, character or any other sort of sub-word units.In this paper, we describe methods to construct contextual acoustic word embeddings directly from a supervised sequence-to-sequence acoustic-to-word speech recognition model using the learned attention distribution.On a suite of 16 standard sentence evaluation tasks, our embeddings show competitive performance against a word2vec model trained on the speech transcriptions.In addition, we evaluate these embeddings on a spoken language understanding task and observe that our embeddings match the performance of text-based embeddings in a pipeline of first performing speech recognition and then constructing word embeddings from transcriptions.",Methods to learn contextual acoustic word embeddings from an end-to-end speech recognition model that perform competitively with text-based word embeddings. 
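Entry 68 above builds contextual acoustic word embeddings by combining acoustic encoder states with the decoder's learned attention distribution. A minimal sketch of that attention-weighted pooling, with random stand-in encoder states and a synthetic attention profile (shapes and values are illustrative, not from the paper):

```python
import numpy as np

def acoustic_word_embedding(encoder_states, attention_weights):
    """Contextual acoustic word embedding: attention-weighted sum of encoder states.

    encoder_states: (T, d) acoustic encoder outputs for an utterance.
    attention_weights: (T,) attention mass the decoder placed on the frames
                       while emitting one word.
    """
    w = attention_weights / attention_weights.sum()
    return (w[:, None] * encoder_states).sum(axis=0)   # shape (d,)

T, d = 50, 256
states = np.random.randn(T, d)                            # placeholder encoder states
attn = np.exp(-0.5 * ((np.arange(T) - 20) / 3.0) ** 2)    # peaked around frame 20
print(acoustic_word_embedding(states, attn).shape)        # (256,)
```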
69,UNSUPERVISED MONOCULAR DEPTH ESTIMATION WITH CLEAR BOUNDARIES,"Unsupervised monocular depth estimation has made great progress after deep learning got involved. Training with binocular stereo images is considered a good option as the data can be easily obtained. However, the depth or disparity prediction results show poor performance at object boundaries. The main reason is related to the handling of occlusion areas during training. In this paper, we propose a novel method to overcome this issue. Exploiting a property of disparity maps, we generate an occlusion mask to block the back-propagation of the occlusion areas during image warping. We also design new networks with flipped stereo images to induce the networks to learn occluded boundaries. We show that our method achieves clearer boundaries and better evaluation results on the KITTI driving dataset and the Virtual KITTI dataset.",This paper proposes a mask method which solves the previously blurred results of unsupervised monocular depth estimation caused by occlusion 70,Graph Classification with 2D Convolutional Neural Networks,"Graph classification is currently dominated by graph kernels, which, while powerful, suffer some significant limitations. Convolutional Neural Networks offer a very appealing alternative. However, processing graphs with CNNs is not trivial. To address this challenge, many sophisticated extensions of CNNs have recently been proposed. In this paper, we reverse the problem: rather than proposing yet another graph CNN model, we introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs. Despite its simplicity, our method proves very competitive to state-of-the-art graph kernels and graph CNNs, and outperforms them by a wide margin on some datasets. It is also preferable to graph kernels in terms of time complexity. Code and data are publicly available.",We introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs.
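Entry 70 above turns a graph into an image-like tensor by stacking 2D histograms of node embeddings, which a vanilla 2D CNN can then classify. A minimal sketch of that representation, using random stand-in node embeddings (the original work builds the channels from learned embeddings; the bin count, value range and channel pairing here are illustrative):

```python
import numpy as np

def graph_to_image(node_embeddings, bins=32, channels=3):
    """Stack 2D histograms of consecutive embedding-dimension pairs into image channels."""
    img = np.zeros((channels, bins, bins), dtype=np.float32)
    for c in range(channels):
        x, y = node_embeddings[:, 2 * c], node_embeddings[:, 2 * c + 1]
        hist, _, _ = np.histogram2d(x, y, bins=bins, range=[[-3, 3], [-3, 3]])
        img[c] = hist / max(hist.max(), 1.0)   # normalize each channel
    return img  # shape (channels, bins, bins), consumable by any 2D CNN

# Toy usage: 100 nodes with 6-dimensional stand-in embeddings.
emb = np.random.randn(100, 6)
print(graph_to_image(emb).shape)  # (3, 32, 32)
```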
71,Benefits of Depth for Long-Term Memory of Recurrent Networks,"The key attribute that drives the unprecedented success of modern Recurrent Neural Networks on learning tasks which involve sequential data, is their ever-improving ability to model intricate long-term temporal dependencies. However, a well established measure of RNNs' long-term memory capacity is lacking, and thus formal understanding of their ability to correlate data throughout time is limited. Though depth efficiency in convolutional networks is well established by now, it does not suffice in order to account for the success of deep RNNs on inputs of varying lengths, and the need to address their 'time-series expressive power' arises. In this paper, we analyze the effect of depth on the ability of recurrent networks to express correlations ranging over long time-scales. To meet the above need, we introduce a measure of the information flow across time that can be supported by the network, referred to as the Start-End separation rank. Essentially, this measure reflects the distance of the function realized by the recurrent network from a function that models no interaction whatsoever between the beginning and end of the input sequence. We prove that deep recurrent networks support Start-End separation ranks which are exponentially higher than those supported by their shallow counterparts. Moreover, we show that the ability of deep recurrent networks to correlate different parts of the input sequence increases exponentially as the input sequence extends, while that of vanilla shallow recurrent networks does not adapt to the sequence length at all. Thus, we establish that depth brings forth an overwhelming advantage in the ability of recurrent networks to model long-term dependencies, and provide an exemplar of quantifying this key attribute which may be readily extended to other RNN architectures of interest, e.g. variants of LSTM networks. We obtain our results by considering a class of recurrent networks referred to as Recurrent Arithmetic Circuits, which merge the hidden state with the input via the Multiplicative Integration operation.",We propose a measure of long-term memory and prove that deep recurrent networks are a much better fit to model long-term temporal dependencies than shallow ones. 72,Contextual and neural representations of sequentially complex animal vocalizations,"Holistically exploring the perceptual and neural representations underlying animal communication has traditionally been very difficult because of the complexity of the underlying signal. We present here a novel set of techniques to project entire communicative repertoires into low dimensional spaces that can be systematically sampled from, exploring the relationship between perceptual representations, neural representations, and the latent representational spaces learned by machine learning algorithms. We showcase this method in one ongoing experiment studying sequential and temporal maintenance of context in songbird neural and perceptual representations of syllables. We further discuss how studying the neural mechanisms underlying the maintenance of the long-range information content present in birdsong can inform and be informed by machine sequence modeling.","We compare perceptual, neural, and modeled representations of animal communication using machine learning, behavior, and physiology."
73,What Information Does a ResNet Compress?,"The information bottleneck principle suggests that SGD-based training of deep neural networks results in optimally compressed hidden layers, from an information theoretic perspective. However, this claim was established on toy data. The goal of the work we present here is to test these claims in a realistic setting using a larger and deeper convolutional architecture, a ResNet model. We trained PixelCNN++ models as inverse representation decoders to measure the mutual information between hidden layers of a ResNet and input image data, when trained for classification and autoencoding. We find that two stages of learning happen for both training regimes, and that compression does occur, even for an autoencoder. Sampling images by conditioning on hidden layers’ activations offers an intuitive visualisation to understand what a ResNet learns to forget.","The Information Bottleneck Principle applied to ResNets, using PixelCNN++ models to decode mutual information and conditionally generate images for information illustration" 74,Hope For The Best But Prepare For The Worst: Cautious Adaptation In RL Agents,"We study the problem of safe adaptation: given a model trained on a variety of past experiences for some task, can this model learn to perform that task in a new situation while avoiding catastrophic failure? This problem setting occurs frequently in real-world reinforcement learning scenarios such as a vehicle adapting to drive in a new city, or a robotic drone adapting a policy trained only in simulation. While learning without catastrophic failures is exceptionally difficult, prior experience can allow us to learn models that make this much easier. These models might not directly transfer to new settings, but can enable cautious adaptation that is substantially safer than naïve adaptation as well as learning from scratch. Building on this intuition, we propose risk-averse domain adaptation (RADA). RADA works in two steps: it first trains probabilistic model-based RL agents in a population of source domains to gain experience and capture epistemic uncertainty about the environment dynamics. Then, when dropped into a new environment, it employs a pessimistic exploration policy, selecting actions that have the best worst-case performance as forecasted by the probabilistic model. We show that this simple maximin policy accelerates domain adaptation in a safety-critical driving environment with varying vehicle sizes. We compare our approach against other approaches for adapting to new environments, including meta-reinforcement learning.",Adaptation of an RL agent in a target environment with unknown dynamics is fast and safe when we transfer prior experience in a variety of environments and then select risk-averse actions during adaptation.
75,"The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision","We propose the Neuro-Symbolic Concept Learner, a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns by simply looking at images and reading paired questions and answers.Our model builds an object-based scene representation and translates sentences into executable, symbolic programs.To bridge the learning of two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation.Analogical to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to.Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences.We use curriculum learning to guide the searching over the large compositional space of images and language.Extensive experiments demonstrate the accuracy and efficiency of our model on learning visual concepts, word representations, and semantic parsing of sentences.Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even new program domains.It also empowers applications including visual question answering and bidirectional image-text retrieval.","We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them." 76,Gaussian Process Meta-Representations For Hierarchical Neural Network Weight Priors,"Bayesian inference offers a theoretically grounded and general way to train neural networks and can potentially give calibrated uncertainty.However, it is challenging to specify a meaningful and tractable prior over the network parameters, and deal with the weight correlations in the posterior.To this end, this paper introduces two innovations: a Gaussian process-based hierarchical model for the network parameters based on recently introduced unit embeddings that can flexibly encode weight structures, and input-dependent contextual variables for the weight prior that can provide convenient ways to regularize the function space being modeled by the network through the use of kernels.We show these models provide desirable test-time uncertainty estimates, demonstrate cases of modeling inductive biases for neural networks with kernels and demonstrate competitive predictive performance on an active learning benchmark.","We introduce a Gaussian Process Prior over weights in a neural network and explore its ability to model input-dependent weights with benefits to various tasks, including uncertainty estimation and generalization in the low-sample setting." 
77,Character-level Translation with Self-attention,"We perform an in-depth investigation of the suitability of self-attention models for character-level neural machine translation.We test the standard transformer model, as well as a novel variant in which the encoder block combines information from nearby characters using convolution.We perform extensive experiments on WMT and UN datasets, testing both bilingual and multilingual translation to English using up to three input languages.Our transformer variant consistently outperforms the standard transformer at the character-level and converges faster while learning more robust character-level alignments.",We perform an in-depth investigation of the suitability of self-attention models for character-level neural machine translation. 78,Learning to diagnose from scratch by exploiting dependencies among labels,"The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures.Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies.Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset.This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples -- ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks.We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training.Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice.",we present the state-of-the-art results of using neural networks to diagnose chest x-rays 79,Analysis of Video Feature Learning in Two-Stream CNNs on the Example of Zebrafish Swim Bout Classification,"Semmelhack et al. 
have achieved high classification accuracy in distinguishing swim bouts of zebrafish using a Support Vector Machine. Convolutional Neural Networks have reached superior performance in various image recognition tasks over SVMs, but these powerful networks remain a black box. Reaching better transparency helps to build trust in their classifications and makes learned features interpretable to experts. Using a recently developed technique called Deep Taylor Decomposition, we generated heatmaps to highlight input regions of high relevance for predictions. We find that our CNN makes predictions by analyzing the steadiness of the tail's trunk, which markedly differs from the manually extracted features used by Semmelhack et al. We further uncovered that the network paid attention to experimental artifacts. Removing these artifacts ensured the validity of predictions. After correction, our best CNN beats the SVM by 6.12%, achieving a classification accuracy of 96.32%. Our work thus demonstrates the utility of AI explainability for CNNs.",We demonstrate the utility of a recent AI explainability technique by visualizing the learned features of a CNN trained on binary classification of zebrafish movements. 80,INTERNAL-CONSISTENCY CONSTRAINTS FOR EMERGENT COMMUNICATION,"When communicating, humans rely on internally-consistent language representations. That is, as speakers, we expect listeners to behave the same way we do when we listen. This work proposes several methods for encouraging such internal consistency in dialog agents in an emergent communication setting. We consider two hypotheses about the effect of internal-consistency constraints: 1) that they improve agents’ ability to refer to unseen referents, and 2) that they improve agents’ ability to generalize across communicative roles. While we do not find evidence in favor of the former, our results show significant support for the latter.",Internal-consistency constraints improve agents' ability to develop emergent protocols that generalize across communicative roles.
81,Discovering the compositional structure of vector representations with Role Learning Networks,"Neural networks are able to perform tasks that rely on compositional structure even though they lack obvious mechanisms for representing this structure.To analyze the internal representations that enable such success, we propose ROLE, a technique that detects whether these representations implicitly encode symbolic structure.ROLE learns to approximate the representations of a target encoder E by learning a symbolic constituent structure and an embedding of that structure into E’s representational vector space.The constituents of the approximating symbol structure are defined by structural positions — roles — that can be filled by symbols.We show that when E is constructed to explicitly embed a particular type of structure, ROLE successfully extracts the ground-truth roles defining that structure.We then analyze a seq2seq network trained to perform a more complex compositional task, where there is no ground truth role scheme available.For this model, ROLE successfully discovers an interpretable symbolic structure that the model implicitly uses to perform the SCAN task, providing a comprehensive account of the link between the representations and the behavior of a notoriously hard-to-interpret type of model.We verify the causal importance of the discovered symbolic structure by showing that, when we systematically manipulate hidden embeddings based on this symbolic structure, the model’s output is also changed in the way predicted by our analysis.Finally, we use ROLE to explore whether popular sentence embedding models are capturing compositional structure and find evidence that they are not; we conclude by discussing how insights from ROLE can be used to impart new inductive biases that will improve the compositional abilities of such models.",We introduce a new analysis technique that discovers interpretable compositional structure in notoriously hard-to-interpret recurrent neural networks. 
82,A Unified Theory of Early Visual Representations from Retina to Cortex through Anatomically Constrained Deep CNNs,"The vertebrate visual system is hierarchically organized to process visual information in successive stages.Neural representations vary drastically across the first stages of visual processing: at the output of the retina, ganglion cell receptive fields exhibit a clear antagonistic center-surround structure, whereas in the primary visual cortex, typical RFs are sharply tuned to a precise orientation.There is currently no unified theory explaining these differences in representations across layers.Here, using a deep convolutional neural network trained on image recognition as a model of the visual system, we show that such differences in representation can emerge as a direct consequence of different neural resource constraints on the retinal and cortical networks, and for the first time we find a single model from which both geometries spontaneously emerge at the appropriate stages of visual processing.The key constraint is a reduced number of neurons at the retinal output, consistent with the anatomy of the optic nerve as a stringent bottleneck.Second, we find that, for simple downstream cortical networks, visual representations at the retinal output emerge as nonlinear and lossy feature detectors, whereas they emerge as linear and faithful encoders of the visual scene for more complex cortical networks.This result predicts that the retinas of small vertebrates should perform sophisticated nonlinear computations, extracting features directly relevant to behavior, whereas retinas of large animals such as primates should mostly encode the visual scene linearly and respond to a much broader range of stimuli.These predictions could reconcile the two seemingly incompatible views of the retina as either performing feature extraction or efficient coding of natural scenes, by suggesting that all vertebrates lie on a spectrum between these two objectives, depending on the degree of neural resources allocated to their visual system.",We reproduced neural representations found in biological visual systems by simulating their neural resource constraints in a deep convolutional model. 
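Entry 82's key manipulation is a narrow "optic nerve" bottleneck between a retina-like subnetwork and a cortex-like subnetwork. A minimal PyTorch sketch of such an anatomically constrained model, assuming illustrative layer widths rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class RetinaCortexNet(nn.Module):
    def __init__(self, bottleneck_channels=2, n_classes=10):
        super().__init__()
        # "Retina": conv layers squeezed into very few output channels, mimicking
        # the limited number of retinal ganglion cells / optic nerve fibers.
        self.retina = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, bottleneck_channels, 3, padding=1), nn.ReLU(),
        )
        # "Cortex": a deeper network operating on the bottlenecked representation.
        self.cortex = nn.Sequential(
            nn.Conv2d(bottleneck_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.cortex(self.retina(x))

model = RetinaCortexNet()
print(model(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 10])
```

Varying `bottleneck_channels` and the depth of the cortex block is the kind of resource constraint the abstract argues shapes whether retina-like layers become lossy feature detectors or near-linear encoders.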
83,Identifying Generalization Properties in Neural Networks,"While it has not yet been proven, empirical evidence suggests that model generalization is related to local properties of the optima which can be described via the Hessian.We connect model generalization with the local property of a solution under the PAC-Bayes paradigm.In particular, we prove that model generalization ability is related to the Hessian, the higher-order ""smoothness"" terms characterized by the Lipschitz constant of the Hessian, and the scales of the parameters.Guided by the proof, we propose a metric to score the generalization capability of the model, as well as an algorithm that optimizes the perturbed model accordingly.",a theory connecting Hessian of the solution and the generalization power of the model 84,EnGAN: Latent Space MCMC and Maximum Entropy Generators for Energy-based Models,"Unsupervised learning is about capturing dependencies between variables and is driven by the contrast between the probable vs improbable configurations of these variables, often either via a generative model which only samples probable ones or with an energy function which is low for probable ones and high for improbable ones.Here we consider learning both an energy function and an efficient approximate sampling mechanism for the corresponding distribution.Whereas the critic in generative adversarial networks learns to separate data and generator samples, introducing an entropy maximization regularizer on the generator can turn the interpretation of the critic into an energy function, which separates the training distribution from everything else, and thus can be used for tasks like anomaly or novelty detection.This paper is motivated by the older idea of sampling in latent space rather than data space because running a Monte-Carlo Markov Chain in latent space has been found to be easier and more efficient, and because a GAN-like generator can convert latent space samples to data space samples.For this purpose, we show how a Markov chain can be run in latent space whose samples can be mapped to data space, producing better samples.These samples are also used for the negative phase gradient required to estimate the log-likelihood gradient of the data space energy function.To maximize entropy at the output of the generator, we take advantage of recently introduced neural estimators of mutual information.We find that in addition to producing a useful scoring function for anomaly detection, the resulting approach produces sharp samples while covering the modes well, leading to high Inception and Fréchet scores.","We introduced entropy maximization to GANs, leading to a reinterpretation of the critic as an energy function." 
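Entry 84 above runs a Markov chain in the generator's latent space and maps the samples to data space through the generator. A minimal sketch of latent-space Langevin dynamics on an energy E(G(z)), with placeholder generator and energy networks (step size, noise level and network shapes are illustrative, not the paper's settings):

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784))  # stand-in G
energy = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 1))      # stand-in E

def latent_langevin(z, steps=50, step_size=0.01):
    """Langevin MCMC in latent space on E(G(z)); returns latent and data samples."""
    z = z.clone().requires_grad_(True)
    for _ in range(steps):
        e = energy(generator(z)).sum()
        grad = torch.autograd.grad(e, z)[0]
        with torch.no_grad():
            z = z - 0.5 * step_size * grad + (step_size ** 0.5) * torch.randn_like(z)
        z.requires_grad_(True)
    return z.detach(), generator(z).detach()

z0 = torch.randn(8, 16)
z_final, samples = latent_langevin(z0)
print(samples.shape)  # torch.Size([8, 784])
```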
85,“Style” Transfer for Musical Audio Using Multiple Time-Frequency Representations,"Neural Style Transfer has become a popular technique for generating images of distinct artistic styles using convolutional neural networks. This recent success in image style transfer has raised the question of whether similar methods can be leveraged to alter the “style” of musical audio. In this work, we attempt long time-scale high-quality audio transfer and texture synthesis in the time-domain that captures harmonic, rhythmic, and timbral elements related to musical style, using examples that may have different lengths and musical keys. We demonstrate the ability to use randomly initialized convolutional neural networks to transfer these aspects of musical style from one piece onto another using 3 different representations of audio: the log-magnitude of the Short Time Fourier Transform, the Mel spectrogram, and the Constant-Q Transform spectrogram. We propose using these representations as a way of generating and modifying perceptually significant characteristics of musical audio content. We demonstrate each representation's shortcomings and advantages over others by carefully designing neural network structures that complement the nature of musical audio. Finally, we show that the most compelling “style” transfer examples make use of an ensemble of these representations to help capture the varying desired characteristics of audio signals.","We present a long time-scale musical audio style transfer algorithm which synthesizes audio in the time-domain, but uses Time-Frequency representations of audio." 86,Continual adaptation for efficient machine communication,"To communicate with new partners in new contexts, humans rapidly form new linguistic conventions. Recent language models trained with deep neural networks are able to comprehend and produce the existing conventions present in their training data, but are not able to flexibly and interactively adapt those conventions on the fly as humans do. We introduce a repeated reference task as a benchmark for models of adaptation in communication and propose a regularized continual learning framework that allows an artificial agent initialized with a generic language model to more accurately and efficiently understand their partner over time. We evaluate this framework through simulations on COCO and in real-time reference game experiments with human partners.",We propose a repeated reference benchmark task and a regularized continual learning approach for adaptive communication with humans in unfamiliar domains 87,FSPool: Learning Set Representations with Featurewise Sort Pooling,"Traditional set prediction models can struggle with simple datasets due to an issue we call the responsibility problem. We introduce a pooling method for sets of feature vectors based on sorting features across elements of the set. This can be used to construct a permutation-equivariant auto-encoder that avoids this responsibility problem. On a toy dataset of polygons and a set version of MNIST, we show that such an auto-encoder produces considerably better reconstructions and representations. Replacing the pooling function in existing set encoders with FSPool improves accuracy and convergence speed on a variety of datasets.",Sort in encoder and undo sorting in decoder to avoid responsibility problem in set auto-encoders 88,PLEX: PLanner and EXecutor for Embodied Learning in Navigation,"We present a method for policy learning to navigate indoor environments. We adopt a hierarchical policy approach, where
two agents are trained to work in cohesion with one another to perform a complex navigation task. A Planner agent operates at a higher level and proposes sub-goals for an Executor agent. The Executor reports an embedding summary back to the Planner as additional side information at the end of its series of operations for the Planner's next sub-goal proposal. The end goal is generated by the environment and exposed to the Planner, which then decides which set of sub-goals to propose to the Executor. We show that this Planner-Executor setup drastically increases the sample efficiency of our method over traditional single agent approaches, effectively mitigating the difficulty accompanying long series of actions with a sparse reward signal. On the challenging Habitat environment, which requires navigating various realistic indoor environments, we demonstrate that our approach offers a significant improvement over prior work for navigation.",We present a hierarchical learning framework for navigation within an embodied learning setting 89,RTFM: Generalising to New Environment Dynamics via Reading,"Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose txt2π, a model that captures three-way interactions between the goal, document, and observations. On RTFM, txt2π generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, txt2π produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps.",We show language understanding via reading is a promising way to learn policies that generalise to new environments. 90,Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization,"An open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data. We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent Gradients: Gradients from similar examples are similar and so the overall gradient is stronger in certain directions where these reinforce each other. Thus changes to the network parameters during training are biased towards those that simultaneously benefit many examples when such similarity exists. We support this hypothesis with heuristic arguments and perturbative experiments and outline how this can explain several common empirical observations about Deep Learning. Furthermore, our analysis is not just descriptive, but prescriptive. It suggests a natural modification to gradient descent that can greatly reduce overfitting.",We propose a hypothesis for why gradient descent generalizes based on how per-example gradients interact with each other.
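Entry 90's hypothesis concerns how per-example gradients align. A minimal sketch that measures such alignment directly, computing per-example gradients for a tiny model and averaging their pairwise cosine similarities (this simple coherence statistic is one possible choice, not necessarily the paper's exact metric):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(20, 2)
x, y = torch.randn(16, 20), torch.randint(0, 2, (16,))

def per_example_grads(model, x, y):
    grads = []
    for i in range(x.shape[0]):
        model.zero_grad()
        F.cross_entropy(model(x[i:i+1]), y[i:i+1]).backward()
        grads.append(torch.cat([p.grad.flatten() for p in model.parameters()]))
    return torch.stack(grads)

g = F.normalize(per_example_grads(model, x, y), dim=1)
coherence = (g @ g.t()).mean()   # average pairwise cosine similarity of per-example gradients
print(float(coherence))
```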
91,"Deep 3D Pan via Local adaptive ""t-shaped"" convolutions with global and local adaptive dilations"," Recent advances in deep learning have shown promising results in many low-level vision tasks.However, solving the single-image-based view synthesis is still an open problem.In particular, the generation of new images at parallel camera views given a single input image is of great interest, as it enables 3D visualization of the 2D input scenery.We propose a novel network architecture to perform stereoscopic view synthesis at arbitrary camera positions along the X-axis, or Deep 3D Pan, with ""t-shaped"" adaptive kernels equipped with globally and locally adaptive dilations.""Our proposed network architecture, the monster-net, is devised with a novel t-shaped adaptive kernel with globally and locally adaptive dilation, which can efficiently incorporate global camera shift into and handle local 3D geometries of the target image's pixels for the synthesis of naturally looking 3D panned views when a 2-D input image is given."", 'Extensive experiments were performed on the KITTI, CityScapes and our VXXLXX_STEREO indoors dataset to prove the efficacy of our method.Our monster-net significantly outperforms the state-of-the-art method, SOTA, by a large margin in all metrics of RMSE, PSNR, and SSIM.Our proposed monster-net is capable of reconstructing more reliable image structures in synthesized images with coherent geometry.Moreover, the disparity information that can be extracted from the ""t-shaped"" kernel is much more reliable than that of the SOTA for the unsupervised monocular depth estimation task, confirming the effectiveness of our method.",Novel architecture for stereoscopic view synthesis at arbitrary camera shifts utilizing adaptive t-shaped kernels with adaptive dilations. 92,Cutting Down Training Memory by Re-fowarding,"Deep Neutral Networks require huge GPU memory when training on modern image/video databases.Unfortunately, the GPU memory as a hardware resource is always finite, which limits the image resolution, batch size, and learning rate that could be used for better DNN performance.In this paper, we propose a novel training approach, called Re-forwarding, that substantially reduces memory usage in training.Our approach automatically finds a subset of vertices in a DNN computation graph, and stores tensors only at these vertices during the first forward.During backward, extra local forwards are conducted to compute the missing tensors between the subset of vertices.The total memory cost becomes the sum of the memory cost at the subset of vertices and the maximum memory cost among local re-forwards.Re-forwarding trades training time overheads for memory and does not compromise any performance in testing.We propose theories and algorithms that achieve the optimal memory solutions for DNNs with either linear or arbitrary computation graphs.Experiments show that Re-forwarding cuts down up-to 80% of training memory on popular DNNs such as Alexnet, VGG, ResNet, Densenet and Inception net.","This paper proposes fundamental theory and optimal algorithms for DNN training, which reduce up to 80% of training memory for popular DNNs." 
93,On the Universal Approximability and Complexity Bounds of Quantized ReLU Neural Networks,"Compression is a key step to deploy large neural networks on resource-constrained platforms.As a popular compression technique, quantization constrains the number of distinct weight values, thus reducing the number of bits required to represent and store each weight.In this paper, we study the representation power of quantized neural networks.First, we prove the universal approximability of quantized ReLU networks on a wide class of functions.Then we provide upper bounds on the number of weights and the memory size for a given approximation error bound and the bit-width of weights for function-independent and function-dependent structures.Our results reveal that, to attain an approximation error bound of, the number of weights needed by a quantized network is no more than times that of an unquantized network.This overhead is of much lower order than the lower bound of the number of weights needed for the error bound, supporting the empirical success of various quantization techniques.To the best of our knowledge, this is the first in-depth study on the complexity bounds of quantized neural networks.",This paper proves the universal approximability of quantized ReLU neural networks and puts forward the complexity bound given arbitrary error. 94,CAQL: Continuous Action Q-Learning,"Reinforcement learning with value-based methods has shown success in a variety of domains such as games and recommender systems.When the action space is finite, these algorithms implicitly find a policy by learning the optimal value function, which is often very efficient.However, one major challenge of extending Q-learning to tackle continuous-action RL problems is that obtaining optimal Bellman backup requires solving a continuous action-maximization problem.While it is common to restrict the parameterization of the Q-function to be concave in actions to simplify the max-Q problem, such a restriction might lead to performance degradation.Alternatively, when the Q-function is parameterized with a generic feed-forward neural network, the max-Q problem can be NP-hard.In this work, we propose the CAQL method which minimizes the Bellman residual using Q-learning with one of several plug-and-play action optimizers.In particular, leveraging the strides of optimization theories in deep NN, we show that the max-Q problem can be solved optimally with mixed-integer programming---when the Q-function has sufficient representation power, this MIP-based optimization induces better policies and is more robust than counterparts, e.g., CEM or GA, that approximate the max-Q solution.To speed up training of CAQL, we develop three techniques, namely dynamic tolerance, dual filtering, and clustering.To speed up inference of CAQL, we introduce the action function that concurrently learns the optimal policy.To demonstrate the efficiency of CAQL, we compare it with state-of-the-art RL algorithms on benchmark continuous control problems that have different degrees of action constraints and show that CAQL significantly outperforms policy-based methods in heavily constrained environments.",A general framework of value-based reinforcement learning for continuous control 95,Generative Adversarial Network Training is a Continual Learning Problem,"Generative Adversarial Networks have proven to be a powerful framework for learning to draw samples from complex distributions.However, GANs are also notoriously difficult to train, with mode collapse and 
oscillations a common problem.We hypothesize that this is at least in part due to the evolution of the generator distribution and the catastrophic forgetting tendency of neural networks, which leads to the discriminator losing the ability to remember synthesized samples from previous instantiations of the generator.Recognizing this, our contributions are twofold.First, we show that GAN training makes for a more interesting and realistic benchmark for continual learning methods evaluation than some of the more canonical datasets.Second, we propose leveraging continual learning techniques to augment the discriminator, preserving its ability to recognize previous generator samples.We show that the resulting methods add only a light amount of computation, involve minimal changes to the model, and result in better overall performance on the examined image and text generation tasks.",Generative Adversarial Network Training is a Continual Learning Problem. 96,Network Reparameterization for Unseen Class Categorization,"Many problems with large-scale labeled training data have been impressively solved by deep learning.However, Unseen Class Categorization with minimal information provided about target classes is the most commonly encountered setting in industry, which remains a challenging research problem in machine learning.Previous approaches to UCC either fail to generate a powerful discriminative feature extractor or fail to learn a flexible classifier that can be easily adapted to unseen classes.In this paper, we propose to address these issues through network reparameterization, reparametrizing the learnable weights of a network as a function of other variables, by which we decouple the feature extraction part and the classification part of a deep classification model to suit the special setting of UCC, securing both strong discriminability and excellent adaptability.Extensive experiments for UCC on several widely-used benchmark datasets in the settings of zero-shot and few-shot learning demonstrate that our method with network reparameterization achieves state-of-the-art performance.",A unified framework for both few-shot learning and zero-shot learning based on network reparameterization 97,GraphQA: Protein Model Quality Assessment using Graph Convolutional Network,"Proteins are ubiquitous molecules whose function in biological processes is determined by their 3D structure.Experimental identification of a protein's structure can be time-consuming, prohibitively expensive, and not always possible.Alternatively, protein folding can be modeled using computational methods, which however are not guaranteed to always produce optimal results.GraphQA is a graph-based method to estimate the quality of protein models, that possesses favorable properties such as representation learning, explicit modeling of both sequential and 3D structure, geometric invariance and computational efficiency.In this work, we demonstrate significant improvements of the state-of-the-art for both hand-engineered and representation-learning approaches, as well as carefully evaluating the individual contributions of GraphQA.",GraphQA is a graph-based method for protein Quality Assessment that improves the state-of-the-art for both hand-engineered and representation-learning approaches 98,Learning in Confusion: Batch Active Learning with Noisy Oracle,"We study the problem of training machine learning models incrementally using active learning with access to imperfect or noisy oracles.We specifically consider the setting 
of batch active learning, in which multiple samples are selected as opposed to a single sample as in classical settings so as to reduce the training overhead.Our approach bridges between uniform randomness and score based importance sampling of clusters when selecting a batch of new samples.Experiments on benchmark image classification datasets show improvement over existing active learning strategies.We introduce an extra denoising layer to deep networks to make active learning robust to label noises and show significant improvements.",We address active learning in the batch setting with noisy oracles and use model uncertainty to encode the decision quality of active learning algorithm during acquisition. 99,Learning to Understand Goal Specifications by Modelling Reward,"Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards.However, this places on environment designers the onus of designing language-conditional reward functions which may not be easily or tractably implemented as the complexity of the environment and the language scales.To overcome this limitation, we present a framework within which instruction-conditional RL agents are trained using rewards obtained not from the environment, but from reward models which are jointly trained from expert examples. As reward models improve, they learn to accurately reward agents for completing tasks for environment configurations---and for instructions---not present amongst the expert data.This framework effectively separates the representation of what instructions require from how they can be executed.In a simple grid world, it enables an agent to learn a range of commands requiring interaction with blocks and understanding of spatial relations and underspecified abstract arrangements.We further show the method allows our agent to adapt to changes in the environment without requiring new expert examples.","We propose AGILE, a framework for training agents to perform instructions from examples of respective goal-states." 100,Multitask Soft Option Learning,"We present Multitask Soft Option Learning, a hierarchical multi-task framework based on Planning-as-Inference.MSOL extends the concept of Options, using separate variational posteriors for each task, regularized by a shared prior.The learned soft-options are temporally extended, allowing a higher-level master policy to train faster on new tasks by making decisions with lower frequency.Additionally, MSOL allows fine-tuning of soft-options for new tasks without unlearning previously useful behavior, and avoids problems with local minima in multitask training.We demonstrate empirically that MSOL significantly outperforms both hierarchical and flat transfer-learning baselines in challenging multi-task environments.","In Hierarchical RL, we introduce the notion of a 'soft', i.e. adaptable, option and show that this helps learning in multitask settings." 
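A hedged sketch of the batch-selection idea in entry 98 above: blend uniform randomness with score-based importance sampling over clusters of the unlabeled pool. The entropy score, the mixing weight lam, and the "take the most uncertain remaining member of the sampled cluster" rule are illustrative assumptions, not the paper's exact procedure.

    import numpy as np
    from sklearn.cluster import KMeans

    def select_batch(probs, features, batch_size, n_clusters=10, lam=0.5, seed=0):
        assert batch_size <= len(probs)
        rng = np.random.default_rng(seed)
        entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)       # per-sample uncertainty
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(features)
        # Cluster-level sampling distribution: a blend of uniform and score-based weights.
        scores = np.array([entropy[labels == c].mean() for c in range(n_clusters)])
        p = lam * scores / scores.sum() + (1 - lam) / n_clusters
        # Per-cluster queues, most uncertain members first.
        queues = {c: list(np.where(labels == c)[0][np.argsort(-entropy[labels == c])])
                  for c in range(n_clusters)}
        picked = []
        while len(picked) < batch_size:
            c = int(rng.choice(n_clusters, p=p))
            if queues[c]:
                picked.append(int(queues[c].pop(0)))
        return picked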
101,Guided variational autoencoder for disentanglement learning,"We propose an algorithm, guided variational autoencoder, that is able to learn a controllable generative model by performing latent representation disentanglement learning.The learning objective is achieved by providing signal to the latent encoding/embedding in VAE without changing its main backbone architecture, hence retaining the desirable properties of the VAE.We design an unsupervised and a supervised strategy in Guided-VAE and observe enhanced modeling and controlling capability over the vanilla VAE.In the unsupervised strategy, we guide the VAE learning by introducing a lightweight decoder that learns latent geometric transformation and principal components; in the supervised strategy, we use an adversarial excitation and inhibition mechanism to encourage the disentanglement of the latent variables.Guided-VAE enjoys its transparency and simplicity for the general representation learning task, as well as disentanglement learning.On a number of experiments for representation learning, improved synthesis/sampling, better disentanglement for classification, and reduced classification errors in meta learning have been observed.",Learning a controllable generative model by performing latent representation disentanglement learning. 102,Learning to Generate Filters for Convolutional Neural Networks,"Conventionally, convolutional neural networks process different images with the same set of filters.However, the variations in images pose a challenge to this fashion.In this paper, we propose to generate sample-specific filters for convolutional layers in the forward pass.Since the filters are generated on-the-fly, the model becomes more flexible and can better fit the training data compared to traditional CNNs.In order to obtain sample-specific features, we extract the intermediate feature maps from an autoencoder.As filters are usually high dimensional, we propose to learn a set of coefficients instead of a set of filters.These coefficients are used to linearly combine the base filters from a filter repository to generate the final filters for a CNN.The proposed method is evaluated on MNIST, MTFL and CIFAR10 datasets.Experiment results demonstrate that the classification accuracy of the baseline model can be improved by using the proposed filter generation method.",dynamically generate filters conditioned on the input image for CNNs in each forward pass 103,Doubly Nested Network for Resource-Efficient Inference,"We propose a new anytime neural network which allows partial evaluation by subnetworks with different widths as well as depths.Compared to conventional anytime networks only with the depth controllability, the increased architectural diversity leads to higher resource utilization and consequent performance improvement under various and dynamic resource budgets.We highlight architectural features to make our scheme feasible as well as efficient, and show its effectiveness in image classification tasks.",We propose a new anytime neural network which allows partial evaluation by subnetworks with different widths as well as depths. 
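An illustrative sketch of the mechanism in entry 102 above: generate sample-specific convolution filters as a linear combination of a learned filter repository, with coefficients predicted from the input. The coefficient network, repository size, and tensor shapes are assumptions for illustration, not the paper's exact architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FilterGeneratingConv(nn.Module):
        def __init__(self, in_ch=3, out_ch=16, k=3, n_base=8):
            super().__init__()
            # Repository of base filters shared across all samples.
            self.base = nn.Parameter(torch.randn(n_base, out_ch, in_ch, k, k) * 0.05)
            # Lightweight network mapping each sample to combination coefficients.
            self.coeff_net = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                           nn.Linear(in_ch, n_base))

        def forward(self, x):
            outs = []
            for xi in x:                                              # one filter set per sample
                coeff = torch.softmax(self.coeff_net(xi.unsqueeze(0)), dim=-1)      # (1, n_base)
                filt = (coeff.view(-1, 1, 1, 1, 1) * self.base).sum(dim=0)          # (out_ch, in_ch, k, k)
                outs.append(F.conv2d(xi.unsqueeze(0), filt, padding=1))
            return torch.cat(outs, dim=0)

    y = FilterGeneratingConv()(torch.randn(4, 3, 32, 32))             # -> (4, 16, 32, 32)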
104,Learning to Make Generalizable and Diverse Predictions for Retrosynthesis,"We propose a new model for making generalizable and diverse retrosynthetic reaction predictions.Given a target compound, the task is to predict the likely chemical reactants to produce the target.This generative task can be framed as a sequence-to-sequence problem by using the SMILES representations of the molecules.Building on top of the popular Transformer architecture, we propose two novel pre-training methods that construct relevant auxiliary tasks for our problem.Furthermore, we incorporate a discrete latent variable model into the architecture to encourage the model to produce a diverse set of alternative predictions.On the 50k subset of reaction examples from the United States patent literature benchmark dataset, our model greatly improves performance over the baseline, while also generating predictions that are more diverse.",We propose a new model for making generalizable and diverse retrosynthetic reaction predictions. 105,Trust-PCL: An Off-Policy Trust Region Method for Continuous Control,"Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning.While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment.To address this problem, we propose an off-policy trust region method, Trust-PCL, which exploits an observation that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path.The introduction of relative entropy regularization allows Trust-PCL to maintain optimization stability while exploiting off-policy data to improve sample efficiency.When evaluated on a number of continuous control tasks, Trust-PCL significantly improves the solution quality and sample efficiency of TRPO.",We extend recent insights related to softmax consistency to achieve state-of-the-art results in continuous control. 
106,Can Deep Reinforcement Learning solve Erdos-Selfridge-Spencer Games?,"Deep reinforcement learning has achieved many recent successes, but our understanding of its strengths and limitations is hampered by the lack of rich environments in which we can fully characterize optimal behavior, and correspondingly diagnose individual actions against such a characterization.Here we consider a family of combinatorial games, arising from work of Erdos, Selfridge, and Spencer, and we propose their use as environments for evaluating and comparing different approaches to reinforcement learning.These games have a number of appealing features: they are challenging for current learning approaches, but they form a low-dimensional, simply parametrized environment where there is a linear closed form solution for optimal behavior from any state, and the difficulty of the game can be tuned by changing environment parameters in an interpretable way.We use these Erdos-Selfridge-Spencer games not only to compare different algorithms, but also to compare approaches based on supervised and reinforcement learning, to analyze the power of multi-agent approaches in improving performance, and to evaluate generalization to environments outside the training set.","We adapt a family of combinatorial games with tunable difficulty and an optimal policy expressible as a linear network, developing it as a rich environment for reinforcement learning, showing contrasts in performance with supervised learning, and analyzing multiagent learning and generalization. " 107,ODIN: Outlier Detection In Neural Networks,"Adoption of deep learning in safety-critical systems raises the need for understanding what deep neural networks do not understand.Several methodologies to estimate model uncertainty have been proposed, but these methodologies constrain either how the neural network is trained or constructed.We present Outlier Detection In Neural networks, an assumption-free method for detecting outlier observations during prediction, based on principles widely used in manufacturing process monitoring.By using a linear approximation of the hidden layer manifold, we add prediction-time outlier detection to models after training without altering architecture or training.We demonstrate that ODIN efficiently detects outliers during prediction on Fashion-MNIST, ImageNet-synsets and speech command recognition.",An add-on method for deep learning to detect outliers during prediction-time 108,A Simple Fully Connected Network for Composing Word Embeddings from Characters,"This work introduces a simple network for producing character aware word embeddings.Position agnostic and position aware character embeddings are combined to produce an embedding vector for each word.The learned word representations are shown to be very sparse and facilitate improved results on language modeling tasks, despite using markedly fewer parameters, and without the need to apply dropout.A final experiment suggests that weight sharing contributes to sparsity, increases performance, and prevents overfitting.","A fully connected architecture is used to produce word embeddings from character representations, outperforms traditional embeddings and provides insight into sparsity and dropout." 
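A minimal sketch of the general idea behind entry 107 above: fit a linear (PCA) approximation of the hidden-layer activation manifold on training data, then flag test points whose activations fall far from that subspace. The residual statistic and the percentile threshold are illustrative choices, not the paper's exact calibration.

    import numpy as np

    def residual(acts, mean, basis):
        centered = acts - mean
        recon = centered @ basis.T @ basis                # projection onto the linear subspace
        return np.sum((centered - recon) ** 2, axis=1)    # squared reconstruction error

    def fit_activation_pca(train_acts, n_components=32):
        mean = train_acts.mean(axis=0)
        _, _, Vt = np.linalg.svd(train_acts - mean, full_matrices=False)
        basis = Vt[:n_components]                         # principal directions of the activations
        threshold = np.percentile(residual(train_acts, mean, basis), 99)  # assumed control limit
        return mean, basis, threshold

    def is_outlier(test_acts, mean, basis, threshold):
        return residual(test_acts, mean, basis) > threshold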
109,Attacking Binarized Neural Networks,"Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents.The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations.We propose a third benefit of very low-precision neural networks: improved robustness against some adversarial attacks, and in the worst case, performance that is on par with full-precision models.We focus on the very low-precision case where weights and activations are both quantized to 1, and note that stochastically quantizing weights in just one layer can sharply reduce the impact of iterative attacks.We observe that non-scaled binary neural networks exhibit a similar effect to the original procedure that led to, and a false notion of security.We address this by conducting both black-box and white-box experiments with binary models that do not artificially mask gradients.","We conduct adversarial attacks against binarized neural networks and show that we reduce the impact of the strongest attacks, while maintaining comparable accuracy in a black-box setting" 110,Empirical observations on the instability of aligning word vector spaces with GANs,"Unsupervised bilingual dictionary induction is useful for unsupervised machine translation and for cross-lingual transfer of models into low-resource languages.One approach to UBDI is to align word vector spaces in different languages using Generative adversarial networks with linear generators, achieving state-of-the-art performance for several language pairs.For some pairs, however, GAN-based induction is unstable or completely fails to align the vector spaces.We focus on cases where linear transformations provably exist, but the performance of GAN-based UBDI depends heavily on the model initialization.We show that the instability depends on the shape and density of the vector sets, but not on noise; it is the result of local optima, but neither over-parameterization nor changing the batch size or the learning rate consistently reduces instability.Nevertheless, we can stabilize GAN-based UBDI through best-of-N model selection, based on an unsupervised stopping criterion.","An empirical investigation of GAN-based alignment of word vector spaces, focusing on cases where linear transformations provably exist, but training is unstable." 
111,Dual-Component Deep Domain Adaptation: A New Approach for Cross Project Software Vulnerability Detection,"Owing to the ubiquity of computer software, software vulnerability detection has become an important problem in the software industry and in the field of computer security.One of the most crucial issues in SVD is coping with the scarcity of labeled vulnerabilities in projects that require the laborious manual labeling of code by software security experts.One possible way to address this is to employ deep domain adaptation which has recently witnessed enormous success in transferring learning from structural labeled to unlabeled data sources.The general idea is to map both source and target data into a joint feature space and close the discrepancy gap of those data in this joint feature space.Generative adversarial network is a technique that attempts to bridge the discrepancy gap and also emerges as a building block to develop deep domain adaptation approaches with state-of-the-art performance.However, deep domain adaptation approaches using the GAN principle to close the discrepancy gap are subject to the mode collapsing problem that negatively impacts the predictive performance.Our aim in this paper is to propose Dual Generator-Discriminator Deep Code Domain Adaptation Network for tackling the problem of transfer learning from labeled to unlabeled software projects in the context of SVD in order to resolve the mode collapsing problem faced in previous approaches.The experimental results on real-world software projects show that our proposed method outperforms state-of-the-art baselines by a wide margin.",Our aim in this paper is to propose a new approach for tackling the problem of transfer learning from labeled to unlabeled software projects in the context of SVD in order to resolve the mode collapsing problem faced in previous approaches. 112,Fast Node Embeddings: Learning Ego-Centric Representations,"Representation learning is one of the foundations of Deep Learning and allowed important improvements on several Machine Learning tasks, such as Neural Machine Translation, Question Answering and Speech Recognition.Recent works have proposed new methods for learning representations for nodes and edges in graphs.Several of these methods are based on the SkipGram algorithm, and they usually process a large number of multi-hop neighbors in order to produce the context from which node representations are learned.In this paper, we propose an effective and also efficient method for generating node embeddings in graphs that employs a restricted number of permutations over the immediate neighborhood of a node as context to generate its representation, thus producing ego-centric representations.We present a thorough evaluation showing that our method outperforms state-of-the-art methods in six different datasets related to the problems of link prediction and node classification, being one to three orders of magnitude faster than baselines when generating node embeddings for very large graphs.",A faster method for generating node embeddings that employs a number of permutations over a node's immediate neighborhood as context to generate its representation. 
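A hedged sketch of the context-generation step suggested by entry 112 above: instead of multi-hop random walks, build SkipGram-style contexts from a small number of permutations of a node's immediate neighbors. The number of permutations, the window size, and the downstream SkipGram trainer are assumptions; only the pair-generation idea is illustrated.

    import random

    def ego_centric_pairs(adjacency, n_permutations=4, window=2, seed=0):
        rng = random.Random(seed)
        pairs = []
        for node, neighbors in adjacency.items():
            for _ in range(n_permutations):
                ctx = list(neighbors)
                rng.shuffle(ctx)                        # one permutation of the ego network
                sequence = [node] + ctx
                for i, center in enumerate(sequence):   # (center, context) pairs within a window
                    for j in range(max(0, i - window), min(len(sequence), i + window + 1)):
                        if i != j:
                            pairs.append((center, sequence[j]))
        return pairs

    graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
    print(ego_centric_pairs(graph)[:6])   # pairs to feed into any SkipGram-style trainer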
113,Autoencoder-based Initialization for Recurrent Neural Networks with a Linear Memory,"Orthogonal recurrent neural networks address the vanishing gradient problem by parameterizing the recurrent connections using an orthogonal matrix.This class of models is particularly effective at solving tasks that require the memorization of long sequences.We propose an alternative solution based on explicit memorization using linear autoencoders for sequences.We show how a recently proposed recurrent architecture, the Linear Memory Network, composed of a nonlinear feedforward layer and a separate linear recurrence, can be used to solve hard memorization tasks.We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution.The initialization schema can be easily adapted to any recurrent architecture. We argue that this approach is superior to a random orthogonal initialization due to the autoencoder, which allows the memorization of long sequences even before training.The empirical analysis shows that our approach achieves competitive results against alternative orthogonal models, and the LSTM, on sequential MNIST, permuted MNIST and TIMIT.",We show how to initialize recurrent architectures with the closed-form solution of a linear autoencoder for sequences. We show the advantages of this approach compared to orthogonal RNNs. 114,Understanding and Improving Sequence-Labeling NER with Self-Attentive LSTMs,"This paper improves upon the line of research that formulates named entity recognition as a sequence-labeling problem.We use so-called black-box long short-term memory encoders to achieve state-of-the-art results while providing insightful understanding of what the auto-regressive model learns with a parallel self-attention mechanism.Specifically, we decouple the sequence-labeling problem of NER into entity chunking, e.g., Barack_B Obama_E was_O elected_O, and entity typing, e.g., Barack_PERSON Obama_PERSON was_NONE elected_NONE, and analyze how the model learns to, or has difficulties in, capturing text patterns for each of the subtasks.The insights we gain then lead us to explore a more sophisticated deep cross-Bi-LSTM encoder, which proves better at capturing global interactions given both empirical results and a theoretical justification.","We provide insightful understanding of sequence-labeling NER and propose to use two types of cross structures, both of which bring theoretical and empirical improvements." 
115,RelWalk -- A Latent Variable Model Approach to Knowledge Graph Embedding,"Knowledge Graph Embedding is the task of jointly learning entity and relation embeddings for a given knowledge graph.Existing methods for learning KGEs can be seen as a two-stage process where entities and relations in the knowledge graph are represented using some linear algebraic structures, and a scoring function is defined that evaluates the strength of a relation that holds between two entities using the corresponding relation and entity embeddings.Unfortunately, prior proposals for the scoring functions in the first step have been heuristically motivated, and it is unclear as to how the scoring functions in KGEs relate to the generation process of the underlying knowledge graph.To address this issue, we propose a generative account of the KGE learning task.Specifically, given a knowledge graph represented by a set of relational triples, where the semantic relation R holds between the two entities h and t, we extend the random walk model of word embeddings to KGE.We derive a theoretical relationship between the joint probability p and the embeddings of h, R and t.Moreover, we show that marginal loss minimisation, a popular objective used by much prior work in KGE, follows naturally from the log-likelihood ratio maximisation under the probabilities estimated from the KGEs according to our theoretical relationship.We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph.The KGEs learnt by our proposed method obtain state-of-the-art performance on FB15K237 and WN18RR benchmark datasets, providing empirical evidence in support of the theory.",We present a theoretically proven generative model of knowledge graph embedding. 116,Scaling shared model governance via model splitting,"Currently the only techniques for sharing governance of a deep learning model are homomorphic encryption and secure multiparty computation.Unfortunately, neither of these techniques is applicable to the training of large neural networks due to their large computational and communication overheads.As a scalable technique for shared model governance, we propose splitting the deep learning model between multiple parties.This paper empirically investigates the security guarantee of this technique, which is introduced as the problem of model completion: Given the entire training data set or an environment simulator, and a subset of the parameters of a trained deep learning model, how much training is required to recover the model’s original performance? We define a metric for evaluating the hardness of the model completion problem and study it empirically in both supervised learning on ImageNet and reinforcement learning on Atari and DeepMind Lab.Our experiments show that the model completion problem is harder in reinforcement learning than in supervised learning because of the unavailability of the trained agent’s trajectories, and its hardness depends not primarily on the number of parameters of the missing part, but more so on their type and location. 
Our results suggest that model splitting might be a feasible technique for shared model governance in some settings where training is very expensive.",We study empirically how hard it is to recover missing parts of trained models 117,Variational Domain Adaptation,"This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference.Unlike the existing methods on domain transfer through deep generative models, such as StarGAN and UFDN, the variational domain adaptation has three advantages.Firstly, the samples from the target are not required.Instead, the framework requires one known source as a prior and binary discriminators, discriminating the target domain from others.Consequently, the framework regards a target as a posterior that can be explicitly formulated through the Bayesian inference, as exhibited by a further proposed model of dual variational autoencoder.Secondly, the framework is scalable to large-scale domains.Just as VAE encodes a sample as a mode on a latent space, DualVAE encodes a domain as a mode on the dual latent space, named domain embedding.It reformulates the posterior with a natural pairing, which can be expanded to uncountable infinite domains such as continuous domains as well as interpolation.Thirdly, DualVAE converges quickly without sophisticated automatic/manual hyperparameter search in comparison to GANs as it requires only one additional parameter to VAE.Through numerical experiments, we demonstrate the three benefits with a multi-domain image generation task on CelebA with up to 60 domains, and show that DualVAE records state-of-the-art performance, outperforming StarGAN and UFDN.","This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference" 118,Adversarial Training and Provable Defenses: Bridging the Gap,"We propose a new method to train neural networks based on a novel combination of adversarial training and provable defenses.The key idea is to model training as a procedure which includes both the verifier and the adversary.In every iteration, the verifier aims to certify the network using convex relaxation while the adversary tries to find inputs inside that convex relaxation which cause verification to fail.We experimentally show that this training method is promising and achieves the best of both worlds – it produces a model with state-of-the-art accuracy and certified robustness on the challenging CIFAR-10 dataset with a 2/255 L-infinity perturbation.This is a significant improvement over the currently known best results of 68.3% accuracy and 53.9% certified robustness, achieved using a 5 times larger network than our work.",We propose a novel combination of adversarial training and provable defenses which produces a model with state-of-the-art accuracy and certified robustness on CIFAR-10. 
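A simplified sketch of the model-completion experiment described in entry 116 above: re-initialize a chosen part of a trained model and measure how much retraining is needed to recover its original performance. The recovery criterion (95% of the original accuracy), the optimizer, and the evaluate/train_batches interfaces are illustrative assumptions, not the paper's exact metric.

    import copy
    import torch
    import torch.nn as nn

    def model_completion_steps(model, missing_layer, train_batches, evaluate,
                               target_fraction=0.95, max_steps=1000):
        original_acc = evaluate(model)
        completed = copy.deepcopy(model)
        # "Lose" one part of the model by re-initializing its parameters.
        for p in dict(completed.named_modules())[missing_layer].parameters():
            nn.init.normal_(p, std=0.02)

        opt = torch.optim.Adam(completed.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for step, (x, y) in enumerate(train_batches):
            opt.zero_grad()
            loss_fn(completed(x), y).backward()
            opt.step()
            if evaluate(completed) >= target_fraction * original_acc:
                return step + 1                        # training needed to "complete" the model
            if step + 1 >= max_steps:
                break
        return None                                    # did not recover within the budget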
119,Learning to Represent Programs with Graphs,"Learning tasks on source code have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code's known syntax.For example, long-range dependencies induced by using the same variable or function in distant locations are often not considered.We propose to use graphs to represent both the syntactic and semantic structure of code and use graph-based deep learning methods to learn to reason over program structures.In this work, we present how to construct graphs from source code and how to scale Gated Graph Neural Networks training to such large graphs.We evaluate our method on two tasks: VarNaming, in which a network attempts to predict the name of a variable given its usage, and VarMisuse, in which the network learns to reason about selecting the correct variable that should be used at a given program location.Our comparison to methods that use less structured program representations shows the advantages of modeling known structure, and suggests that our models learn to infer meaningful names and to solve the VarMisuse task in many cases.Additionally, our testing showed that VarMisuse identifies a number of bugs in mature open-source projects.","Programs have structure that can be represented as graphs, and graph neural networks can learn to find bugs on such graphs" 120,Overfitting Detection of Deep Neural Networks without a Hold Out Set,"Overfitting is a ubiquitous problem in neural network training and usually mitigated using a holdout data set.Here we challenge this rationale and investigate criteria for overfitting without using a holdout data set.Specifically, we train a model for a fixed number of epochs multiple times with varying fractions of randomized labels and for a range of regularization strengths.A properly trained model should not be able to attain an accuracy greater than the fraction of properly labeled data points.Otherwise the model overfits.We introduce two criteria for detecting overfitting and one to detect underfitting.We analyze early stopping, the regularization factor, and network depth.In safety critical applications we are interested in models and parameter settings which perform well and are not likely to overfit.The methods of this paper allow characterizing and identifying such models.",We introduce and analyze several criteria for detecting overfitting. 
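A minimal sketch of the overfitting check in entry 120 above: train for a fixed budget on data where a fraction of labels has been randomized; a properly trained model should not exceed an accuracy of roughly the fraction of properly labeled points (plus chance on the corrupted ones). The chance-level correction and the sklearn-style train/predict interface are assumptions for illustration.

    import numpy as np

    def randomize_labels(y, fraction, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        y_noisy = y.copy()
        idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
        y_noisy[idx] = rng.integers(0, n_classes, size=len(idx))   # random labels on a subset
        return y_noisy

    def overfits(train_fn, X, y, n_classes, fraction=0.5):
        y_noisy = randomize_labels(y, fraction, n_classes)
        model = train_fn(X, y_noisy)                               # fixed number of epochs
        acc = (model.predict(X) == y_noisy).mean()                 # accuracy on corrupted labels
        ceiling = (1 - fraction) + fraction / n_classes            # clean fraction + chance on the rest
        return acc > ceiling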
121,Perception-Aware Point-Based Value Iteration for Partially Observable Markov Decision Processes,"Partially observable Markov decision processes are a widely-used framework to model decision-making with uncertainty about the environment and under stochastic outcome.In conventional POMDP models, the observations that the agent receives originate from a fixed known distribution.However, in a variety of real-world scenarios the agent has an active role in its perception by selecting which observations to receive.Due to the combinatorial nature of such a selection process, it is computationally intractable to integrate the perception decision with the planning decision.To prevent such expansion of the action space, we propose a greedy strategy for observation selection that aims to minimize the uncertainty in state.We develop a novel point-based value iteration algorithm that incorporates the greedy strategy to achieve near-optimal uncertainty reduction for sampled belief points.This in turn enables the solver to efficiently approximate the reachable subspace of the belief simplex by essentially separating computations related to perception from planning.Lastly, we implement the proposed solver and demonstrate its performance and computational advantage in a range of robotic scenarios where the robot simultaneously performs active perception and planning.",We develop a point-based value iteration solver for POMDPs with active perception and planning tasks. 122,Deep Decoder: Concise Image Representations from Untrained Non-convolutional Networks,"Deep neural networks, in particular convolutional neural networks, have become highly effective tools for compressing images and solving inverse problems including denoising, inpainting, and reconstruction from few and noisy measurements.This success can be attributed in part to their ability to represent and generate natural images well.Contrary to classical tools such as wavelets, image-generating deep neural networks have a large number of parameters---typically a multiple of their output dimension---and need to be trained on large datasets.In this paper, we propose an untrained simple image model, called the deep decoder, which is a deep neural network that can generate natural images from very few weight parameters.The deep decoder has a simple architecture with no convolutions and fewer weight parameters than the output dimensionality.This underparameterization enables the deep decoder to compress images into a concise set of network weights, which we show is on par with wavelet-based thresholding.Further, underparameterization provides a barrier to overfitting, allowing the deep decoder to have state-of-the-art performance for denoising.The deep decoder is simple in the sense that each layer has an identical structure that consists of only one upsampling unit, pixel-wise linear combination of channels, ReLU activation, and channelwise normalization.This simplicity makes the network amenable to theoretical analysis, and it sheds light on the aspects of neural networks that enable them to form effective signal representations.","We introduce an underparameterized, nonconvolutional, and simple deep neural network that can, without training, effectively represent natural images and solve image processing tasks like compression and denoising competitively." 
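A minimal sketch of one layer of the architecture described in entry 122 above: upsampling, a pixel-wise linear combination of channels, ReLU, and channel-wise normalization, repeated a few times and mapped to RGB. The channel count, number of blocks, and input size are illustrative assumptions.

    import torch
    import torch.nn as nn

    def deep_decoder(channels=64, n_blocks=5, out_channels=3):
        layers = []
        for _ in range(n_blocks):
            layers += [
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                # Pixel-wise channel mixing (a 1x1 weight matrix per pixel; no spatial convolution).
                nn.Conv2d(channels, channels, kernel_size=1, bias=False),
                nn.ReLU(),
                nn.BatchNorm2d(channels),              # channel-wise normalization
            ]
        layers += [nn.Conv2d(channels, out_channels, kernel_size=1), nn.Sigmoid()]
        return nn.Sequential(*layers)

    net = deep_decoder()
    z = torch.randn(1, 64, 8, 8)   # fixed random input; only the (few) weights are fitted to one image
    img = net(z)                   # -> (1, 3, 256, 256)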
123,Understanding Deep Neural Networks with Rectified Linear Units,"In this paper we investigate the family of functions representable by deep neural networks with rectified linear units.We give an algorithm to train a ReLU DNN with one hidden layer to global optimality with runtime polynomial in the data size albeit exponential in the input dimension.Further, we improve on the known lower bounds on size for approximating a ReLU deep net function by a shallower ReLU net.Our gap theorems hold for smoothly parametrized families of hard functions, contrary to countable, discrete families known in the literature.An example consequence of our gap theorems is the following: for every natural number there exists a function representable by a ReLU DNN with hidden layers and total size, such that any ReLU DNN with at most hidden layers will require at least total nodes.Finally, for the family of DNNs with ReLU activations, we show a new lower bound on the number of affine pieces, which is larger than previous constructions in certain regimes of the network architecture and most distinctively our lower bound is demonstrated by an explicit construction of a family of functions attaining this scaling.Our construction utilizes the theory of zonotopes from polyhedral theory.","This paper 1) characterizes functions representable by ReLU DNNs, 2) formally studies the benefit of depth in such architectures, 3) gives an algorithm to implement empirical risk minimization to global optimality for two layer ReLU nets." 124,Assessing the scalability of biologically-motivated deep learning algorithms and architectures,"The backpropagation of error algorithm is often said to be impossible to implement in a real brain.The recent success of deep networks in machine learning and AI, however, has inspired a number of proposals for understanding how the brain might learn across multiple layers, and hence how it might implement or approximate BP.As of yet, none of these proposals have been rigorously evaluated on tasks where BP-guided deep learning has proved critical, or in architectures more structured than simple fully-connected networks.Here we present the first results on scaling up a biologically motivated model of deep learning to datasets which need deep networks with appropriate architectures to achieve good performance.We present results on CIFAR-10 and ImageNet. For CIFAR-10 we show that our algorithm, a straightforward, weight-transport-free variant of difference target-propagation modified to remove backpropagation from the penultimate layer, is competitive with BP in training deep networks with locally defined receptive fields that have untied weights. For ImageNet we find that both DTP and our algorithm perform significantly worse than BP, opening questions about whether different architectures or algorithms are required to scale these approaches.Our results and implementation details help establish baselines for biologically motivated deep learning schemes going forward.",Benchmarks for biologically plausible learning algorithms on complex datasets and architectures 125,LEARNING TO SHARE: SIMULTANEOUS PARAMETER TYING AND SPARSIFICATION IN DEEP LEARNING,"Deep neural networks usually contain millions, maybe billions, of parameters/weights, making both storage and computation very expensive.This has motivated a large body of work to reduce the complexity of the neural network by using sparsity-inducing regularizers. 
Another well-known approach for controlling the complexity of DNNs is parameter sharing/tying, where certain sets of weights are forced to share a common value.Some forms of weight sharing are hard-wired to express certain invariances, with a notable example being the shift-invariance of convolutional layers.However, there may be other groups of weights that may be tied together during the learning process, thus further reducing the complexity of the network.In this paper, we adopt a recently proposed sparsity-inducing regularizer, named GrOWL, which encourages sparsity and, simultaneously, learns which groups of parameters should share a common value.GrOWL has been proven effective in linear regression, being able to identify and cope with strongly correlated covariates.Unlike standard sparsity-inducing regularizers, GrOWL not only eliminates unimportant neurons by setting all the corresponding weights to zero, but also explicitly identifies strongly correlated neurons by tying the corresponding weights to a common value.This ability of GrOWL motivates the following two-stage procedure: use GrOWL regularization in the training process to simultaneously identify significant neurons and groups of parameters that should be tied together; retrain the network, enforcing the structure that was unveiled in the previous phase, i.e., keeping only the significant neurons and enforcing the learned tying structure.We evaluate the proposed approach on several benchmark datasets, showing that it can dramatically compress the network with slight or even no loss on generalization performance.",We have proposed using the recent GrOWL regularizer for simultaneous parameter sparsity and tying in DNN learning. 126,On Weight-Sharing and Bilevel Optimization in Architecture Search,"Weight-sharing—the simultaneous optimization of multiple neural networks using the same parameters—has emerged as a key component of state-of-the-art neural architecture search.However, its success is poorly understood and often found to be surprising.We argue that, rather than just being an optimization trick, the weight-sharing approach is induced by the relaxation of a structured hypothesis space, and introduces new algorithmic and theoretical challenges as well as applications beyond neural architecture search.Algorithmically, we show how the geometry of ERM for weight-sharing requires greater care when designing gradient-based minimization methods and apply tools from non-convex non-Euclidean optimization to give general-purpose algorithms that adapt to the underlying structure.We further analyze the learning-theoretic behavior of the bilevel optimization solved by practical weight-sharing methods.Next, using kernel configuration and NLP feature selection as case studies, we demonstrate how weight-sharing applies to the architecture search generalization of NAS and effectively optimizes the resulting bilevel objective.Finally, we use our optimization analysis to develop a simple exponentiated gradient method for NAS that aligns with the underlying optimization geometry and matches state-of-the-art approaches on CIFAR-10.",An analysis of the learning and optimization structures of architecture search in neural networks and beyond. 
127,Practical lossless compression with latent variables using bits back coding,"Deep latent variable models have seen recent success in many data domains.Lossless compression is an application of these models which, despite having the potential to be highly useful, has yet to be implemented in a practical manner.We present 'Bits Back with ANS', a scheme to perform lossless compression with latent variable models at a near optimal rate.We demonstrate this scheme by using it to compress the MNIST dataset with a variational auto-encoder model, achieving compression rates superior to standard methods with only a simple VAE.Given that the scheme is highly amenable to parallelization, we conclude that with a sufficiently high quality generative model this scheme could be used to achieve substantial improvements in compression rate with acceptable running time.We make our implementation available open source at https://github.com/bits-back/bits-back .","We do lossless compression of large image datasets using a VAE and beat existing compression algorithms." 128,Latent Question Reformulation and Information Accumulation for Multi-Hop Machine Reading,"Multi-hop text-based question-answering is a current challenge in machine comprehension.This task requires sequentially integrating facts from multiple passages to answer complex natural language questions.In this paper, we propose a novel architecture, called the Latent Question Reformulation Network, a multi-hop and parallel attentive network designed for question-answering tasks that require reasoning capabilities.LQR-net is composed of an association of reading modules and reformulation modules.The purpose of the reading module is to produce a question-aware representation of the document.From this document representation, the reformulation module extracts essential elements to calculate an updated representation of the question.This updated question is then passed to the following hop.We evaluate our architecture on the HotpotQA question-answering dataset designed to assess multi-hop reasoning capabilities.Our model achieves competitive results on the public leaderboard and outperforms the best current models in terms of Exact Match and F1 score.Finally, we show that an analysis of the sequential reformulations can provide interpretable reasoning paths.","In this paper, we propose the Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require reasoning capabilities." 
129,Explaining Time Series by Counterfactuals,"We propose a method to automatically compute the importance of features at every observation in time series, by simulating counterfactual trajectories given previous observations.We define the importance of each observation as the change in the model output caused by replacing the observation with a generated one.Our method can be applied to arbitrarily complex time series models.We compare the generated feature importance to existing methods like sensitivity analyses, feature occlusion, and other explanation baselines to show that our approach generates more precise explanations and is less sensitive to noise in the input signals.",Explaining Multivariate Time Series Models by finding important observations in time using Counterfactuals 130,Unsupervised Domain Adaptation through Self-Supervision,"This paper addresses unsupervised domain adaptation, the setting where labeled training data is available on a source domain, but the goal is to have good performance on a target domain with only unlabeled data.Like much of previous work, we seek to align the learned representations of the source and target domains while preserving discriminability.The way we accomplish alignment is by learning to perform auxiliary self-supervised tasks on both domains simultaneously. Each self-supervised task brings the two domains closer together along the direction relevant to that task.Training this jointly with the main task classifier on the source domain is shown to successfully generalize to the unlabeled target domain. The presented objective is straightforward to implement and easy to optimize.We achieve state-of-the-art results on four out of seven standard benchmarks, and competitive results on segmentation adaptation.We also demonstrate that our method composes well with another popular pixel-level adaptation method.",We use self-supervision on both domains to align them for unsupervised domain adaptation. 131,Maximally Consistent Sampling and the Jaccard Index of Probability Distributions,"We introduce simple, efficient algorithms for computing a MinHash of a probability distribution, suitable for both sparse and dense data, with equivalent running times to the state of the art for both cases.The collision probability of these algorithms is a new measure of the similarity of positive vectors which we investigate in detail.We describe the sense in which this collision probability is optimal for any Locality Sensitive Hash based on sampling.We argue that this similarity measure is more useful for probability distributions than the similarity pursued by other algorithms for weighted MinHash, and is the natural generalization of the Jaccard index.",The minimum of a set of exponentially distributed hashes has a very useful collision probability that generalizes the Jaccard Index to probability distributions. 
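A minimal sketch of the sampling scheme suggested by entry 131 above: for each coordinate, draw an exponentially distributed "hash" from randomness shared across distributions and scale it by the inverse weight; the argmin is a consistent sample, and two distributions collide exactly when they select the same coordinate. Using a seeded generator as the shared randomness is an illustrative stand-in for a proper hash function.

    import numpy as np

    def consistent_sample(weights, seed=0):
        weights = np.asarray(weights, dtype=float)
        rng = np.random.default_rng(seed)                 # shared randomness across inputs
        e = rng.exponential(1.0, size=len(weights))       # one Exp(1) draw per coordinate
        with np.errstate(divide="ignore"):
            keys = np.where(weights > 0, e / weights, np.inf)
        return int(np.argmin(keys))                       # coordinate sampled proportional to its weight

    p = [0.5, 0.3, 0.2]
    q = [0.4, 0.4, 0.2]
    # Estimate the collision probability, a Jaccard-like similarity of the two distributions.
    hits = sum(consistent_sample(p, s) == consistent_sample(q, s) for s in range(2000)) / 2000
    print(hits)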
132,Graph Neural Networks with Generated Parameters for Relation Extraction,"Recently, progress has been made towards improving relational reasoning in the machine learning field.Among existing models, graph neural networks are one of the most effective approaches for multi-hop relational reasoning.In fact, multi-hop relational reasoning is indispensable in many natural language processing tasks such as relation extraction.In this paper, we propose to generate the parameters of graph neural networks according to natural language sentences, which enables GNNs to perform relational reasoning on unstructured text inputs.We verify GP-GNNs in relation extraction from text.Experimental results on a human-annotated dataset and two distantly supervised datasets show that our model achieves significant improvements compared to the baselines.We also perform a qualitative analysis to demonstrate that our model could discover more accurate relations by multi-hop relational reasoning.","A graph neural network model with parameters generated from natural languages, which can perform multi-hop reasoning. " 133,Online Meta-Critic Learning for Off-Policy Actor-Critic Methods,"Off-Policy Actor-Critic methods have proven successful in a variety of continuous control tasks.Normally, the critic’s action-value function is updated using temporal-difference, and the critic in turn provides a loss for the actor that trains it to take actions with higher expected return.In this paper, we introduce a novel and flexible meta-critic that observes the learning process and meta-learns an additional loss for the actor that accelerates and improves actor-critic learning.Compared to the vanilla critic, the meta-critic network is explicitly trained to accelerate the learning process; and compared to existing meta-learning algorithms, meta-critic is rapidly learned online for a single task, rather than slowly over a family of tasks.Crucially, our meta-critic framework is designed for off-policy based learners, which currently provide state-of-the-art reinforcement learning sample efficiency.We demonstrate that online meta-critic learning leads to improvements in a variety of continuous control environments when combined with contemporary Off-PAC methods DDPG, TD3 and the state-of-the-art SAC.","We present Meta-Critic, an auxiliary critic module for off-policy actor-critic methods that can be meta-learned online during single task learning." 
134,Non-vacuous Generalization Bounds at the ImageNet Scale: a PAC-Bayesian Compression Approach,"Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data.Nevertheless, these networks often generalize well in practice.It has also been observed that trained networks can often be compressed to much smaller representations.The purpose of this paper is to connect these two empirical observations.Our main technical result is a generalization bound for compressed networks based on the compressed size that, combined with off-the-shelf compression algorithms, leads to state-of-the-art generalization guarantees.In particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem.Additionally, we show that compressibility of models that tend to overfit is limited.Empirical results show that an increase in overfitting increases the number of bits required to describe a trained network.",We obtain non-vacuous generalization bounds on ImageNet-scale deep neural networks by combining an original PAC-Bayes bound and an off-the-shelf neural network compression method. 135,Adversarial Gain,"Adversarial examples can be defined as inputs to a model which induce a mistake -- where the model output is different than that of an oracle, perhaps in surprising or malicious ways.Original models of adversarial attacks are primarily studied in the context of classification and computer vision tasks.While several attacks have been proposed in natural language processing settings, they often vary in defining the parameters of an attack and what a successful attack would look like.The goal of this work is to propose a unifying model of adversarial examples suitable for NLP tasks in both generative and classification settings.We define the notion of adversarial gain: based in control theory, it is a measure of the change in the output of a system relative to the perturbation of the input presented to the learner.This definition, as we show, can be used under different feature spaces and distance conditions to determine attack or defense effectiveness across different intuitive manifolds.This notion of adversarial gain not only provides a useful way for evaluating adversaries and defenses, but can act as a building block for future work in robustness under adversaries due to its rooted nature in stability and manifold theory.",We propose an alternative measure for determining effectiveness of adversarial attacks in NLP models according to a distance measure-based method like incremental L2-gain in control theory. 
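A minimal sketch of the adversarial-gain measure described in entry 135 above: the change in the model output relative to the size of the input perturbation, in the spirit of an incremental L2-gain. The choice of distance functions for the input and output spaces is an assumption; the paper allows different feature spaces and distance conditions.

    import numpy as np

    def adversarial_gain(model, x, x_adv, input_dist=None, output_dist=None):
        input_dist = input_dist or (lambda a, b: np.linalg.norm(a - b))
        output_dist = output_dist or (lambda a, b: np.linalg.norm(a - b))
        denom = input_dist(x, x_adv)
        if denom == 0:
            return 0.0
        # Large gain: a small change in the input produces a large change in the output.
        return output_dist(model(x), model(x_adv)) / denom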
136,Decoupling the Layers in Residual Networks,"We propose a Warped Residual Network using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network.We apply a perturbation theory on residual networks and decouple the interactions between residual units.The resulting warp operator is a first order approximation of the output over multiple layers.The first order perturbation theory exhibits properties such as binomial path lengths and exponential gradient scaling found experimentally by Veit et al.We demonstrate through an extensive performance study that the proposed network achieves comparable predictive performance to the original residual network with the same number of parameters, while achieving a significant speed-up on the total training time.As WarpNet performs model parallelism in residual network training in which weights are distributed over different GPUs, it offers speed-up and capability to train larger networks compared to original residual networks.",We propose the Warped Residual Network using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network. 137,On Evaluating Explainability Algorithms,"A plethora of methods attempting to explain predictions of black-box models have been proposed by the Explainable Artificial Intelligence community.Yet, measuring the quality of the generated explanations is largely unexplored, making quantitative comparisons non-trivial.In this work, we propose a suite of multifaceted metrics that enables us to objectively compare explainers based on the correctness, consistency, as well as the confidence of the generated explanations.These metrics are computationally inexpensive, do not require model-retraining and can be used across different data modalities.We evaluate them on common explainers such as Grad-CAM, SmoothGrad, LIME and Integrated Gradients.Our experiments show that the proposed metrics reflect qualitative observations reported in earlier works.",We propose a suite of metrics that capture desired properties of explainability algorithms and use it to objectively compare and evaluate such methods 138,On the Confidence of Neural Network Predictions for some NLP Tasks,Neural networks are known to produce unexpected results on inputs that are far from the training distribution.One approach to tackle this problem is to detect the samples on which the trained network can not answer reliably.ODIN is a recently proposed method for out-of-distribution detection that does not modify the trained network and achieves good performance for various image classification tasks.In this paper we adapt ODIN for sentence classification and word tagging tasks.We show that the scores produced by ODIN can be used as a confidence measure for the predictions on both in-distribution and out-of-distribution datasets.,A recent out-of-distribution detection method helps to measure the confidence of RNN predictions for some NLP tasks 139,The Laplacian in RL: Learning Representations with Efficient Approximations,"The smallest eigenvectors of the graph Laplacian are well-known to provide a succinct representation of the geometry of a weighted graph.In reinforcement learning, where the weighted graph may be interpreted as the state transition process induced by a behavior policy acting on the environment, approximating the eigenvectors of the Laplacian provides a promising approach to state 
representation learning.However, existing methods for performing this approximation are ill-suited in general RL settings for two main reasons: First, they are computationally expensive, often requiring operations on large matrices.Second, these methods lack adequate justification beyond simple, tabular, finite-state settings.In this paper, we present a fully general and scalable method for approximating the eigenvectors of the Laplacian in a model-free RL context.We systematically evaluate our approach and empirically show that it generalizes beyond the tabular, finite-state setting.Even in tabular, finite-state settings, its ability to approximate the eigenvectors outperforms previous proposals.Finally, we show the potential benefits of using a Laplacian representation learned using our method in goal-achieving RL tasks, providing evidence that our technique can be used to significantly improve the performance of an RL agent.",We propose a scalable method to approximate the eigenvectors of the Laplacian in the reinforcement learning context and we show that the learned representations can improve the performance of an RL agent. 140,PocketFlow: An Automated Framework for Compressing and Accelerating Deep Neural Networks,"Deep neural networks are widely used in various domains, but the prohibitive computational complexity prevents their deployment on mobile devices.Numerous model compression algorithms have been proposed, however, it is often difficult and time-consuming to choose proper hyper-parameters to obtain an efficient compressed model.In this paper, we propose an automated framework for model compression and acceleration, namely PocketFlow.This is an easy-to-use toolkit that integrates a series of model compression algorithms and embeds a hyper-parameter optimization module to automatically search for the optimal combination of hyper-parameters.Furthermore, the compressed model can be converted into the TensorFlow Lite format and easily deployed on mobile devices to speed-up the inference.PocketFlow is now open-source and publicly available at https://github.com/Tencent/PocketFlow.","We propose PocketFlow, an automated framework for model compression and acceleration, to facilitate deep learning models' deployment on mobile devices." 
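For the Laplacian representation method above, the model-free objective is typically an attractive term on sampled transitions plus a repulsive/orthonormality term on random state pairs; the exact penalty form and coefficient below are assumptions made for illustration.

import torch

def laplacian_repr_loss(phi, s, s_next, s_rand1, s_rand2, beta=1.0):
    # phi: any network mapping states to d-dimensional features
    f, f_next = phi(s), phi(s_next)
    attract = ((f - f_next) ** 2).sum(dim=1).mean()   # smoothness along sampled transitions
    g1, g2 = phi(s_rand1), phi(s_rand2)
    dots = (g1 * g2).sum(dim=1)
    # soft orthonormality on random state pairs (assumed form of the penalty)
    orth = (dots ** 2).mean() - (g1 ** 2).sum(dim=1).mean() - (g2 ** 2).sum(dim=1).mean()
    return attract + beta * orth

The transitions (s, s_next) come from the behaviour policy acting on the environment, so no transition matrix ever needs to be formed.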
141,AmbientGAN: Generative models from lossy measurements,"Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest.However, current techniques for training generative models require access to fully-observed samples.In many settings, it is expensive or even impossible to obtain fully-observed samples, but economical to obtain partial, noisy observations.We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest.We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models.Based on this, we propose a new method of training Generative Adversarial Networks which we call AmbientGAN.On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements.Generative models trained with our method can obtain higher inception scores than the baselines.","How to learn GANs from noisy, distorted, partial observations" 142,Traditional and Heavy Tailed Self Regularization in Neural Network Models,"Random Matrix Theory is applied to analyze the weight matrices of Deep Neural Networks, including both production quality, pre-trained models such as AlexNet and Inception, and smaller models trained from scratch, such as LeNet5 and a miniature-AlexNet. Empirical and theoretical results clearly indicate that the empirical spectral density of DNN layer matrices displays signatures of traditionally-regularized statistical models, even in the absence of exogenously specifying traditional forms of regularization, such as Dropout or Weight Norm constraints. Building on recent results in RMT, most notably its extension to Universality classes of Heavy-Tailed matrices, we develop a theory to identify 5+1 Phases of Training, corresponding to increasing amounts of Implicit Self-Regularization. For smaller and/or older DNNs, this Implicit Self-Regularization is like traditional Tikhonov regularization, in that there is a ""size scale"" separating signal from noise. For state-of-the-art DNNs, however, we identify a novel form of Heavy-Tailed Self-Regularization, similar to the self-organization seen in the statistical physics of disordered systems. This implicit Self-Regularization can depend strongly on the many knobs of the training process. By exploiting the generalization gap phenomena, we demonstrate that we can cause a small model to exhibit all 5+1 phases of training simply by changing the batch size.","See the abstract. (For the revision, the paper is identical, except for a 59 page Supplementary Material, which can serve as a stand-alone technical report version of the paper.)" 143,Explanation-Based Attention for Semi-Supervised Deep Active Learning,We introduce an attention mechanism to improve feature extraction for deep active learning in the semi-supervised setting.The proposed attention mechanism is based on recent methods to visually explain predictions made by DNNs.We apply the proposed explanation-based attention to MNIST and SVHN classification.The conducted experiments show accuracy improvements for the original and class-imbalanced datasets with the same number of training examples and faster long-tail convergence compared to uncertainty-based methods.,We introduce an attention mechanism to improve feature extraction for deep active learning (AL) in the semi-supervised setting.
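The AmbientGAN training idea above can be summarised in a few lines: both real and generated samples are passed through the simulated lossy measurement before the discriminator sees them. The dropout-style measurement and the non-saturating GAN losses below are illustrative choices, not the only measurement models the paper covers.

import torch

def measure(x, keep_prob=0.5):
    # one example of a lossy measurement: random pixel dropout
    mask = (torch.rand_like(x) < keep_prob).float()
    return x * mask

def ambient_gan_losses(G, D, z, y_real_measured):
    # the discriminator only ever sees measured samples, real or generated
    y_fake_measured = measure(G(z))
    d_loss = -(torch.log(D(y_real_measured) + 1e-8).mean()
               + torch.log(1 - D(y_fake_measured) + 1e-8).mean())
    g_loss = -torch.log(D(measure(G(z))) + 1e-8).mean()
    return d_loss, g_loss

Here G and D are any generator and discriminator networks, with D outputting probabilities; as long as the measurement process is known and differentiable (or reparameterizable), the generator can still be trained end to end.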
144,Barcodes as summary of objective functions' topology,"We apply canonical forms of gradient complexes to explore neural network loss surfaces.We present an algorithm for calculating the objective function's barcodes of minima.Our experiments confirm two principal observations: the barcodes of minima are located in a small lower part of the range of values of the objective function, and increasing the neural network's depth brings down the minima's barcodes.This has natural implications for neural network learning and the ability to generalize.",We apply canonical forms of gradient complexes (barcodes) to explore neural network loss surfaces. 145,AMPNet: Asynchronous Model-Parallel Training for Dynamic Neural Networks,"New types of compute hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs.However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon.In particular, models that exploit structured input via complex and instance-dependent control flow are difficult to accelerate using existing algorithms and hardware that typically rely on minibatching.We present an asynchronous model-parallel training algorithm that is specifically motivated by training on networks of interconnected devices.Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently, even for small minibatch sizes, resulting in shorter overall training times.Our framework opens the door for scaling up a new class of deep learning models that cannot be efficiently trained today.",Using asynchronous gradient updates to accelerate dynamic neural network training 146,Reward Design in Cooperative Multi-agent Reinforcement Learning for Packet Routing,"In cooperative multi-agent reinforcement learning, how to design a suitable reward signal to accelerate learning and stabilize convergence is a critical problem.The global reward signal assigns the same global reward to all agents without distinguishing their contributions, while the local reward signal provides different local rewards to each agent based solely on individual behavior.Both reward assignment approaches have shortcomings: the former might encourage lazy agents, while the latter might produce selfish agents.In this paper, we study the reward design problem in cooperative MARL based on packet routing environments.Firstly, we show that the above two reward signals are prone to produce suboptimal policies.Then, inspired by some observations and considerations, we design some mixed reward signals, which can be used off-the-shelf to learn better policies.Finally, we turn the mixed reward signals into their adaptive counterparts, which achieve the best results in our experiments.Other reward signals are also discussed in this paper.As reward design is a very fundamental problem in RL and especially in MARL, we hope that MARL researchers can rethink the rewards used in their systems.","We study the reward design problem in cooperative MARL based on packet routing environments. The experimental results remind us to be careful to design the rewards, as they are really important to guide the agent behavior."
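One simple way to realise the mixed and adaptive reward signals from the packet-routing study above is to blend the shared global reward with each agent's local reward and move the blending weight over training. The particular schedule below is an assumption, not the authors' exact design.

def mixed_rewards(global_r, local_rs, alpha=0.5):
    # global_r: scalar team reward; local_rs: per-agent rewards
    return [alpha * r_i + (1.0 - alpha) * global_r for r_i in local_rs]

def adaptive_alpha(step, total_steps):
    # illustrative schedule: start mostly global (cooperative), end mostly local
    return min(1.0, step / total_steps)

print(mixed_rewards(global_r=1.0, local_rs=[0.2, 0.8], alpha=adaptive_alpha(2500, 10000)))

With alpha near 0 all agents share the same signal (risking lazy agents); with alpha near 1 each agent optimizes its own reward (risking selfish agents); the mixture sits between the two failure modes the abstract describes.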
147,Learning to Solve Linear Inverse Problems in Imaging with Neumann Networks,"Recent advances have illustrated that it is often possible to learn to solve linear inverse problems in imaging using training data that can outperform more traditional regularized least squares solutions.Along these lines, we present some extensions of the Neumann network, a recently introduced end-to-end learned architecture inspired by a truncated Neumann series expansion of the solution map to a regularized least squares problem.Here we summarize the Neumann network approach, and show that it has a form compatible with the optimal reconstruction function for a given inverse problem.We also investigate an extension of the Neumann network that incorporates a more sample efficient patch-based regularization approach.","Neumann networks are an end-to-end, sample-efficient learning approach to solving linear inverse problems in imaging that are compatible with the MSE optimal approach and admit an extension to patch-based learning." 148,Global-to-local Memory Pointer Networks for Task-Oriented Dialogue,"End-to-end task-oriented dialogue is challenging since knowledge bases are usually large, dynamic and hard to incorporate into a learning framework.We propose the global-to-local memory pointer networks to address this issue.In our model, a global memory encoder and a local memory decoder are proposed to share external knowledge.The encoder encodes dialogue history, modifies global contextual representation, and generates a global memory pointer.The decoder first generates a sketch response with unfilled slots.Next, it passes the global memory pointer to filter the external knowledge for relevant information, then instantiates the slots via the local memory pointers.We empirically show that our model can improve copy accuracy and mitigate the common out-of-vocabulary problem.As a result, GLMP is able to improve over the previous state-of-the-art models in both simulated bAbI Dialogue dataset and human-human Stanford Multi-domain Dialogue dataset on automatic and human evaluation.","GLMP: Global memory encoder (context RNN, global pointer) and local memory decoder (sketch RNN, local pointer) that share external knowledge (MemNN) are proposed to strengthen response generation in task-oriented dialogue." 
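The Neumann network described above unrolls a truncated Neumann series for the regularised least-squares solution, with the regulariser replaced by a small learned network. A compact sketch follows; the shapes, the number of terms B, and the learned block R are illustrative assumptions.

import torch
import torch.nn as nn

class NeumannNet(nn.Module):
    def __init__(self, forward_op, adjoint_op, B=6, eta=0.1, dim=64):
        super().__init__()
        self.X, self.Xt, self.B = forward_op, adjoint_op, B
        self.eta = nn.Parameter(torch.tensor(eta))
        # learned stand-in for the regularizer's gradient
        self.R = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, y):
        term = self.eta * self.Xt(y)          # first series term: eta * X^T y
        out = term
        for _ in range(self.B):
            # next term of the truncated Neumann series with a learned correction
            term = term - self.eta * self.Xt(self.X(term)) - self.R(term)
            out = out + term
        return out

forward_op and adjoint_op are whatever linear measurement operator and its adjoint define the inverse problem (e.g., blurring and its transpose); the whole unrolled estimator is trained end to end on (measurement, ground truth) pairs.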
149,ACE: Artificial Checkerboard Enhancer to Induce and Evade Adversarial Attacks,"The checkerboard phenomenon is one of the well-known visual artifacts in the computer vision field.The origins and solutions of checkerboard artifacts in the pixel space have been studied for a long time, but their effects on the gradient space have rarely been investigated.In this paper, we revisit the checkerboard artifacts in the gradient space which turn out to be the weak point of a network architecture.We explore image-agnostic property of gradient checkerboard artifacts and propose a simple yet effective defense method by utilizing the artifacts.We introduce our defense module, dubbed Artificial Checkerboard Enhancer, which induces adversarial attacks on designated pixels.This enables the model to deflect attacks by shifting only a single pixel in the image with a remarkable defense rate.We provide extensive experiments to support the effectiveness of our work for various attack scenarios using state-of-the-art attack methods.Furthermore, we show that ACE is even applicable to large-scale datasets including ImageNet dataset and can be easily transferred to various pretrained networks.",We propose a novel aritificial checkerboard enhancer (ACE) module which guides attacks to a pre-specified pixel space and successfully defends it with a simple padding operation. 150,Convergence Behaviour of Some Gradient-Based Methods on Bilinear Zero-Sum Games,"Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, and understanding the dynamics of gradient algorithms for solving such formulations has been a grand challenge.As a first step, we restrict to bilinear zero-sum games and give a systematic analysis of popular gradient updates, for both simultaneous and alternating versions.We provide exact conditions for their convergence and find the optimal parameter setup and convergence rates.In particular, our results offer formal evidence that alternating updates converge ""better"" than simultaneous ones.","We systematically analyze the convergence behaviour of popular gradient algorithms for solving bilinear games, with both simultaneous and alternating updates." 151,Cross-Linked Variational Autoencoders for Generalized Zero-Shot Learning,"Most approaches in generalized zero-shot learning rely on cross-modal mapping between an image feature space and a class embedding space or on generating artificial image features.However, learning a shared cross-modal embedding by aligning the latent spaces of modality-specific autoencoders is shown to be promising in zero-shot learning.While following the same direction, we also take artificial feature generation one step further and propose a model where a shared latent space of image features and class embeddings is learned by aligned variational autoencoders, for the purpose of generating latent features to train a softmax classifier.We evaluate our learned latent features on conventional benchmark datasets and establish a new state of the art on generalized zero-shot as well as on few-shot learning.Moreover, our results on ImageNet with various zero-shot splits show that our latent features generalize well in large-scale settings.",We use VAEs to learn a shared latent space embedding between image features and attributes and thereby achieve state-of-the-art results in generalized zero-shot learning. 
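The contrast analysed in the bilinear zero-sum abstract above is easy to reproduce on the scalar game f(x, y) = xy: simultaneous gradient descent-ascent spirals outward, while the alternating version stays bounded. The step size and iteration count below are arbitrary.

def simultaneous(x, y, lr=0.1, steps=200):
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x       # both players update from stale iterates
    return x, y

def alternating(x, y, lr=0.1, steps=200):
    for _ in range(steps):
        x = x - lr * y                       # player 1 moves first
        y = y + lr * x                       # player 2 reacts to the new x
    return x, y

print(simultaneous(1.0, 1.0))   # norm grows by sqrt(1 + lr^2) per step: divergence
print(alternating(1.0, 1.0))    # stays on a bounded orbit

This toy run only illustrates the qualitative gap; the paper derives the exact conditions, optimal step sizes, and convergence rates for general bilinear games.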
152,Spatial Information is Overrated for Image Classification,"Intuitively, image classification should profit from using spatial information.Recent work, however, suggests that this might be overrated in standard CNNs.In this paper, we are pushing the envelope and aim to further investigate the reliance on and necessity of spatial information.We propose and analyze three methods, namely Shuffle Conv, GAP+FC and 1x1 Conv, that destroy spatial information during both training and testing phases.We extensively evaluate these methods on several object recognition datasets with a wide range of CNN architectures.Interestingly, we consistently observe that spatial information can be completely deleted from a significant number of layers with no or only small performance drops.",Spatial information at last layers is not necessary for a good classification accuracy. 153,Unlabeled Disentangling of GANs with Guided Siamese Networks,"Disentangling underlying generative factors of a data distribution is important for interpretability and generalizable representations.In this paper, we introduce two novel disentangling methods.Our first method, Unlabeled Disentangling GAN, decomposes the latent noise by generating similar/dissimilar image pairs and it learns a distance metric on these pairs with siamese networks and a contrastive loss.This pairwise approach provides consistent representations for similar data points.Our second method modifies the UD-GAN with user-defined guidance functions, which restrict the information that goes into the siamese networks.This constraint helps UD-GAN-G to focus on the desired semantic variations in the data.We show that both our methods outperform existing unsupervised approaches in quantitative metrics that measure semantic accuracy of the learned representations.In addition, we illustrate that simple guidance functions we use in UD-GAN-G allow us to directly capture the desired variations in the data.",We use Siamese Networks to guide and disentangle the generation process in GANs without labeled data. 
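A possible implementation of the Shuffle Conv operation from the spatial-information study above randomly permutes spatial positions before an ordinary convolution, destroying spatial layout while keeping channel content. Whether the permutation is shared across samples and channels, as below, is an assumption of this sketch.

import torch
import torch.nn as nn

class ShuffleConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, **kw):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, **kw)

    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)
        perm = torch.randperm(h * w, device=x.device)   # one random spatial permutation
        return self.conv(flat[:, :, perm].view(b, c, h, w))

y = ShuffleConv2d(16, 32)(torch.randn(2, 16, 8, 8))     # -> (2, 32, 8, 8)

Swapping such a layer into the later stages of a CNN, during both training and testing, is the kind of intervention the abstract uses to test how much spatial information actually matters.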
154,Predicted Variables in Programming,"We present Predicted Variables, an approach to making machine learning a first class citizen in programming languages.There is a growing divide in approaches to building systems: using human experts on the one hand, and using behavior learned from data on the other hand.PVars aim to make using ML in programming easier by hybridizing the two.We leverage the existing concept of variables and create a new type, a predicted variable.PVars are akin to native variables with one important distinction: PVars determine their value using ML when evaluated.We describe PVars and their interface, how they can be used in programming, and demonstrate the feasibility of our approach on three algorithmic problems: binary search, QuickSort, and caches.We show experimentally that PVars are able to improve over the commonly used heuristics and lead to a better performance than the original algorithms.As opposed to previous work applying ML to algorithmic problems, PVars have the advantage that they can be used within the existing frameworks and do not require the existing domain knowledge to be replaced.PVars allow for a seamless integration of ML into existing systems and algorithms.Our PVars implementation currently relies on standard Reinforcement Learning methods.To learn faster, PVars use the heuristic function, which they are replacing, as an initial function.We show that PVars quickly pick up the behavior of the initial function and then improve performance beyond that without ever performing substantially worse -- allowing for a safe deployment in critical applications.","We present Predicted Variables, an approach to making machine learning a first class citizen in programming languages." 155,Unsupervised Hierarchical Video Prediction,"Much recent research has been devoted to video prediction and generation, but mostly for short-scale time horizons.The hierarchical video prediction method by Villegas et al. is an example of a state of the art method for long term video prediction. However, their method has limited applicability in practical settings as it requires a ground truth pose at training time. This paper presents a long term hierarchical video prediction model that does not have such a restriction.We show that the network learns its own higher-level structure that works better in cases where the ground truth pose does not fully capture all of the information needed to predict the next frame. This method gives sharper results than other video prediction methods which do not require a ground truth pose, and its efficiency is shown on the Humans 3.6M and Robot Pushing datasets.",We show ways to train a hierarchical video prediction model without needing pose labels. 
156,Classification in the dark using tactile exploration,"Combining information from different sensory modalities to execute goal directed actions is a key aspect of human intelligence.Specifically, human agents are very easily able to translate the task communicated in one sensory domain into a representation that enables them to complete this task when they can only sense their environment using a separate sensory modality.In order to build agents with similar capabilities, in this work we consider the problem of retrieving a target object from a drawer.The agent is provided with an image of a previously unseen object and it explores objects in the drawer using only tactile sensing to retrieve the object that was shown in the image without receiving any visual feedback.Success at this task requires close integration of visual and tactile sensing.We present a method for performing this task in a simulated environment using an anthropomorphic hand.We hope that future research in the direction of combining sensory signals for acting will find object retrieval from a drawer to be a useful benchmark problem.","In this work, we study the problem of learning representations to identify novel objects by exploring objects using tactile sensing. The key point is that the query is provided in the image domain." 157,Scaling Hierarchical Coreference with Homomorphic Compression,"Locality sensitive hashing schemes such as SimHash provide compact representations of multisets from which similarity can be estimated.However, in certain applications, we need to estimate the similarity of dynamically changing sets. In this case, we need the representation to be a homomorphism so that the hash of unions and differences of sets can be computed directly from the hashes of operands. We propose two representations that have this property for cosine similarity, and make substantial progress on a third representation for Jaccard similarity.We employ these hashes to compress the sufficient statistics of a conditional random field coreference model and study how this compression affects our ability to compute similarities as entities are split and merged during inference.We also provide novel statistical analysis of SimHash to help justify it as an estimator inside a CRF, showing that the bias and variance reduce quickly with the number of bits.On a problem of author coreference, we find that our SimHash scheme allows scaling the hierarchical coreference algorithm by an order of magnitude without degrading its statistical performance or the model's coreference accuracy, as long as we employ at least 128 or 256 bits.Angle-preserving random projections further improve the coreference quality, potentially allowing even fewer dimensions to be used.",We employ linear homomorphic compression schemes to represent the sufficient statistics of a conditional random field model of coreference and this allows us to scale inference and improve speed by an order of magnitude.
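The homomorphic representation described above can be sketched by keeping the raw random-projection sums rather than only their signs: unions and differences of multisets then correspond to adding and subtracting representations, and cosine similarity is estimated from sign agreement. The vocabulary size and bit count below are arbitrary, and this is only one of the representations the paper considers.

import numpy as np

rng = np.random.default_rng(0)
D, BITS = 10_000, 256                    # vocabulary size, number of projections
R = rng.standard_normal((BITS, D))

def represent(counts):
    # counts: length-D multiset vector; homomorphic: represent(a) + represent(b)
    # is the representation of the multiset union of a and b
    return R @ counts

def est_cosine(rep_a, rep_b):
    agree = np.mean(np.sign(rep_a) == np.sign(rep_b))
    return np.cos(np.pi * (1.0 - agree))   # SimHash-style cosine estimate

a, b = np.zeros(D), np.zeros(D)
a[[1, 2, 3]] = 1; b[[2, 3, 4]] = 1
print(est_cosine(represent(a), represent(b)))   # roughly 2/3, the true cosine

Because splits and merges of coreference entities are just additions and subtractions of these compressed statistics, similarity can be re-estimated without touching the original mention sets.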
158,Formal Limitations on the Measurement of Mutual Information,"Motivated by applications to unsupervised learning, we consider the problem of measuring mutual information.Recent analysis has shown that naive kNN estimators of mutual information have serious statistical limitations motivating more refined methods.In this paper we prove that serious statistical limitations are inherent to any measurement method.More specifically, we show that any distribution-free high-confidence lower bound on mutual information cannot be larger than O(log N), where N is the size of the data sample.We also analyze the Donsker-Varadhan lower bound on KL divergence in particular and show that, when simple statistical considerations are taken into account, this bound can never produce a high-confidence value larger than O(log N).While large high-confidence lower bounds are impossible, in practice one can use estimators without formal guarantees.We suggest expressing mutual information as a difference of entropies and using cross entropy as an entropy estimator. We observe that, although cross entropy is only an upper bound on entropy, cross-entropy estimates converge to the true cross entropy at the rate of O(1/√N).",We give a theoretical analysis of the measurement and optimization of mutual information. 159,Neuron Hierarchical Networks,"In this paper, we propose a neural network framework called the neuron hierarchical network (NHN), which evolves beyond the hierarchy in layers and concentrates on the hierarchy of neurons.We observe mass redundancy in the weights of both handcrafted and randomly searched architectures.Inspired by the development of human brains, we prune low-sensitivity neurons in the model and add new neurons to the graph, so that the relations between individual neurons are emphasized and the existence of layers is weakened.We propose a process to discover the best base model by random architecture search, and discover the best locations and connections of the added neurons by evolutionary search.Experimental results show that the NHN achieves higher test accuracy on CIFAR-10 than state-of-the-art handcrafted and randomly searched architectures, while requiring much fewer parameters and less searching time.","By breaking the layer hierarchy, we propose a 3-step approach to the construction of neuron-hierarchy networks that outperform NAS, SMASH and hierarchical representation with fewer parameters and shorter searching time." 160,Learning To Simulate,"Simulation is a useful tool in situations where training data for machine learning models is costly to annotate or even hard to acquire.In this work, we propose a reinforcement learning-based method for automatically adjusting the parameters of any simulator, thereby controlling the distribution of synthesized data in order to maximize the accuracy of a model trained on that data.In contrast to prior art that hand-crafts these simulation parameters or adjusts only parts of the available parameters, our approach fully controls the simulator with the actual underlying goal of maximizing accuracy, rather than mimicking the real data distribution or randomly generating a large volume of data.We find that our approach quickly converges to the optimal simulation parameters in controlled experiments and can indeed discover good sets of parameters for an image rendering simulator in actual computer vision applications.",We propose an algorithm that automatically adjusts parameters of a simulation engine to generate training data for a neural network such that validation accuracy is maximized.
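A bare-bones version of the learning-to-simulate loop above treats the simulator parameters as the output of a Gaussian policy and uses validation accuracy as the reward in a score-function (REINFORCE) update. The functions simulate, train_model, and validate are placeholders for the user's own pipeline, and the policy form is an illustrative assumption.

import numpy as np

def learn_to_simulate(simulate, train_model, validate, dim=4, iters=100, lr=0.05):
    mu, sigma = np.zeros(dim), 0.3 * np.ones(dim)     # Gaussian policy over sim params
    baseline = 0.0
    for _ in range(iters):
        theta = mu + sigma * np.random.randn(dim)     # sample simulation parameters
        data = simulate(theta)
        model = train_model(data)
        reward = validate(model)                      # validation accuracy as reward
        baseline = 0.9 * baseline + 0.1 * reward      # running baseline for variance reduction
        mu += lr * (reward - baseline) * (theta - mu) / (sigma ** 2)  # REINFORCE step on the mean
    return mu

Because the reward is the downstream validation accuracy rather than a measure of realism, the loop directly optimizes the quantity the abstract says hand-crafted or randomly sampled parameters fail to target.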
161,Noise Regularization for Conditional Density Estimation,"Modelling statistical relationships beyond the conditional mean is crucial in many settings.Conditional density estimation aims to learn the full conditional probability density from data.Though highly expressive, neural network based CDE models can suffer from severe over-fitting when trained with the maximum likelihood objective.Due to the inherent structure of such models, classical regularization approaches in the parameter space are rendered ineffective.To address this issue, we develop a model-agnostic noise regularization method for CDE that adds random perturbations to the data during training.We demonstrate that the proposed approach corresponds to a smoothness regularization and prove its asymptotic consistency.In our experiments, noise regularization significantly and consistently outperforms other regularization methods across seven data sets and three CDE models.The effectiveness of noise regularization makes neural network based CDE the preferable method over previous non- and semi-parametric approaches, even when training data is scarce.",A model-agnostic regularization scheme for neural network-based conditional density estimation. 162,PatchVAE: Learning Local Latent Codes for Recognition,"Unsupervised representation learning holds the promise of exploiting large amount of available unlabeled data to learn general representations.A promising technique for unsupervised learning is the framework of Variational Auto-encoders.However, unsupervised representations learned by VAEs are significantly outperformed by those learned by supervising for recognition.Our hypothesis is that to learn useful representations for recognition the model needs to be encouraged to learn about repeating and consistent patterns in data.Drawing inspiration from the mid-level representation discovery work, we propose PatchVAE, that reasons about images at patch level.Our key contribution is a bottleneck formulation in a VAE framework that encourages mid-level style representations.Our experiments demonstrate that representations learned by our method perform much better on the recognition tasks compared to those learned by vanilla VAEs.",A patch-based bottleneck formulation in a VAE framework that learns unsupervised representations better suited for visual recognition. 
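In practice, the noise regularization for conditional density estimation described above reduces to perturbing every (x, y) pair with small random noise each time it enters the maximum-likelihood update; the noise scales below are illustrative placeholders.

import numpy as np

def noisy_minibatch(x, y, h_x=0.1, h_y=0.1, rng=np.random.default_rng()):
    # add fresh perturbations every time the batch is used; feed the result
    # to the usual maximum-likelihood update of the CDE model
    x_tilde = x + h_x * rng.standard_normal(x.shape)
    y_tilde = y + h_y * rng.standard_normal(y.shape)
    return x_tilde, y_tilde

Because the perturbation acts on the data rather than the parameters, the same recipe applies unchanged to any of the neural CDE models the abstract mentions.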
163,Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization,"Vanishing and exploding gradients are two of the main obstacles in training deep neural networks, especially in capturing long range dependencies in recurrent neural networks.In this paper, we present an efficient parametrization of the transition matrix of an RNN that allows us to stabilize the gradients that arise in its training.Specifically, we parameterize the transition matrix by its singular value decomposition, which allows us to explicitly track and control its singular values.We attain efficiency by using tools that are common in numerical linear algebra, namely Householder reflectors for representing the orthogonal matrices that arise in the SVD.By explicitly controlling the singular values, our proposed svdRNN method allows us to easily solve the exploding gradient problem and we observe that it empirically solves the vanishing gradient issue to a large extent.We note that the SVD parameterization can be used for any rectangular weight matrix, hence it can be easily extended to any deep neural network, such as a multi-layer perceptron.Theoretically, we demonstrate that our parameterization does not lose any expressive power, and show how it potentially makes the optimization process easier.Our extensive experimental results also demonstrate that the proposed framework converges faster, and has good generalization, especially when the depth is large.","To solve the gradient vanishing/exploding problems, we proprose an efficient parametrization of the transition matrix of RNN that loses no expressive power, converges faster and has good generalization." 164,Total Style Transfer with a Single Feed-Forward Network,"Recent image style transferring methods achieved arbitrary stylization with input content and style images.To transfer the style of an arbitrary image to a content image, these methods used a feed-forward network with a lowest-scaled feature transformer or a cascade of the networks with a feature transformer of a corresponding scale.However, their approaches did not consider either multi-scaled style in their single-scale feature transformer or dependency between the transformed feature statistics across the cascade networks.This shortcoming resulted in generating partially and inexactly transferred style in the generated images.To overcome this limitation of partial style transfer, we propose a total style transferring method which transfers multi-scaled feature statistics through a single feed-forward process.First, our method transforms multi-scaled feature maps of a content image into those of a target style image by considering both inter-channel correlations in each single scaled feature map and inter-scale correlations between multi-scaled feature maps.Second, each transformed feature map is inserted into the decoder layer of the corresponding scale using skip-connection.Finally, the skip-connected multi-scaled feature maps are decoded into a stylized image through our trained decoder network.",A paper suggesting a method to transform the style of images using deep neural networks. 
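The SVD parameterization above can be sketched by composing Householder reflectors for the orthogonal factors and squashing the singular values into a band around one, so that gradients through the transition matrix can neither explode nor vanish. The band [0.9, 1.1] and the number of reflectors are arbitrary illustrative choices.

import torch

def householder_product(vs):
    # vs: (k, n) reflector vectors; returns the product of the k reflections
    n = vs.shape[1]
    Q = torch.eye(n)
    for v in vs:
        v = v / (v.norm() + 1e-12)
        Q = Q - 2.0 * torch.outer(Q @ v, v)       # Q <- Q (I - 2 v v^T)
    return Q

def transition_matrix(u_vs, v_vs, sigma_raw, lo=0.9, hi=1.1):
    U, V = householder_product(u_vs), householder_product(v_vs)
    sigma = lo + (hi - lo) * torch.sigmoid(sigma_raw)   # singular values kept in [lo, hi]
    return U @ torch.diag(sigma) @ V.T

n = 8
W = transition_matrix(torch.randn(n, n), torch.randn(n, n), torch.randn(n))
print(torch.linalg.svdvals(W))                     # all values lie in [0.9, 1.1]

Training then updates the reflector vectors and sigma_raw directly, so the singular values of the recurrent matrix stay explicitly controlled throughout optimization.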
165,I Know the Feeling: Learning to Converse with Empathy,"Beyond understanding what is being discussed, human communication requires an awareness of what someone is feeling.One challenge for dialogue agents is recognizing feelings in the conversation partner and replying accordingly, a key communicative skill that is trivial for humans.Research in this area is made difficult by the paucity of suitable publicly available datasets both for emotion and dialogues.This work proposes a new task for empathetic dialogue generation and EmpatheticDialogues, a dataset of 25k conversations grounded in emotional situations to facilitate training and evaluating dialogue systems.Our experiments indicate that dialogue models that use our dataset are perceived to be more empathetic by human evaluators, while improving on other metrics as well, compared to models merely trained on large-scale Internet conversation data.We also present empirical comparisons of several ways to improve the performance of a given model by leveraging existing models or datasets without requiring lengthy re-training of the full model.","We improve existing dialogue systems for responding to people sharing personal stories, incorporating emotion prediction representations and also release a new benchmark and dataset of empathetic dialogues." 166,Economy Statistical Recurrent Units For Inferring Nonlinear Granger Causality,"Granger causality is a widely-used criterion for analyzing interactions in large-scale networks.As most physical interactions are inherently nonlinear, we consider the problem of inferring the existence of pairwise Granger causality between nonlinearly interacting stochastic processes from their time series measurements.Our proposed approach relies on modeling the embedded nonlinearities in the measurements using a component-wise time series prediction model based on Statistical Recurrent Units.We make a case that the network topology of Granger causal relations is directly inferrable from a structured sparse estimate of the internal parameters of the SRU networks trained to predict the processes’ time series measurements.We propose a variant of SRU, called economy-SRU, which, by design has considerably fewer trainable parameters, and therefore less prone to overfitting.The economy-SRU computes a low-dimensional sketch of its high-dimensional hidden state in the form of random projections to generate the feedback for its recurrent processing.Additionally, the internal weight parameters of the economy-SRU are strategically regularized in a group-wise manner to facilitate the proposed network in extracting meaningful predictive features that are highly time-localized to mimic real-world causal events.Extensive experiments are carried out to demonstrate that the proposed economy-SRU based time series prediction model outperforms the MLP, LSTM and attention-gated CNN-based time series models considered previously for inferring Granger causality.",A new recurrent neural network architecture for detecting pairwise Granger causality between nonlinearly interacting time series. 
167,Stochastic Training of Graph Convolutional Networks,"Graph convolutional networks are powerful deep neural networks for graph-structured data.However, GCN computes nodes' representations recursively from their neighbors, making the receptive field size grow exponentially with the number of layers.Previous attempts at reducing the receptive field size by subsampling neighbors do not have any convergence guarantee, and their receptive field size per node is still in the order of hundreds.In this paper, we develop a preprocessing strategy and two control variate based algorithms to further reduce the receptive field size.Our algorithms are guaranteed to converge to GCN's local optimum regardless of the neighbor sampling size.Empirical results show that our algorithms have a convergence speed per epoch similar to the exact algorithm even when using only two neighbors per node.The time consumption of our algorithm on the Reddit dataset is only one fifth of previous neighbor sampling algorithms.",A control variate based stochastic training algorithm for graph convolutional networks in which the receptive field can be as small as two neighbors per node. 168,Instant Quantization of Neural Networks using Monte Carlo Methods,"Low bit-width integer weights and activations are very important for efficient inference, especially with respect to lower power consumption.We propose to apply Monte Carlo methods and importance sampling to sparsify and quantize pre-trained neural networks without any retraining.We obtain sparse, low bit-width integer representations that approximate the full precision weights and activations.The precision, sparsity, and complexity are easily configurable by the amount of sampling performed.Our approach, called Monte Carlo Quantization, is linear in both time and space, while the resulting quantized sparse networks show minimal accuracy loss compared to the original full-precision networks.Our method either outperforms or achieves results competitive with methods that do require additional training on a variety of challenging tasks.",Monte Carlo methods for quantizing pre-trained models without any additional training. 169,INFORMATION MAXIMIZATION AUTO-ENCODING,"We propose the Information Maximization Autoencoder, an information theoretic approach to simultaneously learn continuous and discrete representations in an unsupervised setting.Unlike the Variational Autoencoder framework, IMAE starts from a stochastic encoder that seeks to map each input data to a hybrid discrete and continuous representation with the objective of maximizing the mutual information between the data and their representations.A decoder is included to approximate the posterior distribution of the data given their representations, where a high fidelity approximation can be achieved by leveraging the informative representations.
We show that the proposed objective is theoretically valid and provides a principled framework for understanding the tradeoffs regarding informativeness of each representation factor, disentanglement of representations, and decoding quality.","Information theoretical approach for unsupervised learning of a hybrid of discrete and continuous representations." 170,Improving generalization by regularizing in $L^2$ function space,"Learning rules for neural networks necessarily include some form of regularization.Most regularization techniques are conceptualized and implemented in the space of parameters.However, it is also possible to regularize in the space of functions.Here, we propose to measure networks in a Hilbert space, and test a learning rule that regularizes the distance a network can travel through $L^2$-space in each update. This approach is inspired by the slow movement of gradient descent through parameter space as well as by the natural gradient, which can be derived from a regularization term upon functional change.The resulting learning rule, which we call Hilbert-constrained gradient descent, is thus closely related to the natural gradient but regularizes a different and more calculable metric over the space of functions.Experiments show that HCGD is efficient and leads to considerably better generalization.","It's important to consider optimization in function space, not just parameter space. We introduce a learning rule that reduces distance traveled in function space, just like SGD limits distance traveled in parameter space." 171,Stochastic Gradient Descent with Biased but Consistent Gradient Estimators,"Stochastic gradient descent, which dates back to the 1950s, is one of the most popular and effective approaches for performing stochastic optimization.Research on SGD resurged recently in machine learning for optimizing convex loss functions and training nonconvex deep neural networks.The theory assumes that one can easily compute an unbiased gradient estimator, which is usually the case due to the sample average nature of empirical risk minimization.There exist, however, many scenarios where an unbiased estimator may be as expensive to compute as the full gradient because training examples are interconnected.Recently, Chen et al.
proposed using a consistent gradient estimator as an economic alternative.Encouraged by empirical success, we show, in a general setting, that consistent estimators result in the same convergence behavior as do unbiased ones.Our analysis covers strongly convex, convex, and nonconvex objectives.We verify the results with illustrative experiments on synthetic and real-world data.This work opens several new research directions, including the development of more efficient SGD updates with consistent estimators and the design of efficient training algorithms for large-scale graphs.",Convergence theory for biased (but consistent) gradient estimators in stochastic optimization and application to graph convolutional networks 172,Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers,"We consider the problem of uncertainty estimation in the context of deep neural classification.In this context, all known methods are based on extracting uncertainty signals from a trained network optimized to solve the classification problem at hand.We demonstrate that such techniques tend to introduce biased estimates for instances whose predictions are supposed to be highly confident.We argue that this deficiency is an artifact of the dynamics of training with SGD-like optimizers, and it has some properties similar to overfitting.Based on this observation, we develop an uncertainty estimation algorithm that selectively estimates the uncertainty of highly confident points, using earlier snapshots of the trained model, before their estimates are jittered.We present extensive experiments indicating that the proposed algorithm provides uncertainty estimates that are consistently better than all known methods.",We use snapshots from the training process to improve any uncertainty estimation method of a DNN classifier. 173,FairFace: A Novel Face Attribute Dataset for Bias Measurement and Mitigation,"Existing public face image datasets are strongly biased toward Caucasian faces, and other races are significantly underrepresented.The models trained from such datasets suffer from inconsistent classification accuracy, which limits the applicability of face analytic systems to non-White race groups.To mitigate the race bias problem in these datasets, we constructed a novel face image dataset containing 108,501 images which is balanced on race.We define 7 race groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino.Images were collected from the YFCC-100M Flickr dataset and labeled with race, gender, and age groups.Evaluations were performed on existing face attribute datasets as well as novel image datasets to measure the generalization performance.We find that the model trained from our dataset is substantially more accurate on novel datasets and the accuracy is consistent across race and gender groups.We also compare several commercial computer vision APIs and report their balanced accuracy across gender, race, and age groups.","A new face image dataset for balanced race, gender, and age which can be used for bias measurement and mitigation" 174,A Learned Representation for Scalable Vector Graphics,"Dramatic advances in generative models have resulted in near photographic quality for artificially rendered faces, animals and other objects in the natural world.In spite of such advances, a higher level understanding of vision and imagery does not arise from exhaustively modeling an object, but instead identifying higher-level attributes that best summarize the aspects of an object. 
In this work we attempt to model the drawing process of fonts by building sequential generative models of vector graphics. This model has the benefit of providing a scale-invariant representation for imagery whose latent representation may be systematically manipulated and exploited to perform style propagation.We demonstrate these results on a large dataset of fonts and highlight how such a model captures the statistical dependencies and richness of this dataset.We envision that our model can find use as a tool for designers to facilitate font design.","We attempt to model the drawing process of fonts by building sequential generative models of vector graphics (SVGs), a highly structured representation of font characters." 175,Unsupervised Discovery of Dynamic Neural Circuits,"What can we learn about the functional organization of cortical microcircuits from large-scale recordings of neural activity? To obtain an explicit and interpretable model of time-dependent functional connections between neurons and to establish the dynamics of the cortical information flow, we develop 'dynamic neural relational inference'.We study both synthetic and real-world neural spiking data and demonstrate that the developed method is able to uncover the dynamic relations between neurons more reliably than existing baselines.","We develop 'dynamic neural relational inference', a variational autoencoder model that can explicitly and interpretably represent the hidden dynamic relations between neurons." 176,Exploring the Hidden Dimension in Accelerating Convolutional Neural Networks,DeePa is a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training process of convolutional neural networks.DeePa optimizes parallelism at the granularity of each individual layer in the network.We present an elimination-based algorithm that finds an optimal parallelism configuration for every layer.Our evaluation shows that DeePa achieves up to 6.5× speedup compared to state-of-the-art deep learning frameworks and reduces data transfers by up to 23×.,"To the best of our knowledge, DeePa is the first deep learning framework that controls and optimizes the parallelism of CNNs in all parallelizable dimensions at the granularity of each layer." 177,Learning Backpropagation-Free Deep Architectures with Kernels,"One can substitute each neuron in any neural network with a kernel machine and obtain a counterpart powered by kernel machines.The new network inherits the expressive power and architecture of the original but works in a more intuitive way since each node enjoys the simple interpretation as a hyperplane.Further, using the kernel multilayer perceptron as an example, we prove that in classification, an optimal representation that minimizes the risk of the network can be characterized for each hidden layer.This result removes the need for backpropagation in learning the model and can be generalized to any feedforward kernel network.Moreover, unlike backpropagation, which turns models into black boxes, the optimal hidden representation enjoys an intuitive geometric interpretation, making the dynamics of learning in a deep kernel network simple to understand.Empirical results are provided to validate our theory.",We combine kernel methods with connectionist models and show that the resulting deep architectures can be trained layer-wise and have more transparent learning dynamics.
178,Stochastic Learning of Additive Second-Order Penalties with Applications to Fairness,"Many notions of fairness may be expressed as linear constraints, and the resulting constrained objective is often optimized by transforming the problem into its Lagrangian dual with additive linear penalties.In non-convex settings, the resulting problem may be difficult to solve as the Lagrangian is not guaranteed to have a deterministic saddle-point equilibrium. In this paper, we propose to modify the linear penalties to second-order ones, and we argue that this results in a more practical training procedure in non-convex, large-data settings.For one, the use of second-order penalties allows training the penalized objective with a fixed value of the penalty coefficient, thus avoiding the instability and potential lack of convergence associated with two-player min-max games.Secondly, we derive a method for efficiently computing the gradients associated with the second-order penalties in stochastic mini-batch settings.Our resulting algorithm performs well empirically, learning an appropriately fair classifier on a number of standard benchmarks.",We propose a method to stochastically optimize second-order penalties and show how this may apply to training fairness-aware classifiers. 179,Understanding and Exploiting the Low-Rank Structure of Deep Networks,"Training methods for deep networks are primarily variants on stochastic gradient descent. Techniques that use second-order information are rarely used because of the computational cost and noise associated with those approaches in deep learning contexts. However, in this paper, we show how feedforward deep networks exhibit a low-rank derivative structure. This low-rank structure makes it possible to use second-order information without needing approximations and without incurring a significantly greater computational cost than gradient descent. To demonstrate this capability, we implement Cubic Regularization on a feedforward deep network with stochastic gradient descent and two of its variants. There, we use CR to calculate learning rates on a per-iteration basis while training on the MNIST and CIFAR-10 datasets. CR proved particularly successful in escaping plateau regions of the objective function. We also found that this approach requires less problem-specific information than other first-order methods in order to perform well.","We show that deep learning network derivatives have a low-rank structure, and this structure allows us to use second-order derivative information to calculate learning rates adaptively and in a computationally feasible manner." 
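For the second-order penalties described above, the practical point is that the gradient of (mu/2)*c(theta)^2 is mu*c(theta)*grad(c), so an unbiased stochastic gradient can be formed from two independent mini-batch estimates of the constraint. The constraint function g_fn, loss_fn, and the model below are placeholders, and the fixed penalty coefficient is an illustrative choice.

import torch

def penalized_step(model, loss_fn, g_fn, batch_main, batch_a, batch_b, mu=10.0, lr=1e-2):
    loss = loss_fn(model, batch_main)
    c_hat = g_fn(model, batch_a).mean().detach()       # independent estimate of c(theta)
    grad_c_term = g_fn(model, batch_b).mean()          # second, independent estimate
    total = loss + mu * c_hat * grad_c_term            # d/dtheta gives mu * c * grad(c)
    model.zero_grad()
    total.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad

Using two independent batches keeps the product estimate unbiased, and because mu stays fixed there is no min-max game over a Lagrange multiplier to destabilize training.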
180,Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness,"The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training, has shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball bounded input perturbations.Nonetheless, min-max optimization beyond the purpose of AT has not been rigorously explored in the research of adversarial attack and defense.In particular, given a set of risk sources, minimizing the maximal loss induced from the domain set can be reformulated as a general min-max problem that is different from AT.Examples of this general formulation include attacking model ensembles, devising universal perturbation under multiple inputs or data transformations, and generalized AT over different types of attack models.We show that these problems can be solved under a unified and theoretically principled min-max optimization framework. We also show that the self-adjusted domain weights learned from our method provides a means to explain the difficulty level of attack and defense over multiple domains.Extensive experiments show that our approach leads to substantial performance improvement over the conventional averaging strategy.",A unified min-max optimization framework for adversarial attack and defense 181,Dimensionality Reduction for Representing the Knowledge of Probabilistic Models,"Most deep learning models rely on expressive high-dimensional representations to achieve good performance on tasks such as classification.However, the high dimensionality of these representations makes them difficult to interpret and prone to over-fitting.We propose a simple, intuitive and scalable dimension reduction framework that takes into account the soft probabilistic interpretation of standard deep models for classification.When applying our framework to visualization, our representations more accurately reflect inter-class distances than standard visualization techniques such as t-SNE.We show experimentally that our framework improves generalization performance to unseen categories in zero-shot learning.We also provide a finite sample error upper bound guarantee for the method.",dimensionality reduction for cases where examples can be represented as soft probability distributions 182,Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration,"Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments.These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state and action spaces.However, they have so far assumed that self-generated goals are sampled in a specifically engineered feature space, limiting their autonomy.In this work, we propose an approach using deep representation learning algorithms to learn an adequate goal space.This is a developmental 2-stage approach: first, in a perceptual learning stage, deep learning algorithms use passive raw sensor observations of world changes to learn a corresponding latent space; then goal exploration happens in a second stage by sampling goals in this latent space.We present experiments with a simulated robot arm interacting with an object, and we show that exploration algorithms using such learned representations can closely match, and even sometimes improve, the performance obtained using engineered representations.","We 
propose a novel Intrinsically Motivated Goal Exploration architecture with unsupervised learning of goal space representations, and evaluate how various implementations enable the discovery of a diversity of policies." 183,Post-training for Deep Learning,"One of the main challenges of deep learning methods is the choice of an appropriate training strategy.In particular, additional steps, such as unsupervised pre-training, have been shown to greatly improve the performance of deep structures.In this article, we propose an extra training step, called post-training, which only optimizes the last layer of the network.We show that this procedure can be analyzed in the context of kernel theory, with the first layers computing an embedding of the data and the last layer a statistical model to solve the task based on this embedding.This step makes sure that the embedding, or representation, of the data is used in the best possible way for the considered task.This idea is then tested on multiple architectures with various data sets, showing that it consistently provides a boost in performance.","We propose an additional training step, called post-training, which computes optimal weights for the last layer of the network." 184,Compressing Word Embeddings via Deep Compositional Code Learning,"Natural language processing models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint.Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance.For this purpose, we propose to construct the embeddings with a few basis vectors.For each word, the composition of basis vectors is determined by a hash code.To maximize the compression rate, we adopt the multi-codebook quantization approach instead of a binary coding scheme.Each code is composed of multiple discrete numbers, where the value of each component is limited to a fixed range.We propose to directly learn the discrete codes in an end-to-end neural network by applying the Gumbel-softmax trick.Experiments show that the compression rate reaches 98% in a sentiment analysis task and 94% ~ 99% in machine translation tasks without performance loss.In both tasks, the proposed method can improve the model performance by slightly lowering the compression rate.Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture.",Compressing the word embeddings by over 94% without hurting the performance.
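To make the multi-codebook idea in the compositional-code entry above concrete, here is a minimal PyTorch sketch, under our own assumptions about sizes and module names, of an embedding formed as a sum of basis vectors selected by discrete codes learned with the Gumbel-softmax trick; it is an illustration, not the authors' implementation:

# Minimal sketch (not the authors' code): compositional word codes learned with the
# Gumbel-softmax trick. Hypothetical sizes: M codebooks of K basis vectors each.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompositionalEmbedding(nn.Module):
    def __init__(self, vocab_size, dim, num_codebooks=8, codebook_size=16):
        super().__init__()
        # One set of logits per word and codebook: these define the discrete code.
        self.code_logits = nn.Parameter(torch.randn(vocab_size, num_codebooks, codebook_size))
        # Shared basis vectors: num_codebooks codebooks, each with codebook_size vectors.
        self.codebooks = nn.Parameter(torch.randn(num_codebooks, codebook_size, dim))

    def forward(self, word_ids, tau=1.0):
        logits = self.code_logits[word_ids]                     # (B, M, K)
        # Differentiable one-hot selection during training; hard argmax in the forward pass.
        onehot = F.gumbel_softmax(logits, tau=tau, hard=True)   # (B, M, K)
        # Each word embedding is the sum of one selected basis vector per codebook.
        return torch.einsum("bmk,mkd->bd", onehot, self.codebooks)

emb = CompositionalEmbedding(vocab_size=10000, dim=300)
vectors = emb(torch.tensor([1, 42, 9999]))   # (3, 300)

At inference time only the integer codes and the shared codebooks need to be stored, which is where the compression comes from.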
185,Deep Anomaly Detection with Outlier Exposure,"It is important to detect anomalous inputs when deploying machine learning systems.The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples.At the same time, diverse image and text data are available in enormous quantities.We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure.This enables anomaly detectors to generalize and detect unseen anomalies.In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance.We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue.We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.","OE teaches anomaly detectors to learn heuristics for detecting unseen anomalies; experiments are in classification, density estimation, and calibration in NLP and vision settings; we do not tune on test distribution samples, unlike previous work" 186,Beyond GANs: Transforming without a Target Distribution,"While generative neural networks can learn to transform a specific input dataset into a specific target dataset, they require having just such a paired set of input/output datasets.For instance, to fool the discriminator, a generative adversarial network exclusively trained to transform images of black-haired *men* to blond-haired *men* would need to change gender-related characteristics as well as hair color when given images of black-haired *women* as input.This is problematic, as often it is possible to obtain *a* pair of distributions but then have a second source distribution where the target distribution is unknown.The computational challenge is that generative models are good at generation within the manifold of the data that they are trained on.However, generating new samples outside of the manifold or extrapolating ""out-of-sample"" is a much harder problem that has been less well studied.To address this, we introduce a technique called *neuron editing* that learns how neurons encode an edit for a particular transformation in a latent space.We use an autoencoder to decompose the variation within the dataset into activations of different neurons and generate transformed data by defining an editing transformation on those neurons.By performing the transformation in a trained latent space, we encode fairly complex and non-linear transformations to the data with much simpler distribution shifts to the neuron's activations.Our technique is general and works on a wide variety of data domains and applications.We first demonstrate it on image transformations and then move to our two main biological applications: removal of batch artifacts representing unwanted noise and modeling the effect of drug treatments to predict synergy between drugs.",A method for learning a transformation between one pair of source/target datasets and applying it to a separate source dataset for which there is no target dataset 187,TRUNCATED HORIZON POLICY SEARCH: COMBINING REINFORCEMENT LEARNING & IMITATION LEARNING,"In this paper, we propose to combine imitation and reinforcement learning via the idea of reward
shaping using an oracle.We study the effectiveness of the near-optimal cost-to-go oracle on the planning horizon and demonstrate that the cost-to-go oracle shortens the learner’s planning horizon as a function of its accuracy: a globally optimal oracle can shorten the planning horizon to one, leading to a one-step greedy Markov Decision Process which is much easier to optimize, while an oracle that is far from optimality requires planning over a longer horizon to achieve near-optimal performance.Hence our new insight bridges the gap and interpolates between imitation learning and reinforcement learning.Motivated by the above-mentioned insights, we propose Truncated HORizon Policy Search, a method that focuses on searching for policies that maximize the total reshaped reward over a finite planning horizon when the oracle is sub-optimal.We experimentally demonstrate that a gradient-based implementation of THOR can achieve superior performance compared to RL baselines and IL baselines even when the oracle is sub-optimal.",Combining Imitation Learning and Reinforcement Learning to learn to outperform the expert 188,Unsupervised Demixing of Structured Signals from Their Superposition Using GANs,"Recently, Generative Adversarial Networks have emerged as a popular alternative for modeling complex high dimensional distributions.Most of the existing works implicitly assume that the clean samples from the target distribution are easily available.However, in many applications, this assumption is violated.In this paper, we consider the observation setting in which the samples from a target distribution are given by the superposition of two structured components, and leverage GANs for learning the structure of the components.We propose a novel framework, demixing-GAN, which learns the distribution of two components at the same time.Through extensive numerical experiments, we demonstrate that the proposed framework can generate clean samples from unknown distributions, which can further be used for demixing of the unseen test images.",An unsupervised learning approach for separating two structured signals from their superposition 189,On the relationship between Normalising Flows and Variational- and Denoising Autoencoders,"Normalising Flows are a class of likelihood-based generative models that have recently gained popularity.They are based on the idea of transforming a simple density into that of the data.We seek to better understand this class of models, and how they compare to previously proposed techniques for generative modeling and unsupervised representation learning.For this purpose we reinterpret NFs in the framework of Variational Autoencoders, and present a new form of VAE that generalises normalising flows.The new generalised model also reveals a close connection to denoising autoencoders, and we therefore call our model the Variational Denoising Autoencoder.Using our unified model, we systematically examine the model space between flows, variational autoencoders, and denoising autoencoders, in a set of preliminary experiments on the MNIST handwritten digits.The experiments shed light on the modeling assumptions implicit in these models, and they suggest multiple new directions for future research in this space.","We explore the relationship between Normalising Flows and Variational- and Denoising Autoencoders, and propose a novel model that generalises them."
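The reward-shaping mechanism behind the truncated-horizon policy search entry above (THOR) can be written down compactly. A minimal sketch, assuming the oracle supplies an approximate value estimate V_hat and that standard potential-based shaping is used; this is our own illustration rather than the authors' code:

# Minimal sketch (our own illustration): potential-based reward shaping with an oracle
# value estimate V_hat. A perfect oracle makes the one-step greedy policy optimal; a
# weaker oracle only shortens the effective horizon that must be planned over.
def shaped_reward(r, s, s_next, V_hat, gamma=0.99):
    return r + gamma * V_hat(s_next) - V_hat(s)

def truncated_return(rewards, states, V_hat, k, gamma=0.99):
    # Total reshaped reward over a finite planning horizon of k steps
    # (states must contain at least k + 1 entries).
    return sum(gamma ** t * shaped_reward(rewards[t], states[t], states[t + 1], V_hat, gamma)
               for t in range(k))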
190,Multi-agent query reformulation: Challenges and the role of diversity,"We investigate methods to efficiently learn diverse strategies in reinforcement learning for a generative structured prediction problem: query reformulation.In the proposed framework an agent consists of multiple specialized sub-agents and a meta-agent that learns to aggregate the answers from sub-agents to produce a final answer.Sub-agents are trained on disjoint partitions of the training data, while the meta-agent is trained on the full training set.Our method makes learning faster, because it is highly parallelizable, and has better generalization performance than strong baselines, such asan ensemble of agents trained on the full data.We evaluate on the tasks of document retrieval and question answering.Theimproved performance seems due to the increased diversity of reformulation strategies.This suggests that multi-agent, hierarchical approaches might play an important role in structured prediction tasks of this kind.However, we also find that it is not obvious how to characterize diversity in this context, and a first attempt based on clustering did not produce good results.Furthermore, reinforcement learning for the reformulation task is hard in high-performance regimes.At best, it only marginally improves over the state of the art, which highlights the complexity of training models in this framework for end-to-end language understanding problems.",We use reinforcement learning for query reformulation on two tasks and surprisingly find that when training multiple agents diversity of the reformulations is more important than specialisation. 191,Safe Policy Learning for Continuous Control,"We study continuous action reinforcement learning problems in which it is crucial that the agent interacts with the environment only through safe policies, i.e.,~policies that keep the agent in desirable situations, both during training and at convergence.We formulate these problems as Markov decision processes and present safe policy optimization algorithms that are based on a Lyapunov approach to solve them.Our algorithms can use any standard policy gradient method, such as deep deterministic policy gradient or proximal policy optimization, to train a neural network policy, while guaranteeing near-constraint satisfaction for every policy update by projecting either the policy parameter or the selected action onto the set of feasible solutions induced by the state-dependent linearized Lyapunov constraints.Compared to the existing constrained PG algorithms, ours are more data efficient as they are able to utilize both on-policy and off-policy data.Moreover, our action-projection algorithm often leads to less conservative policy updates and allows for natural integration into an end-to-end PG training pipeline.We evaluate our algorithms and compare them with the state-of-the-art baselines on several simulated tasks, as well as a real-world robot obstacle-avoidance problem, demonstrating their effectiveness in terms of balancing performance and constraint satisfaction.",A general framework for incorporating long-term safety constraints in policy-based reinforcement learning 192,Evaluation of generative networks through their data augmentation capacity,"Generative networks are known to be difficult to assess.Recent works on generative models, especially on generative adversarial networks, produce nice samples of varied categories of images.But the validation of their quality is highly dependent on the method used.A good generator 
should generate data which contain meaningful and varied information and that fit the distribution of a dataset.This paper presents a new method to assess a generator.Our approach is based on training a classifier with a mixture of real and generated samples.We train a generative model over a labeled training set, then we use this generative model to sample new data points that we mix with the original training data.This mixture of real and generated data is thus used to train a classifier which is afterwards tested on a given labeled test dataset.We compare this result with the score of the same classifier trained on the real training data mixed with noise.By computing the classifier's accuracy with different ratios of samples from both distributions we are able to estimate if the generator successfully fits and is able to generalize the distribution of the dataset.Our experiments compare the results of different generators from the VAE and GAN frameworks on the MNIST and Fashion-MNIST datasets.",Evaluating generative networks through their data augmentation capacity on discriminative models. 193,Improving Neural Abstractive Summarization Using Transfer Learning and Factuality-Based Evaluation: Towards Automating Science Journalism,"We propose Automating Science Journalism, the process of producing a press release from a scientific paper, as a novel task that can serve as a new benchmark for neural abstractive summarization.ASJ is a challenging task as it requires long source texts to be summarized to long target texts, while also paraphrasing complex scientific concepts to be understood by the general audience.For this purpose, we introduce a specialized dataset for ASJ that contains scientific papers and their press releases from Science Daily.While state-of-the-art sequence-to-sequence models could easily generate convincing press releases for ASJ, these are generally nonfactual and deviate from the source.To address this issue, we improve seq2seq generation via transfer learning by co-training with new targets: scientific abstracts of sources and partitioned press releases.We further design a measure for factuality that scores how pertinent the press releases generated by our seq2seq models are to the source scientific papers.Our quantitative and qualitative evaluation shows sizable improvements over a strong baseline, suggesting that the proposed framework could improve seq2seq summarization beyond ASJ.",New: application of seq2seq modelling to automating science journalism; highly abstractive dataset; transfer learning tricks; automatic evaluation measure.
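The evaluation protocol in the generator-assessment entry above (evaluating generative networks through their data augmentation capacity) is easy to sketch: train a classifier on a real/generated mixture and compare against the same classifier trained on a real/noise mixture. Below is a minimal illustration with scikit-learn, under our own assumptions (a logistic-regression classifier and a uniform-noise baseline); it is not the authors' code:

# Minimal sketch (our own illustration) of the mixed-training evaluation protocol.
import numpy as np
from sklearn.linear_model import LogisticRegression

def augmentation_score(x_real, y_real, x_gen, y_gen, x_test, y_test, ratio=0.5):
    # Train on real data mixed with `ratio * len(x_real)` generated samples.
    n_gen = int(ratio * len(x_real))
    x_mix = np.concatenate([x_real, x_gen[:n_gen]])
    y_mix = np.concatenate([y_real, y_gen[:n_gen]])
    clf = LogisticRegression(max_iter=1000).fit(x_mix.reshape(len(x_mix), -1), y_mix)
    return clf.score(x_test.reshape(len(x_test), -1), y_test)

def noise_baseline(x_real, y_real, x_test, y_test, ratio=0.5):
    # Same protocol, but the "generated" samples are uniform noise with random labels.
    x_noise = np.random.rand(*x_real.shape)
    y_noise = np.random.choice(np.unique(y_real), size=len(x_real))
    return augmentation_score(x_real, y_real, x_noise, y_noise, x_test, y_test, ratio)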
194,Design for Interpretability,"The interpretability of an AI agent's behavior is of utmost importance for effective human-AI interaction.To this end, there has been increasing interest in characterizing and generating interpretable behavior of the agent.An alternative approach to guarantee that the agent generates interpretable behavior would be to design the agent's environment such that uninterpretable behaviors are either prohibitively expensive or unavailable to the agent.To date, there has been work under the umbrella of goal or plan recognition design exploring this notion of environment redesign in some specific instances of interpretable behavior.In this position paper, we scope the landscape of interpretable behavior and environment redesign in all its different flavors.Specifically, we focus on three specific types of interpretable behaviors -- explicability, legibility, and predictability -- and present a general framework for the problem of environment design that can be instantiated to achieve each of the three interpretable behaviors.We also discuss how specific instantiations of this framework correspond to prior works on environment design and identify exciting opportunities for future work.",We present an approach to redesign the environment such that uninterpretable agent behaviors are minimized or eliminated. 195,Learning to Infer,"Inference models, which replace an optimization-based inference procedure with a learned model, have been fundamental in advancing Bayesian deep learning, the most notable example being variational auto-encoders.In this paper, we propose iterative inference models, which learn how to optimize a variational lower bound through repeatedly encoding gradients.Our approach generalizes VAEs under certain conditions, and by viewing VAEs in the context of iterative inference, we provide further insight into several recent empirical findings.We demonstrate the inference optimization capabilities of iterative inference models, explore unique aspects of these models, and show that they outperform standard inference models on typical benchmark data sets.",We propose a new class of inference models that iteratively encode gradients to estimate approximate posterior distributions.
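To make the "Learning to Infer" entry above more concrete, here is a minimal PyTorch sketch of an iterative inference model that repeatedly encodes ELBO gradients into updates of the variational parameters; the network size, update rule, and elbo_fn interface are our own assumptions, not the paper's specification:

# Minimal sketch (our own illustration, with assumed shapes): an iterative inference
# model maps ELBO gradients to additive updates of the variational parameters
# (mu, logvar), instead of a single amortized encoder pass.
import torch
import torch.nn as nn

class IterativeInference(nn.Module):
    def __init__(self, latent_dim, hidden=128):
        super().__init__()
        self.update_net = nn.Sequential(nn.Linear(4 * latent_dim, hidden), nn.Tanh(),
                                        nn.Linear(hidden, 2 * latent_dim))

    def refine(self, mu, logvar, elbo_fn, steps=5):
        # elbo_fn(mu, logvar) is assumed to return a scalar ELBO for the current batch.
        for _ in range(steps):
            mu = mu.detach().requires_grad_(True)
            logvar = logvar.detach().requires_grad_(True)
            loss = -elbo_fn(mu, logvar)
            g_mu, g_lv = torch.autograd.grad(loss, (mu, logvar), create_graph=True)
            delta = self.update_net(torch.cat([mu, logvar, g_mu, g_lv], dim=-1))
            d_mu, d_lv = delta.chunk(2, dim=-1)
            mu, logvar = mu + d_mu, logvar + d_lv
        return mu, logvar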
196,Spike-based causal inference for weight alignment,"In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients.For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes.This produces the so-called ""weight transport problem"" for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli.This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm.However, such random weights do not appear to work well for large networks.Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem.The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design.We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights.As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST and CIFAR-10.Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.",We present a learning rule for feedback weights in a spiking neural network that addresses the weight transport problem. 197,Variational Diffusion Autoencoders with Random Walk Sampling,"Variational inference methods and especially variational autoencoders specify scalable generative models that enjoy an intuitive connection to manifold learning --- with many default priors the posterior/likelihood pair can be viewed as an approximate homeomorphism between the data manifold and a latent Euclidean space.However, these approximations are well-documented to become degenerate in training.Unless the subjective prior is carefully chosen, the topologies of the prior and data distributions often will not match.Conversely, diffusion maps automatically infer the data topology and enjoy a rigorous connection to manifold learning, but do not scale easily or provide the inverse homeomorphism.In this paper, we propose (a) a principled measure for recognizing the mismatch between data and latent distributions and (b) a method that combines the advantages of variational inference and diffusion maps to learn a homeomorphic generative model.The measure is a sufficient condition for a homeomorphism and is easy to compute and interpret.The method, the variational diffusion autoencoder (VDAE), is a novel generative algorithm that first infers the topology of the data distribution, then models a diffusion random walk over the data.To achieve efficient computation in VDAEs, we use stochastic versions of both variational inference and manifold learning optimization.We prove approximation theoretic results for the dimension dependence of VDAEs, and that locally isotropic sampling in the latent space results in a random walk over the reconstructed manifold.Finally, we demonstrate the utility of our method on various real and synthetic datasets, and show that it exhibits performance superior to other generative models.",We combine variational inference and manifold learning (specifically VAEs and diffusion
maps) to build a generative model based on a diffusion random walk on a data manifold; we generate samples by drawing from the walk's stationary distribution. 198,Gradient Surgery for Multi-Task Learning,"While deep learning and deep reinforcement learning systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge, particularly as these algorithms learn individual tasks from scratch.Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning.However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently.The reasons why multi-task learning is so challenging compared to single task learning are not fully understood.Motivated by the insight that gradient interference causes optimization challenges, we develop a simple and general approach for avoiding interference between gradients from different tasks, by altering the gradients through a technique we refer to as “gradient surgery”.We propose a form of gradient surgery that projects the gradient of a task onto the normal plane of the gradient of any other task that has a conflicting gradient.On a series of challenging multi-task supervised and multi-task reinforcement learning problems, we find that this approach leads to substantial gains in efficiency and performance. Further, it can be effectively combined with previously-proposed multi-task architectures for enhanced performance in a model-agnostic way.","We develop a simple and general approach for avoiding interference between gradients from different tasks, which improves the performance of multi-task learning in both the supervised and reinforcement learning domains." 199,Metropolis-Hastings view on variational inference and adversarial training,"In this paper we propose to view the acceptance rate of the Metropolis-Hastings algorithm as a universal objective for learning to sample from target distribution -- given either as a set of samples or in the form of unnormalized density.This point of view unifies the goals of such approaches as Markov Chain Monte Carlo, Generative Adversarial Networks, variational inference.To reveal the connection we derive the lower bound on the acceptance rate and treat it as the objective for learning explicit and implicit samplers.The form of the lower bound allows for doubly stochastic gradient optimization in case the target distribution factorizes.We empirically validate our approach on Bayesian inference for neural networks and generative models for images.",Learning to sample via lower bounding the acceptance rate of the Metropolis-Hastings algorithm 200,Learning Video Representations using Contrastive Bidirectional Transformer,"This paper proposes a self-supervised learning approach for video features that results in significantly improved performance on downstream tasks compared to existing methods.Our method extends the BERT model for text sequences to the case of sequences of real-valued feature vectors, by replacing the softmax loss with noise contrastive estimation.We also show how to learn representations from sequences of visual features and sequences of words derived from ASR, and show that such cross-modal training helps even more.",Generalized BERT for continuous and cross-modal inputs; state-of-the-art self-supervised video representations. 
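The projection step described in the "Gradient Surgery for Multi-Task Learning" entry above has a compact form. A minimal PyTorch sketch, assuming per-task gradients have already been flattened into vectors; this is our own illustration of the projection rule, not the authors' code:

# Minimal sketch (our own illustration): if two task gradients conflict (negative dot
# product), remove from one the component that lies along the other.
import torch

def gradient_surgery(task_grads):
    # task_grads: list of flattened per-task gradient vectors (1-D tensors).
    projected = [g.clone() for g in task_grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(task_grads):
            if i == j:
                continue
            dot = torch.dot(g_i, g_j)
            if dot < 0:  # conflicting gradient: project g_i onto g_j's normal plane
                g_i -= dot / g_j.norm().pow(2) * g_j
    return torch.stack(projected).sum(dim=0)  # merged update direction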
201,DDRprog: A CLEVR Differentiable Dynamic Reasoning Programmer,"We present a generic dynamic architecture that employs a problem specific differentiable forking mechanism to leverage discrete logical information about the problem data structure.We adapt and apply our model to CLEVR Visual Question Answering, giving rise to the DDRprog architecture; compared to previous approaches, our model achieves higher accuracy in half as many epochs with five times fewer learnable parameters.Our model directly models underlying question logic using a recurrent controller that jointly predicts and executes functional neural modules; it explicitly forks subprocesses to handle logical branching.While FiLM and other competitive models are static architectures with less supervision, we argue that inclusion of program labels enables learning of higher level logical operations -- our architecture achieves particularly high performance on questions requiring counting and integer comparison. We further demonstrate the generality of our approach though DDRstack -- an application of our method to reverse Polish notation expression evaluation in which the inclusion of a stack assumption allows our approach to generalize to long expressions, significantly outperforming an LSTM with ten times as many learnable parameters.",A generic dynamic architecture that employs a problem specific differentiable forking mechanism to encode hard data structure assumptions. Applied to CLEVR VQA and expression evaluation. 202,Support-guided Adversarial Imitation Learning,"We propose Support-guided Adversarial Imitation Learning, a generic imitation learning framework that unifies support estimation of the expert policy with the family of Adversarial Imitation Learning algorithms.SAIL addresses two important challenges of AIL, including the implicit reward bias and potential training instability.We also show that SAIL is at least as efficient as standard AIL.In an extensive evaluation, we demonstrate that the proposed method effectively handles the reward bias and achieves better performance and training stability than other baseline methods on a wide range of benchmark control tasks.","We unify support estimation with the family of Adversarial Imitation Learning algorithms into Support-guided Adversarial Imitation Learning, a more robust and stable imitation learning framework." 203,Meta-Graph: Few shot Link Prediction via Meta Learning,"We consider the task of few shot link prediction, where the goal is to predict missing edges across multiple graphs using only a small sample of known edges.We show that current link prediction methods are generally ill-equipped to handle this task---as they cannot effectively transfer knowledge between graphs in a multi-graph setting and are unable to effectively learn from very sparse data.To address this challenge, we introduce a new gradient-based meta learning framework, Meta-Graph, that leverages higher-order gradients along with a learned graph signature function that conditionally generates a graph neural network initialization.Using a novel set of few shot link prediction benchmarks, we show that Meta-Graph enables not only fast adaptation but also better final convergence and can effectively learn using only a small sample of true edges.",We apply gradient based meta-learning to the graph domain and introduce a new graph specific transfer function to further bootstrap the process. 
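The Meta-Graph entry above relies on gradient-based (MAML-style) meta-learning across graphs. Below is a heavily simplified sketch with a hypothetical link_loss and per-graph support/query edge splits, and with the learned graph-signature conditioning omitted; it is our own illustration, not the authors' code:

# Minimal sketch (our own illustration): one meta-update over a batch of graphs.
# `theta` is a list of leaf tensors with requires_grad=True; `link_loss(params, edges, g)`
# is a hypothetical helper returning a scalar link-prediction loss.
import torch

def meta_step(theta, graphs, link_loss, inner_lr=1e-2, meta_lr=1e-3, inner_steps=1):
    meta_grads = [torch.zeros_like(p) for p in theta]
    for g in graphs:                                   # each graph is a "task"
        params = [p.clone() for p in theta]
        for _ in range(inner_steps):                   # adapt on the few known edges
            loss = link_loss(params, g["support_edges"], g)
            grads = torch.autograd.grad(loss, params, create_graph=True)
            params = [p - inner_lr * dg for p, dg in zip(params, grads)]
        query_loss = link_loss(params, g["query_edges"], g)
        for acc, dg in zip(meta_grads, torch.autograd.grad(query_loss, theta)):
            acc += dg                                  # higher-order gradient w.r.t. theta
    return [p - meta_lr * dg / len(graphs) for p, dg in zip(theta, meta_grads)]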
204,Incremental training of multi-generative adversarial networks,"Generative neural networks map a standard input distribution to a complex high-dimensional distribution that represents the real-world data set.However, a fixed input distribution as well as a specific architecture of neural networks may impose limitations on capturing the diversity in the high-dimensional target space.To resolve this difficulty, we propose a training framework that greedily produces a series of generative adversarial networks that incrementally capture the diversity of the target space.We show theoretically and empirically that our training algorithm converges to the theoretically optimal distribution, the projection of the real distribution onto the convex hull of the network's distribution space.",We propose a new method to incrementally train a mixture generative model to approximate the information projection of the real data distribution. 205,Generative Models for Low-Dimensional Video Representation and Compressive Sensing,"Generative priors have become highly effective in solving inverse problems including denoising, inpainting, and reconstruction from few and noisy measurements.With a generative model we can represent an image with a much lower-dimensional latent code.In the context of compressive sensing, if the unknown image belongs to the range of a pretrained generative network, then we can recover the image by estimating the underlying compact latent code from the available measurements.However, recent studies revealed that even untrained deep neural networks can work as a prior for recovering natural images.These approaches update the network weights keeping latent codes fixed to reconstruct the target image from the given measurements.In this paper, we optimize over both network weights and latent codes to use an untrained generative network as a prior for the video compressive sensing problem.We show that by optimizing over the latent code, we can additionally get a concise representation of the frames which retains the structural similarity of the video frames.We also apply a low-rank constraint on the latent codes to represent the video sequences in an even lower-dimensional latent space.We empirically show that our proposed methods provide better or comparable accuracy and lower computational complexity compared to the existing methods.",Recover videos from compressive measurements by learning a low-dimensional (low-rank) representation directly from measurements while training a deep generator. 206,Lookahead: A Far-sighted Alternative of Magnitude-based Pruning,"Magnitude-based pruning is one of the simplest methods for pruning neural networks.Despite its simplicity, magnitude-based pruning and its variants have demonstrated remarkable performance for pruning modern architectures.Based on the observation that magnitude-based pruning indeed minimizes the Frobenius distortion of a linear operator corresponding to a single layer, we develop a simple pruning method, coined lookahead pruning, by extending the single-layer optimization to a multi-layer optimization.Our experimental results demonstrate that the proposed method consistently outperforms magnitude pruning on various networks including VGG and ResNet, particularly in the high-sparsity regime.",We study a multi-layer generalization of magnitude-based pruning.
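The lookahead-pruning entry above starts from the observation that magnitude pruning of a single layer minimizes the Frobenius distortion between the original and the masked weight matrix at a fixed sparsity. A minimal PyTorch sketch of that single-layer baseline (the multi-layer lookahead score itself is omitted; this is our own illustration):

# Minimal sketch (our own illustration): magnitude-based pruning of one layer, i.e. the
# mask that minimizes ||W - M * W||_F at a given sparsity level.
import torch

def magnitude_prune(weight, sparsity=0.9):
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()   # 1 = keep, 0 = prune

W = torch.randn(256, 512)
mask = magnitude_prune(W, sparsity=0.9)
W_pruned = W * mask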
207,Multi-objective training of Generative Adversarial Networks with multiple discriminators,"Recent literature has demonstrated promising results on the training of Generative Adversarial Networks by employing a set of discriminators, as opposed to the traditional game involving one generator against a single adversary.Those methods perform single-objective optimization on some simple consolidation of the losses, e.g. an average.In this work, we revisit the multiple-discriminator approach by framing the simultaneous minimization of losses provided by different models as a multi-objective optimization problem.Specifically, we evaluate the performance of multiple gradient descent and the hypervolume maximization algorithm on a number of different datasets.Moreover, we argue that the previously proposed methods and hypervolume maximization can all be seen as variations of multiple gradient descent in which the update direction computation can be done efficiently.Our results indicate that hypervolume maximization presents a better compromise among sample quality, diversity, and computational cost than previous methods.","We introduce hypervolume maximization for training GANs with multiple discriminators, showing performance improvements in terms of sample quality and diversity." 208,Lyceum: An efficient and scalable ecosystem for robot learning,"We introduce Lyceum, a high-performance computational ecosystem for robot learning. Lyceum is built on top of the Julia programming language and the MuJoCo physics simulator, combining the ease-of-use of a high-level programming language with the performance of native C. Lyceum is up to 10-20X faster compared to other popular abstractions like OpenAI’s Gym and DeepMind’s dm-control. This substantially reduces training time for various reinforcement learning algorithms and is also fast enough to support real-time model predictive control with physics simulators. Lyceum has a straightforward API and supports parallel computation across multiple cores or machines. The code base, tutorials, and demonstration videos can be found at: https://sites.google.com/view/lyceum-anon.",A high performance robotics simulation and algorithm development framework.
209,Universal Source-Free Domain Adaptation,"There is a strong incentive to develop versatile learning techniques that can transfer the knowledge of class-separability from a labeled source domain to an unlabeled target domain in the presence of a domain-shift.Existing domain adaptation approaches are not equipped for practical DA scenarios as a result of their reliance on the knowledge of source-target label-set relationship.Furthermore, almost all the prior unsupervised DA works require coexistence of source and target samples even during deployment, making them unsuitable for incremental, real-time adaptation.Devoid of such highly impractical assumptions, we propose a novel two-stage learning process.Initially, in the procurement-stage, the objective is to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift.To achieve this, we enhance the model’s ability to reject out-of-source distribution samples by leveraging the available source data, in a novel generative classifier framework.Subsequently, in the deployment-stage, the objective is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps, with no access to the previously seen source samples.To achieve this, in contrast to the usage of complex adversarial training regimes, we define a simple yet effective source-free adaptation objective by utilizing a novel instance-level weighing mechanism, named as Source Similarity Metric.A thorough evaluation shows the practical usability of the proposed learning framework with superior DA performance even over state-of-the-art source-dependent approaches.",A novel unsupervised domain adaptation paradigm - performing adaptation without accessing the source data ('source-free') and without any assumption about the source-target category-gap ('universal'). 
210,Learning to Multi-Task by Active Sampling,"One of the long-standing challenges in Artificial Intelligence for learning goal-directed behavior is to build a single agent which can solve multiple tasks.Recent progress in multi-task learning for goal-directed sequential problems has been in the form of distillation based learning wherein a student network learns from multiple task-specific expert networks by mimicking the task-specific policies of the expert networks.While such approaches offer a promising solution to the multi-task learning problem, they require supervision from large expert networks which require extensive data and computation time for training.In this work, we propose an efficient multi-task learning framework which solves multiple goal-directed tasks in an on-line setup without the need for expert supervision.Our work uses active learning principles to achieve multi-task learning by sampling the harder tasks more than the easier ones.We propose three distinct models under our active sampling framework.An adaptive method with extremely competitive multi-tasking performance.A UCB-based meta-learner which casts the problem of picking the next task to train on as a multi-armed bandit problem.A meta-learning method that casts the next-task picking problem as a full Reinforcement Learning problem and uses actor-critic methods for optimizing the multi-tasking performance directly.We demonstrate results in the Atari 2600 domain on seven multi-tasking instances: three 6-task instances, one 8-task instance, two 12-task instances and one 21-task instance.",Letting a meta-learner decide the task to train on for an agent in a multi-task setting improves multi-tasking ability substantially 211,ASGen: Answer-containing Sentence Generation to Pre-Train Question Generator for Scale-up Data in Question Answering,"Numerous machine reading comprehension datasets often involve manual annotation, requiring enormous human effort, and hence the size of the dataset remains significantly smaller than the size of the data available for unsupervised learning.Recently, researchers proposed a model for generating synthetic question-and-answer data from large corpora such as Wikipedia.This model is utilized to generate synthetic data for training an MRC model before fine-tuning it using the original MRC dataset.This technique shows better performance than other general pre-training techniques such as language modeling, because the characteristics of the generated data are similar to those of the downstream MRC data.However, it is difficult to have high-quality synthetic data comparable to human-annotated MRC datasets.To address this issue, we propose Answer-containing Sentence Generation, a novel pre-training method for generating synthetic data involving two advanced techniques, dynamically determining K answers and pre-training the question generator on the answer-containing sentence generation task.We evaluate the question generation capability of our method by comparing the BLEU score with existing methods and test our method by fine-tuning the MRC model on the downstream MRC data after training on synthetic data.Experimental results show that our approach outperforms existing generation methods and increases the performance of the state-of-the-art MRC models across a range of MRC datasets such as SQuAD-v1.1, SQuAD-v2.0, KorQuAD and QUASAR-T without any architectural modifications to the original MRC model.","We propose Answer-containing Sentence Generation (ASGen), a novel pre-training method 
for generating synthetic data for machine reading comprehension." 212,Fix-Net: pure fixed-point representation of deep neural networks,"Deep neural networks dominate current research in machine learning.Due to massive GPU parallelization, DNN training is no longer a bottleneck, and large models with many parameters and high computational effort lead common benchmark tables.In contrast, embedded devices have very limited capability.As a result, both model size and inference time must be significantly reduced if DNNs are to achieve suitable performance on embedded devices.We propose a soft quantization approach to train DNNs that can be evaluated using pure fixed-point arithmetic.By exploiting the bit-shift mechanism, we derive fixed-point quantization constraints for all important components, including batch normalization and ReLU.Compared to floating-point arithmetic, fixed-point calculations significantly reduce computational effort whereas low-bit representations immediately decrease memory costs.We evaluate our approach with different architectures on common benchmark data sets and compare with recent quantization approaches.We achieve new state-of-the-art performance using 4-bit fixed-point models with an error rate of 4.98% on CIFAR-10.",Soft quantization approach to learn pure fixed-point representations of deep neural networks 213,WSNet: Learning Compact and Efficient Networks with Weight Sampling,"We present a new approach and a novel architecture, termed WSNet, for learning compact and efficient deep neural networks.Existing approaches conventionally learn full model parameters independently and then compress them via processing such as model pruning or filter factorization.Alternatively, WSNet proposes learning model parameters by sampling from a compact set of learnable parameters, which naturally enforces weight sharing throughout the learning process.We demonstrate that such a novel weight sampling approach promotes both weight and computation sharing favorably.By employing this method, we can more efficiently learn much smaller networks with competitive performance compared to baseline networks with equal numbers of convolution filters.Specifically, we consider learning compact and efficient 1D convolutional neural networks for audio classification.Extensive experiments on multiple audio classification datasets verify the effectiveness of WSNet.Combined with weight quantization, the resulting models are substantially smaller and theoretically faster than the well-established baselines, without noticeable performance drop.",We present a novel network architecture for learning compact and efficient deep neural networks 214,Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking,"Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples for object detection models.However, in such a visual perception pipeline the detected objects must also be tracked, in a process called Multiple Object Tracking, to build the moving trajectories of surrounding obstacles.Since MOT is designed to be robust against errors in object detection, it poses a general challenge to existing attack techniques that blindly target object detection: we find that a success rate of over 98% is needed for them to actually affect the tracking results, a requirement that no existing attack technique can satisfy.In this paper, we are the first to study adversarial machine learning attacks against the
complete visual perception pipeline in autonomous driving, and discover a novel attack technique, tracker hijacking, that can effectively fool MOT using AEs on object detection.Using our technique, successful AEs on as few as a single frame can move an existing object into or out of the headway of an autonomous vehicle to cause potential safety hazards.We perform evaluation using the Berkeley Deep Drive dataset and find that on average when 3 frames are attacked, our attack can have a nearly 100% success rate while attacks that blindly target object detection only have up to 25%.",We study adversarial machine learning attacks against Multiple Object Tracking mechanisms for the first time. 215,CRAP: Semi-supervised Learning via Conditional Rotation Angle Prediction,"Self-supervised learning, aiming at learning feature representations through ingeniously designed pretext tasks without human annotation, has achieved compelling progress in the past few years.Very recently, SlfSL has also been identified as a promising solution for semi-supervised learning since it offers a new paradigm to utilize unlabeled data.This work further explores this direction by proposing a new framework to seamlessly couple SlfSL with SemSL.Our insight is that the prediction target in SemSL can be modeled as the latent factor in the predictor for the SlfSL target.Marginalizing over the latent factor naturally derives a new formulation which marries the prediction targets of these two learning processes.By implementing this framework through a simple-but-effective SlfSL approach -- rotation angle prediction, we create a new SemSL approach called Conditional Rotation Angle Prediction.Specifically, CRAP adopts a module which predicts the image rotation angle conditioned on the predicted class.Through experimental evaluation, we show that CRAP achieves superior performance over other existing ways of combining SlfSL and SemSL.Moreover, the proposed SemSL framework is highly extendable.By augmenting CRAP with a simple SemSL technique and a modification of the rotation angle prediction task, our method has already achieved the state-of-the-art SemSL performance.",Coupling semi-supervised learning with self-supervised learning and explicitly modeling the self-supervised task conditioned on the semi-supervised one 216,Seq2Slate: Re-ranking and Slate Optimization with RNNs,"Ranking is a central task in machine learning and information retrieval.In this task, it is especially important to present the user with a slate of items that is appealing as a whole.This in turn requires taking into account interactions between items, since intuitively, placing an item on the slate affects the decision of which other items should be chosen alongside it.In this work, we propose a sequence-to-sequence model for ranking called seq2slate.At each step, the model predicts the next item to place on the slate given the items already chosen.The recurrent nature of the model allows complex dependencies between items to be captured directly in a flexible and scalable way.We show how to learn the model end-to-end from weak supervision in the form of easily obtained click-through data.We further demonstrate the usefulness of our approach in experiments on standard ranking benchmarks as well as in a real-world recommendation system.","A pointer network architecture for re-ranking items, learned from click-through logs."
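The sequential decoding idea in the seq2slate entry above can be sketched in a few lines: at each step the remaining items are scored conditioned on the items already placed, and previously chosen items are masked out. A minimal greedy-decoding illustration with a hypothetical score_fn; this is our own sketch, not the authors' pointer-network model:

# Minimal sketch (our own illustration): greedy slate decoding with a learned scorer.
import torch

def greedy_slate(item_embeddings, score_fn, slate_size):
    # item_embeddings: (N, d); score_fn(chosen_items, all_items) -> (N,) scores.
    n = item_embeddings.size(0)
    chosen, mask = [], torch.zeros(n, dtype=torch.bool)
    for _ in range(min(slate_size, n)):
        scores = score_fn(item_embeddings[chosen], item_embeddings)
        scores = scores.masked_fill(mask, float("-inf"))   # never re-pick an item
        nxt = int(scores.argmax())
        chosen.append(nxt)
        mask[nxt] = True
    return chosen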
217,Model-based imitation learning from state trajectories,"Imitation learning from demonstrations usually relies on learning a policy from trajectories of optimal states and actions.However, in real life expert demonstrations, often the action information is missing and only state trajectories are available.We present a model-based imitation learning method that can learn environment-specific optimal actions only from expert state trajectories.Our proposed method starts with a model-free reinforcement learning algorithm with a heuristic reward signal to sample environment dynamics, which is then used to train the state-transition probability.Subsequently, we learn the optimal actions from expert state trajectories by supervised learning, while back-propagating the error gradients through the modeled environment dynamics.Experimental evaluations show that our proposed method successfully achieves performance similar to trajectory-based traditional imitation learning methods even in the absence of action information, with much fewer iterations compared to conventional model-free reinforcement learning methods.We also demonstrate that our method can learn to act from only video demonstrations of expert agent for simple games and can learn to achieve desired performance in less number of iterations.",Learning to imitate an expert in the absence of optimal actions learning a dynamics model while exploring the environment. 218,Boosting Ticket: Towards Practical Pruning for Adversarial Training with Lottery Ticket Hypothesis,"Recent research has proposed the lottery ticket hypothesis, suggesting that for a deep neural network, there exist trainable sub-networks performing equally or better than the original model with commensurate training steps.While this discovery is insightful, finding proper sub-networks requires iterative training and pruning.The high cost incurred limits the applications of the lottery ticket hypothesis.We show there exists a subset of the aforementioned sub-networks that converge significantly faster during the training process and thus can mitigate the cost issue.We conduct extensive experiments to show such sub-networks consistently exist across various model structures for a restrictive setting of hyperparameters. As a practical application of our findings, we demonstrate that such sub-networks can help in cutting down the total time of adversarial training, a standard approach to improve robustness, by up to 49% on CIFAR-10 to achieve the state-of-the-art robustness.",We show the possibility of pruning to find a small sub-network with significantly higher convergence rate than the full model. 
219,Variational Inference of Disentangled Latent Concepts from Unlabeled Observations,"Disentangled representations, where the higher level data generative factors are reflected in disjoint latent dimensions, offer several benefits such as ease of deriving invariant representations, transferability to other tasks, interpretability, etc.We consider the problem of unsupervised learning of disentangled representations from a large pool of unlabeled observations, and propose a variational inference based approach to infer disentangled latent factors.We introduce a regularizer on the expectation of the approximate posterior over observed data that encourages the disentanglement.We also propose a new disentanglement metric which is better aligned with the qualitative disentanglement observed in the decoder's output.We empirically observe significant improvement over existing methods in terms of both disentanglement and data likelihood.",We propose a variational inference based approach for encouraging the inference of disentangled latents. We also propose a new metric for quantifying disentanglement. 220,Action Semantics Network: Considering the Effects of Actions in Multiagent Systems,"In multiagent systems, each agent makes individual decisions but all of them contribute globally to the system evolution.Learning in MASs is difficult since each agent's selection of actions must take place in the presence of other co-learning agents.Moreover, the environmental stochasticity and uncertainties increase exponentially with the increase in the number of agents.Previous works incorporate various multiagent coordination mechanisms into deep learning architectures to facilitate multiagent coordination.However, none of them explicitly considers the action semantics between agents, i.e., that different actions have different influences on other agents.In this paper, we propose a novel network architecture, named Action Semantics Network, that explicitly represents such action semantics between agents.ASN characterizes different actions' influence on other agents using neural networks based on the action semantics between them.ASN can be easily combined with existing deep reinforcement learning algorithms to boost their performance.Experimental results on StarCraft II micromanagement and Neural MMO show ASN significantly improves the performance of state-of-the-art DRL approaches compared with several network architectures.",Our proposed ASN characterizes different actions' influence on other agents using neural networks based on the action semantics between them.
221,Gating out sensory noise in a spike-based Long Short-Term Memory network,"Spiking neural networks are being investigated both as biologically plausible models of neural computation and also as a potentially more efficient type of neural network.While convolutional spiking neural networks have been demonstrated to achieve near state-of-the-art performance, only one solution has been proposed to convert gated recurrent neural networks, so far.Recurrent neural networks in the form of networks of gating memory cells have been central in state-of-the-art solutions in problem domains that involve sequence recognition or generation.Here, we design an analog gated LSTM cell where its neurons can be substituted for efficient stochastic spiking neurons.These adaptive spiking neurons implement an adaptive form of sigma-delta coding to convert internally computed analog activation values to spike-trains.For such neurons, we approximate the effective activation function, which resembles a sigmoid.We show how analog neurons with such activation functions can be used to create an analog LSTM cell; networks of these cells can then be trained with standard backpropagation.We train these LSTM networks on a noisy and noiseless version of the original sequence prediction task from Hochreiter & Schmidhuber, and also on a noisy and noiseless version of a classical working memory reinforcement learning task, the T-Maze.Substituting the analog neurons for corresponding adaptive spiking neurons, we then show that almost all resulting spiking neural network equivalents correctly compute the original tasks.", We demonstrate a gated recurrent asynchronous spiking neural network that corresponds to an LSTM unit. 222,VideoEpitoma: Efficient Recognition of Long-range Actions,"CNNs are widely successful in recognizing human actions in videos, albeit with a great cost of computation.This cost is significantly higher in the case of long-range actions, where a video can span up to a few minutes, on average.The goal of this paper is to reduce the computational cost of these CNNs, without sacrificing their performance.We propose VideoEpitoma, a neural network architecture comprising two modules: a timestamp selector and a video classifier.Given a long-range video of thousands of timesteps, the selector learns to choose only a few but most representative timesteps for the video.This selector resides on top of a lightweight CNN such as MobileNet and uses a novel gating module to take a binary decision: consider or discard a video timestep.This decision is conditioned on both the timestep-level feature and the video-level consensus.A heavyweight CNN model such as I3D takes the selected frames as input and performs video classification.Using off-the-shelf video classifiers, VideoEpitoma reduces the computation by up to 50% without compromising the accuracy.In addition, we show that if trained end-to-end, the selector learns to make better choices to the benefit of the classifier, despite the selector and the classifier residing on two different CNNs.Finally, we report state-of-the-art results on two datasets for long-range action recognition: Charades and Breakfast Actions, with much-reduced computation.In particular, we match the accuracy of I3D by using less than half of the computation.","Efficient video classification using frame-based conditional gating module for selecting most-dominant frames, followed by temporal modeling and classifier." 
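The timestamp selector in the VideoEpitoma entry above takes a binary keep/discard decision per timestep, conditioned on both the timestep feature and a video-level consensus. Below is a minimal differentiable-gating sketch under our own assumptions (mean-pooled consensus and a Gumbel-softmax relaxation); it is not the authors' implementation:

# Minimal sketch (our own illustration): per-timestep binary gating for frame selection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimestepSelector(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.gate = nn.Linear(2 * feat_dim, 2)   # logits for [discard, keep]

    def forward(self, feats, tau=1.0):
        # feats: (T, d) light-weight per-timestep features.
        consensus = feats.mean(dim=0, keepdim=True).expand_as(feats)   # video-level context
        logits = self.gate(torch.cat([feats, consensus], dim=-1))      # (T, 2)
        keep = F.gumbel_softmax(logits, tau=tau, hard=True)[:, 1]      # (T,) in {0, 1}
        return keep   # mask the heavy classifier's inputs with this, or index the kept frames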
223,Differentiable Perturb-and-Parse: Semi-Supervised Parsing with a Structured Variational Autoencoder,"Human annotation for syntactic parsing is expensive, and large resources are available only for a fraction of languages.A question we ask is whether one can leverage abundant unlabeled texts to improve syntactic parsers, beyond just using the texts to obtain more generalisable lexical features.To this end, we propose a novel latent-variable generative model for semi-supervised syntactic dependency parsing.As exact inference is intractable, we introduce a differentiable relaxation to obtain approximate samples and compute gradients with respect to the parser parameters.Our method relies on differentiable dynamic programming over stochastically perturbed edge scores.We demonstrate effectiveness of our approach with experiments on English, French and Swedish.",Differentiable dynamic programming over perturbed input weights with application to semi-supervised VAE 224,Learning how to explain neural networks: PatternNet and PatternAttribution,"DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks.We show that these methods do not produce the theoretically correct explanation for a linear model.Yet they are used on multi-layer networks with millions of parameters.This is a cause for concern since linear models are simple neural networks.We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models.Based on our analysis of linear models we propose a generalization that yields two explanation techniques that are theoretically sound for linear models and produce improved explanations for deep networks.","Without learning, it is impossible to explain a machine learning model's decisions." 225,"GRAPHS, ENTITIES, AND STEP MIXTURE","Graph neural networks have shown promising results on representing and analyzing diverse graph-structured data such as social, citation, and protein interaction networks.Existing approaches commonly suffer from the oversmoothing issue, regardless of whether policies are edge-based or node-based for neighborhood aggregation.Most methods also focus on transductive scenarios for fixed graphs, leading to poor generalization performance for unseen graphs.To address these issues, we propose a new graph neural network model that considers both edge-based neighborhood relationships and node-based entity features, i.e. 
Graph Entities with Step Mixture via random walk.GESM employs a mixture of various random-walk steps to alleviate the oversmoothing problem, and attention to use node information explicitly.These two mechanisms allow for a weighted neighborhood aggregation which considers the properties of entities and relations.With intensive experiments, we show that the proposed GESM achieves state-of-the-art or comparable performances on four benchmark graph datasets comprising transductive and inductive learning tasks.Furthermore, we empirically demonstrate the significance of considering global information.The source code will be publicly available in the near future.",Simple and effective graph neural network with mixture of random walk steps and attention 226,Unsupervised Deep Basis Pursuit: Learning inverse problems without ground-truth data,"Basis pursuit is a compressed sensing optimization in which the l1-norm is minimized subject to model error constraints.Here we use a deep neural network prior instead of l1-regularization.Using known noise statistics, we jointly learn the prior and reconstruct images without access to ground-truth data.During training, we use alternating minimization across an unrolled iterative network and jointly solve for the neural network weights and training set image reconstructions.At inference, we fix the weights and pass the measurements through the network.We compare reconstruction performance between unsupervised and supervised methods.We hypothesize that this technique could be used to learn reconstruction when ground-truth data are unavailable, such as in high-resolution dynamic MRI.",We present an unsupervised deep learning reconstruction for imaging inverse problems that combines neural networks with model-based constraints. 227,Multi-Class Few Shot Learning Task and Controllable Environment,"Deep learning approaches usually require a large amount of labeled data to generalize.However, humans can learn a new concept from only a few samples.One of the higher human cognitive capabilities is to learn several concepts at the same time.In this paper, we address the task of classifying multiple objects by seeing only a few samples from each category.To the best of the authors' knowledge, there is no dataset specially designed for few-shot multiclass classification.We design a task of multi-object few-shot classification and an environment for easily creating controllable datasets for this task.We demonstrate that the proposed dataset is sound using a method which is an extension of prototypical networks.",We introduce a diagnostic task which is a variation of few-shot learning and introduce a dataset for it.
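Entry 225 above aggregates node features over a mixture of random-walk steps to soften oversmoothing. The sketch below illustrates that idea under simplifying assumptions (a dense adjacency matrix, a uniform mixture over steps, and no attention weighting, which the paper adds on top); it is not the authors' implementation.

```python
import torch

def step_mixture(adj: torch.Tensor, features: torch.Tensor, num_steps: int = 4):
    """Sketch: aggregate node features over a mixture of random-walk steps.
    adj: dense (N, N) adjacency with self-loops; features: (N, D)."""
    # Row-normalize the adjacency so each row is a random-walk transition.
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    walk = adj / deg
    mixed, h = [], features
    for _ in range(num_steps):
        h = walk @ h          # one more random-walk step
        mixed.append(h)
    # Uniform mixture over steps; attention over nodes is omitted in this sketch.
    return torch.stack(mixed, dim=0).mean(dim=0)
```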
228,Encoder-decoder Network as Loss Function for Summarization,"We present a new approach to defining a sequence loss function to train a summarizer by using a secondary encoder-decoder as a loss function, alleviating a shortcoming of word level training for sequence outputs.The technique is based on the intuition that if a summary is a good one, it should contain the most essential information from the original article, and therefore should itself be a good input sequence, in lieu of the original, from which a summary can be generated.We present experimental results where we apply this additional loss function to a general abstractive summarizer on a news summarization dataset.The result is an improvement in the ROUGE metric and an especially large improvement in human evaluations, suggesting enhanced performance that is competitive with specialized state-of-the-art models.",We present the use of a secondary encoder-decoder as a loss function to help train a summarizer. 229,Unsupervised Video-to-Video Translation via Self-Supervised Learning,"Existing unsupervised video-to-video translation methods fail to produce translated videos which are frame-wise realistic, semantic information preserving and video-level consistent.In this work, we propose a novel unsupervised video-to-video translation model.Our model decomposes the style and the content, uses specialized encoder-decoder structure and propagates the inter-frame information through bidirectional recurrent neural network units.The style-content decomposition mechanism enables us to achieve long-term style-consistent video translation results as well as provides us with a good interface for modality flexible translation.In addition, by changing the input frames and style codes incorporated in our translation, we propose a video interpolation loss, which captures temporal information within the sequence to train our building blocks in a self-supervised manner.Our model can produce photo-realistic, spatio-temporal consistent translated videos in a multimodal way.Subjective and objective experimental results validate the superiority of our model over the existing methods.",A temporally consistent and modality flexible unsupervised video-to-video translation framework trained in a self-supervised manner. 230,Towards an argumentation-based approach to explainable planning,"Providing transparency of AI planning systems is crucial for their success in practical applications.In order to create a transparent system, a user must be able to query it for explanations about its outputs.We argue that a key underlying principle for this is the use of causality within a planning model, and that argumentation frameworks provide an intuitive representation of such causality.In this paper, we discuss how argumentation can aid in extracting causalities in plans and models, and how they can create explanations from them.",Argumentation frameworks are used to represent causality of plans/models to be utilized for explanations. 
231,Tranquil Clouds: Neural Networks for Learning Temporally Coherent Features in Point Clouds,"Point clouds, as a form of Lagrangian representation, allow for powerful and flexible applications in a large number of computational disciplines.We propose a novel deep-learning method to learn stable and temporally coherent feature spaces for point clouds that change over time.We identify a set of inherent problems with existing approaches: without knowledge of the time dimension, the inferred solutions can exhibit strong flickering, and easy solutions to suppress this flickering can result in undesirable local minima that manifest themselves as halo structures.We propose a novel temporal loss function that takes into account higher time derivatives of the point positions, and encourages mingling, i.e., it prevents the aforementioned halos.We combine these techniques in a super-resolution method with a truncation approach to flexibly adapt the size of the generated positions.We show that our method works for large, deforming point sets from different sources to demonstrate the flexibility of our approach.",We propose a generative neural network approach for temporally coherent point clouds. 232,Prior Convictions: Black-box Adversarial Attacks with Bandits and Priors,"We study the problem of generating adversarial examples in a black-box setting in which only loss-oracle access to a model is available.We introduce a framework that conceptually unifies much of the existing work on black-box attacks, and demonstrate that the current state-of-the-art methods are optimal in a natural sense.Despite this optimality, we show how to improve black-box attacks by bringing a new element into the problem: gradient priors.We give a bandit optimization-based algorithm that allows us to seamlessly integrate any such priors, and we explicitly identify and incorporate two examples.The resulting methods use two to four times fewer queries and fail two to five times less often than the current state-of-the-art.The code for reproducing our work is available at https://git.io/fAjOJ.","We present a unifying view on black-box adversarial attacks as a gradient estimation problem, and then present a framework (based on bandits optimization) to integrate priors into gradient estimation, leading to significantly increased performance."
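Entry 232 above frames black-box attacks as gradient estimation with a prior maintained by a bandit algorithm. The following sketch shows one simplified step of that idea: two antithetic loss-oracle queries refine a running gradient prior. The step sizes, normalizations, and update rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def bandit_grad_step(x, prior, loss_fn, fd_eta=0.1, exploration=0.01, prior_lr=0.1):
    """Sketch of one bandit-style black-box gradient-estimation step with only
    loss-oracle access. `prior` is the running gradient prior that the two
    antithetic queries refine; hyperparameters are illustrative."""
    u = np.random.randn(*x.shape)             # random exploration direction
    q1 = prior + exploration * u              # perturb the prior up
    q2 = prior - exploration * u              # perturb the prior down
    # Two finite-difference queries of the loss oracle.
    g1 = loss_fn(x + fd_eta * q1 / (np.linalg.norm(q1) + 1e-12))
    g2 = loss_fn(x + fd_eta * q2 / (np.linalg.norm(q2) + 1e-12))
    # Antithetic estimate of how the loss changes along u, used to update the prior.
    est = (g1 - g2) / (exploration * fd_eta)
    return prior + prior_lr * est * u         # ascend the prior to increase the loss
```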
233,Storyboarding of Recipes: Grounded Contextual Generation,"The information needs of humans are essentially multimodal in nature, enabling maximum exploitation of situated context.We introduce a dataset for sequential procedural text generation from images in the cooking domain.The dataset consists of 16,441 cooking recipes with 160,479 photos associated with different steps.We set up a baseline motivated by the best performing model in terms of human evaluation for the Visual Story Telling task.In addition, we introduce two models to incorporate high-level structure learnt by a Finite State Machine into the neural sequential generation process: Scaffolding Structure in the Decoder and Scaffolding Structure in the Loss.These models show an improvement in empirical as well as human evaluation.Our best performing model achieves a METEOR score of 0.31, which is an improvement of 0.6 over the baseline model.We also conducted a human evaluation of the generated grounded recipes, which reveals that 61% found that our proposed model is better than the baseline model in terms of overall recipes, and 72.5% preferred our model in terms of coherence and structure.We also discuss an analysis of the output highlighting key NLP issues as prospective directions.",The paper presents two techniques to incorporate high level structure in generating procedural text from a sequence of images. 234,Gradient Estimators for Implicit Models,"Implicit models, which allow for the generation of samples but not for point-wise evaluation of probabilities, are omnipresent in real-world problems tackled by machine learning and a hot topic of current research.Some examples include data simulators that are widely used in engineering and scientific research, generative adversarial networks for image synthesis, and hot-off-the-press approximate inference techniques relying on implicit distributions.The majority of existing approaches to learning implicit models rely on approximating the intractable distribution or optimisation objective for gradient-based optimisation, which is liable to produce inaccurate updates and thus poor models.This paper alleviates the need for such approximations by proposing the Stein gradient estimator, which directly estimates the score function of the implicitly defined distribution.The efficacy of the proposed estimator is empirically demonstrated by examples that include meta-learning for approximate inference and entropy regularised GANs that provide improved sample diversity.","We introduced a novel gradient estimator using Stein's method, and compared with other methods on learning implicit models for approximate inference and image generation."
235,Data Augmentation in Training CNNs: Injecting Noise to Images,"Noise injection is a fundamental tool for data augmentation, and yet there is no widely accepted procedure to incorporate it into learning frameworks.This study analyzes the effects of adding or applying different noise models of varying magnitudes to Convolutional Neural Network architectures.Noise models that are distributed with different density functions are given common magnitude levels via the Structural Similarity metric in order to create an appropriate ground for comparison.The basic results conform with most of the common notions in machine learning, and the study also introduces some novel heuristics and recommendations on noise injection.The new approaches will provide a better understanding of optimal learning procedures for image classification.",Ideal methodology to inject noise into input data during CNN training 236,Distribution Matching Prototypical Network for Unsupervised Domain Adaptation,"State-of-the-art Unsupervised Domain Adaptation methods learn transferable features by minimizing the feature distribution discrepancy between the source and target domains.Different from these methods which do not model the feature distributions explicitly, in this paper, we explore explicit feature distribution modeling for UDA.In particular, we propose Distribution Matching Prototypical Network to model the deep features from each domain as Gaussian mixture distributions.With explicit feature distribution modeling, we can easily measure the discrepancy between the two domains.In DMPN, we propose two new domain discrepancy losses with probabilistic interpretations.The first one minimizes the distances between the corresponding Gaussian component means of the source and target data.The second one minimizes the pseudo negative log likelihood of generating the target features from the source feature distribution.To learn both discriminative and domain invariant features, DMPN is trained by minimizing the classification loss on the labeled source data and the domain discrepancy losses together.Extensive experiments are conducted over two UDA tasks.Our approach outperforms state-of-the-art approaches by a large margin on the Digits Image transfer task.More remarkably, DMPN obtains a mean accuracy of 81.4% on the VisDA 2017 dataset.The hyper-parameter sensitivity analysis shows that our approach is robust w.r.t. hyper-parameter changes.",We propose to explicitly model deep feature distributions of source and target data as Gaussian mixture distributions for Unsupervised Domain Adaptation (UDA) and achieve superior results to state-of-the-art methods on multiple UDA tasks. 237,Learning World Graph Decompositions To Accelerate Reinforcement Learning,"Efficiently learning to solve tasks in complex environments is a key challenge for reinforcement learning agents. We propose to decompose a complex environment using a task-agnostic world graph, an abstraction that accelerates learning by enabling agents to focus exploration on a subspace of the environment.The nodes of a world graph are important waypoint states and edges represent feasible traversals between them.
Our framework has two learning phases: 1) identifying world graph nodes and edges by training a binary recurrent variational auto-encoder on trajectory data, and 2) a hierarchical RL framework that leverages structural and connectivity knowledge from the learned world graph to bias exploration towards task-relevant waypoints and regions.We show that our approach significantly accelerates RL on a suite of challenging 2D grid world tasks: compared to baselines, world graph integration doubles achieved rewards on simpler tasks, e.g. MultiGoal, and manages to solve more challenging tasks, e.g. Door-Key, where baselines fail.",We learn a task-agnostic world graph abstraction of the environment and show how using it for structured exploration can significantly accelerate downstream task-specific RL. 238,Learning to Represent Programs with Property Signatures,"We introduce the notion of property signatures, a representation for programs and program specifications meant for consumption by machine learning algorithms.Given a function with input type τ_in and output type τ_out, a property is a function of type (τ_in, τ_out) → Bool that describes some simple property of the function under consideration.For instance, if τ_in and τ_out are both lists of the same type, one property might ask ‘is the input list the same length as the output list?’.If we have a list of such properties, we can evaluate them all for our function to get a list of outputs that we will call the property signature.Crucially, we can ‘guess’ the property signature for a function given only a set of input/output pairs meant to specify that function.We discuss several potential applications of property signatures and show experimentally that they can be used to improve over a baseline synthesizer so that it emits twice as many programs in less than one-tenth of the time.",We represent a computer program using a set of simpler programs and use this representation to improve program synthesis techniques. 239,Maintaining cooperation in complex social dilemmas using deep reinforcement learning,"Social dilemmas are situations where individuals face a temptation to increase their payoffs at a cost to total welfare.Building artificially intelligent agents that achieve good outcomes in these situations is important because many real world interactions include a tension between selfish interests and the welfare of others.We show how to modify modern reinforcement learning methods to construct agents that act in ways that are simple to understand, nice, provokable, and forgiving.We show both theoretically and experimentally that such agents can maintain cooperation in Markov social dilemmas.Our construction does not require training methods beyond a modification of self-play, thus if an environment is such that good strategies can be constructed in the zero-sum case then we can construct agents that solve social dilemmas in this environment.",How can we build artificial agents that solve social dilemmas (situations where individuals face a temptation to increase their payoffs at a cost to total welfare)?
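Entry 238 above defines a property signature as the vector of verdicts obtained by evaluating simple boolean properties on the input/output examples that specify a function. A minimal sketch follows, with a hypothetical three-property list and an AllTrue/AllFalse/Mixed summary per property; it illustrates the idea only, not the paper's synthesizer.

```python
from typing import Callable, List, Tuple

# Each property maps an (input, output) example to a bool; these three are illustrative.
PROPERTIES: List[Callable[[list, list], bool]] = [
    lambda i, o: len(i) == len(o),          # "is the input list the same length as the output list?"
    lambda i, o: all(x in i for x in o),    # "are output elements drawn from the input?"
    lambda i, o: o == sorted(o),            # "is the output sorted?"
]

def property_signature(examples: List[Tuple[list, list]]) -> List[str]:
    """Summarise a function, specified only by I/O pairs, as the per-property
    verdict over all examples (AllTrue / AllFalse / Mixed)."""
    signature = []
    for prop in PROPERTIES:
        verdicts = [prop(i, o) for i, o in examples]
        if all(verdicts):
            signature.append("AllTrue")
        elif not any(verdicts):
            signature.append("AllFalse")
        else:
            signature.append("Mixed")
    return signature

# Example: a "reverse the list" function specified by two I/O pairs.
print(property_signature([([1, 2, 3], [3, 2, 1]), ([4, 5], [5, 4])]))
# -> ['AllTrue', 'AllTrue', 'AllFalse']
```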
240,Detecting Anomalies in Communication Packet Streams based on Generative Adversarial Networks,"Fault diagnosis in a modern communication system is traditionally considered difficult, or even impractical, for a purely data-driven machine learning approach, because such a system is human-made and knowledge-intensive.A few labeled raw packet streams extracted from a fault archive can hardly be sufficient to deduce the intricate logic of the underlying protocols.In this paper, we supplement these limited samples with two inexhaustible data sources: the unlabeled records probed from a system in service, and the labeled data simulated in an emulation environment.To transfer their inherent knowledge to the target domain, we construct a directed information flow graph, whose nodes are neural network components consisting of two generators, three discriminators and one classifier, and whose every forward path represents a pair of adversarial optimization goals, in accord with the semi-supervised and transfer learning demands.The multi-headed network can be trained in an alternating fashion, at each iteration of which we select one target to update the weights along the path upstream, and refresh the remaining layers, layer-wise, for all outputs downstream.The results show that it can achieve comparable accuracy on classifying Transmission Control Protocol streams without deliberate expert features.The solution relieves operation engineers from the massive work of understanding and maintaining rules, and provides a quick solution independent of specific protocols.","semi-supervised and transfer learning on packet flow classification, via a system of cooperative or adversarial neural blocks" 241,Kronecker Recurrent Units,"Our work addresses two important issues with recurrent neural networks: they are over-parameterized, and the recurrent weight matrix is ill-conditioned.The former increases the sample complexity of learning and the training time.The latter causes the vanishing and exploding gradient problem.We present a flexible recurrent neural network model called Kronecker Recurrent Units.KRU achieves parameter efficiency in RNNs through a Kronecker factored recurrent matrix.It overcomes the ill-conditioning of the recurrent matrix by enforcing soft unitary constraints on the factors.Thanks to the small dimensionality of the factors, maintaining these constraints is computationally efficient.Our experimental results on seven standard data-sets reveal that KRU can reduce the number of parameters by three orders of magnitude in the recurrent weight matrix compared to existing recurrent models, without sacrificing statistical performance.These results in particular show that while there are advantages in having a high dimensional recurrent space, the capacity of the recurrent part of the model can be dramatically reduced.",Our work presents a Kronecker factorization of recurrent weight matrices for parameter-efficient and well-conditioned recurrent neural networks.
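Entry 241 above obtains parameter efficiency by expressing the recurrent weight matrix as a Kronecker product of small factors. The sketch below builds such a matrix with torch.kron to make the parameter count concrete; it materialises the full product for clarity, whereas an efficient implementation (and the soft unitary constraints on the factors) would be handled differently.

```python
import torch

def kron_recurrent_matrix(factors):
    """Sketch: build a recurrent weight matrix as the Kronecker product of
    small factors, so a 64x64 matrix comes from three 4x4 factors, i.e.
    48 trainable parameters instead of 64 * 64 = 4096."""
    w = factors[0]
    for f in factors[1:]:
        w = torch.kron(w, f)
    return w

factors = [torch.randn(4, 4, requires_grad=True) for _ in range(3)]
W_rec = kron_recurrent_matrix(factors)      # (64, 64) recurrent matrix
h = torch.tanh(W_rec @ torch.randn(64))     # one recurrent step (nonlinearity is illustrative)
print(W_rec.shape, h.shape)
```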
242,Adversarially Robust Representations with Smooth Encoders,"This paper studies the undesired phenomena of over-sensitivity of representations learned by deep networks to semantically-irrelevant changes in data.We identify a cause for this shortcoming in the classical Variational Auto-encoder objective, the evidence lower bound.We show that the ELBO fails to control the behaviour of the encoder out of the support of the empirical data distribution and this behaviour of the VAE can lead to extreme errors in the learned representation.This is a key hurdle in the effective use of representations for data-efficient learning and transfer.To address this problem, we propose to augment the data with specifications that enforce insensitivity of the representation with respect to families of transformations.To incorporate these specifications, we propose a regularization method that is based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point.For certain choices of parameters, our formulation naturally leads to the minimization of the entropy regularized Wasserstein distance between representations.We illustrate our approach on standard datasets and experimentally show that significant improvements in the downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner, without a reference to a particular downstream task and without a costly supervised adversarial training procedure.",We propose a method for computing adversarially robust representations in an entirely unsupervised way. 243,Gradient-Based Neural DAG Learning,"We propose a novel score-based approach to learning a directed acyclic graph from observational data.We adapt a recently proposed continuous constrained optimization formulation to allow for nonlinear relationships between variables using neural networks.This extension allows to model complex interactions while being more global in its search compared to other greedy approaches.In addition to comparing our method to existing continuous optimization methods, we provide missing empirical comparisons to nonlinear greedy search methods.On both synthetic and real-world data sets, this new method outperforms current continuous methods on most tasks while being competitive with existing greedy search methods on important metrics for causal inference.",We are proposing a new score-based approach to structure/causal learning leveraging neural networks and a recent continuous constrained formulation to this problem 244,Optimal Attacks against Multiple Classifiers,"We study the problem of designing provably optimal adversarial noise algorithms that induce misclassification in settings where a learner aggregates decisions from multiple classifiers.Given the demonstrated vulnerability of state-of-the-art models to adversarial examples, recent efforts within the field of robust machine learning have focused on the use of ensemble classifiers as a way of boosting the robustness of individual models.In this paper, we design provably optimal attacks against a set of classifiers.We demonstrate how this problem can be framed as finding strategies at equilibrium in a two player, zero sum game between a learner and an adversary and consequently illustrate the need for randomization in adversarial attacks.The main technical challenge we consider is the design of best response oracles that can be implemented in a Multiplicative Weight Updates framework to find equilibrium strategies 
in the zero-sum game.We develop a series of scalable noise generation algorithms for deep neural networks, and show that it outperforms state-of-the-art attacks on various image classification tasks.Although there are generally no guarantees for deep learning, we show this is a well-principled approach in that it is provably optimal for linear classifiers.The main insight is a geometric characterization of the decision space that reduces the problem of designing best response oracles to minimizing a quadratic function over a set of convex polytopes.","Paper analyzes the problem of designing adversarial attacks against multiple classifiers, introducing algorithms that are optimal for linear classifiers and which provide state-of-the-art results for deep learning." 245,Learning Parametric Closed-Loop Policies for Markov Potential Games,"Multiagent systems where the agents interact among themselves and with an stochastic environment can be formalized as stochastic games.We study a subclass of these games, named Markov potential games, that appear often in economic and engineering applications when the agents share some common resource.We consider MPGs with continuous state-action variables, coupled constraints and nonconvex rewards.Previous analysis followed a variational approach that is only valid for very simple cases; or considered deterministic dynamics and provided open-loop analysis, studying strategies that consist in predefined action sequences, which are not optimal for stochastic environments.We present a closed-loop analysis for MPGs and consider parametric policies that depend on the current state and where agents adapt to stochastic transitions.We provide easily verifiable, sufficient and necessary conditions for a stochastic game to be an MPG, even for complex parametric functions; and show that a closed-loop Nash equilibrium can be found by solving a related optimal control problem.This is useful since solving an OCP---which is a single-objective problem---is usually much simpler than solving the original set of coupled OCPs that form the game---which is a multiobjective control problem.This is a considerable improvement over the previously standard approach for the CL analysis of MPGs, which gives no approximate solution if no NE belongs to the chosen parametric family, and which is practical only for simple parametric forms.We illustrate the theoretical contributions with an example by applying our approach to a noncooperative communications engineering game.We then solve the game with a deep reinforcement learning algorithm that learns policies that closely approximates an exact variational NE of the game.",We present general closed loop analysis for Markov potential games and show that deep reinforcement learning can be used for learning approximate closed-loop Nash equilibrium. 
246,HiLLoC: lossless image compression with hierarchical latent variable models,"We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model.We use this property, applying fully convolutional models to lossless compression, demonstrating a method to scale the VAE-based 'Bits-Back with ANS' algorithm for lossless compression to large color photographs, and achieving state of the art for compression of full size ImageNet images.We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results.","We scale up lossless compression with latent variables, beating existing approaches on full-size ImageNet images." 247,Adversarial Spheres,"State-of-the-art computer vision models have been shown to be vulnerable to small adversarial perturbations of the input.In other words, most images in the data distribution are both correctly classified by the model and are very close to a visually similar misclassified image.Despite substantial research interest, the cause of the phenomenon is still poorly understood and remains unsolved.We hypothesize that this counterintuitive behavior is a naturally occurring result of the high dimensional geometry of the data manifold.As a first step towards exploring this hypothesis, we study a simple synthetic dataset of classifying between two concentric high dimensional spheres.For this dataset we show a fundamental tradeoff between the amount of test error and the average distance to the nearest error.In particular, we prove that any model which misclassifies a small constant fraction of a sphere will be vulnerable to adversarial perturbations of size O(1/√d).Surprisingly, when we train several different architectures on this dataset, all of their error sets naturally approach this theoretical bound.As a result of the theory, the vulnerability of neural networks to small adversarial perturbations is a logical consequence of the amount of test error observed.We hope that our theoretical analysis of this very simple case will point the way forward to explore how the geometry of complex real-world data sets leads to adversarial examples.",We hypothesize that the vulnerability of image models to small adversarial perturbation is a naturally occurring result of the high dimensional geometry of the data manifold. We explore and theoretically prove this hypothesis for a simple synthetic dataset.
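Entry 247 above studies a synthetic task of classifying points on two concentric high-dimensional spheres. A minimal data-generation sketch is given below; the dimensionality and radii are illustrative choices, not the exact values used in the paper.

```python
import numpy as np

def concentric_spheres(n: int, dim: int = 500, r_inner: float = 1.0, r_outer: float = 1.3):
    """Sketch of the synthetic two-class dataset: points sampled uniformly on
    two concentric high-dimensional spheres."""
    x = np.random.randn(n, dim)
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform on the unit sphere
    y = np.random.randint(0, 2, size=n)             # 0 = inner sphere, 1 = outer sphere
    radii = np.where(y == 1, r_outer, r_inner)
    return x * radii[:, None], y

X, y = concentric_spheres(1000)
print(X.shape, y.mean())
```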
248,Fast Task Inference with Variational Intrinsic Successor Features,"It has been established that diverse behaviors spanning the controllable subspace of a Markov decision process can be trained by rewarding a policy for being distinguishable from other policies.However, one limitation of this formulation is the difficulty of generalizing beyond the finite set of behaviors being explicitly learned, as may be needed in subsequent tasks.Successor features provide an appealing solution to this generalization problem, but require defining the reward function as linear in some grounded feature space.In this paper, we show that these two techniques can be combined, and that each method solves the other's primary limitation.To do so, we introduce Variational Intrinsic Successor FeatuRes, a novel algorithm which learns controllable features that can be leveraged to provide enhanced generalization and fast task inference through the successor features framework.We empirically validate VISR on the full Atari suite, in a novel setup wherein the rewards are only exposed briefly after a long unsupervised phase.Achieving human-level performance on 12 games and beating all baselines, we believe VISR represents a step towards agents that rapidly learn from limited feedback.","We introduce Variational Intrinsic Successor FeatuRes (VISR), a novel algorithm which learns controllable features that can be leveraged to provide fast task inference through the successor features framework." 249,Domain Adaptation for Structured Output via Disentangled Patch Representations,"Predicting structured outputs such as semantic segmentation relies on expensive per-pixel annotations to learn strong supervised models like convolutional neural networks.However, these models trained on one data domain may not generalize well to other domains unequipped with annotations for model finetuning.To avoid the labor-intensive process of annotation, we develop a domain adaptation method to adapt the source data to the unlabeled target domain.To this end, we propose to learn discriminative feature representations of patches based on label histograms in the source domain, through the construction of a disentangled space.With such representations as guidance, we then use an adversarial learning scheme to push the feature representations of target patches closer to the distributions of source ones.In addition, we show that our framework can integrate a global alignment process with the proposed patch-level alignment and achieve state-of-the-art performance on semantic segmentation.Extensive ablation studies and experiments are conducted on numerous benchmark datasets with various settings, such as synthetic-to-real and cross-city scenarios.",A domain adaptation method for structured output via learning patch-level discriminative feature representations 250,Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models,"Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs.So far it has been unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications because most methods used to generate such perturbations rely either on detailed model information or on confidence scores such as class probabilities, neither of which are available in most real-world scenarios.In many such cases one currently needs to retreat to transfer-based attacks which rely on cumbersome substitute models, need access to the training data
and can be defended against.Here we emphasise the importance of attacks which solely rely on the final model decision.Such decision-based attacks are applicable to real-world black-box models such as autonomous cars, need less knowledge and are easier to apply than transfer-based attacks and are more robust to simple defences than gradient- or score-based attacks.Previous attacks in this category were limited to simple models or simple datasets.Here we introduce the Boundary Attack, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial.The attack is conceptually simple, requires close to no hyperparameter tuning, does not rely on substitute models and is competitive with the best gradient-based attacks in standard computer vision tasks like ImageNet.We apply the attack on two black-box algorithms from Clarifai.com.The Boundary Attack in particular and the class of decision-based attacks in general open new avenues to study the robustness of machine learning models and raise new questions regarding the safety of deployed machine learning systems.An implementation of the attack is available as part of Foolbox.",A novel adversarial attack that can directly attack real-world black-box machine learning models without transfer. 251,Learning Differentially Private Recurrent Language Models,"We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent.In particular, we add user-level privacy protection to the federated averaging algorithm, which makes large step updates from user-level data.Our work demonstrates that given a dataset with a sufficiently large number of users, achieving differential privacy comes at the cost of increased computation, rather than in decreased utility as in most prior work.We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset.",User-level differential privacy for recurrent neural network language models is possible with a sufficiently large dataset. 
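Entry 250 above introduces the Boundary Attack, which starts from a large adversarial perturbation and walks along the decision boundary toward the original image using only the model's final decision. The sketch below shows one simplified update step; the fixed step sizes stand in for the adaptive step-size scheme of the actual attack, and is_adversarial is a hypothetical decision-oracle callable.

```python
import numpy as np

def boundary_attack_step(x_orig, x_adv, is_adversarial, orth_step=0.01, toward_step=0.01):
    """Sketch of one Boundary-Attack-style update: take a small random step
    orthogonal to the current direction, stay on the sphere around the original
    image, then contract toward it, keeping the result only if it is still
    adversarial according to the decision oracle."""
    diff = x_adv - x_orig
    # Random perturbation, projected to be orthogonal to the current direction.
    eta = np.random.randn(*x_adv.shape)
    eta -= diff * (eta * diff).sum() / ((diff * diff).sum() + 1e-12)
    eta *= orth_step * np.linalg.norm(diff) / (np.linalg.norm(eta) + 1e-12)
    candidate = x_adv + eta
    # Rescale back onto the sphere of the current distance to the original image.
    cand_diff = candidate - x_orig
    candidate = x_orig + cand_diff * np.linalg.norm(diff) / (np.linalg.norm(cand_diff) + 1e-12)
    # Contract toward the original image.
    candidate = candidate + toward_step * (x_orig - candidate)
    return candidate if is_adversarial(candidate) else x_adv
```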
252,"Mix & Match: training convnets with mixed image sizes for improved accuracy, speed and scale resiliency","Convolutional neural networks are commonly trained using a fixed spatial image size predetermined for a given model.Although trained on images of a specific size, it is well established that CNNs can be used to evaluate a wide range of image sizes at test time, by adjusting the size of intermediate feature maps.In this work, we describe and evaluate a novel mixed-size training regime that mixes several image sizes at training time.We demonstrate that models trained using our method are more resilient to image size changes and generalize well even on small images.This allows faster inference by using smaller images at test time.For instance, we receive a 76.43% top-1 accuracy using ResNet50 with an image size of 160, which matches the accuracy of the baseline model with 2x fewer computations.Furthermore, for a given image size used at test time, we show this method can be exploited either to accelerate training or the final test accuracy.For example, we are able to reach a 79.27% accuracy with a model evaluated at a 288 spatial size for a relative improvement of 14% over the baseline.",Training convnets with mixed image size can improve results across multiple sizes at evaluation 253,Twin Networks: Matching the Future for Sequence Generation,"We propose a simple technique for encouraging generative RNNs to plan ahead.""We train a backward recurrent network to generate a given sequence in reverse order, and we encourage states of the forward model to predict cotemporal states of the backward model."", 'The backward network is used only during training, and plays no role during sampling or inference.We hypothesize that our approach eases modeling of long-term dependencies by implicitly forcing the forward states to hold information about the longer-term future.We show empirically that our approach achieves 9% relative improvement for a speech recognition task, and achieves significant improvement on a COCO caption generation task.",The paper introduces a method of training generative recurrent networks that helps to plan ahead. We run a second RNN in a reverse direction and make a soft constraint between cotemporal forward and backward states. 
254,A Case for Object Compositionality in Deep Generative Models of Images,"Deep generative models seek to recover the process with which the observed data was generated.They may be used to synthesize new samples or to subsequently extract representations.Successful approaches in the domain of images are driven by several core inductive biases.However, a bias to account for the compositional way in which humans structure a visual scene in terms of objects has frequently been overlooked.In this work we propose to structure the generator of a GAN to consider objects and their relations explicitly, and generate images by means of composition.This provides a way to efficiently learn a more accurate generative model of real-world images, and serves as an initial step towards learning corresponding object representations.We evaluate our approach on several multi-object image datasets, and find that the generator learns to identify and disentangle information corresponding to different objects at a representational level.A human study reveals that the resulting generative model is better at generating images that are more faithful to the reference distribution.","We propose to structure the generator of a GAN to consider objects and their relations explicitly, and generate images by means of composition" 255,Selfish Emergent Communication,"Current literature in machine learning holds that unaligned, self-interested agents do not learn to use an emergent communication channel.We introduce a new sender-receiver game to study emergent communication for this spectrum of partially-competitive scenarios and put special care into evaluation.We find that communication can indeed emerge in partially-competitive scenarios, and we discover three things that are tied to improving it.First, that selfish communication is proportional to cooperation, and it naturally occurs for situations that are more cooperative than competitive.Second, that stability and performance are improved by using LOLA, especially in more competitive scenarios.And third, that discrete protocols lend themselves better to learning cooperative communication than continuous ones.","We manage to emerge communication with selfish agents, contrary to the current view in ML" 256,Stop memorizing: A data-dependent regularization framework for intrinsic pattern learning,"Deep neural networks typically have enough capacity to fit random data by brute force even when conventional data-dependent regularizations focusing on the geometry of the features are imposed.We find out that the reason for this is the inconsistency between the enforced geometry and the standard softmax cross entropy loss.To resolve this, we propose a new framework for data-dependent DNN regularization, the Geometrically-Regularized-Self-Validating neural Networks.During training, the geometry enforced on one batch of features is simultaneously validated on a separate batch using a validation loss consistent with the geometry.We study a particular case of GRSVNet, the Orthogonal-Low-rank Embedding-GRSVNet, which is capable of producing highly discriminative features residing in orthogonal low-rank subspaces.Numerical experiments show that OLE-GRSVNet outperforms DNNs with conventional regularization when trained on real data.More importantly, unlike conventional DNNs, OLE-GRSVNet refuses to memorize random data or random labels, suggesting it only learns intrinsic patterns by reducing the memorizing capacity of the baseline DNN.",we propose a new framework for data-dependent 
DNN regularization that can prevent DNNs from overfitting random data or random labels. 257,On the Inductive Bias of Word-Character-Level Multi-Task Learning for Speech Recognition,"End-to-end automatic speech recognition commonly transcribes audio signals into sequences of characters while its performance is evaluated by measuring the word-error rate.This suggests that predicting sequences of words directly may be helpful instead.However, training with word-level supervision can be more difficult due to the sparsity of examples per label class.In this paper we analyze an end-to-end ASR model that combines a word-and-character representation in a multi-task learning framework.We show that it improves on the WER and study how the word-level model can benefit from character-level supervision by analyzing the learned inductive preference bias of each model component empirically.We find that by adding character-level supervision, the MTL model interpolates between recognizing more frequent words and shorter words.",Multi-task learning improves word-and-character-level speech recognition by interpolating the preference biases of its components: frequency- and word length-preference. 258,Spreading vectors for similarity search,"Discretizing floating-point vectors is a fundamental step of modern indexing methods.State-of-the-art techniques learn parameters of the quantizers on training data for optimal performance, thus adapting quantizers to the data.In this work, we propose to reverse this paradigm and adapt the data to the quantizer: we train a neural net whose last layers form a fixed parameter-free quantizer, such as pre-defined points of a sphere.As a proxy objective, we design and train a neural network that favors uniformity in the spherical latent space, while preserving the neighborhood structure after the mapping. 
For this purpose, we propose a new regularizer derived from the Kozachenko-Leonenko differential entropy estimator and combine it with a locality-aware triplet loss.Experiments show that our end-to-end approach outperforms most learned quantization methods, and is competitive with the state of the art on widely adopted benchmarks.Furthermore, we show that training without the quantization step results in almost no difference in accuracy, but yields a generic catalyser that can be applied with any subsequent quantization technique.","We learn a neural network that uniformizes the input distribution, which leads to competitive indexing performance in high-dimensional space" 259,Reanalysis of Variance Reduced Temporal Difference Learning,"Temporal difference learning is a popular algorithm for policy evaluation in reinforcement learning, but vanilla TD can substantially suffer from the inherent optimization variance.A variance reduced TD algorithm was proposed by Korda and La, which applies the variance reduction technique directly to online TD learning with Markovian samples.In this work, we first point out the technical errors in the analysis of VRTD in Korda and La, and then provide a mathematically solid analysis of the non-asymptotic convergence of VRTD and its variance reduction performance.We show that VRTD is guaranteed to converge to a neighborhood of the fixed-point solution of TD at a linear convergence rate.Furthermore, the variance error and the bias error of VRTD are significantly reduced by the batch size of variance reduction in comparison to those of vanilla TD.",This paper provides a rigorous study of variance-reduced TD learning and characterizes its advantage over vanilla TD learning 260,Domain Adaptive Multibranch Networks,"We tackle unsupervised domain adaptation by accounting for the fact that different domains may need to be processed differently to arrive at a common feature representation effective for recognition.To this end, we introduce a deep learning framework where each domain undergoes a different sequence of operations, allowing some, possibly more complex, domains to go through more computations than others.This contrasts with state-of-the-art domain adaptation techniques that force all domains to be processed with the same series of operations, even when using multi-stream architectures whose parameters are not shared.As evidenced by our experiments, the greater flexibility of our method translates to higher accuracy.Furthermore, it allows us to handle any number of domains simultaneously.","A Multiflow Network is a dynamic architecture for domain adaptation that learns potentially different computational graphs per domain, so as to map them to a common representation where inference can be performed in a domain-agnostic fashion."
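Entry 258 above regularises latent codes toward uniformity on the sphere with a term derived from the Kozachenko-Leonenko differential entropy estimator. A minimal sketch of such a uniformity term follows; it penalises small nearest-neighbour distances within a batch and omits the locality-aware triplet loss that the paper combines it with.

```python
import torch

def koleo_regularizer(z: torch.Tensor, eps: float = 1e-8):
    """Sketch of a Kozachenko-Leonenko-style uniformity regularizer: penalise
    small nearest-neighbour distances among the (L2-normalised) latent codes
    z of shape (B, D), encouraging the codes to spread out on the sphere."""
    z = torch.nn.functional.normalize(z, dim=1)
    dists = torch.cdist(z, z)                  # (B, B) pairwise distances
    dists.fill_diagonal_(float("inf"))         # ignore self-distances
    nn_dist = dists.min(dim=1).values          # distance to the nearest neighbour
    return -torch.log(nn_dist + eps).mean()
```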
261,IMPACT: Importance Weighted Asynchronous Architectures with Clipped Target Networks,"The practical usage of reinforcement learning agents is often bottlenecked by the duration of training time.To accelerate training, practitioners often turn to distributed reinforcement learning architectures to parallelize and accelerate the training process.However, modern methods for scalable reinforcement learning often trade off between the throughput of samples that an RL agent can learn from and the quality of learning from each sample.In these scalable RL architectures, as one increases sample throughput, sample efficiency drops significantly.To address this, we propose a new distributed reinforcement learning algorithm, IMPACT.IMPACT extends PPO with three changes: a target network for stabilizing the surrogate objective, a circular buffer, and truncated importance sampling.In discrete action-space environments, we show that IMPACT attains higher reward and, simultaneously, achieves up to a 30% decrease in training wall-time compared to IMPALA.For continuous control environments, IMPACT trains faster than existing scalable agents while preserving the sample efficiency of synchronous PPO.",IMPACT helps RL agents train faster by decreasing training wall-clock time and increasing sample efficiency simultaneously. 262,Coloring graph neural networks for node disambiguation,"In this paper, we show that a simple coloring scheme can improve, both theoretically and empirically, the expressive power of Message Passing Neural Networks.More specifically, we introduce a graph neural network called Colored Local Iterative Procedure that uses colors to disambiguate identical node attributes, and show that this representation is a universal approximator of continuous functions on graphs with node attributes.Our method relies on separability, a key topological characteristic that allows extending well-chosen neural networks into universal representations.Finally, we show experimentally that CLIP is capable of capturing structural characteristics that traditional MPNNs fail to distinguish, while being state-of-the-art on benchmark graph classification datasets.","This paper introduces a coloring scheme for node disambiguation in graph neural networks based on separability, proven to be a universal MPNN extension." 263,Non-Sequential Melody Generation,"In this paper we present a method for algorithmic melody generation using a generative adversarial network without recurrent components.Music generation has been successfully done using recurrent neural networks, where the model learns sequence information that can help create authentic sounding melodies. Here, we use a DCGAN architecture with dilated convolutions and towers to capture sequential information as spatial image information, and learn long-range dependencies in fixed-length melody forms such as the Irish traditional reel.",Representing melodies as images with semantic units aligned, we can generate them using a DCGAN without any recurrent components. 264,Hallucinations in neural machine translation,"Neural machine translation systems have reached state-of-the-art performance in translating text and are widely deployed. Yet little is understood about how these systems function or break. Here we show that NMT systems are susceptible to producing highly pathological translations that are completely untethered from the source material, which we term hallucinations.
Such pathological translations are problematic because they are deeply damaging to user trust and easy to find. We describe a method to generate hallucinations and show that many common variations of the NMT architecture are susceptible to them.We study a variety of approaches to reduce the frequency of hallucinations, including data augmentation, dynamical systems and regularization techniques, and show that data augmentation significantly reduces hallucination frequency.Finally, we analyze networks that produce hallucinations and show signatures of hallucinations in the attention matrix and in the stability measures of the decoder.","We introduce and analyze the phenomenon of ""hallucinations"" in NMT, or spurious translations unrelated to source text, and propose methods to reduce its frequency." 265,Can I Trust the Explainer? Verifying Post-Hoc Explanatory Methods,"For AI systems to garner widespread public acceptance, we must develop methods capable of explaining the decisions of black-box models such as neural networks.In this work, we identify two issues with current explanatory methods.First, we show that two prevalent perspectives on explanations, feature-additivity and feature-selection, lead to fundamentally different instance-wise explanations.In the literature, explainers from different perspectives are currently being directly compared, despite their distinct explanation goals.The second issue is that current post-hoc explainers have only been thoroughly validated on simple models, such as linear regression, and, when applied to real-world neural networks, explainers are commonly evaluated under the assumption that the learned models behave reasonably.However, neural networks often rely on unreasonable correlations, even when producing correct decisions.We introduce a verification framework for explanatory methods under the feature-selection perspective.Our framework is based on a non-trivial neural network architecture trained on a real-world task, and for which we are able to provide guarantees on its inner workings.We validate the efficacy of our evaluation by showing the failure modes of current explainers.We aim for this framework to provide a publicly available, off-the-shelf evaluation when the feature-selection perspective on explanations is needed.",An evaluation framework based on a real-world neural network for post-hoc explanatory methods 266,EfferenceNets for latent space planning,"Planning in high-dimensional space remains a challenging problem, even with recent advances in algorithms and computational power.We are inspired by efference copy and sensory reafference theory from neuroscience. Our aim is to allow agents to form mental models of their environments for planning. The cerebellum is emulated with a two-stream, fully connected, predictor network.The network receives as inputs the efference as well as the features of the current state.Building on insights gained from knowledge distillation methods, we choose as our features the outputs of a pre-trained network, yielding a compressed representation of the current state.
The representation is chosen such that it allows for fast search using classical graph search algorithms.We display the effectiveness of our approach on a viewpoint-matching task using a modified best-first search algorithm.",We present a neuroscience-inspired method based on neural networks for latent space search 267,Decoding Decoders: Finding Optimal Representation Spaces for Unsupervised Similarity Tasks,"Experimental evidence indicates that simple models outperform complex deep networks on many unsupervised similarity tasks.Introducing the concept of an optimal representation space, we provide a simple theoretical resolution to this apparent paradox.In addition, we present a straightforward procedure that, without any retraining or architectural modifications, allows deep recurrent models to perform equally well when compared to shallow models.To validate our analysis, we conduct a set of consistent empirical evaluations and introduce several new sentence embedding models in the process.Even though this work is presented within the context of natural language processing, the insights are readily applicable to other domains that rely on distributed representations for transfer tasks.","By introducing the notion of an optimal representation space, we provide a theoretical argument and experimental validation that an unsupervised model for sentences can perform well on both supervised similarity and unsupervised transfer tasks." 268,Large-scale Cloze Test Dataset Designed by Teachers,"The cloze test is widely adopted in language exams to evaluate students' language proficiency.In this paper, we propose the first large-scale human-designed cloze test dataset, CLOTH, in which the questions were used in middle-school and high-school language exams.With the missing blanks carefully created by teachers and candidate choices purposely designed to be confusing, CLOTH requires a deeper language understanding and a wider attention span than previous automatically generated cloze datasets.We show that humans outperform dedicatedly designed baseline models by a significant margin, even when the model is trained on sufficiently large external data.We investigate the source of the performance gap, trace model deficiencies to some distinct properties of CLOTH, and identify the limited ability to comprehend long-term context as the key bottleneck.In addition, we find that human-designed data leads to a larger gap between the model's performance and human performance when compared to automatically generated data.",A cloze test dataset designed by teachers to assess language proficiency 269,Emergence of functional and structural properties of the head direction system by optimization of recurrent neural networks,"Recent work suggests goal-driven training of neural networks can be used to model neural activity in the brain.While response properties of neurons in artificial neural networks bear similarities to those in the brain, the network architectures are often constrained to be different.Here we ask if a neural network can recover both neural representations and, if the architecture is unconstrained and optimized, also the anatomical properties of neural circuits.We demonstrate this in a system where the connectivity and the functional organization have been characterized, namely, the head direction circuit of the rodent and fruit fly.We trained recurrent neural networks to estimate head direction through integration of angular velocity.We found that the two distinct classes of neurons observed in
the head direction system, the Ring neurons and the Shifter neurons, emerged naturally in artificial neural networks as a result of training.Furthermore, connectivity analysis and in-silico neurophysiology revealed structural and mechanistic similarities between artificial networks and the head direction system.Overall, our results show that optimization of RNNs in a goal-driven task can recapitulate the structure and function of biological circuits, suggesting that artificial neural networks can be used to study the brain at the level of both neural activity and anatomical organization.",Artificial neural networks trained with gradient descent are capable of recapitulating both realistic neural activity and the anatomical organization of a biological circuit. 270,Efficient Sparse-Winograd Convolutional Neural Networks,"Convolutional Neural Networks are computationally intensive, which limits their application on mobile devices.Their energy is dominated by the number of multiplies needed to perform the convolutions.Winograd’s minimal filtering algorithm and network pruning can reduce the operation count, but these two methods cannot be straightforwardly combined — applying the Winograd transform fills in the sparsity in both the weights and the activations.We propose two modifications to Winograd-based CNNs to enable these methods to exploit sparsity.First, we move the ReLU operation into the Winograd domain to increase the sparsity of the transformed activations.Second, we prune the weights in the Winograd domain to exploit static weight sparsity.For models on CIFAR-10, CIFAR-100 and ImageNet datasets, our method reduces the number of multiplications by 10.4x, 6.8x and 10.8x respectively with loss of accuracy less than 0.1%, outperforming previous baselines by 2.0x-3.0x.We also show that moving ReLU to the Winograd domain allows more aggressive pruning.",Prune and ReLU in Winograd domain for efficient convolutional neural network 271,Advanced Neuroevolution: A gradient-free algorithm to train Deep Neural Networks,"In this paper we present a novel optimization algorithm called Advanced Neuroevolution.The aim for this algorithm is to train deep neural networks, and eventually act as an alternative to Stochastic Gradient Descent and its variants as needed.We evaluated our algorithm on the MNIST dataset, as well as on several global optimization problems such as the Ackley function.We find the algorithm performing relatively well for both cases, overtaking other global optimization algorithms such as Particle Swarm Optimization and Evolution Strategies.",A new algorithm to train deep neural networks. Tested on optimization functions and MNIST. 
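Entry 271 above trains networks without gradients by evolving candidate parameter vectors. The sketch below is a generic elite-selection evolutionary loop over a flat parameter vector (e.g. flattened network weights), not the specific Advanced Neuroevolution algorithm of the paper; population size, noise scale, and elite fraction are illustrative.

```python
import numpy as np

def evolve(fitness, dim, pop_size=50, sigma=0.1, elite_frac=0.2, generations=100):
    """Sketch of a simple gradient-free evolutionary loop: perturb the current
    mean with Gaussian noise, keep the best candidates, and re-centre."""
    mean = np.zeros(dim)
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(generations):
        pop = mean + sigma * np.random.randn(pop_size, dim)
        scores = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(scores)[-n_elite:]]   # higher fitness is better
        mean = elite.mean(axis=0)
    return mean

# Example: maximise a simple negative-sphere objective in 10 dimensions.
best = evolve(lambda p: -np.sum(p ** 2), dim=10)
print(np.round(best, 3))
```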
272,Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches,"Stochastic neural net weights are used in a variety of contexts, including regularization, Bayesian neural nets, exploration in reinforcement learning, and evolution strategies.Unfortunately, due to the large number of weights, all the examples in a mini-batch typically share the same weight perturbation, thereby limiting the variance reduction effect of large mini-batches.We introduce flipout, an efficient method for decorrelating the gradients within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example.Empirically, flipout achieves the ideal linear variance reduction for fully connected networks, convolutional networks, and RNNs.We find significant speedups in training neural networks with multiplicative Gaussian perturbations.We show that flipout is effective at regularizing LSTMs, and outperforms previous methods.Flipout also enables us to vectorize evolution strategies: in our experiments, a single GPU with flipout can handle the same throughput as at least 40 CPU cores using existing methods, equivalent to a factor-of-4 cost reduction on Amazon Web Services.","We introduce flipout, an efficient method for decorrelating the gradients computed by stochastic neural net weights within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example." 273,LIA: Latently Invertible Autoencoder with Adversarial Learning,"Deep generative models such as Variational AutoEncoder and Generative Adversarial Network play an increasingly important role in machine learning and computer vision.However, there are two fundamental issues hindering their real-world applications: the difficulty of conducting variational inference in VAE and the functional absence of encoding real-world samples in GAN.In this paper, we propose a novel algorithm named Latently Invertible Autoencoder to address the above two issues in one framework.An invertible network and its inverse mapping are symmetrically embedded in the latent space of VAE.Thus the partial encoder first transforms the input into feature vectors and then the distribution of these feature vectors is reshaped to fit a prior by the invertible network.The decoder proceeds in the reverse order of the encoder's composite mappings.A two-stage stochasticity-free training scheme is designed to train LIA via adversarial learning, in the sense that the decoder of LIA is first trained as a standard GAN with the invertible network and then the partial encoder is learned from an autoencoder by detaching the invertible network from LIA. Experiments conducted on the FFHQ face dataset and three LSUN datasets validate the effectiveness of LIA for inference and generation.",A new model Latently Invertible Autoencoder is proposed to solve the problem of variational inference in VAE using the invertible network and two-stage adversarial training.
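A small sketch of the flipout trick from entry 272 above, for a single linear layer with factorized Gaussian weights. The rank-one sign matrices give each example its own pseudo-independent perturbation while reusing one shared base perturbation per batch; this is our own illustration of the published identity, not the authors' code.

```python
import numpy as np

def flipout_linear(X, W_mean, W_std, rng):
    """Pseudo-independent per-example weight perturbations (flipout, entry 272)
    for a linear layer. X: (batch, d_in); W_mean, W_std: (d_in, d_out)."""
    batch, d_in = X.shape
    d_out = W_mean.shape[1]
    dW = W_std * rng.standard_normal(W_mean.shape)   # one base perturbation per batch
    S = rng.choice([-1.0, 1.0], size=(batch, d_in))  # per-example sign vectors
    R = rng.choice([-1.0, 1.0], size=(batch, d_out))
    # Equivalent to using W_mean + dW * np.outer(s_n, r_n) for each example n,
    # but computed with only two matrix multiplies for the whole batch.
    return X @ W_mean + ((X * S) @ dW) * R

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 5))
out = flipout_linear(X, rng.standard_normal((5, 3)), 0.1 * np.ones((5, 3)), rng)
print(out.shape)  # (8, 3)
```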
274,Parametrized Hierarchical Procedures for Neural Programming,"Neural programs are highly accurate and structured policies that perform algorithmic tasks by controlling the behavior of a computation mechanism.Despite the potential to increase the interpretability and the compositionality of the behavior of artificial agents, it remains difficult to learn, from demonstrations, neural networks that represent computer programs.The main challenges that set algorithmic domains apart from other imitation learning domains are the need for high accuracy, the involvement of specific structures of data, and the extremely limited observability.To address these challenges, we propose to model programs as Parametrized Hierarchical Procedures.A PHP is a sequence of conditional operations, using a program counter along with the observation to select between taking an elementary action, invoking another PHP as a sub-procedure, and returning to the caller.We develop an algorithm for training PHPs from a set of supervisor demonstrations, only some of which are annotated with the internal call structure, and apply it to efficient level-wise training of multi-level PHPs.We show in two benchmarks, NanoCraft and long-hand addition, that PHPs can learn neural programs more accurately from smaller amounts of both annotated and unannotated demonstrations.","We introduce the PHP model for hierarchical representation of neural programs, and an algorithm for learning PHPs from a mixture of strong and weak supervision." 275,Transfer Learning for Related Reinforcement Learning Tasks via Image-to-Image Translation,"Deep Reinforcement Learning has managed to achieve state-of-the-art results in learning control policies directly from raw pixels.However, despite its remarkable success, it fails to generalize, a fundamental component required in a stable Artificial Intelligence system.Using the Atari game Breakout, we demonstrate the difficulty of a trained agent in adjusting to simple modifications in the raw image, ones that a human could adapt to trivially.In transfer learning, the goal is to use the knowledge gained from the source task to make the training of the target task faster and better.We show that using various forms of fine-tuning, a common method for transfer learning, is not effective for adapting to such small visual changes.In fact, it is often easier to re-train the agent from scratch than to fine-tune a trained agent.We suggest that in some cases transfer learning can be improved by adding a dedicated component whose goal is to learn to visually map between the known domain and the new one.Concretely, we use Unaligned Generative Adversarial Networks to create a mapping function to translate images in the target task to corresponding images in the source task.These mapping functions allow us to transform between various variations of the Breakout game, as well as between different levels of a Nintendo game, Road Fighter.We show that learning this mapping is substantially more efficient than re-training.A visualization of a trained agent playing Breakout and Road Fighter, with and without the GAN transfer, is available online.","We propose a method of transferring knowledge between related RL tasks using visual mappings, and demonstrate its effectiveness on visual variants of the Atari Breakout game and different levels of Road Fighter, a Nintendo car driving game."
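A minimal sketch of the inference-time idea in entry 275 above: rather than fine-tuning the agent, target-task frames are translated back to the source domain with a learned (e.g. unaligned GAN) mapping and the source policy is reused unchanged. The names `mapping_net` and `source_policy` are hypothetical stand-ins, not identifiers from the paper.

```python
# Sketch only: mapping_net is the trained image-to-image translator,
# source_policy is the agent trained on the original (source) task.
def act_in_target_domain(target_frame, mapping_net, source_policy):
    source_like_frame = mapping_net(target_frame)   # visual domain translation
    return source_policy(source_like_frame)         # unchanged source-task policy
```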
276,Revisiting the Generalization of Adaptive Gradient Methods,"A commonplace belief in the machine learning community is that using adaptive gradient methods hurts generalization.We re-examine this belief both theoretically and experimentally, in light of insights and trends from recent years.We revisit some previous oft-cited experiments and theoretical accounts in more depth, and provide a new set of experiments in larger-scale, state-of-the-art settings.We conclude that with proper tuning, the improved training performance of adaptive optimizers does not in general carry an overfitting penalty, especially in contemporary deep learning.Finally, we synthesize a user's guide to adaptive optimizers, including some proposed modifications to AdaGrad to mitigate some of its empirical shortcomings.","Adaptive gradient methods, when done right, do not incur a generalization penalty. " 277,Adaptive Posterior Learning: few-shot learning with a surprise-based memory module,"The ability to generalize quickly from few observations is crucial for intelligent systems.In this paper we introduce APL, an algorithm that approximates probability distributions by remembering the most surprising observations it has encountered.These past observations are recalled from an external memory module and processed by a decoder network that can combine information from different memory slots to generalize beyond direct recall.We show this algorithm can perform as well as state of the art baselines on few-shot classification benchmarks with a smaller memory footprint. In addition, its memory compression allows it to scale to thousands of unknown labels. Finally, we introduce a meta-learning reasoning task which is more challenging than direct classification.In this setting, APL is able to generalize with fewer than one example per class via deductive reasoning.",We introduce a model which generalizes quickly from few observations by storing surprising information and attending over the most relevant data at each time point. 278,"Fast and Accurate Text Classification: Skimming, Rereading and Early Stopping","Recent advances in recurrent neural nets have shown much promise in many applications in natural language processing.For most of these tasks, such as sentiment analysis of customer reviews, a recurrent neural net model parses the entire review before forming a decision.We argue that reading the entire input is not always necessary in practice, since a lot of reviews are often easy to classify, i.e., a decision can be formed after reading some crucial sentences or words in the provided text.In this paper, we present an approach of fast reading for text classification.Inspired by several well-known human reading techniques, our approach implements an intelligent recurrent agent which evaluates the importance of the current snippet in order to decide whether to make a prediction, or to skip some texts, or to re-read part of the sentence.Our agent uses an RNN module to encode information from the past and the current tokens, and applies a policy module to form decisions.With an end-to-end training algorithm based on policy gradient, we train and test our agent on several text classification datasets and achieve both higher efficiency and better accuracy compared to previous approaches.","We develop an end-to-end trainable approach for skimming, rereading and early stopping applicable to classification tasks. 
" 279,Efficient Exploration via State Marginal Matching,"Reinforcement learning agents need to explore their unknown environments to solve the tasks given to them.The Bayes optimal solution to exploration is intractable for complex environments, and while several exploration methods have been proposed as approximations, it remains unclear what underlying objective is being optimized by existing exploration methods, or how they can be altered to incorporate prior knowledge about the task.Moreover, it is unclear how to acquire a single exploration strategy that will be useful for solving multiple downstream tasks.We address these shortcomings by learning a single exploration policy that can quickly solve a suite of downstream tasks in a multi-task setting, amortizing the cost of learning to explore.We recast exploration as a problem of State Marginal Matching, where we aim to learn a policy for which the state marginal distribution matches a given target state distribution, which can incorporate prior knowledge about the task.We optimize the objective by reducing it to a two-player, zero-sum game between a state density model and a parametric policy.""Our theoretical analysis of this approach suggests that prior exploration methods do not learn a policy that does distribution matching, but acquire a replay buffer that performs distribution matching, an observation that potentially explains these prior methods' success in single-task settings."", 'On both simulated and real-world tasks, we demonstrate that our algorithm explores faster and adapts more quickly than prior methods.",We view exploration in RL as a problem of matching a marginal distribution over states. 280,HexaConv,"The effectiveness of Convolutional Neural Networks stems in large part from their ability to exploit the translation invariance that is inherent in many learning problems.Recently, it was shown that CNNs can exploit other invariances, such as rotation invariance, by using group convolutions instead of planar convolutions.However, for reasons of performance and ease of implementation, it has been necessary to limit the group convolution to transformations that can be applied to the filters without interpolation.Thus, for images with square pixels, only integer translations, rotations by multiples of 90 degrees, and reflections are admissible.Whereas the square tiling provides a 4-fold rotational symmetry, a hexagonal tiling of the plane has a 6-fold rotational symmetry.In this paper we show how one can efficiently implement planar convolution and group convolution over hexagonal lattices, by re-using existing highly optimized convolution routines.We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget.Furthermore, we find that the increased degree of symmetry of the hexagonal grid increases the effectiveness of group convolutions, by allowing for more parameter sharing.We show that our method significantly outperforms conventional CNNs on the AID aerial scene classification dataset, even outperforming ImageNet pre-trained models.","We introduce G-HexaConv, a group equivariant convolutional neural network on hexagonal lattices." 
281,FAST OBJECT LOCALIZATION VIA SENSITIVITY ANALYSIS,"Deep Convolutional Neural Networks have been repeatedly shown to perform well on image classification tasks, successfully recognizing a broad array of objects when given sufficient training data.Methods for object localization, however, are still in need of substantial improvement.Common approaches to this problem involve the use of a sliding window, sometimes at multiple scales, providing input to a deep CNN trained to classify the contents of the window.In general, these approaches are time-consuming, requiring many classification calculations.In this paper, we offer a fundamentally different approach to the localization of recognized objects in images.Our method is predicated on the idea that a deep CNN capable of recognizing an object must implicitly contain knowledge about object location in its connection weights.We provide a simple method to interpret classifier weights in the context of individual classified images.This method involves the calculation of the derivative of network-generated activation patterns, such as the activation of output class label units, with regard to each input pixel, performing a sensitivity analysis that identifies the pixels that, in a local sense, have the greatest influence on internal representations and object recognition.These derivatives can be efficiently computed using a single backward pass through the deep CNN classifier, producing a sensitivity map of the image.We demonstrate that a simple linear mapping can be learned from sensitivity maps to bounding box coordinates, localizing the recognized object.Our experimental results, using real-world data sets for which ground truth localization information is known, reveal competitive accuracy from our fast technique.",Proposing a novel object localization (detection) approach based on interpreting the deep CNN's internal representations 282,Trellis Networks for Sequence Modeling,"We present trellis networks, a new architecture for sequence modeling.On the one hand, a trellis network is a temporal convolutional network with special structure, characterized by weight tying across depth and direct injection of the input into deep layers.On the other hand, we show that truncated recurrent networks are equivalent to trellis networks with special sparsity structure in their weight matrices.Thus trellis networks with general weight matrices generalize truncated recurrent networks.We leverage these connections to design high-performing trellis networks that absorb structural and algorithmic elements from both recurrent and convolutional models.Experiments demonstrate that trellis networks outperform the current state-of-the-art methods on a variety of challenging benchmarks, including word-level language modeling and character-level language modeling tasks, and stress tests designed to evaluate long-term memory retention.The code is available at https://github.com/locuslab/trellisnet .",Trellis networks are a new sequence modeling architecture that bridges recurrent and convolutional models and sets a new state of the art on word- and character-level language modeling.
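A short sketch of the single-backward-pass sensitivity analysis described in entry 281 above: take the derivative of a class score with respect to every input pixel, then fit a simple linear map from (flattened) sensitivity maps to bounding box coordinates. The pooling of channels, the least-squares fit, and all identifiers are illustrative assumptions, not the authors' code.

```python
import torch

def sensitivity_map(model, image, class_idx):
    """Derivative of a class score w.r.t. each input pixel (entry 281).
    image: (C, H, W); model: any classifier returning (1, num_classes)."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, class_idx]
    score.backward()
    return image.grad.abs().max(dim=0).values  # (H, W) saliency over channels

def fit_box_regressor(maps, boxes):
    """Ordinary least-squares map from flattened sensitivity maps to
    (N, 4) box coordinates; a simple stand-in for the learned linear map."""
    X = torch.stack([m.flatten() for m in maps])          # (N, H*W)
    X = torch.cat([X, torch.ones(len(maps), 1)], dim=1)   # add bias column
    return torch.linalg.lstsq(X, boxes).solution           # (H*W+1, 4)
```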
283,Training Domain Specific Models for Energy-Efficient Object Detection,"We propose an end-to-end framework for training domain specific models to obtain both high accuracy and computational efficiency for object detection tasks.DSMs are trained with distillation and focus on achieving high accuracy in a limited domain.We argue that DSMs can capture essential features well even with a small model size, enabling higher accuracy and efficiency than traditional techniques. In addition, we improve the training efficiency by reducing the dataset size by culling easy-to-classify images from the training set.For the limited domain, we observed that compact DSMs significantly surpass the accuracy of COCO-trained models of the same size.By training on a compact dataset, we show that with an accuracy drop of only 3.6%, the training time can be reduced by 93%.",High object-detection accuracy can be obtained by training domain specific compact models and the training can be very short. 284,Bootstrapping the Expressivity with Model-based Planning,"We compare model-free reinforcement learning with model-based approaches through the lens of the expressive power of neural networks for policies, Q-functions, and dynamics. We show, theoretically and empirically, that even for one-dimensional continuous state space, there are many MDPs whose optimal Q-functions and policies are much more complex than the dynamics.We hypothesize many real-world MDPs also have a similar property.For these MDPs, model-based planning is a favorable algorithm, because the resulting policies can approximate the optimal policy significantly better than a neural network parameterization can, and model-free or model-based policy optimization rely on policy parameterization.Motivated by the theory, we apply a simple multi-step model-based bootstrapping planner to bootstrap a weak Q-function into a stronger policy.Empirical results show that applying BOOTS on top of model-based or model-free policy optimization algorithms at test time improves the performance on MuJoCo benchmark tasks.","We compare deep model-based and model-free RL algorithms by studying the approximability of Q-functions, policies, and dynamics by neural networks. 
" 285,Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions,"Common-sense physical reasoning is an essential ingredient for any intelligent agent operating in the real-world.For example, it can be used to simulate the environment, or to infer the state of parts of the world that are currently unobserved.In order to match real-world conditions this causal knowledge must be learned without access to supervised data.To address this problem we present a novel method that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion.It incorporates prior knowledge about the compositional nature of human perception to factor interactions between object-pairs and learn efficiently.On videos of bouncing balls we show the superior modelling capabilities of our method compared to other unsupervised neural approaches that do not incorporate such prior knowledge.We demonstrate its ability to handle occlusion and show that it can extrapolate learned knowledge to scenes with different numbers of objects.",We introduce a novel approach to common-sense physical reasoning that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion 286,Simplicity bias in the parameter-function map of deep neural networks,"The idea that neural networks may exhibit a bias towards simplicity has a long history.Simplicity bias provides a way to quantify this intuition. It predicts, for a broad class of input-output maps which can describe many systems in science and engineering, that simple outputs are exponentially more likely to occur upon uniform random sampling of inputs than complex outputs are. This simplicity bias behaviour has been observed for systems ranging from the RNA sequence to secondary structure map, to systems of coupled differential equations, to models of plant growth. Deep neural networks can be viewed as a mapping from the space of parameters to the space of functions. We show that this parameter-function map obeys the necessary conditions for simplicity bias, and numerically show that it is hugely biased towards functions with low descriptional complexity. We also demonstrate a Zipf like power-law probability-rank relation. A bias towards simplicity may help explain why neural nets generalize so well.",A very strong bias towards simple outpouts is observed in many simple input-ouput maps. The parameter-function map of deep networks is found to be biased in the same way. 
287,Learning Multi-Agent Communication Through Structured Attentive Reasoning,"Learning communication via deep reinforcement learning has recently been shown to be an effective way to solve cooperative multi-agent tasks.However, learning which communicated information is beneficial for each agent's decision-making remains a challenging task.In order to address this problem, we introduce a fully differentiable framework for communication and reasoning, enabling agents to solve cooperative tasks in partially-observable environments.The framework is designed to facilitate explicit reasoning between agents, through a novel memory-based attention network that can learn selectively from its past memories.The model communicates through a series of reasoning steps that decompose each agent's intentions into learned representations that are used first to compute the relevance of communicated information, and second to extract information from memories given newly received information.By selectively interacting with new information, the model effectively learns a communication protocol directly, in an end-to-end manner.We empirically demonstrate the strength of our model in cooperative multi-agent tasks, where inter-agent communication and reasoning over prior information substantially improves performance compared to baselines.",Novel architecture of memory-based attention mechanism for multi-agent communication. 288,Symplectic ODE-Net: Learning Hamiltonian Dynamics with Control,"In this paper, we introduce Symplectic ODE-Net, a deep learning framework which can infer the dynamics of a physical system from observed state trajectories.To achieve better generalization with fewer training samples, SymODEN incorporates appropriate inductive bias by designing the associated computation graph in a physics-informed manner.In particular, we enforce Hamiltonian dynamics with control to learn the underlying dynamics in a transparent way which can then be leveraged to draw insight about relevant physical aspects of the system, such as mass and potential energy.In addition, we propose a parametrization which can enforce this Hamiltonian formalism even when the generalized coordinate data is embedded in a high-dimensional space or we can only access velocity data instead of generalized momentum.This framework, by offering interpretable, physically-consistent models for physical systems, opens up new possibilities for synthesizing model-based control strategies.","This work enforces Hamiltonian dynamics with control to learn system models from embedded position and velocity data, and exploits this physically-consistent dynamics to synthesize model-based control via energy shaping."
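A minimal sketch of the physics-informed structure behind entry 288 above: given a (learned) Hamiltonian H(q, p) and input matrix g(q), the time derivatives follow Hamilton's equations with an additive control term. In the paper H and g are neural networks inside an ODE solver; here they are arbitrary callables, so this is an illustration of the structure, not the published model.

```python
import torch

def hamiltonian_vector_field(H, g, q, p, u):
    """dq/dt = dH/dp,  dp/dt = -dH/dq + g(q) u  (Hamiltonian dynamics with control)."""
    q = q.requires_grad_(True)
    p = p.requires_grad_(True)
    dHdq, dHdp = torch.autograd.grad(H(q, p), (q, p), create_graph=True)
    dq_dt = dHdp
    dp_dt = -dHdq + g(q) * u
    return dq_dt, dp_dt

# Toy usage: a pendulum-like Hamiltonian and a constant input gain.
H = lambda q, p: 0.5 * (p ** 2).sum() + (1 - torch.cos(q)).sum()
g = lambda q: torch.ones_like(q)
print(hamiltonian_vector_field(H, g, torch.tensor([0.3]), torch.tensor([0.0]), 0.5))
```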
289,On Federated Learning of Deep Networks from Non-IID Data: Parameter Divergence and the Effects of Hyperparametric Methods,"Federated learning, where a global model is trained by iterative parameter averaging of locally-computed updates, is a promising approach for distributed training of deep networks; it provides high communication-efficiency and privacy-preservability, which allows it to fit well into decentralized data environments, e.g., mobile-cloud ecosystems.However, despite the advantages, the federated learning-based methods still have a challenge in dealing with non-IID training data of local devices.In this regard, we study the effects of a variety of hyperparametric conditions under the non-IID environments, to answer important concerns in practical implementations: We first investigate parameter divergence of local updates to explain performance degradation from non-IID data.The origin of the parameter divergence is also found both empirically and theoretically. We then revisit the effects of optimizers, network depth/width, and regularization techniques; our observations show that the well-known advantages of the hyperparameter optimization strategies could rather yield diminishing returns with non-IID data. We finally provide the reasons for the failure cases in a categorized way, mainly based on metrics of the parameter divergence.","We investigate the internal reasons for our observations: the diminishing effects of the well-known hyperparameter optimization methods on federated learning from decentralized non-IID data." 290,Adversarial Imitation Attack,"Deep learning models are known to be vulnerable to adversarial examples.A practical adversarial attack should require as little knowledge as possible of the attacked model T. Current substitute attacks need pre-trained models to generate adversarial examples and their attack success rates heavily rely on the transferability of adversarial examples.Current score-based and decision-based attacks require a large number of queries to T. In this study, we propose a novel adversarial imitation attack.First, it produces a replica of T via a two-player game similar to generative adversarial networks.The objective of the generative model G is to generate examples which lead D to return outputs that differ from those of T. The objective of the discriminative model D is to output the same labels as T for the same inputs.Then, the adversarial examples generated by D are utilized to fool T. Compared with current substitute attacks, the imitation attack uses less training data to produce a replica of T and improves the transferability of adversarial examples.Experiments demonstrate that our imitation attack requires less training data than the black-box substitute attacks, but achieves an attack success rate close to the white-box attack on unseen data with no query.",A novel adversarial imitation attack to fool machine learning models.
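A tiny sketch of the iterative parameter-averaging step that entry 289 above builds on (FedAvg-style): the server replaces the global model with the data-size-weighted mean of the locally trained models. This is a generic illustration of that step, not the paper's full experimental pipeline.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Data-size-weighted average of per-client parameter dictionaries."""
    total = float(sum(client_sizes))
    keys = client_weights[0].keys()
    return {k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
            for k in keys}

# Toy usage with two clients holding a single weight matrix each.
w1 = {"layer": np.ones((2, 2))}
w2 = {"layer": 3 * np.ones((2, 2))}
print(federated_average([w1, w2], [100, 300])["layer"])  # -> 2.5 everywhere
```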
291,LARGE BATCH SIZE TRAINING OF NEURAL NETWORKS WITH ADVERSARIAL TRAINING AND SECOND-ORDER INFORMATION,"Stochastic Gradient Descent methods using randomly selected batches are widely-used to train neural network models.Performing design exploration to find the best NN for a particular task often requires extensive training with different models on a large dataset, which is very computationally expensive.The most straightforward method to accelerate this computation is to distribute the batch of SGD over multiple processors.However, large batch training oftentimes leads to degradation in accuracy, poor generalization, and even poor robustness to adversarial attacks. Existing solutions for large batch training either do not work or require massive hyper-parameter tuning.To address this issue, we propose a novel large batch training method which combines recent results in adversarial training and second order optimization.We extensively evaluate our method on Cifar-10/100, SVHN, TinyImageNet, and ImageNet datasets, using multiple NNs, including residual networks as well as compressed networks such as SqueezeNext. Our new approach exceeds the performance of the existing solutions in terms of both accuracy and the number of SGD iterations.We emphasize that this is achieved without any additional hyper-parameter tuning to tailor our method to any of these experiments.",Large batch size training using adversarial training and second order information 292,Egocentric Spatial Memory Network,"Inspired by neurophysiological discoveries of navigation cells in the mammalian brain, we introduce the first deep neural network architecture for modeling Egocentric Spatial Memory.It learns to estimate the pose of the agent and progressively construct top-down 2D global maps from egocentric views in a spatially extended environment.During the exploration, our proposed ESM network model updates belief of the global map based on local observations using a recurrent neural network.It also augments the local mapping with a novel external memory to encode and store latent representations of the visited places based on their corresponding locations in the egocentric coordinate.This enables the agents to perform loop closure and mapping correction.This work contributes in the following aspects: first, our proposed ESM network provides an accurate mapping ability which is vitally important for embodied agents to navigate to goal locations.In the experiments, we demonstrate the functionalities of the ESM network in random walks in complicated 3D mazes by comparing with several competitive baselines and state-of-the-art Simultaneous Localization and Mapping algorithms.Secondly, we faithfully hypothesize the functionality and the working mechanism of navigation cells in the brain.Comprehensive analysis of our model suggests the essential role of individual modules in our proposed architecture and demonstrates efficiency of communications among these modules.We hope this work would advance research in the collaboration and communications over both fields of computer science and computational neuroscience.",first deep neural network for modeling Egocentric Spatial Memory inspired by neurophysiological discoveries of navigation cells in mammalian brain 293,Lipschitz regularized Deep Neural Networks generalize,"We show that if the usual training loss is augmented by a Lipschitz regularization term, then the networks generalize. 
We prove generalization by first establishing a stronger convergence result, along with a rate of convergence. A second result resolves a question posed in Zhang et al.: how can a model distinguish between the case of clean labels, and randomized labels? Our answer is that Lipschitz regularization using the Lipschitz constant of the clean data makes this distinction. In this case, the model learns a different function which we hypothesize correctly fails to learn the dirty labels. ",We prove generalization of DNNs by adding a Lipschitz regularization term to the training loss. We resolve a question posed in Zhang et al. (2016). 294,Training wide residual networks for deployment using a single bit for each weight,"For fast and energy-efficient deployment of trained deep neural networks on resource-constrained embedded hardware, each learned weight parameter should ideally be represented and stored using a single bit. Error-rates usually increase when this requirement is imposed.Here, we report large improvements in error rates on multiple datasets, for deep convolutional neural networks deployed with 1-bit-per-weight.Using wide residual networks as our main baseline, our approach simplifies existing methods that binarize weights by applying the sign function in training; we apply scaling factors for each layer with constant unlearned values equal to the layer-specific standard deviations used for initialization.For CIFAR-10, CIFAR-100 and ImageNet, and models with 1-bit-per-weight requiring less than 10 MB of parameter memory, we achieve error rates of 3.9%, 18.5% and 26.0% / 8.5% respectively.We also considered MNIST, SVHN and ImageNet32, achieving 1-bit-per-weight test results of 0.27%, 1.9%, and 41.3% / 19.1% respectively.For CIFAR, our error rates halve previously reported values, and are within about 1% of our error-rates for the same network with full-precision weights.For networks that overfit, we also show significant improvements in error rate by not learning batch normalization scale and offset parameters.This applies to both full precision and 1-bit-per-weight networks.Using a warm-restart learning-rate schedule, we found that training for 1-bit-per-weight is just as fast as full-precision networks, with better accuracy than standard schedules, and achieved about 98%-99% of peak performance in just 62 training epochs for CIFAR-10/100.For full training code and trained models in MATLAB, Keras and PyTorch see https://github.com/McDonnell-Lab/1-bit-per-weight/ .","We train wide residual networks that can be immediately deployed using only a single bit for each convolutional weight, with signficantly better accuracy than past methods." 295,Immersive Visualization of the Classical Non-Euclidean Spaces using Real-Time Ray Tracing,This paper presents a system for immersive visualization of Non-Euclidean spaces using real-time ray tracing.It exploits the capabilities of the new generation of GPU’s based on the NVIDIA’s Turing architecture in order to develop new methods for intuitive exploration of landscapes featuring non-trivial geometry and topology in virtual reality.,Immersive Visualization of the Classical Non-Euclidean Spaces using Real-Time Ray Tracing. 
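A short sketch of the 1-bit-per-weight scheme summarized in entry 294 above: the forward pass uses sign(W) times a constant, unlearned per-layer scale (the paper uses the layer-specific initialization standard deviation; the He-init value below is our concrete stand-in), while full-precision weights are kept only for gradient updates via a straight-through estimator. Illustrative only, not the authors' released code.

```python
import torch

class OneBitConvScale(torch.nn.Module):
    """Conv layer deployed with 1 bit per weight plus a fixed per-layer scale."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = torch.nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        fan_in = in_ch * k * k
        self.scale = (2.0 / fan_in) ** 0.5   # constant, unlearned layer scale (He std)

    def forward(self, x):
        w = self.conv.weight
        # Straight-through: forward uses scale*sign(w), backward sees identity.
        w_bin = w + (self.scale * torch.sign(w) - w).detach()
        return torch.nn.functional.conv2d(x, w_bin, padding=self.conv.padding)

x = torch.randn(1, 8, 16, 16)
print(OneBitConvScale(8, 16)(x).shape)
```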
296,The Set Autoencoder: Unsupervised Representation Learning for Sets,"We propose the set autoencoder, a model for unsupervised representation learning for sets of elements.It is closely related to sequence-to-sequence models, which learn fixed-sized latent representations for sequences, and have been applied to a number of challenging supervised sequence tasks such as machine translation, as well as unsupervised representation learning for sequences.In contrast to sequences, sets are permutation invariant.The proposed set autoencoder considers this fact, both with respect to the input as well as the output of the model.On the input side, we adapt a recently-introduced recurrent neural architecture using a content-based attention mechanism.On the output side, we use a stable marriage algorithm to align predictions to labels in the learning phase.We train the model on synthetic data sets of point clouds and show that the learned representations change smoothly with translations in the inputs, preserve distances in the inputs, and that the set size is represented directly.We apply the model to supervised tasks on the point clouds using the fixed-size latent representation.For a number of difficult classification problems, the results are better than those of a model that does not consider the permutation invariance.Especially for small training sets, the set-aware model benefits from unsupervised pretraining.","We propose the set autoencoder, a model for unsupervised representation learning for sets of elements." 297,Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning,"Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a considerable amount of experience to be collected by the agent.In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt.However, not all tasks are easily or automatically reversible.In practice, this learning process requires considerable human intervention.In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and backward policy, with the backward policy resetting the environment for a subsequent attempt.By learning a value function for the backward policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts.Our experiments illustrate that proper use of the backward policy can greatly reduce the number of manual resets required to learn a task and can reduce the number of unsafe actions that lead to non-reversible states.","We propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and backward policy, with the backward policy resetting the environment for a subsequent attempt." 298,Do Language Models Have Common Sense?,"It has been argued that current machine learning models do not have commonsense, and therefore must be hard-coded with prior knowledge.Here we show surprising evidence that language models can already learn to capture certain common sense knowledge.Our key observation is that a language model can compute the probability of any statement, and this probability can be used to evaluate the truthfulness of that statement. 
On the Winograd Schema Challenge, language models are 11% higher in accuracy than previous state-of-the-art supervised methods.Language models can also be fine-tuned for the task of Mining Commonsense Knowledge on ConceptNet to achieve an F1 score of 0.912 and 0.824, outperforming previous best results. Further analysis demonstrates that language models can discover unique features of Winograd Schema contexts that decide the correct answers without explicit supervision.",We present evidence that LMs do capture common sense with state-of-the-art results on both Winograd Schema Challenge and Commonsense Knowledge Mining. 299,Convolutional Sequence Modeling Revisited,"This paper revisits the problem of sequence modeling using convolutional architectures. Although both convolutional and recurrent architectures have a long history in sequence prediction, the current "default" mindset in much of the deep learning community is that generic sequence modeling is best handled using recurrent networks. The goal of this paper is to question this assumption. Specifically, we consider a simple generic temporal convolution network, which adopts features from modern ConvNet architectures such as dilations and residual connections. We show that on a variety of sequence modeling tasks, including many frequently used as benchmarks for evaluating recurrent networks, the TCN outperforms baseline RNN methods and sometimes even highly specialized approaches. We further show that the potential "infinite memory" advantage that RNNs have over TCNs is largely absent in practice: TCNs indeed exhibit longer effective history sizes than their recurrent counterparts. As a whole, we argue that it may be time to consider ConvNets as the default "go to" architecture for sequence modeling.",We argue that convolutional networks should be considered the default starting point for sequence modeling tasks.
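A minimal building block in the spirit of the TCN from entry 299 above: a causal dilated 1-D convolution with a residual connection. The channel sizes and the simplified one-conv layout are our own choices for brevity; the published model stacks two convolutions per block with weight normalization.

```python
import torch

class TemporalBlock(torch.nn.Module):
    """Causal dilated Conv1d + residual connection (TCN-style, entry 299)."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation        # left-pad only => causal
        self.conv = torch.nn.Conv1d(channels, channels, kernel_size,
                                    dilation=dilation)
        self.relu = torch.nn.ReLU()

    def forward(self, x):                              # x: (batch, channels, time)
        y = torch.nn.functional.pad(x, (self.pad, 0))  # no look-ahead
        return self.relu(self.conv(y) + x)             # residual connection

x = torch.randn(2, 16, 100)
net = torch.nn.Sequential(*[TemporalBlock(16, dilation=2 ** i) for i in range(4)])
print(net(x).shape)  # (2, 16, 100)
```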
300,Neural network gradient-based learning of black-box function interfaces,"Deep neural networks work well at approximating complicated functions when provided with data and trained by gradient descent methods.At the same time, there are a vast number of existing functions that programmatically solve different tasks in a precise manner, eliminating the need for training.In many cases, it is possible to decompose a task into a series of functions, of which for some we may prefer to use a neural network to learn the functionality, while for others the preferred method would be to use existing black-box functions.We propose a method for end-to-end training of a base neural network that integrates calls to existing black-box functions.We do so by approximating the black-box functionality with a differentiable neural network in a way that drives the base network to comply with the black-box function interface during the end-to-end optimization process.At inference time, we replace the differentiable estimator with its external black-box non-differentiable counterpart such that the base network output matches the input arguments of the black-box function.Using this Estimate and Replace paradigm, we train a neural network, end to end, to compute the input to black-box functionality while eliminating the need for intermediate labels.We show that by leveraging the existing precise black-box function during inference, the integrated model generalizes better than a fully differentiable model, and learns more efficiently compared to RL-based methods.",Training DNNs to interface with black-box functions without intermediate labels by using an estimator sub-network that can be replaced with the black box after training 301,Boosting the Actor with Dual Critic,"This paper proposes a new actor-critic-style algorithm called Dual Actor-Critic or Dual-AC. It is derived in a principled way from the Lagrangian dual form of the Bellman optimality equation, which can be viewed as a two-player game between the actor and a critic-like function, which is named the dual critic. Compared to its actor-critic relatives, Dual-AC has the desired property that the actor and dual critic are updated cooperatively to optimize the same objective function, providing a more transparent way for learning the critic that is directly related to the objective function of the actor.We then provide a concrete algorithm that can effectively solve the minimax optimization problem, using techniques of multi-step bootstrapping, path regularization, and a stochastic dual ascent algorithm.We demonstrate that the proposed algorithm achieves state-of-the-art performance across several benchmarks.","We propose Dual Actor-Critic algorithm, which is derived in a principled way from the Lagrangian dual form of the Bellman optimality equation. The algorithm achieves state-of-the-art performance across several benchmarks."
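A sketch of the Estimate-and-Replace idea in entry 300 above. The names `blackbox`, `estimator`, and `base_net` are hypothetical stand-ins: during training the differentiable estimator lets gradients reach the base network; at inference it is swapped for the external, non-differentiable function.

```python
import torch

blackbox = lambda z: torch.round(z)              # some fixed, non-differentiable routine
base_net = torch.nn.Linear(4, 1)                 # learns to emit the black box's argument
estimator = torch.nn.Linear(1, 1)                # differentiable stand-in for blackbox

def forward(x, training=True):
    z = base_net(x)                              # argument for the black-box function
    return estimator(z) if training else blackbox(z)   # swap at inference time

# During training the estimator is also fit to match blackbox(z.detach()),
# so the interface the base network learns stays valid after the swap.
x = torch.randn(3, 4)
print(forward(x, training=True).shape, forward(x, training=False).shape)
```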
302,Robust Domain Adaptation By Augmented Cyclic Adversarial Learning,"Training a model to perform a task typically requires a large amount of data from the domains in which the task will be applied.However, it is often the case that data are abundant in some domains but scarce in others.Domain adaptation deals with the challenge of adapting a model trained from a data-rich source domain to perform well in a data-poor target domain.In general, this requires learning plausible mappings between domains.CycleGAN is a powerful framework that efficiently learns to map inputs from one domain to another using adversarial training and a cycle-consistency constraint.However, the conventional approach of enforcing cycle-consistency via reconstruction may be overly restrictive in cases where one or more domains have limited training data.In this paper, we propose an augmented cyclic adversarial learning model that enforces the cycle-consistency constraint via an external task specific model, which encourages the preservation of task-relevant content as opposed to exact reconstruction.This task specific model both relaxes the cycle-consistency constraint and complements the role of the discriminator during training, serving as an augmented information source for learning the mapping.We explore adaptation in speech and visual domains in a low-resource supervised setting.In speech domains, we adopt a speech recognition model from each domain as the task specific model.Our approach improves absolute performance of speech recognition by 2% for female speakers in the TIMIT dataset, where the majority of training samples are from male voices.In low-resource visual domain adaptation, the results show that our approach improves absolute performance by 14% and 4% when adapting SVHN to MNIST and vice versa, respectively, which outperforms unsupervised domain adaptation methods that require a high-resource unlabeled target domain.",A robust domain adaptation by employing a task specific loss in cyclic adversarial learning 303,Learning Self-Imitating Diverse Policies,"The success of popular algorithms for deep reinforcement learning, such as policy-gradients and Q-learning, relies heavily on the availability of an informative reward signal at each timestep of the sequential decision-making process.When rewards are only sparsely available during an episode, or a rewarding feedback is provided only after episode termination, these algorithms perform sub-optimally due to the difficulty in credit assignment.Alternatively, trajectory-based policy optimization methods, such as the cross-entropy method and evolution strategies, do not require per-timestep rewards, but have been found to suffer from high sample complexity by completely forgoing the temporal nature of the problem.Improving the efficiency of RL algorithms in real-world problems with sparse or episodic rewards is therefore a pressing need.In this work, we introduce a self-imitation learning algorithm that exploits and explores well in the sparse and episodic reward settings.We view each policy as a state-action visitation distribution and formulate policy optimization as a divergence minimization problem.We show that with Jensen-Shannon divergence, this divergence minimization problem can be reduced into a policy-gradient algorithm with shaped rewards learned from experience replays.Experimental results indicate that our algorithm performs comparably to existing algorithms in environments with dense rewards, and significantly better in environments with sparse and 
episodic rewards.We then discuss limitations of self-imitation learning, and propose to solve them by using Stein variational policy gradient descent with the Jensen-Shannon kernel to learn multiple diverse policies.We demonstrate its effectiveness on a challenging variant of continuous-control MuJoCo locomotion tasks.",Policy optimization by using past good rollouts from the agent; learning shaped rewards via divergence minimization; SVPG with JS-kernel for population-based exploration. 304,Taking Apart Autoencoders: How do They Encode Geometric Shapes?,"We study the precise mechanisms which allow autoencoders to encode and decode a simple geometric shape, the disk.In this carefully controlled setting, we are able to describe the specific form of the optimal solution to the minimisation problem of the training step.We show that the autoencoder indeed approximates this solution during training.Secondly, we identify a clear failure in the generalisation capacity of the autoencoder, namely its inability to interpolate data.Finally, we explore several regularisation schemes to resolve the generalisation problem.Given the great attention that has been recently given to the generative capacity of neural networks, we believe that studying in depth simple geometric cases sheds some light on the generation process and can provide a minimal-requirement experimental setup for more complex architectures.",We study the functioning of autoencoders in a simple setting and advise new strategies for their regularisation in order to obtain better generalisation with latent interpolation in mind for image synthesis. 305,How to make someone speak a language that they don't know.,"We present a simple idea that allows one to record a speaker in a given language and synthesize their voice in other languages that they may not even know.These techniques open a wide range of potential applications such as cross-language communication, language learning or automatic video dubbing.We call this general problem multi-language speaker-conditioned speech synthesis and we present a simple but strong baseline for it.Our model architecture is similar to the encoder-decoder Char2Wav model or Tacotron.The main difference is that, instead of conditioning on characters or phonemes that are specific to a given language, we condition on a shared phonetic representation that is universal to all languages.This cross-language phonetic representation of text allows us to synthesize speech in any language while preserving the vocal characteristics of the original speaker.Furthermore, we show that fine-tuning the weights of our model allows us to extend our results to speakers outside of the training dataset.",We present a simple idea that allows one to record a speaker in a given language and synthesize their voice in other languages that they may not even know.
306,Sample Efficient Imitation Learning for Continuous Control,"The goal of imitation learning is to enable a learner to imitate expert behavior given expert demonstrations.Recently, generative adversarial imitation learning has shown significant progress on IL for complex continuous tasks.However, GAIL and its extensions require a large number of environment interactions during training.In real-world environments, the more an IL method requires the learner to interact with the environment for better imitation, the more training time it requires, and the more damage it causes to the environments and the learner itself.We believe that IL algorithms could be more applicable to real-world problems if the number of interactions could be reduced.In this paper, we propose a model-free IL algorithm for continuous control.Our algorithm is made up of mainly three changes to existing adversarial imitation learning methods – adopting an off-policy actor-critic algorithm to optimize the learner policy, estimating the state-action value using off-policy samples without learning reward functions, and representing the stochastic policy function so that its outputs are bounded.Experimental results show that our algorithm achieves competitive results with GAIL while significantly reducing the environment interactions.","In this paper, we proposed a model-free, off-policy IL algorithm for continuous control. Experimental results showed that our algorithm achieves competitive results with GAIL while significantly reducing the environment interactions." 307,Multiple-Attribute Text Rewriting,"The dominant approach to unsupervised "style transfer" in text is based on the idea of learning a latent representation, which is independent of the attributes specifying its "style".In this paper, we show that this condition is not necessary and is not always met in practice, even with domain adversarial training that explicitly aims at learning such disentangled representations.We thus propose a new model that controls several factors of variation in textual data where this condition on disentanglement is replaced with a simpler mechanism based on back-translation.Our method allows control over multiple attributes, like gender, sentiment, product type, etc., and a more fine-grained control on the trade-off between content preservation and change of style with a pooling operator in the latent space.Our experiments demonstrate that the fully entangled model produces better generations, even when tested on new and more challenging benchmarks comprising reviews with multiple sentences and multiple attributes.",A system for rewriting text conditioned on multiple controllable attributes 308,Inferring Dynamical Systems with Long-Range Dependencies through Line Attractor Regularization,"Vanilla RNNs with ReLU activation have a simple structure that is amenable to systematic dynamical systems analysis and interpretation, but they suffer from the exploding vs. 
vanishing gradients problem.Recent attempts to retain this simplicity while alleviating the gradient problem are based on proper initialization schemes or orthogonality/unitary constraints on the RNN’s recurrency matrix, which, however, come with limitations to their expressive power with regard to dynamical systems phenomena like chaos or multi-stability.Here, we instead suggest a regularization scheme that pushes part of the RNN’s latent subspace toward a line attractor configuration that enables long short-term memory and arbitrarily slow time scales.We show that our approach excels on a number of benchmarks like the sequential MNIST or multiplication problems, and enables reconstruction of dynamical systems which harbor widely different time scales.",We develop a new optimization approach for vanilla ReLU-based RNN that enables long short-term memory and identification of arbitrary nonlinear dynamical systems with widely differing time scales. 309,Wasserstein-Bounded Generative Adversarial Networks,"In the field of Generative Adversarial Networks, how to design a stable training strategy remains an open problem.Wasserstein GANs have largely improved stability over the original GANs by introducing the Wasserstein distance, but still remain unstable and are prone to a variety of failure modes.In this paper, we present a general framework named Wasserstein-Bounded GAN, which improves a large family of WGAN-based approaches by simply adding an upper-bound constraint to the Wasserstein term.Furthermore, we show that WBGAN can reasonably measure the difference between distributions that have almost no intersection.Experiments demonstrate that WBGAN can stabilize as well as accelerate convergence in the training processes of a series of WGAN-based variants.",Propose an improved framework for WGANs and demonstrate its better performance in theory and practice.
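One plausible reading of the line-attractor regularizer described in entry 308 above, sketched below: push the first k units of a ReLU RNN toward W h + b ≈ h (recurrent rows close to the identity, zero bias) so those units can hold information over arbitrarily long time scales. The exact form and the choice of penalized quantities here are our assumptions, not the paper's stated loss.

```python
import torch

def line_attractor_penalty(W, b, k):
    """Penalty pushing the first k rows of W toward identity rows and b toward 0."""
    eye = torch.eye(W.shape[0], device=W.device)
    row_penalty = ((W[:k] - eye[:k]) ** 2).sum()   # W h + b ≈ h on the regularized subspace
    bias_penalty = (b[:k] ** 2).sum()
    return row_penalty + bias_penalty

W = torch.randn(8, 8)
b = torch.randn(8)
loss = line_attractor_penalty(W, b, k=3)   # added to the task loss with a weighting factor
print(loss)
```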
310,Diminishing Batch Normalization,"In this paper, we propose a generalization of the BN algorithm, diminishing batch normalization, where we update the BN parameters in a diminishing moving average way.Batch normalization is so effective in accelerating the convergence of the neural network training phase that it has become common practice.Our proposed DBN algorithm retains the overall structure of the original BN algorithm while introducing a weighted averaging update to some trainable parameters.We provide an analysis of the convergence of the DBN algorithm that converges to a stationary point with respect to trainable parameters.Our analysis can be easily generalized to the original BN algorithm by setting some parameters to constants.To the best of the authors' knowledge, this analysis is the first of its kind for convergence with Batch Normalization introduced.We analyze a two-layer model with arbitrary activation function.The primary challenge of the analysis is the fact that some parameters are updated by gradient while others are not.The convergence analysis applies to any activation function that satisfies our common assumptions.For the analysis, we also show the sufficient and necessary conditions for the stepsizes and diminishing weights to ensure the convergence.In the numerical experiments, we use more complex models with more layers and ReLU activation.We observe that DBN outperforms the original BN algorithm on Imagenet, MNIST, NI and CIFAR-10 datasets with reasonably complex FNN and CNN models.","We propose an extension of batch normalization, show a first-of-its-kind convergence analysis for this extension and show in numerical experiments that it has better performance than the original batch normalization." 311,AMUSED: A Multi-Stream Vector Representation Method for Use In Natural Dialogue,"The problem of building a coherent and non-monotonous conversational agent with proper discourse and coverage is still an area of open research.Current architectures only take care of semantic and contextual information for a given query and fail to completely account for syntactic and external knowledge which are crucial for generating responses in a chit-chat system.To overcome this problem, we propose an end-to-end multi-stream deep learning architecture which learns unified embeddings for query-response pairs by leveraging contextual information from memory networks and syntactic information by incorporating Graph Convolution Networks over their dependency parse.A stream of this network also utilizes transfer learning by pre-training a bidirectional transformer to extract semantic representation for each input sentence and incorporates external knowledge through the neighbourhood of the entities from a Knowledge Base.We benchmark these embeddings on the next sentence prediction task and significantly improve upon the existing techniques.Furthermore, we use AMUSED to represent queries and responses along with their context to develop a retrieval based conversational agent which has been validated by expert linguists to have comprehensive engagement with humans.","This paper provides a multi-stream end-to-end approach to learn unified embeddings for query-response pairs in dialogue systems by leveraging contextual, syntactic, semantic and external information together."
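A generic sketch of the diminishing-moving-average update that entry 310 above describes for some of the batch-normalization quantities: the new value is a weighted average of the previous value and the current-batch estimate, with a stepsize that decays over iterations. The 1/t schedule below is an illustrative choice, not the paper's exact condition on the weights.

```python
import numpy as np

def diminishing_average_update(theta_prev, theta_batch, t, alpha0=1.0):
    """theta_t = (1 - a_t) * theta_{t-1} + a_t * theta_batch, with a_t decaying in t."""
    alpha_t = alpha0 / (t + 1)
    return (1.0 - alpha_t) * theta_prev + alpha_t * theta_batch

theta = np.zeros(4)
for t, batch_stat in enumerate(np.random.default_rng(0).standard_normal((10, 4))):
    theta = diminishing_average_update(theta, batch_stat, t)
print(theta)
```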
312,Guiding MCTS with Generalized Policies for Probabilistic Planning,"We examine techniques for combining generalized policies with search algorithms to exploit the strengths and overcome the weaknesses of each when solving probabilistic planning problems.The Action Schema Network is a recent contribution to planning that uses deep learning and neural networks to learn generalized policies for probabilistic planning problems.ASNets are well suited to problems where local knowledge of the environment can be exploited to improve performance, but may fail to generalize to problems they were not trained on.Monte-Carlo Tree Search is a forward-chaining state space search algorithm for optimal decision making which performs simulations to incrementally build a search tree and estimate the values of each state.Although MCTS can achieve state-of-the-art results when paired with domain-specific knowledge, without this knowledge, MCTS requires a large number of simulations in order to obtain reliable estimates in the search tree.By combining ASNets with MCTS, we are able to improve the capability of an ASNet to generalize beyond the distribution of problems it was trained on, as well as enhance the navigation of the search space by MCTS.",Techniques for combining generalized policies with search algorithms to exploit the strengths and overcome the weaknesses of each when solving probabilistic planning problems 313,AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty,"Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice.When the train and test distributions are mismatched, accuracy can plummet.Currently there are few techniques that improve robustness to unforeseen data shifts encountered during deployment.In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers.We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions.AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half.","We obtain state-of-the-art robustness to data shifts, and we maintain calibration under data shift even when accuracy drops" 314,At Your Fingertips: Automatic Piano Fingering Detection,"Automatic Piano Fingering is a hard task which computers can learn using data.As data collection is hard and expensive, we propose to automate this process by automatically extracting fingerings from public videos and MIDI files, using computer-vision techniques.Running this process on 90 videos results in the largest dataset for piano fingering with more than 150K notes.We show that when running a previously proposed model for automatic piano fingering on our dataset and then fine-tuning it on manually labeled piano fingering data, we achieve state-of-the-art results.In addition to the fingering extraction method, we also introduce a novel method for transferring deep-learning computer-vision models to work on out-of-domain data, by fine-tuning them on out-of-domain augmentations produced by a Generative Adversarial Network.For demonstration, we anonymously release a visualization of the output of our process for a single video on 
https://youtu.be/Gfs1UWQhr5Q","We automatically extract fingering information from videos of piano performances, to be used in automatic fingering prediction models." 315,A DIRT-T Approach to Unsupervised Domain Adaptation,"Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable.A recent approach for finding a common representation of the two domains is via domain adversarial training, which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space.However, domain adversarial training faces two critical limitations: 1) if the feature extraction function has high capacity, then feature distribution matching is a weak constraint, and 2) in non-conservative domain adaptation, training the model to do well on the source domain hurts performance on the target domain.In this paper, we address these issues through the lens of the cluster assumption, i.e., decision boundaries should not cross high-density data regions.We propose two novel and related models: 1) the Virtual Adversarial Domain Adaptation model, which combines domain adversarial training with a penalty term that punishes the violation of the cluster assumption; and 2) the Decision-boundary Iterative Refinement Training with a Teacher model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation.Extensive empirical results demonstrate that the combination of these two models significantly improves the state-of-the-art performance on the digit, traffic sign, and Wi-Fi recognition domain adaptation benchmarks.",SOTA on unsupervised domain adaptation by leveraging the cluster assumption. 316,Continuous Graph Flow,"In this paper, we propose Continuous Graph Flow, a generative continuous flow-based method that aims to model complex distributions of graph-structured data. Once learned, the model can be applied to an arbitrary graph, defining a probability density over the random variables represented by the graph.It is formulated as an ordinary differential equation system with shared and reusable functions that operate over the graphs.
This leads to a new type of neural graph message passing scheme that performs continuous message passing over time.This class of models offers several advantages: a flexible representation that can generalize to variable data dimensions; ability to model dependencies in complex data distributions; reversible and memory-efficient; and exact and efficient computation of the likelihood of the data.We demonstrate the effectiveness of our model on a diverse set of generation tasks across different domains: graph generation, image puzzle generation, and layout generation from scene graphs.Our proposed model achieves significantly better performance compared to state-of-the-art models.",Graph generative models based on generalization of message passing to continuous time using ordinary differential equations 317,On the Information Bottleneck Theory of Deep Learning,"The practical successes of deep neural networks have not been matched by theoretical progress that satisfyingly explains their behavior.In this work, we study the information bottleneck theory of deep learning, which makes three specific claims: first, that deep networks undergo two distinct phases consisting of an initial fitting phase and a subsequent compression phase; second, that the compression phase is causally related to the excellent generalization performance of deep networks; and third, that the compression phase occurs due to the diffusion-like behavior of stochastic gradient descent.Here we show that none of these claims hold true in the general case.Through a combination of analytical results and simulation, we demonstrate that the information plane trajectory is predominantly a function of the neural nonlinearity employed: double-sided saturating nonlinearities like tanh yield a compression phase as neural activations enter the saturation regime, but linear activation functions and single-sided saturating nonlinearities like the widely used ReLU in fact do not.Moreover, we find that there is no evident causal connection between compression and generalization: networks that do not compress are still capable of generalization, and vice versa.Next, we show that the compression phase, when it exists, does not arise from stochasticity in training by demonstrating that we can replicate the IB findings using full batch gradient descent rather than stochastic gradient descent.Finally, we show that when an input domain consists of a subset of task-relevant and task-irrelevant information, hidden representations do compress the task-irrelevant information, although the overall information about the input may monotonically increase with training time, and that this compression happens concurrently with the fitting process rather than during a subsequent compression period.",We show that several claims of the information bottleneck theory of deep learning are not true in the general case. 
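Entry 316 formulates message passing as an ODE system. The snippet below is a rough sketch of that idea written for this document; the linear/tanh message function, the toy graph, and the fixed-step Euler integration are assumptions for illustration only, and how the paper itself parameterizes and integrates the system is not reproduced here.

```python
import numpy as np

def message_derivative(h, adjacency, W_msg, W_self):
    """dh/dt for a toy continuous message-passing ODE: each node's state is
    driven by a transformed sum of its neighbours' states plus a self term."""
    messages = adjacency @ np.tanh(h @ W_msg)
    return messages + h @ W_self

def integrate(h0, adjacency, W_msg, W_self, t1=1.0, steps=20):
    """Fixed-step Euler integration standing in for a proper ODE solver."""
    h, dt = h0.copy(), t1 / steps
    for _ in range(steps):
        h = h + dt * message_derivative(h, adjacency, W_msg, W_self)
    return h

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
h0 = rng.normal(size=(3, 8))                                   # initial node states
W_msg = rng.normal(size=(8, 8)) * 0.1
W_self = rng.normal(size=(8, 8)) * 0.1
h_final = integrate(h0, A, W_msg, W_self)
```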
318,Adversarial Vulnerability of Neural Networks Increases with Input Dimension,"Over the past four years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions.We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs.For most current network architectures, we prove that the L1-norm of these gradients grows as the square root of the input size.These nets therefore become increasingly vulnerable with growing image size.Our proofs rely on the network’s weight distribution at initialization, but extensive experiments confirm that our conclusions still hold after usual training.",Neural nets have large gradients by design; that makes them adversarially vulnerable. 319,Online Hyperparameter Adaptation via Amortized Proximal Optimization,"Effective performance of neural networks depends critically on effective tuning of optimization hyperparameters, especially learning rates.We present Amortized Proximal Optimization, which takes the perspective that each optimization step should approximately minimize a proximal objective.Optimization hyperparameters are adapted to best minimize the proximal objective after one weight update.We show that an idealized version of APO achieves global convergence to stationary point and locally second-order convergence to global optimum for neural networks.APO incurs minimal computational overhead.We experiment with using APO to adapt a variety of optimization hyperparameters online during training, including learning rates, damping coefficients, and gradient variance exponents.For a variety of network architectures and optimization algorithms, we show that with minimal tuning, APO performs competitively with carefully tuned optimizers.","We introduce amortized proximal optimization (APO), a method to adapt a variety of optimization hyperparameters online during training, including learning rates, damping coefficients, and gradient variance exponents." 320,Interpreting Word Embeddings with Eigenvector Analysis,"Dense word vectors have proven their values in many downstream NLP tasks over the past few years.However, the dimensions of such embeddings are not easily interpretable.Out of the d-dimensions in a word vector, we would not be able to understand what high or low values mean.Previous approaches addressing this issue have mainly focused on either training sparse/non-negative constrained word embeddings, or post-processing standard pre-trained word embeddings.On the other hand, we analyze conventional word embeddings trained with Singular Value Decomposition, and reveal similar interpretability.We use a novel eigenvector analysis method inspired from Random Matrix Theory and show that semantically coherent groups not only form in the row space, but also the column space.This allows us to view individual word vector dimensions as human-interpretable semantic features.","Without requiring any constraints or post-processing, we show that the salient dimensions of word vectors can be interpreted as semantic features. 
" 321,Towards Reverse-Engineering Black-Box Neural Networks,"Many deployed learned models are black boxes: given input, returns output.Internal information about the model, such as the architecture, optimisation procedure, or training data, is not disclosed explicitly as it might contain proprietary information or make the system more vulnerable.This work shows that such attributes of neural networks can be exposed from a sequence of queries.This has multiple implications.On the one hand, our work exposes the vulnerability of black-box neural networks to different types of attacks -- we show that the revealed internal information helps generate more effective adversarial examples against the black box model.On the other hand, this technique can be used for better protection of private content from automatic recognition models using adversarial examples.Our paper suggests that it is actually hard to draw a line between white box and black box models.","Querying a black-box neural network reveals a lot of information about it; we propose novel ""metamodels"" for effectively extracting information from a black box." 322,CWAE-IRL: Formulating a supervised approach to Inverse Reinforcement Learning problem,"Inverse reinforcement learning is used to infer the reward function from the actions of an expert running a Markov Decision Process.A novel approach using variational inference for learning the reward function is proposed in this research.Using this technique, the intractable posterior distribution of the continuous latent variable is analytically approximated to appear to be as close to the prior belief while trying to reconstruct the future state conditioned on the current state and action.The reward function is derived using a well-known deep generative model known as Conditional Variational Auto-encoder with Wasserstein loss function, thus referred to as Conditional Wasserstein Auto-encoder-IRL, which can be analyzed as a combination of the backward and forward inference.This can then form an efficient alternative to the previous approaches to IRL while having no knowledge of the system dynamics of the agent.Experimental results on standard benchmarks such as objectworld and pendulum show that the proposed algorithm can effectively learn the latent reward function in complex, high-dimensional environments.",Using a supervised latent variable modeling framework to determine reward in inverse reinforcement learning task 323,Targeted sampling of enlarged neighborhood via Monte Carlo tree search for TSP,"The travelling salesman problem is a well-known combinatorial optimization problem with a variety of real-life applications.We tackle TSP by incorporating machine learning methodology and leveraging the variable neighborhood search strategy.More precisely, the search process is considered as a Markov decision process, where a 2-opt local search is used to search within a small neighborhood, while a Monte Carlo tree search method, is used to sample a number of targeted actions within an enlarged neighborhood.This new paradigm clearly distinguishes itself from the existing machine learning based paradigms for solving the TSP, which either uses an end-to-end ML model, or simply applies traditional techniques after ML for post optimization.Experiments based on two public data sets show that, our approach clearly dominates all the existing learning based TSP algorithms in terms of performance, demonstrating its high potential on the TSP.More importantly, as a general framework without complicated 
hand-crafted rules, it can be readily extended to many other combinatorial optimization problems.",This paper combines Monte Carlo tree search with 2-opt local search in a variable neighborhood mode to solve the TSP effectively. 324,LEARNING NEUROSYMBOLIC GENERATIVE MODELS VIA PROGRAM SYNTHESIS,"Significant strides have been made toward designing better generative models in recent years.Despite this progress, however, state-of-the-art approaches are still largely unable to capture complex global structure in data.For example, images of buildings typically contain spatial patterns such as windows repeating at regular intervals; state-of-the-art generative methods can’t easily reproduce these structures.We propose to address this problem by incorporating programs representing global structure into the generative model—e.g., a 2D for-loop may represent a configuration of windows.Furthermore, we propose a framework for learning these models by leveraging program synthesis to generate training data.On both synthetic and real-world data, we demonstrate that our approach is substantially better than the state-of-the-art at both generating and completing images that contain global structure.",Applying program synthesis to the tasks of image completion and generation within a deep learning framework 325,Learning Human Postural Control with Hierarchical Acquisition Functions,"Learning control policies in robotic tasks requires a large number of interactions due to small learning rates, bounds on the updates or unknown constraints.In contrast humans can infer protective and safe solutions after a single failure or unexpected observation.In order to reach similar performance, we developed a hierarchical Bayesian optimization algorithm that replicates the cognitive inference and memorization process for avoiding failures in motor control tasks.A Gaussian Process implements the modeling and the sampling of the acquisition function.This enables rapid learning with large learning rates while a mental replay phase ensures that policy regions that led to failures are inhibited during the sampling process. The features of the hierarchical Bayesian optimization method are evaluated in a simulated and physiological humanoid postural balancing task.We quantitatively compare the human learning performance to our learning approach by evaluating the deviations of the center of mass during training.Our results show that we can reproduce the efficient learning of human subjects in postural control tasks which provides a testable model for future physiological motor control tasks.In these postural control tasks, our method outperforms standard Bayesian Optimization in the number of interactions to solve the task, in the computational demands and in the frequency of observed failures.",This paper presents a computational model for efficient human postural control adaptation based on hierarchical acquisition functions with well-known features. 
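Entry 323 above uses a 2-opt local search as the small-neighborhood move inside its MCTS framework. The snippet below is a generic, self-contained 2-opt sketch written for this document (the random coordinates and first-improvement acceptance are assumptions); it is not the paper's code.

```python
import numpy as np

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_once(tour, dist):
    """Apply the first improving 2-opt move (reverse one segment), if any."""
    n = len(tour)
    best = tour_length(tour, dist)
    for i in range(1, n - 1):
        for j in range(i + 1, n):
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_length(candidate, dist) < best:
                return candidate, True
    return tour, False

rng = np.random.default_rng(0)
pts = rng.random((10, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour, improved = list(range(10)), True
while improved:                      # keep applying 2-opt until a local optimum
    tour, improved = two_opt_once(tour, dist)
```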
326,Toward Evaluating Robustness of Deep Reinforcement Learning with Continuous Control,"Deep reinforcement learning has achieved great success in many previously difficult reinforcement learning tasks, yet recent studies show that deep RL agents are also unavoidably susceptible to adversarial perturbations, similar to deep neural networks in classification tasks.Prior works mostly focus on model-free adversarial attacks and agents with discrete actions.In this work, we study the problem of continuous control agents in deep RL with adversarial attacks and propose the first two-step algorithm based on learned model dynamics.Extensive experiments on various MuJoCo domains demonstrate that our proposed framework is much more effective and efficient than model-free based attacks baselines in degrading agent performance as well as driving agents to unsafe states.",We study the problem of continuous control agents in deep RL with adversarial attacks and proposed a two-step algorithm based on learned model dynamics. 327,The Implicit Bias of Depth: How Incremental Learning Drives Generalization,"A leading hypothesis for the surprising generalization of neural networks is that the dynamics of gradient descent bias the model towards simple solutions, by searching through the solution space in an incremental order of complexity.We formally define the notion of incremental learning dynamics and derive the conditions on depth and initialization for which this phenomenon arises in deep linear models.Our main theoretical contribution is a dynamical depth separation result, proving that while shallow models can exhibit incremental learning dynamics, they require the initialization to be exponentially small for these dynamics to present themselves.However, once the model becomes deeper, the dependence becomes polynomial and incremental learning can arise in more natural settings.We complement our theoretical findings by experimenting with deep matrix sensing, quadratic neural networks and with binary classification using diagonal and convolutional linear networks, showing all of these models exhibit incremental learning.","We study the sparsity-inducing bias of deep models, caused by their learning dynamics." 
328,Parametric Adversarial Divergences are Good Task Losses for Generative Modeling,"Generative modeling of high dimensional data like images is a notoriously difficult and ill-defined problem.In particular, how to evaluate a learned generative model is unclear.In this paper, we argue that *adversarial learning*, pioneered with generative adversarial networks, provides an interesting framework to implicitly define more meaningful task losses for unsupervised tasks, such as for generating ""visually realistic"" images.By relating GANs and structured prediction under the framework of statistical decision theory, we put into light links between recent advances in structured prediction theory and the choice of the divergence in GANs.We argue that the insights about the notions of ""hard"" and ""easy"" to learn losses can be analogously extended to adversarial divergences.We also discuss the attractive properties of parametric adversarial divergences for generative modeling, and perform experiments to show the importance of choosing a divergence that reflects the final task.","Parametric adversarial divergences implicitly define more meaningful task losses for generative modeling, we make parallels with structured prediction to study the properties of these divergences and their ability to encode the task of interest." 329,A Fair Comparison of Graph Neural Networks for Graph Classification,"Experimental reproducibility and replicability are critical topics in machine learning.Authors have often raised concerns about their lack in scientific publications to improve the quality of the field.Recently, the graph representation learning field has attracted the attention of a wide research community, which resulted in a large stream of works.As such, several Graph Neural Network models have been developed to effectively tackle graph classification.However, experimental procedures often lack rigorousness and are hardly reproducible.Motivated by this, we provide an overview of common practices that should be avoided to fairly compare with the state of the art.To counter this troubling trend, we ran more than 47000 experiments in a controlled and uniform framework to re-evaluate five popular models across nine common benchmarks.Moreover, by comparing GNNs with structure-agnostic baselines we provide convincing evidence that, on some datasets, structural information has not been exploited yet.We believe that this work can contribute to the development of the graph learning field, by providing a much needed grounding for rigorous evaluations of graph classification models.",We provide a rigorous comparison of different Graph Neural Networks for graph classification. 
330,Adversarial Learning of General Transformations for Data Augmentation,"Data augmentation is fundamental against overfitting in large convolutional neural networks, especially with a limited training dataset.In images, DA is usually based on heuristic transformations, like geometric or color transformations.Instead of using predefined transformations, our work learns data augmentation directly from the training data by learning to transform images with an encoder-decoder architecture combined with a spatial transformer network.The transformed images still belong to the same class, but are new, more complex samples for the classifier.Our experiments show that our approach is better than previous generative data augmentation methods, and comparable to predefined transformation methods when training an image classifier.",Automatic learning of data augmentation using a GAN-based architecture to improve an image classifier 331,Semi-Supervised Learning via New Deep Network Inversion,"We exploit a recently derived inversion scheme for arbitrary deep neural networks to develop a new semi-supervised learning framework that applies to a wide range of systems and problems. The approach reaches current state-of-the-art methods on MNIST and provides reasonable performances on SVHN and CIFAR10.Through the introduced method, residual networks are for the first time applied to semi-supervised tasks.Experiments with one-dimensional signals highlight the generality of the method.Importantly, our approach is simple, efficient, and requires no change in the deep network architecture.",We exploit an inversion scheme for arbitrary deep neural networks to develop a new semi-supervised learning framework applicable to many topologies. 332,Learning with Little Data: Evaluation of Deep Learning Algorithms,"Deep learning has become a widely used tool in many computational and classification problems.Nevertheless, obtaining and labeling data, which is needed for strong results, is often expensive or even not possible.In this paper, three different algorithmic approaches to deal with limited access to data are evaluated and compared to each other.We show the drawbacks and benefits of each method.One successful approach, especially in one- or few-shot learning tasks, is the use of external data during the classification task.Another successful approach, which achieves state-of-the-art results in semi-supervised learning benchmarks, is consistency regularization.Especially virtual adversarial training has shown strong results and will be investigated in this paper.The aim of consistency regularization is to force the network not to change the output when the input or the network itself is perturbed.Generative adversarial networks have also shown strong empirical results.In many approaches the GAN architecture is used in order to create additional data and therefore to increase the generalization capability of the classification network.Furthermore, we consider the use of unlabeled data for further performance improvement.The use of unlabeled data is investigated both for GANs and VAT.","Comparison of siamese neural networks, GANs, and VAT for few-shot learning.
" 333,FusionNet: Fusing via Fully-aware Attention with Application to Machine Comprehension,"This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives.First, it puts forward a novel concept of ""History of Word"" to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation.Second, it identifies an attention scoring function that better utilizes the ""history of word"" concept.Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text and exploit it in its counterpart layer by layer.We apply FusionNet to the Stanford Question Answering Dataset and it achieves the first position for both single and ensemble model on the official SQuAD leaderboard at the time of writing.Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.","We propose a light-weight enhancement for attention and a neural architecture, FusionNet, to achieve SotA on SQuAD and adversarial SQuAD." 334,Hierarchical Adversarially Learned Inference,"We propose a novel hierarchical generative model with a simple Markovian structure and a corresponding inference model.Both the generative and inference model are trained using the adversarial learning paradigm.We demonstrate that the hierarchical structure supports the learning of progressively more abstract representations as well as providing semantically meaningful reconstructions with different levels of fidelity.Furthermore, we show that minimizing the Jensen-Shanon divergence between the generative and inference network is enough to minimize the reconstruction error. The resulting semantically meaningful hierarchical latent structure discovery is exemplified on the CelebA dataset. There, we show that the features learned by our model in an unsupervised way outperform the best handcrafted features. Furthermore, the extracted features remain competitive when compared to several recent deep supervised approaches on an attribute prediction task on CelebA.""Finally, we leverage the model's inference network to achieve state-of-the-art performance on a semi-supervised variant of the MNIST digit classification task.",Adversarially trained hierarchical generative model with robust and semantically learned latent representation. 
335,Learning to Discretize: Solving 1D Scalar Conservation Laws via Deep Reinforcement Learning,"Conservation laws are considered to be fundamental laws of nature.They have broad applications in many fields including physics, chemistry, biology, geology, and engineering.Solving the differential equations associated with conservation laws is a major branch in computational mathematics.Recent success of machine learning, especially deep learning, in areas such as computer vision and natural language processing, has attracted a lot of attention from the community of computational mathematics and inspired many intriguing works in combining machine learning with traditional methods.In this paper, we are the first to explore the possibility and benefit of solving nonlinear conservation laws using deep reinforcement learning.As a proof of concept, we focus on 1-dimensional scalar conservation laws.We deploy the machinery of deep reinforcement learning to train a policy network that can decide on how the numerical solutions should be approximated in a sequential and spatial-temporal adaptive manner.We will show that the problem of solving conservation laws can be naturally viewed as a sequential decision making process and the numerical schemes learned in such a way can easily enforce long-term accuracy.Furthermore, the learned policy network is carefully designed to determine a good local discrete approximation based on the current state of the solution, which essentially makes the proposed method a meta-learning approach.In other words, the proposed method is capable of learning how to discretize for a given situation, mimicking human experts.Finally, we will provide details on how the policy network is trained, how well it performs compared with some state-of-the-art numerical solvers such as WENO schemes, and how well it generalizes.Our code is released anonymously at \\url.","We observe that numerical PDE solvers can be regarded as Markov Decision Processes, and propose to use Reinforcement Learning to solve 1D scalar Conservation Laws" 336,Spatial Broadcast Decoder: A Simple Architecture for Disentangled Representations in VAEs,"We present a neural rendering architecture that helps variational autoencoders learn disentangled representations.Instead of the deconvolutional network typically used in the decoder of VAEs, we tile the latent vector across space, concatenate fixed X- and Y-“coordinate” channels, and apply a fully convolutional network with 1x1 stride.This provides an architectural prior for dissociating positional from non-positional features in the latent space, yet without providing any explicit supervision to this effect.We show that this architecture, which we term the Spatial Broadcast decoder, improves disentangling, reconstruction accuracy, and generalization to held-out regions in data space. We show the Spatial Broadcast Decoder is complementary to state-of-the-art disentangling techniques and when incorporated improves their performance.",We introduce a neural rendering architecture that helps VAEs learn disentangled latent representations.
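Entry 336 describes its decoder concretely enough to sketch. The PyTorch module below is an illustrative reconstruction from the abstract (the channel widths, kernel sizes, output resolution, and latent dimensionality are assumptions made here), not the authors' code.

```python
import torch
import torch.nn as nn

class SpatialBroadcastDecoder(nn.Module):
    """Sketch of the decoder in entry 336: tile the latent over a spatial grid,
    append fixed x/y coordinate channels, then apply 1x1-stride convolutions."""

    def __init__(self, latent_dim=10, out_channels=3, size=64):
        super().__init__()
        self.size = size
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, size),
                                torch.linspace(-1, 1, size), indexing="ij")
        self.register_buffer("coords", torch.stack([xs, ys])[None])  # (1, 2, H, W)
        self.net = nn.Sequential(
            nn.Conv2d(latent_dim + 2, 64, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, out_channels, kernel_size=3, stride=1, padding=1),
        )

    def forward(self, z):
        b = z.shape[0]
        z_tiled = z[:, :, None, None].expand(-1, -1, self.size, self.size)
        coords = self.coords.expand(b, -1, -1, -1)
        return self.net(torch.cat([z_tiled, coords], dim=1))

decoder = SpatialBroadcastDecoder()
img = decoder(torch.randn(4, 10))   # -> (4, 3, 64, 64)
```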
337,Critical Learning Periods in Deep Networks,"Similar to humans and animals, deep artificial neural networks exhibit critical periods during which a temporary stimulus deficit can impair the development of a skill.The extent of the impairment depends on the onset and length of the deficit window, as in animal models, and on the size of the neural network.Deficits that do not affect low-level statistics, such as vertical flipping of the images, have no lasting effect on performance and can be overcome with further training. To better understand this phenomenon, we use the Fisher Information of the weights to measure the effective connectivity between layers of a network during training. Counterintuitively, information rises rapidly in the early phases of training, and then decreases, preventing redistribution of information resources in a phenomenon we refer to as a loss of ""Information Plasticity"". Our analysis suggests that the first few epochs are critical for the creation of strong connections that are optimal relative to the input data distribution.Once such strong connections are created, they do not appear to change during additional training.These findings suggest that the initial learning transient, under-scrutinized compared to asymptotic behavior, plays a key role in determining the outcome of the training process.Our findings, combined with recent theoretical results in the literature, also suggest that forgetting is critical to achieving invariance and disentanglement in representation learning.Finally, critical periods are not restricted to biological systems, but can emerge naturally in learning systems, whether biological or artificial, due to fundamental constraints arising from learning dynamics and information processing.","Sensory deficits in early training phases can lead to irreversible performance loss in both artificial and neuronal networks, suggesting information phenomena as the common cause, and point to the importance of the initial transient and forgetting." 338,Learnable Embedding Space for Efficient Neural Architecture Compression,"We propose a method to incrementally learn an embedding space over the domain of network architectures, to enable the careful selection of architectures for evaluation during compressed architecture search.Given a teacher network, we search for a compressed network architecture by using Bayesian Optimization with a kernel function defined over our proposed embedding space to select architectures for evaluation.We demonstrate that our search algorithm can significantly outperform various baseline methods, such as random search and reinforcement learning.The compressed architectures found by our method are also better than the state-of-the-art manually-designed compact architecture ShuffleNet.We also demonstrate that the learned embedding space can be transferred to new settings for architecture search, such as a larger teacher network or a teacher network in a different architecture family, without any training.","We propose a method to incrementally learn an embedding space over the domain of network architectures, to enable the careful selection of architectures for evaluation during compressed architecture search."
339,Learning agents with prioritization and parameter noise in continuous state and action space,"The Reinforcement Learning problem can be solved in two different ways - the Value function-based approach and the policy optimization-based approach - to eventually arrive at an optimal policy for the given environment.One of the recent breakthroughs in reinforcement learning is the use of deep neural networks as function approximators to approximate the value function or q-function in a reinforcement learning scheme.This has led to results with agents automatically learning how to play games like AlphaGo, showing better-than-human performance.Deep Q-learning networks and Deep Deterministic Policy Gradient are two such methods that have shown state-of-the-art results in recent times.Among the many variants of RL, an important class of problems is where the state and action spaces are continuous --- autonomous robots, autonomous vehicles, optimal control are all examples of such problems that can lend themselves naturally to reinforcement-based algorithms, and have continuous state and action spaces.In this paper, we adapt and combine approaches such as DQN and DDPG in novel ways to outperform the earlier results for continuous state and action space problems. We believe these results are a valuable addition to the fast-growing body of results on Reinforcement Learning, more so for continuous state and action space problems.",Improving the performance of an RL agent in the continuous action and state space domain by using prioritised experience replay and parameter noise. 340,Natural Language Inference over Interaction Space,"The Natural Language Inference task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis.We introduce the Interactive Inference Network, a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space.We show that an interaction tensor contains semantic information to solve natural language inference, and a denser interaction tensor contains richer semantic information.One instance of such an architecture, the Densely Interactive Inference Network, demonstrates state-of-the-art performance on large-scale NLI corpora and a large-scale NLI-like corpus. It's noteworthy that DIIN achieves a greater than 20% error reduction on the challenging Multi-Genre NLI dataset with respect to the strongest published system.",We show that multi-channel attention weights contain semantic features for solving the natural language inference task. 341,DppNet: Approximating Determinantal Point Processes with Deep Networks,"Determinantal Point Processes provide an elegant and versatile way to sample sets of items that balance the point-wise quality with the set-wise diversity of selected items.For this reason, they have gained prominence in many machine learning applications that rely on subset selection.However, sampling from a DPP over a ground set of size N is a costly operation, requiring in general an O preprocessing cost and an O sampling cost for subsets of size k. We approach this problem by introducing DppNets: generative deep models that produce DPP-like samples for arbitrary ground sets. We develop an inhibitive attention mechanism based on transformer networks that captures a notion of dissimilarity between feature vectors.
We show theoretically that such an approximation is sensible as it maintains the guarantees of inhibition or dissimilarity that make DPPs so powerful and unique. Empirically, we demonstrate that samples from our model receive high likelihood under the more expensive DPP alternative.",We approximate Determinantal Point Processes with neural nets; we justify our model theoretically and empirically. 342,BA-Net: Dense Bundle Adjustment Networks,"This paper introduces a network architecture to solve the structure-from-motion problem via feature-metric bundle adjustment, which explicitly enforces multi-view geometry constraints in the form of feature-metric error.The whole pipeline is differentiable, so that the network can learn suitable features that make the BA problem more tractable.Furthermore, this work introduces a novel depth parameterization to recover dense per-pixel depth.The network first generates several basis depth maps according to the input image, and optimizes the final depth as a linear combination of these basis depth maps via feature-metric BA.The basis depth maps generator is also learned via end-to-end training.The whole system nicely combines domain knowledge and deep learning to address the challenging dense SfM problem.Experiments on large-scale real data prove the success of the proposed method.",This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature bundle adjustment (BA) 343,TD Learning with Constrained Gradients,"Temporal Difference Learning with function approximation is known to be unstable.Previous work has presented alternative objectives that are stable to minimize.However, in practice, TD-learning with neural networks requires various tricks like using a target network that updates slowly.In this work we propose a constraint on the TD update that minimizes change to the target values.This constraint can be applied to the gradients of any TD objective, and can be easily applied to nonlinear function approximation.We validate this update by applying our technique to deep Q-learning, and training without a target network. We also show that adding this constraint on Baird's counterexample keeps Q-learning from diverging.",We show that adding a constraint to TD updates stabilizes learning and allows Deep Q-learning without a target network 344,Neural Program Planner for Structured Predictions,"We consider the problem of weakly supervised structured prediction with reinforcement learning – for example, given a database table and a question, perform a sequence of computation actions on the table, which generates a response and receives a binary success-failure reward. This line of research has been successful by leveraging RL to directly optimize the desired metrics of the SP tasks – for example, the accuracy in question answering or BLEU score in machine translation.
However, different from the common RL settings, the environment dynamics is deterministic in SP, which hasn’t been fully utilized by the model-free RL methods that are usually applied.Since SP models usually have full access to the environment dynamics, we propose to apply model-based RL methods, which rely on planning as a primary model component.We demonstrate the effectiveness of planning-based SP with a Neural Program Planner, which, given a set of candidate programs from a pretrained search policy, decides which program is the most promising considering all the information generated from executing these programs.We evaluate NPP on weakly supervised program synthesis from natural language by stacked learning a planning module based on pretrained search policies.On the WIKITABLEQUESTIONS benchmark, NPP achieves a new state-of-the-art of 47.2% accuracy.",A model-based planning component improves RL-based semantic parsing on WikiTableQuestions. 345,PACT: Parameterized Clipping Activation for Quantized Neural Networks,"Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost.To address this cost, a number of quantization schemes have been proposed - but most of these techniques focused on quantizing weights, which are relatively smaller in size compared to activations.This paper proposes a novel quantization scheme for activations during training that enables neural networks to work well with ultra low precision weights and activations without any significant accuracy degradation. This technique, PArameterized Clipping acTivation, uses an activation clipping parameter α that is optimized during training to find the right quantization scale.PACT allows quantizing activations to arbitrary bit precisions, while achieving much better accuracy relative to published state-of-the-art quantization schemes.We show, for the first time, that both weights and activations can be quantized to 4 bits of precision while still achieving accuracy comparable to full precision networks across a range of popular models and datasets.We also show that exploiting these reduced-precision computational units in hardware can enable a super-linear improvement in inferencing performance due to a significant reduction in the area of accelerator compute engines coupled with the ability to retain the quantized model and activation data in on-chip memories.",A new way of quantizing activations of Deep Neural Networks via parameterized clipping, which optimizes the quantization scale via stochastic gradient descent.
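Entry 345's clipping activation can be sketched in a few lines. This is a minimal PyTorch illustration, not the published implementation; the 4-bit default, the initial value of α, and the straight-through gradient for the rounding step are assumptions made here.

```python
import torch
import torch.nn as nn

class PACT(nn.Module):
    """Sketch of a parameterized clipping activation (entry 345): clip activations
    to [0, alpha] with a learnable alpha, then quantize uniformly to k bits."""

    def __init__(self, bits=4, alpha_init=6.0):
        super().__init__()
        self.bits = bits
        self.alpha = nn.Parameter(torch.tensor(alpha_init))

    def forward(self, x):
        y = torch.relu(x)
        y = torch.minimum(y, self.alpha)           # clip at the learnable alpha
        scale = (2 ** self.bits - 1) / self.alpha
        y_q = torch.round(y * scale) / scale        # uniform k-bit quantization
        return y + (y_q - y).detach()               # straight-through estimator

act = PACT(bits=4)
out = act(torch.randn(8, 16))
```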
346,The Generalization-Stability Tradeoff in Neural Network Pruning,"Pruning neural network parameters is often viewed as a means to compress models, but pruning has also been motivated by the desire to prevent overfitting.This motivation is particularly relevant given the perhaps surprising observation that a wide variety of pruning approaches increase test accuracy despite sometimes massive reductions in parameter counts.To better understand this phenomenon, we analyze the behavior of pruning over the course of training, finding that pruning's effect on generalization relies more on the instability it generates than on the final size of the pruned model.We demonstrate that even the pruning of unimportant parameters can lead to such instability, and show similarities between pruning and regularizing by injecting noise, suggesting a mechanism for pruning-based generalization improvements that is compatible with the strong generalization recently observed in over-parameterized networks.","We demonstrate that pruning methods which introduce greater instability into the loss also confer improved generalization, and explore the mechanisms underlying this effect." 347,On the Adversarial Robustness of Neural Networks without Weight Transport,"Neural networks trained with backpropagation, the standard algorithm of deep learning which uses weight transport, are easily fooled by existing gradient-based adversarial attacks.This class of attacks is based on certain small perturbations of the inputs to make networks misclassify them.We show that less biologically implausible deep neural networks trained with feedback alignment, which do not use weight transport, can be harder to fool, providing actual robustness.Tested on MNIST, deep neural networks trained without weight transport have an adversarial accuracy of 98% compared to 0.03% for neural networks trained with backpropagation and generate non-transferable adversarial examples.However, this gap decreases on CIFAR-10 but is still significant, particularly for small perturbation magnitudes less than 1/2.",Less biologically implausible deep neural networks trained without weight transport can be harder to fool. 348,A Generative Model For Electron Paths,"Chemical reactions can be described as the stepwise redistribution of electrons in molecules.As such, reactions are often depicted using ""arrow-pushing"" diagrams which show this movement as a sequence of arrows.We propose an electron path prediction model to learn these sequences directly from raw reaction data.Instead of predicting product molecules directly from reactant molecules in one shot, learning a model of electron movement has the benefits of being easy for chemists to interpret, incorporating constraints of chemistry, such as balanced atom counts before and after the reaction, and naturally encoding the sparsity of chemical reactions, which usually involve changes in only a small number of atoms in the reactants.We design a method to extract approximate reaction paths from any dataset of atom-mapped reaction SMILES strings.Our model achieves excellent performance on an important subset of the USPTO reaction dataset, comparing favorably to the strongest baselines.Furthermore, we show that our model recovers a basic knowledge of chemistry without being explicitly trained to do so.",A generative model for reaction prediction that learns the mechanistic electron steps of a reaction directly from raw reaction data.
349,Predict Responsibly: Increasing Fairness by Learning to Defer,"When machine learning models are used for high-stakes decisions, they should predict accurately, fairly, and responsibly.To fulfill these three requirements, a model must be able to output a reject option when it is not qualified to make a prediction.In this work, we propose learning to defer, a method by which a model can defer judgment to a downstream decision-maker such as a human user.We show that learning to defer generalizes the rejection learning framework in two ways: by considering the effect of other agents in the decision-making process, and by allowing for optimization of complex objectives.We propose a learning algorithm which accounts for potential biases held by decision-makers later in a pipeline.Experiments on real-world datasets demonstrate that learning to defer can make a model not only more accurate but also less biased.Even when operated by highly biased users, we show that deferring models can still greatly improve the fairness of the entire pipeline.","Incorporating the ability to say I-don't-know can improve the fairness of a classifier without sacrificing too much accuracy, and this improvement magnifies when the classifier has insight into downstream decision-making." 350,HTN Planning with Semantic Attachments,"Hierarchical Task Networks generate plans using a decomposition process guided by extra domain knowledge to guide search towards a planning task.While many HTN planners can make calls to external processes during the decomposition process, this is a computationally expensive process, so planner implementations often use such calls in an ad-hoc way using very specialized domain knowledge to limit the number of calls.Conversely, the few classical planners that are capable of using external calls during planning do so in much more limited ways by generating a fixed number of ground operators at problem grounding time.In this paper we develop the notion of semantic attachments for HTN planning using semi co-routines, allowing such procedurally defined predicates to link the planning process to custom unifications outside of the planner.The resulting planner can then use such co-routines as part of its backtracking mechanism to search through parallel dimensions of the state-space.We show empirically that our planner outperforms the state-of-the-art numeric planners in a number of domains using minimal extra domain knowledge.",An approach to perform HTN planning using external procedures to evaluate predicates at runtime (semantic attachments).
351,"Don't Settle for Average, Go for the Max: Fuzzy Sets and Max-Pooled Word Vectors","Recent literature suggests that averaged word vectors followed by simple post-processing outperform many deep learning methods on semantic textual similarity tasks.Furthermore, when averaged word vectors are trained supervised on large corpora of paraphrases, they achieve state-of-the-art results on standard STS benchmarks.Inspired by these insights, we push the limits of word embeddings even further.We propose a novel fuzzy bag-of-words representation for text that contains all the words in the vocabulary simultaneously but with different degrees of membership, which are derived from similarities between word vectors.We show that max-pooled word vectors are only a special case of fuzzy BoW and should be compared via fuzzy Jaccard index rather than cosine similarity.Finally, we propose DynaMax, a completely unsupervised and non-parametric similarity measure that dynamically extracts and max-pools good features depending on the sentence pair.This method is both efficient and easy to implement, yet outperforms current baselines on STS tasks by a large margin and is even competitive with supervised word vectors trained to directly optimise cosine similarity.",Max-pooled word vectors with fuzzy Jaccard set similarity are an extremely competitive baseline for semantic similarity; we propose a simple dynamic variant that performs even better. 352,Reparameterized Variational Divergence Minimization for Stable Imitation,"State-of-the-art results in imitation learning are currently held by adversarial methods that iteratively estimate the divergence between student and expert policies and then minimize this divergence to bring the imitation policy closer to expert behavior.Analogous techniques for imitation learning from observations alone, however, have not enjoyed the same ubiquitous successes.Recent work in adversarial methods for generative models has shown that the measure used to judge the discrepancy between real and synthetic samples is an algorithmic design choice, and that different choices can result in significant differences in model performance.Choices including Wasserstein distance and various-divergences have already been explored in the adversarial networks literature, while more recently the latter class has been investigated for imitation learning.Unfortunately, we find that in practice this existing imitation-learning framework for using-divergences suffers from numerical instabilities stemming from the combination of function approximation and policy-gradient reinforcement learning.In this work, we alleviate these challenges and offer a reparameterization of adversarial imitation learning as-divergence minimization before further extending the framework to handle the problem of imitation from observations only.Empirically, we demonstrate that our design choices for coupling imitation learning and-divergences are critical to recovering successful imitation policies.Moreover, we find that with the appropriate choice of-divergence, we can obtain imitation-from-observation algorithms that outperform baseline approaches and more closely match expert performance in continous-control tasks with low-dimensional observation spaces.With high-dimensional observations, we still observe a significant gap with and without action labels, offering an interesting avenue for future work.","The overall goal of this work is to enable sample-efficient imitation from expert demonstrations, both with and without 
the provision of expert action labels, through the use of f-divergences." 353,Tracking momentary attention fluctuations with an EEG-based cognitive brain-machine interface,"Momentary fluctuations in attention correlate with neural activity fluctuations in primate visual areas.Yet, the link between such momentary neural fluctuations and attention state remains to be shown in the human brain.We investigate this link using a real-time cognitive brain machine interface based on steady state visually evoked potentials: occipital EEG potentials evoked by rhythmically flashing stimuli.Tracking momentary fluctuations in SSVEP power, in real-time, we presented stimuli time-locked to when this power reached high or low thresholds.""We observed a significant increase in discrimination accuracy when stimuli were triggered during high SSVEP power epochs, at the location cued for attention."", 'Our results indicate a direct link between attention’s effects on perceptual accuracy and and neural gain in EEG-SSVEP power, in the human brain.","With a cognitive brain-machine interface, we show a direct link between attentional effects on perceptual accuracy and neural gain in EEG-SSVEP power, in the human brain." 354,Defending Against Adversarial Examples by Regularized Deep Embedding,"Recent studies have demonstrated the vulnerability of deep convolutional neural networks against adversarial examples.Inspired by the observation that the intrinsic dimension of image data is much smaller than its pixel space dimension and the vulnerability of neural networks grows with the input dimension, we propose to embed high-dimensional input images into a low-dimensional space to perform classification.However, arbitrarily projecting the input images to a low-dimensional space without regularization will not improve the robustness of deep neural networks.We propose a new framework, Embedding Regularized Classifier, which improves the adversarial robustness of the classifier through embedding regularization.Experimental results on several benchmark datasets show that, our proposed framework achieves state-of-the-art performance against strong adversarial attack methods.",A general and easy-to-use framework that improves the adversarial robustness of deep classification models through embedding regularization. 
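Entry 351's DynaMax similarity is simple enough to sketch directly from its abstract: build the feature universe from the two sentences' word vectors, max-pool the memberships, and compare them with the fuzzy Jaccard index. The clipping at zero and the random toy vectors below are assumptions for illustration; this is not the authors' code.

```python
import numpy as np

def dynamax_jaccard(x_vectors, y_vectors):
    """Fuzzy-Jaccard similarity with dynamic max-pooling (entry 351, sketched).

    The word vectors of both sentences form the feature universe; each sentence
    is represented by max-pooled memberships, compared via sum(min)/sum(max)."""
    universe = np.vstack([x_vectors, y_vectors])        # (n_x + n_y, d)
    x_feat = np.max(x_vectors @ universe.T, axis=0)     # max-pooled memberships
    y_feat = np.max(y_vectors @ universe.T, axis=0)
    x_feat = np.clip(x_feat, 0, None)                   # non-negativity: assumed here
    y_feat = np.clip(y_feat, 0, None)
    return np.sum(np.minimum(x_feat, y_feat)) / np.sum(np.maximum(x_feat, y_feat))

rng = np.random.default_rng(0)
sent_a = rng.normal(size=(5, 50))    # 5 word vectors of dimension 50
sent_b = rng.normal(size=(7, 50))
similarity = dynamax_jaccard(sent_a, sent_b)
```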
355,Robust Task Clustering for Deep and Diverse Multi-Task and Few-Shot Learning,"We investigate task clustering for deep learning-based multi-task and few-shot learning in the settings with large numbers of diverse tasks.Our method measures task similarities using cross-task transfer performance matrix.Although this matrix provides us critical information regarding similarities between tasks, the uncertain task-pairs, i.e., the ones with extremely asymmetric transfer scores, may collectively mislead clustering algorithms to output an inaccurate task-partition.Moreover, when the number of tasks is large, generating the full transfer performance matrix can be very time consuming.To overcome these limitations, we propose a novel task clustering algorithm to estimate the similarity matrix based on the theory of matrix completion.The proposed algorithm can work on partially-observed similarity matrices based on only sampled task-pairs with reliable scores, ensuring its efficiency and robustness.Our theoretical analysis shows that under mild assumptions, the reconstructed matrix perfectly matches the underlying “true” similarity matrix with an overwhelming probability.The final task partition is computed by applying an efficient spectral clustering algorithm to the recovered matrix.Our results show that the new task clustering method can discover task clusters that benefit both multi-task learning and few-shot learning setups for sentiment classification and dialog intent classification tasks.",We propose a matrix-completion based task clustering algorithm for deep multi-task and few-shot learning in the settings with large numbers of diverse tasks. 356,Machine Learning by Two-Dimensional Hierarchical Tensor Networks: A Quantum Information Theoretic Perspective on Deep Architectures,"The resemblance between the methods used in studying quantum-many body physics and in machine learning has drawn considerable attention.In particular, tensor networks and deep learning architectures bear striking similarities to the extent that TNs can be used for machine learning.Previous results used one-dimensional TNs in image recognition, showing limited scalability and a request of high bond dimension.In this work, we train two-dimensional hierarchical TNs to solve image recognition problems, using a training algorithm derived from the multipartite entanglement renormalization ansatz.This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning.While keeping the TN unitary in the training phase, TN states can be defined, which optimally encodes each class of the images into a quantum many-body state.We study the quantum features of the TN states, including quantum entanglement and fidelity.We suggest these quantities could be novel properties that characterize the image classes, as well as the machine learning tasks.Our work could be further applied to identifying possible quantum properties of certain artificial intelligence methods.","This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning." 
357,Learning to Control PDEs with Differentiable Physics,"Predicting outcomes and planning interactions with the physical world are long-standing goals for machine learning.A variety of such tasks involves continuous physical systems, which can be described by partial differential equations with many degrees of freedom.Existing methods that aim to control the dynamics of such systems are typically limited to relatively short time frames or a small number of interaction parameters.We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames.We propose to split the problem into two distinct tasks: planning and control.To this end, we introduce a predictor network that plans optimal trajectories and a control network that infers the corresponding control parameters.Both stages are trained end-to-end using a differentiable PDE solver.We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs such as the incompressible Navier-Stokes equations.",We train a combination of neural networks to predict optimal trajectories for complex physical systems. 358,Deterministic PAC-Bayesian generalization bounds for deep networks via generalizing noise-resilience,"The ability of overparameterized deep networks to generalize well has been linked to the fact that stochastic gradient descent finds solutions that lie in flat, wide minima in the training loss -- minima where the output of the network is resilient to small random noise added to its parameters.So far this observation has been used to provide generalization guarantees only for neural networks whose parameters are either stochastic or compressed.In this work, we present a general PAC-Bayesian framework that leverages this observation to provide a bound on the original network learned -- a network that is deterministic and uncompressed. What enables us to do this is a key novelty in our approach: our framework allows us to show that if, on training data, the interactions between the weight matrices satisfy certain conditions that imply a wide training loss minimum, these conditions themselves generalize to the interactions between the matrices on test data, thereby implying a wide test loss minimum.We then apply our general framework in a setup where we assume that the pre-activation values of the network are not too small.In this setup, we provide a generalization guarantee for the original network that does not scale with the product of the spectral norms of the weight matrices -- a guarantee that would not have been possible with prior approaches.","We provide a PAC-Bayes based generalization guarantee for uncompressed, deterministic deep networks by generalizing noise-resilience of the network on the training data to the test data."
359,Universal Approximation with Certified Networks,"Training neural networks to be certifiably robust is critical to ensure their safety against adversarial attacks.However, it is currently very difficult to train a neural network that is both accurate and certifiably robust.In this work we take a step towards addressing this challenge.We prove that for every continuous function f, there exists a network g such that: (i) g approximates f arbitrarily closely, and (ii) simple interval bound propagation of a region through g yields a result that is arbitrarily close to the optimal output of f on that region.Our result can be seen as a Universal Approximation Theorem for interval-certified ReLU networks.To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.",We prove that for a large class of functions f there exists an interval certified robust network approximating f up to arbitrary precision. 360,PRUNING WITH HINTS: AN EFFICIENT FRAMEWORK FOR MODEL ACCELERATION,"In this paper, we propose an efficient framework to accelerate convolutional neural networks.We utilize two types of acceleration methods: pruning and hints.Pruning can reduce model size by removing channels of layers.Hints can improve the performance of the student model by transferring knowledge from the teacher model.We demonstrate that pruning and hints are complementary to each other.On one hand, hints can benefit pruning by maintaining similar feature representations.On the other hand, the model pruned from the teacher network is a good initialization for the student model, which increases the transferability between the two networks.Our approach performs the pruning stage and the hints stage iteratively to further improve the performance.Furthermore, we propose an algorithm to reconstruct the parameters of the hints layer and make the pruned model more suitable for hints.Experiments were conducted on various tasks including classification and pose estimation.Results on CIFAR-10, ImageNet and COCO demonstrate the generalization and superiority of our framework.",This work aims to boost all existing pruning and mimic methods.
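To make the interval-certification property in entry 359 above concrete, here is a minimal, hedged sketch (our own illustration, not the paper's code) of interval bound propagation through an affine layer and a ReLU; the weights, shapes, and the toy two-layer example are assumptions made purely for illustration.

```python
import numpy as np

def ibp_affine(W, b, lower, upper):
    """Propagate an input box [lower, upper] through an affine layer y = W x + b."""
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius          # worst-case spread of the box
    return out_center - out_radius, out_center + out_radius

def ibp_relu(lower, upper):
    """ReLU is monotone, so interval bounds pass through elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Toy example: certified output bounds of a 2-layer network on a small input box.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
x = np.array([0.1, 0.2, 0.3])
lo, hi = ibp_relu(*ibp_affine(W1, b1, x - 0.05, x + 0.05))
lo, hi = ibp_affine(W2, b2, lo, hi)
print(lo, hi)  # every input in the original box maps into [lo, hi]
```

Entry 359's result states that, for a large class of functions, a ReLU network exists for which bounds computed in this simple way can be made arbitrarily tight.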
361,Data-dependent Gaussian Prior Objective for Language Generation,"For typical sequence prediction problems such as language generation, maximum likelihood estimation has commonly been adopted as it encourages the predicted sequence most consistent with the ground-truth sequence to have the highest probability of occurring.However, MLE focuses on once-to-all matching between the predicted sequence and the gold standard, consequently treating all incorrect predictions as being equally incorrect.Treating all incorrect predictions as equal unfairly downplays the nuance of these sequences' detailed token-wise structure.To counteract this, we augment the MLE loss by introducing an extra Kullback--Leibler divergence term derived by comparing a data-dependent Gaussian prior and the detailed training prediction.The proposed data-dependent Gaussian prior objective is defined over a prior topological order of tokens and is poles apart from the data-independent Gaussian prior commonly adopted in smoothing the training of MLE.Experimental results show that the proposed method makes effective use of a more detailed prior in the data and has improved performance in typical language generation tasks, including supervised and unsupervised machine translation, text summarization, storytelling, and image captioning.","We introduce an extra data-dependent Gaussian prior objective to augment the current MLE training, which is designed to capture the prior knowledge in the ground-truth data." 362,Interactive Classification by Asking Informative Questions,"We propose an interactive classification approach for natural language queries.Instead of classifying given the natural language query only, we ask the user for additional information using a sequence of binary and multiple-choice questions.At each turn, we use a policy controller to decide whether to present a question or provide the user with the final answer, and select the best question to ask by maximizing the system information gain.Our formulation enables bootstrapping the system without any interaction data, instead relying on non-interactive crowdsourcing annotation tasks.Our evaluation shows the interaction helps the system increase its accuracy and handle ambiguous queries, while our approach effectively balances the number of questions and the final accuracy.",We propose an interactive approach for classifying natural language queries by asking users for additional information using information gain and a reinforcement learning policy controller.
363,Convolutional Mesh Autoencoders for 3D Face Representation,"Convolutional neural networks have achieved state-of-the-art performance on recognizing and representing audio, images, videos and 3D volumes; that is, domains where the input can be characterized by a regular graph structure.However, generalizing CNNs to irregular domains like 3D meshes is challenging.Additionally, training data for 3D meshes is often limited.In this work, we generalize convolutional autoencoders to mesh surfaces.We perform spectral decomposition of meshes and apply convolutions directly in frequency space.In addition, we use max pooling and introduce upsampling within the network to represent meshes in a low dimensional space.We construct a complex dataset of 20,466 high resolution meshes with extreme facial expressions and encode it using our Convolutional Mesh Autoencoder.Despite limited training data, our method outperforms state-of-the-art PCA models of faces with 50% lower error, while using 75% fewer parameters.",Convolutional autoencoders generalized to mesh surfaces for encoding and reconstructing extreme 3D facial expressions. 364,Jiffy: A Convolutional Approach to Learning Time Series Similarity,"Computing distances between examples is at the core of many learning algorithms for time series.Consequently, a great deal of work has gone into designing effective time series distance measures.We present Jiffy, a simple and scalable distance metric for multivariate time series.Our approach is to reframe the task as a representation learning problem---rather than design an elaborate distance function, we use a CNN to learn an embedding such that the Euclidean distance is effective.By aggressively max-pooling and downsampling, we are able to construct this embedding using a highly compact neural network.Experiments on a diverse set of multivariate time series datasets show that our approach consistently outperforms existing methods.",Jiffy is a convolutional approach to learning a distance metric for multivariate time series that outperforms existing methods in terms of nearest-neighbor classification accuracy. 365,BEHAVIOR MODULE IN NEURAL NETWORKS,"The prefrontal cortex (PFC) is the part of the brain responsible for the behavior repertoire.Inspired by PFC functionality and connectivity, as well as the human behavior formation process, we propose a novel modular architecture of neural networks with a Behavioral Module (BM) and a corresponding end-to-end training strategy. This approach allows the efficient learning of behavior and preference representations.This property is particularly useful for user modeling and recommendation tasks, as it allows learning personalized representations of different user states. In experiments with video game playing, the results show that the proposed method allows separation of the main task’s objectives and behaviors between different BMs.The experiments also show network extendability through independent learning of new behavior patterns.Moreover, we demonstrate a strategy for an efficient transfer of newly learned BMs to unseen tasks.",Extendable Modular Architecture is proposed for developing a variety of Agent Behaviors in DQN.
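As a rough illustration of the embedding idea in entry 364 above (Jiffy), the sketch below shows a compact 1D-CNN that maps a multivariate time series to a fixed-length vector so that plain Euclidean distance can serve as the similarity measure; the layer sizes and names are our own assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CompactSeriesEmbedder(nn.Module):
    """Hedged sketch: a small CNN with aggressive max-pooling that embeds a
    multivariate time series so Euclidean distance between embeddings is meaningful."""
    def __init__(self, n_channels, embed_dim=40):
        super().__init__()
        self.conv = nn.Conv1d(n_channels, 16, kernel_size=5, padding=2)
        self.pool = nn.AdaptiveMaxPool1d(8)      # aggressive downsampling in time
        self.fc = nn.Linear(16 * 8, embed_dim)

    def forward(self, x):                         # x: (batch, channels, time)
        h = torch.relu(self.conv(x))
        h = self.pool(h).flatten(1)
        return self.fc(h)

def series_distance(model, a, b):
    """Distance between two series = Euclidean distance of their embeddings."""
    return torch.norm(model(a) - model(b), dim=-1)
```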
366,Observational Overfitting in Reinforcement Learning,"A major component of overfitting in model-free reinforcement learning involves the case where the agent may mistakenly correlate reward with certain spurious features from the observations generated by the Markov Decision Process.We provide a general framework for analyzing this scenario, which we use to design multiple synthetic benchmarks by only modifying the observation space of an MDP.When an agent overfits to different observation spaces even when the underlying MDP dynamics are fixed, we term this observational overfitting.Our experiments expose intriguing properties, especially with regard to implicit regularization, and also corroborate results from previous works in RL generalization and supervised learning.",We isolate one factor of RL generalization by analyzing the case when the agent only overfits to the observations. We show that architectural implicit regularizations occur in this regime. 367,Neural Language Modeling by Jointly Learning Syntax and Lexicon,"We propose a neural language model capable of unsupervised syntactic structure induction.The model leverages the structure information to form better semantic representations and better language modeling.Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic information.On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation.In this paper, we propose a novel neural language model, called the Parsing-Reading-Predict Networks, that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model.In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network.Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word/character-level language model tasks.","In this paper, we propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model." 368,Super-AND: A Holistic Approach to Unsupervised Embedding Learning,"Unsupervised embedding learning aims to extract good representations from data without the use of human-annotated labels.Such techniques are in the limelight because of the challenges in collecting the massive-scale labels required for supervised learning.This paper proposes a comprehensive approach, called Super-AND, which is based on the Anchor Neighbourhood Discovery model.Multiple losses defined in Super-AND make similar samples gather even within a low-density space and keep features invariant against augmentation.As a result, our model outperforms existing approaches in various benchmark datasets and achieves an accuracy of 89.2% in CIFAR-10 with the Resnet18 backbone network, a 2.9% gain over the state-of-the-art.",We proposed a comprehensive approach for unsupervised embedding learning on the basis of the AND algorithm.
369,Classification and Disease Localization in Histopathology Using Only Global Labels: A Weakly-Supervised Approach,"Analysis of histopathology slides is a critical step for many diagnoses, and in particular in oncology where it defines the gold standard.In the case of digital histopathological analysis, highly trained pathologists must review vast whole-slide-images of extreme digital resolution across multiple zoom levels in order to locate abnormal regions of cells, or in some cases single cells, out of millions.The application of deep learning to this problem is hampered not only by small sample sizes, as typical datasets contain only a few hundred samples, but also by the cost of generating ground-truth localized annotations for training interpretable classification and segmentation models.We propose a method for disease localization that uses only the image-wide labels available during training.Even without pixel-level annotations, we are able to demonstrate performance comparable with models trained with strong annotations on the Camelyon-16 lymph node metastases detection challenge.We accomplish this through the use of pre-trained deep convolutional networks, feature embedding, as well as learning via top instances and negative evidence, a multiple instance learning technique from the field of semantic segmentation and object detection.",We propose a weakly supervised learning method for the classification and localization of cancers in extremely high resolution histopathology whole slide images using only image-wide labels. 370,Using Ontologies To Improve Performance In Massively Multi-label Prediction,"Massively multi-label prediction/classification problems arise in environments like health-care or biology where it is useful to make very precise predictions.One challenge with massively multi-label problems is that there is often a long-tailed frequency distribution for the labels, resulting in few positive examples for the rare labels.We propose a solution to this problem by modifying the output layer of a neural network to create a Bayesian network of sigmoids which takes advantage of ontology relationships between the labels to help share information between the rare and the more common labels. We apply this method to the two massively multi-label tasks of disease prediction and protein function prediction and obtain significant improvements in per-label AUROC and average precision.", We propose a new method for using ontology information to improve performance on massively multi-label prediction/classification problems.
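Entry 370 above describes replacing the output layer with a Bayesian network of sigmoids over an ontology; the following is a hedged sketch of one plausible reading of that idea, where each label's probability is the product of sigmoid scores along its ancestor path. The `ancestors` structure and all names here are hypothetical illustrations, not the paper's implementation.

```python
import torch
import torch.nn as nn

class OntologySigmoidOutput(nn.Module):
    """Sketch: each label's probability is the product of sigmoid scores along its
    path of ancestor nodes in the ontology, so rare labels share evidence with
    their more common ancestors. `ancestors[label]` lists node indices root..label."""
    def __init__(self, hidden_dim, n_nodes, ancestors):
        super().__init__()
        self.node_scores = nn.Linear(hidden_dim, n_nodes)
        self.ancestors = ancestors

    def forward(self, h):                                # h: (batch, hidden_dim)
        node_prob = torch.sigmoid(self.node_scores(h))   # (batch, n_nodes)
        label_probs = []
        for path in self.ancestors:
            label_probs.append(node_prob[:, path].prod(dim=1))  # product along path
        return torch.stack(label_probs, dim=1)           # (batch, n_labels)
```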
371,A Greedy Approach to Max-Sliced Wasserstein GANs,"Generative Adversarial Networks have made data generation possible in various use cases, but in the case of complex, high-dimensional distributions it can be difficult to train them, because of convergence problems and the appearance of mode collapse.Sliced Wasserstein GANs, and especially the application of the Max-Sliced Wasserstein distance, made it possible to approximate the Wasserstein distance during training in an efficient and stable way and helped ease convergence problems of these architectures.This method transforms sample assignment and distance calculation into sorting the one-dimensional projections of the samples, which results in a sufficient approximation of the high-dimensional Wasserstein distance.In this paper we demonstrate that approximating the Wasserstein distance by sorting the samples is not always the optimal approach, and that greedy assignment of the real and fake samples can result in faster convergence and a better approximation of the original distribution.",We apply a greedy assignment on the projected samples instead of sorting to approximate the Wasserstein distance 372,Slow Thinking Enables Task-Uncertain Lifelong and Sequential Few-Shot Learning,"Lifelong machine learning focuses on adapting to novel tasks without forgetting the old tasks, whereas few-shot learning strives to learn a single task given a small amount of data.These two different research areas are crucial for artificial general intelligence; however, their existing studies have assumed some impractical settings when training the models.For lifelong learning, the nature of incoming tasks during inference time is assumed to be known at training time.As for few-shot learning, it is commonly assumed that a large number of tasks is available during training.Humans, on the other hand, can perform these learning tasks without regard to the aforementioned assumptions.Inspired by how the human brain works, we propose a novel model, called Slow Thinking to Learn (STL), that makes sophisticated predictions by iteratively considering interactions between current and previously seen tasks at runtime.Our experiments empirically demonstrate the effectiveness of STL for more realistic lifelong and few-shot learning settings.",This paper studies the interactions between the fast-learning and slow-prediction models and demonstrates how such interactions can improve machine capability to solve the joint lifelong and few-shot learning problems.
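To illustrate the contrast entry 371 above draws between sorting-based and greedy assignment of projected samples, here is a small NumPy sketch (our own, assuming equal numbers of real and fake samples and a fixed projection direction); it is not the paper's implementation.

```python
import numpy as np

def sliced_w1_sorted(real, fake, direction):
    """Standard 1D approximation: sort both projections and pair samples by rank."""
    r = np.sort(real @ direction)
    f = np.sort(fake @ direction)
    return np.mean(np.abs(r - f))

def sliced_w1_greedy(real, fake, direction):
    """Greedy variant: each fake projection is matched to the nearest unused real one."""
    remaining = list(real @ direction)
    total = 0.0
    for x in fake @ direction:
        j = int(np.argmin([abs(x - y) for y in remaining]))
        total += abs(x - remaining.pop(j))  # popping prevents re-using a real sample
    return total / len(fake)
```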
373,Variational Hetero-Encoder Randomized GANs for Joint Image-Text Modeling,"For bidirectional joint image-text modeling, we develop variational hetero-encoder randomized generative adversarial network, a versatile deep generative model that integrates a probabilistic text decoder, probabilistic image encoder, and GAN into a coherent end-to-end multi-modality learning framework.VHE randomized GAN encodes an image to decode its associated text, and feeds the variational posterior as the source of randomness into the GAN image generator.We plug three off-the-shelf modules, including a deep topic model, a ladder-structured image encoder, and StackGAN++, into VHE-GAN, which already achieves competitive performance.This further motivates the development of VHE-raster-scan-GAN that generates photo-realistic images in not only a multi-scale low-to-high-resolution manner, but also a hierarchical-semantic coarse-to-fine fashion.By capturing and relating hierarchical semantic and visual concepts with end-to-end training, VHE-raster-scan-GAN achieves state-of-the-art performance in a wide variety of image-text multi-modality learning and generation tasks.","A novel Bayesian deep learning framework that captures and relates hierarchical semantic and visual concepts, performing well on a variety of image and text modeling and generation tasks." 374,Learning How to Ground a Plan - Partial Grounding in Classical Planning,"Current classical planners are very successful in finding plans, even for large planning instances.To do so, most planners rely on a preprocessing stage that computes a grounded representation of the task.Whenever the grounded task is too big to be generated, the instance cannot even be tackled by the actual planner.To address this issue, we introduce a partial grounding approach that grounds only a projection of the task, when complete grounding is not feasible.We propose a guiding mechanism that, for a given domain, identifies the parts of a task that are relevant to find a plan by using off-the-shelf machine learning methods.Our empirical evaluation attests that the approach is capable of solving planning instances that are too big to be fully grounded.","This paper introduces partial grounding to tackle the problem that arises when the full grounding process, i.e., the translation of a PDDL input task into a ground representation like STRIPS, is infeasible due to memory or time constraints." 375,Response Characterization for Auditing Cell Dynamics in Long Short-term Memory Networks,"In this paper, we introduce a novel method to interpret recurrent neural networks, particularly long short-term memory networks, at the cellular level.We propose a systematic pipeline for interpreting individual hidden state dynamics within the network using response characterization methods.The ranked contribution of individual cells to the network's output is computed by analyzing a set of interpretable metrics of their decoupled step and sinusoidal responses.As a result, our method is able to uniquely identify neurons with insightful dynamics, quantify relationships between dynamical properties and test accuracy through ablation analysis, and interpret the impact of network capacity on a network's dynamical distribution.Finally, we demonstrate the generalizability and scalability of our method by evaluating it on a series of different benchmark sequential datasets.",Introducing the response characterization method for interpreting cell dynamics in learned long short-term memory (LSTM) networks.
376,Emergence of grid-like representations by training recurrent neural networks to perform spatial localization,"Decades of research on the neural code underlying spatial navigation have revealed a diverse set of neural response properties.The Entorhinal Cortex (EC) of the mammalian brain contains a rich set of spatial correlates, including grid cells which encode space using tessellating patterns.However, the mechanisms and functional significance of these spatial representations remain largely mysterious.As a new way to understand these neural representations, we trained recurrent neural networks to perform navigation tasks in 2D arenas based on velocity inputs.Surprisingly, we find that grid-like spatial response patterns emerge in trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells.All these different functional types of neurons have been observed experimentally.The order of the emergence of grid-like and border cells is also consistent with observations from developmental studies.Together, our results suggest that grid cells, border cells and others as observed in EC may be a natural solution for representing space efficiently given the predominant recurrent connections in the neural circuits.","To our knowledge, this is the first study to show how neural representations of space, including grid-like cells and border cells as observed in the brain, could emerge from training a recurrent neural network to perform navigation tasks." 377,Music Source Separation in the Waveform Domain,"Source separation for music is the task of isolating contributions, or stems, from different instruments recorded individually and arranged together to form a song.Such components include voice, bass, drums and any other accompaniments.While end-to-end models that directly generate the waveform are state-of-the-art in many audio synthesis problems, the best multi-instrument source separation models generate masks on the magnitude spectrum and achieve performances far above current end-to-end, waveform-to-waveform models.We present an in-depth analysis of a new architecture, which we will refer to as Demucs, based on a convolutional autoencoder, with a bidirectional LSTM at the bottleneck layer and skip-connections as in U-Networks.Compared to the state-of-the-art waveform-to-waveform model, Wave-U-Net, the main features of our approach, in addition to the bi-LSTM, are the use of transposed convolution layers instead of upsampling-convolution blocks, the use of gated linear units, exponentially growing the number of channels with depth and a new careful initialization of the weights. 
Results on the MusDB dataset show that our architecture achieves a signal-to-distortion ratio nearly 2.2 points higher than the best waveform-to-waveform competitor.This makes our model match the state-of-the-art performance on this dataset, bridging the performance gap between models that operate on the spectrogram and end-to-end approaches.",We match the performance of spectrogram-based models with a model trained end-to-end in the waveform domain 378,$\alpha^{\alpha}$-Rank: Scalable Multi-agent Evaluation through Evolution,"Although challenging, strategy profile evaluation in large connected learner networks is crucial for enabling the next wave of machine learning applications.Recently, $\alpha$-Rank, an evolutionary algorithm, has been proposed as a solution for ranking joint policy profiles in multi-agent systems.$\alpha$-Rank claimed scalability through a polynomial time implementation with respect to the total number of pure strategy profiles.In this paper, we formally prove that such a claim is not grounded.In fact, we show that $\alpha$-Rank exhibits an exponential complexity in the number of agents, hindering its application beyond a small finite number of joint profiles.Realizing such a limitation, we contribute by proposing a scalable evaluation protocol that we title $\alpha^{\alpha}$-Rank.Our method combines evolutionary dynamics with stochastic optimization and double oracles for scalable ranking with linear time and memory complexities.Our contributions allow us, for the first time, to conduct large-scale evaluation experiments of multi-agent systems, where we show successful results on large joint strategy profiles -- a setting not evaluable using current techniques.",We provide a scalable solution to multi-agent evaluation with linear rate complexity in both time and memory in terms of number of agents 379,DivideMix: Learning with Noisy Labels as Semi-supervised Learning,"Deep neural networks are known to be annotation-hungry.Numerous efforts have been devoted to reducing the annotation cost when learning with deep networks.Two prominent directions include learning with noisy labels and semi-supervised learning by exploiting unlabeled data.In this work, we propose DivideMix, a novel framework for learning with noisy labels by leveraging semi-supervised learning techniques.In particular, DivideMix models the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples, and trains the model on both the labeled and unlabeled data in a semi-supervised manner.To avoid confirmation bias, we simultaneously train two diverged networks where each network uses the dataset division from the other network.During the semi-supervised training phase, we improve the MixMatch strategy by performing label co-refinement and label co-guessing on labeled and unlabeled samples, respectively.Experiments on multiple benchmark datasets demonstrate substantial improvements over state-of-the-art methods.Code is available at https://github.com/LiJunnan1992/DivideMix .",We propose a novel semi-supervised learning approach with SOTA performance on combating learning with noisy labels.
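The per-sample loss modeling step of DivideMix (entry 379 above) can be illustrated with a small sketch: fit a two-component Gaussian mixture to the training losses and route low-loss samples to the labeled set. This is our own minimal illustration of that single step, with a hypothetical threshold, not the released implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def divide_by_loss(per_sample_loss, threshold=0.5):
    """Split training samples into a 'clean' (labeled) set and a 'noisy'
    (unlabeled) set based on a 2-component GMM over per-sample losses."""
    losses = np.asarray(per_sample_loss).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, reg_covar=5e-4).fit(losses)
    clean_component = int(np.argmin(gmm.means_.ravel()))   # low-loss component
    p_clean = gmm.predict_proba(losses)[:, clean_component]
    labeled_idx = np.where(p_clean > threshold)[0]
    unlabeled_idx = np.where(p_clean <= threshold)[0]
    return labeled_idx, unlabeled_idx, p_clean
```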
380,Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network,"We present a new algorithm to train a robust neural network against adversarial attacks.Our algorithm is motivated by the following two ideas.First, although recent work has demonstrated that fusing randomness can improve the robustness of neural networks, we noticed that adding noise blindly to all the layers is not the optimal way to incorporate randomness.Instead, we model randomness under the framework of Bayesian Neural Networks (BNN) to formally learn the posterior distribution of models in a scalable way.Second, we formulate the mini-max problem in BNN to learn the best model distribution under adversarial attacks, leading to an adversarially-trained Bayesian neural net.Experimental results demonstrate that the proposed algorithm achieves state-of-the-art performance under strong attacks.On CIFAR-10 with a VGG network, our model leads to a 14% accuracy improvement compared with adversarial training and random self-ensemble under PGD attack with 0.035 distortion, and the gap becomes even larger on a subset of ImageNet.","We design an adversarial training method to Bayesian neural networks, showing a much stronger defense to white-box adversarial attacks" 381,Analyzing Federated Learning through an Adversarial Lens,"Federated learning distributes model training among a multitude of agents, who, guided by privacy concerns, perform training using their local data but share only model parameter updates, for iterative aggregation at the server.In this work, we explore the threat of model poisoning attacks on federated learning initiated by a single, non-colluding malicious agent where the adversarial objective is to cause the model to misclassify a set of chosen inputs with high confidence.We explore a number of strategies to carry out this attack, starting with simple boosting of the malicious agent's update to overcome the effects of other agents' updates.To increase attack stealth, we propose an alternating minimization strategy, which alternately optimizes for the training loss and the adversarial objective.We follow up by using parameter estimation for the benign agents' updates to improve on attack success.Finally, we use a suite of interpretability techniques to generate visual explanations of model decisions for both benign and malicious models and show that the explanations are nearly visually indistinguishable.Our results indicate that even a highly constrained adversary can carry out model poisoning attacks while simultaneously maintaining stealth, thus highlighting the vulnerability of the federated learning setting and the need to develop effective defense strategies.",Effective model poisoning attacks on federated learning able to cause high-confidence targeted misclassification of desired inputs 382,Learning Noise-Invariant Representations for Robust Speech Recognition,"Despite rapid advances in speech recognition, current models remain brittle to superficial perturbations to their inputs.Small amounts of noise can destroy the performance of an otherwise state-of-the-art model.To harden models against background noise, practitioners often perform data augmentation, adding artificially-noised examples to the training set, carrying over the original label.In this paper, we hypothesize that a clean example and its superficially perturbed counterparts shouldn't merely map to the same class --- they should map to the same representation.We propose invariant-representation-learning (IRL): at each 
training iteration, for each training example, we sample a noisy counterpart.We then apply a penalty term to coerce matched representations at each layer.Our key results, demonstrated on the LibriSpeech dataset, are the following: IRL significantly reduces character error rates on both `clean' and `other' test sets; on several out-of-domain noise settings, IRL's benefits are even more pronounced.Careful ablations confirm that our results are not simply due to shrinking activations at the chosen layers."," In this paper, we hypothesize that superficially perturbed data points shouldn't merely map to the same class---they should map to the same representation." 383,Adversarial Feature Learning under Accuracy Constraint for Domain Generalization,"Learning domain-invariant representation is a dominant approach for domain generalization.However, previous methods based on domain invariance overlooked the underlying dependency of classes on domains, which is responsible for the trade-off between classification accuracy and the invariance.This study proposes a novel method, adversarial feature learning under accuracy constraint (AFLAC), which maximizes domain invariance within a range that does not interfere with accuracy.Empirical validations show that the performance of AFLAC is superior to that of baseline methods, supporting the importance of considering the dependency and the efficacy of the proposed method to overcome the problem.",Address the trade-off caused by the dependency of classes on domains by improving domain adversarial nets 384,FVD: A new Metric for Video Generation,"Recent advances in deep generative models have led to remarkable progress in synthesizing high quality images.Following their successful application in image processing and representation learning, an important next step is to consider videos.Learning generative models of video is a much harder task, requiring a model to capture the temporal dynamics of a scene, in addition to the visual presentation of objects.While recent generative models of video have had some success, current progress is hampered by the lack of qualitative metrics that consider visual quality, temporal coherence, and diversity of samples.To this end we propose Fréchet Video Distance (FVD), a new metric for generative models of video based on FID.We contribute a large-scale human study, which confirms that FVD correlates well with qualitative human judgment of generated videos.",We propose FVD: a new metric for generative models of video based on FID. A large-scale human study confirms that FVD correlates well with qualitative human judgment of generated videos.
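Entry 384 above builds FVD on the Fréchet (2-Wasserstein) distance between Gaussians fitted to features of real and generated videos. The sketch below (ours) computes that distance; the choice of feature extractor (e.g., an I3D-style action-recognition network) is an assumption and is not shown.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = np.real(covmean)  # drop tiny imaginary parts from numerical error
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

def fvd_like_score(real_feats, fake_feats):
    """real_feats, fake_feats: (N, d) arrays of video-level features."""
    mu_r, cov_r = real_feats.mean(0), np.cov(real_feats, rowvar=False)
    mu_f, cov_f = fake_feats.mean(0), np.cov(fake_feats, rowvar=False)
    return frechet_distance(mu_r, cov_r, mu_f, cov_f)
```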
385,Deep Generative Dual Memory Network for Continual Learning,"Despite advances in deep learning, artificial neural networks do not learn the same way as humans do.Today, neural networks can learn multiple tasks when trained on them jointly, but cannot maintain performance on learnt tasks when tasks are presented one at a time -- this phenomenon, called catastrophic forgetting, is a fundamental challenge to overcome before neural networks can learn continually from incoming data.In this work, we derive inspiration from human memory to develop an architecture capable of learning continuously from sequentially incoming tasks, while averting catastrophic forgetting.Specifically, our model consists of a dual memory architecture to emulate the complementary learning systems in the human brain and maintains a consolidated long-term memory via generative replay of past experiences.We substantiate our claim that replay should be generative, show the benefits of generative replay and dual memory via experiments, and demonstrate improved performance retention even for small models with low capacity.Our architecture displays many important characteristics of the human memory and provides insights on the connection between sleep and learning in humans.","A dual memory architecture inspired from human brain to learn sequentially incoming tasks, while averting catastrophic forgetting." 386,word2ket: Space-efficient Word Embeddings inspired by Quantum Entanglement,"Deep learning natural language processing models often use vector word embeddings, such as word2vec or GloVe, to represent words.A discrete sequence of words can be much more easily integrated with downstream neural layers if it is represented as a sequence of continuous vectors.Also, semantic relationships between words, learned from a text corpus, can be encoded in the relative configurations of the embedding vectors.However, storing and accessing embedding vectors for all words in a dictionary requires a large amount of space, and may strain systems with limited GPU memory.Here, we used approaches inspired by quantum computing to propose two related methods, word2ket and word2ketXS, for storing the word embedding matrix during training and inference in a highly efficient way.Our approach achieves a hundred-fold or more reduction in the space required to store the embeddings with almost no relative drop in accuracy in practical natural language processing tasks.",We use ideas from quantum computing to propose word embeddings that utilize much fewer trainable parameters.
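As one hedged reading of the space-saving idea in entry 386 above, the sketch below stores each word vector implicitly as a sum of Kronecker products of small trainable factors, so the full embedding table is never materialized; the `order`, `rank`, and dimensions are hypothetical, and this is not the authors' released code.

```python
import torch
import torch.nn as nn

class TensorProductEmbedding(nn.Module):
    """Sketch: each word embedding is a sum (over `rank`) of Kronecker products of
    `order` small factor vectors, giving an effective dim of factor_dim ** order."""
    def __init__(self, vocab_size, order=4, rank=2, factor_dim=8):
        super().__init__()
        self.factors = nn.ModuleList([
            nn.ModuleList([nn.Embedding(vocab_size, factor_dim) for _ in range(order)])
            for _ in range(rank)
        ])
        self.order, self.rank = order, rank

    def forward(self, ids):                      # ids: (batch,)
        out = 0
        for r in range(self.rank):
            v = self.factors[r][0](ids)
            for j in range(1, self.order):
                w = self.factors[r][j](ids)
                # batched Kronecker product: (B, d1) x (B, d2) -> (B, d1*d2)
                v = torch.einsum('bi,bj->bij', v, w).reshape(ids.shape[0], -1)
            out = out + v
        return out                                # (batch, factor_dim ** order)
```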
387,Compositional Obverter Communication Learning from Raw Visual Input,"One of the distinguishing aspects of human language is its compositionality, which allows us to describe complex environments with limited vocabulary.Previously, it has been shown that neural network agents can learn to communicate in a highly structured, possibly compositional language based on disentangled input.Humans, however, do not learn to communicate based on well-summarized features.In this work, we train neural agents to simultaneously develop visual perception from raw image pixels, and learn to communicate with a sequence of discrete symbols.The agents play an image description game where the image contains factors such as colors and shapes.We train the agents using the obverter technique, where an agent introspects to generate messages that maximize its own understanding.Through qualitative analysis, visualization and a zero-shot test, we show that the agents can develop, out of raw image pixels, a language with compositional properties, given proper pressure from the environment.",We train neural network agents to develop a language with compositional properties from raw pixel input. 388,Diverse Trajectory Forecasting with Determinantal Point Processes,"The ability to forecast a set of likely yet diverse possible future behaviors of an agent is essential for safety-critical perception systems.In particular, a set of possible future behaviors generated by the system must be diverse to account for all possible outcomes in order to take necessary safety precautions.It is not sufficient to maintain a set of the most likely future outcomes because the set may only contain perturbations of a dominating single outcome.While generative models such as variational autoencoders have been shown to be a powerful tool for learning a distribution over future trajectories, randomly drawn samples from the learned implicit likelihood model may not be diverse -- the likelihood model is derived from the training data distribution and the samples will concentrate around the major mode of the data.In this work, we propose to learn a diversity sampling function (DSF) that generates a diverse yet likely set of future trajectories.The DSF maps forecasting context features to a set of latent codes which can be decoded by a generative model into a set of diverse trajectory samples.Concretely, the process of identifying the diverse set of samples is posed as DSF parameter estimation.To learn the parameters of the DSF, the diversity of the trajectory samples is evaluated by a diversity loss based on a determinantal point process (DPP).Gradient descent is performed over the DSF parameters, which in turn moves the latent codes of the sample set to find an optimal set of diverse yet likely trajectories.Our method is a novel application of DPPs to optimize a set of items in continuous space.We demonstrate the diversity of the trajectories produced by our approach on both low-dimensional 2D trajectory data and high-dimensional human motion data.",We learn a diversity sampling function with DPPs to obtain a diverse set of samples from a generative model.
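For the diversity loss in entry 388 above, a determinantal-point-process-style objective can be sketched as the negative log-determinant of a similarity kernel over the sampled trajectories; the RBF kernel and bandwidth below are our own illustrative choices, not the paper's exact formulation.

```python
import torch

def dpp_diversity_loss(trajectories, bandwidth=1.0):
    """trajectories: (N, T, D) tensor of N sampled future trajectories.
    Returns -logdet of an RBF similarity kernel; minimizing it pushes the
    sampled set to be diverse (similar samples shrink the determinant)."""
    flat = trajectories.reshape(trajectories.shape[0], -1)      # (N, T*D)
    sq_dists = torch.cdist(flat, flat).pow(2)
    K = torch.exp(-sq_dists / (2.0 * bandwidth ** 2))
    K = K + 1e-4 * torch.eye(K.shape[0], device=K.device)       # keep K well-conditioned
    return -torch.logdet(K)
```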
389,Language Modeling Teaches You More than Translation Does: Lessons Learned Through Auxiliary Task Analysis,"There is mounting evidence that pretraining can be valuable for neural network language understanding models, but we do not yet have a clear understanding of how the choice of pretraining objective affects the type of linguistic information that models learn.With this in mind, we compare four objectives---language modeling, translation, skip-thought, and autoencoding---on their ability to induce syntactic and part-of-speech information, holding constant the genre and quantity of training data.We find that representations from language models consistently perform best on our syntactic auxiliary prediction tasks, even when trained on relatively small amounts of data, which suggests that language modeling may be the best data-rich pretraining task for transfer learning applications requiring syntactic information.We also find that a randomly-initialized, frozen model can perform strikingly well on our auxiliary tasks, but that this effect disappears when the amount of training data for the auxiliary tasks is reduced.",Representations from language models consistently perform better than translation encoders on syntactic auxiliary prediction tasks. 390,Surrogate-Based Constrained Langevin Sampling With Applications to Optimal Material Configuration Design,"We consider the problem of generating configurations that satisfy physical constraints for optimal material nano-pattern design, where multiple properties need to be simultaneously satisfied. Consider, for example, the trade-off between thermal resistance, electrical conductivity, and mechanical stability needed to design a nano-porous template with optimal thermoelectric efficiency. To that end, we leverage the posterior regularization framework and show that this constraint satisfaction problem can be formulated as sampling from a Gibbs distribution. The main challenges come from the black-box nature of those physical constraints, since they are obtained via solving highly non-linear PDEs.To overcome those difficulties, we introduce Surrogate-based Constrained Langevin dynamics for black-box sampling.We explore two surrogate approaches.The first approach exploits zero-order approximation of gradients in the Langevin Sampling and we refer to it as Zero-Order Langevin.In practice, this approach can be prohibitive since we still need to often query the expensive PDE solvers.The second approach approximates the gradients in the Langevin dynamics with deep neural networks, allowing us an efficient sampling strategy using the surrogate model.We prove the convergence of those two approaches when the target distribution is log-concave and smooth.We show the effectiveness of both approaches in designing optimal nano-porous material configurations, where the goal is to produce nano-pattern templates with low thermal conductivity and reasonable mechanical stability.",We propose surrogate-based Constrained Langevin sampling with application in nano-porous material configuration design.
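The core update behind the samplers in entry 390 above is a Langevin step in which the true black-box gradient of the log-density is replaced by a surrogate (a zero-order estimate or a neural network); a minimal unconstrained sketch of that step, leaving out the constraint handling, is shown below. The function and parameter names are our own.

```python
import numpy as np

def surrogate_langevin_step(x, surrogate_grad_log_p, step_size, rng):
    """One unconstrained Langevin update using a surrogate gradient of log p(x)
    in place of the true black-box gradient."""
    noise = rng.standard_normal(x.shape)
    return x + 0.5 * step_size * surrogate_grad_log_p(x) + np.sqrt(step_size) * noise

# Usage sketch: iterate the step from an initial configuration.
# rng = np.random.default_rng(0)
# x = rng.standard_normal(16)
# for _ in range(1000):
#     x = surrogate_langevin_step(x, my_surrogate_gradient, 1e-3, rng)
```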
391,Biomedical Named Entity Recognition via Reference-Set Augmented Bootstrapping,"We present a weakly-supervised data augmentation approach to improve Named Entity Recognition in a challenging domain: extracting biomedical entities from the scientific literature.First, we train a neural NER (NNER) model over a small seed of fully-labeled examples.Second, we use a reference set of entity names to identify entity mentions with high precision, but low recall, on an unlabeled corpus.Third, we use the NNER model to assign weak labels to the corpus.Finally, we retrain our NNER model iteratively over the augmented training set, including the seed, the reference-set examples, and the weakly-labeled examples, which results in refined labels.We show empirically that this augmented bootstrapping process significantly improves NER performance, and discuss the factors impacting the efficacy of the approach.",Augmented bootstrapping approach combining information from a reference set with iterative refinements of soft labels to improve Named Entity Recognition from biomedical literature. 392,Quantum Semi-Supervised Kernel Learning,"Quantum machine learning methods have the potential to facilitate learning using extremely large datasets.While the availability of data for training machine learning models is steadily increasing, oftentimes it is much easier to collect feature vectors than to obtain the corresponding labels.One of the approaches for addressing this issue is to use semi-supervised learning, which leverages not only the labeled samples, but also unlabeled feature vectors.Here, we present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines.The algorithm uses recent advances in quantum sample-based Hamiltonian simulation to extend the existing Quantum LS-SVM algorithm to handle the semi-supervised term in the loss, while maintaining the same quantum speedup as the Quantum LS-SVM.","We extend quantum SVMs to semi-supervised setting, to deal with the likely problem of many missing class labels in huge datasets."
393,Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations,"Deep neural networks have become the state-of-the-art models in numerous machine learning tasks.However, general guidance for network architecture design is still missing.In our work, we bridge deep neural network design with numerical differential equations.We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations.This finding brings us a brand new perspective on the design of effective deep architectures.We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks.As an example, we propose a linear multi-step architecture which is inspired by the linear multi-step method for solving ordinary differential equations.The LM-architecture is an effective structure that can be used on any ResNet-like networks.In particular, we demonstrate that LM-ResNet and LM-ResNeXt can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters.In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress the original networks while maintaining a similar performance.This can be explained mathematically using the concept of modified equation from numerical analysis.Last but not least, we also establish a connection between stochastic control and noise injection in the training process, which helps to improve the generalization of the networks.Furthermore, by relating the stochastic training strategy to stochastic dynamical systems, we can easily apply stochastic training to networks with the LM-architecture.As an example, we introduce stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.",This paper bridges deep network architectures with numerical (stochastic) differential equations. This new perspective enables new designs of more effective deep neural networks. 394,pix2code: Generating Code from a Graphical User Interface Screenshot,"Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications.In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% accuracy for three different platforms.",CNN and LSTM to generate markup-like code describing graphical user interface images. 395,Hyperbolic Image Embeddings,"Computer vision tasks such as image classification, image retrieval and few-shot learning are currently dominated by Euclidean and spherical embeddings, so that the final decisions about class membership or the degree of similarity are made using linear hyperplanes, Euclidean distances, or spherical geodesic distances.In this work, we demonstrate that in many practical scenarios hyperbolic embeddings provide a better alternative.","We show that hyperbolic embeddings are useful for high-level computer vision tasks, especially for few-shot classification."
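Entry 393 above derives the LM-architecture from linear multi-step ODE solvers: the next feature map mixes the two previous ones plus a residual branch, with a learnable coefficient. Below is a minimal sketch of such a block (our own, with `residual_fn` standing in for any ResNet-style sub-network), not the authors' released code.

```python
import torch
import torch.nn as nn

class LinearMultiStepBlock(nn.Module):
    """Sketch of a linear multi-step residual update:
    x_{n+1} = (1 - k) * x_n + k * x_{n-1} + f(x_n), with k learnable."""
    def __init__(self, residual_fn):
        super().__init__()
        self.residual_fn = residual_fn
        self.k = nn.Parameter(torch.zeros(1))   # learnable mixing coefficient

    def forward(self, x_curr, x_prev):
        return (1 - self.k) * x_curr + self.k * x_prev + self.residual_fn(x_curr)
```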
396,SOM-VAE: Interpretable Discrete Representation Learning on Time Series,"High-dimensional time series are common in many domains.Since human cognition is not optimized to work well in high-dimensional spaces, these areas could benefit from interpretable low-dimensional representations.However, most representation learning algorithms for time series data are difficult to interpret.This is due to non-intuitive mappings from data features to salient properties of the representation and non-smoothness over time.To address this problem, we propose a new representation learning framework building on ideas from interpretable discrete dimensionality reduction and deep generative modeling.This framework allows us to learn discrete representations of time series, which give rise to smooth and interpretable embeddings with superior clustering performance.We introduce a new way to overcome the non-differentiability in discrete representation learning and present a gradient-based version of the traditional self-organizing map algorithm that is more performant than the original.Furthermore, to allow for a probabilistic interpretation of our method, we integrate a Markov model in the representation space.This model uncovers the temporal transition structure, improves clustering performance even further and provides additional explanatory insights as well as a natural representation of uncertainty.We evaluate our model in terms of clustering performance and interpretability on static MNIST data, a time series of linearly interpolated MNIST images, a chaotic Lorenz attractor system with two macro states, as well as on a challenging real world medical time series application on the eICU data set.Our learned representations compare favorably with competitor methods and facilitate downstream tasks on the real world data.","We present a method to learn interpretable representations on time series using ideas from variational autoencoders, self-organizing maps and probabilistic models." 397,Autoregressive Convolutional Neural Networks for Asynchronous Time Series,"We propose Significance-Offset Convolutional Neural Network, a deep convolutional network architecture for regression of multivariate asynchronous time series. The model is inspired by standard autoregressive models and gating mechanisms used in recurrent neural networks. It involves an AR-like weighting system, where the final predictor is obtained as a weighted sum of adjusted regressors, while the weights are data-dependent functions learnt through a convolutional network.The architecture was designed for applications on asynchronous time series and is evaluated on such datasets: a hedge fund proprietary dataset of over 2 million quotes for a credit derivative index, an artificially generated noisy autoregressive series and household electricity consumption dataset. The proposed architecture achieves promising results as compared to convolutional and recurrent neural networks.The code for the numerical experiments and the architecture implementation will be shared online to make the research reproducible.",Convolutional architecture for learning data-dependent weights for autoregressive forecasting of time series.
398,MixUp as Directional Adversarial Training,"MixUp is a data augmentation scheme in which pairs of training samples and their corresponding labels are mixed using linear coefficients.Without label mixing, MixUp becomes a more conventional scheme: input samples are moved but their original labels are retained.Because samples are preferentially moved in the direction of other classes -- which are typically clustered in input space -- we refer to this method as directional adversarial training, or DAT.We show that under two mild conditions, MixUp asymptotically converges to a subset of DAT.We define untied MixUp (UMixUp), a superset of MixUp wherein training labels are mixed with different linear coefficients from those of their corresponding samples.We show that under the same mild conditions, untied MixUp converges to the entire class of DAT schemes.Motivated by the understanding that UMixUp is both a generalization of MixUp and a form of adversarial training, we experiment with different datasets and loss functions to show that UMixUp provides improved performance over MixUp.In short, we present a novel interpretation of MixUp as belonging to a class highly analogous to adversarial training, and on this basis we introduce a simple generalization which outperforms MixUp.","We present a novel interpretation of MixUp as belonging to a class highly analogous to adversarial training, and on this basis we introduce a simple generalization which outperforms MixUp" 399,Differentiable Reasoning over a Virtual Knowledge Base,"We consider the task of answering complex multi-hop questions using a corpus as a virtual knowledge base.In particular, we describe a neural module, DrKIT, that traverses textual data like a virtual KB, softly following paths of relations between mentions of entities in the corpus.At each step the operation uses a combination of sparse-matrix TFIDF indices and maximum inner product search on a special index of contextual representations.This module is differentiable, so the full system can be trained completely end-to-end using gradient based methods, starting from natural language inputs.We also describe a pretraining scheme for the index mention encoder by generating hard negative examples using existing knowledge bases.We show that DrKIT improves accuracy by 9 points on 3-hop questions in the MetaQA dataset, cutting the gap between text-based and KB-based state-of-the-art by 70%.DrKIT is also very efficient, processing up to 10x more queries per second than existing state-of-the-art QA systems.",Differentiable multi-hop access to a textual knowledge base of indexed contextual representations 400,Structured Deep Factorization Machine: Towards General-Purpose Architectures,"In spite of their great success, traditional factorization algorithms typically do not support features, or their complexity scales quadratically with the number of features.On the other hand, neural methods allow large feature sets, but are often designed for a specific application.We propose novel deep factorization methods that allow efficient and flexible feature representation.For example, we enable describing items with natural language with complexity linear in the vocabulary size—this enables prediction for unseen items and avoids the cold start problem.We show that our architecture can generalize some previously published single-purpose neural architectures.Our experiments suggest improved training times and accuracy compared to shallow methods.",Scalable general-purpose factorization 
algorithm -- also helps to circumvent the cold start problem. 401,AuthAR: Concurrent Authoring of Tutorials for AR Assembly Guidance,"Augmented Reality can assist with physical tasks such as object assembly through the use of “situated instructions”.These instructions can be in the form of videos, pictures, text or guiding animations, where the most helpful media among these is highly dependent on both the user and the nature of the task.Our work supports the authoring of AR tutorials for assembly tasks with little overhead beyond simply performing the task itself.The presented system, AuthAR, reduces the time and effort required to build interactive AR tutorials by automatically generating key components of the AR tutorial while the author is assembling the physical pieces.Further, the system guides authors through the process of adding videos, pictures, text and animations to the tutorial.This concurrent assembly and tutorial generation approach allows for authoring of portable tutorials that fit the preferences of different end users.","We present a mixed media assembly tutorial authoring system that streamlines creation of videos, images, text and dynamic instructions in situ." 402,Using Clinical Notes for ICU Management,"Monitoring patients in the ICU is a challenging and high-cost task.Hence, predicting the condition of patients during their ICU stay can help provide better acute care and plan the hospital's resources.There has been continuous progress in machine learning research for ICU management, and most of this work has focused on using time series signals recorded by ICU instruments.In our work, we show that adding clinical notes as another modality improves the performance of the model for three benchmark tasks: in-hospital mortality prediction, modeling decompensation, and length-of-stay forecasting, which play an important role in ICU management.While the time-series data is measured at regular intervals, doctor notes are charted at irregular times, making it challenging to model them together.We propose a method to model them jointly, achieving considerable improvement across benchmark tasks over the baseline time-series model.",We demonstrate that using clinical notes in conjunction with ICU instrument data improves performance on ICU management benchmark tasks 403,Learning Expensive Coordination: An Event-Based Deep RL Approach,"Existing works in deep Multi-Agent Reinforcement Learning mainly focus on coordinating cooperative agents to complete certain tasks jointly.However, in many real-world cases, agents are self-interested, such as employees in a company and clubs in a league.Therefore, the leader, i.e., the manager of the company or the league, needs to provide bonuses to followers for efficient coordination, which we call expensive coordination.The main difficulties of expensive coordination are that i) the leader has to consider the long-term effect and predict the followers' behaviors when assigning bonuses, and ii) the complex interactions between followers make the training process hard to converge, especially when the leader's policy changes with time.In this work, we address this problem through an event-based deep RL approach.Our main contributions are threefold.First, we model the leader's decision-making process as a semi-Markov Decision Process and propose a novel multi-agent event-based policy gradient to learn the leader's long-term policy.Second, we exploit the leader-follower consistency scheme to design a follower-aware module and a 
follower-specific attention module to predict the followers' behaviors and respond accurately to their behaviors.Third, we propose an action abstraction-based policy gradient algorithm to reduce the followers' decision space and thus accelerate the training process of the followers.Experiments in resource collections, navigation, and the predator-prey game reveal that our approach outperforms the state-of-the-art methods dramatically.",We propose an event-based policy gradient to train the leader and an action abstraction policy gradient to train the followers in leader-follower Markov game. 404,Emergence of Compositional Language with Deep Generational Transmission,"Recent work has studied the emergence of language among deep reinforcement learning agents that must collaborate to solve a task.Of particular interest are the factors that cause language to be compositional---i.e., express meaning by combining words which themselves have meaning.Evolutionary linguists have found that in addition to structural priors like those already studied in deep learning, the dynamics of transmitting language from generation to generation contribute significantly to the emergence of compositionality.In this paper, we introduce these cultural evolutionary dynamics into language emergence by periodically replacing agents in a population to create a knowledge gap, implicitly inducing cultural transmission of language.We show that this implicit cultural transmission encourages the resulting languages to exhibit better compositional generalization.",We use cultural transmission to encourage compositionality in languages that emerge from interactions between neural agents. 405,THE LOCAL DIMENSION OF DEEP MANIFOLD,"Based on our observation that there exists a dramatic drop in the singular values of the fully connected layers or a single feature map of the convolutional layer, and that the dimension of the concatenated feature vector almost equals the summation of the dimensions of each feature map, we propose a singular value decomposition based approach to estimate the dimension of the deep manifolds for a typical convolutional neural network, VGG19.We choose three categories from ImageNet, namely Persian Cat, Container Ship and Volcano, and determine the local dimension of the deep manifolds of the deep layers through the tangent space of a target image.Through several augmentation methods, we found that the Gaussian noise method is closer to the intrinsic dimension, as by adding random noise to an image we are moving in an arbitrary dimension, and when the rank of the feature matrix of the augmented images does not increase, we are very close to the local dimension of the manifold.We also estimate the dimension of the deep manifold based on the tangent space for each of the maxpooling layers.Our results show that the dimensions of different categories are close to each other and decline quickly along the convolutional layers and fully connected layers.Furthermore, we show that the dimensions decline quickly inside the Conv5 layer.Our work provides new insights into the intrinsic structure of deep neural networks and helps unveil the inner organization of the black box of deep neural networks.",We propose a SVD based method to explore the local dimension of activation manifold in deep neural networks. 406,Faster and Just As Accurate: A Simple Decomposition for Transformer Models,"Large pre-trained Transformers such as BERT have been tremendously effective for many NLP tasks. 
However, inference in these large-capacity models is prohibitively slow and expensive. Transformers are essentially a stack of self-attention layers which encode each input position using the entire input sequence as its context. However, we find that it may not be necessary to apply this expensive sequence-wide self-attention at all layers. Based on this observation, we propose a decomposition of a pre-trained Transformer that allows the lower layers to process segments of the input independently, enabling parallelism and caching. We show that the information loss due to this decomposition can be recovered in the upper layers with auxiliary supervision during fine-tuning. We evaluate this decomposition with pre-trained BERT models on five different paired-input tasks in question answering, sentence similarity, and natural language inference. Results show that decomposition enables faster inference and significant memory reduction while retaining most of the original performance. We will release the code.","Inference in large Transformers is expensive due to the self-attention in multiple layers. We show a simple decomposition technique can yield a faster, low memory-footprint model that is just as accurate as the original models." 407,Deep Randomized Least Squares Value Iteration,"Exploration while learning representations is one of the main challenges Deep Reinforcement Learning faces today.As the learned representation is dependent on the observed data, the exploration strategy has a crucial role.The popular DQN algorithm has improved significantly the capabilities of Reinforcement Learning algorithms to learn state representations from raw data, yet it uses a naive exploration strategy which is statistically inefficient.The Randomized Least Squares Value Iteration algorithm, on the other hand, explores and generalizes efficiently via linearly parameterized value functions.However, it is based on hand-designed state representation that requires prior engineering work for every environment.In this paper, we propose a Deep Learning adaptation for RLSVI.Rather than using a hand-designed state representation, we use a state representation that is being learned directly from the data by a DQN agent.As the representation is being optimized during the learning process, a key component for the suggested method is a likelihood matching mechanism, which adapts to the changing representations.We demonstrate the importance of the various properties of our algorithm on a toy problem and show that our method outperforms DQN in five Atari benchmarks, reaching competitive results with the Rainbow algorithm.",A Deep Learning adaptation of Randomized Least Squares Value Iteration 408,TrojanNet: Exposing the Danger of Trojan Horse Attack on Neural Networks,"The complexity of large-scale neural networks can lead to poor understanding of their internal details.We show that this opaqueness provides an opportunity for adversaries to embed unintended functionalities into the network in the form of Trojan horse attacks.Our novel framework hides the existence of a malicious network within a benign transport network.Our attack is flexible, easy to execute, and difficult to detect.We prove theoretically that the malicious network's detection is computationally infeasible and demonstrate empirically that the transport network does not compromise its disguise.Our attack exposes an important, previously unknown loophole that unveils a new direction in machine learning security.","Parameters of a trained neural network can be 
permuted to produce a completely separate model for a different task, enabling the embedding of Trojan horse networks inside another network." 409,RPGAN: random paths as a latent space for GAN interpretability,"In this paper, we introduce Random Path Generative Adversarial Network --- an alternative scheme of GANs that can serve as a tool for generative model analysis.While the latent space of a typical GAN consists of input vectors, randomly sampled from the standard Gaussian distribution, the latent space of RPGAN consists of random paths in a generator network.As we show, this design allows to associate different layers of the generator with different regions of the latent space, providing their natural interpretability.With experiments on standard benchmarks, we demonstrate that RPGAN reveals several interesting insights about roles that different layers play in the image generation process.Aside from interpretability, the RPGAN model also provides competitive generation quality and allows efficient incremental learning on new data.","We introduce an alternative GAN design based on random routes in generator, which can serve as a tool for generative models interpretability." 410,A Kolmogorov Complexity Approach to Generalization in Deep Learning,"Deep artificial neural networks can achieve an extremely small difference between training and test accuracies on identically distributed training and test sets, which is a standard measure of generalization.However, the training and test sets may not be sufficiently representative of the empirical sample set, which consists of real-world input samples.When samples are drawn from an underrepresented or unrepresented subset during inference, the gap between the training and inference accuracies can be significant.To address this problem, we first reformulate a classification algorithm as a procedure for searching for a source code that maps input features to classes.We then derive a necessary and sufficient condition for generalization using a universal cognitive similarity metric, namely information distance, based on Kolmogorov complexity.Using this condition, we formulate an optimization problem to learn a more general classification function.To achieve this end, we extend the input features by concatenating encodings of them, and then train the classifier on the extended features.As an illustration of this idea, we focus on image classification, where we use channel codes on the input features as a systematic way to improve the degree to which the training and test sets are representative of the empirical sample set.To showcase our theoretical findings, considering that corrupted or perturbed input features belong to the empirical sample set, but typically not to the training and test sets, we demonstrate through extensive systematic experiments that, as a result of learning a more general classification function, a model trained on encoded input features is significantly more robust to common corruptions, e.g., Gaussian and shot noise, as well as adversarial perturbations, e.g., those found via projected gradient descent, than the model trained on uncoded input features.","We present a theoretical and experimental framework for defining, understanding, and achieving generalization, and as a result robustness, in deep learning by drawing on algorithmic information theory and coding theory." 
411,Bayesian Model Selection for Identifying Markov Equivalent Causal Graphs,"Many approaches to causal discovery are limited by their inability to discriminate between Markov equivalent graphs given only observational data.We formulate causal discovery as a marginal likelihood based Bayesian model selection problem.We adopt a parameterization based on the notion of the independence of causal mechanisms which renders Markov equivalent graphs distinguishable.We complement this with an empirical Bayesian approach to setting priors so that the actual underlying causal graph is assigned a higher marginal likelihood than its alternatives.Adopting a Bayesian approach also allows for straightforward modeling of unobserved confounding variables, for which we provide a variational algorithm to approximate the marginal likelihood, since this desirable feature renders the computation of the marginal likelihood intractable.We believe that the Bayesian approach to causal discovery both allows the rich methodology of Bayesian inference to be used in various difficult aspects of this problem and provides a unifying framework to causal discovery research.We demonstrate promising results in experiments conducted on real data, supporting our modeling approach and our inference methodology.",We cast causal structure discovery as a Bayesian model selection in a way that allows us to discriminate between Markov equivalent graphs to identify the unique causal graph. 412,Lower Bounds for Compressed Sensing with Generative Models,"The goal of compressed sensing is to learn a structured signal from a limited number of noisy linear measurements. In traditional compressed sensing, structure is represented by sparsity in some known basis.Inspired by the success of deep learning in modeling images, recent work has instead considered structure to come from a generative model. We present two results establishing the difficulty of this latter task, showing that existing bounds are tight. First, we provide a lower bound matching the known upper bound for compressed sensing from Lipschitz generative models. In particular, there exists such a function that requires a roughly linear number of measurements for sparse recovery to be possible. This holds even for the more relaxed goal of recovery. Second, we show that generative models generalize sparsity as a representation of structure. 
In particular, we construct a ReLU-based neural network, with a small number of layers and activations per layer, such that its range contains all sparse vectors.",Lower bound for compressed sensing w/ generative models that matches known upper bounds 413,What is image captioning made of?,"We hypothesize that end-to-end neural image captioning systems work seemingly well because they exploit and learn ‘distributional similarity’ in a multimodal feature space, by mapping a test image to similar training images in this space and generating a caption from the same space.To validate our hypothesis, we focus on the ‘image’ side of image captioning, and vary the input image representation but keep the RNN text generation model of a CNN-RNN constant.We propose a sparse bag-of-objects vector as an interpretable representation to investigate our distributional similarity hypothesis.We found that image captioning models are capable of separating structure from noisy input representations; experience virtually no significant performance loss when a high dimensional representation is compressed to a lower dimensional space; cluster images with similar visual and linguistic information together; are heavily reliant on test sets with a similar distribution as the training set; and repeatedly generate the same captions by matching images and ‘retrieving’ a caption in the joint visual-textual space.Our experiments all point to one fact: that our distributional similarity hypothesis holds.We conclude that, regardless of the image representation, image captioning systems seem to match images and generate captions in a learned joint image-text semantic subspace.",This paper presents an empirical analysis on the role of different types of image representations and probes the properties of these representations for the task of image captioning. 414,IncSQL: Training Incremental Text-to-SQL Parsers with Non-Deterministic Oracles,"We present a sequence-to-action parsing approach for the natural language to SQL task that incrementally fills the slots of a SQL query with feasible actions from a pre-defined inventory.To account for the fact that typically there are multiple correct SQL queries with the same or very similar semantics, we draw inspiration from syntactic parsing techniques and propose to train our sequence-to-action models with non-deterministic oracles.We evaluate our models on the WikiSQL dataset and achieve an execution accuracy of 83.7% on the test set, a 2.1% absolute improvement over the models trained with traditional static oracles assuming a single correct target SQL query.When further combined with the execution-guided decoding strategy, our model sets a new state-of-the-art performance at an execution accuracy of 87.1%.",We design incremental sequence-to-action parsers for text-to-SQL task and achieve SOTA results. We further improve by using non-deterministic oracles to allow multiple correct action sequences. 
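One common way to realize the non-deterministic-oracle training described in entry 414 above is to take the minimum loss over all equally acceptable target sequences, letting the model commit to whichever correct derivation it finds easiest. The sketch below illustrates that idea only; it is a hedged toy example with made-up shapes, not the IncSQL implementation.

```python
# Minimal sketch: train against the cheapest of several acceptable action sequences.
import torch
import torch.nn.functional as F

def min_loss_over_oracles(logits, acceptable_targets):
    """logits: (T, V) per-step action scores; acceptable_targets: list of (T,) target tensors."""
    losses = [F.cross_entropy(logits, tgt) for tgt in acceptable_targets]
    return torch.stack(losses).min()   # non-deterministic oracle: any acceptable sequence counts

# toy usage: 4 decoding steps, 5 possible actions, two semantically equivalent orderings
logits = torch.randn(4, 5, requires_grad=True)
targets = [torch.tensor([1, 2, 3, 0]), torch.tensor([2, 1, 3, 0])]
loss = min_loss_over_oracles(logits, targets)
loss.backward()
```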
415,Faster Discovery of Neural Architectures by Searching for Paths in a Large Model,"We propose Efficient Neural Architecture Search, a faster and less expensive approach to automated model design than previous methods.In ENAS, a controller learns to discover neural network architectures by searching for an optimal path within a larger model.The controller is trained with policy gradient to select a path that maximizes the expected reward on the validation set.Meanwhile the model corresponding to the selected path is trained to minimize the cross entropy loss.On the Penn Treebank dataset, ENAS can discover a novel architecture that achieves a test perplexity of 57.8, which is state-of-the-art among automatic model design methods on Penn Treebank.On the CIFAR-10 dataset, ENAS can design novel architectures that achieve a test error of 2.89%, close to the 2.65% achieved by standard NAS.Most importantly, our experiments show that ENAS is more than 10x faster and 100x less resource-demanding than NAS.","An approach that speeds up neural architecture search by 10x, whilst using 100x less computing resource." 416,Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data,"Nowadays, deep neural networks have become the main instrument for machine learning tasks within a wide range of domains, including vision, NLP, and speech.Meanwhile, in an important case of heterogeneous tabular data, the advantage of DNNs over shallow counterparts remains questionable.In particular, there is not sufficient evidence that deep learning machinery allows constructing methods that outperform gradient boosting decision trees, which are often the top choice for tabular problems.In this paper, we introduce Neural Oblivious Decision Ensembles, a new deep learning architecture, designed to work with any tabular data.In a nutshell, the proposed NODE architecture generalizes ensembles of oblivious decision trees, but benefits from both end-to-end gradient-based optimization and the power of multi-layer hierarchical representation learning.With an extensive experimental comparison to the leading GBDT packages on a large number of tabular datasets, we demonstrate the advantage of the proposed NODE architecture, which outperforms the competitors on most of the tasks.We open-source the PyTorch implementation of NODE and believe that it will become a universal framework for machine learning on tabular data.",We propose a new DNN architecture for deep learning on tabular data 417,Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain Adaptation on Person Re-identification,"Person re-identification aims at identifying the same persons' images across different cameras.However, domain diversities between different datasets pose an evident challenge for adapting the re-ID model trained on one dataset to another one.State-of-the-art unsupervised domain adaptation methods for person re-ID transferred the learned knowledge from the source domain by optimizing with pseudo labels created by clustering algorithms on the target domain.Although they achieved state-of-the-art performances, the inevitable label noise caused by the clustering procedure was ignored.Such noisy pseudo labels substantially hinder the model's capability to further improve feature representations on the target domain.In order to mitigate the effects of noisy pseudo labels, we propose an unsupervised framework, Mutual Mean-Teaching, which softly refines the pseudo labels in the target domain to learn better features 
from the target domain via off-line refined hard pseudo labels and on-line refined soft pseudo labels in an alternative training manner. In addition, the common practice is to adopt both the classification loss and the triplet loss jointly for achieving optimal performances in person re-ID models.However, conventional triplet loss cannot work with softly refined labels.To solve this problem, a novel soft softmax-triplet loss is proposed to support learning with soft pseudo triplet labels for achieving the optimal domain adaptation performance.The proposed MMT framework achieves considerable improvements of 14.4%, 18.2%, 13.1% and 16.4% mAP on Market-to-Duke, Duke-to-Market, Market-to-MSMT and Duke-to-MSMT unsupervised domain adaptation tasks.",A framework that conducts online refinement of pseudo labels with a novel soft softmax-triplet loss for unsupervised domain adaptation on person re-identification. 418,Certifying Neural Network Audio Classifiers,"We present the first end-to-end verifier of audio classifiers.Compared to existing methods, our approach enables analysis of both, the entire audio processing stage as well as recurrent neural network architectures.The audio processing is verified using novel convex relaxations tailored to feature extraction operations used in audio while recurrent architectures are certified via a novel binary relaxation for the recurrent unit update.We show the verifier scales to large networks while computing significantly tighter bounds than existing methods for common audio classification benchmarks: on the challenging Google Speech Commands dataset we certify 95% more inputs than the interval approximation, for a perturbation of -90dB.",We present the first approach to certify robustness of neural networks against noise-based perturbations in the audio domain. 419,Robust training with ensemble consensus,"Since deep neural networks are over-parameterized, they can memorize noisy examples.We address such memorizing issue in the presence of annotation noise.From the fact that deep neural networks cannot generalize neighborhoods of the features acquired via memorization, we hypothesize that noisy examples do not consistently incur small losses on the network under a certain perturbation.Based on this, we propose a novel training method called Learning with Ensemble Consensus that prevents overfitting noisy examples by eliminating them using the consensus of an ensemble of perturbed networks.One of the proposed LECs, LTEC outperforms the current state-of-the-art methods on noisy MNIST, CIFAR-10, and CIFAR-100 in an efficient manner.",This work presents a method of generating and using ensembles effectively to identify noisy examples in the presence of annotation noise. 
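The ensemble-consensus idea in entry 419 above amounts to keeping, for each update, only the examples that every perturbed copy of the network agrees are easy (small loss). The snippet below is a hedged sketch of that filtering step with invented thresholds; it is not the authors' exact LTEC procedure.

```python
# Keep an example only if each perturbed network ranks it among its smallest losses.
import numpy as np

def consensus_mask(per_model_losses, keep_ratio=0.8):
    """per_model_losses: (M, N) per-sample losses from M perturbed networks on N examples."""
    n_keep = int(keep_ratio * per_model_losses.shape[1])
    keep = np.ones(per_model_losses.shape[1], dtype=bool)
    for losses in per_model_losses:
        cutoff = np.partition(losses, n_keep - 1)[n_keep - 1]   # n_keep-th smallest loss
        keep &= losses <= cutoff                                 # consensus across all members
    return keep

losses = np.abs(np.random.randn(3, 10))   # 3 perturbed models, 10 examples
print(consensus_mask(losses))              # boolean mask of examples kept for the next update
```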
420,GLAD: Learning Sparse Graph Recovery,"Recovering sparse conditional independence graphs from data is a fundamental problem in machine learning with wide applications.A popular formulation of the problem is a regularized maximum likelihood estimation.Many convex optimization algorithms have been designed to solve this formulation to recover the graph structure.Recently, there has been a surge of interest in learning algorithms directly based on data, and in this case, learning to map the empirical covariance to the sparse precision matrix.However, it is a challenging task in this case, since the symmetric positive definiteness and sparsity of the matrix are not easy to enforce in learned algorithms, and a direct mapping from data to precision matrix may contain many parameters.We propose a deep learning architecture, GLAD, which uses an Alternating Minimization algorithm as our model inductive bias, and learns the model parameters via supervised learning.We show that GLAD learns a very compact and effective model for recovering sparse graphs from data.",A data-driven learning algorithm based on unrolling the Alternating Minimization optimization for sparse graph recovery. 421,Double Neural Counterfactual Regret Minimization,"Counterfactual regret minimization is a fundamental and effective technique for solving Imperfect Information Games.However, the original CFR algorithm only works for discrete states and action spaces, and the resulting strategy is maintained as a tabular representation.Such tabular representation limits the method from being directly applied to large games.In this paper, we propose a double neural representation for the IIGs, where one neural network represents the cumulative regret, and the other represents the average strategy. Such neural representations allow us to avoid manual game abstraction and carry out end-to-end optimization.To make the learning efficient, we also developed several novel techniques including a robust sampling method and a mini-batch Monte Carlo Counterfactual Regret Minimization method, which may be of independent interest. Empirically, on games tractable to tabular approaches, neural strategies trained with our algorithm converge comparably to their tabular counterparts, and significantly outperform those based on deep reinforcement learning. On extremely large games with billions of decision nodes, our approach achieved strong performance while using hundreds of times less memory than the tabular CFR.On head-to-head matches of heads-up no-limit Texas hold'em, our neural agent beat the strong agent ABS-CFR by chips per game.It is a successful application of neural CFR in large games.",We proposed a double neural framework to solve large-scale imperfect information games. 
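To make the "unrolled optimizer as inductive bias" idea behind GLAD (entry 420 above) concrete, the sketch below unrolls a plain proximal-gradient iteration for sparse precision estimation; in a learned version, each step's step size and penalty would become trainable parameters supervised against ground-truth graphs. This is an illustrative stand-in, not GLAD's actual alternating-minimization architecture.

```python
# Unrolled proximal-gradient sketch for sparse precision-matrix recovery.
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_precision_estimate(S, steps=20, eta=0.05, rho=0.05):
    """S: empirical covariance; returns a sparse estimate of the precision matrix."""
    theta = np.eye(S.shape[0])
    for _ in range(steps):                       # in a learned unrolling, eta/rho vary per step
        grad = S - np.linalg.inv(theta)          # gradient of tr(S*Theta) - logdet(Theta)
        theta = soft_threshold(theta - eta * grad, eta * rho)
        theta = 0.5 * (theta + theta.T)          # keep the iterate symmetric; small eta keeps it invertible here
    return theta

S = np.cov(np.random.randn(200, 5), rowvar=False)
print(np.round(unrolled_precision_estimate(S), 2))
```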
422,Correctness Verification of Neural Network,"We present the first verification that a neural network for perception tasks produces a correct output within a specified tolerance for every input of interest.We define correctness relative to a specification which identifies 1) a state space consisting of all relevant states of the world and 2) an observation process that produces neural network inputs from the states of the world.Tiling the state and input spaces with a finite number of tiles, obtaining ground truth bounds from the state tiles and network output bounds from the input tiles, then comparing the ground truth and network output bounds delivers an upper bound on the network output error for any input of interest.Results from two case studies highlight the ability of our technique to deliver tight error bounds for all inputs of interest and show how the error bounds vary over the state and input spaces.",We present the first verification that a neural network for perception tasks produces a correct output within a specified tolerance for every input of interest. 423,Evaluating Lossy Compression Rates of Deep Generative Models,"Deep generative models have achieved remarkable progress in recent years.Despite this progress, quantitative evaluation and comparison of generative models remains one of the important challenges.One of the most popular metrics for evaluating generative models is the log-likelihood.While the direct computation of log-likelihood can be intractable, it has been recently shown that the log-likelihood of some of the most interesting generative models such as variational autoencoders or generative adversarial networks can be efficiently estimated using annealed importance sampling.In this work, we argue that the log-likelihood metric by itself cannot represent all the different performance characteristics of generative models, and propose to use rate distortion curves to evaluate and compare deep generative models.We show that we can approximate the entire rate distortion curve using one single run of AIS for roughly the same computational cost as a single log-likelihood estimate.We evaluate lossy compression rates of different deep generative models such as VAEs, GANs and adversarial autoencoders on MNIST and CIFAR10, and arrive at a number of insights not obtainable from log-likelihoods alone.","We study rate distortion approximations for evaluating deep generative models, and show that rate distortion curves provide more insights about the model than the log-likelihood alone while requiring roughly the same computational cost." 
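The tile-and-bound recipe in entry 422 above can be illustrated on a one-dimensional toy problem: split the state space into tiles, bound the ground truth on each tile, bound the network output on the matching input tile via interval arithmetic, and keep the worst gap. The network, ground-truth map, and tiling below are illustrative assumptions, not the paper's case studies.

```python
# Hedged illustration of tiling + interval bounds for a tiny 1-2-1 ReLU network.
import numpy as np

w1, b1 = np.array([[1.2], [-0.8]]), np.array([0.1, 0.0])
w2, b2 = np.array([[0.9, -1.1]]), np.array([0.05])

def interval_forward(lo, hi):
    l = np.minimum(w1 * lo, w1 * hi).sum(axis=1) + b1    # pre-activation interval bounds
    h = np.maximum(w1 * lo, w1 * hi).sum(axis=1) + b1
    l, h = np.maximum(l, 0), np.maximum(h, 0)             # ReLU is monotone, so bounds pass through
    out_lo = np.minimum(w2 * l, w2 * h).sum(axis=1) + b2
    out_hi = np.maximum(w2 * l, w2 * h).sum(axis=1) + b2
    return out_lo[0], out_hi[0]

tiles = np.linspace(-1.0, 1.0, 21)                        # 20 state tiles on [-1, 1]
worst_gap = 0.0
for lo, hi in zip(tiles[:-1], tiles[1:]):
    truth_lo, truth_hi = lo, hi                           # toy ground truth: the identity map
    net_lo, net_hi = interval_forward(lo, hi)
    worst_gap = max(worst_gap, max(net_hi - truth_lo, truth_hi - net_lo))
print("upper bound on output error:", worst_gap)
```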
424,"Learning to Adapt in Dynamic, Real-World Environments through Meta-Reinforcement Learning","Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations or unseen situations cause proficient but specialized policies to fail at test time.Given that it is impractical to train separate policies to accommodate all situations the agent may see in the real world, this work proposes to learn how to quickly and effectively adapt online to new tasks.To enable sample-efficient learning, we consider learning online adaptation in the context of model-based reinforcement learning.Our approach uses meta-learning to train a dynamics model prior such that, when combined with recent data, this prior can be rapidly adapted to the local context.Our experiments demonstrate online adaptation for continuous control tasks on both simulated and real-world agents.We first show simulated agents adapting their behavior online to novel terrains, crippled body parts, and highly-dynamic environments.""We also illustrate the importance of incorporating online adaptation into autonomous agents that operate in the real world by applying our method to a real dynamic legged millirobot: We demonstrate the agent's learned ability to quickly adapt online to a missing leg, adjust to novel terrains and slopes, account for miscalibration or errors in pose estimation, and compensate for pulling payloads.",A model-based meta-RL algorithm that enables a real robot to adapt online in dynamic environments 425,Latent forward model for Real-time Strategy game planning with incomplete information,"Model-free deep reinforcement learning approaches have shown superhuman performance in simulated environments.During training, these approaches often implicitly construct a latent space that contains key information for decision making.In this paper, we learn a forward model on this latent space and apply it to model-based planning in miniature Real-time Strategy game with incomplete information.We first show that the latent space constructed from existing actor-critic models contains relevant information of the game, and design training procedure to learn forward models.We also show that our learned forward model can predict meaningful future state and is usable for latent space Monte-Carlo Tree Search, in terms of win rates against rule-based agents.","The paper analyzes the latent space learned by model-free approaches in a miniature incomplete information game, trains a forward model in the latent space and apply it to Monte-Carlo Tree Search, yielding positive performance." 
426,A Tensor Analysis on Dense Connectivity via Convolutional Arithmetic Circuits,"Several state of the art convolutional networks rely on inter-connecting different layers to ease the flow of information and gradient between their input and output layers.These techniques have enabled practitioners to successfully train deep convolutional networks with hundreds of layers.Particularly, a novel way of interconnecting layers was introduced as the Dense Convolutional Network and has achieved state of the art performance on relevant image recognition tasks.Despite their notable empirical success, their theoretical understanding is still limited.In this work, we address this problem by analyzing the effect of layer interconnection on the overall expressive power of a convolutional network.In particular, the connections used in DenseNet are compared with other types of inter-layer connectivity.We carry out a tensor analysis of the expressive power of inter-connections in convolutional arithmetic circuits and relate our results to standard convolutional networks.The analysis leads to performance bounds and practical guidelines for the design of ConvACs.The generalization of these results is discussed for other kinds of convolutional networks via generalized tensor decompositions.",We analyze the expressive power of the connections used in DenseNets via tensor decompositions. 427,Deep Reinforcement Learning with Implicit Human Feedback,"We consider the following central question in the field of Deep Reinforcement Learning:How can we use implicit human feedback to accelerate and optimize the training of a DRL algorithm?State-of-the-art methods rely on human feedback being provided explicitly, requiring the active participation of humans.In this work, we investigate an alternative paradigm, where non-expert humans are silently observing the agent interacting with the environment.The human's intrinsic reactions to the agent's behavior are sensed as implicit feedback by placing electrodes on the human scalp and monitoring what are known as event-related electric potentials.The implicit feedback is then used to augment the agent's learning in the RL tasks.We develop a system to obtain and accurately decode the implicit human feedback for state-action pairs in an Atari-type environment.As a baseline contribution, we demonstrate the feasibility of capturing error-potentials of a human observer watching an agent learning to play several different Atari-games using an electroencephalogram cap, and then decoding the signals appropriately and using them as an auxiliary reward function to a DRL algorithm with the intent of accelerating its learning of the game.Building atop the baseline, we then make the following novel contributions in our work: We argue that the definition of error-potentials is generalizable across different environments; specifically we show that error-potentials of an observer can be learned for a specific game, and the definition used as-is for another game without requiring re-learning of the error-potentials. We propose two different frameworks to combine recent advances in DRL into the error-potential based feedback system in a sample-efficient manner, allowing humans to provide implicit feedback while training in the loop, or prior to the training of the RL agent. 
Finally, we scale the implicit human feedback based RL to reasonably complex environments and demonstrate the significance of our approach through synthetic and real user experiments.","We use implicit human feedback (via error-potentials, EEG) to accelerate and optimize the training of a DRL algorithm, in a practical manner." 428,Riemannian Adaptive Optimization Methods,"Several first order stochastic optimization methods commonly used in the Euclidean domain such as stochastic gradient descent, accelerated gradient descent or variance reduced methods have already been adapted to certain Riemannian settings.However, some of the most popular of these optimization tools - namely Adam, Adagrad and the more recent Amsgrad - remain to be generalized to Riemannian manifolds.We discuss the difficulty of generalizing such adaptive schemes to the most agnostic Riemannian setting, and then provide algorithms and convergence proofs for geodesically convex objectives in the particular case of a product of Riemannian manifolds, in which adaptivity is implemented across manifolds in the cartesian product.Our generalization is tight in the sense that choosing the Euclidean space as Riemannian manifold yields the same algorithms and regret bounds as those that were already known for the standard algorithms.Experimentally, we show faster convergence and to a lower train loss value for Riemannian adaptive methods over their corresponding baselines on the realistic task of embedding the WordNet taxonomy in the Poincare ball.","Adapting Adam, Amsgrad, Adagrad to Riemannian manifolds. " 429,Differentiable Hebbian Consolidation for Continual Learning,"Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge.However, catastrophic forgetting poses a grand challenge for neural networks performing such learning process.Thus, neural networks that are deployed in the real world often struggle in scenarios where the data distribution is non-stationary, imbalanced, or not always fully available, i.e., rare edge cases.We propose a Differentiable Hebbian Consolidation model which is composed of a Differentiable Hebbian Plasticity Softmax layer that adds a rapid learning plastic component to the fixed parameters of the softmax output layer; enabling learned representations to be retained for a longer timescale.We demonstrate the flexibility of our method by integrating well-known task-specific synaptic consolidation methods to penalize changes in the slow weights that are important for each target task.We evaluate our approach on the Permuted MNIST, Split MNIST and Vision Datasets Mixture benchmarks, and introduce an imbalanced variant of Permuted MNIST --- a dataset that combines the challenges of class imbalance and concept drift.Our proposed model requires no additional hyperparameters and outperforms comparable baselines by reducing forgetting.",Hebbian plastic weights can behave as a compressed episodic memory storage in neural networks and with the combination of task-specific synaptic consolidation can improve the ability to alleviate catastrophic forgetting in continual learning. 
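Entry 429 above describes adding a rapid, Hebbian plastic component on top of the slow softmax weights. The layer below is a rough sketch of that idea; the trace update, decay, and initialization are illustrative assumptions, not the paper's exact Differentiable Hebbian Plasticity formulation.

```python
# Softmax layer whose effective weights combine slow (SGD-trained) weights with a fast Hebbian trace.
import torch
import torch.nn as nn

class PlasticSoftmaxLayer(nn.Module):
    def __init__(self, d_in, n_classes, decay=0.9):
        super().__init__()
        self.slow = nn.Parameter(torch.randn(d_in, n_classes) * 0.01)   # slow weights, trained by SGD
        self.alpha = nn.Parameter(torch.zeros(d_in, n_classes))         # learned plasticity gains
        self.register_buffer("hebb", torch.zeros(d_in, n_classes))      # fast Hebbian trace (compressed episodic memory)
        self.decay = decay

    def forward(self, h, y_onehot=None):
        logits = h @ (self.slow + self.alpha * self.hebb)
        if y_onehot is not None:   # rapid Hebbian update from pre-/post-synaptic activity
            self.hebb = self.decay * self.hebb + (1 - self.decay) * (h.t() @ y_onehot) / h.shape[0]
        return logits

layer = PlasticSoftmaxLayer(8, 3)
x, y = torch.randn(4, 8), torch.eye(3)[torch.tensor([0, 2, 1, 0])]
print(layer(x, y).shape)   # torch.Size([4, 3])
```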
430,"Deep, Skinny Neural Networks are not Universal Approximators","In order to choose a neural network architecture that will be effective for a particular modeling problem, one must understand the limitations imposed by each of the potential options.These limitations are typically described in terms of information theoretic bounds, or by comparing the relative complexity needed to approximate example functions between different architectures.In this paper, we examine the topological constraints that the architecture of a neural network imposes on the level sets of all the functions that it is able to approximate.This approach is novel for both the nature of the limitations and the fact that they are independent of network depth for a broad family of activation functions.","This paper proves that skinny neural networks cannot approximate certain functions, no matter how deep they are." 431,CLN2INV: Learning Loop Invariants with Continuous Logic Networks,"Program verification offers a framework for ensuring program correctness and therefore systematically eliminating different classes of bugs.Inferring loop invariants is one of the main challenges behind automated verification of real-world programs which often contain many loops.In this paper, we present Continuous Logic Network, a novel neural architecture for automatically learning loop invariants directly from program execution traces.Unlike existing neural networks, CLNs can learn precise and explicit representations of formulas in Satisfiability Modulo Theories for loop invariants from program execution traces.We develop a new sound and complete semantic mapping for assigning SMT formulas to continuous truth values that allows CLNs to be trained efficiently.We use CLNs to implement a new inference system for loop invariants, CLN2INV, that significantly outperforms existing approaches on the popular Code2Inv dataset.CLN2INV is the first tool to solve all 124 theoretically solvable problems in the Code2Inv dataset.Moreover, CLN2INV takes only 1.1 second on average for each problem, which is 40 times faster than existing approaches.We further demonstrate that CLN2INV can even learn 12 significantly more complex loop invariants than the ones required for the Code2Inv dataset.","We introduce the Continuous Logic Network (CLN), a novel neural architecture for automatically learning loop invariants and general SMT formulas." 432,Discrete Flows: Invertible Generative Models of Discrete Data,"While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown.In this paper, we show that flows can in fact be extended to discrete events---and under a simple change-of-variables formula not requiring log-determinant-Jacobian computations.Discrete flows have numerous applications.We display proofs of concept under 2 flow architectures: discrete autoregressive flows enable bidirectionality, allowing for example tokens in text to depend on both left-to-right and right-to-left contexts in an exact language model; and discrete bipartite flows enable parallel generation such as exact nonautoregressive text modeling.",We extend autoregressive flows and RealNVP to discrete data. 
433,Hybed: Hyperbolic Neural Graph Embedding,"Neural embeddings have been used with great success in Natural Language Processing where they provide compact representations that encapsulate word similarity and attain state-of-the-art performance in a range of linguistic tasks.The success of neural embeddings has prompted significant amounts of research into applications in domains other than language.One such domain is graph-structured data, where embeddings of vertices can be learned that encapsulate vertex similarity and improve performance on tasks including edge prediction and vertex labelling.For both NLP and graph-based tasks, embeddings in high-dimensional Euclidean spaces have been learned.However, recent work has shown that the appropriate isometric space for embedding complex networks is not the flat Euclidean space, but a negatively curved hyperbolic space.We present a new concept that exploits these recent insights and propose learning neural embeddings of graphs in hyperbolic space.We provide experimental evidence that hyperbolic embeddings significantly outperform Euclidean embeddings on vertex classification tasks for several real-world public datasets.",We learn neural embeddings of graphs in hyperbolic instead of Euclidean space 434,Generating Wikipedia by Summarizing Long Sequences,"We show that generating English Wikipedia articles can be approached as a multi-document summarization of source documents.We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article.For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder-decoder architectures used in sequence transduction.We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles.When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.",We generate Wikipedia articles abstractively conditioned on source document text. 
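For entry 433 above, the geometry that replaces Euclidean distance is the Poincaré-ball metric; the standard distance formula is shown below on two arbitrary illustrative points inside the unit ball (the points themselves are not from the paper).

```python
# Poincaré-ball distance used by hyperbolic embedding methods.
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / (denom + eps))

u = np.array([0.1, 0.2])
v = np.array([0.7, -0.5])
print(poincare_distance(u, v))   # distances grow rapidly as points approach the ball's boundary
```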
435,SoftAdam: Unifying SGD and Adam for better stochastic gradient descent,"Stochastic gradient descent and Adam are commonly used to optimize deep neural networks, but choosing one usually means making tradeoffs between speed, accuracy and stability.Here we present an intuition for why the tradeoffs exist as well as a method for unifying the two in a continuous way.This makes it possible to control the way models are trained in much greater detail.We show that for default parameters, the new algorithm equals or outperforms SGD and Adam across a range of models for image classification tasks and outperforms SGD for language modeling tasks.",An algorithm for unifying SGD and Adam and empirical study of its performance 436,Geometric Operator Convolutional Neural Network,"The Convolutional Neural Network has been successfully applied in many fields during recent decades; however it lacks the ability to utilize prior domain knowledge when dealing with many realistic problems.We present a framework called Geometric Operator Convolutional Neural Network that uses domain knowledge, wherein the kernel of the first convolutional layer is replaced with a kernel generated by a geometric operator function.This framework integrates many conventional geometric operators, which allows it to adapt to a diverse range of problems.Under certain conditions, we theoretically analyze the convergence and the bound of the generalization errors between GO-CNNs and common CNNs.Although the geometric operator convolution kernels have fewer trainable parameters than common convolution kernels, the experimental results indicate that GO-CNN performs more accurately than common CNN on CIFAR-10/100.Furthermore, GO-CNN reduces dependence on the amount of training examples and enhances adversarial stability.",Traditional image processing algorithms are combined with Convolutional Neural Networks, yielding a new neural network. 437,Deep Learning of Determinantal Point Processes via Proper Spectral Sub-gradient,"Determinantal point processes are an effective tool for delivering diversity in multiple machine learning and computer vision tasks.Under the deep learning framework, DPPs are typically optimized via approximation, which is not straightforward and has some conflict with the diversity requirement.We note, however, that there have been no deep learning paradigms to optimize DPPs directly, since doing so involves matrix inversion, which may result in high computational instability.This fact greatly hinders the wide use of DPPs on some specific objectives where a DPP serves as a term to measure the feature diversity.In this paper, we devise a simple but effective algorithm to address this issue and optimize the DPP term directly, expressed with an L-ensemble in the spectral domain over the Gram matrix, which is more flexible than learning on parametric kernels.By further taking into account some geometric constraints, our algorithm seeks to generate valid sub-gradients of the DPP term in cases when the DPP Gram matrix is not invertible.In this sense, our algorithm can be easily incorporated into multiple deep learning tasks.Experiments show the effectiveness of our algorithm, indicating promising performance for practical learning problems.",We propose a specific back-propagation method via a proper spectral sub-gradient to integrate determinantal point processes into the deep learning framework. 
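A simplified way to see the spectral-domain DPP optimization of entry 437 above is to compute the log-det (L-ensemble) diversity term from the eigenvalues of the feature Gram matrix and clamp tiny eigenvalues so the term and its (sub-)gradient stay finite even when the Gram matrix is rank-deficient. The clamping rule below is a stand-in for the paper's proper spectral sub-gradient, not its exact construction.

```python
# Log-det diversity term computed in the spectral domain, with clamped eigenvalues.
import torch

def logdet_diversity(features, eps=1e-4):
    gram = features @ features.t()                 # L-ensemble built from item features
    eigvals = torch.linalg.eigvalsh(gram)          # spectral domain of the Gram matrix
    return torch.log(torch.clamp(eigvals, min=eps)).sum()

feats = torch.randn(6, 4, requires_grad=True)      # 6 items, 4-d features => rank-deficient Gram
loss = -logdet_diversity(feats)                    # maximize diversity = minimize negative log-det
loss.backward()
print(feats.grad.shape)                            # gradients exist despite the singular Gram matrix
```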
438,A cluster-to-cluster framework for neural machine translation,"The quality of a machine translation system depends largely on the availability of sizable parallel corpora.For the recently popular Neural Machine Translation framework, the data sparsity problem can become even more severe.With a large amount of tunable parameters, the NMT model may overfit to the existing language pairs while failing to understand the general diversity in language.In this paper, we advocate broadcasting every sentence pair as two groups of similar sentences to incorporate more diversity in language expressions, which we name a parallel cluster.Then we define a more general cluster-to-cluster correspondence score and train our model to maximize this score.Since direct maximization is difficult, we derive its lower-bound as our surrogate objective, which is found to generalize point-to-point Maximum Likelihood Estimation and point-to-cluster Reward Augmented Maximum Likelihood algorithms as special cases.Based on this novel objective function, we delineate four potential systems to realize our cluster-to-cluster framework and test their performances in three recognized translation tasks, each task with forward and reverse translation directions.In each of the six experiments, our proposed four parallel systems have consistently proved to outperform the MLE baseline, RL and RAML systems significantly.Finally, we have performed a case study to empirically analyze the strength of the cluster-to-cluster NMT framework.","We invent a novel cluster-to-cluster framework for NMT training, which can better capture both source and target language diversity." 439,Learn to Explain Efficiently via Neural Logic Inductive Learning,"The capability of making interpretable and self-explanatory decisions is essential for developing responsible machine learning systems.In this work, we study the learning-to-explain problem within the scope of inductive logic programming.We propose Neural Logic Inductive Learning, an efficient differentiable ILP framework that learns first-order logic rules that can explain the patterns in the data.In experiments, compared with the state-of-the-art models, we find NLIL is able to search for rules that are 10 times longer while remaining 3 times faster.We also show that NLIL can scale to large image datasets, i.e. Visual Genome, with 1M entities.",An efficient differentiable ILP model that learns first-order logic rules that can explain the data. 
440,Synthesizing Robust Adversarial Examples,"Neural network-based classifiers parallel or exceed human-level accuracy on many common tasks and are used in practical systems.Yet, neural networks are susceptible to adversarial examples, carefully perturbed inputs that cause networks to misbehave in arbitrarily chosen ways.When generated with standard methods, these examples do not consistently fool a classifier in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations.Adversarial examples generated using standard techniques require complete control over direct input to the classifier, which is impossible in many real-world systems.We introduce the first method for constructing real-world 3D objects that consistently fool a neural network across a wide distribution of angles and viewpoints.We present a general-purpose algorithm for generating adversarial examples that are robust across any chosen distribution of transformations.We demonstrate its application in two dimensions, producing adversarial images that are robust to noise, distortion, and affine transformation.Finally, we apply the algorithm to produce arbitrary physical 3D-printed adversarial objects, demonstrating that our approach works end-to-end in the real world.Our results show that adversarial examples are a practical concern for real-world systems.",We introduce a new method for synthesizing adversarial examples robust in the physical world and use it to fabricate the first 3D adversarial objects. 441,Learning Representations of Sets through Optimized Permutations,"Representations of sets are challenging to learn because operations on sets should be permutation-invariant.To this end, we propose a Permutation-Optimisation module that learns how to permute a set end-to-end.The permuted set can be further processed to learn a permutation-invariant representation of that set, avoiding a bottleneck in traditional set models.""We demonstrate our model's ability to learn permutations and set representations with either explicit or implicit supervision on four datasets, on which we achieve state-of-the-art results: number sorting, image mosaics, classification from image mosaics, and visual question answering.","Learn how to permute a set, then encode permuted set with RNN to obtain a set representation." 
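The robustness recipe in entry 440 above (Expectation Over Transformation) optimizes a perturbation against the average loss over sampled transformations of the input. The sketch below uses a toy linear classifier and random pixel shifts as the transformation distribution; the model, epsilon, and step sizes are illustrative assumptions, not the paper's 3D rendering pipeline.

```python
# Minimal EOT-style sketch: make the perturbation effective under random shifts of the image.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in classifier
x = torch.rand(1, 1, 28, 28)
target = torch.tensor([3])                                     # class the attacker wants to force
delta = torch.zeros_like(x, requires_grad=True)

for _ in range(50):
    loss = 0.0
    for _ in range(8):                                         # sample transformations
        shift = torch.randint(-3, 4, (2,))
        x_t = torch.roll(x + delta, shifts=(int(shift[0]), int(shift[1])), dims=(2, 3))
        loss = loss + F.cross_entropy(model(x_t), target)      # expectation over transformations
    loss.backward()
    with torch.no_grad():                                      # push the prediction toward the target under all shifts
        delta -= 0.01 * delta.grad.sign()
        delta.clamp_(-0.1, 0.1)
    delta.grad.zero_()
print(model(x + delta).argmax(dim=1))
```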
442,Jointly Learning to Construct and Control Agents using Deep Reinforcement Learning,"The physical design of a robot and the policy that controls its motion are inherently coupled.However, existing approaches largely ignore this coupling, instead choosing to alternate between separate design and control phases, which requires expert intuition throughout and risks convergence to suboptimal designs.In this work, we propose a method that jointly optimizes over the physical design of a robot and the corresponding control policy in a model-free fashion, without any need for expert supervision.Given an arbitrary robot morphology, our method maintains a distribution over the design parameters and uses reinforcement learning to train a neural network controller.Throughout training, we refine the robot distribution to maximize the expected reward.This results in an assignment to the robot parameters and neural network policy that are jointly optimal.We evaluate our approach in the context of legged locomotion, and demonstrate that it discovers novel robot designs and walking gaits for several different morphologies, achieving performance comparable to or better than that of hand-crafted designs.",Use deep reinforcement learning to design the physical attributes of a robot jointly with a control policy. 443,Mint: Matrix-Interleaving for Multi-Task Learning,"Deep learning enables training of large and flexible function approximators from scratch at the cost of large amounts of data.Applications of neural networks often consider learning in the context of a single task.However, in many scenarios what we hope to learn is not just a single task, but a model that can be used to solve multiple different tasks.Such multi-task learning settings have the potential to improve data efficiency and generalization by sharing data and representations across tasks.However, in some challenging multi-task learning settings, particularly in reinforcement learning, it is very difficult to learn a single model that can solve all the tasks while realizing data efficiency and performance benefits.Learning each of the tasks independently from scratch can actually perform better in such settings, but it does not benefit from the representation sharing that multi-task learning can potentially provide.In this work, we develop an approach that endows a single model with the ability to represent both extremes: joint training and independent training.To this end, we introduce matrix-interleaving, a modification to standard neural network models that projects the activations for each task into a different learned subspace, represented by a per-task and per-layer matrix.By learning these matrices jointly with the other model parameters, the optimizer itself can decide how much to share representations between tasks.On three challenging multi-task supervised learning and reinforcement learning problems with varying degrees of shared task structure, we find that this model consistently matches or outperforms joint training and independent training, combining the best elements of both.","We propose an approach that endows a single model with the ability to represent both extremes: joint training and independent training, which leads to effective multi-task learning." 
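The matrix-interleaving idea of entry 443 above inserts a learned per-task (and per-layer) matrix between shared layers, so the optimizer can move between fully shared and task-specific processing. The module below is a rough sketch with assumed sizes and near-identity initialization, not the paper's exact architecture.

```python
# Shared MLP with a per-task projection matrix applied to the hidden activations.
import torch
import torch.nn as nn

class InterleavedMLP(nn.Module):
    def __init__(self, d_in, d_hidden, d_out, n_tasks):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(d_in, d_hidden), nn.Linear(d_hidden, d_out)
        # one d_hidden x d_hidden matrix per task, initialized at the identity (fully shared behavior)
        self.task_mats = nn.Parameter(torch.eye(d_hidden).repeat(n_tasks, 1, 1))

    def forward(self, x, task_id):
        h = torch.relu(self.fc1(x))
        h = h @ self.task_mats[task_id]     # per-task interleaving of the shared activations
        return self.fc2(h)

net = InterleavedMLP(4, 16, 2, n_tasks=3)
x = torch.randn(5, 4)
print(net(x, task_id=1).shape)   # torch.Size([5, 2])
```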
444,Improving the Generalization of Visual Navigation Policies using Invariance Regularization,"Training agents to operate in one environment often yields overfitted models that are unable to generalize to the changes in that environment.However, due to the numerous variations that can occur in the real world, the agent is often required to be robust in order to be useful.This has not been the case for agents trained with reinforcement learning algorithms.In this paper, we investigate the overfitting of RL agents to the training environments in visual navigation tasks.Our experiments show that deep RL agents can overfit even when trained on multiple environments simultaneously.We propose a regularization method which combines RL with supervised learning methods by adding a term to the RL objective that would encourage the invariance of a policy to variations in the observations that ought not to affect the action taken.The results of this method, called invariance regularization, show an improvement in the generalization of policies to environments not seen during training.","We propose a regularization term that, when added to the reinforcement learning objective, allows the policy to maximize the reward and simultaneously learn to be invariant to the irrelevant changes within the input." 445,Neural Machine Translation with Universal Visual Representation,"Though visual information has been introduced for enhancing neural machine translation, its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations.In this paper, we present a universal visual representation learned over the monolingual corpora with image annotations, which overcomes the lack of large-scale bilingual sentence-image pairs, thereby extending image applicability in NMT.In detail, a group of images with similar topics to the source sentence is retrieved from a light topic-image lookup table learned over the existing sentence-image pairs, and then encoded as image representations by a pre-trained ResNet.An attention layer with a gated weighting is used to fuse the visual information and text information as input to the decoder for predicting target translations.In particular, the proposed method enables the visual information to be integrated into large-scale text-only NMT in addition to the multimodal NMT.Experiments on four widely used translation datasets, including the WMT'16 English-to-Romanian, WMT'14 English-to-German, WMT'14 English-to-French, and Multi30K, show that the proposed approach achieves significant improvements over strong baselines.","This work proposed a universal visual representation for neural machine translation (NMT) using retrieved images with similar topics to source sentence, extending image applicability in NMT." 
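One natural way to realize the invariance term described in entry 444 above is to penalize the divergence between the policy's action distribution on an observation and on a perturbed copy that should not change the action, and add that penalty to the RL objective. The policy network, perturbation, and KL penalty below are illustrative assumptions, not the paper's exact formulation.

```python
# Invariance-regularization sketch: the policy should act the same on irrelevant observation changes.
import torch
import torch.nn as nn
import torch.nn.functional as F

policy = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))   # logits over 4 actions

def invariance_loss(obs, lam=1.0):
    perturbed = obs + 0.05 * torch.randn_like(obs)      # stand-in for an irrelevant visual variation
    p = F.log_softmax(policy(obs), dim=-1)
    q = F.log_softmax(policy(perturbed), dim=-1)
    return lam * F.kl_div(q, p, log_target=True, reduction="batchmean")

obs = torch.randn(8, 16)
total_loss = invariance_loss(obs)   # in practice this is added to the usual RL objective
total_loss.backward()
```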
446,A new dog learns old tricks: RL finds classic optimization algorithms,"This paper introduces a novel framework for learning algorithms to solve online combinatorial optimization problems.Towards this goal, we introduce a number of key ideas from traditional algorithms and complexity theory.First, we draw a new connection between primal-dual methods and reinforcement learning.Next, we introduce the concept of adversarial distributions, which are distributions that encourage the learner to find algorithms that work well in the worst case.We test our new ideas on a number of optimization problems such as the AdWords problem, the online knapsack problem, and the secretary problem.Our results indicate that the models have learned behaviours that are consistent with the traditional optimal algorithms for these problems.","By combining ideas from traditional algorithms design and reinforcement learning, we introduce a novel framework for learning algorithms that solve online combinatorial optimization problems." 447,A Functional Characterization of Randomly Initialized Gradient Descent in Deep ReLU Networks,"Despite their popularity and successes, deep neural networks are poorly understood theoretically and treated as 'black box' systems.Using a functional view of these networks gives us a useful new lens with which to understand them.This allows us to theoretically or experimentally probe properties of these networks, including the effect of standard initializations, the value of depth, the underlying loss surface, and the origins of generalization.One key result is that generalization results from smoothness of the functional approximation, combined with a flat initial approximation.This smoothness increases with the number of units, explaining why massively overparameterized networks continue to generalize well.","A functional approach reveals that flat initialization, preserved by gradient descent, leads to generalization ability." 448,The Effect of Network Depth on the Optimization Landscape,"It is well-known that deeper neural networks are harder to train than shallower ones.In this short paper, we use the eigenvalue spectrum of the Hessian to explore how the loss landscape changes as the network gets deeper, and as residual connections are added to the architecture.Computing a series of quantitative measures on the Hessian spectrum, we show that the Hessian eigenvalue distribution in deeper networks has substantially heavier tails, which makes the network harder to optimize with first-order methods.We show that adding residual connections mitigates this effect substantially, suggesting a mechanism by which residual connections improve training.",Network depth increases outlier eigenvalues in the Hessian. Residual connections mitigate this. 449,HOW IMPORTANT ARE NETWORK WEIGHTS? 
TO WHAT EXTENT DO THEY NEED AN UPDATE?,"In the context of optimization, a gradient of a neural network indicates the amount a specific weight should change with respect to the loss.Therefore, small gradients indicate a good value of the weight that requires no change and can be kept frozen during training.This paper provides an experimental study on the importance of neural network weights, and to what extent they need to be updated.We wish to show that starting from the third epoch, freezing weights which have no informative gradient and are less likely to be changed during training, results in a very slight drop in the overall accuracy.We experiment on the MNIST, CIFAR10 and Flickr8k datasets using several architectures.On CIFAR10, we show that freezing 80% of the VGG19 network parameters from the third epoch onwards results in a 0.24% drop in accuracy, while freezing 50% of Resnet-110 parameters results in a 0.9% drop in accuracy and finally freezing 70% of DenseNet-121 parameters results in a 0.57% drop in accuracy.Furthermore, to experiment with real-life applications, we train an image captioning model with an attention mechanism on the Flickr8k dataset using LSTM networks, freezing 60% of the parameters from the third epoch onwards, resulting in a better BLEU-4 score than the fully trained model.Our source code can be found in the appendix.","An experimental paper that shows the amount of redundant weights that can be frozen from the third epoch onwards, with only a very slight drop in accuracy." 450,Rethinking Curriculum Learning With Incremental Labels And Adaptive Compensation,"Like humans, deep networks learn better when samples are organized and introduced in a meaningful order or curriculum.While conventional approaches to curriculum learning emphasize the difficulty of samples as the core incremental strategy, this forces networks to learn from small subsets of data while introducing pre-computation overheads.In this work, we propose Learning with Incremental Labels and Adaptive Compensation, which introduces a novel approach to curriculum learning.LILAC emphasizes incrementally learning labels instead of incrementally learning difficult samples.It works in two distinct phases: first, in the incremental label introduction phase, we unmask ground-truth labels in fixed increments during training, to improve the starting point from which networks learn.In the adaptive compensation phase, we compensate for failed predictions by adaptively altering the target vector to a smoother distribution.We evaluate LILAC against the closest comparable methods in batch and curriculum learning and label smoothing, across three standard image benchmarks, CIFAR-10, CIFAR-100, and STL-10.We show that our method outperforms batch learning with higher mean recognition accuracy as well as lower standard deviation in performance consistently across all benchmarks.We further extend LILAC to state-of-the-art performance across CIFAR-10 using simple data augmentation while exhibiting label order invariance among other important properties.",A novel approach to curriculum learning by incrementally learning labels and adaptively smoothing labels for mis-classified samples which boosts average performance and decreases standard deviation. 
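The freezing strategy studied in entry 449 above can be sketched as: accumulate gradient magnitudes during the first epochs, then mark parameters with uninformative gradients as frozen. The accumulation rule and threshold below are assumptions for illustration, not the paper's exact criterion.

```python
# Freeze parameters whose accumulated gradient magnitude is small after the first few epochs.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 10))
grad_magnitude = {n: torch.zeros_like(p) for n, p in model.named_parameters()}

def accumulate_gradients():
    for n, p in model.named_parameters():
        if p.grad is not None:
            grad_magnitude[n] += p.grad.abs()

def freeze_uninformative(freeze_fraction=0.5):
    all_mags = torch.cat([g.flatten() for g in grad_magnitude.values()])
    cutoff = all_mags.quantile(freeze_fraction)
    for n, p in model.named_parameters():
        if grad_magnitude[n].max() < cutoff:      # the whole tensor looks uninformative
            p.requires_grad_(False)               # excluded from further updates

# usage: call accumulate_gradients() after each backward pass for the first epochs,
# then call freeze_uninformative() from the third epoch onwards.
```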
451,Learning to Compute Word Embeddings On the Fly,"Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare.Learning representations for words in the long tail of this distribution requires enormous amounts of data.Representations of rare words trained directly on end tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation.We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained end-to-end for the downstream task.We show that this improves results over baselines where embeddings are trained on the end task for reading comprehension, recognizing textual entailment and language modeling.",We propose a method to deal with rare words by computing their embedding from definitions. 452,Deep Generative Classifier for Out-of-distribution Sample Detection,"The capability of reliably detecting out-of-distribution samples is one of the key factors in deploying a good classifier, as the test distribution does not always match the training distribution in most real-world applications.In this work, we propose a deep generative classifier which is effective at detecting out-of-distribution samples as well as classifying in-distribution samples, by integrating the concept of Gaussian discriminant analysis into deep neural networks.Unlike a discriminative classifier, which only focuses on the decision boundary partitioning its latent space into multiple regions, our generative classifier aims to explicitly model class-conditional distributions as separable Gaussian distributions.Thereby, we can define the confidence score by the distance between a test sample and the center of each distribution.Our empirical evaluation on multi-class images and tabular data demonstrates that the generative classifier achieves the best performance in distinguishing out-of-distribution samples, and that it generalizes well to various types of deep neural networks.","This paper proposes a deep generative classifier which is effective at detecting out-of-distribution samples as well as classifying in-distribution samples, by integrating the concept of Gaussian discriminant analysis into deep neural networks."
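Entry 451 above predicts an embedding for a rare word on the fly from auxiliary text such as a dictionary definition. A minimal sketch of that idea, where the mean pooling, the layer sizes, and the class name are illustrative assumptions rather than the paper's exact architecture:

import torch
import torch.nn as nn

class DefinitionEmbedder(nn.Module):
    # Maps a rare word's definition (ids of frequent words) to an embedding,
    # trained end-to-end with the downstream task.
    def __init__(self, vocab_size, dim=300):
        super().__init__()
        self.base = nn.Embedding(vocab_size, dim)   # embeddings of frequent words
        self.proj = nn.Linear(dim, dim)

    def forward(self, definition_ids):                   # shape (batch, definition_length)
        pooled = self.base(definition_ids).mean(dim=1)   # simple mean pooling
        return self.proj(pooled)                         # on-the-fly word embedding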
453,Isolating effects of age with fair representation learning when assessing dementia,"One of the most prevalent symptoms among the elderly population, dementia, can be detected by classifiers trained on linguistic features extracted from narrative transcripts.However, these linguistic features are impacted in a similar but different fashion by the normal aging process.Aging is therefore a confounding factor, whose effects have been hard for machine learning classifiers to isolate.In this paper, we show that deep neural network classifiers can infer ages from linguistic features, which is an entanglement that could lead to unfairness across age groups.We show this problem is caused by undesired activations of v-structures in causality diagrams, and it could be addressed with fair representation learning.We build neural network classifiers that learn low-dimensional representations reflecting the impacts of dementia yet discarding the effects of age.To evaluate these classifiers, we specify a model-agnostic score measuring how classifier results are disentangled from age.Our best models outperform baseline neural network classifiers in disentanglement, while compromising accuracy by as little as 2.56% and 2.25% on DementiaBank and the Famous People dataset respectively.",Show that age confounds cognitive impairment detection + solve with fair representation learning + propose metrics and models. 454,NetScore: Towards Universal Metrics for Large-scale Performance Analysis of Deep Neural Networks for Practical On-Device Edge Usage,"Much of the focus in the design of deep neural networks has been on improving accuracy, leading to more powerful yet highly complex network architectures that are difficult to deploy in practical scenarios. As a result, there has been a recent interest in the design of quantitative metrics for evaluating deep neural networks that account for more than just model accuracy as the sole indicator of network performance. In this study, we continue the conversation towards universal metrics for evaluating the performance of deep neural networks for practical on-device edge usage by introducing NetScore, a new metric designed specifically to provide a quantitative assessment of the balance between accuracy, computational complexity, and network architecture complexity of a deep neural network. In what is one of the largest comparative analyses of deep neural networks in the literature, the NetScore metric, the top-1 accuracy metric, and the popular information density metric were compared across a diverse set of 60 different deep convolutional neural networks for image classification on the ImageNet Large Scale Visual Recognition Challenge dataset. The evaluation results across these three metrics for this diverse set of networks are presented in this study to act as a reference guide for practitioners in the field. ","We introduce NetScore, a new metric designed to provide a quantitative assessment of the balance between accuracy, computational complexity, and network architecture complexity of a deep neural network."
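Entry 454 defines NetScore as a single number balancing accuracy against parameter and compute cost. The sketch below shows one commonly cited form of the metric; the exponents (alpha=2, beta=gamma=0.5) and the unit conventions are assumptions for illustration and should be checked against the paper itself.

import math

def netscore(top1_acc, params_millions, macs_millions, alpha=2.0, beta=0.5, gamma=0.5):
    # Higher is better: rewards accuracy, penalizes parameter count and MACs.
    return 20.0 * math.log10(top1_acc ** alpha / (params_millions ** beta * macs_millions ** gamma))

# e.g. a hypothetical network with 71% top-1 accuracy, 5M parameters, 500M MACs
print(round(netscore(71.0, 5.0, 500.0), 1))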
455,Predict then Propagate: Graph Neural Networks meet Personalized PageRank,"Neural message passing algorithms for semi-supervised classification on graphs have recently achieved great success.However, for classifying a node these methods only consider nodes that are a few propagation steps away, and the size of this utilized neighborhood is hard to extend.In this paper, we use the relationship between graph convolutional networks and PageRank to derive an improved propagation scheme based on personalized PageRank.We utilize this propagation procedure to construct a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP.Our model's training time is on par with or faster than that of previous models, and its number of parameters is on par or lower.It leverages a large, adjustable neighborhood for classification and can be easily combined with any neural network.We show that this model outperforms several recently proposed methods for semi-supervised classification in the most thorough study done so far for GCN-like models.Our implementation is available online.",Personalized propagation of neural predictions (PPNP) improves graph neural networks by separating them into prediction and propagation via personalized PageRank. 456,Should All Cross-Lingual Embeddings Speak English?,"Most recent work on cross-lingual word embeddings is severely Anglocentric.The vast majority of lexicon induction evaluation dictionaries are between English and another language, and the English embedding space is selected by default as the hub when learning in a multilingual setting.With this work, however, we challenge these practices.First, we show that the choice of hub language can significantly impact downstream lexicon induction performance.Second, we both expand the current evaluation dictionary collection to include all language pairs using triangulation, and also create new dictionaries for under-represented languages.Evaluating established methods over all these language pairs sheds light on their suitability and presents new challenges for the field.Finally, in our analysis we identify general guidelines for strong cross-lingual embedding baselines, based on more than just Anglocentric experiments.","The choice of the hub (target) language affects the quality of cross-lingual embeddings, which shouldn't be evaluated only on English-centric dictionaries."
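The propagation scheme in entry 455 separates prediction from propagation: any neural network produces per-node predictions H, which are then smoothed by an approximate personalized-PageRank power iteration. A small numpy sketch under the usual added-self-loop, symmetric-normalization assumptions (the teleport probability alpha and the number of steps K are free parameters):

import numpy as np

def appnp_propagate(A, H, alpha=0.1, K=10):
    # A: (n, n) adjacency matrix; H: (n, c) predictions from any neural network.
    A_hat = A + np.eye(len(A))                 # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))   # D^-1/2 (A + I) D^-1/2
    Z = H
    for _ in range(K):                         # truncated personalized PageRank
        Z = (1 - alpha) * A_norm @ Z + alpha * H
    return Z                                   # row-wise softmax would give class probabilities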
457,The divergences minimized by non-saturating GANs,"Interpreting generative adversarial network training as approximate divergence minimization has been theoretically insightful, has spurred discussion, and has lead to theoretically and practically interesting extensions such as f-GANs and Wasserstein GANs.For both classic GANs and f-GANs, there is an original variant of training and a ""non-saturating"" variant which uses an alternative form of generator update.The original variant is theoretically easier to study, but the alternative variant frequently performs better and is recommended for use in practice.The alternative generator update is often regarded as a simple modification to deal with optimization issues, and it appears to be a common misconception that the two variants minimize the same divergence.In this short note we derive the divergences approximately minimized by the original and alternative variants of GAN and f-GAN training.This highlights important differences between the two variants.For example, we show that the alternative variant of KL-GAN training actually minimizes the reverse KL divergence, and that the alternative variant of conventional GAN training minimizes a ""softened"" version of the reverse KL.We hope these results may help to clarify some of the theoretical discussion surrounding the divergence minimization view of GAN training.","Typical GAN training doesn't optimize Jensen-Shannon, but something like a reverse KL divergence." 458,"Buy 4 REINFORCE Samples, Get a Baseline for Free!","REINFORCE can be used to train models in structured prediction settings to directly optimize the test-time objective.However, the common case of sampling one prediction per datapoint is data-inefficient.We show that by drawing multiple samples per datapoint, we can learn with significantly less data, as we freely obtain a REINFORCE baseline to reduce variance.Additionally we derive a REINFORCE estimator with baseline, based on sampling without replacement.Combined with a recent technique to sample sequences without replacement using Stochastic Beam Search, this improves the training procedure for a sequence model that predicts the solution to the Travelling Salesman Problem.","We show that by drawing multiple samples (predictions) per input (datapoint), we can learn with less data as we freely obtain a REINFORCE baseline." 459,Automatic Goal Generation for Reinforcement Learning Agents,"Reinforcement learning is a powerful technique to train an agent to perform a task. However, an agent that is trained using RL is only capable of achieving the single task that is specified via its reward function. Such an approach does not scale well to settings in which an agent needs to perform a diverse set of tasks, such as navigating to varying positions in a room or moving objects to varying locations. Instead, we propose a method that allows an agent to automatically discover the range of tasks that it is capable of performing in its environment. We use a generator network to propose tasks for the agent to try to achieve, each task being specified as reaching a certain parametrized subset of the state-space. The generator network is optimized using adversarial training to produce tasks that are always at the appropriate level of difficulty for the agent. Our method thus automatically produces a curriculum of tasks for the agent to learn. 
We show that, by using this framework, an agent can efficiently and automatically learn to perform a wide set of tasks without requiring any prior knowledge of its environment.Our method can also learn to achieve tasks with sparse rewards, which pose significant challenges for traditional RL methods.",We efficiently solve multi-task problems with an automatic curriculum generation algorithm based on a generative model that tracks the learning agent's performance. 460,Are adversarial examples inevitable?,"A wide range of defenses have been proposed to harden neural networks against adversarial attacks.However, a pattern has emerged in which the majority of adversarial defenses are quickly broken by new attacks. Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable?This paper analyzes adversarial examples from a theoretical perspective, and identifies fundamental bounds on the susceptibility of a classifier to adversarial attacks. We show that, for certain classes of problems, adversarial examples are inescapable. Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier's robustness against adversarial examples.","This paper identifies classes of problems for which adversarial examples are inescapable, and derives fundamental bounds on the susceptibility of any classifier to adversarial examples. " 461,WRPN: Wide Reduced-Precision Networks,"For computer vision applications, prior works have shown the efficacy of reducing the numeric precision of model parameters in deep neural networks.Activation maps, however, occupy a large memory footprint during both the training and inference steps when using mini-batches of inputs.One way to reduce this large memory footprint is to reduce the precision of activations.However, past works have shown that reducing the precision of activations hurts model accuracy.We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy.We reduce the precision of activation maps and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network.As a result, one can significantly improve the execution efficiency and speed up the training and inference process with appropriate hardware support.We call our scheme WRPN -- wide reduced-precision networks.We report results showing that the WRPN scheme achieves better accuracy on the ILSVRC-12 dataset than previously reported reduced-precision networks, while being computationally less expensive.","Lowering precision (to 4-bits, 2-bits and even binary) and widening the filter banks gives networks as accurate as those obtained with FP32 weights and activations."
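Entry 461 trains with reduced-precision activations while widening the filter banks. A minimal sketch of k-bit activation quantization with a straight-through gradient, which is one common way to realize such schemes; the [0, 1] clipping range and the straight-through choice are assumptions rather than the paper's exact recipe.

import torch

def quantize_activations(x, bits=4):
    # Clip to [0, 1] and round to 2**bits - 1 uniform levels.
    levels = 2 ** bits - 1
    xq = torch.round(torch.clamp(x, 0.0, 1.0) * levels) / levels
    # Straight-through estimator: forward pass uses xq, backward pass treats it as identity.
    return x + (xq - x).detach()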
462,Semi-Supervised Named Entity Recognition with CRF-VAEs,"We investigate methods for semi-supervised learning of a neural linear-chain conditional random field for Named Entity Recognition by treating the tagger as the amortized variational posterior in a generative model of text given tags.We first illustrate how to incorporate a CRF in a VAE, enabling end-to-end training on semi-supervised data.We then investigate a series of increasingly complex deep generative models of tokens given tags enabled by end-to-end optimization, comparing the proposed models against supervised and strong CRF SSL baselines on the Ontonotes5 NER dataset.We find that our best proposed model consistently improves performance by F1 in low- and moderate-resource regimes and easily addresses degenerate model behavior in a more difficult, partially supervised setting.",We embed a CRF in a VAE of tokens and NER tags for semi-supervised learning and show improvements in low-resource settings. 463,ProxQuant: Quantized Neural Networks via Proximal Operators,"To make deep neural networks feasible in resource-constrained environments, it is beneficial to quantize models by using low-precision weights.One common technique for quantizing neural networks is the straight-through gradient method, which enables back-propagation through the quantization mapping.Despite its empirical success, little is understood about why the straight-through gradient method works.Building upon a novel observation that the straight-through gradient method is in fact identical to the well-known Nesterov’s dual-averaging algorithm on a quantization constrained optimization problem, we propose a more principled alternative approach, called ProxQuant , that formulates quantized network training as a regularized learning problem instead and optimizes it via the prox-gradient method.ProxQuant does back-propagation on the underlying full-precision vector and applies an efficient prox-operator in between stochastic gradient steps to encourage quantizedness.For quantizing ResNets and LSTMs, ProxQuant outperforms state-of-the-art results on binary quantization and is on par with state-of-the-art on multi-bit quantization.For binary quantization, our analysis shows both theoretically and experimentally that ProxQuant is more stable than the straight-through gradient method, challenging the indispensability of the straight-through gradient method and providing a powerful alternative.",A principled framework for model quantization using the proximal gradient method. 
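Entry 463 alternates stochastic gradient steps on full-precision weights with a proximal step that pulls weights toward the quantized set. For binary quantization, one plausible form of that prox step toward {-1, +1} is sketched below; the regularization strength lam and its schedule are assumptions for illustration.

import torch

def binary_prox_step(w, lam):
    # Proximal map of lam * dist(w, {-1, +1}): move each weight toward its
    # nearest binary value by at most lam (projects exactly as lam grows large).
    target = torch.sign(w)
    return w + torch.clamp(target - w, min=-lam, max=lam)

# Usage sketch: after each optimizer.step(), set w.data = binary_prox_step(w.data, lam),
# with lam increased gradually over training.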
464,Adversarial Example Detection and Classification with Asymmetrical Adversarial Training,"The vulnerabilities of deep neural networks against adversarial examples have become a significant concern for deploying these models in sensitive domains.Devising a definitive defense against such attacks has proven to be challenging, and methods relying on detecting adversarial samples are only valid when the attacker is oblivious to the detection mechanism.In this paper, we consider the adversarial detection problem under the robust optimization framework.We partition the input space into subspaces and train adversarially robust subspace detectors using asymmetrical adversarial training (AAT).The integration of the classifier and detectors presents a detection mechanism that provides a performance guarantee against the adversary it considers.We demonstrate that AAT promotes the learning of class-conditional distributions, which further gives rise to generative detection/classification approaches that are both robust and more interpretable.We provide comprehensive evaluations of the above methods, and demonstrate their competitive performances and compelling properties on adversarial detection and robust classification problems.","A new generative modeling technique based on asymmetrical adversarial training, and its applications to adversarial example detection and robust classification" 465,Meta-learning curiosity algorithms,"Exploration is a key component of successful reinforcement learning, but optimal approaches are computationally intractable, so researchers have focused on hand-designing mechanisms based on exploration bonuses and intrinsic reward, some inspired by curious behavior in natural systems. In this work, we propose a strategy for encoding curiosity algorithms as programs in a domain-specific language and searching, during a meta-learning phase, for algorithms that enable RL agents to perform well in new domains. Our rich language of programs, which can combine neural networks with other building blocks including nearest-neighbor modules and can choose its own loss functions, enables the expression of highly generalizable programs that perform well in domains as disparate as grid navigation with image input, acrobot, lunar lander, ant and hopper. To make this approach feasible, we develop several pruning techniques, including learning to predict a program's success based on its syntactic properties.We demonstrate the effectiveness of the approach empirically, finding curiosity strategies that are similar to those in the published literature, as well as novel strategies that are competitive with them and generalize well.",Meta-learning curiosity algorithms by searching through a rich space of programs yields novel mechanisms that generalize across very different reinforcement-learning domains.
466,Measuring Compositionality in Representation Learning,"Many machine learning algorithms represent input data with vector embeddings or discrete codes.When inputs exhibit compositional structure, it is natural to ask whether this compositional structure is reflected in the inputs’ learned representations.While the assessment of compositionality in languages has received significant attention in linguistics and adjacent fields, the machine learning literature lacks general-purpose tools for producing graded measurements of compositional structure in more general representation spaces.We describe a procedure for evaluating compositionality by measuring how well the true representation-producing model can be approximated by a model that explicitly composes a collection of inferred representational primitives.We use the procedure to provide formal and empirical characterizations of compositional structure in a variety of settings, exploring the relationship between compositionality and learning dynamics, human judgments, representational similarity, and generalization.","This paper proposes a simple procedure for evaluating compositional structure in learned representations, and uses the procedure to explore the role of compositionality in four learning problems." 467,RNA Secondary Structure Prediction By Learning Unrolled Algorithms,"In this paper, we propose an end-to-end deep learning model, called E2Efold, for RNA secondary structure prediction which can effectively take into account the inherent constraints in the problem.The key idea of E2Efold is to directly predict the RNA base-pairing matrix, and use an unrolled constrained programming algorithm as a building block in the architecture to enforce constraints.With comprehensive experiments on benchmark datasets, we demonstrate the superior performance of E2Efold: it predicts significantly better structures compared to previous SOTA and runs as efficiently as the fastest algorithms in terms of inference time.","A DL model for RNA secondary structure prediction, which uses an unrolled algorithm in the architecture to enforce constraints." 468,Eligibility traces provide a data-inspired alternative to backpropagation through time,"Learning in recurrent neural networks is most often implemented by gradient descent using backpropagation through time, but BPTT does not accurately model how the brain learns.Instead, many experimental results on synaptic plasticity can be summarized as three-factor learning rules involving eligibility traces of the local neural activity and a third factor.We present here eligibility propagation (e-prop), a new factorization of the loss gradients in RNNs that fits the framework of three-factor learning rules when derived for biophysical spiking neuron models.When tested on the TIMIT speech recognition benchmark, it is competitive with BPTT both for training artificial LSTM networks and spiking RNNs.Further analysis suggests that the diversity of learning signals and the consideration of slow internal neural dynamics are decisive for the learning efficiency of e-prop.",We present eligibility propagation as an alternative to BPTT that is compatible with experimental data on synaptic plasticity and competes with BPTT on machine learning benchmarks.
469,Learning Finite State Representations of Recurrent Policy Networks,"Recurrent neural networks are an effective representation of control policies for a wide range of reinforcement and imitation learning problems.RNN policies, however, are particularly difficult to explain, understand, and analyze due to their use of continuous-valued memory vectors and observation features.In this paper, we introduce a new technique, Quantized Bottleneck Insertion, to learn finite representations of these vectors and features.The result is a quantized representation of the RNN that can be analyzed to improve our understanding of memory use and general behavior.We present results of this approach on synthetic environments and six Atari games.The resulting finite representations are surprisingly small in some cases, using as few as 3 discrete memory states and 10 observations for a perfect Pong policy.We also show that these finite policy representations lead to improved interpretability.",Extracting a finite state machine from a recurrent neural network via quantization for the purpose of interpretability with experiments on Atari. 470,On-Policy Trust Region Policy Optimisation with Replay Buffers,"Building upon the recent success of deep reinforcement learning methods, we investigate the possibility of on-policy reinforcement learning improvement by reusing the data from several consecutive policies.On-policy methods bring many benefits, such as ability to evaluate each resulting policy.However, they usually discard all the information about the policies which existed before.In this work, we propose adaptation of the replay buffer concept, borrowed from the off-policy learning setting, to the on-policy algorithms.To achieve this, the proposed algorithm generalises the Q-, value and advantage functions for data from multiple policies.The method uses trust region optimisation, while avoiding some of the common problems of the algorithms such as TRPO or ACKTR: it uses hyperparameters to replace the trust region selection heuristics, as well as the trainable covariance matrix instead of the fixed one.In many cases, the method not only improves the results comparing to the state-of-the-art trust region on-policy learning algorithms such as ACKTR and TRPO, but also with respect to their off-policy counterpart DDPG. ",We investigate the theoretical and practical evidence of on-policy reinforcement learning improvement by reusing the data from several consecutive policies. 
471,Convolutional Neural Networks on Non-uniform Geometrical Signals Using Euclidean Spectral Transformation,"Convolutional Neural Networks have been successful in processing data signals that are uniformly sampled in the spatial domain.However, most data signals do not natively exist on a grid, and in the process of being sampled onto a uniform physical grid suffer significant aliasing error and information loss.Moreover, signals can exist in different topological structures as, for example, points, lines, surfaces and volumes.It has been challenging to analyze signals with mixed topologies.To this end, we develop mathematical formulations for Non-Uniform Fourier Transforms to directly, and optimally, sample nonuniform data signals of different topologies defined on a simplex mesh into the spectral domain with no spatial sampling error.The spectral transform is performed in the Euclidean space, which removes the translation ambiguity from works on the graph spectrum.Our representation has four distinct advantages: the process causes no spatial sampling error during initial sampling, the generality of this approach provides a unified framework for using CNNs to analyze signals of mixed topologies, it allows us to leverage state-of-the-art backbone CNN architectures for effective learning without having to design a particular architecture for a particular data structure in an ad-hoc fashion, and the representation allows weighted meshes where each element has a different weight indicating local properties.We achieve good results on-par with state-of-the-art for 3D shape retrieval task, and new state-of-the-art for point cloud to surface reconstruction task.","We use non-Euclidean Fourier Transformation of shapes defined by a simplicial complex for deep learning, achieving significantly better results than point-based sampling techiques used in current 3D learning literature." 
472,Causal Discovery with Reinforcement Learning,"Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences.Traditional score-based causal discovery methods rely on various local heuristics to search for a Directed Acyclic Graph according to a predefined score function.While these methods, e.g., greedy equivalence search, may have attractive results with infinite samples and certain model assumptions, they are less satisfactory in practice due to finite data and possible violation of assumptions.Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning to search for the DAG with the best score.Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards.The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity.In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy and our final output is the graph, among all graphs generated during training, that achieves the best reward.We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows for a flexible score function under the acyclicity constraint.",We apply reinforcement learning to score-based causal discovery and achieve promising results on both synthetic and real datasets 473,Learning Topics using Semantic Locality,"Topic modeling discovers the latent topic probabilities of given text documents.To generate more meaningful topics that better represent the given documents, we propose a universal method which can be used in the data preprocessing stage.The method consists of three steps.First, it generates words/word-pairs from every single document.Second, it applies a two-way parallel TF-IDF algorithm to the words/word-pairs for semantic filtering.Third, it uses the k-means algorithm to merge word pairs that have similar semantic meaning.Experiments are carried out on the Open Movie Database, Reuters Dataset and 20NewsGroup Dataset, using the mean Average Precision score as the evaluation metric.We compare our results with other state-of-the-art topic models, such as Latent Dirichlet Allocation and traditional Restricted Boltzmann Machines.Our proposed data preprocessing can improve the generated topic accuracy by up to 12.99%.How the number of clusters and the number of word pairs should be adjusted for different types of text documents is also discussed.",We propose a universal method which can be used in the data preprocessing stage to generate more meaningful topics that better represent the given documents 474,Learning to Transfer via Modelling Multi-level Task Dependency,"Multi-task learning has been successful in modeling multiple related tasks with large, carefully curated labeled datasets.By leveraging the relationships among different tasks, a multi-task learning framework can improve the performance significantly.However, most of the existing works are under the assumption that the predefined tasks are related to each other.Thus, their real-world applications are limited, because real-world problems are rarely closely related.Besides, the understanding of relationships among tasks has been ignored by most of the current methods.Along this line, we propose a novel multi-task learning framework - Learning To Transfer Via Modelling Multi-level Task
Dependency, which constructs attention-based dependency relationships among different tasks.At the same time, the dependency relationships can be used to guide what knowledge should be transferred, thus also improving the performance of our model.To show the effectiveness of our model and the importance of considering multi-level dependency relationships, we conduct experiments on several public datasets, on which we obtain significant improvements over current methods.",We propose a novel multi-task learning framework which extracts multi-view dependency relationships automatically and uses them to guide the knowledge transfer among different tasks. 475,A quantifiable testing of global translational invariance in Convolutional and Capsule Networks," We design simple and quantifiable testing of global translation-invariance in deep learning models trained on the MNIST dataset.Experiments on convolutional and capsule neural networks show that both models have poor performance in dealing with global translation-invariance; however, the performance improves when using data augmentation.Although the capsule network is better on the MNIST testing dataset, the convolutional neural network generally has better performance on translation-invariance.",Testing of global translational invariance in Convolutional and Capsule Networks 476,Non-Gaussian processes and neural networks at finite widths,"Gaussian processes are ubiquitous in nature and engineering.A case in point is a class of neural networks in the infinite-width limit, whose priors correspond to Gaussian processes.Here we perturbatively extend this correspondence to finite-width neural networks, yielding non-Gaussian processes as priors.The methodology developed herein allows us to track the flow of preactivation distributions by progressively integrating out random variables from lower to higher layers, reminiscent of renormalization-group flow.We further develop a perturbative prescription to perform Bayesian inference with weakly non-Gaussian priors.",We develop an analytical method to study Bayesian inference of finite-width neural networks and find that the renormalization-group flow picture naturally emerges. 477,Distillation $\approx$ Early Stopping?
Harvesting Dark Knowledge Utilizing Anisotropic Information Retrieval For Overparameterized NN,"Distillation is a method to transfer knowledge from one model to another and often achieves higher accuracy with the same capacity.In this paper, we aim to provide a theoretical understanding of what mainly helps with distillation.Our answer is ""early stopping"".Assuming that the teacher network is overparameterized, we argue that the teacher network is essentially harvesting dark knowledge from the data via early stopping.This can be justified by a new concept, Anisotropic Information Retrieval (AIR), which means that the neural network tends to fit the informative information first and the non-informative information later.Motivated by the recent development on theoretically analyzing overparameterized neural networks, we can characterize AIR by the eigenspace of the Neural Tangent Kernel.AIR facilitates a new understanding of distillation.With that, we further utilize distillation to refine noisy labels.We propose a self-distillation algorithm to sequentially distill knowledge from the network in the previous training epoch to avoid memorizing the wrong labels.We also demonstrate, both theoretically and empirically, that self-distillation can benefit from more than just early stopping.Theoretically, we prove convergence of the proposed algorithm to the ground truth labels for randomly initialized overparameterized neural networks in terms of l2 distance, while the previous result was on convergence in 0-1 loss.The theoretical result ensures that the learned neural network enjoys a margin on the training data, which leads to better generalization.Empirically, we achieve better testing accuracy and entirely avoid early stopping, which makes the algorithm more user-friendly.","We theoretically understand the regularization effect of distillation and show that early stopping is essential in this process. From this perspective, we develop a distillation method for learning with corrupted labels with theoretical guarantees." 478,Lifelong Learning with Output Kernels,"Lifelong learning poses considerable challenges in terms of effectiveness and overall computational tractability for real-time performance. This paper addresses continuous lifelong multitask learning by jointly re-estimating the inter-task relations and the per-task model parameters at each round, assuming data arrives in a streaming fashion.We propose a novel algorithm, called OOKLA, for the lifelong learning setting.To avoid the memory explosion, we propose robust budget-limited versions of the proposed algorithm that efficiently utilize the relationship between the tasks to bound the total number of representative examples in the support set. In addition, we propose a two-stage budgeted scheme for efficiently tackling the task-specific budget constraints in lifelong learning.Our empirical results over three datasets indicate superior AUC performance for OOKLA and its budget-limited cousins over strong baselines.",A novel approach for online lifelong learning using output kernels.
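The self-distillation procedure in entry 477 distills knowledge from the network's own predictions at the previous training epoch instead of fully trusting possibly corrupted labels. A minimal sketch of such a loss; the mixing weight and the use of soft targets are illustrative assumptions, not the paper's exact objective.

import torch
import torch.nn.functional as F

def self_distillation_loss(logits, noisy_labels, prev_epoch_logits, alpha=0.7):
    # Mix the (possibly corrupted) hard labels with soft targets taken from the
    # same network at the previous epoch, to avoid memorizing the wrong labels.
    ce = F.cross_entropy(logits, noisy_labels)
    kd = F.kl_div(F.log_softmax(logits, dim=1),
                  F.softmax(prev_epoch_logits.detach(), dim=1),
                  reduction="batchmean")
    return alpha * kd + (1 - alpha) * ce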
479,Construction-Planning Models in Minecraft,"Minecraft is a videogame that offers many interesting challenges for AI systems.In this paper, we focus in construction scenarios where an agent must build a complex structure made of individual blocks.As higher-level objects are formed of lower-level objects, the construction can naturally be modelled as a hierarchical task network.We model a house-construction scenario in classical and HTN planning and compare the advantages and disadvantages of both kinds of models.",We model a house-construction scenario in Minecraft in classical and HTN planning and compare the advantages and disadvantages of both kinds of models. 480,Pragmatic Evaluation of Adversarial Examples in Natural Language,"Attacks on natural language models are difficult to compare due to their different definitions of what constitutes a successful attack.We present a taxonomy of constraints to categorize these attacks.For each constraint, we present a real-world use case and a way to measure how well generated samples enforce the constraint.We then employ our framework to evaluate two state-of-the art attacks which fool models with synonym substitution.These attacks claim their adversarial perturbations preserve the semantics and syntactical correctness of the inputs, but our analysis shows these constraints are not strongly enforced.For a significant portion of these adversarial examples, a grammar checker detects an increase in errors.Additionally, human studies indicate that many of these adversarial examples diverge in semantic meaning from the input or do not appear to be human-written.Finally, we highlight the need for standardized evaluation of attacks that share constraints.Without shared evaluation metrics, it is up to researchers to set thresholds that determine the trade-off between attack quality and attack success.We recommend well-designed human studies to determine the best threshold to approximate human judgement.","We present a framework for evaluating adversarial examples in natural language processing and demonstrate that generated adversarial examples are often not semantics-preserving, syntactically correct, or non-suspicious." 481,INTERPRETATION OF NEURAL NETWORK IS FRAGILE,"In order for machine learning to be deployed and trusted in many applications, it is crucial to be able to reliably explain why the machine learning algorithm makes certain predictions.For example, if an algorithm classifies a given pathology image to be a malignant tumor, then the doctor may need to know which parts of the image led the algorithm to this classification.How to interpret black-box predictors is thus an important and active area of research. 
A fundamental question is: how much can we trust the interpretation itself?In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptually indistinguishable inputs with the same predicted label can be assigned very different interpretations.We systematically characterize the fragility of the interpretations generated by several widely-used feature-importance interpretation methods on ImageNet and CIFAR-10.Our experiments show that even small random perturbations can change the feature importance, and new systematic perturbations can lead to dramatically different interpretations without changing the label.We extend these results to show that interpretations based on exemplars are similarly fragile.Our analysis of the geometry of the Hessian matrix gives insight into why fragility could be a fundamental challenge to the current interpretation approaches.",Can we trust a neural network's explanation for its prediction? We examine the robustness of several popular notions of interpretability of neural networks including saliency maps and influence functions and design adversarial examples against them. 482,Stochastic AUC Maximization with Deep Neural Networks,"Stochastic AUC maximization has garnered increasing interest due to its better fit to imbalanced data classification.However, existing works are limited to stochastic AUC maximization with a linear predictive model, which restricts its predictive power when dealing with extremely complex data.In this paper, we consider the stochastic AUC maximization problem with a deep neural network as the predictive model.Building on the saddle point reformulation of a surrogate loss of AUC, the problem can be cast into a min-max problem.The main contribution made in this paper is to make stochastic AUC maximization more practical for deep neural networks and big data, with theoretical insights as well.In particular, we propose to explore the Polyak-Łojasiewicz (PL) condition that has been proved and observed in deep learning, which enables us to develop new stochastic algorithms with an even faster convergence rate and a more practical step size scheme.An AdaGrad-style algorithm is also analyzed under the PL condition with an adaptive convergence rate.Our experimental results demonstrate the effectiveness of the proposed algorithms.","The paper designs two algorithms for the stochastic AUC maximization problem with state-of-the-art complexities when using a deep neural network as the predictive model, which are also verified by empirical studies."
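A simple way to quantify the fragility discussed in entry 481 is to compare the top-k most important features before and after a small input perturbation. The sketch below uses a tiny random ReLU network and plain gradient saliency; the toy model and the top-k overlap metric are illustrative assumptions, not the paper's exact protocol.

import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(128, 784)), rng.normal(size=(10, 128))

def saliency(x):
    h = np.maximum(W1 @ x, 0.0)             # hidden ReLU activations
    c = int(np.argmax(W2 @ h))              # predicted class
    grad = W1.T @ ((h > 0) * W2[c])         # d logit_c / d x
    return np.abs(grad), c

def topk_overlap(s1, s2, k=50):
    return len(set(np.argsort(-s1)[:k]) & set(np.argsort(-s2)[:k])) / k

x = rng.normal(size=784)
s, c = saliency(x)
s_pert, c_pert = saliency(x + 0.01 * rng.normal(size=784))
print(c == c_pert, topk_overlap(s, s_pert))  # label typically unchanged; the top-k sets need not coincide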
483,Goal-conditioned Imitation Learning,"Designing rewards for Reinforcement Learning is challenging because a reward needs to convey the desired task, be efficient to optimize, and be easy to compute.The latter is particularly problematic when applying RL to robotics, where detecting whether the desired configuration is reached might require considerable supervision and instrumentation.Furthermore, we are often interested in being able to reach a wide range of configurations, hence setting up a different reward every time might be impractical.Methods like Hindsight Experience Replay (HER) have recently shown promise to learn policies able to reach many goals, without the need for a reward.Unfortunately, without tricks like resetting to points along the trajectory, HER might take a very long time to discover how to reach certain areas of the state-space.In this work we investigate different approaches to incorporate demonstrations to drastically speed up the convergence to a policy able to reach any goal, also surpassing the performance of an agent trained with other Imitation Learning algorithms.Furthermore, our method can be used when only trajectories without expert actions are available, which can leverage kinesthetic or third-person demonstrations.","We tackle goal-conditioned tasks by combining Hindsight Experience Replay and Imitation Learning algorithms, showing faster convergence than the first and higher final performance than the second." 484,PAC-Bayesian Neural Network Bounds,"Bayesian neural networks, which both use the negative log-likelihood loss function and average their predictions using a learned posterior over the parameters, have been used successfully across many scientific fields, partly due to their ability to 'effortlessly' extract desired representations from many large-scale datasets.However, generalization bounds for this setting are still missing.In this paper, we present a new PAC-Bayesian generalization bound for the negative log-likelihood loss which utilizes the log-Sobolev inequality to bound the moment generating function of the learner's risk.",We derive a new PAC-Bayesian Bound for unbounded loss functions (e.g. Negative Log-Likelihood).
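Hindsight relabeling, which entry 483 builds on, turns failed trajectories into useful ones by pretending the goal actually achieved was the intended goal. A minimal sketch with the 'final' relabeling strategy; the transition format and the sparse reward are assumptions for illustration.

def hindsight_relabel(trajectory):
    # trajectory: list of (state, action, achieved_goal, desired_goal) tuples,
    # with goals assumed comparable via ==.
    final_achieved = trajectory[-1][2]
    relabeled = []
    for state, action, achieved, _ in trajectory:
        reward = 1.0 if achieved == final_achieved else 0.0   # sparse goal-reaching reward
        relabeled.append((state, action, achieved, final_achieved, reward))
    return relabeled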
485,Rethinking Data Augmentation: Self-Supervision and Self-Distillation,"Data augmentation techniques, e.g., flipping or cropping, which systematically enlarge the training dataset by explicitly generating more training samples, are effective in improving the generalization performance of deep neural networks.In the supervised setting, a common practice for data augmentation is to assign the same label to all augmented samples of the same source.However, if the augmentation results in large distributional discrepancy among them, forcing their label invariance may be too difficult to solve and often hurts the performance.To tackle this challenge, we suggest a simple yet effective idea of learning the joint distribution of the original and self-supervised labels of augmented samples.The joint learning framework is easier to train, and enables an aggregated inference combining the predictions from different augmented samples for improving the performance.Further, to speed up the aggregation process, we also propose a knowledge transfer technique, self-distillation, which transfers the knowledge of augmentation into the model itself.We demonstrate the effectiveness of our data augmentation framework on various fully-supervised settings including the few-shot and imbalanced classification scenarios.",We propose a simple self-supervised data augmentation technique which improves performance of fully-supervised scenarios including few-shot learning and imbalanced classification. 486,Unsupervised Learning of Automotive 3D Crash Simulations using LSTMs,"Long short-term memory networks allow to exhibit temporal dynamic behavior with feedback connections and seem a natural choice for learning sequences of 3D meshes.We introduce an approach for dynamic mesh representations as used for numerical simulations of car crashes.To bypass the complication of using 3D meshes, we transform the surface mesh sequences into spectral descriptors that efficiently encode the shape.A two branch LSTM based network architecture is chosen to learn the representations and dynamics of the crash during the simulation.The architecture is based on unsupervised video prediction by an LSTM without any convolutional layer.It uses an encoder LSTM to map an input sequence into a fixed length vector representation.On this representation one decoder LSTM performs the reconstruction of the input sequence, while the other decoder LSTM predicts the future behavior by receiving initial steps of the sequence as seed.The spatio-temporal error behavior of the model is analysed to study how well the model can extrapolate the learned spectral descriptors into the future, that is, how well it has learned to represent the underlying dynamical structural mechanics.Considering that only a few training examples are available, which is the typical case for numerical simulations, the network performs very well.",A two branch LSTM based network architecture learns the representation and dynamics of 3D meshes of numerical crash simulations. 
487,Estimating encoding models of cortical auditory processing using naturalistic stimuli and transfer learning,"The purpose of an encoding model is to predict brain activity given a stimulus.In this contribution, we attempt at estimating a whole brain encoding model of auditory perception in a naturalistic stimulation setting.We analyze data from an open dataset, in which 16 subjects watched a short movie while their brain activity was being measured using functional MRI.We extracted feature vectors aligned with the timing of the audio from the movie, at different layers of a Deep Neural Network pretrained on the classification of auditory scenes.fMRI data was parcellated using hierarchical clustering on 500 parcels, and encoding models were estimated using a fully connected neural network with one hidden layer, trained to predict the signals for each parcel from the DNN features.Individual encoding models were successfully trained and predicted brain activity on unseen data, in parcels located in the superior temporal lobe, as well as dorsolateral prefrontal regions, which are usually considered as areas involved in auditory and language processing.Taken together, this contribution extends previous attempts on estimating encoding models, by showing the ability to model brain activity using a generic DNN to extract auditory features, suggesting a degree of similarity between internal DNN representations and brain activity in naturalistic settings.",Feature vectors from SoundNet can predict brain activity of subjects watching a movie in auditory and language related brain regions. 488,Implicit Autoencoders,"In this paper, we describe the ""implicit autoencoder"", a generative autoencoder in which both the generative path and the recognition path are parametrized by implicit distributions.We use two generative adversarial networks to define the reconstruction and the regularization cost functions of the implicit autoencoder, and derive the learning rules based on maximum-likelihood learning.Using implicit distributions allows us to learn more expressive posterior and conditional likelihood distributions for the autoencoder.Learning an expressive conditional likelihood distribution enables the latent code to only capture the abstract and high-level information of the data, while the remaining information is captured by the implicit conditional likelihood distribution.For example, we show that implicit autoencoders can disentangle the global and local information, and perform deterministic or stochastic reconstructions of the images.We further show that implicit autoencoders can disentangle discrete underlying factors of variation from the continuous factors in an unsupervised fashion, and perform clustering and semi-supervised learning.","We propose a generative autoencoder that can learn expressive posterior and conditional likelihood distributions using implicit distributions, and train the model using a new formulation of the ELBO." 
489,Mutual Exclusivity as a Challenge for Deep Neural Networks,"Strong inductive biases allow children to learn in fast and adaptable ways.Children use the mutual exclusivity bias to help disambiguate how words map to referents, assuming that if an object has one label then it does not need another.In this paper, we investigate whether or not standard neural architectures have a ME bias, demonstrating that they lack this learning assumption.Moreover, we show that their inductive biases are poorly matched to lifelong learning formulations of classification and translation.We demonstrate that there is a compelling case for designing neural networks that reason by mutual exclusivity, which remains an open challenge.","Children use the mutual exclusivity (ME) bias to learn new words, while standard neural nets show the opposite bias, hindering learning in naturalistic scenarios such as lifelong learning." 490,Spiking Recurrent Networks as a Model to Probe Neuronal Timescales Specific to Working Memory,"Cortical neurons process and integrate information on multiple timescales.In addition, these timescales or temporal receptive fields display functional and hierarchical organization.For instance, areas important for working memory, such as prefrontal cortex, utilize neurons with stable temporal receptive fields and long timescales to support reliable representations of stimuli.Despite of the recent advances in experimental techniques, the underlying mechanisms for the emergence of neuronal timescales long enough to support WM are unclear and challenging to investigate experimentally.Here, we demonstrate that spiking recurrent neural networks designed to perform a WM task reproduce previously observed experimental findings and that these models could be utilized in the future to study how neuronal timescales specific to WM emerge.","Spiking recurrent neural networks performing a working memory task utilize long heterogeneous timescales, strikingly similar to those observed in prefrontal cortex." 
491,GENERATIVE LOW-SHOT NETWORK EXPANSION,"Conventional deep learning classifiers are static in the sense that they are trained on a predefined set of classes and learning to classify a novel class typically requires re-training.In this work, we address the problem of low-shot network-expansion learning.We introduce a learning framework which enables expanding a pre-trained deep network to classify novel classes when the number of examples for the novel classes is particularly small.We present a simple yet powerful distillation method where the base network is augmented with additional weights to classify the novel classes, while keeping the weights of the base network unchanged.We term this learning hard distillation, since we preserve the response of the network on the old classes to be equal in both the base and the expanded network.We show that since only a small number of weights needs to be trained, the hard distillation excels for low-shot training scenarios.Furthermore, hard distillation avoids detriment to classification performance on the base classes.Finally, we show that low-shot network expansion can be done with a very small memory footprint by using a compact generative model of the base classes' training data with only a negligible degradation relative to learning with the full training set."," In this paper, we address the problem of low-shot network-expansion learning" 492,Modeling question asking using neural program generation,"People ask questions that are far richer, more informative, and more creative than current AI systems.We propose a neural program generation framework for modeling human question asking, which represents questions as formal programs and generates programs with an encoder-decoder based deep neural network.From extensive experiments using an information-search game, we show that our method can ask optimal questions in synthetic settings, and predict which questions humans are likely to ask in unconstrained settings.We also propose a novel grammar-based question generation framework trained with reinforcement learning, which is able to generate creative questions without supervised data.","We introduce a model of human question asking that combines neural networks and symbolic programs, which can learn to generate good questions with or without supervised examples." 493,Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations,"Deep networks have achieved impressive results across a variety of important tasks.However, a known weakness is a failure to perform well when evaluated on data which differ from the training distribution, even if these differences are very small, as is the case with adversarial examples. We propose Fortified Networks, a simple transformation of existing networks, which “fortifies” the hidden layers in a deep network by identifying when the hidden states are off the data manifold, and maps these hidden states back to parts of the data manifold where the network performs well.Our principal contribution is to show that fortifying these hidden states improves the robustness of deep networks: our experiments demonstrate improved robustness to standard adversarial attacks in both black-box and white-box threat models; suggest that our improvements are not primarily due to deceptively good results caused by degraded quality in the gradient signal; and show the advantage of doing this fortification in the hidden layers instead of the input space.
We demonstrate improvements in adversarial robustness on three datasets, across several attack parameters, in both white-box and black-box settings, and for the most widely studied attacks. We show that these improvements are achieved across a wide variety of hyperparameters. ",Better adversarial training by learning to map back to the data manifold with autoencoders in the hidden states. 494,Cross-Entropy Loss Leads To Poor Margins,"Neural networks could misclassify inputs that are slightly different from their training data, which indicates a small margin between their decision boundaries and the training dataset.In this work, we study the binary classification of linearly separable datasets and show that linear classifiers could also have decision boundaries that lie close to their training dataset if cross-entropy loss is used for training.In particular, we show that if the features of the training dataset lie in a low-dimensional affine subspace and the cross-entropy loss is minimized by using a gradient method, the margin between the training points and the decision boundary could be much smaller than the optimal value.This result is contrary to the conclusions of recent related works, and we identify the reason for this contradiction.In order to improve the margin, we introduce differential training, which is a training paradigm that uses a loss function defined on pairs of points from each class.We show that the decision boundary of a linear classifier trained with differential training indeed achieves the maximum margin.The results reveal the use of cross-entropy loss as one of the hidden culprits of adversarial examples and introduce a new direction to make neural networks robust against them.",We show that minimizing the cross-entropy loss by using a gradient method could lead to a very poor margin if the features of the dataset lie on a low-dimensional subspace. 495,Rotational Unit of Memory ,"The concepts of unitary evolution matrices and associative memory have boosted the field of Recurrent Neural Networks to state-of-the-art performance in a variety of sequential tasks. However, RNNs still have a limited capacity to manipulate long-term memory. To bypass this weakness, the most successful applications of RNNs use external techniques such as attention mechanisms.In this paper we propose a novel RNN model that unifies the state-of-the-art approaches: the Rotational Unit of Memory (RUM).The core of RUM is its rotational operation, which is, naturally, a unitary matrix, providing architectures with the power to learn long-term dependencies by overcoming the vanishing and exploding gradients problem. Moreover, the rotational unit also serves as associative memory.We evaluate our model on synthetic memorization, question answering and language modeling tasks. RUM learns the Copying Memory task completely and improves the state-of-the-art result in the Recall task. RUM’s performance in the bAbI Question Answering task is comparable to that of models with attention mechanisms.We also improve the state-of-the-art result to 1.189 bits-per-character loss in the Character Level Penn Treebank task, which signifies the applicability of RUM to real-world sequential data.The universality of our construction, at the core of RNNs, establishes RUM as a promising approach to language modeling, speech recognition and machine translation.",A novel RNN model which significantly outperforms the current frontier of models in a variety of sequential tasks.
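Entry 494's differential training defines the loss on pairs of points from opposite classes, so that a linear classifier is pushed toward the maximum-margin separator. A small numpy sketch for the binary linear case (logistic loss on difference vectors; forming all pairs and the learning rate are illustrative assumptions):

import numpy as np

def differential_training_step(w, X_pos, X_neg, lr=0.1):
    # Pairwise loss: log(1 + exp(-w . (x_pos - x_neg))) averaged over all (pos, neg) pairs.
    diffs = (X_pos[:, None, :] - X_neg[None, :, :]).reshape(-1, X_pos.shape[1])
    margins = diffs @ w
    grad = -(diffs * (1.0 / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    return w - lr * grad    # one gradient step on the pairwise loss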
496,Surprising Negative Results for Generative Adversarial Tree Search,"While many recent advances in deep reinforcement learning rely on model-free methods, model-based approaches remain an alluring prospect for their potential to exploit unsupervised data to learn environment dynamics.One prospect is to pursue hybrid approaches, as in AlphaGo, which combines Monte-Carlo Tree Search—a model-based method—with deep Q-networks—a model-free method.MCTS requires generating rollouts, which is computationally expensive.In this paper, we propose to simulate roll-outs, exploiting the latest breakthroughs in image-to-image transduction, namely Pix2Pix GANs, to predict the dynamics of the environment.Our proposed algorithm, generative adversarial tree search, simulates rollouts up to a specified depth using both a GAN-based dynamics model and a reward predictor.GATS employs MCTS for planning over the simulated samples and uses DQN to estimate the Q-function at the leaf states.Our theoretical analysis establishes some favorable properties of GATS vis-a-vis the bias-variance trade-off and empirical results show that on 5 popular Atari games, the dynamics and reward predictors converge quickly to accurate solutions.However, GATS fails to outperform DQNs in 4 out of 5 games.Notably, in these experiments, MCTS has only short rollouts, while previous successes of MCTS have involved tree depth in the hundreds.We present a hypothesis for why tree search with short rollouts can fail even given perfect modeling.",Surprising negative results on combining model-based and model-free deep RL 497,IB-GAN: Disentangled Representation Learning with Information Bottleneck GAN,"We present a novel GAN architecture for disentangled representation learning.The new model architecture is inspired by Information Bottleneck theory and is hence named IB-GAN.The IB-GAN objective is similar to that of InfoGAN but has a crucial difference; a capacity regularization for mutual information is adopted, thanks to which the generator of IB-GAN can harness a latent representation in a disentangled and interpretable manner.To facilitate the optimization of IB-GAN in practice, a new variational upper-bound is derived.With experiments on CelebA, 3DChairs, and dSprites datasets, we demonstrate that the visual quality of samples generated by IB-GAN is often better than those by β-VAEs.Moreover, IB-GAN achieves much higher disentanglement metric scores than β-VAEs or InfoGAN on the dSprites dataset.","Inspired by Information Bottleneck theory, we propose a new GAN architecture for disentangled representation learning" 498,Demystifying MMD GANs,"We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy as critic, termed MMD GANs.As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters.We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic.Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs.In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching
performance.We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.",Explain bias situation with MMD GANs; MMD GANs work with smaller critic networks than WGAN-GPs; new GAN evaluation metric. 499,Neural separation of observed and unobserved distributions,"Separating mixed distributions is a long standing challenge for machine learning and signal processing.Applications include: single-channel multi-speaker separation, singing voice separation and separating reflections from images.Most current methods either rely on making strong assumptions on the source distributions or rely on having training samples of each source in the mixture.In this work, we tackle the scenario of extracting an unobserved distribution additively mixed with a signal from an observed distribution.We introduce a new method: Neural Egg Separation - an iterative method that learns to separate the known distribution from progressively finer estimates of the unknown distribution.In some settings, Neural Egg Separation is initialization sensitive, we therefore introduce GLO Masking which ensures a good initialization.Extensive experiments show that our method outperforms current methods that use the same level of supervision and often achieves similar performance to full supervision.",An iterative neural method for extracting signals that are only observed mixed with other signals 500,Physiological Signal Embeddings (PHASE) via Interpretable Stacked Models,"In health, machine learning is increasingly common, yet neural network embedding learning is arguably under-utilized for physiological signals. This inadequacy stands out in stark contrast to more traditional computer science domains, such as computer vision, and natural language processing. For physiological signals, learning feature embeddings is a natural solution to data insufficiency caused by patient privacy concerns -- rather than share data, researchers may share informative embedding models, which map patient data to an output embedding. Here, we present the PHASE framework, which consists of three components: i) learning neural network embeddings of physiological signals, ii) predicting outcomes based on the learned embedding, and iii) interpreting the prediction results by estimating feature attributions in the ""stacked"" models. PHASE is novel in three ways: 1) To our knowledge, PHASE is the first instance of transferal of neural networks to create physiological signal embeddings.2) We present a tractable method to obtain feature attributions through stacked models. We prove that our stacked model attributions can approximate Shapley values -- attributions known to have desirable properties -- for arbitrary sets of models.3) PHASE was extensively tested in a cross-hospital setting including publicly available data. In our experiments, we show that PHASE significantly outperforms alternative embeddings -- such as raw, exponential moving average/variance, and autoencoder -- currently in use.Furthermore, we provide evidence that transferring neural network embedding/representation learners between distinct hospitals still yields performant embeddings and offer recommendations when transference is ineffective.",Physiological signal embeddings for prediction performance and hospital transference with a general Shapley value interpretability method for stacked models. 
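A brief illustration of the Kernel Inception Distance mentioned in entry 498: KID is commonly computed as an unbiased estimate of the squared MMD between Inception features of real and generated images under a cubic polynomial kernel. The sketch below assumes the pooled Inception features have already been extracted; the function names and toy data are illustrative, not the authors' reference implementation.

```python
import numpy as np

def polynomial_kernel(X, Y, degree=3):
    """Cubic polynomial kernel k(x, y) = (x.y / d + 1)^degree, as commonly used for KID."""
    d = X.shape[1]
    return (X @ Y.T / d + 1.0) ** degree

def kid(real_feats, fake_feats):
    """Unbiased squared-MMD estimate between two sets of pooled Inception features.

    real_feats, fake_feats: arrays of shape (n, d) and (m, d); extracting the
    features themselves is out of scope for this sketch.
    """
    K_rr = polynomial_kernel(real_feats, real_feats)
    K_ff = polynomial_kernel(fake_feats, fake_feats)
    K_rf = polynomial_kernel(real_feats, fake_feats)
    n, m = len(real_feats), len(fake_feats)
    # Unbiased estimator: drop the diagonal of the within-set kernel matrices.
    return ((K_rr.sum() - np.trace(K_rr)) / (n * (n - 1))
            + (K_ff.sum() - np.trace(K_ff)) / (m * (m - 1))
            - 2.0 * K_rf.mean())

# Toy usage with random stand-in "features".
rng = np.random.default_rng(0)
print(kid(rng.normal(size=(500, 2048)), rng.normal(size=(500, 2048))))
```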
501,NOODL: Provable Online Dictionary Learning and Sparse Coding,"We consider the dictionary learning problem, where the aim is to model the given data as a linear combination of a few columns of a matrix known as a dictionary, where the sparse weights forming the linear combination are known as coefficients.Since the dictionary and coefficients parameterizing the linear model are unknown, the corresponding optimization is inherently non-convex.This was a major challenge until recently, when provable algorithms for dictionary learning were proposed.Yet, these provide guarantees only on the recovery of the dictionary, without explicit recovery guarantees on the coefficients.Moreover, any estimation error in the dictionary adversely impacts the ability to successfully localize and estimate the coefficients.This potentially limits the utility of existing provable dictionary learning methods in applications where coefficient recovery is of interest.To this end, we develop NOODL: a simple Neurally plausible alternating Optimization-based Online Dictionary Learning algorithm, which recovers both the dictionary and coefficients exactly at a geometric rate, when initialized appropriately.Our algorithm, NOODL, is also scalable and amenable to large-scale distributed implementations in neural architectures, by which we mean that it only involves simple linear and non-linear operations.Finally, we corroborate these theoretical results via experimental evaluation of the proposed algorithm against the current state-of-the-art techniques.",We present a provable algorithm for exactly recovering both factors of the dictionary learning model. 502,Tangent-Normal Adversarial Regularization for Semi-supervised Learning,"The ever-increasing size of modern datasets combined with the difficulty of obtaining label information has made semi-supervised learning of significant practical importance in modern machine learning applications.In comparison to supervised learning, the key difficulty in semi-supervised learning is how to make full use of the unlabeled data.In order to utilize manifold information provided by unlabeled data, we propose a novel regularization called the tangent-normal adversarial regularization, which is composed of two parts.The two parts complement each other and jointly enforce the smoothness along two different directions that are crucial for semi-supervised learning.One is applied along the tangent space of the data manifold, aiming to enforce local invariance of the classifier on the manifold, while the other is performed on the normal space orthogonal to the tangent space, intending to impose robustness on the classifier against the noise that causes the observed data to deviate from the underlying data manifold. Both regularizers are realized via the strategy of virtual adversarial training.Our method has achieved state-of-the-art performance on semi-supervised learning tasks on both artificial and practical datasets.","We propose a novel manifold regularization strategy based on adversarial training, which can significantly improve the performance of semi-supervised learning."
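To make the alternating structure of NOODL (entry 501) concrete, here is a heavily simplified, hedged sketch of one online step: a few hard-thresholded gradient iterations estimate the sparse coefficients, then a gradient step on the reconstruction error updates the dictionary. The step sizes, threshold, and initialization are illustrative assumptions; the paper's exact update rules and guarantees are more involved.

```python
import numpy as np

def noodl_style_step(A, Y, tau=0.1, eta_x=0.2, eta_A=0.5, n_iht=5):
    """One simplified alternating step in the spirit of NOODL (hedged sketch).

    A: current dictionary estimate, shape (d, K), column-normalized and assumed
       close to the ground truth (the "appropriate initialization").
    Y: batch of observations, shape (d, N).
    """
    # Coefficient step: a few hard-thresholded gradient iterations.
    X = A.T @ Y                       # correlation-based initial estimate
    X = X * (np.abs(X) > tau)         # hard threshold to guess the support
    for _ in range(n_iht):
        X = X - eta_x * A.T @ (A @ X - Y)
        X = X * (np.abs(X) > tau)     # keep the estimate sparse
    # Dictionary step: gradient of the reconstruction error, then re-normalize.
    A = A - eta_A * (A @ X - Y) @ X.T / Y.shape[1]
    A /= np.linalg.norm(A, axis=0, keepdims=True)
    return A, X

# Toy usage with a synthetic sparse-coding model and a "close" initialization.
rng = np.random.default_rng(0)
A_true = rng.normal(size=(64, 128)); A_true /= np.linalg.norm(A_true, axis=0)
X_true = rng.normal(size=(128, 256)) * (rng.random((128, 256)) < 0.05)
A_hat, X_hat = noodl_style_step(A_true + 0.05 * rng.normal(size=A_true.shape),
                                A_true @ X_true)
```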
503,Simplicial Complex Networks,"The universal approximation property of neural networks is one of the motivations to use these models in various real-world problems.However, this property is not the only characteristic that makes neural networks unique as there is a wide range of other approaches with a similar property.Another characteristic which makes these models interesting is that they can be trained with the backpropagation algorithm which allows an efficient gradient computation and gives these universal approximators the ability to efficiently learn complex manifolds from a large amount of data in different domains.Despite their abundant use in practice, neural networks are still not well understood and a broad range of ongoing research studies the interpretability of neural networks.On the other hand, topological data analysis relies on the strong theoretical framework of topology along with other mathematical tools for analyzing possibly complex datasets.In this work, we leverage a universal approximation theorem originating from algebraic topology to build a connection between TDA and the common neural network training framework.We introduce the notion of automatic subdivisioning and devise a particular type of neural network for regression tasks: Simplicial Complex Networks.SCN's architecture is defined with a set of bias functions along with a particular policy during the forward pass which alternates the common architecture search framework in neural networks.We believe the view of SCNs can be used as a step towards building interpretable deep learning models.Finally, we verify their performance on a set of regression problems.",A novel method for supervised learning through subdivisioning the input space along with function approximation. 504,Adversarial Attacks on Node Embeddings,"The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks.However, despite the proliferation of such methods there is currently no study of their robustness to adversarial attacks.We provide the first adversarial vulnerability analysis on the widely used family of methods based on random walks.We derive efficient adversarial perturbations that poison the network structure and have a negative effect on both the quality of the embeddings and the downstream tasks.We further show that our attacks are transferable since they generalize to many models, and are successful even when the attacker is restricted.",Adversarial attacks on unsupervised node embeddings based on eigenvalue perturbation theory.
505,Hierarchical Subtask Discovery with Non-Negative Matrix Factorization,"Hierarchical reinforcement learning methods offer a powerful means of planning flexible behavior in complicated domains.However, learning an appropriate hierarchical decomposition of a domain into subtasks remains a substantial challenge.We present a novel algorithm for subtask discovery, based on the recently introduced multitask linearly-solvable Markov decision process framework.The MLMDP can perform never-before-seen tasks by representing them as a linear combination of a previously learned basis set of tasks.In this setting, the subtask discovery problem can naturally be posed as finding an optimal low-rank approximation of the set of tasks the agent will face in a domain.We use non-negative matrix factorization to discover this minimal basis set of tasks, and show that the technique learns intuitive decompositions in a variety of domains.Our method has several qualitatively desirable features: it is not limited to learning subtasks with single goal states, instead learning distributed patterns of preferred states; it learns qualitatively different hierarchical decompositions in the same domain depending on the ensemble of tasks the agent will face; and it may be straightforwardly iterated to obtain deeper hierarchical decompositions.",We present a novel algorithm for hierarchical subtask discovery which leverages the multitask linear Markov decision process framework. 506,Long Term Memory Network for Combinatorial Optimization Problems,This paper introduces a framework for solving combinatorial optimization problems by learning from input-output examples of optimization problems.We introduce a new memory augmented neural model in which the memory is not resettable.We used deep reinforcement learning to train a memory controller agent to store useful memories.Our model was able to outperform a hand-crafted solver on Binary Linear Programming.The proposed model is tested on different Binary LP instances with a large number of variables and constraints.,We propose a memory network model to solve Binary LP instances where the memory information is preserved for long-term use. 507,COLD FUSION: TRAINING SEQ2SEQ MODELS TOGETHER WITH LANGUAGE MODELS,"Sequence-to-sequence models with attention have excelled at tasks which involve generating natural language sentences such as machine translation, image captioning and speech recognition.Performance has further been improved by leveraging unlabeled data, often in the form of a language model.In this work, we present the Cold Fusion method, which leverages a pre-trained language model during training, and show its effectiveness on the speech recognition task.We show that Seq2Seq models with Cold Fusion are able to better utilize language information enjoying i) faster convergence and better generalization, and ii) almost complete transfer to a new domain while using less than 10% of the labeled training data.","We introduce a novel method to train Seq2Seq models with language models that converge faster, generalize better and can almost completely transfer to a new domain using less than 10% of labeled data."
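Entry 505 poses subtask discovery as a low-rank non-negative factorization of the ensemble of tasks the agent will face. A minimal sketch of that factorization step with scikit-learn follows; the construction of the task matrix (here a random non-negative stand-in) and the choice of rank are purely illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

# Rows: states, columns: tasks; entries: non-negative "desirability" of each
# state under each task. A random stand-in replaces the MLMDP-derived matrix.
rng = np.random.default_rng(0)
Z = rng.random((100, 20))             # 100 states, 20 tasks

k = 4                                 # number of subtasks to discover
model = NMF(n_components=k, init="nndsvda", max_iter=500)
D = model.fit_transform(Z)            # (states x k): distributed subtask "goals"
W = model.components_                 # (k x tasks): how each task mixes subtasks

# Each task is approximated as a non-negative combination of the k subtasks.
rel_err = np.linalg.norm(Z - D @ W) / np.linalg.norm(Z)
print(f"relative reconstruction error: {rel_err:.3f}")
```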
508,Online Meta-Learning,"A central capability of intelligent systems is the ability to continuously build upon previous experiences to speed up and enhance learning of new tasks.Two distinct research paradigms have studied this question.Meta-learning views this problem as learning a prior over model parameters that is amenable to fast adaptation on a new task, but typically assumes the set of tasks is available together as a batch.In contrast, online learning considers a sequential setting in which problems are revealed one after the other, but conventionally trains only a single model without any task-specific adaptation.This work introduces an online meta-learning setting, which merges ideas from both the aforementioned paradigms to better capture the spirit and practice of continual lifelong learning.We propose the follow-the-meta-leader (FTML) algorithm which extends the MAML algorithm to this setting.Theoretically, this work provides an O(log T) regret guarantee for the FTML algorithm.Our experimental evaluation on three different large-scale tasks suggests that the proposed algorithm significantly outperforms alternatives based on traditional online learning approaches.",We introduce the online meta learning problem setting to better capture the spirit and practice of continual lifelong learning. 509,Do Attention Heads in BERT Track Syntactic Dependencies?,"We investigate the extent to which individual attention heads in pretrained transformer language models, such as BERT and RoBERTa, implicitly capture syntactic dependency relations.We employ two methods—taking the maximum attention weight and computing the maximum spanning tree—to extract implicit dependency relations from the attention weights of each layer/head, and compare them to the ground-truth Universal Dependency trees.We show that, for some UD relation types, there exist heads that can recover the dependency type significantly better than baselines on parsed English text, suggesting that some self-attention heads act as a proxy for syntactic structure.We also analyze BERT fine-tuned on two datasets—the syntax-oriented CoLA and the semantics-oriented MNLI—to investigate whether fine-tuning affects the patterns of their self-attention, but we do not observe substantial differences in the overall dependency relations extracted using our methods.Our results suggest that these models have some specialist attention heads that track individual dependency types, but no generalist head that performs holistic parsing significantly better than a trivial baseline, and that analyzing attention weights directly may not reveal much of the syntactic knowledge that BERT-style models are known to learn.",Attention weights don't fully expose what BERT knows about syntax.
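The simpler of the two extraction methods in entry 509 treats, for each word, the position receiving the maximum attention weight as its predicted syntactic head and scores the predictions against gold Universal Dependency heads. A small sketch, assuming an attention matrix for one head/layer has already been extracted and special tokens removed; the helper names are illustrative.

```python
import numpy as np

def max_attention_heads(attn):
    """Predict a head index for each token as the argmax of its attention row.

    attn: (seq_len, seq_len) attention weights for a single head/layer;
    attn[i, j] is how much token i attends to token j.
    """
    return attn.argmax(axis=1)

def unlabeled_attachment_score(pred_heads, gold_heads):
    """Fraction of tokens whose predicted head matches the gold UD head."""
    return float((np.asarray(pred_heads) == np.asarray(gold_heads)).mean())

# Toy example: 5 tokens with a made-up attention matrix and gold head indices.
attn = np.array([[0.10, 0.70, 0.10, 0.05, 0.05],
                 [0.20, 0.10, 0.50, 0.10, 0.10],
                 [0.40, 0.30, 0.10, 0.10, 0.10],
                 [0.10, 0.10, 0.60, 0.10, 0.10],
                 [0.10, 0.10, 0.10, 0.60, 0.10]])
gold = [1, 2, 0, 2, 3]
print(unlabeled_attachment_score(max_attention_heads(attn), gold))
```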
510,FACE SUPER-RESOLUTION GUIDED BY 3D FACIAL PRIORS,"State-of-the-art face super-resolution methods employ deep convolutional neural networks to learn a mapping between low- and high-resolution facial patterns by exploring local appearance knowledge.However, most of these methods do not well exploit facial structures and identity information, and struggle to deal with facial images that exhibit large pose variation and misalignment.In this paper, we propose a novel face super-resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures.Firstly, the 3D face rendering branch is set up to obtain 3D priors of salient facial structures and identity knowledge.Secondly, the Spatial Attention Mechanism is used to better exploit this hierarchical information for the super-resolution problem.Extensive experiments demonstrate that the proposed algorithm achieves superior face super-resolution results and outperforms the state-of-the-art.",We propose a novel face super resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures. 511,Cross-View Training for Semi-Supervised Learning,"We present Cross-View Training, a simple but effective method for deep semi-supervised learning.On labeled examples, the model is trained with standard cross-entropy loss.On an unlabeled example, the model first performs inference to produce soft targets.The model then learns from these soft targets.We deviate from prior work by adding multiple auxiliary student prediction layers to the model.The input to each student layer is a sub-network of the full model that has a restricted view of the input .The students can learn from the teacher because the teacher sees more of each example.Concurrently, the students improve the quality of the representations used by the teacher as they learn to make predictions with limited data.When combined with Virtual Adversarial Training, CVT improves upon the current state-of-the-art on semi-supervised CIFAR-10 and semi-supervised SVHN.We also apply CVT to train models on five natural language processing tasks using hundreds of millions of sentences of unlabeled data.On all tasks CVT substantially outperforms supervised learning alone, resulting in models that improve upon or are competitive with the current state-of-the-art.","Self-training with different views of the input gives excellent results for semi-supervised image recognition, sequence tagging, and dependency parsing." 512,"Non-reversibly updating a uniform [0,1] value for accept/reject decisions","I show how it can be beneficial to express Metropolis accept/reject decisions in terms of comparison with a uniform [0,1] value, and to then update this uniform value non-reversibly, as part of the Markov chain state, rather than sampling it independently each iteration.This provides a small improvement for random walk Metropolis and Langevin updates in high dimensions. It produces a larger improvement when using Langevin updates with persistent momentum, giving performance comparable to that of Hamiltonian Monte Carlo with long trajectories. This is of significance when some variables are updated by other methods, since if HMC is used, these updates can be done only between trajectories, whereas they can be done more often with Langevin updates. 
This is seen for a Bayesian neural network model, in which connection weights are updated by persistent Langevin or HMC, while hyperparameters are updated by Gibbs sampling.",A non-reversible way of making accept/reject decisions can be beneficial 513,ES-MAML: Simple Hessian-Free Meta Learning,"We introduce ES-MAML, a new framework for solving the model agnostic meta learning problem based on Evolution Strategies.Existing algorithms for MAML are based on policy gradients, and incur significant difficulties when attempting to estimate second derivatives using backpropagation on stochastic policies.We show how ES can be applied to MAML to obtain an algorithm which avoids the problem of estimating second derivatives, and is also conceptually simple and easy to implement.Moreover, ES-MAML can handle new types of nonsmooth adaptation operators, and other techniques for improving performance and estimation of ES methods become applicable.We show empirically that ES-MAML is competitive with existing methods and often yields better adaptation with fewer queries.","We provide a new framework for MAML in the ES/blackbox setting, and show that it allows deterministic and linear policies, better exploration, and non-differentiable adaptation operators." 514,Bijectors.jl: Flexible transformations for probability distributions,"Transforming one probability distribution to another is a powerful tool in Bayesian inference and machine learning.Some prominent examples are constrained-to-unconstrained transformations of distributions for use in Hamiltonian Monte-Carlo and constructing flexible and learnable densities such as normalizing flows.We present Bijectors.jl, a software package for transforming distributions implemented in Julia, available at github.com/TuringLang/Bijectors.jl.The package provides a flexible and composable way of implementing transformations of distributions without being tied to a computational framework.We demonstrate the use of Bijectors.jl on improving variational inference by encoding known statistical dependencies into the variational posterior using normalizing flows, providing a general approach to relaxing the mean-field assumption usually made in variational inference.",We present a software framework for transforming distributions and demonstrate its flexibility on relaxing mean-field assumptions in variational inference with the use of coupling flows to replicate structure from the target generative model. 515,Multi-Domain Adversarial Learning,"Multi-domain learning aims at obtaining a model with minimal average risk across multiple domains.Our empirical motivation is automated microscopy data, where cultured cells are imaged after being exposed to known and unknown chemical perturbations, and each dataset displays significant experimental bias.This paper presents a multi-domain adversarial learning approach, MuLANN, to leverage multiple datasets with overlapping but distinct class sets, in a semi-supervised setting.Our contributions include:i) a bound on the average- and worst-domain risk in MDL, obtained using the H-divergence;ii) a new loss to accommodate semi-supervised multi-domain learning and domain adaptation;iii) the experimental validation of the approach, improving on the state of the art on two standard image benchmarks, and a novel bioimage dataset, Cell.",Adversarial Domain adaptation and Multi-domain learning: a new loss to handle multi- and single-domain classes in the semi-supervised setting. 
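Entry 513 replaces policy-gradient MAML with an Evolution Strategies estimate of the meta-gradient, which sidesteps second derivatives and tolerates non-smooth adaptation operators. Below is a hedged numpy sketch of an antithetic ES meta-gradient estimator; the adapt/evaluate callables and the hyperparameters are placeholders, and the full algorithm in the paper has additional structure (for instance, ES-based adaptation within each task).

```python
import numpy as np

def es_meta_gradient(theta, tasks, adapt, evaluate, sigma=0.1, n_pairs=32, rng=None):
    """Antithetic ES estimate of the meta-gradient, in the spirit of ES-MAML.

    theta:    flat meta-parameter vector.
    tasks:    iterable of task descriptors.
    adapt:    adapt(params, task) -> adapted parameters (may be non-differentiable).
    evaluate: evaluate(params, task) -> scalar return after adaptation.
    """
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(theta)
    for _ in range(n_pairs):
        eps = rng.standard_normal(theta.shape)
        for sign in (+1.0, -1.0):
            perturbed = theta + sign * sigma * eps
            # Meta-objective: average post-adaptation return across tasks.
            ret = np.mean([evaluate(adapt(perturbed, t), t) for t in tasks])
            grad += sign * ret * eps
    return grad / (2 * n_pairs * sigma)

# Toy check: quadratic "return" and an identity adaptation operator.
f = lambda p, t: -np.sum((p - t) ** 2)
tasks = [np.ones(3), 3 * np.ones(3)]
print(es_meta_gradient(np.zeros(3), tasks, adapt=lambda p, t: p, evaluate=f))
```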
516,Distribution Regression Network,"We introduce our Distribution Regression Network which performs regression from input probability distributions to output probability distributions.Compared to existing methods, DRN learns with fewer model parameters and easily extends to multiple input and multiple output distributions.On synthetic and real-world datasets, DRN performs similarly or better than the state-of-the-art.Furthermore, DRN generalizes the conventional multilayer perceptron.In the framework of MLP, each node encodes a real number, whereas in DRN, each node encodes a probability distribution.",A learning network which generalizes the MLP framework to perform distribution-to-distribution regression 517,Time-Dependent Representation for Neural Event Sequence Prediction,"Existing sequence prediction methods are mostly concerned with time-independent sequences, in which the actual time span between events is irrelevant and the distance between events is simply the difference between their order positions in the sequence.While this time-independent view of sequences is applicable for data such as natural languages, e.g., dealing with words in a sentence, it is inappropriate and inefficient for many real world events that are observed and collected at unequally spaced points of time as they naturally arise, e.g., when a person goes to a grocery store or makes a phone call.The time span between events can carry important information about the sequence dependence of human behaviors.In this work, we propose a set of methods for using time in sequence prediction.Because neural sequence models such as RNN are more amenable for handling token-like input, we propose two methods for time-dependent event representation, based on the intuition on how time is tokenized in everyday life and previous work on embedding contextualization.We also introduce two methods for using next event duration as regularization for training a sequence prediction model.We discuss these methods based on recurrent neural nets.We evaluate these methods as well as baseline models on five datasets that resemble a variety of sequence prediction tasks.The experiments revealed that the proposed methods offer accuracy gain over baseline models in a range of settings.",Proposed methods for time-dependent event representation and regularization for sequence prediction; Evaluated these methods on five datasets that involve a range of sequence prediction tasks. 
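One of the time-tokenization ideas described in entry 517 can be realized by bucketing inter-event time gaps (for instance on a log scale) and embedding the bucket index alongside the event-type embedding before feeding a recurrent model. The PyTorch sketch below is illustrative: the bucket boundaries, dimensions, and class name are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TimeAwareEventEmbedding(nn.Module):
    """Embed (event type, time gap) pairs by bucketing log-spaced time gaps (sketch)."""

    def __init__(self, n_event_types, d_event=32, d_time=8, n_buckets=16):
        super().__init__()
        self.event_emb = nn.Embedding(n_event_types, d_event)
        self.time_emb = nn.Embedding(n_buckets, d_time)
        # Log-spaced boundaries from 1 second to roughly a month, in seconds
        # (an arbitrary illustrative choice).
        self.register_buffer(
            "boundaries", torch.logspace(0, 6.5, steps=n_buckets - 1))

    def forward(self, event_ids, delta_t):
        # delta_t: elapsed time since the previous event, in seconds.
        buckets = torch.bucketize(delta_t, self.boundaries)
        return torch.cat([self.event_emb(event_ids), self.time_emb(buckets)], dim=-1)

emb = TimeAwareEventEmbedding(n_event_types=100)
x = emb(torch.tensor([3, 17, 42]), torch.tensor([5.0, 3600.0, 86400.0]))
print(x.shape)  # torch.Size([3, 40]) -> feed into an RNN sequence predictor
```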
518,Keep Doing What Worked: Behavior Modelling Priors for Offline Reinforcement Learning,"Off-policy reinforcement learning algorithms promise to be applicable in settings where only a fixed data-set of environment interactions is available and no new experience can be acquired.This property makes these algorithms appealing for real world problems such as robot control.In practice, however, standard off-policy algorithms fail in the batch setting for continuous control.In this paper, we propose a simple solution to this problem.It admits the use of data generated by arbitrary behavior policies and uses a learned prior -- the advantage-weighted behavior model -- to bias the RL policy towards actions that have previously been executed and are likely to be successful on the new task.Our method can be seen as an extension of recent work on batch-RL that enables stable learning from conflicting data-sources.We find improvements on competitive baselines in a variety of RL tasks -- including standard continuous control benchmarks and multi-task learning for simulated and real-world robots.","We develop a method for stable offline reinforcement learning from logged data. The key is to regularize the RL policy towards a learned ""advantage weighted"" model of the data." 519,iSOM-GSN: An Integrative Approach for Transforming Multi-omic Data into Gene Similarity Networks via Self-organizing Maps,"One of the main challenges in applying graph convolutional neural networks on gene-interaction data is the lack of understanding of the vector space to which they belong and also the inherent difficulties involved in representing those interactions on a significantly lower dimension, viz. Euclidean spaces.The challenge becomes more prevalent when dealing with various types of heterogeneous data.We introduce a systematic, generalized method, called iSOM-GSN, used to transform multi-omic data with higher dimensions onto a two-dimensional grid.Afterwards, we apply a convolutional neural network to predict disease states of various types.Based on the idea of Kohonen's self-organizing map, we generate a two-dimensional grid for each sample for a given set of genes that represent a gene similarity network.We have tested the model to predict breast and prostate cancer using gene expression, DNA methylation and copy number alteration, yielding prediction accuracies in the 94-98% range for tumor stages of breast cancer and calculated Gleason scores of prostate cancer with just 11 input genes for both cases.The scheme not only outputs nearly perfect classification accuracy, but also provides an enhanced scheme for representation learning, visualization, dimensionality reduction, and interpretation of the results.",This paper presents a deep learning model that combines self-organizing maps and convolutional neural networks for representation learning of multi-omics data 520,Disagreement-Regularized Imitation Learning,"We present a simple and effective algorithm designed to address the covariate shift problem in imitation learning.It operates by training an ensemble of policies on the expert demonstration data, and using the variance of their predictions as a cost which is minimized with RL together with a supervised behavioral cloning cost.Unlike adversarial imitation methods, it uses a fixed reward function which is easy to optimize.We prove a regret bound for the algorithm in the tabular setting which is linear in the time horizon multiplied by a coefficient which we show to be low for certain problems in
which behavioral cloning fails.We evaluate our algorithm empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning.",Method for addressing covariate shift in imitation learning using ensemble uncertainty 521,Improved Disentanglement through Learned Aggregation of Convolutional Feature Maps,"We present and discuss a simple image preprocessing method for learning disentangled latent factors.In particular, we utilize the implicit inductive bias contained in features from networks pretrained on the ImageNet database.We enhance this bias by explicitly fine-tuning such pretrained networks on tasks useful for the NeurIPS 2019 disentanglement challenge, such as angle and position estimation or color classification.Furthermore, we train a VAE on regionally aggregated feature maps, and discuss its disentanglement performance using metrics proposed in recent literature.",We use supervised finetuning of feature vectors to improve transfer from simulation to the real world 522,Hindsight policy gradients,"A reinforcement learning agent that needs to pursue different goals across episodes requires a goal-conditional policy.In addition to their potential to generalize desirable behavior to unseen goals, such policies may also enable higher-level planning based on subgoals.In sparse-reward environments, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended appears crucial to enable sample-efficient learning.However, reinforcement learning agents have only recently been endowed with such capacity for hindsight.In this paper, we demonstrate how hindsight can be introduced to policy gradient methods, generalizing this idea to a broad class of successful algorithms.Our experiments on a diverse selection of sparse-reward environments show that hindsight leads to a remarkable increase in sample efficiency.",We introduce the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended to policy gradient methods.
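The cost used by the disagreement-regularized imitation learner in entry 520 is simply the variance of an ensemble of behavior-cloned policies at the current state. A minimal sketch of that cost for discrete actions follows; training the ensemble, the exact clipping scheme, and the RL loop are out of scope, and the optional clip value here is an illustrative stand-in.

```python
import numpy as np

def disagreement_cost(ensemble_probs, clip_value=None):
    """Cost = variance of ensemble action probabilities at a state (sketch).

    ensemble_probs: array of shape (E, A) -- each of E cloned policies'
    action distribution at the current state. States where the clones
    disagree get a high cost, which the RL objective then minimizes
    alongside a behavioral-cloning loss.
    """
    cost = ensemble_probs.var(axis=0).mean()   # mean over actions of across-ensemble variance
    if clip_value is not None:
        cost = min(cost, clip_value)
    return cost

# Agreement -> near-zero cost; disagreement -> high cost.
rng = np.random.default_rng(0)
agree = np.tile([0.9, 0.05, 0.05], (5, 1))      # 5 cloned policies, 3 actions
disagree = rng.dirichlet(np.ones(3), size=5)
print(disagreement_cost(agree), disagreement_cost(disagree))
```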
523,CATER: A diagnostic dataset for Compositional Actions & TEmporal Reasoning,"Computer vision has undergone a dramatic revolution in performance, driven in large part through deep features trained on large-scale supervised datasets.However, much of this improvement has focused on static image analysis; video understanding has seen rather modest improvements.Even though new datasets and spatiotemporal models have been proposed, simple frame-by-frame classification methods often still remain competitive.We posit that current video datasets are plagued with implicit biases over scene and object structure that can dwarf variations in temporal structure.In this work, we build a video dataset with fully observable and controllable object and scene bias, and which truly requires spatiotemporal understanding in order to be solved.Our dataset, named CATER, is rendered synthetically using a library of standard 3D objects, and tests the ability to recognize compositions of object movements that require long-term reasoning.In addition to being a challenging dataset, CATER also provides a plethora of diagnostic tools to analyze modern spatiotemporal video architectures by being completely observable and controllable.Using CATER, we provide insights into some of the most recent state of the art deep video architectures.","We propose a new video understanding benchmark, with tasks that by-design require temporal reasoning to be solved, unlike most existing video datasets." 524,Efficient and Robust Asynchronous Federated Learning with Stragglers,"We address the efficiency issues caused by the straggler effect in the recently emerged federated learning, which collaboratively trains a model on decentralized non-i.i.d. data across massive worker devices without exchanging training data over unreliable and heterogeneous networks.We propose a novel two-stage analysis on the error bounds of general federated learning, which provides practical insights into optimization.As a result, we propose a novel easy-to-implement federated learning algorithm that uses asynchronous settings and strategies to control discrepancies between the global model and delayed models and adjust the number of local epochs with the estimation of staleness to accelerate convergence and resist performance deterioration caused by stragglers.Experimental results show that our algorithm converges quickly and remains robust in the presence of massive stragglers.",We propose an efficient and robust asynchronous federated learning algorithm for settings with stragglers 525,Modifying memories in a Recurrent Neural Network Unit,Long Short-Term Memory units have the ability to memorise and use long-term dependencies between inputs to generate predictions on time series data.We introduce the concept of modifying the cell state of LSTMs using rotation matrices parametrised by a new set of trainable weights.This addition shows significant increases in performance on some of the tasks from the bAbI dataset.,Adding a new set of weights to the LSTM that rotate the cell memory improves performance on some bAbI tasks.
526,Sinkhorn Permutation Variational Marginal Inference,"We address the problem of marginal inference for an exponential family defined over the set of permutation matrices.This problem is known to quickly become intractable as the size of the permutation increases, since it involves the computation of the permanent of a matrix, a #P-hard problem.We introduce Sinkhorn variational marginal inference as a scalable alternative, a method whose validity is ultimately justified by the so-called Sinkhorn approximation of the permanent.We demonstrate the effectiveness of our method in the problem of probabilistic identification of neurons in the worm C. elegans.","New methodology for variational marginal inference of permutations based on the Sinkhorn algorithm, applied to probabilistic identification of neurons" 527,Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach,"The robustness of neural networks to adversarial examples has received great attention due to security implications.Despite various attack approaches to crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive measure of robustness.In this paper, we provide theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and propose to use Extreme Value Theory for efficient evaluation.Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.The proposed CLEVER score is attack-agnostic and is computationally feasible for large neural networks.Experimental results on various networks, including ResNet, Inception-v3 and MobileNet, show that CLEVER is aligned with the robustness indication measured by the L2 and L-infinity norms of adversarial examples from powerful attacks, and defended networks using defensive distillation or bounded ReLU indeed give better CLEVER scores.To the best of our knowledge, CLEVER is the first attack-independent robustness metric that can be applied to any neural network classifier.","We propose the first attack-independent robustness metric, a.k.a CLEVER, that can be applied to any neural network classifier."
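The Sinkhorn approximation underlying entry 526 alternately normalizes the rows and columns of a positive matrix, yielding an approximately doubly stochastic matrix whose entries can be read as approximate matching marginals. A small log-space sketch follows; the iteration count and the random potentials are arbitrary illustrative choices.

```python
import numpy as np

def sinkhorn(log_alpha, n_iters=50):
    """Project exp(log_alpha) onto (approximately) doubly stochastic matrices.

    log_alpha: (n, n) log-potentials of an exponential family over permutations.
    Returns a matrix whose (i, j) entry approximates the marginal probability
    that item i is matched to position j.
    """
    log_P = log_alpha.copy()
    for _ in range(n_iters):
        log_P -= np.logaddexp.reduce(log_P, axis=1, keepdims=True)  # row normalization
        log_P -= np.logaddexp.reduce(log_P, axis=0, keepdims=True)  # column normalization
    return np.exp(log_P)

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(4, 4)))
print(P.round(2))
print(P.sum(axis=0), P.sum(axis=1))   # rows and columns each sum to ~1
```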
528,SSoC: Learning Spontaneous and Self-Organizing Communication for Multi-Agent Collaboration,"Multi-agent collaboration is required by numerous real-world problems.Although a distributed setting is usually adopted by practical systems, local-range communication and information aggregation still matter in fulfilling complex tasks.For multi-agent reinforcement learning, many previous studies have been dedicated to designing an effective communication architecture.However, existing models usually suffer from an ossified communication structure, e.g., most of them predefine a particular communication mode by specifying a fixed time frequency and spatial scope for agents to communicate regardless of necessity.Such design is incapable of dealing with multi-agent scenarios that are capricious and complicated, especially when only partial information is available.Motivated by this, we argue that the solution is to build a spontaneous and self-organizing communication (SSoC) learning scheme.By treating the communication behaviour as an explicit action, SSoC learns to organize communication in an effective and efficient way.Particularly, it enables each agent to spontaneously decide when and to whom to send messages based on its observed states.In this way, a dynamic inter-agent communication channel is established in an online and self-organizing manner.The agents also learn how to adaptively aggregate the received messages and their own hidden states to execute actions.Various experiments have been conducted to demonstrate that SSoC indeed learns intelligent message passing among agents located far apart.With such agile communications, we observe that effective collaboration tactics emerge which have not been mastered by the compared baselines.",This paper proposes a spontaneous and self-organizing communication (SSoC) learning scheme for multi-agent RL tasks.
529,BERT for Sequence-to-Sequence Multi-Label Text Classification,"We study the BERT language representation model and the sequence generation model with a BERT encoder for the multi-label text classification task.We experiment with both models and explore their special qualities for this setting.We also introduce and examine experimentally a mixed model, which is an ensemble of multi-label BERT and sequence generating BERT models.Our experiments demonstrated that BERT-based models and the mixed model, in particular, outperform current baselines in several metrics, achieving state-of-the-art results on three well-studied multi-label classification datasets with English texts and two private Yandex Taxi datasets with Russian texts.",On using BERT as an encoder for sequential prediction of labels in the multi-label text classification task 530,DeepEnFM: Deep neural networks with Encoder enhanced Factorization Machine,"Click Through Rate (CTR) prediction is a critical task in industrial applications, especially for online social and commerce applications.It is challenging to find a proper way to automatically discover the effective cross features in CTR tasks.We propose a novel model for CTR tasks, called Deep neural networks with Encoder enhanced Factorization Machine.Instead of learning the cross features directly, DeepEnFM adopts the Transformer encoder as a backbone to align the feature embeddings with the clues of other fields.The embeddings generated by the encoder are beneficial for further feature interactions.Particularly, DeepEnFM utilizes a bilinear approach to generate different similarity functions with respect to different field pairs.Furthermore, the max-pooling method enables DeepEnFM to capture both the supplementary and suppressing information among different attention heads.Our model is validated on the Criteo and Avazu datasets, and achieves state-of-the-art performance.",DNN and Encoder enhanced FM with bilinear attention and max-pooling for CTR 531,Bayesian Prediction of Future Street Scenes using Synthetic Likelihoods,"For autonomous agents to successfully operate in the real world, the ability to anticipate future scene states is a key competence.In real-world scenarios, future states become increasingly uncertain and multi-modal, particularly on long time horizons.Dropout based Bayesian inference provides a computationally tractable, theoretically well grounded approach to learn different hypotheses/models to deal with uncertain futures and make predictions that correspond well to observations -- that is, predictions that are well calibrated.However, it turns out that such approaches fall short of capturing complex real-world scenes, even falling behind in accuracy when compared to the plain deterministic approaches.This is because the log-likelihood estimate used discourages diversity.In this work, we propose a novel Bayesian formulation for anticipating future scene states which leverages synthetic likelihoods that encourage the learning of diverse models to accurately capture the multi-modal nature of future scene states.We show that our approach achieves accurate state-of-the-art predictions and calibrated probabilities through extensive experiments for scene anticipation on the Cityscapes dataset.Moreover, we show that our approach generalizes across diverse tasks such as digit generation and precipitation forecasting.",Dropout based Bayesian inference is extended to deal with multi-modality and is evaluated on scene anticipation tasks.
532,Robust Conditional Generative Adversarial Networks,"Conditional generative adversarial networks have led to large improvements in the task of conditional image generation, which lies at the heart of computer vision.The major focus so far has been on performance improvement, while there has been little effort in making cGAN more robust to noise.The regression might lead to arbitrarily large errors in the output, which makes cGAN unreliable for real-world applications.In this work, we introduce a novel conditional GAN model, called RoCGAN, which leverages structure in the target space of the model to address the issue.Our model augments the generator with an unsupervised pathway, which promotes the outputs of the generator to span the target manifold even in the presence of intense noise.We prove that RoCGAN share similar theoretical properties as GAN and experimentally verify that our model outperforms existing state-of-the-art cGAN architectures by a large margin in a variety of domains including images from natural scenes and faces.","We introduce a new type of conditional GAN, which aims to leverage structure in the target space of the generator. We augment the generator with a new, unsupervised pathway to learn the target structure. " 533,ATLPA:ADVERSARIAL TOLERANT LOGIT PAIRING WITH ATTENTION FOR CONVOLUTIONAL NEURAL NETWORK,"Though deep neural networks have achieved the state of the art performance in visual classification, recent studies have shown that they are all vulnerable to the attack of adversarial examples.To solve the problem, some regularization adversarial training methods, constraining the output label or logit, have been studied.In this paper, we propose a novel regularized adversarial training framework ATLPA,namely Adversarial Tolerant Logit Pairing with Attention.Instead of constraining a hard distribution in adversarial training, ATLPA uses Tolerant Logit which consists of confidence distribution on top-k classes and captures inter-class similarities at the image level.Specifically, in addition to minimizing the empirical loss, ATLPA encourages attention map for pairs of examples to be similar.When applied to clean examples and their adversarial counterparts, ATLPA improves accuracy on adversarial examples over adversarial training.We evaluate ATLPA with the state of the art algorithms, the experiment results show that our method outperforms these baselines with higher accuracy.Compared with previous work, our work is evaluated under highly challenging PGD attack: the maximum perturbation is 64 and 128 with 10 to 200 attack iterations.","In this paper, we propose a novel regularized adversarial training framework ATLPA,namely Adversarial Tolerant Logit Pairing with Attention." 
534,Generalizing Reinforcement Learning to Unseen Actions,"A fundamental trait of intelligence is the ability to achieve goals in the face of novel circumstances.In this work, we address one such setting which requires solving a task with a novel set of actions.Empowering machines with this ability requires generalization in the way an agent perceives its available actions along with the way it uses these actions to solve tasks.Hence, we propose a framework to enable generalization over both these aspects: understanding an action’s functionality, and using actions to solve tasks through reinforcement learning.Specifically, an agent interprets an action’s behavior using unsupervised representation learning over a collection of data samples reflecting the diverse properties of that action.We employ a reinforcement learning architecture which works over these action representations, and propose regularization metrics essential for enabling generalization in a policy.We illustrate the generalizability of the representation learning method and policy, to enable zero-shot generalization to previously unseen actions on challenging sequential decision-making environments.Our results and videos can be found at sites.google.com/view/action-generalization/",We address the problem of generalization of reinforcement learning to unseen action spaces. 535,Intensity-Free Learning of Temporal Point Processes,"Temporal point processes are the dominant paradigm for modeling sequences of events happening at irregular intervals.The standard way of learning in such models is by estimating the conditional intensity function. However, parameterizing the intensity function usually incurs several trade-offs.We show how to overcome the limitations of intensity-based approaches by directly modeling the conditional distribution of inter-event times. We draw on the literature on normalizing flows to design models that are flexible and efficient.We additionally propose a simple mixture model that matches the flexibility of flow-based models, but also permits sampling and computing moments in closed form. The proposed models achieve state-of-the-art performance in standard prediction tasks and are suitable for novel applications, such as learning sequence embeddings and imputing missing data.","Learn in temporal point processes by modeling the conditional density, not the conditional intensity." 
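The "simple mixture model" route in entry 535 models the conditional density of inter-event times directly; a common concrete choice is a mixture of log-normals. The sketch below evaluates that log-density with fixed parameters; in the full intensity-free model these parameters would be produced by a history encoder such as an RNN, so treating them as constants here is purely an illustrative assumption.

```python
import numpy as np
from scipy.stats import lognorm
from scipy.special import logsumexp

def lognormal_mixture_logpdf(tau, weights, means, stds):
    """Log-density of inter-event times under a K-component log-normal mixture.

    tau:     positive inter-event times, shape (N,).
    weights: mixture weights summing to 1, shape (K,).
    means, stds: log-space parameters of the K components.
    """
    tau = np.asarray(tau)[:, None]
    comp = lognorm.logpdf(tau, s=stds, scale=np.exp(means))     # (N, K) component log-densities
    return logsumexp(comp + np.log(weights), axis=1)            # (N,) mixture log-densities

taus = np.array([0.5, 2.0, 10.0])
loglik = lognormal_mixture_logpdf(taus,
                                  weights=np.array([0.6, 0.4]),
                                  means=np.array([0.0, 2.0]),
                                  stds=np.array([1.0, 0.5])).sum()
print(loglik)   # maximize this (per event, conditioned on history) during training
```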
536,AdaGAN: Adaptive GAN for Many-to-Many Non-Parallel Voice Conversion,"Voice Conversion (VC) is the task of converting perceived speaker identity from a source speaker to a particular target speaker.Earlier approaches in the literature primarily find a mapping between the given source-target speaker-pairs.Developing mapping techniques for many-to-many VC using non-parallel data, including zero-shot learning, remains a less explored area in VC.Most of the many-to-many VC architectures require training data from all the target speakers for whom we want to convert the voices.In this paper, we propose a novel style transfer architecture, which can also be extended to generate voices even for target speakers whose data were not used in the training.In particular, we propose the Adaptive Generative Adversarial Network (AdaGAN), whose new architectural training procedure helps in learning a normalized speaker-independent latent representation, which will be used to generate speech with different speaking styles in the context of VC.We compare our results with the state-of-the-art StarGAN-VC architecture.In particular, AdaGAN achieves 31.73% and 10.37% relative improvement compared to StarGAN in MOS tests for speech quality and speaker similarity, respectively.The key strength of the proposed architecture is that it yields these results with less computational complexity.AdaGAN is 88.6% less complex than StarGAN-VC in terms of floating-point operations per second, and 85.46% less complex in terms of trainable parameters.",Novel adaptive instance normalization based GAN framework for non-parallel many-to-many and zero-shot VC. 537,Sparse Transformer: Concentrated Attention Through Explicit Selection,"The self-attention-based Transformer has demonstrated state-of-the-art performance in a number of natural language processing tasks.Self-attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context.To tackle the problem, we propose a novel model called Sparse Transformer.Sparse Transformer is able to improve the concentration of attention on the global context through an explicit selection of the most relevant segments.Extensive experimental results on a series of natural language processing tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Sparse Transformer in model performance. Sparse Transformer reaches state-of-the-art performance on the IWSLT 2015 English-to-Vietnamese translation and IWSLT 2014 German-to-English translation.In addition, we conduct a qualitative analysis to account for Sparse Transformer's superior performance.",This work proposes Sparse Transformer to improve the concentration of attention on the global context through an explicit selection of the most relevant segments for sequence-to-sequence learning.
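The "explicit selection" in entry 537 can be implemented by keeping, for each query, only the top-k attention scores and masking the rest to negative infinity before the softmax, so that irrelevant positions get exactly zero weight. A hedged PyTorch sketch; the value of k and the tie handling are illustrative choices, not necessarily the paper's exact design.

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k=8):
    """Scaled dot-product attention that keeps only the top-k scores per query.

    q, k, v: (batch, heads, seq_len, d_head). Scores outside the top-k are set
    to -inf, so they receive zero attention weight after the softmax.
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5           # (B, H, Lq, Lk)
    top_k = min(top_k, scores.size(-1))
    kth = scores.topk(top_k, dim=-1).values[..., -1:]     # k-th largest score per query
    scores = scores.masked_fill(scores < kth, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 4, 16, 32)
out = topk_sparse_attention(q, k, v, top_k=4)
print(out.shape)  # torch.Size([2, 4, 16, 32])
```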
538,Data-Efficient Image Recognition with Contrastive Predictive Coding,"Human observers can learn to recognize new categories of objects from a handful of examples, yet doing so with machine perception remains an open challenge.We hypothesize that data-efficient recognition is enabled by representations which make the variability in natural signals more predictable, as suggested by recent perceptual evidence.We therefore revisit and improve Contrastive Predictive Coding, a recently-proposed unsupervised learning framework, and arrive at a representation which enables generalization from small amounts of labeled data.When provided with only 1% of ImageNet labels, this model retains a strong classification performance, 73% Top-5 accuracy, outperforming supervised networks by 28% and state-of-the-art semi-supervised methods by 14%.We also find this representation to serve as a useful substrate for object detection on the PASCAL-VOC 2007 dataset, approaching the performance of representations trained with a fully annotated ImageNet dataset.",Unsupervised representations learned with Contrastive Predictive Coding enable data-efficient image classification. 539,Sparse Networks from Scratch: Faster Training without Losing Performance,"We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels.We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients to identify layers and weights which reduce the error efficiently.Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer.Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights.We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms.Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training.In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network.",Redistributing and growing weights according to the momentum magnitude enables the training of sparse networks from random initializations that can reach dense performance levels with 5% to 50% weights while accelerating training by up to 5.6x. 
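Contrastive Predictive Coding (entry 538) trains its encoder with an InfoNCE objective: a prediction made from the context must score higher, under a dot-product critic, for its true target than for negatives drawn from the same batch. A minimal PyTorch sketch follows; the L2 normalization and temperature are a common variant rather than the paper's exact log-bilinear critic.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(context, targets, temperature=0.1):
    """InfoNCE loss in the spirit of contrastive predictive coding (sketch).

    context: (N, D) predictions computed from the context (e.g., by an
             autoregressive network plus a linear predictor).
    targets: (N, D) encoded positives; targets[i] is the positive for
             context[i], and all other rows in the batch act as negatives.
    """
    context = F.normalize(context, dim=-1)
    targets = F.normalize(targets, dim=-1)
    logits = context @ targets.t() / temperature   # (N, N) similarity scores
    labels = torch.arange(len(context))            # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

ctx, tgt = torch.randn(64, 128), torch.randn(64, 128)
print(info_nce_loss(ctx, tgt).item())
```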
540,Understanding Local Minima in Neural Networks by Loss Surface Decomposition,"To provide principled ways of designing proper Deep Neural Network models, it is essential to understand the loss surface of DNNs under realistic assumptions.We introduce interesting aspects for understanding the local minima and overall structure of the loss surface.The parameter domain of the loss surface can be decomposed into regions in which activation values are consistent.We found that, in each region, the loss surface has properties similar to those of linear neural networks, where every local minimum is a global minimum.This means that every differentiable local minimum is the global minimum of the corresponding region.We prove this for a neural network with one hidden layer using rectified linear units, under realistic assumptions.There are poor regions that lead to poor local minima, and we explain why such regions exist even in overparameterized DNNs.",The loss surface of neural networks is a disjoint union of regions where every local minimum is a global minimum of the corresponding region. 541,Unsupervised Distillation of Syntactic Information from Contextualized Word Representations,"Contextualized word representations, such as ELMo and BERT, were shown to perform well on a variety of semantic and structural tasks.In this work, we tackle the task of unsupervised disentanglement between semantics and structure in neural language representations: we aim to learn a transformation of the contextualized vectors that discards the lexical semantics but keeps the structural information.To this end, we automatically generate groups of sentences which are structurally similar but semantically different, and use a metric-learning approach to learn a transformation that emphasizes the structural component that is encoded in the vectors.We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.Finally, we demonstrate the utility of our distilled representations by showing that they outperform the original contextualized representations in a few-shot parsing setting.",We distill language model representations for syntax by unsupervised metric learning 542,Semi-parametric topological memory for navigation,"We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals.The proposed semi-parametric topological memory (SPTM) consists of a graph with nodes corresponding to locations in the environment and a deep network capable of retrieving nodes from the graph based on observations.The graph stores no metric information, only connectivity of locations corresponding to the nodes.We use SPTM as a planning module in a navigation system.Given only 5 minutes of footage of a previously unseen maze, an SPTM-based navigation agent can build a topological map of the environment and use it to confidently navigate towards goals.The average success rate of the SPTM agent in goal-directed navigation across test environments is higher than the best-performing baseline by a factor of three.","We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals."
543,Image Classification Through Top-Down Image Pyramid Traversal,"The available resolution in our visual world is extremely high, if not infinite.Existing CNNs can be applied in a fully convolutional way to images of arbitrary resolution, but as the size of the input increases, they cannot capture contextual information.In addition, computational requirements scale linearly with the number of input pixels, and resources are allocated uniformly across the input, no matter how informative different image regions are.We attempt to address these problems by proposing a novel architecture that traverses an image pyramid in a top-down fashion, while it uses a hard attention mechanism to selectively process only the most informative image parts.We conduct experiments on MNIST and ImageNet datasets, and we show that our models can significantly outperform fully convolutional counterparts when the resolution of the input is so large that the receptive field of the baselines cannot adequately cover the objects of interest.Gains in performance come at a lower FLOP count, because of the selective processing that we follow.Furthermore, our attention mechanism makes our predictions more interpretable, and creates a trade-off between accuracy and complexity that can be tuned both at training and at testing time.","We propose a novel architecture that traverses an image pyramid in a top-down fashion, while it visits only the most informative regions along the way." 544,Self-Tuning Networks: Bilevel Optimization of Hyperparameters using Structured Best-Response Functions,"Hyperparameter optimization can be formulated as a bilevel optimization problem, where the optimal parameters on the training set depend on the hyperparameters.We aim to adapt regularization hyperparameters for neural networks by fitting compact approximations to the best-response function, which maps hyperparameters to optimal weights and biases.We show how to construct scalable best-response approximations for neural networks by modeling the best-response as a single network whose hidden units are gated conditionally on the regularizer.We justify this approximation by showing the exact best-response for a shallow linear network with L2-regularized Jacobian can be represented by a similar gating mechanism.We fit this model using a gradient-based hyperparameter optimization algorithm which alternates between approximating the best-response around the current hyperparameters and optimizing the hyperparameters using the approximate best-response function.Unlike other gradient-based approaches, we do not require differentiating the training loss with respect to the hyperparameters, allowing us to tune discrete hyperparameters, data augmentation hyperparameters, and dropout probabilities.Because the hyperparameters are adapted online, our approach discovers hyperparameter schedules that can outperform fixed hyperparameter values.Empirically, our approach outperforms competing hyperparameter optimization methods on large-scale deep learning problems.We call our networks, which update their own hyperparameters online during training, Self-Tuning Networks.","We use a hypernetwork to predict optimal weights given hyperparameters, and jointly train everything together."
545,On the Evaluation of Conditional GANs,"Conditional Generative Adversarial Networks are finding increasingly widespread use in many application domains.Despite outstanding progress, quantitative evaluation of such models often involves multiple distinct metrics to assess different desirable properties, such as image quality, conditional consistency, and intra-conditioning diversity.In this setting, model benchmarking becomes a challenge, as each metric may indicate a different ""best"" model.In this paper, we propose the Frechet Joint Distance, which is defined as the Frechet distance between joint distributions of images and conditioning, allowing it to implicitly capture the aforementioned properties in a single metric.We conduct proof-of-concept experiments on a controllable synthetic dataset, which consistently highlight the benefits of FJD when compared to currently established metrics.Moreover, we use the newly introduced metric to compare existing cGAN-based models for a variety of conditioning modalities.We show that FJD can be used as a promising single metric for model benchmarking.","We propose a new metric for evaluating conditional GANs that captures image quality, conditional consistency, and intra-conditioning diversity in a single measure." 546,TreeQN and ATreeC: Differentiable Tree-Structured Models for Deep Reinforcement Learning,"Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL.On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori.However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning.To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions.TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values.We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network.Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the tree.We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks on multiple Atari games.Furthermore, we present ablation studies that demonstrate the effect of different auxiliary losses on learning transition models.","We present TreeQN and ATreeC, new architectures for deep reinforcement learning in discrete-action domains that integrate differentiable on-line tree planning into the action-value function or policy." 
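Entry 546's tree construction and backup are simple enough to illustrate. The sketch below is a rough approximation, not the authors' architecture: it expands a depth-limited tree over discrete actions using stubbed learned modules (`transition`, `reward`, `value` stand in for the learned sub-networks) and backs values up with a plain max, whereas TreeQN uses a softened aggregation and trains everything end-to-end.

```python
import numpy as np

def tree_q_values(state, actions, transition, reward, value, depth, gamma=0.99):
    """Estimate Q(state, a) for each discrete action by recursively expanding a tree
    in a learned abstract state space and backing up the predictions.

    transition(s, a) -> next abstract state, reward(s, a) -> scalar, value(s) -> scalar;
    all three stand in for learned sub-networks and are assumptions of this sketch.
    """
    def backup(s, d):
        if d <= 0:
            return value(s)
        q = [reward(s, a) + gamma * backup(transition(s, a), d - 1) for a in actions]
        return max(q)   # plain max backup; the paper's aggregation is a softened variant

    return np.array([reward(state, a) + gamma * backup(transition(state, a), depth - 1)
                     for a in actions])
```

In the actual model these Q-values feed the usual DQN loss (or, for ATreeC, a softmax policy head), so the learned transition and reward modules are optimised directly for their use inside the tree.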
547,Neural Message Passing for Multi-Label Classification,"Multi-label classification is the task of assigning a set of target labels for a given sample.Modeling the combinatorial label interactions in MLC has been a long-standing challenge.Recurrent neural network based encoder-decoder models have shown state-of-the-art performance for solving MLC.However, the sequential nature of modeling label dependencies through an RNN limits its ability to perform parallel computation, predict dense labels, and provide interpretable results.In this paper, we propose Message Passing Encoder-Decoder Networks, aiming to provide fast, accurate, and interpretable MLC.MPED networks model the joint prediction of labels by replacing all RNNs in the encoder-decoder architecture with message passing mechanisms and dispense with autoregressive inference entirely. The proposed models are simple, fast, accurate, interpretable, and structure-agnostic.Experiments on seven real-world MLC datasets show the proposed models outperform autoregressive RNN models across five different metrics with a significant speedup during training and testing time.",We propose Message Passing Encoder-Decoder networks for a fast and accurate way of modelling label dependencies for multi-label classification. 548,Few-Shot Regression via Learning Sparsifying Basis Functions,"Recent few-shot learning algorithms have enabled models to quickly adapt to new tasks based on only a few training samples.Previous few-shot learning works have mainly focused on classification and reinforcement learning.In this paper, we propose a few-shot meta-learning system that focuses exclusively on regression tasks.Our model is based on the idea that the degree of freedom of the unknown function can be significantly reduced if it is represented as a linear combination of a set of sparsifying basis functions.This enables a few labeled samples to approximate the function.We design a Basis Function Learner network to encode basis functions for a task distribution, and a Weights Generator network to generate the weight vector for a novel task.We show that our model outperforms the current state-of-the-art meta-learning methods in various regression tasks.",We propose a method of doing few-shot regression by learning a set of basis functions to represent the function distribution.
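The core idea of entry 548 reduces, once the basis functions are given, to a small least-squares fit. The sketch below shows that reduced form with a fixed Fourier-style basis standing in for the learned Basis Function Learner, and a ridge solve standing in for the Weights Generator; all names, the basis, and the regulariser are illustrative assumptions.

```python
import numpy as np

def fit_few_shot_regressor(x_support, y_support, basis_fns, ridge=1e-3):
    """Represent the unknown function as a linear combination of basis functions and
    fit the combination weights from only a few labeled samples (ridge least squares)."""
    Phi = np.stack([[phi(x) for phi in basis_fns] for x in x_support])   # (K, B)
    A = Phi.T @ Phi + ridge * np.eye(len(basis_fns))
    w = np.linalg.solve(A, Phi.T @ y_support)
    return lambda x: float(np.dot(w, [phi(x) for phi in basis_fns]))

# Toy usage: 5 support points from a sine-like task, fixed Fourier-style bases.
basis = [lambda x, k=k: np.sin(k * x) for k in range(1, 6)] + [lambda x: 1.0]
xs = np.array([0.1, 0.5, 1.0, 2.0, 3.0])
ys = np.sin(2 * xs) + 0.05
f_hat = fit_few_shot_regressor(xs, ys, basis)
print(f_hat(1.5))
```

The point of the paper is precisely that a learned, sparsifying basis makes this few-sample fit accurate across a task distribution, which a hand-picked basis generally will not.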
549,Distilling the Knowledge of BERT for Text Generation,"Large-scale pre-trained language models, such as BERT, have recently achieved great success in a wide range of language understanding tasks.However, it remains an open question how to utilize BERT for text generation tasks.In this paper, we present a novel approach to addressing this challenge in a generic sequence-to-sequence setting.We first propose a new task, Conditional Masked Language Modeling, to enable fine-tuning of BERT on a target text-generation dataset.The fine-tuned BERT is then exploited as extra supervision to improve conventional Seq2Seq models for text generation.By leveraging BERT's idiosyncratic bidirectional nature, distilling the knowledge learned from BERT encourages auto-regressive Seq2Seq models to plan ahead, imposing global sequence-level supervision for coherent text generation.Experiments show that the proposed approach significantly outperforms strong Transformer baselines on multiple text generation tasks, including machine translation and text summarization.Our proposed model also achieves new state-of-the-art results on the IWSLT German-English and English-Vietnamese MT datasets.",We propose a model-agnostic way to leverage BERT for text generation and achieve improvements over Transformer on 2 tasks over 4 datasets. 550,Brain-inspired Robust Vision using Convolutional Neural Networks with Feedback,"Humans have the remarkable ability to correctly classify images despite possible degradation.Many studies have suggested that this hallmark of human vision results from the interaction between feedforward signals from bottom-up pathways of the visual cortex and feedback signals provided by top-down pathways.Motivated by such interaction, we propose a new neuro-inspired model, namely Convolutional Neural Networks with Feedback.CNN-F extends CNN with a feedback generative network, combining bottom-up and top-down inference to perform approximate loopy belief propagation.We show that CNN-F's iterative inference allows for disentanglement of latent variables across layers.We validate the advantages of CNN-F over the baseline CNN.Our experimental results suggest that the CNN-F is more robust to image degradation such as pixel noise, occlusion, and blur. Furthermore, we show that the CNN-F is capable of restoring original images from the degraded ones with high reconstruction accuracy while introducing negligible artifacts.",CNN-F extends CNN with a feedback generative network for robust vision.
551,Approximation and non-parametric estimation of ResNet-type convolutional neural networks via block-sparse fully-connected neural networks,"We develop new approximation and statistical learning theories of convolutional neural networks via the ResNet-type structure where the channel size, filter size, and width are fixed.It is shown that a ResNet-type CNN is a universal approximator and its expression ability is no worse than that of fully-connected neural networks with a block-sparse structure, even if the size of each layer in the CNN is fixed.Our result is general in the sense that we can automatically translate any approximation rate achieved by block-sparse FNNs into that by CNNs.Thanks to the general theory, it is shown that learning on CNNs satisfies optimality in approximation and estimation of several important function classes.As applications, we consider two types of function classes to be estimated: the Barron class and the Hölder class.We prove the clipped empirical risk minimization estimator can achieve the same rate as FNNs even when the channel size, filter size, and width of CNNs are constant with respect to the sample size.This is minimax optimal for the Hölder class.Our proof is based on sophisticated evaluations of the covering number of CNNs and the non-trivial parameter rescaling technique to control the Lipschitz constant of CNNs to be constructed.",It is shown that ResNet-type CNNs are universal approximators and their expression ability is no worse than that of fully connected neural networks (FNNs) with a block-sparse structure even if the size of each layer in the CNN is fixed. 552,Attentive Weights Generation for Few Shot Learning via Information Maximization,"Few-shot image classification aims at learning a classifier from limited labeled data.Generating the classification weights has been applied in many meta-learning approaches for few-shot image classification due to its simplicity and effectiveness.However, we argue that it is difficult to generate exact and universal classification weights for all the diverse query samples from very few training samples.In this work, we introduce Attentive Weights Generation for few-shot learning via Information Maximization, which addresses the current issues with two novel contributions.i) AWGIM generates different classification weights for different query samples by letting each query sample attend to the whole support set.ii) To guarantee that the generated weights are adaptive to different query samples, we re-formulate the problem to maximize the lower bound of the mutual information between the generated weights and the query as well as the support data.To the best of our knowledge, this is the first attempt to incorporate information maximization into few-shot learning.Both contributions are shown to be effective in extensive experiments, and we show that AWGIM is able to achieve state-of-the-art performance on benchmark datasets.",A novel few-shot learning method to generate query-specific classification weights via information maximization.
553,SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering,"Conversational question answering is a novel QA task that requires the understanding of dialogue context.Different from traditional single-turn machine reading comprehension, CQA is a comprehensive task comprised of passage reading, coreference resolution, and contextual understanding.In this paper, we propose an innovative contextualized attention-based deep neural network, SDNet, to fuse context into traditional MRC models.Our model leverages both inter-attention and self-attention to comprehend the conversation and passage.Furthermore, we demonstrate a novel method to integrate the BERT contextual model as a sub-module in our network.Empirical results show the effectiveness of SDNet.On the CoQA leaderboard, it outperforms the previous best model's F1 score by 1.6%.Our ensemble model further improves the F1 score by 2.7%.",A neural method for conversational question answering with attention mechanism and a novel usage of BERT as contextual embedder 554,Feature Partitioning for Efficient Multi-Task Architectures,"Multi-task learning promises to use less data, parameters, and time than training separate single-task models.But realizing these benefits in practice is challenging.In particular, it is difficult to define a suitable architecture that has enough capacity to support many tasks while not requiring excessive compute for each individual task.There are difficult trade-offs when deciding how to allocate parameters and layers across a large set of tasks.To address this, we propose a method for automatically searching over multi-task architectures that accounts for resource constraints.We define a parameterization of feature sharing strategies for effective coverage and sampling of architectures.We also present a method for quick evaluation of such architectures with feature distillation.Together these contributions allow us to quickly optimize for parameter-efficient multi-task models.We benchmark on Visual Decathlon, demonstrating that we can automatically search for and identify architectures that effectively make trade-offs between task resource requirements while maintaining a high level of final performance.",automatic search for multi-task architectures that reduce per-task feature use 555,Situating Sentence Embedders with Nearest Neighbor Overlap,"As distributed approaches to natural language semantics have developed and diversified, embedders for linguistic units larger than words have come to play an increasingly important role. To date, such embedders have been evaluated using benchmark tasks and linguistic probes. We propose a comparative approach, nearest neighbor overlap, that quantifies similarity between embedders in a task-agnostic manner. N2O requires only a collection of examples and is simple to understand: two embedders are more similar if, for the same set of inputs, there is greater overlap between the inputs' nearest neighbors.We use N2O to compare 21 sentence embedders and show the effects of different design choices and architectures.","We propose nearest neighbor overlap, a procedure which quantifies similarity between embedders in a task-agnostic manner, and use it to compare 21 sentence embedders."
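Entry 555's N2O measure is simple enough to write down directly. The NumPy sketch below computes, for two embedders applied to the same inputs, the mean overlap of their k-nearest-neighbor sets under cosine similarity; the choice of k, the cosine metric, and the exact normalization are assumptions of this sketch rather than the paper's precise definition.

```python
import numpy as np

def knn_sets(embeddings, k):
    """Indices of the k nearest neighbors (by cosine similarity) of each row, excluding itself."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)               # never count a point as its own neighbor
    return [set(np.argsort(-row)[:k]) for row in sims]

def nearest_neighbor_overlap(emb_a, emb_b, k=10):
    """Mean fraction of shared neighbors between two embedders on the same sentences."""
    nn_a, nn_b = knn_sets(emb_a, k), knn_sets(emb_b, k)
    return float(np.mean([len(a & b) / k for a, b in zip(nn_a, nn_b)]))
```

For 21 embedders one would compute this score for every pair of embedders and inspect the resulting similarity matrix, which is how the comparison in the abstract is organised.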
556,"""Best-of-Many-Samples"" Distribution Matching","Generative Adversarial Networks can achieve state-of-the-art sample quality in generative modelling tasks but suffer from the mode collapse problem.Variational Autoencoders on the other hand explicitly maximize a reconstruction-based data log-likelihood forcing them to cover all modes, but suffer from poorer sample quality.Recent works have proposed hybrid VAE-GAN frameworks which integrate a GAN-based synthetic likelihood into the VAE objective to address both the mode collapse and sample quality issues, with limited success.This is because the VAE objective forces a trade-off between the data log-likelihood and divergence to the latent prior.The synthetic likelihood ratio term also shows instability during training.We propose a novel objective with a ""Best-of-Many-Samples"" reconstruction cost and a stable direct estimate of the synthetic likelihood.This enables our hybrid VAE-GAN framework to achieve high data log-likelihood and low divergence to the latent prior at the same time and shows significant improvement over both hybrid VAE-GANs and plain GANs in mode coverage and quality.",We propose a new objective for training hybrid VAE-GANs which leads to significant improvement in mode coverage and quality. 557,Lifelong Learning via Online Leverage Score Sampling,"In order to mimic the human ability of continual acquisition and transfer of knowledge across various tasks, a learning system needs the capability for life-long learning, effectively utilizing the previously acquired skills.As such, the key challenge is to transfer and generalize the knowledge learned from one task to other tasks, avoiding interference from previous knowledge and improving the overall performance.In this paper, within the continual learning paradigm, we introduce a method that effectively forgets the less useful data samples continuously across different tasks.The method uses statistical leverage score information to measure the importance of the data samples in every task and adopts the frequent directions approach to enable a life-long learning property.This effectively maintains a constant training size across all tasks.We first provide some mathematical intuition for the method and then demonstrate its effectiveness with experiments on variants of MNIST and CIFAR100 datasets.",A new method uses statistical leverage score information to measure the importance of the data samples in every task and adopts the frequent directions approach to enable a life-long learning property. 558,Polar Transformer Networks,"Convolutional neural networks are inherently equivariant to translation.Efforts to embed other forms of equivariance have concentrated solely on rotation.We expand the notion of equivariance in CNNs through the Polar Transformer Network.PTN combines ideas from the Spatial Transformer Network and canonical coordinate representations.The result is a network invariant to translation and equivariant to both rotation and scale.PTN is trained end-to-end and composed of three distinct stages: a polar origin predictor, the newly introduced polar transformer module and a classifier.PTN achieves state-of-the-art on rotated MNIST and the newly introduced SIM2MNIST dataset, an MNIST variation obtained by adding clutter and perturbing digits with translation, rotation and scaling.The ideas of PTN are extensible to 3D, which we demonstrate through the Cylindrical Transformer Network.","We learn feature maps invariant to translation, and equivariant to rotation and scale."
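Entry 558 rests on the classical fact that a log-polar resampling around an object-centred origin turns rotation and scale into translations, which a conventional CNN then handles equivariantly. The NumPy sketch below shows only that coordinate transform with bilinear sampling; in PTN the origin is predicted by a sub-network and the sampling is performed differentiably inside the model, neither of which is shown here.

```python
import numpy as np

def log_polar_transform(img, origin, out_h=64, out_w=64):
    """Resample a grayscale image onto a log-polar grid around a given origin, so that
    rotation and scale about that origin become translations of the output."""
    H, W = img.shape
    cy, cx = origin
    max_r = np.hypot(max(cy, H - cy), max(cx, W - cx))
    log_r = np.linspace(0.0, np.log(max_r), out_h)            # rows index log-radius
    theta = np.linspace(0.0, 2 * np.pi, out_w, endpoint=False)  # columns index angle
    rr = np.exp(log_r)[:, None]
    ys = cy + rr * np.sin(theta)[None, :]
    xs = cx + rr * np.cos(theta)[None, :]

    # Bilinear sampling with zero padding outside the image.
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = y0 + 1, x0 + 1
    wy, wx = ys - y0, xs - x0

    def at(y, x):
        inside = (y >= 0) & (y < H) & (x >= 0) & (x < W)
        vals = np.zeros_like(ys)
        vals[inside] = img[y[inside], x[inside]]
        return vals

    return ((1 - wy) * (1 - wx) * at(y0, x0) + (1 - wy) * wx * at(y0, x1)
            + wy * (1 - wx) * at(y1, x0) + wy * wx * at(y1, x1))
```

A rotation of the input about the origin shifts the output along the angle axis, and a rescaling shifts it along the log-radius axis, which is what makes an ordinary translation-equivariant CNN on the transformed image rotation- and scale-equivariant.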
559,Recasting Gradient-Based Meta-Learning as Hierarchical Bayes,"Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task.Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks.Here, we reformulate the model-agnostic meta-learning algorithm of Finn et al. as a method for probabilistic inference in a hierarchical Bayesian model.In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference.Furthermore, the identification of MAML as hierarchical Bayes provides a way to understand the algorithm’s operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies for efficient inference.We use this opportunity to propose an improvement to the MAML algorithm that makes use of techniques from approximate inference and curvature estimation.","A specific gradient-based meta-learning algorithm, MAML, is equivalent to an inference procedure in a hierarchical Bayesian model. We use this connection to improve MAML via methods from approximate inference and curvature estimation." 560,Autostacker: an Automatic Evolutionary Hierarchical Machine Learning System,"This work provides an automatic machine learning modelling architecture called Autostacker.Autostacker improves the prediction accuracy of machine learning baselines by utilizing an innovative hierarchical stacking architecture and an efficient parameter search algorithm.Neither prior domain knowledge about the data nor feature preprocessing is needed.We significantly reduce the time of AutoML with a naturally inspired algorithm - Parallel Hill Climbing.By parallelizing PHC, Autostacker can provide candidate pipelines with sufficient prediction accuracy within a short amount of time.These pipelines can be used as is or as a starting point for human experts to build on.By focusing on the modelling process, Autostacker breaks the tradition of following fixed-order pipelines by exploring not only single-model pipelines but also innovative combinations and structures.As we will show in the experiment section, Autostacker achieves significantly better performance both in terms of test accuracy and time cost compared with initial human trials and a recent popular AutoML system.",An automated machine learning system with an efficient search algorithm and an innovative structure to provide better model baselines. 561,Batch simulations and uncertainty quantification in Gaussian process surrogate-based approximate Bayesian computation,"Surrogate models can be used to accelerate approximate Bayesian computation.In one such framework the discrepancy between simulated and observed data is modelled with a Gaussian process.So far principled strategies have been proposed only for sequential selection of the simulation locations.To address this limitation, we develop Bayesian optimal design strategies to parallelise the expensive simulations.We also address the problem of quantifying the uncertainty of the ABC posterior due to the limited budget of simulations.",We propose principled batch Bayesian experimental design strategies and a method for uncertainty quantification of the posterior summaries in a Gaussian process surrogate-based approximate Bayesian computation framework.
562,Efficient Dictionary Learning with Gradient Descent,"Randomly initialized first-order optimization algorithms are the method of choice for solving many high-dimensional nonconvex problems in machine learning, yet general theoretical guarantees cannot rule out convergence to critical points of poor objective value.For some highly structured nonconvex problems however, the success of gradient descent can be understood by studying the geometry of the objective.We study one such problem -- complete orthogonal dictionary learning, and provide convergence guarantees for randomly initialized gradient descent to the neighborhood of a global optimum.The resulting rates scale as low order polynomials in the dimension even though the objective possesses an exponential number of saddle points.This efficient convergence can be viewed as a consequence of negative curvature normal to the stable manifolds associated with saddle points, and we provide evidence that this feature is shared by other nonconvex problems of importance as well.",We provide an efficient convergence rate for gradient descent on the complete orthogonal dictionary learning objective based on a geometric analysis. 563,Communication Algorithms via Deep Learning,"Coding theory is a central discipline underpinning wireline and wireless modems that are the workhorses of the information age.Progress in coding theory is largely driven by individual human ingenuity with sporadic breakthroughs over the past century.In this paper we study whether it is possible to automate the discovery of decoding algorithms via deep learning.We study a family of sequential codes parametrized by recurrent neural network architectures.We show that creatively designed and trained RNN architectures can decode well-known sequential codes such as the convolutional and turbo codes with close to optimal performance on the additive white Gaussian noise channel, which itself is achieved by breakthrough algorithms of our times.We show strong generalization, i.e., we train at a specific signal-to-noise ratio and block length but test at a wide range of these quantities, as well as robustness and adaptivity to deviations from the AWGN setting.",We show that creatively designed and trained RNN architectures can decode well-known sequential codes and achieve close-to-optimal performance.
564,AdaShift: Decorrelation and Convergence of Adaptive Learning Rate Methods,"Adam has been shown to fail to converge to the optimal solution in certain cases.Researchers have recently proposed several algorithms to avoid the issue of non-convergence of Adam, but their efficiency turns out to be unsatisfactory in practice.In this paper, we provide a new insight into the non-convergence issue of Adam as well as other adaptive learning rate methods.We argue that there exists an inappropriate correlation between the gradient g_t and the second-moment term v_t in Adam, which results in a large gradient being likely to have a small step size while a small gradient may have a large step size.We demonstrate that such unbalanced step sizes are the fundamental cause of the non-convergence of Adam, and we further prove that decorrelating v_t and g_t will lead to an unbiased step size for each gradient, thus solving the non-convergence problem of Adam.Finally, we propose AdaShift, a novel adaptive learning rate method that decorrelates v_t and g_t by temporal shifting, i.e., using a temporally shifted gradient to calculate v_t.The experiment results demonstrate that AdaShift is able to address the non-convergence issue of Adam, while still maintaining a competitive performance with Adam in terms of both training speed and generalization.",We analyze and solve the non-convergence issue of Adam. 565,Generative Multi Source Domain Adaptation,"Most domain adaptation methods consider the problem of transferring knowledge to the target domain from a single source dataset.However, in practical applications, we typically have access to multiple sources.In this paper we propose the first approach for Multi-Source Domain Adaptation based on Generative Adversarial Networks.Our method is inspired by the observation that the appearance of a given image depends on three factors: the domain, the style and the content.For this reason we propose to project the image features onto a space where only the dependence on the content is kept, and then re-project this invariant representation onto the pixel space using the target domain and style.In this way, new labeled images can be generated which are used to train a final target classifier.We test our approach using common MSDA benchmarks, showing that it outperforms state-of-the-art methods.","In this paper we propose a generative method for multi-source domain adaptation based on the decomposition of content, style and domain factors."
566,Implicit Maximum Likelihood Estimation,"Implicit probabilistic models are models defined naturally in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly.We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions.Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite.We also demonstrate encouraging experimental results.",We develop a new likelihood-free parameter estimation method that is equivalent to maximum likelihood under some conditions 567,Maximum Likelihood Constraint Inference for Inverse Reinforcement Learning,"While most approaches to the problem of Inverse Reinforcement Learning focus on estimating a reward function that best explains an expert agent’s policy or demonstrated behavior on a control task, it is often the case that such behavior is more succinctly represented by a simple reward combined with a set of hard constraints.In this setting, the agent is attempting to maximize cumulative rewards subject to these given constraints on their behavior.We reformulate the problem of IRL on Markov Decision Processes such that, given a nominal model of the environment and a nominal reward function, we seek to estimate state, action, and feature constraints in the environment that motivate an agent’s behavior.Our approach is based on the Maximum Entropy IRL framework, which allows us to reason about the likelihood of an expert agent’s demonstrations given our knowledge of an MDP.Using our method, we can infer which constraints can be added to the MDP to most increase the likelihood of observing these demonstrations.We present an algorithm which iteratively infers the Maximum Likelihood Constraint to best explain observed behavior, and we evaluate its efficacy using both simulated behavior and recorded data of humans navigating around an obstacle.","Our method infers constraints on task execution by leveraging the principle of maximum entropy to quantify how demonstrations differ from expected, unconstrained behavior."
568,The Ingredients of Real World Robotic Reinforcement Learning,"The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning.In this work, we discuss the required elements of a robotic system that can continually and autonomously improve with data collected in the real world, and propose a particular instantiation of such a system.Subsequently, we investigate a number of challenges of learning without instrumentation -- including the lack of episodic resets, state estimation, and hand-engineered rewards -- and propose simple, scalable solutions to these challenges.We demonstrate the efficacy of our proposed system on dexterous robotic manipulation tasks in simulation and the real world, and also provide an insightful analysis and ablation study of the challenges associated with this learning paradigm.",A system to learn robotic tasks in the real world with reinforcement learning and without instrumentation 569,Investigation of using disentangled and interpretable representations with language conditioning for cross-lingual voice conversion,"We study the problem of cross-lingual voice conversion in non-parallel speech corpora and a one-shot learning setting.Most prior work requires either parallel speech corpora or a sufficient amount of training data from a target speaker.However, we convert arbitrary sentences of an arbitrary source speaker to the target speaker's voice given only one target speaker training utterance.To achieve this, we formulate the problem as learning disentangled speaker-specific and context-specific representations and follow the idea of [1], which uses a Factorized Hierarchical Variational Autoencoder.After training FHVAE on multi-speaker training data, given arbitrary source and target speakers' utterances, we estimate those latent representations and then reconstruct the desired utterance, converting the voice to that of the target speaker.We use a multi-language speech corpus to learn a universal model that works for all of the languages.We investigate the use of a one-hot language embedding to condition the model on the language of the utterance being queried and show the effectiveness of the approach.We conduct voice conversion experiments with varying sizes of training utterances, and the model was able to achieve reasonable performance with even just one training utterance.We also investigate the effect of using or not using the language conditioning.Furthermore, we visualize the embeddings of the different languages and sexes.Finally, in the subjective tests, for one-language and cross-lingual voice conversion, our approach achieved moderately better or comparable results compared to the baseline in speech quality and similarity.","We use a Variational Autoencoder to separate style and content, and achieve voice conversion by modifying the style embedding and decoding. We investigate using a multi-language speech corpus and its effects."
570,Learn to Pay Attention,"We propose an end-to-end-trainable attention module for convolutional neural network architectures built for image classification.The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map.Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parametrised by the score matrices, must alone be used for classification.Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values.Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter.Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets.When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset.We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.",The paper proposes a method for forcing CNNs to leverage spatial attention in learning more object-centric representations that perform better in various respects. 571,Boundary Seeking GANs,"Generative adversarial networks are a learning framework that rely on training a discriminator to estimate a measure of difference between a target and generated distributions.GANs, as normally formulated, rely on the generated samples being completely differentiable w.r.t.the generative parameters, and thus do not work for discrete data.We introduce a method for training GANs with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples, thus providing a policy gradient for training the generator.The importance weights have a strong connection to the decision boundary of the discriminator, and we call our method boundary-seeking GANs.We demonstrate the effectiveness of the proposed algorithm with discrete image and character-based natural language generation. 
In addition, the boundary-seeking objective extends to continuous data, which can be used to improve stability of training, and we demonstrate this on CelebA, Large-scale Scene Understanding bedrooms, and ImageNet without conditioning.",We address training GANs with discrete data by formulating a policy gradient that generalizes across f-divergences 572,Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines,"Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates.The high variance problem is particularly exacerbated in problems with long horizons or high-dimensional action spaces.To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP.We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis as well as numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline.The result is a computationally efficient policy gradient algorithm, which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target matching task.Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and high-dimensional hand manipulation and synthetic tasks.Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks.",Action-dependent baselines can be bias-free and yield greater variance reduction than state-only dependent baselines for policy gradient methods. 573,Active Multitask Learning with Committees,"The cost of annotating training data has traditionally been a bottleneck for supervised learning approaches.The problem is further exacerbated when supervised learning is applied to a number of correlated tasks simultaneously since the amount of labels required scales with the number of tasks.To mitigate this concern, we propose an active multitask learning algorithm that achieves knowledge transfer between tasks.The approach forms a so-called committee for each task that jointly makes decisions and directly shares data across similar tasks.Our approach reduces the number of queries needed during training while maintaining high accuracy on test data.Empirical results on benchmark datasets show significant improvements on both accuracy and number of query requests.",We propose an active multitask learning algorithm that achieves knowledge transfer between tasks.
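Entry 573's acquisition strategy follows the classical query-by-committee pattern. The sketch below shows a generic single-task version: a bootstrap committee of scikit-learn classifiers and a vote-entropy disagreement score used to pick the next label query. The paper's cross-task committee structure and data sharing are omitted, and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_committee(X, y, n_members=5, seed=0):
    """Bootstrap a small committee of classifiers for one task.
    Assumes each bootstrap resample still contains every class label."""
    rng = np.random.default_rng(seed)
    committee = []
    for _ in range(n_members):
        idx = rng.choice(len(X), size=len(X), replace=True)
        committee.append(LogisticRegression(max_iter=200).fit(X[idx], y[idx]))
    return committee

def vote_entropy(committee, X_pool):
    """Disagreement score per unlabeled point: entropy of the committee's votes."""
    votes = np.stack([m.predict(X_pool) for m in committee])   # (members, pool)
    scores = []
    for col in votes.T:
        _, counts = np.unique(col, return_counts=True)
        p = counts / counts.sum()
        scores.append(-(p * np.log(p)).sum())
    return np.array(scores)

def next_query(committee, X_pool):
    """Index of the pool example the committee disagrees on most."""
    return int(np.argmax(vote_entropy(committee, X_pool)))
```

In the multitask setting of the paper, each task keeps such a committee and related tasks exchange labeled examples, which is where the reduction in total query count comes from.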
574,Quantifying the Cost of Reliable Photo Authentication via High-Performance Learned Lossy Representations,"Detection of photo manipulation relies on subtle statistical traces, notoriously removed by aggressive lossy compression employed online.We demonstrate that end-to-end modeling of complex photo dissemination channels allows for codec optimization with explicit provenance objectives.We design a lightweight trainable lossy image codec that delivers competitive rate-distortion performance, on par with the best hand-engineered alternatives, but with a lower computational footprint on modern GPU-enabled platforms.Our results show that significant improvements in manipulation detection accuracy are possible at fractional costs in bandwidth/storage.Our codec improved the accuracy from 37% to 86% even at very low bit-rates, well below the practicality of JPEG.",We learn an efficient lossy image codec that can be optimized to facilitate reliable photo manipulation detection at fractional cost in payload/quality and even at low bitrates. 575,R-TRANSFORMER: RECURRENT NEURAL NETWORK ENHANCED TRANSFORMER,"Recurrent Neural Networks have long been the dominant choice for sequence modeling.However, they suffer from two severe issues: they are poor at capturing very long-term dependencies and unable to parallelize their sequential computation.Therefore, many non-recurrent sequence models that are built on convolution and attention operations have been proposed recently.Notably, models with multi-head attention such as Transformer have demonstrated extreme effectiveness in capturing long-term dependencies in a variety of sequence modeling tasks.Despite their success, however, these models lack necessary components to model local structures in sequences and heavily rely on position embeddings that have limited effects and require a considerable amount of design effort.In this paper, we propose the R-Transformer which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoiding their respective drawbacks.The proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings.We evaluate R-Transformer through extensive experiments with data from a wide range of domains and the empirical results show that R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks.",This paper proposes an effective generic sequence model which leverages the strengths of both RNNs and Multi-head attention.
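Entry 575 combines a short-window local RNN with multi-head self-attention. The PyTorch sketch below shows one such layer under assumed sizes (window 5, model width 64); it is a paraphrase of the idea rather than the authors' code, and it omits the causal mask and position-wise feed-forward sub-layer a full sequence model would need.

```python
import torch
import torch.nn as nn

class LocalRNNAttention(nn.Module):
    """One R-Transformer-style layer: a short-window GRU captures local structure
    (removing the need for position embeddings), then multi-head self-attention
    captures global dependencies. Sizes are illustrative assumptions."""

    def __init__(self, d_model=64, window=5, n_heads=4):
        super().__init__()
        self.window = window
        self.local_rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                        # x: (batch, seq_len, d_model)
        b, t, d = x.shape
        # Left-pad so every position sees exactly `window` tokens ending at itself.
        padded = torch.cat([x.new_zeros(b, self.window - 1, d), x], dim=1)
        # (batch, seq_len, window, d): the local window ending at each position.
        windows = padded.unfold(1, self.window, 1).permute(0, 1, 3, 2)
        h, _ = self.local_rnn(windows.reshape(b * t, self.window, d))
        local = self.norm1(x + h[:, -1, :].reshape(b, t, d))   # last hidden state per window
        attended, _ = self.attn(local, local, local)            # global, unmasked attention
        return self.norm2(local + attended)

layer = LocalRNNAttention()
out = layer(torch.randn(2, 20, 64))
print(out.shape)  # torch.Size([2, 20, 64])
```

Because the RNN only ever runs over fixed-length windows, the per-position computations stay parallelizable across the sequence, which is the trade-off the abstract is pointing at.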
576,Posterior Control of Blackbox Generation ,"Many tasks in natural language processing and related domains require high precision output that obeys dataset-specific constraints.This level of fine-grained control can be difficult to obtain in large-scale neural network models.In this work, we propose a structured latent-variable approach that adds discrete control states within a standard autoregressive neural paradigm.Under this formulation, we can include a range of rich, posterior constraints to enforce task-specific knowledge that is effectively trained into the neural model.This approach allows us to provide arbitrary grounding of internal model decisions, without sacrificing any representational power of neural models.Experiments consider applications of this approach for text generation and part-of-speech induction.For natural language generation, we find that this method improves over standard benchmarks, while also providing fine-grained control.","A structured latent-variable approach that adds discrete control states within a standard autoregressive neural paradigm to provide arbitrary grounding of internal model decisions, without sacrificing any representational power of neural models." 577,Classifier-to-Generator Attack: Estimation of Training Data Distribution from Classifier,"Suppose a deep classification model is trained with samples that need to be kept private for privacy or confidentiality reasons.In this setting, can an adversary obtain the private samples if the classification model is given to the adversary?We call this reverse engineering against the classification model the Classifier-to-Generator Attack.This situation arises when the classification model is embedded into mobile devices for offline prediction.For the C2G attack, we introduce a novel GAN, PreImageGAN.In PreImageGAN, the generator is designed to estimate the sample distribution conditioned by the preimage of the classification model f, i.e., Pr(X | f(X) = y), where X is the random variable on the sample space and y is the probability vector representing the target label arbitrarily specified by the adversary.In experiments, we demonstrate PreImageGAN works successfully with hand-written character recognition and face recognition.In character recognition, we show that, given a recognition model of hand-written digits, PreImageGAN allows the adversary to extract alphabet letter images without knowing that the model is built for alphabet letter images.In face recognition, we show that, when an adversary obtains a face recognition model for a set of individuals, PreImageGAN allows the adversary to extract face images of specific individuals contained in the set, even when the adversary has no knowledge of the face of the individuals.",Estimation of training data distribution from trained classifier using GAN. 578,Sample Complexity Lower Bounds for Compressive Sensing with Generative Models,"The goal of standard compressive sensing is to estimate an unknown vector from linear measurements under the assumption of sparsity in some basis.Recently, it has been shown that significantly fewer measurements may be required if the sparsity assumption is replaced by the assumption that the unknown vector lies near the range of a suitably-chosen generative model. In particular, in (Bora et al., 2017) it was shown that roughly O(k log L) random Gaussian measurements suffice for accurate recovery when the k-input generative model is bounded and L-Lipschitz, and that O(kd log w) measurements suffice for k-input ReLU networks with depth d and width w. 
In this paper, we establish corresponding algorithm-independent lower bounds on the sample complexity using tools from minimax statistical analysis. In accordance with the above upper bounds, our results are summarized as follows: We construct an L-Lipschitz generative model capable of generating group-sparse signals, and show that the resulting necessary number of measurements is Ω(k log L); Using similar ideas, we construct two-layer ReLU networks of high width requiring Ω(k log w) measurements, as well as lower-width deep ReLU networks requiring Ω(kd) measurements. As a result, we establish that the scaling laws derived in (Bora et al., 2017) are optimal or near-optimal in the absence of further assumptions.","We establish that the scaling laws derived in (Bora et al., 2017) are optimal or near-optimal in the absence of further assumptions." 579,Corpus Based Amharic Sentiment Lexicon Generation,"Sentiment classification is an active research area with several applications including analysis of political opinions, classifying comments, movie reviews, news reviews and product reviews.To employ rule-based sentiment classification, we require sentiment lexicons.However, manual construction of a sentiment lexicon is time-consuming and costly for resource-limited languages.To bypass manual development time and costs, we try to build Amharic sentiment lexicons relying on a corpus-based approach.The intention of this approach is to handle sentiment terms specific to the Amharic language from an Amharic corpus.A small set of seed terms is manually prepared from three parts of speech: noun, adjective and verb.We developed algorithms for constructing Amharic sentiment lexicons automatically from an Amharic news corpus.The corpus-based approach relies on word co-occurrence distributional embeddings, including frequency-based embeddings.First we build a word-context unigram frequency count matrix and transform it into a point-wise mutual information matrix.Using this matrix, we compute the cosine distance between the mean vector of the seed list and each word in the corpus vocabulary.Based on a threshold value, the top closest words to the mean vector of the seed list are added to the lexicon.Then the mean vector of the new sentiment seed list is updated and the process is repeated until we get sufficient terms in the lexicon.Using PPMI with threshold values of 100 and 200, we obtained corpus-based Amharic sentiment lexicons of size 1811 and 3794 respectively by expanding 519 seeds.Finally, the lexicons generated by the corpus-based approach are evaluated.",A corpus-based algorithm is developed to generate an Amharic sentiment lexicon from a corpus 580,Optimistic Exploration even with a Pessimistic Initialisation,"Optimistic initialisation is an effective strategy for efficient exploration in reinforcement learning.In the tabular case, all provably efficient model-free algorithms rely on it.However, model-free deep RL algorithms do not use optimistic initialisation despite taking inspiration from these provably efficient tabular algorithms.In particular, in scenarios with only positive rewards, Q-values are initialised at their lowest possible values due to commonly used network initialisation schemes, a pessimistic initialisation.Merely initialising the network to output optimistic Q-values is not enough, since we cannot ensure that they remain optimistic for novel state-action pairs, which is crucial for exploration.We propose a simple count-based augmentation to pessimistically initialised Q-values that separates the source of optimism from the neural network.We show that this 
scheme is provably efficient in the tabular setting and extend it to the deep RL setting.Our algorithm, Optimistic Pessimistically Initialised Q-Learning, augments the Q-value estimates of a DQN-based agent with count-derived bonuses to ensure optimism during both action selection and bootstrapping.We show that OPIQ outperforms non-optimistic DQN variants that utilise a pseudocount-based intrinsic motivation in hard exploration tasks, and that it predicts optimistic estimates for novel state-action pairs.","We augment the Q-value estimates with a count-based bonus that ensures optimism during action selection and bootstrapping, even if the Q-value estimates are pessimistic." 581,Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor,"Model-free deep reinforcement learning algorithms have been demonstrated on a range of challenging decision making and control tasks.However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning.Both of these challenges severely limit the applicability of such methods to complex, real-world domains.In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework.In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible.Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods.By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods.Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.","We propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework." 582,Understanding the Relation Between Maximum-Entropy Inverse Reinforcement Learning and Behaviour Cloning,"In many settings, it is desirable to learn decision-making and control policies through learning or from expert demonstrations.The most common approaches under this framework are Behaviour Cloning and Inverse Reinforcement Learning.Recent methods for IRL have demonstrated the capacity to learn effective policies with access to a very limited set of demonstrations, a scenario in which BC methods often fail.Unfortunately, directly comparing the algorithms for these methods does not provide adequate intuition for understanding this difference in performance.This is the motivating factor for our work.We begin by presenting f-MAX, a generalization of AIRL, a state-of-the-art IRL method.f-MAX provides grounds for more directly comparing the objectives for LfD.We demonstrate that f-MAX, and by inheritance AIRL, is a subset of the cost-regularized IRL framework laid out by Ho & Ermon.We conclude by empirically evaluating the factors of difference between various LfD objectives in the continuous control domain.",Distribution matching through divergence minimization provides a common ground for comparing adversarial Maximum-Entropy Inverse Reinforcement Learning methods to Behaviour Cloning.
583,Harnessing Structures for Value-Based Planning and Reinforcement Learning,"Value-based methods constitute a fundamental methodology in planning and deep reinforcement learning.In this paper, we propose to exploit the underlying structures of the state-action value function, i.e., Q function, for both planning and deep RL.In particular, if the underlying system dynamics lead to some global structures of the Q function, one should be capable of inferring the function better by leveraging such structures.Specifically, we investigate the low-rank structure, which widely exists for big data matrices.We verify empirically the existence of low-rank Q functions in the context of control and deep RL tasks.As our key contribution, by leveraging Matrix Estimation techniques, we propose a general framework to exploit the underlying low-rank structure in Q functions, leading to a more efficient planning procedure for classical control, and additionally, a simple scheme that can be applied to any value-based RL techniques to consistently achieve better performance on low-rank tasks.Extensive experiments on control tasks and Atari games confirm the efficacy of our approach.",We propose a generic framework that allows for exploiting the low-rank structure in both planning and deep reinforcement learning. 584,Evaluating Semantic Representations of Source Code,"Learned representations of source code enable various software developer tools, e.g., to detect bugs or to predict program properties.At the core of code representations often are word embeddings of identifier names in source code, because identifiers account for the majority of source code vocabulary and convey important semantic information.Unfortunately, there currently is no generally accepted way of evaluating the quality of word embeddings of identifiers, and current evaluations are biased toward specific downstream tasks.This paper presents IdBench, the first benchmark for evaluating to what extent word embeddings of identifiers represent semantic relatedness and similarity.The benchmark is based on thousands of ratings gathered by surveying 500 software developers.We use IdBench to evaluate state-of-the-art embedding techniques proposed for natural language, an embedding technique specifically designed for source code, and lexical string distance functions, as these are often used in current developer tools.Our results show that the effectiveness of embeddings varies significantly across different embedding techniques and that the best available embeddings successfully represent semantic relatedness.On the downside, no existing embedding provides a satisfactory representation of semantic similarities, e.g., because embeddings consider identifiers with opposing meanings as similar, which may lead to fatal mistakes in downstream developer tools.IdBench provides a gold standard to guide the development of novel embeddings that address the current limitations.",A benchmark to evaluate neural embeddings of identifiers in source code.
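Benchmarks like entry 584 are typically scored by correlating model similarities with human ratings. The sketch below shows that standard protocol: Spearman correlation between cosine similarities of identifier embeddings and human relatedness ratings. The identifier pairs, the ratings, and the character-count embedder are toy stand-ins, and IdBench's exact scoring procedure may differ.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def score_embedding(pairs, human_ratings, embed):
    """Spearman correlation between human relatedness ratings of identifier pairs and
    the cosine similarity assigned by an embedding function embed(name) -> vector."""
    sims = [cosine(embed(a), embed(b)) for a, b in pairs]
    return spearmanr(sims, human_ratings).correlation

# Hypothetical usage with a toy character-count embedder (not IdBench data):
pairs = [("item_count", "num_items"), ("file_path", "fname"), ("i", "total_sum")]
ratings = [0.9, 0.7, 0.1]   # illustrative relatedness judgments
embed = lambda s: np.array([s.count(c) for c in "abcdefghijklmnopqrstuvwxyz_"], dtype=float)
print(score_embedding(pairs, ratings, embed))
```

The same scoring loop is run once with relatedness ratings and once with similarity ratings, which is how the abstract can report that current embeddings capture the former much better than the latter.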
585,Improving MMD-GAN Training with Repulsive Loss Function,"Generative adversarial nets are widely used to learn the data sampling process and their performance may heavily depend on the loss functions, given a limited computational budget.This study revisits MMD-GAN that uses the maximum mean discrepancy as the loss function for GAN and makes two contributions.First, we argue that the existing MMD loss function may discourage the learning of fine details in data as it attempts to contract the discriminator outputs of real data.To address this issue, we propose a repulsive loss function to actively learn the difference among the real data by simply rearranging the terms in MMD.Second, inspired by the hinge loss, we propose a bounded Gaussian kernel to stabilize the training of MMD-GAN with the repulsive loss function.The proposed methods are applied to the unsupervised image generation tasks on CIFAR-10, STL-10, CelebA, and LSUN bedroom datasets.Results show that the repulsive loss function significantly improves over the MMD loss at no additional computational cost and outperforms other representative loss functions.The proposed methods achieve an FID score of 16.21 on the CIFAR-10 dataset using a single DCGAN network and spectral normalization.",Rearranging the terms in maximum mean discrepancy yields a much better loss function for the discriminator of generative adversarial nets 586,Succinct Source Coding of Deep Neural Networks,"Deep neural networks have shown incredible performance for inference tasks in a variety of domains.Unfortunately, most current deep networks are enormous cloud-based structures that require significant storage space, which limits scaling of deep learning as a service and use for on-device augmented intelligence. This paper finds algorithms that directly use lossless compressed representations of deep feedforward networks, to perform inference without full decompression.The basic insight that allows less rate than naive approaches is the recognition that the bipartite graph layers of feedforward networks have a kind of permutation invariance to the labeling of nodes, in terms of inferential operation and that the inference operation depends locally on the edges directly connected to it.We also provide experimental results of our approach on the MNIST dataset.","This paper finds algorithms that directly use lossless compressed representations of deep feedforward networks, to perform inference without full decompression." 
587,A Variational Inequality Perspective on Generative Adversarial Networks,"Generative adversarial networks form a generative modeling approach known for producing appealing samples, but they are notably difficult to train.One common way to tackle this issue has been to propose new formulations of the GAN objective.Yet, surprisingly few studies have looked at optimization methods designed for this adversarial training.In this work, we cast GAN optimization problems in the general variational inequality framework.Tapping into the mathematical programming literature, we counter some common misconceptions about the difficulties of saddle point optimization and propose to extend methods designed for variational inequalities to the training of GANs.We apply averaging, extrapolation and a computationally cheaper variant that we call extrapolation from the past to the stochastic gradient method and Adam.",We cast GANs in the variational inequality framework and import techniques from this literature to optimize GANs better; we give algorithmic extensions and empirically test their performance for training GANs. 588,Automated Relational Meta-learning,"In order to efficiently learn with a small amount of data on new tasks, meta-learning transfers knowledge learned from previous tasks to the new ones.However, a critical challenge in meta-learning is the task heterogeneity which cannot be well handled by traditional globally shared meta-learning methods.In addition, current task-specific meta-learning methods may either suffer from hand-crafted structure design or lack the capability to capture complex relations between tasks.In this paper, motivated by the way of knowledge organization in knowledge bases, we propose an automated relational meta-learning framework that automatically extracts the cross-task relations and constructs the meta-knowledge graph.When a new task arrives, it can quickly find the most relevant structure and tailor the learned structure knowledge to the meta-learner.As a result, the proposed framework not only addresses the challenge of task heterogeneity by a learned meta-knowledge graph, but also increases the model interpretability.We conduct extensive experiments on 2D toy regression and few-shot image classification and the results demonstrate the superiority of ARML over state-of-the-art baselines.",Addressing task heterogeneity problem in meta-learning by introducing meta-knowledge graph 589,Deep Boosting of Diverse Experts,"In this paper, a deep boosting algorithm is developed to learn a more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs with diverse capabilities, e.g., these base deep CNNs are sequentially trained to recognize a set of object classes in an easy-to-hard way according to their learning complexities.Our experimental results have demonstrated that our deep boosting algorithm can significantly improve the accuracy rates on large-scale visual recognition.", A deep boosting algorithm is developed to learn a more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs.
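Entry 587 above argues for importing variational-inequality methods such as averaging and extrapolation into GAN training. The toy numpy sketch below shows the extrapolation (extragradient) step on the classic bilinear saddle point min_x max_y xy, where simultaneous gradient descent-ascent cycles or diverges while extrapolation converges; the step size and the toy problem are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def grads(x, y):
    # Gradient field of f(x, y) = x * y for min over x, max over y
    # (the ascent direction in y is negated so both updates are descents).
    return y, -x

eta, steps = 0.1, 200
x_sim, y_sim = 1.0, 1.0   # simultaneous gradient descent-ascent
x_ext, y_ext = 1.0, 1.0   # extrapolation (extragradient)

for _ in range(steps):
    gx, gy = grads(x_sim, y_sim)
    x_sim, y_sim = x_sim - eta * gx, y_sim - eta * gy

    # Extrapolation: take a lookahead step, then update the current iterate
    # using the gradient evaluated at the lookahead point.
    gx, gy = grads(x_ext, y_ext)
    x_la, y_la = x_ext - eta * gx, y_ext - eta * gy
    gx_la, gy_la = grads(x_la, y_la)
    x_ext, y_ext = x_ext - eta * gx_la, y_ext - eta * gy_la

print("simultaneous GDA distance to (0,0):", np.hypot(x_sim, y_sim))  # grows
print("extragradient distance to (0,0):   ", np.hypot(x_ext, y_ext))  # shrinks
```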
590,A Universal Music Translation Network,"We present a method for translating music across musical instruments and styles.This method is based on unsupervised training of a multi-domain wavenet autoencoder, with a shared encoder and a domain-independent latent space that is trained end-to-end on waveforms.Employing a diverse training dataset and large net capacity, the single encoder allows us to translate also from musical domains that were not seen during training.We evaluate our method on a dataset collected from professional musicians, and achieve convincing translations.We also study the properties of the obtained translation and demonstrate translating even from a whistle, potentially enabling the creation of instrumental music by untrained humans.",An automatic method for converting music between instruments and styles 591,Testing Robustness Against Unforeseen Adversaries,"Most existing defenses against adversarial attacks only consider robustness to L_p-bounded distortions.In reality, the specific attack is rarely known in advance and adversaries are free to modify images in ways which lie outside any fixed distortion model; for example, adversarial rotations lie outside the set of L_p-bounded distortions.In this work, we advocate measuring robustness against a much broader range of unforeseen attacks, attacks whose precise form is unknown during defense design.We propose several new attacks and a methodology for evaluating a defense against a diverse range of unforeseen distortions.First, we construct novel adversarial JPEG, Fog, Gabor, and Snow distortions to simulate more diverse adversaries.We then introduce UAR, a summary metric that measures the robustness of a defense against a given distortion. Using UAR to assess robustness against existing and novel attacks, we perform an extensive study of adversarial robustness.We find that evaluation against existing L_p attacks yields redundant information which does not generalize to other attacks; we instead recommend evaluating against our significantly more diverse set of attacks.We further find that adversarial training against either one or multiple distortions fails to confer robustness to attacks with other distortion types. These results underscore the need to evaluate and study robustness against unforeseen distortions.",We propose several new attacks and a methodology to measure robustness against unforeseen adversarial attacks. 
592,Deep-Net: Deep Neural Network for Cyber Security Use Cases,"Deep neural networks have emerged as a powerful approach in recent years, solving long-standing artificial intelligence supervised and unsupervised tasks in natural language processing, speech processing, computer vision and others.In this paper, we attempt to apply DNNs on three different cyber security use cases: Android malware classification, incident detection and fraud detection.The data set of each use case contains real, known benign and malicious activity samples.These use cases are part of Cybersecurity Data Mining Competition 2017.The efficient network architecture for DNNs is chosen by conducting various trials of experiments for network parameters and network structures.The experiments of such chosen efficient configurations of DNNs are run up to 1000 epochs with the learning rate set in the range [0.01-0.5].The DNNs performed well in comparison to classical machine learning algorithms in all cyber security use cases.This is due to the fact that DNNs implicitly extract and build better features, identifying the characteristics of the data that lead to better accuracy.The best accuracy obtained by DNNs and XGBoost is 0.940 and 0.741 on Android malware classification, 1.00 and 0.997 on incident detection, and 0.972 and 0.916 on fraud detection, respectively.The accuracy obtained by DNNs varies by -0.05%, +0.02% and -0.01% from the top-scoring systems on the CDMC 2017 tasks.",Deep-Net: Deep Neural Network for Cyber Security Use Cases 593,Discovering Motor Programs by Recomposing Demonstrations,"In this paper, we present an approach to learn recomposable motor primitives across large-scale and diverse manipulation demonstrations.Current approaches to decomposing demonstrations into primitives often assume manually defined primitives and bypass the difficulty of discovering these primitives.On the other hand, approaches in primitive discovery put restrictive assumptions on the complexity of a primitive, which limit applicability to narrow tasks.Our approach attempts to circumvent these challenges by jointly learning both the underlying motor primitives and recomposing these primitives to form the original demonstration.Through constraints on both the parsimony of primitive decomposition and the simplicity of a given primitive, we are able to learn a diverse set of motor primitives, as well as a coherent latent representation for these primitives.We demonstrate, both qualitatively and quantitatively, that our learned primitives capture semantically meaningful aspects of a demonstration.This allows us to compose these primitives in a hierarchical reinforcement learning setup to efficiently solve robotic manipulation tasks like reaching and pushing.","We learn a space of motor primitives from unannotated robot demonstrations, and show these primitives are semantically meaningful and can be composed for new robot tasks."
594,Weak Supervision for Time Series: Wearable Sensor Classification with Limited Labeled Data,"Using modern deep learning models to make predictions on time series data from wearable sensors generally requires large amounts of labeled data.However, labeling these large datasets can be both cumbersome and costly.In this paper, we apply weak supervision to time series data, and programmatically label a dataset from sensors worn by patients with Parkinson's.We then built an LSTM model that predicts when these patients exhibit clinically relevant freezing behavior.We show that when our model is trained using patient-specific data, we come within 9% AUROC of a model trained using hand-labeled data, and when we assume no prior observations of subjects, our weakly supervised model matches the performance of one trained with hand-labeled data.These results demonstrate that weak supervision may help reduce the need to painstakingly hand label time series training data.",We demonstrate the feasibility of a weakly supervised time series classification approach for wearable sensor data. 595,Learning Semantic Correspondences from Noisy Data-text Pairs by Local-to-Global Alignments,"Learning semantic correspondence between the structured data and associated texts is a core problem for many downstream NLP applications, e.g., data-to-text generation.Recent neural generation methods require large-scale training data.However, the collected data-text pairs for training are usually only loosely corresponded, where texts contain additional or contradictory information compared to their paired input.In this paper, we propose a local-to-global alignment framework to learn semantic correspondences from loosely related data-text pairs.First, a local alignment model based on multi-instance learning is applied to build the semantic correspondences within a data-text pair.Then, a global alignment model built on top of a memory guided conditional random field layer is designed to exploit dependencies among alignments in the entire training corpus, where the memory is used to integrate the alignment clues provided by the local alignment model.Therefore, it is capable of inducing missing alignments for text spans that are not supported by its imperfect paired input.Experiments on a recent restaurant dataset show that our proposed method can improve the alignment accuracy and, as a by-product, our method is also applicable to inducing semantically equivalent training data-text pairs for neural generation models.",We propose a local-to-global alignment framework to learn semantic correspondences from noisy data-text pairs with weak supervision 596,Learning to Reach Goals Without Reinforcement Learning,"Imitation learning algorithms provide a simple and straightforward approach for training control policies via standard supervised learning methods.By maximizing the likelihood of good actions provided by an expert demonstrator, supervised imitation learning can produce effective policies without the algorithmic complexities and optimization challenges of reinforcement learning, at the cost of requiring an expert demonstrator -- typically a person -- to provide the demonstrations.In this paper, we ask: can we use imitation learning to train effective policies without any expert demonstrations?The key observation that makes this possible is that, in the multi-task setting, trajectories that are generated by a suboptimal policy can still serve as optimal examples for other tasks.In particular, in the setting where the tasks correspond to
different goals, every trajectory is a successful demonstration for the state that it actually reaches.Informed by this observation, we propose a very simple algorithm for learning behaviors without any demonstrations, user-provided reward functions, or complex reinforcement learning methods.Our method simply maximizes the likelihood of actions the agent actually took in its own previous rollouts, conditioned on the goal being the state that it actually reached.Although related variants of this approach have been proposed previously in imitation learning settings with example demonstrations, we present the first instance of this approach as a method for learning goal-reaching policies entirely from scratch.We present a theoretical result linking self-supervised imitation learning and reinforcement learning, and empirical results showing that it performs competitively with more complex reinforcement learning methods on a range of challenging goal reaching problems.",Learning how to reach goals from scratch by using imitation learning with data relabeling 597,Interactive Boosting of Neural Networks for Small-sample Image Classification,"Neural networks have recently shown excellent performance on numerous classification tasks.These networks often have a large number of parameters and thus require much data to train.When the number of training data points is small, however, a network with high flexibility will quickly overfit the training data, resulting in a large model variance and a poor generalization performance.To address this problem, we propose a new ensemble learning method called InterBoost for small-sample image classification.In the training phase, InterBoost first randomly generates two complementary datasets to train two base networks of the same structure, separately, and then the next two complementary datasets for further training the networks are generated through interaction between the two base networks trained previously.This interactive training process continues iteratively until a stop criterion is met.In the testing phase, the outputs of the two networks are combined to obtain one final score for classification.Detailed analysis of the method is provided for an in-depth understanding of its mechanism.","In the paper, we proposed an ensemble method called InterBoost for training neural networks for small-sample classification. The method has better generalization performance than other ensemble methods, and reduces variances significantly."
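The core mechanic in entry 596 (Learning to Reach Goals Without Reinforcement Learning) is relabeling: every trajectory is treated as an expert demonstration for the goal it actually reached, and a goal-conditioned policy is then fit by maximum likelihood. A minimal numpy sketch, assuming toy rollouts in a 2-D point environment and a linear least-squares policy fit as a stand-in for the likelihood maximization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rollouts from a random policy: each rollout is a list of
# (state, action) pairs in a 2-D point environment where next_state = state + action.
def collect_rollout(horizon=10):
    states, actions = [rng.normal(size=2)], []
    for _ in range(horizon):
        a = rng.normal(scale=0.3, size=2)
        actions.append(a)
        states.append(states[-1] + a)
    return np.array(states), np.array(actions)

# Hindsight relabeling: for every (state, action) in a rollout, set the "goal"
# to the final state the rollout actually reached, making it a valid demonstration.
X, Y = [], []
for _ in range(500):
    states, actions = collect_rollout()
    goal = states[-1]
    for s, a in zip(states[:-1], actions):
        X.append(np.concatenate([s, goal]))
        Y.append(a)
X, Y = np.array(X), np.array(Y)

# Stand-in for maximizing action likelihood: fit a linear goal-conditioned policy
# a = W^T [s, g] by least squares (the Gaussian-likelihood special case).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

s, g = np.zeros(2), np.array([1.0, -1.0])
print("predicted action toward goal:", np.concatenate([s, g]) @ W)
```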
598,Detecting Statistical Interactions from Neural Network Weights,"Interpreting neural networks is a crucial and challenging task in machine learning.In this paper, we develop a novel framework for detecting statistical interactions captured by a feedforward multilayer neural network by directly interpreting its learned weights.Depending on the desired interactions, our method can achieve significantly better or similar interaction detection performance compared to the state-of-the-art without searching an exponential solution space of possible interactions.We obtain this accuracy and efficiency by observing that interactions between input features are created by the non-additive effect of nonlinear activation functions, and that interacting paths are encoded in weight matrices.We demonstrate the performance of our method and the importance of discovered interactions via experimental results on both synthetic datasets and real-world application datasets.",We detect statistical interactions captured by a feedforward multilayer neural network by directly interpreting its learned weights. 599,Benchmarking the Neural Linear Model for Regression,"The neural linear model is a simple adaptive Bayesian linear regression method that has recently been used in a number of problems ranging from Bayesian optimization to reinforcement learning.Despite its apparent successes in these settings, to the best of our knowledge there has been no systematic exploration of its capabilities on simple regression tasks.In this work we characterize these capabilities on the UCI datasets, a popular benchmark for Bayesian regression models, as well as on the recently introduced gap datasets, which are better tests of out-of-distribution uncertainty.We demonstrate that the neural linear model is a simple method that shows competitive performance on these tasks.","We benchmark the neural linear model on the UCI and UCI ""gap"" datasets." 600,Provable Convergence and Global Optimality of Generative Adversarial Network,"Generative adversarial networks train implicit generative models through solving minimax problems.Such minimax problems are known as nonconvex-nonconcave, for which the dynamics of first-order methods are not well understood.In this paper, we consider GANs of the integral probability metric type with the generator represented by an overparametrized neural network.When the discriminator is solved to approximate optimality in each iteration, we prove that stochastic gradient descent on a regularized IPM objective converges globally to a stationary point with a sublinear rate.Moreover, we prove that when the width of the generator network is sufficiently large and the discriminator function class has enough discriminative ability, the obtained stationary point corresponds to a generator that yields a distribution that is close to the distribution of the observed data in terms of the total variation.To the best of our knowledge, we appear to be the first to establish both the global convergence and global optimality of training GANs when the generator is parametrized by a neural network.",We establish global convergence to optimality for IPM-based GANs where the generator is an overparametrized neural network.
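Entry 599 benchmarks the neural linear model: a neural network supplies fixed features and exact Bayesian linear regression is performed on the last layer. A small numpy sketch of that last-layer Bayesian regression, assuming a known noise variance and a Gaussian prior; random projection features stand in for the network's penultimate layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in features: in the neural linear model these would come from the
# penultimate layer of a trained network; here they are fixed random projections.
def features(x, proj):
    return np.tanh(x @ proj)

n, d_in, d_feat = 200, 1, 32
proj = rng.normal(size=(d_in, d_feat))
x = rng.uniform(-3, 3, size=(n, d_in))
y = np.sin(x).ravel() + 0.1 * rng.normal(size=n)

Phi = features(x, proj)
noise_var, prior_var = 0.1 ** 2, 1.0

# Posterior over last-layer weights with Gaussian prior N(0, prior_var * I).
A = Phi.T @ Phi / noise_var + np.eye(d_feat) / prior_var   # posterior precision
cov = np.linalg.inv(A)
mean = cov @ Phi.T @ y / noise_var

# Predictive mean and variance at a test point.
x_star = np.array([[0.5]])
phi_star = features(x_star, proj)
pred_mean = phi_star @ mean
pred_var = noise_var + phi_star @ cov @ phi_star.T
print("prediction:", pred_mean.item(), "+/-", np.sqrt(pred_var).item())
```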
601,Multi-scale Attributed Node Embedding,"We present network embedding algorithms that capture information about a node from the local distribution over node attributes around it, as observed over random walks following an approach similar to Skip-gram.Observations from neighborhoods of different sizes are either pooled or encoded distinctly in a multi-scale approach. Capturing attribute-neighborhood relationships over multiple scales is useful for a diverse range of applications, including latent feature identification across disconnected networks with similar attributes.We prove theoretically that matrices of node-feature pointwise mutual information are implicitly factorized by the embeddings.Experiments show that our algorithms are robust, computationally efficient and outperform comparable models on social, web and citation network datasets.",We develop efficient multi-scale approximate attributed network embedding procedures with provable properties. 602,A Closer Look at Few-shot Classification,"Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples.While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult.In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the gap across methods including the baseline, 2) a slightly modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms.Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones.In a realistic, cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.", A detailed empirical study in few-shot classification that reveals challenges in the standard evaluation setting and shows a new direction. 603,Bayesian Inference of Temporal Specifications to Explain How Plans Differ,"Temporal logics are useful for describing dynamic system behavior, and have been successfully used as a language for goal definitions during task planning.Prior works on inferring temporal logic specifications have focused on ""summarizing"" the input dataset -- i.e., finding specifications that are satisfied by all plan traces belonging to the given set.In this paper, we examine the problem of inferring specifications that describe temporal differences between two sets of plan traces.We formalize the concept of providing such contrastive explanations, then present a Bayesian probabilistic model for inferring contrastive explanations as linear temporal logic specifications.We demonstrate the efficacy, scalability, and robustness of our model for inferring correct specifications across various benchmark planning domains and for a simulated air combat mission.",We present a Bayesian inference model to infer contrastive explanations (as LTL specifications) describing how two sets of plan traces differ.
604,On the Decision Boundaries of Deep Neural Networks: A Tropical Geometry Perspective,"This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piece-wise linear non-linearity activations.We use tropical geometry, a new development in the area of algebraic geometry, to provide a characterization of the decision boundaries of a simple neural network of the form.Specifically, we show that the decision boundaries are a subset of a tropical hypersurface, which is intimately related to a polytope formed by the convex hull of two zonotopes.The generators of the zonotopes are precise functions of the neural network parameters.We utilize this geometric characterization to shed light and provide a new perspective on three tasks.In doing so, we propose a new tropical perspective for the lottery ticket hypothesis, where we see the effect of different initializations on the tropical geometric representation of the decision boundaries.Also, we leverage this characterization as a new set of tropical regularizers, which deal directly with the decision boundaries of a network.We investigate the use of these regularizers in neural network pruning and in generating adversarial input attacks.",Tropical geometry can be leveraged to represent the decision boundaries of neural networks and bring to light interesting insights. 605,Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems,"First-order methods such as stochastic gradient descent are currently the standard algorithm for training deep neural networks.Second-order methods, despite their better convergence rate, are rarely used in practice due to the prohibitive computational cost in calculating the second-order information.In this paper, we propose a novel Gram-Gauss-Newton algorithm to train deep neural networks for regression problems with square loss.Our method draws inspiration from the connection between neural network optimization and kernel regression with the neural tangent kernel.Different from typical second-order methods that have heavy computational cost in each iteration, GGN only has minor overhead compared to first-order methods such as SGD.We also give theoretical results to show that for sufficiently wide neural networks, the convergence rate of GGN is quadratic.Furthermore, we provide a convergence guarantee for the mini-batch GGN algorithm, which is, to our knowledge, the first convergence result for the mini-batch version of a second-order method on overparameterized neural networks.Preliminary experiments on regression tasks demonstrate that for training standard networks, our GGN algorithm converges much faster and achieves better performance than SGD.","A novel Gram-Gauss-Newton method to train neural networks, inspired by neural tangent kernel and Gauss-Newton method, with fast convergence speed both theoretically and experimentally."
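For entry 605 (Gram-Gauss-Newton), the key trick is to compute the Gauss-Newton step for square-loss regression through the n x n Gram matrix of the Jacobian, which is cheap for small batches, instead of the parameter-space curvature matrix. The numpy sketch below applies this to a tiny nonlinear model; the model, damping term, and iteration count are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta, x):
    # Tiny nonlinear regression model f(x) = theta0 * tanh(theta1 * x).
    return theta[0] * np.tanh(theta[1] * x)

def jacobian(theta, x):
    # Analytic Jacobian of the model outputs w.r.t. theta, shape (n, 2).
    t = np.tanh(theta[1] * x)
    return np.stack([t, theta[0] * x * (1.0 - t ** 2)], axis=1)

x = rng.uniform(-2, 2, size=64)
y = 1.5 * np.tanh(0.8 * x) + 0.05 * rng.normal(size=64)

theta, damping = np.array([0.5, 0.5]), 1e-3
for _ in range(20):
    r = model(theta, x) - y                       # residuals, shape (n,)
    J = jacobian(theta, x)                        # shape (n, p)
    G = J @ J.T + damping * np.eye(len(x))        # Gram (kernel) matrix, n x n
    # Damped Gauss-Newton step computed via the Gram matrix:
    # delta = J^T (J J^T + lambda I)^{-1} r, equivalent to the usual
    # (J^T J + lambda I)^{-1} J^T r but built from the n x n matrix.
    theta = theta - J.T @ np.linalg.solve(G, r)

print("estimated theta (target is roughly [1.5, 0.8]):", theta)
```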
606,Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments,"Recent pretrained sentence encoders achieve state of the art results on language understanding tasks, but does this mean they have implicit knowledge of syntactic structures?We introduce a grammatically annotated development set for the Corpus of Linguistic Acceptability, which we use to investigate the grammatical knowledge of three pretrained encoders, including the popular OpenAI Transformer and BERT.We fine-tune these encoders to do acceptability classification over CoLA and compare the models’ performance on the annotated analysis set.Some phenomena, e.g. modification by adjuncts, are easy to learn for all models, while others, e.g. long-distance movement, are learned effectively only by models with strong overall performance, and others still, e.g. morphological agreement, are hardly learned by any model.",We investigate the implicit syntactic knowledge of sentence embeddings using a new analysis set of grammatically annotated sentences with acceptability judgments. 607,On the effect of the activation function on the distribution of hidden nodes in a deep network,"We analyze the joint probability distribution on the lengths of the vectors of hidden variables in different layers of a fully connected deep network, when the weights and biases are chosen randomly according to Gaussian distributions, and the input is binary-valued. We show that, if the activation function satisfies a minimal set of assumptions, satisfied by all activation functions that we know are used in practice, then, as the width of the network gets large, the length process converges in probability to a length map that is determined as a simple function of the variances of the random weights and biases, and the activation function.We also show that this convergence may fail for activation functions that violate our assumptions.","We prove that, for activation functions satisfying some conditions, as a deep network gets wide, the lengths of the vectors of hidden variables converge to a length map." 608,Task Level Data Augmentation for Meta-Learning,"Data augmentation is one of the most effective approaches for improving the accuracy of modern machine learning models, and it is also indispensable to train a deep model for meta-learning.However, most current data augmentation implementations applied in meta-learning are the same as those used in conventional image classification.In this paper, we introduce a new data augmentation method for meta-learning, which is named Task Level Data Augmentation.The basic idea of Task Aug is to increase the number of image classes rather than the number of images in each class.In turn, with a larger number of classes, we can sample more diverse task instances during training.This allows us to train a deep network by meta-learning methods with little over-fitting.Experimental results show that our approach achieves state-of-the-art performance on miniImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks.Once the paper is accepted, we will provide the link to code.",We propose a data augmentation approach for meta-learning and prove that it is valid.
609,Generalized Bayesian Posterior Expectation Distillation for Deep Neural Networks,"In this paper, we present a general framework for distilling expectations with respect to the Bayesian posterior distribution of a deep neural network, significantly extending prior work on a method known as Bayesian Dark Knowledge.Our generalized framework applies to the case of classification models and takes as input the architecture of a ""teacher"" network, a general posterior expectation of interest, and the architecture of a ""student"" network.The distillation method performs an online compression of the selected posterior expectation using iteratively generated Monte Carlo samples from the parameter posterior of the teacher model.We further consider the problem of optimizing the student model architecture with respect to an accuracy-speed-storage trade-off.We present experimental results investigating multiple data sets, distillation targets, teacher model architectures, and approaches to searching for student model architectures.We establish the key result that distilling into a student model with an architecture that matches the teacher, as is done in Bayesian Dark Knowledge, can lead to sub-optimal performance.Lastly, we show that student architecture search methods can identify student models with significantly improved performance.",A general framework for distilling Bayesian posterior expectations for deep neural networks. 610,Towards Hierarchical Discrete Variational Autoencoders,"Variational Autoencoders have proven to be powerful latent variable models.However, the form of the approximate posterior can limit the expressiveness of the model.Categorical distributions are flexible and useful building blocks for example in neural memory layers.We introduce the Hierarchical Discrete Variational Autoencoder: a hierarchy of variational memory layers.The Concrete/Gumbel-Softmax relaxation allows maximizing a surrogate of the Evidence Lower Bound by stochastic gradient ascent.We show that, when using a limited number of latent variables, HD-VAE outperforms the Gaussian baseline on modelling multiple binary image datasets.Training very deep HD-VAE remains a challenge due to the relaxation bias that is induced by the use of a surrogate objective.We introduce a formal definition and conduct a preliminary theoretical and empirical study of the bias.","In this paper, we introduce a discrete hierarchy of categorical latent variables that we train using the Concrete/Gumbel-Softmax relaxation and we derive an upper bound for the absolute difference between the unbiased and the biased objective."
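Entry 610 relies on the Concrete/Gumbel-Softmax relaxation to push gradients through categorical latent variables. A standalone numpy sketch of that relaxation follows; the logits and temperature are arbitrary, and a real VAE would use this inside the reparameterized sampling step of the encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_sample(logits, temperature=0.5):
    # Sample from the Concrete/Gumbel-Softmax relaxation of a categorical:
    # add Gumbel(0, 1) noise to the logits and apply a tempered softmax.
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + gumbel) / temperature
    z = z - z.max(axis=-1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.log(np.array([0.1, 0.2, 0.7]))       # a 3-way categorical
soft_sample = gumbel_softmax_sample(logits, temperature=0.5)
hard_sample = np.eye(len(logits))[soft_sample.argmax()]   # straight-through style discretization

print("relaxed sample:", soft_sample)            # near one-hot at low temperatures
print("hard sample:   ", hard_sample)
```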
611,PowerSGD: Powered Stochastic Gradient Descent Methods for Accelerated Non-Convex Optimization,"In this paper, we propose a novel technique for improving the stochastic gradient descent method to train deep networks, which we term PowerSGD.The proposed PowerSGD method simply raises the stochastic gradient to a certain power during iterations and introduces only one additional parameter, namely, the power exponent.We further propose PowerSGD with momentum, which we term PowerSGDM, and provide a convergence rate analysis for both PowerSGD and PowerSGDM methods.Experiments are conducted on popular deep learning models and benchmark datasets.Empirical results show that the proposed PowerSGD and PowerSGDM obtain faster initial training speed than adaptive gradient methods, comparable generalization ability with SGD, and improved robustness to hyper-parameter selection and vanishing gradients.PowerSGD is essentially a gradient modifier via a nonlinear transformation.As such, it is orthogonal and complementary to other techniques for accelerating gradient-based optimization.",We propose a new class of optimizers for accelerated non-convex optimization via a nonlinear gradient transformation. 612,Hierarchical Visuomotor Control of Humanoids,"We aim to build complex humanoid agents that integrate perception, motor control, and memory.In this work, we partly factor this problem into low-level motor control from proprioception and high-level coordination of the low-level skills informed by vision.We develop an architecture capable of surprisingly flexible, task-directed motor control of a relatively high-DoF humanoid body by combining pre-training of low-level motor controllers with a high-level, task-focused controller that switches among low-level sub-policies.The resulting system is able to control a physically-simulated humanoid body to solve tasks that require coupling visual perception from an unstabilized egocentric RGB camera during locomotion in the environment.Supplementary video link: https://youtu.be/fBoir7PNxPk","Solve tasks involving vision-guided humanoid locomotion, reusing locomotion behavior from motion capture data." 613,Decoupling Gating from Linearity,"The gap between the empirical success of deep learning and the lack of strong theoretical guarantees calls for studying simpler models.By observing that a ReLU neuron is a product of a linear function with a gate, where both share a jointly trained weight vector, we propose to decouple the two.We introduce GaLU networks — networks in which each neuron is a product of a Linear Unit, defined by a weight vector which is being trained, with a Gate, defined by a different weight vector which is not being trained.Generally speaking, given a base model and a simpler version of it, the two parameters that determine the quality of the simpler version are whether its practical performance is close enough to the base model and whether it is easier to analyze it theoretically.We show that GaLU networks perform similarly to ReLU networks on standard datasets and we initiate a study of their theoretical properties, demonstrating that they are indeed easier to analyze.We believe that further research of GaLU networks may be fruitful for the development of a theory of deep learning.",We propose Gated Linear Unit networks — a model that performs similarly to ReLU networks on real data while being much easier to analyze theoretically.
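For entry 611 (PowerSGD), the abstract describes raising the stochastic gradient to a power before the update, with one extra hyperparameter (the exponent), plus an optional momentum variant. A hedged numpy sketch of one natural element-wise reading follows; the sign-preserving element-wise form and the hyperparameter values are assumptions based only on the abstract, not the paper's exact update rule.

```python
import numpy as np

def powered_sgd_step(params, grad, velocity, lr=0.1, gamma=0.7, momentum=0.9):
    # Assumed transformation: raise |g| to the power gamma while keeping the sign,
    # then apply a standard heavy-ball momentum update on the transformed gradient.
    powered = np.sign(grad) * np.abs(grad) ** gamma
    velocity = momentum * velocity + powered
    return params - lr * velocity, velocity

# Toy quadratic objective f(w) = 0.5 * ||w||^2, whose gradient is w.
w = np.array([3.0, -2.0])
v = np.zeros_like(w)
for _ in range(100):
    # momentum=0.0 gives plain PowerSGD; momentum > 0 gives the PowerSGDM variant.
    w, v = powered_sgd_step(w, grad=w, velocity=v, momentum=0.0)
print("final iterate (should be close to 0):", w)
```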
614,NADS: Neural Architecture Distribution Search for Uncertainty Awareness,"Machine learning systems often encounter Out-of-Distribution errors when dealing with testing data coming from a different distribution from the one used for training.With their growing use in critical applications, it becomes important to develop systems that are able to accurately quantify its predictive uncertainty and screen out these anomalous inputs.However, unlike standard learning tasks, there is currently no well established guiding principle for designing architectures that can accurately quantify uncertainty.Moreover, commonly used OoD detection approaches are prone to errors and even sometimes assign higher likelihoods to OoD samples.To address these problems, we first seek to identify guiding principles for designing uncertainty-aware architectures, by proposing Neural Architecture Distribution Search.Unlike standard neural architecture search methods which seek for a single best performing architecture, NADS searches for a distribution of architectures that perform well on a given task, allowing us to identify building blocks common among all uncertainty aware architectures.With this formulation, we are able to optimize a stochastic outlier detection objective and construct an ensemble of models to perform OoD detection.We perform multiple OoD detection experiments and observe that our NADS performs favorably compared to state-of-the-art OoD detection methods.",We propose an architecture search method to identify a distribution of architectures and use it to construct a Bayesian ensemble for outlier detection. 615,Outlier Detection from Image Data,"Modern applications from Autonomous Vehicles to Video Surveillance generate massive amounts of image data.In this work we propose a novel image outlier detection approach that leverages the cutting-edge image classifier to discover outliers without using any labeled outlier.We observe that although intuitively the confidence that a convolutional neural network has that an image belongs to a particular class could serve as outlierness measure to each image, directly applying this confidence to detect outlier does not work well.This is because CNN often has high confidence on an outlier image that does not belong to any target class due to its generalization ability that ensures the high accuracy in classification.To solve this issue, we propose a Deep Neural Forest-based approach that harmonizes the contradictory requirements of accurately classifying images and correctly detecting the outlier images.Our experiments using several benchmark image datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN demonstrate the effectiveness of our IOD approach for outlier detection, capturing more than 90% of outliers generated by injecting one image dataset into another, while still preserving the classification accuracy of the multi-class classification problem.","A novel approach that detects outliers from image data, while preserving the classification accuracy of image classification" 616,CloudLSTM: A Recurrent Neural Model for Spatiotemporal Point-cloud Stream Forecasting,"This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources.We design a Dynamic Point-cloud Convolution operator as the core component of CloudLSTMs, which performs convolution directly over point-clouds and extracts local spatial features from sets of neighboring points that surround different 
elements of the input.This operator maintains the permutation invariance of sequence-to-sequence learning frameworks, while representing neighboring correlations at each time step -- an important aspect in spatiotemporal predictive learning.The D-Conv operator resolves the grid-structural data requirements of existing spatiotemporal forecasting models and can be easily plugged into traditional LSTM architectures with sequence-to-sequence learning and attention mechanisms. We apply our proposed architecture to two representative, practical use cases that involve point-cloud streams, i.e. mobile service traffic forecasting and air quality indicator forecasting.Our results, obtained with real-world datasets collected in diverse scenarios for each use case, show that CloudLSTM delivers accurate long-term predictions, outperforming a variety of neural network models.","This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources." 617,TransINT: Embedding Implication Rules in Knowledge Graphs with Isomorphic Intersections of Linear Subspaces,"Knowledge Graphs, composed of entities and relations, provide a structured representation of knowledge.For easy access to statistical approaches on relational data, multiple methods to embed a KG as components of R^d have been introduced.We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space.TransINT maps sets of entities to continuous sets of vectors that are inclusion-ordered isomorphically to relation implications.With a novel parameter sharing scheme, TransINT enables automatic training on missing but implied facts without rule grounding.We achieve new state-of-the-art performances with significant margins in Link Prediction and Triple Classification on the FB122 dataset, with boosted performance even on test instances that cannot be inferred by logical rules.The angles between the continuous sets embedded by TransINT provide an interpretable way to mine semantic relatedness and implication rules among relations.","We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space in an explainable, robust, and geometrically coherent way."
618,SCL: Towards Accurate Domain Adaptive Object Detection via Gradient Detach Based Stacked Complementary Losses,"Unsupervised domain adaptive object detection aims to learn a robust detector on the domain shift circumstance, where the training domain is label-rich with bounding box annotations, while the testing domain is label-agnostic and the feature distributions between training and testing domains are dissimilar or even totally different.In this paper, we propose a gradient detach based Stacked Complementary Losses method that uses detection objective as the primary objective, and cuts in several auxiliary losses in different network stages to utilize information from the complement data that can be effective in adapting model parameters to both source and target domains.A gradient detach operation is applied between detection and context sub-networks during training to force networks to learn discriminative representations.We argue that the conventional training with primary objective mainly leverages the information from the source-domain for maximizing likelihood and ignores the complement data in shallow layers of networks, which leads to an insufficient integration within different domains.Thus, our proposed method is a more syncretic adaptation learning process.We conduct comprehensive experiments on seven datasets, the results demonstrate that our method performs favorably better than the state-of-the-art methods by a large margin.For instance, from Cityscapes to FoggyCityscapes, we achieve 37.9% mAP, outperforming the previous art Strong-Weak by 3.6%.",We introduce a new gradient detach based complementary objective training strategy for domain adaptive object detection. 619,Multi-Task Learning by Deep Collaboration and Application in Facial Landmark Detection,"Convolutional neural networks have become the most successful and popular approach in many vision-related domains.While CNNs are particularly well-suited for capturing a proper hierarchy of concepts from real-world images, they are limited to domains where data is abundant.Recent attempts have looked into mitigating this data scarcity problem by casting their original single-task problem into a new multi-task learning problem.The main goal of this inductive transfer mechanism is to leverage domain-specific information from related tasks, in order to improve generalization on the main task.While recent results in the deep learning community have shown the promising potential of training task-specific CNNs in a soft parameter sharing framework, integrating the recent DL advances for improving knowledge sharing is still an open problem.In this paper, we propose the Deep Collaboration Network, a novel approach for connecting task-specific CNNs in a MTL framework.We define connectivity in terms of two distinct non-linear transformation blocks.One aggregates task-specific features into global features, while the other merges back the global features with each task-specific network.Based on the observation that task relevance depends on depth, our transformation blocks use skip connections as suggested by residual network approaches, to more easily deactivate unrelated task-dependent features.To validate our approach, we employed facial landmark detection datasets as they are readily amenable to MTL, given the number of tasks they include.Experimental results show that we can achieve up to 24.31% relative improvement in landmark failure rate over other state-of-the-art MTL approaches.We finally perform an ablation study 
showing that our approach effectively allows knowledge sharing, by leveraging domain-specific features at particular depths from tasks that we know are related.",We propose a novel approach for connecting task-specific networks in a multi-task learning setting based on recent residual network advances. 620,The Power of Semantic Similarity based Soft-Labeling for Generalized Zero-Shot Learning,"Zero-Shot Learning is a classification task where some classes, referred to as unseen classes, have no labeled training images.Instead, we only have side information about seen and unseen classes, often in the form of semantic or descriptive attributes.Lack of training images from a set of classes restricts the use of standard classification techniques and losses, including the popular cross-entropy loss.The key step in tackling the ZSL problem is bridging visual to semantic space via learning a nonlinear embedding.A well-established approach is to obtain the semantic representation of the visual information and perform classification in the semantic space.In this paper, we propose a novel architecture of casting ZSL as a fully connected neural-network with cross-entropy loss to embed visual space to semantic space.During training, in order to introduce unseen visual information to the network, we utilize soft-labeling based on semantic similarities between seen and unseen classes.To the best of our knowledge, such similarity-based soft-labeling has not been explored for cross-modal transfer and ZSL.We evaluate the proposed model on five benchmark datasets for zero-shot learning, AwA1, AwA2, aPY, SUN and CUB, and show that, despite the simplicity, our approach achieves state-of-the-art performance in the Generalized-ZSL setting on all of these datasets and outperforms the state-of-the-art for some datasets.",How to use cross-entropy loss for zero shot learning with soft labeling on unseen classes : a simple and effective solution that achieves state-of-the-art performance on five ZSL benchmark datasets.
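The mechanism in entry 620 is soft-labeling: while training on seen classes, the target distribution also places mass on unseen classes in proportion to attribute similarity, so standard cross-entropy exposes the network to unseen-class semantics. A hedged numpy sketch, assuming cosine similarity over class attribute vectors and a softmax temperature; the exact weighting scheme in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_labels(true_class, attributes, temperature=0.1, true_mass=0.8):
    # Spread (1 - true_mass) of the label mass over the other classes in
    # proportion to the softmax of attribute cosine similarity to the true class.
    a = attributes / np.linalg.norm(attributes, axis=1, keepdims=True)
    sims = a @ a[true_class]
    sims[true_class] = -np.inf                    # exclude the true class itself
    weights = np.exp(sims / temperature)
    weights = weights / weights.sum()
    target = (1.0 - true_mass) * weights
    target[true_class] = true_mass
    return target

def cross_entropy(logits, target):
    # Cross-entropy of a soft target distribution against network logits.
    logp = logits - np.log(np.exp(logits - logits.max()).sum()) - logits.max()
    return -(target * logp).sum()

n_classes, attr_dim = 6, 10                       # e.g., 4 seen + 2 unseen classes
attributes = rng.normal(size=(n_classes, attr_dim))
target = soft_labels(true_class=1, attributes=attributes)
logits = rng.normal(size=n_classes)               # stand-in for network outputs
print("soft target:", np.round(target, 3))
print("cross-entropy with soft target:", cross_entropy(logits, target))
```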
621,Growing Action Spaces,"In complex tasks, such as those with large combinatorial action spaces, random exploration may be too inefficient to achieve meaningful learning progress.In this work, we use a curriculum of progressively growing action spaces to accelerate learning.We assume the environment is out of our control, but that the agent may set an internal curriculum by initially restricting its action space.Our approach uses off-policy reinforcement learning to estimate optimal value functions for multiple action spaces simultaneously and efficiently transfers data, value estimates, and state representations from restricted action spaces to the full task.We show the efficacy of our approach in proof-of-concept control tasks and on challenging large-scale StarCraft micromanagement tasks with large, multi-agent action spaces.",Progressively growing the available action space is a great curriculum for learning agents 622,Improving Irregularly Sampled Time Series Learning with Dense Descriptors of Time,"Supervised learning with irregularly sampled time series has been a challenge for Machine Learning methods due to the obstacle of dealing with irregular time intervals.Some recent papers introduced recurrent neural network models that deal with irregularity, but most of them rely on complex mechanisms to achieve a better performance.This work proposes a novel method to represent timestamps as dense vectors using sinusoidal functions, called Time Embeddings.As a data input method, it can be applied to most machine learning models.The method was evaluated with two predictive tasks from MIMIC III, a dataset of irregularly sampled time series of electronic health records.Our tests showed an improvement to LSTM-based and classical machine learning models, especially with very irregular data.",A novel method to create dense descriptors of time (Time Embeddings) to make simple models understand temporal structures 623,Supervised Community Detection with Line Graph Neural Networks,"Community detection in graphs can be solved via spectral methods or posterior inference under certain probabilistic graphical models.Focusing on random graph families such as the stochastic block model, recent research has unified both approaches and identified both statistical and computational detection thresholds in terms of the signal-to-noise ratio.By recasting community detection as a node-wise classification problem on graphs, we can also study it from a learning perspective.We present a novel family of Graph Neural Networks for solving community detection problems in a supervised learning setting.We show that, in a data-driven manner and without access to the underlying generative models, they can match or even surpass the performance of the belief propagation algorithm on binary and multiclass stochastic block models, which is believed to reach the computational threshold in these cases.In particular, we propose to augment GNNs with the non-backtracking operator defined on the line graph of edge adjacencies.The GNNs also achieve good performance on real-world datasets.
In addition, we perform the first analysis of the optimization landscape of using GNNs to solve community detection problems, demonstrating that under certain simplifications and assumptions, the loss value at any local minimum is close to the loss value at the global minimum/minima.",We propose a novel graph neural network architecture based on the non-backtracking matrix defined over the edge adjacencies and demonstrate its effectiveness in community detection tasks on graphs. 624,"Unrolled, model-based networks for lensless imaging","We develop end-to-end learned reconstructions for lensless mask-based cameras, including an experimental system for capturing aligned lensless and lensed images for training. Various reconstruction methods are explored, on a scale from classic iterative approaches to deep learned methods with many learned parameters. In the middle ground, we present several variations of unrolled alternating direction method of multipliers with varying numbers of learned parameters.The network structure combines knowledge of the physical imaging model with learned parameters updated from the data, which compensate for artifacts caused by physical approximations.Our unrolled approach is 20X faster than classic methods and produces better reconstruction quality than both the classic and deep methods on our experimental system. ",We improve the reconstruction time and quality on an experimental mask-based lensless imager using an end-to-end learning approach which incorporates knowledge of the imaging model. 625,SEGEN: SAMPLE-ENSEMBLE GENETIC EVOLUTIONARY NETWORK MODEL,"Deep learning, a rebranding of deep neural network research works, has achieved a remarkable success in recent years.With multiple hidden layers, deep learning models aim at computing the hierarchical feature representations of the observational data.Meanwhile, due to its severe disadvantages in data consumption, computational resources, parameter tuning costs and the lack of result explainability, deep learning has also suffered from lots of criticism.In this paper, we will introduce a new representation learning model, namely “Sample-Ensemble Genetic Evolutionary Network”, which can serve as an alternative approach to deep learning models.Instead of building one single deep model, based on a set of sampled sub-instances, SEGEN adopts a genetic-evolutionary learning strategy to build a group of unit models generations by generations.The unit models incorporated in SEGEN can be either traditional machine learning models or the recent deep learning models with a much “narrower” and “shallower” architecture.The learning results of each instance at the final generation will be effectively combined from each unit model via diffusive propagation and ensemble learning strategies.From the computational perspective, SEGEN requires far less data, fewer computational resources and parameter tuning efforts, but has sound theoretic interpretability of the learning process and results.Extensive experiments have been done on several different real-world benchmark datasets, and the experimental results obtained by SEGEN have demonstrated its advantages over the state-of-the-art representation learning models.","We introduce a new representation learning model, namely “Sample-Ensemble Genetic Evolutionary Network” (SEGEN), which can serve as an alternative approach to deep learning models." 
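Entry 623 above augments GNNs with the non-backtracking operator on the line graph: directed edges become nodes, and edge (i -> j) connects to (j -> l) only when l != i. A small numpy sketch of constructing that operator for an undirected graph given as an edge list (the toy graph is arbitrary):

```python
import numpy as np

def non_backtracking_matrix(edges):
    # Directed edges: each undirected edge {i, j} yields (i -> j) and (j -> i).
    directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
    index = {e: n for n, e in enumerate(directed)}
    B = np.zeros((len(directed), len(directed)))
    for (i, j), row in index.items():
        for (k, l), col in index.items():
            # B[(i->j), (j->l)] = 1 whenever the walk continues without backtracking.
            if j == k and l != i:
                B[row, col] = 1.0
    return B, directed

# Toy graph: a triangle with a pendant node.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
B, directed = non_backtracking_matrix(edges)
print("number of directed edges (line-graph nodes):", len(directed))
print("non-backtracking matrix:\n", B.astype(int))
```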
626,Learning to learn to communicate,"How can we teach artificial agents to use human language flexibly to solve problems in a real-world environment?We have one example in nature of agents being able to solve this problem: human babies eventually learn to use human language to solve problems, and they are taught with an adult human-in-the-loop.Unfortunately, current machine learning methods are too data inefficient to learn a language in this way.An outstanding goal is finding an algorithm with a suitable ‘language learning prior’ that allows it to learn human language, while minimizing the number of required human interactions.In this paper, we propose to learn such a prior in simulation, leveraging the increasing amount of available compute for machine learning experiments.We call our approach Learning to Learn to Communicate.Specifically, in L2C we train a meta-learning agent in simulation to interact with populations of pre-trained agents, each with their own distinct communication protocol.Once the meta-learning agent is able to quickly adapt to each population of agents, it can be deployed in new populations unseen during training, including populations of humans.To show the promise of the L2C framework, we conduct some preliminary experiments in a Lewis signaling game, where we show that agents trained with L2C are able to learn a simple form of human language in fewer iterations than randomly initialized agents.","We propose to use meta-learning for more efficient language learning, via a kind of 'domain randomization'. " 627,Scheduling the Learning Rate Via Hypergradients: New Insights and a New Algorithm,"We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization. This allows us to explicitly search for schedules that achieve good generalization.We describe the structure of the gradient of a validation error w.r.t. the learning rates, the hypergradient, and based on this we introduce a novel online algorithm.Our method adaptively interpolates between two recently proposed techniques, featuring increased stability and faster convergence.We show empirically that the proposed technique compares favorably with baselines and related methods in terms of final test accuracy.",MARTHE: a new method to fit task-specific learning rate schedules from the perspective of hyperparameter optimization 628,Interactive Image Generation Using Scene Graphs,Recent years have witnessed some exciting developments in the domain of generating images from scene-based text descriptions.These approaches have primarily focused on generating images from a static text description and are limited to generating images in a single pass.They are unable to generate an image interactively based on an incrementally additive text description.We propose a method to generate an image incrementally based on a sequence of graphs of scene descriptions.We propose a recurrent network architecture that preserves the image content generated in previous steps and modifies the cumulative image as per the newly provided scene information.Our model utilizes Graph Convolutional Networks to cater to variable-sized scene graphs along with Generative Adversarial image translation networks to generate realistic multi-object images without needing any intermediate supervision during training.We experiment with the Coco-Stuff dataset which has multi-object images along with annotations describing the visual scene and show that our model significantly outperforms other approaches on
the same dataset in generating visually consistent images for incrementally growing scene graphs.,Interactively generating image from incrementally growing scene graphs in multiple steps using GANs while preserving the contents of image generated in previous steps 629,Needles in Haystacks: On Classifying Tiny Objects in Large Images,"In some important computer vision domains, such as medical or hyperspectral imaging, we care about the classification of tiny objects in large images.However, most Convolutional Neural Networks for image classification were developed using biased datasets that contain large objects, in mostly central image positions.To assess whether classical CNN architectures work well for tiny object classification we build a comprehensive testbed containing two datasets: one derived from MNIST digits and one from histopathology images.This testbed allows controlled experiments to stress-test CNN architectures with a broad spectrum of signal-to-noise ratios.Our observations indicate that: There exists a limit to signal-to-noise below which CNNs fail to generalize and that this limit is affected by dataset size - more data leading to better performances; however, the amount of training data required for the model to generalize scales rapidly with the inverse of the object-to-image ratio in general, higher capacity models exhibit better generalization; when knowing the approximate object sizes, adapting receptive field is beneficial; and for very small signal-to-noise ratio the choice of global pooling operation affects optimization, whereas for relatively large signal-to-noise values, all tested global pooling operations exhibit similar performance.","We study low- and very-low-signal-to-noise classification scenarios, where objects that correlate with class label occupy tiny proportion of the entire image (e.g. medical or hyperspectral imaging)." 630,On the Relationship between Self-Attention and Convolutional Layers,"Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block.Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks.This raises the question: do learned attention layers operate similarly to convolutional layers?This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice.Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as expressive as any convolutional layer.Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis.Our code is publicly available.",A self-attention layer can perform convolution and often learns to do so in practice. 
631,Learning-Based Low-Rank Approximations,"We introduce a “learning-based” algorithm for the low-rank decomposition problem: given an input matrix and a target rank, compute a low-rank matrix that minimizes the approximation loss.The algorithm uses a training set of input matrices in order to optimize its performance.Specifically, some of the most efficient approximate algorithms for computing low-rank approximations proceed by first applying a sparse random “sketching matrix” to the input, and then performing the singular value decomposition of the sketched matrix.We show how to replace the random matrix with a “learned” matrix of the same sparsity to reduce the error.Our experiments show that, for multiple types of data sets, a learned sketch matrix can substantially reduce the approximation loss compared to a random matrix, sometimes by one order of magnitude.We also study mixed matrices where only some of the rows are trained and the remaining ones are random, and show that such mixed matrices still offer improved performance while retaining worst-case guarantees.",Learning-based algorithms can improve upon the performance of classical algorithms for the low-rank approximation problem while retaining the worst-case guarantee. 632,SNODE: Spectral Discretization of Neural ODEs for System Identification,"This paper proposes the use of spectral element methods for fast and accurate training of Neural Ordinary Differential Equations for system identification.This is achieved by expressing their dynamics as a truncated series of Legendre polynomials.The series coefficients, as well as the network weights, are computed by minimizing the weighted sum of the loss function and the violation of the ODE-Net dynamics.The problem is solved by coordinate descent that alternately minimizes, with respect to the coefficients and the weights, two unconstrained sub-problems using standard backpropagation and gradient methods.The resulting optimization scheme is fully time-parallel and results in a low memory footprint.Experimental comparison to standard methods, such as backpropagation through explicit solvers and the adjoint technique, on training surrogate models of small and medium-scale dynamical systems shows that it is at least one order of magnitude faster at reaching a comparable value of the loss function.The corresponding testing MSE is one order of magnitude smaller as well, suggesting generalization capabilities increase.",This paper proposes the use of spectral element methods for fast and accurate training of Neural Ordinary Differential Equations for system identification.
633,Scheduled Intrinsic Drive: A Hierarchical Take on Intrinsically Motivated Exploration,"Exploration in sparse reward reinforcement learning remains an open challenge.Many state-of-the-art methods use intrinsic motivation to complement the sparse extrinsic reward signal, giving the agent more opportunities to receive feedback during exploration.Commonly these signals are added as bonus rewards, which results in a mixture policy that neither conducts exploration nor task fulfillment resolutely.In this paper, we instead learn separate intrinsic and extrinsic task policies and schedule between these different drives to accelerate exploration and stabilize learning.Moreover, we introduce a new type of intrinsic reward denoted as successor feature control, which is general and not task-specific.It takes into account statistics over complete trajectories and thus differs from previous methods that only use local information to evaluate intrinsic motivation.We evaluate our proposed scheduled intrinsic drive agent using three different environments with pure visual inputs: VizDoom, DeepMind Lab and DeepMind Control Suite.The results show a substantially improved exploration efficiency with SFC and the hierarchical usage of the intrinsic drives.A video of our experimental results can be found at https://gofile.io/?c=HpEwTd.",A new intrinsic reward signal based on successor features and a novel way to combine extrinsic and intrinsic reward. 634,Learning Exploration Policies for Model-Agnostic Meta-Reinforcement Learning,"Meta-Reinforcement learning approaches aim to develop learning procedures that can adapt quickly to a distribution of tasks with the help of a few examples.Developing efficient exploration strategies capable of finding the most useful samples becomes critical in such settings.Existing approaches to finding efficient exploration strategies add auxiliary objectives to promote exploration by the pre-update policy, however, this makes the adaptation using a few gradient steps difficult as the pre-update and post-update policies are quite different.Instead, we propose to explicitly model a separate exploration policy for the task distribution.Having two different policies gives more flexibility in training the exploration policy and also makes adaptation to any specific task easier.We show that using self-supervised or supervised learning objectives for adaptation stabilizes the training process and also demonstrate the superior performance of our model compared to prior works in this domain.",We propose to use a separate exploration policy to collect the pre-adaptation trajectories in MAML. We also show that using a self-supervised objective in the inner loop leads to more stable training and much better performance. 
635,Supersymmetric Artificial Neural Network,"The “Supersymmetric Artificial Neural Network” in deep learning espouses the importance of considering biological constraints with the aim of further generalizing backward propagation.Looking at the progression of ‘solution geometries’; going from SO representation to SU representation has guaranteed richer and richer representations in weight space of the artificial neural network, and hence better and better hypotheses were generatable.The Supersymmetric Artificial Neural Network explores a natural step forward, namely SU representation.These supersymmetric biological brain representations can be represented by supercharge compatible special unitary notation SU, parameterized by θ, θ̄, which are supersymmetric directions, unlike θ seen in the typical non-supersymmetric deep learning model.Notably, Supersymmetric values can encode or represent more information than the typical deep learning model, in terms of “partner potential” signals for example.","Generalizing backward propagation, using formal methods from supersymmetry." 636,Regularizing Trajectories to Mitigate Catastrophic Forgetting,"Regularization-based continual learning approaches generally prevent catastrophic forgetting by augmenting the training loss with an auxiliary objective.However in most practical optimization scenarios with noisy data and/or gradients, it is possible that stochastic gradient descent can inadvertently change critical parameters.In this paper, we argue for the importance of regularizing optimization trajectories directly.We derive a new co-natural gradient update rule for continual learning whereby the new task gradients are preconditioned with the empirical Fisher information of previously learnt tasks.We show that using the co-natural gradient systematically reduces forgetting in continual learning.Moreover, it helps combat overfitting when learning a new task in a low resource scenario.",Regularizing the optimization trajectory with the Fisher information of old tasks reduces catastrophic forgetting greatly 637,Neural Sketch Learning for Conditional Program Generation,"We study the problem of generating source code in a strongly typed, Java-like programming language, given a label carrying a small amount of information about the code that is desired.The generated programs are expected to respect a ""realistic"" relationship between programs and labels, as exemplified by a corpus of labeled programs available during training.Two challenges in such *conditional program generation* are that the generated programs must satisfy a rich set of syntactic and semantic constraints, and that source code contains many low-level features that impede learning. We address these problems by training a neural generator not on code but on *program sketches*, or models of program syntax that abstract out names and operations that do not generalize across programs.During generation, we infer a posterior distribution over sketches, then concretize samples from this distribution into type-safe programs using combinatorial techniques. We implement our ideas in a system for generating API-heavy Java code, and show that it can often predict the entire body of a method given just a few API calls or data types that appear in the method.","We give a method for generating type-safe programs in a Java-like language, given a small amount of syntactic information about the desired code."
638,Improving Sequential Latent Variable Models with Autoregressive Flows,"We propose an approach for sequence modeling based on autoregressive normalizing flows.Each autoregressive transform, acting across time, serves as a moving reference frame for modeling higher-level dynamics.This technique provides a simple, general-purpose method for improving sequence modeling, with connections to existing and classical techniques.We demonstrate the proposed approach both with standalone models, as well as a part of larger sequential latent variable models.Results are presented on three benchmark video datasets, where flow-based dynamics improve log-likelihood performance over baseline models.",We show how autoregressive flows can be used to improve sequential latent variable models. 639,Equilibrium Propagation with Continual Weight Updates,"Equilibrium Propagation is a learning algorithm that bridges Machine Learning and Neuroscience, by computing gradients closely matching those of Backpropagation Through Time, but with a learning rule local in space.Given an input x and associated target y, EP proceeds in two phases: in the first phase neurons evolve freely towards a first steady state; in the second phase output neurons are nudged towards y until they reach a second steady state.However, in existing implementations of EP, the learning rule is not local in time:the weight update is performed after the dynamics of the second phase have converged and requires information of the first phase that is no longer available physically.This is a major impediment to the biological plausibility of EP and its efficient hardware implementation.In this work, we propose a version of EP named Continual Equilibrium Propagation where neuron and synapse dynamics occur simultaneously throughout the second phase, so that the weight update becomes local in time.We prove theoretically that, provided the learning rates are sufficiently small, at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss given by BPTT.We demonstrate training with C-EP on MNIST and generalize C-EP to neural networks where neurons are connected by asymmetric connections.We show through experiments that the more the network updates follows the gradients of BPTT, the best it performs in terms of training.These results bring EP a step closer to biology while maintaining its intimate link with backpropagation.","We propose a continual version of Equilibrium Propagation, where neuron and synapse dynamics occur simultaneously throughout the second phase, with theoretical guarantees and numerical simulations." 640,Meta Module Network for Compositional Visual Reasoning,"There are two main lines of research on visual reasoning: neural module network with explicit multi-hop reasoning through handcrafted neural modules, and monolithic network with implicit reasoning in the latent feature space.The former excels in interpretability and compositionality, while the latter usually achieves better performance due to model flexibility and parameter efficiency. 
In order to bridge the gap between the two, we present Meta Module Network, a novel hybrid approach that can efficiently utilize a Meta Module to perform versatile functionalities, while preserving compositionality and interpretability through modularized design.The proposed model first parses an input question into a functional program through a Program Generator.Instead of handcrafting a task-specific network to represent each function like traditional NMN, we use a Recipe Encoder to translate the functions into their corresponding recipes, which are used to dynamically instantiate the Meta Module into Instance Modules.To endow different instance modules with designated functionality, a Teacher-Student framework is proposed, where a symbolic teacher pre-executes against the scene graphs to provide guidelines for the instantiated modules to follow.In a nutshell, MMN adopts the meta module to increase its parameterization efficiency, and uses recipe encoding to improve its generalization ability over NMN.Experiments conducted on the GQA benchmark demonstrate that: MMN achieves significant improvement over both NMN and monolithic network baselines; MMN is able to generalize to unseen but related functions.",We propose a new Meta Module Network to resolve some of the restrictions of previous Neural Module Network to achieve strong performance on a realistic visual reasoning dataset. 641,CopyCAT: Taking Control of Neural Policies with Constant Attacks,"We propose a new perspective on adversarial attacks against deep reinforcement learning agents.Our main contribution is CopyCAT, a targeted attack able to consistently lure an agent into following an outsider's policy.It is pre-computed, therefore fast to infer, and could thus be usable in a real-time scenario.We show its effectiveness on Atari 2600 games in the novel read-only setting.In the latter, the adversary cannot directly modify the agent's state -its representation of the environment- but can only attack the agent's observation -its perception of the environment.Directly modifying the agent's state would require write-access to the agent's inner workings and we argue that this assumption is too strong in realistic settings.",We propose a new attack for taking full control of neural policies in realistic settings.
642,Collaborative Generated Hashing for Market Analysis and Fast Cold-start Recommendation,"Cold-start and efficiency issues of the Top-k recommendation are critical to large-scale recommender systems.Previous hybrid recommendation methods are effective to deal with the cold-start issues by extracting real latent factors of cold-start items from side information, but they still suffer low efficiency in online recommendation caused by the expensive similarity search in real latent space.This paper presents a collaborative generated hashing to improve the efficiency by denoting users and items as binary codes, which applies to various settings: cold-start users, cold-start items and warm-start ones.Specifically, CGH is designed to learn hash functions of users and items through the Minimum Description Length principle; thus, it can deal with various recommendation settings.In addition, CGH initiates a new marketing strategy through mining potential users by a generative step.To reconstruct effective users, the MDL principle is used to learn compact and informative binary codes from the content data.Extensive experiments on two public datasets show the advantages for recommendations in various settings over competing baselines and analyze the feasibility of the application in marketing.",It can generate effective hash codes for efficient cold-start recommendation and meanwhile provide a feasible marketing strategy. 643,Learning To Solve Circuit-SAT: An Unsupervised Differentiable Approach,"Recent efforts to combine Representation Learning with Formal Methods, commonly known as the Neuro-Symbolic Methods, have given rise to a new trend of applying rich neural architectures to solve classical combinatorial optimization problems.In this paper, we propose a neural framework that can learn to solve the Circuit Satisfiability problem.Our framework is built upon two fundamental contributions: a rich embedding architecture that encodes the problem structure and an end-to-end differentiable training procedure that mimics Reinforcement Learning and trains the model directly toward solving the SAT problem.The experimental results show the superior out-of-sample generalization performance of our framework compared to the recently developed NeuroSAT method.",We propose a neural framework that can learn to solve the Circuit Satisfiability problem from (unlabeled) circuit instances. 
644,Connecting the Dots Between MLE and RL for Sequence Generation,"Sequence generation models such as recurrent networks can be trained with a diverse set of learning algorithms.For example, maximum likelihood learning is simple and efficient, yet suffers from the exposure bias problem.Reinforcement learning like policy gradient addresses the problem but can have prohibitively poor exploration efficiency.A variety of other algorithms such as RAML, SPG, and data noising, have also been developed in different perspectives.This paper establishes a formal connection between these algorithms.We present a generalized entropy regularized policy optimization formulation, and show that the apparently divergent algorithms can all be reformulated as special instances of the framework, with the only difference being the configurations of reward function and a couple of hyperparameters.The unified interpretation offers a systematic view of the varying properties of exploration and learning efficiency.Besides, based on the framework, we present a new algorithm that dynamically interpolates among the existing algorithms for improved learning.Experiments on machine translation and text summarization demonstrate the superiority of the proposed algorithm.","A unified perspective of various learning algorithms for sequence generation, such as MLE, RL, RAML, data noising, etc." 645,Global reasoning network for image super-resolution,"Recent image super-resolution studies leverage very deep convolutional neural networks and the rich hierarchical features they offered, which leads to better reconstruction performance than conventional methods.However, the small receptive fields in the up-sampling and reconstruction process of those models stop them to take full advantage of global contextual information.This causes problems for further performance improvement.In this paper, inspired by image reconstruction principles of human visual system, we propose an image super-resolution global reasoning network to effectively learn the correlations between different regions of an image, through global reasoning.Specifically, we propose global reasoning up-sampling module and global reasoning reconstruction block.They construct a graph model to perform relation reasoning on regions of low resolution images.They aim to reason the interactions between different regions in the up-sampling and reconstruction process and thus leverage more contextual information to generate accurate details.Our proposed SRGRN are more robust and can handle low resolution images that are corrupted by multiple types of degradation.Extensive experiments on different benchmark data-sets show that our model outperforms other state-of-the-art methods.Also our model is lightweight and consumes less computing power, which makes it very suitable for real life deployment.",A state-of-the-art model based on global reasoning for image super-resolution 646,Demystifying Graph Neural Network Via Graph Filter Assessment,"Graph Neural Networks have received tremendous attention recently due to their power in handling graph data for different downstream tasks across different application domains.The key of GNN is its graph convolutional filters, and recently various kinds of filters are designed.However, there still lacks in-depth analysis on Whether there exists a best filter that can perform best on all graph data; Which graph properties will influence the optimal choice of graph filter; How to design appropriate filter adaptive to the graph data.In this 
paper, we focus on addressing the above three questions.We first propose a novel assessment tool to evaluate the effectiveness of graph convolutional filters for a given graph.Using the assessment tool, we find out that there is no single filter as a `silver bullet' that performs the best on all possible graphs.In addition, different graph structure properties will influence the optimal graph convolutional filter's design choice.Based on these findings, we develop Adaptive Filter Graph Neural Network, a simple but powerful model that can adaptively learn a task-specific filter.For a given graph, it leverages graph filter assessment as regularization and learns to combine from a set of base filters.Experiments on both synthetic and real-world benchmark datasets demonstrate that our proposed model can indeed learn an appropriate filter and perform well on graph tasks.",Propose an assessment framework to analyze and learn graph convolutional filters 647,Mincut Pooling in Graph Neural Networks,"The advance of node pooling operations in Graph Neural Networks has lagged behind the feverish design of new message-passing techniques, and pooling remains an important and challenging endeavor for the design of deep architectures.In this paper, we propose a pooling operation for GNNs that leverages a differentiable unsupervised loss based on the minCut optimization objective.For each node, our method learns a soft cluster assignment vector that depends on the node features, the target inference task, and, thanks to the minCut objective, also on the connectivity structure of the graph.Graph pooling is obtained by applying the matrix of assignment vectors to the adjacency matrix and the node features.We validate the effectiveness of the proposed pooling method on a variety of supervised and unsupervised tasks.","A new pooling layer for GNNs that learns how to pool nodes, according to their features, the graph connectivity, and the downstream task objective." 648,TuckER: Tensor Factorization for Knowledge Graph Completion,"Knowledge graphs are structured representations of real world facts.However, they typically contain only a small subset of all possible facts.Link prediction is the task of inferring missing facts based on existing ones.We propose TuckER, a relatively simple yet powerful linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples.By using this particular decomposition, parameters are shared between relations, enabling multi-task learning.TuckER outperforms previous state-of-the-art models across several standard link prediction datasets."," We propose TuckER, a relatively simple but powerful linear model for link prediction in knowledge graphs, based on Tucker decomposition of the binary tensor representation of knowledge graph triples. 
" 649,Backprop with Approximate Activations for Memory-efficient Network Training,"With innovations in architecture design, deeper and wider neural network models deliver improved performance on a diverse variety of tasks.But the increased memory footprint of these models presents a challenge during training, when all intermediate layer activations need to be stored for back-propagation.Limited GPU memory forces practitioners to make sub-optimal choices: either train inefficiently with smaller batches of examples; or limit the architecture to have lower depth and width, and fewer layers at higher spatial resolutions.""This work introduces an approximation strategy that significantly reduces a network's memory footprint during training, but has negligible effect on training performance and computational expense."", 'During the forward pass, we replace activations with lower-precision approximations immediately after they have been used by subsequent layers, thus freeing up memory.The approximate activations are then used during the backward pass.""This approach limits the accumulation of errors across the forward and backward pass---because the forward computation across the network still happens at full precision, and the approximation has a limited effect when computing gradients to a layer's input."", 'Experiments, on CIFAR and ImageNet, show that using our approach with 8- and even 4-bit fixed-point approximations of 32-bit floating-point activations has only a minor effect on training and validation performance, while affording significant savings in memory usage.","An algorithm to reduce the amount of memory required for training deep networks, based on an approximation strategy." 650,Learning-Based Frequency Estimation Algorithms,"Estimating the frequencies of elements in a data stream is a fundamental task in data analysis and machine learning.The problem is typically addressed using streaming algorithms which can process very large data using limited storage.""Today's streaming algorithms, however, cannot exploit patterns in their input to improve performance."", 'We propose a new class of algorithms that automatically learn relevant patterns in the input data and use them to improve its frequency estimates. The proposed algorithms combine the benefits of machine learning with the formal guarantees available through algorithm theory. We prove that our learning-based algorithms have lower estimation errors than their non-learning counterparts. We also evaluate our algorithms on two real-world datasets and demonstrate empirically their performance gains.","Data stream algorithms can be improved using deep learning, while retaining performance guarantees." 
651,Link Prediction in Hypergraphs using Graph Convolutional Networks,"Link prediction in simple graphs is a fundamental problem in which new links between nodes are predicted based on the observed structure of the graph.However, in many real-world applications, there is a need to model relationships among nodes which go beyond pairwise associations.For example, in a chemical reaction, the relationship among the reactants and products is inherently higher-order.Additionally, there is a need to represent the direction from reactants to products.Hypergraphs provide a natural way to represent such complex higher-order relationships.Even though Graph Convolutional Networks have recently emerged as a powerful deep learning-based approach for link prediction over simple graphs, their suitability for link prediction in hypergraphs is unexplored -- we fill this gap in this paper and propose Neural Hyperlink Predictor. NHP adapts GCNs for link prediction in hypergraphs. We propose two variants of NHP -- NHP-U and NHP-D -- for link prediction over undirected and directed hypergraphs, respectively.To the best of our knowledge, NHP-D is the first method for link prediction over directed hypergraphs.Through extensive experiments on multiple real-world datasets, we show NHP's effectiveness.",We propose Neural Hyperlink Predictor (NHP). NHP adapts graph convolutional networks for link prediction in hypergraphs 652,Adversarial Decomposition of Text Representation,"In this paper, we present a method for adversarial decomposition of text representation.This method can be used to decompose a representation of an input sentence into several independent vectors, where each vector is responsible for a specific aspect of the input sentence.We evaluate the proposed method on two case studies: the conversion between different social registers and diachronic language change.We show that the proposed method is capable of fine-grained controlled change of these aspects of the input sentence.For example, our model is capable of learning a continuous representation of the style of the sentence, in line with the reality of language use.The model uses adversarial-motivational training and includes a special motivational loss, which acts opposite to the discriminator and encourages a better decomposition.Finally, we evaluate the obtained meaning embeddings on a downstream task of paraphrase detection and show that they are significantly better than embeddings of a regular autoencoder.",A method which learns separate representations for the meaning and the form of a sentence 653,Exploring the Design of Patient-Generated Data Visualizations,"We were approached by a group of healthcare providers who are involved in the care of chronic patients looking for potential technologies to facilitate the process of reviewing patient-generated data during clinical visits.
', ""Aiming at understanding the healthcare providers' attitudes towards reviewing patient-generated data, we conducted a focus group with a mixed group of healthcare providers."", ""Next, to gain the patients' perspectives, we interviewed eight chronic patients, collected a sample of their data and designed a series of visualizations representing patient data we collected."", 'Last, we sought feedback on the visualization designs from healthcare providers who requested this exploration.""We found four factors shaping patient-generated data: data & context, patient's motivation, patient's time commitment, and patient's support circle."", 'Informed by the results of our studies, we discussed the importance of designing patient-generated visualizations for individuals by considering both patient and healthcare provider rather than designing with the purpose of generalization and provided guidelines for designing future patient-generated data visualizations.",We explored the visualization designs that can support chronic patients to present and review their health data with healthcare providers during clinical visits. 654,Stochastic Neural Physics Predictor,"Recently, neural-network based forward dynamics models have been proposed that attempt to learn the dynamics of physical systems in a deterministic way.While near-term motion can be predicted accurately, long-term predictions suffer from accumulating input and prediction errors which can lead to plausible but different trajectories that diverge from the ground truth. A system that predicts distributions of the future physical states for long time horizons based on its uncertainty is thus a promising solution. In this work, we introduce a novel robust Monte Carlo sampling based graph-convolutional dropout method that allows us to sample multiple plausible trajectories for an initial state given a neural-network based forward dynamics predictor. By introducing a new shape preservation loss and training our dynamics model recurrently, we stabilize long-term predictions.We show that our model’s long-term forward dynamics prediction errors on complicated physical interactions of rigid and deformable objects of various shapes are significantly lower than existing strong baselines.Lastly, we demonstrate how generating multiple trajectories with our Monte Carlo dropout method can be used to train model-free reinforcement learning agents faster and to better solutions on simple manipulation tasks.",We propose a stochastic differentiable forward dynamics predictor that is able to sample multiple physically plausible trajectories under the same initial input state and show that it can be used to train model-free policies more efficiently. 655,The Comparative Power of ReLU Networks and Polynomial Kernels in the Presence of Sparse Latent Structure,"There has been a large amount of interest, both in the past and particularly recently, into the relative advantage of different families of universal function approximators, for instance neural networks, polynomials, rational functions, etc.However, current research has focused almost exclusively on understanding this problem in a worst case setting: e.g. characterizing the best L1 or L_ approximation in a boxIn this setting many classical tools from approximation theory can be effectively used.However, in typical applications we expect data to be high dimensional, but structured -- so, it would only be important to approximate the desired function well on the relevant part of its domain, e.g. 
a small manifold on which real input data actually lies.Moreover, even within this domain the desired quality of approximation may not be uniform; for instance in classification problems, the approximation needs to be more accurate near the decision boundary.These issues, to the best of our knowledge, have remain unexplored until now.With this in mind, we analyze the performance of neural networks and polynomial kernels in a natural regression setting where the data enjoys sparse latent structure, and the labels depend in a simple way on the latent variables.We give an almost-tight theoretical analysis of the performance of both neural networks and polynomials for this problem, as well as verify our theory with simulations.Our results both involve new techniques, which may be of independent interest, and show substantial qualitative differences with what is known in the worst-case setting.",Beyond-worst-case analysis of the representational power of ReLU nets & polynomial kernels -- in particular in the presence of sparse latent structure. 656,Controlling generative models with continuous factors of variations,"Recent deep generative models can provide photo-realistic images as well as visual or textual content embeddings useful to address various tasks of computer vision and natural language processing.Their usefulness is nevertheless often limited by the lack of control over the generative process or the poor understanding of the learned representation.To overcome these major issues, very recent works have shown the interest of studying the semantics of the latent space of generative models.In this paper, we propose to advance on the interpretability of the latent space of generative models by introducing a new method to find meaningful directions in the latent space of any generative model along which we can move to control precisely specific properties of the generated image like position or scale of the object in the image.Our method is weakly supervised and particularly well suited for the search of directions encoding simple transformations of the generated image, such as translation, zoom or color variations.We demonstrate the effectiveness of our method qualitatively and quantitatively, both for GANs and variational auto-encoders.",A model to control the generation of images with GAN and beta-VAE with regard to scale and position of the objects 657,"Kaleidoscope: An Efficient, Learnable Representation For All Structured Linear Maps","Modern neural network architectures use structured linear transformations, such as low-rank matrices, sparse matrices, permutations, and the Fourier transform, to improve inference speed and reduce memory usage compared to general linear maps.However, choosing which of the myriad structured transformations to use is a laborious task that requires trading off speed, space, and accuracy.We consider a different approach: we introduce a family of matrices called kaleidoscope matrices that provably capture any structured matrix with near-optimal space and time complexity.We empirically validate that K-matrices can be automatically learned within end-to-end pipelines to replace hand-crafted procedures, in order to improve model quality.For example, replacing channel shuffles in ShuffleNet improves classification accuracy on ImageNet by up to 5%.Learnable K-matrices can also simplify hand-engineered pipelines---we replace filter bank feature computation in speech data preprocessing with a kaleidoscope layer, resulting in only 0.4% loss in accuracy on the 
TIMIT speech recognition task.K-matrices can also capture latent structure in models: for a challenging permuted image classification task, adding a K-matrix to a standard convolutional architecture can enable learning the latent permutation and improve accuracy by over 8 points.We provide a practically efficient implementation of our approach, and use K-matrices in a Transformer network to attain 36% faster end-to-end inference speed on a language translation task.","We propose a differentiable family of ""kaleidoscope matrices,"" prove that all structured matrices can be represented in this form, and use them to replace hand-crafted linear maps in deep learning models." 658,Decentralized Deep Learning with Arbitrary Communication Compression,"Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters.As current approaches are limited by network bandwidth, we propose the use of communication compression in the decentralized training context.We show that Choco-SGD achieves linear speedup in the number of workers for arbitrary high compression ratios on general non-convex functions, and non-IID training data. We demonstrate the practical performance of the algorithm in two key scenarios: the training of deep learning models over decentralized user devices, connected by a peer-to-peer network and in a datacenter.","We propose Choco-SGD---decentralized SGD with compressed communication---for non-convex objectives and show its strong performance in various deep learning applications (on-device learning, datacenter case)." 659,Entropy-SGD optimizes the prior of a PAC-Bayes bound: Data-dependent PAC-Bayes priors via differential privacy,"We show that Entropy-SGD, when viewed as a learning algorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs classifier, i.e., a randomized classifier obtained by a risk-sensitive perturbation of the weights of a learned classifier.Entropy-SGD works by optimizing the bound’s prior, violating the hypothesis of the PAC-Bayes theorem that the prior is chosen independently of the data.Indeed, available implementations of Entropy-SGD rapidly obtain zero training error on random labels and the same holds of the Gibbs posterior.In order to obtain a valid generalization bound, we show that an ε-differentially private prior yields a valid PAC-Bayes bound, a straightforward consequence of results connecting generalization with differential privacy.Using stochastic gradient Langevin dynamics to approximate the well-known exponential release mechanism, we observe that generalization error on MNIST falls within the bounds computed under the assumption that SGLD produces perfect samples.In particular, Entropy-SGLD can be configured to yield relatively tight generalization bounds and still fit real labels, although these same settings do not obtain state-of-the-art performance.","We show that Entropy-SGD optimizes the prior of a PAC-Bayes bound, violating the requirement that the prior be independent of data; we use differential privacy to resolve this and improve generalization." 
660,Efficient Neural Network Compression via Transfer Learning for Industrial Optical Inspection,"In this paper, we investigate learning the deep neural networks for automated optical inspection in industrial manufacturing.Our preliminary result has shown the stunning performance improvement by transfer learning from the completely dissimilar source domain: ImageNet.Further study for demystifying this improvement shows that the transfer learning produces a highly compressible network, which was not the case for the network learned from scratch.The experimental result shows that there is a negligible accuracy drop in the network learned by transfer learning until it is compressed to 1/128 reduction of the number of convolution filters.This result is contrary to the compression without transfer learning which loses more than 5% accuracy at the same compression rate.",We experimentally show that transfer learning makes sparse features in the network and thereby produces a more compressible network. 661,Generalized Label Propagation Methods for Semi-Supervised Learning,"The key challenge in semi-supervised learning is how to effectively leverage unlabeled data to improve learning performance.The classical label propagation method, despite its popularity, has limited modeling capability in that it only exploits graph information for making predictions.In this paper, we consider label propagation from a graph signal processing perspective and decompose it into three components: signal, filter, and classifier.By extending the three components, we propose a simple generalized label propagation framework for semi-supervised learning.GLP naturally integrates graph and data feature information, and offers the flexibility of selecting appropriate filters and domain-specific classifiers for different applications.Interestingly, GLP also provides new insight into the popular graph convolutional network and elucidates its working mechanisms.Extensive experiments on three citation networks, one knowledge graph, and one image dataset demonstrate the efficiency and effectiveness of GLP.","We extend the classical label propation methods to jointly model graph and feature information from a graph filtering perspective, and show connections to the graph convlutional networks." 
662,DeepOBS: A Deep Learning Optimizer Benchmark Suite,"Because the choice and tuning of the optimizer affect the speed, and ultimately the performance, of deep learning, there is significant past and recent research in this area.Yet, perhaps surprisingly, there is no generally agreed-upon protocol for the quantitative and reproducible evaluation of optimization strategies for deep learning.We suggest routines and benchmarks for stochastic optimization, with special focus on the unique aspects of deep learning, such as stochasticity, tunability and generalization.As the primary contribution, we present DeepOBS, a Python package of deep learning optimization benchmarks.The package addresses key challenges in the quantitative assessment of stochastic optimizers, and automates most steps of benchmarking.The library includes a wide and extensible set of ready-to-use realistic optimization problems, such as training Residual Networks for image classification on ImageNet or character-level language prediction models, as well as popular classics like MNIST and CIFAR-10.The package also provides realistic baseline results for the most popular optimizers on these test problems, ensuring a fair comparison to the competition when benchmarking new optimizers, and without having to run costly experiments.It comes with output back-ends that directly produce LaTeX code for inclusion in academic publications.It supports TensorFlow and is available open source.","We provide a software package that drastically simplifies, automates, and improves the evaluation of deep learning optimizers." 663,Supervised Contextual Embeddings for Transfer Learning in Natural Language Processing Tasks,"Pre-trained word embeddings are the primary method for transfer learning in several Natural Language Processing tasks.Recent works have focused on using unsupervised techniques such as language modeling to obtain these embeddings.In contrast, this work focuses on extracting representations from multiple pre-trained supervised models, which enriches word embeddings with task and domain specific knowledge.Experiments performed in cross-task, cross-domain and cross-lingual settings indicate that such supervised embeddings are helpful, especially in the low-resource setting, but the extent of gains is dependent on the nature of the task and domain.",extract contextual embeddings from off-the-shelf supervised model. Helps downstream NLP models in low-resource settings 664,Adaptive Gradient-Based Meta-Learning Methods,We build a theoretical framework for understanding practical meta-learning methods that enables the integration of sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms in order to provide within-task performance guarantees.Our approach improves upon recent analyses of parameter-transfer by enabling the task-similarity to be learned adaptively and by improving transfer-risk bounds in the setting of statistical learning-to-learn.It also leads to straightforward derivations of average-case regret bounds for efficient algorithms in settings where the task-environment changes dynamically or the tasks share a certain geometric structure.,Practical adaptive algorithms for gradient-based meta-learning with provable guarantees.
665,Sentence embedding with contrastive multi-views learning,"In this work, we propose a self-supervised method to learn sentence representations with an injection of linguistic knowledge.Multiple linguistic frameworks propose diverse sentence structures from which semantic meaning might be expressed out of compositional words operations.We aim to take advantage of this linguist diversity and learn to represent sentences by contrasting these diverse views.Formally, multiple views of the same sentence are mapped to close representations.On the contrary, views from other sentences are mapped further.By contrasting different linguistic views, we aim at building embeddings which better capture semantic and which are less sensitive to the sentence outward form.",We aim to exploit the diversity of linguistic structures to build sentence representations. 666,Coordinate-VAE: Unsupervised clustering and de-noising of peripheral nervous system data,"The peripheral nervous system represents the input/output system for the brain.Cuff electrodes implanted on the peripheral nervous system allow observation and control over this system, however, the data produced by these electrodes have a low signal-to-noise ratio and a complex signal content.In this paper, we consider the analysis of neural data recorded from the vagus nerve in animal models, and develop an unsupervised learner based on convolutional neural networks that is able to simultaneously de-noise and cluster regions of the data by signal content.",Unsupervised analysis of data recorded from the peripheral nervous system denoises and categorises signals. 667,Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier,"Adversarial attacks on convolutional neural networks have gained significant attention and there have been active research efforts on defense mechanisms.Stochastic input transformation methods have been proposed, where the idea is to recover the image from adversarial attack by random transformation, and to take the majority vote as consensus among the random samples.However, the transformation improves the accuracy on adversarial images at the expense of the accuracy on clean images.While it is intuitive that the accuracy on clean images would deteriorate, the exact mechanism in which how this occurs is unclear.In this paper, we study the distribution of softmax induced by stochastic transformations.We observe that with random transformations on the clean images, although the mass of the softmax distribution could shift to the wrong class, the resulting distribution of softmax could be used to correct the prediction.Furthermore, on the adversarial counterparts, with the image transformation, the resulting shapes of the distribution of softmax are similar to the distributions from the clean images.With these observations, we propose a method to improve existing transformation-based defenses.We train a separate lightweight distribution classifier to recognize distinct features in the distributions of softmax outputs of transformed images.Our empirical studies show that our distribution classifier, by training on distributions obtained from clean images only, outperforms majority voting for both clean and adversarial images.Our method is generic and can be integrated with existing transformation-based defenses.",We enhance existing transformation-based defenses by using a distribution classifier on the distribution of softmax obtained from transformed images. 
668,Actor-Attention-Critic for Multi-Agent Reinforcement Learning,"Reinforcement learning in multi-agent scenarios is important for real-world applications but presents challenges beyond those seen in single-agent settings.We present an actor-critic algorithm that trains decentralized policies in multi-agent settings, using centrally computed critics that share an attention mechanism which selects relevant information for each agent at every timestep.This attention mechanism enables more effective and scalable learning in complex multi-agent environments, when compared to recent approaches.Our approach is applicable not only to cooperative settings with shared rewards, but also individualized reward settings, including adversarial settings, and it makes no assumptions about the action spaces of the agents.As such, it is flexible enough to be applied to most multi-agent learning problems",We propose an approach to learn decentralized policies in multi-agent settings using attention-based critics and demonstrate promising results in environments with complex interactions. 669,Learning DNA folding patterns with Recurrent Neural Networks ,"The recent expansion of machine learning applications to molecular biology proved to have a significant contribution to our understanding of biological systems, and genome functioning in particular.Technological advances enabled the collection of large epigenetic datasets, including information about various DNA binding factors and DNA spatial structure.Several studies have confirmed the correlation between DNA binding factors and Topologically Associating Domains in DNA structure.However, the information about physical proximity represented by genomic coordinate was not yet used for the improvement of the prediction models.In this research, we focus on Machine Learning methods for prediction of folding patterns of DNA in a classical model organism Drosophila melanogaster.The paper considers linear models with four types of regularization, Gradient Boosting and Recurrent Neural Networks for the prediction of chromatin folding patterns from epigenetic marks.The bidirectional LSTM RNN model outperformed all the models and gained the best prediction scores.This demonstrates the utilization of complex models and the importance of memory of sequential DNA states for the chromatin folding.We identify informative epigenetic features that lead to the further conclusion of their biological significance.",We apply RNN to solve the biological problem of chromatin folding patterns prediction from epigenetic marks and demonstrate for the first time that utilization of memory of sequential states on DNA molecule is significant for the best performance. 
670,Data-Independent Neural Pruning via Coresets,"Previous work showed empirically that large neural networks can be significantly reduced in size while preserving their accuracy.Model compression became a central research topic, as it is crucial for deployment of neural networks on devices with limited computational and memory resources.The majority of the compression methods are based on heuristics and offer no worst-case guarantees on the trade-off between the compression rate and the approximation error for an arbitrarily new sample.We propose the first efficient, data-independent neural pruning algorithm with a provable trade-off between its compression rate and the approximation error for any future test sample.Our method is based on the coreset framework, which finds a small weighted subset of points that provably approximates the original inputs.Specifically, we approximate the output of a layer of neurons by a coreset of neurons in the previous layer and discard the rest.We apply this framework in a layer-by-layer fashion from the top to the bottom.Unlike previous works, our coreset is data independent, meaning that it provably guarantees the accuracy of the function for any input, including an adversarial one.We demonstrate the effectiveness of our method on popular network architectures.In particular, our coresets yield 90% compression of the LeNet-300-100 architecture on MNIST while improving the accuracy.","We propose an efficient, provable and data independent method for network compression via neural pruning using coresets of neurons -- a novel construction proposed in this paper." 671,Memory Augmented Control Networks,"Planning problems in partially observable environments cannot be solved directly with convolutional networks and require some form of memory.But, even memory networks with sophisticated addressing schemes are unable to learn intelligent reasoning satisfactorily due to the complexity of simultaneously learning to access memory and plan.To mitigate these challenges we propose the Memory Augmented Control Network.The network splits planning into a hierarchical process.At a lower level, it learns to plan in a locally observed space.At a higher level, it uses a collection of policies computed on locally observed spaces to learn an optimal plan in the global environment it is operating in.The performance of the network is evaluated on path planning tasks in environments in the presence of simple and complex obstacles and in addition, is tested for its ability to generalize to new environments not seen in the training set.",Memory Augmented Network to plan in partially observable environments. 
672,BERT Wears GloVes: Distilling Static Embeddings from Pretrained Contextual Representations,"Contextualized word representations such as ELMo and BERT have become the de facto starting point for incorporating pretrained representations for downstream NLP tasks.In these settings, contextual representations have largely made obsolete their static embedding predecessors such as Word2Vec and GloVe.However, static embeddings do have their advantages in that they are straightforward to understand and faster to use.Additionally, embedding analysis methods for static embeddings are far more diverse and mature than those available for their dynamic counterparts.In this work, we introduce simple methods for generating static lookup table embeddings from existing pretrained contextual representations and demonstrate they outperform Word2Vec and GloVe embeddings on a variety of word similarity and word relatedness tasks.In doing so, our results also reveal insights that may be useful for subsequent downstream tasks using our embeddings or the original contextual models.Further, we demonstrate the increased potential for analysis by applying existing approaches for estimating social bias in word embeddings.Our analysis constitutes the most comprehensive study of social bias in contextual word representations and reveals a number of inconsistencies in current techniques for quantifying social bias in word embeddings.We publicly release our code and distilled word embeddings to support reproducible research and the broader NLP community.",A procedure for distilling contextual models into static embeddings; we apply our method to 9 popular models and demonstrate clear gains in representation quality wrt Word2Vec/GloVe and improved analysis potential by thoroughly studying social bias. 673,Augmenting Supervised Learning by Meta-learning Unsupervised Local Rules,"The brain performs unsupervised learning and simultaneous supervised learning.This raises the question as to whether a hybrid of supervised and unsupervised methods will produce better learning.Inspired by the rich space of Hebbian learning rules, we set out to directly learn the unsupervised learning rule on local information that best augments a supervised signal.We present the Hebbian-augmented training algorithm for combining gradient-based learning with an unsupervised rule on pre-synaptic activity, post-synaptic activities, and current weights.We test HAT's effect on a simple problem and find consistently higher performance than supervised learning alone.This finding provides empirical evidence that unsupervised learning on synaptic activities provides a strong signal that can be used to augment gradient-based methods. We further find that the meta-learned update rule is a time-varying function; thus, it is difficult to pinpoint an interpretable Hebbian update rule that aids in training. We do find that the meta-learner eventually degenerates into a non-Hebbian rule that preserves important weights so as not to disturb the learner's convergence.",Metalearning unsupervised update rules for neural networks improves performance and potentially demonstrates how neurons in the brain learn without access to global labels.
674,Interpretable and robust blind image denoising with bias-free convolutional neural networks,"Deep convolutional networks often append additive constant terms to their convolution operations, enabling a richer repertoire of functional mappings.Biases are also used to facilitate training, by subtracting mean response over batches of training images.Recent state-of-the-art blind denoising methods seem to require these terms for their success.Here, however, we show that bias terms used in most CNNs interfere with the interpretability of these networks, do not help performance, and in fact prevent generalization of performance to noise levels not including in the training data.In particular, bias-free CNNs are locally linear, and hence amenable to direct analysis with linear-algebraic tools.These analyses provide interpretations of network functionality in terms of projection onto a union of low-dimensional subspaces, connecting the learning-based method to more traditional denoising methodology.Additionally, BF-CNNs generalize robustly, achieving near-state-of-the-art performance at noise levels well beyond the range over which they have been trained.",We show that removing constant terms from CNN architectures provides interpretability of the denoising method via linear-algebra techniques and also boosts generalization performance across noise levels. 675,Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks,"The selection of initial parameter values for gradient-based optimization of deep neural networks is one of the most impactful hyperparameter choices in deep learning systems, affecting both convergence times and model performance.Yet despite significant empirical and theoretical analysis, relatively little has been proved about the concrete effects of different initialization schemes.In this work, we analyze the effect of initialization in deep linear networks, and provide for the first time a rigorous proof that drawing the initial weights from the orthogonal group speeds up convergence relative to the standard Gaussian initialization with iid weights.We show that for deep networks, the width needed for efficient convergence to a global minimum with orthogonal initializations is independent of the depth, whereas the width needed for efficient convergence with Gaussian initializations scales linearly in the depth.Our results demonstrate how the benefits of a good initialization can persist throughout learning, suggesting an explanation for the recent empirical successes found by initializing very deep non-linear networks according to the principle of dynamical isometry.","We provide for the first time a rigorous proof that orthogonal initialization speeds up convergence relative to Gaussian initialization, for deep linear networks." 676,Learning to Learn with Conditional Class Dependencies,"Neural networks can learn to extract statistical properties from data, but they seldom make use of structured information from the label space to help representation learning.Although some label structure can implicitly be obtained when training on huge amounts of data, in a few-shot learning context where little data is available, making explicit use of the label structure can inform the model to reshape the representation space to reflect a global sense of class dependencies. 
We propose a meta-learning framework, Conditional class-Aware Meta-Learning, that conditionally transforms feature representations based on a metric space that is trained to capture inter-class dependencies.This enables a conditional modulation of the feature representations of the base-learner to impose regularities informed by the label space.Experiments show that the conditional transformation in CAML leads to more disentangled representations and achieves competitive results on the miniImageNet benchmark.",CAML is an instance of MAML with conditional class dependencies. 677,Loss Functions for Multiset Prediction,"We study the problem of multiset prediction.The goal of multiset prediction is to train a predictor that maps an input to a multiset consisting of multiple items.Unlike existing problems in supervised learning, such as classification, ranking and sequence generation, there is no known order among items in a target multiset, and each item in the multiset may appear more than once, making this problem extremely challenging.In this paper, we propose a novel multiset loss function by viewing this problem from the perspective of sequential decision making.The proposed multiset loss function is empirically evaluated on two families of datasets, one synthetic and the other real, with varying levels of difficulty, against various baseline loss functions including reinforcement learning, sequence, and aggregated distribution matching loss functions.The experiments reveal the effectiveness of the proposed loss function over the others.","We study the problem of multiset prediction and propose a novel multiset loss function, providing analysis and empirical evidence that demonstrates its effectiveness." 678,A theoretical framework for deep and locally connected ReLU network,"Understanding theoretical properties of deep and locally connected nonlinear networks, such as deep convolutional neural networks, is still a hard problem despite their empirical success.In this paper, we propose a novel theoretical framework for such networks with ReLU nonlinearity.The framework bridges data distribution with gradient descent rules, favors disentangled representations and is compatible with common regularization techniques such as Batch Norm, after a novel discovery of its projection nature.The framework is built upon the teacher-student setting, by projecting the student's forward/backward pass onto the teacher's computational graph.We do not impose unrealistic assumptions.Our framework could help facilitate theoretical analysis of many practical issues, e.g.
disentangled representations in deep networks.",This paper presents a theoretical framework that models data distribution explicitly for deep and locally connected ReLU networks 679,Evolving intrinsic motivations for altruistic behavior,"Multi-agent cooperation is an important feature of the natural world.Many tasks involve individual incentives that are misaligned with the common good, yet a wide range of organisms from bacteria to insects and humans are able to overcome their differences and collaborate.Therefore, the emergence of cooperative behavior amongst self-interested individuals is an important question for the fields of multi-agent reinforcement learning and evolutionary theory.Here, we study a particular class of multi-agent problems called intertemporal social dilemmas, where the conflict between the individual and the group is particularly sharp.By combining MARL with appropriately structured natural selection, we demonstrate that individual inductive biases for cooperation can be learned in a model-free way.To achieve this, we introduce an innovative modular architecture for deep reinforcement learning agents which supports multi-level selection.We present results in two challenging environments, and interpret these in the context of cultural and ecological evolution.","We introduce a biologically-inspired modular evolutionary algorithm in which deep RL agents learn to cooperate in a difficult multi-agent social game, which could help to explain the evolution of altruism." 680,Neural Networks with Structural Resistance to Adversarial Attacks,"In adversarial attacks on machine-learning classifiers, small perturbations are added to input that is correctly classified.The perturbations yield adversarial examples, which are virtually indistinguishable from the unperturbed input, and yet are misclassified.In standard neural networks used for deep learning, attackers can craft adversarial examples from most input to cause a misclassification of their choice.We introduce a new type of network units, called RBFI units, whose non-linear structure makes them inherently resistant to adversarial attacks.On permutation-invariant MNIST, in the absence of adversarial attacks, networks using RBFI units match the performance of networks using sigmoid units, and are slightly below the accuracy of networks with ReLU units.When subjected to adversarial attacks based on projected gradient descent or fast gradient-sign methods, networks with RBFI units retain accuracies above 75%, while networks with ReLU or sigmoid units see their accuracies reduced to below 1%.Further, RBFI networks trained on regular input either exceed or closely match the accuracy of sigmoid and ReLU networks trained with the help of adversarial examples.The non-linear structure of RBFI units makes them difficult to train using standard gradient descent.We show that networks of RBFI units can be efficiently trained to high accuracies using pseudogradients, computed using functions especially crafted to facilitate learning instead of their true derivatives.","We introduce a type of neural network that is structurally resistant to adversarial attacks, even when trained on unaugmented training sets. The resistance is due to the stability of network units wrt input perturbations."
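A toy sketch related to entry 675 above: initializing a deep linear network with orthogonal weight matrices versus iid Gaussian weights. It only sets up and contrasts the two initializations (orthogonal matrices preserve norms exactly through depth); it is not the paper's proof or experimental protocol, and the depth/width values are illustrative.

```python
# Deep linear network with either orthogonal or iid Gaussian initialization.
import torch
import torch.nn as nn

def deep_linear(depth: int, width: int, init: str) -> nn.Sequential:
    layers = []
    for _ in range(depth):
        layer = nn.Linear(width, width, bias=False)
        if init == "orthogonal":
            nn.init.orthogonal_(layer.weight)
        else:  # iid Gaussian, scaled so activations neither explode nor vanish on average
            nn.init.normal_(layer.weight, std=width ** -0.5)
        layers.append(layer)
    return nn.Sequential(*layers)

x = torch.randn(8, 64)
for init in ("orthogonal", "gaussian"):
    net = deep_linear(depth=50, width=64, init=init)
    with torch.no_grad():
        y = net(x)
    print(init, float(y.norm()) / float(x.norm()))  # ratio stays 1.0 for orthogonal
```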
681,Robustness May Be at Odds with Accuracy,"We show that there exists an inherent tension between the goal of adversarial robustness and that of standard generalization.Specifically, training robust models may not only be more resource-consuming, but also lead to a reduction of standard accuracy.We demonstrate that this trade-off between the standard accuracy of a model and its robustness to adversarial perturbations provably exists even in a fairly simple and natural setting.These findings also corroborate a similar phenomenon observed in practice.Further, we argue that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers.These differences, in particular, seem to result in unexpected benefits: the features learned by robust models tend to align better with salient data characteristics and human perception.","We show that adversarial robustness might come at the cost of standard classification performance, but also yields unexpected benefits." 682,mixup: Beyond Empirical Risk Minimization,"Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples.In this work, we propose mixup, a simple learning principle to alleviate these issues.In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.",Training on convex combinations between random training examples and their labels improves generalization in deep neural networks 683,Spike Sorting using the Neural Clustering Process,"We present a novel approach to spike sorting for high-density multielectrode probes using the Neural Clustering Process, a recently introduced neural architecture that performs scalable amortized approximate Bayesian inference for efficient probabilistic clustering.To optimally encode spike waveforms for clustering, we extended NCP by adding a convolutional spike encoder, which is learned end-to-end with the NCP network.Trained purely on labeled synthetic spikes from a simple generative model, the NCP spike sorting model shows promising performance for clustering multi-channel spike waveforms.The model provides higher clustering quality than an alternative Bayesian algorithm, finds more spike templates with clear receptive fields on real data and recovers more ground truth neurons on hybrid test data compared to a recent spike sorting algorithm.Furthermore, NCP is able to handle the clustering uncertainty of ambiguous small spikes by GPU-parallelized posterior sampling.The source code is publicly available.","We present a novel approach to spike sorting using the Neural Clustering Process (NCP), a recently introduced neural architecture that performs scalable amortized approximate Bayesian inference for efficient probabilistic clustering." 
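A minimal PyTorch sketch of the training principle described in entry 682 (mixup): form convex combinations of pairs of examples and of their labels, then train on the mixed batch. The Beta(alpha, alpha) sampling follows the usual description; alpha and the stand-in classifier below are illustrative choices.

```python
# mixup: train on convex combinations of example pairs and their labels.
import torch
import numpy as np

def mixup_batch(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.2):
    """Return mixed inputs plus the pair of targets and the mixing weight."""
    lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
    index = torch.randperm(x.size(0))
    mixed_x = lam * x + (1.0 - lam) * x[index]
    return mixed_x, y, y[index], lam

def mixup_loss(criterion, logits, y_a, y_b, lam):
    return lam * criterion(logits, y_a) + (1.0 - lam) * criterion(logits, y_b)

# usage with any classifier and cross-entropy loss
criterion = torch.nn.CrossEntropyLoss()
model = torch.nn.Linear(32, 10)                     # stand-in classifier
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
mx, y_a, y_b, lam = mixup_batch(x, y)
loss = mixup_loss(criterion, model(mx), y_a, y_b, lam)
loss.backward()
```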
684,"VARIATIONAL SGD: DROPOUT , GENERALIZATION AND CRITICAL POINT AT THE END OF CONVEXITY","The goal of the paper is to propose an algorithm for learning the most generalizable solution from given training data.It is shown that Bayesian approach leads to a solution that dependent on statistics of training data and not on particularsamples.The solution is stable under perturbations of training data because it is defined by an integral contribution of multiple maxima of the likelihood and not by a single global maximum.Specifically, the Bayesian probability distributionof parameters of a probabilistic model given by a neural network is estimated via recurrent variational approximations.Derived recurrent update rules correspond to SGD-type rules for finding a minimum of an effective loss that is an average of an original negative log-likelihood over the Gaussian distributions of weights, which makes it a function of means and variances.The effective loss is convex for large variances and non-convex in the limit of small variances.Among stationary solutions of the update rules there are trivial solutions with zero variances at local minima of the original loss and a single non-trivial solution with finite variances that is a critical point at the end of convexity of the effective lossin the mean-variance space.At the critical point both first- and second-order gradients of the effective loss w.r.t. means are zero.The empirical study confirms that the critical point represents the most generalizable solution.While the location ofthe critical point in the weight space depends on specifics of the used probabilistic model some properties at the critical point are universal and model independent.",Proposed method for finding the most generalizable solution that is stable w.r.t. perturbations of trainig data. 
685,Optimal Transport for Distribution Adaptation in Bayesian Hilbert Maps,"Parameters are one of the most critical components of machine learning models.As datasets and learning domains change, it is often necessary and time-consuming to re-learn entire models.Rather than re-learning the parameters from scratch, replacing learning with optimization, we propose a framework building upon the theory of optimal transport to adapt model parameters by discovering correspondences between models and data, significantly amortizing the training cost.We demonstrate our idea on the challenging problem of creating probabilistic spatial representations for autonomous robots.Although recent mapping techniques have facilitated robust occupancy mapping, learning all spatially-diverse parameters in such approximate Bayesian models demands considerable computational time, discouraging their use in real-world robotic mapping.Considering the fact that the geometric features a robot would observe with its sensors are similar across various environments, in this paper, we demonstrate how to re-use parameters and hyperparameters learned in different domains.This adaptation is computationally more efficient than variational inference and Monte Carlo techniques.A series of experiments conducted in realistic settings verified the possibility of transferring thousands of such parameters with a negligible time and memory cost, enabling large-scale mapping in urban environments.",We present a method of adapting hyperparameters of probabilistic models using optimal transport with applications in robotics 686,Learning an off-policy predictive state representation for deep reinforcement learning for vision-based steering in autonomous driving,"An algorithm is introduced for learning a predictive state representation with off-policy temporal difference learning that is then used to learn to steer a vehicle with reinforcement learning. There are three components being learned simultaneously: the off-policy predictions as a compact representation of state, the behavior policy distribution for estimating the off-policy predictions, and the deterministic policy gradient for learning to act. A behavior policy discriminator is learned and used for estimating the importance sampling ratios needed to learn the predictive representation off-policy with general value functions. A linear deterministic policy gradient method is used to train the agent with only the predictive representations while the predictions are being learned. All three components are combined, demonstrated and evaluated on the problem of steering the vehicle from images in the TORCS racing simulator environment.Steering from only images is a challenging problem where evaluation is completed on a held-out set of tracks that were never seen during training in order to measure the generalization of the predictions and controller. Experiments show the proposed method is able to steer smoothly and navigate many but not all of the tracks available in TORCS with performance that exceeds DDPG using only images as input and approaches the performance of an ideal non-vision based kinematics model.",An algorithm to learn a predictive state representation with general value functions and off-policy learning is applied to the problem of vision-based steering in autonomous driving.
687,End-to-end learning of energy-based representations for irregularly-sampled signals and images,"For numerous domains, including for instance earth observation, medical imaging, astrophysics,..., available image and signal datasets often involve irregular space-time sampling patterns and large missing data rates.These sampling properties are a critical obstacle to applying state-of-the-art learning-based schemes so as to fully benefit from the available large-scale observations and reach breakthroughs in the reconstruction and identification of processes of interest.In this paper, we address the end-to-end learning of representations of signals, images and image sequences from irregularly-sampled data, when the training data involves missing data.From an analogy to the Bayesian formulation, we consider energy-based representations.Two energy forms are investigated: one derived from auto-encoders and one relating to Gibbs energies.The learning stage of these energy-based representations involves a joint interpolation issue, which resorts to solving an energy minimization problem under observation constraints.Using a neural-network-based implementation of the considered energy forms, we can state an end-to-end learning scheme from irregularly-sampled data.We demonstrate the relevance of the proposed representations for different case-studies: namely, multivariate time series, 2D images and image sequences.",We address the end-to-end learning of energy-based representations for signal and image observation datasets with irregular sampling patterns. 688,Side-Tuning: Network Adaptation via Additive Side Networks,"When training a neural network for a desired task, one may prefer to adapt a pretrained network rather than start with a randomly initialized one -- due to lacking enough training data, performing lifelong learning where the system has to learn a new task while being previously trained for other tasks, or wishing to encode priors in the network via preset weights.The most commonly employed approaches for network adaptation are fine-tuning and using the pre-trained network as a fixed feature extractor, among others.In this paper we propose a straightforward alternative: Side-Tuning.Side-tuning adapts a pretrained network by training a lightweight ""side"" network that is fused with the pre-trained network using a simple additive process.This simple method works as well as or better than existing solutions while it resolves some of the basic issues with fine-tuning, fixed features, and several other common baselines.In particular, side-tuning is less prone to overfitting when little training data is available, yields better results than using a fixed feature extractor, and doesn't suffer from catastrophic forgetting in lifelong learning.We demonstrate the performance of side-tuning under a diverse set of scenarios, including lifelong learning, reinforcement learning, imitation learning, NLP question-answering, and single-task transfer learning, with consistently promising results.","Side-tuning adapts a pre-trained network by training a lightweight ""side"" network that is fused with the (unchanged) pre-trained network using a simple additive process."
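A minimal sketch of the additive fusion in side-tuning (entry 688): a frozen pretrained network plus a lightweight trainable side network, combined by a learned blend. The curriculum on the blending weight used in the paper is omitted; here alpha is simply a trainable scalar passed through a sigmoid, and the two stand-in networks are illustrative.

```python
# Side-tuning style fusion: frozen base network + small trainable side network.
import torch
import torch.nn as nn

class SideTuned(nn.Module):
    def __init__(self, base: nn.Module, side: nn.Module):
        super().__init__()
        self.base = base
        for p in self.base.parameters():           # the pretrained network stays fixed
            p.requires_grad = False
        self.side = side                           # small, trainable side network
        self.alpha = nn.Parameter(torch.zeros(1))  # blending weight (sigmoid -> 0.5 at init)

    def forward(self, x):
        a = torch.sigmoid(self.alpha)
        return a * self.base(x) + (1.0 - a) * self.side(x)

base = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))  # "pretrained" stand-in
side = nn.Linear(32, 10)                                               # lightweight side network
model = SideTuned(base, side)
out = model(torch.randn(4, 32))
print(out.shape)  # torch.Size([4, 10])
```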
689,Generating Differentially Private Datasets Using GANs,"In this paper, we present a technique for generating artificial datasets that retain statistical properties of the real data while providing differential privacy guarantees with respect to this data.We include a Gaussian noise layer in the discriminator of a generative adversarial network to make the output and the gradients differentially private with respect to the training data, and then use the generator component to synthesise a privacy-preserving artificial dataset.Our experiments show that under a reasonably small privacy budget we are able to generate data of high quality and successfully train machine learning models on this artificial data.",Train GANs with differential privacy to generate artificial privacy-preserving datasets. 690,Explaining AlphaGo: Interpreting Contextual Effects in Neural Networks,"This paper presents two methods to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network.Unlike convolutional studies that visualize image appearances corresponding to the network output or a neural activation from a global perspective, our research aims to clarify how a certain input unit collaborates with other units to constitute inference patterns of the neural network and thus contribute to the network output.The analysis of local contextual effects w.r.t. certain input units is of special value in real applications.In particular, we used our methods to explain the gaming strategy of the AlphaGo Zero model in experiments, and our method successfully disentangled the rationale of each move during the game.",This paper presents methods to disentangle and interpret contextual effects that are encoded in a deep neural network. 691,Recovering the Lowest Layer of Deep Networks with High Threshold Activations,"Giving provable guarantees for learning neural networks is a core challenge of machine learning theory.Most prior work gives parameter recovery guarantees for one hidden layer networks, however, the networks used in practice have multiple non-linear layers.In this work, we show how we can strengthen such results to deeper networks -- we address the problem of uncovering the lowest layer in a deep neural network under the assumption that the lowest layer uses a high threshold before applying the activation, the upper network can be modeled as a well-behaved polynomial and the input distribution is Gaussian.","We provably recover the lowest layer in a deep neural network assuming that the lowest layer uses a ""high threshold"" activation and the above network is a ""well-behaved"" polynomial."
692,Federated Optimization for Heterogeneous Networks,"Federated learning involves training and effectively combining machine learning models from distributed partitions of data on edge devices, and can be naturally viewed as a multi-task learning problem.While Federated Averaging is the leading optimization method for training non-convex models in this setting, its behavior is not well understood in realistic federated settings when the devices/tasks are statistically heterogeneous, i.e., where each device collects data in a non-identical fashion.In this work, we introduce a framework, called FedProx, to tackle statistical heterogeneity.FedProx encompasses FedAvg as a special case.We provide convergence guarantees for FedProx through a device dissimilarity assumption.Our empirical evaluation validates our theoretical analysis and demonstrates the improved robustness and stability of FedProx for learning in heterogeneous networks.","We introduce FedProx, a framework to tackle statistical heterogeneity in federated settings with convergence guarantees and improved robustness and stability." 693,Co-evolution of language and agents in referential games,"Referential games offer a grounded learning environment for neural agents which accounts for the fact that language is functionally used to communicate.However, they do not take into account a second constraint considered to be fundamental for the shape of human language: that it must be learnable by new language learners and thus has to overcome a transmission bottleneck.In this work, we insert such a bottleneck in a referential game, by introducing a changing population of agents in which new agents learn by playing with more experienced agents.We show that mere cultural transmission results in a substantial improvement in language efficiency and communicative success, measured in convergence speed, degree of structure in the emerged languages and within-population consistency of the language.However, as our core contribution, we show that the optimal situation is to co-evolve language and agents.When we allow the agent population to evolve through genotypical evolution, we achieve across the board improvements on all considered metrics.These results stress that for language emergence studies cultural evolution is important, but also the suitability of the architecture itself should be considered.","We enable both the cultural evolution of language and the genetic evolution of agents in a referential game, using a new Language Transmission Engine."
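A minimal sketch of the FedProx local objective from entry 692: each device minimizes its local loss plus a proximal term (mu/2)||w - w_global||^2 that keeps local updates close to the current global model. Server-side aggregation and the paper's convergence analysis are not shown; the model, data, and hyperparameters below are illustrative.

```python
# FedProx-style local update: local loss + (mu/2) * ||w - w_global||^2.
import torch
import torch.nn as nn

def local_update(model: nn.Module, global_params, data_loader, mu: float = 0.01,
                 lr: float = 0.1, epochs: int = 1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss = criterion(model(x), y)
            prox = sum(((p - g.detach()) ** 2).sum()
                       for p, g in zip(model.parameters(), global_params))
            (loss + 0.5 * mu * prox).backward()
            opt.step()

# toy usage: one device, random data
model = nn.Linear(20, 5)
global_params = [p.clone() for p in model.parameters()]
loader = [(torch.randn(8, 20), torch.randint(0, 5, (8,))) for _ in range(3)]
local_update(model, global_params, loader)
```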
694,Parallel Recurrent Data Augmentation for GAN training with Limited and Diverse Data,"The need for large amounts of training image data with clearly defined features is a major obstacle to applying generative adversarial networks to image generation where training data is limited but diverse, since insufficient latent feature representation in the already scarce data often leads to instability and mode collapse during GAN training.To overcome the hurdle of limited data when applying GANs to such datasets, we propose in this paper the strategy of Parallel Recurrent Data Augmentation, where the GAN model progressively enriches its training set with sample images constructed from GANs trained in parallel at consecutive training epochs.Experiments on a variety of small yet diverse datasets demonstrate that our method, with few model-specific considerations, produces images of better quality as compared to the images generated without such a strategy.The source code and generated images of this paper will be made public after review.","We introduce a novel, simple, and efficient data augmentation method that boosts the performance of existing GANs when training data is limited and diverse." 695,The Virtual Patch Clamp: Imputing C. elegans Membrane Potentials from Calcium Imaging,"We develop a stochastic whole-brain and body simulator of the nematode roundworm Caenorhabditis elegans and show that it is sufficiently regularizing to allow imputation of latent membrane potentials from partial calcium fluorescence imaging observations.This is the first attempt we know of to complete the circle, where an anatomically grounded whole-connectome simulator is used to impute a time-varying brain state at single-cell fidelity from covariates that are measurable in practice.Using state of the art Bayesian machine learning methods to condition on readily obtainable data, our method paves the way for neuroscientists to recover interpretable connectome-wide state representations, automatically estimate physiologically relevant parameter values from data, and perform simulations investigating intelligent lifeforms in silico.",We develop a whole-connectome and body simulator for C. elegans and demonstrate joint state-space and parameter inference in the simulator.
696,Capsule Graph Neural Network,"The high-quality node embeddings learned from the Graph Neural Networks have been applied to a wide range of node-based applications and some of them have achieved state-of-the-art performance.However, when applying node embeddings learned from GNNs to generate graph embeddings, the scalar node representation may not suffice to preserve the node/graph properties efficiently, resulting in sub-optimal graph embeddings.Inspired by the Capsule Neural Network, we propose the Capsule Graph Neural Network, which adopts the concept of capsules to address the weakness in existing GNN-based graph embedding algorithms.By extracting node features in the form of capsules, a routing mechanism can be utilized to capture important information at the graph level.As a result, our model generates multiple embeddings for each graph to capture graph properties from different aspects.The attention module incorporated in CapsGNN is used to tackle graphs with various sizes, which also enables the model to focus on critical parts of the graphs.Our extensive evaluations with 10 graph-structured datasets demonstrate that CapsGNN has a powerful mechanism that operates to capture macroscopic properties of the whole graph in a data-driven manner.It outperforms other SOTA techniques on several graph classification tasks, by virtue of the new instrument.","Inspired by CapsNet, we propose a novel architecture for graph embeddings on the basis of node features extracted from GNN." 697,Generative Restricted Kernel Machines,"We introduce a novel framework for generative models based on Restricted Kernel Machines with multi-view generation and uncorrelated feature learning capabilities, called Gen-RKM.To incorporate multi-view generation, this mechanism uses a shared representation of data from various views.The mechanism is flexible enough to incorporate kernel-based, neural-network-based and convolution-based models within the same setting.To update the parameters of the network, we propose a novel training procedure which jointly learns the features and shared representation.Experiments demonstrate the potential of the framework through qualitative evaluation of generated samples.",Gen-RKM: a novel framework for generative models using Restricted Kernel Machines with multi-view generation and uncorrelated feature learning. 698,Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling,"Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo.This paper contributes the first large-scale systematic study comparing different pretraining tasks in this context, both as complements to language modeling and as potential alternatives.The primary results of the study support the use of language modeling as a pretraining task and set a new state of the art among comparable models using multitask learning with language models.However, a closer look at these results reveals worryingly strong baselines and strikingly varied results across target tasks, suggesting that the widely-used paradigm of pretraining and freezing sentence encoders may not be an ideal platform for further work.","We compare many tasks and task combinations for pretraining sentence-level BiLSTMs for NLP tasks. Language modeling is the best single pretraining task, but simple baselines also do well."
699,Bandlimiting Neural Networks Against Adversarial Attacks,"In this paper, we study the adversarial attack and defence problem in deep learning from the perspective of Fourier analysis.We first explicitly compute the Fourier transform of deep ReLU neural networks and show that there exist decaying but non-zero high frequency components in the Fourier spectrum of neural networks.We then demonstrate that the vulnerability of neural networks towards adversarial samples can be attributed to these insignificant but non-zero high frequency components.Based on this analysis, we propose to use a simple post-averaging technique to smooth out these high frequency components to improve the robustness of neural networks against adversarial attacks.Experimental results on the ImageNet and the CIFAR-10 datasets have shown that our proposed method is universally effective in defending against many existing adversarial attack methods proposed in the literature, including FGSM, PGD, DeepFool and C&W attacks.Our post-averaging method is simple since it does not require any re-training, and meanwhile it can successfully defend against over 80-96% of the adversarial samples generated by these methods without introducing significant performance degradation on the original clean images.","An insight into the reason of adversarial vulnerability, an effective defense method against adversarial attacks." 700,REFINING MONTE CARLO TREE SEARCH AGENTS BY MONTE CARLO TREE SEARCH,"Reinforcement learning methods that continuously learn neural networks by episode generation with game tree search have been successful in two-person complete information deterministic games such as chess, shogi, and Go.However, there are only reports of practical cases, and there is little evidence to guarantee the stability and the final performance of the learning process.In this research, we focus on the coordination of episode generation.By regarding the entire system as a game tree search, the new method can handle the trade-off between exploitation and exploration during episode generation.The experiments with a small problem showed that it had robust performance compared to the existing method, Alpha Zero.",Apply Monte Carlo Tree Search to episode generation in Alpha Zero 701,Geom-GCN: Geometric Graph Convolutional Networks,"Message-passing neural networks have been successfully applied in a wide variety of applications in the real world.However, two fundamental weaknesses of MPNNs' aggregators limit their ability to represent graph-structured data: losing the structural information of nodes in neighborhoods and lacking the ability to capture long-range dependencies in disassortative graphs.Few studies have noticed the weaknesses from different perspectives.From observations on classical neural networks and network geometry, we propose a novel geometric aggregation scheme for graph neural networks to overcome the two weaknesses.
The basic idea behind it is that aggregation on a graph can benefit from a continuous space underlying the graph.The proposed aggregation scheme is permutation-invariant and consists of three modules, node embedding, structural neighborhood, and bi-level aggregation.We also present an implementation of the scheme in graph convolutional networks, termed Geom-GCN, to perform transductive learning on graphs.Experimental results show the proposed Geom-GCN achieved state-of-the-art performance on a wide range of open datasets of graphs.","For graph neural networks, the aggregation on a graph can benefit from a continuous space underlying the graph." 702,Code Synthesis with Priority Queue Training,"We consider the task of program synthesis in the presence of a reward function over the output of programs, where the goal is to find programs with maximal rewards.We introduce a novel iterative optimization scheme, where we train an RNN on a dataset of K best programs from a priority queue of the generated programs so far.Then, we synthesize new programs and add them to the priority queue by sampling from the RNN.We benchmark our algorithm called priority queue training against genetic algorithm and reinforcement learning baselines on a simple but expressive Turing complete programming language called BF.Our experimental results show that our deceptively simple PQT algorithm significantly outperforms the baselines.By adding a program length penalty to the reward function, we are able to synthesize short, human readable programs.",We use a simple search algorithm involving an RNN and priority queue to find solutions to coding tasks. 703,Graph Wavelet Neural Network,"We present graph wavelet neural network, a novel graph convolutional neural network, leveraging graph wavelet transform to address the shortcomings of previous spectral graph CNN methods that depend on graph Fourier transform.Different from graph Fourier transform, graph wavelet transform can be obtained via a fast algorithm without requiring matrix eigendecomposition with high computational cost.Moreover, graph wavelets are sparse and localized in vertex domain, offering high efficiency and good interpretability for graph convolution.The proposed GWNN significantly outperforms previous spectral graph CNNs in the task of graph-based semi-supervised classification on three benchmark datasets: Cora, Citeseer and Pubmed.","We present graph wavelet neural network (GWNN), a novel graph convolutional neural network (CNN), leveraging graph wavelet transform to address the shortcoming of previous spectral graph CNN methods that depend on graph Fourier transform."
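A schematic sketch of the priority queue training loop from entry 702: keep a priority queue of the best programs found so far, repeatedly train the generator on the top-K entries, then sample new programs and push them into the queue. `sample_program` and `train_on` are hypothetical stand-ins for the RNN sampler and its supervised update; `reward` is a toy scoring function, not the paper's BF evaluation.

```python
# Priority queue training (PQT), schematic version with stubbed sampler and trainer.
import heapq
import random

def reward(program: str) -> float:
    return -abs(len(program) - 10)            # toy reward: prefer length-10 strings

def sample_program() -> str:
    return "".join(random.choice("+-<>[].,") for _ in range(random.randint(1, 20)))

def train_on(top_k_programs):                 # placeholder for one RNN gradient step
    pass

queue = []                                    # min-heap of (reward, program)
MAX_QUEUE, K = 50, 10
for step in range(200):
    train_on([p for _, p in heapq.nlargest(K, queue)])  # fit the sampler to the best programs
    prog = sample_program()                             # in PQT, sampled from the RNN itself
    heapq.heappush(queue, (reward(prog), prog))
    if len(queue) > MAX_QUEUE:
        heapq.heappop(queue)                            # drop the worst program
print(max(queue) if queue else None)
```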
704,How Does Learning Rate Decay Help Modern Neural Networks?,"Learning rate decay (lrDecay) is a technique for training modern neural networks.It starts with a large learning rate and then decays it multiple times.It is empirically observed to help both optimization and generalization.Common beliefs in how lrDecay works come from the optimization analysis of Gradient Descent: 1) an initially large learning rate accelerates training or helps the network escape spurious local minima; 2) decaying the learning rate helps the network converge to a local minimum and avoid oscillation.Despite the popularity of these common beliefs, experiments suggest that they are insufficient in explaining the general effectiveness of lrDecay in training modern neural networks that are deep, wide, and nonconvex.We provide another novel explanation: an initially large learning rate suppresses the network from memorizing noisy data while decaying the learning rate improves the learning of complex patterns.The proposed explanation is validated on a carefully-constructed dataset with tractable pattern complexity.And its implication, that additional patterns learned in later stages of lrDecay are more complex and thus less transferable, is justified in real-world datasets.We believe that this alternative explanation will shed light on the design of better training strategies for modern neural networks.",We provide another novel explanation of learning rate decay: an initially large learning rate suppresses the network from memorizing noisy data while decaying the learning rate improves the learning of complex patterns. 705,UCB EXPLORATION VIA Q-ENSEMBLES,"We show how an ensemble of Q-functions can be leveraged for more effective exploration in deep reinforcement learning.We build on well-established algorithms from the bandit setting, and adapt them to the Q-learning setting.We propose an exploration strategy based on upper-confidence bounds.Our experiments show significant gains on the Atari benchmark.","Adapting UCB exploration to ensemble Q-learning improves over prior methods such as Double DQN, A3C+ on Atari benchmark" 706,Neural Probabilistic Motor Primitives for Humanoid Control,"We focus on the problem of learning a single motor module that can flexibly express a range of behaviors for the control of high-dimensional physically simulated humanoids.To do this, we propose a motor architecture that has the general structure of an inverse model with a latent-variable bottleneck.We show that it is possible to train this model entirely offline to compress thousands of expert policies and learn a motor primitive embedding space.The trained neural probabilistic motor primitive system can perform one-shot imitation of whole-body humanoid behaviors, robustly mimicking unseen trajectories.Additionally, we demonstrate that it is also straightforward to train controllers to reuse the learned motor primitive space to solve tasks, and the resulting movements are relatively naturalistic.To support the training of our model, we compare two approaches for offline policy cloning, including an experience-efficient method which we call linear feedback policy cloning.We encourage readers to view a supplementary video summarizing our results.",Neural Probabilistic Motor Primitives compress motion capture tracking policies into one flexible model capable of one-shot imitation and reuse as a low-level controller.
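A minimal sketch of UCB-style action selection from a Q-ensemble, as described in entry 705: act greedily with respect to the ensemble's mean Q-value plus a bonus proportional to its standard deviation. The ensemble architecture and the bonus coefficient are illustrative choices, not the paper's exact configuration.

```python
# UCB exploration from an ensemble of Q-networks: argmax of mean + lambda * std.
import torch
import torch.nn as nn

class QEnsemble(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, n_heads: int = 5):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
            for _ in range(n_heads)
        ])

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return torch.stack([head(obs) for head in self.heads])   # (n_heads, batch, n_actions)

def ucb_action(ensemble: QEnsemble, obs: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    q = ensemble(obs)                          # (n_heads, batch, n_actions)
    mean, std = q.mean(dim=0), q.std(dim=0)
    return (mean + lam * std).argmax(dim=-1)   # optimistic action per batch element

ens = QEnsemble(obs_dim=8, n_actions=4)
print(ucb_action(ens, torch.randn(2, 8)))      # tensor of two chosen actions
```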
707,PAGANDA: An Adaptive Task-Independent Automatic Data Augmentation,"Data augmentation is a useful technique to enlarge the size of the training set and prevent overfitting for different machine learning tasks when training data is scarce.However, current data augmentation techniques rely heavily on human design and domain knowledge, and existing automated approaches are yet to fully exploit the latent features in the training dataset.In this paper we propose PAGANDA, where the training set adaptively enriches itself with sample images automatically constructed from Generative Adversarial Networks trained in parallel.We demonstrate by experiments that our data augmentation strategy, with few model-specific considerations, can be easily adapted to cross-domain deep learning/machine learning tasks such as image classification and image inpainting, while significantly improving model performance in both tasks.Our source code and experimental details are available at \\url.",We present an automated adaptive data augmentation method that works for multiple different tasks. 708,Regularizing Predictions via Class-wise Self-knowledge Distillation,"Deep neural networks with millions of parameters may suffer from poor generalization due to overfitting.To mitigate the issue, we propose a new regularization method that penalizes the predictive distribution between similar samples.In particular, we distill the predictive distribution between different samples of the same label and augmented samples of the same source during training.In other words, we regularize the dark knowledge of a single network, i.e., a self-knowledge distillation technique, to force it to output more meaningful predictions. We demonstrate the effectiveness of the proposed method via experiments on various image classification tasks: it improves not only the generalization ability, but also the calibration accuracy of modern neural networks.",We propose a new regularization technique based on knowledge distillation. 709,Regularizing and Optimizing LSTM Language Models,"In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models.We propose the weight-dropped LSTM, which uses DropConnect on hidden-to-hidden weights, as a form of recurrent regularization.Further, we introduce NT-ASGD, a non-monotonically triggered variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a NT condition as opposed to being tuned by the user.Using these and other regularization strategies, our ASGD Weight-Dropped LSTM achieves state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2.In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.We also explore the viability of the proposed regularization and optimization strategies in the context of the quasi-recurrent neural network and demonstrate comparable performance to the AWD-LSTM counterpart.The code for reproducing the results is open sourced and is available at https://github.com/salesforce/awd-lstm-lm.",Effective regularization and optimization strategies for LSTM-based language models achieve SOTA on PTB and WT2.
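A compact sketch of the weight-dropped LSTM idea from entry 709: apply DropConnect (dropout on weights rather than activations) to the hidden-to-hidden matrix of an LSTM cell, resampling the mask each forward pass. This is a hand-rolled cell written for clarity, not the AWD-LSTM implementation; initialization and the dropout rate are illustrative.

```python
# LSTM cell with DropConnect applied to the recurrent (hidden-to-hidden) weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropLSTMCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, weight_dropout: float = 0.5):
        super().__init__()
        self.weight_dropout = weight_dropout
        self.w_ih = nn.Parameter(torch.randn(4 * hidden_size, input_size) * 0.1)
        self.w_hh = nn.Parameter(torch.randn(4 * hidden_size, hidden_size) * 0.1)
        self.bias = nn.Parameter(torch.zeros(4 * hidden_size))

    def forward(self, x, state):
        h, c = state
        # DropConnect: zero out individual recurrent weights (only during training)
        w_hh = F.dropout(self.w_hh, p=self.weight_dropout, training=self.training)
        gates = F.linear(x, self.w_ih) + F.linear(h, w_hh) + self.bias
        i, f, g, o = gates.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

cell = WeightDropLSTMCell(input_size=10, hidden_size=20)
h = c = torch.zeros(3, 20)
for t in range(5):                            # unroll over a short sequence
    h, c = cell(torch.randn(3, 10), (h, c))
print(h.shape)  # torch.Size([3, 20])
```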
710,Are there any 'object detectors' in the hidden layers of CNNs trained to identify objects or scenes?,"Various methods of measuring unit selectivity have been developed with the aim of better understanding how neural networks work. But the different measures provide divergent estimates of selectivity, and this has led to different conclusions regarding the conditions in which selective object representations are learned and the functional relevance of these representations.In an attempt to better characterize object selectivity, we undertake a comparison of various selectivity measures on a large set of units in AlexNet, including localist selectivity, precision, class-conditional mean activity selectivity, network dissection, the human interpretation of activation maximization images, and standard signal-detection measures. We find that the different measures provide different estimates of object selectivity, with precision and CCMAS measures providing misleadingly high estimates.Indeed, the most selective units had a poor hit-rate or a high false-alarm rate in object classification, making them poor object detectors.We fail to find any units that are even remotely as selective as the 'grandmother cell' units reported in recurrent neural networks.In order to generalize these results, we compared selectivity measures on a few units in VGG-16 and GoogLeNet trained on the ImageNet or Places-365 datasets that have been described as 'object detectors'.Again, we find poor hit-rates and high false-alarm rates for object classification.","Looking for object detectors using many different selectivity measures; CNNs are slightly selective, but not enough to be termed object detectors." 711,An image representation based convolutional network for DNA classification,"The folding structure of the DNA molecule combined with helper molecules, also referred to as the chromatin, is highly relevant for the functional properties of DNA.The chromatin structure is largely determined by the underlying primary DNA sequence, though the interaction is not yet fully understood.In this paper we develop a convolutional neural network that takes an image-representation of primary DNA sequence as its input, and predicts key determinants of chromatin structure.The method is developed such that it is capable of detecting interactions between distal elements in the DNA sequence, which are known to be highly relevant.Our experiments show that the method outperforms several existing methods both in terms of prediction accuracy and training time.",A method to transform DNA sequences into 2D images using space-filling Hilbert Curves to enhance the strengths of CNNs 712,Learning to cluster in order to transfer across domains and tasks,"This paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster.The key insight is that, in addition to features, we can transfer similarity information and this is sufficient to learn a similarity function and clustering network to perform both domain adaptation and cross-task transfer learning.We begin by reducing categorical information to pairwise constraints, which only consider whether two instances belong to the same class or not.This similarity is category-agnostic and can be learned from data in the source domain using a similarity network.We then present two novel approaches for performing transfer learning using this similarity function.First, for unsupervised domain adaptation, we
design a new loss function to regularize classification with a constrained clustering loss, hence learning a clustering network with the transferred similarity metric generating the training inputs.Second, for cross-task learning, we propose a framework to reconstruct and estimate the number of semantic clusters, again using the clustering network.Since the similarity network is noisy, the key is to use a robust clustering algorithm, and we show that our formulation is more robust than the alternative constrained and unconstrained clustering approaches.Using this method, we first show state of the art results for the challenging cross-task problem, applied on Omniglot and ImageNet.Our results show that we can reconstruct semantic clusters with high accuracy.We then evaluate the performance of cross-domain transfer using images from the Office-31 and SVHN-MNIST tasks and present top accuracy on both datasets.Our approach doesn't explicitly deal with domain discrepancy.If we combine it with a domain adaptation loss, it shows further improvement.",A learnable clustering objective to facilitate transfer learning across domains and tasks 713,Robust Cross-lingual Embeddings from Parallel Sentences,"Recent advances in cross-lingual word embeddings have primarily relied on mapping-based methods, which project pretrained word embeddings from different languages into a shared space through a linear transformation.However, these approaches assume word embedding spaces are isomorphic between different languages, which has been shown not to hold in practice, and fundamentally limits their performance.This motivates investigating joint learning methods which can overcome this impediment, by simultaneously learning embeddings across languages via a cross-lingual term in the training objective.Given the abundance of parallel data available, we propose a bilingual extension of the CBOW method which leverages sentence-aligned corpora to obtain robust cross-lingual word and sentence representations.Our approach significantly improves cross-lingual sentence retrieval performance over all other approaches, as well as convincingly outscores mapping methods while maintaining parity with jointly trained methods on word-translation.It also achieves parity with a deep RNN method on a zero-shot cross-lingual document classification task, requiring far fewer computational resources for training and inference.As an additional advantage, our bilingual method also improves the quality of monolingual word vectors despite training on much smaller datasets.
We make our code and models publicly available.",Joint method for learning cross-lingual embeddings with state-of-the-art performance for cross-lingual tasks and mono-lingual quality 714,Temporal Difference Variational Auto-Encoder,"To act and plan in complex environments, we posit that agents should have a mental simulator of the world with three characteristics: it should build an abstract state representing the condition of the world; it should form a belief which represents uncertainty on the world; it should go beyond simple step-by-step simulation, and exhibit temporal abstraction.Motivated by the absence of a model satisfying all these requirements, we propose TD-VAE, a generative sequence model that learns representations containing explicit beliefs about states several steps into the future, and that can be rolled out directly without single-step transitions.TD-VAE is trained on pairs of temporally separated time points, using an analogue of temporal difference learning used in reinforcement learning.","Generative model of temporal data, that builds online belief state, operates in latent space, does jumpy predictions and rollouts of states." 715,Information Theoretic Co-Training,"This paper introduces an information theoretic co-training objective for unsupervised learning. We consider the problem of predicting the future. Rather than predict future sensations we predict hypotheses to be confirmed by future sensations. More formally, we assume a population distribution on pairs where we can think of one element as a past sensation and the other as a future sensation. We train both a predictor model and a confirmation model where we view the predicted quantities as hypotheses or facts. For a population distribution on pairs we focus on the problem of measuring the mutual information between the past and future sensations. By the data processing inequality this mutual information is at least as large as the corresponding mutual information under the distribution on triples defined by the confirmation model.The information theoretic training objective for the two models can be viewed as a form of co-training where we want the prediction from the past to match the confirmation from the future.We give experiments on applications to learning phonetics on the TIMIT dataset.",Presents an information theoretic training objective for co-training and demonstrates its power in unsupervised learning of phonetics. 716,Score and Lyrics-Free Singing Voice Generation,"Generative models for singing voice have been mostly concerned with the task of ""singing voice synthesis,"" i.e., to produce singing voice waveforms given musical scores and text lyrics.In this work, we explore a novel yet challenging alternative: singing voice generation without pre-assigned scores and lyrics, in both training and inference time.In particular, we experiment with three different schemes: 1) free singer, where the model generates singing voices without taking any conditions; 2) accompanied singer, where the model generates singing voices over a waveform of instrumental music; and 3) solo singer, where the model improvises a chord sequence first and then uses that to generate voices.We outline the associated challenges and propose a pipeline to tackle these new tasks.This involves the development of source separation and transcription models for data preparation, adversarial networks for audio generation, and customized metrics for evaluation.",Our models generate singing voices without lyrics and scores. They take accompaniment as input and output singing voices.
717,Distilling Neural Networks for Faster and Greener Dependency Parsing,"The carbon footprint of natural language processing research has been increasing in recent years due to its reliance on large and inefficient neural network implementations.Distillation is a network compression technique which attempts to impart knowledge from a large model to a smaller one.We use teacher-student distillation to improve the efficiency of the Biaffine dependency parser which obtains state-of-the-art performance with respect to accuracy and parsing speed. When distilling to 20% of the original model’s trainable parameters, we only observe an average decrease of ∼1 point for both UAS and LAS across a number of diverse Universal Dependency treebanks while being 2.26x faster than the baseline model on CPU at inference time.We also observe a small increase in performance when compressing to 80% for some treebanks. Finally, through distillation we attain a parser which is not only faster but also more accurate than the fastest modern parser on the Penn Treebank.",We increase the efficiency of neural network dependency parsers with teacher-student distillation. 718,Adversarially Regularized Autoencoders,"While autoencoders are a key technique in representation learning for continuous structures, such as images or wave forms, developing general-purpose autoencoders for discrete structures, such as text sequences or discretized images, has proven to be more challenging.In particular, discrete inputs make it more difficult to learn a smooth encoder that preserves the complex local relationships in the input space.In this work, we propose an adversarially regularized autoencoder with the goal of learning more robust discrete-space representations.ARAE jointly trains both a rich discrete-space encoder, such as an RNN, and a simpler continuous space generator function, while using generative adversarial network training to constrain the distributions to be similar.This method yields a smoother contracted code space that maps similar inputs to nearby codes, and also an implicit latent variable GAN model for generation.Experiments on text and discretized images demonstrate that the GAN model produces clean interpolations and captures the multimodality of the original space, and that the autoencoder produces improvements in semi-supervised learning as well as state-of-the-art results in the unaligned text style transfer task using only a shared continuous-space representation.","Adversarially Regularized Autoencoders learn smooth representations of discrete structures allowing for interesting results in text generation, such as unaligned style transfer, semi-supervised learning, and latent space interpolation and arithmetic."
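A generic sketch of the teacher-student distillation objective referenced in entry 717: the student is trained against the teacher's temperature-softened distribution, mixed with the usual hard-label loss. Parser-specific details (biaffine scoring, treebanks) are omitted; the temperature, mixing weight, and shapes below are illustrative.

```python
# Standard distillation loss: soft KL term against the teacher + hard cross-entropy.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature: float = 2.0, alpha: float = 0.5):
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL term, scaled by T^2 as is conventional so gradient magnitudes are preserved
    soft_loss = F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2
    hard_loss = F.cross_entropy(student_logits, targets)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

student_logits = torch.randn(8, 30, requires_grad=True)
teacher_logits = torch.randn(8, 30)
targets = torch.randint(0, 30, (8,))
loss = distillation_loss(student_logits, teacher_logits, targets)
loss.backward()
print(float(loss))
```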
719,Dependent Bidirectional RNN with Extended-long Short-term Memory,"In this work, we first conduct mathematical analysis on the memory, which is defined as a function that maps an element in a sequence to the current output, of three RNN cells; namely, the simple recurrent neural network, the long short-term memory and the gated recurrent unit.Based on the analysis, we propose a new design, called the extended-long short-term memory, to extend the memory length of a cell.Next, we present a multi-task RNN model that is robust to previous erroneous predictions, called the dependent bidirectional recurrent neural network, for the sequence-in-sequence-out problem.Finally, the performance of the DBRNN model with the ELSTM cell is demonstrated by experimental results.",A recurrent neural network cell with extended-long short-term memory and a multi-task RNN model for sequence-in-sequence-out problems 720,Measuring the Intrinsic Dimension of Objective Landscapes,"Many recently trained neural networks employ large numbers of parameters to achieve good performance.One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem.But how accurate are such notions?How many parameters are really needed?In this paper we attempt to answer this question by training networks not in their native parameter space, but instead in a smaller, randomly oriented subspace.We slowly increase the dimension of this subspace, note at which dimension solutions first appear, and define this to be the intrinsic dimension of the objective landscape.The approach is simple to implement, computationally tractable, and produces several suggestive conclusions.Many problems have smaller intrinsic dimensions than one might suspect, and the intrinsic dimension for a given dataset varies little across a family of models with vastly different sizes.This latter result has the profound implication that once a parameter space is large enough to solve a problem, extra parameters serve directly to increase the dimensionality of the solution manifold.Intrinsic dimension allows some quantitative comparison of problem difficulty across supervised, reinforcement, and other types of learning where we conclude, for example, that solving the inverted pendulum problem is 100 times easier than classifying digits from MNIST, and playing Atari Pong from pixels is about as hard as classifying CIFAR-10.In addition to providing new cartography of the objective landscapes wandered by parameterized models, the method is a simple technique for constructively obtaining an upper bound on the minimum description length of a solution.A byproduct of this construction is a simple approach for compressing networks, in some cases by more than 100 times.",We train in random subspaces of parameter space to measure how many dimensions are really needed to find a solution.
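A minimal sketch of the subspace-training parameterization from entry 720: freeze an initial parameter vector theta_0, draw a fixed random projection P, and optimize only a low-dimensional vector z so that theta = theta_0 + P z. The sketch shows the parameterization for a single linear model; the search over subspace dimensions that defines the intrinsic dimension is not included, and the scaling of P is an illustrative choice.

```python
# Train only d subspace coordinates z; full parameters are theta_0 + P @ z.
import torch
import torch.nn as nn

class SubspaceLinearModel(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, subspace_dim: int):
        super().__init__()
        n_params = in_dim * out_dim + out_dim
        self.theta0 = nn.Parameter(torch.randn(n_params) * 0.01, requires_grad=False)
        self.P = nn.Parameter(torch.randn(n_params, subspace_dim) / subspace_dim ** 0.5,
                              requires_grad=False)
        self.z = nn.Parameter(torch.zeros(subspace_dim))   # the only trainable parameters
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x):
        theta = self.theta0 + self.P @ self.z
        w = theta[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = theta[self.in_dim * self.out_dim:]
        return x @ w.t() + b

model = SubspaceLinearModel(in_dim=784, out_dim=10, subspace_dim=50)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
print(model.z.grad.shape)   # only the 50 subspace coordinates receive gradients
```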
721,TWIN GRAPH CONVOLUTIONAL NETWORKS: GCN WITH DUAL GRAPH SUPPORT FOR SEMI-SUPERVISED LEARNING,"Graph Neural Networks, as a combination of Graph Signal Processing and Deep Convolutional Networks, show great power in pattern recognition in non-Euclidean domains.In this paper, we propose a new method to deploy two pipelines based on the duality of a graph to improve accuracy.By exploring the primal graph and its dual graph where nodes and edges can be treated as one another, we have exploited the benefits of both vertex features and edge features.As a result, we have arrived at a framework that has great potential in both semi-supervised and unsupervised learning.",A primal dual graph neural network model for semi-supervised learning 722,Semi-supervised Autoencoding Projective Dependency Parsing,"We describe two end-to-end autoencoding models for semi-supervised graph-based dependency parsing.The first model is a Local Autoencoding Parser encoding the input using continuous latent variables in a sequential manner; the second model is a Global Autoencoding Parser encoding the input into dependency trees as latent variables, with exact inference.Both models consist of two parts: an encoder enhanced by deep neural networks that can utilize the contextual information to encode the input into latent variables, and a decoder which is a generative model able to reconstruct the input.Both LAP and GAP admit a unified structure with different loss functions for labeled and unlabeled data with shared parameters.We conducted experiments on WSJ and UD dependency parsing data sets, showing that our models can exploit the unlabeled data to boost the performance given a limited amount of labeled data.",We describe two end-to-end autoencoding parsers for semi-supervised graph-based dependency parsing. 723,GATED FAST WEIGHTS FOR ASSOCIATIVE RETRIEVAL,"We improve previous end-to-end differentiable neural networks with fast weight memories.A gate mechanism updates fast weights at every time step of a sequence through two separate outer-product-based matrices generated by slow parts of the net.The system is trained on a complex sequence-to-sequence variation of the Associative Retrieval Problem with roughly 70 times more temporal memory than similar-sized standard recurrent NNs.In terms of accuracy and number of parameters, our architecture outperforms a variety of RNNs, including Long Short-Term Memory, Hypernetworks, and related fast weight architectures.",An improved Fast Weight network which shows better results on a general toy task. 724,Exponentially Decaying Flows for Optimization in Deep Learning,"The field of deep learning has been craving an optimization method that shows outstanding properties for both optimization and generalization. We propose a method for mathematical optimization based on flows along geodesics, that is, the shortest paths between two points, with respect to the Riemannian metric induced by a non-linear function.In our method, the flows refer to Exponentially Decaying Flows, as they can be designed to converge on the local solutions exponentially.In this paper, we conduct experiments to show its high performance on optimization benchmarks, as well as its potential for producing good machine learning benchmarks.",Introduction of a new optimization method and its application to deep learning.
725,Quantum Graph Neural Networks,"We introduce Quantum Graph Neural Networks, a new class of quantum neural network ansatze which are tailored to represent quantum processes which have a graph structure, and are particularly suitable to be executed on distributed quantum systems over a quantum network.Along with this general class of ansatze, we introduce further specialized architectures, namely, Quantum Graph Recurrent Neural Networks and Quantum Graph Convolutional Neural Networks.We provide four example applications of QGNNs: learning Hamiltonian dynamics of quantum systems, learning how to create multipartite entanglement in a quantum network, unsupervised learning for spectral clustering, and supervised learning for graph isomorphism classification.",Introducing a new class of quantum neural networks for learning graph-based representations on quantum computers. 726,We're Here to Help: Crisis Communication and User Perception of Data Breaches,"Data breaches involve information being accessed by unauthorized parties.Our research concerns user perception of data breaches, especially issues relating to accountability.A preliminary study indicated many people had a weak understanding of the issues, and felt they themselves were somehow responsible.We speculated that this impression might stem from organizational communication strategies.We therefore compared texts from organizations with external sources, such as the news media.This suggested that organizations use well-known crisis communication methods to reduce their reputational damage, and that these strategies align with repositioning of the narrative elements involved in the story.We then conducted a quantitative study, asking participants to rate either organizational texts or news texts about breaches.The findings of this study were in line with our document analysis, and suggest that organizational communication affects the users' perception of victimization, attitudes in data protection, and accountability.Our study suggests some software design and legal implications supporting users to protect themselves and develop better mental models of security breaches.","In this paper, we tested communication strategies' influence on users' mental models of a data breach." 727,Robust Goal Recognition with Operator-Counting Heuristics,"Goal recognition is the problem of inferring the correct goal towards which an agent executes a plan, given a set of goal hypotheses, a domain model, and a sample of the plan being executed. This is a key problem in both cooperative and competitive agent interactions and recent approaches have produced fast and accurate goal recognition algorithms. In this paper, we leverage advances in operator-counting heuristics computed using linear programs over constraints derived from classical planning problems to solve goal recognition problems. Our approach uses additional operator-counting constraints derived from the observations to efficiently infer the correct goal, and serves as a basis for a number of further methods with additional constraints.",A goal recognition approach based on operator counting heuristics used to account for noise in the dataset. 728,BAM!
Born-Again Multi-Task Networks for Natural Language Understanding,"It can be challenging to train multi-task neural networks that outperform or even match their single-task counterparts.To help address this, we propose using knowledge distillation where single-task models teach a multi-task model.We enhance this training with teacher annealing, a novel method that gradually transitions the model from distillation to supervised learning, helping the multi-task model surpass its single-task teachers.We evaluate our approach by multi-task fine-tuning BERT on the GLUE benchmark.Our method consistently improves over standard single-task and multi-task training.",distilling single-task models into a multi-task model improves natural language understanding performance 729,Efficacy of Pixel-Level OOD Detection for Semantic Segmentation,"The detection of out of distribution samples for image classification has been widely researched.Safety critical applications, such as autonomous driving, would benefit from the ability to localise the unusual objects causing the image to be out of distribution.This paper adapts state-of-the-art methods for detecting out of distribution images for image classification to the new task of detecting out of distribution pixels, which can localise the unusual objects.It further experimentally compares the adapted methods on two new datasets derived from existing semantic segmentation datasets using PSPNet and DeeplabV3+ architectures, as well as proposing a new metric for the task.The evaluation shows that the performance ranking of the compared methods does not transfer to the new task and every method performs significantly worse than their image-level counterparts.",Evaluating pixel-level out-of-distribution detection methods on two new real world datasets using PSPNet and DeeplabV3+. 730,Adversarial Dropout Regularization,"We present a domain adaptation method for transferring neural representations from label-rich source domains to unlabeled target domains.Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain classifier network.However, a drawback of this approach is that the domain classifier simply labels the generated features as in-domain or not, without considering the boundaries between classes.This means that ambiguous target features can be generated near class boundaries, reducing target classification accuracy.We propose a novel approach, Adversarial Dropout Regularization, which encourages the generator to output more discriminative features for the target domain.Our key idea is to replace the traditional domain critic with a critic that detects non-discriminative features by using dropout on the classifier network.The generator then learns to avoid these areas of the feature space and thus creates better features.We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvements over the state of the art.",We present a new adversarial method for adapting neural representations based on a critic that detects non-discriminative features.
731,Statistically Consistent Saliency Estimation,"The use of deep learning for a wide range of data problems has increased the need for understanding and diagnosing these models, and deep learning interpretation techniques have become an essential tool for data analysts.Although numerous model interpretation methods have been proposed in recent years, most of these procedures are based on heuristics with little or no theoretical guarantees.In this work, we propose a statistical framework for saliency estimation for black box computer vision models.We build a model-agnostic estimation procedure that is statistically consistent and passes the saliency checks of Adebayo et al.Our method requires solving a linear program, whose solution can be efficiently computed in polynomial time.Through our theoretical analysis, we establish an upper bound on the number of model evaluations needed to recover the region of importance with high probability, and build a new perturbation scheme for estimation of local gradients that is shown to be more efficient than the commonly used random perturbation schemes.Validity of the new method is demonstrated through sensitivity analysis.",We propose a statistical framework and a theoretically consistent procedure for saliency estimation. 732,Compositional Embeddings: Joint Perception and Comparison of Class Label Sets,"We explore the idea of compositional set embeddings that can be used to infer not just a single class, but the set of classes associated with the input data.This can be useful, for example, in multi-object detection in images, or multi-speaker diarization in audio.In particular, we devise and implement two novel models consisting of an embedding function f trained jointly with a “composite” function g that computes set union operations between the classes encoded in two embedding vectors; and an embedding f trained jointly with a “query” function h that computes whether the classes encoded in one embedding subsume the classes encoded in another embedding.In contrast to prior work, these models must both perceive the classes associated with the input examples, and also encode the relationships between different class label sets.In experiments conducted on simulated data, OmniGlot, and COCO datasets, the proposed composite embedding models outperform baselines based on traditional embedding approaches.",We explored how a novel method of compositional set embeddings can both perceive and represent not just a single class but an entire set of classes that is associated with the input data.
733,CoDraw: Collaborative Drawing as a Testbed for Grounded Goal-driven Communication,"In this work, we propose a goal-driven collaborative task that contains language, vision, and action in a virtual environment as its core components.Specifically, we develop a Collaborative image-Drawing game between two agents, called CoDraw.Our game is grounded in a virtual world that contains movable clip art objects.The game involves two players: a Teller and a Drawer.The Teller sees an abstract scene containing multiple clip art pieces in a semantically meaningful configuration, while the Drawer tries to reconstruct the scene on an empty canvas using available clip art pieces.The two players communicate via two-way communication using natural language.We collect the CoDraw dataset of ~10K dialogs consisting of ~138K messages exchanged between human agents.We define protocols and metrics to evaluate the effectiveness of learned agents on this testbed, highlighting the need for a novel ""crosstalk"" condition which pairs agents trained independently on disjoint subsets of the training data for evaluation.We present models for our task, including simple but effective baselines and neural network approaches trained using a combination of imitation learning and goal-driven training.All models are benchmarked using both fully automated evaluation and by playing the game with live human agents.","We introduce a dataset, models, and training + evaluation protocols for a collaborative drawing task that allows studying goal-driven and perceptually + actionably grounded language generation and understanding. " 734,Bias-Resilient Neural Network,"Presence of bias and confounding effects is inarguably one of the most critical challenges in machine learning applications that has alluded to pivotal debates in the recent years.Such challenges range from spurious associations of confounding variables in medical studies to the bias of race in gender or face recognition systems.One solution is to enhance datasets and organize them such that they do not reflect biases, which is a cumbersome and intensive task.The alternative is to make use of available data and build models considering these biases.Traditional statistical methods apply straightforward techniques such as residualization or stratification to precomputed features to account for confounding variables.However, these techniques are not in general applicable to end-to-end deep learning methods.In this paper, we propose a method based on the adversarial training strategy to learn discriminative features unbiased and invariant to the confounder.This is enabled by incorporating a new adversarial loss function that encourages a vanished correlation between the bias and learned features.We apply our method to a synthetic, a medical diagnosis, and a gender classification dataset.Our results show that the learned features by our method not only result in superior prediction performance but also are uncorrelated with the bias or confounder variables.The code is available at http://blinded_for_review/.",We propose a method based on the adversarial training strategy to learn discriminative features unbiased and invariant to the confounder(s) by incorporating a loss function that encourages a vanished correlation between the bias and learned features. 
735,Making question answering more robust through relevant context selection,"Existing neural question answering models are required to reason over and draw complicated inferences from a long context for most large-scale QA datasets.However, if we view QA as a combined retrieval and reasoning task, we can assume the existence of a minimal context which is necessary and sufficient to answer a given question.Recent work has shown that a sentence selector module that selects a shorter context and feeds it to the downstream QA model achieves performance comparable to a QA model trained on full context, while also being more interpretable.Recent work has also shown that most state-of-the-art QA models break when adversarially generated sentences are appended to the context.While humans are immune to such distractor sentences, QA models get easily misled into selecting answers from these sentences.We hypothesize that the sentence selector module can filter out extraneous context, thereby allowing the downstream QA model to focus and reason over the parts of the context that are relevant to the question.In this paper, we show that the sentence selector itself is susceptible to adversarial inputs.However, we demonstrate that a pipeline consisting of a sentence selector module followed by the QA model can be made more robust to adversarial attacks in comparison to a QA model trained on full context.Thus, we provide evidence towards a modular approach for question answering that is more robust and interpretable.",A modular approach consisting of a sentence selector module followed by the QA model can be made more robust to adversarial attacks in comparison to a QA model trained on full context. 736,Riemannian TransE: Multi-relational Graph Embedding in Non-Euclidean Space,"Multi-relational graph embedding, which aims at achieving effective representations with reduced low-dimensional parameters, has been widely used in knowledge base completion.Although knowledge base data usually contains tree-like or cyclic structure, none of the existing approaches can embed these data into a compatible space that is in line with the structure.To overcome this problem, a novel framework, called Riemannian TransE, is proposed in this paper to embed the entities in a Riemannian manifold.Riemannian TransE models each relation as a move to a point and defines a specific novel distance dissimilarity for each relation, so that all the relations are naturally embedded in correspondence to the structure of data.Experiments on several knowledge base completion tasks have shown that, based on an appropriate choice of manifold, Riemannian TransE achieves good performance even with significantly reduced parameters.",Multi-relational graph embedding with Riemannian manifolds and TransE-like loss function. 737,Enabling Continual Learning in Neural Networks with Meta Learning,"Catastrophic forgetting in neural networks is one of the most well-known problems in continual learning.Previous attempts at addressing the problem focus on preventing important weights from changing.Such methods often require task boundaries to learn effectively and do not support backward transfer learning.In this paper, we propose a meta-learning algorithm which learns to reconstruct the gradients of old tasks w.r.t.
the current parameters and combines these reconstructed gradients with the current gradient to enable continual learning and backward transfer learning from the current task to previous tasks.Experiments on standard continual learning benchmarks show that our algorithm can effectively prevent catastrophic forgetting and supports backward transfer learning.",We propose a meta learning algorithm for continual learning which can effectively prevent the catastrophic forgetting problem and support backward transfer learning. 738,Geometry of Deep Convolutional Networks," We give a formal procedure for computing preimages of convolutional network outputs using the dual basis defined from the set of hyperplanes associated with the layers of the network.We point out the special symmetry associated with arrangements of hyperplanes of convolutional networks that take the form of regular multidimensional polyhedral cones.We discuss the efficiency of a large number of layers of nested cones that result from incremental small size convolutions in order to give a good compromise between efficient contraction of data to low dimensions and shaping of preimage manifolds.We demonstrate how a specific network flattens a nonlinear input manifold to an affine output manifold and discuss its relevance to understanding classification properties of deep networks.",Analysis of deep convolutional networks in terms of associated arrangement of hyperplanes 739,Bayesian Meta Sampling for Fast Uncertainty Adaptation,"Meta learning has been making impressive progress for fast model adaptation.However, limited work has been done on learning fast uncertainty adaption for Bayesian modeling.In this paper, we propose to achieve the goal by placing meta learning on the space of probability measures, inducing the concept of meta sampling for fast uncertainty adaption.Specifically, we propose a Bayesian meta sampling framework consisting of two main components: a meta sampler and a sample adapter.The meta sampler is constructed by adopting a neural-inverse-autoregressive-flow structure, a variant of the recently proposed neural autoregressive flows, to efficiently generate meta samples to be adapted.The sample adapter moves meta samples to task-specific samples, based on a newly proposed and general Bayesian sampling technique, called optimal-transport Bayesian sampling.The combination of the two components allows a simple learning procedure for the meta sampler to be developed, which can be efficiently optimized via standard back-propagation.Extensive experimental results demonstrate the efficiency and effectiveness of the proposed framework, obtaining better sample quality and faster uncertainty adaption compared to related methods.",We proposed a Bayesian meta sampling method for adapting the model uncertainty in meta learning 740,Context-adaptive Entropy Model for End-to-end Optimized Image Compression,"We propose a context-adaptive entropy model for use in end-to-end optimized image compression.Our model exploits two types of contexts, bit-consuming contexts and bit-free contexts, distinguished based upon whether additional bit allocation is required.Based on these contexts, we allow the model to more accurately estimate the distribution of each latent representation with a more generalized form of the approximation models, which accordingly leads to an enhanced compression performance.Based on the experimental results, the proposed method outperforms the traditional image codecs, such as BPG and JPEG2000, as well as other
previous artificial-neural-network based approaches, in terms of the peak signal-to-noise ratio and multi-scale structural similarity index.The test code is publicly available at https://github.com/JooyoungLeeETRI/CA_Entropy_Model.","Context-adaptive entropy model for use in end-to-end optimized image compression, which significantly improves compression performance" 741,Manifold Mixup: Learning Better Representations by Interpolating Hidden States,"Deep networks often perform well on the data distribution on which they are trained, yet give incorrect answers when evaluated on points from off of the training distribution.This is exemplified by the adversarial examples phenomenon but can also be seen in terms of model generalization and domain shift. Ideally, a model would assign lower confidence to points unlike those from the training distribution. We propose a regularizer which addresses this issue by training with interpolated hidden states and encouraging the classifier to be less confident at these points. Because the hidden states are learned, this has an important effect of encouraging the hidden states for a class to be concentrated in such a way so that interpolations within the same class or between two different classes do not intersect with the real data points from other classes. This has a major advantage in that it avoids the underfitting which can result from interpolating in the input space. We prove that the exact condition for this problem of underfitting to be avoided by Manifold Mixup is that the dimensionality of the hidden states exceeds the number of classes, which is often the case in practice. Additionally, this concentration can be seen as making the features in earlier layers more discriminative. We show that despite requiring no significant additional computation, Manifold Mixup achieves large improvements over strong baselines in supervised learning, robustness to single-step adversarial attacks, semi-supervised learning, and Negative Log-Likelihood on held out samples.","A method for learning better representations that acts as a regularizer and, despite no significant additional computation cost, achieves improvements over strong baselines on supervised and semi-supervised learning tasks." 742,epsilon-Rotation Invariant Euclidean Spheres Packing in Slicer3D,"Sometimes SRS requires using sphere packing on a Region of Interest such as cancer to determine a treatment plan. We have developed a sphere packing algorithm which packs non-intersecting spheres inside the ROI. The region of interest in our case consists of those voxels which are identified as cancer tissues. In this paper, we analyze the rotational invariance properties of our sphere-packing algorithm which is based on distance transformations.Epsilon-rotation invariance means the ability to arbitrarily rotate the 3D ROI while keeping the volume properties the same within some limit of epsilon.The applied rotations produce spherical packing which remains highly correlated as we analyze the geometric properties of sphere packing before and after the rotation of the volume data for the ROI.Our novel sphere packing algorithm has a high degree of rotation invariance within the range of +/- epsilon.Our method used a shape descriptor derived from the values of the disjoint set of spheres from the distance-based sphere packing algorithm to extract the invariant descriptor from the ROI.We demonstrated these ideas by implementing them on the Slicer3D platform available for our research.
The data is based on MRI stereotactic images.We presented several performance results on different benchmark data of over 30 patients in the Slicer3D platform.","Packing region of Interest (ROI) such as cancerous regions identified in 3D Volume Data, Packing spheres inside the ROI, rotating the ROI, measures of difference in sphere packing before and after the rotation." 743,Emergence of Implicit Filter Sparsity in Convolutional Neural Networks,"We show implicit filter level sparsity manifests in convolutional neural networks which employ Batch Normalization and ReLU activation, and are trained using adaptive gradient descent techniques with L2 regularization or weight decay.Through an extensive empirical study we hypothesize the mechanism behind the sparsification process.We find that the interplay of various phenomena influences the strength of L2 and weight decay regularizers, leading the supposedly non-sparsity-inducing regularizers to induce filter sparsity. In this workshop article we summarize some of our key findings and experiments, and present additional results on modern network architectures such as ResNet-50.","Filter level sparsity emerges implicitly in CNNs trained with adaptive gradient descent approaches due to various phenomena, and the extent of sparsity can be inadvertently affected by different seemingly unrelated hyperparameters." 744,A Unified Framework for Lifelong Learning in Deep Neural Networks,"Humans can learn a variety of concepts and skills incrementally over the course of their lives while exhibiting an array of desirable properties, such as non-forgetting, concept rehearsal, forward transfer and backward transfer of knowledge, few-shot learning, and selective forgetting.Previous approaches to lifelong machine learning can only demonstrate subsets of these properties, often by combining multiple complex mechanisms. In this Perspective, we propose a powerful unified framework that can demonstrate all of the properties by utilizing a small number of weight consolidation parameters in deep neural networks.In addition, we are able to draw many parallels between the behaviours and mechanisms of our proposed framework and those surrounding human learning, such as memory loss or sleep deprivation.This Perspective serves as a conduit for two-way inspiration to further understand lifelong learning in machines and humans.","Drawing parallels with human learning, we propose a unified framework to exhibit many lifelong learning abilities in neural networks by utilizing a small number of weight consolidation parameters."
745,Improving Limited Angle CT Reconstruction with a Robust GAN Prior,"Limited angle CT reconstruction is an under-determined linear inverse problem that requires appropriate regularization techniques to be solved.In this work we study how pre-trained generative adversarial networks can be used to clean noisy, highly artifact-laden reconstructions from conventional techniques, by effectively projecting onto the inferred image manifold.In particular, we use a robust version of the popularly used GAN prior for inverse problems, based on a recent technique called corruption mimicking, that significantly improves the reconstruction quality.The proposed approach operates in the image space directly, as a result of which it does not need to be trained or require access to the measurement model, is scanner agnostic, and can work over a wide range of sensing scenarios.",We show that robust GAN priors work better than GAN priors for limited angle CT reconstruction which is a highly under-determined inverse problem. 746,Reducing Distant Supervision Noise with Maxpooled Attention and Sentence-Level Supervision,"We propose an effective multitask learning setup for reducing distant supervision noise by leveraging sentence-level supervision.We show how sentence-level supervision can be used to improve the encoding of individual sentences, and to learn which input sentences are more likely to express the relationship between a pair of entities.We also introduce a novel neural architecture for collecting signals from multiple input sentences, which combines the benefits of attention and maxpooling.The proposed method increases AUC by 10%, and outperforms recently published results on the FB-NYT dataset.","A new form of attention that works well for the distant supervision setting, and a multitask learning approach to add sentence-level annotations. " 747,Generative Code Modeling with Graphs,"Generative models for source code are an interesting structured prediction problem, requiring reasoning about both hard syntactic and semantic constraints as well as about natural, likely programs.We present a novel model for this problem that uses a graph to represent the intermediate state of the generated output.Our model generates code by interleaving grammar-driven expansion steps with graph augmentation and neural message passing steps.An experimental evaluation shows that our new model can generate semantically meaningful expressions, outperforming a range of strong baselines.",Representing programs as graphs including semantics helps when generating programs 748,"INFERENCE, PREDICTION, AND ENTROPY RATE OF CONTINUOUS-TIME, DISCRETE-EVENT PROCESSES","The inference of models, prediction of future symbols, and entropy rate estimation of discrete-time, discrete-event processes is well-worn ground.However, many time series are better conceptualized as continuous-time, discrete-event processes.Here, we provide new methods for inferring models, predicting future symbols, and estimating the entropy rate of continuous-time, discrete-event processes.The methods rely on an extension of Bayesian structural inference that takes advantage of neural networks’ universal approximation power.Based on experiments with simple synthetic data, these new methods seem to be competitive with state-of-the-art methods for prediction and entropy rate estimation as long as the correct model is inferred.","A new method for inferring a model of, estimating the entropy rate of, and predicting continuous-time, discrete-event processes."
749,Shape Features Improve General Model Robustness,"Recent studies show that convolutional neural networks are vulnerable under various settings, including adversarial examples, backdoor attacks, and distribution shifting. Motivated by the findings that human visual system pays more attention to global structure for recognition while CNNs are biased towards local texture features in images, we propose a unified framework EdgeGANRob based on robust edge features to improve the robustness of CNNs in general, which first explicitly extracts shape/structure features from a given image and then reconstructs a new image by refilling the texture information with a trained generative adversarial network.In addition, to reduce the sensitivity of edge detection algorithm to adversarial perturbation, we propose a robust edge detection approach Robust Canny based on the vanilla Canny algorithm.To gain more insights, we also compare EdgeGANRob with its simplified backbone procedure EdgeNetRob, which performs learning tasks directly on the extracted robust edge features.We find that EdgeNetRob can help boost model robustness significantly but at the cost of the clean model accuracy.EdgeGANRob, on the other hand, is able to improve clean model accuracy compared with EdgeNetRob and without losing the robustness benefits introduced by EdgeNetRob.Extensive experiments show that EdgeGANRob is resilient in different learning tasks under diverse settings.",A unified model to improve model robustness against multiple tasks 750,PROGRESSIVE LEARNING AND DISENTANGLEMENT OF HIERARCHICAL REPRESENTATIONS,"Learning rich representation from data is an important task for deep generative models such as variational auto-encoder.However, by extracting high-level abstractions in the bottom-up inference process, the goal of preserving all factors of variations for top-down generation is compromised.Motivated by the concept of “starting small”, we present a strategy to progressively learn independent hierarchical representations from high- to low-levels of abstractions.The model starts with learning the most abstract representation, and then progressively grow the network architecture to introduce new representations at different levels of abstraction.We quantitatively demonstrate the ability of the presented model to improve disentanglement in comparison to existing works on two benchmark datasets using three disentanglement metrics, including a new metric we proposed to complement the previously-presented metric of mutual information gap.We further present both qualitative and quantitative evidence on how the progression of learning improves disentangling of hierarchical representations.By drawing on the respective advantage of hierarchical representation learning and progressive learning, this is to our knowledge the first attempt to improve disentanglement by progressively growing the capacity of VAE to learn hierarchical representations.",We proposed a progressive learning method to improve learning and disentangling latent representations at different levels of abstraction. 
751,Learning Sparse Latent Representations with the Deep Copula Information Bottleneck,"Deep latent variable models are powerful tools for representation learning.In this paper, we adopt the deep information bottleneck model, identify its shortcomings and propose a model that circumvents them.To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space.Building on that, we show how this transformation translates to sparsity of the latent space in the new model. We evaluate our method on artificial and real data.",We apply the copula transformation to the Deep Information Bottleneck which leads to restored invariance properties and a disentangled latent space with superior predictive capabilities. 752,Measuring Compositional Generalization: A Comprehensive Method on Realistic Data,"State-of-the-art machine learning methods exhibit limited compositional generalization.At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements.We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks.We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures.We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy.We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings.",Benchmark and method to measure compositional generalization by maximizing divergence of compound frequency at small divergence of atom frequency. 
753,Layerwise Learning Rates for Object Features in Unsupervised and Supervised Neural Networks And Consequent Predictions for the Infant Visual System,"To understand how object vision develops in infancy and childhood, it will be necessary to develop testable computational models.Deep neural networks have proven valuable as models of adult vision, but it is not yet clear if they have any value as models of development.As a first model, we measured learning in a DNN designed to mimic the architecture and representational geometry of the visual system.We quantified the development of explicit object representations at each level of this network through training by freezing the convolutional layers and training an additional linear decoding layer.We evaluate decoding accuracy on the whole ImageNet validation set, and also for individual visual classes.CORnet, however, uses supervised training and because infants have only extremely impoverished access to labels they must instead learn in an unsupervised manner.We therefore also measured learning in a state-of-the-art unsupervised network.CORnet and DeepCluster differ in both supervision and in the convolutional networks at their heart, thus to isolate the effect of supervision, we ran a control experiment in which we trained the convolutional network from DeepCluster in a supervised manner.We make predictions on how learning should develop across brain regions in infants.In all three networks, we also tested for a relationship in the order in which infants and machines acquire visual classes, and found only evidence for a counter-intuitive relationship.We discuss the potential reasons for this.",Unsupervised networks learn from bottom up; machines and infants acquire visual classes in different orders 754,Gradient Descent Happens in a Tiny Subspace,"We show that in a variety of large-scale deep learning scenarios the gradient dynamically converges to a very small subspace after a short period of training.The subspace is spanned by a few top eigenvectors of the Hessian, and is mostly preserved over long periods of training.A simple argument then suggests that gradient descent may happen mostly in this subspace.We give an example of this effect in a solvable model of classification, and we comment on possible implications for optimization and learning.","For classification problems with k classes, we show that the gradient tends to live in a tiny, slowly-evolving subspace spanned by the eigenvectors corresponding to the k-largest eigenvalues of the Hessian." 
755,Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering,"Answering questions that require multi-hop reasoning at web-scale necessitates retrieving multiple evidence documents, one of which often has little lexical or semantic relationship to the question.This paper introduces a new graph-based recurrent retrieval approach that learns to retrieve reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions.Our retriever model trains a recurrent neural network that learns to sequentially retrieve evidence paragraphs in the reasoning path by conditioning on the previously retrieved documents.Our reader model ranks the reasoning paths and extracts the answer span included in the best reasoning path.Experimental results show state-of-the-art results in three open-domain QA datasets, showcasing the effectiveness and robustness of our method.Notably, our method achieves significant improvement in HotpotQA, outperforming the previous best model by more than 14 points.",Graph-based recurrent retriever that learns to retrieve reasoning paths over Wikipedia Graph outperforms the most recent state of the art on HotpotQA by more than 14 points. 756,A shallow feature extraction network with a large receptive field for stereo matching tasks,"Stereo matching is one of the important basic tasks in the computer vision field.In recent years, stereo matching algorithms based on deep learning have achieved excellent performance and become the mainstream research direction.Existing algorithms generally use deep convolutional neural networks to extract more abstract semantic information, but we believe that the detailed information of the spatial structure is more important for stereo matching tasks.Based on this point of view, this paper proposes a shallow feature extraction network with a large receptive field.The network consists of three parts: a primary feature extraction module, an atrous spatial pyramid pooling module and a feature fusion module.The primary feature extraction network contains only three convolution layers.This network utilizes the basic feature extraction ability of the shallow network to extract and retain the detailed information of the spatial structure.In this paper, the dilated convolution and atrous spatial pyramid pooling module is introduced to increase the size of the receptive field.In addition, a feature fusion module is designed, which integrates the feature maps with multiscale receptive fields and mutually complements the feature information of different scales.We replaced the feature extraction part of the existing stereo matching algorithms with our shallow feature extraction network, and achieved state-of-the-art performance on the KITTI 2015 dataset.Compared with the reference network, the number of parameters is reduced by 42%, and the matching accuracy is improved by 1.9%.","We introduced a shallow feature extraction network with a large receptive field for stereo matching tasks, which uses a simple structure to get better performance."
757,Deep Learning with Logged Bandit Feedback,"We propose a new output layer for deep neural networks that permits the use of logged contextual bandit feedback for training.Such contextual bandit feedback can be available in huge quantities at little cost, opening up a path for training deep networks on orders of magnitude more data.To this effect, we propose a Counterfactual Risk Minimization approach for training deep networks using an equivariant empirical risk estimator with variance regularization, BanditNet, and show how the resulting objective can be decomposed in a way that allows Stochastic Gradient Descent training.We empirically demonstrate the effectiveness of the method by showing how deep networks -- ResNets in particular -- can be trained for object recognition without conventionally labeled images.",The paper proposes a new output layer for deep networks that permits the use of logged contextual bandit feedback for training. 758,Detecting Egregious Responses in Neural Sequence-to-sequence Models,"In this work, we attempt to answer a critical question: whether there exists some input sequence that will cause a well-trained discrete-space neural network sequence-to-sequence model to generate egregious outputs.And if such inputs exist, how to find them efficiently.We adopt an empirical methodology, in which we first create lists of egregious output sequences, and then design a discrete optimization algorithm to find input sequences that will cause the model to generate them.Moreover, the optimization algorithm is enhanced for large vocabulary search and constrained to search for input sequences that are likely to be input by real-world users.In our experiments, we apply this approach to dialogue response generation models trained on three real-world dialogue data-sets: Ubuntu, Switchboard and OpenSubtitles, testing whether the model can generate malicious responses.We demonstrate that given the trigger inputs our algorithm finds, a significant number of malicious sentences are assigned large probability by the model, which reveals an undesirable consequence of standard seq2seq training.",This paper aims to provide an empirical answer to the question of whether a well-trained dialogue response model can output malicious responses. 759,RTC-VAE: HARNESSING THE PECULIARITY OF TOTAL CORRELATION IN LEARNING DISENTANGLED REPRESENTATIONS,"In the problem of unsupervised learning of disentangled representations, one of the promising methods is to penalize the total correlation of sampled latent variables. Unfortunately, this well-motivated strategy often fails to achieve disentanglement due to a problematic difference between the sampled latent representation and its corresponding mean representation. We provide a theoretical explanation that low total correlation of the sample distribution cannot guarantee low total correlation of the mean representation.We prove that for the mean representation of arbitrarily high total correlation, there exist distributions of latent variables with bounded total correlation. However, we still believe that total correlation could be a key to the disentanglement of unsupervised representation learning, and we propose a remedy, RTC-VAE, which rectifies the total correlation penalty.
Experiments show that our model has a more reasonable distribution of the mean representation compared with baseline models, e.g., β-TCVAE and FactorVAE.",diagnosed the problems of state-of-the-art VAEs theoretically and qualitatively 760,Sparse and Structured Visual Attention,"Visual attention mechanisms have been widely used in image captioning models.In this paper, to better link the image structure with the generated text, we replace the traditional softmax attention mechanism by two alternative sparsity-promoting transformations: sparsemax and Total-Variation Sparse Attention.With sparsemax, we obtain sparse attention weights, selecting relevant features. In order to promote sparsity and encourage fusing of the related adjacent spatial locations, we propose TVmax. By selecting relevant groups of features, the TVmax transformation improves interpretability.We present results in the Microsoft COCO and Flickr30k datasets, obtaining gains in comparison to softmax. TVmax outperforms the other compared attention mechanisms in terms of human-rated caption quality and attention relevance.","We propose a new sparse and structured attention mechanism, TVmax, which promotes sparsity and encourages the weight of related adjacent locations to be the same." 761,Learning pronunciation from a foreign language in speech synthesis networks,"Although there are more than 65,000 languages in the world, the pronunciations of many phonemes sound similar across the languages.When people learn a foreign language, their pronunciation often reflects their native language's characteristics.That motivates us to investigate how the speech synthesis network learns the pronunciation when a multi-lingual dataset is given.In this study, we train the speech synthesis network bilingually in English and Korean, and analyze how the network learns the relations of phoneme pronunciation between the languages.Our experimental result shows that the learned phoneme embedding vectors are located closer if their pronunciations are similar across the languages.Based on the result, we also show that it is possible to train networks that synthesize an English speaker's Korean speech and vice versa.In another experiment, we train the network with a limited amount of English data and a large Korean dataset, and analyze the required amount of data to train a resource-poor language with the help of resource-rich languages.",Learned phoneme embeddings of a multilingual neural speech synthesis network could represent relations of phoneme pronunciation between the languages. 762,Fast Sparse ConvNets,"Historically, the pursuit of efficient inference has been one of the driving forces behind the research into new deep learning architectures and building blocks.Some of the recent examples include: the squeeze-and-excitation module, depthwise separable convolutions in Xception, and the inverted bottleneck in MobileNet v2. Notably, in all of these cases, the resulting building blocks enabled not only higher efficiency, but also higher accuracy, and found wide adoption in the field.In this work, we further expand the arsenal of efficient building blocks for neural network architectures; but instead of combining standard primitives, we advocate for the replacement of these dense primitives with their sparse counterparts. While the idea of using sparsity to decrease the parameter count is not new, the conventional wisdom is that this reduction in theoretical FLOPs does not translate into real-world efficiency gains.
We aim to correct this misconception by introducing a family of efficient sparse kernels for several hardware platforms, which we plan to open-source for the benefit of the community.Equipped with our efficient implementation of sparse primitives, we show that sparse versions of MobileNet v1 and MobileNet v2 architectures substantially outperform strong dense baselines on the efficiency-accuracy curve. On Snapdragon 835 our sparse networks outperform their dense equivalents by 1.1−2.2x – equivalent to approximately one entire generation of improvement. We hope that our findings will facilitate wider adoption of sparsity as a tool for creating efficient and accurate deep learning architectures.",Sparse MobileNets are faster than Dense ones with the appropriate kernels. 763,Graph inference learning for semi-supervised classification,"In this work, we address the semi-supervised classification of graph data, where the categories of those unlabeled nodes are inferred from labeled nodes as well as graph structures.Recent works often solve this problem with the advanced graph convolution in a conventional supervised manner, but the performance could be heavily affected when labeled data is scarce.Here we propose a Graph Inference Learning framework to boost the performance of node classification by learning the inference of node labels on graph topology.To bridge the connection of two nodes, we formally define a structure relation by encapsulating node attributes, between-node paths and local topological structures together, which can make inference conveniently deduced from one node to another node.For learning the inference process, we further introduce meta-optimization on structure relations from training nodes to validation nodes, such that the learnt graph inference capability can be better self-adapted into test nodes.Comprehensive evaluations on four benchmark datasets demonstrate the superiority of our GIL when compared with other state-of-the-art methods in the semi-supervised node classification task.", We propose a novel graph inference learning framework by building structure relations to infer unknown node labels from those labeled nodes in an end-to-end way. 764,Logically-Constrained Neural Fitted Q-iteration,"This paper proposes a method for efficient training of Q-function for continuous-state Markov Decision Processes, such that the traces of the resulting policies satisfy a Linear Temporal Logic property.LTL, a modal logic, can express a wide range of time-dependent logical properties including safety and liveness.We convert the LTL property into a limit deterministic Buchi automaton with which a synchronized product MDP is constructed.The control policy is then synthesised by a reinforcement learning algorithm assuming that no prior knowledge is available from the MDP.The proposed method is evaluated in a numerical study to test the quality of the generated control policy and is compared against conventional methods for policy synthesis such as MDP abstraction and approximate dynamic programming.",As safety is becoming a critical notion in machine learning we believe that this work can act as a foundation for a number of research directions such as safety-aware learning algorithms. 
765,Efficient Bayesian Inference for Nested Simulators,"We introduce two approaches for conducting efficient Bayesian inference in stochastic simulators containing nested stochastic sub-procedures, i.e., internal procedures for which the density cannot be calculated directly such as rejection sampling loops.The resulting class of simulators are used extensively throughout the sciences and can be interpreted as probabilistic generative models.However, drawing inferences from them poses a substantial challenge due to the inability to evaluate even their unnormalised density, preventing the use of many standard inference procedures like Markov Chain Monte Carlo.To address this, we introduce inference algorithms based on a two-step approach that first approximates the conditional densities of the individual sub-procedures, before using these approximations to run MCMC methods on the full program.Because the sub-procedures can be dealt with separately and are lower-dimensional than that of the overall problem, this two-step process allows them to be isolated and thus be tractably dealt with, without placing restrictions on the overall dimensionality of the problem.We demonstrate the utility of our approach on a simple, artificially constructed simulator.","We introduce two approaches for efficient and scalable inference in stochastic simulators for which the density cannot be evaluated directly due to, for example, rejection sampling loops." 766,Adversarial Training Can Hurt Generalization,"While adversarial training can improve robust accuracy, it sometimes hurts standard accuracy.Previous work has studied this tradeoff between standard and robust accuracy, but only in the setting where no predictor performs well on both objectives in the infinite data limit.In this paper, we show that even when the optimal predictor with infinite data performs well on both objectives, a tradeoff can still manifest itself with finite data.Furthermore, since our construction is based on a convex learning problem, we rule out optimization concerns, thus laying bare a fundamental tension between robustness and generalization.Finally, we show that robust self-training mostly eliminates this tradeoff by leveraging unlabeled data.","Even if there is no tradeoff in the infinite data limit, adversarial training can have worse standard accuracy even in a convex problem." 
767,Log-DenseNet: How to Sparsify a DenseNet,"Skip connections are increasingly utilized by deep neural networks to improve accuracy and cost-efficiency.In particular, the recent DenseNet is efficient in computation and parameters, and achieves state-of-the-art predictions by directly connecting each feature layer to all previous ones.However, DenseNet's extreme connectivity pattern may hinder its scalability to high depths, and in applications like fully convolutional networks, full DenseNet connections are prohibitively expensive.This work first experimentally shows that one key advantage of skip connections is to have short distances among feature layers during backpropagation.Specifically, using a fixed number of skip connections, the connection patterns with shorter backpropagation distance among layers have more accurate predictions.Following this insight, we propose a connection template, Log-DenseNet, which, in comparison to DenseNet, only slightly increases the backpropagation distances among layers from 1 to 1 + log(L), but uses only L log(L) total connections instead of O(L^2).Hence, Log-DenseNets are easier to scale than DenseNets, and no longer require careful GPU memory management.We demonstrate the effectiveness of our design principle by showing better performance than DenseNets on tabula rasa semantic segmentation, and competitive results on visual recognition.","We show shortcut connections should be placed in patterns that minimize between-layer distances during backpropagation, and design networks that achieve log L distances using L log(L) connections." 768,Weakly Supervised Disentanglement with Guarantees,"Learning disentangled representations that correspond to factors of variation in real-world data is critical to interpretable and human-controllable machine learning.Recently, concerns about the viability of learning disentangled representations in a purely unsupervised manner have spurred a shift toward the incorporation of weak supervision.However, there is currently no formalism that identifies when and how weak supervision will guarantee disentanglement.To address this issue, we provide a theoretical framework—including a calculus of disentanglement—to assist in analyzing the disentanglement guarantees conferred by weak supervision when coupled with learning algorithms based on distribution matching.We empirically verify the guarantees and limitations of several weak supervision methods, demonstrating the predictive power and usefulness of our theoretical framework.",We construct a theoretical framework for weakly supervised disentanglement and conducted lots of experiments to back up the theory.
769,A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning,"Despite the growing interest in continual learning, most of its contemporary works have been studied in a rather restricted setting where tasks are clearly distinguishable, and task boundaries are known during training.However, if our goal is to develop an algorithm that learns as humans do, this setting is far from realistic, and it is essential to develop a methodology that works in a task-free manner.Meanwhile, among several branches of continual learning, expansion-based methods have the advantage of eliminating catastrophic forgetting by allocating new resources to learn new data.In this work, we propose an expansion-based approach for task-free continual learning.Our model, named Continual Neural Dirichlet Process Mixture, consists of a set of neural network experts that are in charge of a subset of the data.CN-DPM expands the number of experts in a principled way under the Bayesian nonparametric framework.With extensive experiments, we show that our model successfully performs task-free continual learning for both discriminative and generative tasks such as image classification and image generation.",We propose an expansion-based approach for task-free continual learning for the first time. Our model consists of a set of neural network experts and expands the number of experts under the Bayesian nonparametric principle. 770,Amortized Nesterov's Momentum: Robust and Lightweight Momentum for Deep Learning,"Stochastic Gradient Descent with Nesterov's momentum is a widely used optimizer in deep learning, which is observed to have excellent generalization performance.However, due to the large stochasticity, SGD with Nesterov's momentum is not robust, i.e., its performance may deviate significantly from the expectation.In this work, we propose Amortized Nesterov's Momentum, a special variant of Nesterov's momentum which has more robust iterates, faster convergence in the early stage and higher efficiency.Our experimental results show that this new momentum achieves similar generalization performance with little-to-no tuning.In the convex case, we provide optimal convergence rates for our new methods and discuss how the theorems explain the empirical results.","Amortizing Nesterov's momentum for more robust, lightweight and fast deep learning training."
771,FAVAE: SEQUENCE DISENTANGLEMENT USING INFORMATION BOTTLENECK PRINCIPLE,"A state-of-the-art generative model, a ”factorized action variational autoencoder,” is presented for learning disentangled and interpretable representations from sequential data via the information bottleneck without supervision.The purpose of disentangled representation learning is to obtain interpretable and transferable representations from data.We focused on the disentangled representation of sequential data because there is a wide range of potential applications if disentangled representation learning is extended to sequential data such as video, speech, and stock price data.Sequential data is characterized by dynamic factors and static factors: dynamic factors are time-dependent, and static factors are independent of time.Previous works succeed in disentangling static factors and dynamic factors by explicitly modeling the priors of latent variables to distinguish between static and dynamic factors.However, these models cannot disentangle representations between dynamic factors, such as disentangling ”picking” and ”throwing” in robotic tasks.In this paper, we propose a new model that can disentangle multiple dynamic factors.Since our method does not require modeling priors, it is capable of disentangling ”between” dynamic factors.In experiments, we show that FAVAE can extract the disentangled dynamic factors.",We propose a new model that can disentangle multiple dynamic factors in sequential data 772,Verification of Generative-Model-Based Visual Transformations,"Generative networks are promising models for specifying visual transformations.Unfortunately, certification of generative models is challenging as one needs to capture sufficient non-convexity so as to produce precise bounds on the output.Existing verification methods either fail to scale to generative networks or do not capture enough non-convexity.In this work, we present a new verifier, called ApproxLine, that can certify non-trivial properties of generative networks.ApproxLine performs both deterministic and probabilistic abstract interpretation and captures infinite sets of outputs of generative networks.We show that ApproxLine can verify interesting interpolations in the network's latent space.",We verify deterministic and probabilistic properties of neural networks using non-convex relaxations over visible transformations specified by generative models 773,Multi-View Summarization and Activity Recognition Meet Edge Computing in IoT Environments,"Multi-view video summarization (MVS) receives little attention from researchers due to its major challenges of inter-view correlations and overlapping camera views.Most of the prior MVS works are offline, relying only on the summary, needing extra communication bandwidth and transmission time, with no focus on uncertain environments.Different from the existing methods, we propose edge intelligence based MVS and spatio-temporal features based activity recognition for IoT environments.We segment the multi-view videos on each slave device over edge into shots using a light-weight CNN object detection model and compute mutual information among them to generate summary.Our system does not rely on the summary only but encodes and transmits it to a master device with a neural computing stick for intelligently computing inter-view correlations and efficiently recognizing activities, thereby saving computation resources, communication bandwidth, and transmission time.Experiments report an increase of 0.4 in F-measure score on the MVS Office dataset as well as
0.2% and 2% increase in activity recognition accuracy over UCF-50 and YouTube 11 datasets, respectively, with lower storage and transmission time compared to state-of-the-art.The time complexity is decreased from 1.23 to 0.45 secs for a single frame processing, thereby generating 0.75 secs faster MVS.Furthermore, we made a new dataset by synthetically adding fog to an MVS dataset to show the adaptability of our system for both certain and uncertain surveillance environments.",An efficient multi-view video summarization scheme advanced to activity recognition in IoT environments. 774,The Natural Tendency of Feed Forward Neural Networks to Favor Invariant Units,"A central goal in the study of the primate visual cortex and hierarchical models for object recognition is understanding how and why single units trade off invariance versus sensitivity to image transformations.For example, in both deep networks and visual cortex there is substantial variation from layer-to-layer and unit-to-unit in the degree of translation invariance.Here, we provide theoretical insight into this variation and its consequences for encoding in a deep network.Our critical insight comes from the fact that rectification simultaneously decreases response variance and correlation across responses to transformed stimuli, naturally inducing a positive relationship between invariance and dynamic range.Invariant input units then tend to drive the network more than those sensitive to small image transformations.We discuss consequences of this relationship for AI: deep nets naturally weight invariant units over sensitive units, and this can be strengthened with training, perhaps contributing to generalization performance.Our results predict a signature relationship between invariance and dynamic range that can now be tested in future neurophysiological studies.",Rectification in deep neural networks naturally leads them to favor an invariant representation. 775,Improved Speech Enhancement with the Wave-U-Net,"We study the use of the Wave-U-Net architecture for speech enhancement, a model introduced by Stoller et al for the separation of music vocals and accompaniment. This end-to-end learning method for audio source separation operates directly in the time domain, permitting the integrated modelling of phase information and being able to take large temporal contexts into account. Our experiments show that the proposed method improves several metrics, namely PESQ, CSIG, CBAK, COVL and SSNR, over the state-of-the-art with respect to the speech enhancement task on the Voice Bank corpus dataset.We find that a reduced number of hidden layers is sufficient for speech enhancement in comparison to the original system designed for singing voice separation in music.We see this initial result as an encouraging signal to further explore speech enhancement in the time-domain, both as an end in itself and as a pre-processing step to speech recognition systems.","The Wave-U-Net architecture, recently introduced by Stoller et al for music source separation, is highly effective for speech enhancement, beating the state of the art." 
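Row 773 above scores shots by mutual information; a minimal, assumption-laden sketch of one plausible way to do that with intensity histograms (the paper's exact feature extraction and scoring pipeline are not reproduced here).

```python
import numpy as np

def mutual_information(frame_a, frame_b, bins=32):
    """Histogram-based MI between two grayscale frames, as a shot-similarity score."""
    joint, _, _ = np.histogram2d(frame_a.ravel(), frame_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

a = np.random.randint(0, 256, (120, 160))
b = np.random.randint(0, 256, (120, 160))
print(mutual_information(a, a), mutual_information(a, b))  # high for identical frames
```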
776,ReNeg and Backseat Driver: Learning from demonstration with continuous human feedback,"Reinforcement learning is a powerful framework for solving problems by exploring and learning from mistakes.However, in the context of autonomous vehicle control, requiring an agent to make mistakes, or even allowing mistakes, can be quite dangerous and costly in the real world.For this reason, AV RL is generally only viable in simulation.Because these simulations have imperfect representations, particularly with respect to graphics, physics, and human interaction, we find motivation for a framework similar to RL, suitable for the real world.To this end, we formulate a learning framework that learns from restricted exploration by having a human demonstrator do the exploration.Existing work on learning from demonstration typically either assumes the collected data is performed by an optimal expert, or requires potentially dangerous exploration to find the optimal policy.We propose an alternative framework that learns continuous control from only safe behavior.One of our key insights is that the problem becomes tractable if the feedback score that rates the demonstration applies to the atomic action, as opposed to the entire sequence of actions.We use human experts to collect driving data as well as to label the driving data through a framework we call Backseat Driver, giving us state-action pairs matched with scalar values representing the score for the action.We call the more general learning framework ReNeg, since it learns a regression from states to actions given negative as well as positive examples.We empirically validate several models in the ReNeg framework, testing on lane-following with limited data.We find that the best solution in this context outperforms behavioral cloning and has strong connections to stochastic policy gradient approaches.",We introduce a novel framework for learning from demonstration that uses continuous human feedback; we evaluate this framework on continuous control for autonomous vehicles. 777,Generating Diverse High-Resolution Images with VQ-VAE,"We explore the use of Vector Quantized Variational AutoEncoder models for large scale image generation.To this end, we scale and enhance the autoregressive priors used in VQ-VAE to generate synthetic samples of much higher coherence and fidelity than possible before. We use simple feed-forward encoder and decoder networks, thus our model is an attractive candidate for applications where the encoding and decoding speed is critical.Additionally, this allows us to only sample autoregressively in the compressed latent space, which is an order of magnitude faster than sampling in the pixel space, especially for large images.We demonstrate that a multi-scale hierarchical organization of VQ-VAE, augmented with powerful priors over the latent codes, is able to generate samples with quality that rivals that of state of the art Generative Adversarial Networks on multifaceted datasets such as ImageNet, while not suffering from GAN's known shortcomings such as mode collapse and lack of diversity.",Scale and enhance VQ-VAE with powerful priors to generate near-realistic images.
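A minimal sketch of the vector-quantization bottleneck at the core of VQ-VAE models like the one in row 777: each encoder output is snapped to its nearest codebook entry, and a straight-through estimator lets gradients flow back to the encoder. Codebook size and dimensions are illustrative assumptions.

```python
import torch

def vector_quantize(z_e, codebook):
    """z_e: (N, D) encoder outputs; codebook: (K, D) embedding table."""
    dists = torch.cdist(z_e, codebook)        # (N, K) pairwise distances
    codes = dists.argmin(dim=1)               # index of nearest codebook entry
    z_q = codebook[codes]                     # quantized vectors
    z_q_st = z_e + (z_q - z_e).detach()       # straight-through estimator
    return z_q_st, codes

z_e = torch.randn(8, 64, requires_grad=True)
codebook = torch.randn(512, 64)
z_q, codes = vector_quantize(z_e, codebook)
z_q.sum().backward()                          # gradients reach z_e despite the discrete lookup
```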
778,A Graph Neural Network Assisted Monte Carlo Tree Search Approach to Traveling Salesman Problem,"We present a graph neural network assisted Monte Carlo Tree Search approach for the classical traveling salesman problem.We adopt a greedy algorithm framework to construct the optimal solution to TSP by adding the nodes successively.A graph neural network is trained to capture the local and global graph structure and give the prior probability of selecting each vertex every step.The prior probability provides a heuristics for MCTS, and the MCTS output is an improved probability for selecting the successive vertex, as it is the feedback information by fusing the prior with the scouting procedure.Experimental results on TSP up to 100 nodes demonstrate that the proposed method obtains shorter tours than other learning-based methods.",A Graph Neural Network Assisted Monte Carlo Tree Search Approach to Traveling Salesman Problem 779,Directional Message Passing for Molecular Graphs,"Graph neural networks have recently achieved great successes in predicting quantum mechanical properties of molecules.These models represent a molecule as a graph using only the distance between atoms and not the spatial direction from one atom to another.However, directional information plays a central role in empirical potentials for molecules, e.g. in angular potentials.To alleviate this limitation we propose directional message passing, in which we embed the messages passed between atoms instead of the atoms themselves.Each message is associated with a direction in coordinate space.These directional message embeddings are rotationally equivariant since the associated directions rotate with the molecule.We propose a message passing scheme analogous to belief propagation, which uses the directional information by transforming messages based on the angle between them.Additionally, we use spherical Bessel functions to construct a theoretically well-founded, orthogonal radial basis that achieves better performance than the currently prevalent Gaussian radial basis functions while using more than 4x fewer parameters.We leverage these innovations to construct the directional message passing neural network.DimeNet outperforms previous GNNs on average by 77% on MD17 and by 41% on QM9.",Directional message passing incorporates spatial directional information to improve graph neural networks. 
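A rough sketch of the greedy tour construction described in row 778, where the next vertex is chosen using a per-vertex prior. The uniform prior and the additive scoring rule here are stand-in assumptions; in the paper the prior comes from a graph neural network and is refined by MCTS.

```python
import numpy as np

def greedy_tour(coords, prior=None):
    """Build a TSP tour greedily, preferring close nodes with high prior score."""
    n = len(coords)
    prior = np.ones(n) if prior is None else prior
    tour, visited = [0], {0}
    while len(tour) < n:
        cur = tour[-1]
        dists = np.linalg.norm(coords - coords[cur], axis=1)
        mask = np.array([i in visited for i in range(n)])
        score = np.where(mask, -np.inf, prior - dists)   # assumed scoring rule
        nxt = int(np.argmax(score))
        tour.append(nxt)
        visited.add(nxt)
    return tour

coords = np.random.rand(10, 2)
print(greedy_tour(coords))
```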
780,Improving Sample Efficiency in Model-Free Reinforcement Learning from Images,"Training an agent to solve control tasks directly from high-dimensional images with model-free reinforcement learning has proven difficult.The agent needs to learn a latent representation together with a control policy to perform the task.Fitting a high-capacity encoder using a scarce reward signal is not only extremely sample inefficient, but also prone to suboptimal convergence.Two ways to improve sample efficiency are to learn a good feature representation and use off-policy algorithms.We dissect various approaches to learning good latent features, and conclude that the image reconstruction loss is the essential ingredient that enables efficient and stable representation learning in image-based RL.Following these findings, we devise an off-policy actor-critic algorithm with an auxiliary decoder that trains end-to-end and matches state-of-the-art performance across both model-free and model-based algorithms on many challenging control tasks.We release our code to encourage future research on image-based RL.",We design a simple and efficient model-free off-policy method for image-based reinforcement learning that matches the state-of-the-art model-based methods in sample efficiency 781,Chopout: A Simple Way to Train Variable Sized Neural Networks at Once," Large deep neural networks require huge memory to run and their running speed is sometimes too slow for real applications.Therefore, network size reduction while keeping accuracy is crucial for practical applications.We present a novel neural network operator, chopout, with which neural networks are trained, even in a single training process, so that truncated sub-networks perform as well as possible.Chopout is easy to implement and integrate into most types of existing neural networks.Furthermore, it enables reducing the size of networks and latent representations even after training, just by truncating layers.We show its effectiveness through several experiments.","We present a novel simple operator, chopout, with which neural networks are trained, even in a single training process, so that truncated sub-networks perform as well as possible." 782,Multi-Modal Generative Adversarial Networks for Diverse Datasets,"Generative Adversarial Networks have been shown to produce realistic-looking synthetic images with remarkable success, yet their performance seems less impressive when the training set is highly diverse.In order to provide a better fit to the target data distribution when the dataset includes many different classes, we propose a variant of the basic GAN model, a Multi-Modal Gaussian-Mixture GAN, where the probability distribution over the latent space is a mixture of Gaussians.We also propose a supervised variant which is capable of conditional sample synthesis.In order to evaluate the model's performance, we propose a new scoring method which separately takes into account two measures - diversity vs.
quality of the generated data.Through a series of experiments, using both synthetic and real-world datasets, we quantitatively show that GM-GANs outperform baselines, both when evaluated using the commonly used Inception Score, and when evaluated using our own alternative scoring method.In addition, we qualitatively demonstrate how the unsupervised variant of GM-GAN tends to map latent vectors sampled from different Gaussians in the latent space to samples of different classes in the data space.We show how this phenomenon can be exploited for the task of unsupervised clustering, and provide quantitative evaluation showing the superiority of our method for the unsupervised clustering of image datasets.Finally, we demonstrate a feature which further sets our model apart from other GAN models: the option to control the quality-diversity trade-off by altering, post-training, the probability distribution of the latent space.This allows one to sample higher quality and lower diversity samples, or vice versa, according to one's needs.",Multi-modal Gaussian distribution of latent space in GAN models improves performance and allows trading off quality vs. diversity 783,SloMo: Improving Communication-Efficient Distributed SGD with Slow Momentum,"Distributed optimization is essential for training large models on large datasets.Multiple approaches have been proposed to reduce the communication overhead in distributed training, such as synchronizing only after performing multiple local SGD steps, and decentralized methods to decouple communications among workers.Although these methods run faster than AllReduce-based methods, which use blocking communication before every update, the resulting models may be less accurate after the same number of updates.Inspired by the BMUF method of Chen & Huo, we propose a slow momentum framework, where workers periodically synchronize and perform a momentum update, after multiple iterations of a base optimization algorithm.Experiments on image classification and machine translation tasks demonstrate that SloMo consistently yields improvements in optimization and generalization performance relative to the base optimizer, even when the additional overhead is amortized over many updates so that the SloMo runtime is on par with that of the base optimizer.We provide theoretical convergence guarantees showing that SloMo converges to a stationary point of smooth non-convex losses.Since BMUF is a particular instance of the SloMo framework, our results also correspond to the first theoretical convergence guarantees for BMUF.",SlowMo improves the optimization and generalization performance of communication-efficient decentralized algorithms without sacrificing speed. 784,Memorization in Overparameterized Autoencoders,"Interpolation of data in deep neural networks has become a subject of significant research interest. We prove that over-parameterized single-layer fully connected autoencoders do not merely interpolate, but rather, memorize training data: they produce outputs in the span of the training examples.In contrast to fully connected autoencoders, we prove that depth is necessary for memorization in convolutional autoencoders. Moreover, we observe that adding nonlinearity to deep convolutional autoencoders results in a stronger form of memorization: instead of outputting points in the span of the training images, deep convolutional autoencoders tend to output individual training images.
Since convolutional autoencoder components are building blocks of deep convolutional networks, we envision that our findings will shed light on the important question of the inductive bias in over-parameterized deep networks.", We identify memorization as the inductive bias of interpolation in overparameterized fully connected and convolutional auto-encoders. 785,Span Recovery for Deep Neural Networks with Applications to Input Obfuscation,"The tremendous success of deep neural networks has motivated the need to better understand the fundamental properties of these networks, but many of the theoretical results proposed have only been for shallow networks.In this paper, we study an important primitive for understanding the meaningful input space of a deep network: span recovery.For, let be the innermost weight matrix of an arbitrary feed forward neural network, so can be written as, for some network.The goal is then to recover the row span of given only oracle access to the value of. We show that if is a multi-layered network with ReLU activation functions, then partial recovery is possible: namely, we can provably recover linearly independent vectors in the row span of using poly non-adaptive queries to. Furthermore, if has differentiable activation functions, we demonstrate that span recovery is possible even when the output is first passed through a sign or thresholding function; in this case our algorithm is adaptive.Empirically, we confirm that full span recovery is not always possible, but only for unrealistically thin layers.For reasonably wide networks, we obtain full span recovery on both random networks and networks trained on MNIST data.Furthermore, we demonstrate the utility of span recovery as an attack by inducing neural networks to misclassify data obfuscated by controlled random noise as sensical inputs.",We provably recover the span of a deep multi-layered neural network with latent structure and empirically apply efficient span recovery algorithms to attack networks by obfuscating inputs. 
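A small numerical illustration of the span claim in row 784: for a single-layer linear autoencoder x -> Wx trained by gradient descent from zero initialization, the column space of W stays inside the span of the training examples, so any output lies in that span. The data sizes and step size here are arbitrary choices for the demo, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 5
X = rng.standard_normal((d, n))          # n training examples in R^d (columns)
W = np.zeros((d, d))                     # zero initialization
for _ in range(2000):
    grad = (W @ X - X) @ X.T / n         # gradient of 0.5 * ||W X - X||^2 / n
    W -= 0.01 * grad

x_test = rng.standard_normal(d)
out = W @ x_test                          # output on an unseen input
coeffs, *_ = np.linalg.lstsq(X, out, rcond=None)
residual = np.linalg.norm(out - X @ coeffs)
print("residual of projecting the output onto span(training data):", residual)  # ~0
```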
786,Gradient Descent Maximizes the Margin of Homogeneous Neural Networks,"In this paper, we study the implicit regularization of the gradient descent algorithm in homogeneous neural networks, including fully-connected and convolutional neural networks with ReLU or LeakyReLU activations.In particular, we study the gradient descent or gradient flow optimizing the logistic loss or cross-entropy loss of any homogeneous model, and show that if the training loss decreases below a certain threshold, then we can define a smoothed version of the normalized margin which increases over time.We also formulate a natural constrained optimization problem related to margin maximization, and prove that both the normalized margin and its smoothed version converge to the objective value at a KKT point of the optimization problem.Our results generalize the previous results for logistic regression with one-layer or multi-layer linear networks, and provide more quantitative convergence results with weaker assumptions than previous results for homogeneous smooth neural networks.We conduct several experiments to justify our theoretical findings on MNIST and CIFAR-10 datasets.Finally, as margin is closely related to robustness, we discuss potential benefits of training longer for improving the robustness of the model.",We study the implicit bias of gradient descent and prove under a minimal set of assumptions that the parameter direction of homogeneous models converges to KKT points of a natural margin maximization problem. 787,Convolutional Tensor-Train LSTM for Long-Term Video Prediction,"Long-term video prediction is highly challenging since it entails simultaneously capturing spatial and temporal information across a long range of image frames.Standard recurrent models are ineffective since they are prone to error propagation and cannot effectively capture higher-order correlations.A potential solution is to extend to higher-order spatio-temporal recurrent models.However, such a model requires a large number of parameters and operations, making it intractable to learn in practice and prone to overfitting.In this work, we propose convolutional tensor-train LSTM, which learns higher-order Convolutional LSTM efficiently using convolutional tensor-train decomposition.Our proposed model naturally incorporates higher-order spatio-temporal information at a small cost of memory and computation by using efficient low-rank tensor representations.We evaluate our model on Moving-MNIST and KTH datasets and show improvements over standard ConvLSTM and better/comparable results to other ConvLSTM-based approaches, but with much fewer parameters.","we propose convolutional tensor-train LSTM, which learns higher-order Convolutional LSTM efficiently using convolutional tensor-train decomposition.
" 788,Make SVM great again with Siamese kernel for few-shot learning,"While deep neural networks have shown outstanding results in a wide range of applications,learning from a very limited number of examples is still a challengingtask.Despite the difficulties of the few-shot learning, metric-learning techniquesshowed the potential of the neural networks for this task.While these methodsperform well, they don’t provide satisfactory results.In this work, the idea ofmetric-learning is extended with Support Vector Machines working mechanism,which is well known for generalization capabilities on a small dataset.Furthermore, this paper presents an end-to-end learning framework for trainingadaptive kernel SVMs, which eliminates the problem of choosing a correct kerneland good features for SVMs.Next, the one-shot learning problem is redefinedfor audio signals.Then the model was tested on vision task and speech task as well.Actually, the algorithmusing Omniglot dataset improved accuracy from 98.1% to 98.5% on the one-shotclassification task and from 98.9% to 99.3% on the few-shot classification task.","The proposed method is an end-to-end neural SVM, which is optimized for few-shot learning." 789,V4D: 4D Convonlutional Neural Networks for Video-level Representation Learning,"Most existing 3D CNN structures for video representation learning are clip-based methods, and do not consider video-level temporal evolution of spatio-temporal features.In this paper, we propose Video-level 4D Convolutional Neural Networks, namely V4D, to model the evolution of long-range spatio-temporal representation with 4D convolutions, as well as preserving 3D spatio-temporal representations with residual connections.We further introduce the training and inference methods for the proposed V4D.Extensive experiments are conducted on three video recognition benchmarks, where V4D achieves excellent results, surpassing recent 3D CNNs by a large margin.","A novel 4D CNN structure for video-level representation learning, surpassing recent 3D CNNs." 790,Differentiable Programming for Physical Simulation,"We study the problem of learning and optimizing through physical simulations via differentiable programming.We present DiffSim, a new differentiable programming language tailored for building high-performance differentiable physical simulations.We demonstrate the performance and productivity of our language in gradient-based learning and optimization tasks on 10 different physical simulators.For example, a differentiable elastic object simulator written in our language is 4.6x faster than the hand-engineered CUDA version yet runs as fast, and is 188x faster than TensorFlow.Using our differentiable programs, neural network controllers are typically optimized within only tens of iterations.Finally, we share the lessons learned from our experience developing these simulators, that is, differentiating physical simulators does not always yield useful gradients of the physical system being simulated.We systematically study the underlying reasons and propose solutions to improve gradient quality.","We study the problem of learning and optimizing through physical simulations via differentiable programming, using our proposed DiffSim programming language and compiler." 
791,Local and global model interpretability via backward selection and clustering,"Local explanation frameworks aim to rationalize particular decisions made by a black-box prediction model.Existing techniques are often restricted to a specific type of predictor or based on input saliency, which may be undesirably sensitive to factors unrelated to the model's decision making process.We instead propose sufficient input subsets that identify minimal subsets of features whose observed values alone suffice for the same decision to be reached, even if all other input feature values are missing.General principles that globally govern a model's decision-making can also be revealed by searching for clusters of such input patterns across many data points.Our approach is conceptually straightforward, entirely model-agnostic, simply implemented using instance-wise backward selection, and able to produce more concise rationales than existing techniques.We demonstrate the utility of our interpretation method on neural network models trained on text and image data.",We present a method for interpreting black-box models by using instance-wise backward selection to identify minimal subsets of features that alone suffice to justify a particular decision made by the model. 792,Out-of-Distribution Detection Using Layerwise Uncertainty in Deep Neural Networks,"In this paper, we tackle the problem of detecting samples that are not drawn from the training distribution, i.e., out-of-distribution samples, in classification.Many previous studies have attempted to solve this problem by regarding samples with low classification confidence as OOD examples using deep neural networks.However, on difficult datasets or models with low classification ability, these methods incorrectly regard in-distribution samples close to the decision boundary as OOD samples.This problem arises because their approaches use only the features close to the output layer and disregard the uncertainty of the features.Therefore, we propose a method that extracts the uncertainties of features in each layer of DNNs using a reparameterization trick and combines them.In experiments, our method outperforms the existing methods by a large margin, achieving state-of-the-art detection performance on several datasets and classification models.For example, our method increases the AUROC score of prior work to 99.8% in DenseNet on the CIFAR-100 and Tiny-ImageNet datasets.",We propose a method that extracts the uncertainties of features in each layer of DNNs and combines them for detecting OOD samples when solving classification tasks. 793,Adversarial Mixup Resynthesizers,"In this paper, we explore new approaches to combining information encoded within the learned representations of autoencoders.We explore models that are capable of combining the attributes of multiple inputs such that a resynthesised output is trained to fool an adversarial discriminator for real versus synthesised data.Furthermore, we explore the use of such an architecture in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent representations that are consistent with a conditioned class label.We show quantitative and qualitative evidence that such a formulation is an interesting avenue of research.",We leverage deterministic autoencoders as generative models by proposing mixing functions which combine hidden states from pairs of images.
These mixes are made to look realistic through an adversarial framework. 794,HANDLING CONCEPT DRIFT IN WIFI-BASED INDOOR LOCALIZATION USING REPRESENTATION LEARNING,"We outline the problem of concept drift for time series data.In this work, we analyze the temporal inconsistency of streaming wireless signals in the context of device-free passive indoor localization.We show that data obtained from WiFi channel state information can be used to train a robust system capable of performing room-level localization.One of the most challenging issues for such a system is the movement of the input data distribution to an unexplored space over time, which leads to an unwanted shift in the learned boundaries of the output space.In this work, we propose a phase and magnitude augmented feature space along with a standardization technique that is little affected by drifts.We show that this robust representation of the data yields better learning accuracy and requires fewer retraining cycles.",We introduce an augmented robust feature space for streaming wifi data that is capable of tackling concept drift for indoor localization 795,Federated Adversarial Domain Adaptation,"Federated learning improves data privacy and efficiency in machine learning performed over networks of distributed devices, such as mobile phones, IoT and wearable devices, etc.Yet models trained with federated learning can still fail to generalize to new devices due to the problem of domain shift.Domain shift occurs when the labeled data collected by source nodes statistically differs from the target node's unlabeled data.In this work, we present a principled approach to the problem of federated domain adaptation, which aims to align the representations learned among the different nodes with the data distribution of the target node.Our approach extends adversarial adaptation techniques to the constraints of the federated setting.In addition, we devise a dynamic attention mechanism and leverage feature disentanglement to enhance knowledge transfer.Empirically, we perform extensive experiments on several image and text classification tasks and show promising results under the unsupervised federated domain adaptation setting.","we present a principled approach to the problem of federated domain adaptation, which aims to align the representations learned among the different nodes with the data distribution of the target node."
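A minimal sketch of the kind of phase-and-magnitude feature space with standardization described in row 794. The array shapes and the use of phase unwrapping are assumptions for illustration; the paper's exact preprocessing is not reproduced here.

```python
import numpy as np

def csi_features(csi):
    """csi: complex array (time, subcarriers) of WiFi channel state information."""
    mag = np.abs(csi)                               # magnitude component
    phase = np.unwrap(np.angle(csi), axis=1)        # (unwrapped) phase component
    feats = np.concatenate([mag, phase], axis=1)    # augmented feature space
    # Standardize each feature so slow drifts in the raw signal have less effect.
    return (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)

csi = np.random.randn(100, 30) + 1j * np.random.randn(100, 30)
print(csi_features(csi).shape)   # (100, 60)
```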
796,Matrix capsules with EM routing,"A capsule is a group of neurons whose outputs represent different properties of the same entity.Each layer in a capsule network contains many capsules.We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the relationship between that entity and the viewer.A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships.Each of these votes is weighted by an assignment coefficient.These coefficients are iteratively updated for each image using the Expectation-Maximization algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes.The transformation matrices are trained discriminatively by backpropagating through the unrolled iterations of EM between each pair of adjacent capsule layers.On the smallNORB benchmark, capsules reduce the number of test errors by 45% compared to the state-of-the-art.Capsules also show far more resistance to white box adversarial attacks than our baseline convolutional neural network.","Capsule networks with learned pose matrices and EM routing improve state of the art classification on smallNORB, generalizability to new viewpoints, and white box adversarial robustness. " 797,Learning a Meta-Solver for Syntax-Guided Program Synthesis,"We study a general formulation of program synthesis called syntax-guided synthesis that concerns synthesizing a program that follows a given grammar and satisfies a given logical specification.Both the logical specification and the grammar have complex structures and can vary from task to task, posing significant challenges for learning across different tasks.Furthermore, training data is often unavailable for domain-specific synthesis tasks.To address these challenges, we propose a meta-learning framework that learns a transferable policy from only weak supervision.Our framework consists of three components: 1) an encoder, which embeds both the logical specification and grammar at the same time using a graph neural network; 2) a grammar adaptive policy network which enables learning a transferable policy; and 3) a reinforcement learning algorithm that jointly trains the embedding and adaptive policy.We evaluate the framework on 214 cryptographic circuit synthesis tasks.It solves 141 of them in the out-of-box solver setting, significantly outperforming a similar search-based approach without learning, which solves only 31.The result is comparable to two state-of-the-art classical synthesis engines, which solve 129 and 153 respectively.In the meta-solver setting, the framework can efficiently adapt to unseen tasks and achieves speedup ranging from 2x up to 100x.",We propose a meta-learning framework that learns a transferable policy from only weak supervision to solve synthesis tasks with different logical specifications and grammars.
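A minimal sketch of only the vote computation described in row 796: each lower-level capsule's 4x4 pose matrix is multiplied by a trainable transformation matrix per higher-level capsule. The EM routing that weights and clusters these votes is deliberately omitted, and the capsule counts are illustrative assumptions.

```python
import torch

n_in, n_out = 8, 4
poses = torch.randn(n_in, 4, 4)                  # pose matrix of each lower capsule
transforms = torch.randn(n_in, n_out, 4, 4)      # trainable viewpoint-invariant maps

# vote[i, o] = pose[i] @ transform[i, o]; routing would then weight these votes.
votes = torch.einsum('iab,iobc->ioac', poses, transforms)
print(votes.shape)   # torch.Size([8, 4, 4, 4])
```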
798,Large Scale Optimal Transport and Mapping Estimation,"This paper presents a novel two-step approach for the fundamental problem of learning an optimal map from one distribution to another.First, we learn an optimal transport plan, which can be thought of as a one-to-many map between the two distributions.To that end, we propose a stochastic dual approach to regularized OT, and show empirically that it scales better than a recent related approach when the number of samples is very large.Second, we estimate a Monge map as a deep neural network learned by approximating the barycentric projection of the previously-obtained OT plan.This parameterization allows generalization of the mapping outside the support of the input measure.We prove two theoretical stability results of regularized OT which show that our estimations converge to the OT and Monge map between the underlying continuous measures.We showcase our proposed approach on two applications: domain adaptation and generative modeling.",Learning an optimal mapping between distributions with a deep NN, along with theoretical guarantees. 799,Word Mover's Embedding: From Word2Vec to Document Embedding,"Learning effective text representations is a key foundation for numerous machine learning and NLP applications.While the celebrated Word2Vec technique yields semantically rich word representations, it is less clear whether sentence or document representations should be built upon word representations or from scratch.Recent work has demonstrated that a distance measure between documents called the Word Mover's Distance (WMD), which aligns semantically similar words, yields unprecedented KNN classification accuracy.However, WMD is very expensive to compute, and is harder to apply beyond simple KNN than feature embeddings.In this paper, we propose the Word Mover's Embedding (WME), a novel approach to building an unsupervised document embedding from pre-trained word embeddings.Our technique extends prior theory to show convergence of the inner product between WMEs to a positive-definite kernel that can be interpreted as a soft version of WMD.The proposed embedding is more efficient and flexible than WMD in many situations.As an example, WME with a simple linear classifier reduces the computational cost of WMD-based KNN in both document length and number of samples, while simultaneously improving accuracy.In experiments on 9 benchmark text classification datasets and 22 textual similarity tasks, the proposed technique consistently matches or outperforms state-of-the-art techniques, with significantly higher accuracy on problems of short length.",A novel approach to building unsupervised document (sentence) embeddings from pre-trained word embeddings 800,A Statistical Approach to Assessing Neural Network Robustness,"We present a new approach to assessing the robustness of neural networks based on estimating the proportion of inputs for which a property is violated.Specifically, we estimate the probability of the event that the property is violated under an input model.Our approach critically varies from the formal verification framework in that when the property can be violated, it provides an informative notion of how robust the network is, rather than just the conventional assertion that the network is not verifiable.Furthermore, it provides an ability to scale to larger networks than formal verification approaches.Though the framework still provides a formal guarantee of satisfiability whenever it successfully finds one or more violations, these advantages do come at the cost of only providing a
statistical estimate of unsatisfiability whenever no violation is found.Key to the practical success of our approach is an adaptation of multi-level splitting, a Monte Carlo approach for estimating the probability of rare events, to our statistical robustness framework.We demonstrate that our approach is able to emulate formal verification procedures on benchmark problems, while scaling to larger networks and providing reliable additional information in the form of accurate estimates of the violation probability.","We introduce a statistical approach to assessing neural network robustness that provides an informative notion of how robust a network is, rather than just the conventional binary assertion of whether or not a property is violated." 801,Overcoming the Lexical Overlap Bias Using Predicate-Argument Structures,"Recent pretrained transformer-based language models have set state-of-the-art performance on various NLP datasets.However, despite their great progress, they suffer from various structural and syntactic biases.In this work, we investigate the lexical overlap bias, e.g., the model classifies two sentences that have a high lexical overlap as entailing regardless of their underlying meaning.To improve the robustness, we enrich input sentences of the training data with their automatically detected predicate-argument structures.This enhanced representation allows the transformer-based models to learn different attention patterns by focusing on and recognizing the major semantically and syntactically important parts of the sentences. We evaluate our solution for the tasks of natural language inference and grounded commonsense inference using the BERT, RoBERTa, and XLNET models.We evaluate the models' understanding of syntactic variations, antonym relations, and named entities in the presence of lexical overlap.Our results show that the incorporation of predicate-argument structures during fine-tuning considerably improves the robustness, e.g., about 20pp on discriminating different named entities, while it incurs no additional cost at test time and does not require changing the model or the training procedure.",Enhancing the robustness of pretrained transformer models against the lexical overlap bias by extending the input sentences of the training data with their corresponding predicate-argument structures 802,"Neural Network Regression with Beta, Dirichlet, and Dirichlet-Multinomial Outputs","We propose a method for quantifying uncertainty in neural network regression models when the targets are real values on a simplex, such as probabilities.We show that each target can be modeled as a sample from a Dirichlet distribution, where the parameters of the Dirichlet are provided by the output of a neural network, and that the combined model can be trained using the gradient of the data likelihood.This approach provides interpretable predictions in the form of multidimensional distributions, rather than point estimates, from which one can obtain confidence intervals or quantify risk in decision making.Furthermore, we show that the same approach can be used to model targets in the form of empirical counts as samples from the Dirichlet-multinomial compound distribution.In experiments, we verify that our approach provides these benefits without harming the performance of the point estimate predictions on two diverse applications: distilling deep convolutional networks trained on CIFAR-100, and predicting the location of particle collisions in the XENON1T Dark
Matter detector.",Neural network regression should use Dirichlet output distribution when targets are probabilities in order to quantify uncertainty of predictions. 803,Learning Awareness Models,"We consider the setting of an agent with a fixed body interacting with an unknown and uncertain external world.""We show that models trained to predict proprioceptive information about the agent's body come to represent objects in the external world."", ""In spite of being trained with only internally available signals, these dynamic body models come to represent external objects through the necessity of predicting their effects on the agent's own body."", 'That is, the model learns holistic persistent representations of objects in the world, even though the only training signals are body signals.Our dynamics model is able to successfully predict distributions over 132 sensor readings over 100 steps into the future and we demonstrate that even when the body is no longer in contact with an object, the latent variables of the dynamics model continue to represent its shape.We show that active data collection by maximizing the entropy of predictions about the body---touch sensors, proprioception and vestibular information---leads to learning of dynamic models that show superior performance when used for control.We also collect data from a real robotic hand and show that the same models can be used to answer questions about properties of objects in the real world.Videos with qualitative results of our models are available at https://goo.gl/mZuqAV.",We train predictive models on proprioceptive information and show they represent properties of external objects. 804,Graph Topological Features via GAN,"Inspired by the success of generative adversarial networks in image domains, we introduce a novel hierarchical architecture for learning characteristic topological features from a single arbitrary input graph via GANs.The hierarchical architecture consisting of multiple GANs preserves both local and global topological features, and automatically partitions the input graph into representative stages for feature learning.The stages facilitate reconstruction and can be used as indicators of the importance of the associated topological structures.Experiments show that our method produces subgraphs retaining a wide range of topological features, even in early reconstruction stages.This paper contains original research on combining the use of GANs and graph topological analysis.",A GAN based method to learn important topological features of an arbitrary input graph. 
805,Predicting Auction Price of Vehicle License Plate with Deep Recurrent Neural Network,"In Chinese societies, superstition is of paramount importance, and vehicle license plates with desirable numbers can fetch very high prices in auctions.Unlike other valuable items, license plates are not allocated an estimated price before auction.I propose that the task of predicting plate prices can be viewed as a natural language processing task, as the value depends on the meaning of each individual character on the plate and its semantics.I construct a deep recurrent neural network to predict the prices of vehicle license plates in Hong Kong, based on the characters on a plate.I demonstrate the importance of having a deep network and of retraining.Evaluated on 13 years of historical auction prices, the deep RNN's predictions can explain over 80 percent of price variations, outperforming previous models by a significant margin.I also demonstrate how the model can be extended to become a search engine for plates and to provide estimates of the expected price distribution.","Predicting auction price of vehicle license plates in Hong Kong with deep recurrent neural network, based on the characters on the plates." 806,Latent Convolutional Models,"We present a new latent model of natural images that can be learned on large-scale datasets.The learning process provides a latent embedding for every image in the training dataset, as well as a deep convolutional network that maps the latent space to the image space.After training, the new model provides a strong and universal image prior for a variety of image restoration tasks such as large-hole inpainting, superresolution, and colorization.To model high-resolution natural images, our approach uses latent spaces of very high dimensionality.To tackle this high dimensionality, we use latent spaces with a special manifold structure parameterized by a ConvNet of a certain architecture.In the experiments, we compare the learned latent models with latent models learned by autoencoders, advanced variants of generative adversarial networks, and a strong baseline system using simpler parameterization of the latent space.Our model outperforms the competing approaches over a range of restoration tasks.",We present a new deep latent model of natural images that can be trained from unlabeled datasets and can be utilized to solve various image restoration tasks.
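A rough sketch of a character-level recurrent regressor in the spirit of row 805: embed each plate character, run an LSTM, and regress a scalar price from the final hidden state. The vocabulary size, layer sizes, and log-price target are illustrative assumptions, not the paper's configuration.

```python
import torch
from torch import nn

class PlatePriceRNN(nn.Module):
    def __init__(self, vocab=40, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)
    def forward(self, chars):                 # chars: (batch, plate_length) int ids
        _, (h, _) = self.lstm(self.emb(chars))
        return self.out(h[-1]).squeeze(-1)    # predicted (log-)price per plate

model = PlatePriceRNN()
print(model(torch.randint(0, 40, (4, 6))).shape)   # torch.Size([4])
```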
807,RefNet: Automatic Essay Scoring by Pairwise Comparison,"Automatic Essay Scoring has been an active research area as it can greatly reduce the workload of teachers and prevent subjectivity bias.Most recent AES solutions apply deep neural network-based models with regression, where the neural-network-based encoder learns an essay representation that helps differentiate among the essays and the corresponding essay score is inferred by a regressor.Such a DNN approach usually requires a lot of expert-rated essays as training data in order to learn a good essay representation for accurate scoring.However, such data is usually expensive and thus is sparse.Inspired by the observation that humans usually score an essay by comparing it with some references, we propose a Siamese framework called Referee Network which allows the model to compare the quality of two essays by capturing the relative features that can differentiate the essay pair.The proposed framework can be applied as an extension to regression models as it can capture additional relative features on top of internal information.Moreover, it intrinsically augments the data by pairing, and is thus ideal for handling data sparsity.Experiments show that our framework can significantly improve the existing regression models and achieve acceptable performance even when the training data is greatly reduced.",Automatically score essays on sparse data by comparing new essays with known samples with Referee Network. 808,Prototype Matching Networks for Large-Scale Multi-label Genomic Sequence Classification,"One of the fundamental tasks in understanding genomics is the problem of predicting Transcription Factor Binding Sites.With hundreds of Transcription Factors as labels, genomic-sequence based TFBS prediction is a challenging multi-label classification task.There are two major biological mechanisms for TF binding: sequence-specific binding patterns on genomes known as “motifs” and interactions among TFs known as co-binding effects.In this paper, we propose a novel deep architecture, the Prototype Matching Network, to mimic the TF binding mechanisms.Our PMN model automatically extracts prototypes for each TF through a novel prototype-matching loss.Borrowing ideas from few-shot matching models, we use the notion of a support set of prototypes and an LSTM to learn how TFs interact and bind to genomic sequences.On a reference TFBS dataset with 2.1 million genomic sequences, PMN significantly outperforms baselines and validates our design choices empirically.To our knowledge, this is the first deep learning architecture that introduces prototype learning and considers TF-TF interactions for large-scale TFBS prediction.Not only is the proposed architecture accurate, but it also models the underlying biology.",We combine the matching network framework for few-shot learning into a large-scale multi-label model for genomic sequence classification.
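A minimal sketch of a Siamese pairwise scorer in the spirit of row 807: two essays are encoded with shared weights and a small head predicts which of the pair is better. The mean-pooled bag-of-words encoder and the feature combination are placeholder assumptions, not RefNet's actual encoder.

```python
import torch
from torch import nn

class PairwiseScorer(nn.Module):
    def __init__(self, vocab=10000, dim=64):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)            # shared (Siamese) encoder
        self.head = nn.Linear(2 * dim, 1)
    def forward(self, essay_a, essay_b):                  # (batch, tokens) int ids
        ha, hb = self.emb(essay_a), self.emb(essay_b)
        feats = torch.cat([ha - hb, ha * hb], dim=1)      # relative features of the pair
        return torch.sigmoid(self.head(feats))            # P(essay_a outranks essay_b)

scorer = PairwiseScorer()
a = torch.randint(0, 10000, (8, 120))
b = torch.randint(0, 10000, (8, 120))
print(scorer(a, b).shape)   # torch.Size([8, 1])
```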
809,Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness,"Previous work shows that adversarially robust generalization requires larger sample complexity, and the same dataset, e.g., CIFAR-10, which enables good standard accuracy, may not suffice to train robust models.Since collecting new training data could be costly, we focus on better utilizing the given data by inducing the regions with high sample density in the feature space, which could lead to locally sufficient samples for robust learning.We first formally show that the softmax cross-entropy loss and its variants convey inappropriate supervisory signals, which encourage the learned feature points to spread over the space sparsely in training.This inspires us to propose the Max-Mahalanobis center loss to explicitly induce dense feature regions in order to benefit robustness.Namely, the MMC loss encourages the model to concentrate on learning ordered and compact representations, which gather around the preset optimal centers for different classes.We empirically demonstrate that applying the MMC loss can significantly improve robustness even under strong adaptive attacks, while keeping state-of-the-art accuracy on clean inputs with little extra computation compared to the SCE loss.",Applying the softmax function in training leads to indirect and unexpected supervision on features. We propose a new training objective to explicitly induce dense feature regions for locally sufficient samples to benefit adversarial robustness. 810,PixelNN: Example-based Image Synthesis,"We present a simple nearest-neighbor approach that synthesizes high-frequency photorealistic images from an incomplete signal such as a low-resolution image, a surface normal map, or edges.Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem; (2) they are not interpretable, making it difficult to control the synthesized output.We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets.We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network to map the input to a smoothed image, and the second stage uses a pixel-wise nearest neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner.Importantly, pixel-wise matching allows our method to compose novel high-frequency content by cutting-and-pasting pixels from different training exemplars. We demonstrate our approach for various input modalities, and for various domains including human faces, pets, shoes, and handbags.","Pixel-wise nearest neighbors used for generating multiple images from incomplete priors such as low-res images, surface normals, edges etc." 811,Certifying Some Distributional Robustness with Principled Adversarial Training,"Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms.We address this problem through the principled lens of distributionally robust optimization, which guarantees performance under adversarial input perturbations.
By considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data.For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization.Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss.For imperceptible perturbations, our method matches or outperforms heuristic approaches.","We provide a fast, principled adversarial training procedure with computational and statistical performance guarantees." 812,Inference Suboptimality in Variational Autoencoders,"Amortized inference has led to efficient approximate inference for large datasets.The quality of posterior inference is largely determined by two factors: a) the ability of the variational distribution to model the true posterior and b) the capacity of the recognition network to generalize inference over all datapoints.We analyze approximate inference in variational autoencoders in terms of these factors.We find that suboptimal inference is often due to amortizing inference rather than the limited complexity of the approximating distribution.We show that this is due partly to the generator learning to accommodate the choice of approximation.Furthermore, we show that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference rather than simply improving the complexity of the approximation.",We decompose the gap between the marginal log-likelihood and the evidence lower bound and study the effect of the approximate posterior on the true posterior distribution in VAEs. 813,Subgradient Descent Learns Orthogonal Dictionaries,"This paper concerns dictionary learning, i.e., sparse coding, a fundamental representation learning problem.We show that a subgradient descent algorithm, with random initialization, can recover orthogonal dictionaries on a natural nonsmooth, nonconvex L1 minimization formulation of the problem, under mild statistical assumptions on the data.This is in contrast to previous provable methods that require either expensive computation or delicate initialization schemes.Our analysis develops several tools for characterizing landscapes of nonsmooth functions, which might be of independent interest for provable training of deep networks with nonsmooth activations, among other applications.Preliminary synthetic and real experiments corroborate our analysis and show that our algorithm works well empirically in recovering orthogonal dictionaries.",Efficient dictionary learning by L1 minimization via a novel analysis of the non-convex non-smooth geometry.
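A rough sketch of the nonsmooth formulation behind row 813: given Y = A X with an orthogonal dictionary A and sparse codes X, one column of A can be sought by minimizing ||q^T Y||_1 / m over the unit sphere with projected subgradient descent. The step-size schedule and the simple normalize-back projection are illustrative choices, not the paper's exact scheme or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 5000
A, _ = np.linalg.qr(rng.standard_normal((n, n)))                 # orthogonal dictionary
X = rng.standard_normal((n, m)) * (rng.random((n, m)) < 0.1)     # sparse codes
Y = A @ X                                                        # observed data

q = rng.standard_normal(n)
q /= np.linalg.norm(q)                                           # random init on the sphere
for t in range(500):
    subgrad = Y @ np.sign(Y.T @ q) / m        # subgradient of ||q^T Y||_1 / m
    q -= (0.1 / np.sqrt(t + 1)) * subgrad
    q /= np.linalg.norm(q)                    # project back to the unit sphere

# If recovery succeeds, q aligns with one column of A (correlation near 1).
print("max |correlation| with a dictionary column:", np.max(np.abs(A.T @ q)))
```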
814,Guaranteed Recovery of One-Hidden-Layer Neural Networks via Cross Entropy,"We study model recovery for data classification, where the training labels are generated from a one-hidden-layer fully -connected neural network with sigmoid activations, and the goal is to recover the weight vectors of the neural network.We prove that under Gaussian inputs, the empirical risk function using cross entropy exhibits strong convexity and smoothness uniformly in a local neighborhood of the ground truth, as soon as the sample complexity is sufficiently large.This implies that if initialized in this neighborhood, which can be achieved via the tensor method, gradient descent converges linearly to a critical point that is provably close to the ground truth without requiring a fresh set of samples at each iteration.To the best of our knowledge, this is the first global convergence guarantee established for the empirical risk minimization using cross entropy via gradient descent for learning one-hidden-layer neural networks, at the near-optimal sample and computational complexity with respect to the network input dimension.",We provide the first theoretical analysis of guaranteed recovery of one-hidden-layer neural networks under cross entropy loss for classification problems. 815,Neural Network Compression using Transform Coding and Clustering,"With the deployment of neural networks on mobile devices and the necessity of transmitting neural networks over limited or expensive channels, the file size of trained model was identified as bottleneck.We propose a codec for the compressionof neural networks which is based on transform coding for convolutional and dense layers and on clustering for biases and normalizations.With this codec, we achieve average compression factors between 7.9–9.3 while the accuracy of the compressed networks for image classification decreases only by 1%–2%, respectively.",Our neural network codec (which is based on transform coding and clustering) enables a low complexity and high efficient transparent compression of neural networks. 816,"Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware","As Machine Learning gets applied to security-critical or sensitive domains, there is a growing need for integrity and privacy for outsourced ML computations.A pragmatic solution comes from Trusted Execution Environments, which use hardware and software protections to isolate sensitive computations from the untrusted software stack.However, these isolation guarantees come at a price in performance, compared to untrusted alternatives.This paper initiates the study of high performance execution of Deep Neural Networks in TEEs by efficiently partitioning DNN computations between trusted and untrusted devices.Building upon an efficient outsourcing scheme for matrix multiplication, we propose Slalom, a framework that securely delegates execution of all linear layers in a DNN from a TEE to a faster, yet untrusted, co-located processor.We evaluate Slalom by running DNNs in an Intel SGX enclave, which selectively delegates work to an untrusted GPU.For canonical DNNs we obtain 6x to 20x increases in throughput for verifiable inference, and 4x to 11x for verifiable and private inference.",We accelerate secure DNN inference in trusted execution environments (by a factor 4x-20x) by selectively outsourcing the computation of linear layers to a faster yet untrusted co-processor. 
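Entry 816 (Slalom) relies on cheaply verifying linear layers that were outsourced to an untrusted co-processor. The snippet below sketches a generic randomized matrix-product check in the spirit of Freivalds' algorithm, which verifies a claimed product in roughly quadratic time; it is a stand-alone illustration of the idea, not Slalom's protocol, and the repetition count and tolerance are assumptions.

```python
import numpy as np

def freivalds_check(W, x, y, repetitions=2, atol=1e-5):
    """Probabilistically verify y == W @ x using O(n^2) work instead of O(n^3)."""
    for _ in range(repetitions):
        r = np.random.randint(0, 2, size=y.shape[0]).astype(W.dtype)  # random 0/1 vector
        # r @ y should equal (r @ W) @ x if the untrusted result is correct
        if not np.allclose(r @ y, (r @ W) @ x, atol=atol):
            return False
    return True

W = np.random.randn(256, 512).astype(np.float32)   # weights kept inside the trusted enclave
x = np.random.randn(512, 64).astype(np.float32)    # layer input sent to the untrusted device
y_untrusted = W @ x                                 # product claimed by the untrusted device
assert freivalds_check(W, x, y_untrusted)
```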
817,Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method,"In recent years, three-dimensional convolutional neural networks have been intensively applied to video analysis and action recognition and achieve good performance. However, 3D CNNs lead to massive computation and storage consumption, which hinders their deployment on mobile and embedded devices. In this paper, we propose a three-dimensional regularization-based pruning method to assign different regularization parameters to different weight groups based on their importance to the network. Our experiments show that the proposed method outperforms other popular methods in this area.","In this paper, we propose a three-dimensional regularization-based pruning method to accelerate the 3D CNN." 818,Data Statements for NLP: Toward Mitigating System Bias and Enabling Better Science,"In this paper, we propose data statements as a design solution and professional practice for natural language processing technologists, in both research and development — through the adoption and widespread use of data statements, the field can begin to address critical scientific and ethical issues that result from the use of data from certain populations in the development of technology for other populations. We present a form that data statements can take and explore the implications of adopting them as part of regular practice. We argue that data statements will help alleviate issues related to exclusion and bias in language technology; lead to better precision in claims about how NLP research can generalize and thus better engineering results; protect companies from public embarrassment; and ultimately lead to language technology that meets its users in their own preferred linguistic style and furthermore does not misrepresent them to others. ** To appear in TACL **","A practical proposal for more ethical and responsive NLP technology, operationalizing transparency of test and training data" 819,Causal Generative Neural Networks,"We introduce CGNN, a framework to learn functional causal models as generative neural networks. These networks are trained using backpropagation to minimize the maximum mean discrepancy to the observed data. Unlike previous approaches, CGNN leverages both conditional independences and distributional asymmetries to seamlessly discover bivariate and multivariate causal structures, with or without hidden variables. CGNN does not only estimate the causal structure, but a full and differentiable generative model of the data. Throughout an extensive variety of experiments, we illustrate the competitive results of CGNN w.r.t. state-of-the-art alternatives in observational causal discovery on both simulated and real data, in the tasks of cause-effect inference, v-structure identification, and multivariate causal discovery.",Discover the structure of functional causal models with generative neural networks 820,Extreme Value k-means Clustering,"Clustering is the central task in unsupervised learning and data mining. k-means is one of the most widely used clustering algorithms. Unfortunately, it is generally non-trivial to extend k-means to cluster data points beyond the Gaussian distribution, particularly clusters with non-convex shapes. To this end, we, for the first time, introduce Extreme Value Theory to improve the clustering ability of k-means. Particularly, the Euclidean space is transformed into a novel probability space, denoted as extreme value space, by EVT. We thus propose a novel algorithm called Extreme Value
k-means, including GEV k-means and GPD k-means.In addition, we also introduce the tricks to accelerate Euclidean distance computation in improving the computational efficiency of classical k-means.Furthermore, our EV k-means is extended to an online version, i.e., online Extreme Value k-means, in utilizing the Mini Batch k-means to cluster streaming data.Extensive experiments are conducted to validate our EV k-means and online EV k-means on synthetic datasets and real datasets.Experimental results show that our algorithms significantly outperform competitors in most cases.",This paper introduces Extreme Value Theory into k-means to measure similarity and proposes a novel algorithm called Extreme Value k-means for clustering. 821,TRL: Discriminative Hints for Scalable Reverse Curriculum Learning,"Deep reinforcement learning algorithms have proven successful in a variety of domains.However, tasks with sparse rewards remain challenging when the state space is large.Goal-oriented tasks are among the most typical problems in this domain, where a reward can only be received when the final goal is accomplished.In this work, we propose a potential solution to such problems with the introduction of an experience-based tendency reward mechanism, which provides the agent with additional hints based on a discriminative learning on past experiences during an automated reverse curriculum.This mechanism not only provides dense additional learning signals on what states lead to success, but also allows the agent to retain only this tendency reward instead of the whole histories of experience during multi-phase curriculum learning.We extensively study the advantages of our method on the standard sparse reward domains like Maze and Super Mario Bros and show that our method performs more efficiently and robustly than prior approaches in tasks with long time horizons and large state space.In addition, we demonstrate that using an optional keyframe scheme with very small quantity of key states, our approach can solve difficult robot manipulation challenges directly from perception and sparse rewards.","We propose Tendency RL to efficiently solve goal-oriented tasks with large state space using automated curriculum learning and discriminative shaping reward, which has the potential to tackle robot manipulation tasks with perception." 822,Learning to Describe Scenes with Programs,"Human scene perception goes beyond recognizing a collection of objects and their pairwise relations.We understand higher-level, abstract regularities within the scene such as symmetry and repetition.Current vision recognition modules and scene representations fall short in this dimension.In this paper, we present scene programs, representing a scene via a symbolic program for its objects, attributes, and their relations.We also propose a model that infers such scene programs by exploiting a hierarchical, object-based scene representation.Experiments demonstrate that our model works well on synthetic data and transfers to real images with such compositional structure.The use of scene programs has enabled a number of applications, such as complex visual analogy-making and scene extrapolation.","We present scene programs, a structured scene representation that captures both low-level object appearance and high-level regularity in the scene." 
823,Exploiting Invariant Structures for Compression in Neural Networks,"Modern neural networks often require deep compositions of high-dimensional nonlinear functions to achieve high test accuracy, and thus can have overwhelming number of parameters.Repeated high cost in prediction at test-time makes neural networks ill-suited for devices with constrained memory or computational power.We introduce an efficient mechanism, reshaped tensor decomposition, to compress neural networks by exploiting three types of invariant structures: periodicity, modulation and low rank.Our reshaped tensor decomposition method exploits such invariance structures using a technique called tensorization combined with higher order tensor decompositions on top of the tensorized layers.Our compression method improves low rank approximation methods and can be incorporated to most of the existing compression methods for neural networks to achieve better compression.Experiments on LeNet-5, ResNet-32 and ResNet-50 demonstrate that our reshaped tensor decomposition outperforms the state-of-the-art low-rank approximation techniques under same compression rate, besides achieving orders of magnitude faster convergence rates.",Compression of neural networks which improves the state-of-the-art low rank approximation techniques and is complementary to most of other compression techniques. 824,TSInsight: A local-global attribution framework for interpretability in time-series data,"With the rise in employment of deep learning methods in safety-critical scenarios, interpretability is more essential than ever before.Although many different directions regarding interpretability have been explored for visual modalities, time-series data has been neglected with only a handful of methods tested due to their poor intelligibility.We approach the problem of interpretability in a novel way by proposing TSInsight where we attach an auto-encoder with a sparsity-inducing norm on its output to the classifier and fine-tune it based on the gradients from the classifier and a reconstruction penalty.The auto-encoder learns to preserve features that are important for the prediction by the classifier and suppresses the ones that are irrelevant i.e. serves as a feature attribution method to boost interpretability.In other words, we ask the network to only reconstruct parts which are useful for the classifier i.e. are correlated or causal for the prediction.In contrast to most other attribution frameworks, TSInsight is capable of generating both instance-based and model-based explanations.We evaluated TSInsight along with other commonly used attribution methods on a range of different time-series datasets to validate its efficacy.Furthermore, we analyzed the set of properties that TSInsight achieves out of the box including adversarial robustness and output space contraction.The obtained results advocate that TSInsight can be an effective tool for the interpretability of deep time-series models.",We present an attribution technique leveraging sparsity inducing norms to achieve interpretability. 
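Entry 824 (TSInsight) attaches an auto-encoder to a trained classifier and fine-tunes it with the classification loss, a reconstruction penalty, and a sparsity-inducing norm on the auto-encoder output. The following is a hedged PyTorch sketch of that objective; the simple MLP auto-encoder and the coefficients alpha and beta are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

class TSInsightAE(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def tsinsight_loss(autoencoder, classifier, x, y, alpha=1e-3, beta=1e-2):
    # classifier is assumed pre-trained; its parameters can be kept frozen
    x_hat = autoencoder(x)                               # attribution-like reconstruction
    cls_loss = nn.functional.cross_entropy(classifier(x_hat), y)
    recon = nn.functional.mse_loss(x_hat, x)             # stay close to the input
    sparsity = x_hat.abs().mean()                        # suppress irrelevant features
    return cls_loss + beta * recon + alpha * sparsity
```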
825,Variance Reduction With Sparse Gradients,"Variance reduction methods which use a mixture of large and small batch gradients, such as SVRG and SpiderBoost, require significantly more computational resources per update than SGD. We reduce the computational cost per update of variance reduction methods by introducing a sparse gradient operator blending the top-K operator and the randomized coordinate descent operator. While the computational cost of computing the derivative of a model parameter is constant, we make the observation that the gains in variance reduction are proportional to the magnitude of the derivative. In this paper, we show that a sparse gradient based on the magnitude of past gradients reduces the computational cost of model updates without a significant loss in variance reduction. Theoretically, our algorithm is at least as good as the best available algorithm under appropriate settings of parameters and can be much more efficient if our algorithm succeeds in capturing the sparsity of the gradients. Empirically, our algorithm consistently outperforms SpiderBoost using various models to solve various image classification tasks. We also provide empirical evidence to support the intuition behind our algorithm via a simple gradient entropy computation, which serves to quantify gradient sparsity at every iteration.",We use sparsity to improve the computational complexity of variance reduction methods. 826,Variational Selective Autoencoder,"Despite promising progress on unimodal data imputation, models for multimodal data imputation are far from satisfactory. In this work, we propose the variational selective autoencoder for this task. Learning only from partially-observed data, VSAE can model the joint distribution of observed/unobserved modalities and the imputation mask, resulting in a unified model for various downstream tasks including data generation and imputation. Evaluation on synthetic high-dimensional and challenging low-dimensional multimodal datasets shows significant improvement over state-of-the-art imputation models.",We propose a novel VAE-based framework learning from partially-observed data for imputation and generation. 827,On Domain Transfer When Predicting Intent in Text,"In many domains, especially enterprise text analysis, there is an abundance of data which can be used for the development of new AI-powered intelligent experiences to improve people's productivity. However, there are strong guarantees of privacy which prevent broad sampling and labeling of personal text data to learn or evaluate models of interest. Fortunately, in some cases like enterprise email, manual annotation is possible on certain public datasets. The hope is that models trained on these public datasets would perform well on the target private datasets of interest. In this paper, we study the challenges of transferring information from one email dataset to another, for predicting user intent. In particular, we present approaches to characterizing the transfer gap in text corpora from both an intrinsic and extrinsic point-of-view, and evaluate several proposed methods in the literature for bridging this gap. We conclude by raising issues for further discussion in this arena.","Insights on the domain adaptation challenge, when predicting user intent in enterprise email."
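Entry 825 describes a sparse gradient operator that blends a top-K selection based on the magnitude of past gradients with randomized coordinate selection. A minimal numpy sketch of such an operator is given below; the decay rate of the magnitude memory and the split between top-K and random coordinates are assumptions.

```python
import numpy as np

def sparse_gradient(grad, magnitude_memory, k, n_random, decay=0.9):
    """Return a sparsified copy of grad, keeping top-K historical coordinates plus random ones."""
    magnitude_memory *= decay
    magnitude_memory += (1 - decay) * np.abs(grad)        # running gradient magnitudes
    top_k = np.argpartition(magnitude_memory, -k)[-k:]    # coordinates with largest past magnitude
    rand = np.random.choice(grad.size, size=n_random, replace=False)
    keep = np.union1d(top_k, rand)
    sparse = np.zeros_like(grad)
    sparse[keep] = grad[keep]
    return sparse, magnitude_memory
```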
828,Sub-policy Adaptation for Hierarchical Reinforcement Learning,"Hierarchical Reinforcement Learning is a promising approach to long-horizon decision-making problems with sparse rewards. Unfortunately, most methods still decouple the lower-level skill acquisition process and the training of a higher level that controls the skills in a new task. Treating the skills as fixed can lead to significant sub-optimality in the transfer setting. In this work, we propose a novel algorithm to discover a set of skills, and continuously adapt them along with the higher level even when training on a new task. Our main contributions are two-fold. First, we derive a new hierarchical policy gradient, as well as an unbiased latent-dependent baseline. We introduce Hierarchical Proximal Policy Optimization, an on-policy method to efficiently train all levels of the hierarchy simultaneously. Second, we propose a method of training time-abstractions that improves the robustness of the obtained skills to environment changes. Code and results are available at sites.google.com/view/hippo-rl.","We propose HiPPO, a stable Hierarchical Reinforcement Learning algorithm that can train several levels of the hierarchy simultaneously, giving good performance both in skill discovery and adaptation." 829,Information lies in the eye of the beholder: The effect of representations on observed mutual information,"Learning can be framed as trying to encode the mutual information between input and output while discarding other information in the input. Since the distribution between input and output is unknown, so is the true mutual information. To quantify how difficult it is to learn a task, we calculate an observed mutual information score by dividing the estimated mutual information by the entropy of the input. We substantiate this score analytically by showing that the estimated mutual information has an error that increases with the entropy of the data. Intriguingly, depending on how the data is represented, the observed entropy and mutual information can vary wildly. There needs to be a match between how data is represented and how a model encodes it. Experimentally, we analyze image-based input data representations and demonstrate that performance outcomes of extensive network architecture searches are well aligned to the calculated score. Therefore, to ensure better learning outcomes, representations may need to be tailored to both task and model to align with the implicit distribution of the model.",We take a step towards measuring learning task difficulty and demonstrate that in practice performance strongly depends on the match of the representation of the information and the model interpreting it. 830,Learning to Treat Sepsis with Multi-Output Gaussian Process Deep Recurrent Q-Networks,"Sepsis is a life-threatening complication from infection and a leading cause of mortality in hospitals. While early detection of sepsis improves patient outcomes, there is little consensus on exact treatment guidelines, and treating septic patients remains an open problem.
In this work, we present a new deep reinforcement learning method that we use to learn optimal personalized treatment policies for septic patients. We model patient continuous-valued physiological time series using multi-output Gaussian processes, a probabilistic model that easily handles missing values and irregularly spaced observation times while maintaining estimates of uncertainty. The Gaussian process is directly tied to a deep recurrent Q-network that learns clinically interpretable treatment policies, and both models are learned together end-to-end. We evaluate our approach on a heterogeneous dataset of septic patients spanning 15 months from our university health system, and find that our learned policy could reduce patient mortality by as much as 8.2% from an overall baseline mortality rate of 13.3%. Our algorithm could be used to make treatment recommendations to physicians as part of a decision support tool, and the framework readily applies to other reinforcement learning problems that rely on sparsely sampled and frequently missing multivariate time series data.","We combine multi-output Gaussian processes with deep recurrent Q-networks to learn optimal treatments for sepsis and show improved performance over standard deep reinforcement learning methods." 831,Neural Rendering Model: Joint Generation and Prediction for Semi-Supervised Learning,"Unsupervised and semi-supervised learning are important problems that are especially challenging with complex data like natural images. Progress on these problems would accelerate if we had access to appropriate generative models under which to pose the associated inference tasks. Inspired by the success of Convolutional Neural Networks for supervised prediction in images, we design the Neural Rendering Model, a new hierarchical probabilistic generative model whose inference calculations correspond to those in a CNN. The NRM introduces a small set of latent variables at each level of the model and enforces dependencies among all the latent variables via a conjugate prior distribution. The conjugate prior yields a new regularizer for training CNNs based on the paths rendered in the generative model, the Rendering Path Normalization. We demonstrate that this regularizer improves generalization both in theory and in practice. Likelihood estimation in the NRM yields the new Max-Min cross entropy training loss, which suggests a new deep network architecture, the Max-Min network, which exceeds or matches the state-of-the-art for semi-supervised and supervised learning on SVHN, CIFAR10, and CIFAR100.",We develop a new deep generative model for semi-supervised learning and propose a new Max-Min cross-entropy for training CNNs.
832,Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning,"Deep reinforcement learning agents often fail to generalize to unseen environments, particularly when they are trained on high-dimensional state spaces, such as images.In this paper, we propose a simple technique to improve a generalization ability of deep RL agents by introducing a randomized neural network that randomly perturbs input observations.It enables trained agents to adapt to new domains by learning robust features invariant across varied and randomized environments.Furthermore, we consider an inference method based on the Monte Carlo approximation to reduce the variance induced by this randomization.We demonstrate the superiority of our method across 2D CoinRun, 3D DeepMind Lab exploration and 3D robotics control tasks: it significantly outperforms various regularization and data augmentation methods for the same purpose.",We propose a simple randomization technique for improving generalization in deep reinforcement learning across tasks with various unseen visual patterns. 833,"Learning Multiple Mappings: an Evaluation of Interference, Transfer, and Retention with Chorded Shortcut Buttons","Touch interactions with current mobile devices have limited expressiveness.Augmenting devices with additional degrees of freedom can add power to the interaction, and several augmentations have been proposed and tested.However, there is still little known about the effects of learning multiple sets of augmented interactions that are mapped to different applications.To better understand whether multiple command mappings can interfere with one another, or affect transfer and retention, we developed a prototype with three pushbuttons on a smartphone case that can be used to provide augmented input to the system.The buttons can be chorded to provide seven possible shortcuts or transient mode switches.We mapped these buttons to three different sets of actions, and carried out a study to see if multiple mappings affect learning and performance, transfer, and retention.Our results show that all of the mappings were quickly learned and there was no reduction in performance with multiple mappings.Transfer to a more realistic task was successful, although with a slight reduction in accuracy.Retention after one week was initially poor, but expert performance was quickly restored.Our work provides new information about the design and use of augmented input in mobile interactions.","Describes a study investigating interference, transfer, and retention of multiple mappings with the same set of chorded buttons" 834,Towards Verified Robustness under Text Deletion Interventions,"Neural networks are widely used in Natural Language Processing, yet despite their empirical successes, their behaviour is brittle: they are both over-sensitive to small input changes, and under-sensitive to deletions of large fractions of input text.This paper aims to tackle under-sensitivity in the context of natural language inference by ensuring that models do not become more confident in their predictions as arbitrary subsets of words from the input text are deleted.We develop a novel technique for formal verification of this specification for models based on the popular decomposable attention mechanism by employing the efficient yet effective interval bound propagation approach.Using this method we can efficiently prove, given a model, whether a particular sample is free from the under-sensitivity problem.We compare different 
training methods to address under-sensitivity, and compare metrics to measure it. In our experiments on the SNLI and MNLI datasets, we observe that IBP training leads to a significantly improved verified accuracy. On the SNLI test set, we can verify 18.4% of samples, a substantial improvement over only 2.8% using standard training.",Formal verification of a specification on a model's prediction undersensitivity using Interval Bound Propagation 835,Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks,"Training with a larger number of parameters while keeping fast iterations is an increasingly adopted strategy and trend for developing better performing Deep Neural Network models. This necessitates increased memory footprint and computational requirements for training. Here we introduce a novel methodology for training deep neural networks using 8-bit floating point numbers. Reduced bit precision allows for a larger effective memory and increased computational speed. We name this method Shifted and Squeezed FP8. We show that, unlike previous 8-bit precision training methods, the proposed method works out of the box for representative models: ResNet50, Transformer and NCF. The method can maintain model accuracy without requiring fine-tuning of loss scaling parameters or keeping certain layers in single precision. We introduce two learnable statistics of the DNN tensors, shifted and squeezed factors, that are used to optimally adjust the range of the tensors in 8 bits, thus minimizing the loss in information due to quantization.","We propose a novel 8-bit format that eliminates the need for loss scaling, stochastic rounding, and other low precision techniques" 836,BooVAE: A Scalable Framework for Continual VAE Learning under Boosting Approach,"Variational Auto Encoders are capable of generating realistic images, sounds and video sequences. From a practitioner's point of view, we are usually interested in solving problems where tasks are learned sequentially, in a way that avoids revisiting all previous data at each stage. We address this problem by introducing a conceptually simple and scalable end-to-end approach of incorporating past knowledge by learning the prior directly from the data.
We consider scalable boosting-like approximation for intractable theoretical optimal prior.We provide empirical studies on two commonly used benchmarks, namely MNIST and Fashion MNIST on disjoint sequential image generation tasks.For each dataset proposed method delivers the best results among comparable approaches, avoiding catastrophic forgetting in a fully automatic way with a fixed model architecture.",Novel algorithm for Incremental learning of VAE with fixed architecture 837,How to train your MAML,"The field of few-shot learning has recently seen substantial advancements.Most of these advancements came from casting few-shot learning as a meta-learning problem.Model Agnostic Meta Learning or MAML is currently one of the best approaches for few-shot learning via meta-learning.MAML is simple, elegant and very powerful, however, it has a variety of issues, such as being very sensitive to neural network architectures, often leading to instability during training, requiring arduous hyperparameter searches to stabilize training and achieve high generalization and being very computationally expensive at both training and inference times.In this paper, we propose various modifications to MAML that not only stabilize the system, but also substantially improve the generalization performance, convergence speed and computational overhead of MAML, which we call MAML++.","MAML is great, but it has many problems, we solve many of those problems and as a result we learn most hyper parameters end to end, speed-up training and inference and set a new SOTA in few-shot learning" 838,Evaluating The Search Phase of Neural Architecture Search,"Neural Architecture Search aims to facilitate the design of deep networks for new tasks.Existing techniques rely on two stages: searching over the architecture space and validating the best architecture.NAS algorithms are currently compared solely based on their results on the downstream task.While intuitive, this fails to explicitly evaluate the effectiveness of their search strategies.In this paper, we propose to evaluate the NAS search phase.To this end, we compare the quality of the solutions obtained by NAS search policies with that of random architecture selection.We find that: On average, the state-of-the-art NAS algorithms perform similarly to the random policy; the widely-used weight sharing strategy degrades the ranking of the NAS candidates to the point of not reflecting their true performance, thus reducing the effectiveness of the search process.We believe that our evaluation framework will be key to designing NAS strategies that consistently discover architectures superior to random ones.",We empirically disprove a fundamental hypothesis of the widely-adopted weight sharing strategy in neural architecture search and explain why the state-of-the-arts NAS algorithms performs similarly to random search. 
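Entry 838 argues that NAS search policies should be compared against random architecture selection under the same budget. The sketch below outlines that comparison protocol; search_space, nas_search, and train_and_evaluate are placeholders for a user's own pipeline, not components of the cited work.

```python
import random

def random_policy(search_space, budget, train_and_evaluate):
    """Best of `budget` uniformly sampled architectures, the baseline a search policy must beat."""
    candidates = [random.choice(search_space) for _ in range(budget)]
    return max(candidates, key=train_and_evaluate)

def compare_search_to_random(search_space, budget, nas_search, train_and_evaluate):
    nas_arch = nas_search(search_space, budget)                  # architecture found by the policy
    rand_arch = random_policy(search_space, budget, train_and_evaluate)
    return train_and_evaluate(nas_arch), train_and_evaluate(rand_arch)
```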
839,Sliced Wasserstein Auto-Encoders,"In this paper we use the geometric properties of the optimal transport problem and the Wasserstein distances to define a prior distribution for the latent space of an auto-encoder.We introduce Sliced-Wasserstein Auto-Encoders, that enable one to shape the distribution of the latent space into any samplable probability distribution without the need for training an adversarial network or having a likelihood function specified.In short, we regularize the auto-encoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a samplable prior distribution.We show that the proposed formulation has an efficient numerical solution that provides similar capabilities to Wasserstein Auto-Encoders and Variational Auto-Encoders, while benefiting from an embarrassingly simple implementation.We provide extensive error analysis for our algorithm, and show its merits on three benchmark datasets.",In this paper we use the sliced-Wasserstein distance to shape the latent distribution of an auto-encoder into any samplable prior distribution. 840,Hamiltonian Generative Networks,"The Hamiltonian formalism plays a central role in classical and quantum physics.Hamiltonians are the main tool for modelling the continuous time evolution of systems with conserved quantities, and they come equipped with many useful properties, like time reversibility and smooth interpolation in time.These properties are important for many machine learning problems - from sequence prediction to reinforcement learning and density modelling - but are not typically provided out of the box by standard tools such as recurrent neural networks.In this paper, we introduce the Hamiltonian Generative Network, the first approach capable of consistently learning Hamiltonian dynamics from high-dimensional observations without restrictive domain assumptions.Once trained, we can use HGN to sample new trajectories, perform rollouts both forward and backward in time, and even speed up or slow down the learned dynamics.We demonstrate how a simple modification of the network architecture turns HGN into a powerful normalising flow model, called Neural Hamiltonian Flow, that uses Hamiltonian dynamics to model expressive densities.Hence, we hope that our work serves as a first practical demonstration of the value that the Hamiltonian formalism can bring to machine learning.More results and video evaluations are available at: http://tiny.cc/hgn",We introduce a class of generative models that reliably learn Hamiltonian dynamics from high-dimensional observations. The learnt Hamiltonian can be applied to sequence modeling or as a normalising flow. 
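Entry 839 regularizes an auto-encoder with the sliced-Wasserstein distance between encoded samples and draws from a samplable prior. The numpy sketch below computes that distance by projecting both sets onto random unit directions, sorting the projections, and averaging the discrepancies; the number of projections and the use of squared distances are assumptions.

```python
import numpy as np

def sliced_wasserstein(encoded, prior_samples, n_projections=50):
    """Monte Carlo estimate of the sliced-Wasserstein distance between two equal-size batches."""
    dim = encoded.shape[1]
    theta = np.random.randn(dim, n_projections)
    theta /= np.linalg.norm(theta, axis=0, keepdims=True)    # random unit directions
    proj_x = np.sort(encoded @ theta, axis=0)                 # sorted 1D projections of the codes
    proj_p = np.sort(prior_samples @ theta, axis=0)           # sorted 1D projections of the prior
    return np.mean((proj_x - proj_p) ** 2)

# Example: regularize an auto-encoder loss with the distance between codes and prior draws.
codes = np.random.randn(128, 8)      # stand-in for encoder outputs
prior = np.random.randn(128, 8)      # samples from the chosen prior distribution
swd = sliced_wasserstein(codes, prior)
```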
841,Advantages and Challenges of Using AI Planning in Cloud Migration,"Cloud Migration transforms a customer's data, applications and services from the original IT platform to one or more cloud environments, with the goal of improving the performance of the IT system while reducing the IT management cost. Enterprise-level Cloud Migration projects are generally complex, involving dynamically planning and replanning various types of transformations for up to 10k endpoints. Currently, the planning and replanning in Cloud Migration are generally done manually or semi-manually with heavy dependency on the migration expert's domain knowledge, which takes days to even weeks for each round of planning or replanning. As a result, an automated planning engine that is capable of generating high-quality migration plans in a short time is particularly desirable for the migration industry. In this short paper, we briefly introduce the advantages of using AI planning in Cloud Migration, a preliminary prototype, as well as the challenges that require attention from the planning and scheduling society.","In this short paper, we briefly introduce the advantages of using AI planning in Cloud Migration, a preliminary prototype, as well as the challenges that require attention from the planning and scheduling society." 842,Context-aware Forecasting for Multivariate Stationary Time-series,"The domain of time-series forecasting has been extensively studied because it is of fundamental importance in many real-life applications. Weather prediction, traffic flow forecasting or sales are compelling examples of sequential phenomena. Predictive models generally make use of the relations between past and future values. However, in the case of stationary time-series, observed values also drastically depend on a number of exogenous features that can be used to improve forecasting quality. In this work, we propose a change of paradigm which consists in learning such features in embedding vectors within recurrent neural networks. We apply our framework to forecast smart card tap-in logs in the Parisian subway network. Results show that context-embedded models perform quantitatively better in one-step ahead and multi-step ahead forecasting.",In order to forecast multivariate stationary time-series we learn embeddings containing contextual features within an RNN; we apply the framework on public transportation data 843,Discovery of Predictive Representations With a Network of General Value Functions,"The ability of an agent to discover its own learning objectives has long been considered a key ingredient for artificial general intelligence. Breakthroughs in autonomous decision making and reinforcement learning have primarily been in domains where the agent's goal is outlined and clear: such as playing a game to win, or driving safely. Several studies have demonstrated that learning extramural sub-tasks and auxiliary predictions can improve single human-specified task learning, transfer of learning, and the agent's learned representation of the world. In all these examples, the agent was instructed what to learn about. We investigate a framework for discovery: curating a large collection of predictions, which are used to construct the agent's representation of the world. Specifically, our system maintains a large collection of predictions, continually pruning and replacing predictions. We highlight the importance of considering stability rather than convergence for such a system, and develop an adaptive, regularized algorithm
towards that aim.We provide several experiments in computational micro-worlds demonstrating that this simple approach can be effective for discovering useful predictions autonomously.","We investigate a framework for discovery: curating a large collection of predictions, which are used to construct the agent’s representation in partially observable domains." 844,MULTI-MODAL GEOLOCATION ESTIMATION USING DEEP NEURAL NETWORKS,"Estimating the location where an image was taken based solely on the contents of the image is a challenging task, even for humans, as properly labeling an image in such a fashion relies heavily on contextual information, and is not as simple as identifying a single object in the image.Thus any methods which attempt to do so must somehow account for these complexities, and no single model to date is completely capable of addressing all challenges.This work contributes to the state of research in image geolocation inferencing by introducing a novel global meshing strategy, outlining a variety of training procedures to overcome the considerable data limitations when training these models, and demonstrating how incorporating additional information can be used to improve the overall performance of a geolocation inference model.In this work, it is shown that Delaunay triangles are an effective type of mesh for geolocation in relatively low volume scenarios when compared to results from state of the art models which use quad trees and an order of magnitude more training data. In addition, the time of posting, learned user albuming, and other meta data are easily incorporated to improve geolocation by up to 11% for country-level locality accuracy to 3% for city-level localities.",A global geolocation inferencing strategy with novel meshing strategy and demonstrating incorporating additional information can be used to improve the overall performance of a geolocation inference model. 845,The Variational Homoencoder: Learning to Infer High-Capacity Generative Models from Few Examples,"Hierarchical Bayesian methods have the potential to unify many related tasks by framing each as inference within a single generative model.We show that existing approaches for learning such models can fail on expressive generative networks such as PixelCNNs, by describing the global distribution with little reliance on latent variables.To address this, we develop a modification of the Variational Autoencoder in which encoded observations are decoded to new elements from the same class; the result, which we call a Variational Homoencoder, may be understood as training a hierarchical latent variable model which better utilises latent variables in these cases.Using this framework enables us to train a hierarchical PixelCNN for the Omniglot dataset, outperforming all existing models on test set likelihood.With a single model we achieve both strong one-shot generation and near human-level classification, competitive with state-of-the-art discriminative classifiers.The VHE objective extends naturally to richer dataset structures such as factorial or hierarchical categories, as we illustrate by training models to separate character content from simple variations in drawing style, and to generalise the style of an alphabet to new characters.","Technique for learning deep generative models with shared latent variables, applied to Omniglot with a PixelCNN decoder." 
846,Dynamic Integration of Background Knowledge in Neural NLU Systems,"Common-sense or background knowledge is required to understand natural language, but in most neural natural language understanding systems, the requisite background knowledge is indirectly acquired from static corpora.We develop a new reading architecture for the dynamic integration of explicit background knowledge in NLU models.A new task-agnostic reading module provides refined word representations to a task-specific NLU architecture by processing background knowledge in the form of free-text statements, together with the task-specific inputs.Strong performance on the tasks of document question answering and recognizing textual entailment demonstrate the effectiveness and flexibility of our approach.Analysis shows that our models learn to exploit knowledge selectively and in a semantically appropriate way.",In this paper we present a task-agnostic reading architecture for the dynamic integration of explicit background knowledge in neural NLU models. 847,Improving Deep Learning by Inverse Square Root Linear Units (ISRLUs),"We introduce the “inverse square root linear unit” to speed up learning in deep neural networks.ISRLU has better performance than ELU but has many of the same benefits.ISRLU and ELU have similar curves and characteristics.Both have negative values, allowing them to push mean unit activation closer to zero, and bring the normal gradient closer to the unit natural gradient, ensuring a noise- robust deactivation state, lessening the over fitting risk.The significant performance advantage of ISRLU on traditional CPUs also carry over to more efficient HW implementations on HW/SW codesign for CNNs/RNNs.In experiments with TensorFlow, ISRLU leads to faster learning and better generalization than ReLU on CNNs.This work also suggests a computationally efficient variant called the “inverse square root unit” which can be used for RNNs.Many RNNs use either long short-term memory and gated recurrent units which are implemented with tanh and sigmoid activation functions.ISRU has less computational complexity but still has a similar curve to tanh and sigmoid.",We introduce the ISRLU activation function which is continuously differentiable and faster than ELU. The related ISRU replaces tanh & sigmoid. 
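Entry 847 defines the ISRLU and ISRU activations in closed form. The snippet below is a small numpy rendering consistent with that description: ISRU saturates smoothly like tanh, while ISRLU is the identity for non-negative inputs and saturates for negative ones, similar to ELU; alpha = 1.0 is only a default assumption.

```python
import numpy as np

def isru(x, alpha=1.0):
    """Inverse square root unit: a tanh/sigmoid-like saturating curve that is cheap to compute."""
    return x / np.sqrt(1.0 + alpha * x * x)

def isrlu(x, alpha=1.0):
    """Inverse square root linear unit: identity for x >= 0, smooth negative saturation otherwise."""
    return np.where(x >= 0, x, x / np.sqrt(1.0 + alpha * x * x))
```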
848,Automatically Composing Representation Transformations as a Means for Generalization,"A generally intelligent learner should generalize to more complex tasks than it has previously encountered, but the two common paradigms in machine learning -- either training a separate learner per task or training a single learner for all tasks -- both have difficulty with such generalization because they do not leverage the compositional structure of the task distribution.This paper introduces the compositional problem graph as a broadly applicable formalism to relate tasks of different complexity in terms of problems with shared subproblems.We propose the compositional generalization problem for measuring how readily old knowledge can be reused and hence built upon.As a first step for tackling compositional generalization, we introduce the compositional recursive learner, a domain-general framework for learning algorithmic procedures for composing representation transformations, producing a learner that reasons about what computation to execute by making analogies to previously seen problems.We show on a symbolic and a high-dimensional domain that our compositional approach can generalize to more complex problems than the learner has previously encountered, whereas baselines that are not explicitly compositional do not.",We explore the problem of compositional generalization and propose a means for endowing neural network architectures with the ability to compose themselves to solve these problems. 849,Learning Robust Representations via Multi-View Information Bottleneck,"The information bottleneck method provides an information-theoretic method for representation learning, by training an encoder to retain all information which is relevant for predicting the label, while minimizing the amount of other, superfluous information in the representation.The original formulation, however, requires labeled data in order to identify which information is superfluous. In this work, we extend this ability to the multi-view unsupervised setting, in which two views of the same underlying entity are provided but the label is unknown.This enables us to identify superfluous information as that which is not shared by both views.A theoretical analysis leads to the definition of a new multi-view model that produces state-of-the-art results on the Sketchy dataset and on label-limited versions of the MIR-Flickr dataset. 
We also extend our theory to the single-view setting by taking advantage of standard data augmentation techniques, empirically showing better generalization capabilities when compared to traditional unsupervised approaches for representation learning.",We extend the information bottleneck method to the unsupervised multiview setting and show state of the art results on standard datasets 850,Extending the Framework of Equilibrium Propagation to General Dynamics,"The biological plausibility of the backpropagation algorithm has long been doubted by neuroscientists. Two major reasons are that neurons would need to send two different types of signal in the forward and backward phases, and that pairs of neurons would need to communicate through symmetric bidirectional connections. We present a simple two-phase learning procedure for fixed point recurrent networks that addresses both these issues. In our model, neurons perform leaky integration and synaptic weights are updated through a local mechanism. Our learning method extends the framework of Equilibrium Propagation to general dynamics, relaxing the requirement of an energy function. As a consequence of this generalization, the algorithm does not compute the true gradient of the objective function, but rather approximates it at a precision which is proven to be directly related to the degree of symmetry of the feedforward and feedback weights. We show experimentally that the intrinsic properties of the system lead to alignment of the feedforward and feedback weights, and that our algorithm optimizes the objective function.",We describe a biologically plausible learning algorithm for fixed point recurrent networks without tied weights 851,Unsupervised Generative 3D Shape Learning from Natural Images,"In this paper we present, to the best of our knowledge, the first method to learn a generative model of 3D shapes from natural images in a fully unsupervised way. For example, we do not use any ground truth 3D or 2D annotations, stereo video, or ego-motion during the training. Our approach follows the general strategy of Generative Adversarial Networks, where an image generator network learns to create image samples that are realistic enough to fool a discriminator network into believing that they are natural images. In contrast, in our approach the image generation is split into two stages. In the first stage a generator network outputs 3D objects. In the second, a differentiable renderer produces an image of the 3D object from a random viewpoint. The key observation is that a realistic 3D object should yield a realistic rendering from any plausible viewpoint. Thus, by randomizing the choice of the viewpoint our proposed training forces the generator network to learn an interpretable 3D representation disentangled from the viewpoint. In this work, a 3D representation consists of a triangle mesh and a texture map that is used to color the triangle surface by using the UV-mapping technique. We provide analysis of our learning approach, expose its ambiguities and show how to overcome them. Experimentally, we demonstrate that our method can learn realistic 3D shapes of faces by using only the natural images of the FFHQ dataset.",We train a generative 3D model of shapes from natural images in a fully unsupervised way.
852,Black-box Adversarial Attacks with Bayesian Optimization,"We focus on the problem of black-box adversarial attacks, where the aim is to generate adversarial examples using information limited to loss function evaluations of input-output pairs. We use Bayesian optimization to specifically cater to scenarios involving low query budgets to develop query efficient adversarial attacks. We alleviate the issues surrounding BO with regard to optimizing high dimensional deep learning models by effective dimension upsampling techniques. Our proposed approach achieves performance comparable to state-of-the-art black-box adversarial attacks albeit with a much lower average query count. In particular, in low query budget regimes, our proposed method reduces the query count by up to 80% with respect to the state-of-the-art methods.",We show that a relatively simple black-box adversarial attack scheme using Bayesian optimization and dimension upsampling is preferable to existing methods when the number of available queries is very low. 853,DEEP-TRIM: REVISITING L1 REGULARIZATION FOR CONNECTION PRUNING OF DEEP NETWORK,"State-of-the-art deep neural networks typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing the inference time and energy consumption significantly, and prohibiting their use on edge devices such as mobile phones. The compression of DNN models has therefore become an active area of research recently, with pruning emerging as one of the most successful strategies. A very natural approach is to prune connections of DNNs via L1 regularization, but recent empirical investigations have suggested that this does not work as well in the context of DNN compression. In this work, we revisit this simple strategy and analyze it rigorously, to show that: any solution of an L1-regularized layerwise-pruning objective has its number of non-zero elements bounded by the number of penalized prediction logits, regardless of the strength of the regularization; successful pruning highly relies on an accurate optimization solver, and there is a trade-off between compression speed and distortion of prediction accuracy, controlled by the strength of regularization. Our theoretical results thus suggest that pruning could be successful provided we use an accurate optimization solver. We corroborate this in our experiments, where we show that simple L1 regularization with an Adamax-L1 solver gives a pruning ratio competitive with the state-of-the-art.",We revisit the simple idea of pruning connections of DNNs through L1 regularization achieving state-of-the-art results on multiple datasets with theoretical guarantees.
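Entry 853 revisits pruning connections by L1 regularization followed by thresholding, with the caveat that the optimizer must drive small weights accurately toward zero. A minimal PyTorch sketch of that recipe is shown below; the penalty strength and pruning threshold are illustrative assumptions, and the entry's Adamax-L1 solver is not reproduced here.

```python
import torch

def l1_penalty(model, strength=1e-4):
    """L1 penalty to add to the task loss so that unimportant weights are pushed toward zero."""
    return strength * sum(p.abs().sum() for p in model.parameters())

def prune_by_threshold(model, threshold=1e-3):
    """After training with the penalty, zero out connections whose magnitude stayed small."""
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() > threshold).to(p.dtype))
```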
854,Error Correcting Algorithms for Sparsely Correlated Regressors,"Autonomy and adaptation of machines require that they be able to measure their own errors. We consider the advantages and limitations of such an approach when a machine has to measure the error in a regression task. How can a machine measure the error of regression sub-components when it does not have the ground truth for the correct predictions? A compressed sensing approach applied to the error signal of the regressors can recover their precision error without any ground truth. It allows for some regressors to be strongly correlated as long as not too many are so related. Its solutions, however, are not unique, a property of ground truth inference solutions. Adding a norm-minimization condition can recover the correct solution in settings where error correction is possible. We briefly discuss the similarity of the mathematics of ground truth inference for regressors to that for classifiers.",A non-parametric method to measure the error moments of regressors without ground truth can be used with biased regressors 855,Characterizing convolutional neural networks with one-pixel signature,"We propose a new representation, one-pixel signature, that can be used to reveal the characteristics of convolutional neural networks. Here, each CNN classifier is associated with a signature that is created by generating, pixel-by-pixel, an adversarial value that is the result of the largest change to the class prediction. The one-pixel signature is agnostic to the design choices of CNN architectures such as type, depth, activation function, and how they were trained. It can be computed efficiently for a black-box classifier without accessing the network parameters. Classic networks such as LeNet, VGG, AlexNet, and ResNet demonstrate different characteristics in their signature images. For application, we focus on the classifier backdoor detection problem where a CNN classifier has had an unknown Trojan maliciously inserted. We show the effectiveness of the one-pixel signature in detecting backdoored CNNs. Our proposed one-pixel signature representation is general and it can be applied in problems where discriminative classifiers, particularly neural network based, are to be characterized.",Convolutional neural network characterization for backdoored classifier detection and understanding. 856,Learning Parametric Constraints in High Dimensions from Demonstrations,"We extend the learning from demonstration paradigm by providing a method for learning unknown constraints shared across tasks, using demonstrations of the tasks, their cost functions, and knowledge of the system dynamics and control constraints. Given safe demonstrations, our method uses hit-and-run sampling to obtain lower cost, and thus unsafe, trajectories. Both safe and unsafe trajectories are used to obtain a consistent representation of the unsafe set via solving a mixed integer program. Additionally, by leveraging a known parameterization of the constraint, we modify our method to learn parametric constraints in high dimensions. We show that our method can learn a six-dimensional pose constraint for a 7-DOF robot arm.",We can learn high-dimensional constraints from demonstrations by sampling unsafe trajectories and leveraging a known constraint parameterization.
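Entry 855 builds a per-classifier signature by recording, pixel by pixel, the largest change in the class prediction induced by altering that pixel. The sketch below is one hedged reading of that procedure for a black-box predict_proba function; the zero base image, the candidate pixel values, and the use of the maximum probability change are assumptions not specified in the entry.

```python
import numpy as np

def one_pixel_signature(predict_proba, height, width, candidates=(0.0, 0.5, 1.0)):
    """predict_proba: black-box function mapping an (H, W) image to a vector of class probabilities."""
    base = np.zeros((height, width), dtype=np.float32)
    base_pred = predict_proba(base)
    signature = np.zeros((height, width), dtype=np.float32)
    for i in range(height):
        for j in range(width):
            best_change = 0.0
            for v in candidates:                      # try each candidate value for this pixel
                perturbed = base.copy()
                perturbed[i, j] = v
                change = np.max(np.abs(predict_proba(perturbed) - base_pred))
                best_change = max(best_change, change)
            signature[i, j] = best_change             # largest prediction change for this pixel
    return signature
```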
857,Parametric Manifold Learning Via Sparse Multidimensional Scaling,"We propose a metric-learning framework for computing distance-preserving maps that generate low-dimensional embeddings for a certain class of manifolds. We employ Siamese networks to solve the problem of least squares multidimensional scaling for generating mappings that preserve geodesic distances on the manifold. In contrast to previous parametric manifold learning methods, we show a substantial reduction in training effort enabled by the computation of geodesic distances in a farthest point sampling strategy. Additionally, the use of a network to model the distance-preserving map reduces the complexity of the multidimensional scaling problem and leads to an improved non-local generalization of the manifold compared to analogous non-parametric counterparts. We demonstrate our claims on point-cloud data and on image manifolds and show a numerical analysis of our technique to facilitate a greater understanding of the representational power of neural networks in modeling manifold data.",Parametric Manifold Learning with Neural Networks in a Geometric Framework 858,DORA The Explorer: Directed Outreaching Reinforcement Action-Selection,"Exploration is a fundamental aspect of Reinforcement Learning, typically implemented using stochastic action-selection. Exploration, however, can be more efficient if directed toward gaining new world knowledge. Visit-counters have been proven useful both in practice and in theory for directed exploration. However, a major limitation of counters is their locality. While there are a few model-based solutions to this shortcoming, a model-free approach is still missing. We propose E-values, a generalization of counters that can be used to evaluate the propagating exploratory value over state-action trajectories. We compare our approach to commonly used RL techniques, and show that using E-values improves learning and performance over traditional counters. We also show how our method can be implemented with function approximation to efficiently learn continuous MDPs. We demonstrate this by showing that our approach surpasses state-of-the-art performance in the Freeway Atari 2600 game.","We propose a generalization of visit-counters that evaluate the propagating exploratory value over trajectories, enabling efficient exploration for model-free RL" 859,CEM-RL: Combining evolutionary and gradient-based methods for policy search,"Deep neuroevolution and deep reinforcement learning algorithms are two popular approaches to policy search. The former is widely applicable and rather stable, but suffers from low sample efficiency. By contrast, the latter is more sample efficient, but the most sample efficient variants are also rather unstable and highly sensitive to hyper-parameter setting. So far, these families of methods have mostly been compared as competing tools. However, an emerging approach consists in combining them so as to get the best of both worlds. Two previously existing combinations use either an ad hoc evolutionary algorithm or a goal exploration process together with the Deep Deterministic Policy Gradient algorithm, a sample efficient off-policy deep RL algorithm. In this paper, we propose a different combination scheme using the simple cross-entropy method and Twin Delayed Deep Deterministic policy gradient, another off-policy deep RL algorithm which improves over DDPG. We evaluate the resulting method, CEM-RL, on a set of benchmarks classically used in deep RL. We show that CEM-RL benefits from several
advantages over its competitors and offers a satisfactory trade-off between performance and sample efficiency.",We propose a new combination of evolution strategy and deep reinforcement learning which takes the best of both worlds 860,Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling,"Recent advances in deep reinforcement learning have made significant strides in performance on applications such as Go and Atari games.However, developing practical methods to balance exploration and exploitation in complex domains remains largely unsolved.Thompson Sampling and its extension to reinforcement learning provide an elegant approach to exploration that only requires access to posterior samples of the model.At the same time, advances in approximate Bayesian methods have made posterior approximation for flexible neural network models practical.Thus, it is attractive to consider approximate Bayesian neural networks in a Thompson Sampling framework.To understand the impact of using an approximate posterior on Thompson Sampling, we benchmark well-established and recently developed methods for approximate posterior sampling combined with Thompson Sampling over a series of contextual bandit problems.We found that many approaches that have been successful in the supervised learning setting underperformed in the sequential decision-making scenario.In particular, we highlight the challenge of adapting slowly converging uncertainty estimates to the online setting.",An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling 861,Activation Maximization Generative Adversarial Nets,"Class labels have been empirically shown useful in improving the sample quality of generative adversarial nets.In this paper, we mathematically study the properties of the current variants of GANs that make use of class label information.With class aware gradient and cross-entropy decomposition, we reveal how class labels and associated losses influence GAN's training.Based on that, we propose Activation Maximization Generative Adversarial Networks as an advanced solution.Comprehensive experiments have been conducted to validate our analysis and evaluate the effectiveness of our solution, where AM-GAN outperforms other strong baselines and achieves state-of-the-art Inception Score on CIFAR-10.In addition, we demonstrate that, with the Inception ImageNet classifier, Inception Score mainly tracks the diversity of the generator, and there is, however, no reliable evidence that it can reflect the true sample quality.We thus propose a new metric, called AM Score, to provide more accurate estimation on the sample quality.Our proposed model also outperforms the baseline methods in the new metric.",Understand how class labels help GAN training. Propose a new evaluation metric for generative models.
862,Equi-normalization of Neural Networks,"Modern neural networks are over-parametrized.In particular, each rectified linear hidden unit can be modified by a multiplicative factor by adjusting input and output weights, without changing the rest of the network.Inspired by the Sinkhorn-Knopp algorithm, we introduce a fast iterative method for minimizing the l2 norm of the weights, equivalently the weight decay regularizer.It provably converges to a unique solution.Interleaving our algorithm with SGD during training improves the test accuracy.For small batches, our approach offers an alternative to batch- and group-normalization on CIFAR-10 and ImageNet with a ResNet-18.",Fast iterative algorithm to balance the energy of a network while staying in the same functional equivalence class 863,Uncovering Surprising Behaviors in Reinforcement Learning via Worst-case Analysis,"Reinforcement learning agents are typically trained and evaluated according to their performance averaged over some distribution of environment settings.But does the distribution over environment settings contain important biases, and do these lead to agents that fail in certain cases despite high average-case performance?In this work, we consider worst-case analysis of agents over environment settings in order to detect whether there are directions in which agents may have failed to generalize.Specifically, we consider a 3D first-person task where agents must navigate procedurally generated mazes, and where reinforcement learning agents have recently achieved human-level average-case performance.By optimizing over the structure of mazes, we find that agents can suffer from catastrophic failures, failing to find the goal even on surprisingly simple mazes, despite their impressive average-case performance.Additionally, we find that these failures transfer between different agents and even significantly different architectures.We believe our findings highlight an important role for worst-case analysis in identifying whether there are directions in which agents have failed to generalize.Our hope is that the ability to automatically identify failures of generalization will facilitate development of more general and robust agents.To this end, we report initial results on enriching training with settings causing failure.",We find environment settings in which SOTA agents trained on navigation tasks display extreme failures suggesting failures in generalization. 864,SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models,"The standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest.We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series.If parameterized by an encoder-decoder architecture, the parameters of the encoder can be optimized to minimize the variance of this estimator.We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost.This estimator also allows use of latent variable models for tasks where unbiased estimators, rather than marginal likelihood lower bounds, are preferred, such as minimizing reverse KL divergences and estimating score functions.","We create an unbiased estimator for the log probability of latent variable models, extending such models to a larger scope of applications."
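The SUMO abstract above (record 864) builds its unbiased estimator by randomized truncation of an infinite series. The Python sketch below illustrates the generic Russian-roulette truncation trick on a toy series, not the paper's actual IWAE-based terms; the term and tail-probability functions are illustrative assumptions.

```python
import numpy as np

def russian_roulette_estimate(delta, tail_prob, rng=None):
    """Unbiased single-sample estimate of sum_{k>=1} delta(k): draw a random
    truncation K with P(K >= k) = tail_prob(k) (assumes tail_prob(1) == 1),
    then reweight each retained term by its inclusion probability."""
    rng = np.random.default_rng(rng)
    k = 1
    while rng.random() <= tail_prob(k + 1) / tail_prob(k):  # continue past k?
        k += 1
    return sum(delta(j) / tail_prob(j) for j in range(1, k + 1))

# Toy check: sum_{k>=1} 0.5^k = 1, with geometric tail P(K >= k) = 0.8^(k-1).
delta = lambda k: 0.5 ** k
tail = lambda k: 0.8 ** (k - 1)
est = np.mean([russian_roulette_estimate(delta, tail, rng=i) for i in range(20000)])
print(est)  # close to 1.0 in expectation
```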
865,Hyperparameter Tuning and Implicit Regularization in Minibatch SGD,"This paper makes two contributions towards understanding how the hyperparameters of stochastic gradient descent affect the final training loss and test accuracy of neural networks.First, we argue that stochastic gradient descent exhibits two regimes with different behaviours; a noise dominated regime which typically arises for small or moderate batch sizes, and a curvature dominated regime which typically arises when the batch size is large.In the noise dominated regime, the optimal learning rate increases as the batch size rises, and the training loss and test accuracy are independent of batch size under a constant epoch budget.In the curvature dominated regime, the optimal learning rate is independent of batch size, and the training loss and test accuracy degrade as the batch size rises.We support these claims with experiments on a range of architectures including ResNets, LSTMs and autoencoders.We always perform a grid search over learning rates at all batch sizes.Second, we demonstrate that small or moderately large batch sizes continue to outperform very large batches on the test set, even when both models are trained for the same number of steps and reach similar training losses.Furthermore, when training Wide-ResNets on CIFAR-10 with a constant batch size of 64, the optimal learning rate to maximize the test accuracy only decays by a factor of 2 when the epoch budget is increased by a factor of 128, while the optimal learning rate to minimize the training loss decays by a factor of 16.These results confirm that the noise in stochastic gradients can introduce beneficial implicit regularization.",Smaller batch sizes can outperform very large batches on the test set under constant step budgets and with properly tuned learning rate schedules. 
866,Single Deep Counterfactual Regret Minimization,"Counterfactual Regret Minimization is the most successful algorithm for finding approximate Nash equilibria in imperfect information games.However, CFR's reliance on full game-tree traversals limits its scalability and generality.Therefore, the game's state- and action-space is often abstracted for CFR, and the resulting strategy is then mapped back to the full game.This requires extensive expert-knowledge, is not practical in many games outside of poker, and often converges to highly exploitable policies.A recently proposed method, Deep CFR, applies deep learning directly to CFR, allowing the agent to intrinsically abstract and generalize over the state-space from samples, without requiring expert knowledge.In this paper, we introduce Single Deep CFR, a variant of Deep CFR that has a lower overall approximation error by avoiding the training of an average strategy network.We show that SD-CFR is more attractive from a theoretical perspective and empirically outperforms Deep CFR with respect to exploitability and one-on-one play in poker.",Better Deep Reinforcement Learning algorithm to approximate Counterfactual Regret Minimization 867,Conditional Out-of-Sample Generation For Unpaired Data using trVAE,"While generative models have shown great success in generating high-dimensional samples conditional on low-dimensional descriptors, their generation out-of-sample poses fundamental problems.The conditional variational autoencoder as a simple conditional generative model does not explicitly relate conditions during training and, hence, has no incentive of learning a compact joint distribution across conditions.We overcome this limitation by matching their distributions using maximum mean discrepancy in the decoder layer that follows the bottleneck.This introduces a strong regularization both for reconstructing samples within the same condition and for transforming samples across conditions, resulting in much improved generalization.We refer to the architecture as transformer VAE.Benchmarking trVAE on high-dimensional image and tabular data, we demonstrate higher robustness and higher accuracy than existing approaches.In particular, we show qualitatively improved predictions for cellular perturbation response to treatment and disease based on high-dimensional single-cell gene expression data, by tackling previously problematic minority classes and multiple conditions.For generic tasks, we improve Pearson correlations of high-dimensional estimated means and variances with their ground truths from 0.89 to 0.97 and 0.75 to 0.87, respectively.",Generates never seen data during training from a desired condition 868,A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks,"We analyze speed of convergence to global optimum for gradient descent training a deep linear neural network by minimizing the L2 loss over whitened data. Convergence at a linear rate is guaranteed when the following hold: dimensions of hidden layers are at least the minimum of the input and output dimensions; weight matrices at initialization are approximately balanced; and the initial loss is smaller than the loss of any rank-deficient solution. The assumptions on initialization are necessary, in the sense that violating any one of them may lead to convergence failure. Moreover, in the important case of output dimension 1, i.e.
scalar regression, they are met, and thus convergence to global optimum holds, with constant probability under a random initialization scheme. Our results significantly extend previous analyses, e.g., of deep linear residual networks.","We analyze gradient descent for deep linear neural networks, providing a guarantee of convergence to global optimum at a linear rate." 869,Learning Non-Parametric Invariances from Data with Permanent Random Connectomes ,"One of the fundamental problems in supervised classification and in machine learning in general, is the modelling of non-parametric invariances that exist in data.Most prior art has focused on enforcing priors in the form of invariances to parametric nuisance transformations that are expected to be present in data.However, learning non-parametric invariances directly from data remains an important open problem.In this paper, we introduce a new architectural layer for convolutional networks which is capable of learning general invariances from data itself.This layer can learn invariance to non-parametric transformations and, interestingly, motivates and incorporates permanent random connectomes, thereby being called Permanent Random Connectome Non-Parametric Transformation Networks.PRC-NPTN networks are initialized with random connections which are a small subset of the connections in a fully connected convolution layer.Importantly, these connections in PRC-NPTNs once initialized remain permanent throughout training and testing. Random connectomes make these architectures loosely more biologically plausible than many other mainstream network architectures which require highly ordered structures.We motivate randomly initialized connections as a simple method to learn invariance from data itself while invoking invariance towards multiple nuisance transformations simultaneously.We find that these randomly initialized permanent connections have positive effects on generalization, outperform much larger ConvNet baselines and the recently proposed Non-Parametric Transformation Network on benchmarks that enforce learning invariances from the data itself.",A layer modelling local random connectomes in the cortex within deep networks capable of learning general non-parametric invariances from the data itself.
870,Latent Transformations for Object View Points Synthesis,"We propose a fully-convolutional conditional generative model, the latent transformation neural network, capable of view synthesis using a light-weight neural network suited for real-time applications.In contrast to existing conditional generative models which incorporate conditioning information via concatenation, we introduce a dedicated network component, the conditional transformation unit, designed to learn the latent space transformations corresponding to specified target views.In addition, a consistency loss term is defined to guide the network toward learning the desired latent space mappings, a task-divided decoder is constructed to refine the quality of generated views, and an adaptive discriminator is introduced to improve the adversarial training process.The generality of the proposed methodology is demonstrated on a collection of three diverse tasks: multi-view reconstruction on real hand depth images, view synthesis of real and synthetic faces, and the rotation of rigid objects.The proposed model is shown to exceed state-of-the-art results in each category while simultaneously achieving a reduction in the computational demand required for inference by 30% on average.","We introduce an effective, general framework for incorporating conditioning information into inference-based generative models." 871,Enabling Limited Resource-Bounded Disjunction in Scheduling,"We describe three approaches to enabling an extremely computationally limited embedded scheduler to consider a small number of alternative activities based on resource availability.We consider the case where the scheduler is so computationally limited that it cannot perform backtracking search.The first two approaches precompile resource checks that only enable selection of a preferred alternative activity if sufficient resources are estimated to be available to schedule the remaining activities.The final approach mimics backtracking by invoking the scheduler multiple times with the alternative activities.We present an evaluation of these techniques on mission scenarios from NASA's next planetary rover where these techniques are being evaluated for inclusion in an onboard scheduler.","This paper describes three techniques to allow a non-backtracking, computationally limited scheduler to consider a small number of alternative activities based on resource availability." 872,Continual Learning via Neural Pruning,"Inspired by the modularity and the life-cycle of biological neurons, we introduce Continual Learning via Neural Pruning, a new method aimed at lifelong learning in fixed capacity models based on the pruning of neurons of low activity.In this method, an L1 regularizer is used to promote the presence of neurons of zero or low activity whose connections to previously active neurons are permanently severed at the end of training.Subsequent tasks are trained using these pruned neurons after reinitialization and cause zero deterioration to the performance of previous tasks.We show empirically that this biologically inspired method leads to state of the art results beating or matching current methods of higher computational complexity.",We use simple and biologically motivated modifications of standard learning techniques to achieve state of the art performance on catastrophic forgetting benchmarks.
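The continual-learning abstract above (record 872) trains with an L1 activity regularizer and then recycles low-activity units for later tasks. Below is a minimal PyTorch sketch of that idea, not the paper's exact procedure; the hook-based activity measurement, the threshold, and the helper names are illustrative assumptions.

```python
import torch

def l1_activity_penalty(hidden, lam=1e-4):
    # L1 regularizer pushing a subset of hidden units toward zero activity
    return lam * hidden.abs().mean()

def partition_units(model, loader, layer, thresh=1e-2):
    """After training a task, split the units of `layer` into 'frozen' (high
    mean activity, kept for the old task) and 'free' (low activity, available
    to be reinitialized for the next task)."""
    acts = []
    hook = layer.register_forward_hook(
        lambda m, inp, out: acts.append(out.detach().abs().mean(0)))
    with torch.no_grad():
        for x, _ in loader:
            model(x)
    hook.remove()
    mean_act = torch.stack(acts).mean(0)   # per-unit mean |activation|
    frozen = mean_act >= thresh
    return frozen, ~frozen
```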
873,Learning Audio Features for Singer Identification and Embedding,"There has been an increasing use of neural networks for music information retrieval tasks.In this paper, we empirically investigate different ways of improving the performance of convolutional neural networks on spectral audio features.More specifically, we explore three aspects of CNN design: depth of the network, the use of residual blocks along with the use of grouped convolution, and global aggregation over time.The application context is singer classification and singing performance embedding and we believe the conclusions extend to other types of music analysis using convolutional neural networks.The results show that global time aggregation helps to improve the performance of CNNs the most.Another contribution of this paper is the release of a singing recording dataset that can be used for training and evaluation.",Using deep learning techniques on singing voice related tasks. 874,Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs,"The Softmax function is used in the final layer of nearly all existing sequence-to-sequence models for language generation.However, it is usually the slowest layer to compute, which limits the vocabulary size to a subset of most frequent types; and it has a large memory footprint.We propose a general technique for replacing the softmax layer with a continuous embedding layer.Our primary innovations are a novel probabilistic loss, and a training and inference procedure in which we generate a probability distribution over pre-trained word embeddings, instead of a multinomial distribution over the vocabulary obtained via softmax.We evaluate this new class of sequence-to-sequence models with continuous outputs on the task of neural machine translation.We show that our models obtain up to 2.5x speed-up in training time while performing on par with the state-of-the-art models in terms of translation quality.These models are capable of handling very large vocabularies without compromising on translation quality.They also produce more meaningful errors than the softmax-based models, as these errors typically lie in a subspace of the vector space of the reference translations.",Language generation using seq2seq models which produce word embeddings instead of a softmax based distribution over the vocabulary at each step enabling much faster training while maintaining generation quality 875,Spontaneous Symmetry Breaking in Deep Neural Networks,"We propose a framework to understand the unprecedented performance and robustness of deep neural networks using field theory.Correlations between the weights within the same layer can be described by symmetries in that layer, and networks generalize better if such symmetries are broken to reduce the redundancies of the weights.Using a two-parameter field theory, we find that the network can break such symmetries itself towards the end of training in a process commonly known in physics as spontaneous symmetry breaking.This corresponds to a network generalizing itself without any user input layers to break the symmetry, but by communication with adjacent layers.In the layer decoupling limit applicable to residual networks, we show that the remnant symmetries that survive the non-linear layers are spontaneously broken based on empirical results.The Lagrangian for the non-linear and weight layers together has striking similarities with the one in quantum field theory of a scalar.Using results from quantum field theory we show
that our framework is able to explain many experimentally observed phenomena, such as training on random labels with zero error, the information bottleneck and the phase transition out of it, shattered gradients, and many more.",Closed form results for deep learning in the layer decoupling limit applicable to Residual Networks 876,Clipping Free Attacks Against Neural Networks,"During the last years, a remarkable breakthrough has been made in the AI domain thanks to artificial deep neural networks that achieved a great success in many machine learning tasks in computer vision, natural language processing, speech recognition, malware detection and so on.However, they are highly vulnerable to easily crafted adversarial examples.Many investigations have pointed out this fact and different approaches have been proposed to generate attacks while adding a limited perturbation to the original data.The most robust known method so far is the so called C&W attack [1].Nonetheless, a countermeasure known as feature squeezing coupled with ensemble defense showed that most of these attacks can be destroyed [6].In this paper, we present a new method we call Centered Initial Attack whose advantage is twofold: first, it ensures by construction the maximum perturbation to be smaller than a threshold fixed beforehand, without the clipping process that degrades the quality of attacks.Second, it is robust against recently introduced defenses such as feature squeezing, JPEG encoding and even against a voting ensemble of defenses.While its application is not limited to images, we illustrate this using five of the current best classifiers on the ImageNet dataset, among which two are adversarially retrained on purpose to be robust against attacks.With a fixed maximum perturbation of only 1.5% on any pixel, around 80% of attacks fool the voting ensemble defense and nearly 100% when the perturbation is only 6%.While this shows how it is difficult to defend against CIA attacks, the last section of the paper gives some guidelines to limit their impact."," In this paper, a new method we call Centered Initial Attack (CIA) is provided. It ensures by construction the maximum perturbation to be smaller than a threshold fixed beforehand, without the clipping process."
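The CIA abstract above (record 876) stresses that the maximum perturbation stays below a fixed threshold by construction, with no clipping step. One common way to obtain such a guarantee, sketched below in PyTorch, is to reparameterize the perturbation through a bounded function such as tanh; this is an assumption about how the bound can be enforced, not the paper's exact construction, and the step rule is illustrative.

```python
import torch

def attack_step(model, loss_fn, x, y, w, eps=0.015, lr=0.1):
    """One ascent step on an unconstrained variable w.  The perturbation
    eps * tanh(w) is bounded by eps in every pixel for any value of w, so no
    post-hoc clipping of the perturbation is ever required."""
    w = w.clone().requires_grad_(True)
    loss = loss_fn(model(x + eps * torch.tanh(w)), y)  # push toward misclassification
    loss.backward()
    with torch.no_grad():
        w = w + lr * w.grad.sign()
    return w.detach()
```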
877,Query2box: Reasoning over Knowledge Graphs in Vector Space Using Box Embeddings,"Answering complex logical queries on large-scale incomplete knowledge graphs is a fundamental yet challenging task.Recently, a promising approach to this problem has been to embed KG entities as well as the query into a vector space such that entities that answer the query are embedded close to the query.However, prior work models queries as single points in the vector space, which is problematic because a complex query represents a potentially large set of its answer entities, but it is unclear how such a set can be represented as a single point.Furthermore, prior work can only handle queries that use conjunctions and existential quantifiers.Handling queries with logical disjunctions remains an open problem.Here we propose query2box, an embedding-based framework for reasoning over arbitrary queries with ∧, ∨, and ∃ operators in massive and incomplete KGs.Our main insight is that queries can be embedded as boxes, where a set of points inside the box corresponds to a set of answer entities of the query.We show that conjunctions can be naturally represented as intersections of boxes and also prove a negative result that handling disjunctions would require embeddings with dimension proportional to the number of KG entities.However, we show that by transforming queries into a Disjunctive Normal Form, query2box is capable of handling arbitrary logical queries with ∧, ∨, and ∃ in a scalable manner.We demonstrate the effectiveness of query2box on two large KGs and show that query2box achieves up to 25% relative improvement over the state of the art.",Answering a wide class of logical queries over knowledge graphs with box embeddings in vector space 878,Open Loop Hyperparameter Optimization and Determinantal Point Processes,"Driven by the need for parallelizable hyperparameter optimization methods, this paper studies open loop search methods: sequences that are predetermined and can be generated before a single configuration is evaluated.Examples include grid search, uniform random search, low discrepancy sequences, and other sampling distributions.In particular, we propose the use of k-determinantal point processes in hyperparameter optimization via random search.Compared to conventional uniform random search where hyperparameter settings are sampled independently, a k-DPP promotes diversity. We describe an approach that transforms hyperparameter search spaces for efficient use with a k-DPP.In addition, we introduce a novel Metropolis-Hastings algorithm which can sample from k-DPPs defined over any space from which uniform samples can be drawn, including spaces with a mixture of discrete and continuous dimensions or tree structure.Our experiments show significant benefits in realistic scenarios with a limited budget for training supervised learners, whether in serial or parallel.",We address fully parallel hyperparameter optimization with Determinantal Point Processes.
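The query2box abstract above (record 877) represents a query as an axis-aligned box (center plus non-negative offset) and answers as the points inside it. The PyTorch sketch below illustrates one common box distance and a simple box intersection; the exact distance weighting and the paper's learned attention-based intersection operator are not reproduced, so the details here are illustrative.

```python
import torch

def box_distance(entity, center, offset, alpha=0.2):
    """Distance from an entity embedding to a box: an 'outside' part plus a
    down-weighted 'inside' part (alpha < 1), so points inside the box score best."""
    lo, hi = center - offset, center + offset
    dist_out = (torch.relu(entity - hi) + torch.relu(lo - entity)).sum(-1)
    dist_in = (center - torch.min(hi, torch.max(lo, entity))).abs().sum(-1)
    return dist_out + alpha * dist_in

def box_intersection(centers, offsets):
    # Conjunction of queries ~ intersection of their boxes (simple coordinate-wise
    # version; an empty intersection collapses to a zero-offset box).
    lo = (centers - offsets).max(0).values
    hi = (centers + offsets).min(0).values
    return (lo + hi) / 2, torch.relu(hi - lo) / 2
```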
879,GraphZoom: A Multi-level Spectral Approach for Accurate and Scalable Graph Embedding,"Graph embedding techniques have been increasingly deployed in a multitude of different applications that involve learning on non-Euclidean data.However, existing graph embedding models either fail to incorporate node attribute information during training or suffer from node attribute noise, which compromises the accuracy.Moreover, very few of them scale to large graphs due to their high computational complexity and memory usage.In this paper we propose GraphZoom, a multi-level framework for improving both accuracy and scalability of unsupervised graph embedding algorithms.GraphZoom first performs graph fusion to generate a new graph that effectively encodes the topology of the original graph and the node attribute information.This fused graph is then repeatedly coarsened into a much smaller graph by merging nodes with high spectral similarities.GraphZoom allows any existing embedding method to be applied to the coarsened graph, before progressively refining the embeddings obtained at the coarsest level to increasingly finer graphs.We have evaluated our approach on a number of popular graph datasets for both transductive and inductive tasks.Our experiments show that GraphZoom increases the classification accuracy and significantly reduces the run time compared to state-of-the-art unsupervised embedding methods.",A multi-level spectral approach to improving the quality and scalability of unsupervised graph embedding. 880,Syntax-Directed Variational Autoencoder for Structured Data,"Deep generative models have been enjoying success in modeling continuous data.However, it remains challenging to capture the representations for discrete structures with formal grammars and semantics, e.g., computer programs and molecular structures.How to generate both syntactically and semantically correct data still remains largely an open problem.Inspired by compiler theory, where syntax and semantics checking is done via syntax-directed translation, we propose a novel syntax-directed variational autoencoder by introducing stochastic lazy attributes.This approach converts the offline SDT check into on-the-fly generated guidance for constraining the decoder.Compared to state-of-the-art methods, our approach enforces constraints on the output space so that the output will be not only syntactically valid, but also semantically reasonable.We evaluate the proposed model with applications in programming language and molecules, including reconstruction and program/molecule optimization.The results demonstrate the effectiveness in incorporating syntactic and semantic constraints in discrete generative models, which is significantly better than current state-of-the-art approaches.","A new generative model for discrete structured data. 
The proposed stochastic lazy attribute converts the offline semantic check into online guidance for stochastic decoding, which effectively addresses the constraints in syntax and semantics, and also achieves superior performance" 881,Natural Language Inference with External Knowledge,"Modeling informal inference in natural language is very challenging.With the recent availability of large annotated data, it has become feasible to train complex models such as neural networks to perform natural language inference, which have achieved state-of-the-art performance.Although there exist relatively large annotated data, can machines learn all knowledge needed to perform NLI from the data?If not, how can NLI models benefit from external knowledge and how should NLI models be built to leverage it?In this paper, we aim to answer these questions by enriching the state-of-the-art neural natural language inference models with external knowledge.We demonstrate that the proposed models with external knowledge further improve the state of the art on the Stanford Natural Language Inference dataset.",the proposed models with external knowledge further improve the state of the art on the SNLI dataset. 882,EXPLOITING SEMANTIC COHERENCE TO IMPROVE PREDICTION IN SATELLITE SCENE IMAGE ANALYSIS: APPLICATION TO DISEASE DENSITY ESTIMATION,"High intra-class diversity and inter-class similarity are characteristics of remote sensing scene image data sets currently posing significant difficulty for deep learning algorithms on classification tasks.To improve accuracy, post-classification methods have been proposed for smoothing results of model predictions.However, those approaches require an additional neural network to perform the smoothing operation, which adds overhead to the task.We propose an approach that involves learning deep features directly over neighboring scene images without requiring use of a cleanup model.Our approach utilizes a siamese network to improve the discriminative power of convolutional neural networks on a pair of neighboring scene images.It then exploits semantic coherence between this pair to enrich the feature vector of the image for which we want to predict a label.Empirical results show that this approach provides a viable alternative to existing methods.For example, our model improved prediction accuracy by 1 percentage point and dropped the mean squared error value by 0.02 over the baseline, on a disease density estimation task.These performance gains are comparable with results from existing post-classification methods, moreover without the implementation overhead.",Approach for improving prediction accuracy by learning deep features over neighboring scene images in satellite scene image analysis.
883,Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics,"Sparsely available data points cause numerical error in finite differences, which hinders modeling the dynamics of physical systems.The discretization error becomes even larger when the sparse data are irregularly distributed so that the data are defined on an unstructured grid, making it hard to build deep learning models that handle physics-governing observations on the unstructured grid.In this paper, we propose a novel architecture named Physics-aware Difference Graph Networks that exploits neighboring information to learn finite differences inspired by physics equations.PA-DGN further leverages data-driven end-to-end learning to discover underlying dynamical relations between the spatial and temporal differences in given observations.We demonstrate the superiority of PA-DGN in the approximation of directional derivatives and the prediction of graph signals on the synthetic data and the real-world climate observations from weather stations.",We propose physics-aware difference graph networks designed to effectively learn spatial differences for modeling sparsely-observed dynamics. 884,Automatic Inference of Sound Correspondence Patterns Across Multiple Languages,"Sound correspondence patterns play a crucial role for linguistic reconstruction.Linguists use them to prove language relationship, to reconstruct proto-forms, and for classical phylogenetic reconstruction based on shared innovations.Cognate words which fail to conform with expected patterns can further point to various kinds of exceptions in sound change, such as analogy or assimilation of frequent words.Here we present an automatic method for the inference of sound correspondence patterns across multiple languages based on a network approach.The core idea is to represent all columns in aligned cognate sets as nodes in a network with edges representing the degree of compatibility between the nodes.The task of inferring all compatible correspondence sets can then be handled as the well-known minimum clique cover problem in graph theory, which essentially seeks to split the graph into the smallest number of cliques in which each node is represented by exactly one clique.The resulting partitions represent all correspondence patterns which can be inferred for a given dataset.By excluding those patterns which occur in only a few cognate sets, the core of regularly recurring sound correspondences can be inferred.Based on this idea, the paper presents a method for automatic correspondence pattern recognition, which is implemented as part of a Python library which supplements the paper.To illustrate the usefulness of the method, various tests are presented, and concrete examples of the output of the method are provided.In addition to the source code, the study is supplemented by a short interactive tutorial that illustrates how to use the new method and how to inspect its results.",The paper describes a new algorithm by which sound correspondence patterns for multiple languages can be inferred.
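The abstract above (record 884) casts correspondence-pattern inference as a minimum clique cover over a compatibility graph. A standard way to approximate a clique cover, sketched below with networkx, is to greedily color the complement graph, since each color class of the complement is a clique of the original; this is a generic reduction, not the paper's own algorithm, and the toy graph is illustrative.

```python
import networkx as nx

def approximate_clique_cover(G):
    """Approximate minimum clique cover of G: properly color the complement of
    G and return each color class, which is a clique in G."""
    coloring = nx.coloring.greedy_color(nx.complement(G), strategy="largest_first")
    cliques = {}
    for node, color in coloring.items():
        cliques.setdefault(color, []).append(node)
    return list(cliques.values())

# Toy compatibility graph over alignment columns c0..c4.
G = nx.Graph([("c0", "c1"), ("c1", "c2"), ("c0", "c2"), ("c3", "c4")])
print(approximate_clique_cover(G))  # e.g. [['c0', 'c1', 'c2'], ['c3', 'c4']]
```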
885,Kernel RNN Learning (KeRNL),"We describe Kernel RNN Learning, a reduced-rank, temporal eligibility trace-based approximation to backpropagation through time for training recurrent neural networks that gives competitive performance to BPTT on long time-dependence tasks.The approximation replaces a rank-4 gradient learning tensor, which describes how past hidden unit activations affect the current state, by a simple reduced-rank product of a sensitivity weight and a temporal eligibility trace.In this structured approximation motivated by node perturbation, the sensitivity weights and eligibility kernel time scales are themselves learned by applying perturbations.The rule represents another step toward biologically plausible or neurally inspired ML, with lower complexity in terms of relaxed architectural requirements, a smaller memory demand, and a shorter feedback time.",A biologically plausible learning rule for training recurrent neural networks 886,Deep Graph Infomax,"We present Deep Graph Infomax, a general approach for learning node representations within graph-structured data in an unsupervised manner.DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs---both derived using established graph convolutional network architectures.The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks.In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups.We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.","A new method for unsupervised representation learning on graphs, relying on maximizing mutual information between local and global representations in a graph. State-of-the-art results, competitive with supervised learning." 887,Recall Traces: Backtracking Models for Efficient Reinforcement Learning,"In many environments only a tiny subset of all states yield high reward. In these cases, few of the interactions with the environment provide a relevant learning signal.Hence, we may want to preferentially train on those high-reward states and the probable trajectories leading to them.To this end, we advocate for the use of a backtracking model that predicts the preceding states that terminate at a given high-reward state. We can train a model which, starting from a high value state, predicts and samples which (state, action)-tuples may have led to that high value state.These traces of (state, action) pairs, which we refer to as Recall Traces, sampled from this backtracking model starting from a high value state, are informative as they terminate in good states, and hence we can use these traces to improve a policy.We provide a variational interpretation for this idea and a practical algorithm in which the backtracking model samples from an approximate posterior distribution over trajectories which lead to large rewards.Our method improves the sample efficiency of both on- and off-policy RL algorithms across several environments and tasks. ","A backward model of previous (state, action) given the next state, i.e. P(s_t, a_t | s_{t+1}), can be used to simulate additional trajectories terminating at states of interest! Improves RL learning efficiency."
888,Tensor-Based Preposition Representation,"Prepositions are among the most frequent words.Good prepositional representation is of great syntactic and semantic interest in computational linguistics.Existing methods on preposition representation either treat prepositions as content words or depend heavily on external linguistic resources including syntactic parsing, training task and dataset-specific representations.In this paper we use word-triple counts to capture the preposition's interaction with its head and children.Prepositional embeddings are derived via tensor decompositions on a large unlabeled corpus. We reveal a new geometry involving Hadamard products and empirically demonstrate its utility in paraphrasing of phrasal verbs.Furthermore, our prepositional embeddings are used as simple features for two challenging downstream tasks: preposition selection and prepositional attachment disambiguation.We achieve comparable to or better results than state of the art on multiple standardized datasets. ",This work is about a tensor-based method for preposition representation training. 889,FFJORD: Free-Form Continuous Dynamics for Scalable Reversible Generative Models,"A promising class of generative models maps points from a simple distribution to a complex distribution through an invertible neural network. Likelihood-based training of these models requires restricting their architectures to allow cheap computation of Jacobian determinants. Alternatively, the Jacobian trace can be used if the transformation is specified by an ordinary differential equation.In this paper, we use Hutchinson’s trace estimator to give a scalable unbiased estimate of the log-density. The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures.We demonstrate our approach on high-dimensional density estimation, image generation, and variational inference, achieving the state-of-the-art among exact likelihood methods with efficient sampling.",We use continuous time dynamics to define a generative model with exact likelihoods and efficient sampling that is parameterized by unrestricted neural networks. 890,Generative Feature Matching Networks,"We propose a non-adversarial feature matching-based approach to train generative models.Our approach, Generative Feature Matching Networks, leverages pretrained neural networks such as autoencoders and ConvNet classifiers to perform feature extraction.We perform an extensive number of experiments with different challenging datasets, including ImageNet.Our experimental results demonstrate that, due to the expressiveness of the features from pretrained ImageNet classifiers, even by just matching first order statistics, our approach can achieve state-of-the-art results for challenging benchmarks such as CIFAR10 and STL10.",A new non-adversarial feature matching-based approach to train generative models that achieves state-of-the-art results.
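The GFMN abstract above (record 890) trains a generator by matching first-order statistics of features from a frozen pretrained network rather than through an adversarial critic. Below is a minimal PyTorch sketch of such a feature-mean matching loss; the moving-average machinery and multi-layer feature sets used in practice are omitted, and the names are illustrative.

```python
import torch

def feature_matching_loss(feature_extractor, real, fake):
    """Squared difference between the mean pretrained features of a real batch
    and a generated batch; the extractor stays frozen, only the generator that
    produced `fake` receives gradients."""
    with torch.no_grad():
        mu_real = feature_extractor(real).mean(0)
    mu_fake = feature_extractor(fake).mean(0)
    return torch.mean((mu_real - mu_fake) ** 2)

# generator step (sketch): loss = feature_matching_loss(F, x_real, G(z)); loss.backward()
```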
891,Gaussian Prototypical Networks for Few-Shot Learning on Omniglot,"We propose a novel architecture for k-shot classification on the Omniglot dataset.Building on prototypical networks, we extend their architecture to what we call Gaussian prototypical networks.Prototypical networks learn a map between images and embedding vectors, and use their clustering for classification.In our model, a part of the encoder output is interpreted as a confidence region estimate about the embedding point, and expressed as a Gaussian covariance matrix.Our network then constructs a direction and class dependent distance metric on the embedding space, using uncertainties of individual data points as weights.We show that Gaussian prototypical networks are a preferred architecture over vanilla prototypical networks with an equivalent number of parameters.We report results consistent with state-of-the-art performance in 1-shot and 5-shot classification both in 5-way and 20-way regime on the Omniglot dataset.We explore artificially down-sampling a fraction of images in the training set, which improves our performance.Our experiments therefore lead us to hypothesize that Gaussian prototypical networks might perform better in less homogeneous, noisier datasets, which are commonplace in real world applications.",A novel architecture for few-shot classification capable of dealing with uncertainty. 892,Deep Convolutional Networks as shallow Gaussian Processes,"We show that the output of a CNN with an appropriate prior over the weights and biases is a GP in the limit of infinitely many convolutional filters, extending similar results for dense networks.For a CNN, the equivalent kernel can be computed exactly and, unlike ""deep kernels"", has very few parameters: only the hyperparameters of the original CNN.Further, we show that this kernel has two properties that allow it to be computed efficiently; the cost of evaluating the kernel for a pair of images is similar to a single forward pass through the original CNN with only one filter per layer.The kernel equivalent to a 32-layer ResNet obtains 0.84% classification error on MNIST, a new record for GP with a comparable number of parameters.",We show that CNNs and ResNets with appropriate priors on the parameters are Gaussian processes in the limit of infinitely many convolutional filters. 
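The Gaussian prototypical network abstract above (record 891) lets the encoder output a confidence estimate that weights both the class prototypes and the distance metric. The PyTorch sketch below is a simplified diagonal-precision version of that idea, not the paper's exact parameterization; the helper names are illustrative.

```python
import torch

def gaussian_prototypes(embeddings, precisions, labels, n_classes):
    """Per-class prototypes as precision-weighted means of support embeddings,
    with a per-class diagonal precision for the distance metric."""
    protos, proto_prec = [], []
    for c in range(n_classes):
        m = labels == c
        s = precisions[m]                                   # (n_support_c, D)
        protos.append((s * embeddings[m]).sum(0) / s.sum(0))
        proto_prec.append(s.mean(0))
    return torch.stack(protos), torch.stack(proto_prec)

def classify(query_emb, protos, proto_prec):
    # precision-weighted squared distance: uncertain dimensions count less
    d2 = ((query_emb[:, None, :] - protos[None]) ** 2 * proto_prec[None]).sum(-1)
    return (-d2).softmax(-1)
```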
893,"EMS: End-to-End Model Search for Network Architecture, Pruning and Quantization","We present an end-to-end design methodology for efficient deep learning deployment.Unlike previous methods that separately optimize the neural network architecture, pruning policy, and quantization policy, we jointly optimize them in an end-to-end manner.To deal with the larger design space it brings, we train a quantization-aware accuracy predictor that is fed to the evolutionary search to select the best fit.We first generate a large dataset of pairs without training each architecture, but by sampling a unified supernet.Then we use these data to train an accuracy predictor without quantization, further using a predictor-transfer technique to get the quantization-aware predictor, which reduces the amount of post-quantization fine-tuning time.Extensive experiments on ImageNet show the benefits of the end-to-end methodology: it maintains the same accuracy as the ResNet34 float model while saving 2.2× BitOps compared with the 8-bit model; we obtain the same level of accuracy as MobileNetV2+HAQ while achieving 2×/1.3× latency/energy saving; the end-to-end optimization outperforms separate optimizations using ProxylessNAS+AMC+HAQ by 2.3% accuracy while reducing GPU hours and CO2 emission by orders of magnitude.",We present an end-to-end design methodology for efficient deep learning deployment. 894,Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer,"Many machine learning image classifiers are vulnerable to adversarial attacks, inputs with perturbations designed to intentionally trigger misclassification.Current adversarial methods directly alter pixel colors and evaluate against pixel norm-balls: pixel perturbations smaller than a specified magnitude, according to a measurement norm.This evaluation, however, has limited practical utility since perturbations in the pixel space do not correspond to underlying real-world phenomena of image formation that lead to them and has no security motivation attached.Pixels in natural images are measurements of light that has interacted with the geometry of a physical scene.As such, we propose a novel evaluation measure, parametric norm-balls, by directly perturbing physical parameters that underlie image formation.One enabling contribution we present is a physically-based differentiable renderer that allows us to propagate pixel gradients to the parametric space of lighting and geometry.Our approach enables physically-based adversarial attacks, and our differentiable renderer leverages models from the interactive rendering literature to balance the performance and accuracy trade-offs necessary for a memory-efficient and scalable adversarial data augmentation workflow.","Enabled by a novel differentiable renderer, we propose a new metric that has real-world implications for evaluating adversarial machine learning algorithms, resolving the lack of realism of the existing metric based on pixel norms."
895,On the Turing Completeness of Modern Neural Network Architectures,"Alternatives to recurrent neural networks, in particular, architectures based on attention or convolutions, have been gaining momentum for processing input sequences.In spite of their relevance, the computational properties of these alternatives have not yet been fully explored.We study the computational power of two of the most paradigmatic architectures exemplifying these mechanisms: the Transformer and the Neural GPU.We show both models to be Turing complete exclusively based on their capacity to compute and access internal dense representations of the data.In particular, neither the Transformer nor the Neural GPU requires access to an external memory to become Turing complete.Our study also reveals some minimal sets of elements needed to obtain these completeness results.",We show that the Transformer architecture and the Neural GPU are Turing complete. 896,An Alarm System for Segmentation Algorithm Based on Shape Model,"It is usually hard for a learning system to predict correctly on rare events, and segmentation algorithms are no exception.Therefore, we hope to build an alarm system to set off alarms when the segmentation result is possibly unsatisfactory.One plausible solution is to project the segmentation results into a low dimensional feature space, and then learn classifiers/regressors in the feature space to predict the qualities of segmentation results.In this paper, we form the feature space using a shape feature, which is strong prior information shared across different data, so it is capable of predicting the qualities of segmentation results given different segmentation algorithms on different datasets.The shape feature of a segmentation result is captured using the value of the loss function when the segmentation result is tested using a Variational Auto-Encoder.The VAE is trained using only the ground truth masks, therefore the bad segmentation results with bad shapes become rare events for the VAE and will result in a large loss value.By utilizing this fact, the VAE is able to detect all kinds of shapes that are out of the distribution of normal shapes in ground truth.Finally, we learn the representation in the one-dimensional feature space to predict the qualities of segmentation results.We evaluate our alarm system on several recent segmentation algorithms for the medical segmentation task.The segmentation algorithms perform differently on different datasets, but our system consistently provides reliable prediction on the qualities of segmentation results.",We use VAE to capture the shape feature for automatic segmentation evaluation 897,Evolution of Eigenvalue Decay in Deep Networks,"The linear transformations in converged deep networks show fast eigenvalue decay.The distribution of eigenvalues looks like a heavy-tailed distribution, where the vast majority of eigenvalues are small, but not actually zero, and only a few spikes of large eigenvalues exist.We use a stochastic approximator to generate histograms of eigenvalues.This allows us to investigate layers with hundreds of thousands of dimensions.We show how the distributions change over the course of ImageNet training, converging to a similar heavy-tail spectrum across all intermediate layers.",We investigate the eigenvalues of the linear layers in deep networks and show that the distributions develop heavy-tail behavior during training.
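The eigenvalue-decay abstract above (record 897) inspects the spectrum of each layer's linear map. For small layers this can be done exactly, as in the numpy sketch below; the paper's stochastic approximator, needed for layers with hundreds of thousands of dimensions, is not reproduced here, and the random matrix is only a stand-in for a trained weight matrix.

```python
import numpy as np

W = np.random.randn(512, 256) / np.sqrt(256)    # stand-in for a trained weight matrix
eigs = np.linalg.svd(W, compute_uv=False) ** 2  # eigenvalues of W^T W
hist, edges = np.histogram(eigs, bins=50)
print(edges[np.argmax(hist)], eigs.max())       # bulk location vs. largest spike
```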
898,Recurrent Layer Attention Network,"Capturing long-range feature relations has been a central issue in convolutional neural networks.To tackle this, attempts to integrate end-to-end trainable attention modules into CNNs are widespread.The main goal of these works is to adjust feature maps considering spatial-channel correlation inside a convolution layer.In this paper, we focus on modeling relationships among layers and propose a novel structure, 'Recurrent Layer Attention network,' which stores the hierarchy of features into recurrent neural networks that concurrently propagate with the CNN and adaptively scale the feature volumes of all layers.We further introduce several structural derivatives for demonstrating the compatibility with recent attention modules and the expandability of the proposed network.For semantic understanding on learned features, we also visualize intermediate layers and plot the curve of layer scaling coefficients.Recurrent Layer Attention network achieves significant performance enhancement while requiring only a slight increase in parameters on an image classification task with the CIFAR and ImageNet-1K 2012 datasets and an object detection task with the Microsoft COCO 2014 dataset.","We propose a new type of end-to-end trainable attention module, which applies global weight balances among layers by utilizing co-propagating RNN with CNN." 899,Insect Cyborgs: Bio-mimetic Feature Generators Improve ML Accuracy on Limited Data,"We seek to auto-generate stronger input features for ML methods faced with limited training data.Biological neural nets excel at fast learning, implying that they extract highly informative features.In particular, the insect olfactory network learns new odors very rapidly, by means of three key elements: a competitive inhibition layer; randomized, sparse connectivity into a high-dimensional sparse plastic layer; and Hebbian updates of synaptic weights.In this work we deploy MothNet, a computational model of the moth olfactory network, as an automatic feature generator.Attached as a front-end pre-processor, MothNet's readout neurons provide new features, derived from the original features, for use by standard ML classifiers.These insect cyborgs have significantly better performance than baseline ML methods alone on vectorized MNIST and Omniglot data sets, reducing test set error averages by 20% to 55%.The MothNet feature generator also substantially out-performs other feature generating methods including PCA, PLS, and NNs.These results highlight the potential value of BNN-inspired feature generators in the ML context.",Features auto-generated by the bio-mimetic MothNet model significantly improve the test accuracy of standard ML methods on vectorized MNIST. The MothNet-generated features also outperform standard feature generators.
900,Anytime Neural Network: a Versatile Trade-off Between Computation and Accuracy,"We present an approach for anytime predictions in deep neural networks.For each test sample, an anytime predictor produces a coarse result quickly, and then continues to refine it until the test-time computational budget is depleted.Such predictors can address the growing computational problem of DNNs by automatically adjusting to varying test-time budgets.In this work, we study an augmentation to feed-forward networks to form anytime neural networks via auxiliary predictions and losses.Specifically, we point out a blind-spot in recent studies in such ANNs: the importance of high final accuracy.In fact, we show on multiple recognition data-sets and architectures that by having near-optimal final predictions in small anytime models, we can effectively double the speed of large ones to reach the corresponding accuracy level.We achieve such speed-up with simple weighting of anytime losses that oscillate during training.We also assemble a sequence of exponentially deepening ANNs, to achieve both theoretically and practically near-optimal anytime results at any budget, at the cost of a constant fraction of additional consumed budget.","By focusing more on the final predictions in anytime predictors (such as the very recent Multi-Scale-DenseNets), we make small anytime models outperform large ones that don't have such focus. " 901,Memory-efficient Learning for Large-scale Computational Imaging,"Computational imaging systems jointly design computation and hardware to retrieve information which is not traditionally accessible with standard imaging systems.Recently, critical aspects such as experimental design and image priors are optimized through deep neural networks formed by the unrolled iterations of classical physics-based reconstructions.However, for real-world large-scale systems, computing gradients via backpropagation restricts learning due to memory limitations of graphical processing units.In this work, we propose a memory-efficient learning procedure that exploits the reversibility of the network’s layers to enable data-driven design for large-scale computational imaging.We demonstrate our method's practicality on two large-scale systems: super-resolution optical microscopy and multi-channel magnetic resonance imaging.",We propose a memory-efficient learning procedure that exploits the reversibility of the network’s layers to enable data-driven design for large-scale computational imaging. 902,Neuron as an Agent,"Existing multi-agent reinforcement learning communication methods have relied on a trusted third party to distribute reward to agents, leaving them inapplicable in peer-to-peer environments.This paper proposes reward distribution using Neuron as an Agent (NaaA) in MARL without a TTP, with two key ideas: inter-agent reward distribution and auction theory.Auction theory is introduced because inter-agent reward distribution is insufficient for optimization.Agents in NaaA maximize their profits and, as a theoretical result, the auction mechanism is shown to have agents autonomously evaluate counterfactual returns as the values of other agents.NaaA enables representation trades in peer-to-peer environments, ultimately regarding units in neural networks as agents.Finally, numerical experiments confirm that NaaA framework optimization leads to better performance in reinforcement learning.",Neuron as an Agent (NaaA) enables us to train multi-agent communication without a trusted third party.
903,ALBERT: A Lite BERT for Self-supervised Learning of Language Representations,"Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks.However, at some point further model increases become harder due to GPU/TPU memory limitations, longer training times, and unexpected model degradation.To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT.Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT.We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs.As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large.","A new pretraining method that establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large. " 904,SuperTML: Domain Transfer from Computer Vision to Structured Tabular Data through Two-Dimensional Word Embedding,"Structured tabular data is the most commonly used form of data in industry according to a Kaggle ML and DS Survey.Gradient Boosting Trees, Support Vector Machine, Random Forest, and Logistic Regression are typically used for classification tasks on tabular data.The recent Super Characters method, which uses two-dimensional word embeddings, achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach.In this paper, we propose the SuperTML method, which borrows the idea of the Super Characters method and two-dimensional embeddings to address the problem of classification on tabular data.For each input of tabular data, the features are first projected into two-dimensional embeddings like an image, and then this image is fed into fine-tuned ImageNet CNN models for classification.Experimental results have shown that the proposed SuperTML method has achieved state-of-the-art results on both large and small datasets.",Deep learning on structured tabular data using two-dimensional word embedding with fine-tuned ImageNet pre-trained CNN model.
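The SuperTML abstract above (record 904) projects each tabular row into a two-dimensional image that a pretrained ImageNet CNN can consume. The Python sketch below illustrates one simple way to do such a projection by rendering feature values as text on a blank image; the grid layout, font, and image size are illustrative assumptions rather than the paper's exact recipe.

```python
from PIL import Image, ImageDraw
import numpy as np

def row_to_image(row, size=224, cols=2):
    """Render the values of one tabular row as text at fixed grid positions of
    a blank grayscale image; the result can be fed to a fine-tuned CNN."""
    img = Image.new("L", (size, size), color=0)
    draw = ImageDraw.Draw(img)
    rows = (len(row) + cols - 1) // cols
    cell_w, cell_h = size // cols, size // rows
    for i, value in enumerate(row):
        x, y = (i % cols) * cell_w + 4, (i // cols) * cell_h + 4
        draw.text((x, y), f"{value:.3g}", fill=255)
    return np.asarray(img)

# e.g. the four Iris features of one sample rendered into a 224x224 image
print(row_to_image([5.1, 3.5, 1.4, 0.2]).shape)  # (224, 224)
```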
905,Adaptive Gradient Methods with Dynamic Bound of Learning Rate,"Adaptive optimization methods such as AdaGrad, RMSprop and Adam have been proposed to achieve a rapid training process with an element-wise scaling term on learning rates.Though prevailing, they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates.Recent work has put forward some algorithms such as AMSGrad to tackle this issue but they failed to achieve considerable improvement over existing methods.In our paper, we demonstrate that extreme learning rates can lead to poor performance.We provide new variants of Adam and AMSGrad, called AdaBound and AMSBound respectively, which employ dynamic bounds on learning rates to achieve a gradual and smooth transition from adaptive methods to SGD and give a theoretical proof of convergence.We further conduct experiments on various popular tasks and models, something that is often lacking in previous work.Experimental results show that the new variants can eliminate the generalization gap between adaptive methods and SGD and maintain higher learning speed early in training at the same time.Moreover, they can bring significant improvement over their prototypes, especially on complex deep networks.The implementation of the algorithm can be found at https://github.com/Luolc/AdaBound .",Novel variants of optimization methods that combine the benefits of both adaptive and non-adaptive methods. 906,GenDICE: Generalized Offline Estimation of Stationary Values,"An important problem that arises in reinforcement learning and Monte Carlo methods is estimating quantities defined by the stationary distribution of a Markov chain.In many real-world applications, access to the underlying transition operator is limited to a fixed set of data that has already been collected, without additional interaction with the environment being available.We show that consistent estimation remains possible in this scenario, and that effective estimation can still be achieved in important applications.Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions, derived from fundamental properties of the stationary distribution, and exploiting constraint reformulations based on variational divergence minimization.The resulting algorithm, GenDICE, is straightforward and effective.We prove the consistency of the method under general conditions, provide a detailed error analysis, and demonstrate strong empirical performance on benchmark tasks, including off-line PageRank and off-policy policy evaluation.","In this paper, we proposed a novel algorithm, GenDICE, for general stationary distribution correction estimation, which can handle both discounted and average off-policy evaluation on multiple behavior-agnostic samples."
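The mechanism behind AdaBound/AMSBound in entry 905 above is to clip the adaptive per-parameter step size between bounds that tighten toward a fixed SGD-like rate as training progresses. A minimal sketch is below; the exact bound schedule is an assumption, not the paper's formula.

```python
import numpy as np

# AdaBound-style clipping: the Adam-like per-parameter rate is clipped between
# a lower and an upper bound that both converge to alpha_final (SGD-like rate).
# m, v are assumed to be bias-corrected first/second moments; t starts at 1.
def adabound_step(param, m, v, t, base_lr=1e-3, alpha_final=0.1,
                  eps=1e-8, gamma=1e-3):
    lower = alpha_final * (1.0 - 1.0 / (gamma * t + 1.0))   # assumed schedule
    upper = alpha_final * (1.0 + 1.0 / (gamma * t))         # assumed schedule
    step = base_lr / (np.sqrt(v) + eps)      # adaptive per-parameter rate
    step = np.clip(step, lower, upper)       # dynamic bounds -> SGD in the limit
    return param - step * m
```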
907,The Blessing of Dimensionality: An Empirical Study of Generalization,"The power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive.Numerous rigorous attempts have been made to explain generalization, but available bounds are still quite loose, and analysis does not always lead to true understanding.The goal of this work is to make generalization more intuitive.Using visualization methods, we discuss the mystery of generalization, the geometry of loss landscapes, and how the curse of dimensionality causes optimizers to settle into minima that generalize well.",An intuitive empirical and visual exploration of the generalization properties of deep neural networks. 908,Symmetry and Systematicity,"We argue that symmetry is an important consideration in addressing the problem of systematicity and investigate two forms of symmetry relevant to symbolic processes.We implement this approach in terms of convolution and show that it can be used to achieve effective generalisation in three toy problems: rule learning, composition and grammar learning.",We use convolution to make neural networks behave more like symbolic systems. 909,Nonlinear Differential Equations with external forcing,Key equatorial climate phenomena such as QBO and ENSO have never been adequately explained as deterministic processes.This is in spite of recent research showing growing evidence of predictable behavior.This study applies the fundamental Laplace tidal equations with simplifying assumptions along the equator — i.e. no Coriolis force and a small angle approximation.The solutions to the partial differential equations are highly non-linear and related to Navier-Stokes; only search approaches can be used to fit them to the data.,Analytical Formulation of Equatorial Standing Wave Phenomena: Application to QBO and ENSO 910,OvA-INN: Continual Learning with Invertible Neural Networks,"In the field of Continual Learning, the objective is to learn several tasks one after the other without access to the data from previous tasks.Several solutions have been proposed to tackle this problem but they usually assume that the user knows which of the tasks to perform at test time on a particular sample, or rely on small samples from previous data and most of them suffer from a substantial drop in accuracy when updated with batches of only one class at a time.In this article, we propose a new method, OvA-INN, which is able to learn one class at a time and without storing any of the previous data.To achieve this, for each class, we train a specific Invertible Neural Network to output the zero vector for its class.At test time, we can predict the class of a sample by identifying which network outputs the vector with the smallest norm.With this method, we show that we can take advantage of pretrained models by stacking an invertible network on top of a feature extractor.This way, we are able to outperform state-of-the-art approaches that rely on feature learning for the Continual Learning of MNIST and CIFAR-100 datasets.In our experiments, we reach 72% accuracy on CIFAR-100 after training our model one class at a time.",We propose to train an Invertible Neural Network for each class to perform class-by-class Continual Learning.
911,An Ensemble of Retrieval-Based and Generation-Based Human-Computer Conversation Systems.,"Human-computer conversation systems have attracted much attention in Natural Language Processing.Conversation systems can be roughly divided into two categories: retrieval-based and generation-based systems.Retrieval systems search a large conversational repository for a user-issued utterance and return a reply that best matches the query.Generative approaches synthesize new replies.Both ways have certain advantages but suffer from their own disadvantages.We propose a novel ensemble of retrieval-based and generation-based conversation systems.The retrieved candidates, in addition to the original query, are fed to a reply generator via a neural network, so that the model is aware of more information.The generated reply together with the retrieved ones then participates in a re-ranking process to find the final reply to output.Experimental results show that such an ensemble system outperforms each single module by a large margin.",A novel ensemble of retrieval-based and generation-based methods for open-domain conversation systems. 912,Towards Understanding the Transferability of Deep Representations,"Deep neural networks trained on a wide range of datasets demonstrate impressive transferability.Deep features appear general in that they are applicable to many datasets and tasks.Such a property is in prevalent use in real-world applications.A neural network pretrained on large datasets, such as ImageNet, can significantly boost generalization and accelerate training if fine-tuned to a smaller target dataset.Despite its pervasiveness, little effort has been devoted to uncovering the reason for transferability in deep feature representations.This paper tries to understand transferability from the perspectives of improved generalization, optimization and the feasibility of transferability.We demonstrate that 1) Transferred models tend to find flatter minima, since their weight matrices stay close to the original flat region of pretrained parameters when transferred to a similar target dataset; 2) Transferred representations make the loss landscape more favorable with improved Lipschitzness, which accelerates and stabilizes training substantially.The improvement is largely attributable to the fact that the principal component of the gradient is suppressed in the pretrained parameters, thus stabilizing the magnitude of the gradient in back-propagation. 3) The feasibility of transferability is related to the similarity of both input and label.And a surprising discovery is that the feasibility is also impacted by the training stages in that the transferability first increases during training, and then declines.We further provide a theoretical analysis to verify our observations.","Understand transferability from the perspectives of improved generalization, optimization and the feasibility of transferability." 913,Functional vs.
parametric equivalence of ReLU networks,"We address the following question: How redundant is the parameterisation of ReLU networks?Specifically, we consider transformations of the weight space which leave the function implemented by the network intact.Two such transformations are known for feed-forward architectures: permutation of neurons within a layer, and positive scaling of all incoming weights of a neuron coupled with inverse scaling of its outgoing weights.In this work, we show for architectures with non-increasing widths that permutation and scaling are in fact the only function-preserving weight transformations.For any eligible architecture we give an explicit construction of a neural network such that any other network that implements the same function can be obtained from the original one by the application of permutations and rescaling.The proof relies on a geometric understanding of boundaries between linear regions of ReLU networks, and we hope the developed mathematical tools are of independent interest.",We prove that there exist ReLU networks whose parameters are almost uniquely determined by the function they implement. 914,Task-Based Top-Down Modulation Network for Multi-Task-Learning Applications,"A general problem that received considerable recent attention is how to perform multiple tasks in the same network, maximizing both efficiency and prediction accuracy.A popular approach consists of a multi-branch architecture on top of a shared backbone, jointly trained on a weighted sum of losses.However, in many cases, the shared representation results in non-optimal performance, mainly due to an interference between conflicting gradients of uncorrelated tasks.Recent approaches address this problem by a channel-wise modulation of the feature-maps along the shared backbone, with task-specific vectors, manually or dynamically tuned.Taking this approach a step further, we propose a novel architecture which modulates the recognition network channel-wise, as well as spatial-wise, with an efficient top-down image-dependent computation scheme.Our architecture uses no task-specific branches, nor task-specific modules.Instead, it uses a top-down modulation network that is shared between all of the tasks.We show the effectiveness of our scheme by achieving on-par or better results than alternative approaches on both correlated and uncorrelated sets of tasks.We also demonstrate our advantages in terms of model size, the addition of novel tasks and interpretability.Code will be released.",We propose a top-down modulation network for multi-task learning applications with several advantages over current schemes. 915,Dynamical Clustering of Time Series Data Using Multi-Decoder RNN Autoencoder,"Clustering algorithms have wide applications and play an important role in data analysis fields including time series data analysis.The performance of a clustering algorithm depends on the features extracted from the data.However, in time series analysis, conventional methods based on the signal shape have the problem of being unstable under phase-shift, amplitude and signal-length variations.In this paper, we propose a new clustering algorithm focused on the dynamical system aspect of the signal using a recurrent neural network and variational Bayes method.Our experiments show that our proposed algorithm is robust against the above variations and boosts the classification performance.",Novel time series data clustering algorithm based on dynamical system features.
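Entry 914 above modulates a recognition network both channel-wise and spatial-wise using a shared top-down network. A minimal sketch of that kind of gating, with placeholder gates and an assumed sigmoid non-linearity:

```python
import numpy as np

# Channel-wise plus spatial-wise feature modulation. 'features' is a (C, H, W)
# feature map; the gates would come from a shared top-down modulation network,
# replaced here by random placeholders.
def modulate(features, channel_gate, spatial_gate):
    # channel_gate: (C,), spatial_gate: (H, W); sigmoid keeps gates in (0, 1).
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    return features * sig(channel_gate)[:, None, None] * sig(spatial_gate)[None, :, :]

C, H, W = 8, 16, 16
out = modulate(np.random.randn(C, H, W), np.random.randn(C), np.random.randn(H, W))
print(out.shape)  # (8, 16, 16)
```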
916,Learning Explainable Models Using Attribution Priors,"Two important topics in deep learning both involve incorporating humans into the modeling process: Model priors transfer information from humans to a model by regularizing the model's parameters; Model attributions transfer information from a model to humans by explaining the model's behavior.Previous work has taken important steps to connect these topics through various forms of gradient regularization.We find, however, that existing methods that use attributions to align a model's behavior with human intuition are ineffective.We develop an efficient and theoretically grounded feature attribution method, expected gradients, and a novel framework, attribution priors, to enforce prior expectations about a model's behavior during training.We demonstrate that attribution priors are broadly applicable by instantiating them on three different types of data: image data, gene expression data, and health care data.Our experiments show that models trained with attribution priors are more intuitive and achieve better generalization performance than both equivalent baselines and existing methods to regularize model behavior.",A method for encouraging axiomatic feature attributions of a deep model to match human intuition. 917,Learning Recurrent Binary/Ternary Weights,"Recurrent neural networks have shown excellent performance in processing sequence data.However, they are both complex and memory intensive due to their recursive nature.These limitations make RNNs difficult to embed on mobile devices requiring real-time processes with limited hardware resources.To address the above issues, we introduce a method that can learn binary and ternary weights during the training phase to facilitate hardware implementations of RNNs.As a result, using this approach replaces all multiply-accumulate operations by simple accumulations, bringing significant benefits to custom hardware in terms of silicon area and power consumption.On the software side, we evaluate the performance of our method using long short-term memories and gated recurrent units on various sequential models including sequence classification and language modeling.We demonstrate that our method achieves competitive results on the aforementioned tasks while using binary/ternary weights during the runtime.On the hardware side, we present custom hardware for accelerating the recurrent computations of LSTMs with binary/ternary weights.Ultimately, we show that LSTMs with binary/ternary weights can achieve up to 12x memory saving and 10x inference speedup compared to the full-precision hardware implementation design.","We propose high-performance LSTMs with binary/ternary weights, that can greatly reduce implementation complexity" 918,"Unsupervised Few-shot Object Recognition by Integrating Adversarial, Self-supervision, and Deep Metric Learning of Latent Parts","This paper addresses unsupervised few-shot object recognition, where all training images are unlabeled and do not share classes with labeled support images for few-shot recognition in testing.We use a new GAN-like deep architecture aimed at unsupervised learning of an image representation which will encode latent object parts and thus generalize well to unseen classes in our few-shot recognition task.Our unsupervised training integrates adversarial, self-supervision, and deep metric learning.We make two contributions.First, we extend the vanilla GAN with a reconstruction loss to enforce that the discriminator captures the most
relevant characteristics of ""fake"" images generated from randomly sampled codes.Second, we compile a training set of triplet image examples for estimating the triplet loss in metric learning by using an image masking procedure suitably designed to identify latent object parts.Hence, metric learning ensures that the deep representations of images showing similar object classes which share some parts are closer than the representations of images which do not have common parts.Our results show that we significantly outperform the state of the art, as well as get similar performance to the common episodic training for fully-supervised few-shot learning on the Mini-Imagenet and Tiered-Imagenet datasets.","We address the problem of unsupervised few-shot object recognition, where all training images are unlabeled and do not share classes with test images." 919,Exploring the Space of Black-box Attacks on Deep Neural Networks,"Existing black-box attacks on deep neural networks have so far largely focused on transferability, where an adversarial instance generated for a locally trained model can “transfer” to attack other learning models.In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model’s class probabilities, which do not rely on transferability.We also propose strategies to decouple the number of queries required to generate each adversarial sample from the dimensionality of the input.An iterative variant of our attack achieves close to 100% adversarial success rates for both targeted and untargeted attacks on DNNs.We carry out extensive experiments for a thorough comparative evaluation of black-box attacks and show that the proposed Gradient Estimation attacks outperform all transferability-based black-box attacks we tested on both MNIST and CIFAR-10 datasets, achieving adversarial success rates similar to well-known, state-of-the-art white-box attacks.We also apply the Gradient Estimation attacks successfully against a real-world content moderation classifier hosted by Clarifai.Furthermore, we evaluate black-box attacks against state-of-the-art defenses.We show that the Gradient Estimation attacks are very effective even against these defenses.",Query-based black-box attacks on deep neural networks with adversarial success rates matching white-box attacks 920,Network Signatures from Image Representation of Adjacency Matrices: Deep/Transfer Learning for Subgraph Classification,"We propose a novel subgraph image representation for classification of network fragments with the target being their parent networks.The graph image representation is based on 2D image embeddings of adjacency matrices.We use this image representation in two modes.First, as the input to a machine learning algorithm.Second, as the input to a pure transfer learner.Our conclusions from multiple datasets are that 1. deep learning using structured image features performs the best compared to graph-kernel and classical feature-based methods; and, 2. pure transfer learning works effectively with minimum interference from the user and is robust against small data.",We convert subgraphs into structured images and classify them using 1. deep learning and 2. transfer learning (Caffe) and achieve stunning results.
921,Self-ensembling for visual domain adaptation,"This paper explores the use of self-ensembling for visual domain adaptation problems.Our technique is derived from the mean teacher variant of temporal ensembling, a technique that achieved state of the art results in the area of semi-supervised learning.We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness.Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge.In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.","Self-ensembling based algorithm for visual domain adaptation, state of the art results, won VisDA-2017 image classification domain adaptation challenge." 922,Generative Models of Visually Grounded Imagination,"It is easy for people to imagine what a man with pink hair looks like, even if they have never seen such a person before.We call the ability to create images of novel semantic concepts visually grounded imagination.In this paper, we show how we can modify variational auto-encoders to perform this task.Our method uses a novel training objective, and a novel product-of-experts inference network, which can handle partially specified concepts in a principled and efficient way.We also propose a set of easy-to-compute evaluation metrics that capture our intuitive notions of what it means to have good visual imagination, namely correctness, coverage, and compositionality.Finally, we perform a detailed comparison of our method with two existing joint image-attribute VAE methods by applying them to two datasets: the MNIST-with-attributes dataset, and the CelebA dataset.","A VAE-variant which can create diverse images corresponding to novel concrete or abstract ""concepts"" described using attribute vectors." 923,Combining Q-Learning and Search with Amortized Value Estimates,"We introduce ""Search with Amortized Value Estimates"", an approach for combining model-free Q-learning with model-based Monte-Carlo Tree Search.In SAVE, a learned prior over state-action values is used to guide MCTS, which estimates an improved set of state-action values.The new Q-estimates are then used in combination with real experience to update the prior.This effectively amortizes the value computation performed by MCTS, resulting in a cooperative relationship between model-free learning and model-based search.SAVE can be implemented on top of any Q-learning agent with access to a model, which we demonstrate by incorporating it into agents that perform challenging physical reasoning tasks and Atari.SAVE consistently achieves higher rewards with fewer training steps, and---in contrast to typical model-based search approaches---yields strong performance with very small search budgets.By combining real experience with information computed during search, SAVE demonstrates that it is possible to improve on both the performance of model-free learning and the computational cost of planning.","We propose a model-based method called ""Search with Amortized Value Estimates"" (SAVE) which leverages both real and planned experience by combining Q-learning with Monte-Carlo Tree Search, achieving strong performance with very small search budgets." 
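The self-ensembling in entry 921 above builds on the mean-teacher idea: teacher weights are an exponential moving average of the student's weights, and a consistency loss ties their predictions on target-domain inputs. A minimal sketch, omitting the paper's augmentation and confidence-thresholding modifications:

```python
import numpy as np

# Mean-teacher style self-ensembling sketch:
# teacher weights <- EMA of student weights; consistency loss on unlabeled data.
def ema_update(teacher_params, student_params, alpha=0.99):
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

def consistency_loss(student_probs, teacher_probs):
    # Mean squared difference between class-probability predictions.
    return np.mean((student_probs - teacher_probs) ** 2)
```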
924,Representing Entropy: A short proof of the equivalence between soft Q-learning and policy gradients,"Two main families of reinforcement learning algorithms, Q-learning and policy gradients, have recently been proven to be equivalent when using a softmax relaxation on one part, and an entropic regularization on the other.We relate this result to the well-known convex duality of Shannon entropy and the softmax function.Such a result is also known as the Donsker-Varadhan formula.This provides a short proof of the equivalence.We then interpret this duality further, and use ideas of convex analysis to prove a new policy inequality relative to soft Q-learning.",A short proof of the equivalence of soft Q-learning and policy gradients. 925,Policy Transfer with Strategy Optimization,"Computer simulation provides an automatic and safe way for training robotic control policies to achieve complex tasks such as locomotion.However, a policy trained in simulation usually does not transfer directly to the real hardware due to the differences between the two environments.Transfer learning using domain randomization is a promising approach, but it usually assumes that the target environment is close to the distribution of the training environments, thus relying heavily on accurate system identification.In this paper, we present a different approach that leverages domain randomization for transferring control policies to unknown environments.The key idea is that, instead of learning a single policy in the simulation, we simultaneously learn a family of policies that exhibit different behaviors.When tested in the target environment, we directly search for the best policy in the family based on the task performance, without the need to identify the dynamic parameters.We evaluate our method on five simulated robotic control problems with different discrepancies in the training and testing environment and demonstrate that our method can overcome larger modeling errors compared to training a robust policy or an adaptive policy.","We propose a policy transfer algorithm that can overcome large and challenging discrepancies in the system dynamics such as latency, actuator modeling error, etc." 926,vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations,We propose vq-wav2vec to learn discrete representations of audio segments through a wav2vec-style self-supervised context prediction task.The algorithm uses either a gumbel softmax or online k-means clustering to quantize the dense representations.Discretization enables the direct application of algorithms from the NLP community which require discrete inputs.Experiments show that BERT pre-training achieves a new state of the art on TIMIT phoneme classification and WSJ speech recognition.,Learn how to quantize speech signal and apply algorithms requiring discrete inputs to audio data such as BERT.
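Entry 926 above quantizes dense speech representations with either a gumbel softmax or online k-means. A minimal gumbel-softmax codebook selection might look like the following; the codebook size and temperature are placeholders:

```python
import numpy as np

# Gumbel-softmax codebook selection for quantization (sketch).
def gumbel_softmax_quantize(logits, codebook, temperature=1.0, rng=np.random):
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-10) + 1e-10)
    y = (logits + gumbel) / temperature
    probs = np.exp(y - y.max())
    probs /= probs.sum()
    hard_index = np.argmax(probs)       # discrete code used in the forward pass
    return codebook[hard_index], probs  # probs give the differentiable relaxation

codebook = np.random.randn(320, 256)    # 320 codes of dimension 256 (placeholder)
code, probs = gumbel_softmax_quantize(np.random.randn(320), codebook)
```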
927,Hierarchical Deep Reinforcement Learning Agent with Counter Self-play on Competitive Games,"Deep Reinforcement Learning algorithms lead to agents that can solve difficult decision making problems in complex environments.However, many difficult multi-agent competitive games, especially real-time strategy games, are still considered beyond the capability of current deep reinforcement learning algorithms, although there has been a recent effort to change this.Moreover, when the opponents in a competitive game are suboptimal, the current self-play algorithms are often unable to generalize their strategies to opponents that play strategies vastly different from their own.This suggests that a learning algorithm that is beyond conventional self-play is necessary.We develop Hierarchical Agent with Self-play, a learning approach for obtaining hierarchically structured policies that can achieve higher performance than conventional self-play on competitive games through the use of a diverse pool of sub-policies obtained from Counter Self-Play.We demonstrate that the ensemble policy generated by HASP can achieve better performance while facing unseen opponents that use sub-optimal policies.On a motivating iterated Rock-Paper-Scissor game and a partially observable real-time strategic game, we are led to the conclusion that HASP can perform better than conventional self-play as well as achieve a 77% win rate against FloBot, an open-source agent which has ranked at position number 2 on the online leaderboards.","We develop Hierarchical Agent with Self-play (HASP), a learning approach for obtaining hierarchically structured policies that can achieve higher performance than conventional self-play on competitive real-time strategic games." 928,Adaptive Input Representations for Neural Language Modeling,"We introduce adaptive input representations for neural language modeling which extend the adaptive softmax of Grave et al. to input representations of variable capacity.There are several choices on how to factorize the input and output layers, and whether to model words, characters or sub-word units.We perform a systematic comparison of popular choices for a self-attentional architecture.Our experiments show that models equipped with adaptive embeddings are more than twice as fast to train as the popular character-input CNN while having a lower number of parameters.On the WikiText-103 benchmark we achieve 18.7 perplexity, an improvement of 10.5 perplexity compared to the previously best published result, and on the Billion Word benchmark, we achieve 23.02 perplexity.","Variable capacity input word embeddings and SOTA on WikiText-103, Billion Word benchmarks."
929,Deep Multivariate Mixture of Gaussians for Object Detection under Occlusion,"In this paper, we consider the problem of detecting objects under occlusion.Most object detectors formulate bounding box regression as a unimodal task.However, we observe that the bounding box borders of an occluded object can have multiple plausible configurations.Also, the occluded bounding box borders have correlations with visible ones.Motivated by these two observations, we propose a deep multivariate mixture of Gaussians model for bounding box regression under occlusion.The mixture components potentially learn different configurations of an occluded part, and the covariances between variates help to learn the relationship between the occluded parts and the visible ones.Quantitatively, our model improves the AP of the baselines by 3.9% and 1.2% on CrowdHuman and MS-COCO respectively with almost no computational or memory overhead.Qualitatively, our model enjoys explainability since we can interpret the resulting bounding boxes via the covariance matrices and the mixture components.",a deep multivariate mixture of Gaussians model for bounding box regression under occlusion 930,Adversarial Training with Voronoi Constraints,"Adversarial examples are a pervasive phenomenon of machine learning models where seemingly imperceptible perturbations to the input lead to misclassifications for otherwise statistically accurate models. Adversarial training, one of the most successful empirical defenses to adversarial examples, refers to training on adversarial examples generated within a geometric constraint set.The most commonly used geometric constraint is a ball of radius epsilon in some Lp norm.We introduce adversarial training with Voronoi constraints, which replaces the norm-ball constraint with the Voronoi cell for each point in the training set.We show that adversarial training with Voronoi constraints produces robust models which significantly improve over the state-of-the-art on MNIST and are competitive on CIFAR-10.",We replace the Lp ball constraint with the Voronoi cells of the training data to produce more robust models. 931,Generalized Natural Language Grounded Navigation via Environment-agnostic Multitask Learning,"Recent research efforts enable study for natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog.However, existing methods tend to overfit training data in seen environments and fail to generalize well in previously unseen environments.In order to close the gap between seen and unseen environments, we aim at learning a generalizable navigation model from two novel perspectives: we introduce a multitask navigation model that can be seamlessly trained on both Vision-Language Navigation and Navigation from Dialog History tasks, which benefits from richer natural language guidance and effectively transfers knowledge across tasks; we propose to learn environment-agnostic representations for navigation policy that are invariant among environments, thus generalizing better on unseen environments.Extensive experiments show that our environment-agnostic multitask navigation model significantly reduces the performance gap between seen and unseen environments and outperforms the baselines on unseen environments by 16% on VLN and 120% on NDH, establishing the new state of the art for the NDH task.",We propose to learn a more generalized policy for natural language grounded navigation tasks via environment-agnostic multitask learning.
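Entry 929 above models box regression with a deep multivariate mixture of Gaussians. The sketch below shows the negative log-likelihood of such a mixture, simplified to diagonal covariances, whereas the paper models the full covariances between box borders:

```python
import numpy as np

# Negative log-likelihood of a K-component Gaussian mixture over box offsets,
# simplified here to diagonal covariances (an assumption for brevity).
def mixture_nll(y, weights, means, log_vars):
    # y: (D,) target; weights: (K,); means, log_vars: (K, D)
    var = np.exp(log_vars)
    log_comp = -0.5 * np.sum((y - means) ** 2 / var + log_vars + np.log(2 * np.pi), axis=1)
    log_mix = np.log(np.sum(weights * np.exp(log_comp)) + 1e-12)
    return -log_mix
```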
932,Wasserstein Barycenter Model Ensembling,"In this paper we propose to perform model ensembling in a multiclass or a multilabel learning setting using Wasserstein barycenters.Optimal transport metrics, such as the Wasserstein distance, allow incorporating semantic side information such as word embeddings.Using Wasserstein barycenters to find the consensus between models allows us to balance confidence and semantics in finding the agreement between the models.We show applications of Wasserstein ensembling in attribute-based classification, multilabel learning and image caption generation.These results show that Wasserstein ensembling is a viable alternative to the basic geometric or arithmetic mean ensembling.",we propose to use Wasserstein barycenters for semantic model ensembling 933,Unbiased scalable softmax optimization,"Recent neural network and language models have begun to rely on softmax distributions with an extremely large number of categories.In this context calculating the softmax normalizing constant is prohibitively expensive.This has spurred a growing literature of efficiently computable but biased estimates of the softmax.In this paper we present the first two unbiased algorithms for maximizing the softmax likelihood whose work per iteration is independent of the number of classes and datapoints.We compare our unbiased methods' empirical performance to the state-of-the-art on seven real world datasets, where they comprehensively outperform all competitors.",Propose first methods for exactly optimizing the softmax distribution using stochastic gradient with runtime independent of the number of classes or datapoints. 934,MCTSBug: Generating Adversarial Text Sequences via Monte Carlo Tree Search and Homoglyph Attack,"Crafting adversarial examples on discrete inputs like text sequences is fundamentally different from generating such examples for continuous inputs like images.This paper tries to answer the question: under a black-box setting, can we create adversarial examples automatically to effectively fool deep learning classifiers on texts by making imperceptible changes?Our answer is a firm yes.Previous efforts mostly relied on gradient evidence, and they are less effective either because automatically finding the nearest-neighbor word is difficult or because they rely heavily on hand-crafted linguistic rules.We, instead, use Monte Carlo tree search for finding the most important few words to perturb and perform a homoglyph attack by replacing one character in each selected word with a symbol of identical shape. Our novel algorithm, which we call MCTSBug, is black-box and extremely effective at the same time.Our experimental results indicate that MCTSBug can fool deep learning classifiers at success rates of 95% on seven large-scale benchmark datasets, by perturbing only a few characters. Surprisingly, MCTSBug, without relying on gradient information at all, is more effective than the gradient-based white-box baseline.Thanks to the nature of the homoglyph attack, the generated adversarial perturbations are almost imperceptible to human eyes.",Use Monte Carlo Tree Search and Homoglyphs to generate indistinguishable adversarial samples on text data 935,Learning to Infer Graphics Programs from Hand-Drawn Images,"We introduce a model that learns to convert simple hand drawings into graphics programs written in a subset of LaTeX.The model combines techniques from deep learning and program synthesis.
We learn a convolutional neural network that proposes plausible drawing primitives that explain an image.These drawing primitives are like a trace of the set of primitive commands issued by a graphics program.We learn a model that uses program synthesis techniques to recover a graphics program from that trace.These programs have constructs like variable bindings, iterative loops, or simple kinds of conditionals.With a graphics program in hand, we can correct errors made by the deep network and extrapolate drawings. Taken together these results are a step towards agents that induce useful, human-readable programs from perceptual input.",Learn to convert a hand drawn sketch into a high-level program 936,Calibration of neural network logit vectors to combat adversarial attacks,"Adversarial examples remain an issue for contemporary neural networks.This paper draws on Background Check, a technique in model calibration, to assist two-class neural networks in detecting adversarial examples, using the one dimensional difference between logit values as the underlying measure.This method interestingly tends to achieve the highest average recall on image sets that are generated with large perturbation vectors, which is unlike the existing literature on adversarial attacks.The proposed method does not need knowledge of the attack parameters or methods at training time, unlike a great deal of the literature that uses deep learning based methods to detect adversarial examples, such as Metzen et al., imbuing the proposed method with additional flexibility.",This paper uses principles from the field of calibration in machine learning on the logits of a neural network to defend against adversarial attacks 937,Multitask learning of Multilingual Sentence Representations,"We present a novel multi-task training approach to learning multilingual distributed representations of text.Our system learns word and sentence embeddings jointly by training a multilingual skip-gram model together with a cross-lingual sentence similarity model.We construct sentence embeddings by processing word embeddings with an LSTM and by taking an average of the outputs.Our architecture can transparently use both monolingual and sentence aligned bilingual corpora to learn multilingual embeddings, thus covering a vocabulary significantly larger than the vocabulary of the bilingual corpora alone.Our model shows competitive performance in a standard cross-lingual document classification task.We also show the effectiveness of our method in a low-resource scenario.",We jointly train a multilingual skip-gram model and a cross-lingual sentence similarity model to learn high quality multilingual text embeddings that perform well in the low resource scenario. 
938,Generative model based on minimizing exact empirical Wasserstein distance,"Generative Adversarial Networks are a very powerful framework for generative modeling.However, they are often hard to train, and learning of GANs often becomes unstable.Wasserstein GAN is a promising framework to deal with the instability problem as it has a good convergence property.One drawback of the WGAN is that it evaluates the Wasserstein distance in the dual domain, which requires some approximation, so that it may fail to optimize the true Wasserstein distance.In this paper, we propose evaluating the exact empirical optimal transport cost efficiently in the primal domain and performing gradient descent with respect to its derivative to train the generator network.Experiments on the MNIST dataset show that our method converges significantly more stably, and achieves the lowest Wasserstein distance among the WGAN variants at the cost of some sharpness of generated images.Experiments on the 8-Gaussian toy dataset show that better gradients for the generator are obtained in our method.In addition, the proposed method enables more flexible generative modeling than WGAN.",We have proposed a flexible generative model that learns stably by directly minimizing exact empirical Wasserstein distance. 939,NAS evaluation is frustratingly hard,"Neural Architecture Search is an exciting new field which promises to be as much of a game-changer as Convolutional Neural Networks were in 2012.Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue.While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all.As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others.Our first contribution is a benchmark of 8 NAS methods on 5 datasets.To overcome the hurdle of comparing methods with different search spaces, we propose using a method’s relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols.Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline.We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline.These experiments highlight that: the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures; the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings; the hand-designed macro-structure is more important than the searched micro-structure; and the depth-gap is a real phenomenon, evidenced by the change in rankings between 8 and 20 cell architectures.To conclude, we suggest best practices that we hope will prove useful for the community and help mitigate current NAS pitfalls, e.g. difficulties in reproducibility and comparison of search methods.The code used is available at https://github.com/antoyang/NAS-Benchmark.","A study of how different components in the NAS pipeline contribute to the final accuracy. Also, a benchmark of 8 methods on 5 datasets."
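The primal-domain computation in entry 938 above amounts to an optimal transport problem between two equal-sized empirical samples, which reduces to a linear assignment problem. A sketch using scipy, with a squared Euclidean cost assumed:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Exact empirical optimal transport cost between two equal-sized minibatches,
# computed in the primal via a linear assignment problem.
def empirical_ot_cost(real, fake):
    # real, fake: (n, d) samples; cost is squared Euclidean (an assumption here).
    cost = np.sum((real[:, None, :] - fake[None, :, :]) ** 2, axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

print(empirical_ot_cost(np.random.randn(64, 2), np.random.randn(64, 2)))
```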
940,NLProlog: Reasoning with Weak Unification for Natural Language Question Answering,"Symbolic logic allows practitioners to build systems that perform rule-based reasoning which is interpretable and which can easily be augmented with prior knowledge.However, such systems are traditionally difficult to apply to problems involving natural language due to the large linguistic variability of language.Currently, most work in natural language processing focuses on neural networks which learn distributed representations of words and their composition, thereby performing well in the presence of large linguistic variability.We propose to reap the benefits of both approaches by applying a combination of neural networks and logic programming to natural language question answering.We propose to employ an external, non-differentiable Prolog prover which utilizes a similarity function over pretrained sentence encoders.We fine-tune these representations via Evolution Strategies with the goal of multi-hop reasoning on natural language. This allows us to create a system that can apply rule-based reasoning to natural language and induce domain-specific natural language rules from training data.We evaluate the proposed system on two different question answering tasks, showing that it complements two very strong baselines – BIDAF and FASTQA – and outperforms both when used in an ensemble.","We introduce NLProlog, a system that performs rule-based reasoning on natural language by leveraging pretrained sentence embeddings and fine-tuning with Evolution Strategies, and apply it to two multi-hop Question Answering tasks." 941,Learning from Samples of Variable Quality,"Training labels are expensive to obtain and may be of varying quality, as some may be from trusted expert labelers while others might be from heuristics or other sources of weak supervision such as crowd-sourcing.This creates a fundamental quality-versus-quantity trade-off in the learning process. Do we learn from the small amount of high-quality data or the potentially large amount of weakly-labeled data?We argue that if the learner could somehow know and take the label-quality into account, we could get the best of both worlds. To this end, we introduce “fidelity-weighted learning”, a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data.FWL modulates the parameter updates to a student network, trained on the task we care about on a per-sample basis according to the posterior confidence of its label-quality estimated by a teacher, who has access to limited samples with high-quality labels.","We propose Fidelity-weighted Learning, a semi-supervised teacher-student approach for training neural networks using weakly-labeled data." 
942,Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks,"This paper is focused on investigating and demystifying an intriguing robustness phenomenon in over-parameterized neural network training.In particular we provide empirical and theoretical evidence that first order methods such as gradient descent are provably robust to noise/corruption on a constant fraction of the labels despite over-parameterization under a rich dataset model.Specifically:i) First, we show that in the first few iterations where the updates are still in the vicinity of the initialization these algorithms only fit the correct labels, essentially ignoring the noisy labels.ii) Secondly, we prove that to start to overfit to the noisy labels these algorithms must stray rather far from the initial model, which can only occur after many more iterations.Together, these show that gradient descent with early stopping is provably robust to label noise and shed light on the empirical robustness of deep networks as well as commonly adopted early-stopping heuristics.",We prove that gradient descent is robust to label corruption despite over-parameterization under a rich dataset model. 943,Double Viterbi: Weight Encoding for High Compression Ratio and Fast On-Chip Reconstruction for Deep Neural Network,"Weight pruning has been introduced as an efficient model compression technique.Even though pruning removes a significant amount of weights in a network, the memory requirement reduction was limited since conventional sparse matrix formats require a significant amount of memory to store index-related information.Moreover, computations associated with such sparse matrix formats are slow because the sequential sparse matrix decoding process does not utilize highly parallel computing systems efficiently.As an attempt to compress index information while keeping the decoding process parallelizable, Viterbi-based pruning was suggested.Decoding non-zero weights, however, is still sequential in Viterbi-based pruning.In this paper, we propose a new sparse matrix format in order to enable a highly parallel decoding process of the entire sparse matrix.The proposed sparse matrix is constructed by combining pruning and weight quantization.For the latest RNN models on the PTB and WikiText-2 corpus, LSTM parameter storage requirement is compressed 19x using the proposed sparse matrix format compared to the baseline model.Compressed weights and indices can be reconstructed into a dense matrix fast using Viterbi encoders.Simulation results show that the proposed scheme can feed parameters to processing elements 20% to 106% faster than the case where the dense matrix values directly come from DRAM.",We present a new weight encoding scheme which enables high compression ratio and fast sparse-to-dense matrix conversion.
944,Generative Inpainting Network Applications on Seismic Image Compression,"The use of deep learning models as priors for compressive sensing tasks presents new potential for inexpensive seismic data acquisition.An appropriately designed Wasserstein generative adversarial network, based on a generative adversarial network architecture trained on several historical surveys, is capable of learning the statistical properties of the seismic wavelets.Validation and performance testing of compressive sensing proceed in three steps.First, the existence of a sparse representation with different compression rates for seismic surveys is studied.Then, non-uniform samplings are studied, using the proposed methodology.Finally, a recommendation for a non-uniform seismic survey grid, based on the evaluation of reconstructed seismic images and metrics, is proposed.The primary goal of the proposed deep learning model is to provide the foundations of an optimal design for seismic acquisition, with less loss in imaging quality.Along these lines, a compressive sensing design of a non-uniform grid over an asset in the Gulf of Mexico, versus a traditional seismic survey grid which collects data uniformly at every few feet, is suggested, leveraging the proposed method.","Improved a GAN-based pixel inpainting network for compressed seismic image recovery and proposed a non-uniform sampling survey recommendation, which can be easily applied to medical and other domains for compressive sensing techniques." 945,Simplified Action Decoder for Deep Multi-Agent Reinforcement Learning,"In recent years we have seen fast progress on a number of benchmark problems in AI, with modern methods achieving near or super human performance in Go, Poker and Dota.One common aspect of all of these challenges is that they are by design adversarial or, technically speaking, zero-sum.In contrast to these settings, success in the real world commonly requires humans to collaborate and communicate with others, in settings that are, at least partially, cooperative.In the last year, the card game Hanabi has been established as a new benchmark environment for AI to fill this gap.In particular, Hanabi is interesting to humans since it is entirely focused on theory of mind, i.e. the ability to effectively reason over the intentions, beliefs and point of view of other agents when observing their actions.Learning to be informative when observed by others is an interesting challenge for Reinforcement Learning: Fundamentally, RL requires agents to explore in order to discover good policies.However, when done naively, this randomness will inherently make their actions less informative to others during training.We present a new deep multi-agent RL method, the Simplified Action Decoder, which resolves this contradiction by exploiting the centralized training phase.During training, SAD allows agents to observe not only the chosen actions but also the greedy actions of their teammates.By combining this simple intuition with an auxiliary task for state prediction and best practices for multi-agent learning, SAD establishes a new state of the art for 2-5 players on the self-play part of the Hanabi challenge.","We develop Simplified Action Decoder, a simple MARL algorithm that beats previous SOTA on Hanabi by a big margin across 2- to 5-player games."
946,Chargrid-OCR: End-to-end trainable Optical Character Recognition through Semantic Segmentation and Object Detection,"We present an end-to-end trainable approach for optical character recognition on printed documents.It is based on predicting a two-dimensional character grid representation of a document image as a semantic segmentation task.To identify individual character instances from the chargrid, we regard characters as objects and use object detection techniques from computer vision.We demonstrate experimentally that our method outperforms previous state-of-the-art approaches in accuracy while being easily parallelizable on GPU, as well as easier to train.","End-to-end trainable Optical Character Recognition on printed documents; we achieve state-of-the-art results, beating Tesseract4 on benchmark datasets both in terms of accuracy and runtime, using a purely computer vision based approach." 947,Training Deep Networks with Stochastic Gradient Normalized by Layerwise Adaptive Second Moments,"We propose NovoGrad, an adaptive stochastic gradient descent method with layer-wise gradient normalization and decoupled weight decay.In our experiments on neural networks for image classification, speech recognition, machine translation, and language modeling, it performs on par or better than well tuned SGD with momentum and Adam/AdamW.Additionally, NovoGrad is robust to the choice of learning rate and weight initialization, works well in a large batch setting, and has a two times smaller memory footprint than Adam.",NovoGrad - an adaptive SGD method with layer-wise gradient normalization and decoupled weight decay. 948,DDSP: Differentiable Digital Signal Processing,"Most generative models of audio directly generate samples in one of two domains: time or frequency.While sufficient to express any signal, these representations are inefficient, as they do not utilize existing knowledge of how sound is generated and perceived.A third approach successfully incorporates strong domain knowledge of signal processing and perception, but has been less actively researched due to limited expressivity and difficulty integrating with modern auto-differentiation-based machine learning methods.In this paper, we introduce the Differentiable Digital Signal Processing library, which enables direct integration of classic signal processing elements with deep learning methods.Focusing on audio synthesis, we achieve high-fidelity generation without the need for large autoregressive models or adversarial losses, demonstrating that DDSP enables utilizing strong inductive biases without losing the expressive power of neural networks.Further, we show that combining interpretable modules permits manipulation of each separate model component, with applications such as independent control of pitch and loudness, realistic extrapolation to pitches not seen during training, blind dereverberation of room acoustics, transfer of extracted room acoustics to new environments, and transformation of timbre between disparate sources.In short, DDSP enables an interpretable and modular approach to generative modeling, without sacrificing the benefits of deep learning.The library is available at https://github.com/magenta/ddsp and we encourage further contributions from the community and domain experts.",Better audio synthesis by combining interpretable DSP with end-to-end learning.
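Entry 947 above (NovoGrad) normalizes gradients layer-wise and decouples weight decay from the gradient. A per-layer update in that spirit is sketched below; the exact moment formulas are an assumption based only on the abstract:

```python
import numpy as np

# NovoGrad-style update sketch: the second moment is a per-layer scalar built
# from the gradient norm, and weight decay is decoupled from the gradient.
def novograd_step(w, grad, m, v, lr=0.01, beta1=0.95, beta2=0.98,
                  weight_decay=1e-3, eps=1e-8):
    v = beta2 * v + (1.0 - beta2) * np.sum(grad ** 2)       # layer-wise scalar
    m = beta1 * m + grad / (np.sqrt(v) + eps) + weight_decay * w
    w = w - lr * m
    return w, m, v
```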
949,Spectral Convolutional Networks on Hierarchical Multigraphs,"Spectral Graph Convolutional Networks are a generalization of convolutional networks to learning on graph-structured data.Applications of spectral GCNs have been successful, but limited to a few problems where the graph is fixed, such as shape correspondence and node classification.In this work, we address this limitation by revisiting a particular family of spectral graph networks, Chebyshev GCNs, showing its efficacy in solving graph classification tasks with a variable graph structure and size.Current GCNs also restrict graphs to have at most one edge between any pair of nodes.To address this, we propose a novel multigraph network that learns from multi-relational graphs.We explicitly model different types of edges: annotated edges, learned edges with abstract meaning, and hierarchical edges.We also experiment with different ways to fuse the representations extracted from different edge types.While the single-edge restriction is sometimes implied by a dataset, we relax it for all kinds of datasets.We achieve state-of-the-art results on a variety of chemical, social, and vision graph classification benchmarks.","A novel approach to graph classification based on spectral graph convolutional networks and its extension to multigraphs with learnable relations and hierarchical structure. We show state-of-the-art results on chemical, social and image datasets." 950,Clean-Label Backdoor Attacks,"Deep neural networks have been recently demonstrated to be vulnerable to backdoor attacks.Specifically, by altering a small set of training examples, an adversary is able to install a backdoor that can be used during inference to fully control the model’s behavior.While the attack is very powerful, it crucially relies on the adversary being able to introduce arbitrary, often clearly mislabeled, inputs to the training set and can thus be detected even by fairly rudimentary data filtering.In this paper, we introduce a new approach to executing backdoor attacks, utilizing adversarial examples and GAN-generated data.The key feature is that the resulting poisoned inputs appear to be consistent with their label and thus seem benign even upon human inspection.",We show how to successfully perform backdoor attacks without changing training labels.
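The Chebyshev GCNs revisited in entry 949 above filter node features with polynomials of the rescaled graph Laplacian via the recurrence T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x). A minimal single-filter sketch:

```python
import numpy as np

# Chebyshev spectral graph filtering sketch. L_scaled is the rescaled Laplacian
# (2L/lambda_max - I); X holds node features; theta are K filter coefficients
# (one scalar per order here, K >= 2).
def cheb_filter(L_scaled, X, theta):
    T_prev, T_curr = X, L_scaled @ X
    out = theta[0] * T_prev + theta[1] * T_curr
    for k in range(2, len(theta)):
        T_next = 2.0 * (L_scaled @ T_curr) - T_prev      # Chebyshev recurrence
        out += theta[k] * T_next
        T_prev, T_curr = T_curr, T_next
    return out
```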
951,On the importance of single directions for generalization,"Despite their ability to memorize large datasets, deep neural networks often achieve good generalization performance.However, the differences between the learned solutions of networks which generalize and those which do not remain unclear.Additionally, the tuning properties of single directions have been highlighted, but their importance has not been evaluated.Here, we connect these lines of inquiry to demonstrate that a network’s reliance on single directions is a good predictor of its generalization performance, across networks trained on datasets with different fractions of corrupted labels, across ensembles of networks trained on datasets with unmodified labels, across different hyper-parameters, and over the course of training.While dropout only regularizes this quantity up to a point, batch normalization implicitly discourages single direction reliance, in part by decreasing the class selectivity of individual units.Finally, we find that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance.","We find that deep networks which generalize poorly are more reliant on single directions than those that generalize well, and evaluate the impact of dropout and batch normalization, as well as class selectivity on single direction reliance." 952,Learning undirected models via query training,"Typical amortized inference in variational autoencoders is specialized for a single probabilistic query.Here we propose an inference network architecture that generalizes to unseen probabilistic queries.Instead of an encoder-decoder pair, we can train a single inference network directly from data, using a cost function that is stochastic not only over samples, but also over queries.We can use this network to perform the same inference tasks as we would in an undirected graphical model with hidden variables, without having to deal with the intractable partition function.The results can be mapped to the learning of an actual undirected model, which is a notoriously hard problem.Our network also marginalizes nuisance variables as required. We show that our approach generalizes to unseen probabilistic queries on unseen test data as well, providing fast and flexible inference.Experiments show that this approach outperforms or matches PCD and AdVIL on 9 benchmark datasets.","Instead of learning the parameters of a graphical model from data, learn an inference network that can answer the same probabilistic queries."
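A simple way to probe the single-direction reliance studied in entry 951 above is to clamp one unit's activation to zero at a time and record the accuracy drop. In the sketch below, hidden and readout are hypothetical stand-ins for a trained network's layers:

```python
import numpy as np

# Single-direction ablation sketch: zero out one hidden unit at a time and
# record the drop in accuracy relative to the unablated network.
def single_direction_reliance(hidden, readout, X, y):
    H = hidden(X)                              # (n_samples, n_units) activations
    base_acc = np.mean(readout(H).argmax(1) == y)
    drops = []
    for unit in range(H.shape[1]):
        H_ablate = H.copy()
        H_ablate[:, unit] = 0.0                # clamp a single direction
        drops.append(base_acc - np.mean(readout(H_ablate).argmax(1) == y))
    return np.array(drops)
```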
953,Newton Residual Learning,"A plethora of computer vision tasks, such as optical flow and image alignment, can be formulated as non-linear optimization problems.Before the resurgence of deep learning, the dominant family for solving such optimization problems was numerical optimization, e.g., Gauss-Newton.More recently, several attempts were made to formulate learnable GN steps as cascade regression architectures.In this paper, we investigate recent machine learning architectures, such as deep neural networks with residual connections, under the above perspective.To this end, we first demonstrate how residual blocks can be viewed as GN steps.Then, we go a step further and propose a new residual block that is reminiscent of Newton's method in numerical optimization and exhibits faster convergence.We thoroughly evaluate the proposed Newton-ResNet by conducting experiments on image and speech classification and image generation, using 4 datasets.All the experiments demonstrate that Newton-ResNet requires fewer parameters to achieve the same performance as the original ResNet.",We demonstrate how residual blocks can be viewed as Gauss-Newton steps; we propose a new residual block that exploits second order information. 954,Domain-independent Plan Intervention When Users Unwittingly Facilitate Attacks,"In competitive situations, agents may take actions to achieve their goals that unwittingly facilitate an opponent's goals.We consider a domain where three agents operate: a user, an attacker agent and an observer agent.The user and the attacker compete to achieve different goals.When there is a disparity in the domain knowledge the user and the attacker possess, the attacker may use the user's unfamiliarity with the domain to its advantage and further its own goal.In this situation, the observer, whose goal is to support the user, may need to intervene, and this intervention needs to occur online, on-time and be accurate.We formalize the online plan intervention problem and propose a solution that uses a decision tree classifier to identify intervention points in situations where agents unwittingly facilitate an opponent's goal.We trained a classifier using domain-independent features extracted from the observer's decision space to evaluate the “criticality” of the current state.The trained model is then used in an online setting on IPC benchmarks to identify observations that warrant intervention.Our contributions lay a foundation for further work in the area of deciding when to intervene.",We introduce a machine learning model that uses domain-independent features to estimate the criticality of the current state to cause a known undesirable state. 
955,Counterfactuals uncover the modular structure of deep generative models,"Deep generative models can emulate the perceptual properties of complex image datasets, providing a latent representation of the data.However, manipulating such a representation to perform meaningful and controllable transformations in the data space remains challenging without some form of supervision.While previous work has focused on exploiting the statistical independence of latent factors, we argue that such a requirement can be advantageously relaxed and propose instead a non-statistical framework that relies on identifying a modular organization of the network, based on counterfactual manipulations.Our experiments support that modularity between groups of channels is achieved to a certain degree on a variety of generative models.This allowed the design of targeted interventions on complex image datasets, opening the way to applications such as computationally efficient style transfer and the automated assessment of robustness to contextual changes in pattern recognition systems.",We develop a framework to find modular internal representations in generative models and manipulate them to generate counterfactual examples. 956,Differentiable Hebbian Plasticity for Continual Learning,"Catastrophic forgetting poses a grand challenge for continual learning systems, which prevents neural networks from protecting old knowledge while learning new tasks sequentially.We propose a Differentiable Hebbian Plasticity Softmax layer which adds a fast learning plastic component to the slow weights of the softmax output layer.The DHP Softmax behaves as a compressed episodic memory that reactivates existing memory traces, while creating new ones.We demonstrate the flexibility of our model by combining it with existing well-known consolidation methods to prevent catastrophic forgetting.We evaluate our approach on the Permuted MNIST and Split MNIST benchmarks, and introduce Imbalanced Permuted MNIST — a dataset that combines the challenges of class imbalance and concept drift.Our model requires no additional hyperparameters and outperforms comparable baselines by reducing forgetting.",Hebbian plastic weights can behave as a compressed episodic memory storage in neural networks, improving their ability to alleviate catastrophic forgetting in continual learning. 957,Checking Functional Modularity in DNN By Biclustering Task-specific Hidden Neurons,"While real brain networks exhibit functional modularity, we investigate whether functional modularity also exists in Deep Neural Networks trained through back-propagation.Under the hypothesis that DNNs are also organized in task-specific modules, in this paper we seek to dissect a hidden layer into disjoint groups of task-specific hidden neurons with the help of relatively well-studied neuron attribution methods.By saying task-specific, we mean the hidden neurons in the same group are functionally related for predicting a set of similar data samples, i.e. 
samples with similar feature patterns.We argue that such groups of neurons, which we call Functional Modules, can serve as the basic functional unit in a DNN.We propose a preliminary method to identify Functional Modules via biclustering attribution scores of hidden neurons.We find that, first, unsurprisingly, the functional neurons are highly sparse, i.e., only a small subset of neurons are important for predicting a small subset of data samples and, while we do not use any label supervision, samples corresponding to the same group show surprisingly coherent feature patterns.We also show that these Functional Modules perform a critical role in discriminating data samples through an ablation experiment.","We develop an approach to parcellate a hidden layer in DNN into functionally related groups, by applying spectral coclustering on the attribution scores of hidden neurons." 958,System Demo for Transfer Learning from Vision to Language using Domain Specific CNN Accelerator for On-Device NLP Applications,"Power-efficient CNN Domain Specific Accelerator chips are currently available for wide use in mobile devices.These chips are mainly used in computer vision applications.However, the recent work of the Super Characters method for text classification and sentiment analysis tasks using two-dimensional CNN models has also achieved state-of-the-art results through the method of transfer learning from vision to text.In this paper, we implemented the text classification and sentiment analysis applications on mobile devices using CNN-DSA chips.Compact network representations using one-bit and three-bit precision for coefficients and five-bit precision for activations are used in the CNN-DSA chip with power consumption less than 300mW.For edge devices under memory and compute constraints, the network is further compressed by approximating the external Fully Connected layers within the CNN-DSA chip.At the workshop, we have two system demonstrations for NLP tasks.The first demo classifies the input English Wikipedia sentence into one of the 14 classes.The second demo classifies the Chinese online-shopping review into positive or negative.",Deploy text classification and sentiment analysis applications for English and Chinese on a 300mW CNN accelerator chip for on-device application scenarios. 
959,Rapid Model Comparison by Amortizing Across Models,"Comparing the inferences of diverse candidate models is an essential part of model checking and escaping local optima.To enable efficient comparison, we introduce an amortized variational inference framework that can perform fast and reliable posterior estimation across models of the same architecture.Our Any Parameter Encoder extends the encoder neural network common in amortized inference to take both a data feature vector and a model parameter vector as input.APE thus reduces posterior inference across unseen data and models to a single forward pass.In experiments comparing candidate topic models for synthetic data and product reviews, our Any Parameter Encoder yields comparable posteriors to more expensive methods in far less time, especially when the encoder architecture is designed in model-aware fashion.","We develop VAEs where the encoder takes a model parameter vector as input, so we can do rapid inference for many models" 960,Self-Attentional Credit Assignment for Transfer in Reinforcement Learning,"The ability to transfer knowledge to novel environments and tasks is a sensible desiderata for general learning agents.Despite the apparent promises, transfer in RL is still an open and little exploited research area.In this paper, we take a brand-new perspective about transfer: we suggest that the ability to assign credit unveils structural invariants in the tasks that can be transferred to make RL more sample efficient.Our main contribution is Secret, a novel approach to transfer learning for RL that uses a backward-view credit assignment mechanism based on a self-attentive architecture.Two aspects are key to its generality: it learns to assign credit as a separate offline supervised process and exclusively modifies the reward function.Consequently, it can be supplemented by transfer methods that do not modify the reward function and it can be plugged on top of any RL algorithm.",Secret is a transfer method for RL based on the transfer of credit assignment. 961,Neural Variational Inference For Embedding Knowledge Graphs,"Recent advances in Neural Variational Inference allowed for a renaissance in latent variable models in a variety of domains involving high-dimensional data.In this paper, we introduce two generic Variational Inference frameworks for generative models of Knowledge Graphs; Latent Fact Model and Latent Information Model. While traditional variational methods derive an analytical approximation for the intractable distribution over the latent variables, here we construct an inference network conditioned on the symbolic representation of entities and relation types in the Knowledge Graph, to provide the variational distributions.The new framework can create models able to discover underlying probabilistic semantics for the symbolic representation by utilising parameterisable distributions which permit training by back-propagation in the context of neural variational inference, resulting in a highly-scalable method.Under a Bernoulli sampling framework, we provide an alternative justification for commonly used techniques in large-scale stochastic variational inference, which drastically reduces training time at a cost of an additional approximation to the variational lower bound. 
The generative frameworks are flexible enough to allow training under any prior distribution that permits a re-parametrisation trick, as well as under any scoring function that permits maximum likelihood estimation of the parameters.Experimental results display the potential and efficiency of this framework by improving upon multiple benchmarks with Gaussian prior representations.Code publicly available on Github.",Working toward generative knowledge graph models to better estimate predictive uncertainty in knowledge inference. 962,Monge-Ampère Flow for Generative Modeling,"We present a deep generative model, named Monge-Ampère flow, which builds on the continuous-time gradient flow arising from the Monge-Ampère equation in optimal transport theory.The generative map from the latent space to the data space follows a dynamical system, where a learnable potential function guides a compressible fluid to flow towards the target density distribution.Training of the model amounts to solving an optimal control problem.The Monge-Ampère flow has tractable likelihoods and supports efficient sampling and inference.One can easily impose symmetry constraints in the generative model by designing suitable scalar potential functions.We apply the approach to unsupervised density estimation of the MNIST dataset and variational calculation of the two-dimensional Ising model at the critical point.This approach brings insights and techniques from the Monge-Ampère equation, optimal transport, and fluid dynamics into reversible flow-based generative models.",A gradient flow based dynamical system for invertible generative modeling 963,Memory-Based Graph Networks,Graph Neural Networks are a class of deep models that operate on data with arbitrary topology and order-invariant structure represented as graphs.We introduce an efficient memory layer for GNNs that can learn to jointly perform graph representation learning and graph pooling.We also introduce two new networks based on our memory layer: Memory-Based Graph Neural Network and Graph Memory Network that can learn hierarchical graph representations by coarsening the graph throughout the layers of memory.The experimental results demonstrate that the proposed models achieve state-of-the-art results in six out of seven graph classification and regression benchmarks.We also show that the learned representations could correspond to chemical features in the molecule data.,We introduce an efficient memory layer that can learn representation and coarsen input graphs simultaneously without relying on message passing. 
964,"Efficient Computation of Quantized Neural Networks by {−1, +1} Encoding Decomposition","Deep neural networks require extensive computing resources, and can not be efficiently applied to embedded devices such as mobile phones, which seriously limits their applicability.To address this problem, we propose a novel encoding scheme by using to decompose quantized neural networks into multi-branch binary networks, which can be efficiently implemented by bitwise operations to achieve model compression, computational acceleration and resource saving.Our method can achieve at most ~59 speedup and ~32 memory saving over its full-precision counterparts.Therefore, users can easily achieve different encoding precisions arbitrarily according to their requirements and hardware resources.Our mechanism is very suitable for the use of FPGA and ASIC in terms of data storage and computation, which provides a feasible idea for smart chips.We validate the effectiveness of our method on both large-scale image classification and object detection tasks.","A novel encoding scheme of using to decompose QNNs into multi-branch binary networks, in which we used bitwise operations (xnor and bitcount) to achieve model compression, computational acceleration and resource saving. " 965,Diversity is All You Need: Learning Skills without a Reward Function,"Intelligent creatures can explore their environments and learn useful skills without supervision.""In this paper, we propose Diversity is All You Need, a method for learning useful skills without a reward function."", 'Our proposed method learns skills by maximizing an information theoretic objective using a maximum entropy policy.On a variety of simulated robotic tasks, we show that this simple objective results in the unsupervised emergence of diverse skills, such as walking and jumping.In a number of reinforcement learning benchmark environments, our method is able to learn a skill that solves the benchmark task despite never receiving the true task reward.We show how pretrained skills can provide a good parameter initialization for downstream tasks, and can be composed hierarchically to solve complex, sparse reward tasks.Our results suggest that unsupervised discovery of skills can serve as an effective pretraining mechanism for overcoming challenges of exploration and data efficiency in reinforcement learning.","We propose an algorithm for learning useful skills without a reward function, and show how these skills can be used to solve downstream tasks." 
966,Resolving Lexical Ambiguity in English–Japanese Neural Machine Translation,"Lexical ambiguity, i.e., the presence of two or more meanings for a single word, is an inherent and challenging problem for machine translation systems.Even though the use of recurrent neural networks and attention mechanisms are expected to solve this problem, machine translation systems are not always able to correctly translate lexically ambiguous sentences.In this work, I attempt to resolve the problem of lexical ambiguity in English--Japanese neural machine translation systems by combining a pretrained Bidirectional Encoder Representations from Transformer language model that can produce contextualized word embeddings and a Transformer translation model, which is a state-of-the-art architecture for the machine translation task.These two proposed architectures have been shown to be more effective in translating ambiguous sentences than a vanilla Transformer model and the Google Translate system.Furthermore, one of the proposed models, the Transformer_BERT-WE, achieves a higher BLEU score compared to the vanilla Transformer model in terms of general translation, which is concrete proof that the use of contextualized word embeddings from BERT can not only solve the problem of lexical ambiguity, but also boost the translation quality in general.",The paper solves a lexical ambiguity problem caused from homonym in neural translation by BERT. 967,Using GANs for Generation of Realistic City-Scale Ride Sharing/Hailing Data Sets,"This paper focuses on the synthetic generation of human mobility data in urban areas.We present a novel and scalable application of Generative Adversarial Networks for modeling and generating human mobility data.We leverage actual ride requests from ride sharing/hailing services from four major cities in the US to train our GANs model.Our model captures the spatial and temporal variability of the ride-request patterns observed for all four cities on any typical day and over any typical week.Previous works have succinctly characterized the spatial and temporal properties of human mobility data sets using the fractal dimensionality and the densification power law, respectively, which we utilize to validate our GANs-generated synthetic data sets.Such synthetic data sets can avoid privacy concerns and be extremely useful for researchers and policy makers on urban mobility and intelligent transportation.",This paper focuses on the synthetic generation of human mobility data in urban areas using GANs. 
968,Deep Denoising: Rate-Optimal Recovery of Structured Signals with a Deep Prior,"Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy image.The underlying principle is that neural networks trained on large datasets have empirically been shown to be able to generate natural images well from a low-dimensional latent representation of the image.Given such a generator network, or prior, a noisy image can be denoised by finding the closest image in the range of the prior.However, there is little theory to justify this success, let alone to predict the denoising performance as a function of the network's parameters.In this paper we consider the problem of denoising an image from additive Gaussian noise, assuming the image is well described by a deep neural network with ReLU activation functions, mapping a k-dimensional latent space to an n-dimensional image.We state and analyze a simple gradient-descent-like iterative algorithm that minimizes a non-convex loss function, and provably removes a fraction of the noise energy.We also demonstrate in numerical experiments that this denoising performance is, indeed, achieved by generative priors learned from data.","By analyzing an algorithm minimizing a non-convex loss, we show that all but a small fraction of noise can be removed from an image using a deep neural network based generative prior." 969,Large Scale Multi-Domain Multi-Task Learning with MultiModel,"Deep learning yields great results across many fields, from speech recognition, image classification, to translation.But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning.We present a single model that yields good results on a number of problems spanning multiple domains.In particular, this single model is trained concurrently on ImageNet, multiple translation tasks, image captioning, a speech recognition corpus, and an English parsing task.Our model architecture incorporates building blocks from multiple domains.It contains convolutional layers, an attention mechanism, and sparsely-gated layers.Each of these computational blocks is crucial for a subset of the tasks we train on.Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks.We also show that tasks with less data benefit largely from joint training with other tasks, while performance on large tasks degrades only slightly if at all.",Large scale multi-task architecture solves ImageNet and translation together and shows transfer learning. 
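Abstract 968 above denoises by "finding the closest image in the range of the prior". A hedged PyTorch sketch of that projection, minimizing ||G(z) - y||^2 over the latent code by gradient descent, is given below; `G` is an assumed pretrained generator and the step counts are placeholders rather than the paper's analyzed algorithm.

```python
import torch

def denoise_with_prior(G, y, latent_dim, steps=500, lr=0.05):
    """Denoise y by minimizing ||G(z) - y||^2 over the latent code z.

    G: a pretrained generator mapping a (latent_dim,) code to an image tensor,
    y: noisy observation with the same shape as G's output.
    """
    z = torch.randn(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - y) ** 2).sum()   # distance to the range of the prior
        loss.backward()
        opt.step()
    return G(z).detach()                  # denoised estimate
```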
970,Domain Adaptation Through Label Propagation: Learning Clustered and Aligned Features,"The difficulty of obtaining sufficient labeled data for supervised learning has motivated domain adaptation, in which a classifier is trained in one domain, source domain, but operates in another, target domain.Reducing domain discrepancy has improved the performance, but it is hampered by the embedded features that do not form clearly separable and aligned clusters.We address this issue by propagating labels using a manifold structure, and by enforcing cycle consistency to align the clusters of features in each domain more closely.Specifically, we prove that cycle consistency leads the embedded features distant from all but one clusters if the source domain is ideally clustered.We additionally utilize more information from approximated local manifold and pursue local manifold consistency for more improvement.Results for various domain adaptation scenarios show tighter clustering and an improvement in classification accuracy.",A novel domain adaptation method to align manifolds from source and target domains using label propagation for better accuracy. 971,Neural Network Bandit Learning by Last Layer Marginalization,"We propose a new method for training neural networks online in a bandit setting.Similar to prior work, we model the uncertainty only in the last layer of the network, treating the rest of the network as a feature extractor.This allows us to successfully balance between exploration and exploitation due to the efficient, closed-form uncertainty estimates available for linear models.To train the rest of the network, we take advantage of the posterior we have over the last layer, optimizing over all values in the last layer distribution weighted by probability.We derive a closed form, differential approximation to this objective and show empirically over various models and datasets that training the rest of the network in this fashion leads to both better online and offline performance when compared to other methods.",This paper proposes a new method for neural network learning in online bandit settings by marginalizing over the last layer 972,Aggregating explanation methods for neural networks stabilizes explanations,"Despite a growing literature on explaining neural networks, no consensus has been reached on how to explain a neural network decision or how to evaluate an explanation.Our contributions in this paper are twofold.First, we investigate schemes to combine explanation methods and reduce model uncertainty to obtain a single aggregated explanation.The aggregation is more robust and aligns better with the neural network than any single explanation method..Second, we propose a new approach to evaluating explanation methods that circumvents the need for manual evaluation and is not reliant on the alignment of neural networks and humans decision processes.",We show in theory and in practice that combining multiple explanation methods for DNN benefits the explanation. 
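The "closed-form uncertainty estimates available for linear models" mentioned in abstract 971 above usually refer to Bayesian linear regression on last-layer features. The sketch below shows that standard machinery (posterior update plus a Thompson-style weight sample); it is an illustration of the generic idea, not the paper's marginalization objective, and all names are hypothetical.

```python
import numpy as np

class BayesianLastLayer:
    """Bayesian linear regression over last-layer features for one reward head."""

    def __init__(self, dim, prior_var=1.0, noise_var=1.0):
        self.precision = np.eye(dim) / prior_var   # posterior precision
        self.b = np.zeros(dim)                      # precision-weighted mean accumulator
        self.noise_var = noise_var

    def update(self, phi, reward):
        # phi: (dim,) feature vector from the network's penultimate layer
        self.precision += np.outer(phi, phi) / self.noise_var
        self.b += phi * reward / self.noise_var

    def sample_weights(self, rng=np.random):
        cov = np.linalg.inv(self.precision)
        mean = cov @ self.b
        return rng.multivariate_normal(mean, cov)   # Thompson sample for exploration
```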
973,Training Neural Machines with Partial Traces,"We present a novel approach for training neural abstract architectures which incorporates supervision over the machine's interpretable components.To cleanly capture the set of neural architectures to which our method applies, we introduce the concept of a differential neural computational machine and show that several existing architectures can be instantiated as a ∂NCM and can thus benefit from any amount of additional supervision over their interpretable components.Based on our method, we performed a detailed experimental evaluation with both the NTM and NRAM architectures, and showed that the approach leads to significantly better convergence and generalization capabilities of the learning phase than when training using only input-output examples.",We increase the amount of trace supervision possible to utilize when training fully differentiable neural machine architectures. 974,Sampling-Free Learning of Bayesian Quantized Neural Networks,"Bayesian learning of model parameters in neural networks is important in scenarios where estimates with well-calibrated uncertainty are required.In this paper, we propose Bayesian quantized networks, quantized neural networks for which we learn a posterior distribution over their discrete parameters.We provide a set of efficient algorithms for learning and prediction in BQNs without the need to sample from their parameters or activations, which not only allows for differentiable learning in quantized models but also reduces the variance in gradient estimation.We evaluate BQNs on MNIST, Fashion-MNIST and KMNIST classification datasets compared against a bootstrap ensemble of QNNs.We demonstrate BQNs achieve both lower predictive errors and better-calibrated uncertainties than E-QNN.","We propose Bayesian quantized networks, for which we learn a posterior distribution over their quantized parameters." 975,Cascade Adversarial Machine Learning Regularized with a Unified Embedding,"Injecting adversarial examples during training, known as adversarial training, can improve robustness against one-step attacks, but not against unknown iterative attacks.To address this challenge, we first show iteratively generated adversarial images easily transfer between networks trained with the same strategy.Inspired by this observation, we propose cascade adversarial training, which transfers the knowledge of the end results of adversarial training.We train a network from scratch by injecting iteratively generated adversarial images crafted from already defended networks in addition to one-step adversarial images from the network being trained.We also propose to utilize embedding space for both classification and low-level similarity learning to ignore unknown pixel-level perturbations.During training, we inject adversarial images without replacing their corresponding clean images and penalize the distance between the two embeddings.Experimental results show that cascade adversarial training together with our proposed low-level similarity learning efficiently enhances the robustness against iterative attacks, but at the expense of decreased robustness against one-step attacks.We show that combining those two techniques can also improve robustness under the worst case black box attack scenario.",Cascade adversarial training + low level similarity learning improve robustness against both white box and black box attacks. 
976,Gradients explode - Deep Networks are shallow - ResNet explained,"Whereas it is believed that techniques such as Adam, batch normalization and, more recently, SELU nonlinearities solve the exploding gradient problem, we show that this is not the case and that in a range of popular MLP architectures, exploding gradients exist and that they limit the depth to which networks can be effectively trained, both in theory and in practice.We explain why exploding gradients occur and highlight the, which can arise in architectures that avoid exploding gradients.ResNets have significantly lower gradients and thus can circumvent the exploding gradient problem, enabling the effective training of much deeper networks, which we show is a consequence of a surprising mathematical property.By noticing that, we devise the, which reveals that introducing skip connections simplifies the network mathematically, and that this simplicity may be the major cause for their success.","We show that in contrast to popular wisdom, the exploding gradient problem has not been solved and that it limits the depth to which MLPs can be effectively trained. We show why gradients explode and how ResNet handles them." 977,From Adversarial Training to Generative Adversarial Networks,"In this paper, we are interested in two seemingly different concepts: adversarial training and generative adversarial networks.Particularly, we study how these techniques work to improve each other.To this end, we analyze the limitation of adversarial training as a defense method, starting from questioning how well the robustness of a model can generalize.Then, we successfully improve the generalizability via data augmentation with fake images sampled from a generative adversarial network.After that, we are surprised to see that the resulting robust classifier leads to a better generator, for free.We intuitively explain this interesting phenomenon and leave the theoretical analysis for future work.Motivated by these observations, we propose a system that combines generator, discriminator, and adversarial attacker together in a single network.After end-to-end training and fine tuning, our method can simultaneously improve the robustness of classifiers, measured by accuracy under strong adversarial attacks, and the quality of generators, evaluated both aesthetically and quantitatively.In terms of the classifier, we achieve better robustness than the state-of-the-art adversarial training algorithm proposed in, while our generator achieves competitive performance compared with SN-GAN.",We found that adversarial training not only speeds up GAN training but also increases image quality 978,Self-Binarizing Networks,"We present a method to train self-binarizing neural networks, that is, networks that evolve their weights and activations during training to become binary.To obtain similar binary networks, existing methods rely on the sign activation function.This function, however, has no gradients for non-zero values, which makes standard backpropagation impossible.To circumvent the difficulty of training a network relying on the sign activation function, these methods alternate between floating-point and binary representations of the network during training, which is sub-optimal and inefficient.We approach the binarization task by training on a unique representation involving a smooth activation function, which is iteratively sharpened during training until it becomes a binary representation equivalent to the sign activation function.Additionally, we introduce a new technique to perform binary batch 
normalization that simplifies the conventional batch normalization by transforming it into a simple comparison operation.This is unlike existing methods, which are forced to retain the conventional floating-point-based batch normalization.Our binary networks, apart from displaying advantages of lower memory and computation as compared to conventional floating-point and binary networks, also show higher classification accuracy than existing state-of-the-art methods on multiple benchmark datasets.",A method to binarize both weights and activations of a deep neural network that is efficient in computation and memory usage and performs better than the state-of-the-art. 979,Model Aggregation via Good-Enough Model Spaces," In many applications, the training data for a machine learning task is partitioned across multiple nodes, and aggregating this data may be infeasible due to storage, communication, or privacy constraints.In this work, we present Good-Enough Model Spaces, a novel framework for learning a global satisficing model within a few communication rounds by carefully combining the space of local nodes' satisficing models.In experiments on benchmark and medical datasets, our approach outperforms other baseline aggregation techniques such as ensembling or model averaging, and performs comparably to the ideal non-distributed models.","We present Good-Enough Model Spaces (GEMS), a framework for learning an aggregate model over distributed nodes within a small number of communication rounds." 980,Wasserstein Auto-Encoders,"We propose the Wasserstein Auto-Encoder, a new algorithm for building a generative model of the data distribution.WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder.This regularizer encourages the encoded training distribution to match the prior.We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders.Our experiments show that WAE shares many of the properties of VAEs while generating samples of better quality.","We propose a new auto-encoder based on the Wasserstein distance, which improves on the sampling properties of VAE." 
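For the Wasserstein Auto-Encoder (abstract 980 above), the regularizer pushes the aggregate encoded distribution toward the prior. One common instantiation is an MMD penalty between encoded codes and prior samples; a minimal PyTorch sketch under that assumption is below (the adversarial form of the penalty is another option, and the kernel and lambda here are placeholders).

```python
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Biased RBF-kernel MMD^2 between two sample batches of shape (n, d)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def wae_loss(encoder, decoder, x, lam=10.0):
    z = encoder(x)                    # samples from the aggregate encoded distribution
    x_rec = decoder(z)
    recon = ((x_rec - x) ** 2).mean()
    z_prior = torch.randn_like(z)     # samples from a standard Gaussian prior P_Z
    return recon + lam * rbf_mmd(z, z_prior)
```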
981,Differential Privacy in Adversarial Learning with Provable Robustness,"In this paper, we aim to develop a novel mechanism to preserve differential privacy in adversarial learning for deep neural networks, with provable robustness to adversarial examples.We leverage the sequential composition theory in DP to establish a new connection between DP preservation and provable robustness.To address the trade-off among model utility, privacy loss, and robustness, we design an original, differentially private, adversarial objective function, based on the post-processing property in DP, to tighten the sensitivity of our model.An end-to-end theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of DP deep neural networks.",Preserving Differential Privacy in Adversarial Learning with Provable Robustness to Adversarial Examples 982,Learning Abstract Models for Long-Horizon Exploration,"In high-dimensional reinforcement learning settings with sparse rewards, performing effective exploration to even obtain any reward signal is an open challenge.While model-based approaches hold promise of better exploration via planning, it is extremely difficult to learn a reliable enough Markov Decision Process in high dimensions.In this paper, we propose learning an abstract MDP over a much smaller number of states, which we can plan over for effective exploration.We assume we have an abstraction function that maps concrete states to abstract states.In our approach, a manager maintains an abstract MDP over a subset of the abstract states, which grows monotonically through targeted exploration.Concurrently, we learn a worker policy to travel between abstract states; the worker deals with the messiness of concrete states and presents a clean abstraction to the manager.On three of the hardest games from the Arcade Learning Environment, our approach outperforms the previous state-of-the-art by over a factor of 2 in each game.In Pitfall!, our approach is the first to achieve superhuman performance without demonstrations.","We automatically construct and explore a small abstract Markov Decision Process, enabling us to achieve state-of-the-art results on Montezuma's Revenge, Pitfall!, and Private Eye by a significant margin." 983,Way Off-Policy Batch Deep Reinforcement Learning of Human Preferences in Dialog,"Most deep reinforcement learning systems are not able to learn effectively from off-policy data, especially if they cannot explore online in the environment.This is a critical shortcoming for applying RL to real-world problems where collecting data is expensive, and models must be tested offline before being deployed to interact with the environment, e.g. 
systems that learn from human interaction.Thus, we develop a novel class of off-policy batch RL algorithms which use KL-control to penalize divergence from a pre-trained prior model of probable actions.This KL-constraint reduces extrapolation error, enabling effective offline learning, without exploration, from a fixed batch of data.We also use dropout-based uncertainty estimates to lower bound the target Q-values as a more efficient alternative to Double Q-Learning.This Way Off-Policy algorithm is tested on both traditional RL tasks from OpenAI Gym, and on the problem of open-domain dialog generation; a challenging reinforcement learning problem with a 20,000 dimensional action space.WOP allows for the extraction of multiple different reward functions post-hoc from collected human interaction data, and can learn effectively from all of these.We test real-world generalization by deploying dialog models live to converse with humans in an open-domain setting, and demonstrate that WOP achieves significant improvements over state-of-the-art prior methods in batch deep RL.","We show that KL-control from a pre-trained prior can allow RL models to learn from a static batch of collected data, without the ability to explore online in the environment." 984,Single Episode Policy Transfer in Reinforcement Learning,"Transfer and adaptation to new unknown environmental dynamics is a key challenge for reinforcement learning.An even greater challenge is performing near-optimally in a single attempt at test time, possibly without access to dense rewards, which is not addressed by current methods that require multiple experience rollouts for adaptation.To achieve single episode transfer in a family of environments with related dynamics, we propose a general algorithm that optimizes a probe and an inference model to rapidly estimate underlying latent variables of test dynamics, which are then immediately used as input to a universal control policy.This modular approach enables integration of state-of-the-art algorithms for variational inference or RL.Moreover, our approach does not require access to rewards at test time, allowing it to perform in settings where existing adaptive approaches cannot.In diverse experimental domains with a single episode test constraint, our method significantly outperforms existing adaptive approaches and shows favorable performance against baselines for robust transfer.","Single episode policy transfer in a family of environments with related dynamics, via optimized probing for rapid inference of latent variables and immediate execution of a universal policy." 
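The KL-control in abstract 983 above can be read as a per-step penalty on divergence from the pre-trained prior, folded into the reward the Q-learner sees. The sketch below shows such a penalized reward for a discrete action space; it omits the dropout-based lower bound on Q-targets mentioned in the abstract, and the coefficient and exact form are assumptions rather than the paper's recipe.

```python
import torch
import torch.nn.functional as F

def kl_penalized_reward(reward, policy_logits, prior_logits, action, c=0.1):
    """r'(s, a) = r(s, a) - c * [log pi(a|s) - log p_prior(a|s)].

    Penalizing the per-step log-ratio keeps the learned policy close to the
    pre-trained prior, which limits extrapolation error in offline/batch RL.
    policy_logits, prior_logits: 1-D tensors over actions; action: int index.
    """
    log_pi = F.log_softmax(policy_logits, dim=-1)[action]
    log_prior = F.log_softmax(prior_logits, dim=-1)[action]
    return reward - c * (log_pi - log_prior)
```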
985,Graph Convolutional Network with Sequential Attention For Goal-Oriented Dialogue Systems,"Domain specific goal-oriented dialogue systems typically require modeling three types of inputs, viz., the knowledge-base associated with the domain, the history of the conversation, which is a sequence of utterances and the current utterance for which the response needs to be generated.While modeling these inputs, current state-of-the-art models such as Mem2Seq typically ignore the rich structure inherent in the knowledge graph and the sentences in the conversation context.Inspired by the recent success of structure-aware Graph Convolutional Networks for various NLP tasks such as machine translation, semantic role labeling and document dating, we propose a memory augmented GCN for goal-oriented dialogues.Our model exploits the entity relation graph in a knowledge-base and the dependency graph associated with an utterance to compute richer representations for words and entities.Further, we take cognizance of the fact that in certain situations, such as, when the conversation is in a code-mixed language, dependency parsers may not be available.We show that in such situations we could use the global word co-occurrence graph and use it to enrich the representations of utterances.We experiment with the modified DSTC2 dataset and its recently released code-mixed versions in four languages and show that our method outperforms existing state-of-the-art methods, using a wide range of evaluation metrics.",We propose a Graph Convolutional Network based encoder-decoder model with sequential attention for goal-oriented dialogue systems. 986,SENSE: SEMANTICALLY ENHANCED NODE SEQUENCE EMBEDDING,"Effectively capturing graph node sequences in the form of vector embeddings is critical to many applications.We achieve this by first learning vector embeddings of single graph nodes and then composing them to compactly represent node sequences.Specifically, we propose SENSE-S, a skip-gram based novel embedding mechanism, for single graph nodes that co-learns graph structure as well as their textual descriptions.We demonstrate that SENSE-S vectors increase the accuracy of multi-label classification tasks by up to 50% and link-prediction tasks by up to 78% under a variety of scenarios using real datasets.Based on SENSE-S, we next propose generic SENSE to compute composite vectors that represent a sequence of nodes, where preserving the node order is important.We prove that this approach is efficient in embedding node sequences, and our experiments on real data confirm its high accuracy in node order decoding.",Node sequence embedding mechanism that captures both graph and text properties. 
987,Extreme Values are Accurate and Robust in Deep Networks,"Recent evidence shows that convolutional neural networks are biased towards textures so that CNNs are non-robust to adversarial perturbations over textures, while traditional robust visual features like SIFT are designed to be robust across a substantial range of affine distortion, addition of noise, etc., mimicking the nature of human perception.This paper aims to leverage good properties of SIFT to renovate CNN architectures towards better accuracy and robustness.We borrow the scale-space extreme value idea from SIFT, and propose EVPNet which contains three novel components to model the extreme values: parametric differences of Gaussian to extract extrema, truncated ReLU to suppress non-stable extrema and a projected normalization layer to mimic PCA-SIFT like feature normalization.Experiments demonstrate that EVPNets can achieve similar or better accuracy than conventional CNNs, while achieving much better robustness on a set of adversarial attacks even without adversarial training.",This paper aims to leverage good properties of robust visual features like SIFT to renovate CNN architectures towards better accuracy and robustness. 988,Learnability for the Information Bottleneck,"Compressed representations generalize better, which may be crucial when learning from limited or noisy labeled data.The Information Bottleneck (IB) method provides an insightful and principled approach for balancing compression and prediction in representation learning.The IB objective I(X;Z) − βI(Y;Z) employs a Lagrange multiplier β to tune this trade-off.However, there is little theoretical guidance for how to select β.There is also a lack of theoretical understanding about the relationship between β, the dataset, model capacity, and learnability.In this work, we show that if β is improperly chosen, learning cannot happen: the trivial representation p(z|x) = p(z) becomes the global minimum of the IB objective.We show how this can be avoided, by identifying a sharp phase transition between the unlearnable and the learnable which arises as β varies.This phase transition defines the concept of IB-Learnability.We prove several sufficient conditions for IB-Learnability, providing theoretical guidance for selecting β.We further show that IB-learnability is determined by the largest confident, typical, and imbalanced subset of the training examples.We give a practical algorithm to estimate the minimum β for a given dataset.We test our theoretical results on synthetic datasets, MNIST, and CIFAR10 with noisy labels, and make the surprising observation that accuracy may be non-monotonic in β.",Theory predicts the phase transition between unlearnable and learnable values of beta for the Information Bottleneck objective 989,MetaPoison: Learning to craft adversarial poisoning examples via meta-learning," We consider a new class of attacks on neural networks, in which the attacker takes control of a model by making small perturbations to a subset of its training data. We formulate the task of finding poisons as a bi-level optimization problem, which can be solved using methods borrowed from the meta-learning community. 
Unlike previous poisoning strategies, meta-poisoning can poison networks that are trained from scratch using an initialization unknown to the attacker and transfer across hyperparameters.Further, we show that our attacks are more versatile: they can cause misclassification of the target image into an arbitrarily chosen class.Our results show above 50% attack success rate when poisoning just 3-10% of the training dataset.",Generate corrupted training images that are imperceptible yet change CNN behavior on a target during any new training. 990,Learning Two-layer Neural Networks with Symmetric Inputs,"We give a new algorithm for learning a two-layer neural network under a very general class of input distributions.Assuming there is a ground-truth two-layer network y = A σ(Wx) + ξ, where A, W are weight matrices, ξ represents noise, and the number of neurons in the hidden layer is no larger than the input or output, our algorithm is guaranteed to recover the parameters A, W of the ground-truth network.The only requirement on the input x is that it is symmetric, which still allows highly complicated and structured input.Our algorithm is based on the method-of-moments framework and extends several results in tensor decompositions.We use spectral algorithms to avoid the complicated non-convex optimization in learning neural networks.Experiments show that our algorithm can robustly learn the ground-truth neural network with a small number of samples for many symmetric input distributions.",We give an algorithm for learning a two-layer neural network with a symmetric input distribution. 991,Interpretable and Pedagogical Examples,"Teachers intentionally pick the most informative examples to show their students.However, if the teacher and student are neural networks, the examples that the teacher network learns to give, although effective at teaching the student, are typically uninterpretable.We show that training the student and teacher iteratively, rather than jointly, can produce interpretable teaching strategies.We evaluate interpretability by measuring the similarity of the teacher's emergent strategies to intuitive strategies in each domain and conducting human experiments to evaluate how effective the teacher's strategies are at teaching humans.We show that the teacher network learns to select or generate interpretable, pedagogical examples to teach rule-based, probabilistic, boolean, and hierarchical concepts.","We show that training a student and teacher network iteratively, rather than jointly, can produce emergent, interpretable teaching strategies." 992,A Non-asymptotic comparison of SVRG and SGD: tradeoffs between compute and speed,"Stochastic gradient descent, which trades off noisy gradient updates for computational efficiency, is the de-facto optimization algorithm to solve large-scale machine learning problems.SGD can make rapid learning progress by performing updates using subsampled training data, but the noisy updates also lead to slow asymptotic convergence. Several variance reduction algorithms, such as SVRG, introduce control variates to obtain a lower variance gradient estimate and faster convergence. 
Despite their appealing asymptotic guarantees, SVRG-like algorithms have not been widely adopted in deep learning.The traditional asymptotic analysis in stochastic optimization provides limited insight into training deep learning models under a fixed number of epochs.In this paper, we present a non-asymptotic analysis of SVRG under a noisy least squares regression problem.Our primary focus is to compare the exact loss of SVRG to that of SGD at each iteration t.We show that the learning dynamics of our regression model closely matches with that of neural networks on MNIST and CIFAR-10 for both the underparameterized and the overparameterized models.Our analysis and experimental results suggest there is a trade-off between the computational cost and the convergence speed in underparametrized neural networks.SVRG outperforms SGD after a few epochs in this regime.However, SGD is shown to always outperform SVRG in the overparameterized regime.","Non-asymptotic analysis of SGD and SVRG, showing the strength of each algorithm in convergence speed and computational cost, in both under-parametrized and over-parametrized settings." 993,Understanding Knowledge Distillation in Non-autoregressive Machine Translation,"Non-autoregressive machine translation systems predict a sequence of output tokens in parallel, achieving substantial improvements in generation speed compared to autoregressive models.Existing NAT models usually rely on the technique of knowledge distillation, which creates the training data from a pretrained autoregressive model for better performance.Knowledge distillation is empirically useful, leading to large gains in accuracy for NAT models, but the reason for this success has, as of yet, been unclear.In this paper, we first design systematic experiments to investigate why knowledge distillation is crucial to NAT training.We find that knowledge distillation can reduce the complexity of data sets and help NAT to model the variations in the output data.Furthermore, a strong correlation is observed between the capacity of an NAT model and the optimal complexity of the distilled data for the best translation quality.Based on these findings, we further propose several approaches that can alter the complexity of data sets to improve the performance of NAT models.We achieve the state-of-the-art performance for the NAT-based models, and close the gap with the autoregressive baseline on WMT14 En-De benchmark.","We systematically examine why knowledge distillation is crucial to the training of non-autoregressive translation (NAT) models, and propose methods to further improve the distilled data to best match the capacity of an NAT model." 
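For reference on the SVRG/SGD comparison in abstract 992 above, a minimal sketch of one SVRG outer iteration (the control-variate gradient estimator it studies) is below; `grad_i` is an assumed per-example gradient oracle, and the step size and inner-loop length are placeholders rather than the paper's settings.

```python
import numpy as np

def svrg_epoch(w, grad_i, n, lr=0.1, inner_steps=None, rng=np.random):
    """One SVRG outer iteration.

    grad_i(w, i) returns the gradient of the i-th example's loss at w.
    """
    inner_steps = inner_steps or n
    w_snap = w.copy()
    mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)   # full gradient at snapshot
    for _ in range(inner_steps):
        i = rng.randint(n)
        # control-variate estimate: unbiased, and lower variance near w_snap
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w = w - lr * g
    return w
```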
994,Stabilizing Transformers for Reinforcement Learning,"Owing to their ability to both effectively integrate information over long time horizons and scale to massive amounts of data, self-attention architectures have recently shown breakthrough success in natural language processing, achieving state-of-the-art results in domains such as language modeling and machine translation.Harnessing the transformer's ability to process long time horizons of information could provide a similar performance boost in partially-observable reinforcement learning domains, but the large-scale transformers used in NLP have yet to be successfully applied to the RL setting.In this work we demonstrate that the standard transformer architecture is difficult to optimize, which was previously observed in the supervised learning setting but becomes especially pronounced with RL objectives.We propose architectural modifications that substantially improve the stability and learning speed of the original Transformer and XL variant.The proposed architecture, the Gated Transformer-XL, surpasses LSTMs on challenging memory environments and achieves state-of-the-art results on the multi-task DMLab-30 benchmark suite, exceeding the performance of an external memory architecture.We show that the GTrXL, trained using the same losses, has stability and performance that consistently matches or exceeds a competitive LSTM baseline, including on more reactive tasks where memory is less critical.GTrXL offers an easy-to-train, simple-to-implement but substantially more expressive architectural alternative to the standard multi-layer LSTM ubiquitously used for RL agents in partially-observable environments. ","We succeed in stabilizing transformers for training in the RL setting and demonstrate a large improvement over LSTMs on DMLab-30, matching an external memory architecture." 
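One of the stabilizing modifications reported for the Gated Transformer-XL (abstract 994 above) is replacing residual connections with a GRU-style gate biased toward the identity. The module below is a sketch of that idea as I understand it; the exact parameterization and initialization in the paper may differ.

```python
import torch
import torch.nn as nn

class GRUGate(nn.Module):
    """GRU-style gating used in place of a residual connection (a sketch of
    one gating variant attributed to the Gated Transformer-XL)."""

    def __init__(self, d, bias_init=2.0):
        super().__init__()
        self.Wr, self.Ur = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
        self.Wz, self.Uz = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
        self.Wg, self.Ug = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
        self.bz = nn.Parameter(torch.full((d,), bias_init))   # bias gate toward identity

    def forward(self, x, y):
        # x: residual-stream input, y: sublayer (attention or MLP) output
        r = torch.sigmoid(self.Wr(y) + self.Ur(x))
        z = torch.sigmoid(self.Wz(y) + self.Uz(x) - self.bz)
        h = torch.tanh(self.Wg(y) + self.Ug(x * r))
        return (1 - z) * x + z * h    # near-identity early in training
```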
995,Noisy Collaboration in Knowledge Distillation,"Knowledge distillation is an effective model compression technique in which a smaller model is trained to mimic a larger pretrained model.However, in order to make these compact models suitable for real world deployment, not only do we need to reduce the performance gap, but we also need to make them more robust to commonly occurring and adversarial perturbations.Noise permeates every level of the nervous system, from the perception of sensory signals to the generation of motor responses.We therefore believe that noise could be a crucial element in improving neural network training and addressing the apparently contradictory goals of improving both the generalization and robustness of the model.Inspired by trial-to-trial variability in the brain that can result from multiple noise sources, we introduce variability through noise at either the input level or the supervision signals.Our results show that noise can improve both the generalization and robustness of the model.“Fickle Teacher”, which uses dropout in the teacher model as a source of response variation, leads to significant generalization improvement.“Soft Randomization”, which matches the output distribution of the student model on the image with Gaussian noise to the output of the teacher on the original image, improves adversarial robustness manyfold compared to the student model trained with Gaussian noise.We further show the surprising effect of random label corruption on a model's adversarial robustness.The study highlights the benefits of adding constructive noise in the knowledge distillation framework and hopes to inspire further work in the area.","Inspired by trial-to-trial variability in the brain that can result from multiple noise sources, we introduce variability through noise in the knowledge distillation framework and study its effect on generalization and robustness." 996,Generalized Clustering by Learning to Optimize Expected Normalized Cuts,"We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples.Our clustering objective is based on optimizing normalized cuts, a criterion which measures both intra-cluster similarity as well as inter-cluster dissimilarity.We define a differentiable loss function equivalent to the expected normalized cuts.Unlike much of the work in unsupervised deep learning, our trained model directly outputs final cluster assignments, rather than embeddings that need further processing to be usable.Our approach generalizes to unseen datasets across a wide variety of domains, including text and images.Specifically, we achieve state-of-the-art results on popular unsupervised clustering benchmarks, outperforming the strongest baselines by up to 10.9%.Our generalization results are superior to the recent top-performing clustering approach with the ability to generalize.",We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. We define a differentiable loss function equivalent to the expected normalized cuts. 
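The "Soft Randomization" variant in abstract 995 above matches the student's output on a Gaussian-noised input to the teacher's output on the clean input. A hedged PyTorch sketch of that matching term is below; the temperature and KL direction follow common distillation practice and are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def soft_randomization_loss(student, teacher, x, sigma=0.1, T=4.0):
    """Match the student's distribution on a noisy input to the teacher's
    distribution on the clean input, with a standard distillation temperature."""
    x_noisy = x + sigma * torch.randn_like(x)       # Gaussian input noise
    with torch.no_grad():
        t_logp = F.log_softmax(teacher(x) / T, dim=-1)
    s_logp = F.log_softmax(student(x_noisy) / T, dim=-1)
    # KL(teacher || student); the direction and T^2 scaling are design choices
    return F.kl_div(s_logp, t_logp, log_target=True, reduction="batchmean") * T * T
```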
997,On recognition of Cyrillic Text,"We introduce the largest dataset for Cyrillic Handwritten Text Recognition and the first dataset for Cyrillic Text in the Wild Recognition, as well as suggest a method for recognizing Cyrillic Handwritten Text and Text in the Wild.Based on this approach, we develop a system that can reduce the document processing time for one of the largest mathematical competitions in Ukraine by 12 days and the amount of used paper by 0.5 ton.",We introduce several datasets for Cyrillic OCR and a method for its recognition 998,Learning Multi-facet Embeddings of Phrases and Sentences using Sparse Coding for Unsupervised Semantic Applications,"Most deep learning for NLP represents each word with a single point or single-mode region in semantic space, while the existing multi-mode word embeddings cannot represent longer word sequences like phrases or sentences.We introduce a phrase representation where each phrase has a distinct set of multi-mode codebook embeddings to capture different semantic facets of the phrase's meaning.The codebook embeddings can be viewed as the cluster centers which summarize the distribution of possibly co-occurring words in a pre-trained word embedding space.We propose an end-to-end trainable neural model that directly predicts the set of cluster centers from the input text sequence during test time.We find that the per-phrase/sentence codebook embeddings not only provide a more interpretable semantic representation but also outperform strong baselines on benchmark datasets for unsupervised phrase similarity, sentence similarity, hypernym detection, and extractive summarization.",We propose an unsupervised way to learn multiple embeddings for sentences and phrases 999,Sample Efficient Adaptive Text-to-Speech,"We present a meta-learning approach for adaptive text-to-speech with few data.During training, we learn a multi-speaker model using a shared conditional WaveNet core and independent learned embeddings for each speaker.The aim of training is not to produce a neural network with fixed weights, which is then deployed as a TTS system.Instead, the aim is to produce a network that requires few data at deployment time to rapidly adapt to new speakers.We introduce and benchmark three strategies: learning the speaker embedding while keeping the WaveNet core fixed, fine-tuning the entire architecture with stochastic gradient descent, and predicting the speaker embedding with a trained neural network encoder.The experiments show that these approaches are successful at adapting the multi-speaker neural network to new speakers, obtaining state-of-the-art results in both sample naturalness and voice similarity with merely a few minutes of audio data from new speakers.",Sample efficient algorithms to adapt a text-to-speech model to a new voice style with state-of-the-art performance. 
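The first adaptation strategy in abstract 999 above (fit only a new speaker embedding while the core stays frozen) reduces to a tiny optimization problem. The sketch below assumes a hypothetical `core(audio, embedding)` that returns a training loss for the pretrained model; all names and sizes are placeholders.

```python
import torch

def adapt_speaker_embedding(core, batches, steps=200, emb_dim=128, lr=1e-3):
    """Few-shot adaptation: optimize only a new speaker embedding, core frozen.

    core(audio, embedding) is assumed to return a scalar loss, e.g. the
    negative log-likelihood of the waveform under the conditional model.
    """
    for p in core.parameters():
        p.requires_grad_(False)                  # keep the pretrained core fixed
    emb = torch.zeros(emb_dim, requires_grad=True)
    opt = torch.optim.Adam([emb], lr=lr)
    for _, audio in zip(range(steps), batches):
        opt.zero_grad()
        loss = core(audio, emb)
        loss.backward()
        opt.step()
    return emb.detach()                           # embedding for the new speaker
```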
1000,Discriminative k-shot learning using probabilistic models,"This paper introduces a probabilistic framework for k-shot image classification.The goal is to generalise from an initial large-scale classification task to a separate task comprising new classes and small numbers of examples.The new approach not only leverages the feature-based representation learned by a neural network from the initial task, but also information about the classes.The concept information is encapsulated in a probabilistic model for the final layer weights of the neural network which acts as a prior for probabilistic k-shot learning.We show that even a simple probabilistic model achieves state-of-the-art on a standard k-shot learning dataset by a large margin.Moreover, it is able to accurately model uncertainty, leading to well calibrated classifiers, and is easily extensible and flexible, unlike many recent approaches to k-shot learning.",This paper introduces a probabilistic framework for k-shot image classification that achieves state-of-the-art results 1001,Recurrent Experience Replay in Distributed Reinforcement Learning,"Building on the recent successes of distributed training of RL agents, in this paper we investigate the training of RNN-based RL agents from distributed prioritized experience replay.We study the effects of parameter lag resulting in representational drift and recurrent state staleness and empirically derive an improved training strategy.Using a single network architecture and fixed set of hyper-parameters, the resulting agent, Recurrent Replay Distributed DQN, quadruples the previous state of the art on Atari-57, and matches the state of the art on DMLab-30.It is the first agent to exceed human-level performance in 52 of the 57 Atari games.",Investigation on combining recurrent neural networks and experience replay leading to state-of-the-art agent on both Atari-57 and DMLab-30 using single set of hyper-parameters. 1002,Linguistically-Informed Self-Attention for Semantic Role Labeling,"The current state-of-the-art end-to-end semantic role labeling model is a deep neural network architecture with no explicit linguistic features.However, prior work has shown that gold syntax trees can dramatically improve SRL, suggesting that neural network models could see great improvements from explicit modeling of syntax.In this work, we present linguistically-informed self-attention: a new neural network model that combines multi-head self-attention with multi-task learning across dependency parsing, part-of-speech, predicate detection and SRL.For example, syntax is incorporated by training one of the attention heads to attend to syntactic parents for each token.Our model can predict all of the above tasks, but it is also trained such that if a high-quality syntactic parse is already available, it can be beneficially injected at test time without re-training our SRL model.In experiments on the CoNLL-2005 SRL dataset LISA achieves an increase of 2.5 F1 absolute over the previous state-of-the-art on newswire with predicted predicates and more than 2.0 F1 on out-of-domain data.On CoNLL-2012 English SRL we also show an improvement of more than 3.0 F1, a 13% reduction in error.","Our combination of multi-task learning and self-attention, training the model to attend to parents in a syntactic parse tree, achieves state-of-the-art CoNLL-2005 and CoNLL-2012 SRL results for models using predicted predicates." 
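Entry 1002's key mechanism, training one self-attention head so that each token attends to its syntactic parent, can be illustrated with a small auxiliary loss. The sketch below is an assumption-laden simplification: `attn_weights` is one head's attention distribution and `parent_idx` holds gold parent positions; neither name comes from the paper's code.

```python
import torch
import torch.nn.functional as F

def syntactic_attention_loss(attn_weights, parent_idx, pad_mask):
    """Cross-entropy that pushes one attention head to put its mass on each
    token's syntactic parent (a sketch of linguistically-informed self-attention).

    attn_weights: (batch, seq, seq) attention distribution of a single head
    parent_idx:   (batch, seq) index of each token's gold parent
    pad_mask:     (batch, seq) True for real tokens, False for padding
    """
    log_attn = torch.log(attn_weights.clamp_min(1e-9))                          # (batch, seq, seq)
    nll = F.nll_loss(log_attn.transpose(1, 2), parent_idx, reduction="none")    # (batch, seq)
    return (nll * pad_mask).sum() / pad_mask.sum()
```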
1003,Selective Convolutional Units: Improving CNNs via Channel Selectivity,"Bottleneck structures with identity connection are now emerging as popular paradigms for designing deep convolutional neural networks, for processing large-scale features efficiently.In this paper, we focus on the information-preserving nature of identity connection and utilize this to enable a convolutional layer to have a new functionality of channel-selectivity, i.e., re-distributing its computations to important channels.In particular, we propose Selective Convolutional Unit, a widely-applicable architectural unit that improves parameter efficiency of various modern CNNs with bottlenecks.During training, SCU gradually learns the channel-selectivity on-the-fly via the alternative usage of pruning unimportant channels, and rewiring the pruned parameters to important channels.The rewired parameters emphasize the target channel in a way that selectively enlarges the convolutional kernels corresponding to it.Our experimental results demonstrate that the SCU-based models without any postprocessing generally achieve both model compression and accuracy improvement compared to the baselines, consistently for all tested architectures.","We propose a new module that improves any ResNet-like architectures by enforcing ""channel selective"" behavior to convolutional layers" 1004,Learning Weighted Representations for Generalization Across Designs,"Predictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications.One example is the estimation of treatment effects from observational data, where a subtask is to predict the effect of a treatment on subjects that are systematically different from those who received the treatment in the data.A related kind of distributional shift appears in unsupervised domain adaptation, where we are tasked with generalizing to a distribution of inputs that is different from the one in which we observe labels.We pose both of these problems as prediction under a shift in design.Popular methods for overcoming distributional shift are often heuristic or rely on assumptions that are rarely true in practice, such as having a well-specified model or knowing the policy that gave rise to the observed data.Other methods are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties.In this work, we devise a bound on the generalization error under design shift, based on integral probability metrics and sample re-weighting.We combine this idea with representation learning, generalizing and tightening existing results in this space.Finally, we propose an algorithmic framework inspired by our bound and verify its effectiveness in causal effect estimation.","A theory and algorithmic framework for prediction under distributional shift, including causal effect estimation and domain adaptation" 1005,Deep Ensembles: A Loss Landscape Perspective,"Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models.While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well.Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian 
principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift.One possible explanation for this gap between theory and practice is that popular scalable approximate Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space.We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions.Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, even though they often deviate significantly in the weight space.We demonstrate that while low-loss connectors between modes exist, they are not connected in the space of predictions.Developing the concept of the diversity--accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods.","We study deep ensembles through the lens of loss landscape and the space of predictions, demonstrating that the decorrelation power of random initializations is unmatched by subspace sampling that only explores a single mode." 1006,Is my Deep Learning Model Learning more than I want it to?,"Existing deep learning approaches for learning visual features tend to extract more information than what is required for the task at hand.From a privacy preservation perspective, the input visual information is not protected from the model, enabling the model to become more intelligent than it is trained to be.Existing approaches for suppressing additional task learning assume the presence of ground truth labels for the tasks to be suppressed during training time.In this research, we propose a three-fold novel contribution: a novel metric to measure the trust score of a trained deep learning model, a model-agnostic solution framework for trust score improvement by suppressing all the unwanted tasks, and a simulated benchmark dataset, PreserveTask, having five different fundamental image classification tasks to study the generalization nature of models.In the first set of experiments, we measure and improve the trust scores of five popular deep learning models: VGG16, VGG19, Inception-v1, MobileNet, and DenseNet and demonstrate that Inception-v1 has the lowest trust score.Additionally, we show results of our framework on the color-MNIST dataset and practical applications of face attribute preservation in the Diversity in Faces and IMDB-Wiki datasets.",Can we trust our deep learning models? A framework to measure and improve a deep learning model's trust during training. 
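Entry 1005 measures function-space similarity between ensemble members by comparing their predictions. A minimal version of that kind of diagnostic is sketched below; the disagreement fraction and the models being compared are generic placeholders, not the paper's exact protocol.

```python
import torch

@torch.no_grad()
def prediction_disagreement(model_a, model_b, loader, device="cpu"):
    """Fraction of test points on which two independently trained networks predict
    different classes, a simple function-space (dis)similarity measure."""
    differ, total = 0, 0
    for x, _ in loader:
        x = x.to(device)
        pred_a = model_a(x).argmax(dim=1)
        pred_b = model_b(x).argmax(dim=1)
        differ += (pred_a != pred_b).sum().item()
        total += x.size(0)
    return differ / total
```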
1007,Model-Predictive Policy Learning with Uncertainty Regularization for Driving in Dense Traffic," Learning a policy using only observational data is challenging because the distribution of states it induces at execution time may differ from the distribution observed during training.In this work, we propose to train a policy while explicitly penalizing the mismatch between these two distributions over a fixed time horizon.We do this by using a learned model of the environment dynamics which is unrolled for multiple time steps, and training a policy network to minimize a differentiable cost over this rolled-out trajectory.This cost contains two terms: a policy cost which represents the objective the policy seeks to optimize, and an uncertainty cost which represents its divergence from the states it is trained on.We propose to measure this second cost by using the uncertainty of the dynamics model about its own predictions, using recent ideas from uncertainty estimation for deep networks.We evaluate our approach using a large-scale observational dataset of driving behavior recorded from traffic cameras, and show that we are able to learn effective driving policies from purely observational data, with no environment interaction.",A model-based RL approach which uses a differentiable uncertainty penalty to learn driving policies from purely observational data. 1008,Customizing Sequence Generation with Multi-Task Dynamical Systems,"Dynamical system models often lack the ability to adapt the sequence generation or prediction to a given context, limiting their real-world application.In this paper we show that hierarchical multi-task dynamical systems provide direct user control over sequence generation, via use of a latent code z that specifies the customization to the individual data sequence.This enables style transfer, interpolation and morphing within generated sequences.We show the MTDS can improve predictions via latent code interpolation, and avoid the long-term performance degradation of standard RNN approaches.",Tailoring predictions from sequence models (such as LDSs and RNNs) via an explicit latent code. 
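Entry 1007 trains a policy by unrolling a learned dynamics model and minimizing a policy cost plus an uncertainty cost. The sketch below captures that structure under explicit assumptions: `dynamics` is a learned model kept in dropout mode so the spread of several stochastic forward passes serves as the uncertainty penalty, and `task_cost` is any differentiable per-state cost; all names and the specific cost terms are illustrative.

```python
import torch

def rollout_loss(policy, dynamics, task_cost, state, horizon=10, n_samples=4, lam=0.5):
    """Unroll a learned dynamics model for `horizon` steps and sum a task-specific
    policy cost plus an uncertainty cost given by the spread of several stochastic
    (dropout-enabled) dynamics predictions."""
    policy_cost, uncertainty_cost = 0.0, 0.0
    for _ in range(horizon):
        action = policy(state)
        samples = torch.stack([dynamics(state, action) for _ in range(n_samples)])
        next_state = samples.mean(dim=0)
        uncertainty_cost = uncertainty_cost + samples.var(dim=0).mean()
        policy_cost = policy_cost + task_cost(next_state)
        state = next_state
    return policy_cost + lam * uncertainty_cost
```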
1009,Improving the Generalization of Adversarial Training with Domain Adaptation,"By injecting adversarial examples into training data, adversarial training is promising for improving the robustness of deep learning models.However, most existing adversarial training approaches are based on a specific type of adversarial attack.It may not provide sufficiently representative samples from the adversarial domain, leading to a weak generalization ability on adversarial examples from other attacks.Moreover, during the adversarial training, adversarial perturbations on inputs are usually crafted by fast single-step adversaries so as to scale to large datasets.This work mainly focuses on adversarial training with the efficient single-step FGSM adversary.In this scenario, it is difficult to train a model with great generalization due to the lack of representative adversarial samples, i.e., the samples are unable to accurately reflect the adversarial domain.To alleviate this problem, we propose a novel Adversarial Training with Domain Adaptation method.Our intuition is to regard adversarial training with the FGSM adversary as a domain adaptation task with a limited number of target domain samples.The main idea is to learn a representation that is semantically meaningful and domain invariant on the clean domain as well as the adversarial domain.Empirical evaluations on Fashion-MNIST, SVHN, CIFAR-10 and CIFAR-100 demonstrate that ATDA can greatly improve the generalization of adversarial training and the smoothness of the learned models, and outperforms state-of-the-art methods on standard benchmark datasets.To show the transfer ability of our method, we also extend ATDA to the adversarial training on iterative attacks such as PGD-Adversarial Training and the defense performance is improved considerably.",We propose a novel adversarial training with domain adaptation method that significantly improves the generalization ability on adversarial examples from different attacks. 1010,Group-Transformer: Towards A Lightweight Character-level Language Model,"Character-level language modeling is an essential but challenging task in Natural Language Processing.Prior works have focused on identifying long-term dependencies between characters and have built deeper and wider networks for better performance.However, their models require substantial computational resources, which hinders the usability of character-level language models in applications with limited resources.In this paper, we propose a lightweight model, called Group-Transformer, that reduces the resource requirements for a Transformer, a promising method for modeling sequences with long-term dependencies.Specifically, the proposed method partitions linear operations to reduce the number of parameters and computational cost.As a result, Group-Transformer only uses 18.2% of the parameters compared to the best performing LSTM-based model, while providing better performance on two benchmark tasks, enwik8 and text8.When compared to Transformers with a comparable number of parameters and time complexity, the proposed model shows better performance.The implementation code will be available.","This paper proposes a novel lightweight Transformer for character-level language modeling, utilizing group-wise operations." 
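Entry 1010 reduces parameters by partitioning linear operations into groups. A minimal sketch of such a group-wise linear projection is below; the class name and the absence of any cross-group mixing are simplifying assumptions, not the paper's full Group-Transformer design.

```python
import torch
import torch.nn as nn

class GroupLinear(nn.Module):
    """Splits the feature dimension into groups and applies an independent,
    smaller linear map per group, cutting parameters roughly by the group count."""
    def __init__(self, dim, n_groups):
        super().__init__()
        assert dim % n_groups == 0
        self.n_groups = n_groups
        self.proj = nn.ModuleList(nn.Linear(dim // n_groups, dim // n_groups)
                                  for _ in range(n_groups))

    def forward(self, x):                       # x: (batch, seq, dim)
        chunks = x.chunk(self.n_groups, dim=-1)
        return torch.cat([p(c) for p, c in zip(self.proj, chunks)], dim=-1)
```

A full dense layer of width `dim` has roughly `dim * dim` weights, whereas this grouped version has about `dim * dim / n_groups`, which is the source of the parameter savings.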
1011,Towards Stable and Comprehensive Domain Alignment: Max-Margin Domain-Adversarial Training," Domain adaptation tackles the problem of transferring knowledge from a label-rich source domain to an unlabeled or label-scarce target domain.Recently, domain-adversarial training has shown promising capacity to learn a domain-invariant feature space by reversing the gradient propagation of a domain classifier.However, DAT is still vulnerable in several aspects including training instability due to the overwhelming discriminative ability of the domain classifier in adversarial training, restrictive feature-level alignment, and lack of interpretability or systematic explanation of the learned feature space.In this paper, we propose a novel Max-margin Domain-Adversarial Training by designing an Adversarial Reconstruction Network.The proposed MDAT stabilizes the gradient reversing in ARN by replacing the domain classifier with a reconstruction network, and in this manner ARN conducts both feature-level and pixel-level domain alignment without involving extra network structures.Furthermore, ARN demonstrates strong robustness to a wide range of hyper-parameter settings, greatly alleviating the task of model selection.Extensive empirical results validate that our approach outperforms other state-of-the-art domain alignment methods.Additionally, the reconstructed target samples are visualized to interpret the domain-invariant feature space which conforms with our intuition.",A stable domain-adversarial training approach for robust and comprehensive domain adaptation 1012,Synonym Expansion for Large Shopping Taxonomies,"We present an approach for expanding taxonomies with synonyms, or aliases.We target large shopping taxonomies, with thousands of nodes.A comprehensive set of entity aliases is an important component of identifying entities in unstructured text such as product reviews or search queries.Our method consists of two stages: we generate synonym candidates from WordNet and shopping search queries, then use a binary classifier to filter candidates.We process taxonomies with thousands of synonyms in order to generate over 90,000 synonyms.We show that using the taxonomy to derive contextual features improves classification performance over using features from the target node alone.We show that our approach has potential for transfer learning between different taxonomy domains, which reduces the need to collect training data for new taxonomies.",We use machine learning to generate synonyms for large shopping taxonomies. 
1013,Improving End-to-End Object Tracking Using Relational Reasoning,"Relational reasoning, the ability to model interactions and relations between objects, is valuable for robust multi-object tracking and pivotal for trajectory prediction.In this paper, we propose MOHART, a class-agnostic, end-to-end multi-object tracking and trajectory prediction algorithm, which explicitly accounts for permutation invariance in its relational reasoning.We explore a number of permutation invariant architectures and show that multi-headed self-attention outperforms the provided baselines and better accounts for complex physical interactions in a challenging toy experiment.We show on three real-world tracking datasets that adding relational reasoning capabilities in this way increases the tracking and trajectory prediction performance, particularly in the presence of ego-motion, occlusions, crowded scenes, and faulty sensor inputs.To the best of our knowledge, MOHART is the first fully end-to-end multi-object tracking from vision approach applied to real-world data reported in the literature.",MOHART uses a self-attention mechanism to perform relational reasoning in multi-object tracking. 1014,Off-Policy Actor-Critic with Shared Experience Replay,"We investigate the combination of actor-critic reinforcement learning algorithms with uniform large-scale experience replay and propose solutions for two challenges: (a) efficient actor-critic learning with experience replay and (b) stability of very off-policy learning.We employ those insights to accelerate hyper-parameter sweeps in which all participating agents run concurrently and share their experience via a common replay module.To this end we analyze the bias-variance tradeoffs in V-trace, a form of importance sampling for actor-critic methods.Based on our analysis, we then argue for mixing experience sampled from replay with on-policy experience, and propose a new trust region scheme that scales effectively to data distributions where V-trace becomes unstable.We provide extensive empirical validation of the proposed solution.We further show the benefits of this setup by demonstrating state-of-the-art data efficiency on Atari among agents trained up until 200M environment frames.",We investigate and propose solutions for two challenges in reinforcement learning: (a) efficient actor-critic learning with experience replay (b) stability of very off-policy learning. 
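Entry 1013's relational reasoning is multi-headed self-attention applied across per-object representations, which is permutation invariant over objects. The sketch below uses PyTorch's built-in `nn.MultiheadAttention` over a set of object embeddings; the dimensions and the residual update are generic assumptions rather than the MOHART architecture.

```python
import torch
import torch.nn as nn

class RelationalBlock(nn.Module):
    """Self-attention across object slots: each object's representation is updated
    from all others, and the update is invariant to the ordering of the objects."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, objects):                 # objects: (batch, n_objects, dim)
        attended, _ = self.attn(objects, objects, objects)
        return self.norm(objects + attended)    # residual update, order-agnostic
```

Because attention aggregates over the object axis with symmetric weights, permuting the input objects simply permutes the output in the same way.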
1015,Automatically Trading off Time and Variance when Selecting Gradient Estimators,"Stochastic gradient descent is the workhorse of modern machine learning.Sometimes, there are many different potential gradient estimators that can be used.When this is the case, choosing the one with the best tradeoff between cost and variance is important.This paper analyzes the convergence rates of SGD as a function of time, rather than iterations.This results in a simple rule to select the estimator that leads to the best optimization convergence guarantee.This choice is the same for different variants of SGD, and with different assumptions about the objective.Inspired by this principle, we propose a technique to automatically select an estimator when a finite pool of estimators is given.Then, we extend to infinite pools of estimators, where each one is indexed by control variate weights.This is enabled by a reduction to a mixed-integer quadratic program.Empirically, automatically choosing an estimator performs comparably to the best estimator chosen with hindsight.",We propose a gradient estimator selection algorithm with the aim of improving optimization efficiency. 1016,Stackelberg GAN: Towards Provable Minimax Equilibrium via Multi-Generator Architectures,"We study the problem of alleviating the instability issue in the GAN training procedure via new architecture design.The discrepancy between the minimax and maximin objective values could serve as a proxy for the difficulties that the alternating gradient descent encounters in the optimization of GANs.In this work, we give new results on the benefits of multi-generator architecture of GANs.We show that the minimax gap shrinks to \epsilon as the number of generators increases with rate O.This improves over the best-known result of O.At the core of our techniques is a novel application of the Shapley-Folkman lemma to the generic minimax problem, where in the literature the technique was only known to work when the objective function is restricted to the Lagrangian function of a constraint optimization problem.Our proposed Stackelberg GAN performs well experimentally in both synthetic and real-world datasets, improving Frechet Inception Distance by 14.61% over the previous multi-generator GANs on the benchmark datasets.","We study the problem of alleviating the instability issue in the GAN training procedure via new architecture design, with theoretical guarantees." 1017,Accelerating SGD with momentum for over-parameterized learning,"Nesterov SGD is widely used for training modern neural networks and other machine learning models.Yet, its advantages over SGD have not been theoretically clarified.Indeed, as we show in this paper, both theoretically and empirically, Nesterov SGD with any parameter selection does not in general provide acceleration over ordinary SGD.Furthermore, Nesterov SGD may diverge for step sizes that ensure convergence of ordinary SGD.This is in contrast to the classical results in the deterministic setting, where the same step size ensures accelerated convergence of Nesterov's method over optimal gradient descent.To address the non-acceleration issue, we introduce a compensation term to Nesterov SGD.The resulting algorithm, which we call MaSS, converges for the same step sizes as SGD.We prove that MaSS obtains accelerated convergence rates over SGD for any mini-batch size in the linear setting. 
For full batch, the convergence rate of MaSS matches the well-known accelerated rate of Nesterov's method.We also analyze the practically important question of the dependence of the convergence rate and optimal hyper-parameters on the mini-batch size, demonstrating three distinct regimes: linear scaling, diminishing returns and saturation.Experimental evaluation of MaSS for several standard architectures of deep networks, including ResNet and convolutional networks, shows improved performance over SGD, Nesterov SGD and Adam.","This work proves the non-acceleration of Nesterov SGD with any hyper-parameters, and proposes new algorithm which provably accelerates SGD in the over-parameterized setting." 1018,Towards a Deep Network Architecture for Structured Smoothness,"We propose the Fixed Grouping Layer; a novel feedforward layer designed to incorporate the inductive bias of structured smoothness into a deep learning model.FGL achieves this goal by connecting nodes across layers based on spatial similarity.The use of structured smoothness, as implemented by FGL, is motivated by applications to structured spatial data, which is, in turn, motivated by domain knowledge.The proposed model architecture outperforms conventional neural network architectures across a variety of simulated and real datasets with structured smoothness.",A feedforward layer to incorporate structured smoothness into a deep learning model 1019,Deformable Kernels: Adapting Effective Receptive Fields for Object Deformation,"Convolutional networks are not aware of an object's geometric variations, which leads to inefficient utilization of model and data capacity.To overcome this issue, recent works on deformation modeling seek to spatially reconfigure the data towards a common arrangement such that semantic recognition suffers less from deformation.This is typically done by augmenting static operators with learned free-form sampling grids in the image space, dynamically tuned to the data and task for adapting the receptive field.Yet adapting the receptive field does not quite reach the actual goal -- what really matters to the network is the *effective* receptive field, which reflects how much each pixel contributes.It is thus natural to design other approaches to adapt the ERF directly during runtime.In this work, we instantiate one possible solution as Deformable Kernels, a family of novel and generic convolutional operators for handling object deformations by directly adapting the ERF while leaving the receptive field untouched.At the heart of our method is the ability to resample the original kernel space towards recovering the deformation of objects.This approach is justified with theoretical insights that the ERF is strictly determined by data sampling locations and kernel values.We implement DKs as generic drop-in replacements of rigid kernels and conduct a series of empirical studies whose results conform with our theories.Over several tasks and standard base models, our approach compares favorably against prior works that adapt during runtime.In addition, further experiments suggest a working mechanism orthogonal and complementary to previous works.",Don't deform your convolutions -- deform your kernels. 
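Entry 1018's Fixed Grouping Layer connects units across layers according to a spatial grouping rather than densely. One way to realize that inductive bias, shown below purely as an assumption-driven sketch, is a linear layer whose weight matrix is masked by group membership; the mask construction and class name are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

class FixedGroupingLayer(nn.Module):
    """Linear layer whose connectivity is restricted by a fixed spatial grouping:
    each output unit only receives inputs that share its (pre-computed) group id."""
    def __init__(self, in_groups, out_groups):
        # in_groups / out_groups: integer group id per input / output unit.
        super().__init__()
        in_groups, out_groups = torch.as_tensor(in_groups), torch.as_tensor(out_groups)
        self.linear = nn.Linear(len(in_groups), len(out_groups))
        mask = (out_groups.unsqueeze(1) == in_groups.unsqueeze(0)).float()  # (out, in)
        self.register_buffer("mask", mask)

    def forward(self, x):
        return nn.functional.linear(x, self.linear.weight * self.mask, self.linear.bias)
```

Gradients only flow through within-group connections, which gives the structured-smoothness bias with far fewer effective parameters than a dense layer.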
1020,Learning protein sequence embeddings using information from structure,"Inferring the structural properties of a protein from its amino acid sequence is a challenging yet important problem in biology.Structures are not known for the vast majority of protein sequences, but structure is critical for understanding function.Existing approaches for detecting structural similarity between proteins from sequence are unable to recognize and exploit structural patterns when sequences have diverged too far, limiting our ability to transfer knowledge between structurally related proteins.We newly approach this problem through the lens of representation learning.We introduce a framework that maps any protein sequence to a sequence of vector embeddings --- one per amino acid position --- that encode structural information.We train bidirectional long short-term memory models on protein sequences with a two-part feedback mechanism that incorporates information from global structural similarity between proteins and pairwise residue contact maps for individual proteins.To enable learning from structural similarity information, we define a novel similarity measure between arbitrary-length sequences of vector embeddings based on a soft symmetric alignment between them.Our method is able to learn useful position-specific embeddings despite lacking direct observations of position-level correspondence between sequences.We show empirically that our multi-task framework outperforms other sequence-based methods and even a top-performing structure-based alignment method when predicting structural similarity, our goal.Finally, we demonstrate that our learned embeddings can be transferred to other protein sequence problems, improving the state-of-the-art in transmembrane domain prediction.",We present a method for learning protein sequence embedding models using structural information in the form of global structural similarity between proteins and within protein residue-residue contacts. 
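The soft symmetric alignment in entry 1020 compares two variable-length sequences of embeddings through soft alignments computed in both directions. The sketch below is one plausible reading under stated assumptions (pairwise L1 distances, row- and column-wise softmax attention, and a symmetric combination); the exact functional form in the paper may differ.

```python
import torch

def soft_symmetric_alignment(z_a, z_b):
    """Similarity between two embedding sequences of shapes (len_a, d) and (len_b, d)
    via soft alignments computed from both sides and combined symmetrically."""
    dist = torch.cdist(z_a, z_b, p=1)                 # (len_a, len_b) pairwise L1 distances
    alpha = torch.softmax(-dist, dim=1)               # align each position of a onto b
    beta = torch.softmax(-dist, dim=0)                # align each position of b onto a
    weights = alpha + beta - alpha * beta             # symmetric soft alignment weights
    return -(weights * dist).sum() / weights.sum()    # negative expected aligned distance
```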
1021,Regularity Normalization: Constraining Implicit Space with Minimum Description Length,"Inspired by the adaptation phenomenon of biological neuronal firing, we propose regularity normalization: a reparameterization of the activation in the neural network that takes into account the statistical regularity in the implicit space.By considering the neural network optimization process as a model selection problem, the implicit space is constrained by the normalizing factor, the minimum description length of the optimal universal code.We introduce an incremental version of computing this universal code as normalized maximum likelihood and demonstrate its flexibility to include data priors such as top-down attention and other oracle information and its compatibility to be incorporated into batch normalization and layer normalization.The preliminary results showed that the proposed method outperforms existing normalization methods in tackling the limited and imbalanced data from a non-stationary distribution benchmarked on a computer vision task.As an unsupervised attention mechanism given input data, this biologically plausible normalization has the potential to deal with other complicated real-world scenarios as well as reinforcement learning settings where the rewards are sparse and non-uniform.Further research is proposed to discover these scenarios and explore the behaviors among different variants.","Considering the neural network optimization process as a model selection problem, we introduce a biologically plausible normalization method that extracts statistical regularity under the MDL principle to tackle the imbalanced and limited data issue." 1022,VILD: Variational Imitation Learning with Diverse-quality Demonstrations,"The goal of imitation learning is to learn a good policy from high-quality demonstrations.However, the quality of demonstrations in reality can be diverse, since it is easier and cheaper to collect demonstrations from a mix of experts and amateurs.IL in such situations can be challenging, especially when the level of demonstrators' expertise is unknown.We propose a new IL paradigm called Variational Imitation Learning with Diverse-quality demonstrations, where we explicitly model the level of demonstrators' expertise with a probabilistic graphical model and estimate it along with a reward function.We show that a naive estimation approach is not suitable for large state and action spaces, and fix this issue by using a variational approach that can be easily implemented using existing reinforcement learning methods.Experiments on continuous-control benchmarks demonstrate that VILD outperforms state-of-the-art methods.Our work enables scalable and data-efficient IL under more realistic settings than before.",We propose an imitation learning method to learn from diverse-quality demonstrations collected by demonstrators with different levels of expertise. 
1023,Using Semantic Distance as a Heuristic for Service Planning,"With a growing number of available services, each having slightly different parameters, preconditions and effects, automated planning on general semantic services becomes highly relevant.However, most existing planners only consider PDDL, or if they claim to use OWL-S, they usually translate it to PDDL, losing much of the semantics on the way.In this paper, we propose a new domain-independent heuristic based on a semantic distance that can be used by generic planning algorithms such as A* for automated planning of semantic services described with OWL-S.For the heuristic to include more relevant information, we calculate it at runtime.Using this heuristic, we are able to produce better results in less time than with established techniques.",Describing a semantic heuristic which builds upon an OWL-S service description and uses word and sentence distance measures to evaluate the usefulness of services for a given goal. 1024,Evolving the Olfactory System,"Flies and mice are species separated by 600 million years of evolution, yet have evolved olfactory systems that share many similarities in their anatomic and functional organization.What functions do these shared anatomical and functional features serve, and are they optimal for odor sensing?In this study, we address the optimality of evolutionary design in olfactory circuits by studying artificial neural networks trained to sense odors.We found that artificial neural networks quantitatively recapitulate structures inherent in the olfactory system, including the formation of glomeruli onto a compression layer and sparse and random connectivity onto an expansion layer.Finally, we offer theoretical justifications for each result.Our work offers a framework to explain the evolutionary convergence of olfactory circuits, and gives insight and logic into the anatomic and functional structure of the olfactory system.",Artificial neural networks evolved the same structures present in the olfactory systems of flies and mice after being trained to classify odors 1025,Harmonic Unpaired Image-to-image Translation,"The recent direction of unpaired image-to-image translation is on one hand very exciting as it alleviates the big burden in obtaining label-intensive pixel-to-pixel supervision, but it is on the other hand not fully satisfactory due to the presence of artifacts and degenerated transformations.In this paper, we take a manifold view of the problem by introducing a smoothness term over the sample graph to attain harmonic functions to enforce consistent mappings during the translation.We develop HarmonicGAN to learn bi-directional translations between the source and the target domains.With the help of similarity-consistency, the inherent self-consistency property of samples can be maintained.Distance metrics defined on two types of features including histogram and CNN are exploited.Under an identical problem setting as CycleGAN, without additional manual inputs and only at a small training-time cost, HarmonicGAN demonstrates a significant qualitative and quantitative improvement over the state of the art, as well as improved interpretability.We show experimental results in a number of applications including medical imaging, object transfiguration, and semantic labeling.We outperform the competing methods in all tasks, and for a medical imaging task in particular our method turns CycleGAN from a failure to a success, halving the mean-squared error, and generating images that 
radiologists prefer over competing methods in 95% of cases.",Smooth regularization over sample graph for unpaired image-to-image translation results in significantly improved consistency 1026,DPSNet: End-to-end Deep Plane Sweep Stereo,"Multiview stereo aims to reconstruct scene depth from images acquired by a camera under arbitrary motion.Recent methods address this problem through deep learning, which can utilize semantic cues to deal with challenges such as textureless and reflective regions.In this paper, we present a convolutional neural network called DPSNet whose design is inspired by best practices of traditional geometry-based approaches.Rather than directly estimating depth and/or optical flow correspondence from image pairs as done in many previous deep learning methods, DPSNet takes a plane sweep approach that involves building a cost volume from deep features using the plane sweep algorithm, regularizing the cost volume via a context-aware cost aggregation, and regressing the depth map from the cost volume.The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network.Through the effective incorporation of conventional multiview stereo concepts within a deep learning framework, DPSNet achieves state-of-the-art reconstruction results on a variety of challenging datasets.",A convolutional neural network for multi-view stereo matching whose design is inspired by best practices of traditional geometry-based approaches 1027,Model-based reinforcement learning for biological sequence design,"The ability to design biological structures such as DNA or proteins would have considerable medical and industrial impact.Doing so presents a challenging black-box optimization problem characterized by the large-batch, low-round setting due to the need for labor-intensive wet lab evaluations.In response, we propose using reinforcement learning based on proximal-policy optimization for biological sequence design.RL provides a flexible framework for optimizing generative sequence models to achieve specific criteria, such as diversity among the high-quality sequences discovered.We propose a model-based variant of PPO, DyNA-PPO, to improve sample efficiency, where the policy for a new round is trained offline using a simulator fit on functional measurements from prior rounds.To accommodate the growing number of observations across rounds, the simulator model is automatically selected at each round from a pool of diverse models of varying capacity. On the tasks of designing DNA transcription factor binding sites, designing antimicrobial proteins, and optimizing the energy of Ising models based on protein structure, we find that DyNA-PPO performs significantly better than existing methods in settings in which modeling is feasible, while still not performing worse in situations in which a reliable model cannot be learned.","We augment model-free policy learning with sequence-level surrogate reward functions and count-based visitation bonus and demonstrate effectiveness in the large batch, low-round regime seen in designing DNA and protein sequences." 
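Entry 1025 adds a smoothness term over a sample graph so that similar patches map consistently during translation. A heavily simplified sketch of such a graph-smoothness penalty appears below; the similarity weights, feature choice, and how the term enters the CycleGAN objective are all assumptions rather than the HarmonicGAN formulation.

```python
import torch

def graph_smoothness_penalty(features_src, features_tgt):
    """Penalize translations where samples that are similar in the source domain
    end up dissimilar after translation (a sketch of a harmonic smoothness term).

    features_src: (n, d) features of the input samples (e.g., histogram or CNN features)
    features_tgt: (n, d) features of the corresponding translated samples
    """
    w = torch.exp(-torch.cdist(features_src, features_src) ** 2)   # source-side affinities
    d = torch.cdist(features_tgt, features_tgt) ** 2               # distances after translation
    return (w * d).sum() / w.sum()
```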
1028,Sequential Coordination of Deep Models for Learning Visual Arithmetic,"Achieving machine intelligence requires a smooth integration of perception and reasoning, yet models developed to date tend to specialize in one or the other; sophisticated manipulation of symbols acquired from rich perceptual spaces has so far proved elusive.Consider a visual arithmetic task, where the goal is to carry out simple arithmetical algorithms on digits presented under natural conditions.We propose a two-tiered architecture for tackling this kind of problem.The lower tier consists of a heterogeneous collection of information processing modules, which can include pre-trained deep neural networks for locating and extracting characters from the image, as well as modules performing symbolic transformations on the representations extracted by perception.The higher tier consists of a controller, trained using reinforcement learning, which coordinates the modules in order to solve the high-level task.For instance, the controller may learn in what contexts to execute the perceptual networks and what symbolic transformations to apply to their outputs.The resulting model is able to solve a variety of tasks in the visual arithmetic domain, and has several advantages over standard, architecturally homogeneous feedforward networks including improved sample efficiency.",We use reinforcement learning to train an agent to solve a set of visual arithmetic tasks using provided pre-trained perceptual modules and transformations of internal representations created by those modules. 1029,Learning with Social Influence through Interior Policy Differentiation,"Animals develop novel skills not only through the interaction with the environment but also from the influence of others.In this work we incorporate social influence into the scheme of reinforcement learning, enabling the agents to learn both from the environment and from their peers.Specifically, we first define a metric to measure the distance between policies, and then quantitatively derive the definition of uniqueness.Unlike previous precarious joint optimization approaches, the social uniqueness motivation in our work is imposed as a constraint to encourage the agent to learn a policy different from the existing agents while still solving the primal task.The resulting algorithm, namely Interior Policy Differentiation, brings about performance improvement as well as a collection of policies that solve a given task with distinct behaviors.",A new RL algorithm called Interior Policy Differentiation is proposed to learn a collection of diverse policies for a given primal task. 
1030,Learning Numerical Attributes in Knowledge Bases,"Knowledge bases are often represented as a collection of facts in the form (HEAD, PREDICATE, TAIL), where HEAD and TAIL are entities while PREDICATE is a binary relationship that links the two.It is a well-known fact that knowledge bases are far from complete, and hence the plethora of research on KB completion methods, specifically on link prediction.However, though frequently ignored, these repositories also contain numerical facts.Numerical facts link entities to numerical values via numerical predicates; e.g.,.Likewise, numerical facts also suffer from the incompleteness problem.To address this issue, we introduce the numerical attribute prediction problem.This problem involves a new type of query where the relationship is a numerical predicate.Consequently, and contrary to link prediction, the answer to this query is a numerical value.We argue that the numerical values associated with entities explain, to some extent, the relational structure of the knowledge base.Therefore, we leverage knowledge base embedding methods to learn representations that are useful predictors for the numerical attributes.An extensive set of experiments on benchmark versions of FREEBASE and YAGO show that our approaches largely outperform sensible baselines.We make the datasets available under a permissive BSD-3 license.",Prediction of numerical attribute values associated with entities in knowledge bases. 1031,Multilingual Alignment of Contextual Word Representations,"We propose procedures for evaluating and strengthening contextual embedding alignment and show that they are useful in analyzing and improving multilingual BERT.In particular, after our proposed alignment procedure, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model, remarkably matching pseudo-fully-supervised translate-train models for Bulgarian and Greek.Further, to measure the degree of alignment, we introduce a contextual version of word retrieval and show that it correlates well with downstream zero-shot transfer.Using this word retrieval task, we also analyze BERT and find that it exhibits systematic deficiencies, e.g. worse alignment for open-class parts-of-speech and word pairs written in different scripts, that are corrected by the alignment procedure.These results support contextual alignment as a useful concept for understanding large multilingual pre-trained models.",We propose procedures for evaluating and strengthening contextual embedding alignment and show that they both improve multilingual BERT's zero-shot XNLI transfer and provide useful insights into the model. 
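Entry 1030 predicts numerical attribute values from learned knowledge-base embeddings. The snippet below is a minimal, assumption-based sketch of that idea using a plain ridge regressor on pre-trained entity embeddings; the paper's actual models are more involved.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_numeric_predicate(entity_embeddings, train_ids, train_values):
    """Map pre-trained KB entity embeddings (e.g., from a TransE-style model) to a
    numeric attribute such as birth year, so it can be predicted where the fact is missing."""
    X = np.stack([entity_embeddings[e] for e in train_ids])
    return Ridge(alpha=1.0).fit(X, np.asarray(train_values, dtype=float))

# Usage sketch (hypothetical names):
#   predictor = fit_numeric_predicate(emb, known_ids, known_years)
#   predictor.predict(emb[missing_id][None, :])
```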
1032,Energy Dissipation with Plug-and-Play Priors,"Neural networks have reached outstanding performance for solving various ill-posed inverse problems in imaging.However, drawbacks of end-to-end learning approaches in comparison to classical variational methods are the requirement of expensive retraining for even slightly different problem statements and the lack of provable error bounds during inference.Recent works tackled the first problem by using networks trained for Gaussian image denoising as generic plug-and-play regularizers in energy minimization algorithms.Even though this obtains state-of-the-art results on many tasks, heavy restrictions on the network architecture have to be made if provable convergence of the underlying fixed point iteration is a requirement.More recent work has proposed to train networks to output descent directions with respect to a given energy function with a provable guarantee of convergence to a minimizer of that energy.However, each problem and energy requires the training of a separate network.In this paper we consider the combination of both approaches by projecting the outputs of a plug-and-play denoising network onto the cone of descent directions to a given energy.This way, a single pre-trained network can be used for a wide variety of reconstruction tasks.Our results show improvements compared to classical energy minimization methods while still having provable convergence guarantees.",We use neural networks trained for image denoising as plug-and-play priors in energy minimization algorithms for image reconstruction problems with provable convergence. 1033,Probability Calibration for Knowledge Graph Embedding Models,"Knowledge graph embedding research has overlooked the problem of probability calibration.We show popular embedding models are indeed uncalibrated.That means probability estimates associated with predicted triples are unreliable.We present a novel method to calibrate a model when ground truth negatives are not available, which is the usual case in knowledge graphs.We propose to use Platt scaling and isotonic regression alongside our method.Experiments on three datasets with ground truth negatives show our contribution leads to well calibrated models when compared to the gold standard of using negatives.With all calibration methods, we get significantly better results than the uncalibrated models.We show isotonic regression offers the best performance overall, though not without trade-offs.We also show that calibrated models reach state-of-the-art accuracy without the need to define relation-specific decision thresholds.",We propose a novel method to calibrate knowledge graph embedding models without the need of negative examples. 
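Entry 1033 calibrates link-prediction scores with Platt scaling and isotonic regression, both of which are standard scikit-learn components. The sketch below assumes held-out triple scores with binary labels (e.g., from synthetic negatives, which the entry discusses needing when ground-truth negatives are missing) and is purely illustrative.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

def calibrate(scores, labels, method="platt"):
    """Map raw KG-embedding scores to calibrated probabilities.
    scores: (n,) raw model scores for triples; labels: (n,) 1 = true, 0 = corrupted."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    if method == "platt":                  # logistic (sigmoid) fit on the raw score
        lr = LogisticRegression().fit(scores.reshape(-1, 1), labels)
        return lambda s: lr.predict_proba(np.asarray(s).reshape(-1, 1))[:, 1]
    iso = IsotonicRegression(out_of_bounds="clip").fit(scores, labels)
    return lambda s: iso.predict(np.asarray(s))
```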
1034,CurricularFace: Adaptive Curriculum Learning Loss for Deep Face Recognition,"As an emerging topic in face recognition, designing margin-based loss functions can increase the feature margin between different classes for enhanced discriminability.More recently, mining-based strategies have been adopted to emphasize the misclassified samples and achieve promising results.However, during the entire training process, the prior methods either do not explicitly emphasize samples based on their importance, which renders the hard samples not fully exploited, or explicitly emphasize the effects of semi-hard/hard samples even at the early training stage, which may lead to convergence issues.In this work, we propose a novel Adaptive Curriculum Learning loss that embeds the idea of curriculum learning into the loss function to achieve a novel training strategy for deep face recognition, which mainly addresses easy samples in the early training stage and hard ones in the later stage.Specifically, our CurricularFace adaptively adjusts the relative importance of easy and hard samples during different training stages.In each stage, different samples are assigned with different importance according to their corresponding difficulty.Extensive experimental results on popular benchmarks demonstrate the superiority of our CurricularFace over the state-of-the-art competitors.Code will be available upon publication.",A novel Adaptive Curriculum Learning loss for deep face recognition 1035,Ensemble Adversarial Training: Attacks and Defenses,"Adversarial examples are perturbed inputs designed to fool machine learning models.Adversarial training injects such examples into training data to increase robustness.To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss.We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss.The model thus learns to generate weak perturbations, rather than defend against strong ones.As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step.We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models.On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks.In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks.","Adversarial training with single-step methods overfits, and remains vulnerable to simple black-box and white-box attacks. We show that including adversarial examples from multiple sources helps defend against black-box attacks." 
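Entry 1035's single-step attack escapes the non-smooth vicinity of the data by taking a small random step before the gradient step. The code below sketches that kind of attack (often written R+FGSM) for a cross-entropy classifier; the step sizes and exact randomization are generic assumptions.

```python
import torch
import torch.nn.functional as F

def rand_fgsm(model, x, y, eps=8/255, alpha=2/255):
    """Single-step attack with a small random step first, then an FGSM step
    within the remaining budget (the 'escape the non-smooth vicinity' idea)."""
    x_rand = (x.detach() + alpha * torch.sign(torch.randn_like(x))).clamp(0, 1)
    x_rand.requires_grad_(True)
    loss = F.cross_entropy(model(x_rand), y)
    grad = torch.autograd.grad(loss, x_rand)[0]
    x_adv = x_rand + (eps - alpha) * grad.sign()       # remaining perturbation budget
    return x_adv.clamp(0, 1).detach()
```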
1036,Invariant Feature Learning by Attribute Perception Matching,"Adversarial feature learning (AFL) is a powerful framework to learn representations invariant to a nuisance attribute, which uses an adversarial game between a feature extractor and a categorical attribute classifier.It is theoretically sound in that it maximizes the conditional entropy between attribute and representation.However, as shown in this paper, the AFL often causes unstable behavior that slows down the convergence.We propose an alternative approach, based on a reformulation of conditional entropy maximization. Although the naive approach for realizing the pair-wise distribution matching requires a significantly larger number of parameters, the proposed method requires the same number of parameters as AFL but has a better convergence property.Experiments on both toy and real-world datasets show that our proposed method converges to a better invariant representation significantly faster than AFL. ","This paper proposes a new approach to incorporating desired invariance into representation learning, based on the observation that the current state-of-the-art AFL has practical issues." 1037,Empirical Analysis of the Hessian of Over-Parametrized Neural Networks,"We study the properties of common loss surfaces through their Hessian matrix.In particular, in the context of deep learning, we empirically show that the spectrum of the Hessian is composed of two parts: the bulk centered near zero, and outliers away from the bulk.We present numerical evidence and mathematical justifications for the following conjectures laid out by Sagun et al.: Fixing data, increasing the number of parameters merely scales the bulk of the spectrum; fixing the dimension and changing the data only affects the outliers.We believe that our observations have striking implications for non-convex optimization in high dimensions.First, the *flatness* of such landscapes implies that classical notions of basins of attraction may be quite misleading.And that the discussion of wide/narrow basins may be in need of a new perspective around over-parametrization and redundancy that are able to create *large* connected components at the bottom of the landscape.Second, the dependence of a small number of large eigenvalues on the data distribution can be linked to the spectrum of the covariance matrix of gradients of model outputs.With this in mind, we may reevaluate the connections within the data-architecture-algorithm framework of a model, hoping that it would shed light on the geometry of high-dimensional and non-convex spaces in modern applications.In particular, we present a case that links the two observations: small and large batch gradient descent appear to converge to different basins of attraction but we show that they are in fact connected through their flat region and so belong to the same basin.","The loss surface is *very* degenerate, and there are no barriers between large batch and small batch solutions." 
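The Hessian spectrum analysis in entry 1037 relies on the fact that Hessian-vector products are cheap to compute with automatic differentiation, so extreme eigenvalues (the "outliers") can be estimated without forming the Hessian. Below is a generic power-iteration sketch in PyTorch; it illustrates the standard technique and is not the authors' code.

```python
import torch

def top_hessian_eigenvalue(loss, params, iters=50):
    """Estimate the largest Hessian eigenvalue of `loss` w.r.t. `params`
    using Hessian-vector products and power iteration."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    for _ in range(iters):
        norm = torch.sqrt(sum((vi ** 2).sum() for vi in v))
        v = [vi / norm for vi in v]
        # Hessian-vector product: differentiate (grad . v) w.r.t. the parameters.
        dot = sum((g * vi).sum() for g, vi in zip(grads, v))
        hv = torch.autograd.grad(dot, params, retain_graph=True)
        eig = sum((h * vi).sum() for h, vi in zip(hv, v))   # Rayleigh quotient v^T H v
        v = [h.detach() for h in hv]
    return eig.item()
```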
1038,Computation Reallocation for Object Detection,"The allocation of computation resources in the backbone is a crucial issue in object detection.However, the allocation pattern from classification is usually adopted directly for object detectors, which proves to be sub-optimal.In order to reallocate the engaged computation resources in a more efficient way, we present CR-NAS that can learn computation reallocation strategies across different feature resolutions and spatial positions directly on the target detection dataset.A two-level reallocation space is proposed for both stage and spatial reallocation.A novel hierarchical search procedure is adopted to cope with the complex search space.We apply CR-NAS to multiple backbones and achieve consistent improvements.Our CR-ResNet50 and CR-MobileNetV2 outperform the baseline by 1.9% and 1.7% COCO AP respectively without any additional computation budget.The models discovered by CR-NAS can be equipped with other powerful detection necks/heads and be easily transferred to other datasets, e.g. PASCAL VOC, and other vision tasks, e.g. instance segmentation.Our CR-NAS can be used as a plugin to improve the performance of various networks, which is highly desirable.",We propose CR-NAS to reallocate engaged computation resources across different resolutions and spatial positions. 1039,A Training Scheme for the Uncertain Neuromorphic Computing Chips,"Uncertainty is a very important feature of intelligence and helps the brain become a flexible, creative and powerful intelligent system.The crossbar-based neuromorphic computing chips, in which the computing is mainly performed by analog circuits, have this uncertainty and can be used to imitate the brain.However, most of the current deep neural networks have not taken the uncertainty of the neuromorphic computing chip into consideration.Therefore, their performances on the neuromorphic computing chips are not as good as on the original platforms.In this work, we propose an uncertainty adaptation training scheme that exposes the uncertainty to the neural network during the training process.The experimental results show that the neural networks can achieve comparable inference performances on the uncertain neuromorphic computing chip compared to the results on the original platforms, and much better than the performances without this training scheme.",A training method that can make deep learning algorithms work better on neuromorphic computing chips with uncertainty 1040,Unknown-Aware Deep Neural Network,"An important property of image classification systems in the real world is that they both accurately classify objects from target classes and safely reject unknown objects that belong to classes not present in the training data.Unfortunately, although the strong generalization ability of existing CNNs ensures their accuracy when classifying known objects, it also causes them to often assign an unknown to a target class with high confidence.As a result, simply using low-confidence detections as a way to detect unknowns does not work well.In this work, we propose an Unknown-aware Deep Neural Network to solve this challenging problem.The key idea of UDN is to enhance existing CNNs to support a product operation that models the product relationship among the features produced by convolutional layers.This way, missing a single key feature of a target class will greatly reduce the probability of assigning an object to this class.UDN uses a learned ensemble of these product operations, which allows it to balance the contradictory 
requirements of accurately classifying known objects and correctly rejecting unknowns.To further improve the performance of UDN at detecting unknowns, we propose an information-theoretic regularization strategy that incorporates the objective of rejecting unknowns into the learning process of UDN.We experiment on benchmark image datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN, adding unknowns by injecting one dataset into another.Our results demonstrate that UDN significantly outperforms state-of-the-art methods at rejecting unknowns, with a 25 percentage point improvement in accuracy, while still preserving the classification accuracy.",A CNN architecture that can effectively reject unknowns in test objects 1041,Deep Amortized Variational Inference for Multivariate Time Series Imputation with Latent Gaussian Process Models,"Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years.This raises the question of whether deep learning methodologies can outperform classical data imputation methods in this domain.However, naive applications of deep learning fall short in giving reliable confidence estimates and lack interpretability.We propose a new deep sequential latent variable model for dimensionality reduction and data imputation.Our modeling assumption is simple and interpretable: the high dimensional time series has a lower-dimensional representation which evolves smoothly in time according to a Gaussian process.The non-linear dimensionality reduction in the presence of missing data is achieved using a VAE approach with a novel structured variational approximation.We demonstrate that our approach outperforms several classical and deep learning-based data imputation methods on high-dimensional data from the domains of computer vision and healthcare, while additionally improving the smoothness of the imputations and providing interpretable uncertainty estimates.",We perform amortized variational inference on a latent Gaussian process model to achieve superior imputation performance on multivariate time series with missing data. 
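Entry 1039 trains networks to tolerate analog-hardware uncertainty by exposing that uncertainty during training. A common way to realize this, sketched below purely as an assumption about what exposing the chip's uncertainty could look like, is to perturb the weights with multiplicative noise on each forward pass while keeping clean weights for the update.

```python
import torch

def noisy_forward_step(model, x, y, loss_fn, optimizer, noise_std=0.05):
    """One training step in which every weight is perturbed by multiplicative
    Gaussian noise (a stand-in for analog crossbar variability), so the learned
    solution stays accurate under hardware uncertainty."""
    originals = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        for p in model.parameters():
            p.mul_(1.0 + noise_std * torch.randn_like(p))   # inject uncertainty
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    with torch.no_grad():                                    # restore clean weights, then update
        for p, orig in zip(model.parameters(), originals):
            p.copy_(orig)
    optimizer.step()
    return loss.item()
```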
1042,Sensitivity and Generalization in Neural Networks: an Empirical Study,"In practice it is often found that large over-parameterized neural networks generalize better than their smaller counterparts, an observation that appears to conflict with classical notions of function complexity, which typically favor smaller models.In this work, we investigate this tension between complexity and generalization through an extensive empirical exploration of two natural metrics of complexity related to sensitivity to input perturbations.Our experiments survey thousands of models with different architectures, optimizers, and other hyper-parameters, as well as four different image classification datasets.We find that trained neural networks are more robust to input perturbations in the vicinity of the training data manifold, as measured by the input-output Jacobian of the network, and that this correlates well with generalization.We further establish that factors associated with poor generalization -- such as full-batch training or using random labels -- correspond to higher sensitivity, while factors associated with good generalization -- such as data augmentation and ReLU non-linearities -- give rise to more robust functions.Finally, we demonstrate how the input-output Jacobian norm can be predictive of generalization at the level of individual test points.","We perform massive experimental studies characterizing the relationships between Jacobian norms, linear regions, and generalization." 1043,Parameter Space Noise for Exploration,"Deep reinforcement learning methods generally engage in exploratory behavior through noise injection in the action space.An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors.Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples.Combining parameter noise with traditional RL methods allows us to get the best of both worlds.We demonstrate that both off- and on-policy methods benefit from this approach through experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete action environments as well as continuous control tasks.","Parameter space noise allows reinforcement learning algorithms to explore by perturbing parameters instead of actions, often leading to significantly improved exploration performance."
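The parameter-space exploration idea in the entry above can be sketched with a minimal example. The linear policy, the adaptive scaling rule, and all constants below are illustrative assumptions, not the paper's exact procedure; the general mechanism is to perturb the weights once per episode and to grow or shrink the noise scale so that its effect in action space stays near a target.

```python
# Minimal sketch of parameter-space exploration noise with adaptive scaling.
import numpy as np

rng = np.random.default_rng(0)

class LinearPolicy:
    def __init__(self, obs_dim, act_dim):
        self.W = rng.normal(scale=0.1, size=(act_dim, obs_dim))

    def act(self, obs, W=None):
        W = self.W if W is None else W
        return np.tanh(W @ obs)               # deterministic action

def perturbed_weights(policy, sigma):
    """Draw a perturbed copy of the policy parameters (held fixed for one episode)."""
    return policy.W + rng.normal(scale=sigma, size=policy.W.shape)

def adapt_sigma(policy, W_noisy, obs_batch, sigma, target=0.1, alpha=1.01):
    """Grow/shrink sigma so the action-space effect of the noise stays near a target."""
    a_clean = np.stack([policy.act(o) for o in obs_batch])
    a_noisy = np.stack([policy.act(o, W_noisy) for o in obs_batch])
    dist = np.sqrt(np.mean((a_clean - a_noisy) ** 2))
    return sigma * alpha if dist < target else sigma / alpha

# Toy usage: one "episode" acted out with fixed perturbed parameters.
policy, sigma = LinearPolicy(obs_dim=4, act_dim=2), 0.05
W_noisy = perturbed_weights(policy, sigma)
obs_batch = rng.normal(size=(32, 4))
sigma = adapt_sigma(policy, W_noisy, obs_batch, sigma)
```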
1044,Extreme Language Model Compression with Optimal Subwords and Shared Projections,"Pre-trained deep neural network language models such as ELMo, GPT, BERT and XLNet have recently achieved state-of-the-art performance on a variety of language understanding tasks.However, their size makes them impractical for a number of scenarios, especially on mobile and edge devices.In particular, the input word embedding matrix accounts for a significant proportion of the model's memory footprint, due to the large input vocabulary and embedding dimensions.Knowledge distillation techniques have had success at compressing large neural network models, but they are ineffective at yielding student models with vocabularies different from the original teacher models.We introduce a novel knowledge distillation technique for training a student model with a significantly smaller vocabulary as well as lower embedding and hidden state dimensions.Specifically, we employ a dual-training mechanism that trains the teacher and student models simultaneously to obtain optimal word embeddings for the student vocabulary.We combine this approach with learning shared projection matrices that transfer layer-wise knowledge from the teacher model to the student model.Our method is able to compress the BERT-BASE model by more than 60x, with only a minor drop in downstream task metrics, resulting in a language model with a footprint of under 7MB.Experimental results also demonstrate higher compression efficiency and accuracy when compared with other state-of-the-art compression techniques.",We present novel distillation techniques that enable training student models with different vocabularies and compress BERT by 60x with minor performance drop. 1045,Learning Invariants through Soft Unification,"Human reasoning involves recognising common underlying principles across many examples by utilising variables.The by-products of such reasoning are invariants that capture patterns across examples such as ""if someone went somewhere then they are there"" without mentioning specific people or places.Humans learn what variables are and how to use them at a young age, and the question this paper addresses is whether machines can also learn and use variables solely from examples without requiring human pre-engineering.We propose Unification Networks that incorporate soft unification into neural networks to learn variables and by doing so lift examples into invariants that can then be used to solve a given task.We evaluate our approach on four datasets to demonstrate that learning invariants captures patterns in the data and can improve performance over baselines.",End-to-end learning of invariant representations with variables across examples such as if someone went somewhere then they are there.
1046,Random mesh projectors for inverse problems,"We propose a new learning-based approach to solve ill-posed inverse problems in imaging.We address the case where ground truth training samples are rare and the problem is severely ill-posed---both because of the underlying physics and because we can only get few measurements.This setting is common in geophysical imaging and remote sensing.We show that in this case the common approach to directly learn the mapping from the measured data to the reconstruction becomes unstable.Instead, we propose to first learn an ensemble of simpler mappings from the data to projections of the unknown image into random piecewise-constant subspaces.We then combine the projections to form a final reconstruction by solving a deconvolution-like problem.We show experimentally that the proposed method is more robust to measurement noise and corruptions not seen during training than a directly learned inverse.",We solve ill-posed inverse problems with scarce ground truth examples by estimating an ensemble of random projections of the model instead of the model itself. 1047,SMASH: One-Shot Model Architecture Search through HyperNetworks,"Designing architectures for deep neural networks requires expert knowledge and substantial computation time.We propose a technique to accelerate architecture selection by learning an auxiliary HyperNet that generates the weights of a main model conditioned on that model's architecture.By comparing the relative validation performance of networks with HyperNet-generated weights, we can effectively search over a wide range of architectures at the cost of a single training run.To facilitate this search, we develop a flexible mechanism based on memory read-writes that allows us to define a wide range of network connectivity patterns, with ResNet, DenseNet, and FractalNet blocks as special cases.We validate our method on CIFAR-10 and CIFAR-100, STL-10, ModelNet10, and Imagenet32x32, achieving competitive performance with similarly-sized hand-designed networks.",A technique for accelerating neural architecture selection by approximating the weights of each candidate architecture instead of training them individually. 1048,Collecting Entailment Data for Pretraining: New Protocols and Negative Results,"Textual entailment data has proven useful as pretraining data for tasks requiring language understanding, even when building on an already-pretrained model like RoBERTa.The standard protocol for collecting NLI data was not designed for the creation of pretraining data, and it is likely far from ideal for this purpose.With this application in mind we propose four alternative protocols, each aimed at improving either the ease with which annotators can produce sound training examples or the quality and diversity of those examples.Using these alternatives and a simple MNLI-based baseline, we collect and compare five new 9k-example training sets.Our primary results are largely negative, with none of these new methods showing major improvements in transfer learning.However, we make several observations that should inform future work on NLI data, such as that the use of automatically provided seed sentences for inspiration improves the quality of the resulting data on most measures, and all of the interventions we investigated dramatically reduce previously observed issues with annotation artifacts.","We propose four new ways of collecting NLI data. Some help slightly as pretraining data, all help reduce annotation artifacts."
1049,Multi-Agent Interactions Modeling with Correlated Policies,"In multi-agent systems, complex interacting behaviors arise due to the high correlations among agents.However, previous work on modeling multi-agent interactions from demonstrations is primarily constrained by assuming the independence among policies and their reward structures.In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents’ policies, which can recover agents' policies that can regenerate similar interactions.Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies, which allows for decentralized training and execution.Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators and outperforms state-of-the-art multi-agent imitation learning methods.Our code is available at \\url.",Modeling complex multi-agent interactions under multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents’ policies. 1050,RedSync : Reducing Synchronization Traffic for Distributed Deep Learning,"Data parallelism has become a dominant method to scale Deep Neural Network training across multiple nodes. Since the synchronization of the local models or gradients can be a bottleneck for large-scale distributed training, compressing communication traffic has gained widespread attention recently. Among several recently proposed compression algorithms, Residual Gradient Compression is one of the most successful approaches---it can significantly compress the size of the message transmitted by each node and still preserve accuracy.However, the literature on compressing deep networks focuses almost exclusively on achieving a good compression rate, while the efficiency of RGC in real implementations has been less investigated.In this paper, we develop an RGC method that achieves significant training time improvement in real-world multi-GPU systems.Our proposed RGC system design, called RedSync, introduces a set of optimizations to reduce communication bandwidth while introducing limited overhead.We examine the performance of RedSync on two different multiple GPU platforms, including a supercomputer and a multi-card server.Our test cases include image classification on Cifar10 and ImageNet, and language modeling tasks on Penn Treebank and Wiki2 datasets.For DNNs featured with a high communication to computation ratio, which have long been considered to have poor scalability, RedSync shows significant performance improvement.",We proposed an implementation to accelerate DNN data parallel training by reducing communication bandwidth requirement.
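The residual gradient compression mechanism that the RedSync entry above builds on can be sketched in a few lines: each worker sends only the largest-magnitude gradient entries and keeps the untransmitted remainder in a local residual buffer. The top-k selection shown here is the generic RGC idea; RedSync's specific threshold-selection and communication optimizations are not reproduced.

```python
# Minimal sketch of residual gradient compression (top-k + residual buffer).
import numpy as np

def rgc_step(grad, residual, k):
    """Return (sparse indices, values, new residual) for one worker's gradient."""
    acc = grad + residual                        # add back what was not sent before
    idx = np.argpartition(np.abs(acc), -k)[-k:]  # indices of the k largest magnitudes
    values = acc[idx]
    new_residual = acc.copy()
    new_residual[idx] = 0.0                      # keep the untransmitted remainder locally
    return idx, values, new_residual

# Toy usage: compress a gradient vector to 1% of its entries.
rng = np.random.default_rng(0)
grad = rng.normal(size=10_000)
residual = np.zeros_like(grad)
idx, vals, residual = rgc_step(grad, residual, k=100)
dense = np.zeros_like(grad)
dense[idx] = vals                                # what the receiver would reconstruct
```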
1051,Learning to solve the credit assignment problem,"Backpropagation is driving today's artificial neural networks.However, despite extensive research, it remains unclear if the brain implements this algorithm.Among neuroscientists, reinforcement learning algorithms are often seen as a realistic alternative.However, the convergence rate of such learning scales poorly with the number of involved neurons.Here we propose a hybrid learning approach, in which each neuron uses an RL-type strategy to learn how to approximate the gradients that backpropagation would provide.We show that our approach learns to approximate the gradient, and can match the performance of gradient-based learning on fully connected and convolutional networks.Learning feedback weights provides a biologically plausible mechanism of achieving good performance, without the need for precise, pre-specified learning rules.",Perturbations can be used to learn feedback weights on large fully connected and convolutional networks. 1052,Capacity of Deep Neural Networks under Parameter Quantization,"Most deep neural networks require complex models to achieve high performance.Parameter quantization is widely used for reducing the implementation complexities.Previous studies on quantization were mostly based on extensive simulation using training data.We choose a different approach and attempt to measure the per-parameter capacity of DNN models and interpret the results to obtain insights on optimum quantization of parameters.This research uses artificially generated data and generic forms of fully connected DNNs, convolutional neural networks, and recurrent neural networks.We conduct memorization and classification tests to study the effects of the number and precision of the parameters on the performance.The model and the per-parameter capacities are assessed by measuring the mutual information between the input and the classified output.We also extend the memorization capacity measurement results to image classification and language modeling tasks.To get insight for parameter quantization when performing real tasks, the training and test performances are compared.",We suggest the sufficient number of bits for representing weights of DNNs and the optimum bits are conservative when solving real problems.
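A minimal illustration of the kind of parameter quantization studied in the entry above: uniform, symmetric post-training quantization of a weight array to a given bit-width. The layer sizes and bit-widths below are placeholders, and the capacity measurements of the paper are not reproduced here.

```python
# Minimal sketch of uniform symmetric weight quantization to b bits.
import numpy as np

def quantize_weights(w, bits):
    """Quantize a weight array to 2**(bits-1) - 1 symmetric levels, then dequantize."""
    levels = 2 ** (bits - 1) - 1              # e.g. 127 levels per sign for 8 bits
    scale = np.max(np.abs(w)) / levels
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale                           # dequantized weights used at inference

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))                # stand-in for one layer's weights
for b in (8, 4, 2):
    err = np.mean((w - quantize_weights(w, b)) ** 2)
    print(f"{b}-bit quantization, mean squared error: {err:.2e}")
```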
1053,Image Segmentation by Iterative Inference from Conditional Score Estimation,"Inspired by the combination of feedforward and iterative computations in the visual cortex, and taking advantage of the ability of denoising autoencoders to estimate the score of a joint distribution, we propose a novel approach to iterative inference for capturing and exploiting the complex joint distribution of output variables conditioned on some input variables.This approach is applied to image pixel-wise segmentation, with the estimated conditional score used to perform gradient ascent towards a mode of the estimated conditional distribution.This extends previous work on score estimation by denoising autoencoders to the case of a conditional distribution, with a novel use of a corrupted feedforward predictor replacing Gaussian corruption.An advantage of this approach over more classical ways to perform iterative inference for structured outputs, like conditional random fields, is that it is no longer necessary to define an explicit energy function linking the output variables.To keep computations tractable, such energy function parametrizations are typically fairly constrained, involving only a few neighbors of each of the output variables in each clique.We experimentally find that the proposed iterative inference from conditional score estimation by conditional denoising autoencoders performs better than comparable models based on CRFs or those not using any explicit modeling of the conditional joint distribution of outputs.",Refining segmentation proposals by performing iterative inference with conditional denoising autoencoders. 1054,SALSA-TEXT : SELF ATTENTIVE LATENT SPACE BASED ADVERSARIAL TEXT GENERATION,"Inspired by the success of the self attention mechanism and Transformer architecture in sequence transduction and image generation applications, we propose novel self-attention-based architectures to improve the performance of adversarial latent code-based schemes in text generation.Adversarial latent code-based text generation has recently gained a lot of attention due to its promising results.In this paper, we take a step to fortify the architectures used in these setups, specifically AAE and ARAE.We benchmark two latent code-based methods designed based on adversarial setups.In our experiments, the Google sentence compression dataset is utilized to compare our method with these methods using various objective and subjective measures.The experiments demonstrate the proposed attention-based models outperform the state-of-the-art in adversarial code-based text generation.",We propose a self-attention based GAN architecture for unconditional text generation and improve on previous adversarial code-based results.
1055,Fatty and Skinny: A Joint Training Method of Watermark Encoder and Decoder,"Watermarks have been used for various purposes.Recently, researchers started to look into using them for deep neural networks.Some works try to hide attack triggers on their adversarial samples when attacking neural networks and others want to watermark neural networks to prove their ownership against plagiarism.Implanting a backdoor watermark module into a neural network is getting more attention from the community.In this paper, we present a general purpose encoder-decoder joint training method, inspired by generative adversarial networks.Unlike GANs, however, our encoder and decoder neural networks cooperate to find the best watermarking scheme given data samples.In other words, we do not design any new watermarking strategy but our proposed two neural networks will find the best suited method on their own.After being trained, the decoder can be implanted into other neural networks to attack or protect them.To this end, the decoder should be very tiny in order not to incur any overhead when attached to other neural networks but at the same time provide very high decoding success rates, which is very challenging.Our joint training method successfully solves the problem and in our experiments maintains almost 100% encoding-decoding success rates for multiple datasets with very few modifications to data samples to hide watermarks.We also present several real-world use cases in Appendix.",We propose novel watermark encoder-decoder neural networks. They perform a cooperative game to define their own watermarking scheme. People do not need to design watermarking methods any more. 1056,Estimating Gradients for Discrete Random Variables by Sampling without Replacement,"We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement, which reduces variance as it avoids duplicate samples.We show that our estimator can be derived as the Rao-Blackwellization of three different estimators.Combining our estimator with REINFORCE, we obtain a policy gradient estimator and we reduce its variance using a built-in control variate which is obtained without additional model evaluations.The resulting estimator is closely related to other gradient estimators.Experiments with a toy problem, a categorical Variational Auto-Encoder and a structured prediction problem show that our estimator is the only estimator that is consistently among the best estimators in both high and low entropy settings.","We derive a low-variance, unbiased gradient estimator for expectations over discrete random variables based on sampling without replacement" 1057,Learning Implicitly Recurrent CNNs Through Parameter Sharing,"We introduce a parameter sharing scheme, in which different layers of a convolutional neural network are defined by a learned linear combination of parameter tensors from a global bank of templates. Restricting the number of templates yields a flexible hybridization of traditional CNNs and recurrent networks.
Compared to traditional CNNs, we demonstrate substantial parameter savings on standard image classification tasks, while maintaining accuracy.Our simple parameter sharing scheme, though defined via soft weights, in practice often yields trained networks with near strict recurrent structure; with negligible side effects, they convert into networks with actual loops.Training these networks thus implicitly involves discovery of suitable recurrent architectures.Though considering only the aspect of recurrent links, our trained networks achieve accuracy competitive with those built using state-of-the-art neural architecture search procedures.Our hybridization of recurrent and convolutional networks may also represent a beneficial architectural bias. Specifically, on synthetic tasks which are algorithmic in nature, our hybrid networks both train faster and extrapolate better to test examples outside the span of the training set.",We propose a method that enables CNN folding to create recurrent connections 1058,Can gradient clipping mitigate label noise?,"Gradient clipping is a widely-used technique in the training of deep networks, and is generally motivated from an optimisation lens: informally, it controls the dynamics of iterates, thus enhancing the rate of convergence to a local minimum.This intuition has been made precise in a line of recent works, which show that suitable clipping can yield significantly faster convergence than vanilla gradient descent.In this paper, we propose a new lens for studying gradient clipping, namely, robustness: informally, one expects clipping to provide robustness to noise, since one does not overly trust any single sample.Surprisingly, we prove that for the common problem of label noise in classification, standard gradient clipping does not in general provide robustness.On the other hand, we show that a simple variant of gradient clipping is provably robust, and corresponds to suitably modifying the underlying loss function.This yields a simple, noise-robust alternative to the standard cross-entropy loss which performs well empirically.","Gradient clipping doesn't endow robustness to label noise, but a simple loss-based variant does." 
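The contrast drawn in the gradient-clipping entry above can be sketched concretely. Standard clipping rescales the whole gradient, while the loss-based alternative modifies the loss itself so that no single (possibly mislabelled) example contributes an unbounded gradient. The linearized cross-entropy below is one plausible instance of such a modification, assumed for illustration; the exact loss used in the paper may differ.

```python
# Hedged sketch: global gradient clipping vs. a cross-entropy linearized at
# low predicted probability, so its per-example gradient magnitude is bounded.
import numpy as np

def clip_gradient(grad, max_norm):
    """Standard global-norm gradient clipping (the non-robust baseline)."""
    norm = np.linalg.norm(grad)
    return grad if norm <= max_norm else grad * (max_norm / norm)

def linearized_cross_entropy(p, tau=10.0):
    """Cross-entropy on the labelled-class probability p, linearized for p < 1/tau."""
    p = np.asarray(p, dtype=float)
    linear = -tau * p + np.log(tau) + 1.0      # slope -tau, continuous at p = 1/tau
    return np.where(p >= 1.0 / tau, -np.log(np.maximum(p, 1e-12)), linear)

# A confidently mislabelled example (p close to 0) has loss slope at most tau,
# whereas plain cross-entropy's slope 1/p grows without bound.
print(linearized_cross_entropy([0.9, 0.1, 1e-4]))
```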
1059,Exploring the Correlation between Likelihood of Flow-based Generative Models and Image Semantics," Among deep generative models, flow-based models, simply referred to as flows in this paper, differ from other models in that they provide tractable likelihood.Besides being an evaluation metric of synthesized data, flows are supposed to be robust against out-of-distribution inputs since they do not discard any information of the inputs.However, it has been observed that flows trained on FashionMNIST assign higher likelihoods to OoD samples from MNIST.This counter-intuitive observation raises the concern about the robustness of flows' likelihood.In this paper, we explore the correlation between flows' likelihood and image semantics.We choose two typical flows as the target models: Glow, based on coupling transformations, and pixelCNN, based on autoregressive transformations.Our experiments reveal surprisingly weak correlation between flows' likelihoods and image semantics: the predictive likelihoods of flows can be heavily affected by trivial transformations that keep the image semantics unchanged, which we call semantic-invariant transformations.We explore three SITs: image pixel translation, random noise perturbation, and latent factor zeroing.These findings, though counter-intuitive, resonate with the fact that the predictive likelihood of a flow is the joint probability of all the image pixels.So flows' likelihoods, modeling pixel-level intensities, are not able to indicate the existence likelihood of the high-level image semantics.We call for attention that it may be risky to use the predictive likelihoods of flows for OoD sample detection.",We show experimental evidence of the weak correlation between flows' likelihoods and image semantics. 1060,Countering Adversarial Images using Input Transformations,"This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system.Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image quilting before feeding the image to a convolutional network classifier.Our experiments on ImageNet show that total variance minimization and image quilting are very effective defenses in practice, in particular, when the network is trained on transformed images.The strength of those defenses lies in their non-differentiable nature and their inherent randomness, which makes it difficult for an adversary to circumvent the defenses.Our best defense eliminates 60% of strong gray-box and 90% of strong black-box attacks by a variety of major attack methods.",We apply a model-agnostic defense strategy against adversarial examples and achieve 60% white-box accuracy and 90% black-box accuracy against major attack algorithms.
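Two of the input transformations named in the defense entry above, bit-depth reduction and JPEG compression, are simple enough to sketch directly; total variance minimization and image quilting are not shown. The quantization level and JPEG quality are illustrative choices, and the input image here is a random stand-in.

```python
# Minimal sketch of bit-depth reduction and JPEG round-tripping as input defenses.
import io
import numpy as np
from PIL import Image

def reduce_bit_depth(img_array, bits=3):
    """Quantize each channel to 2**bits intensity levels."""
    levels = 2 ** bits - 1
    return np.round(img_array / 255.0 * levels) / levels * 255.0

def jpeg_compress(img_array, quality=75):
    """Round-trip the image through lossy JPEG encoding."""
    img = Image.fromarray(img_array.astype(np.uint8))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return np.asarray(Image.open(io.BytesIO(buf.getvalue())))

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)   # stand-in input image
x_defended = jpeg_compress(reduce_bit_depth(x), quality=75)
# x_defended would then be fed to the classifier in place of x.
```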
1061,A2BCD: Asynchronous Acceleration with Optimal Complexity,"In this paper, we propose the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm.We prove A2BCD converges linearly to a solution of the convex minimization problem at the same rate as NU_ACDM, so long as the maximum delay is not too large.This is the first asynchronous Nesterov-accelerated algorithm that attains any provable speedup.Moreover, we then prove that these algorithms both have optimal complexity.Asynchronous algorithms complete much faster iterations, and A2BCD has optimal complexity.Hence we observe in experiments that A2BCD is the top-performing coordinate descent algorithm, converging up to 4-5x faster than NU_ACDM on some data sets in terms of wall-clock time.To motivate our theory and proof techniques, we also derive and analyze a continuous-time analog of our algorithm and prove it converges at the same rate.",We prove the first-ever convergence proof of an asynchronous accelerated algorithm that attains a speedup. 1062,"Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks","The point estimates of ReLU classification networks, arguably the most widely used neural network architecture, have recently been shown to have arbitrarily high confidence far away from the training data.This architecture is thus not robust, e.g., against out-of-distribution data.Approximate Bayesian posteriors on the weight space have been empirically demonstrated to improve predictive uncertainty in deep learning.The theoretical analysis of such Bayesian approximations is limited, including for ReLU classification networks.We present an analysis of approximate Gaussian posterior distributions on the weights of ReLU networks.We show that even a simplistic, non-Bayesian Gaussian distribution fixes the asymptotic overconfidence issue.Furthermore, when a Bayesian method, even if a simple one, is employed to obtain the Gaussian, the confidence becomes better calibrated.This theoretical result motivates a range of Laplace approximations along a fidelity-cost trade-off.We validate these findings empirically via experiments using common deep ReLU networks.","We argue theoretically that by simply assuming the weights of a ReLU network to be Gaussian distributed (without even a Bayesian formalism) could fix this issue; for a more calibrated uncertainty, a simple Bayesian method could already be sufficient." 1063,Leveraging Static and Contextualized Embeddings for Word Alignments,"Word alignments are useful for tasks like statistical and neural machine translation and annotation projection.Statistical word aligners perform well, as do methods that extract alignments jointly with translations in NMT.However, most approaches require parallel training data and quality decreases as less training data is available.We propose word alignment methods that require little or no parallel data.The key idea is to leverage multilingual word embeddings – both static and contextualized – for word alignment.Our multilingual embeddings are created from monolingual data only without relying on any parallel data or dictionaries.We find that traditional statistical aligners are outperformed by contextualized embeddings – even in scenarios with abundant parallel data.For example, for a set of 100k parallel sentences, contextualized embeddings achieve a word alignment F1 that is more than 5% higher than eflomal.",We use representations trained without any parallel data for creating word alignments. 
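The embedding-based alignment idea in the word-alignment entry above can be sketched as mutual nearest neighbours under cosine similarity. The random vectors below stand in for per-token (contextualized) embeddings; the paper's multilingual embedding construction and its comparison to statistical aligners are not reproduced.

```python
# Minimal sketch of word alignment from embeddings via mutual argmax.
import numpy as np

def align(src_emb, tgt_emb):
    """Return (src_idx, tgt_idx) pairs that are mutual nearest neighbours."""
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                                   # cosine similarity matrix
    fwd = {(i, int(sim[i].argmax())) for i in range(sim.shape[0])}
    bwd = {(int(sim[:, j].argmax()), j) for j in range(sim.shape[1])}
    return fwd & bwd                                    # symmetrized alignment

rng = np.random.default_rng(0)
src_vectors = rng.normal(size=(5, 768))   # one vector per source-sentence token
tgt_vectors = rng.normal(size=(6, 768))
print(sorted(align(src_vectors, tgt_vectors)))
```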
1064,Reinforcement Learning with Perturbed Rewards,"Recent studies have shown the vulnerability of reinforcement learning models in noisy settings.The sources of noise differ across scenarios.For instance, in practice, the observed reward channel is often subject to noise, and thus observed rewards may not be credible.Also, in applications such as robotics, a deep reinforcement learning algorithm can be manipulated to produce arbitrary errors.In this paper, we consider noisy RL problems where observed rewards by RL agents are generated with a reward confusion matrix.We call such observed rewards perturbed rewards.We develop a robust RL framework, aided by an unbiased reward estimator, that enables RL agents to learn in noisy environments while observing only perturbed rewards.Our framework draws upon approaches for supervised learning with noisy data.The core ideas of our solution include estimating a reward confusion matrix and defining a set of unbiased surrogate rewards.We prove the convergence and sample complexity of our approach.Extensive experiments on different DRL platforms show that policies based on our estimated surrogate reward can achieve higher expected rewards, and converge faster than existing baselines.For instance, the state-of-the-art PPO algorithm is able to obtain 67.5% and 46.7% improvements on average on five Atari games, when the error rates are 10% and 30% respectively.",A new approach for learning with noisy rewards in reinforcement learning 1065,RNNs with Private and Shared Representations for Semi-Supervised Sequence Learning,"Training recurrent neural networks on long sequences using backpropagation through time remains a fundamental challenge.It has been shown that adding a local unsupervised loss term into the optimization objective makes the training of RNNs on long sequences more effective.While the importance of an unsupervised task can in principle be controlled by a coefficient in the objective function, the gradients with respect to the unsupervised loss term still influence all the hidden state dimensions, which might cause important information about the supervised task to be degraded or erased.Compared to existing semi-supervised sequence learning methods, this paper focuses upon a traditionally overlooked mechanism -- an architecture with explicitly designed private and shared hidden units intended to mitigate the detrimental influence of the auxiliary unsupervised loss over the main supervised task.We achieve this by dividing the RNN hidden space into a private space for the supervised task and a shared space for both the supervised and unsupervised tasks.We present extensive experiments with the proposed framework on several long sequence modeling benchmark datasets.Results indicate that the proposed framework can yield performance gains in RNN models where long term dependencies are notoriously challenging to deal with.",This paper focuses upon a traditionally overlooked mechanism -- an architecture with explicitly designed private and shared hidden units intended to mitigate the detrimental influence of the auxiliary unsupervised loss over the main supervised task.
1066,Understanding and Improving Information Transfer in Multi-Task Learning,"We investigate multi-task learning approaches which use a shared feature representation for all tasks.To better understand the transfer of task information, we study an architecture with a shared module for all tasks and a separate output module for each task.We study the theory of this setting on linear and ReLU-activated models.Our key observation is that whether or not tasks' data are well-aligned can significantly affect the performance of multi-task learning.We show that misalignment between task data can cause negative transfer and provide sufficient conditions for positive transfer.Inspired by the theoretical insights, we show that aligning tasks' embedding layers leads to performance gains for multi-task training and transfer learning on the GLUE benchmark and sentiment analysis tasks; for example, we obtained a 2.35% average improvement in GLUE score on 5 GLUE tasks over BERT LARGE using our alignment method.We also design an SVD-based task re-weighting scheme and show that it improves the robustness of multi-task training on a multi-label image dataset.",A Theoretical Study of Multi-Task Learning with Practical Implications for Improving Multi-Task Training and Transfer Learning 1067,Towards Neural Similarity Evaluator,"We review three limitations of BLEU and ROUGE – the most popular metrics used to assess reference summaries against hypothesis summaries – come up with criteria for how a good metric should behave, propose concrete ways to assess the performance of a metric in detail, and show the potential of Transformer-based Language Models to assess reference summaries against hypothesis summaries.",New method for assessing the quality of similarity evaluators and showing the potential of Transformer-based language models in replacing BLEU and ROUGE. 1068,Few-shot Text Classification with Distributional Signatures,"In this paper, we explore meta-learning for few-shot text classification.Meta-learning has shown strong performance in computer vision, where low-level patterns are transferable across learning tasks.However, directly applying this approach to text is challenging–lexical features highly informative for one task may be insignificant for another.Thus, rather than learning solely from words, our model also leverages their distributional signatures, which encode pertinent word occurrence patterns.Our model is trained within a meta-learning framework to map these signatures into attention scores, which are then used to weight the lexical representations of words.We demonstrate that our model consistently outperforms prototypical networks learned on lexical knowledge in both few-shot text classification and relation classification by a significant margin across six benchmark datasets.","Meta-learning methods used for vision, directly applied to NLP, perform worse than nearest neighbors on new classes; we can do better with distributional signatures."
1069,Disentangling the roles of dimensionality and cell classes in neural computations,"The description of neural computations in the field of neuroscience relies on two competing views: a classical single-cell view that relates the activity of individual neurons to sensory or behavioural variables, and focuses on how different cell classes map onto computations; a more recent population view that instead characterises computations in terms of collective neural trajectories, and focuses on the dimensionality of these trajectories as animals perform tasks.How the two key concepts of cell classes and low-dimensional trajectories interact to shape neural computations is however currently not understood.Here we address this question by combining machine-learning tools for training RNNs with reverse-engineering and theoretical analyses of network dynamics.We introduce a novel class of theoretically tractable recurrent networks: low-rank, mixture of Gaussian RNNs.In these networks, the rank of the connectivity controls the dimensionality of the dynamics, while the number of components in the Gaussian mixture corresponds to the number of cell classes.Using back-propagation, we determine the minimum rank and number of cell classes needed to implement neuroscience tasks of increasing complexity.We then exploit mean-field theory to reverse-engineer the obtained solutions and identify the respective roles of dimensionality and cell classes.We show that the rank determines the phase-space available for dynamics that implement input-output mappings, while having multiple cell classes allows networks to flexibly switch between different types of dynamics in the available phase-space.Our results have implications for the analysis of neuroscience experiments and the development of explainable AI.","A theoretical analysis of a new class of RNNs, trained on neuroscience tasks, allows us to identify the role of dynamical dimensionality and cell classes in neural computations." 1070,Objective Mismatch in Model-based Reinforcement Learning,"Model-based reinforcement learning has been shown to be a powerful framework for data-efficiently learning control of continuous tasks.Recent work in MBRL has mostly focused on using more advanced function approximators and planning schemes, leaving the general framework virtually unchanged since its conception.In this paper, we identify a fundamental issue of the standard MBRL framework -- what we call the objective mismatch issue.Objective mismatch arises when one objective is optimized in the hope that a second, often uncorrelated, metric will also be optimized.In the context of MBRL, we characterize the objective mismatch between training the forward dynamics model w.r.t. 
the likelihood of the one-step ahead prediction, and the overall goal of improving performance on a downstream control task.For example, this issue can emerge with the realization that dynamics models effective for a specific task do not necessarily need to be globally accurate, and vice versa, globally accurate models might not be sufficiently accurate locally to obtain good control performance on a specific task.In our experiments, we study this objective mismatch issue and demonstrate that the likelihood of the one-step ahead prediction is not always correlated with downstream control performance.This observation highlights a critical flaw in the current MBRL framework which will require further research to be fully understood and addressed.We propose an initial method to mitigate the mismatch issue by re-weighting dynamics model training.Building on it, we conclude with a discussion about other potential directions of future research for addressing this issue.","We define, explore, and begin to address the objective mismatch issue in model-based reinforcement learning." 1071,Revisiting the Information Plane,"There has recently been a heated debate (Saxe et al., Noshad et al., Goldfeld et al.) about measuring the information flow in Deep Neural Networks using techniques from information theory.It is claimed that Deep Neural Networks in general have good generalization capabilities since they not only learn how to map from an input to an output but also how to compress information about the training data input.That is, they abstract the input information and strip down any unnecessary or over-specific information.If so, the message compression method, Information Bottleneck, could be used as a natural comparator for network performance, since this method gives an optimal information compression boundary.This claim was then later denounced as well as reaffirmed (Achille et al., Noshad et al.), as the employed method of mutual information measuring is not actually measuring information but clustering of the internal layer representations.In this paper, we will present a detailed explanation of the development in the Information Plane, a plot-type that compares mutual information to judge compression, when noise is retroactively added. We also explain why different activation functions show different trajectories on the IP.Further, we have looked into the effect of clustering on the network loss through early and perfect stopping using the Information Plane and how clustering can be used to help network pruning.",We give a detailed explanation of the trajectories in the information plane and investigate its usage for neural network design (pruning) 1072,Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design,"Formal understanding of the inductive bias behind deep convolutional networks, i.e.
the relation between the network's architectural features and the functions it is able to model, is limited.In this work, we establish a fundamental connection between the fields of quantum physics and deep learning, and use it for obtaining novel theoretical observations regarding the inductive bias of convolutional networks.Specifically, we show a structural equivalence between the function realized by a convolutional arithmetic circuit and a quantum many-body wave function, which facilitates the use of quantum entanglement measures as quantifiers of a deep network's expressive ability to model correlations.Furthermore, the construction of a deep ConvAC in terms of a quantum Tensor Network is enabled.This allows us to perform a graph-theoretic analysis of a convolutional network, tying its expressiveness to a min-cut in its underlying graph.We demonstrate a practical outcome in the form of a direct control over the inductive bias via the number of channels of each layer.We empirically validate our findings on standard convolutional networks which involve ReLU activations and max pooling.The description of a deep convolutional network in well-defined graph-theoretic tools and the structural connection to quantum entanglement are two interdisciplinary bridges that are brought forth by this work.","Employing quantum entanglement measures for quantifying correlations in deep learning, and using the connection to fit the deep network's architecture to correlations in the data." 1073,Using Deep Reinforcement Learning to Generate Rationales for Molecules,"Deep learning algorithms are increasingly used in modeling chemical processes.However, black box predictions without rationales have limited use in practical applications, such as drug design.To this end, we learn to identify molecular substructures -- rationales -- that are associated with the target chemical property.The rationales are learned in an unsupervised fashion, requiring no additional information beyond the end-to-end task.We formulate this problem as a reinforcement learning problem over the molecular graph, parametrized by two convolution networks corresponding to the rationale selection and prediction based on it, where the latter induces the reward function.We evaluate the approach on two benchmark toxicity datasets.We demonstrate that our model sustains high performance under the additional constraint that predictions strictly follow the rationales.Additionally, we validate the extracted rationales through comparison against those described in chemical literature and through synthetic experiments.",We use reinforcement learning over molecular graphs to generate rationales for interpretable molecular property prediction.
1074,Unifying Graph Convolutional Networks as Matrix Factorization,"In recent years, substantial progress has been made on graph convolutional networks.In this paper, for the first time, we theoretically analyze the connections between GCN and matrix factorization, and unify GCN as matrix factorization with co-training and unitization.Moreover, under the guidance of this theoretical analysis, we propose an alternative model to GCN named Co-training and Unitized Matrix Factorization.The correctness of our analysis is verified by thorough experiments.The experimental results show that CUMF achieves similar or superior performances compared to GCN.In addition, CUMF inherits the benefits of MF-based methods to naturally support constructing mini-batches, and is more friendly to distributed computing comparing with GCN.The distributed CUMF on semi-supervised node classification significantly outperforms distributed GCN methods.Thus, CUMF greatly benefits large scale and complex real-world applications.",We unify graph convolutional networks as co-training and unitized matrix factorization. 1075,GraphAF: a Flow-based Autoregressive Model for Molecular Graph Generation,"Molecular graph generation is a fundamental problem for drug discovery and has been attracting growing attention.The problem is challenging since it requires not only generating chemically valid molecular structures but also optimizing their chemical properties in the meantime.Inspired by the recent progress in deep generative models, in this paper we propose a flow-based autoregressive model for graph generation called GraphAF.GraphAF combines the advantages of both autoregressive and flow-based approaches and enjoys: high model flexibility for data density estimation; efficient parallel computation for training; an iterative sampling process, which allows leveraging chemical domain knowledge for valency checking.Experimental results show that GraphAF is able to generate 68% chemically valid molecules even without chemical knowledge rules and 100% valid molecules with chemical rules.The training process of GraphAF is two times faster than the existing state-of-the-art approach GCPN.After fine-tuning the model for goal-directed property optimization with reinforcement learning, GraphAF achieves state-of-the-art performance on both chemical property optimization and constrained property optimization.",A flow-based autoregressive model for molecular graph generation. Reaching state-of-the-art results on molecule generation and properties optimization. 
1076,Preconditioner on Matrix Lie Group for SGD,"We study two types of preconditioners and preconditioned stochastic gradient descent methods in a unified framework.We call the first one the Newton type due to its close relationship to the Newton method, and the second one the Fisher type as its preconditioner is closely related to the inverse of Fisher information matrix.Both preconditioners can be derived from one framework, and efficiently estimated on any matrix Lie groups designated by the user using natural or relative gradient descent minimizing certain preconditioner estimation criteria.Many existing preconditioners and methods, e.g., RMSProp, Adam, KFAC, equilibrated SGD, batch normalization, etc., are special cases of or closely related to either the Newton type or the Fisher type ones.Experimental results on relatively large scale machine learning problems are reported for performance study.","We propose a new framework for preconditioner learning, derive new forms of preconditioners and learning methods, and reveal the relationship to methods like RMSProp, Adam, Adagrad, ESGD, KFAC, batch normalization, etc." 1077,EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks,"We present EDA: easy data augmentation techniques for boosting performance on text classification tasks.EDA consists of four simple but powerful operations: synonym replacement, random insertion, random swap, and random deletion.On five text classification tasks, we show that EDA improves performance for both convolutional and recurrent neural networks.EDA demonstrates particularly strong results for smaller datasets; on average, across five datasets, training with EDA while using only 50% of the available training set achieved the same accuracy as normal training with all available data.We also performed extensive ablation studies and suggest parameters for practical use.","Simple text augmentation techniques can significantly boost performance on text classification tasks, especially for small datasets." 
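The four EDA operations listed in the entry above are simple enough to sketch end to end. The synonym table below is a tiny hand-written placeholder (EDA itself draws synonyms from WordNet), and the per-operation counts and probabilities are illustrative rather than the paper's recommended settings.

```python
# Minimal sketch of the four EDA text-augmentation operations.
import random

SYNONYMS = {"good": ["fine", "great"], "movie": ["film"], "boring": ["dull"]}

def synonym_replacement(words, n=1):
    out = words[:]
    candidates = [i for i, w in enumerate(out) if w in SYNONYMS]
    for i in random.sample(candidates, min(n, len(candidates))):
        out[i] = random.choice(SYNONYMS[out[i]])
    return out

def random_insertion(words, n=1):
    out = words[:]
    for _ in range(n):
        w = random.choice(out)
        syn = random.choice(SYNONYMS.get(w, [w]))
        out.insert(random.randrange(len(out) + 1), syn)
    return out

def random_swap(words, n=1):
    out = words[:]
    for _ in range(n):
        i, j = random.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

def random_deletion(words, p=0.1):
    out = [w for w in words if random.random() > p]
    return out or [random.choice(words)]   # never delete the whole sentence

random.seed(0)
sentence = "the movie was good but a bit boring".split()
for op in (synonym_replacement, random_insertion, random_swap, random_deletion):
    print(op.__name__, ":", " ".join(op(sentence)))
```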
1078,Physics-as-Inverse-Graphics: Unsupervised Physical Parameter Estimation from Video,"We propose a model that is able to perform physical parameter estimation of systems from video, where the differential equations governing the scene dynamics are known, but labeled states or objects are not available.Existing physical scene understanding methods require either object state supervision, or do not integrate with differentiable physics to learn interpretable system parameters and states.We address this problem through an approach that brings together vision-as-inverse-graphics and differentiable physics engines, where objects and explicit state and velocity representations are discovered by the model.This framework allows us to perform long term extrapolative video prediction, as well as vision-based model-predictive control.Our approach significantly outperforms related unsupervised methods in long-term future frame prediction of systems with interacting objects, due to its ability to build dynamics into the model as an inductive bias.We further show the value of this tight vision-physics integration by demonstrating data-efficient learning of vision-actuated model-based control for a pendulum system.We also show that the controller's interpretability provides unique capabilities in goal-driven control and physical reasoning for zero-data adaptation.","We propose a model that is able to perform physical parameter estimation of systems from video, where the differential equations governing the scene dynamics are known, but labeled states or objects are not available." 1079,Adversarial Sampling for Active Learning,"This paper proposes ASAL, a new pool-based active learning method that generates high entropy samples.Instead of directly annotating the synthetic samples, ASAL searches similar samples from the pool and includes them for training.Hence, the quality of new samples is high and annotations are reliable. ASAL is particularly suitable for large data sets because it achieves a better run-time complexity for sample selection than traditional uncertainty sampling.We present a comprehensive set of experiments on two data sets and show that ASAL outperforms similar methods and clearly exceeds the established baseline. In the discussion section we analyze in which situations ASAL performs best and why it is sometimes hard to outperform random sample selection.To the best of our knowledge this is the first adversarial active learning technique that is applied to multi-class problems using deep convolutional classifiers and demonstrates performance superior to random sample selection.",ASAL is a pool-based active learning method that generates high entropy samples and retrieves matching samples from the pool in sub-linear time.
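The pool-matching step described in the ASAL entry above can be sketched as a simple nearest-neighbour retrieval. The synthesized sample here is a random stand-in (the paper generates it adversarially), the features are placeholders, and the brute-force search below ignores ASAL's sub-linear retrieval machinery.

```python
# Minimal sketch of matching a synthesized high-entropy sample to the unlabeled pool.
import numpy as np

def match_to_pool(synthetic, pool_features):
    """Return the index of the pool sample closest to the synthetic sample."""
    d = np.linalg.norm(pool_features - synthetic, axis=1)
    return int(np.argmin(d))

rng = np.random.default_rng(0)
pool = rng.normal(size=(1000, 64))        # feature vectors of the unlabeled pool
synthetic_sample = rng.normal(size=64)    # stand-in for a generated high-entropy sample
chosen = match_to_pool(synthetic_sample, pool)
# pool[chosen] would be sent to the annotator and added to the training set.
```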
1080,A Boo(n) for Evaluating Architecture Performance,"We point out important problems with the common practice of using the best single model performance for comparing deep learning architectures, and we propose a method that corrects these flaws.Each time a model is trained, one gets a different result due to random factors in the training process, which include random parameter initialization and random data shuffling.Reporting the best single model performance does not appropriately address this stochasticity.We propose a normalized expected best-out-of-n performance as a way to correct these problems.","We point out important problems with the common practice of using the best single model performance for comparing deep learning architectures, and we propose a method that corrects these flaws." 1081,DATNet: Dual Adversarial Transfer for Low-resource Named Entity Recognition,"We propose a new architecture termed Dual Adversarial Transfer Network for addressing low-resource Named Entity Recognition.Specifically, two variants of DATNet, i.e., DATNet-F and DATNet-P, are proposed to explore effective feature fusion between high and low resource.To address the noisy and imbalanced training data, we propose a novel Generalized Resource-Adversarial Discriminator.Additionally, adversarial training is adopted to boost model generalization.We examine the effects of different components in DATNet across domains and languages and show that significant improvement can be obtained especially for low-resource data.Without augmenting any additional hand-crafted features, we achieve new state-of-the-art performances on CoNLL and Twitter NER---88.16% F1 for Spanish, 53.43% F1 for WNUT-2016, and 42.83% F1 for WNUT-2017.",We propose a new architecture termed Dual Adversarial Transfer Network (DATNet) for addressing low-resource Named Entity Recognition (NER) and achieve new state-of-the-art performances on CoNLL and Twitter NER. 
1082,Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields,"Generative adversarial networks evolved into one of the most successful unsupervised techniques for generating realistic images.Even though it has recently been shown that GAN training converges, GAN models often end up in local Nash equilibria that are associated with mode collapse or otherwise fail to model the target distribution.We introduce Coulomb GANs, which pose the GAN learning problem as a potential field, where generated samples are attracted to training set samples but repel each other.The discriminator learns a potential field while the generator decreases the energy by moving its samples along the vector field determined by the gradient of the potential field.Through decreasing the energy, the GAN model learns to generate samples according to the whole target distribution and does not only cover some of its modes.We prove that Coulomb GANs possess only one Nash equilibrium which is optimal in the sense that the model distribution equals the target distribution.We show the efficacy of Coulomb GANs on LSUN bedrooms, CelebA faces, CIFAR-10 and the Google Billion Word text generation.",Coulomb GANs can optimally learn a distribution by posing the distribution learning problem as optimizing a potential field 1083,Confidence-based Graph Convolutional Networks for Semi-Supervised Learning,"Predicting properties of nodes in a graph is an important problem with applications in a variety of domains.Graph-based Semi Supervised Learning methods aim to address this problem by labeling a small subset of the nodes as seeds, and then utilizing the graph structure to predict label scores for the rest of the nodes in the graph.Recently, Graph Convolutional Networks have achieved impressive performance on the graph-based SSL task.In addition to label scores, it is also desirable to have a confidence score associated with them.Unfortunately, confidence estimation in the context of GCN has not been previously explored.We fill this important gap in this paper and propose ConfGCN, which estimates labels scores along with their confidences jointly in GCN-based setting.ConfGCN uses these estimated confidences to determine the influence of one node on another during neighborhood aggregation, thereby acquiring anisotropic capabilities.Through extensive analysis and experiments on standard benchmarks, we find that ConfGCN is able to significantly outperform state-of-the-art baselines.We have made ConfGCN’s source code available to encourage reproducible research.",We propose a confidence based Graph Convolutional Network for Semi-Supervised Learning. 
1084,Benchmarking Neural Network Robustness to Common Corruptions and Perturbations,"In this paper we establish rigorous benchmarks for image classifier robustness.Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications.Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations.Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations, not worst-case adversarial perturbations.We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers.Afterward we discover ways to enhance corruption and perturbation robustness.We even find that a bypassed adversarial defense provides substantial common perturbation robustness.Together our benchmarks may aid future work toward networks that robustly generalize.",We propose ImageNet-C to measure classifier corruption robustness and ImageNet-P to measure perturbation robustness 1085,DeepObfusCode: Source Code Obfuscation Through Sequence-to-Sequence Networks,"The paper explores a novel methodology in source code obfuscation through the application of text-based recurrent neural network encoder-decoder models in ciphertext generation and key generation.Sequence-to-sequence models are incorporated into the model architecture to generate obfuscated code, generate the deobfuscation key, and enable live execution.Quantitative benchmark comparison to existing obfuscation methods indicates significant improvement in stealth and execution cost for the proposed solution, and experiments regarding the model’s properties yield positive results regarding its character variation, dissimilarity to the original codebase, and consistent length of obfuscated code.","Obfuscate code using seq2seq networks, and execute using the obfuscated code and key pair" 1086,Cross-Lingual Vision-Language Navigation,"Vision-Language Navigation is the task where an agent is commanded to navigate in photo-realistic unknown environments with natural language instructions.Previous research on VLN is primarily conducted on the Room-to-Room dataset with only English instructions.
The ultimate goal of VLN, however, is to serve people speaking arbitrary languages.Towards multilingual VLN with numerous languages, we collect a cross-lingual R2R dataset, which extends the original benchmark with corresponding Chinese instructions.But it is time-consuming and expensive to collect large-scale human instructions for every existing language.Based on the newly introduced dataset, we propose a general cross-lingual VLN framework to enable instruction-following navigation for different languages.We first explore the possibility of building a cross-lingual agent when no training data of the target language is available.The cross-lingual agent is equipped with a meta-learner to aggregate cross-lingual representations and a visually grounded cross-lingual alignment module to align textual representations of different languages.Under the zero-shot learning scenario, our model shows competitive results even compared to a model trained with all target language instructions.In addition, we introduce an adversarial domain adaptation loss to improve the transferring ability of our model when given a certain amount of target language data.Our methods and dataset demonstrate the potential of building a cross-lingual agent to serve speakers with different languages.","We introduce a new task and dataset on cross-lingual vision-language navigation, and propose a general cross-lingual VLN framework for the task." 1087,Towards Unsupervised Classification with Deep Generative Models,"Deep generative models have advanced the state-of-the-art in semi-supervised classification, however their capacity for deriving useful discriminative features in a completely unsupervised fashion for classification in difficult real-world data sets, where adequate manifold separation is required, has not been adequately explored.Most methods rely on defining a pipeline of deriving features via generative modeling and then applying clustering algorithms, separating the modeling and discriminative processes.We propose a deep hierarchical generative model which uses a mixture of discrete and continuous distributions to learn to effectively separate the different data manifolds and is trainable end-to-end.We show that by specifying the form of the discrete variable distribution we are imposing a specific structure on the model's latent representations.We test our model's discriminative performance on the task of CLL diagnosis against baselines from the field of computational FC, as well as the Variational Autoencoder literature.",Unsupervised classification via deep generative modeling with controllable feature learning evaluated in a difficult real world task 1088,DyRep: Learning Representations over Dynamic Graphs,"Representation Learning over graph structured data has received significant attention recently due to its ubiquitous applicability.However, most advancements have been made in static graph settings while efforts for jointly learning dynamics of the graph and dynamics on the graph are still in an infant stage.Two fundamental questions arise in learning over dynamic graphs: How to elegantly model dynamical processes over graphs? 
How to leverage such a model to effectively encode evolving graph information into low-dimensional representations?We present DyRep - a novel modeling framework for dynamic graphs that posits representation learning as a latent mediation process bridging two observed processes, namely the dynamics of the network and the dynamics on the network.Concretely, we propose a two-time scale deep temporal point process model that captures the interleaved dynamics of the observed processes.This model is further parameterized by a temporal-attentive representation network that encodes temporally evolving structural information into node representations which in turn drives the nonlinear evolution of the observed graph dynamics.Our unified framework is trained using an efficient unsupervised procedure and has the capability to generalize over unseen nodes.We demonstrate that DyRep outperforms state-of-the-art baselines for dynamic link prediction and time prediction tasks and present extensive qualitative insights into our framework.",Models Representation Learning over dynamic graphs as a latent hidden process bridging two observed processes of Topological Evolution of and Interactions on dynamic graphs. 1089,Smooth markets: A basic mechanism for organizing gradient-based learners,"With the success of modern machine learning, it is becoming increasingly important to understand and control how learning algorithms interact.Unfortunately, negative results from game theory show there is little hope of understanding or controlling general n-player games.We therefore introduce smooth markets, a class of n-player games with pairwise zero-sum interactions.SM-games codify a common design pattern in machine learning that includes some GANs, adversarial training, and other recent algorithms.We show that SM-games are amenable to analysis and optimization using first-order methods.",We introduce a class of n-player games suited to gradient-based methods. 1090,On the Linguistic Capacity of Real-time Counter Automata,"While counter machines have received little attention in theoretical computer science since the 1960s, they have recently achieved a newfound relevance to the field of natural language processing.Recent work has suggested that some strong-performing recurrent neural networks utilize their memory as counters.Thus, one potential way to understand the success of these networks is to revisit the theory of counter computation.Therefore, we choose to study the abilities of real-time counter machines as formal grammars.We first show that several variants of the counter machine converge to express the same class of formal languages.We also prove that counter languages are closed under complement, union, intersection, and many other common set operations.Next, we show that counter machines cannot evaluate boolean expressions, even though they can weakly validate their syntax.This has implications for the interpretability and evaluation of neural network systems: successfully matching syntactic patterns does not guarantee that a counter-like model accurately represents underlying semantic structures.Finally, we consider the question of whether counter languages are semilinear.This work makes general contributions to the theory of formal languages that are of particular interest for the interpretability of recurrent neural networks.","We study the class of formal languages acceptable by real-time counter automata, a model of computation related to some types of recurrent neural networks." 
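Entry 1090 above studies real-time counter machines as formal grammars. As a concrete, minimal illustration of that model of computation (not code from the paper), the following recognizer for the language { a^n b^n } reads the input once and updates a single integer counter per symbol:

```python
def accepts_anbn(s: str) -> bool:
    """Real-time, one-counter recognizer for { a^n b^n : n >= 0 }.

    The machine reads the input once, left to right, updating a single integer
    counter: +1 on 'a', -1 on 'b'. It rejects if an 'a' follows a 'b' or the
    counter goes negative, and accepts iff the counter ends at zero.
    """
    counter = 0
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:
                return False
            counter += 1
        elif ch == 'b':
            seen_b = True
            counter -= 1
            if counter < 0:
                return False
        else:
            return False
    return counter == 0

assert accepts_anbn("aaabbb")
assert not accepts_anbn("aabbb")
assert not accepts_anbn("abab")
```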
1091,A Deep Reinforced Model for Abstractive Summarization,"Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences.For longer documents and summaries, however, these models often include repetitive and incoherent phrases.We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning.Models trained only with supervised learning often exhibit ""exposure bias"" - they assume ground truth is provided at each step during training.However, when standard word prediction is combined with the global sequence prediction training of RL, the resulting summaries become more readable.We evaluate this model on the CNN/Daily Mail and New York Times datasets.Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.Human evaluation also shows that our model produces higher quality summaries.",A summarization model combining a new intra-attention and reinforcement learning method to increase summary ROUGE scores and quality for long sequences. 1092,Role-Wise Data Augmentation for Knowledge Distillation,"Knowledge Distillation is a common method for transferring the knowledge learned by one machine learning model into another model, where typically, the teacher has a greater capacity.To our knowledge, existing methods overlook the fact that although the student absorbs extra knowledge from the teacher, both models share the same input data -- and this data is the only medium by which the teacher's knowledge can be demonstrated.Due to the difference in model capacities, the student may not benefit fully from the same data points on which the teacher is trained.On the other hand, a human teacher may demonstrate a piece of knowledge with individualized examples adapted to a particular student, for instance, in terms of her cultural background and interests.Inspired by this behavior, we design data augmentation agents with distinct roles to facilitate knowledge distillation.Our data augmentation agents generate distinct training data for the teacher and student, respectively.We focus specifically on KD when the teacher network has greater precision than the student network.We find empirically that specially tailored data points enable the teacher's knowledge to be demonstrated more effectively to the student.We compare our approach with existing KD methods on training popular neural architectures and demonstrate that role-wise data augmentation improves the effectiveness of KD over strong prior approaches.The code for reproducing our results will be made publicly available.",We study whether and how adaptive data augmentation and knowledge distillation can be leveraged simultaneously in a synergistic manner for better training student networks. 
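Entry 1091 above combines supervised word prediction with reinforcement learning. One common way to write such a mixed objective is shown below as an illustrative worked equation with assumed notation (x the source document, y* the reference summary, y^s a sampled summary, the hatted y the greedy baseline, r(.) a sequence-level reward such as ROUGE, and gamma a mixing weight); it is a sketch of the general recipe, not necessarily the paper's exact formulation.

```latex
% Assumed notation, see the lead-in above.
\begin{align}
  L_{\mathrm{ml}}    &= -\sum_{t} \log p\left(y^{*}_{t} \mid y^{*}_{1:t-1}, x\right) \\
  L_{\mathrm{rl}}    &= \left(r(\hat{y}) - r(y^{s})\right) \sum_{t} \log p\left(y^{s}_{t} \mid y^{s}_{1:t-1}, x\right) \\
  L_{\mathrm{mixed}} &= \gamma\, L_{\mathrm{rl}} + (1 - \gamma)\, L_{\mathrm{ml}}
\end{align}
```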
1093,Repurposing Decoder-Transformer Language Models for Abstractive Summarization,"Neural network models have shown excellent fluency and performance when applied to abstractive summarization.Many approaches to neural abstractive summarization involve the introduction of significant inductive bias, such as pointer-generator architectures, coverage, and partially extractive procedures, designed to mimic human summarization.We show that it is possible to attain competitive performance by instead directly viewing summarization as language modeling.We introduce a simple procedure built upon pre-trained decoder-transformers to obtain competitive ROUGE scores using a language modeling loss alone, with no beam-search or other decoding-time optimization, instead relying on efficient nucleus sampling and greedy decoding.",We introduce a simple procedure to repurpose pre-trained transformer-based language models to perform abstractive summarization well. 1094,Progressive Memory Banks for Incremental Domain Adaptation,"This paper addresses the problem of incremental domain adaptation.We assume each domain comes sequentially, and that we can only access data in the current domain.The goal of IDA is to build a unified model performing well on all the encountered domains.We propose to augment a recurrent neural network with a directly parameterized memory bank, which is retrieved by an attention mechanism at each step of the RNN transition.The memory bank provides a natural way of IDA: when adapting our model to a new domain, we progressively add new slots to the memory bank, which increases the model capacity.We learn the new memory slots and fine-tune existing parameters by back-propagation.Experiments show that our approach significantly outperforms naive fine-tuning and previous work on IDA, including elastic weight consolidation and the progressive neural network. Compared with expanding hidden states, our approach is more robust for old domains, shown by both empirical and theoretical results.","We present a neural memory-based architecture for incremental domain adaptation, and provide theoretical and empirical results." 1095,Learning to Recover Sparse Signals,"In compressed sensing, a primary problem to solve is to reconstruct a high dimensional sparse signal from a small number of observations.In this work, we develop a new sparse signal recovery algorithm using reinforcement learning and Monte Carlo Tree Search.Similarly to orthogonal matching pursuit, our RL+MCTS algorithm chooses the support of the signal sequentially.The key novelty is that the proposed algorithm learns how to choose the next support as opposed to following a pre-designed rule as in OMP.Empirical results are provided to demonstrate the superior performance of the proposed RL+MCTS algorithm over existing sparse signal recovery algorithms.","Formulating sparse signal recovery as a sequential decision making problem, we develop a method based on RL and MCTS that learns a policy to discover the support of the sparse signal. 
" 1096,Deep Connectomics Networks: Neural Network Architectures Inspired by Neuronal Networks,"The interplay between inter-neuronal network topology and cognition has been studied deeply by connectomics researchers and network scientists, which is crucial towards understanding the remarkable efficacy of biological neural networks.Curiously, the deep learning revolution that revived neural networks has not paid much attention to topological aspects.The architectures of deep neural networks do not resemble their biological counterparts in the topological sense.We bridge this gap by presenting initial results of Deep Connectomics Networks as DNNs with topologies inspired by real-world neuronal networks.We show high classification accuracy obtained by DCNs whose architecture was inspired by the biological neuronal networks of C. Elegans and the mouse visual cortex.",Initial findings in the intersection of network neuroscience and deep learning. C. Elegans and a mouse visual cortex learn to recognize handwritten digits. 1097,Learning Parsimonious Deep Feed-forward Networks,"Convolutional neural networks and recurrent neural networks are designed with network structures well suited to the nature of spacial and sequential data respectively.However, the structure of standard feed-forward neural networks is simply a stack of fully connected layers, regardless of the feature correlations in data.In addition, the number of layers and the number of neurons are manually tuned on validation data, which is time-consuming and may lead to suboptimal networks.In this paper, we propose an unsupervised structure learning method for learning parsimonious deep FNNs.Our method determines the number of layers, the number of neurons at each layer, and the sparse connectivity between adjacent layers automatically from data.The resulting models are called Backbone-Skippath Neural Networks.Experiments on 17 tasks show that, in comparison with FNNs, BSNNs can achieve better or comparable classification performance with much fewer parameters.The interpretability of BSNNs is also shown to be better than that of FNNs.",An unsupervised structure learning method for Parsimonious Deep Feed-forward Networks. 1098,GAN priors for Bayesian inference,"Bayesian inference is used extensively to infer and to quantify the uncertainty in a field of interest from a measurement of a related field when the two are linked by a mathematical model.Despite its many applications, Bayesian inference faces challenges when inferring fields that have discrete representations of large dimension, and/or have prior distributions that are difficult to characterize mathematically.In this work we demonstrate how the approximate distribution learned by a generative adversarial network may be used as a prior in a Bayesian update to address both these challenges.We demonstrate the efficacy of this approach by inferring and quantifying uncertainty in a physics-based inverse problem and an inverse problem arising in computer vision.In this latter example, we also demonstrate how the knowledge of the spatial variation of uncertainty may be used to select an optimal strategy of placing the sensors, where information about the image is revealed one sub-region at a time.",Using GANs as priors for efficient Bayesian inference of complex fields. 
1099,Are Few-shot Learning Benchmarks Too Simple?,"We argue that the widely used Omniglot and miniImageNet benchmarks are too simple because their class semantics do not vary across episodes, which defeats their intended purpose of evaluating few-shot classification methods.The class semantics of Omniglot is invariably “characters” and the class semantics of miniImageNet, “object category”.Because the class semantics are so similar, we propose a new method called Centroid Networks which can achieve surprisingly high accuracies on Omniglot and miniImageNet without using any labels at meta-evaluation time.Our results suggest that those benchmarks are not adapted for supervised few-shot classification since the supervision itself is not necessary during meta-evaluation.The Meta-Dataset, a collection of 10 datasets, was recently proposed as a harder few-shot classification benchmark.Using our method, we derive a new metric, the Class Semantics Consistency Criterion, and use it to quantify the difficulty of Meta-Dataset.Finally, under some restrictive assumptions, we show that Centroid Networks is faster and more accurate than a state-of-the-art learning-to-cluster method.","Omniglot and miniImageNet are too simple for few-shot learning because we can solve them without using labels during meta-evaluation, as demonstrated with a method called centroid networks" 1100,DS-VIC: Unsupervised Discovery of Decision States for Transfer in RL,"We learn to identify decision states, namely the parsimonious set of states where decisions meaningfully affect the future states an agent can reach in an environment.We utilize the VIC framework, which maximizes an agent’s “empowerment”, i.e., the ability to reliably reach a diverse set of states -- and formulate a sandwich bound on the empowerment objective that allows identification of decision states. Unlike previous work, our decision states are discovered without extrinsic rewards -- simply by interacting with the world.Our results show that our decision states are: 1) often interpretable, and 2) lead to better exploration on downstream goal-driven tasks in partially observable environments.","Identify decision states (where agent can take actions that matter) without reward supervision, use it for transfer." 
1101,Distributional Bayesian optimisation for variational inference on black-box simulators,"Inverse problems are ubiquitous in natural sciences and refer to the challenging task of inferring complex and potentially multi-modal posterior distributions over hidden parameters given a set of observations.Typically, a model of the physical process in the form of differential equations is available but leads to intractable inference over its parameters.While the forward propagation of parameters through the model simulates the evolution of the system, the inverse problem of finding the parameters given the sequence of states is not unique.In this work, we propose a generalisation of the Bayesian optimisation framework to approximate inference.The resulting method learns approximations to the posterior distribution by applying Stein variational gradient descent on top of estimates from a Gaussian process model.Preliminary results demonstrate the method's performance on likelihood-free inference for reinforcement learning environments.",An approach to combine variational inference and Bayesian optimisation to solve complicated inverse problems 1102,Data-driven construction of robust motion primitives for non-holonomic vehicles,"We present a data-driven approach to construct a library of feedback motion primitives for non-holonomic vehicles that guarantees bounded error in following arbitrarily long trajectories.This ensures that motion re-planning can be avoided as long as disturbances to the vehicle remain within a certain bound, and also potentially when the obstacles are displaced within a certain bound.The library is constructed along local abstractions of the dynamics that enable the addition of new motion primitives through abstraction refinement.We provide sufficient conditions for the construction of such robust motion primitives for a large class of nonlinear dynamics, including commonly used models such as the standard Reeds-Shepp model.The algorithm is applied to motion planning and control of a rover with slipping, without prior modelling of the slip.",We show that under some assumptions on vehicle dynamics and environment uncertainty it is possible to automatically synthesize motion primitives that do not accumulate error over time. 1103,Sensitivity of Deep Convolutional Networks to Gabor Noise,"Deep Convolutional Networks have been shown to be sensitive to Universal Adversarial Perturbations: input-agnostic perturbations that fool a model on large portions of a dataset.These UAPs exhibit interesting visual patterns, but this phenomenon is, as yet, poorly understood.Our work shows that visually similar procedural noise patterns also act as UAPs.In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns.This behaviour, its causes, and implications deserve further in-depth study.","Existing Deep Convolutional Networks in image classification tasks are sensitive to Gabor noise patterns, i.e. small structured changes to the input cause large changes to the output." 
1104,Robustness and/or Redundancy Emerge in Overparametrized Deep Neural Networks,"Deep neural networks perform well on a variety of tasks despite the fact that most used in practice are vastly overparametrized and even capable of perfectly fitting randomly labeled data.Recent evidence suggests that developing ""compressible"" representations is key for adjusting the complexity of overparametrized networks to the task at hand and avoiding overfitting.In this paper, we provide new empirical evidence that supports this hypothesis, identifying two independent mechanisms that emerge when the network’s width is increased: robustness and redundancy.In a series of experiments with AlexNet, ResNet and Inception networks in the CIFAR-10 and ImageNet datasets, and also using shallow networks with synthetic data, we show that DNNs consistently increase either their robustness, their redundancy, or both at greater widths for a comprehensive set of hyperparameters.These results suggest that networks in the deep learning regime adjust their effective capacity by developing either robustness or redundancy.",Probing robustness and redundancy in deep neural networks reveals capacity-constraining features which help to explain non-overfitting. 1105,"Towards Robust, Locally Linear Deep Networks","Deep networks realize complex mappings that are often understood by their locally linear behavior at or around points of interest.For example, we use the derivative of the mapping with respect to its inputs for sensitivity analysis, or to explain a prediction.One key challenge is that such derivatives are themselves inherently unstable.In this paper, we propose a new learning problem to encourage deep networks to have stable derivatives over larger regions.While the problem is challenging in general, we focus on networks with piecewise linear activation functions.Our algorithm consists of an inference step that identifies a region around a point where linear approximation is provably stable, and an optimization step to expand such regions.We propose a novel relaxation to scale the algorithm to realistic models.We illustrate our method with residual and recurrent networks on image and sequence datasets.",A scalable algorithm to establish robust derivatives of deep networks w.r.t. the inputs. 1106,Learning Internal Dense But External Sparse Structures of Deep Neural Network,"Recent years have witnessed two seemingly opposite developments of deep convolutional neural networks.On one hand, increasing the density of CNNs by adding cross-layer connections achieve higher accuracy.On the other hand, creating sparsity structures through regularization and pruning methods enjoys lower computational costs.In this paper, we bridge these two by proposing a new network structure with locally dense yet externally sparse connections.This new structure uses dense modules, as basic building blocks and then sparsely connects these modules via a novel algorithm during the training process.Experimental results demonstrate that the locally dense yet externally sparse structure could acquire competitive performance on benchmark tasks while keeping the network structure slim.","In this paper, we explore an internal dense yet external sparse network structure of deep neural networks and analyze its key properties." 
1107,Ergodic Inference: Accelerate Convergence by Optimisation,"Statistical inference methods are fundamentally important in machine learning.Most state-of-the-art inference algorithms are variants of Markov chain Monte Carlo or variational inference.However, both methods struggle with limitations in practice: MCMC methods can be computationally demanding; VI methods may have large bias.In this work, we aim to improve upon MCMC and VI by a novel hybrid method based on the idea of reducing simulation bias of finite-length MCMC chains using gradient-based optimisation.The proposed method can generate low-biased samples by increasing the length of MCMC simulation and optimising the MCMC hyper-parameters, which offers an attractive balance between approximation bias and computational efficiency.We show that our method produces promising results on popular benchmarks when compared to recent hybrid methods of MCMC and VI.","In this work, we aim to improve upon MCMC and VI by a novel hybrid method based on the idea of reducing simulation bias of finite-length MCMC chains using gradient-based optimisation." 1108,Predicting the accuracy of neural networks from final and intermediate layer outputs,"We show that information about whether a neural network's output will be correct or incorrect is present in the outputs of the network's intermediate layers.To demonstrate this effect, we train a new ""meta"" network to predict from either the final output of the underlying ""base"" network or the output of one of the base network's intermediate layers whether the base network will be correct or incorrect for a particular input.We find that, over a wide range of tasks and base networks, the meta network can achieve accuracies ranging from 65% - 85% in making this determination.",Information about whether a neural network's output will be correct or incorrect is somewhat present in the outputs of the network's intermediate layers. 1109,One Demonstration Imitation Learning,"We develop a new algorithm for imitation learning from a single expert demonstration.In contrast to many previous one-shot imitation learning approaches, our algorithm does not assume access to more than one expert demonstration during the training phase.Instead, we leverage an exploration policy to acquire unsupervised trajectories, which are then used to train both an encoder and a context-aware imitation policy.The optimization procedures for the encoder, imitation learner, and exploration policy are all tightly linked.This linking creates a feedback loop wherein the exploration policy collects new demonstrations that challenge the imitation learner, while the encoder attempts to help the imitation policy to the best of its abilities.We evaluate our algorithm on 6 MuJoCo robotics tasks.",Unsupervised self-imitation algorithm capable of inference from a single expert demonstration. 
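Entry 1108 above trains a "meta" network to predict whether a "base" network will be correct. A small self-contained sketch of that setup, using scikit-learn stand-ins rather than the entry's networks and deliberately under-training the base model so that it makes mistakes, could look like this:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_base, X_meta, y_base, y_meta = train_test_split(X, y, test_size=0.5, random_state=0)

# "Base" model: deliberately small and briefly trained so that it makes some mistakes.
base = MLPClassifier(hidden_layer_sizes=(16,), max_iter=50, random_state=0).fit(X_base, y_base)

# "Meta" model: predicts, from the base model's output layer, whether the base
# prediction is correct. (The entry also probes intermediate layers; only the
# final output is used in this sketch.)
outputs = base.predict_proba(X_meta)
correct = (base.predict(X_meta) == y_meta).astype(int)
o_tr, o_te, c_tr, c_te = train_test_split(outputs, correct, test_size=0.3, random_state=0)
meta = LogisticRegression(max_iter=1000).fit(o_tr, c_tr)
print("meta accuracy at predicting base-network correctness:", meta.score(o_te, c_te))
```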
1110,Human-Understandable Explanations of Infeasibility for Resource-Constrained Scheduling Problems,"Significant work has been dedicated to developing methods for communicating reasons for decision-making within automated scheduling systems to human users.However, much less focus has been placed on communicating reasons for why scheduling systems are unable to arrive at a feasible solution when over-constrained.We investigate this problem in the context of task scheduling.We introduce the agent resource-constrained project scheduling problem, an extension of the resource-constrained project scheduling problem which includes a conception of agents that execute tasks in parallel.We outline a generic framework, based on efficiently enumerating minimal unsatisfiable sets and maximal satisfiable sets, to produce small descriptions of the source of infeasibility.These descriptions are supplemented with potential relaxations that would fix the infeasibility found within the problem instance.We illustrate how this method may be applied to the ARCPSP and demonstrate how to generate different types of explanations for an over-constrained instance of the ARCPSP.",We develop a framework for generating human-understandable explanations for why infeasibility is occurring in over-constrained instances of a class of resource-constrained scheduling problems. 1111,Model-based Saliency for the Detection of Adversarial Examples,"Adversarial perturbations cause a shift in the salient features of an image, which may result in a misclassification.We demonstrate that gradient-based saliency approaches are unable to capture this shift, and develop a new defense which detects adversarial examples based on learnt saliency models instead.We study two approaches: a CNN trained to distinguish between natural and adversarial images using the saliency masks produced by our learnt saliency model, and a CNN trained on the salient pixels themselves as its input.On MNIST, CIFAR-10 and ASSIRA, our defenses are able to detect various adversarial attacks, including strong attacks such as C&W and DeepFool, contrary to gradient-based saliency and detectors which rely on the input image.The latter are unable to detect adversarial images when the L_2- and L_infinity-norms of the perturbations are too small.Lastly, we find that the salient-pixel-based detector improves on saliency-map-based detectors as it is more robust to white-box attacks.",We show that gradients are unable to capture shifts in saliency due to adversarial perturbations and present an alternative adversarial defense using learnt saliency models that is effective against both black-box and white-box attacks. 
1112,M^3RL: Mind-aware Multi-agent Management Reinforcement Learning,"Most of the prior work on multi-agent reinforcement learning achieves optimal collaboration by directly learning a policy for each agent to maximize a common reward.In this paper, we aim to address this from a different angle.In particular, we consider scenarios where there are self-interested agents which have their own minds and cannot be dictated to perform tasks they do not want to do.For achieving optimal coordination among these agents, we train a super agent to manage them by first inferring their minds based on both current and past observations and then initiating contracts to assign suitable tasks to workers and promise to reward them with corresponding bonuses so that they will agree to work together.The objective of the manager is to maximize the overall productivity as well as minimize payments made to the workers for ad-hoc worker teaming.To train the manager, we propose Mind-aware Multi-agent Management Reinforcement Learning, which consists of agent modeling and policy learning.We have evaluated our approach in two environments, Resource Collection and Crafting, to simulate multi-agent management problems with various task settings and multiple designs for the worker agents.The experimental results have validated the effectiveness of our approach in modeling worker agents' minds online, and in achieving optimal ad-hoc teaming with good generalization and fast adaptation.",We propose Mind-aware Multi-agent Management Reinforcement Learning (M^3RL) for training a manager to motivate self-interested workers to achieve optimal collaboration by assigning suitable contracts to them. 1113,Depth-Recurrent Residual Connections for Super-Resolution of Real-Time Renderings,"Inferring temporally coherent data features is crucial for a large variety of learning tasks.We propose a network architecture that introduces temporal recurrent connections for the internal state of the widely used residual blocks.We demonstrate that, with these connections, convolutional neural networks can more robustly learn stable temporal states that persist between evaluations.We demonstrate their potential for inferring high-quality super-resolution images from low-resolution images produced with real-time renderers.This data arises in a wide range of applications, and is particularly challenging as it contains a strongly aliased signal.Hence, the data differs substantially from the smooth inputs encountered in natural videos, and existing techniques do not succeed at producing acceptable image quality.We additionally propose a series of careful adjustments of typical generative adversarial architectures for video super-resolution to arrive at a first model that can produce detailed, yet temporally coherent images from an aliased stream of inputs from a real-time renderer.",A method for persistent latent states in ResBlocks demonstrated for super-resolution of aliased image sequences. 
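Entry 1113 above adds temporal recurrent connections to the internal state of residual blocks. The PyTorch block below is a minimal sketch of that idea, assuming one particular placement of the recurrence (a 1x1 convolution applied to the previous frame's hidden activation); the paper's exact architecture may differ.

```python
import torch
import torch.nn as nn

class DepthRecurrentResBlock(nn.Module):
    """Residual block that carries an internal state between evaluations (frames)."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.recur = nn.Conv2d(channels, channels, 1)  # mixes in the previous frame's state
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, state=None):
        h = self.act(self.conv1(x))
        if state is not None:
            h = h + self.recur(state)   # temporal recurrent connection on the internal state
        out = x + self.conv2(h)         # standard residual connection
        return out, h                   # h persists to the next evaluation

block = DepthRecurrentResBlock(8)
state = None
for frame in torch.randn(4, 1, 8, 16, 16):  # a short sequence of low-resolution frames
    out, state = block(frame, state)
```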
1114,Diversely Stale Parameters for Efficient Training of Deep Convolutional Networks,"The backpropagation algorithm is the most popular algorithm for training neural networks nowadays.However, it suffers from the forward locking, backward locking and update locking problems, especially when a neural network is so large that its layers are distributed across multiple devices.Existing solutions either can only handle one locking problem or lead to severe accuracy loss or memory inefficiency.Moreover, none of them consider the straggler problem among devices.In this paper, we propose a novel efficient training algorithm, Diversely Stale Parameters (DSP), which can address all these challenges without loss of accuracy or memory inefficiency.We also analyze the convergence of DSP with two popular gradient-based methods and prove that both of them are guaranteed to converge to critical points for non-convex problems.Finally, extensive experimental results on training deep convolutional neural networks demonstrate that our proposed DSP algorithm can achieve significant training speedup with stronger robustness and better generalization than compared methods.",We propose Diversely Stale Parameters to break lockings of the backpropagation algorithm and train a CNN in parallel. 1115,Enhancing Language Emergence through Empathy,"The emergence of language in multi-agent settings is a promising research direction to ground natural language in simulated agents.If an AI were able to understand the meaning of language through using it, it could also transfer it to other situations flexibly.That is seen as an important step towards achieving general AI.The scope of emergent communication is so far, however, still limited.It is necessary to enhance the learning possibilities for skills associated with communication to increase the complexity that can emerge.We take an example from human language acquisition and the importance of the empathic connection in this process.We propose an approach to introduce the notion of empathy to multi-agent deep reinforcement learning.We extend existing approaches on referential games with an auxiliary task for the speaker to predict the listener's mind change, improving the learning time.Our experiments show the high potential of this architectural element by doubling the learning speed of the test setup.",An auxiliary prediction task can speed up learning in language emergence setups. 1116,Learning Mahalanobis Metric Spaces via Geometric Approximation Algorithms,"Learning Mahalanobis metric spaces is an important problem that has found numerous applications.Several algorithms have been designed for this problem, including Information Theoretic Metric Learning [Davis et al. 2007] and Large Margin Nearest Neighbor classification [Weinberger and Saul 2009]. We consider a formulation of Mahalanobis metric learning as an optimization problem, where the objective is to minimize the number of violated similarity/dissimilarity constraints. We show that for any fixed ambient dimension, there exists a fully polynomial time approximation scheme with nearly-linear running time.This result is obtained using tools from the theory of linear programming in low dimensions.We also discuss improvements of the algorithm in practice, and present experimental results on synthetic and real-world data sets.Our algorithm is fully parallelizable and performs favorably in the presence of adversarial noise.",Fully parallelizable and adversarial-noise resistant metric learning algorithm with theoretical guarantees. 
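Entry 1116 above minimizes the number of violated similarity/dissimilarity constraints under a Mahalanobis metric. The sketch below makes that objective concrete; the single distance threshold separating similar from dissimilar pairs is an assumed simplification of the constraint format.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y) for a PSD matrix M."""
    d = x - y
    return float(d @ M @ d)

def violated_constraints(pairs, similar, M, threshold=1.0):
    """Count violated constraints: similar pairs should fall below the threshold,
    dissimilar pairs above it (the quantity the entry's objective minimizes)."""
    count = 0
    for (x, y), sim in zip(pairs, similar):
        d2 = mahalanobis_sq(x, y, M)
        if (sim and d2 > threshold) or (not sim and d2 <= threshold):
            count += 1
    return count

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
pairs = [(X[i], X[j]) for i in range(5) for j in range(5, 10)]
similar = [bool(rng.random() < 0.5) for _ in pairs]
print(violated_constraints(pairs, similar, M=np.eye(3)))  # identity M = plain Euclidean metric
```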
1117,Engaging Image Captioning Via Personality,"Standard image captioning tasks such as COCO and Flickr30k are factual, neutral in tone and state the obvious.While such tasks are useful to verify that a machine understands the content of an image, they are not engaging to humans as captions. With this in mind we define a new task, Personality-Captions, where the goal is to be as engaging to humans as possible by incorporating controllable style and personality traits.We collect and release a large dataset of 201,858 such captions conditioned over 215 possible traits. We build models that combine existing work from sentence representations with Transformers trained on 1.7 billion dialogue examples; and image representations with ResNets trained on 3.5 billion social media images. We obtain state-of-the-art performance on Flickr30k and COCO, and strong performance on our new task.Finally, online evaluations validate that our task and models are engaging to humans, with our best model close to human performance.",We develop engaging image captioning models conditioned on personality that are also state of the art on regular captioning tasks. 1118,DP-LSSGD: An Optimization Method to Lift the Utility in Privacy-Preserving ERM,"Machine learning models trained by differentially private stochastic gradient descent have much lower utility than the non-private ones.To mitigate this degradation, we propose a DP Laplacian smoothing SGD to train ML models with differential privacy guarantees.At the core of DP-LSSGD is the Laplacian smoothing, which smooths out the Gaussian noise used in the Gaussian mechanism.Under the same amount of noise used in the Gaussian mechanism, DP-LSSGD attains the same DP guarantee, but better utility, especially in scenarios with strong DP guarantees.In practice, DP-LSSGD makes training both convex and nonconvex ML models more stable and enables the trained models to generalize better.The proposed algorithm is simple to implement and the extra computational complexity and memory overhead compared with DP-SGD are negligible.DP-LSSGD is applicable to train a large variety of ML models, including DNNs.",We propose a differentially private Laplacian smoothing stochastic gradient descent to train machine learning models with better utility and maintain differential privacy guarantees. 1119,Robust One-Bit Recovery via ReLU Generative Networks: Improved Statistical Rate and Global Landscape Analysis,"We study the robust one-bit compressed sensing problem whose goal is to design an algorithm that faithfully recovers any sparse target vector from quantized noisy measurements.Under the assumption that the measurements are sub-Gaussian, the best known computationally tractable algorithm requires a number of measurements $m$ that grows polynomially in $1/\varepsilon$ to recover the target up to an error $\varepsilon$ with high probability.We instead assume the target vector lies in the range of an $n$-layer ReLU generative network $G:\mathbb{R}^k\rightarrow\mathbb{R}^d$, i.e. $\theta_0=G(x_0)$, and analyze both the number of measurements needed to recover $G(x_0)$ up to error $\varepsilon$ and the global landscape of the associated recovery problem, whose benign stationary points correspond to $x_0$ rather than its negative multiple.Our analysis sheds some light on the possibility of inverting a deep generative model under partial and quantized measurements, complementing the recent success of using deep generative models for inverse problems.",We provide statistical and computational analysis of one-bit compressed sensing problem with a generative prior. 
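Entry 1118 above smooths the Gaussian noise of DP-SGD with a Laplacian-smoothing operator. The sketch below shows one way such a step could look, assuming a 1-D circulant Laplacian applied to the flattened gradient via an FFT solve; the entry's exact operator, clipping rule and noise calibration are not reproduced here.

```python
import numpy as np

def laplacian_smooth(g, sigma=1.0):
    """Apply (I + sigma * L)^{-1} to a flattened gradient g, with L the 1-D discrete
    Laplacian under periodic boundary conditions. The matrix is circulant, so the
    solve reduces to a pointwise division in the Fourier domain."""
    n = g.size
    c = np.zeros(n)
    c[0], c[1], c[-1] = 1.0 + 2.0 * sigma, -sigma, -sigma  # first column of I + sigma*L
    return np.real(np.fft.ifft(np.fft.fft(g) / np.fft.fft(c)))

def dp_ls_step(w, grad, lr=0.1, clip=1.0, noise_std=0.5, sigma=1.0):
    """One illustrative step: clip the gradient, add Gaussian noise (the Gaussian
    mechanism), then smooth the noisy gradient before updating the weights."""
    g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    g = g + np.random.normal(scale=noise_std, size=g.shape)
    return w - lr * laplacian_smooth(g, sigma)

w = np.zeros(100)
w = dp_ls_step(w, np.random.randn(100))
```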
1120,Unsupervised Deep Structure Learning by Recursive Dependency Analysis,"We introduce an unsupervised structure learning algorithm for deep, feed-forward neural networks.We propose a new interpretation for depth and inter-layer connectivity where a hierarchy of independencies in the input distribution is encoded in the network structure.This results in structures allowing neurons to connect to neurons in any deeper layer, skipping intermediate layers.Moreover, neurons in deeper layers encode low-order independencies and have a wide scope of the input, whereas neurons in the first layers encode higher-order independencies and have a narrower scope.Thus, the depth of the network is automatically determined---equal to the maximal order of independence in the input distribution, which is the recursion-depth of the algorithm.The proposed algorithm constructs two main graphical models: 1) a generative latent graph learned from data and 2) a deep discriminative graph constructed from the generative latent graph.We prove that conditional dependencies between the nodes in the learned generative latent graph are preserved in the class-conditional discriminative graph.Finally, a deep neural network structure is constructed based on the discriminative graph.We demonstrate on image classification benchmarks that the algorithm replaces the deepest layers of common convolutional networks, achieving high classification accuracy, while constructing significantly smaller structures.The proposed structure learning algorithm requires a small computational cost and runs efficiently on a standard desktop CPU.",A principled approach for structure learning of deep neural networks with a new interpretation for depth and inter-layer connectivity. 1121,Achieving Strong Regularization for Deep Neural Networks,"L1 and L2 regularizers are critical tools in machine learning due to their ability to simplify solutions.However, imposing strong L1 or L2 regularization with gradient descent methods easily fails, and this limits the generalization ability of the underlying neural networks.To understand this phenomenon, we investigate how and why training fails for strong regularization.Specifically, we examine how gradients change over time for different regularization strengths and provide an analysis of why the gradients diminish so fast.We find that there exists a tolerance level of regularization strength, where the learning completely fails if the regularization strength goes beyond it.We propose a simple but novel method, Delayed Strong Regularization, in order to moderate the tolerance level.Experimental results show that our proposed approach indeed achieves strong regularization for both L1 and L2 regularizers and improves both accuracy and sparsity on public data sets.Our source code is published.",We investigate how and why strong L1/L2 regularization fails and propose a method that can achieve strong regularization. 
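Entry 1121 above proposes Delayed Strong Regularization. A minimal sketch of the training-loop change that name suggests is given below; the warm-up length, penalty strength and the choice of an L1 penalty are illustrative assumptions.

```python
import torch

def l1_penalty(model):
    return sum(p.abs().sum() for p in model.parameters())

def train_delayed(model, loader, epochs=10, lam=1e-3, warmup_epochs=3, lr=0.1):
    """Keep the penalty off for the first few epochs, then switch the strong L1
    term on once training is under way."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for epoch in range(epochs):
        strength = 0.0 if epoch < warmup_epochs else lam
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y) + strength * l1_penalty(model)
            loss.backward()
            opt.step()

xs, ys = torch.randn(64, 10), torch.randint(0, 2, (64,))
loader = [(xs[i:i + 16], ys[i:i + 16]) for i in range(0, 64, 16)]
train_delayed(torch.nn.Linear(10, 2), loader)
```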
1122,The Conditional Entropy Bottleneck,"We present a new family of objective functions, which we term the Conditional Entropy Bottleneck.These objectives are motivated by the Minimum Necessary Information criterion.We demonstrate the application of CEB to classification tasks.We show that CEB gives: well-calibrated predictions; strong detection of challenging out-of-distribution examples and powerful whitebox adversarial examples; and substantial robustness to those adversaries.Finally, we report that CEB fails to learn from information-free datasets, providing a possible resolution to the problem of generalization observed in Zhang et al..",The Conditional Entropy Bottleneck is an information-theoretic objective function for learning optimal representations. 1123,Minimizing FLOPs to Learn Efficient Sparse Representations,"Deep representation learning has become one of the most widely adopted approaches for visual search, recommendation, and identification.Retrieval of such representations from a large database is however computationally challenging.Approximate methods based on learning compact representations, have been widely explored for this problem, such as locality sensitive hashing, product quantization, and PCA.In this work, in contrast to learning compact representations, we propose to learn high dimensional and sparse representations that have similar representational capacity as dense embeddings while being more efficient due to sparse matrix multiplication operations which can be much faster than dense multiplication.Following the key insight that the number of operations decreases quadratically with the sparsity of embeddings provided the non-zero entries are distributed uniformly across dimensions, we propose a novel approach to learn such distributed sparse embeddings via the use of a carefully constructed regularization function that directly minimizes a continuous relaxation of the number of floating-point operations incurred during retrieval.Our experiments show that our approach is competitive to the other baselines and yields a similar or better speed-vs-accuracy tradeoff on practical datasets.","We propose an approach to learn sparse high dimensional representations that are fast to search, by incorporating a surrogate of the number of operations directly into the loss function." 
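Entry 1123 above regularizes embeddings with a continuous relaxation of the number of floating-point operations incurred during retrieval. The snippet below sketches one such surrogate, assuming the mean absolute activation per dimension as the relaxation of the per-dimension nonzero probability; the paper's exact surrogate may be defined differently.

```python
import torch

def flops_surrogate(embeddings):
    """Differentiable surrogate for the retrieval cost of sparse embeddings.

    If p_j is the fraction of vectors with a nonzero entry in dimension j, the
    expected number of multiplications in a sparse dot product scales as
    sum_j p_j^2; replacing the indicator with the mean absolute activation
    gives the continuous relaxation used in this sketch."""
    mean_abs = embeddings.abs().mean(dim=0)   # one value per embedding dimension
    return (mean_abs ** 2).sum()

emb = torch.relu(torch.randn(256, 128, requires_grad=True))  # batch of sparse embeddings
retrieval_loss = torch.zeros(())                              # placeholder for the task loss
loss = retrieval_loss + 1e-3 * flops_surrogate(emb)
loss.backward()
```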
1124,Guiding Physical Intuition with Neural Stethoscopes,"Model interpretability and systematic, targeted model adaptation present central challenges in deep learning.In the domain of intuitive physics, we study the task of visually predicting the stability of block towers with the goal of understanding and influencing the model's reasoning.Our contributions are two-fold.Firstly, we introduce neural stethoscopes as a framework for quantifying the degree of importance of specific factors of influence in deep networks as well as for actively promoting and suppressing information as appropriate.In doing so, we unify concepts from multitask learning as well as training with auxiliary and adversarial losses.Secondly, we deploy the stethoscope framework to provide an in-depth analysis of a state-of-the-art deep neural network for stability prediction, specifically examining its physical reasoning.We show that the baseline model is susceptible to being misled by incorrect visual cues.This leads to a performance breakdown to the level of random guessing when training on scenarios where visual cues are inversely correlated with stability.Using stethoscopes to promote meaningful feature extraction increases performance from 51% to 90% prediction accuracy.Conversely, training on an easy dataset where visual cues are positively correlated with stability, the baseline model learns a bias leading to poor performance on a harder dataset.Using an adversarial stethoscope, the network is successfully de-biased, leading to a performance increase from 66% to 88%.",Combining auxiliary and adversarial training to interrogate and help physical understanding. 1125,A RAD approach to deep mixture models,"Flow-based models such as Real NVP are an extremely powerful approach to density estimation.However, existing flow-based models are restricted to transforming continuous densities over a continuous input space into similarly continuous distributions over continuous latent variables.This makes them poorly suited for modeling and representing discrete structures in data distributions, for example class membership or discrete symmetries.To address this difficulty, we present a normalizing flow architecture which relies on domain partitioning using locally invertible functions, and possesses both real- and discrete-valued latent variables. 
This Real and Discrete approach retains the desirable normalizing flow properties of exact sampling, exact inference, and analytically computable probabilities, while at the same time allowing simultaneous modeling of both continuous and discrete structure in a data distribution.","Flow-based models, but non-invertible, to also learn discrete variables" 1126,Small nonlinearities in activation functions create bad local minima in neural networks,"We investigate the loss surface of neural networks.We prove that even for one-hidden-layer networks with the ""slightest"" nonlinearity, the empirical risks have spurious local minima in most cases.Our results thus indicate that in general ""no spurious local minima"" is a property limited to deep linear networks, and insights obtained from linear networks may not be robust.Specifically, for ReLU networks we constructively prove that for almost all practical datasets there exist infinitely many local minima.We also present a counterexample for more general activations, for which there exists a bad local minimum.Our results make the least restrictive assumptions relative to existing results on spurious local optima in neural networks.We complete our discussion by presenting a comprehensive characterization of global optimality for deep linear networks, which unifies other results on this topic.","We constructively prove that even the slightest nonlinear activation functions introduce spurious local minima, for general datasets and activation functions." 1127,Composable Planning with Attributes,"The tasks that an agent will need to solve often aren’t known during training.However, if the agent knows which properties of the environment we consider important, then after learning how its actions affect those properties the agent may be able to use this knowledge to solve complex tasks without training specifically for them.Towards this end, we consider a setup in which an environment is augmented with a set of user-defined attributes that parameterize the features of interest.We propose a model that learns a policy for transitioning between “nearby” sets of attributes, and maintains a graph of possible transitions.Given a task at test time that can be expressed in terms of a target set of attributes, and a current state, our model infers the attributes of the current state and searches over paths through attribute space to get a high-level plan, and then uses its low-level policy to execute the plan.We show in grid-world games and 3D block stacking that our model is able to generalize to longer, more complex tasks at test time even when it only sees short, simple tasks at train time.","Compositional attribute-based planning that generalizes to long test tasks, despite being trained on short & simple tasks." 
1128,Meta-Learning without Memorization,"The ability to learn new concepts with small amounts of data is a critical aspect of intelligence that has proven challenging for deep learning methods.Meta-learning has emerged as a promising technique for leveraging data from previous tasks to enable efficient learning of new tasks.However, most meta-learning algorithms implicitly require that the meta-training tasks be mutually-exclusive, such that no single model can solve all of the tasks at once.For example, when creating tasks for few-shot image classification, prior work uses a per-task random assignment of image classes to N-way classification labels.If this is not done, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes. This requirement means that the user must take great care in designing the tasks, for example by shuffling labels or removing task identifying information from the inputs.In some domains, this makes meta-learning entirely inapplicable.In this paper, we address this challenge by designing a meta-regularization objective using information theory that places precedence on data-driven adaptation.This causes the meta-learner to decide what must be learned from the task training data and what should be inferred from the task testing input.By doing so, our algorithm can successfully use data from non-mutually-exclusive tasks to efficiently adapt to novel tasks.We demonstrate its applicability to both contextual and gradient-based meta-learning algorithms, and apply it in practical settings where applying standard meta-learning has been difficult.Our approach substantially outperforms standard meta-learning algorithms in these settings.","We identify and formalize the memorization problem in meta-learning and solve this problem with a novel meta-regularization method, which greatly expands the domain that meta-learning can be applicable to and effective on." 
1129,Meta-Learning with Warped Gradient Descent,"Learning an efficient update rule from data that promotes rapid learning of new tasks from the same distribution remains an open problem in meta-learning.Typically, previous works have approached this issue either by attempting to train a neural network that directly produces updates or by attempting to learn better initialisations or scaling factors for a gradient-based update rule.Both these approaches pose challenges.On one hand, directly producing an update forgoes a useful inductive bias and can easily lead to non-converging behaviour.On the other hand, approaches that try to control a gradient-based update rule typically resort to computing gradients through the learning process to obtain their meta-gradients, leading to methods that can not scale beyond few-shot task adaptation.In this work we propose Warped Gradient Descent, a method that intersects these approaches to mitigate their limitations.WarpGrad meta-learns an efficiently parameterised preconditioning matrix that facilitates gradient descent across the task distribution.Preconditioning arises by interleaving non-linear layers, referred to as warp-layers, between the layers of a task-learner.Warp-layers are meta-learned without backpropagating through the task training process in a manner similar to methods that learn to directly produce updates.WarpGrad is computationally efficient, easy to implement, and can scale to arbitrarily large meta-learning problems.We provide a geometrical interpretation of the approach and evaluate its effectiveness in a variety of settings, including few-shot, standard supervised, continual and reinforcement learning.","We propose a novel framework for meta-learning a gradient-based update rule that scales to beyond few-shot learning and is applicable to any form of learning, including continual learning." 1130,Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models,"In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification.However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images.We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks.Defense-GAN is trained to model the distribution of unperturbed images.At inference time, it finds a close output to a given image which does not contain the adversarial changes.This output is then fed to the classifier.Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure.It can also be used as a defense against any attack as it does not assume knowledge of the process for generating the adversarial examples.We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies.",Defense-GAN uses a Generative Adversarial Network to defend against white-box and black-box attacks in classification models. 
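Entry 1130 above defends a classifier by replacing the input with the closest output of a generator at inference time. A minimal PyTorch sketch of that projection step is shown below; the toy generator and classifier, the latent dimension, and the restart/step counts are illustrative stand-ins, not the paper's models or settings.

```python
import torch

def defense_gan_project(x, G, z_dim=64, steps=200, restarts=4, lr=0.05):
    """Find a generator output close to x: minimize ||G(z) - x||^2 over z with a
    few random restarts and return the best reconstruction, which is then fed to
    the classifier in place of the (possibly adversarial) input."""
    best, best_err = None, float("inf")
    for _ in range(restarts):
        z = torch.randn(1, z_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            err = ((G(z) - x) ** 2).sum()
            err.backward()
            opt.step()
        if err.item() < best_err:
            best, best_err = G(z).detach(), err.item()
    return best

# Toy stand-ins for a pre-trained generator and classifier.
G = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(), torch.nn.Linear(128, 784))
clf = torch.nn.Linear(784, 10)
x = torch.randn(1, 784)                         # possibly adversarial input
logits = clf(defense_gan_project(x, G))         # classify the purified reconstruction
```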
1131,Efficient Training on Very Large Corpora via Gramian Estimation,"We study the problem of learning similarity functions over very large corpora using neural network embedding models.These models are typically trained using SGD with random sampling of unobserved pairs, with a sample size that grows quadratically with the corpus size, making it expensive to scale.We propose new efficient methods to train these models without having to sample unobserved pairs.Inspired by matrix factorization, our approach relies on adding a global quadratic penalty and expressing this term as the inner-product of two generalized Gramians.We show that the gradient of this term can be efficiently computed by maintaining estimates of the Gramians, and develop variance reduction schemes to improve the quality of the estimates.We conduct large-scale experiments that show a significant improvement both in training time and generalization performance compared to sampling methods.","We develop efficient methods to train neural embedding models with a dot-product structure, by reformulating the objective function in terms of generalized Gram matrices, and maintaining estimates of those matrices." 1132,GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding,"For natural language understanding technology to be maximally useful, it must be able to process language in a way that is not exclusive to a single task, genre, or dataset.In pursuit of this objective, we introduce the General Language Understanding Evaluation benchmark, a collection of tools for evaluating the performance of models across a diverse set of existing NLU tasks.By including tasks with limited training data, GLUE is designed to favor and encourage models that share general linguistic knowledge across tasks.GLUE also includes a hand-crafted diagnostic test suite that enables detailed linguistic analysis of models.We evaluate baselines based on current methods for transfer and representation learning and find that multi-task training on all tasks performs better than training a separate model per task.However, the low absolute performance of our best model indicates the need for improved general NLU systems.",We present a multi-task benchmark and analysis platform for evaluating generalization in natural language understanding systems. 
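Entry 1131 above rewrites a global quadratic penalty over all unobserved pairs as an inner product of two small Gramians. The numpy check below illustrates the identity that makes this possible, sum over all pairs of (u_i . v_j)^2 equals the Frobenius inner product of the two Gram matrices, so the penalty can be computed without enumerating pairs; the paper's variance-reduced Gramian estimates are not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(size=(1000, 16))   # one embedding per row (e.g. users/queries)
V = rng.normal(size=(2000, 16))   # one embedding per row (e.g. items)

# Naive global quadratic penalty: sum over ALL pairs of squared dot products.
naive = np.sum((U @ V.T) ** 2)

# Gramian form: the same quantity as an inner product of two 16x16 Gram matrices,
# which never materializes the 1000 x 2000 pairwise score matrix.
gram = np.sum((U.T @ U) * (V.T @ V))

assert np.allclose(naive, gram)
print(naive, gram)
```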
1133,CM3: Cooperative Multi-goal Multi-stage Multi-agent Reinforcement Learning,"A variety of cooperative multi-agent control problems require agents to achieve individual goals while contributing to collective success.This multi-goal multi-agent setting poses difficulties for recent algorithms, which primarily target settings with a single global reward, due to two new challenges: efficient exploration for learning both individual goal attainment and cooperation for others’ success, and credit-assignment for interactions between actions and goals of different agents.To address both challenges, we restructure the problem into a novel two-stage curriculum, in which single-agent goal attainment is learned prior to learning multi-agent cooperation, and we derive a new multi-goal multi-agent policy gradient with a credit function for localized credit assignment.We use a function augmentation scheme to bridge value and policy functions across the curriculum.The complete architecture, called CM3, learns significantly faster than direct adaptations of existing algorithms on three challenging multi-goal multi-agent problems: cooperative navigation in difficult formations, negotiating multi-vehicle lane changes in the SUMO traffic simulator, and strategic cooperation in a Checkers environment.","A modular method for fully cooperative multi-goal multi-agent reinforcement learning, based on curriculum learning for efficient exploration and credit assignment for action-goal interactions." 1134,Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning,"We identify two issues with the family of algorithms based on the Adversarial Imitation Learning framework.The first problem is implicit bias present in the reward functions used in these algorithms.While these biases might work well for some environments, they can also lead to sub-optimal behavior in others.Secondly, even though these algorithms can learn from few expert demonstrations, they require a prohibitively large number of interactions with the environment in order to imitate the expert for many real-world applications.In order to address these issues, we propose a new algorithm called Discriminator-Actor-Critic that uses off-policy Reinforcement Learning to reduce policy-environment interaction sample complexity by an average factor of 10.Furthermore, since our reward function is designed to be unbiased, we can apply our algorithm to many problems without making any task-specific adjustments.",We address sample inefficiency and reward bias in adversarial imitation learning algorithms such as GAIL and AIRL.
1135,Siamese Capsule Networks,"Capsule Networks have shown encouraging results on de facto benchmark computer vision datasets such as MNIST, CIFAR and smallNORB.However, they are yet to be tested on tasks where the entities detected inherently have more complex internal representations and there are very few instances per class to learn from and where point-wise classification is not suitable.Hence, this paper carries out experiments on face verification in both controlled and uncontrolled settings that together address these points.In doing so we introduce Siamese Capsule Networks, a new variant that can be used for pairwise learning tasks.The model is trained using contrastive loss with l2-normalized capsule encoded pose features.We find that Siamese Capsule Networks perform well against strong baselines on both pairwise learning datasets, yielding best results in the few-shot learning setting where image pairs in the test set contain unseen subjects.",A variant of capsule networks that can be used for pairwise learning tasks. Results show that Siamese Capsule Networks work well in the few shot learning setting. 1136,DyNet: Dynamic Convolution for Accelerating Convolution Neural Networks,"The convolution operator is the core of convolutional neural networks and occupies the most computation cost.To make CNNs more efficient, many methods have been proposed to either design lightweight networks or compress models.Although some efficient network structures have been proposed, such as MobileNet or ShuffleNet, we find that there still exists redundant information between convolution kernels.To address this issue, we propose a novel dynamic convolution method named DyNet in this paper, which can adaptively generate convolution kernels based on image contents.To demonstrate the effectiveness, we apply DyNet on multiple state-of-the-art CNNs.The experiment results show that DyNet can reduce the computation cost remarkably, while maintaining the performance nearly unchanged.Specifically, for ShuffleNetV2, MobileNetV2, ResNet18 and ResNet50, DyNet reduces 40.0%, 56.7%, 68.2% and 72.4% FLOPs respectively while the Top-1 accuracy on ImageNet only changes by +1.0%, -0.27%, -0.6% and -0.08%.Meanwhile, DyNet further accelerates the inference speed of MobileNetV2, ResNet18 and ResNet50 by 1.87x, 1.32x and 1.48x on the CPU platform respectively.To verify the scalability, we also apply DyNet on the segmentation task, and the results show that DyNet can reduce 69.3% FLOPs while maintaining the Mean IoU on the segmentation task.",We propose a dynamic convolution method to significantly accelerate inference time of CNNs while maintaining the accuracy.
1137,Iteratively Training Look-Up Tables for Network Quantization,"Operating deep neural networks on devices with limited resources requires the reduction of their memory footprints and computational requirements.In this paper we introduce a training method, called look-up table quantization, which learns a dictionary and assigns each weight to one of the dictionary's values.We show that this method is very flexible and that many other techniques can be seen as special cases of LUT-Q.For example, we can constrain the dictionary trained with LUT-Q to generate networks with pruned weight matrices or restrict the dictionary to powers-of-two to avoid the need for multiplications.In order to obtain fully multiplier-less networks, we also introduce a multiplier-less version of batch normalization.Extensive experiments on image recognition and object detection tasks show that LUT-Q consistently achieves better performance than other methods with the same quantization bitwidth.","In this paper we introduce a training method, called look-up table quantization (LUT-Q), which learns a dictionary and assigns each weight to one of the dictionary's values" 1138,REVISITING NEGATIVE TRANSFER USING ADVERSARIAL LEARNING,"An unintended consequence of feature sharing is the model fitting to correlated tasks within the dataset, termed negative transfer. In this paper, we revisit the problem of negative transfer in the multitask setting and find that its corrosive effects are applicable to a wide range of linear and non-linear models, including neural networks.We first study the effects of negative transfer in a principled way and show that previously proposed counter-measures are insufficient, particularly for trainable features.We propose an adversarial training approach to mitigate the effects of negative transfer by viewing the problem in a domain adaptation setting.Finally, empirical results on multi-task attribute prediction on the AWA and CUB datasets further validate the need for correcting negative sharing in an end-to-end manner.",We look at negative transfer from a domain adaptation point of view to derive an adversarial learning algorithm. 1139,Reinforcement Learning via Replica Stacking of Quantum Measurements for the Training of Quantum Boltzmann Machines,"Recent theoretical and experimental results suggest the possibility of using current and near-future quantum hardware in challenging sampling tasks.In this paper, we introduce free-energy-based reinforcement learning as an application of quantum hardware.We propose a method for processing a quantum annealer’s measured qubit spin configurations in approximating the free energy of a quantum Boltzmann machine.We then apply this method to perform reinforcement learning on the grid-world problem using the D-Wave 2000Q quantum annealer.The experimental results show that our technique is a promising method for harnessing the power of quantum sampling in reinforcement learning tasks.",We train Quantum Boltzmann Machines using a replica stacking method and a quantum annealer to perform a reinforcement learning task.
1140,Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks,"Deep learning models are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations on benign inputs.However, under the black-box setting, most existing adversaries often have a poor transferability to attack other defense models.In this work, from the perspective of regarding the adversarial example generation as an optimization process, we propose two new methods to improve the transferability of adversarial examples, namely Nesterov Iterative Fast Gradient Sign Method and Scale-Invariant attack Method.NI-FGSM aims to adapt Nesterov accelerated gradient into the iterative attacks so as to effectively look ahead and improve the transferability of adversarial examples.SIM is based on our discovery of the scale-invariant property of deep learning models, which we leverage to optimize the adversarial perturbations over the scaled copies of the input images so as to avoid “overfitting” on the white-box model being attacked and generate more transferable adversarial examples.NI-FGSM and SIM can be naturally integrated to build a robust gradient-based attack to generate more transferable adversarial examples against the defense models.Empirical results on the ImageNet dataset demonstrate that our attack methods exhibit higher transferability and achieve higher attack success rates than state-of-the-art gradient-based attacks.",We proposed a Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and a Scale-Invariant attack Method (SIM) that can boost the transferability of adversarial examples for image classification. 1141,Probabilistic Binary Neural Networks,"Low bit-width weights and activations are an effective way of combating the increasing need for both memory and compute power of Deep Neural Networks.In this work, we present a probabilistic training method for Neural Networks with both binary weights and activations, called PBNet.By embracing stochasticity during training, we circumvent the need to approximate the gradient of functions for which the derivative is zero almost everywhere, while still obtaining a fully Binary Neural Network at test time.Moreover, it allows for anytime ensemble predictions for improved performance and uncertainty estimates by sampling from the weight distribution.Since all operations in a layer of the PBNet operate on random variables, we introduce stochastic versions of Batch Normalization and max pooling, which transfer well to a deterministic network at test time. We evaluate two related training methods for the PBNet: one in which activation distributions are propagated throughout the network, and one in which binary activations are sampled in each layer.Our experiments indicate that sampling the binary activations is an important element for stochastic training of binary Neural Networks.",We introduce a stochastic training method for training Binary Neural Networks with both binary weights and activations.
1142,Impact of the latent space on the ability of GANs to fit the distribution,"The goal of generative models is to model the underlying data distribution of a sample-based dataset.Our intuition is that an accurate model should in principle also include the sample-based dataset as part of its induced probability distribution.To investigate this, we look at fully trained generative models using the Generative Adversarial Networks framework and analyze the resulting generator on its ability to memorize the dataset.Further, we show that the size of the initial latent space is paramount to allow for an accurate reconstruction of the training data.This gives us a link to compression theory, where Autoencoders are used to lower bound the reconstruction capabilities of our generative model.Here, we observe similar results to the perception-distortion tradeoff.Given a small latent space, the AE produces low-quality and the GAN produces high-quality outputs from a perceptual viewpoint.In contrast, the distortion error is smaller for the AE.By increasing the dimensionality of the latent space, the distortion decreases for both models, but the perceptual quality only increases for the AE.",We analyze the impact of the latent space of fully trained generators by pseudo inverting them. 1143,Robust Authorship Verification with Transfer Learning,"We address the problem of open-set authorship verification, a classification task that consists of attributing texts of unknown authorship to a given author when the unknown documents in the test set are excluded from the training set.We present an end-to-end model-building process that is universally applicable to a wide variety of corpora with little to no modification or fine-tuning.It relies on transfer learning of a deep language model and uses a generative adversarial network and a number of text augmentation techniques to improve the model's generalization ability.The language model encodes documents of known and unknown authorship into a domain-invariant space, aligning document pairs as input to the classifier, while keeping them separate.The resulting embeddings are used to train an ensemble of recurrent and quasi-recurrent neural networks.The entire pipeline is bidirectional; forward and backward pass results are averaged.We perform experiments on four traditional authorship verification datasets, a collection of machine learning papers mined from the web, and a large Amazon-Reviews dataset.Experimental results surpass baseline and current state-of-the-art techniques, validating the proposed approach.",We propose an end-to-end model-building process that is universally applicable to a wide variety of authorship verification corpora and outperforms state-of-the-art with little to no modification or fine-tuning.
1144,Multi-Advisor Reinforcement Learning,"We consider tackling a single-agent RL problem by distributing it to learners.These learners, called advisors, endeavour to solve the problem from a different focus.Their advice, taking the form of action values, is then communicated to an aggregator, which is in control of the system.We show that the local planning method for the advisors is critical and that none of the ones found in the literature is flawless: the planning overestimates values of states where the other advisors disagree, and the planning is inefficient around danger zones.We introduce a novel approach and discuss its theoretical aspects.We empirically examine and validate our theoretical findings on a fruit collection task.",We consider tackling a single-agent RL problem by distributing it to learners. 1145,Multi-Domain Processing via Hybrid Denoising Networks for Speech Enhancement,"We present a hybrid framework that leverages the trade-off between temporal and frequency precision in audio representations to improve the performance of the speech enhancement task.We first show that conventional approaches using specific representations such as raw-audio and spectrograms are each effective at targeting different types of noise.By integrating both approaches, our model can learn multi-scale and multi-domain features, effectively removing noise existing in different regions of the time-frequency space in a complementary way.Experimental results show that the proposed hybrid model yields better performance and robustness than using each model individually.",A hybrid model utilizing both raw-audio and spectrogram information for speech enhancement tasks. 1146,A Hitchhiker’s Guide to Statistical Comparisons of Reinforcement Learning Algorithms,"Consistently checking the statistical significance of experimental results is the first mandatory step towards reproducible science.This paper presents a hitchhiker's guide to rigorous comparisons of reinforcement learning algorithms.After introducing the concepts of statistical testing, we review the relevant statistical tests and compare them empirically in terms of false positive rate and statistical power as a function of the sample size and effect size.We further investigate the robustness of these tests to violations of the most common hypotheses.Besides simulations, we compare empirical distributions obtained by running Soft-Actor Critic and Twin-Delayed Deep Deterministic Policy Gradient on Half-Cheetah.We conclude by providing guidelines and code to perform rigorous comparisons of RL algorithm performances.","This paper compares statistical tests for RL comparisons (false positive, statistical power), checks robustness to assumptions using simulated distributions and empirical distributions (SAC, TD3), provides guidelines for RL students and researchers."
1147,Phase Transitions for the Information Bottleneck in Representation Learning,"In the Information Bottleneck, when tuning the relative strength between compression and prediction terms, how do the two terms behave, and what's their relationship with the dataset and the learned representation?In this paper, we set out to answer these questions by studying multiple phase transitions in the IB objective: IB_β[p(z|x)] = I(X;Z) − βI(Y;Z), defined on the encoding distribution p(z|x) for input X, target Y and representation Z, where sudden jumps of dI/dβ and prediction accuracy are observed with increasing β.We introduce a definition for IB phase transitions as a qualitative change of the IB loss landscape, and show that the transitions correspond to the onset of learning new classes.Using second-order calculus of variations, we derive a formula that provides a practical condition for IB phase transitions, and draw its connection with the Fisher information matrix for parameterized models.We provide two perspectives to understand the formula, revealing that each IB phase transition is finding a component of maximum correlation between X and Y orthogonal to the learned representation, in close analogy with canonical-correlation analysis in linear settings.Based on the theory, we present an algorithm for discovering phase transition points.Finally, we verify that our theory and algorithm accurately predict phase transitions in categorical datasets, predict the onset of learning new classes and class difficulty in MNIST, and predict prominent phase transitions in CIFAR10.",We give a theoretical analysis of the Information Bottleneck objective to understand and predict observed phase transitions in the prediction vs. compression tradeoff. 1148,Locally adaptive activation functions with slope recovery term for deep and physics-informed neural networks,"We propose two approaches of locally adaptive activation functions, namely layer-wise and neuron-wise locally adaptive activation functions, which improve the performance of deep and physics-informed neural networks.The local adaptation of the activation function is achieved by introducing scalable hyper-parameters in each layer and for every neuron separately, and then optimizing them using the stochastic gradient descent algorithm.The introduction of the neuron-wise activation function acts like a vector activation function as opposed to the traditional scalar activation function given by fixed, global and layer-wise activations.In order to further increase the training speed, an activation-slope-based slope recovery term is added to the loss function, which further accelerates convergence, thereby reducing the training cost.For numerical experiments, a nonlinear discontinuous function is approximated using a deep neural network with layer-wise and neuron-wise locally adaptive activation functions with and without the slope recovery term and compared with its global counterpart.Moreover, the solution of the nonlinear Burgers equation, which exhibits steep gradients, is also obtained using the proposed methods.On the theoretical side, we prove that in the proposed method the gradient descent algorithms are not attracted to sub-optimal critical points or local minima under practical conditions on the initialization and learning rate.Furthermore, the proposed adaptive activation functions with the slope recovery are shown to accelerate the training process in standard deep learning benchmarks using CIFAR-10, CIFAR-100, SVHN, MNIST, KMNIST, Fashion-MNIST, and Semeion data sets with and
without data augmentation.",Proposing locally adaptive activation functions in deep and physics-informed neural networks for faster convergence 1149,Integer Networks for Data Compression with Latent-Variable Models,"We consider the problem of using variational latent-variable models for data compression.For such models to produce a compressed binary sequence, which is the universal data representation in a digital world, the latent representation needs to be subjected to entropy coding.Range coding as an entropy coding technique is optimal, but it can fail catastrophically if the computation of the prior differs even slightly between the sending and the receiving side.Unfortunately, this is a common scenario when floating point math is used and the sender and receiver operate on different hardware or software platforms, as numerical round-off is often platform dependent.We propose using integer networks as a universal solution to this problem, and demonstrate that they enable reliable cross-platform encoding and decoding of images using variational models.",We train variational models with quantized networks for computational determinism. This enables using them for cross-platform data compression. 1150,"Don’t Decay the Learning Rate, Increase the Batch Size","It is common practice to decay the learning rate.Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training.This procedure is successful for stochastic gradient descent, SGD with momentum, Nesterov momentum, and Adam.It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times.We can further reduce the number of parameter updates by increasing the learning rate and scaling the batch size.Finally, one can increase the momentum coefficient and scale, although this tends to slightly reduce the test accuracy.Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning.We train ResNet-50 on ImageNet to 76.1% validation accuracy in under 30 minutes.",Decaying the learning rate and increasing the batch size during training are equivalent. 1151,DCN+: Mixed Objective And Deep Residual Coattention for Question Answering,"Traditional models for question answering optimize using cross entropy loss, which encourages exact answers at the cost of penalizing nearby or overlapping answers that are sometimes equally accurate.We propose a mixed objective that combines cross entropy loss with self-critical policy learning, using rewards derived from word overlap to solve the misalignment between evaluation metric and optimization objective.In addition to the mixed objective, we introduce a deep residual coattention encoder that is inspired by recent work in deep self-attention and residual networks.Our proposals improve model performance across question types and input lengths, especially for long questions that require the ability to capture long-term dependencies.On the Stanford Question Answering Dataset, our model achieves state of the art results with 75.1% exact match accuracy and 83.1% F1, while the ensemble obtains 78.9% exact match accuracy and 86.0% F1.","We introduce the DCN+ with deep residual coattention and mixed-objective RL, which achieves state of the art performance on the Stanford Question Answering Dataset."
1152,NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search,"Neural architecture search has achieved breakthrough success in a great number of applications in the past few years.It could be time to take a step back and analyze the good and bad aspects in the field of NAS.A variety of algorithms search architectures under different search spaces.These searched architectures are trained using different setups, e.g., hyper-parameters, data augmentation, regularization.This raises a comparability problem when comparing the performance of various NAS algorithms.NAS-Bench-101 has shown success in alleviating this problem.In this work, we propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, results on multiple datasets, and more diagnostic information.NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithm.The design of our search space is inspired by the one used in the most popular cell-based searching algorithms, where a cell is represented as a directed acyclic graph.Each edge here is associated with an operation selected from a predefined operation set.For it to be applicable to all NAS algorithms, the search space defined in NAS-Bench-201 includes all possible architectures generated by 4 nodes and 5 associated operation options, which results in 15,625 neural cell candidates in total.The training log using the same setup and the performance for each architecture candidate are provided for three datasets.This allows researchers to avoid unnecessary repetitive training for selected architectures and focus solely on the search algorithm itself.The training time saved for every architecture also largely improves the efficiency of most NAS algorithms and presents a more computational-cost-friendly NAS community for a broader range of researchers.We provide additional diagnostic information such as fine-grained loss and accuracy, which can give inspiration to new designs of NAS algorithms.In further support of the proposed NAS-Bench-201, we have analyzed it from many aspects and benchmarked 10 recent NAS algorithms, verifying its applicability.",A NAS benchmark applicable to almost any NAS algorithms.
1153,Improving Generalization and Stability of Generative Adversarial Networks,"Generative Adversarial Networks are one of the most popular tools for learning complex high dimensional distributions.However, generalization properties of GANs have not been well understood.In this paper, we analyze the generalization of GANs in practical settings.We show that discriminators trained on discrete datasets with the original GAN loss have poor generalization capability and do not approximate the theoretically optimal discriminator.We propose a zero-centered gradient penalty for improving the generalization of the discriminator by pushing it toward the optimal discriminator.The penalty guarantees the generalization and convergence of GANs.Experiments on synthetic and large scale datasets verify our theoretical analysis.",We propose a zero-centered gradient penalty for improving generalization and stability of GANs 1154,"The GAN Landscape: Losses, Architectures, Regularization, and Normalization","Generative adversarial networks are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion.While they were successfully applied to many problems, training a GAN is a notoriously challenging task and requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial amount of “tricks”.The success in many practical applications coupled with the lack of a measure to quantify the failure modes of GANs resulted in a plethora of proposed losses, regularization and normalization schemes, and neural architectures.In this work we take a sober view of the current state of GANs from a practical perspective.We reproduce the current state of the art and go beyond fairly exploring the GAN landscape.We discuss common pitfalls and reproducibility issues, open-source our code on Github, and provide pre-trained models on TensorFlow Hub.",A sober view on the current state of GANs from a practical perspective 1155,Explain Your Move: Understanding Agent Actions Using Focused Feature Saliency,"As deep reinforcement learning is applied to more tasks, there is a need to visualize and understand the behavior of learned agents.Saliency maps explain agent behavior by highlighting the features of the input state that are most relevant for the agent in taking an action.Existing perturbation-based approaches to compute saliency often highlight regions of the input that are not relevant to the action taken by the agent.Our approach generates more focused saliency maps by balancing two aspects that capture different desiderata of saliency.The first captures the impact of perturbation on the relative expected reward of the action to be explained. The second downweights irrelevant features that alter the relative expected rewards of actions other than the action to be explained. We compare our approach with existing approaches on agents trained to play board games and Atari games. We show through illustrative examples, human studies, and automated evaluation methods that our approach generates saliency maps that are more interpretable for humans than existing approaches.","We propose a model-agnostic approach to explain the behaviour of black-box deep RL agents, trained to play Atari and board games, by highlighting relevant features of an input state."
1156,Randomness in Deconvolutional Networks for Visual Representation,"To understand the inner workings of deep neural networks and provide possible theoretical explanations, we study the deep representations through the untrained, random weight CNN-DCN architecture.As a convolutional AutoEncoder, CNN indicates the portion of a convolutional neural network from the input to an intermediate convolutional layer, and DCN indicates the corresponding deconvolutional portion.As compared with DCN training for a pre-trained CNN, training the DCN for a random-weight CNN converges more quickly and yields higher quality image reconstruction.Then, what happens for the overall random CNN-DCN?We obtain intriguing results that the image can be reconstructed with good quality.To gain more insight on the intermediate random representation, we investigate the impact of network width versus depth, number of random channels, and size of random kernels on the reconstruction quality, and provide theoretical justifications on empirical observations.We further provide a fast style transfer application using the random weight CNN-DCN architecture to show the potential of our observation.","We investigate the deep representation of untrained, random weight CNN-DCN architectures, and show their image reconstruction quality and possible applications." 1157,Efficient Inference on Deep Neural Networks by Dynamic Representations and Decision Gates," The current trade-off between depth and computational cost makes it difficult to adopt deep neural networks for many industrial applications, especially when computing power is limited.Here, we are inspired by the idea that, while deeper embeddings are needed to discriminate difficult samples, a large number of samples can be well discriminated via much shallower embeddings.In this study, we introduce the concept of decision gates, modules trained to decide whether a sample needs to be projected into a deeper embedding or if an early prediction can be made at the d-gate, thus enabling the computation of dynamic representations at different depths. The proposed d-gate modules can be integrated with any deep neural network and reduce the average computational cost of the deep neural networks while maintaining modeling accuracy.Experimental results show that leveraging the proposed d-gate modules led to a ~38% speed-up and ~39% FLOPS reduction on ResNet-101 and ~46% speed-up and ~36% FLOPS reduction on DenseNet-201 trained on the CIFAR10 dataset with only ~2% drop in accuracy.",This paper introduces a new dynamic feature representation approach to provide a more efficient way to do inference on deep neural networks. 1158,On Symmetry and Initialization for Neural Networks,"This work provides an additional step in the theoretical understanding of neural networks.We consider neural networks with one hidden layer and show that when learning symmetric functions, one can choose initial conditions so that standard SGD training efficiently produces generalization guarantees.We empirically verify this and show that this does not hold when the initial conditions are chosen at random.The proof of convergence investigates the interaction between the two layers of the network.Our results highlight the importance of using symmetry in the design of neural networks.","When initialized properly, neural networks can learn the simple class of symmetric functions; when initialized randomly, they fail. 
" 1159,Efficient Multi-Objective Neural Architecture Search via Lamarckian Evolution,"Architecture search aims at automatically finding neural architectures that are competitive with architectures designed by human experts.While recent approaches have achieved state-of-the-art predictive performance for image recognition, they are problematic under resource constraints for two reasons: the neural architectures found are solely optimized for high predictive performance, without penalizing excessive resource consumption; most architecture search methods require vast computational resources.We address the first shortcoming by proposing LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method.We address the second shortcoming by proposing a Lamarckian inheritance mechanism for LEMONADE which generates children networks that are warmstarted with the predictive performance of their trained parents.This is accomplished by using network morphism operators for generating children.The combination of these two contributions allows finding models that are on par with or even outperform different-sized NASNets, MobileNets, MobileNets V2 and Wide Residual Networks on CIFAR-10 and ImageNet64x64 within only one week on eight GPUs, which is about 20-40x less compute power than previous architecture search methods that yield state-of-the-art performance.",We propose a method for efficient Multi-Objective Neural Architecture Search based on Lamarckian inheritance and evolutionary algorithms. 1160,textTOvec: DEEP CONTEXTUALIZED NEURAL AUTOREGRESSIVE TOPIC MODELS OF LANGUAGE WITH DISTRIBUTED COMPOSITIONAL PRIOR,"We address two challenges of probabilistic topic modelling in order to better estimate the probability of a word in a given context, i.e., P(word | context): No Language Structure in Context: Probabilistic topic models ignore word order by summarizing a given context as a “bag-of-word” and consequently the semantics of words in the context is lost.In this work, we incorporate language structure by combining a neural autoregressive topic model with an LSTM-based language model in a single probabilistic framework.The LSTM-LM learns a vector-space representation of each word by accounting for word order in local collocation patterns, while the TM simultaneously learns a latent representation from the entire document.In addition, the LSTM-LM models complex characteristics of language, while the TM discovers the underlying thematic structure in a collection of documents.We unite two complementary paradigms of learning the meaning of word occurrences by combining a topic model and a language model in a unified probabilistic framework, named as ctx-DocNADE.
Limited Context and/or Smaller training corpus of documents: In settings with a small number of word occurrences in short text or data sparsity in a corpus of few documents, the application of TMs is challenging.We address this challenge by incorporating external knowledge into neural autoregressive topic models via a language modelling approach: we use word embeddings as input of an LSTM-LM with the aim to improve the word-topic mapping on a smaller and/or short-text corpus.The proposed DocNADE extension is named as ctx-DocNADEe.We present novel neural autoregressive topic model variants coupled with neural language models and embedding priors that consistently outperform state-of-the-art generative topic models in terms of generalization, interpretability and applicability over 6 long-text and 8 short-text datasets from diverse domains.",Unified neural model of topic and language modeling to introduce language structure in topic models for contextualized topic vectors 1161,Weightless: Lossy Weight Encoding For Deep Neural Network Compression,"The large memory requirements of deep neural networks strain the capabilities of many devices, limiting their deployment and adoption.Model compression methods effectively reduce the memory requirements of these models, usually through applying transformations such as weight pruning or quantization.In this paper, we present a novel scheme for lossy weight encoding which complements conventional compression techniques.The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors.Leveraging the ability of neural networks to tolerate these imperfections and by re-training around the errors, the proposed technique, Weightless, can compress DNN weights by up to 496x; with the same model accuracy, this results in up to a 1.51x improvement over the state-of-the-art.",We propose a new way to compress neural networks using probabilistic data structures. 1162,What Is in a Translation Unit? Comparing Character and Subword Representations Beyond Translation,"Recent work has shown that contextualized word representations derived from neural machine translation are a viable alternative to those from simple word prediction tasks.This is because the internal understanding that needs to be built in order to be able to translate from one language to another is much more comprehensive.Unfortunately, computational and memory limitations at present prevent NMT models from using large word vocabularies, and thus alternatives such as subword units and characters have been used.Here we study the impact of using different kinds of units on the quality of the resulting representations when used to model syntax, semantics, and morphology. We found that while representations derived from subwords are slightly better for modeling syntax, character-based representations are superior for modeling morphology and are also more robust to noisy input.","We study the impact of using different kinds of subword units on the quality of the resulting representations when used to model syntax, semantics, and morphology."
1163,Revisiting Fine-tuning for Few-shot Learning,"Few-shot learning is the process of learning novel classes using only a few examples and it remains a challenging task in machine learning.Many sophisticated few-shot learning algorithms have been proposed based on the notion that networks can easily overfit to novel examples if they are simply fine-tuned using only a few examples.In this study, we show that in the commonly used low-resolution mini-ImageNet dataset, the fine-tuning method achieves higher accuracy than common few-shot learning algorithms in the 1-shot task and nearly the same accuracy as that of the state-of-the-art algorithm in the 5-shot task.We then evaluate our method with more practical tasks, namely the high-resolution single-domain and cross-domain tasks.With both tasks, we show that our method achieves higher accuracy than common few-shot learning algorithms.We further analyze the experimental results and show that: 1) the retraining process can be stabilized by employing a low learning rate, 2) using adaptive gradient optimizers during fine-tuning can increase test accuracy, and 3) test accuracy can be improved by updating the entire network when a large domain-shift exists between base and novel classes.","An empirical study that provides a novel perspective on few-shot learning, in which a fine-tuning method shows comparable accuracy to more complex state-of-the-art methods in several classification tasks." 1164,Generating Realistic Stock Market Order Streams,"We propose an approach to generate realistic and high-fidelity stock market data based on generative adversarial networks.We model the order stream as a stochastic process with finite history dependence, and employ a conditional Wasserstein GAN to capture history dependence of orders in a stock market.We test our approach with actual market and synthetic data on a number of different statistics, and find the generated data to be close to real data.",We propose an approach to generate realistic and high-fidelity stock market data based on generative adversarial networks. 1165,Sign Bits Are All You Need for Black-Box Attacks,"We present a novel black-box adversarial attack algorithm with state-of-the-art model evasion rates for query efficiency under and metrics.It exploits a sign-based, rather than magnitude-based, gradient estimation approach that shifts the gradient estimation from continuous to binary black-box optimization.It adaptively constructs queries to estimate the gradient, one query relying upon the previous, rather than re-estimating the gradient each step with random query construction.Its reliance on sign bits yields a smaller memory footprint and it requires neither hyperparameter tuning nor dimensionality reduction.Further, its theoretical performance is guaranteed and it can characterize adversarial subspaces better than white-box gradient-aligned subspaces.On two public black-box attack challenges and a model robustly trained against transfer attacks, the algorithm's evasion rates surpass all submitted attacks.For a suite of published models, the algorithm is less failure-prone while spending fewer queries versus the best combination of state-of-the-art algorithms.For example, it evades a standard MNIST model using just queries on average.Similar performance is observed on a standard IMAGENET model with an average of queries.","We present a sign-based, rather than magnitude-based, gradient estimation approach that shifts gradient estimation from continuous to binary black-box optimization."
1166,Gating Revisited: Deep Multi-layer RNNs That Can Be Trained,"Recurrent Neural Networks are widely used models for sequence data.Just like for feedforward networks, it has become common to build “deep” RNNs, i.e., stack multiple recurrent layers to obtain higher-level abstractions of the data.However, this works only for a handful of layers.Unlike feedforward networks, stacking more than a few recurrent units usually hurts model performance, the reason being vanishing or exploding gradients during training.We investigate the training of multi-layer RNNs and examine the magnitude of the gradients as they propagate through the network.We show that, depending on the structure of the basic recurrent unit, the gradients are systematically attenuated or amplified, so that with increasing depth they tend to vanish or explode, respectively.Based on our analysis we design a new type of gated cell that better preserves gradient magnitude, and therefore makes it possible to train deeper RNNs.We experimentally validate our design with five different sequence modelling tasks on three different datasets.The proposed stackable recurrent cell allows for substantially deeper recurrent architectures, with improved performance.","We analyze the gradient propagation in deep RNNs and from our analysis, we propose a new multi-layer deep RNN." 1167,On the Convergence and Robustness of Batch Normalization,"Despite its empirical success, the theoretical underpinnings of the stability, convergence and acceleration properties of batch normalization remain elusive.In this paper, we attack this problem from a modelling perspective, where we perform a thorough theoretical analysis of BN applied to a simplified model: ordinary least squares.We discover that gradient descent on OLS with BN has interesting properties, including a scaling law, convergence for arbitrary learning rates for the weights, asymptotic acceleration effects, as well as insensitivity to the choice of learning rates.We then demonstrate numerically that these findings are not specific to the OLS problem and hold qualitatively for more complex supervised learning problems.This points to a new direction towards uncovering the mathematical principles that underlie batch normalization.",We mathematically analyze the effect of batch normalization on a simple model and obtain key new insights that apply to general supervised learning. 1168,Reward Constrained Policy Optimization,"Solving tasks in Reinforcement Learning is no easy feat.As the goal of the agent is to maximize the accumulated reward, it often learns to exploit loopholes and misspecifications in the reward signal resulting in unwanted behavior.While constraints may solve this issue, there is no closed form solution for general constraints.In this work we present a novel multi-timescale approach for constrained policy optimization, called ‘Reward Constrained Policy Optimization’, which uses an alternative penalty signal to guide the policy towards a constraint satisfying one.We prove the convergence of our approach and provide empirical evidence of its ability to train constraint satisfying policies.","For complex constraints in which it is not easy to estimate the gradient, we use the discounted penalty as a guiding signal. We prove that under certain assumptions it converges to a feasible solution."
1169,Target Acquisition for Handheld Virtual Panels in VR,"The Handheld Virtual Panel is the virtual panel attached to the non-dominant hand’s controller in virtual reality.The HVP is the go-to technique for enabling menus and toolboxes in VR devices.In this paper, we investigate target acquisition performance for the HVP as a function of four factors: target width, target distance, the direction of approach with respect to gravity, and the angle of approach.Our results show that all four factors have significant effects on user performance.Based on the results, we propose guidelines towards the ergonomic and performant design of the HVP interfaces.","The paper investigates target acquisition for handheld virtual panels in VR and shows that target width, distance, direction of approach with respect to gravity, and angle of approach, all impact user performance." 1170,iSparse: Output Informed Sparsification of Neural Networks,"Deep neural networks have demonstrated unprecedented success in various knowledge management applications.However, the networks created are often very complex, with large numbers of trainable edges which require extensive computational resources.We note that many successful networks nevertheless often contain large numbers of redundant edges.Moreover, many of these edges may have negligible contributions towards the overall network performance.In this paper, we propose a novel iSparse framework and experimentally show that we can sparsify the network by 30-50% without impacting the network performance.iSparse leverages a novel edge significance score, E, to determine the importance of an edge with respect to the final network output.Furthermore, iSparse can be applied either while training a model or on top of a pre-trained model, making it a retraining-free approach - leading to a minimal computational overhead.Comparisons of iSparse against PFEC, NISP, DropConnect, and Retraining-Free on benchmark datasets show that iSparse leads to effective network sparsifications.",iSparse eliminates irrelevant or insignificant network edges with minimal impact on network performance by determining edge importance w.r.t. the final network output. 1171,Learning Entity Representations for Few-Shot Reconstruction of Wikipedia Categories,"Language modeling tasks, in which words are predicted on the basis of a local context, have been very effective for learning word embeddings and context dependent representations of phrases.Motivated by the observation that efforts to code world knowledge into machine readable knowledge bases tend to be entity-centric, we investigate the use of a fill-in-the-blank task to learn context independent representations of entities from the contexts in which those entities were mentioned.We show that large scale training of neural models allows us to learn extremely high fidelity entity typing information, which we demonstrate with few-shot reconstruction of Wikipedia categories.Our learning approach is powerful enough to encode specialized topics such as Giro d’Italia cyclists.",We learn entity representations that can reconstruct Wikipedia categories with just a few exemplars.
1172,Variational autoencoders trained with q-deformed lower bounds,"Variational autoencoders have been successful at learning a low-dimensional manifold from high-dimensional data with complex dependencies.At their core, they consist of a powerful Bayesian probabilistic inference model, to capture the salient features of the data.In training, they exploit the power of variational inference, by optimizing a lower bound on the model evidence.The latent representation and the performance of VAEs are heavily influenced by the type of bound used as a cost function.Significant research work has been carried out into the development of tighter bounds than the original ELBO, to more accurately approximate the true log-likelihood.By leveraging the q-deformed logarithm in the traditional lower bounds, ELBO and IWAE, and the upper bound CUBO, we bring contributions to this direction of research.In this proof-of-concept study, we explore different ways of creating these q-deformed bounds that are tighter than the classical ones and we show improvements in the performance of such VAEs on the binarized MNIST dataset.","Using the q-deformed logarithm, we derive tighter bounds than IWAE, to train variational autoencoders." 1173,ON BREIMAN’S DILEMMA IN NEURAL NETWORKS: SUCCESS AND FAILURE OF NORMALIZED MARGINS,"A belief has long persisted in machine learning that enlargement of margins over training data accounts for the resistance of models to overfitting by increasing the robustness.Yet Breiman shows a dilemma: a uniform improvement on margin distribution does not necessarily reduce generalization error.In this paper, we revisit Breiman's dilemma in deep neural networks with recently proposed normalized margins using the Lipschitz constant bound by spectral norm products.With both simplified theory and extensive experiments, Breiman's dilemma is shown to rely on dynamics of normalized margin distributions, which reflect the trade-off between model expression power and data complexity.When the complexity of data is comparable to the model expression power in the sense that training and test data share similar phase transitions in normalized margin dynamics, two efficient ways are derived via classic margin-based generalization bounds to successfully predict the trend of generalization error.On the other hand, over-expressed models that exhibit uniform improvements on training normalized margins may lose such prediction power and fail to prevent overfitting.","Breiman's dilemma is shown in deep learning: improvement of margins of over-parameterized models may result in overfitting, and dynamics of normalized margin distributions are proposed to predict generalization error and identify such a dilemma. 
" 1174,End-to-End Multi-Domain Task-Oriented Dialogue Systems with Multi-level Neural Belief Tracker,"It has been an open research challenge to develop an end-to-end multi-domain task-oriented dialogue system, in which a human can converse with the dialogue agent to complete tasks in more than one domain.First, tracking belief states of multi-domain dialogues is difficult as the dialogue agent must obtain the complete belief states from all relevant domains, each of which can have shared slots common among domains as well as unique slots specifically for the domain only.Second, the dialogue agent must also process various types of information, including contextual information from dialogue context, decoded dialogue states of current dialogue turn, and queried results from a knowledge base, to semantically shape context-aware and task-specific responses to the human.To address these challenges, we propose an end-to-end neural architecture for task-oriented dialogues in multiple domains.We propose a novel Multi-level Neural Belief Tracker which tracks the dialogue belief states by learning signals at both slot and domain level independently.The representations are combined in a Late Fusion approach to form joint feature vectors of pairs.Following recent work in end-to-end dialogue systems, we incorporate the belief tracker with generation components to address end-to-end dialogue tasks.We achieve state-of-the-art performance on the MultiWOZ2.1 benchmark with 50.91% joint goal accuracy and competitive measures in task-completion and response generation.","We proposed an end-to-end dialogue system with a novel multi-level dialogue state tracker and achieved consistent performance on MultiWOZ2.1 in state tracking, task completion, and response generation performance." 1175,A Wasserstein Minimum Velocity Approach to Learning Unnormalized Models,"Score matching provides an effective approach to learning flexible unnormalized models, but its scalability is limited by the need to evaluate a second-order derivative. In this paper, we connect a general family of learning objectives including score matching to Wasserstein gradient flows.This connection enables us to design a scalable approximation to these objectives, with a form similar to single-step contrastive divergence.We present applications in training implicit variational and Wasserstein auto-encoders with manifold-valued priors.","We present a scalable approximation to a wide range of EBM objectives, and applications in implicit VAEs and WAEs" 1176,Variational Continual Learning,"This paper develops variational continual learning, a simple but general framework for continual learning that fuses online variational inference and recent advances in Monte Carlo VI for neural networks.The framework can successfully train both deep discriminative models and deep generative models in complex continual learning settings where existing tasks evolve over time and entirely new tasks emerge.Experimental results show that VCL outperforms state-of-the-art continual learning methods on a variety of tasks, avoiding catastrophic forgetting in a fully automatic way.",This paper develops a principled method for continual learning in deep models.
1177,Variational Recurrent Models for Solving Partially Observable Control Tasks,"In partially observable environments, deep reinforcement learning agents often suffer from unsatisfactory performance, since two problems need to be tackled together: how to extract information from the raw observations to solve the task, and how to improve the policy.In this study, we propose an RL algorithm for solving PO tasks.Our method comprises two parts: a variational recurrent model for modeling the environment, and an RL controller that has access to both the environment and the VRM.The proposed algorithm was tested in two types of PO robotic control tasks, those in which either coordinates or velocities were not observable and those that require long-term memorization.Our experiments show that the proposed algorithm achieved better data efficiency and/or learned more optimal policy than other alternative approaches in tasks in which unobserved states cannot be inferred from raw observations in a simple manner.",A deep RL algorithm for solving POMDPs by auto-encoding the underlying states using a variational recurrent model 1178,Reinforcement Learning Algorithm Selection,"This paper formalises the problem of online algorithm selection in the context of Reinforcement Learning.The setup is as follows: given an episodic task and a finite number of off-policy RL algorithms, a meta-algorithm has to decide which RL algorithm is in control during the next episode so as to maximize the expected return.The article presents a novel meta-algorithm, called Epochal Stochastic Bandit Algorithm Selection.Its principle is to freeze the policy updates at each epoch, and to leave a rebooted stochastic bandit in charge of the algorithm selection.Under some assumptions, a thorough theoretical analysis demonstrates its near-optimality considering the structural sampling budget limitations.ESBAS is first empirically evaluated on a dialogue task where it is shown to outperform each individual algorithm in most configurations.ESBAS is then adapted to a true online setting where algorithms update their policies after each transition, which we call SSBAS.SSBAS is evaluated on a fruit collection task where it is shown to adapt the stepsize parameter more efficiently than the classical hyperbolic decay, and on an Atari game, where it improves the performance by a wide margin.",This paper formalises the problem of online algorithm selection in the context of Reinforcement Learning. 
1179,WiSE-ALE: Wide Sample Estimator for Aggregate Latent Embedding,"In this paper, we present a new generative model for learning latent embeddings.Compared to the classical generative process, where each observed data point is generated from an individual latent variable, our approach assumes a global latent variable to generate the whole set of observed data points.We then propose a learning objective that is derived as an approximation to a lower bound to the data log likelihood, leading to our algorithm, WiSE-ALE.Compared to the standard ELBO objective, where the variational posterior for each data point is encouraged to match the prior distribution, the WiSE-ALE objective matches the averaged posterior, over all samples, with the prior, allowing the sample-wise posterior distributions to have a wider range of acceptable embedding mean and variance and leading to better reconstruction quality in the auto-encoding process.Through various examples and comparison to other state-of-the-art VAE models, we demonstrate that WiSE-ALE has excellent information embedding properties, whilst still retaining the ability to learn a smooth, compact representation.",We propose a new latent variable model to learn latent embeddings for some high-dimensional data. 1180,Adversarial Defense Via Data Dependent Activation Function and Total Variation Minimization,"We improve the robustness of deep neural nets to adversarial attacks by using an interpolating function as the output activation. This data-dependent activation function remarkably improves both classification accuracy and stability to adversarial perturbations.Together with the total variation minimization of adversarial images and augmented training, under the strongest attack, we achieve up to 20.6%, 50.7%, and 68.7% accuracy improvement w.r.t. the fast gradient sign method, iterative fast gradient sign method, and Carlini-Wagner L2 attacks, respectively. Our defense strategy is additive to many of the existing methods. We give an intuitive explanation of our defense strategy via analyzing the geometry of the feature space.For reproducibility, the code will be available on GitHub.","We propose strategies for adversarial defense based on a data-dependent activation function, total variation minimization, and training data augmentation" 1181,Large-Scale Visual Speech Recognition,"This work presents a scalable solution to continuous visual speech recognition.To achieve this, we constructed the largest existing visual speech recognition dataset, consisting of pairs of text and video clips of faces speaking.In tandem, we designed and trained an integrated lipreading system, consisting of a video processing pipeline that maps raw video to stable videos of lips and sequences of phonemes, a scalable deep neural network that maps the lip videos to sequences of phoneme distributions, and a production-level speech decoder that outputs sequences of words.The proposed system achieves a word error rate of 40.9% as measured on a held-out set.In comparison, professional lipreaders achieve either 86.4% or 92.9% WER on the same dataset when having access to additional types of contextual information.Our approach significantly improves on previous lipreading approaches, including variants of LipNet and of Watch, Attend, and Spell, which are only capable of 89.8% and 76.8% WER respectively.",This work presents a scalable solution to continuous visual speech recognition.
1182,Variational Autoencoders for Text Modeling without Weakening the Decoder,"Previous work has found it difficult to develop generative models based on variational autoencoders for text.To address the problem of the decoder ignoring information from the encoder, these previous models weaken the capacity of the decoder to force the model to use information from latent variables.However, this strategy is not ideal as it degrades the quality of generated text and increases the number of hyper-parameters.In this paper, we propose a new VAE for text utilizing a multimodal prior distribution, a modified encoder, and multi-task learning.We show our model can generate well-conditioned sentences without weakening the capacity of the decoder.Also, the multimodal prior distribution improves the interpretability of acquired representations.","We propose a model of variational autoencoders for text modeling without weakening the decoder, which improves the quality of text generation and interpretability of acquired representations." 1183,"To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression","Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model.Recent reports prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size.This hints at the possibility that the baseline models in these experiments are perhaps severely over-parameterized at the outset and a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model's dense connection structure, exposing a similar trade-off in model size and accuracy.We investigate these two distinct paths for model compression within the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models/datasets with minimal tuning and can be seamlessly incorporated within the training process.We compare the accuracy of large, but pruned models and their smaller, but dense counterparts with identical memory footprint.Across a broad range of neural network architectures, we find large-sparse models to consistently outperform small-dense models and achieve up to 10x reduction in number of non-zero parameters with minimal loss in accuracy.","We demonstrate that large, but pruned models (large-sparse) outperform their smaller, but dense (small-dense) counterparts with identical memory footprint."
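The gradual pruning technique described in entry 1183 ramps a layer's sparsity from an initial to a final value over training and masks the smallest-magnitude weights at each pruning step. Below is a minimal illustrative sketch of that idea; the cubic schedule shape and all constants are assumptions of this sketch, not the authors' released code.

```python
# Sketch of gradual magnitude pruning: ramp target sparsity with a cubic schedule,
# then mask the smallest-magnitude weights at each pruning step.
import numpy as np

def target_sparsity(step, s_init=0.0, s_final=0.9, begin=0, end=10000):
    """Cubic sparsity ramp between `begin` and `end` training steps (assumed shape)."""
    if step < begin:
        return s_init
    if step >= end:
        return s_final
    frac = (step - begin) / float(end - begin)
    return s_final + (s_init - s_final) * (1.0 - frac) ** 3

def magnitude_mask(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

# Example: prune a random weight matrix over a short schedule.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
for step in range(0, 10001, 2500):
    mask = magnitude_mask(W, target_sparsity(step))
    print(step, "kept fraction:", mask.mean())
```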
1184,Plug and Play Language Models: A Simple Approach to Controlled Text Generation,"Large transformer-based language models trained on huge text corpora have shown unparalleled generation capabilities.However, controlling attributes of the generated language is difficult without modifying the model architecture or fine-tuning on attribute-specific data and entailing the significant cost of retraining.We propose a simple alternative: the Plug and Play Language Model for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM.In the canonical scenario we present, the attribute models are simple classifiers consisting of a user-specified bag of words or a single learned layer with 100,000 times fewer parameters than the LM.Sampling entails a forward and backward pass in which gradients from the attribute model push the LM's hidden activations and thus guide the generation.Model samples demonstrate control over a range of topics and sentiment styles, and extensive automated and human annotated evaluations show attribute alignment and fluency.PPLMs are flexible in that any combination of differentiable attribute models may be used to steer text generation, which will allow for diverse and creative applications beyond the examples given in this paper.",We control the topic and sentiment of text generation (almost) without any training. 1185,Beyond Shared Hierarchies: Deep Multitask Learning through Soft Layer Ordering,"Existing deep multitask learning approaches align layers shared between tasks in a parallel ordering.Such an organization significantly constricts the types of shared structure that can be learned.The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared layers.The results indicate that a flexible ordering can enable more effective sharing, thus motivating the development of a soft ordering approach, which learns how shared layers are applied in different ways for different tasks.Deep MTL with soft ordering outperforms parallel ordering methods across a series of domains.These results suggest that the power of deep MTL comes from learning highly general building blocks that can be assembled to meet the demands of each task.",Relaxing the constraint of shared hierarchies enables more effective deep multitask learning.
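Entry 1184 describes steering generation by back-propagating an attribute classifier's gradient into the language model's hidden activations. The toy sketch below illustrates only that update step; the stand-in linear modules, step size, and gradient normalization are assumptions of mine, not the released PPLM code.

```python
# Toy sketch of attribute-guided steering: nudge a hidden state along the gradient of
# an attribute classifier's log-likelihood before decoding the next token.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
hidden_dim, vocab, n_attr = 32, 100, 2
attr_clf = torch.nn.Linear(hidden_dim, n_attr)   # tiny stand-in attribute model
lm_head = torch.nn.Linear(hidden_dim, vocab)     # stand-in LM output head

def steer_hidden(h, target_attr=1, step_size=0.03, n_steps=3):
    """Perturb hidden state h so the attribute classifier prefers `target_attr`."""
    delta = torch.zeros_like(h, requires_grad=True)
    for _ in range(n_steps):
        logp = F.log_softmax(attr_clf(h + delta), dim=-1)[..., target_attr].sum()
        grad, = torch.autograd.grad(logp, delta)
        delta = (delta + step_size * grad / (grad.norm() + 1e-10)).detach().requires_grad_(True)
    return (h + delta).detach()

h = torch.randn(1, hidden_dim)
next_token_plain = lm_head(h).argmax(-1)
next_token_steered = lm_head(steer_hidden(h)).argmax(-1)
print(next_token_plain.item(), next_token_steered.item())
```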
1186,Confidence Calibration in Deep Neural Networks through Stochastic Inferences,"We propose a generic framework to calibrate accuracy and confidence of a prediction through stochastic inferences in deep neural networks.We first analyze relation between variation of multiple model parameters for a single example inference and variance of the corresponding prediction scores by Bayesian modeling of stochastic regularization.Our empirical observation shows that accuracy and score of a prediction are highly correlated with variance of multiple stochastic inferences given by stochastic depth or dropout.Motivated by these facts, we design a novel variance-weighted confidence-integrated loss function that is composed of two cross-entropy loss terms with respect to ground-truth and uniform distribution, which are balanced by variance of stochastic prediction scores.The proposed loss function enables us to learn deep neural networks that predict confidence calibrated scores using a single inference.Our algorithm presents outstanding confidence calibration performance and improves classification accuracy with two popular stochastic regularization techniques---stochastic depth and dropout---in multiple models and datasets; it alleviates overconfidence issue in deep neural networks significantly by training networks to achieve prediction accuracy proportional to confidence of prediction.",We propose a framework to learn confidence-calibrated networks by designing a novel loss function that incorporates predictive uncertainty estimated through stochastic inferences. 1187,"Learning Particle Dynamics for Manipulating Rigid Bodies, Deformable Objects, and Fluids","Real-life control tasks involve matters of various substances---rigid or soft bodies, liquid, gas---each with distinct physical behaviors.This poses challenges to traditional rigid-body physics engines.Particle-based simulators have been developed to model the dynamics of these complex scenes; however, relying on approximation techniques, their simulation often deviates from real-world physics, especially in the long term.In this paper, we propose to learn a particle-based simulator for complex control tasks.Combining learning with particle-based systems brings in two major benefits: first, the learned simulator, just like other particle-based systems, acts widely on objects of different materials; second, the particle-based representation poses strong inductive bias for learning: particles of the same type have the same dynamics within.This enables the model to quickly adapt to new environments of unknown dynamics within a few observations.We demonstrate robots achieving complex manipulation tasks using the learned simulator, such as manipulating fluids and deformable foam, with experiments both in simulation and in the real world.Our study helps lay the foundation for robot learning of dynamic scenes with particle-based representations.","Learning particle dynamics with dynamic interaction graphs for simulating and control rigid bodies, deformable objects, and fluids. 
" 1188,Towards Effective GANs for Data Distributions with Diverse Modes,"Generative Adversarial Networks, when trained on large datasets with diverse modes, are known to produce conflated images which do not distinctly belong to any of the modes.We hypothesize that this problem occurs due to the interaction between two facts: For datasets with large variety, it is likely that the modes lie on separate manifolds."" The generator is formulated as a continuous function, and the input noise is derived from a connected set, due to which G's output is a connected set."", ""If G covers all modes, then there must be some portion of G's output which connects them."", 'This corresponds to undesirable, conflated images.We develop theoretical arguments to support these intuitions.We propose a novel method to break the second assumption via learnable discontinuities in the latent noise space.Equivalently, it can be viewed as training several generators, thus creating discontinuities in the G function.We also augment the GAN formulation with a classifier C that predicts which noise partition/generator produced the output images, encouraging diversity between each partition/generator.We experiment on MNIST, celebA, STL-10, and a difficult dataset with clearly distinct modes, and show that the noise partitions correspond to different modes of the data distribution, and produce images of superior quality.",We introduce theory to explain the failure of GANs on complex datasets and propose a solution to fix it. 1189,Disentangling Correlated Speaker and Noise for Speech Synthesis via Data Augmentation and Adversarial Factorization,"To leverage crowd-sourced data to train multi-speaker text-to-speech models that can synthesize clean speech for all speakers, it is essential to learn disentangled representations which can independently control the speaker identity and background noise in generated signals.However, learning such representations can be challenging, due to the lack of labels describing the recording conditions of each training example, and the fact that speakers and recording conditions are often correlated, e.g. since users often make many recordings using the same equipment.This paper proposes three components to address this problem by: formulating a conditional generative model with factorized latent variables, using data augmentation to add noise that is not correlated with speaker identity and whose label is known during training, and using adversarial factorization to improve disentanglement.Experimental results demonstrate that the proposed method can disentangle speaker and noise attributes even if they are correlated in the training data, and can be used to consistently synthesize clean speech for all speakers.Ablation studies verify the importance of each proposed component.","Data augmentation and adversarial training are very effective for disentangling correlated speaker and noise, enabling independent control of each attribute for text-to-speech synthesis." 
1190,Deep Frank-Wolfe For Neural Network Optimization,"Learning a deep neural network requires solving a challenging optimization problem: it is a high-dimensional, non-convex and non-smooth minimization problem with a large number of terms.The current practice in neural network optimization is to rely on the stochastic gradient descent algorithm or its adaptive variants.However, SGD requires a hand-designed schedule for the learning rate.In addition, its adaptive variants tend to produce solutions that generalize less well on unseen data than SGD with a hand-designed schedule.We present an optimization method that offers empirically the best of both worlds: our algorithm yields good generalization performance while requiring only one hyper-parameter.Our approach is based on a composite proximal framework, which exploits the compositional nature of deep neural networks and can leverage powerful convex optimization algorithms by design.Specifically, we employ the Frank-Wolfe algorithm for SVM, which computes an optimal step-size in closed-form at each time-step.We further show that the descent direction is given by a simple backward pass in the network, yielding the same computational cost per iteration as SGD.We present experiments on the CIFAR and SNLI data sets, where we demonstrate the significant superiority of our method over Adam, Adagrad, as well as the recently proposed BPGrad and AMSGrad.Furthermore, we compare our algorithm to SGD with a hand-designed learning rate schedule, and show that it provides similar generalization while often converging faster.The code is publicly available at https://github.com/oval-group/dfw.",We train neural networks by locally linearizing them and using a linear SVM solver (Frank-Wolfe) at each iteration. 1191,VUSFA: Variational Universal Successor Features Approximator,"In this paper, we show how novel transfer reinforcement learning techniques can be applied to the complex task of target-driven navigation using the photorealistic AI2THOR simulator.Specifically, we build on the concept of Universal Successor Features with an A3C agent.We introduce the novel architectural contribution of a Successor Feature Dependent Policy and adopt the concept of Variational Information Bottlenecks to achieve state of the art performance.VUSFA, our final architecture, is a straightforward approach that can be implemented using our open source repository.Our approach is generalizable, showed greater stability in training, and outperformed recent approaches in terms of transfer learning ability.",We present an improved version of a Universal Successor Features based DRL method which can improve the transfer learning of agents. 1192,Overcoming the Disentanglement vs Reconstruction Trade-off via Jacobian Supervision,"A major challenge in learning image representations is the disentangling of the factors of variation underlying the image formation. This is typically achieved with an autoencoder architecture where a subset of the latent variables is constrained to correspond to specific factors, and the rest of them are considered nuisance variables.This approach has an important drawback: as the dimension of the nuisance variables is increased, image reconstruction is improved, but the decoder has the flexibility to ignore the specified factors, thus losing the ability to condition the output on them.
In this work, we propose to overcome this trade-off by progressively growing the dimension of the latent code, while constraining the Jacobian of the output image with respect to the disentangled variables to remain the same. As a result, the obtained models are effective at both disentangling and reconstruction. We demonstrate the applicability of this method in both unsupervised and supervised scenarios for learning disentangled representations.In a facial attribute manipulation task, we obtain high quality image generation while smoothly controlling dozens of attributes with a single model.This is an order of magnitude more disentangled factors than state-of-the-art methods, while obtaining visually similar or superior results, and avoiding adversarial training.",A method for learning image representations that are good for both disentangling factors of variation and obtaining faithful reconstructions. 1193,Detecting Memorization in ReLU Networks,"We propose a new notion of 'non-linearity' of a network layer with respect to an input batch that is based on its proximity to a linear system, which is reflected in the non-negative rank of the activation matrix.We measure this non-linearity by applying non-negative factorization to the activation matrix.Considering batches of similar samples, we find that high non-linearity in deep layers is indicative of memorization.Furthermore, by applying our approach layer-by-layer, we find that the mechanism for memorization consists of distinct phases.We perform experiments on fully-connected and convolutional neural networks trained on several image and audio datasets.Our results demonstrate that as an indicator for memorization, our technique can be used to perform early stopping.",We use the non-negative rank of ReLU activation matrices as a complexity measure and show it (negatively) correlates with good generalization. 1194,Stochasticity and skip connections improve knowledge transfer,"Deep neural networks have achieved state-of-the-art performance in various fields, but they have to be scaled down to be used for real-world applications.As a means to reduce the size of a neural network while preserving its performance, knowledge transfer has attracted a lot of attention.One popular method of knowledge transfer is knowledge distillation, where softened outputs of a pre-trained teacher network help train student networks.Since KD was introduced, other transfer methods have been proposed, and they mainly focus on loss functions, activations of hidden layers, or additional modules to transfer knowledge well from teacher networks to student networks.In this work, we focus on the structure of a teacher network to get the effect of multiple teacher networks without additional resources.We propose changing the structure of a teacher network to have stochastic blocks and skip connections.In doing so, a teacher network becomes the aggregate of a huge number of paths.In the training phase, each sub-network is generated by dropping stochastic blocks randomly and used as a teacher network.This allows training the student network with multiple teacher networks and further enhances the student network using the same resources as a single teacher network.We verify that the proposed structure brings further improvement to student networks on benchmark datasets.",The goal of this paper is to get the effect of multiple teacher networks by exploiting stochastic blocks and skip connections.
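Entry 1193 above measures a layer's 'non-linearity' by how well its non-negative ReLU activation matrix is approximated by low-rank non-negative factorizations. The sketch below is my own illustration of that measurement on random stand-in activations, not the paper's code; the choice of ranks and NMF settings are assumptions.

```python
# Sketch: apply non-negative matrix factorization to a batch's ReLU activation matrix
# and track how quickly the reconstruction error drops as the factorization rank grows.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
activations = np.maximum(rng.normal(size=(200, 64)), 0.0)   # stand-in ReLU activations

def nonlinearity_profile(A, ranks=(1, 2, 4, 8, 16, 32)):
    """Relative NMF reconstruction error per rank; slow decay ~ high non-negative rank."""
    base = np.linalg.norm(A)
    errors = []
    for k in ranks:
        model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
        model.fit(A)
        errors.append(model.reconstruction_err_ / base)
    return dict(zip(ranks, errors))

print(nonlinearity_profile(activations))
```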
1195,A Comparative Study of Lexical and Semantic Emoji Suggestion Systems,"Emoji suggestion systems based on typed text have been proposed to encourage emoji usage and enrich text messaging; however, such systems’ actual effects on the chat experience remain unknown.We built an Android keyboard with both lexical and semantic emoji suggestion capabilities and compared these in two different studies.To investigate the effect of emoji suggestion in online conversations, we conducted a laboratory text-messaging study with 24 participants, and also a 15-day longitudinal field deployment with 18 participants.We found that lexical emoji suggestions increased emoji usage by 31.5% over a keyboard without suggestions, while semantic suggestions increased emoji usage by 125.1%.However, suggestion mechanisms did not affect the chatting experience significantly.From these studies, we formulate a set of design guidelines for future emoji suggestion systems that better support users’ needs.",We built an Android keyboard with both lexical (word-based) and semantic (meaning-based) emoji suggestion capabilities and compared their effects in two different chat studies. 1196,Self-Adversarial Learning with Comparative Discrimination for Text Generation,"Conventional Generative Adversarial Networks for text generation tend to have issues of reward sparsity and mode collapse that affect the quality and diversity of generated samples.To address these issues, we propose a novel self-adversarial learning paradigm for improving GANs' performance in text generation.In contrast to standard GANs that use a binary classifier as their discriminator to predict whether a sample is real or generated, SAL employs a comparative discriminator, which is a pairwise classifier for comparing the text quality between a pair of samples.During training, SAL rewards the generator when its currently generated sentence is found to be better than its previously generated samples.This self-improvement reward mechanism allows the model to receive credits more easily and avoid collapsing towards the limited number of real samples, which not only helps alleviate the reward sparsity issue but also reduces the risk of mode collapse.Experiments on text generation benchmark datasets show that our proposed approach substantially improves both the quality and the diversity, and yields more stable performance compared to the previous GANs for text generation.",We propose a self-adversarial learning (SAL) paradigm which improves the generator in a self-play fashion to improve GANs' performance in text generation.
1197,A novel method to determine the number of latent dimensions with SVD,"Determining the number of latent dimensions is a ubiquitous problem in machine learning.In this study, we introduce a novel method that relies on SVD to discover the number of latent dimensions.The general principle behind the method is to compare the curve of singular values of the SVD decomposition of a data set with the randomized data set curve.The inferred number of latent dimensions corresponds to the crossing point of the two curves.To evaluate our methodology, we compare it with competing methods such as Kaiser's eigenvalue-greater-than-one rule, Parallel Analysis, and Velicer's MAP test.We also compare our method with the Silhouette Width technique, which is used in different clustering methods to determine the optimal number of clusters.The results on synthetic data show that Parallel Analysis and our method have similar results and are more accurate than the other methods, and that our method yields slightly better results than Parallel Analysis for sparse data sets.","In this study, we introduce a novel method that relies on SVD to discover the number of latent dimensions." 1198,Using Explainability to Detect Adversarial Attacks,"Deep learning models are often sensitive to adversarial attacks, where carefully-designed input samples can cause the system to produce incorrect decisions.Here we focus on the problem of detecting attacks, rather than robust classification, since detecting that an attack occurs may be even more important than avoiding misclassification.We build on advances in explainability, where activity-map-like explanations are used to justify and validate decisions, by highlighting features that are involved with a classification decision.The key observation is that it is hard to create explanations for incorrect decisions. We propose EXAID, a novel attack-detection approach, which uses model explainability to identify images whose explanations are inconsistent with the predicted class. Specifically, we use SHAP, which uses Shapley values in the space of the input image, to identify which input features contribute to a class decision. Interestingly, this approach does not require modifying the attacked model, and it can be applied without modelling a specific attack. It can therefore be applied successfully to detect unfamiliar attacks that were unknown at the time the detection model was designed. We evaluate EXAID on two benchmark datasets, CIFAR-10 and SVHN, and against three leading attack techniques, FGSM, PGD and C&W.We find that EXAID improves over the SoTA detection methods by a large margin across a wide range of noise levels, improving detection from 70% to over 90% for small perturbations.","A novel adversarial detection approach, which uses explainability methods to identify images whose explanations are inconsistent with the predicted class.
" 1199,Scalable Model Compression by Entropy Penalized Reparameterization,"We describe a simple and general neural network weight compression approach, in which the network parameters are represented in a “latent” space, amounting to a reparameterization.This space is equipped with a learned probability model, which is used to impose an entropy penalty on the parameter representation during training, and to compress the representation using a simple arithmetic coder after training.Classification accuracy and model compressibility is maximized jointly, with the bitrate--accuracy trade-off specified by a hyperparameter.We evaluate the method on the MNIST, CIFAR-10 and ImageNet classification benchmarks using six distinct model architectures.Our results show that state-of-the-art model compression can be achieved in a scalable and general way without requiring complex procedures such as multi-stage training.",An end-to-end trainable model compression method optimizing accuracy jointly with the expected model size. 1200,Ada-Boundary: Accelerating the DNN Training via Adaptive Boundary Batch Selection,"Neural networks can converge faster with help from a smarter batch selection strategy.In this regard, we propose Ada-Boundary, a novel adaptive-batch selection algorithm that constructs an effective mini-batch according to the learning progress of the model.Our key idea is to present confusing samples what the true label is.Thus, the samples near the current decision boundary are considered as the most effective to expedite convergence.Taking advantage of our design, Ada-Boundary maintains its dominance in various degrees of training difficulty.We demonstrate the advantage of Ada-Boundary by extensive experiments using two convolutional neural networks for three benchmark data sets.The experiment results show that Ada-Boundary improves the training time by up to 31.7% compared with the state-of-the-art strategy and by up to 33.5% compared with the baseline strategy.",We suggest a smart batch selection technique called Ada-Boundary. 1201,Sound event classification using ontology-based neural networks,"State of the art sound event classification relies in neural networks to learn the associations between class labels and audio recordings within a dataset.These datasets typically define an ontology to create a structure that relates these sound classes with more abstract super classes.Hence, the ontology serves as a source of domain knowledge representation of sounds.However, the ontology information is rarely considered, and specially under explored to model neural network architectures.We propose two ontology-based neural network architectures for sound event classification.We defined a framework to design simple network architectures that preserve an ontological structure.The networks are trained and evaluated using two of the most common sound event classification datasets.Results show an improvement in classification performance demonstrating the benefits of including the ontological information.",We present ontology-based neural network architectures for sound event classification. 
1202,How much Position Information Do Convolutional Neural Networks Encode?,"In contrast to fully connected networks, Convolutional Neural Networks achieve efficiency by learning weights associated with local filters with a finite spatial extent.An implication of this is that a filter may know what it is looking at, but not where it is positioned in the image.Information concerning absolute position is inherently useful, and it is reasonable to assume that deep CNNs may implicitly learn to encode this information if there is a means to do so.In this paper, we test this hypothesis, revealing the surprising degree of absolute position information that is encoded in commonly used neural networks.A comprehensive set of experiments shows the validity of this hypothesis and sheds light on how and where this information is represented, while offering clues to where positional information is derived from in deep CNNs.","Our work shows positional information has been implicitly encoded in a network. This information is important for detecting position-dependent features, e.g. semantic and saliency." 1203,Multi-Task Learning for Semantic Parsing with Cross-Domain Sketch,"Semantic parsing, which maps a natural language sentence into a formal machine-readable representation of its meaning, is highly constrained by the limited annotated training data.Inspired by the idea of coarse-to-fine, we propose a general-to-detailed neural network by incorporating a cross-domain sketch among utterances and their logic forms.For utterances in different domains, the General Network will extract CDS using an encoder-decoder model in a multi-task learning setup.Then for some utterances in a specific domain, the Detailed Network will generate the detailed target parts using a sequence-to-sequence architecture with advanced attention to both the utterance and the generated CDS.Our experiments show that, compared to direct multi-task learning, CDS improves performance on the semantic parsing task, which converts users' requests into a meaning representation language.We also use experiments to illustrate that CDS works by adding some constraints to the target decoding process, which further proves the effectiveness and rationality of CDS.",General-to-detailed neural network (GDNN) with multi-task learning by incorporating a cross-domain sketch (CDS) for semantic parsing 1204,On Characterizing the Capacity of Neural Networks Using Algebraic Topology,"The learnability of different neural architectures can be characterized directly by computable measures of data complexity.In this paper, we reframe the problem of architecture selection as understanding how data determines the most expressive and generalizable architectures suited to that data, beyond inductive bias.After suggesting algebraic topology as a measure for data complexity, we show that the power of a network to express the topological complexity of a dataset in its decision boundary is a strictly limiting factor in its ability to generalize.We then provide the first empirical characterization of the topological capacity of neural networks.Our empirical analysis shows that at every level of dataset complexity, neural networks exhibit topological phase transitions and stratification.This observation allowed us to connect existing theory to empirically driven conjectures on the choice of architectures for single-hidden-layer neural networks.",We show that the learnability of different neural architectures can be characterized directly by computable measures of data complexity.
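One simple way to probe the kind of implicit position encoding discussed in entry 1202 is to freeze a convolutional feature extractor and train a small readout to regress a coordinate map from its features. The sketch below is a toy probe of my own construction (random images, a tiny zero-padded conv stack, and a 1x1-conv readout), not the paper's experimental protocol.

```python
# Toy position probe: can a 1x1-conv readout recover a horizontal coordinate map
# from the features of a frozen, zero-padded convolutional stack?
import torch
import torch.nn as nn

torch.manual_seed(0)
features = nn.Sequential(                   # stand-in for a frozen pretrained encoder
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
).requires_grad_(False)
readout = nn.Conv2d(16, 1, kernel_size=1)   # trainable position readout
opt = torch.optim.Adam(readout.parameters(), lr=1e-2)

H = W = 32
target = torch.linspace(0, 1, W).view(1, 1, 1, W).expand(1, 1, H, W)  # x-coordinate map

for step in range(200):
    x = torch.rand(8, 3, H, W)              # random stand-in images
    pred = readout(features(x))
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("final position-regression MSE:", loss.item())
```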
1205,Augmenting Genetic Algorithms with Deep Neural Networks for Exploring the Chemical Space,"Challenges in natural sciences can often be phrased as optimization problems.Machine learning techniques have recently been applied to solve such problems.One example in chemistry is the design of tailor-made organic materials and molecules, which requires efficient methods to explore the chemical space.We present a genetic algorithm that is enhanced with a neural network-based discriminator model to improve the diversity of generated molecules and at the same time steer the GA.We show that our algorithm outperforms other generative models in optimization tasks.We furthermore present a way to increase the interpretability of genetic algorithms, which helped us to derive design principles.",Tackling inverse design via genetic algorithms augmented with deep neural networks. 1206,How Does BERT Answer Questions? A Layer-Wise Analysis of Transformer Representations,"Bidirectional Encoder Representations from Transformers reach state-of-the-art results in a variety of Natural Language Processing tasks.However, understanding of their internal functioning is still insufficient and unsatisfactory.In order to better understand BERT and other Transformer-based models, we present a layer-wise analysis of BERT's hidden states.Unlike previous research, which mainly focuses on explaining Transformer models by their attention weights, we argue that hidden states contain equally valuable information.Specifically, our analysis focuses on models fine-tuned on the task of Question Answering as an example of a complex downstream task.We inspect how QA models transform token vectors in order to find the correct answer.To this end, we apply a set of general and QA-specific probing tasks that reveal the information stored in each representation layer.Our qualitative analysis of hidden state visualizations provides additional insights into BERT's reasoning process.Our results show that the transformations within BERT go through phases that are related to traditional pipeline tasks.The system can therefore implicitly incorporate task-specific information into its token representations.Furthermore, our analysis reveals that fine-tuning has little impact on the models' semantic abilities and that prediction errors can be recognized in the vector representations of even early layers.",We investigate hidden state activations of Transformer Models in Question Answering Tasks.
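A minimal layer-wise probing setup in the spirit of entry 1206 extracts BERT's hidden states at every layer and fits a simple classifier per layer to see where a given piece of information becomes linearly readable. The sketch below uses toy sentences, toy labels, and a logistic-regression probe of my own choosing; the paper's QA-specific probing tasks are more elaborate.

```python
# Sketch: extract BERT hidden states at every layer and fit one probe per layer.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True).eval()

sentences = ["the movie was great", "what time is it", "I loved this book", "where is the station"]
labels = [0, 1, 0, 1]                          # toy task: statement vs. question

with torch.no_grad():
    enc = tok(sentences, padding=True, return_tensors="pt")
    hidden_states = bert(**enc).hidden_states  # tuple: embeddings + one entry per layer

for layer, h in enumerate(hidden_states):
    cls_vectors = h[:, 0, :].numpy()           # [CLS] representation at this layer
    acc = LogisticRegression(max_iter=1000).fit(cls_vectors, labels).score(cls_vectors, labels)
    print(f"layer {layer:2d}: train probe accuracy = {acc:.2f}")
```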
1207,Reinforcement and Imitation Learning for Diverse Visuomotor Skills,"We propose a general deep reinforcement learning method and apply it to robot manipulation tasks.Our approach leverages demonstration data to assist a reinforcement learning agent in learning to solve a wide range of tasks, most of which were previously unsolved.We train visuomotor policies end-to-end to learn a direct mapping from RGB camera inputs to joint velocities.Our experiments indicate that our reinforcement and imitation approach can solve contact-rich robot manipulation tasks that neither state-of-the-art reinforcement learning nor imitation learning methods can solve alone.We also illustrate that these policies achieved zero-shot sim2real transfer by training with large visual and dynamics variations.",combine reinforcement learning and imitation learning to solve complex robot manipulation tasks from pixels 1208,BatchEnsemble: an Alternative Approach to Efficient Ensemble and Lifelong Learning,"Ensembles, where multiple neural networks are trained individually and their predictions are averaged, have been shown to be widely successful for improving both the accuracy and predictive uncertainty of single neural networks.However, an ensemble's cost for both training and testing increases linearly with the number of networks.In this paper, we propose BatchEnsemble, an ensemble method whose computational and memory costs are significantly lower than typical ensembles.BatchEnsemble achieves this by defining each weight matrix to be the Hadamard product of a shared weight among all ensemble members and a rank-one matrix per member.Unlike ensembles, BatchEnsemble is not only parallelizable across devices, where one device trains one member, but also parallelizable within a device, where multiple ensemble members are updated simultaneously for a given mini-batch.Across CIFAR-10, CIFAR-100, WMT14 EN-DE/EN-FR translation, and contextual bandits tasks, BatchEnsemble yields accuracy and uncertainties competitive with typical ensembles; the speedup at test time is 3X and the memory reduction is 3X for an ensemble of size 4.We also apply BatchEnsemble to lifelong learning, where on Split-CIFAR-100, BatchEnsemble yields comparable performance to progressive neural networks while having much lower computational and memory costs.We further show that BatchEnsemble can easily scale up to lifelong learning on Split-ImageNet, which involves 100 sequential learning tasks.","We introduced BatchEnsemble, an efficient method for ensembling and lifelong learning which can be used to improve the accuracy and uncertainty of any neural network like typical ensemble methods." 1209,Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models,"Deep neural networks have achieved impressive performance in handling complicated semantics in natural language, while mostly being treated as black boxes.To explain how the model handles compositional semantics of words and phrases, we study the hierarchical explanation problem.We highlight that the key challenge is to compute non-additive and context-independent importance for individual words and phrases.We show some prior efforts on hierarchical explanations, e.g.
contextual decomposition, do not satisfy the desired properties mathematically, leading to inconsistent explanation quality in different models.In this paper, we propose a formal way to quantify the importance of each word or phrase to generate hierarchical explanations.We modify contextual decomposition algorithms according to our formulation, and propose a model-agnostic explanation algorithm with competitive performance.Human evaluation and automatic metrics evaluation on both LSTM models and fine-tuned BERT Transformer models on multiple datasets show that our algorithms robustly outperform prior works on hierarchical explanations.We show our algorithms help explain compositionality of semantics, extract classification rules, and improve human trust of models.",We propose measurement of phrase importance and algorithms for hierarchical explanation of neural sequence model predictions 1210,Improving image generative models with human interactions,"GANs provide a framework for training generative models which mimic a data distribution.However, in many cases we wish to train a generative model to optimize some auxiliary objective function within the data it generates, such as making more aesthetically pleasing images.In some cases, these objective functions are difficult to evaluate, e.g. they may require human interaction.Here, we develop a system for efficiently training a GAN to increase a generic rate of positive user interactions, for example aesthetic ratings.To do this, we build a model of human behavior in the targeted domain from a relatively small set of interactions, and then use this behavioral model as an auxiliary loss function to improve the generative model.As a proof of concept, we demonstrate that this system is successful at improving positive interaction rates simulated from a variety of objectives, and characterize s","We describe how to improve an image generative model according to a slow- or difficult-to-evaluate objective, such as human feedback, which could have many applications, like making more aesthetic images." 1211,Selective Self-Training for semi-supervised Learning,"Semi-supervised learning is a study that efficiently exploits a large amount of unlabeled data to improve performance in conditions of limited labeled data.Most of the conventional SSL methods assume that the classes of unlabeled data are included in the set of classes of labeled data.In addition, these methods do not sort out useless unlabeled samples and use all the unlabeled data for learning, which is not suitable for realistic situations.In this paper, we propose an SSL method called selective self-training, which selectively decides whether to include each unlabeled sample in the training process.It is also designed to be applied to a more real situation where classes of unlabeled data are different from the ones of the labeled data.For the conventional SSL problems which deal with data where both the labeled and unlabeled samples share the same class categories, the proposed method not only performs comparable to other conventional SSL algorithms but also can be combined with other SSL algorithms.While the conventional methods cannot be applied to the new SSL problems where the separated data do not share the classes, our method does not show any performance degradation even if the classes of unlabeled data are different from those of the labeled data.","Our proposed algorithm does not use all of the unlabeled data for the training, and it rather uses them selectively." 
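The BatchEnsemble construction in entry 1208 above defines each member's weight matrix as the Hadamard product of one shared matrix and a member-specific rank-one matrix, which also admits a cheap vectorized forward pass. Below is a bare-bones sketch of that construction; the shapes, toy forward pass, and prediction averaging are my own illustration rather than the paper's implementation.

```python
# Sketch of BatchEnsemble-style weights: W_i = W_shared * outer(r_i, s_i) per member i.
import torch

torch.manual_seed(0)
d_in, d_out, n_members = 8, 4, 3
W_shared = torch.randn(d_out, d_in)               # shared across all ensemble members
r = torch.randn(n_members, d_out)                 # per-member output scaling vectors
s = torch.randn(n_members, d_in)                  # per-member input scaling vectors

def member_forward(x, i):
    """Forward pass of member i; equivalent to x @ (W_shared * outer(r_i, s_i)).T."""
    return ((x * s[i]) @ W_shared.T) * r[i]

x = torch.randn(5, d_in)
explicit = x @ (W_shared * torch.outer(r[0], s[0])).T
assert torch.allclose(member_forward(x, 0), explicit, atol=1e-5)

ensemble_mean = torch.stack([member_forward(x, i) for i in range(n_members)]).mean(0)
print(ensemble_mean.shape)
```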
1212,Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models,"Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions.Conditional generation enables interactive control, but creating new controls often requires expensive retraining.In this paper, we develop a method to condition generation without retraining the model.By post-hoc learning latent constraints (value functions that identify regions in latent space that generate outputs with desired attributes), we can conditionally sample from these regions with gradient-based optimization or amortized actor functions.Combining attribute constraints with a universal “realism” constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder.Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image.Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function.",A new approach to conditional generation by constraining the latent space of an unconditional generative model. 1213,Energy and Policy Considerations for Deep Learning in NLP,"Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data.These models have obtained notable gains in accuracy across many NLP tasks.However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption.As a result, these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware.In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP.Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice.",We quantify the energy cost in terms of money (cloud credits) and carbon footprint of training recently successful neural network models for NLP. Costs are high. 1214,PVAE: Learning Disentangled Representations with Intrinsic Dimension via Approximated L0 Regularization,"Many models based on the Variational Autoencoder have been proposed to achieve disentangled latent variables in inference.However, most current work focuses on designing powerful disentangling regularizers, while the number of dimensions given to the latent representation at initialization can severely influence the disentanglement.Thus, a pruning mechanism is introduced, aiming to automatically seek the intrinsic dimension of the data while promoting disentangled representations.The proposed method is validated on MPI3D and MNIST, where it advances state-of-the-art methods in disentanglement, reconstruction, and robustness.The code is provided at https://github.com/WeyShi/FYP-of-Disentanglement.",The Pruning VAE is proposed to search for disentangled variables with intrinsic dimension.
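Entry 1212 conditions an unconditional generative model by optimizing latent codes against learned value functions (attribute and realism critics) rather than retraining the decoder. The fragment below sketches only that latent-space optimization with untrained toy critics; the critic architectures, step counts, and weighting are assumptions of this sketch, not the paper's trained models.

```python
# Sketch: nudge prior samples z toward latent regions scored highly by attribute and
# realism value functions, using gradient-based optimization in latent space.
import torch

torch.manual_seed(0)
latent_dim = 16
attr_critic = torch.nn.Sequential(torch.nn.Linear(latent_dim, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
real_critic = torch.nn.Sequential(torch.nn.Linear(latent_dim, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))

def constrain_latent(z0, steps=50, lr=0.05, realism_weight=1.0):
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        score = attr_critic(z).mean() + realism_weight * real_critic(z).mean()
        opt.zero_grad(); (-score).backward(); opt.step()
    return z.detach()

z_prior = torch.randn(4, latent_dim)          # samples from the unconditional prior
z_cond = constrain_latent(z_prior)            # nudged toward the constrained region
print("attribute score before/after:",
      attr_critic(z_prior).mean().item(), attr_critic(z_cond).mean().item())
```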
1215,Learning To Generate Reviews and Discovering Sentiment,"We explore the properties of byte-level recurrent language models.When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts.Specifically, we find a single unit which performs sentiment analysis.These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank.They are also very data efficient.When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets.We also demonstrate the sentiment unit has a direct influence on the generative process of the model.Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment.",Byte-level recurrent language models learn high-quality domain specific representations of text. 1216,Revisiting Reweighted Wake-Sleep," Discrete latent-variable models, while applicable in a variety of settings, can often be difficult to learn.Sampling discrete latent variables can result in high-variance gradient estimators for two primary reasons:1) branching on the samples within the model, and2) the lack of a pathwise derivative for the samples.While current state-of-the-art methods employ control-variate schemes for the former and continuous-relaxation methods for the latter, their utility is limited by the complexities of implementing and training effective control-variate schemes and the necessity of evaluating many branch paths in the model.Here, we revisit the Reweighted Wake Sleep algorithm, and through extensive evaluations, show that it circumvents both these issues, outperforming current state-of-the-art methods in learning discrete latent-variable models.Moreover, we observe that, unlike the Importance-weighted Autoencoder, RWS learns better models and inference networks with increasing numbers of particles, and that its benefits extend to continuous latent-variable models as well.Our results suggest that RWS is a competitive, often preferable, alternative for learning deep generative models.",Empirical analysis and explanation of particle-based gradient estimators for approximate inference with deep generative models. 1217,DeepXML: Scalable & Accurate Deep Extreme Classification for Matching User Queries to Advertiser Bid Phrases,"The objective in deep extreme multi-label learning is to jointly learn feature representations and classifiers to automatically tag data points with the most relevant subset of labels from an extremely large label set.Unfortunately, state-of-the-art deep extreme classifiers are either not scalable or inaccurate for short text documents. This paper develops the DeepXML algorithm which addresses both limitations by introducing a novel architecture that splits training of head and tail labels. 
DeepXML increases accuracy by learning word embeddings on head labels and transferring them through a novel residual connection to data-impoverished tail labels; increasing the amount of negative training data available by extending state-of-the-art negative sub-sampling techniques; and re-ranking the set of predicted labels to eliminate the hardest negatives for the original classifier.All of these contributions are implemented efficiently by extending the highly scalable Slice algorithm for pretrained embeddings to learn the proposed DeepXML architecture.As a result, DeepXML could efficiently scale to problems involving millions of labels that were beyond the pale of state-of-the-art deep extreme classifiers, as it could be more than 10x faster at training than XML-CNN and AttentionXML.At the same time, DeepXML was also empirically determined to be up to 19% more accurate than leading techniques for matching search engine queries to advertiser bid phrases.",Scalable and accurate deep multi-label learning with millions of labels. 1218,ROBUST ESTIMATION VIA GENERATIVE ADVERSARIAL NETWORKS,"Robust estimation under Huber's contamination model has become an important topic in statistics and theoretical computer science.Rate-optimal procedures such as Tukey's median and other estimators based on statistical depth functions are impractical because of their computational intractability.In this paper, we establish an intriguing connection between f-GANs and various depth functions through the lens of f-Learning.Similar to the derivation of f-GAN, we show that these depth functions that lead to rate-optimal robust estimators can all be viewed as variational lower bounds of the total variation distance in the framework of f-Learning.This connection opens the door to computing robust estimators using tools developed for training GANs.In particular, we show that a JS-GAN that uses a neural network discriminator with at least one hidden layer is able to achieve the minimax rate of robust mean estimation under Huber's contamination model.Interestingly, the hidden layers of the neural net structure in the discriminator class are shown to be necessary for robust estimation.",GANs are shown to provide a new effective robust mean estimator against agnostic contamination, with both statistical optimality and practical tractability. 1219,State Space LSTM Models with Particle MCMC Inference,"Long Short-Term Memory is one of the most powerful sequence models.Despite the strong performance, however, it lacks the nice interpretability of state space models.In this paper, we present a way to combine the best of both worlds by introducing State Space LSTM, which generalizes the earlier work of combining topic models with LSTM.However, unlike that earlier work, we do not make any factorization assumptions in our inference algorithm.We present an efficient sampler based on the sequential Monte Carlo method that draws from the joint posterior directly.Experimental results confirm the superiority and stability of this SMC inference algorithm on a variety of domains.","We present State Space LSTM models, a combination of state space models and LSTMs, and propose an inference algorithm based on sequential Monte Carlo.
" 1220,From Amortised to Memoised Inference: Combining Wake-Sleep and Variational-Bayes for Unsupervised Few-Shot Program Learning,"Given a large database of concepts but only one or a few examples of each, can we learn models for each concept that are not only generalisable, but interpretable?In this work, we aim to tackle this problem through hierarchical Bayesian program induction.We present a novel learning algorithm which can infer concepts as short, generative, stochastic programs, while learning a global prior over programs to improve generalisation and a recognition network for efficient inference.Our algorithm, Wake-Sleep-Remember, combines gradient learning for continuous parameters with neurally-guided search over programs.We show that WSR learns compelling latent programs in two tough symbolic domains: cellular automata and Gaussian process kernels.We also collect and evaluate on a new dataset, Text-Concepts, for discovering structured patterns in natural text data.","We extend the wake-sleep algorithm and use it to learn to learn structured models from few examples, " 1221,DeepProteomics: Protein family classification using Shallow and Deep Networks,"The knowledge regarding the function of proteins is necessary as it gives a clear picture of biological processes.Nevertheless, there are many protein sequences found and added to the databases but lacks functional annotation.The laboratory experiments take a considerable amount of time for annotation of the sequences.This arises the need to use computational techniques to classify proteins based on their functions.In our work, we have collected the data from Swiss-Prot containing 40433 proteins which is grouped into 30 families.We pass it to recurrent neural network, long short term memory and gated recurrent unit model and compare it by applying trigram with deep neural network and shallow neural network on the same dataset.Through this approach, we could achieve maximum of around 78% accuracy for the classification of protein families.","Proteins, amino-acid sequences, machine learning, deep learning, recurrent neural network(RNN), long short term memory(LSTM), gated recurrent unit(GRU), deep neural networks" 1222,Robust Spoken Term Detection Automatically Adjusted for a Given Threshold,"Spoken term detection is the task of determining whether and where a given word or phrase appears in a given segment of speech.Algorithms for STD are often aimed at maximizing the gap between the scores of positive and negative examples.As such they are focused on ensuring that utterances where the term appears are ranked higher than utterances where the term does not appear.However, they do not determine a detection threshold between the two.In this paper, we propose a new approach for setting an absolute detection threshold for all terms by introducing a new calibrated loss function.The advantage of minimizing this loss function during training is that it aims at maximizing not only the relative ranking scores, but also adjusts the system to use a fixed threshold and thus enhances system robustness and maximizes the detection accuracy rates.We use the new loss function in the structured prediction setting and extend the discriminative keyword spotting algorithm for learning the spoken term detector with a single threshold for all terms.We further demonstrate the effectiveness of the new loss function by applying it on a deep neural Siamese network in a weakly supervised setting for template-based spoken term detection, again with a single 
fixed threshold.Experiments with the TIMIT, WSJ and Switchboard corpora showed that our approach not only improved the accuracy rates when a fixed threshold was used but also obtained higher Area Under Curve.","Spoken Term Detection, using structured prediction and deep networks, implementing a new loss function that both maximizes AUC and ranks according to a predefined threshold." 1223,Fair Resource Allocation in Federated Learning,"Federated learning involves jointly learning over massively distributed partitions of data generated on remote devices.Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices.In this work, we propose q-Fair Federated Learning, a novel optimization objective inspired by resource allocation strategies in wireless networks that encourages a more fair accuracy distribution across devices in federated networks.To solve q-FFL, we devise a scalable method, q-FedAvg, that can run in federated networks.We validate both the improved fairness and flexibility of q-FFL and the efficiency of q-FedAvg through simulations on federated datasets.","We propose a novel optimization objective that encourages fairness in heterogeneous federated networks, and develop a scalable method to solve it." 1224,PAIRWISE AUGMENTED GANS WITH ADVERSARIAL RECONSTRUCTION LOSS,"We propose a novel autoencoding model called Pairwise Augmented GANs.We train a generator and an encoder jointly and in an adversarial manner.The generator network learns to sample realistic objects.In turn, the encoder network is trained to map the true data distribution to the prior in latent space.To ensure good reconstructions, we introduce an augmented adversarial reconstruction loss.Here we train a discriminator to distinguish two types of pairs: an object paired with its augmentation, and an object paired with its reconstruction.We show that such an adversarial loss compares objects based on their content rather than on an exact match.We experimentally demonstrate that our model generates samples and reconstructions of quality competitive with the state of the art on the MNIST, CIFAR10 and CelebA datasets, and achieves good quantitative results on CIFAR10.",We propose a novel autoencoding model with augmented adversarial reconstruction loss. We introduce a new metric for content-based assessment of reconstructions.
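The q-FFL objective in entry 1223 reweights each device's contribution so that devices with higher loss receive larger weight; loosely, gradients are scaled by the device loss raised to the power q, with q = 0 recovering the usual average. The sketch below is a toy numerical illustration of that reweighting under my own assumptions (synthetic devices and a plain centralized gradient step rather than the actual q-FedAvg protocol).

```python
# Sketch of q-FFL-style reweighting: scale each device's gradient by its loss^q.
import numpy as np

rng = np.random.default_rng(0)
devices = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(4)]  # (X_k, y_k)
w = np.zeros(5)

def device_loss_and_grad(w, X, y):
    resid = X @ w - y
    return 0.5 * np.mean(resid ** 2), X.T @ resid / len(y)

def qffl_step(w, q=1.0, lr=0.1):
    grad = np.zeros_like(w)
    for X, y in devices:
        F_k, g_k = device_loss_and_grad(w, X, y)
        grad += (F_k ** q) * g_k          # q=0 is the plain average; larger q is "fairer"
    return w - lr * grad / len(devices)

for _ in range(100):
    w = qffl_step(w)
print("per-device losses:", [round(device_loss_and_grad(w, X, y)[0], 3) for X, y in devices])
```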
1225,Bayesian Deep Learning via Stochastic Gradient MCMC with a Stochastic Approximation Adaptation,"We propose a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables.Inspired by dropout, a popular tool for regularization and model ensemble, we assign sparse priors to the weights in deep neural networks in order to achieve automatic “dropout” and avoid over-fitting.By alternately sampling from the posterior distribution through stochastic gradient Markov Chain Monte Carlo and optimizing latent variables via stochastic approximation, the trajectory of the target weights is proved to converge to the true posterior distribution conditioned on optimal latent variables.This ensures a stronger regularization on the over-fitted parameter space and more accurate uncertainty quantification on the decisive variables.Simulations from large-p-small-n regressions showcase the robustness of this method when applied to models with latent variables.Additionally, its application to convolutional neural networks leads to state-of-the-art performance on MNIST and Fashion MNIST datasets and improved resistance to adversarial attacks.",a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables 1226,Baseline-corrected space-by-time non-negative matrix factorization for decoding single trial population spike trains,"Activity of populations of sensory neurons carries stimulus information in both the temporal and the spatial dimensions.This poses the question of how to compactly represent all the information that the population codes carry across all these dimensions.Here, we developed an analytical method to factorize a large number of retinal ganglion cells' spike trains into a robust low-dimensional representation that captures efficiently both their spatial and temporal information.In particular, we extended previously used single-trial space-by-time tensor decomposition based on non-negative matrix factorization to efficiently discount pre-stimulus baseline activity.On data recorded from retinal ganglion cells with strong pre-stimulus baseline, we showed that in situations where the stimulus elicits a strong change in firing rate, our extensions yield a boost in stimulus decoding performance.Our results thus suggest that taking into account the baseline can be important for finding a compact information-rich representation of neural activity.",We extended single-trial space-by-time tensor decomposition based on non-negative matrix factorization to efficiently discount pre-stimulus baseline activity that improves decoding performance on data with non-negligible baselines.
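The baseline-discounting idea in entry 1226 above can be illustrated with an ordinary non-negative matrix factorization. This is a toy sketch with made-up spike counts, not the paper's space-by-time tensor decomposition, and treating the first ten columns as pre-stimulus bins is an assumption made only for the example:

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    # Toy data: trials x (neuron, time-bin) spike counts with a constant baseline.
    counts = rng.poisson(lam=2.0, size=(50, 40)).astype(float)
    baseline = counts[:, :10].mean()                    # pre-stimulus bins (toy choice)
    corrected = np.clip(counts - baseline, 0.0, None)   # discount baseline, stay non-negative

    model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
    trial_factors = model.fit_transform(corrected)      # low-dimensional trial representation
    modules = model.components_                         # spatiotemporal "modules"
    print(trial_factors.shape, modules.shape)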
1227,Generative Adversarial Networks for Extreme Learned Image Compression,"We propose a framework for extreme learned image compression based on Generative Adversarial Networks, obtaining visually pleasing images at significantly lower bitrates than previous methods.This is made possible through our GAN formulation of learned compression combined with a generator/decoder which operates on the full-resolution image and is trained in combination with a multi-scale discriminator.Additionally, if a semantic label map of the original image is available, our method can fully synthesize unimportant regions in the decoded image such as streets and trees from the label map, therefore only requiring the storage of the preserved region and the semantic label map.A user study confirms that for low bitrates, our approach is preferred to state-of-the-art methods, even when they use more than double the bits.",GAN-based extreme image compression method using less than half the bits of the SOTA engineered codec while preserving visual quality 1228,Learning a set of interrelated tasks by using a succession of motor policies for a socially guided intrinsically motivated learner,"We propose an active learning algorithmic architecture, capable of organizing its learning process in order to achieve a field of complex tasks by learning sequences of primitive motor policies: Socially Guided Intrinsic Motivation with Procedure Babbling.The learner can generalize over its experience to continuously learn new outcomes, by choosing actively what and how to learn guided by empirical measures of its own progress.In this paper, we are considering the learning of a set of interrelated complex outcomes hierarchically organized.We introduce a new framework called "procedures", which enables the autonomous discovery of how to combine previously learned skills in order to learn increasingly more complex motor policies.Our architecture can actively decide which outcome to focus on and which exploration strategy to apply.Those strategies could be autonomous exploration, or active social guidance, where it relies on the expertise of a human teacher providing demonstrations at the learner's request.We show on a simulated environment that our new architecture is capable of tackling the learning of complex motor policies and of adapting the complexity of its policies to the task at hand.We also show that our "procedures" increase the agent's capability to learn complex tasks.",The paper describes a strategic intrinsically motivated learning algorithm which tackles the learning of complex motor policies. 1229,TPO: TREE SEARCH POLICY OPTIMIZATION FOR CONTINUOUS ACTION SPACES,"Monte Carlo Tree Search has achieved impressive results on a range of discrete environments, such as Go, Mario and Arcade games, but it has not yet fulfilled its true potential in continuous domains.In this work, we introduce TPO, a tree search based policy optimization method for continuous environments.TPO takes a hybrid approach to policy optimization. Building the MCTS tree in a continuous action space and updating the policy gradient using off-policy MCTS trajectories are non-trivial.To overcome these challenges, we propose limiting the tree search branching factor by drawing only a few action samples from the policy distribution and define a new loss function based on the trajectories’ mean and standard deviations. Our approach led to some non-intuitive findings.
MCTS training generally requires a large number of samples and simulations.However, we observed that bootstrapping tree search with a pre-trained policy allows us to achieve high quality results with a low MCTS branching factor and a small number of simulations.Without the proposed policy bootstrapping, continuous MCTS would require a much larger branching factor and simulation count, rendering it computationally prohibitive.In our experiments, we use PPO as our baseline policy optimization algorithm.TPO significantly improves the policy on nearly all of our benchmarks. For example, in complex environments such as Humanoid, we achieve a 2.5× improvement over the baseline algorithm.",We use MCTS to further optimize a bootstrapped policy for continuous action spaces under a policy iteration setting. 1230,On Variational Learning of Controllable Representations for Text without Supervision,"The variational autoencoder has found success in modelling the manifold of natural images on certain datasets, allowing meaningful images to be generated while interpolating or extrapolating in the latent code space, but it is unclear whether similar capabilities are feasible for text considering its discrete nature.In this work, we investigate the reason why unsupervised learning of controllable representations fails for text.We find that traditional sequence VAEs can learn disentangled representations through their latent codes to some extent, but they often fail to properly decode when the latent factor is being manipulated, because the manipulated codes often land in holes or vacant regions in the aggregated posterior latent space, which the decoding network is not trained to process.Both as a validation of the explanation and as a fix to the problem, we propose to constrain the posterior mean to a learned probability simplex, and perform manipulation within this simplex.Our proposed method mitigates the latent vacancy problem and achieves the first success in unsupervised learning of controllable representations for text.Empirically, our method significantly outperforms unsupervised baselines and is competitive with strong supervised approaches on text style transfer.Furthermore, when switching the latent factor during long sentence generation, our proposed framework can often complete the sentence in a seemingly natural way -- a capability that has never been attempted by previous methods.","why previous VAEs on text cannot learn controllable latent representation as on images, as well as a fix to enable the first success towards controlled text generation without supervision" 1231,A Model Cortical Network for Spatiotemporal Sequence Learning and Prediction,"In this paper we developed a hierarchical network model, called Hierarchical Prediction Network, to understand how spatiotemporal memories might be learned and encoded in a representational hierarchy for predicting future video frames.The model is inspired by the feedforward, feedback and lateral recurrent circuits in the mammalian hierarchical visual system.It assumes that spatiotemporal memories are encoded in the recurrent connections within each level and between different levels of the hierarchy.The model contains a feed-forward path that computes and encodes spatiotemporal features of successive complexity and a feedback path that projects interpretation from a higher level to the level below.Within each level, the feed-forward path and the feedback path intersect in a recurrent gated circuit that integrates their signals as
well as the circuit's internal memory states to generate a prediction of the incoming signals.The network learns by comparing the incoming signals with its prediction, updating its internal model of the world by minimizing the prediction errors at each level of the hierarchy. The network processes data in blocks of video frames rather than on a frame-to-frame basis. This allows it to learn relationships among movement patterns, yielding state-of-the-art performance in long range video sequence predictions in benchmark datasets.We observed that hierarchical interaction in the network introduces sensitivity to memories of global movement patterns even in the population representation of the units in the earliest level.Finally, we provided neurophysiological evidence, showing that neurons in the early visual cortex of awake monkeys exhibit very similar sensitivity and behaviors.These findings suggest that predictive self-supervised learning might be an important principle for representational learning in the visual cortex.",A new hierarchical cortical model for encoding spatiotemporal memory and video prediction 1232,Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep RL,"Saliency maps are often used to suggest explanations of the behavior of deep reinforcement learning agents.However, the explanations derived from saliency maps are often unfalsifiable and can be highly subjective.We introduce an empirical approach grounded in counterfactual reasoning to test the hypotheses generated from saliency maps and show that explanations suggested by saliency maps are often not supported by experiments.Our experiments suggest that saliency maps are best viewed as an exploratory tool rather than an explanatory tool.",Proposing a new counterfactual-based methodology to evaluate the hypotheses generated from saliency maps about deep RL agent behavior. 1233,All Neural Networks are Created Equal,"One of the unresolved questions in deep learning is the nature of the solutions that are being discovered.We investigate the collection of solutions reached by the same network architecture, with different random initialization of weights and random mini-batches.These solutions are shown to be rather similar - more often than not, each train and test example is either classified correctly by all the networks, or by none at all.
Surprisingly, all the network instances seem to share the same learning dynamics, whereby initially the same train and test examples are correctly recognized by the learned model, followed by other examples which are learned in roughly the same order.When extending the investigation to heterogeneous collections of neural network architectures, once again examples are seen to be learned in the same order irrespective of architecture, although the more powerful architecture may continue to learn and thus achieve higher accuracy.This pattern of results remains true even when the composition of classes in the test set is unrelated to the train set, for example, when using out of sample natural images or even artificial images.To show the robustness of these phenomena we provide an extensive summary of our empirical study, which includes hundreds of graphs describing tens of thousands of networks with varying NN architectures, hyper-parameters and domains.We also discuss cases where this pattern of similarity breaks down, which show that the reported similarity is not an artifact of optimization by gradient descent.Rather, the observed pattern of similarity is characteristic of learning complex problems with big networks.Finally, we show that this pattern of similarity seems to be strongly correlated with effective generalization.","Most neural networks approximate the same classification function, even across architectures, through all stages of learning." 1234,Wasserstein is all you need,"We propose a unified framework for building unsupervised representations of individual objects or entities, by associating with each object both a distributional as well as a point estimate.This is made possible by the use of optimal transport, which allows us to build these associated estimates while harnessing the underlying geometry of the ground space.Our method gives a novel perspective for building rich and powerful feature representations that simultaneously capture uncertainty and interpretability.As a guiding example, we formulate unsupervised representations for text, in particular for sentence representation and entailment detection.Empirical results show strong advantages gained through the proposed framework.This approach can be used for any unsupervised or supervised problem with a co-occurrence structure, such as any sequence data.The key tools underlying the framework are Wasserstein distances and Wasserstein barycenters.",Represent each entity based on its histogram of contexts and then Wasserstein is all you need! 
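Entry 1234 above represents each entity by a histogram over contexts and compares entities with Wasserstein distances over a ground space. A heavily simplified one-dimensional illustration follows (the paper works in a richer embedding space; the context positions and histograms here are invented for the example):

    import numpy as np
    from scipy.stats import wasserstein_distance

    context_positions = np.array([0.0, 1.0, 2.0, 3.0])   # toy 1-D "ground space"
    entity_a = np.array([0.7, 0.2, 0.1, 0.0])            # context histogram of entity A
    entity_b = np.array([0.0, 0.1, 0.2, 0.7])            # context histogram of entity B

    d = wasserstein_distance(context_positions, context_positions,
                             u_weights=entity_a, v_weights=entity_b)
    print("1-D Wasserstein distance between the two entity histograms:", d)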
1235,Interpretable Robust Recommender Systems with Side Information,"In this paper, we propose two methods, namely Trace-norm regression and Stable Trace-norm Analysis, to improve the performance of recommender systems with side information.Our trace-norm regression approach extracts low-rank latent factors underlying the side information that drive user preference under different contexts.Furthermore, our novel recommender framework StaTNA not only captures latent low-rank common drivers for user preferences, but also considers idiosyncratic taste for individual users.We compare the performance of TNR and StaTNA on the MovieLens datasets against state-of-the-art models, and demonstrate that StaTNA and TNR in general outperform these methods.",Methodologies for recommender systems with side information based on trace-norm regularization 1236,Exploration Based Language Learning for Text-Based Games,"This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text.These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents.Moreover, they provide a learning environment in which these skills can be acquired through interactions with an environment rather than using fixed corpora.One aspect that makes these games particularly challenging for learning agents is the combinatorially large action space.Existing methods for solving text-based games are limited to games that are either very simple or have an action space restricted to a predetermined set of admissible actions.In this work, we propose to use the exploration approach of Go-Explore for solving text-based games.More specifically, in an initial exploration phase, we first extract trajectories with high rewards, after which we train a policy to solve the game by imitating these trajectories.Our experiments show that this approach outperforms existing solutions in solving text-based games, and it is more sample efficient in terms of the number of interactions with the environment.Moreover, we show that the learned policy can generalize better than existing solutions to unseen games without using any restriction on the action space.",This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.
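Trace-norm (nuclear-norm) regularization of the kind used in entry 1235 above is typically handled with singular-value soft-thresholding as the proximal step. The sketch below shows that generic building block only, not the paper's TNR/StaTNA algorithm:

    import numpy as np

    def singular_value_soft_threshold(X, lam):
        # Proximal operator of lam * ||X||_*: shrink singular values toward zero,
        # which drives the iterate toward a low-rank solution.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 4))
    X_low_rank = singular_value_soft_threshold(X, lam=1.0)
    print(np.linalg.matrix_rank(X), "->", np.linalg.matrix_rank(X_low_rank))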
1237,"Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask","The recent “Lottery Ticket Hypothesis” paper by Frankle & Carbin showed that a simple approach to creating sparse networks results in models that are trainable from scratch, but only when starting from the same initial weights.The performance of these networks often exceeds the performance of the non-sparse base model, but for reasons that were not well understood.In this paper we study the three critical components of the Lottery Ticket algorithm, showing that each may be varied significantly without impacting the overall results.Ablating these factors leads to new insights for why LT networks perform as well as they do.We show why setting weights to zero is important, how signs are all you need to make the re-initialized network train, and why masking behaves like training.Finally, we discover the existence of Supermasks, or masks that can be applied to an untrained, randomly initialized network to produce a model with performance far better than chance.","In neural network pruning, zeroing pruned weights is important, sign of initialization is key, and masking can be thought of as training." 1238,SesameBERT: Attention for Anywhere,"Fine-tuning with pre-trained models has achieved exceptional results for many language tasks.In this study, we focused on one such self-attention network model, namely BERT, which has performed well in terms of stacking layers across diverse language-understanding benchmarks.However, in many downstream tasks, information between layers is ignored by BERT for fine-tuning.In addition, although self-attention networks are well-known for their ability to capture global dependencies, room for improvement remains in terms of emphasizing the importance of local contexts.In light of these advantages and disadvantages, this paper proposes SesameBERT, a generalized fine-tuning method that enables the extraction of global information among all layers through Squeeze and Excitation and enriches local information by capturing neighboring contexts via Gaussian blurring.Furthermore, we demonstrated the effectiveness of our approach in the HANS dataset, which is used to determine whether models have adopted shallow heuristics instead of learning underlying generalizations.The experiments revealed that SesameBERT outperformed BERT with respect to GLUE benchmark and the HANS evaluation set.","We proposed SesameBERT, a generalized fine-tuning method that enables the extraction of global information among all layers through Squeeze and Excitation and enriches local information by capturing neighboring contexts via Gaussian blurring." 
1239,Stable Opponent Shaping in Differentiable Games,"A growing number of learning methods are actually differentiable games whose players optimise multiple, interdependent objectives in parallel – from GANs and intrinsic curiosity to multi-agent RL.Opponent shaping is a powerful approach to improve learning dynamics in these games, accounting for player influence on others’ updates.Learning with Opponent-Learning Awareness is a recent algorithm that exploits this response and leads to cooperation in settings like the Iterated Prisoner’s Dilemma.Although experimentally successful, we show that LOLA agents can exhibit ‘arrogant’ behaviour directly at odds with convergence.In fact, remarkably few algorithms have theoretical guarantees applying across all games.In this paper we present Stable Opponent Shaping, a new method that interpolates between LOLA and a stable variant named LookAhead.We prove that LookAhead converges locally to equilibria and avoids strict saddles in all differentiable games.SOS inherits these essential guarantees, while also shaping the learning of opponents and consistently either matching or outperforming LOLA experimentally.",Opponent shaping is a powerful approach to multi-agent learning but can prevent convergence; our SOS algorithm fixes this with strong guarantees in all differentiable games. 1240,Domain-Relevant Embeddings for Question Similarity,"The rate at which medical questions are asked online significantly exceeds the capacity of qualified people to answer them, leaving many questions unanswered or inadequately answered.Many of these questions are not unique, and reliable identification of similar questions would enable more efficient and effective question answering schema.While many research efforts have focused on the problem of general question similarity, these approaches do not generalize well to the medical domain, where medical expertise is often required to determine semantic similarity.In this paper, we show how a semi-supervised approach of pre-training a neural network on medical question-answer pairs is a particularly useful intermediate task for the ultimate goal of determining medical question similarity.While other pre-training tasks yield an accuracy below 78.7% on this task, our model achieves an accuracy of 82.6% with the same number of training examples, an accuracy of 80.0% with a much smaller training set, and an accuracy of 84.5% when the full corpus of medical question-answer data is used.",We show that question-answer matching is a particularly good pre-training task for question-similarity and release a dataset for medical question similarity 1241,Sharing Knowledge in Multi-Task Deep Reinforcement Learning,"We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning.We leverage the assumption that learning from different tasks, sharing common properties, is helpful to generalize the knowledge of them resulting in a more effective feature extraction compared to learning a single task.Intuitively, the resulting set of features offers performance benefits when used by Reinforcement Learning algorithms.We prove this by providing theoretical guarantees that highlight the conditions for which is convenient to share representations among tasks, extending the well-known finite-time bounds of Approximate Value-Iteration to the multi-task setting.In addition, we complement our analysis by proposing multi-task extensions of three Reinforcement Learning 
algorithms that we empirically evaluate on widely used Reinforcement Learning benchmarks showing significant improvements over the single-task counterparts in terms of sample efficiency and performance.",A study on the benefit of sharing representation in Multi-Task Reinforcement Learning. 1242,Quaternion Equivariant Capsule Networks for 3D Point Clouds,"We present a 3D capsule architecture for processing of point clouds that is equivariant with respect to the SO(3) rotation group, translation and permutation of the unordered input sets.The network operates on a sparse set of local reference frames, computed from an input point cloud and establishes end-to-end equivariance through a novel 3D quaternion group capsule layer, including an equivariant dynamic routing procedure.The capsule layer enables us to disentangle geometry from pose, paving the way for more informative descriptions and a structured latent space.In the process, we theoretically connect the process of dynamic routing between capsules to the well-known Weiszfeld algorithm, a scheme for solving iterative re-weighted least squares problems with provable convergence properties, enabling robust pose estimation between capsule layers.Due to the sparse equivariant quaternion capsules, our architecture allows joint object classification and orientation estimation, which we validate empirically on common benchmark datasets.","Deep architectures for 3D point clouds that are equivariant to SO(3) rotations, as well as translations and permutations. " 1243,Exploring Sentence Vectors Through Automatic Summarization,"Vector semantics, especially sentence vectors, have recently been used successfully in many areas of natural language processing.However, relatively little work has explored the internal structure and properties of spaces of sentence vectors.In this paper, we will explore the properties of sentence vectors by studying a particular real-world application: Automatic Summarization.In particular, we show that cosine similarity between sentence vectors and document vectors is strongly correlated with sentence importance and that vector semantics can identify and correct gaps between the sentences chosen so far and the document.In addition, we identify specific dimensions which are linked to effective summaries.To our knowledge, this is the first time specific dimensions of sentence embeddings have been connected to sentence properties.We also compare the features of different methods of sentence embeddings.Many of these insights have applications in uses of sentence embeddings far beyond summarization.",A comparison and detailed analysis of various sentence embedding models through the real-world task of automatic summarization.
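The similarity signal described in entry 1243 above reduces to a cosine score between each sentence vector and a document vector. A minimal sketch with random vectors standing in for a real sentence-embedding model, and the document vector taken as the mean of its sentence vectors (an assumed, common choice):

    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    rng = np.random.default_rng(0)
    sentence_vectors = rng.normal(size=(8, 16))     # stand-ins for real sentence embeddings
    document_vector = sentence_vectors.mean(axis=0)

    # Rank sentences by similarity to the document vector, as a proxy for importance.
    scores = [cosine(s, document_vector) for s in sentence_vectors]
    print("ranking:", np.argsort(scores)[::-1])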
1244,Value Propagation Networks,"We present Value Propagation, a parameter-efficient differentiable planning module built on Value Iteration which can successfully be trained in a reinforcement learning fashion to solve unseen tasks, has the capability to generalize to larger map sizes, and can learn to navigate in dynamic environments.We evaluate on configurations of MazeBase grid-worlds, with randomly generated environments of several different sizes.Furthermore, we show that the module enables learning to plan when the environment also includes stochastic elements, providing a cost-efficient learning system to build low-level size-invariant planners for a variety of interactive navigation problems.","We propose Value Propagation, a novel end-to-end planner which can learn to solve 2D navigation tasks via Reinforcement Learning, and that generalizes to larger and dynamic environments." 1245,Extracting and Leveraging Feature Interaction Interpretations,"Recommendation is a prevalent application of machine learning that affects many users; therefore, it is crucial for recommender models to be accurate and interpretable.In this work, we propose a method to both interpret and augment the predictions of black-box recommender systems.In particular, we propose to extract feature interaction interpretations from a source recommender model and explicitly encode these interactions in a target recommender model, where both source and target models are black-boxes.By not assuming the structure of the recommender system, our approach can be used in general settings. In our experiments, we focus on a prominent use of machine learning recommendation: ad-click prediction.We found that our interaction interpretations are both informative and predictive, i.e., significantly outperforming existing recommender models.What's more, the same approach to interpreting interactions can provide new insights into domains even beyond recommendation.",Proposed a method to extract and leverage interpretations of feature interactions 1246,Learning Network Parameters in the ReLU Model,"Rectified linear units, or ReLUs, have become a preferred activation function for artificial neural networks.In this paper we consider the problem of learning a generative model in the presence of nonlinearity.Given a set of signal vectors, we aim to learn the network parameters, i.e., the matrix $A$, under the model $\mathbf{y}^i = \mathrm{ReLU}(A\mathbf{x}^i + \mathbf{b})$, where $\mathbf{b}$ is a random bias vector and $\mathbf{x}^i \in \mathbb{R}^k$.",We show that it is possible to recover the parameters of a 1-layer ReLU generative model from looking at samples generated by it 1247,Feat2Vec: Dense Vector Representation for Data with Arbitrary Features,"Methods that calculate dense vector representations for features in unstructured data—such as words in a document—have proven to be very successful for knowledge representation.We study how to estimate dense representations when multiple feature types exist within a dataset for supervised learning where explicit labels are available, as well as for unsupervised learning where there are no labels.Feat2Vec calculates embeddings for data with multiple feature types enforcing that all different feature types exist in a common space.In the supervised case, we show that our method has advantages over recently proposed methods, such as enabling higher prediction accuracy, and providing a way to avoid the cold-start problem.In the unsupervised case, our experiments suggest that Feat2Vec significantly outperforms existing algorithms that do not leverage the structure of
the data.We believe that we are the first to propose a method for learning unsupervised embeddings that leverage the structure of multiple feature types.",Learn dense vector representations of arbitrary types of features in labeled and unlabeled datasets 1248,Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks ,"We investigate the internal representations that a recurrent neural network uses while learning to recognize a regular formal language.Specifically, we train an RNN on positive and negative examples from a regular language, and ask if there is a simple decoding function that maps states of this RNN to states of the minimal deterministic finite automaton for the language.Our experiments show that such a decoding function indeed exists, and that it maps states of the RNN not to MDFA states, but to states of an abstraction obtained by clustering small sets of MDFA states into superstates.A qualitative analysis reveals that the abstraction often has a simple interpretation.Overall, the results suggest a strong structural relationship between internal representations used by RNNs and finite automata, and explain the well-known ability of RNNs to recognize formal grammatical structure.",Finite Automata Can be Linearly decoded from Language-Recognizing RNNs using low coarseness abstraction functions and high accuracy decoders. 1249,Hardware-aware One-Shot Neural Architecture Search in Coordinate Ascent Framework,"Designing accurate and efficient convolutional neural architectures for a vast amount of hardware is challenging because hardware designs are complex and diverse.This paper addresses the hardware diversity challenge in Neural Architecture Search.Unlike previous approaches that apply search algorithms on a small, human-designed search space without considering hardware diversity, we propose HURRICANE, which explores automatic hardware-aware search over a much larger search space and a multistep search scheme in a coordinate ascent framework, to generate tailored models for different types of hardware.Extensive experiments on ImageNet show that our algorithm consistently achieves a much lower inference latency with a similar or better accuracy than state-of-the-art NAS methods on three types of hardware.Remarkably, HURRICANE achieves a 76.63% top-1 accuracy on ImageNet with an inference latency of only 16.5 ms for DSP, which is a 3.4% higher accuracy and a 6.35x inference speedup compared to FBNet-iPhoneX.For VPU, HURRICANE achieves a 0.53% higher top-1 accuracy than Proxyless-mobile with a 1.49x speedup.Even for well-studied mobile CPU, HURRICANE achieves a 1.63% higher top-1 accuracy than FBNet-iPhoneX with a comparable inference latency.HURRICANE also reduces the training time by 54.7% on average compared to SinglePath-Oneshot.",We propose HURRICANE to address the challenge of hardware diversity in one-shot neural architecture search 1250,LDMGAN: Reducing Mode Collapse in GANs with Latent Distribution Matching,"Generative Adversarial Networks have shown impressive results in modeling distributions over complicated manifolds such as those of natural images.However, GANs often suffer from mode collapse, which means they are prone to characterize only a single or a few modes of the data distribution.In order to address this problem, we propose a novel framework called LDMGAN.We first introduce a Latent Distribution Matching constraint, which regularizes the generator by aligning the distribution of generated samples with that of real samples in latent
space.To make use of such latent space, we propose a regularized AutoEncoder that maps the data distribution to prior distribution in encoded space.Extensive experiments on synthetic data and real world datasets show that our proposed framework significantly improves GAN’s stability and diversity.",We propose an AE-based GAN that alleviates mode collapse in GANs. 1251,Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation,"Achieving faster execution with shorter compilation time can foster further diversity and innovation in neural networks.However, the current paradigm of executing neural networks either relies on hand-optimized libraries, traditional compilation heuristics, or very recently genetic algorithms and other stochastic methods.These methods suffer from frequent costly hardware measurements rendering them not only too time consuming but also suboptimal.As such, we devise a solution that can learn to quickly adapt to a previously unseen design space for code optimization, both accelerating the search and improving the output performance.This solution dubbed CHAMELEON leverages reinforcement learning whose solution takes fewer steps to converge, and develops an adaptive sampling algorithm that not only focuses on the costly samples on representative points but also uses a domain knowledge inspired logic to improve the samples itself.Experimentation with real hardware shows that CHAMELEON provides 4.45×speed up in optimization time over AutoTVM, while also improving inference time of the modern deep networks by 5.6%.",Reinforcement learning and Adaptive Sampling for Optimized Compilation of Deep Neural Networks. 1252,Learning Adversarial Grammars for Future Prediction,"In this paper, we propose a differentiable adversarial grammar model for future prediction.The objective is to model a formal grammar in terms of differentiable functions and latent representations, so that their learning is possible through standard backpropagation.Learning a formal grammar represented with latent terminals, non-terminals, and productions rules allows capturing sequential structures with multiple possibilities from data.The adversarial grammar is designed so that it can learn stochastic production rules from the data distribution.Being able to select multiple production rules leads to different predicted outcomes, thus efficiently modeling many plausible futures. We confirm the benefit of the adversarial grammar on two diverse tasks: future 3D human pose prediction and future activity prediction.For all settings, the proposed adversarial grammar outperforms the state-of-the-art approaches, being able to predict much more accurately and further in the future, than prior work.",We design a grammar that is learned in an adversarial setting and apply it to future prediction in video. 
1253,Janossy Pooling: Learning Deep Permutation-Invariant Functions for Variable-Size Inputs,"We consider a simple and overarching representation for permutation-invariant functions of sequences.Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence.This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions.If carried out naively, Janossy pooling can be computationally prohibitive.To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations.Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions.We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods.","We propose Janossy pooling, a method for learning deep permutation invariant functions designed to exploit relationships within the input sequence and tractable inference strategies such as a stochastic optimization procedure we call piSGD" 1254,Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks,"While tasks could come with varying the number of instances and classes in realistic settings, the existing meta-learning approaches for few-shot classification assume that number of instances per task and class is fixed.Due to such restriction, they learn to equally utilize the meta-knowledge across all the tasks, even when the number of instances per task and class largely varies.Moreover, they do not consider distributional difference in unseen tasks, on which the meta-knowledge may have less usefulness depending on the task relatedness.To overcome these limitations, we propose a novel meta-learning model that adaptively balances the effect of the meta-learning and task-specific learning within each task.Through the learning of the balancing variables, we can decide whether to obtain a solution by relying on the meta-knowledge or task-specific learning.We formulate this objective into a Bayesian inference framework and tackle it using variational inference.We validate our Bayesian Task-Adaptive Meta-Learning on two realistic task- and class-imbalanced datasets, on which it significantly outperforms existing meta-learning approaches.Further ablation study confirms the effectiveness of each balancing component and the Bayesian learning framework.","A novel meta-learning model that adaptively balances the effect of the meta-learning and task-specific learning, and also class-specific learning within each task." 
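The definition in entry 1253 above (Janossy pooling) can be computed exactly for short inputs: average a permutation-sensitive function over every reordering of the sequence. The exhaustive version below is only feasible for tiny inputs, which is precisely why the paper introduces canonical orderings, k-order interactions and random-permutation sampling:

    import numpy as np
    from itertools import permutations

    def janossy_pool(sequence, f):
        # Exact Janossy pooling: average f over all orderings of the input.
        perms = list(permutations(sequence))
        return sum(f(np.array(p)) for p in perms) / len(perms)

    def f(x):
        # A deliberately order-sensitive function: position-weighted sum.
        return float(np.dot(np.arange(1, len(x) + 1), x))

    print(janossy_pool([3.0, 1.0, 2.0], f))  # identical outputs: the pooled value
    print(janossy_pool([1.0, 2.0, 3.0], f))  # is invariant to the input ordering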
1255,Revisiting The Master-Slave Architecture In Multi-Agent Deep Reinforcement Learning,"Many tasks in artificial intelligence require the collaboration of multiple agents.We examine deep reinforcement learning for multi-agent domains.Recent research efforts often take the form of two seemingly conflicting perspectives, the decentralized perspective, where each agent is supposed to have its own controller; and the centralized perspective, where one assumes there is a larger model controlling all agents.In this regard, we revisit the idea of the master-slave architecture by incorporating both perspectives within one framework.Such a hierarchical structure naturally leverages advantages from one another.The idea of combining both perspectives is intuitive and can be well motivated from many real world systems, however, out of a variety of possible realizations, we highlight three key ingredients, i.e. composed action representation, learnable communication and independent reasoning.With network designs to facilitate these explicitly, our proposal consistently outperforms the latest competing methods both in synthetic experiments and when applied to challenging StarCraft micromanagement tasks.",We revisit the idea of the master-slave architecture in multi-agent deep reinforcement learning and outperform the state of the art. 1256,When Will Gradient Methods Converge to Max-margin Classifier under ReLU Models?,"We study the implicit bias of gradient descent methods in solving a binary classification problem over a linearly separable dataset.The classifier is described by a nonlinear ReLU model and the objective function adopts the exponential loss function.We first characterize the landscape of the loss function and show that there can exist spurious asymptotic local minima besides asymptotic global minima.We then show that gradient descent can converge to either a global or a local max-margin direction, or may diverge from the desired max-margin direction in a general context.For stochastic gradient descent, we show that it converges in expectation to either the global or the local max-margin direction if SGD converges.We further explore the implicit bias of these algorithms in learning a multi-neuron network under certain stationary conditions, and show that the learned classifier maximizes the margins of each sample pattern partition under the ReLU activation.",We study the implicit bias of gradient methods in solving a binary classification problem with nonlinear ReLU models.
1257,Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers,"Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions in resource-limited scenarios.A widely-used practice in relevant work assumes that a smaller-norm parameter or feature plays a less informative role at the inference time.In this paper, we propose a channel pruning technique for accelerating the computations of deep convolutional neural networks that does not critically rely on this assumption.Instead, it focuses on direct simplification of the channel-to-channel computation graph of a CNN without the need to perform the computationally difficult and not-always-useful task of making high-dimensional tensors of a CNN structured sparse.Our approach takes two stages: first to adopt an end-to-end stochastic training method that eventually forces the outputs of some channels to be constant, and then to prune those constant channels from the original neural network by adjusting the biases of their impacting layers such that the resulting compact model can be quickly fine-tuned.Our approach is mathematically appealing from an optimization perspective and easy to reproduce.We evaluated our approach on several image learning benchmarks and demonstrate its interesting aspects and competitive performance.",A CNN model pruning method using ISTA and rescaling trick to enforce sparsity of scaling parameters in batch normalization. 1258,Deep Gradient Boosting -- Layer-wise Input Normalization of Neural Networks,"Stochastic gradient descent has been the dominant optimization method for training deep neural networks due to its many desirable properties.One of the more remarkable and least understood qualities of SGD is that it generalizes relatively well on unseen data even when the neural network has millions of parameters.We hypothesize that in certain cases it is desirable to relax its intrinsic generalization properties and introduce an extension of SGD called deep gradient boosting.The key idea of DGB is that back-propagated gradients inferred using the chain rule can be viewed as pseudo-residual targets of a gradient boosting problem.Thus at each layer of a neural network the weight update is calculated by solving the corresponding boosting problem using a linear base learner.The resulting weight update formula can also be viewed as a normalization procedure of the data that arrives at each layer during the forward pass.When implemented as a separate input normalization layer the new architecture shows improved performance on image recognition tasks when compared to the same architecture without normalization layers.As opposed to batch normalization, INN has no learnable parameters; however, it matches its performance on CIFAR10 and ImageNet classification tasks.",What can we learn about training neural networks if we treat each layer as a gradient boosting problem?
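The mechanism summarized for entry 1257 above (ISTA on batch-normalization scaling parameters) boils down to a gradient step followed by soft-thresholding, which pushes some channel scales exactly to zero so those channels become constant and prunable. A generic sketch of that single step, not the paper's full training procedure:

    import numpy as np

    def ista_step_on_bn_scales(gamma, grad, lr, lam):
        # Gradient step on the BN scales, then soft-threshold toward zero.
        gamma = gamma - lr * grad
        return np.sign(gamma) * np.maximum(np.abs(gamma) - lr * lam, 0.0)

    gamma = np.array([0.9, 0.05, -0.4, 0.02])   # per-channel BN scales (made up)
    grad = np.array([0.1, 0.0, -0.05, 0.01])    # their loss gradients (made up)
    print(ista_step_on_bn_scales(gamma, grad, lr=0.1, lam=0.5))  # some scales land at 0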
1259,Measuring and regularizing networks in function space,"To optimize a neural network one often thinks of optimizing its parameters, but it is ultimately a matter of optimizing the function that maps inputs to outputs.Since a change in the parameters might serve as a poor proxy for the change in the function, it is of some concern that primacy is given to parameters but that the correspondence has not been tested.Here, we show that it is simple and computationally feasible to calculate distances between functions in a Hilbert space.We examine how typical networks behave in this space, and compare parameter distances to function distances between various points of an optimization trajectory.We find that the two distances are nontrivially related.In particular, the ratio decreases throughout optimization, reaching a steady value around when test error plateaus.We then investigate how the distance could be applied directly to optimization.We first propose that in multitask learning, one can avoid catastrophic forgetting by directly limiting how much the input/output function changes between tasks.Secondly, we propose a new learning rule that constrains the distance a network can travel through function space in any one update.This allows new examples to be learned in a way that minimally interferes with what has previously been learned.These applications demonstrate how one can measure and regularize function distances directly, without relying on parameters or local approximations like loss curvature.",We find movement in function space is not proportional to movement in parameter space during optimization. We propose a new natural-gradient style optimizer to address this. 1260,The Convex Information Bottleneck Lagrangian,"The information bottleneck problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem which maximizes the information the representation has about the task, I(T;Y), while ensuring that a minimum level of compression r is achieved (i.e., I(X;T) <= r).For practical reasons the problem is usually solved by maximizing the IB Lagrangian for many values of the Lagrange multiplier, therefore drawing the IB curve (the curve of maximal I(T;Y) for a given I(X;T)) and selecting the representation of desired predictability and compression.It is known that when Y is a deterministic function of X, the IB curve cannot be explored and other Lagrangians have been proposed to tackle this problem.In this paper we present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; prove that if these Lagrangians are used, there is a one-to-one mapping between the Lagrange multiplier and the desired compression rate r for known IB curve shapes, hence freeing us from the burden of solving the optimization problem for many values of the Lagrange multiplier.","We introduce a general family of Lagrangians that allow exploring the IB curve in all scenarios. When these are used, and the IB curve is known, one can optimize directly for a performance/compression level." 1261,Neural Logic Machines,"We propose the Neural Logic Machine, a neural-symbolic architecture for both inductive learning and logic reasoning.NLMs exploit the power of both neural networks---as function approximators, and logic programming---as a symbolic processor for objects with properties, relations, logic connectives, and quantifiers.
After being trained on small-scale tasks, NLMs can recover lifted rules, and generalize to large-scale tasks.In our experiments, NLMs achieve perfect generalization in a number of tasks, from relational reasoning tasks on the family tree and general graphs, to decision making tasks including sorting arrays, finding shortest paths, and playing the blocks world.Most of these tasks are hard to accomplish for neural networks or inductive logic programming alone.","We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning." 1262,Robust Neural Abstractive Summarization Systems and Evaluation against Adversarial Information,"Sequence-to-sequence neural models have been actively investigated for abstractive summarization.Nevertheless, existing neural abstractive systems frequently generate factually incorrect summaries and are vulnerable to adversarial information, suggesting a crucial lack of semantic understanding.In this paper, we propose a novel semantic-aware neural abstractive summarization model that learns to generate high quality summaries through semantic interpretation over salient content.A novel evaluation scheme with adversarial samples is introduced to measure how well a model identifies off-topic information, where our model yields significantly better performance than the popular pointer-generator summarizer.Human evaluation also confirms that our system summaries are uniformly more informative and faithful as well as less redundant than the seq2seq model.",We propose a semantic-aware neural abstractive summarization model and a novel automatic summarization evaluation scheme that measures how well a model identifies off-topic information from adversarial samples. 1263,SuperChat: Dialogue Generation by Transfer Learning from Vision to Language using Two-dimensional Word Embedding and Pretrained ImageNet CNN Models,"The recent work of Super Characters method using two-dimensional word embedding achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach.This paper borrows the idea of Super Characters method and two-dimensional embedding, and proposes a method of generating conversational response for open domain dialogues.The experimental results on a public dataset shows that the proposed SuperChat method generates high quality responses.An interactive demo is ready to show at the workshop.And code will be available at github soon.",Print the input sentence and current response sentence onto an image and use fine-tuned ImageNet CNN model to predict the next response word. 1264,Unleashing the Potential of CNNs for Interpretable Few-Shot Learning,"Convolutional neural networks have been generally acknowledged as one of the driving forces for the advancement of computer vision.Despite their promising performances on many tasks, CNNs still face major obstacles on the road to achieving ideal machine intelligence.One is that CNNs are complex and hard to interpret.Another is that standard CNNs require large amounts of annotated data, which is sometimes very hard to obtain, and it is desirable to be able to learn them from few examples.In this work, we address these limitations of CNNs by developing novel, simple, and interpretable models for few-shot learn- ing. 
Our models are based on the idea of encoding objects in terms of visual concepts, which are interpretable visual cues represented by the feature vectors within CNNs.We first adapt the learning of visual concepts to the few-shot setting, and then uncover two key properties of feature encoding using visual concepts, which we call category sensitivity and spatial pattern.Motivated by these properties, we present two intuitive models for the problem of few-shot learning.Experiments show that our models achieve competitive performances, while being much more flexible and interpretable than alternative state-of-the-art few-shot learning methods.We conclude that using visual concepts helps expose the natural capability of CNNs for few-shot learning.",We enable ordinary CNNs for few-shot learning by exploiting visual concepts which are interpretable visual cues learnt within CNNs. 1265,Unsupervised Video-to-Video Translation,"Unsupervised image-to-image translation is a recently proposed task of translating an image to a different style or domain given only unpaired image examples at training time.In this paper, we formulate a new task of unsupervised video-to-video translation, which poses its own unique challenges.Translating video implies learning not only the appearance of objects and scenes but also realistic motion and transitions between consecutive frames.We investigate the performance of per-frame video-to-video translation using existing image-to-image translation networks, and propose a spatio-temporal 3D translator as an alternative solution to this problem.We evaluate our 3D method on multiple synthetic datasets, such as moving colorized digits, as well as the realistic segmentation-to-video GTA dataset and a new CT-to-MRI volumetric images translation dataset.Our results show that frame-wise translation produces realistic results on a single frame level but underperforms significantly on the scale of the whole video compared to our three-dimensional translation approach, which is better able to learn the complex structure of video and motion and continuity of object appearance.","Proposed new task, datasets and baselines; 3D Conv CycleGAN preserves object properties across frames; batch structure in frame-level methods matters." 
1266,Building Deep Equivariant Capsule Networks,"Capsule networks are constrained by the parameter-expensive nature of their layers, and the general lack of provable equivariance guarantees.We present a variation of capsule networks that aims to remedy this.We identify that learning all pair-wise part-whole relationships between capsules of successive layers is inefficient.Further, we also realise that the choice of prediction networks and the routing mechanism are both key to equivariance.Based on these, we propose an alternative framework for capsule networks that learns to projectively encode the manifold of pose-variations, termed the space-of-variation, for every capsule-type of each layer.This is done using a trainable, equivariant function defined over a grid of group-transformations.Thus, the prediction-phase of routing involves projection into the SOV of a deeper capsule using the corresponding function.As a specific instantiation of this idea, and also in order to reap the benefits of increased parameter-sharing, we use type-homogeneous group-equivariant convolutions of shallower capsules in this phase.We also introduce an equivariant routing mechanism based on degree-centrality.We show that this particular instance of our general model is equivariant, and hence preserves the compositional representation of an input under transformations.We conduct several experiments on standard object-classification datasets that showcase the increased transformation-robustness, as well as general performance, of our model to several capsule baselines.","A new scalable, group-equivariant model for capsule networks that preserves compositionality under transformations, and is empirically more transformation-robust to older capsule network models." 1267,A Simple yet Effective Baseline for Robust Deep Learning with Noisy Labels,"Recently deep neural networks have shown their capacity to memorize training data, even with noisy labels, which hurts generalization performance.To mitigate this issue, we propose a simple but effective method that is robust to noisy labels, even with severe noise. Our objective involves a variance regularization term that implicitly penalizes the Jacobian norm of the neural network on the whole training set, which encourages generalization and prevents overfitting to the corrupted labels.Experiments on noisy benchmarks demonstrate that our approach achieves state-of-the-art performance with a high tolerance to severe noise.",The paper proposed a simple yet effective baseline for learning with noisy labels. 1268,Has Machine Translation Achieved Human Parity? 
A Case for Document-level Evaluation,"Recent research suggests that neural machine translation achieves parity with professional human translation on the WMT Chinese--English news translation task.We empirically test this claim with alternative evaluation protocols, contrasting the evaluation of single sentences and entire documents.In a pairwise ranking experiment, human raters assessing adequacy and fluency show a stronger preference for human over machine translation when evaluating documents as compared to isolated sentences.Our findings emphasise the need to shift towards document-level evaluation as machine translation improves to the degree that errors which are hard or impossible to spot at the sentence-level become decisive in discriminating quality of different translation outputs.","Raters prefer adequacy in human over machine translation when evaluating entire documents, but not when evaluating single sentences." 1269,ASYNCHRONOUS MULTI-AGENT GENERATIVE ADVERSARIAL IMITATION LEARNING,"Imitation learning aims to inversely learn a policy from expert demonstrations, which has been extensively studied in the literature for both single-agent setting with Markov decision process model, and multi-agent setting with Markov game model.However, existing approaches for general multi-agent Markov games are not applicable to multi-agent extensive Markov games, where agents make asynchronous decisions following a certain order, rather than simultaneous decisions.We propose a novel framework for asynchronous multi-agent generative adversarial imitation learning under general extensive Markov game settings, and the learned expert policies are proven to guarantee subgame perfect equilibrium, a more general and stronger equilibrium than Nash equilibrium.The experiment results demonstrate that compared to state-of-the-art baselines, our AMAGAIL model can better infer the policy of each expert agent using their demonstration data collected from asynchronous decision-making scenarios.",This paper extends the multi-agent generative adversarial imitation learning to extensive-form Markov games. 
1270,Revisiting Self-Training for Neural Sequence Generation,"Self-training is one of the earliest and simplest semi-supervised methods.The key idea is to augment the original labeled dataset with unlabeled data paired with the model’s prediction.Self-training has mostly been well-studied for classification problems.However, in complex sequence generation tasks such as machine translation, it is still not clear how self-training works due to the compositionality of the target space.In this work, we first show that it is not only possible but recommended to apply self-training in sequence generation.Through careful examination of the performance gains, we find that the noise added on the hidden states is critical to the success of self-training, as this acts like a regularizer which forces the model to yield similar predictions for similar inputs from unlabeled data.To further encourage this mechanism, we propose to inject noise into the input space, resulting in a “noisy” version of self-training.Empirical study on standard benchmarks across machine translation and text summarization tasks under different resource settings shows that noisy self-training is able to effectively utilize unlabeled data and improve the baseline performance by a large margin.","We revisit self-training as a semi-supervised learning method for the neural sequence generation problem, and show that self-training can be quite successful with injected noise." 1271,The Kanerva Machine: A Generative Distributed Memory,"We present an end-to-end trained memory system that quickly adapts to new data and generates samples like them.Inspired by Kanerva's sparse distributed memory, it has a robust distributed reading and writing mechanism.The memory is analytically tractable, which enables optimal on-line compression via a Bayesian update-rule.We formulate it as a hierarchical conditional generative model, where memory provides a rich data-dependent prior distribution.Consequently, the top-down memory and bottom-up perception are combined to produce the code representing an observation.Empirically, we demonstrate that the adaptive memory significantly improves generative models trained on both the Omniglot and CIFAR datasets.Compared with the Differentiable Neural Computer and its variants, our memory model has greater capacity and is significantly easier to train.",A generative memory model that combines slow-learning neural networks and a fast-adapting linear Gaussian model as memory.
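A minimal sketch of the noisy self-training loop summarized in record 1270 above, using a generic scikit-learn classifier as a stand-in for the sequence model and Gaussian input noise as the injected perturbation; the model choice, noise scale, and number of rounds are illustrative assumptions rather than the authors' setup.

```python
# Hedged sketch of noisy self-training: pseudo-label unlabeled data with the current
# model, perturb the inputs, and retrain on the union of real and pseudo-labeled data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(100, 5))
y_lab = (X_lab[:, 0] > 0).astype(int)            # toy labels
X_unlab = rng.normal(size=(400, 5))              # unlabeled pool

model = LogisticRegression().fit(X_lab, y_lab)
for _ in range(3):                               # a few self-training rounds
    pseudo = model.predict(X_unlab)              # pseudo-labels from the current model
    X_noisy = X_unlab + 0.3 * rng.normal(size=X_unlab.shape)  # inject input noise
    X_all = np.vstack([X_lab, X_noisy])
    y_all = np.concatenate([y_lab, pseudo])
    model = LogisticRegression().fit(X_all, y_all)
print(model.score(X_lab, y_lab))
```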
1272,SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY,"Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity.In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility.In this work, we present a new approach that prunes a given network once at initialization prior to training.To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task.This eliminates the need for both pretraining and the complex pruning schedule while making it robust to architecture variations.After pruning, the sparse network is trained in the standard way.Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual and recurrent networks.Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task.","We present a new approach, SNIP, that is simple, versatile and interpretable; it prunes irrelevant connections for a given task at single-shot prior to training and is applicable to a variety of neural network models without modifications." 1273,Automatic Labeling of Data for Transfer Learning,"Transfer learning uses trained weights from a source model as the initial weights for the training of a target dataset. A well chosen source with a large number of labeled data leads to significant improvement in accuracy. We demonstrate a technique that automatically labels large unlabeled datasets so that they can train source models for transfer learning.We experimentally evaluate this method, using a baseline dataset of human-annotated ImageNet1K labels, against five variations of this technique. We show that the performance of these automatically trained models comes within 17% of baseline on average.",A technique for automatically labeling large unlabeled datasets so that they can train source models for transfer learning and its experimental evaluation.
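A small numpy sketch of the connection-sensitivity criterion behind record 1272 (SNIP) above: score every weight at initialization by |dL/dw * w| and keep only the most sensitive fraction before training proceeds as usual. The linear model, squared loss, and 20% keep ratio are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch of single-shot pruning by connection sensitivity on a toy linear model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 20))
y = rng.normal(size=64)
w = rng.normal(scale=0.1, size=20)               # weights at initialization

grad = X.T @ (X @ w - y) / len(y)                # dL/dw for 0.5 * mean((Xw - y)^2)
saliency = np.abs(grad * w)                      # connection sensitivity score
keep = int(0.2 * w.size)                         # retain the 20% most sensitive weights
mask = np.zeros_like(w)
mask[np.argsort(saliency)[-keep:]] = 1.0

w_pruned = w * mask                              # only surviving connections are trained afterwards
print(int(mask.sum()), "of", w.size, "weights kept")
```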
1274,Tree-structured Attention Module for Image Classification,"Recent studies in attention modules have enabled higher performance in computer vision tasks by capturing global contexts and accordingly attending to important features.In this paper, we propose a simple and highly parametrically efficient module named Tree-structured Attention Module which recursively encourages neighboring channels to collaborate in order to produce a spatial attention map as an output.Unlike other attention modules which try to capture long-range dependencies at each channel, our module focuses on imposing non-linearities between channels by utilizing point-wise group convolution.This module not only strengthens representational power of a model but also acts as a gate which controls signal flow.Our module allows a model to achieve higher performance in a highly parameter-efficient manner.We empirically validate the effectiveness of our module with extensive experiments on CIFAR-10/100 and SVHN datasets.With our proposed attention module employed, ResNet50 and ResNet101 models gain 2.3% and 1.2% accuracy improvement with less than 1.5% parameter overhead.Our PyTorch implementation code is publicly available.",Our paper proposes an attention module which captures inter-channel relationships and offers large performance gains. 1275,Few-Shot Intent Inference via Meta-Inverse Reinforcement Learning,"A significant challenge for the practical application of reinforcement learning to real world problems is the need to specify an oracle reward function that correctly defines a task.Inverse reinforcement learning seeks to avoid this challenge by instead inferring a reward function from expert behavior. While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world.Thus in practice, IRL must commonly be performed with only a limited set of demonstrations where it can be exceedingly difficult to unambiguously recover a reward function.In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a ""prior"" that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.",The applicability of inverse reinforcement learning is often hampered by the expense of collecting expert demonstrations; this paper seeks to broaden its applicability by incorporating prior task information through meta-learning. 1276,Deepström Networks,"Recent work has focused on combining kernel methods and deep learning.With this in mind, we introduce Deepström networks -- a new architecture of neural networks which we use to replace top dense layers of standard convolutional architectures with an approximation of a kernel function by relying on the Nyström approximation.Our approach is easy and highly flexible.It is compatible with any kernel function and it allows exploiting multiple kernels.We show that Deepström networks reach state-of-the-art performance on standard datasets like SVHN and CIFAR100.One benefit of the method lies in its limited number of learnable parameters which make it particularly suited for small training set sizes, e.g.
from 5 to 20 samples per class.Finally we illustrate two ways of using multiple kernels, including a multiple Deepström setting, that exploits a kernel on each feature map output by the convolutional part of the model. ",A new neural architecture where top dense layers of standard convolutional architectures are replaced with an approximation of a kernel function by relying on the Nyström approximation. 1277,The blood diamond effect in neural art: On ethically troublesome images of the imagenet dataset,"The main goal of this short paper is to inform the neural art community at large on the ethical ramifications of using models trained on the imagenet dataset, or using seed images from classes 445 -n02892767- [’bikini, two-piece’] and 459- n02837789- [’brassiere, bra, bandeau’] of the same.We discovered that many of the images belonging to these classes were verifiably pornographic, shot in a non-consensual setting, voyeuristic and also entailed underage nudity.Akin to the and nexuses, we posit there is a similar moral conundrum at play here and would like to instigate a conversation amongst the neural artists in the community.",There are non-consensual and pornographic images in the ImageNet dataset 1278,Inductive and Unsupervised Representation Learning on Graph Structured Objects,"Inductive and unsupervised graph learning is a critical technique for predictive or information retrieval tasks where label information is difficult to obtain.It is also challenging to make graph learning inductive and unsupervised at the same time, as learning processes guided by reconstruction error based loss functions inevitably demand graph similarity evaluation that is usually computationally intractable.In this paper, we propose a general framework SEED for inductive and unsupervised representation learning on graph structured objects.Instead of directly dealing with the computational challenges raised by graph similarity evaluation, given an input graph, the SEED framework samples a number of subgraphs whose reconstruction errors could be efficiently evaluated, encodes the subgraph samples into a collection of subgraph vectors, and employs the embedding of the subgraph vector distribution as the output vector representation for the input graph.By theoretical analysis, we demonstrate the close connection between SEED and graph isomorphism.Using public benchmark datasets, our empirical study suggests the proposed SEED framework is able to achieve up to 10% improvement, compared with competitive baseline methods.",This paper proposed a novel framework for graph similarity learning in an inductive and unsupervised scenario.
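A brief numpy sketch of the Nyström feature map that record 1276 (Deepström networks) above places on top of a convolutional trunk: inputs are mapped to kernel evaluations against a few landmark points and whitened by the inverse square root of the landmark kernel matrix. The RBF kernel, random landmark choice, and sizes are illustrative assumptions.

```python
# Hedged sketch of a Nystroem approximation layer replacing a dense head.
import numpy as np

def rbf(A, B, gamma=0.5):
    # pairwise squared distances, then the RBF kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
feats = rng.normal(size=(256, 32))               # stand-in for CNN features
landmarks = feats[rng.choice(len(feats), 16, replace=False)]

K_mm = rbf(landmarks, landmarks)
U, s, _ = np.linalg.svd(K_mm)
K_mm_inv_sqrt = U @ np.diag(1.0 / np.sqrt(s + 1e-8)) @ U.T
nystrom_features = rbf(feats, landmarks) @ K_mm_inv_sqrt   # fed to the final classifier
print(nystrom_features.shape)                    # (256, 16)
```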
1279,Adversarial Training of Neural Encoding Models on Population Spike Trains,"Neural population responses to sensory stimuli can exhibit both nonlinear stimulus-dependence and richly structured shared variability.Here, we show how adversarial training can be used to optimize neural encoding models to capture both the deterministic and stochastic components of neural population data.To account for the discrete nature of neural spike trains, we use the REBAR method to estimate unbiased gradients for adversarial optimization of neural encoding models.We illustrate our approach on population recordings from primary visual cortex.We show that adding latent noise-sources to a convolutional neural network yields a model which captures both the stimulus-dependence and noise correlations of the population activity.",We show how neural encoding models can be trained to capture both the signal and spiking variability of neural population data using GANs. 1280,Weakly Supervised Clustering by Exploiting Unique Class Count,"A weakly supervised learning based clustering framework is proposed in this paper.As the core of this framework, we introduce a novel multiple instance learning task based on a bag level label called unique class count, which is the number of unique classes among all instances inside the bag.In this task, no annotations on individual instances inside the bag are needed during training of the models.We mathematically prove that with a perfect ucc classifier, perfect clustering of individual instances inside the bags is possible even when no annotations on individual instances are given during training.We have constructed a neural network based ucc classifier and experimentally shown that the clustering performance of our framework with our weakly supervised ucc classifier is comparable to that of fully supervised learning models where labels for all instances are known.Furthermore, we have tested the applicability of our framework to a real world task of semantic segmentation of breast cancer metastases in histological lymph node sections and shown that the performance of our weakly supervised framework is comparable to the performance of a fully supervised Unet model.",A weakly supervised learning based clustering framework performs comparably to fully supervised learning models by exploiting unique class count.
1281,Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm,"The high computational and parameter complexity of neural networks makes their training very slow and difficult to deploy on energy and storage-constrained computing systems.Many network complexity reduction techniques have been proposed including fixed-point implementation.However, a systematic approach for designing full fixed-point training and inference of deep neural networks remains elusive.We describe a precision assignment methodology for neural network training in which all network parameters, i.e., activations and weights in the feedforward path, gradients and weight accumulators in the feedback path, are assigned close to minimal precision.The precision assignment is derived analytically and enables tracking the convergence behavior of the full precision training, known to converge a priori.Thus, our work leads to a systematic methodology of determining suitable precision for fixed-point training.The near optimality of the resulting precision assignment is validated empirically for four networks on the CIFAR-10, CIFAR-100, and SVHN datasets.The complexity reduction arising from our approach is compared with other fixed-point neural network designs.","We analyze and determine the precision requirements for training neural networks when all tensors, including back-propagated signals and weight accumulators, are quantized to fixed-point format." 1282,"Prototypical Examples in Deep Learning: Metrics, Characteristics, and Utility","Machine learning research has investigated prototypes: examples that are representative of the behavior to be learned.We systematically evaluate five methods for identifying prototypes, both ones previously introduced as well as new ones we propose, finding all of them to provide meaningful but different interpretations.Through a human study, we confirm that all five metrics are well matched to human intuition.Examining cases where the metrics disagree offers an informative perspective on the properties of data and algorithms used in learning, with implications for data-corpus construction, efficiency, adversarial robustness, interpretability, and other ML aspects.In particular, we confirm that the ""train on hard"" curriculum approach can improve accuracy on many datasets and tasks, but that it is strictly worse when there are many mislabeled or ambiguous examples.","We can identify prototypical and outlier examples in machine learning that are quantifiably very different, and make use of them to improve many aspects of neural networks."
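A short sketch of the per-tensor fixed-point format assumed throughout record 1281 above: values are rounded to a fixed number of fractional bits and saturated to the range of the total bit width. The concrete (8, 4) format is only an example; choosing such precisions per tensor is what the paper derives analytically.

```python
# Hedged sketch of rounding a tensor to a fixed-point grid with saturation.
import numpy as np

def to_fixed_point(x, total_bits=8, frac_bits=4):
    scale = 2.0 ** frac_bits
    qmin, qmax = -2 ** (total_bits - 1), 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(x * scale), qmin, qmax)  # round to nearest, then saturate
    return q / scale                              # dequantized value used downstream

x = np.random.default_rng(0).normal(size=5)
print(x)
print(to_fixed_point(x))
```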
1283,Sparse Deep Scattering Croisé Network,"In this work, we propose the Sparse Deep Scattering Croisé Network, a novel architecture based on the Deep Scattering Network.The DSN is achieved by cascading wavelet transform convolutions with a complex modulus and a time-invariant operator.We extend this work by first crossing multiple wavelet family transforms to increase the feature diversity while avoiding any learning.This provides a more informative latent representation and benefits from the development of highly specialized wavelet filters over the last decades.Besides, by combining all the different wavelet representations, we reduce the amount of prior information needed regarding the signals at hand.Secondly, we develop an optimal thresholding strategy for over-complete filter banks that regularizes the network and controls instabilities such as inherent non-stationary noise in the signal.Our systematic and principled solution sparsifies the latent representation of the network by acting as a local mask distinguishing between activity and noise.Thus, we propose to enhance the DSN by increasing the variance of the scattering coefficients representation as well as improving its robustness with respect to non-stationary noise.We show that our new approach is more robust and outperforms the DSN on a bird detection task.",We propose to enhance the Deep Scattering Network in order to improve control and stability of any given machine learning pipeline by proposing a continuous wavelet thresholding scheme 1284,(How) Can AI Bots Lie?,"Recent work on explanation generation for decision-making problems has viewed the explanation process as one of model reconciliation where an AI agent brings the human mental model to the same page with regards to a task at hand.This formulation succinctly captures many possible types of explanations, as well as explicitly addresses the various properties -- e.g. the social aspects, contrastiveness, and selectiveness -- of explanations studied in social sciences among human-human interactions.However, it turns out that the same process can be hijacked into producing ""alternative explanations"" -- i.e. explanations that are not true but still satisfy all the properties of a proper explanation.In previous work, we have looked at how such explanations may be perceived by the human in the loop and alluded to one possible way of generating them.In this paper, we go into more details of this curious feature of the model reconciliation process and discuss similar implications to the overall notion of explainable decision-making.","Model Reconciliation is an established framework for plan explanations, but can be easily hijacked to produce lies." 1285,Optimistic Acceleration for Optimization,"We consider new variants of optimization algorithms.Our algorithms are based on the observation that mini-batches of stochastic gradients in consecutive iterations do not change drastically and consequently may be predictable.Inspired by the similar setting in online learning literature called Optimistic Online learning, we propose two new optimistic algorithms for AMSGrad and Adam, respectively, by exploiting the predictability of gradients. The new algorithms combine the ideas of the momentum method, the adaptive gradient method, and algorithms in Optimistic Online learning, which leads to a speedup in training deep neural nets in practice.",We consider new variants of optimization algorithms for training deep nets.
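A toy sketch of the optimistic-gradient idea underlying record 1285 above: because consecutive mini-batch gradients change slowly, the previous gradient serves as a prediction of the next one and the update includes a correction term. This shows plain optimistic gradient descent on a quadratic, not the paper's Optimistic-AMSGrad or Optimistic-Adam updates.

```python
# Hedged sketch of an optimistic gradient step: w <- w - lr * (2*g_t - g_{t-1}).
def loss_grad(w):
    return 2.0 * (w - 3.0)          # gradient of the toy objective (w - 3)^2

w, lr = 0.0, 0.05
g_prev = loss_grad(w)
for _ in range(100):
    g = loss_grad(w)
    w -= lr * (2.0 * g - g_prev)    # current gradient plus a predictive correction
    g_prev = g
print(w)                            # approaches the minimizer 3.0
```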
1286,Dual-module Inference for Efficient Recurrent Neural Networks,"Using Recurrent Neural Networks in sequence modeling tasks is promising in delivering high-quality results but challenging to meet stringent latency requirements because of the memory-bound execution pattern of RNNs.We propose a big-little dual-module inference to dynamically skip unnecessary memory access and computation to speedup RNN inference.Leveraging the error-resilient feature of nonlinear activation functions used in RNNs, we propose to use a lightweight little module that approximates the original RNN layer, which is referred to as the big module, to compute activations of the insensitive region that are more error-resilient.The expensive memory access and computation of the big module can be reduced as the results are only used in the sensitive region.Our method can reduce the overall memory access by 40% on average and achieve 1.54x to 1.75x speedup on CPU-based server platform with negligible impact on model quality.",We accelerate RNN inference by dynamically reducing redundant memory access using a mixture of accurate and approximate modules. 1287,Fine-grained Entity Recognition with Reduced False Negatives and Large Type Coverage,"Fine-grained Entity Recognition is the task of detecting and classifying entity mentions to a large set of types spanning diverse domains such as biomedical, finance and sports. We observe that when the type set spans several domains, detection of entity mention becomes a limitation for supervised learning models. The primary reason being lack of dataset where entity boundaries are properly annotated while covering a large spectrum of entity types. Our work directly addresses this issue. We propose Heuristics Allied with Distant Supervision framework to automatically construct a quality dataset suitable for the FgER task. HAnDS framework exploits the high interlink among Wikipedia and Freebase in a pipelined manner, reducing annotation errors introduced by naively using distant supervision approach. Using HAnDS framework, we create two datasets, one suitable for building FgER systems recognizing up to 118 entity types based on the FIGER type hierarchy and another for up to 1115 entity types based on the TypeNet hierarchy. Our extensive empirical experimentation warrants the quality of the generated datasets. Along with this, we also provide a manually annotated dataset for benchmarking FgER systems.",We initiate a push towards building ER systems to recognize thousands of types by providing a method to automatically construct suitable datasets based on the type hierarchy. 
1288,INVOCMAP: MAPPING METHOD NAMES TO METHOD INVOCATIONS VIA MACHINE LEARNING,"Implementing correct method invocation is an important task for software developers.However, this is challenging work, since the structure of method invocation can be complicated.In this paper, we propose InvocMap, a code completion tool that allows developers to obtain an implementation of multiple method invocations from a list of method names inside code context.InvocMap is able to predict the nested method invocations whose names didn’t appear in the list of input method names given by developers.To achieve this, we analyze the Method Invocations by four levels of abstraction.We build a Machine Translation engine to learn the mapping from the first level to the third level of abstraction of multiple method invocations, which only requires developers to manually add local variables from generated expression to get the final code.We evaluate our proposed approach on six popular libraries: JDK, Android, GWT, Joda-Time, Hibernate, and Xstream.With the training corpus of 2.86 million method invocations extracted from 1000 Java Github projects and the testing corpus extracted from 120 online forums code snippets, InvocMap achieves an accuracy of up to 84 in F1-score depending on how much context information is provided along with method names, which shows its potential for auto code completion.",This paper proposes a theory of classifying Method Invocations by different abstraction levels and conducting a statistical approach for code completion from method name to method invocation. 1289,Hiding Objects from Detectors: Exploring Transferrable Adversarial Patterns,"Adversaries in neural networks have drawn much attention since their debut.While most existing methods aim at deceiving image classification models into misclassification or crafting attacks for specific object instances in the object detection tasks, we focus on creating universal adversaries to fool object detectors and hide objects from the detectors.The adversaries we examine are universal in three ways: They are not specific to particular object instances; They are image-independent; They can further transfer to different unknown models.To achieve this, we propose two novel techniques to improve the transferability of the adversaries: and .Both techniques prove to simplify the patterns of generated adversaries, and ultimately result in higher transferability.",We focus on creating universal adversaries to fool object detectors and hide objects from the detectors.
1290,DNN Feature Map Compression using Learned Representation over GF(2),"In this paper, we introduce a method to compress intermediate feature maps of deep neural networks to decrease memory storage and bandwidth requirements during inference.Unlike previous works, the proposed method is based on converting fixed-point activations into vectors over the smallest GF finite field followed by nonlinear dimensionality reduction layers embedded into a DNN.Such an end-to-end learned representation finds more compact feature maps by exploiting quantization redundancies within the fixed-point activations along the channel or spatial dimensions.We apply the proposed network architecture to the tasks of ImageNet classification and PASCAL VOC object detection.Compared to prior approaches, the conducted experiments show a factor of 2 decrease in memory requirements with minor degradation in accuracy while adding only bitwise computations.",Feature map compression method that converts quantized activations into binary vectors followed by nonlinear dimensionality reduction layers embedded into a DNN 1291,Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?,"Adversarial training is one of the strongest defenses against adversarial attacks, but it requires adversarial examples to be generated for every mini-batch during optimization. The expense of producing these examples during training often precludes adversarial training from use on complex image datasets.In this study, we explore the mechanisms by which adversarial training improves classifier robustness, and show that these mechanisms can be effectively mimicked using simple regularization methods, including label smoothing and logit squeezing. Remarkably, using these simple regularization methods in combination with Gaussian noise injection, we are able to achieve strong adversarial robustness -- often exceeding that of adversarial training -- using no adversarial examples.",Achieving strong adversarial robustness comparable to adversarial training without training on adversarial examples 1292,Meta-Learning Update Rules for Unsupervised Representation Learning,"A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training.Typically, this involves minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect.In this work, we propose instead to directly target later desired tasks by meta-learning an unsupervised learning rule which leads to representations useful for those tasks. 
Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.Additionally, we constrain our unsupervised update rule to be a biologically-motivated, neuron-local function, which enables it to generalize to different neural network architectures, datasets, and data modalities.We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques.We further show that the meta-learned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities.It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.","We learn an unsupervised learning algorithm that produces useful representations from a set of supervised tasks. At test-time, we apply this algorithm to new tasks without any supervision and show performance comparable to a VAE." 1293,In Support of Over-Parametrization in Deep Reinforcement Learning: an Empirical Study,"There is significant recent evidence in supervised learning that, in the over-parametrized setting, wider networks achieve better test error.In other words, the bias-variance tradeoff is not directly observable when increasing network width arbitrarily.We investigate whether a corresponding phenomenon is present in reinforcement learning.We experiment on four OpenAI Gym environments, increasing the width of the value and policy networks beyond their prescribed values.Our empirical results lend support to this hypothesis.However, tuning the hyperparameters of each network width separately remains important future work in environments/algorithms where the optimal hyperparameters vary noticeably across widths, confounding the results when the same hyperparameters are used for all widths.","Over-parametrization in width seems to help in deep reinforcement learning, just as it does in supervised learning." 1294,CZ-GEM: A FRAMEWORK FOR DISENTANGLED REPRESENTATION LEARNING,"Learning disentangled representations of data is one of the central themes in unsupervised learning in general and generative modelling in particular. In this work, we tackle a slightly more intricate scenario where the observations are generated from a conditional distribution of some known control variate and some latent noise variate. To this end, we present a hierarchical model and a training method that leverages some of the recent developments in likelihood-based and likelihood-free generative models. We show that, by its formulation, CZ-GEM introduces the right inductive biases that ensure the disentanglement of the control from the noise variables, while also keeping the components of the control variate disentangled.This is achieved without compromising on the quality of the generated samples.Our approach is simple, general, and can be applied both in supervised and unsupervised settings.",Hierarchical generative model (hybrid of VAE and GAN) that learns a disentangled representation of data without compromising the generative quality.
1295,Deep Active Learning for Named Entity Recognition,"Deep learning has yielded state-of-the-art performance on many natural language processing tasks including named entity recognition.However, this typically requires large amounts of labeled data.In this work, we demonstrate that the amount of labeled training data can be drastically reduced when deep learning is combined with active learning.While active learning is sample-efficient, it can be computationally expensive since it requires iterative retraining.To speed this up, we introduce a lightweight architecture for NER, viz., the CNN-CNN-LSTM model consisting of convolutional character and word encoders and a long short term memory tag decoder.The model achieves nearly state-of-the-art performance on standard datasets for the task while being computationally much more efficient than best performing models.We carry out incremental active learning, during the training process, and are able to nearly match state-of-the-art performance with just 25% of the original training data.","We introduce a lightweight architecture for named entity recognition and carry out incremental active learning, which is able to match state-of-the-art performance with just 25% of the original training data." 1296,GQ-Net: Training Quantization-Friendly Deep Networks,"Network quantization is a model compression and acceleration technique that has become essential to neural network deployment.Most quantization methods perform fine-tuning on a pretrained network, but this sometimes results in a large loss in accuracy compared to the original network.We introduce a new technique to train quantization-friendly networks, which can be directly converted to an accurate quantized network without the need for additional fine-tuning.Our technique allows quantizing the weights and activations of all network layers down to 4 bits, achieving high efficiency and facilitating deployment in practical settings.Compared to other fully quantized networks operating at 4 bits, we show substantial improvements in accuracy, for example 66.68% top-1 accuracy on ImageNet using ResNet-18, compared to the previous state-of-the-art accuracy of 61.52% Louizos et al. and a full precision reference accuracy of 69.76%.We performed a thorough set of experiments to test the efficacy of our method and also conducted ablation studies on different aspects of the method and techniques to improve training stability and accuracy.Our codebase and trained models are available on GitHub.",We train accurate fully quantized networks using a loss function maximizing full precision model accuracy and minimizing the difference between the full precision and quantized networks.
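A compact sketch of the kind of composite objective described in record 1296 (GQ-Net) above: keep the full-precision network accurate on the labels while penalizing the gap between its outputs and those of a quantized copy. The uniform 4-bit weight quantizer, softmax classifier, and equal weighting of the two terms are illustrative assumptions rather than the paper's exact scheme.

```python
# Hedged sketch of a quantization-friendly training loss on a toy softmax classifier.
import numpy as np

def quantize(w, bits=4):
    scale = (np.abs(w).max() + 1e-8) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale            # uniform symmetric quantization

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 10))
y = rng.integers(0, 3, size=32)
W = rng.normal(size=(10, 3))

p_fp = softmax(X @ W)                             # full-precision predictions
p_q = softmax(X @ quantize(W))                    # predictions of the quantized copy
ce = -np.log(p_fp[np.arange(len(y)), y]).mean()   # accuracy term on the labels
gap = ((p_fp - p_q) ** 2).sum(axis=1).mean()      # full-precision vs quantized mismatch
loss = ce + gap                                   # minimizing both encourages quantization-friendliness
print(loss)
```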
1297,Connectivity Learning in Multi-Branch Networks,"While much of the work in the design of convolutional networks over the last five years has revolved around the empirical investigation of the importance of depth, filter sizes, and number of feature channels, recent studies have shown that branching, i.e., splitting the computation along parallel but distinct threads and then aggregating their outputs, represents a new promising dimension for significant improvements in performance.To combat the complexity of design choices in multi-branch architectures, prior work has adopted simple strategies, such as a fixed branching factor, the same input being fed to all parallel branches, and an additive combination of the outputs produced by all branches at aggregation points.In this work we remove these predefined choices and propose an algorithm to learn the connections between branches in the network.Instead of being chosen a priori by the human designer, the multi-branch connectivity is learned simultaneously with the weights of the network by optimizing a single loss function defined with respect to the end task.We demonstrate our approach on the problem of multi-class image classification using four different datasets where it yields consistently higher accuracy compared to the state-of-the-art ResNeXt multi-branch network given the same learning capacity.",In this paper we introduced an algorithm to learn the connectivity of deep multi-branch networks. The approach is evaluated on image categorization where it consistently yields accuracy gains over state-of-the-art models that use fixed connectivity. 1298,Discovery of Natural Language Concepts in Individual Units of CNNs,"Although deep convolutional networks have achieved improved performance in many natural language tasks, they have been treated as black boxes because they are difficult to interpret.Especially, little is known about how they represent language in their intermediate layers.In an attempt to understand the representations of deep convolutional networks trained on language tasks, we show that individual units are selectively responsive to specific morphemes, words, and phrases, rather than responding to arbitrary and uninterpretable patterns.In order to quantitatively analyze such intriguing phenomenon, we propose a concept alignment method based on how units respond to replicated text.We conduct analyses with different architectures on multiple datasets for classification and translation tasks and provide new insights into how deep models understand natural language.",We show that individual units in CNN representations learned in NLP tasks are selectively responsive to natural language concepts.
1299,Learning Temporal Abstraction with Information-theoretic Constraints for Hierarchical Reinforcement Learning,"Applying reinforcement learning to real-world problems will require reasoning about action-reward correlation over long time horizons.Hierarchical reinforcement learning methods handle this by dividing the task into hierarchies, often with hand-tuned network structure or pre-defined subgoals.We propose a novel HRL framework TAIC, which learns the temporal abstraction from past experience or expert demonstrations without task-specific knowledge.We formulate the temporal abstraction problem as learning latent representations of action sequences and present a novel approach of regularizing the latent space by adding information-theoretic constraints.Specifically, we maximize the mutual information between the latent variables and the state changes.A visualization of the latent space demonstrates that our algorithm learns an effective abstraction of the long action sequences.The learned abstraction allows us to learn new tasks on a higher level more efficiently.We observe a significant speedup in convergence over benchmark learning problems.These results demonstrate that learning temporal abstractions is an effective technique in increasing the convergence rate and sample efficiency of RL algorithms.","We propose a novel HRL framework, in which we formulate the temporal abstraction problem as learning a latent representation of action sequence." 1300,Feature Intertwiner for Object Detection,"A well-trained model should classify objects with unanimous score for every category.This requires that the high-level semantic features be alike among samples, despite a wide span in resolution, texture, deformation, etc.Previous works focus on re-designing the loss function or proposing new regularization constraints on the loss.In this paper, we address this problem via a new perspective.For each category, it is assumed that there are two sets in the feature space: one with more reliable information and the other with less reliable source.We argue that the reliable set could guide the feature learning of the less reliable set during training - in the spirit of a student mimicking the teacher’s behavior and thus pushing towards a more compact class centroid in the high-dimensional space.Such a scheme also benefits the reliable set since samples become closer within the same category - implying that it is easier for the classifier to identify.We refer to this mutual learning process as feature intertwiner and embed the spirit into object detection.It is well-known that objects of low resolution are more difficult to detect due to the loss of detailed information during network forward pass.We thus regard objects of high resolution as the reliable set and objects of low resolution as the less reliable set.Specifically, an intertwiner is achieved by minimizing the distribution divergence between two sets.We design a historical buffer to represent all previous samples in the reliable set and utilize them to guide the feature learning of the less reliable set.The design of obtaining an effective feature representation for the reliable set is further investigated, where we introduce the optimal transport algorithm into the framework.Samples in the less reliable set are better aligned with the reliable set with the aid of the OT metric.Incorporated with such a plug-and-play intertwiner, we achieve an evident improvement over previous state-of-the-arts on the COCO object detection benchmark.",(Camera-ready
version) A feature intertwiner module to leverage features from one accurate set to help the learning of another less reliable set. 1301,Continual Learning with Adaptive Weights (CLAW),"Approaches to continual learning aim to successfully learn a set of related tasks that arrive in an online manner.Recently, several frameworks have been developed which enable deep learning to be deployed in this learning scenario.A key modelling decision is to what extent the architecture should be shared across tasks.On the one hand, separately modelling each task avoids catastrophic forgetting but it does not support transfer learning and leads to large models.On the other hand, rigidly specifying a shared component and a task-specific part enables task transfer and limits the model size, but it is vulnerable to catastrophic forgetting and restricts the form of task-transfer that can occur.Ideally, the network should adaptively identify which parts of the network to share in a data driven way.Here we introduce such an approach called Continual Learning with Adaptive Weights, which is based on probabilistic modelling and variational inference.Experiments show that CLAW achieves state-of-the-art performance on six benchmarks in terms of overall continual learning performance, as measured by classification accuracy, and in terms of addressing catastrophic forgetting.",A continual learning framework which learns to automatically adapt its architecture based on a proposed variational inference algorithm. 1302,Noisy $\ell^{0}$-Sparse Subspace Clustering on Dimensionality Reduced Data,"High-dimensional data often lie in or close to low-dimensional subspaces.Sparse subspace clustering methods with sparsity induced by L0-norm, such as L0-Sparse Subspace Clustering, are demonstrated to be more effective than its L1 counterpart such as Sparse Subspace Clustering.However, these L0-norm based subspace clustering methods are restricted to clean data that lie exactly in subspaces.Real data often suffer from noise and they may lie close to subspaces.We propose noisy L0-SSC to handle noisy data so as to improve the robustness.We show that the optimal solution to the optimization problem of noisy L0-SSC achieves subspace detection property, a key element with which data from different subspaces are separated, under deterministic and randomized models.Our results provide theoretical guarantee on the correctness of noisy L0-SSC in terms of SDP on noisy data.We further propose Noisy-DR-L0-SSC which provably recovers the subspaces on dimensionality reduced data.Noisy-DR-L0-SSC first projects the data onto a lower dimensional space by linear transformation, then performs noisy L0-SSC on the dimensionality reduced data so as to improve the efficiency.The experimental results demonstrate the effectiveness of noisy L0-SSC and Noisy-DR-L0-SSC.",We propose Noisy-DR-L0-SSC (Noisy Dimension Reduction L0-Sparse Subspace Clustering) to efficiently partition noisy data in accordance to their underlying subspace structure. 1303,Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness,"Mode connectivity provides novel geometric insights on analyzing loss landscapes and enables building high-accuracy pathways between well-trained neural networks.In this work, we propose to employ mode connectivity in loss landscapes to study the adversarial robustness of deep neural networks, and provide novel methods for improving this robustness. 
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.When network models are tampered with by backdoor or error-injection attacks, our results demonstrate that the path connection learned using a limited amount of bona fide data can effectively mitigate adversarial effects while maintaining the original accuracy on clean data.Therefore, mode connectivity provides users with the power to repair backdoored or error-injected models. We also use mode connectivity to investigate the loss landscapes of regular and robust models against evasion attacks.Experiments show that there exists a barrier in adversarial robustness loss on the path connecting regular and adversarially-trained models. A high correlation is observed between the adversarial robustness loss and the largest eigenvalue of the input Hessian matrix, for which theoretical justifications are provided. Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.","A novel approach using mode connectivity in loss landscapes to mitigate adversarial effects, repair tampered models and evaluate adversarial robustness" 1304,Improving Multi-Manifold GANs with a Learned Noise Prior,"Generative adversarial networks learn to map samples from a noise distribution to a chosen data distribution.Recent work has demonstrated that GANs are consequently sensitive to, and limited by, the shape of the noise distribution.For example, a single generator struggles to map continuous noise to discontinuous output or complex output.We address this problem by learning to generate from multiple models such that the generator's output is actually the combination of several distinct networks.We contribute a novel formulation of multi-generator models where we learn a prior over the generators conditioned on the noise, parameterized by a neural network.Thus, this network not only learns the optimal rate to sample from each generator but also optimally shapes the noise received by each generator.The resulting Noise Prior GAN achieves expressivity and flexibility that surpasses both single generator models and previous multi-generator models.",A multi-generator GAN framework with an additional network to learn a prior over the input noise. 1305,Seeing is Not Necessarily Believing: Limitations of BigGANs for Data Augmentation,"Recent advances in Generative Adversarial Networks – in architectural design, training strategies, and empirical tricks – have led to nearly photorealistic samples on large-scale datasets such as ImageNet. In fact, for one model in particular, BigGAN, metrics such as Inception Score or Frechet Inception Distance nearly match those of the dataset, suggesting that these models are close to matching the distribution of the training set. Given the quality of these models, it is worth understanding to what extent these samples can be used for data augmentation, a task expressed as a long-term goal of the GAN research project.
To that end, we train ResNet-50 classifiers using either purely BigGAN images or mixtures of ImageNet and BigGAN images, and test on the ImageNet validation set.Our preliminary results suggest both a measured view of state-of-the-art GAN quality and highlight limitations of current metrics.Using only BigGAN images, we find that Top-1 and Top-5 error increased by 120% and 384%, respectively, and furthermore, adding more BigGAN data to the ImageNet training set at best only marginally improves classifier performance.Finally, we find that neither Inception Score, nor FID, nor combinations thereof are predictive of classification accuracy. These results suggest that as GANs are beginning to be deployed in downstream tasks, we should create metrics that better measure downstream task performance. We propose classification performance as one such metric that, in addition to assessing per-class sample quality, is more suited to such downstream tasks.",BigGANs do not capture the ImageNet data distributions and are only modestly successful for data augmentation. 1306,LEAF: A Benchmark for Federated Settings,"Modern federated networks, such as those comprised of wearable devices, mobile phones, or autonomous vehicles, generate massive amounts of data each day.This wealth of data can help to learn models that can improve the user experience on each device.However, the scale and heterogeneity of federated data presents new challenges in research areas such as federated learning, meta-learning, and multi-task learning.As the machine learning community begins to tackle these challenges, we are at a critical time to ensure that developments made in these areas are grounded with realistic benchmarks.To this end, we propose Leaf, a modular benchmarking framework for learning in federated settings.Leaf includes a suite of open-source federated datasets, a rigorous evaluation framework, and a set of reference implementations, all geared towards capturing the obstacles and intricacies of practical federated environments.","We present Leaf, a modular benchmarking framework for learning in federated data, with applications to learning paradigms such as federated learning, meta-learning, and multi-task learning." 1307,Learning a Spatio-Temporal Embedding for Video Instance Segmentation,"Understanding object motion is one of the core problems in computer vision.It requires segmenting and tracking objects over time.Significant progress has been made in instance segmentation, but such models cannot track objects, and more crucially, they are unable to reason in both 3D space and time.We propose a new spatio-temporal embedding loss on videos that generates temporally consistent video instance segmentation.Our model includes a temporal network that learns to model temporal context and motion, which is essential to produce smooth embeddings over time.Further, our model also estimates monocular depth, with a self-supervised loss, as the relative distance to an object effectively constrains where it can be next, ensuring a time-consistent embedding.Finally, we show that our model can accurately track and segment instances, even with occlusions and missed detections, advancing the state-of-the-art on the KITTI Multi-Object and Tracking Dataset.","We introduce a new spatio-temporal embedding loss on videos that generates temporally consistent video instance segmentation, even with occlusions and missed detections, using appearance, geometry, and temporal context." 
1308,Hindsight Trust Region Policy Optimization,"As reinforcement learning continues to drive machine intelligence beyond its conventional boundary, unsubstantial practices in sparse reward environment severely limit further applications in a broader range of advanced fields.Motivated by the demand for an effective deep reinforcement learning algorithm that accommodates sparse reward environment, this paper presents Hindsight Trust Region Policy Optimization, a method that efficiently utilizes interactions in sparse reward conditions to optimize policies within trust region and, in the meantime, maintains learning stability.Firstly, we theoretically adapt the TRPO objective function, in the form of the expected return of the policy, to the distribution of hindsight data generated from the alternative goals.Then, we apply Monte Carlo with importance sampling to estimate KL-divergence between two policies, taking the hindsight data as input.Under the condition that the distributions are sufficiently close, the KL-divergence is approximated by another f-divergence.Such approximation results in the decrease of variance and alleviates the instability during policy update. Experimental results on both discrete and continuous benchmark tasks demonstrate that HTRPO converges significantly faster than previous policy gradient methods.It achieves effective performances and high data-efficiency for training policies in sparse reward environments.",This paper proposes an advanced policy optimization method with hindsight experience for sparse reward reinforcement learning. 1309,Answering Science Exam Questions Using Query Reformulation with Background Knowledge,"Open-domain question answering is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques.Much of the progress in open-domain QA systems has been realized through advances in information retrieval methods and corpus construction.In this paper, we focus on the recently introduced ARC Challenge dataset, which contains 2,590 multiple choice questions authored for grade-school science exams.These questions are selected to be the most challenging for current QA systems, and current state of the art performance is only slightly better than random chance.We present a system that reformulates a given question into queries that are used to retrieve supporting text from a large corpus of science-related text.Our rewriter is able to incorporate background knowledge from ConceptNet and -- in tandem with a generic textual entailment system trained on SciTail that identifies support in the retrieved results -- outperforms several strong baselines on the end-to-end QA task despite only being trained to identify essential terms in the original source question.We use a generalizable decision methodology over the retrieved evidence and answer candidates to select the best answer.By combining query reformulation, background knowledge, and textual entailment our system is able to outperform several strong baselines on the ARC dataset.",We explore how using background knowledge with query reformulation can help retrieve better supporting evidence when answering multiple-choice science questions. 
1310,ShardNet: One Filter Set to Rule Them All,"Deep CNNs have achieved state-of-the-art performance for numerous machine learning and computer vision tasks in recent years, but as they have become increasingly deep, the number of parameters they use has also increased, making them hard to deploy in memory-constrained environments and difficult to interpret.Machine learning theory implies that such networks are highly over-parameterised and that it should be possible to reduce their size without sacrificing accuracy, and indeed many recent studies have begun to highlight specific redundancies that can be exploited to achieve this.In this paper, we take a further step in this direction by proposing a filter-sharing approach to compressing deep CNNs that reduces their memory footprint by repeatedly applying a single convolutional mapping of learned filters to simulate a CNN pipeline.We show, via experiments on CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet that this allows us to reduce the parameter counts of networks based on common designs such as VGGNet and ResNet by a factor proportional to their depth, whilst leaving their accuracy largely unaffected.At a broader level, our approach also indicates how the scale-space regularities found in visual signals can be leveraged to build neural architectures that are more parsimonious and interpretable.","We compress deep CNNs by reusing a single convolutional layer in an iterative manner, thereby reducing their parameter counts by a factor proportional to their depth, whilst leaving their accuracies largely unaffected" 1311,Spectral Analysis of Kernel and Neural Embeddings: Optimization and Generalization,"We extend the recent results of by a spectral analysis of representations corresponding to kernel and neural embeddings.They showed that in a simple single layer network, the alignment of the labels to the eigenvectors of the corresponding Gram matrix determines both the convergence of the optimization during training as well as the generalization properties.We generalize their result to kernel and neural representations and show that these extensions improve both optimization and generalization of the basic setup studied in.",Spectral analysis for understanding how different representations can improve optimization and generalization. 1312,Conservative Uncertainty Estimation By Fitting Prior Networks,"Obtaining high-quality uncertainty estimates is essential for many applications of deep neural networks.In this paper, we theoretically justify a scheme for estimating uncertainties, based on sampling from a prior distribution.Crucially, the uncertainty estimates are shown to be conservative in the sense that they never underestimate a posterior uncertainty obtained by a hypothetical Bayesian algorithm.We also show concentration, implying that the uncertainty estimates converge to zero as we get more data.Uncertainty estimates obtained from random priors can be adapted to any deep network architecture and trained using standard supervised learning pipelines.We provide experimental evaluation of random priors on calibration and out-of-distribution detection on typical computer vision tasks, demonstrating that they outperform deep ensembles in practice.",We provide theoretical support to uncertainty estimates for deep learning obtained fitting random priors. 
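A toy sketch of the random-prior recipe in record 1312 above: freeze a randomly initialized prior network, fit a trainable predictor to the prior's outputs on the training inputs only, and read the squared fitting error as the uncertainty estimate, which tends to stay small near the data and grow away from it. The tiny tanh prior, polynomial predictor, and least-squares fit are illustrative assumptions.

```python
# Hedged sketch of conservative uncertainty from fitting a fixed random prior.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(1, 16)), rng.normal(size=(16, 1))
prior = lambda x: np.tanh(x @ W1) @ W2            # fixed random prior, never trained

X_train = rng.uniform(-1.0, 1.0, size=(200, 1))   # training inputs only
Phi = np.hstack([X_train, X_train ** 2, np.ones_like(X_train)])
coef, *_ = np.linalg.lstsq(Phi, prior(X_train), rcond=None)  # predictor fit to the prior

def uncertainty(x):
    phi = np.hstack([x, x ** 2, np.ones_like(x)])
    return ((prior(x) - phi @ coef) ** 2).ravel() # squared fitting error as uncertainty

print(uncertainty(np.array([[0.0]])))             # a point inside the training range
print(uncertainty(np.array([[4.0]])))             # a point far outside it
```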
1313,Arbitrarily-conditioned Data Imputation,"In this paper, we propose an arbitrarily-conditioned data imputation framework built upon variational autoencoders and normalizing flows.The proposed model is capable of mapping any partial data to a multi-modal latent variational distribution.Sampling from such a distribution leads to stochastic imputation.Preliminary evaluation on the MNIST dataset shows promising stochastic imputation conditioned on partial images as input.",We propose an arbitrarily-conditioned data imputation framework built upon variational autoencoders and normalizing flows 1314,The Secret Revealer: Generative Model Inversion Attacks Against Deep Neural Networks,"This paper studies model inversion attacks, in which the access to a model is abused to infer information about the training data.Since their first introduction, such attacks have raised serious concerns given that training data usually contain sensitive information.Thus far, successful model inversion attacks have only been demonstrated on simple models, such as linear regression and logistic regression.Previous attempts to invert neural networks, even the ones with simple architectures, have failed to produce convincing results.We present a novel attack method, termed the generative model inversion attack, which can invert deep neural networks with high success rates.Rather than reconstructing private training data from scratch, we leverage partial public information, which can be very generic, to learn a distributional prior via generative adversarial networks and use it to guide the inversion process.Moreover, we theoretically prove that a model's predictive power and its vulnerability to inversion attacks are indeed two sides of the same coin---highly predictive models are able to establish a strong correlation between features and labels, which coincides exactly with what an adversary exploits to mount the attacks.Our experiments demonstrate that the proposed attack improves identification accuracy over the existing work by about for reconstructing face images from a state-of-the-art face recognition classifier.We also show that differential privacy, in its canonical form, is of little avail to protect against our attacks.",We develop a privacy attack that can recover the sensitive input data of a deep net from its output 1315,Bilingual-GAN: Neural Text Generation and Neural Machine Translation as Two Sides of the Same Coin,"Latent space based GAN methods and attention based encoder-decoder architectures have achieved impressive results in text generation and Unsupervised NMT respectively.Leveraging the two domains, we propose an adversarial latent space based architecture capable of generating parallel sentences in two languages concurrently and translating bidirectionally.The bilingual generation goal is achieved by sampling from the latent space that is adversarially constrained to be shared between both languages.First an NMT model is trained, with back-translation and an adversarial setup, to enforce a latent state between the two languages.The encoder and decoder are shared for the two translation directions.Next, a GAN is trained to generate ‘synthetic’ code mimicking the languages’ shared latent space.This code is then fed into the decoder to generate text in either language.We perform our experiments on Europarl and Multi30k datasets, on the English-French language pair, and document our performance using both Supervised and Unsupervised NMT.",We present a novel method for Bilingual Text Generation producing parallel concurrent sentences in two
languages. 1316,Online Explanation Generation for Human-Robot Teaming,"As Artificial Intelligence becomes an integral part of our life, the development of explainable AI, embodied in the decision-making process of an AI or robotic agent, becomes imperative. For a robotic teammate, the ability to generate explanations to explain its behavior is one of the key requirements of an explainable agency.Prior work on explanation generation focuses on supporting the reasoning behind the robot's behavior.These approaches, however, fail to consider the mental workload needed to understand the received explanation.In other words, the human teammate is expected to understand any explanation provided, often before the task execution, no matter how much information is presented in the explanation.In this work, we argue that an explanation, especially a complex one, should be made in an online fashion during the execution, which helps spread out the information to be explained and thus reduces the mental workload of humans.However, a challenge here is that the different parts of an explanation are dependent on each other, which must be taken into account when generating online explanations.To this end, a general formulation of online explanation generation is presented along with three different implementations satisfying different online properties.We base our explanation generation method on a model reconciliation setting introduced in our prior work.Our approaches are evaluated both with human subjects in a standard planning competition domain, using NASA Task Load Index, as well as in simulation with ten different problems across two IPC domains.",We introduce online explanation to consider the cognitive requirement of the human for understanding the generated explanation by the agent. 1317,GPU Memory Management for Deep Neural Networks Using Deep Q-Network,"Deep neural networks use deeper and broader structures to achieve better performance and consequently, use increasingly more GPU memory as well.However, limited GPU memory restricts many potential designs of neural networks.In this paper, we propose a reinforcement learning based variable swapping and recomputation algorithm to reduce the memory cost, without sacrificing the accuracy of models.Variable swapping can transfer variables between CPU and GPU memory to reduce variables stored in GPU memory.Recomputation can trade time for space by removing some feature maps during forward propagation.Forward functions are executed once again to get the feature maps before reuse.However, how to automatically decide which variables should be swapped or recomputed remains a challenging problem.To address this issue, we propose to use a deep Q-network to make plans.By combining variable swapping and recomputation, our results outperform several well-known benchmarks.",We propose a reinforcement learning based variable swapping and recomputation algorithm to reduce the memory cost.
1318,Iterative temporal differencing with fixed random feedback alignment support spike-time dependent plasticity in vanilla backpropagation for deep learning,"In vanilla backpropagation, activation function matters considerably in terms of non-linearity and differentiability.Vanishing gradient has been an important problem related to the bad choice of activation function in deep learning.This work shows that a differentiable activation function is not necessary any more for error backpropagation.The derivative of the activation function can be replaced by an iterative temporal differencing using fixed random feedback weight alignment.Using FBA with ITD, we can transform the VBP into a more biologically plausible approach for learning deep neural network architectures.""We don't claim that ITD works completely the same as the spike-time dependent plasticity in our brain but this work can be a step toward the integration of STDP-based error backpropagation in deep learning.",Iterative temporal differencing with fixed random feedback alignment support spike-time dependent plasticity in vanilla backpropagation for deep learning. 1319,A Deep Generative Acoustic Model for Compositional Automatic Speech Recognition,"Inspired by the recent successes of deep generative models for Text-To-Speech such as WaveNet and Tacotron, this article proposes the use of a deep generative model tailored for Automatic Speech Recognition as the primary acoustic model for an overall recognition system with a separate language model.Two dimensions of depth are considered: the use of mixture density networks, both autoregressive and non-autoregressive, to generate density functions capable of modeling acoustic input sequences with much more powerful conditioning than the first-generation generative models for ASR, Gaussian Mixture Models / Hidden Markov Models, and the use of standard LSTMs, in the spirit of the original tandem approach, to produce discriminative feature vectors for generative modeling.Combining mixture density networks and deep discriminative features leads to a novel dual-stack LSTM architecture directly related to the RNN Transducer, but with the explicit functional form of a density, and combining naturally with a separate language model, using Bayes rule.The generative models discussed here are compared experimentally in terms of log-likelihoods and frame accuracies.","This paper proposes the use of a deep generative acoustic model for automatic speech recognition, combining naturally with other deep sequence-to-sequence modules using Bayes' rule." 
1320,Spatially Transformed Adversarial Examples,"Recent studies show that widely used deep neural networks are vulnerable to carefully crafted adversarial examples.Many advanced algorithms have been proposed to generate adversarial examples by leveraging the L_p distance for penalizing perturbations.Different defense methods have also been explored to defend against such adversarial attacks.While the effectiveness of L_p distance as a metric of perceptual quality remains an active research area, in this paper we will instead focus on a different type of perturbation, namely spatial transformation, as opposed to manipulating the pixel values directly as in prior works.Perturbations generated through spatial transformation could result in large L_p distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems.This potentially provides a new direction in adversarial example generation and the design of corresponding defenses.We visualize the spatial transformation based perturbation for different examples and show that our technique can produce realistic adversarial examples with smooth image deformation.Finally, we visualize the attention of deep networks with different types of adversarial examples to better understand how these examples are interpreted.","We propose a new approach for generating adversarial examples based on spatial transformation, which produces perceptually realistic examples compared to existing attacks. " 1321,PARAMETRIZED DEEP Q-NETWORKS LEARNING: PLAYING ONLINE BATTLE ARENA WITH DISCRETE-CONTINUOUS HYBRID ACTION SPACE,"Most existing deep reinforcement learning frameworks consider action spaces that are either discrete or continuous.Motivated by the project of designing Game AI for King of Glory, one of the world’s most popular mobile games, we consider the scenario with the discrete-continuous hybrid action space.To directly apply existing DRL frameworks, existing approaches either approximate the hybrid space by a discrete set or relax it into a continuous set, which is usually less efficient and robust.In this paper, we propose a parametrized deep Q-network for the hybrid action space without approximation or relaxation.Our algorithm combines DQN and DDPG and can be viewed as an extension of the DQN to hybrid actions.The empirical study on the game KOG validates the efficiency and effectiveness of our method.",A DQN and DDPG hybrid algorithm is proposed to deal with the discrete-continuous hybrid action space.
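As a rough illustration of the hybrid-action idea in the P-DQN entry above (1321): a hypothetical actor proposes a continuous parameter vector for every discrete action, a Q-function scores each (discrete action, parameters) pair, and the agent picks the discrete action with the highest Q-value. The linear random-weight "networks" and all shapes below are stand-ins, not the paper's architecture.

import numpy as np

rng = np.random.default_rng(0)
state_dim, n_discrete, param_dim = 8, 4, 2

# Stand-in "networks": a linear actor and a linear Q-function with random weights.
W_actor = rng.normal(size=(state_dim, n_discrete * param_dim)) * 0.1
W_q = rng.normal(size=(state_dim + param_dim, n_discrete)) * 0.1

def continuous_params(state):
    # One continuous parameter vector per discrete action (DDPG-style deterministic actor).
    return np.tanh(state @ W_actor).reshape(n_discrete, param_dim)

def hybrid_action(state):
    params = continuous_params(state)
    # Q(s, k, x_k): score each discrete action together with its own continuous parameters (DQN-style head).
    q = np.array([np.concatenate([state, params[k]]) @ W_q[:, k] for k in range(n_discrete)])
    k_star = int(np.argmax(q))
    return k_star, params[k_star]

state = rng.normal(size=state_dim)
k, x_k = hybrid_action(state)
print("discrete action:", k, "continuous parameters:", x_k)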
1322,MDE: Multiple Distance Embeddings for Link Prediction in Knowledge Graphs,"Over the past decade, knowledge graphs became popular for capturing structured domain knowledge.Relational learning models enable the prediction of missing links inside knowledge graphs.More specifically, latent distance approaches model the relationships among entities via a distance between latent representations.Translating embedding models are among the most popular latent distance approaches which use one distance function to learn multiple relation patterns.However, they are mostly inefficient in capturing symmetric relations since the representation vector norm for all the symmetric relations becomes equal to zero.They also lose information when learning relations with reflexive patterns since they become symmetric and transitive.We propose the Multiple Distance Embedding model that addresses these limitations and a framework which enables collaborative combinations of latent distance-based terms.Our solution is based on two principles:1) using limit-based loss instead of margin ranking loss and2) by learning independent embedding vectors for each of terms we can collectively train and predict using contradicting distance terms.We further demonstrate that MDE allows modeling relations withsymmetry, inversion, and composition patterns.We propose MDE as a neural network model which allows us to map non-linear relations between the embedding vectors and the expected output of the score function.Our empirical results show that MDE outperforms the state-of-the-art embedding models on several benchmark datasets.",A novel method of modelling Knowledge Graphs based on Distance Embeddings and Neural Networks 1323,Improve Training Stability of Semi-supervised Generative Adversarial Networks with Collaborative Training,"Improved generative adversarial network is a successful method of using generative adversarial models to solve the problem of semi-supervised learning.However, it suffers from the problem of unstable training.In this paper, we found that the instability is mostly due to the vanishing gradients on the generator.To remedy this issue, we propose a new method to use collaborative training to improve the stability of semi-supervised GAN with the combination of Wasserstein GAN.The experiments have shown that our proposed method is more stable than the original Improved GAN and achieves comparable classification accuracy on different data sets.",Improve Training Stability of Semi-supervised Generative Adversarial Networks with Collaborative Training 1324,Deep Neural Networks as Gaussian Processes,"It has long been known that a single-layer fully-connected neural network with an i.i.d.prior over its parameters is equivalent to a Gaussian process, in the limit of infinite network width. 
This correspondence enables exact Bayesian inference for infinite width neural networks on regression tasks by means of evaluating the corresponding GP.Recently, kernel functions which mimic multi-layer random neural networks have been developed, but only outside of a Bayesian framework.As such, previous work has not identified that these kernels can be used as covariance functions for GPs and allow fully Bayesian prediction with a deep neural network.In this work, we derive the exact equivalence between infinitely wide, deep, networks and GPs with a particular covariance function.We further develop a computationally efficient pipeline to compute this covariance function.We then use the resulting GP to perform Bayesian inference for deep neural networks on MNIST and CIFAR-10. We observe that the trained neural network accuracy approaches that of the corresponding GP with increasing layer width, and that the GP uncertainty is strongly correlated with trained network prediction error.We further find that test performance increases as finite-width trained networks are made wider and more similar to a GP, and that the GP-based predictions typically outperform those of finite-width networks.Finally we connect the prior distribution over weights and variances in our GP formulation to the recent development of signal propagation in random neural networks.","We show how to make predictions using deep networks, without training deep networks." 1325,Searching for Stage-wise Neural Graphs In the Limit,"Search space is a key consideration for neural architecture search.Recently, Xie et al. found that randomly generated networks from the same distribution perform similarly, which suggest we should search for random graph distributions instead of graphs.We propose graphon as a new search space.A graphon is the limit of Cauchy sequence of graphs and a scale-free probabilistic distribution, from which graphs of different number of vertices can be drawn.This property enables us to perform NAS using fast, low-capacity models and scale the found models up when necessary.We develop an algorithm for NAS in the space of graphons and empirically demonstrate that it can find stage-wise graphs that outperform DenseNet and other baselines on ImageNet.",Graphon is a good search space for neural architecture search and empirically produces good networks. 1326,Distinguishability of Adversarial Examples,"Machine learning models including traditional models and neural networks can be easily fooled by adversarial examples which are generated from the natural examples with small perturbations. This poses a critical challenge to machine learning security, and impedes the wide application of machine learning in many important domains such as computer vision and malware detection. Unfortunately, even state-of-the-art defense approaches such as adversarial training and defensive distillation still suffer from major limitations and can be circumvented. From a unique angle, we propose to investigate two important research questions in this paper: Are adversarial examples distinguishable from natural examples? Are adversarial examples generated by different methods distinguishable from each other? These two questions concern the distinguishability of adversarial examples. Answering them will potentially lead to a simple yet effective approach, termed as defensive distinction in this paper under the formulation of multi-label classification, for protecting against adversarial examples. 
We design and perform experiments using the MNIST dataset to investigate these two questions, and obtain highly positive results demonstrating the strong distinguishability of adversarial examples. We recommend that this unique defensive distinction approach should be seriously considered to complement other defense approaches.",We propose a defensive distinction protection approach and demonstrate the strong distinguishability of adversarial examples. 1327,The Nonlinearity Coefficient - Predicting Generalization in Deep Neural Networks,"For a long time, designing neural architectures that exhibit high performance was considered a dark art that required expert hand-tuning.One of the few well-known guidelines for architecture design is the avoidance of exploding or vanishing gradients.However, even this guideline has remained relatively vague and circumstantial, because there exists no well-defined, gradient-based metric that can be computed before training begins and can robustly predict the performance of the network after training is complete.We introduce what is, to the best of our knowledge, the first such metric: the nonlinearity coefficient.Via an extensive empirical study, we show that the NLC, computed in the network's randomly initialized state, is a powerful predictor of test error and that attaining a right-sized NLC is essential for attaining an optimal test error, at least in fully-connected feedforward networks.The NLC is also conceptually simple, cheap to compute, and is robust to a range of confounders and architectural design choices that comparable metrics are not necessarily robust to.Hence, we argue the NLC is an important tool for architecture search and design, as it can robustly predict poor training outcomes before training even begins.","We introduce the NLC, a metric that is cheap to compute in the network's randomly initialized state and is highly predictive of generalization, at least in fully-connected networks." 1328,An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack,"There are two major paradigms of white-box adversarial attacks that attempt to impose input perturbations. The first paradigm, called the fix-perturbation attack, crafts adversarial samples within a given perturbation level. The second paradigm, called the zero-confidence attack, finds the smallest perturbation needed to cause misclassification, also known as the margin of an input feature. While the former paradigm is well-resolved, the latter is not. Existing zero-confidence attacks either introduce significant approximation errors, or are too time-consuming. We therefore propose MarginAttack, a zero-confidence attack framework that is able to compute the margin with improved accuracy and efficiency. Our experiments show that MarginAttack is able to compute a smaller margin than the state-of-the-art zero-confidence attacks, and matches the state-of-the-art fix-perturbation attacks. In addition, it runs significantly faster than the Carlini-Wagner attack, currently the most accurate zero-confidence attack algorithm.","This paper introduces MarginAttack, a stronger and faster zero-confidence adversarial attack."
1329,Dynamic Scale Inference by Entropy Minimization,"Given the variety of the visual world there is not one true scale for recognition: objects may appear at drastically different sizes across the visual field.Rather than enumerate variations across filter channels or pyramid levels, dynamic models locally predict scale and adapt receptive fields accordingly.The degree of variation and diversity of inputs makes this a difficult task.Existing methods either learn a feedforward predictor, which is not itself totally immune to the scale variation it is meant to counter, or select scales by a fixed algorithm, which cannot learn from the given task and data.We extend dynamic scale inference from feedforward prediction to iterative optimization for further adaptivity.We propose a novel entropy minimization objective for inference and optimize over task and structure parameters to tune the model to each input.Optimization during inference improves semantic segmentation accuracy and generalizes better to extreme scale variations that cause feedforward dynamic inference to falter.",Unsupervised optimization during inference gives top-down feedback to iteratively adjust feedforward prediction of scale variation for more equivariant recognition. 1330,Exploring Properties of the Deep Image Prior,"The Deep Image Prior is a fascinating recent approach for recovering images which appear natural, yet is not fully understood.This work aims at shedding some further light on this approach by investigating the properties of the early outputs of the DIP.First, we show that these early iterations demonstrate invariance to adversarial perturbations by classifying progressive DIP outputs and using a novel saliency map approach.Next we explore using DIP as a defence against adversaries, showing good potential.Finally, we examine the adversarial invariancy of the early DIP outputs, and hypothesize that these outputs may remove non-robust image features.By comparing classification confidence values we show some evidence confirming this hypothesis.","We investigate properties of the recently introduced Deep Image Prior (Ulyanov et al, 2017)" 1331,Data Augmentation for Rumor Detection Using Context-Sensitive Neural Language Model With Large-Scale Credibility Corpus,"In this paper, we address the challenge of limited labeled data and class imbalance problem for machine learning-based rumor detection on social media.We present an offline data augmentation method based on semantic relatedness for rumor detection.To this end, unlabeled social media data is exploited to augment limited labeled data.A context-aware neural language model and a large credibility-focused Twitter corpus are employed to learn effective representations of rumor tweets for semantic relatedness measurement.A language model fine-tuned with the a large domain-specific corpus shows a dramatic improvement on training data augmentation for rumor detection over pretrained language models.We conduct experiments on six different real-world events based on five publicly available data sets and one augmented data set.Our experiments show that the proposed method allows us to generate a larger training data with reasonable quality via weak supervision.We present preliminary results achieved using a state-of-the-art neural network model with augmented data for rumor detection.",We propose a methodology of augmenting publicly available data for rumor studies based on samantic relatedness between limited labeled and unlabeled data. 
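A minimal sketch of the augmentation step described in the rumor-detection entry above (1331), assuming tweets have already been encoded into vectors by some language model; unlabeled tweets whose cosine similarity to a labeled example exceeds a threshold are weakly labeled with that example's class and added to the training data. The encoder, threshold, and data here are placeholders.

import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def augment_by_relatedness(labeled_vecs, labels, unlabeled_vecs, threshold=0.8):
    """Weakly label unlabeled examples by their most semantically related labeled example."""
    augmented = []
    for u in unlabeled_vecs:
        sims = np.array([cosine(u, l) for l in labeled_vecs])
        j = int(np.argmax(sims))
        if sims[j] >= threshold:                 # only keep confidently related examples
            augmented.append((u, labels[j]))     # inherit the nearest labeled tweet's class
    return augmented

rng = np.random.default_rng(0)
labeled = rng.normal(size=(20, 128))             # placeholder sentence embeddings
labels = rng.integers(0, 2, size=20)             # 1 = rumor, 0 = non-rumor
unlabeled = rng.normal(size=(100, 128))
print(len(augment_by_relatedness(labeled, labels, unlabeled)))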
1332,Pay Less Attention with Lightweight and Dynamic Convolutions,"Self-attention is a useful mechanism to build generative models for language and images.It determines the importance of context elements by comparing each element to the current time step.In this paper, we show that a very lightweight convolution can perform competitively to the best reported self-attention results.Next, we introduce dynamic convolutions which are simpler and more efficient than self-attention.We predict separate convolution kernels based solely on the current time-step in order to determine the importance of context elements.The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic.Experiments on large-scale machine translation, language modeling and abstractive summarization show that dynamic convolutions improve over strong self-attention models.""On the WMT'14 English-German test set dynamic convolutions achieve a new state of the art of 29.7 BLEU.",Dynamic lightweight convolutions are competitive to self-attention on language tasks. 1333,Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile,"Owing to their connection with generative adversarial networks, saddle-point problems have recently attracted considerable interest in machine learning and beyond.By necessity, most theoretical guarantees revolve around convex-concave problems; however, making theoretical inroads towards efficient GAN training depends crucially on moving beyond this classic framework.To make piecemeal progress along these lines, we analyze the behavior of mirror descent in a class of non-monotone problems whose solutions coincide with those of a naturally associated variational inequality – a property which we call coherence.We first show that ordinary, “vanilla” MD converges under a strict version of this condition, but not otherwise; in particular, it may fail to converge even in bilinear models with a unique solution.We then show that this deficiency is mitigated by optimism: by taking an “extra-gradient” step, optimistic mirror descent converges in all coherent problems.Our analysis generalizes and extends the results of Daskalakis et al. [2018] for optimistic gradient descent in bilinear problems, and makes concrete headway for provable convergence beyond convex-concave games.We also provide stochastic analogues of these results, and we validate our analysis by numerical experiments in a wide array of GAN models.",We show how the inclusion of an extra-gradient step in first-order GAN training methods can improve stability and lead to improved convergence results. 
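The extra-gradient step discussed in the saddle-point entry above (1333) is easy to state in code: evaluate the gradient, take a trial step, then update the original point with the gradient measured at the trial point. The toy bilinear problem min_x max_y x*y below is only an illustration of why the extra step helps; plain simultaneous gradient descent/ascent spirals outward on this problem, while the extra-gradient iterates approach the saddle point (0, 0).

import numpy as np

def grads(x, y):
    # Bilinear saddle-point objective f(x, y) = x * y.
    return y, x  # df/dx, df/dy

def extragradient(x, y, eta=0.1, steps=500):
    for _ in range(steps):
        gx, gy = grads(x, y)
        x_half, y_half = x - eta * gx, y + eta * gy   # trial (leading) step
        gx2, gy2 = grads(x_half, y_half)              # gradient at the trial point
        x, y = x - eta * gx2, y + eta * gy2           # actual update from the original point
    return x, y

def vanilla(x, y, eta=0.1, steps=500):
    for _ in range(steps):
        gx, gy = grads(x, y)
        x, y = x - eta * gx, y + eta * gy
    return x, y

print("extra-gradient:", extragradient(1.0, 1.0))  # shrinks toward (0, 0)
print("vanilla GDA:   ", vanilla(1.0, 1.0))        # grows without bound on this bilinear problem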
1334,Learning to Optimize Neural Nets,"Learning to Optimize is a recently proposed framework for learning optimization algorithms using reinforcement learning.In this paper, we explore learning an optimization algorithm for training shallow neural nets.Such high-dimensional stochastic optimization problems present interesting challenges for existing reinforcement learning algorithms.We develop an extension that is suited to learning optimization algorithms in this setting and demonstrate that the learned optimization algorithm consistently outperforms other known optimization algorithms even on unseen tasks and is robust to changes in stochasticity of gradients and the neural net architecture.More specifically, we show that an optimization algorithm trained with the proposed method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on the Toronto Faces Dataset, CIFAR-10 and CIFAR-100.",We learn an optimization algorithm that generalizes to unseen tasks 1335,A Constructive Prediction of the Generalization Error Across Scales,"The dependency of the generalization error of neural networks on model and dataset size is of critical importance both in practice and for understanding the theory of neural networks.Nevertheless, the functional form of this dependency remains elusive.In this work, we present a functional form which approximates well the generalization error in practice.Capitalizing on the successful concept of model scaling, we are able to simultaneously construct such a form and specify the exact models which can attain it across model/data scales.Our construction follows insights obtained from observations conducted over a range of model/data scales, in various model types and datasets, in vision and language tasks.We show that the form both fits the observations well across scales, and provides accurate predictions from small- to large-scale models and data.",We predict the generalization error and specify the model which attains it across model/data scales. 1336,Unsupervised Control Through Non-Parametric Discriminative Rewards,"Learning to control an environment without hand-crafted rewards or expert data remains challenging and is at the frontier of reinforcement learning research.We present an unsupervised learning algorithm to train agents to achieve perceptually-specified goals using only a stream of observations and actions.Our agent simultaneously learns a goal-conditioned policy and a goal achievement reward function that measures how similar a state is to the goal state.This dual optimization leads to a co-operative game, giving rise to a learned reward function that reflects similarity in controllable aspects of the environment instead of distance in the space of observations.We demonstrate the efficacy of our agent to learn, in an unsupervised manner, to reach a diverse set of goals on three domains -- Atari, the DeepMind Control Suite and DeepMind Lab.",Unsupervised reinforcement learning method for learning a policy to robustly achieve perceptually specified goals. 
1337,Depth-Adaptive Transformer,"State of the art sequence-to-sequence models for large scale tasks perform a fixed number of computations for each input sequence regardless of whether it is easy or hard to process.In this paper, we train Transformer models which can make output predictions at different stages of the network and we investigate different ways to predict how much computation is required for a particular sequence.Unlike dynamic computation in Universal Transformers, which applies the same set of layers iteratively, we apply different layers at every step to adjust both the amount of computation as well as the model capacity.On IWSLT German-English translation our approach matches the accuracy of a well tuned baseline Transformer while using less than a quarter of the decoder layers.",Sequence model that dynamically adjusts the amount of computation for each input. 1338,Characterizing and Avoiding Problematic Global Optima of Variational Autoencoders,"Variational Auto-encoders are deep generative latent variable models consisting of two components: a generative model that captures a data distribution p(x) by transforming a distribution p(z) over latent space, and an inference model that infers likely latent codes for each data point.Recent work shows that traditional training methods tend to yield solutions that violate modeling desiderata: the learned generative model captures the observed data distribution but does so while ignoring the latent codes, resulting in codes that do not represent the data (Kim et al.); the aggregate of the learned latent codes does not match the prior p(z) (Tomczak and Welling).This mismatch means that the learned generative model will be unable to generate realistic data with samples from p(z).In this paper, we demonstrate that both issues stem from the fact that the global optima of the VAE training objective often correspond to undesirable solutions.Our analysis builds on two observations: the generative model is unidentifiable – there exist many generative models that explain the data equally well, each with different properties – and the VAE objective is biased – it may prefer generative models that explain the data poorly but have posteriors that are easy to approximate.We present a novel inference method, LiBI, mitigating the problems identified in our analysis.On synthetic datasets, we show that LiBI can learn generative models that capture the data distribution and inference models that better satisfy modeling assumptions when traditional methods struggle to do so.",We characterize problematic global optima of the VAE objective and present a novel inference method to avoid such optima. 1339,Distributional Inclusion Vector Embedding for Unsupervised Hypernymy Detection,"Modeling hypernymy, such as poodle is-a dog, is an important generalization aid to many NLP tasks, such as entailment, relation extraction, and question answering.Supervised learning from labeled hypernym sources, such as WordNet, limits the coverage of these models, which can be addressed by learning hypernyms from unlabeled text. Existing unsupervised methods either do not scale to large vocabularies or yield unacceptably poor accuracy.
This paper introduces DIVE, a simple-to-implement unsupervised method of hypernym discovery via per-word non-negative vector embeddings which preserve the inclusion property of word contexts.In experimental evaluations more comprehensive than any previous literature of which we are aware---evaluating on 11 datasets using multiple existing as well as newly proposed scoring functions---we find that our method provides up to double the precision of previous unsupervised methods, and the highest average performance, using a much more compact word representation, and yielding many new state-of-the-art results.In addition, the meaning of each dimension in DIVE is interpretable, which leads to a novel approach on word sense disambiguation as another promising application of DIVE.",We propose a novel unsupervised word embedding which preserves the inclusion property in the context distribution and achieves state-of-the-art results on unsupervised hypernymy detection 1340,Uncertainty-guided Continual Learning with Bayesian Neural Networks,"Continual learning aims to learn new tasks without forgetting previously learned ones.This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity.Current regularization-based continual learning algorithms need an external representation and extra computation to measure the parameters' importance.In contrast, we propose Uncertainty-guided Continual Bayesian Neural Networks where the learning rate adapts according to the uncertainty defined in the probability distribution of the weights in networks.Uncertainty is a natural way to identify what to preserve and what to adapt as we continually learn, and thus mitigate catastrophic forgetting.We also show a variant of our model, which uses uncertainty for weight pruning and retains task performance after pruning by saving binary masks per task.We evaluate our UCB approach extensively on diverse object classification datasets with short and long sequences of tasks and report superior or on-par performance compared to existing approaches.Additionally, we show that our model does not necessarily need task information at test time, i.e., it does not presume knowledge of which task a sample belongs to.",A regularization-based approach for continual learning using Bayesian neural networks to predict parameters' importance 1341,Becoming Cat People: Animal-like Human Experiences with a Sensory Augmenting Whisker Wearable,"Humans have a natural curiosity to imagine what it feels like to exist as someone or something else.This curiosity becomes even stronger for the pets we care for.Humans cannot truly know what it is like to be our pets, but we can deepen our understanding of what it is like to perceive and explore the world like them.We investigate how wearables can offer people animal perspective-taking opportunities to experience the world through animal senses that differ from those biologically natural to us.To assess the potential of wearables in animal perspective-taking, we developed a sensory-augmenting wearable that gives wearers cat-like whiskers.We then created a maze exploration experience where blindfolded participants utilized the whiskers to navigate the maze.We draw on animal behavioral research to evaluate how the whisker activity supported authentically cat-like experiences, and discuss the implications of this work for future learning experiences.",This paper explores using wearable sensory augmenting technology to facilitate first-hand perspective-taking of what it is like to have cat-like
whiskers. 1342,On Generalization Error Bounds of Noisy Gradient Methods for Non-Convex Learning,"Generalization error measures how well the hypothesis learned from training data generalizes to previously unseen data.Proving tight generalization error bounds is a central question in statistical learning theory. In this paper, we obtain generalization error bounds for learning general non-convex objectives, which has attracted significant attention in recent years. We develop a new framework, termed Bayes-Stability, for proving algorithm-dependent generalization error bounds. The new framework combines ideas from both the PAC-Bayesian theory and the notion of algorithmic stability. Applying the Bayes-Stability method, we obtain new data-dependent generalization bounds for stochastic gradient Langevin dynamics and several other noisy gradient methods.Our result recovers a recent result in Mou et al. and improves upon the results in Pensia et al. Our experiments demonstrate that our data-dependent bounds can distinguish randomly labelled data from normal data, which provides an explanation to the intriguing phenomena observed in Zhang et al.We also study the setting where the total loss is the sum of a bounded loss and an additional ℓ2 regularization term.We obtain new generalization bounds for the continuous Langevin dynamics in this setting by developing a new Log-Sobolev inequality for the parameter distribution at any time.Our new bounds are more desirable when the noise level of the process is not very small, and do not become vacuous even when T tends to infinity.","We give some generalization error bounds of noisy gradient methods such as SGLD, Langevin dynamics, noisy momentum and so forth." 1343,Model Ensemble-Based Intrinsic Reward for Sparse Reward Reinforcement Learning,"In this paper, a new intrinsic reward generation method for sparse-reward reinforcement learning is proposed based on an ensemble of dynamics models.In the proposed method, the mixture of multiple dynamics models is used to approximate the true unknown transition probability, and the intrinsic reward is designed as the minimum of the surprise seen from each dynamics model to the mixture of the dynamics models.In order to show the effectiveness of the proposed intrinsic reward generation method, a working algorithm is constructed by combining the proposed intrinsic reward generation method with the proximal policy optimization algorithm.Numerical results show that for representative locomotion tasks, the proposed model-ensemble-based intrinsic reward generation method outperforms the previous methods based on a single dynamics model.","For sparse-reward reinforcement learning, the ensemble of multiple dynamics models is used to generate intrinsic reward designed as the minimum of the surprise." 1344,Analyzing analytical methods: The case of phonology in neural models of spoken language,"Given the fast development of analysis techniques for NLP and speech processing systems, few systematic studies have been conducted to compare the strengths and weaknesses of each method.
As a step in this direction we study the case of representations of phonology in neural network models of spoken language.We use two commonly applied analytical techniques, diagnostic classifiers and representational similarity analysis, to quantify to what extent neural activation patterns encode phonemes and phoneme sequences.We manipulate two factors that can affect the outcome of analysis.First, we investigate the role of learning by comparing neural activations extracted from trained versus randomly-initialized models.Second, we examine the temporal scope of the activations by probing both local activations corresponding to a few milliseconds of the speech signal, and global activations pooled over the whole utterance.We conclude that reporting analysis results with randomly initialized models is crucial, and that global-scope methods tend to yield more consistent and interpretable results and we recommend their use as a complement to local-scope diagnostic methods.",We study representations of phonology in neural network models of spoken language with several variants of analytical techniques. 1345,LOSSLESS SINGLE IMAGE SUPER RESOLUTION FROM LOW-QUALITY JPG IMAGES,"Super Resolution is a fundamental and important low-level computer vision task.Different from traditional SR models, this study concentrates on a specific but realistic SR issue: How can we obtain satisfactory SR results from compressed JPG (C-JPG) images, which are widespread on the Internet?In general, C-JPG can save storage space while keeping considerable visual quality.However, further image processing operations, e.g., SR, will suffer from enlarged inner artificial details and result in unacceptable outputs.To address this problem, we propose a novel SR structure with two specifically designed components, as well as a cycle loss.In short, this paper makes three main contributions.First, our research can generate high-quality SR images for prevalent C-JPG images.Second, we propose a functional sub-model to recover information for C-JPG images, instead of the perspective of noise elimination in traditional SR approaches.Third, we further integrate cycle loss into the SR solver to build a hybrid loss function for better SR generation.Experiments show that our approach achieves outstanding performance among state-of-the-art methods.",We solve the specific SR issue of low-quality JPG images by functional sub-models. 1346,DONUT: CTC-based Query-by-Example Keyword Spotting,"Keyword spotting—or wakeword detection—is an essential feature for hands-free operation of modern voice-controlled devices.With such devices becoming ubiquitous, users might want to choose a personalized custom wakeword.In this work, we present DONUT, a CTC-based algorithm for online query-by-example keyword spotting that enables custom wakeword detection.The algorithm works by recording a small number of training examples from the user, generating a set of label sequence hypotheses from these training examples, and detecting the wakeword by aggregating the scores of all the hypotheses given a new audio recording.Our method combines the generalization and interpretability of CTC-based keyword spotting with the user-adaptation and convenience of a conventional query-by-example system.DONUT has low computational requirements and is well-suited for both learning and inference on embedded systems without requiring private user data to be uploaded to the cloud.",We propose an interpretable model for detecting user-chosen wakewords that learns from the user's examples.
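A rough sketch of the scoring step described in the DONUT entry above (1346), assuming the label-sequence hypotheses have already been generated from the user's enrollment recordings; each hypothesis is scored with the CTC log-likelihood of the new utterance's network outputs and the per-hypothesis scores are aggregated (log-sum-exp is used here as one plausible aggregation, not necessarily the paper's). PyTorch's ctc_loss is used only as a convenient CTC scorer; the acoustic-model output, hypotheses, and any detection threshold are placeholders.

import torch
import torch.nn.functional as F

def ctc_log_likelihood(log_probs, hypothesis):
    """log p(hypothesis | audio) under CTC for a single utterance.
    log_probs: (T, C) log-softmax outputs of the acoustic model (class 0 = blank)."""
    T, _ = log_probs.shape
    targets = torch.tensor(hypothesis, dtype=torch.long)
    nll = F.ctc_loss(log_probs.unsqueeze(1),              # (T, N=1, C)
                     targets.unsqueeze(0),                 # (N=1, S)
                     input_lengths=torch.tensor([T]),
                     target_lengths=torch.tensor([len(hypothesis)]),
                     blank=0, reduction="sum")
    return -nll

def wakeword_score(log_probs, hypotheses):
    # Aggregate the per-hypothesis scores; a detection fires when the score exceeds a tuned threshold.
    scores = torch.stack([ctc_log_likelihood(log_probs, h) for h in hypotheses])
    return torch.logsumexp(scores, dim=0)

# Placeholder acoustic-model output: 50 frames over a 30-symbol inventory (index 0 = blank).
log_probs = torch.randn(50, 30).log_softmax(dim=-1)
hypotheses = [[7, 12, 3, 3, 9], [7, 12, 3, 9]]             # label sequences from enrollment examples
print(float(wakeword_score(log_probs, hypotheses)))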
1347,Keyframing the Future: Discovering Temporal Hierarchy with Keyframe-Inpainter Prediction,"To flexibly and efficiently reason about temporal sequences, abstract representations that compactly represent the important information in the sequence are needed.One way of constructing such representations is by focusing on the important events in a sequence.In this paper, we propose a model that learns both to discover such key events as well as to represent the sequence in terms of them. We do so using a hierarchical Keyframe-Inpainter model that first generates keyframes and their temporal placement and then inpaints the sequences between keyframes.We propose a fully differentiable formulation for efficiently learning the keyframe placement.We show that KeyIn finds informative keyframes in several datasets with diverse dynamics.When evaluated on a planning task, KeyIn outperforms other recent proposals for learning hierarchical representations.",We propose a model that learns to discover informative frames in a future video sequence and represent the video via its keyframes. 1348,Learning deep representations by mutual information estimation and maximization,"This work investigates unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder.""Importantly, we show that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation's suitability for downstream tasks."", 'We further control characteristics of the representation by matching to a prior distribution adversarially.Our method, which we call Deep InfoMax, outperforms a number of popular unsupervised learning methods and compares favorably with fully-supervised learning on several classification tasks in with some standard architectures.DIM opens new avenues for unsupervised learning of representations and is an important step towards flexible formulations of representation learning objectives for specific end-goals.","We learn deep representation by maximizing mutual information, leveraging structure in the objective, and are able to compute with fully supervised classifiers with comparable architectures" 1349,Target-directed Atomic Importance Estimation via Reverse Self-attention,"Estimating the importance of each atom in a molecule is one of the most appealing and challenging problems in chemistry, physics, and material engineering.The most common way to estimate the atomic importance is to compute the electronic structure using density-functional theory, and then to interpret it using domain knowledge of human experts.However, this conventional approach is impractical to the large molecular database because DFT calculation requires huge computation, specifically, O time complexity w.r.t. 
the number of electrons in a molecule.Furthermore, the calculation results should be interpreted by the human experts to estimate the atomic importance in terms of the target molecular property.To tackle this problem, we first exploit machine learning-based approach for the atomic importance estimation.To this end, we propose reverse self-attention on graph neural networks and integrate it with graph-based molecular description.Our method provides an efficiently-automated and target-directed way to estimate the atomic importance without any domain knowledge on chemistry and physics.",We first propose a fully-automated and target-directed atomic importance estimator based on the graph neural networks and a new concept of reverse self-attention. 1350,Decoupling Adaptation from Modeling with Meta-Optimizers for Meta Learning,"Meta-learning methods, most notably Model-Agnostic Meta-Learning or MAML, have achieved great success in adapting to new tasks quickly, after having been trained on similar tasks.The mechanism behind their success, however, is poorly understood.We begin this work with an experimental analysis of MAML, finding that deep models are crucial for its success, even given sets of simple tasks where a linear model would suffice on any individual task.Furthermore, on image-recognition tasks, we find that the early layers of MAML-trained models learn task-invariant features, while later layers are used for adaptation, providing further evidence that these models require greater capacity than is strictly necessary for their individual tasks.Following our findings, we propose a method which enables better use of model capacity at inference time by separating the adaptation aspect of meta-learning into parameters that are only used for adaptation but are not part of the forward model.We find that our approach enables more effective meta-learning in smaller models, which are suitably sized for the individual tasks.",We find that deep models are crucial for MAML to work and propose a method which enables effective meta-learning in smaller models. 1351,Wavelet Pooling for Convolutional Neural Networks,"Convolutional Neural Networks continuously advance the progress of 2D and 3D image and object classification.The steadfast usage of this algorithm requires constant evaluation and upgrading of foundational concepts to maintain progress.Network regularization techniques typically focus on convolutional layer operations, while leaving pooling layer operations without suitable options.We introduce Wavelet Pooling as another alternative to traditional neighborhood pooling.This method decomposes features into a second level decomposition, and discards the first-level subbands to reduce feature dimensions.This method addresses the overfitting problem encountered by max pooling, while reducing features in a more structurally compact manner than pooling via neighborhood regions.Experimental results on four benchmark classification datasets demonstrate our proposed method outperforms or performs comparatively with methods like max, mean, mixed, and stochastic pooling.","Pooling is achieved using wavelets instead of traditional neighborhood approaches (max, average, etc)." 
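A small sketch of the pooling idea in the wavelet-pooling entry above (1351), using PyWavelets on a single 2-D feature map: decompose to two levels, discard the first-level detail subbands, and reconstruct only the second level, which roughly halves the spatial resolution. The wavelet choice and single-channel setting are illustrative, not the paper's full pipeline.

import numpy as np
import pywt

def wavelet_pool2d(feature_map, wavelet="haar"):
    """2-level DWT, drop 1st-level detail subbands, invert the 2nd level -> ~half-resolution map."""
    ll1, (lh1, hl1, hh1) = pywt.dwt2(feature_map, wavelet)   # first-level decomposition
    ll2, (lh2, hl2, hh2) = pywt.dwt2(ll1, wavelet)           # second-level decomposition
    # Discard (lh1, hl1, hh1) and reconstruct only the second level.
    return pywt.idwt2((ll2, (lh2, hl2, hh2)), wavelet)

x = np.random.rand(32, 32)
pooled = wavelet_pool2d(x)
print(x.shape, "->", pooled.shape)   # (32, 32) -> (16, 16)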
1352,The Value of Incorporating Social Preferences in Dynamic Ridesharing,"Dynamic ridesharing services play a major role in improving the efficiency of urban transportation.User satisfaction in dynamic ridesharing is determined by multiple factors such as travel time, cost, and social compatibility with co-passengers.Existing DRS optimize profit by maximizing the operational value for service providers or minimizing travel time for users, but they neglect the social experience of riders, which significantly influences the total value of the service to users.We propose DROPS, a dynamic ridesharing framework that factors the riders' social preferences in the matching process so as to improve the quality of the trips formed.Scheduling trips for users is a multi-objective optimization that aims to maximize the operational value for the service provider, while simultaneously maximizing the value of the trip for the users.The user value is estimated based on compatibility between co-passengers and the ride time.We then present a real-time matching algorithm for trip formation.Finally, we evaluate our approach empirically using real-world taxi trip data, and a population model including social preferences based on user surveys.The results demonstrate improvement in riders' social compatibility, without significantly affecting the vehicle miles for the service provider and travel time for users.",We propose a novel dynamic ridesharing framework to form trips that optimizes both operational value for the service provider and user value to the passengers by factoring the users' social preferences into the decision-making process. 1353,Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality,"Deep Neural Networks have recently been shown to be vulnerable to adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction.To better understand such attacks, a characterization is needed of the properties of regions in which adversarial examples lie.We tackle this challenge by characterizing the dimensional properties of adversarial regions, via the use of Local Intrinsic Dimensionality.LID assesses the space-filling capability of the region surrounding a reference example, based on the distance distribution of the example to its neighbors.We first provide explanations about how adversarial perturbation can affect the LID characteristic of adversarial regions, and then show empirically that LID characteristics can facilitate the distinction of adversarial examples generated using state-of-the-art attacks.As a proof-of-concept, we show that a potential application of LID is to distinguish adversarial examples, and the preliminary results show that it can outperform several state-of-the-art detection measures by large margins for five attack strategies considered in this paper across three benchmark datasets.Our analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs.",We characterize the dimensional properties of adversarial subspaces in the neighborhood of adversarial examples via the use of Local Intrinsic Dimensionality (LID).
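The LID characteristic used in the entry above (1353) is commonly estimated with the maximum-likelihood estimator of Amsaleg et al., which only needs the distances from a reference point to its k nearest neighbours; a minimal version is sketched below on placeholder data (the synthetic point cloud and k are illustrative).

import numpy as np

def lid_mle(reference, data, k=20):
    """Maximum-likelihood LID estimate from distances to the k nearest neighbours of `reference`."""
    dists = np.sort(np.linalg.norm(data - reference, axis=1))
    dists = dists[dists > 0][:k]              # drop the zero distance if `reference` is itself in `data`
    r_max = dists[-1]
    return -1.0 / np.mean(np.log(dists / r_max))

rng = np.random.default_rng(0)
# Points drawn from a 5-dimensional subspace embedded in 50-d: the estimate should be near the intrinsic dimension (5).
low_dim = rng.normal(size=(2000, 5))
basis = rng.normal(size=(5, 50))
data = low_dim @ basis
print(lid_mle(data[0], data, k=50))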
1354,signSGD via Zeroth-Order Oracle,"In this paper, we design and analyze a new zeroth-order stochastic optimization algorithm, ZO-signSGD, which enjoys dual advantages of gradient-free operations and signSGD.The latter requires only the sign information of gradient estimates but is able to achieve a comparable or even better convergence speed than SGD-type algorithms.Our study shows that ZO-signSGD requires √d times more iterations than signSGD, leading to a convergence rate of O(√d/√T) under mild conditions, where d is the number of optimization variables, and T is the number of iterations.In addition, we analyze the effects of different types of gradient estimators on the convergence of ZO-signSGD, and propose two variants of ZO-signSGD that achieve at least this convergence rate.On the application side we explore the connection between ZO-signSGD and black-box adversarial attacks in robust deep learning. Our empirical evaluations on image classification datasets MNIST and CIFAR-10 demonstrate the superior performance of ZO-signSGD on the generation of adversarial examples from black-box neural networks.","We design and analyze a new zeroth-order stochastic optimization algorithm, ZO-signSGD, and demonstrate its connection and application to black-box adversarial attacks in robust deep learning" 1355,Multi-time-horizon Solar Forecasting Using Recurrent Neural Network,"The non-stationary characteristic of solar power renders traditional point forecasting methods less useful due to large prediction errors.This results in increased uncertainties in the grid operation, thereby negatively affecting the reliability and resulting in increased cost of operation.This research paper proposes a unified architecture for multi-time-horizon solar forecasting for short and long-term predictions using Recurrent Neural Networks.The paper describes an end-to-end pipeline to implement the architecture along with methods to test and validate the performance of the prediction model.The results demonstrate that the proposed method based on the unified architecture is effective for multi-horizon solar forecasting and achieves a lower root-mean-squared prediction error compared to the previous best performing methods which use one model for each time-horizon.The proposed method enables multi-horizon forecasts with real-time inputs, which have a high potential for practical applications in the evolving smart grid.",This paper proposes a Unified Recurrent Neural Network Architecture for short-term multi-time-horizon solar forecasting and validates the forecast performance gains over the previously reported methods 1356,Skip-connection and batch-normalization improve data separation ability,"ResNet and batch normalization (BN) achieve high performance even when only a few labeled data are available.However, the reasons for this high performance are unclear.To clarify the reasons, we analyzed the effect of the skip-connection in ResNet and of BN on the data separation ability, which is an important ability for the classification problem.Our results show that, in the multilayer perceptron with randomly initialized weights, the angle between two input vectors converges to zero exponentially in the depth, that the skip-connection turns this exponential decrease into a sub-exponential decrease, and that BN relaxes this sub-exponential decrease into a reciprocal decrease.Moreover, our analysis shows that the preservation of the angle at initialization encourages trained neural networks to separate points from
different classes.These imply that the skip-connection and the BN improve the data separation ability and achieve high performance even when only a few labeled data are available.",The Skip-connection in ResNet and the batch-normalization improve the data separation ability and help to train a deep neural network. 1357,Learning Latent Semantic Representation from Pre-defined Generative Model,"Learning representations of data is an important issue in machine learning.Though GAN has led to significant improvements in the data representations, it still has several problems such as unstable training, hidden manifold of data, and huge computational overhead.GAN tends to produce the data simply without any information about the manifold of the data, which hinders from controlling desired features to generate.Moreover, most of GAN’s have a large size of manifold, resulting in poor scalability.In this paper, we propose a novel GAN to control the latent semantic representation, called LSC-GAN, which allows us to produce desired data to generate and learns a representation of the data efficiently.Unlike the conventional GAN models with hidden distribution of latent space, we define the distributions explicitly in advance that are trained to generate the data based on the corresponding features by inputting the latent variables that follow the distribution.As the larger scale of latent space caused by deploying various distributions in one latent space makes training unstable while maintaining the dimension of latent space, we need to separate the process of defining the distributions explicitly and operation of generation.We prove that a VAE is proper for the former and modify a loss function of VAE to map the data into the pre-defined latent space so as to locate the reconstructed data as close to the input data according to its characteristics.Moreover, we add the KL divergence to the loss function of LSC-GAN to include this process.The decoder of VAE, which generates the data with the corresponding features from the pre-defined latent space, is used as the generator of the LSC-GAN.Several experiments on the CelebA dataset are conducted to verify the usefulness of the proposed method to generate desired data stably and efficiently, achieving a high compression ratio that can hold about 24 pixels of information in each dimension of latent space.Besides, our model learns the reverse of features such as not laughing only with data of ordinary and smiling facial expression.",We propose a generative model that not only produces data with desired features from the pre-defined latent space but also fully understands the features of the data to create characteristics that are not in the dataset. 
1358,Emergent Communication in Networked Multi-Agent Reinforcement Learning,"With the ever increasing demand and the resultant reduced quality of services, the focus has shifted towards easing network congestion to enable more efficient flow in systems like traffic, supply chains and electrical grids.A step in this direction is to re-imagine the traditional heuristics based training of systems as this approach is incapable of modelling the involved dynamics.While one can apply Multi-Agent Reinforcement Learning to such problems by considering each vertex in the network as an agent, most MARL-based models assume the agents to be independent.In many real-world tasks, agents need to behave as a group, rather than as a collection of individuals.In this paper, we propose a framework that induces cooperation and coordination amongst agents, connected via an underlying network, using emergent communication in a MARL-based setup.We formulate the problem in a general network setting and demonstrate the utility of communication in networks with the help of a case study on traffic systems.Furthermore, we study the emergent communication protocol and show the formation of distinct communities with grounded vocabulary.To the best of our knowledge, this is the only work that studies emergent language in a networked MARL setting.",A framework for studying emergent communication in a networked multi-agent reinforcement learning setup. 1359,Convolutional Conditional Neural Processes,"We introduce the Convolutional Conditional Neural Process, a new member of the Neural Process family that models translation equivariance in the data.Translation equivariance is an important inductive bias for many learning problems including time series modelling, spatial data, and images.The model embeds data sets into an infinite-dimensional function space, as opposed to finite-dimensional vector spaces.To formalize this notion, we extend the theory of neural representations of sets to include functional representations, and demonstrate that any translation-equivariant embedding can be represented using a convolutional deep-set.We evaluate ConvCNPs in several settings, demonstrating that they achieve state-of-the-art performance compared to existing NPs.We demonstrate that building in translation equivariance enables zero-shot generalization to challenging, out-of-domain tasks.",We extend deep sets to functional embeddings and Neural Processes to include translation equivariant members 1360,A rotation-equivariant convolutional neural network model of primary visual cortex,"Classical models describe primary visual cortex as a filter bank of orientation-selective linear-nonlinear or energy models, but these models fail to predict neural responses to natural stimuli accurately.Recent work shows that convolutional neural networks can be trained to predict V1 activity more accurately, but it remains unclear which features are extracted by V1 neurons beyond orientation selectivity and phase invariance.Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations.""We present a framework for identifying common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations."", 'We fit this rotation-equivariant CNN to responses of a population of 6000 neurons to natural images recorded in mouse primary visual cortex using 
two-photon imaging.We show that our rotation-equivariant network outperforms a regular CNN with the same number of feature maps and reveals a number of common features, which are shared by many V1 neurons and are pooled sparsely to predict neural activity.Our findings are a first step towards a powerful new tool to study the nonlinear functional organization of visual cortex.",A rotation-equivariant CNN model of V1 that outperforms previous models and suggests functional groupings of V1 neurons. 1361,Variational Predictive Information Bottleneck,"In classic papers, Zellner demonstrated that Bayesian inference could be derived as the solution to an information theoretic functional. Below we derive a generalized form of this functional as a variational lower bound of a predictive information bottleneck objective. This generalized functional encompasses most modern inference procedures and suggests novel ones.",Rederive a wide class of inference procedures from a global information bottleneck objective. 1362,The Variational Bandwidth Bottleneck: Stochastic Evaluation on an Information Budget,"In many applications, it is desirable to extract only the relevant information from complex input data, which involves making a decision about which input features are relevant.The information bottleneck method formalizes this as an information-theoretic optimization problem by maintaining an optimal tradeoff between compression and predicting the target.In many problem settings, including the reinforcement learning problems we consider in this work, we might prefer to compress only part of the input.This is typically the case when we have a standard conditioning input, such as a state observation, and a privileged input, which might correspond to the goal of a task, the output of a costly planning algorithm, or communication with another agent.In such cases, we might prefer to compress the privileged input, either to achieve better generalization or to minimize access to costly information.Practical implementations of the information bottleneck based on variational inference require access to the privileged input in order to compute the bottleneck variable, so although they perform compression, this compression operation itself needs unrestricted, lossless access.In this work, we propose the variational bandwidth bottleneck, which decides for each example on the estimated value of the privileged information before seeing it, i.e., only based on the standard input, and then accordingly chooses stochastically whether to access the privileged input or not.We formulate a tractable approximation to this framework and demonstrate in a series of reinforcement learning experiments that it can improve generalization and reduce access to computationally costly information.",Training agents with adaptive computation based on information bottleneck can promote generalization.
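The variational bandwidth bottleneck entry above (1362) hinges on one mechanism: estimate the value of the privileged input from the standard input alone, then stochastically decide whether to pay for accessing it. The following minimal sketch illustrates only that gating idea; the sigmoid gate, the zero-vector fallback, and all names are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch (not the authors' code) of the gating idea behind the
# variational bandwidth bottleneck: decide from the standard input alone
# whether to access the privileged input. All names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def access_probability(standard_input, w, b):
    """Estimate the value of the privileged input from the standard input only."""
    return 1.0 / (1.0 + np.exp(-(standard_input @ w + b)))  # sigmoid gate

def bottleneck_features(standard_input, privileged_input, w, b):
    p = access_probability(standard_input, w, b)
    if rng.random() < p:                       # stochastically pay the access cost
        z = privileged_input                   # lossless access to privileged info
    else:
        z = np.zeros_like(privileged_input)    # fall back to an uninformative code
    return np.concatenate([standard_input, z]), p

# toy usage: a 4-d state observation and a 3-d "goal" as privileged input
state, goal = rng.normal(size=4), rng.normal(size=3)
w, b = rng.normal(size=4), 0.0
features, p = bottleneck_features(state, goal, w, b)
print(features.shape, round(float(p), 3))
```

In the paper's reinforcement-learning setting this decision is made per example, so the agent only incurs the cost of the privileged channel when the standard input suggests it is worth it.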
1363,Evaluation and Comparison of Usability of Four Mobile Breathing Training Visualizations,"Breathing exercises are an accessible way to manage stress and many mental illness symptoms.Traditionally, learning breathing exercises involved in-person guidance or audio recordings.The shift to mobile devices has led to a new way of learning and engaging in breathing exercises as seen in the rise of multiple mobile applications with different breathing representations.However, limited work has been done to investigate the effectiveness of these visual representations in supporting breathing pace as measured by synchronization.We utilized a within-subjects study to evaluate four common breathing visuals to understand which is most effective in providing breathing exercise guidance.Through controlled lab studies and interviews, we identified two representations with clear advantages over the others.In addition, we found that auditory guidance was not preferred by all users.We identify potential usability issues with the representations and suggest design guidelines for future development of app-supported breathing training.",We utilized a within-subjects study to evaluate four paced breathing visuals common in mobile apps to understand which is most effective in providing breathing exercise guidance. 1364,Discriminator Based Corpus Generation for General Code Synthesis,"Current work on neural code synthesis consists of increasingly sophisticated architectures being trained on highly simplified domain-specific languages, using uniform sampling across program space of those languages for training.By comparison, program space for a C-like language is vast, and extremely sparsely populated in terms of 'useful' functionalities; this requires a far more intelligent approach to corpus generation for effective training.We use a genetic programming approach using an iteratively retrained discriminator to produce a population suitable as labelled training data for a neural code synthesis architecture.We demonstrate that use of a discriminator-based training corpus generator, trained using only unlabelled problem specifications in classic Programming-by-Example format, greatly improves network performance compared to current uniform sampling techniques.",A way to generate training corpora for neural code synthesis using a discriminator trained on unlabelled data 1365,Undersensitivity in Neural Reading Comprehension,"Neural reading comprehension models have recently achieved impressive generalisation results, yet still perform poorly when given adversarially selected input.Most prior work has studied semantically invariant text perturbations which cause a model’s prediction to change when it should not.In this work we focus on the complementary problem: excessive prediction undersensitivity where input text is meaningfully changed, and the model’s prediction does not change when it should.We formulate a noisy adversarial attack which searches among semantic variations of comprehension questions for which a model still erroneously produces the same answer as the original question – and with an even higher probability.We show that – despite comprising unanswerable questions – SQuAD2.0 and NewsQA models are vulnerable to this attack and commit a substantial fraction of errors on adversarially generated questions.This indicates that current models—even where they can correctly predict the answer—rely on spurious surface patterns and are not necessarily aware of all information provided in a
given comprehension question.Developing this further, we experiment with both data augmentation and adversarial training as defence strategies: both are able to substantially decrease a model’s vulnerability to undersensitivity attacks on held out evaluation data.Finally, we demonstrate that adversarially robust models generalise better in a biased data setting with a train/evaluation distribution mismatch; they are less prone to overly rely on predictive cues only present in the training set and outperform a conventional model in the biased data setting by up to 11% F1.","We demonstrate vulnerability to undersensitivity attacks in SQuAD2.0 and NewsQA neural reading comprehension models, where the model predicts the same answer with increased confidence to adversarially chosen questions, and compare defence strategies." 1366,Compact Encoding of Words for Efficient Character-level Convolutional Neural Networks Text Classification,"This paper puts forward a new text-to-tensor representation that relies on information compression techniques to assign shorter codes to the most frequently used characters.This representation is language-independent with no need for pretraining and produces an encoding with no information loss.It provides an adequate description of the morphology of text, as it is able to represent prefixes, declensions, and inflections with similar vectors and is able to represent even words unseen in the training dataset.Similarly, as it is compact yet sparse, it is ideal for speeding up training times using tensor processing libraries.As part of this paper, we show that this technique is especially effective when coupled with convolutional neural networks for character-level text classification.We apply two variants of CNN coupled with it.Experimental results show that it drastically reduces the number of parameters to be optimized, resulting in competitive classification accuracy values in only a fraction of the time spent by one-hot encoding representations, thus enabling training on commodity hardware.",Using compression techniques to encode words is a way to speed up CNN training and reduce the dimensionality of the representation 1367,Connecting the Dots Between MLE and RL for Sequence Prediction,"Sequence prediction models can be learned from example sequences with a variety of training algorithms.Maximum likelihood learning is simple and efficient, yet can suffer from compounding error at test time.Reinforcement learning such as policy gradient addresses the issue but can have prohibitively poor exploration efficiency.A rich set of other algorithms, such as data noising, RAML, and softmax policy gradient, have also been developed from different perspectives.In this paper, we present a formalism of entropy regularized policy optimization, and show that the apparently distinct algorithms, including MLE, can be reformulated as special instances of the formulation.The difference between them is characterized by the reward function and two weight hyperparameters.The unifying interpretation enables us to systematically compare the algorithms side-by-side, and gain new insights into the trade-offs of the algorithm design.The new perspective also leads to an improved approach that dynamically interpolates among the family of algorithms, and learns the model in a scheduled way.Experiments on machine translation, text summarization, and game imitation learning demonstrate superiority of the proposed approach.",An entropy regularized policy optimization formalism
subsumes a set of sequence prediction learning algorithms. A new interpolation algorithm with improved results on text generation and game imitation learning. 1368,LSTOD: Latent Spatial-Temporal Origin-Destination prediction model and its applications in ride-sharing platforms,"Origin-Destination flow data is an important instrument in transportation studies.Precise prediction of customer demands from each original location to a destination given a series of previous snapshots helps ride-sharing platforms to better understand their market mechanism.However, most existing prediction methods ignore the network structure of OD flow data and fail to utilize the topological dependencies among related OD pairs.In this paper, we propose a latent spatial-temporal origin-destination model, with a novel convolutional neural network filter to learn the spatial features of OD pairs from a graph perspective and an attention structure to capture their long-term periodicity.Experiments on a real customer request dataset with available OD information from a ride-sharing platform demonstrate the advantage of LSTOD in achieving at least 6.5% improvement in prediction accuracy over the second best model.",We propose a purely convolutional CNN model with attention mechanism to predict spatial-temporal origin-destination flows. 1369,Robust Local Features for Improving the Generalization of Adversarial Training,"Adversarial training has been demonstrated as one of the most effective methods for training robust models to defend against adversarial examples.However, adversarially trained models often lack adversarially robust generalization on unseen testing data.Recent works show that adversarially trained models are more biased towards global structure features.Instead, in this work, we would like to investigate the relationship between the generalization of adversarial training and the robust local features, as the robust local features generalize well for unseen shape variation.To learn the robust local features, we develop a Random Block Shuffle transformation to break up the global structure features on normal adversarial examples.We continue to propose a new approach called Robust Local Features for Adversarial Training, which first learns the robust local features by adversarial training on the RBS-transformed adversarial examples, and then transfers the robust local features into the training of normal adversarial examples.To demonstrate the generality of our argument, we implement RLFAT in currently state-of-the-art adversarial training frameworks.Extensive experiments on STL-10, CIFAR-10 and CIFAR-100 show that RLFAT significantly improves both the adversarially robust generalization and the standard generalization of adversarial training.Additionally, we demonstrate that our models capture more local features of the object on the images, aligning better with human perception.",We propose a new stream of adversarial training approach called Robust Local Features for Adversarial Training (RLFAT) that significantly improves both the adversarially robust generalization and the standard generalization. 1370,Goal-constrained planning domain model formal verification of safety properties,"The verification of planning domain models is crucial to ensure the safety, integrity and correctness of planning-based automated systems.This task is usually performed using model checking techniques. However, directly applying model checkers to verify planning domain models can result in false positives, i.e. 
counterexamples that are unreachable by a sound planner when using the domain under verification during a planning task.In this paper, we discuss the downside of unconstrained planning domain model verification.We then propose a fail-safe practice for designing planning domain models that can inherently guarantee the safety of the produced plans in case of undetected errors in domain models. In addition, we demonstrate how model checkers, as well as state trajectory constraints planning techniques, should be used to verify planning domain models so that unreachable counterexamples are not returned.",Why and how to constrain planning domain model verification with planning goals to avoid unreachable counterexamples (false positives verification outcomes). 1371,StrokeNet: A Neural Painting Environment,"We've seen tremendous success of image-generating models in recent years.Generating images through a neural network is usually pixel-based, which is fundamentally different from how humans create artwork using brushes.To imitate human drawing, interactions between the environment and the agent are required to allow trials.However, the environment is usually non-differentiable, leading to slow convergence and massive computation.In this paper we try to address the discrete nature of the software environment with an intermediate, differentiable simulation.We present StrokeNet, a novel model where the agent is trained upon a well-crafted neural approximation of the painting environment.With this approach, our agent was able to learn to write characters such as MNIST digits faster than reinforcement learning approaches in an unsupervised manner.Our primary contribution is the neural simulation of a real-world environment.Furthermore, the agent trained with the emulated environment is able to directly transfer its skills to real-world software.","StrokeNet is a novel architecture where the agent is trained to draw by strokes on a differentiable simulation of the environment, which could effectively exploit the power of back-propagation." 1372,Quantum Optical Experiments Modeled by Long Short-Term Memory,"We demonstrate how machine learning is able to model experiments in quantum physics.Quantum entanglement is a cornerstone for upcoming quantum technologies such as quantum computation and quantum cryptography.Of particular interest are complex quantum states with more than two particles and a large number of entangled quantum levels.Given such a multiparticle high-dimensional quantum state, it is usually impossible to reconstruct an experimental setup that produces it.To search for interesting experiments, one thus has to randomly create millions of setups on a computer and calculate the respective output states.In this work, we show that machine learning models can provide significant improvement over random search.We demonstrate that a long short-term memory neural network can successfully learn to model quantum experiments by correctly predicting output state characteristics for given setups without the necessity of computing the states themselves.This approach not only allows for faster search but is also an essential step towards automated design of multiparticle high-dimensional quantum experiments using generative machine learning models.",We demonstrate how machine learning is able to model experiments in quantum physics.
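Entry 1372 describes an LSTM that reads a tokenized description of an optical setup and predicts characteristics of the output state without simulating it. The sketch below shows what such a model could look like in PyTorch; the vocabulary size, the number of predicted properties and the sequence encoding are assumptions for illustration, not the paper's architecture.

```python
# A minimal sketch, assuming PyTorch, of an LSTM that maps a tokenized
# experimental setup (a sequence of optical-element IDs) to predicted
# properties of the output state. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SetupToStateProperties(nn.Module):
    def __init__(self, n_elements=64, emb=32, hidden=128, n_properties=3):
        super().__init__()
        self.embed = nn.Embedding(n_elements, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_properties)

    def forward(self, setup_tokens):             # (batch, sequence_length)
        x = self.embed(setup_tokens)
        _, (h, _) = self.lstm(x)                  # final hidden state summarises the setup
        return torch.sigmoid(self.head(h[-1]))   # e.g. probabilities of state characteristics

model = SetupToStateProperties()
fake_setups = torch.randint(0, 64, (8, 12))       # 8 random setups of 12 elements each
print(model(fake_setups).shape)                   # torch.Size([8, 3])
```

Trained on setups whose outputs were computed once by simulation, such a predictor can then screen new candidate setups far faster than recomputing the quantum states themselves.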
1373,Unsupervised Metric Learning via Nonlinear Feature Space Transformations,"In this paper, we propose a nonlinear unsupervised metric learning framework to boost the performance of clustering algorithms.Under our framework, nonlinear distance metric learning and manifold embedding are integrated and conducted simultaneously to increase the natural separations among data samples.The metric learning component is implemented through feature space transformations, regulated by a nonlinear deformable model called Coherent Point Drifting.Driven by CPD, data points can get to a higher level of linear separability, which is subsequently picked up by the manifold embedding component to generate well-separable sample projections for clustering.Experimental results on synthetic and benchmark datasets show the effectiveness of our proposed approach over the state-of-the-art solutions in unsupervised metric learning.",A nonlinear unsupervised metric learning framework to boost the performance of clustering algorithms. 1374,Don't encrypt the data; just approximate the model: Towards Secure Transaction and Fair Pricing of Training Data,"As machine learning becomes ubiquitous, deployed systems need to be as accurate as they can.As a result, machine learning service providers have a surging need for useful, additional training data that benefits training, without giving up all the details about the trained program.At the same time, data owners would like to trade their data for its value, without having to first give away the data itself before receiving compensation.It is difficult for data providers and model providers to agree on a fair price without first revealing the data or the trained model to the other side.Escrow systems only complicate this further, adding an additional layer of trust required of both parties.Currently, data owners and model owners don’t have a fair pricing system that eliminates the need to trust a third party and train the model on the data, which (1) takes a long time to complete, and (2) does not guarantee that useful data is paid for appropriately and that useless data isn’t, without trusting the third party with both the model and the data.Existing improvements to secure the transaction focus heavily on encrypting or approximating the data, such as training on encrypted data, and variants of federated learning.As powerful as the methods appear to be, we show them to be impractical in our use case with real-world assumptions for preserving privacy for the data owners when facing black-box models.Thus, a fair pricing scheme that does not rely on secure data encryption and obfuscation is needed before the exchange of data.This paper proposes a novel method for fair pricing using data-model efficacy techniques such as influence functions, model extraction, and model compression methods, thus enabling secure data transactions.We successfully show that without running the data through the model, one can approximate the value of the data; that is, if the data turns out redundant, the pricing is minimal, and if the data leads to proper improvement, its value is properly assessed, without placing strong assumptions on the nature of the model.Future work will be focused on establishing a system with stronger transactional security against adversarial attacks that will reveal details about the model or the data to the other party.","Facing complex, black-box models, encrypting the data is not as usable as approximating the model and using it to price a potential transaction."
1375,Language Style Transfer from Non-Parallel Text with Arbitrary Styles,"Language style transfer is the problem of migrating the content of a source sentence to a target style.In many applications, parallel training data are not available and source sentences to be transferred may have arbitrary and unknown styles.In this paper, we present an encoder-decoder framework under this problem setting.Each sentence is encoded into its content and style latent representations.By recombining the content with the target style, we can decode a sentence aligned in the target domain.To adequately constrain the encoding and decoding functions, we couple them with two loss functions.The first is a style discrepancy loss, enforcing that the style representation accurately encodes the style information guided by the discrepancy between the sentence style and the target style.The second is a cycle consistency loss, which ensures that the transferred sentence should preserve the content of the original sentence disentangled from its style.We validate the effectiveness of our proposed model on two tasks: sentiment modification of restaurant reviews, and dialog response revision with a romantic style.","We present an encoder-decoder framework for language style transfer, which allows for the use of non-parallel data and source data with various unknown language styles." 1376,Neural Networks for Principal Component Analysis: A New Loss Function Provably Yields Ordered Exact Eigenvectors ,"In this paper, we propose a new loss function for performing principal component analysis using linear autoencoders.Optimizing the standard L2 loss results in a decoder matrix that spans the principal subspace of the sample covariance of the data, but fails to identify the exact eigenvectors.This downside originates from an invariance that cancels out in the global map.Here, we prove that our loss function eliminates this issue, i.e. 
the decoder converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix.For this new loss, we establish that all local minima are global optima and also show that computing the new loss has the same order of complexity as the classical loss.We report numerical results on both synthetic simulations and a real-data PCA experiment on MNIST, demonstrating our approach to be practically applicable and to rectify previous LAEs' downsides.",A new loss function for PCA with linear autoencoders that provably yields ordered exact eigenvectors 1377,Integrating User Feedback under Identity Uncertainty in Knowledge Base Construction,"Users have tremendous potential to aid in the construction and maintenance of knowledge bases through the contribution of feedback that identifies incorrect and missing entity attributes and relations.However, as new data is added to the KB, the KB entities, which are constructed by running entity resolution, can change, rendering the intended targets of user feedback unknown–a problem we term identity uncertainty.In this work, we present a framework for integrating user feedback into KBs in the presence of identity uncertainty.Our approach is based on having user feedback participate alongside mentions in ER.We propose a specific representation of user feedback as feedback mentions and introduce a new online algorithm for integrating these mentions into an existing KB.In experiments, we demonstrate that our proposed approach outperforms the baselines in 70% of experimental conditions.",This paper develops a framework for integrating user feedback under identity uncertainty in knowledge bases. 1378,Poisoning Attacks with Generative Adversarial Nets,"Machine learning algorithms are vulnerable to poisoning attacks: An adversary can inject malicious points in the training dataset to influence the learning process and degrade the algorithm's performance.Optimal poisoning attacks have already been proposed to evaluate worst-case scenarios, modelling attacks as a bi-level optimization problem.Solving these problems is computationally demanding and has limited applicability for some models such as deep networks.In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers generating adversarial training examples, i.e. samples that look like genuine data points but that degrade the classifier's accuracy when used for training.We propose a Generative Adversarial Net with three components: generator, discriminator, and the target classifier.This approach allows us to model naturally the detectability constraints that can be expected in realistic attacks and to identify the regions of the underlying data distribution that can be more vulnerable to data poisoning.Our experimental evaluation shows the effectiveness of our attack in compromising machine learning classifiers, including deep networks.","In this paper we propose a novel generative model to craft systematic poisoning attacks with detectability constraints against machine learning classifiers, including deep networks.
" 1379,Quantum Expectation-Maximization for Gaussian Mixture Models,"The Expectation-Maximization algorithm is a fundamental tool in unsupervised machine learning.It is often used as an efficient way to solve Maximum Likelihood and Maximum A Posteriori estimation problems, especially for models with latent variables.It is also the algorithm of choice to fit mixture models: generative models that represent unlabelled points originating from different processes, as samples from multivariate distributions.In this work we define and use a quantum version of EM to fit a Gaussian Mixture Model.Given quantum access to a dataset of vectors of dimension, our algorithm has convergence and precision guarantees similar to the classical algorithm, but the runtime is only polylogarithmic in the number of elements in the training set, and is polynomial in other parameters - as the dimension of the feature space, and the number of components in the mixture.We generalize further the algorithm by fitting any mixture model of base distributions in the exponential family.We discuss the performance of the algorithm on datasets that are expected to be classified successfully by those algorithms, arguing that on those cases we can give strong guarantees on the runtime.",It's the quantum algorithm for Expectation Maximization. It's fast: the runtime depends only polylogarithmically on the number of elements in the dataset. 1380,SQIL: Imitation Learning via Reinforcement Learning with Sparse Rewards,"Learning to imitate expert behavior from demonstrations can be challenging, especially in environments with high-dimensional, continuous observations and unknown dynamics.Supervised learning methods based on behavioral cloning suffer from distribution shift: because the agent greedily imitates demonstrated actions, it can drift away from demonstrated states due to error accumulation.Recent methods based on reinforcement learning, such as inverse RL and generative adversarial imitation learning, overcome this issue by training an RL agent to match the demonstrations over a long horizon.Since the true reward function for the task is unknown, these methods learn a reward function from the demonstrations, often using complex and brittle approximation techniques that involve adversarial training.We propose a simple alternative that still uses RL, but does not require learning a reward function.The key idea is to provide the agent with an incentive to match the demonstrations over a long horizon, by encouraging it to return to demonstrated states upon encountering new, out-of-distribution states.We accomplish this by giving the agent a constant reward of r=+1 for matching the demonstrated action in a demonstrated state, and a constant reward of r=0 for all other behavior.Our method, which we call soft Q imitation learning, can be implemented with a handful of minor modifications to any standard Q-learning or off-policy actor-critic algorithm.Theoretically, we show that SQIL can be interpreted as a regularized variant of BC that uses a sparsity prior to encourage long-horizon imitation.Empirically, we show that SQIL outperforms BC and achieves competitive results compared to GAIL, on a variety of image-based and low-dimensional tasks in Box2D, Atari, and MuJoCo.This paper is a proof of concept that illustrates how a simple imitation method based on RL with constant rewards can be as effective as more complex methods that use learned rewards.","A simple and effective alternative to adversarial imitation learning: initialize 
experience replay buffer with demonstrations, set their reward to +1, set reward for all other data to 0, run Q-learning or soft actor-critic to train." 1381,Variational pSOM: Deep Probabilistic Clustering with Self-Organizing Maps,"Generating visualizations and interpretations from high-dimensional data is a common problem in many fields.Two key approaches for tackling this problem are clustering and representation learning.There are very performant deep clustering models on the one hand and interpretable representation learning techniques, often relying on latent topological structures such as self-organizing maps, on the other hand.However, current methods do not yet successfully combine these two approaches.We present a new deep architecture for probabilistic clustering, VarPSOM, and its extension to time series data, VarTPSOM, composed of VarPSOM modules connected by LSTM cells.We show that they achieve superior clustering performance compared to current deep clustering methods on static MNIST/Fashion-MNIST data as well as medical time series, while inducing an interpretable representation.Moreover, on the medical time series, VarTPSOM successfully predicts future trajectories in the original data space.","We present a new deep architecture, VarPSOM, and its extension to time series data, VarTPSOM, which achieve superior clustering performance compared to current deep clustering methods on static and temporal data." 1382,Which Tasks Should Be Learned Together in Multi-task Learning?,"Many computer vision applications require solving multiple tasks in real-time.A neural network can be trained to solve multiple tasks simultaneously using 'multi-task learning'.This saves computation at inference time as only a single network needs to be evaluated.Unfortunately, this often leads to inferior overall performance as task objectives compete, which consequently poses the question: which tasks should and should not be learned together in one network when employing multi-task learning?We systematically study task cooperation and competition and propose a framework for assigning tasks to a few neural networks such that cooperating tasks are computed by the same neural network, while competing tasks are computed by different networks.Our framework offers a time-accuracy trade-off and can produce better accuracy using less inference time than not only a single large multi-task neural network but also many single-task networks.","We analyze what tasks are best learned together in one network, and which are best to learn separately.
" 1383,Beyond Lexical: A Semantic Retrieval Framework for Textual Search Engine,"Search engine has become a fundamental component in various web and mobile applications.Retrieving relevant documents from the massive datasets is challenging for a search engine system, especially when faced with verbose or tail queries.In this paper, we explore a vector space search framework for document retrieval.Specifically, we trained a deep semantic matching model so that each query and document can be encoded as a low dimensional embedding.Our model was trained based on BERT architecture.We deployed a fast k-nearest-neighbor index service for online serving.Both offline and online metrics demonstrate that our method improved retrieval performance and search quality considerably, particularly for tail queries",A deep semantic framework for textual search engine document retrieval 1384,Matching Distributions via Optimal Transport for Semi-Supervised Learning,"Semi-Supervised Learning approaches have been an influential framework for the usage of unlabeled data when there is not a sufficient amount of labeled data available over the course of training.SSL methods based on Convolutional Neural Networks have recently provided successful results on standard benchmark tasks such as image classification.In this work, we consider the general setting of SSL problem where the labeled and unlabeled data come from the same underlying probability distribution.We propose a new approach that adopts an Optimal Transport technique serving as a metric of similarity between discrete empirical probability measures to provide pseudo-labels for the unlabeled data, which can then be used in conjunction with the initial labeled data to train the CNN model in an SSL manner.We have evaluated and compared our proposed method with state-of-the-art SSL algorithms on standard datasets to demonstrate the superiority and effectiveness of our SSL algorithm.",We propose a new algorithm based on the optimal transport to train a CNN in an SSL fashion. 1385,A Simple Neural Attentive Meta-Learner,"Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task.In response, recent work in meta-learning proposes training a meta-learner on a distribution of similar tasks, in the hopes of generalization to novel but related tasks by learning a high-level strategy that captures the essence of the problem it is asked to solve.However, many recent meta-learning approaches are extensively hand-designed, either using architectures specialized to a particular application, or hard-coding algorithmic components that constrain how the meta-learner solves the task.We propose a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information. In the most extensive set of meta-learning experiments to date, we evaluate the resulting Simple Neural AttentIve Learner on several heavily-benchmarked tasks. 
On all tasks, in both supervised and reinforcement learning, SNAIL attains state-of-the-art performance by significant margins.",A simple RNN-based meta-learner that achieves SOTA performance on popular benchmarks 1386,Emergent Coordination Through Competition,"We study the emergence of cooperative behaviors in reinforcement learning agents by introducing a challenging competitive multi-agent soccer environment with continuous simulated physics.We demonstrate that decentralized, population-based training with co-play can lead to a progression in agents' behaviors: from random, to simple ball chasing, and finally showing evidence of cooperation.Our study highlights several of the challenges encountered in large scale multi-agent training in continuous control.In particular, we demonstrate that the automatic optimization of simple shaping rewards, not themselves conducive to co-operative behavior, can lead to long-horizon team behavior.We further apply an evaluation scheme, grounded by game-theoretic principles, that can assess agent performance in the absence of pre-defined evaluation tasks or human baselines.","We introduce a new MuJoCo soccer environment for continuous multi-agent reinforcement learning research, and show that population-based training of independent reinforcement learners can learn cooperative behaviors" 1387,Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis,"Program synthesis is the task of automatically generating a program consistent with a specification.Recent years have seen the proposal of a number of neural approaches for program synthesis, many of which adopt a sequence generation paradigm similar to neural machine translation, in which sequence-to-sequence models are trained to maximize the likelihood of known reference programs.While achieving impressive results, this strategy has two key limitations.First, it ignores Program Aliasing: the fact that many different programs may satisfy a given specification.By maximizing the likelihood of only a single reference program, it penalizes many semantically correct programs, which can adversely affect the synthesizer performance.Second, this strategy overlooks the fact that programs have a strict syntax that can be efficiently checked.To address the first limitation, we perform reinforcement learning on top of a supervised model with an objective that explicitly maximizes the likelihood of generating semantically correct programs.For addressing the second limitation, we introduce a training procedure that directly maximizes the probability of generating syntactically correct programs that fulfill the specification.We show that our contributions lead to improved accuracy of the models, especially in cases where the training data is limited.",Using the DSL grammar and reinforcement learning to improve synthesis of programs with complex control flow.
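Entry 1387 replaces pure maximum-likelihood training with a reinforcement-learning objective that rewards semantically correct programs, i.e. programs that satisfy the input/output specification rather than matching a single reference. The toy REINFORCE sketch below illustrates that objective on a three-program search space; the candidate programs, the specification and the single-step policy are hypothetical stand-ins for a pretrained sequence model.

```python
# A small illustrative sketch (not the authors' implementation) of rewarding
# *semantic* correctness: any program that passes the I/O spec gets reward 1.
import torch

programs = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]   # candidate programs
spec = [(1, 2), (3, 6), (5, 10)]                                 # I/O examples (here: 2*x)

def semantic_reward(program):
    """1.0 if the sampled program satisfies every I/O example, else 0.0."""
    return float(all(program(i) == o for i, o in spec))

logits = torch.zeros(len(programs), requires_grad=True)          # a pretrained policy would go here
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    idx = dist.sample()
    reward = semantic_reward(programs[idx.item()])
    loss = -reward * dist.log_prob(idx)                          # REINFORCE estimator
    opt.zero_grad(); loss.backward(); opt.step()

print(torch.softmax(logits, dim=0))   # probability mass shifts to the correct program
```

The same estimator does not penalize any alias of the reference program, which is exactly the Program Aliasing issue the entry raises against plain maximum likelihood.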
1388,Bayesian Time Series Forecasting with Change Point and Anomaly Detection,"Time series forecasting plays a crucial role in marketing, finance and many other quantitative fields.A large amount of methodologies has been developed on this topic, including ARIMA, Holt–Winters, etc.However, their performance is easily undermined by the existence of change points and anomaly points, two structures commonly observed in real data, but rarely considered in the aforementioned methods.In this paper, we propose a novel state space time series model, with the capability to capture the structure of change points and anomaly points, as well as trend and seasonality.To infer all the hidden variables, we develop a Bayesian framework, which is able to obtain distributions and forecasting intervals for time series forecasting, with provable theoretical properties.For implementation, an iterative algorithm with Markov chain Monte Carlo, Kalman filter and Kalman smoothing is proposed.In both synthetic data and real data applications, our methodology yields a better performance in time series forecasting compared with existing methods, along with more accurate change point detection and anomaly detection.","We propose a novel state space time series model with the capability to capture the structure of change points and anomaly points, so that it has a better forecasting performance when there exist change points and anomalies in the time series." 1389,Factorized Multimodal Transformer for Multimodal Sequential Learning,"The complex world around us is inherently multimodal and sequential.Information is scattered across different modalities and requires multiple continuous sensors to be captured.As machine learning leaps towards better generalization to real world, multimodal sequential learning becomes a fundamental research area.Arguably, modeling arbitrarily distributed spatio-temporal dynamics within and across modalities is the biggest challenge in this research area.In this paper, we present a new transformer model, called the Factorized Multimodal Transformer for multimodal sequential learning.FMT inherently models the intramodal and intermodal dynamics within its multimodal input in a factorized manner.The proposed factorization allows for increasing the number of self-attentions to better model the multimodal phenomena at hand; without encountering difficulties during training even on relatively low-resource setups.All the attention mechanisms within FMT have a full time-domain receptive field which allows them to asynchronously capture long-range multimodal dynamics.In our experiments we focus on datasets that contain the three commonly studied modalities of language, vision and acoustic.We perform a wide range of experiments, spanning across 3 well-studied datasets and 21 distinct labels.FMT shows superior performance over previously proposed models, setting new state of the art in the studied datasets.","A multimodal transformer for multimodal sequential learning, with strong empirical results on multimodal language metrics such as multimodal sentiment analysis, emotion recognition and personality traits recognition. 
" 1390,Stochastic Geodesic Optimization for Neural Networks,"We develop a novel and efficient algorithm for optimizing neural networks inspired by a recently proposed geodesic optimization algorithm.""Our algorithm, which we call Stochastic Geodesic Optimization, utilizes an adaptive coefficient on top of Polyak's Heavy Ball method effectively controlling the amount of weight put on the previous update to the parameters based on the change of direction in the optimization path."", 'Experimental results on strongly convex functions with Lipschitz gradients and deep Autoencoder benchmarks show that SGeO reaches lower errors than established first-order methods and competes well with lower or similar errors to a recent second-order method called K-FAC.We also incorporate Nesterov style lookahead gradient into our algorithm and observe notable improvements.",We utilize an adaptive coefficient on top of regular momentum inspired by geodesic optimization which significantly speeds up training in both convex and non-convex functions. 1391,Masked Translation Model,"We introduce the masked translation model which combines encoding and decoding of sequences within the same model component.The MTM is based on the idea of masked language modeling and supports both autoregressive and non-autoregressive decoding strategies by simply changing the order of masking.In experiments on the WMT 2016 Romanian-English task, the MTM shows strong constant-time translation performance, beating all related approaches with comparable complexity.We also extensively compare various decoding strategies supported by the MTM, as well as several length modeling techniques and training settings.",We use a transformer encoder to do translation by training it in the style of a masked translation model. 1392,Detecting Extrapolation with Local Ensembles,"We present local ensembles, a method for detecting extrapolation at test time in a pre-trained model.We focus on underdetermination as a key component of extrapolation: we aim to detect when many possible predictions are consistent with the training data and model class.Our method uses local second-order information to approximate the variance of predictions across an ensemble of models from the same class.""We compute this approximation by estimating the norm of the component of a test point's gradient that aligns with the low-curvature directions of the Hessian, and provide a tractable method for estimating this quantity."", 'Experimentally, we show that our method is capable of detecting when a pre-trained model is extrapolating on test data, with applications to out-of-distribution detection, detecting spurious correlates, and active learning.","We present local ensembles, a method for detecting extrapolation in trained models, which approximates the variance of an ensemble using local-second order information." 1393,Privacy-aware Adaptive Scheduling for Coalition Operations,"Coalition operations are essential for responding to the increasing number of world-wide incidents that require large-scale humanitarian assistance.Many nations and non-governmental organizations regularly coordinate to address such problems but their cooperation is often impeded by limits on what information they are able to share.In this paper, we consider the use of an advanced cryptographic technique called secure multi-party computation to enable coalition members to achieve joint objectives while still meeting privacy requirements. 
Our particular focus is on a multi-nation aid delivery scheduling task that involves coordinating when and where various aid provider nations will deliver relief materials after the occurrence of a natural disaster. Even with the use of secure multi-party computation technology, information about private data can leak. We describe how the emerging field of quantitative information flow can be used to help data owners understand the extent to which private data might become vulnerable as the result of possible or actual scheduling operations, and to enable automated adjustments of the scheduling process to ensure privacy requirements",Privacy can be thought about in the same way as other resources in planning 1394,Answer-based Adversarial Training for Generating Clarification Questions,"We propose a generative adversarial training approach for the problem of clarification question generation.Our approach generates clarification questions with the goal of eliciting new information that would make the given context more complete.We develop a Generative Adversarial Network where the generator is a sequence-to-sequence model and the discriminator is a utility function that models the value of updating the context with the answer to the clarification question.We evaluate on two datasets, using both automatic metrics and human judgments of usefulness, specificity and relevance, showing that our approach outperforms both a retrieval-based model and ablations that exclude the utility model and the adversarial training.",We propose an adversarial training approach to the problem of clarification question generation which uses the answer to the question to model the reward. 1395,Pattern Selection for Optimal Classical Planning with Saturated Cost Partitioning,Pattern databases are the foundation of some of the strongest admissible heuristics for optimal classical planning.Experiments showed that the most informative way of combining information from multiple pattern databases is to use saturated cost partitioning.Previous work selected patterns and computed saturated cost partitionings over the resulting pattern database heuristics in two separate steps.We introduce a new method that uses saturated cost partitioning to select patterns and show that it outperforms all existing pattern selection algorithms.,Using saturated cost partitioning to select patterns is preferable to all existing pattern selection algorithms. 1396,Weighted Transformer Network for Machine Translation,"State-of-the-art results on neural machine translation often use attentional sequence-to-sequence models with some form of convolution or recursion.Vaswani et.al. 
propose a new architecture that avoids recurrence and convolution completely.Instead, it uses only self-attention and feed-forward layers.While the proposed architecture achieves state-of-the-art results on several machine translation tasks, it requires a large number of parameters and training iterations to converge.We propose Weighted Transformer, a Transformer with modified attention layers, that not only outperforms the baseline network in BLEU score but also converges 15-40% faster.Specifically, we replace the multi-head attention by multiple self-attention branches that the model learns to combine during the training process.Our model improves the state-of-the-art performance by 0.5 BLEU points on the WMT 2014 English-to-German translation task and by 0.4 on the English-to-French translation task.",Using branched attention with learned combination weights outperforms the baseline transformer for machine translation tasks. 1397,Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning,"Uncertainty estimation and ensembling methods go hand-in-hand.Uncertainty estimation is one of the main benchmarks for assessment of ensembling performance.At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation.In this work, we focus on in-domain uncertainty for image classification.We explore the standards for its quantification and point out pitfalls of existing metrics.Avoiding these pitfalls, we perform a broad study of different ensembling techniques.To provide more insight in the broad comparison, we introduce the deep ensemble equivalent and show that many sophisticated ensembling techniques are equivalent to an ensemble of very few independently trained networks in terms of the test log-likelihood.",We highlight the problems with common metrics of in-domain uncertainty and perform a broad study of modern ensembling techniques. 
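Entry 1397 compares ensembling techniques through the test log-likelihood of the averaged predictive distribution and summarizes them via a "deep ensemble equivalent". The snippet below sketches that evaluation with placeholder member predictions; real use would substitute the softmax outputs of independently trained networks.

```python
# A minimal sketch of the ensemble test log-likelihood used to compare
# ensembling techniques: average member *probabilities*, then score them.
# The random "member predictions" below are placeholders for real softmax outputs.
import numpy as np

rng = np.random.default_rng(0)

def ensemble_log_likelihood(member_probs, labels):
    """member_probs: (n_members, n_examples, n_classes); labels: (n_examples,)."""
    mean_probs = member_probs.mean(axis=0)                      # ensemble predictive distribution
    return np.log(mean_probs[np.arange(len(labels)), labels]).mean()

n_members, n_examples, n_classes = 5, 100, 10
probs = rng.dirichlet(np.ones(n_classes), size=(n_members, n_examples))
labels = rng.integers(0, n_classes, size=n_examples)

# the "deep ensemble equivalent" idea: find how few independently trained
# members give a score comparable to a more sophisticated technique
for k in (1, 2, 5):
    print(k, round(float(ensemble_log_likelihood(probs[:k], labels)), 3))
```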
1398,A Framework for Robustness Certification of Smoothed Classifiers Using f-Divergences,"Formal verification techniques that compute provable guarantees on properties of machine learning models, like robustness to norm-bounded adversarial perturbations, have yielded impressive results.Although most techniques developed so far require knowledge of the architecture of the machine learning model and remain hard to scale to complex prediction pipelines, the method of randomized smoothing has been shown to overcome many of these obstacles.By requiring only black-box access to the underlying model, randomized smoothing scales to large architectures and is agnostic to the internals of the network.However, past work on randomized smoothing has focused on restricted classes of smoothing measures or perturbations and has only been able to prove robustness with respect to simple norm bounds.In this paper we introduce a general framework for proving robustness properties of smoothed machine learning models in the black-box setting.Specifically, we extend randomized smoothing procedures to handle arbitrary smoothing measures and prove robustness of the smoothed classifier by using f-divergences.Our methodology achieves state-of-the-art certified robustness on MNIST, CIFAR-10 and ImageNet, as well as on an audio classification task, Librispeech, with respect to several classes of adversarial perturbations.",Develop a general framework to establish certified robustness of ML models against various classes of adversarial perturbations 1399,SPACE: Unsupervised Object-Oriented Scene Representation via Spatial Attention and Decomposition,"The ability to decompose complex multi-object scenes into meaningful abstractions like objects is fundamental to achieving higher-level cognition.Previous approaches for unsupervised object-oriented scene representation learning are either based on spatial-attention or scene-mixture approaches and limited in scalability, which is a main obstacle to modeling real-world scenes.In this paper, we propose a generative latent variable model, called SPACE, that provides a unified probabilistic modeling framework that combines the best of spatial-attention and scene-mixture approaches.SPACE can explicitly provide factorized object representations for foreground objects while also decomposing background segments of complex morphology.Previous models are good at either of these, but not both.SPACE also resolves the scalability problems of previous methods by incorporating parallel spatial-attention and thus is applicable to scenes with a large number of objects without performance degradation.We show through experiments on Atari and 3D-Rooms that SPACE achieves the above properties consistently in comparison to SPAIR, IODINE, and GENESIS.Results of our experiments can be found on our project website: https://sites.google.com/view/space-project-page",We propose a generative latent variable model for unsupervised scene decomposition that provides factorized object representation per foreground object while also decomposing background segments of complex morphology.
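For the certification framework of entry 1398 above, the underlying randomized-smoothing prediction only needs black-box access to the base model: classify many randomly perturbed copies of the input and take a majority vote. The sketch below shows that generic prediction step with a Gaussian smoothing measure as an illustrative choice; the certification itself (bounding the top-class probability and converting it into a robustness guarantee, e.g. via an f-divergence bound) is only indicated in a comment.

```python
# A generic sketch of randomized-smoothing prediction with black-box access
# to the base classifier. The base model and Gaussian noise are illustrative;
# the framework in entry 1398 allows arbitrary smoothing measures.
import numpy as np

rng = np.random.default_rng(0)

def base_classifier(x):
    """Placeholder black-box model: predicts the index of the largest coordinate."""
    return int(np.argmax(x))

def smoothed_predict(x, n_samples=1000, sigma=0.25, n_classes=3):
    counts = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        counts[base_classifier(x + sigma * rng.normal(size=x.shape))] += 1
    return int(np.argmax(counts)), counts / n_samples   # top class and vote frequencies

x = np.array([0.2, 0.9, 0.1])
print(smoothed_predict(x))
# A certification step would then lower-bound the top-class vote probability
# and translate it into a certified region around x.
```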
1400,Variational Autoencoder with Arbitrary Conditioning,"We propose a single neural probabilistic model based on variational autoencoder that can be conditioned on an arbitrary subset of observed features and then sample the remaining features in ""one shot"".The features may be both real-valued and categorical.Training of the model is performed by stochastic variational Bayes.The experimental evaluation on synthetic data, as well as feature imputation and image inpainting problems, shows the effectiveness of the proposed approach and diversity of the generated samples.",We propose an extension of conditional variational autoencoder that allows conditioning on an arbitrary subset of the features and sampling the remaining ones. 1401,Hierarchical Representations for Efficient Architecture Search,"We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance.Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies.Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches.We also present results using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the search time from 36 hours down to 1 hour.",In this paper we propose a hierarchical architecture representation in which doing random or evolutionary architecture search yields highly competitive results using fewer computational resources than the prior art. 
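Entry 1401 reports that even plain random search over its hierarchical representation is highly competitive with evolutionary search. The toy sketch below mimics that baseline with a two-level search space of reusable motifs; the operation set, motif sizes and the score() placeholder are illustrative assumptions, since a real run would train and validate every candidate.

```python
# A toy sketch of random architecture search over a two-level ("hierarchical")
# description: lower-level motifs are small lists of operations, and the top
# level chains sampled motifs. Entirely illustrative; not the paper's search space.
import random

random.seed(0)
OPS = ["identity", "conv3x3", "conv1x1", "maxpool", "sepconv3x3"]

def sample_motif(n_ops=3):
    return [random.choice(OPS) for _ in range(n_ops)]            # lower-level motif

def sample_architecture(n_cells=4):
    motifs = [sample_motif() for _ in range(2)]                   # reusable motifs
    return [random.choice(motifs) for _ in range(n_cells)]        # higher-level assembly

def score(architecture):
    """Placeholder for 'train briefly and return validation accuracy'."""
    return sum(op != "identity" for cell in architecture for op in cell) + random.random()

best = max((sample_architecture() for _ in range(50)), key=score)
print(best)
```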
1402,Hallucinative Topological Memory for Zero-Shot Visual Planning,"In visual planning, an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline, e.g., images obtained from self-supervised robot interaction.VP algorithms essentially combine data-driven perception and planning, and are important for robotic manipulation and navigation domains, among others.A recent and promising approach to VP is the semi-parametric topological memory method, where image samples are treated as nodes in a graph, and the connectivity in the graph is learned using deep image classification.Thus, the learned graph represents the topological connectivity of the data, and planning can be performed using conventional graph search methods.However, training SPTM necessitates a suitable loss function for the connectivity classifier, which requires non-trivial manual tuning.More importantly, SPTM is constricted in its ability to generalize to changes in the domain, as its graph is constructed from direct observations and thus requires collecting new samples for planning.In this paper, we propose Hallucinative Topological Memory, which overcomes these shortcomings.In HTM, instead of training a discriminative classifier we train an energy function using contrastive predictive coding.In addition, we learn a conditional VAE model that generates samples given a context image of the domain, and use these hallucinated samples for building the connectivity graph, allowing for zero-shot generalization to domain changes.In simulated domains, HTM outperforms conventional SPTM and visual foresight methods in terms of both plan quality and success in long-horizon planning.","We propose Hallucinative Topological Memory (HTM), a visual planning algorithm that can perform zero-shot long horizon planning in new environments. 
" 1403,Effective and Efficient Batch Normalization Using Few Uncorrelated Data for Statistics' Estimation,"Deep Neural Networks thrive in recent years in which Batch Normalization plays an indispensable role.However, it has been observed that BN is costly due to the reduction operations.In this paper, we propose alleviating the BN’s cost by using only a small fraction of data for mean & variance estimation at each iteration.The key challenge to reach this goal is how to achieve a satisfactory balance between normalization effectiveness and execution efficiency.We identify that the effectiveness expects less data correlation while the efficiency expects regular execution pattern.To this end, we propose two categories of approach: sampling or creating few uncorrelated data for statistics’ estimation with certain strategy constraints.The former includes “Batch Sampling” that randomly selects few samples from each batch and “Feature Sampling” that randomly selects a small patch from each feature map of all samples, and the latter is “Virtual Dataset Normalization” that generates few synthetic random samples.Accordingly, multi-way strategies are designed to reduce the data correlation for accurate estimation and optimize the execution pattern for running acceleration in the meantime.All the proposed methods are comprehensively evaluated on various DNN models, where an overall training speedup by up to 21.7% on modern GPUs can be practically achieved without the support of any specialized libraries, and the loss of model accuracy and convergence rate are negligible.Furthermore, our methods demonstrate powerful performance when solving the well-known “micro-batch normalization” problem in the case of tiny batch size.","We propose accelerating Batch Normalization (BN) through sampling less correlated data for reduction operations with regular execution pattern, which achieves up to 2x and 20% speedup for BN itself and the overall training, respectively." 1404,SHE2: Stochastic Hamiltonian Exploration and Exploitation for Derivative-Free Optimization,"Derivative-free optimization using trust region methods is frequently used for machine learning applications, such asparameter optimization without the derivatives of objective functions known. 
Inspired by the recent work on continuous-time minimizers, our work models the common trust region methods as exploration-exploitation using a dynamical system that couples a pair of dynamical processes.While the first exploration process searches for the minimum of the blackbox function by minimizing a time-evolving surrogate function, the other exploitation process updates the surrogate function from time to time using the points traversed by the exploration process.The efficiency of derivative-free optimization thus depends on the way the two processes couple.In this paper, we propose a novel dynamical system, namely SHE2 (Stochastic Hamiltonian Exploration and Exploitation), that surrogates subregions of the blackbox function using a time-evolving quadratic function, then explores and tracks the minimum of the quadratic functions using a fast-converging Hamiltonian system.The SHE2 algorithm is later provided as a discrete-time numerical approximation to the system.To further accelerate optimization, we present a parallelized variant that runs multiple SHE2 threads for concurrent exploration and exploitation.Experiment results based on a wide range of machine learning applications show that the parallelized algorithm outperforms a broad range of derivative-free optimization algorithms with faster convergence speed under the same settings.",a new derivative-free optimization algorithm derived from Nesterov's accelerated gradient methods and Hamiltonian dynamics 1405,Binarized Back-Propagation: Training Binarized Neural Networks with Binarized Gradients," Binarized Neural Networks have been shown to be effective in improving network efficiency during the inference phase, after the network has been trained.However, BNNs only binarize the model parameters and activations during propagations.Therefore, BNNs do not offer significant efficiency improvements during training, since the gradients are still propagated and used with high precision.We show there is no inherent difficulty in training BNNs using ""Binarized BackPropagation"", in which we also binarize the gradients.To avoid significant degradation in test accuracy, we simply increase the number of filter maps in each convolution layer.Using BBP on dedicated hardware can potentially significantly improve the execution efficiency and speed up the training process with appropriate hardware support, even after such an increase in network size.Moreover, our method is ideal for distributed learning as it reduces the communication costs significantly.Using this method, we demonstrate a minimal loss in classification accuracy on several datasets and topologies.",Binarized Back-Propagation: all you need for completely binarized training is to inflate the size of the network 1406,On the Selection of Initialization and Activation Function for Deep Neural Networks,"The weight initialization and the activation function of deep neural networks have a crucial impact on the performance of the training procedure.An inappropriate selection can lead to the loss of information of the input during forward propagation and the exponential vanishing/exploding of gradients during back-propagation.Understanding the theoretical properties of untrained random networks is key to identifying which deep networks may be trained successfully, as recently demonstrated by Schoenholz et al. 
who showed that for deep feedforward neural networks only a specific choice of hyperparameters known as the `edge of chaos' can lead to good performance.We complete this analysis by providing quantitative results showing that, for a class of ReLU-like activation functions, the information indeed propagates deeper for an initialization at the edge of chaos.By further extending this analysis, we identify a class of activation functions that improve the information propagation over ReLU-like functions.This class includes the Swish activation, used in Hendrycks & Gimpel, Elfwing et al. and Ramachandran et al.This provides a theoretical grounding for the excellent empirical performance observed in these contributions.We complement those previous results by illustrating the benefit of using a random initialization on the edge of chaos in this context.",How to effectively choose Initialization and Activation function for deep neural networks 1407,Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs,"The driving force behind the recent success of LSTMs has been their ability to learn complex and non-linear relationships.Consequently, our inability to describe these relationships has led to LSTMs being characterized as black boxes.To this end, we introduce contextual decomposition, an interpretation algorithm for analysing individual predictions made by standard LSTMs, without any changes to the underlying model.By decomposing the output of an LSTM, CD captures the contributions of combinations of words or variables to the final prediction of an LSTM.On the task of sentiment analysis with the Yelp and SST data sets, we show that CD is able to reliably identify words and phrases of contrasting sentiment, and how they are combined to yield the LSTM's final prediction.Using the phrase-level labels in SST, we also demonstrate that CD is able to successfully extract positive and negative negations from an LSTM, something which has not previously been done.","We introduce contextual decompositions, an interpretation algorithm for LSTMs capable of extracting word, phrase and interaction-level importance scores" 1408,Phase-Aware Speech Enhancement with Deep Complex U-Net,"Most deep learning-based models for speech enhancement have mainly focused on estimating the magnitude of the spectrogram while reusing the phase from noisy speech for reconstruction.This is due to the difficulty of estimating the phase of clean speech.To improve speech enhancement performance, we tackle the phase estimation problem in three ways.First, we propose Deep Complex U-Net, an advanced U-Net structured model incorporating well-defined complex-valued building blocks to deal with complex-valued spectrograms.Second, we propose a polar coordinate-wise complex-valued masking method to reflect the distribution of complex ideal ratio masks.Third, we define a novel loss function, weighted source-to-distortion ratio loss, which is designed to directly correlate with a quantitative evaluation measure.Our model was evaluated on a mixture of the Voice Bank corpus and DEMAND database, which has been widely used by many deep learning models for speech enhancement.Ablation experiments were conducted on the mixed dataset showing that all three proposed approaches are empirically valid.Experimental results show that the proposed method achieves state-of-the-art performance in all metrics, outperforming previous approaches by a large margin.",This paper proposes a novel complex masking method for speech 
enhancement along with a loss function for efficient phase estimation. 1409,SMiRL: Surprise Minimizing RL in Entropic Environments,"All living organisms struggle against the forces of nature to carve out niches where they can maintain relative stasis.We propose that such a search for order amidst chaos might offer a unifying principle for the emergence of useful behaviors in artificial agents.We formalize this idea into an unsupervised reinforcement learning method called surprise minimizing RL.SMiRL trains an agent with the objective of maximizing the probability of observed states under a model trained on all previously seen states.The resulting agents acquire several proactive behaviors to seek and maintain stable states such as balancing and damage avoidance, that are closely tied to the affordances of the environment and its prevailing sources of entropy, such as winds, earthquakes, and other agents. We demonstrate that our surprise minimizing agents can successfully play Tetris, Doom, and control a humanoid to avoid falls, without any task-specific reward supervision. We further show that SMiRL can be used as an unsupervised pre-training objective that substantially accelerates subsequent reward-driven learning.",Learning emergent behavior by minimizing Bayesian surprise with RL in natural environments with entropy. 1410,Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm,"Learning to learn is a powerful paradigm for enabling models to learn from data more effectively and efficiently.A popular approach to meta-learning is to train a recurrent model to read in a training dataset as input and output the parameters of a learned model, or output predictions for new test inputs.Alternatively, a more recent approach to meta-learning aims to acquire deep representations that can be effectively fine-tuned, via standard gradient descent, to new tasks.In this paper, we consider the meta-learning problem from the perspective of universality, formalizing the notion of learning algorithm approximation and comparing the expressive power of the aforementioned recurrent models to the more recent approaches that embed gradient descent into the meta-learner.In particular, we seek to answer the following question: does deep representation combined with standard gradient descent have sufficient capacity to approximate any learning algorithm?We find that this is indeed true, and further find, in our experiments, that gradient-based meta-learning consistently leads to learning strategies that generalize more widely compared to those represented by recurrent models.",Deep representations combined with gradient descent can approximate any learning algorithm. 
1411,Translating neural signals to text using a Brain-Computer Interface,"Brain-Computer Interfaces may help patients with faltering communication abilities due to neurodegenerative diseases produce text or speech by direct neural processing.However, their practical realization has proven difficult due to limitations in speed, accuracy, and generalizability of existing interfaces.To this end, we aim to create a BCI that decodes text directly from neural signals.We implement a framework that initially isolates frequency bands in the input signal encapsulating differential information regarding production of various phonemic classes.These bands form a feature set that feeds into an LSTM which discerns at each time point probability distributions across all phonemes uttered by a subject.Finally, a particle filtering algorithm temporally smooths these probabilities incorporating prior knowledge of the English language to output text corresponding to the decoded word.Further, in producing an output, we abstain from constraining the reconstructed word to be from a given bag-of-words, unlike previous studies.The empirical success of our proposed approach offers promise for the employment of such an interface by patients in unfettered, naturalistic environments.",We present an open-loop brain-machine interface whose performance is not constrained by the traditionally used bag-of-words approach. 1412,An Out-of-the-box Full-network Embedding for Convolutional Neural Networks,"Transfer learning for feature extraction can be used to exploit deep representations in contexts where there are very few training data, where there are limited computational resources, or when tuning the hyper-parameters needed for training is not an option.While previous contributions to feature extraction propose embeddings based on a single layer of the network, in this paper we propose a full-network embedding which successfully integrates convolutional and fully connected features, coming from all layers of a deep convolutional neural network.To do so, the embedding normalizes features in the context of the problem, and discretizes their values to reduce noise and regularize the embedding space.Significantly, this also reduces the computational cost of processing the resultant representations.The proposed method is shown to outperform single layer embeddings on several image classification tasks, while also being more robust to the choice of the pre-trained model used for obtaining the initial features.The performance gap in classification accuracy between thoroughly tuned solutions and the full-network embedding is also reduced, which makes the proposed approach a competitive solution for a large set of applications.",We present a full-network embedding of CNN which outperforms single layer embeddings for transfer learning tasks. 
1413,Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers,"We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds.These thresholds can have fine-grained layer-wise adjustments dynamically via backpropagation.We demonstrate that our dynamic sparse training algorithm can easily train very sparse neural network models with little performance loss using the same training epochs as dense models.Dynamic Sparse Training achieves prior art performance compared with other sparse training algorithms on various network architectures.Additionally, we have several surprising observations that provide strong evidence for the effectiveness and efficiency of our algorithm.These observations reveal the underlying problems of traditional three-stage pruning algorithms and present the potential guidance provided by our algorithm for the design of more compact network architectures.",We present a novel network pruning method that can find the optimal sparse structure during the training process with trainable pruning thresholds 1414,Evaluating biological plausibility of learning algorithms the lazy way,To what extent can successful machine learning inform our understanding of biological learning?One popular avenue of inquiry in recent years has been to directly map such algorithms into a realistic circuit implementation.Here we focus on learning in recurrent networks and investigate a range of learning algorithms.Our approach decomposes them into their computational building blocks and discusses their abstract potential as biological operations.This alternative strategy provides a “lazy” but principled way of evaluating ML ideas in terms of their biological plausibility,We evaluate new ML learning algorithms' biological plausibility in the abstract based on mathematical operations needed 1415,Provable robustness against all adversarial $l_p$-perturbations for $p\geq 1$,"In recent years several adversarial attacks and defenses have been proposed.Often seemingly robust models turn out to be non-robust when more sophisticated attacks are used.One way out of this dilemma is provable robustness guarantees.While provably robust models for a specific $l_p$-perturbation model have been developed, we show that they do not come with any guarantee against other $l_q$-perturbations.We propose a new regularization scheme, MMR-Universal, for ReLU networks which enforces robustness wrt $l_1$- and $l_\infty$-perturbations and show how that leads to the first provably robust models wrt any $l_p$-norm for $p\geq 1$.",We introduce a method to train models with provable robustness wrt all $l_p$-norms for $p\geq 1$ simultaneously. 
1416,Dynamically Unfolding Recurrent Restorer: A Moving Endpoint Control Method for Image Restoration,"In this paper, we propose a new control framework called the moving endpoint control to restore images corrupted by different degradation levels in one model.The proposed control problem contains a restoration dynamics which is modeled by an RNN.The moving endpoint, which is essentially the terminal time of the associated dynamics, is determined by a policy network.We call the proposed model the dynamically unfolding recurrent restorer.Numerical experiments show that DURR is able to achieve state-of-the-art performances on blind image denoising and JPEG image deblocking.Furthermore, DURR generalizes well to images with higher degradation levels that are not included in the training stage.",We propose a novel method to handle image degradations of different levels by learning a diffusion terminal time. Our model can generalize to unseen degradation levels and different noise statistics. 1417,Learning Wasserstein Embeddings,"The Wasserstein distance has received a lot of attention recently in the machine learning community, especially for its principled way of comparing distributions.It has found numerous applications in several hard problems, such as domain adaptation, dimensionality reduction or generative models.However, its use is still limited by a heavy computational cost.Our goal is to alleviate this problem by providing an approximation mechanism that allows us to break its inherent complexity.It relies on the search for an embedding where the Euclidean distance mimics the Wasserstein distance.We show that such an embedding can be found with a siamese architecture associated with a decoder network that allows us to move from the embedding space back to the original input space.Once this embedding has been found, optimization problems in the Wasserstein space can be solved extremely fast.Numerical experiments supporting this idea are conducted on image datasets, and show the wide potential benefits of our method.",We show that it is possible to quickly approximate Wasserstein distance computation by finding an appropriate embedding where Euclidean distance emulates the Wasserstein distance 1418,CBOW Is Not All You Need: Combining CBOW with the Compositional Matrix Space Model,"Continuous Bag of Words is a powerful text embedding method.Due to its strong capabilities to encode word content, CBOW embeddings perform well on a wide range of downstream tasks while being efficient to compute.However, CBOW is not capable of capturing the word order.The reason is that the computation of CBOW's word embeddings is commutative, i.e., embeddings of XYZ and ZYX are the same.In order to address this shortcoming, we propose a learning algorithm for the Compositional Matrix Space Model, which we call Continual Multiplication of Words.Our algorithm is an adaptation of word2vec, so that it can be trained on large quantities of unlabeled text.We empirically show that CMOW better captures linguistic properties, but it is inferior to CBOW in memorizing word content.Motivated by these findings, we propose a hybrid model that combines the strengths of CBOW and CMOW.Our results show that the hybrid CBOW-CMOW-model retains CBOW's strong ability to memorize word content while at the same time substantially improving its ability to encode other linguistic information by 8%.As a result, the hybrid also performs better on 8 out of 11 supervised downstream tasks with an average improvement of 
1.2%.",We present a novel training scheme for efficiently obtaining order-aware sentence representations. 1419,Generative Models from the perspective of Continual Learning,"Which generative model is the most suitable for Continual Learning?This paper aims at evaluating and comparing generative models on disjoint sequential image generation tasks.We investigate how several models learn and forget, considering various strategies: rehearsal, regularization, generative replay and fine-tuning.We used two quantitative metrics to estimate the generation quality and memory ability.We experiment with sequential tasks on three commonly used benchmarks for Continual Learning.We found that among all models, the original GAN performs best and among Continual Learning strategies, generative replay outperforms all other methods.Even if we found satisfactory combinations on MNIST and Fashion MNIST, training generative models sequentially on CIFAR10 is particularly instable, and remains a challenge.",A comparative study of generative models on Continual Learning scenarios. 1420,Supervised Policy Update for Deep Reinforcement Learning,"We propose a new sample-efficient methodology, called Supervised Policy Update, for deep reinforcement learning.Starting with data generated by the current policy, SPU formulates and solves a constrained optimization problem in the non-parameterized proximal policy space.Using supervised regression, it then converts the optimal non-parameterized policy to a parameterized policy, from which it draws new samples.The methodology is general in that it applies to both discrete and continuous action spaces, and can handle a wide variety of proximity constraints for the non-parameterized optimization problem.We show how the Natural Policy Gradient and Trust Region Policy Optimization problems, and the Proximal Policy Optimization problem can be addressed by this methodology.The SPU implementation is much simpler than TRPO.In terms of sample efficiency, our extensive experiments show SPU outperforms TRPO in Mujoco simulated robotic tasks and outperforms PPO in Atari video game tasks.","first posing and solving the sample efficiency optimization problem in the non-parameterized policy space, and then solving a supervised regression problem to find a parameterized policy that is near the optimal non-parameterized policy." 
1421,How well do deep neural networks trained on object recognition characterize the mouse visual system?,"Recent work on modeling neural responses in the primate visual system has benefited from deep neural networks trained on large-scale object recognition, and found a hierarchical correspondence between layers of the artificial neural network and brain areas along the ventral visual stream.However, we neither know whether such task-optimized networks enable equally good models of the rodent visual system, nor if a similar hierarchical correspondence exists.Here, we address these questions in the mouse visual system by extracting features at several layers of a convolutional neural network trained on ImageNet to predict the responses of thousands of neurons in four visual areas to natural images.We found that the CNN features outperform classical subunit energy models, but found no evidence for an ordering of the areas we recorded via a correspondence to the hierarchy of CNN layers.Moreover, the same CNN but with random weights provided an equivalently useful feature space for predicting neural responses.Our results suggest that object recognition as a high-level task does not provide more discriminative features to characterize the mouse visual system than a random network.Unlike in the primate, training on ethologically relevant visually guided behaviors -- beyond static object recognition -- may be needed to unveil the functional organization of the mouse visual cortex.","A goal-driven approach to model four mouse visual areas (V1, LM, AL, RL) based on deep neural networks trained on static object recognition does not unveil a functional organization of visual cortex unlike in primates" 1422,Recurrent Hierarchical Topic-Guided Neural Language Models,"To simultaneously capture syntax and semantics from a text corpus, we propose a new larger-context language model that extracts recurrent hierarchical semantic structure via a dynamic deep topic model to guide natural language generation.Moving beyond a conventional language model that ignores long-range word dependencies and sentence order, the proposed model captures not only intra-sentence word dependencies, but also temporal transitions between sentences and inter-sentence topic dependencies.For inference, we develop a hybrid of stochastic-gradient MCMC and recurrent autoencoding variational Bayes.Experimental results on a variety of real-world text corpora demonstrate that the proposed model not only outperforms state-of-the-art larger-context language models, but also learns interpretable recurrent multilayer topics and generates diverse sentences and paragraphs that are syntactically correct and semantically coherent.","We introduce a novel larger-context language model to simultaneously capture syntax and semantics, making it capable of generating highly interpretable sentences and paragraphs" 1423,Training a Constrained Natural Media Painting Agent using Reinforcement Learning ,"We present a novel approach to train a natural media painting agent using reinforcement learning.Given a reference image, our formulation is based on stroke-based rendering that imitates human drawing and can be learned from scratch without supervision.Our painting agent computes a sequence of actions that represent the primitive painting strokes.In order to ensure that the generated policy is predictable and controllable, we use a constrained learning method and train the painting agent using the environment model to follow the commands encoded in an 
observation.We have applied our approach on many benchmarks and our results demonstrate that our constrained agent can handle different painting media and different constraints in the action space to collaborate with humans or other agents.","We train a natural media painting agent using environment model. Based on our painting agent, we present a novel approach to train a constrained painting agent that follows the command encoded in the observation." 1424,ConQUR: Mitigating Delusional Bias in Deep Q-Learning,"Delusional bias is a fundamental source of error in approximate Q-learning.To date, the only techniques that explicitly address delusion require comprehensive search using tabular value estimates.In this paper, we develop efficient methods to mitigate delusional bias by training Q-approximators with labels that are ""consistent"" with the underlying greedy policy class.We introduce a simple penalization scheme that encourages Q-labels used across training batches to remain consistent with the expressible policy class.We also propose a search framework that allows multiple Q-approximators to be generated and tracked, thus mitigating the effect of premature policy commitments.Experimental results demonstrate that these methods can improve the performance of Q-learning in a variety of Atari games, sometimes dramatically.",We developed a search framework and consistency penalty to mitigate delusional bias. 1425,APPLICATION OF DEEP CONVOLUTIONAL NEURAL NETWORK TO PREVENT ATM FRAUD BY FACIAL DISGUISE IDENTIFICATION,"The paper proposes and demonstrates a Deep Convolutional Neural Network architecture to identify users with disguised face attempting a fraudulent ATM transaction.The recent introduction of Disguised Face Identification framework proves the applicability of deep neural networks for this very problem.All the ATMs nowadays incorporate a hidden camera in them and capture the footage of their users.However, it is impossible for the police to track down the impersonators with disguised faces from the ATM footage.The proposed deep convolutional neural network is trained to identify, in real time, whether the user in the captured image is trying to cloak his identity or not.The output of the DCNN is then reported to the ATM to take appropriate steps and prevent the swindler from completing the transaction.The network is trained using a dataset of images captured in similar situations as of an ATM.The comparatively low background clutter in the images enables the network to demonstrate high accuracy in feature extraction and classification for all the different disguises.",Proposed System can prevent impersonators with facial disguises from completing a fraudulent transaction using a pre-trained DCNN. 1426,Learning Multi-Level Hierarchies with Hindsight,"Hierarchical agents have the potential to solve sequential decision making tasks with greater sample efficiency than their non-hierarchical counterparts because hierarchical agents can break down tasks into sets of subtasks that only require short sequences of decisions. In order to realize this potential of faster learning, hierarchical agents need to be able to learn their multiple levels of policies in parallel so these simpler subproblems can be solved simultaneously. 
Yet, learning multiple levels of policies in parallel is hard because it is inherently unstable: changes in a policy at one level of the hierarchy may cause changes in the transition and reward functions at higher levels in the hierarchy, making it difficult to jointly learn multiple levels of policies. In this paper, we introduce a new Hierarchical Reinforcement Learning framework, Hierarchical Actor-Critic, that can overcome the instability issues that arise when agents try to jointly learn multiple levels of policies. The main idea behind HAC is to train each level of the hierarchy independently of the lower levels by training each level as if the lower level policies are already optimal. We demonstrate experimentally in both grid world and simulated robotics domains that our approach can significantly accelerate learning relative to other non-hierarchical and hierarchical methods. Indeed, our framework is the first to successfully learn 3-level hierarchies in parallel in tasks with continuous state and action spaces.",We introduce the first Hierarchical RL approach to successfully learn 3-level hierarchies in parallel in tasks with continuous state and action spaces. 1427,"Classification from Positive, Unlabeled and Biased Negative Data","Positive-unlabeled learning addresses the problem of learning a binary classifier from positive and unlabeled data.It is often applied to situations where negative data are difficult to fully label.However, collecting a non-representative N set that contains only a small portion of all possible N data can be much easier in many practical situations.This paper studies a novel classification framework which incorporates such biased N data in PU learning.The fact that the training N data are biased also makes our work very different from those of standard semi-supervised learning.We provide an empirical risk minimization-based method to address this PUbN classification problem.Our approach can be regarded as a variant of traditional example-reweighting algorithms, with the weight of each example computed through a preliminary step that draws inspiration from PU learning.We also derive an estimation error bound for the proposed method.Experimental results demonstrate the effectiveness of our algorithm in not only PUbN learning scenarios but also ordinary PU learning scenarios on several benchmark datasets.","This paper studies the PUbN classification problem, where we incorporate biased negative (bN) data, i.e., negative data that is not fully representative of the true underlying negative distribution, into positive-unlabeled (PU) learning." 
1428,On the Weaknesses of Reinforcement Learning for Neural Machine Translation,"Reinforcement learning is frequently used to increase performance in text generation tasks, including machine translation, notably through the use of Minimum Risk Training and Generative Adversarial Networks.However, little is known about what and how these methods learn in the context of MT.We prove that one of the most common RL methods for MT does not optimize the expected reward, as well as show that other methods take an infeasibly long time to converge.In fact, our results suggest that RL practices in MT are likely to improve performance only where the pre-trained parameters are already close to yielding the correct translation.Our findings further suggest that observed gains may be due to effects unrelated to the training signal, concretely, changes in the shape of the distribution curve.",Reinforcement learning practices for machine translation: performance gains might not come from better predictions. 1429,Transformer to CNN: Label-scarce distillation for efficient text classification,"Significant advances have been made in Natural Language Processing modelling since the beginning of 2018.The new approaches allow for accurate results, even when there is little labelled data, because these NLP models can benefit from training on both task-agnostic and task-specific unlabelled data.However, these advantages come with significant size and computational costs.This workshop paper outlines how our proposed convolutional student architecture, having been trained by a distillation process from a large-scale model, can achieve 300x inference speedup and 39x reduction in parameter count.In some cases, the student model performance surpasses its teacher on the studied tasks.","We train a small, efficient CNN with the same performance as the OpenAI Transformer on text classification tasks" 1430,Temporal Difference Weighted Ensemble For Reinforcement Learning,"Combining multiple function approximators in machine learning models typically leads to better performance and robustness compared with a single function.In reinforcement learning, ensemble algorithms such as an averaging method and a majority voting method are not always optimal, because each function can learn fundamentally different optimal trajectories from exploration.In this paper, we propose a Temporal Difference Weighted algorithm, an ensemble method that adjusts weights of each contribution based on accumulated temporal difference errors.The advantage of this algorithm is that it improves ensemble performance by reducing weights of Q-functions unfamiliar with current trajectories.We provide experimental results for Gridworld tasks and Atari tasks that show significant performance improvements compared with baseline algorithms.",Ensemble method for reinforcement learning that weights Q-functions based on accumulated TD errors. 
1431,Behaviour Suite for Reinforcement Learning,"This paper introduces the Behaviour Suite for Reinforcement Learning, or bsuite for short.bsuite is a collection of carefully-designed experiments that investigate core capabilities of reinforcement learning agents with two objectives.First, to collect clear, informative and scalable problems that capture key issues in the design of general and efficient learning algorithms.Second, to study agent behaviour through their performance on these shared benchmarks.To complement this effort, we open source this http URL, which automates evaluation and analysis of any agent on bsuite.This library facilitates reproducible and accessible research on the core issues in RL, and ultimately the design of superior learning algorithms.Our code is Python, and easy to use within existing projects.We include examples with OpenAI Baselines, Dopamine as well as new reference implementations.Going forward, we hope to incorporate more excellent experiments from the research community, and commit to a periodic review of bsuite from a committee of prominent researchers.",Bsuite is a collection of carefully-designed experiments that investigate the core capabilities of RL agents. 1432,The Adaptive Stress Testing Formulation,"Validation is a key challenge in the search for safe autonomy.Simulations are often either too simple to provide robust validation, or too complex to tractably compute.Therefore, approximate validation methods are needed to tractably find failures without unsafe simplifications.This paper presents the theory behind one such black-box approach: adaptive stress testing.We also provide three examples of validation problems formulated to work with AST.","A formulation for a black-box, reinforcement learning method to find the most-likely failure of a system acting in complex scenarios." 1433,Multi-step Greedy Policies in Model-Free Deep Reinforcement Learning,"Multi-step greedy policies have been extensively used in model-based Reinforcement Learning and in the case when a model of the environment is available.In this work, we explore the benefits of multi-step greedy policies in model-free RL when employed in the framework of multi-step Dynamic Programming: multi-step Policy and Value Iteration.These algorithms iteratively solve short-horizon decision problems and converge to the optimal solution of the original one.By using model-free algorithms as solvers of the short-horizon problems we derive fully model-free algorithms which are instances of the multi-step DP framework.As model-free algorithms are prone to instabilities w.r.t. the decision problem horizon, this simple approach can help in mitigating these instabilities and results in an improved model-free algorithms.We test this approach and show results on both discrete and continuous control problems.",Use model free algorithms like DQN/TRPO to solve short horizon problems (model free) iteratively in a Policy/Value Iteration fashion. 1434,The Limitations of Adversarial Training and the Blind-Spot Attack,"The adversarial training procedure proposed by Madry et al. 
is one of the most effective methods to defend against adversarial examples in deep neural networks.In our paper, we shed some light on the practicality and the hardness of adversarial training by showing that the effectiveness of adversarial training has a strong correlation with the distance between a test point and the manifold of training data embedded by the network.Test examples that are relatively far away from this manifold are more likely to be vulnerable to adversarial attacks.Consequently, an adversarial training based defense is susceptible to a new class of attacks, the “blind-spot attack”, where the input images reside in “blind-spots” of the empirical distribution of training data but are still on the ground-truth data manifold.For MNIST, we found that these blind-spots can be easily found by simply scaling and shifting image pixel values.Most importantly, for large datasets with high dimensional and complex data manifold, the existence of blind-spots in adversarial training makes defending on any valid test examples difficult due to the curse of dimensionality and the scarcity of training data.Additionally, we find that blind-spots also exist on provable defenses including and because these trainable robustness certificates can only be practically optimized on a limited set of training data.",We show that even the strongest adversarial training methods cannot defend against adversarial examples crafted on slightly scaled and shifted test images. 1435,Random Bias Initialization Improving Binary Neural Network Training,"Edge intelligence, especially the binary neural network (BNN), has attracted considerable attention from the artificial intelligence community recently.BNNs significantly reduce the computational cost, model size, and memory footprint. However, there is still a performance gap between the successful full-precision neural network with ReLU activation and BNNs.We argue that the accuracy drop of BNNs is due to their geometry.We analyze the behaviour of the full-precision neural network with ReLU activation and compare it with its binarized counterpart.This comparison suggests random bias initialization as a remedy to activation saturation in full-precision networks and leads us towards an improved BNN training.Our numerical experiments confirm our geometric intuition.","Improve saturating activations (sigmoid, tanh, htanh etc.) 
and Binarized Neural Network with Bias Initialization" 1436,Capturing Human Category Representations by Sampling in Deep Feature Spaces,"Understanding how people represent categories is a core problem in cognitive science, with the flexibility of human learning remaining a gold standard to which modern artificial intelligence and machine learning aspire.Decades of psychological research have yielded a variety of formal theories of categories, yet validating these theories with naturalistic stimuli remains a challenge.The problem is that human category representations cannot be directly observed and running informative experiments with naturalistic stimuli such as images requires having a workable representation of these stimuli.Deep neural networks have recently been successful in a range of computer vision tasks and provide a way to represent the features of images.In this paper, we introduce a method for estimating the structure of human categories that draws on ideas from both cognitive science and machine learning, blending human-based algorithms with state-of-the-art deep representation learners.We provide qualitative and quantitative results as a proof of concept for the feasibility of the method.Samples drawn from human distributions rival the quality of current state-of-the-art generative models and outperform alternative methods for estimating the structure of human categories.",using deep neural networks and clever algorithms to capture human mental visual concepts 1437,Targeted Adversarial Examples for Black Box Audio Systems,"The application of deep recurrent networks to audio transcription has led to impressive gains in automatic speech recognition systems.Many have demonstrated that small adversarial perturbations can fool deep neural networks into incorrectly predicting a specified target with high confidence.Current work on fooling ASR systems have focused on white-box attacks, in which the model architecture and parameters are known.In this paper, we adopt a black-box approach to adversarial generation, combining the approaches of both genetic algorithms and gradient estimation to solve the task.We achieve a 89.25% targeted attack similarity after 3000 generations while maintaining 94.6% audio file similarity.",We present a novel black-box targeted attack that is able to fool state of the art speech to text transcription. 
1438,Learning to Sit: Synthesizing Human-Chair Interactions via Hierarchical Control,"Recent progress on physics-based character animation has shown impressive breakthroughs on human motion synthesis, through imitating motion capture data via deep reinforcement learning.However, results have mostly been demonstrated on imitating a single distinct motion pattern, and do not generalize to interactive tasks that require flexible motion patterns due to varying human-object spatial configurations.To bridge this gap, we focus on one class of interactive tasks---sitting onto a chair.We propose a hierarchical reinforcement learning framework which relies on a collection of subtask controllers trained to imitate simple, reusable mocap motions, and a meta controller trained to execute the subtasks properly to complete the main task.We experimentally demonstrate the strength of our approach over different single level and hierarchical baselines.We also show that our approach can be applied to motion prediction given an image input.A video highlight can be found at https://youtu.be/XWU3wzz1ip8/.",Synthesizing human motions on interactive tasks using mocap data and hierarchical RL. 1439,On Convergence and Stability of GANs,"We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions.We analyze the convergence of GAN training from this new point of view to understand why mode collapse happens.We hypothesize the existence of undesirable local equilibria in this non-convex game to be responsible for mode collapse.We observe that these local equilibria often exhibit sharp gradients of the discriminator function around some real data points.We demonstrate that these degenerate local equilibria can be avoided with a gradient penalty scheme called DRAGAN.We show that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions.",Analysis of convergence and mode collapse by studying GAN training process as regret minimization 1440,Interpretability Evaluation Framework for Deep Neural Networks,"Deep neural networks have attained surprising achievements during the last decade due to the advantages of automatic feature learning and freedom of expressiveness.However, their interpretability remains mysterious because DNNs are complex combinations of linear and nonlinear transformations.Even though many models have been proposed to explore the interpretability of DNNs, several challenges remain unsolved: 1) the lack of interpretability quantity measures for DNNs, 2) the lack of theory for stability of DNNs, and 3) the difficulty of solving nonconvex DNN problems with interpretability constraints.To address these challenges simultaneously, this paper presents a novel intrinsic interpretability evaluation framework for DNNs.Specifically, four independent properties of interpretability are defined based on existing works.Moreover, we investigate the theory for the stability of DNNs, which is an important aspect of interpretability, and prove that DNNs are generally stable given different activation functions.Finally, an extended version of the deep learning Alternating Direction Method of Multipliers is proposed to solve DNN problems with interpretability constraints efficiently and accurately.Extensive experiments on several benchmark 
datasets validate several DNNs by our proposed interpretability framework.",We propose a novel framework to evaluate the interpretability of neural networks. 1441,Hyperbolic Discounting and Learning Over Multiple Horizons,"Reinforcement learning typically defines a discount factor as part of the Markov Decision Process. The discount factor values future rewards by an exponential scheme that leads to theoretical convergence guarantees of the Bellman equation.However, evidence from psychology, economics and neuroscience suggests that humans and animals instead have hyperbolic time-preferences. Here we extend earlier work of Kurth-Nelson and Redish and propose an efficient deep reinforcement learning agent that acts via hyperbolic discounting and other non-exponential discount mechanisms.We demonstrate that a simple approach approximates hyperbolic discount functions while still using familiar temporal-difference learning techniques in RL. Additionally, and independent of hyperbolic discounting, we make a surprising discovery that simultaneously learning value functions over multiple time-horizons is an effective auxiliary task which often improves over state-of-the-art methods.",A deep RL agent that learns hyperbolic (and other non-exponential) Q-values and a new multi-horizon auxiliary task. 1442,EEG based Emotion Recognition of Image Stimuli ,"Emotion plays a great role in our daily lives.The necessity and importance of automatic emotion recognition systems are increasing.Traditional approaches to emotion recognition are based on facial images, measurements of heart rates, blood pressure, temperatures, tones of voice/speech, etc.However, these features can potentially be changed to fake features.Hidden and real features that are not controlled by the person can instead be detected from data measured from brain signals.There are various ways of measuring brain waves: EEG, MEG, FMRI, etc.On the basis of cost effectiveness and performance trade-offs, EEG is chosen for emotion recognition in this work.The main aim of this study is to detect emotion based on analysis of EEG signals recorded from the brain in response to visual stimuli.In our approach, selected visual stimuli were presented to 11 healthy target subjects and EEG signals were recorded in a controlled situation to minimize artefacts. The signals were filtered and the type of frequency band was computed and detected.The proposed method predicts an emotion type in response to the presented stimuli.Finally, the performance of the proposed approach was tested.The average accuracies of the machine learning algorithms are 78.86, 74.76, 77.82 and 82.46, respectively. In this study, we also explored EEG applications in the context of neuro-marketing.The results empirically demonstrated detection of the favourite colour preference of customers in response to the logo colour of an organization or service.",This paper presents EEG based emotion detection of a person towards image stimuli and its applicability to neuromarketing. 1443,Latent Variable Session-Based Recommendation," We present a probabilistic framework for session based recommendation. A latent variable for the user state is updated as the user views more items and we learn more about their interests. We provide computational solutions using both the re-parameterization trick and the Bouchard bound for the softmax function; we further explore employing a variational auto-encoder and a variational Expectation-Maximization algorithm for tightening the variational bound. 
Finally we show that the Bouchard bound causes the denominator of the softmax to decompose into a sum enabling fast noisy gradients of the bound giving a fully probabilistic algorithm reminiscent of word2vec and a fast online EM algorithm.",Fast variational approximations for approximating a user state and learning product embeddings 1444,An Information-Theoretic Metric of Transferability for Task Transfer Learning,"An important question in task transfer learning is to determine task transferability, i.e. given a common input domain, estimating to what extent representations learned from a source task can help in learning a target task.Typically, transferability is either measured experimentally or inferred through task relatedness, which is often defined without a clear operational meaning.In this paper, we present a novel metric, H-score, an easily-computable evaluation function that estimates the performance of transferred representations from one task to another in classification problems.Inspired by a principled information theoretic approach, H-score has a direct connection to the asymptotic error probability of the decision function based on the transferred feature.This formulation of transferability can further be used to select a suitable set of source tasks in task transfer learning problems or to devise efficient transfer learning policies.Experiments using both synthetic and real image data show that not only our formulation of transferability is meaningful in practice, but also it can generalize to inference problems beyond classification, such as recognition tasks for 3D indoor-scene understanding.",We present a provable and easily-computable evaluation function that estimates the performance of transferred representations from one learning task to another in task transfer learning. 1445,PROTOTYPE-ASSISTED ADVERSARIAL LEARNING FOR UNSUPERVISED DOMAIN ADAPTATION,"This paper presents a generic framework to tackle the crucial class mismatch problem in unsupervised domain adaptation for multi-class distributions. Previous adversarial learning methods condition domain alignment only on pseudo labels, but noisy and inaccurate pseudo labels may perturb the multi-class distribution embedded in probabilistic predictions, hence bringing insufficient alleviation to the latent mismatch problem. Compared with pseudo labels, class prototypes are more accurate and reliable since they summarize over all the instances and are able to represent the inherent semantic distribution shared across domains.Therefore, we propose a novel Prototype-Assisted Adversarial Learning scheme, which incorporates instance probabilistic predictions and class prototypes together to provide reliable indicators for adversarial domain alignment. With the PAAL scheme, we align both the instance feature representations and class prototype representations to alleviate the mismatch among semantically different classes. Also, we exploit the class prototypes as proxy to minimize the within-class variance in the target domain to mitigate the mismatch among semantically similar classes. 
With these novelties, we constitute a Prototype-Assisted Conditional Domain Adaptation framework which tackles the class mismatch problem well.We demonstrate the good performance and generalization ability of the PAAL scheme and also the PACDA framework on two UDA tasks, i.e., object recognition and synthetic-to-real semantic segmentation.","We propose a reliable conditional adversarial learning scheme along with a simple, generic yet effective framework for UDA tasks." 1446,D2KE: From Distance to Kernel and Embedding via Random Features For Structured Inputs,"We present a new methodology that constructs a family of kernels from any given dissimilarity measure on structured inputs whose elements are either real-valued time series or discrete structures such as strings, histograms, and graphs.Our approach, which we call D2KE, draws from the literature on Random Features.However, instead of deriving random feature maps from a user-defined kernel to approximate kernel machines, we build a kernel from a random feature map that we specify given the distance measure.We further propose the use of a finite number of random objects to produce a random feature embedding of each instance.We provide a theoretical analysis showing that D2KE enjoys better generalizability than universal Nearest-Neighbor estimates.On one hand, D2KE subsumes the widely-used as a special case, and relates to the well-known in a limiting case.On the other hand, D2KE generalizes existing applicable only to vector input representations to complex structured inputs of variable sizes.We conduct classification experiments over such disparate domains as time series, strings, and histograms, for which our proposed framework compares favorably to existing distance-based learning methods in terms of both testing accuracy and computational time.",From Distance to Kernel and Embedding via Random Features For Structured Inputs 1447,Towards Quantum Inspired Convolution Networks,"Deep Convolution Neural Networks, rooted in the pioneering work of , and summarized in , have been shown to be very useful in a variety of fields. The state-of-the-art CNN machines such as ResNet for images are described by real-valued inputs and kernel convolutions followed by the local and non-linear rectified linear outputs. Understanding the role of these layers, the accuracy and limitations of them, as well as making them more efficient are all ongoing research questions.Inspired by quantum theory, we propose the use of complex-valued kernel functions, followed by the local non-linear absolute operator square.We argue that an advantage of quantum inspired complex kernels is robustness to realistic unpredictable scenarios.We study a concrete problem of shape detection and show that when multiple overlapping shapes are deformed and/or clutter noise is added, a convolution layer with quantum inspired complex kernels outperforms the statistical/classical kernel counterpart and a ""Bayesian shape estimator"".The superior performance is due to the quantum phenomena of interference, not present in classical CNNs. ","A quantum inspired kernel for convolution networks, exhibiting interference phenomena, can be very useful (and is compared with its real-valued counterpart)." 
1448,Neural MMO: A massively multiplayer game environment for intelligent agents,"We present an artificial intelligence research platform inspired by the human game genre of MMORPGs.We demonstrate how this platform can be used to study behavior and learning in large populations of neural agents.Unlike currently popular game environments, our platform supports persistent environments, with variable number of agents, and open-ended task descriptions.The emergence of complex life on Earth is often attributed to the arms race that ensued from a huge number of organisms all competing for finite resources.Our platform aims to simulate this setting in microcosm: we conduct a series of experiments to test how large-scale multiagent competition can incentivize the development of skillful behavior.We find that population size magnifies the complexity of the behaviors that emerge and results in agents that out-compete agents trained in smaller populations.",An MMO-inspired research game platform for studying emergent behaviors of large populations in a complex environment 1449,Improving Sample-based Evaluation for Generative Adversarial Networks,"In this paper, we propose an improved quantitative evaluation framework for Generative Adversarial Networks on generating domain-specific images, where we improve conventional evaluation methods on two levels: the feature representation and the evaluation metric.Unlike most existing evaluation frameworks which transfer the representation of ImageNet inception model to map images onto the feature space, our framework uses a specialized encoder to acquire fine-grained domain-specific representation.Moreover, for datasets with multiple classes, we propose Class-Aware Frechet Distance, which employs a Gaussian mixture model on the feature space to better fit the multi-manifold feature distribution.Experiments and analysis on both the feature level and the image level were conducted to demonstrate improvements of our proposed framework over the recently proposed state-of-the-art FID method.To our best knowledge, we are the first to provide counter examples where FID gives inconsistent results with human judgments.It is shown in the experiments that our framework is able to overcome the shortness of FID and improves robustness.Code will be made available.",This paper improves existing sample-based evaluation for GANs and contains some insightful experiments. 
1450,ResBinNet: Residual Binary Neural Network,"Recent efforts on training light-weight binary neural networks offer promising execution/memory efficiency.This paper introduces ResBinNet, which is a composition of two interlinked methodologies aiming to address the slow convergence speed and limited accuracy of binary convolutional neural networks.The first method, called residual binarization, learns a multi-level binary representation for the features within a certain neural network layer.The second method, called temperature adjustment, gradually binarizes the weights of a particular layer.The two methods jointly learn a set of soft-binarized parameters that improve the convergence rate and accuracy of binary neural networks.We corroborate the applicability and scalability of ResBinNet by implementing a prototype hardware accelerator.The accelerator is reconfigurable in terms of the numerical precision of the binarized features, offering a trade-off between runtime and inference accuracy.",Residual Binary Neural Networks significantly improve the convergence rate and inference accuracy of the binary neural networks. 1451,Anomalous Pattern Detection in Activations and Reconstruction Error of Autoencoders,"In real-world machine learning applications, large outliers and pervasive noise are commonplace, and access to clean training data as required by standard deep autoencoders is unlikely.Reliably detecting anomalies in a given set of images is a task of high practical relevance for visual quality inspection, surveillance, or medical image analysis.Autoencoder neural networks learn to reconstruct normal images, and hence can classify those images as anomalous if the reconstruction error exceeds some threshold.In this paper, we proposed an unsupervised method based on subset scanning over autoencoder activations.The contributions of our work are threefold.First, we propose a novel method combining detection with reconstruction error and subset scanning scores to improve the anomaly score of current autoencoders without requiring any retraining.Second, we provide the ability to inspect and visualize the set of anomalous nodes in the reconstruction error space that make a sample noised.Third, we show that subset scanning can be used for anomaly detection in the inner layers of the autoencoder.We provide detection power results for several untargeted adversarial noise models under standard datasets.",Unsupervised method to detect adversarial samples in autoencoder's activations and reconstruction error space 1452,Pretrain-KGEs: Learning Knowledge Representation from Pretrained Models for Knowledge Graph Embeddings,"Learning knowledge graph embeddings is an efficient approach to knowledge graph completion.Conventional KGEs often suffer from limited knowledge representation, which causes less accuracy especially when training on sparse knowledge graphs.To remedy this, we present Pretrain-KGEs, a training framework for learning better knowledgeable entity and relation embeddings, leveraging the abundant linguistic knowledge from pretrained language models.Specifically, we propose a unified approach in which we first learn entity and relation representations via pretrained language models and use the representations to initialize entity and relation embeddings for training KGE models.Our proposed method is model agnostic in the sense that it can be applied to any variant of KGE models.Experimental results show that our method can consistently improve results and achieve state-of-the-art 
performance using different KGE models such as TransE and QuatE, across four benchmark KG datasets in link prediction and triplet classification tasks.",We propose to learn knowledgeable entity and relation representations from Bert for knowledge graph embeddings. 1453,Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base,"We describe a novel way of representing a symbolic knowledge base called a sparse-matrix reified KB. This representation enables neural modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs.The sparse-matrix reified KB can be distributed across multiple GPUs, can scale to tens of millions of entities and facts, and is orders of magnitude faster than naive sparse-matrix implementations. The reified KB enables very simple end-to-end architectures to obtain competitive performance on several benchmarks representing two families of tasks: KB completion, and learning semantic parsers from denotations.",A scalable differentiable neural module that implements reasoning on symbolic KBs. 1454,The Differentiable Cross-Entropy Method,"We study the Cross-Entropy Method for the non-convex optimization of a continuous and parameterized objective function and introduce a differentiable variant that enables us to differentiate the output of CEM with respect to the objective function's parameters. In the machine learning setting this brings CEM inside of the end-to-end learning pipeline in cases where this has otherwise been impossible.We show applications in a synthetic energy-based structured prediction task and in non-convex continuous control.In the control setting we show on the simulated cheetah and walker tasks that we can embed their optimal action sequences with DCEM and then use policy optimization to fine-tune components of the controller as a step towards combining model-based and model-free RL.",DCEM learns latent domains for optimization problems and helps bridge the gap between model-based and model-free RL --- we create a differentiable controller and fine-tune parts of it with PPO 1455,A General Upper Bound for Unsupervised Domain Adaptation,"In this work, we present a novel upper bound on the target error to address the problem of unsupervised domain adaptation.Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks.Furthermore, Ben-David et al.
provide an upper bound for target error when transferring the knowledge, which can be summarized as minimizing the source error and distance between marginal distributions simultaneously.However, common methods based on the theory usually ignore the joint error such that samples from different classes might be mixed together when matching marginal distribution.And in such case, no matter how we minimize the marginal discrepancy, the target error is not bounded due to an increasing joint error.To address this problem, we propose a general upper bound taking joint error into account, such that the undesirable case can be properly penalized.In addition, we utilize constrained hypothesis space to further formalize a tighter bound as well as a novel cross margin discrepancy to measure the dissimilarity between hypotheses which alleviates instability during adversarial learning.Extensive empirical evidence shows that our proposal outperforms related approaches in image classification error rates on standard domain adaptation benchmarks.",joint error matters for unsupervised domain adaptation especially when the domain shift is huge 1456,Knowledge Flow: Improve Upon Your Teachers,"A zoo of deep nets is available these days for almost any given task, and it is increasingly unclear which net to start with when addressing a new task, or which net to use as an initialization for fine-tuning a new model.To address this issue, in this paper, we develop knowledge flow which moves ‘knowledge’ from multiple deep nets, referred to as teachers, to a new deep net model, called the student.The structure of the teachers and the student can differ arbitrarily and they can be trained on entirely different tasks with different output spaces too.Upon training with knowledge flow the student is independent of the teachers.We demonstrate our approach on a variety of supervised and reinforcement learning tasks, outperforming fine-tuning and other ‘knowledge exchange’ methods.",‘Knowledge Flow’ trains a deep net (student) by injecting information from multiple nets (teachers). The student is independent upon training and performs very well on learned tasks irrespective of the setting (reinforcement or supervised learning). 
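For reference, the marginal-distribution bound of Ben-David et al. that entry 1455 starts from has the standard form below; the paper's contribution is to control the joint-error term explicitly rather than treating it as a fixed constant, which is not reproduced here.

    \epsilon_T(h) \;\le\; \epsilon_S(h) \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S,\mathcal{D}_T) \;+\; \lambda,
    \qquad
    \lambda \;=\; \min_{h' \in \mathcal{H}} \big[\epsilon_S(h') + \epsilon_T(h')\big]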
1457,Analytical Moment Regularizer for Training Robust Networks,"Despite the impressive performance of deep neural networks on numerous learning tasks, they still exhibit uncouth behaviours.One puzzling behaviour is the subtle sensitive reaction of DNNs to various noise attacks.Such a nuisance has strengthened the line of research around developing and training noise-robust networks.In this work, we propose a new training regularizer that aims to minimize the probabilistic expected training loss of a DNN subject to a generic Gaussian input.We provide an efficient and simple approach to approximate such a regularizer for arbitrarily deep networks.This is done by leveraging the analytic expression of the output mean of a shallow neural network, avoiding the need for memory and computation expensive data augmentation.We conduct extensive experiments on LeNet and AlexNet on various datasets including MNIST, CIFAR10, and CIFAR100 to demonstrate the effectiveness of our proposed regularizer.In particular, we show that networks that are trained with the proposed regularizer benefit from a boost in robustness against Gaussian noise to an equivalent amount of performing 3-21 folds of noisy data augmentation.Moreover, we empirically show on several architectures and datasets that improving robustness against Gaussian noise, by using the new regularizer, can improve the overall robustness against 6 other types of attacks by two orders of magnitude.",An efficient estimate to the Gaussian first moment of DNNs as a regularizer to training robust networks. 1458,Multi-Mention Learning for Reading Comprehension with Neural Cascades,"Reading comprehension is a challenging task, especially when executed across longer or across multiple evidence documents, where the answer is likely to reoccur.Existing neural architectures typically do not scale to the entire evidence, and hence, resort to selecting a single passage in the document, and carefully searching for the answer within that passage.However, in some cases, this strategy can be suboptimal, since by focusing on a specific passage, it becomes difficult to leverage multiple mentions of the same answer throughout the document.In this work, we take a different approach by constructing lightweight models that are combined in a cascade to find the answer.Each submodel consists only of feed-forward networks equipped with an attention mechanism, making it trivially parallelizable.We show that our approach can scale to approximately an order of magnitude larger evidence documents and can aggregate information from multiple mentions of each answer candidate across the document.Empirically, our approach achieves state-of-the-art performance on both the Wikipedia and web domains of the TriviaQA dataset, outperforming more complex, recurrent architectures.","We propose neural cascades, a simple and trivially parallelizable approach to reading comprehension, consisting only of feed-forward nets and attention that achieves state-of-the-art performance on the TriviaQA dataset." 
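Entry 1457 relies on closed-form output moments of a shallow network under Gaussian input; a minimal sketch for a single ReLU unit is given below (the paper's full multi-layer approximation and regularizer are not reproduced, and the function name is illustrative).

    import numpy as np
    from scipy.stats import norm

    def relu_gaussian_mean(w, b, mu, sigma):
        """Closed-form E[ReLU(w.x + b)] when x ~ N(mu, sigma^2 I).

        The pre-activation z = w.x + b is Gaussian with mean m and std s, and
        E[max(z, 0)] = m * Phi(m/s) + s * phi(m/s)."""
        m = float(np.dot(w, mu) + b)
        s = float(sigma * np.linalg.norm(w))
        return m * norm.cdf(m / s) + s * norm.pdf(m / s)

Such closed forms are what let the regularizer avoid explicit Gaussian data augmentation.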
1459,Atomic Compression Networks,"Compressed forms of deep neural networks are essential in deploying large-scalecomputational models on resource-constrained devices.Contrary to analogousdomains where large-scale systems are build as a hierarchical repetition of small-scale units, the current practice in Machine Learning largely relies on models withnon-repetitive components.In the spirit of molecular composition with repeatingatoms, we advance the state-of-the-art in model compression by proposing AtomicCompression Networks, a novel architecture that is constructed by recursiverepetition of a small set of neurons.In other words, the same neurons with thesame weights are stochastically re-positioned in subsequent layers of the network.Empirical evidence suggests that ACNs achieve compression rates of up to threeorders of magnitudes compared to fine-tuned fully-connected neural networks with only a fractional deterioration of classification accuracy.Moreover our method can yield sub-linear model complexitiesand permits learning deep ACNs with less parameters than a logistic regressionwith no decline in classification accuracy.","We advance the state-of-the-art in model compression by proposing Atomic Compression Networks (ACNs), a novel architecture that is constructed by recursive repetition of a small set of neurons." 1460,VideoFlow: A Conditional Flow-Based Model for Stochastic Video Generation,"Generative models that can model and predict sequences of future events can, in principle, learn to capture complex real-world phenomena, such as physical interactions.However, a central challenge in video prediction is that the future is highly uncertain: a sequence of past observations of events can imply many possible futures.Although a number of recent works have studied probabilistic models that can represent uncertain futures, such models are either extremely expensive computationally as in the case of pixel-level autoregressive models, or do not directly optimize the likelihood of the data.To our knowledge, our work is the first to propose multi-frame video prediction with normalizing flows, which allows for direct optimization of the data likelihood, and produces high-quality stochastic predictions.We describe an approach for modeling the latent space dynamics, and demonstrate that flow-based generative models offer a viable and competitive approach to generative modeling of video.",We demonstrate that flow-based generative models offer a viable and competitive approach to generative modeling of video. 
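Entry 1459's Atomic Compression Networks recursively re-use the same small set of neurons across layers; the toy PyTorch sketch below illustrates only the underlying weight-tying idea, with the paper's stochastic re-positioning of neurons omitted.

    import torch
    import torch.nn as nn

    class TiedBlockNet(nn.Module):
        """Toy illustration of parameter reuse: one small block of neurons is
        applied repeatedly, so depth grows without adding new weights."""
        def __init__(self, dim, depth, num_classes=10):
            super().__init__()
            self.shared = nn.Linear(dim, dim)   # the single "atom" that is repeated
            self.depth = depth
            self.head = nn.Linear(dim, num_classes)

        def forward(self, x):
            for _ in range(self.depth):
                x = torch.relu(self.shared(x))  # same weights reused in every layer
            return self.head(x)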
1461,The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Minima and Regularization Effects,"Understanding the behavior of stochastic gradient descent in the context of deep neural networks has raised lots of concerns recently.Along this line, we theoretically study a general form of gradient based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics.Through investigating this general optimization dynamics, we analyze the behavior of SGD on escaping from minima and its regularization effects.A novel indicator is derived to characterize the efficiency of escaping from minima through measuring the alignment of noise covariance and the curvature of loss function.Based on this indicator, two conditions are established to show which type of noise structure is superior to isotropic noise in term of escaping efficiency.We further show that the anisotropic noise in SGD satisfies the two conditions, and thus helps to escape from sharp and poor minima effectively, towards more stable and flat minima that typically generalize well.We verify our understanding through comparingthis anisotropic diffusion with full gradient descent plus isotropic diffusion and other types of position-dependent noise.",We provide theoretical and empirical analysis on the role of anisotropic noise introduced by stochastic gradient on escaping from minima. 1462,Model-Augmented Actor-Critic: Backpropagating through Paths,"Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator to augment the data for policy optimization or value function learning.In this paper, we show how to make more effective use of the model by exploiting its differentiability.We construct a policy optimization algorithm that uses the pathwise derivative of the learned model and policy across future timesteps.Instabilities of learning across many timesteps are prevented by using a terminal value function, learning the policy in an actor-critic fashion.Furthermore, we present a derivation on the monotonic improvement of our objective in terms of the gradient error in the model and value function.We show that our approach is consistently more sample efficient than existing state-of-the-art model-based algorithms, matches the asymptotic performance of model-free algorithms, and scales to long horizons, a regime where typically past model-based approaches have struggled.",Policy gradient through backpropagation through time using learned models and Q-functions. SOTA results in reinforcement learning benchmark environments. 
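Entry 1461 characterizes escaping efficiency by how well the gradient-noise covariance aligns with the loss curvature; the snippet below is a rough numerical stand-in for such an alignment indicator (the precise definition used in the paper may differ, and the normalization is an assumption).

    import numpy as np

    def escape_alignment(hessian, noise_cov):
        """Scalar alignment between loss curvature H and gradient-noise covariance Sigma.

        Compares Tr(H @ Sigma) against the value obtained for isotropic noise of the
        same total magnitude; values above 1 indicate noise concentrated along
        high-curvature directions."""
        aniso = np.trace(hessian @ noise_cov)
        iso = np.trace(hessian) * np.trace(noise_cov) / noise_cov.shape[0]
        return aniso / iso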
1463,Unsupervised Meta-Learning for Reinforcement Learning,"Meta-learning algorithms learn to acquire new tasks more quickly from past experience.In the context of reinforcement learning, meta-learning algorithms can acquire reinforcement learning procedures to solve new problems more efficiently by utilizing experience from prior tasks.The performance of meta-learning algorithms depends on the tasks available for meta-training: in the same way that supervised learning generalizes best to test points drawn from the same distribution as the training points, meta-learning methods generalize best to tasks from the same distribution as the meta-training tasks.In effect, meta-reinforcement learning offloads the design burden from algorithm design to task design.If we can automate the process of task design as well, we can devise a meta-learning algorithm that is truly automated.In this work, we take a step in this direction, proposing a family of unsupervised meta-learning algorithms for reinforcement learning.We motivate and describe a general recipe for unsupervised meta-reinforcement learning, and present an instantiation of this approach.Our conceptual and theoretical contributions consist of formulating the unsupervised meta-reinforcement learning problem and describing how task proposals based on mutual information can in principle be used to train optimal meta-learners.Our experimental results indicate that unsupervised meta-reinforcement learning effectively acquires accelerated reinforcement learning procedures without the need for manual task design and significantly exceeds the performance of learning from scratch.",Meta-learning on self-proposed task distributions to speed up reinforcement learning without human specified task distributions 1464,K for the Price of 1: Parameter-efficient Multi-task and Transfer Learning,"We introduce a novel method that enables parameter-efficient transfer and multi-task learning with deep neural networks.The basic approach is to learn a model patch - a small set of parameters - that will specialize to each task, instead of fine-tuning the last layer or the entire network.For instance, we show that learning a set of scales and biases is sufficient to convert a pretrained network to perform well on qualitatively different problems model into a 1000-class image classification model while reusing 98% of parameters of the SSD feature extractor).Similarly, we show that re-learning existing low-parameter layers while keeping the rest of the network frozen also improves transfer-learning accuracy significantly.Our approach allows both simultaneous as well as sequential transfer learning.In several multi-task learning problems, despite using much fewer parameters than traditional logits-only fine-tuning, we match single-task performance.","A novel and practically effective method to adapt pretrained neural networks to new tasks by retraining a minimal (e.g., less than 2%) number of parameters" 1465,One Bit Matters: Understanding Adversarial Examples as the Abuse of Redundancy,"Adversarial examples have somewhat disrupted the enormous success of machine learning and are causing concern with regards to its trustworthiness: A small perturbation of an input results in an arbitrary failure of an otherwise seemingly well-trained ML system.While studies are being conducted to discover the intrinsic properties of adversarial examples, such as their transferability and universality, there is insufficient theoretic analysis to help understand the phenomenon in a way 
that can influence the design process of ML experiments.In this paper, we deduce an information-theoretic model which explains adversarial attacks universally as the abuse of feature redundancies in ML algorithms.We prove that feature redundancy is a necessary condition for the existence of adversarial examples.Our model helps to explain the major questions raised in many anecdotal studies on adversarial examples.Our theory is backed up by empirical measurements of the information content of benign and adversarial examples on both image and text datasets.Our measurements show that typical adversarial examples introduce just enough redundancy to overflow the decision making of a machine learner trained on corresponding benign examples.We conclude with actionable recommendations to improve the robustness of machine learners against adversarial examples.",A new theoretical explanation for the existence of adversarial examples 1466,Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Imbalanced Data,"We propose a novel unsupervised generative model, Elastic-InfoGAN, that learns to disentangle object identity from other low-level aspects in class-imbalanced datasets.We first investigate the issues surrounding the assumptions about uniformity made by InfoGAN, and demonstrate its ineffectiveness at properly disentangling object identity in imbalanced data. Our key idea is to make the discovery of the discrete latent factor of variation invariant to identity-preserving transformations in real images, and use that as the signal to learn the latent distribution's parameters. Experiments on both artificial and real-world datasets demonstrate the effectiveness of our approach in imbalanced data by: better disentanglement of object identity as a latent factor of variation; and better approximation of class imbalance in the data, as reflected in the learned parameters of the latent distribution.","Elastic-InfoGAN is a modification of InfoGAN that learns, without any supervision, disentangled representations in class imbalanced data" 1467,Generalizing Deep Multi-task Learning with Heterogeneous Structured Networks,"Many real applications show a great deal of interest in learning multiple tasks from different data sources/modalities with unbalanced samples and dimensions.Unfortunately, existing cutting-edge deep multi-task learning approaches cannot be directly applied to these settings, due to either heterogeneous input dimensions or the heterogeneity in the optimal network architectures of different tasks.It is thus demanding to develop a knowledge-sharing mechanism to handle the intrinsic discrepancies among network architectures across tasks.To this end, we propose a flexible knowledge-sharing framework for jointly learning multiple tasks from distinct data sources/modalities.The proposed framework allows each task to own its task-specific network design, via utilizing a compact tensor representation, while the sharing is achieved through the partially shared latent cores.By providing more elaborate sharing control with latent cores, our framework is effective in transferring task-invariant knowledge, yet also efficient in learning task-specific features.Experiments on both single and multiple data sources/modalities settings display the promising results of the proposed method, especially favourable in insufficient data scenarios.",a distributed latent-space based knowledge-sharing framework for deep multi-task learning 1468,Game Changer: Accessible Audio and Tactile Guidance
for Board and Card Games,"Board games often rely on visual information such as the location of the game pieces and textual information on cards.Due to this reliance on visual feedback, blind players are at a disadvantage because they cannot read the cards or see the location of the game pieces and may be unable to play a game without sighted help.We present Game Changer, an augmented workspace that provides both audio descriptions and tactile additions to make the state of the board game accessible to blind and visually impaired players.In this paper, we describe the design of Game Changer and present findings from a user study in which 7 blind participants used Game Changer to play against a sighted partner.Most players stated the game was more accessible with the additions from Game Changer and felt that Game Changer could be used to augment other games.",Game Changer is a system that provides both audio descriptions and tactile additions to make the state of the board game accessible to blind and visually impaired players. 1469,Learning from Rules Generalizing Labeled Exemplars,"In many applications labeled data is not readily available, and needs to be collected via pain-staking human supervision.We propose a rule-exemplar model for collecting human supervision to combine the scalability of rules with the quality of instance labels. The supervision is coupled such that it is both natural for humans and synergistic for learning.We propose a training algorithm that jointly denoises rules via latent coverage variables, and trains the model through a soft implication loss over the coverage and label variables. Empirical evaluation on five different tasks shows that our algorithm is more accurate than several existing methods of learning from a mix of clean and noisy supervision, and the coupled rule-exemplar supervision is effective in denoising rules.",Coupled rule-exemplar supervision and a implication loss helps to jointly learn to denoise rules and imply labels. 1470,Adversarial Imitation via Variational Inverse Reinforcement Learning,"We consider a problem of learning the reward and policy from expert examples under unknown dynamics.Our proposed method builds on the framework of generative adversarial networks and introduces the empowerment-regularized maximum-entropy inverse reinforcement learning to learn near-optimal rewards and policies.Empowerment-based regularization prevents the policy from overfitting to expert demonstrations, which advantageously leads to more generalized behaviors that result in learning near-optimal rewards.Our method simultaneously learns empowerment through variational information maximization along with the reward and policy under the adversarial learning formulation.We evaluate our approach on various high-dimensional complex control tasks.We also test our learned rewards in challenging transfer learning problems where training and testing environments are made to be different from each other in terms of dynamics or structure.The results show that our proposed method not only learns near-optimal rewards and policies that are matching expert behavior but also performs significantly better than state-of-the-art inverse reinforcement learning algorithms.",Our method introduces the empowerment-regularized maximum-entropy inverse reinforcement learning to learn near-optimal rewards and policies from expert demonstrations. 
1471,RNNs implicitly implement tensor-product representations,"Recurrent neural networks can learn continuous vector representations of symbolic structures such as sequences and sentences; these representations often exhibit linear regularities. Such regularities motivate our hypothesis that RNNs that show such regularities implicitly compile symbolic structures into tensor product representations, which additively combine tensor products of vectors representing roles and vectors representing fillers. To test this hypothesis, we introduce Tensor Product Decomposition Networks, which use TPRs to approximate existing vector representations. We demonstrate using synthetic data that TPDNs can successfully approximate linear and tree-based RNN autoencoder representations, suggesting that these representations exhibit interpretable compositional structure; we explore the settings that lead RNNs to induce such structure-sensitive representations. By contrast, further TPDN experiments show that the representations of four models trained to encode naturally-occurring sentences can be largely approximated with a bag of words, with only marginal improvements from more sophisticated structures. We conclude that TPDNs provide a powerful method for interpreting vector representations, and that standard RNNs can induce compositional sequence representations that are remarkably well approximated byTPRs; at the same time, existing training tasks for sentence representation learning may not be sufficient for inducing robust structural representations","RNNs implicitly implement tensor-product representations, a principled and interpretable method for representing symbolic structures in continuous space." 1472,Data for free: Fewer-shot algorithm learning with parametricity data augmentation,We address the problem of teaching an RNN to approximate list-processing algorithms given a small number of input-output training examples.Our approach is to generalize the idea of parametricity from programming language theory to formulate a semantic property that distinguishes common algorithms from arbitrary non-algorithmic functions.This characterization leads naturally to a learned data augmentation scheme that encourages RNNs to learn algorithmic behavior and enables small-sample learning in a variety of list-processing tasks.,Learned data augmentation instills algorithm-favoring inductive biases that let RNNs learn list-processing algorithms from fewer examples. 
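Entry 1471's analysis rests on tensor product representations; the small NumPy illustration below shows TPR binding and unbinding with orthonormal role vectors, which is the structure the Tensor Product Decomposition Networks try to recover.

    import numpy as np

    def tpr_bind(fillers, roles):
        # Bind each filler vector to its role vector and sum the outer products.
        return sum(np.outer(f, r) for f, r in zip(fillers, roles))

    def tpr_unbind(tpr, role):
        # With (near-)orthonormal roles, multiplying by a role vector recovers its filler.
        return tpr @ role

    # Tiny usage example with orthonormal roles (standard basis).
    fillers = [np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])]
    roles = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    T = tpr_bind(fillers, roles)
    assert np.allclose(tpr_unbind(T, roles[0]), fillers[0])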
1473,A Pseudo-Label Method for Coarse-to-Fine Multi-Label Learning with Limited Supervision,"The goal of multi-label learning is to associate a given instance with its relevant labels from a set of concepts.Previous works of MLL mainly focused on the setting where the concept set is assumed to be fixed, while many real-world applications require introducing new concepts into the set to meet new demands.One common need is to refine the original coarse concepts and split them into finer-grained ones, where the refinement process typically begins with limited labeled data for the finer-grained concepts.To address the need, we propose a special weakly supervised MLL problem that not only focuses on the situation of limited fine-grained supervision but also leverages the hierarchical relationship between the coarse concepts and the fine-grained ones.The problem can be reduced to a multi-label version of negative-unlabeled learning problem using the hierarchical relationship.We tackle the reduced problem with a meta-learning approach that learns to assign pseudo-labels to the unlabeled entries.Experimental results demonstrate that our proposed method is able to assign accurate pseudo-labels, and in turn achieves superior classification performance when compared with other existing methods.",We propose a special weakly-supervised multi-label learning problem along with a newly tailored algorithm that learns the underlying classifier by learning to assign pseudo-labels. 1474,Improving Search Through A3C Reinforcement Learning Based Conversational Agent,We develop a reinforcement learning based search assistant which can assist users through a set of actions and sequence of interactions to enable them realize their intent.Our approach caters to subjective search where the user is seeking digital assets such as images which is fundamentally different from the tasks which have objective and limited search modalities.Labeled conversational data is generally not available in such search tasks and training the agent through human interactions can be time consuming.We propose a stochastic virtual user which impersonates a real user and can be used to sample user behavior efficiently to train the agent which accelerates the bootstrapping of the agent.We develop A3C algorithm based context preserving architecture which enables the agent to provide contextual assistance to the user.We compare the A3C agent with Q-learning and evaluate its performance on average rewards and state values it obtains with the virtual user in validation episodes.Our experiments show that the agent learns to achieve higher rewards and better states.,A Reinforcement Learning based conversational search assistant which provides contextual assistance in subjective search (like digital assets). 
1475,Informed Temporal Modeling via Logical Specification of Factorial LSTMs,"Consider a world in which events occur that involve various entities.Learning how to predict future events from patterns of past events becomes more difficult as we consider more types of events.Many of the patterns detected in the dataset by an ordinary LSTM will be spurious since the number of potential pairwise correlations, for example, grows quadratically with the number of events.We propose a type of factorial LSTM architecture where different blocks of LSTM cells are responsible for capturing different aspects of the world state.We use Datalog rules to specify how to derive the LSTM structure from a database of facts about the entities in the world.This is analogous to how a probabilistic relational model specifies a recipe for deriving a graphical model structure from a database.In both cases, the goal is to obtain useful inductive biases by encoding informed independence assumptions into the model.We specifically consider the neural Hawkes process, which uses an LSTM to modulate the rate of instantaneous events in continuous time.In both synthetic and real-world domains, we show that we obtain better generalization by using appropriate factorial designs specified by simple Datalog programs.",Factorize LSTM states and zero-out/tie LSTM weight matrices according to real-world structural biases expressed by Datalog programs. 1476,Model Inversion Networks for Model-Based Optimization,"In this work, we aim to solve data-driven optimization problems, where the goal is to find an input that maximizes an unknown score function given access to a dataset of input, score pairs.Inputs may lie on extremely thin manifolds in high-dimensional spaces, making the optimization prone to falling-off the manifold.Further, evaluating the unknown function may be expensive, so the algorithm should be able to exploit static, offline data.We propose model inversion networks as an approach to solve such problems.Unlike prior work, MINs scale to extremely high-dimensional input spaces and can efficiently leverage offline logged datasets for optimization in both contextual and non-contextual settings.We show that MINs can also be extended to the active setting, commonly studied in prior work, via a simple, novel and effective scheme for active data collection.Our experiments show that MINs act as powerful optimizers on a range of contextual/non-contextual, static/active problems including optimization over images and protein designs and learning from logged bandit feedback.",We propose a novel approach to solve data-driven model-based optimization problems in both passive and active settings that can scale to high-dimensional input spaces. 
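Entry 1476's model inversion networks learn a map from scores back to inputs and then query it at high scores; the toy PyTorch sketch below follows that reading, with the architecture, dimensions, and training loop being illustrative assumptions rather than the paper's method.

    import torch
    import torch.nn as nn

    class InverseMap(nn.Module):
        """Toy inverse model x = f(y, z): maps a desired score y (and noise z) to an input."""
        def __init__(self, x_dim, z_dim=8):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(1 + z_dim, 128), nn.ReLU(),
                                     nn.Linear(128, x_dim))

        def forward(self, y, z):
            return self.net(torch.cat([y, z], dim=-1))

    def fit_inverse(model, xs, ys, steps=1000, lr=1e-3):
        # Supervised fit on logged (x, y) pairs: predict x from its observed score y.
        # xs has shape (N, x_dim); ys has shape (N, 1).
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(steps):
            z = torch.randn(xs.shape[0], 8)
            loss = ((model(ys, z) - xs) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()

    # After training, query the inverse map at a high target score to propose new inputs:
    # x_new = model(torch.tensor([[high_score]]), torch.randn(1, 8))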
1477,Natural- to formal-language generation using Tensor Product Representations,"Generating formal language represented by relational tuples, such as Lisp programs or mathematical expressions, from a natural-language input is an extremely challenging task because it requires explicitly capturing discrete symbolic structural information from the input to generate the output.Most state-of-the-art neural sequence models do not explicitly capture such structure information, and thus do not perform well on these tasks.In this paper, we propose a new encoder-decoder model based on Tensor Product Representations for Natural- to Formal-language generation, called TP-N2F. The encoder of TP-N2F employs TPR 'binding' to encode natural-language symbolic structure in vector space and the decoder uses TPR 'unbinding' to generate a sequence of relational tuples, each consisting of a relation and a number of arguments, in symbolic space. TP-N2F considerably outperforms LSTM-based Seq2Seq models, creating new state-of-the-art results on two benchmarks: the MathQA dataset for math problem solving, and the AlgoList dataset for program synthesis.Ablation studies show that improvements are mainly attributed to the use of TPRs in both the encoder and decoder to explicitly capture relational structure information for symbolic reasoning.","In this paper, we propose a new encoder-decoder model based on Tensor Product Representations for Natural- to Formal-language generation, called TP-N2F." 1478,Forward Modeling for Partial Observation Strategy Games - A StarCraft Defogger,"In this paper we present a defogger, a model that learns to predict future hidden information from partial observations.We formulate this model in the context of forward modeling and leverage spatial and sequential constraints and correlations via convolutional neural networks and long short-term memory networks, respectively.We evaluate our approach on a large dataset of human games of StarCraft: Brood War, a real-time strategy video game.Our models consistently beat strong rule-based baselines and qualitatively produce sensible future game states.","This paper presents a defogger, a model that learns to predict future hidden information from partial observations, applied to a StarCraft dataset."
1479,Towards Understanding Generalization in Gradient-Based Meta-Learning,"In this work we study generalization of neural networks in gradient-based meta-learning by analyzing various properties of the objective landscapes.We experimentally demonstrate that as meta-training progresses, the meta-test solutions obtained by adapting the meta-train solution of the model to new tasks via few steps of gradient-based fine-tuning, become flatter, lower in loss, and further away from the meta-train solution.We also show that those meta-test solutions become flatter even as generalization starts to degrade, thus providing an experimental evidence against the correlation between generalization and flat minima in the paradigm of gradient-based meta-leaning.Furthermore, we provide empirical evidence that generalization to new tasks is correlated with the coherence between their adaptation trajectories in parameter space, measured by the average cosine similarity between task-specific trajectory directions, starting from a same meta-train solution.We also show that coherence of meta-test gradients, measured by the average inner product between the task-specific gradient vectors evaluated at meta-train solution, is also correlated with generalization.",We study generalization of neural networks in gradient-based meta- learning by analyzing various properties of the objective landscape. 1480,Feature Map Variational Auto-Encoders,"There have been multiple attempts with variational auto-encoders to learn powerful global representations of complex data using a combination of latent stochastic variables and an autoregressive model over the dimensions of the data.However, for the most challenging natural image tasks the purely autoregressive model with stochastic variables still outperform the combined stochastic autoregressive models.In this paper, we present simple additions to the VAE framework that generalize to natural images by embedding spatial information in the stochastic layers.We significantly improve the state-of-the-art results on MNIST, OMNIGLOT, CIFAR10 and ImageNet when the feature map parameterization of the stochastic variables are combined with the autoregressive PixelCNN approach.Interestingly, we also observe close to state-of-the-art results without the autoregressive part.This opens the possibility for high quality image generation with only one forward-pass.",We present a generative model that proves state-of-the-art results on gray-scale and natural images. 
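Entry 1479 measures coherence of adaptation trajectories by the average cosine similarity of task-specific adaptation directions starting from the meta-train solution; the NumPy snippet below is a direct rendering of that quantity for flattened parameter vectors.

    import numpy as np

    def trajectory_coherence(theta_meta, task_solutions):
        """Average pairwise cosine similarity of adaptation directions.

        theta_meta     : flat parameter vector at the meta-train solution
        task_solutions : list of flat parameter vectors after per-task fine-tuning
        """
        dirs = [t - theta_meta for t in task_solutions]
        dirs = [d / (np.linalg.norm(d) + 1e-12) for d in dirs]
        sims = [np.dot(dirs[i], dirs[j])
                for i in range(len(dirs)) for j in range(i + 1, len(dirs))]
        return float(np.mean(sims))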
1481,Overcoming the vanishing gradient problem in plain recurrent networks,"Plain recurrent networks greatly suffer from the vanishing gradient problem while Gated Neural Networks such as Long-short Term Memory and Gated Recurrent Unit deliver promising results in many sequence learning tasks through sophisticated network designs.This paper shows how we can address this problem in a plain recurrent network by analyzing the gating mechanisms in GNNs.We propose a novel network called the Recurrent Identity Network which allows a plain recurrent network to overcome the vanishing gradient problem while training very deep models without the use of gates.We compare this model with IRNNs and LSTMs on multiple sequence modeling benchmarks.The RINs demonstrate competitive performance and converge faster in all tasks.Notably, small RIN models produce 12%–67% higher accuracy on the Sequential and Permuted MNIST datasets and reach state-of-the-art performance on the bAbI question answering dataset.",We propose a novel network called the Recurrent Identity Network (RIN) which allows a plain recurrent network to overcome the vanishing gradient problem while training very deep models without the use of gates. 1482,Fault Tolerant Reinforcement Learning via A Markov Game of Control and Stopping,"Recently, there has been a surge in interest in safe and robust techniques within reinforcement learning.Current notions of risk in RL fail to capture the potential for systemic failures such as abrupt stoppages from system failures or surpassing of safety thresholds and the appropriate responsive controls in such instances.We propose a novel approach to fault-tolerance within RL in which the controller learns a policy can cope with adversarial attacks and random stoppages that lead to failures of the system subcomponents.The results of the paper also cover fault-tolerant control so that the controller learns to avoid states that carry risk of system failures.By demonstrating that the class of problems is represented by a variant of SGs, we prove the existence of a solution which is a unique fixed point equilibrium of the game and characterise the optimal controller behaviour.We then introduce a value function approximation algorithm that converges to the solution through simulation in unknown environments.",The paper tackles fault-tolerance under random and adversarial stoppages. 1483,Restoration of Video Frames from a Single Blurred Image with Motion Understanding,"We propose a novel framework to generate clean video frames from a single motion-blurred image.While a broad range of literature focuses on recovering a single image from a blurred image, in this work, we tackle a more challenging task i.e. 
video restoration from a blurred image.We formulate video restoration from a single blurred image as an inverse problem by setting clean image sequence and their respective motion as latent factors, and the blurred image as an observation.Our framework is based on an encoder-decoder structure with spatial transformer network modules to restore a video sequence and its underlying motion in an end-to-end manner.We design a loss function and regularizers with complementary properties to stabilize the training and analyze variant models of the proposed network.The effectiveness and transferability of our network are highlighted through a large set of experiments on two different types of datasets: camera rotation blurs generated from panorama scenes and dynamic motion blurs in high speed videos.Our code and models will be publicly available.",We present a novel unified architecture that restores video frames from a single motion-blurred image in an end-to-end manner. 1484,Pruning at a Glance: A Structured Class-Blind Pruning Technique for Model Compression,"High performance of deep learning models typically comes at cost of considerable model size and computation time.These factors limit applicability for deployment on memory and battery constraint devices such as mobile phones or embedded systems.In this work we propose a novel pruning technique that eliminates entire filters and neurons according to their relative L1-norm as compared to the rest of the network, yielding more compression and decreased redundancy in the parameters.The resulting network is non-sparse, however, much more compact and requires no special infrastructure for its deployment.We prove the viability of our method by achieving 97.4%, 47.8% and 53% compression of LeNet-5, ResNet-56 and ResNet-110 respectively, exceeding state-of-the-art compression results reported on ResNet without losing any performance compared to the baseline.Our approach does not only exhibit good performance, but is also easy to implement on many architectures.",We propose a novel structured class-blind pruning technique to produce highly compressed neural networks. 1485,Low Rank Training of Deep Neural Networks for Emerging Memory Technology,"The recent success of neural networks for solving difficult decision tasks has incentivized incorporating smart decision making ""at the edge.""However, this work has traditionally focused on neural network inference, rather than training, due to memory and compute limitations, especially in emerging non-volatile memory systems, where writes are energetically costly and reduce lifespan.Yet, the ability to train at the edge is becoming increasingly important as it enables applications such as real-time adaptability to device drift and environmental variation, user customization, and federated learning across devices.In this work, we address four key challenges for training on edge devices with non-volatile memory: low weight update density, weight quantization, low auxiliary memory, and online learning.We present a low-rank training scheme that addresses these four challenges while maintaining computational efficiency.We then demonstrate the technique on a representative convolutional neural network across several adaptation problems, where it out-performs standard SGD both in accuracy and in number of weight updates.",We use Kronecker sum approximations for low-rank training to address challenges in training neural networks on edge devices that utilize emerging memory technologies. 
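Entry 1484 prunes entire filters according to their L1 norm relative to the rest of the network in a class-blind fashion; the NumPy sketch below is a plausible rendering of that ranking, where normalizing by the per-layer sum is an assumption about how "relative" is implemented.

    import numpy as np

    def select_filters_to_prune(conv_weights, keep_ratio=0.5):
        """Rank all filters by their L1 norm relative to their layer and return
        (layer, filter) indices for the lowest-scoring ones.

        conv_weights : list of arrays, each of shape (out_channels, in_channels, kH, kW)
        """
        scores = []
        for li, w in enumerate(conv_weights):
            l1 = np.abs(w).reshape(w.shape[0], -1).sum(axis=1)
            rel = l1 / l1.sum()                      # L1 norm relative to the layer
            scores += [(rel[f], li, f) for f in range(w.shape[0])]
        scores.sort()
        n_prune = int(len(scores) * (1.0 - keep_ratio))
        return [(li, f) for _, li, f in scores[:n_prune]]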
1486,Characterizing the Accuracy/Complexity Landscape of Explanations of Deep Networks through Knowledge Extraction,"Knowledge extraction techniques are used to convert neural networks into symbolic descriptions with the objective of producing more comprehensible learning models.The central challenge is to find an explanation which is more comprehensible than the original model while still representing that model faithfully.The distributed nature of deep networks has led many to believe that the hidden features of a neural network cannot be explained by logical descriptions simple enough to be understood by humans, and that decompositional knowledge extraction should be abandoned in favour of other methods.In this paper we examine this question systematically by proposing a knowledge extraction method using rules which allows us to map the complexity/accuracy landscape of rules describing hidden features in a Convolutional Neural Network.Experiments reported in this paper show that the shape of this landscape reveals an optimal trade off between comprehensibility and accuracy, showing that each latent variable has an optimal rule to describe its behaviour.We find that the rules with optimal tradeoff in the first and final layer have a high degree of explainability whereas the rules with the optimal tradeoff in the second and third layer are less explainable.The results shed light on the feasibility of rule extraction from deep networks, and point to the value of decompositional knowledge extraction as a method of explainability.",Systematically examines how well we can explain the hidden features of a deep network in terms of logical rules. 1487,Seeing the whole picture instead of a single point: Self-supervised likelihood learning for deep generative models,"Recent findings show that deep generative models can judge out-of-distribution samples as more likely than those drawn from the same distribution as the training data.In this work, we focus on variational autoencoders and address the problem of misaligned likelihood estimates on image data.We develop a novel likelihood function that is based not only on the parameters returned by the VAE but also on the features of the data learned in a self-supervised fashion.In this way, the model additionally captures the semantic information that is disregarded by the usual VAE likelihood function.We demonstrate the improvements in reliability of the estimates with experiments on the FashionMNIST and MNIST datasets.",Improved likelihood estimates in variational autoencoders using self-supervised feature learning 1488,Global Convergence of Policy Gradient Methods for Linearized Control Problems,"Direct policy gradient methods for reinforcement learning and continuous control problems are a popularapproach for a variety of reasons:1) they are easy to implement without explicit knowledge of the underlying model;2) they are an ""end-to-end"" approach, directly optimizing the performance metric of interest;3) they inherently allow for richly parameterized policies.A notable drawback is that even in the most basic continuous control problem, these methods must solve a non-convex optimization problem, where little is understood about their efficiency from both computational and statistical perspectives.In contrast, system identification and model based planning in optimal control theory have a much more solid theoretical footing, where much is known with regards to their computational and statistical properties. 
This work bridges this gap showing that policy gradient methods globally converge to the optimal solution and are efficient with regards to their sample and computational complexities.",This paper shows that model-free policy gradient methods can converge to the global optimal solution for non-convex linearized control problems. 1489,"A Compressed Sensing View of Unsupervised Text Embeddings, Bag-of-n-Grams, and LSTMs","Low-dimensional vector embeddings, computed using LSTMs or simpler techniques, are a popular approach for capturing the “meaning” of text and a form of unsupervised learning useful for downstream tasks.However, their power is not theoretically understood.The current paper derives formal understanding by looking at the subcase of linear embedding schemes.Using the theory of compressed sensing we show that representations combining the constituent word vectors are essentially information-preserving linear measurements of Bag-of-n-Grams representations of text.This leads to a new theoretical result about LSTMs: low-dimensional embeddings derived from a low-memory LSTM are provably at least as powerful on classification tasks, up to small error, as a linear classifier over BonG vectors, a result that extensive empirical work has thus far been unable to show.Our experiments support these theoretical findings and establish strong, simple, and unsupervised baselines on standard benchmarks that in some cases are state of the art among word-level methods.We also show a surprising new property of embeddings such as GloVe and word2vec: they form a good sensing matrix for text that is more efficient than random matrices, the standard sparse recovery tool, which may explain why they lead to better representations in practice.",We use the theory of compressed sensing to prove that LSTMs can do at least as well on linear text classification as Bag-of-n-Grams. 1490,A NEW POINTWISE CONVOLUTION IN DEEP NEURAL NETWORKS THROUGH EXTREMELY FAST AND NON PARAMETRIC TRANSFORMS," Some conventional transforms such as Discrete Walsh-Hadamard Transform and Discrete Cosine Transform have been widely used as feature extractors in image processing but rarely applied in neural networks.However, we found that these conventional transforms have the ability to capture the cross-channel correlations without any learnable parameters in DNNs.This paper firstly proposes to apply conventional transforms on pointwise convolution, showing that such transforms significantly reduce the computational complexity of neural networks without accuracy performance degradation.Especially for DWHT, it requires no floating point multiplications but only additions and subtractions, which can considerably reduce computation overheads.In addition, its fast algorithm further reduces complexity of floating point addition from O to O.These non-parametric and low computational properties construct extremely efficient networks in the number parameters and operations, enjoying accuracy gain.Our proposed DWHT-based model gained 1.49% accuracy increase with 79.4% reduced parameters and 48.4% reduced FLOPs compared with its baseline model on the CIFAR 100 dataset.",We introduce new pointwise convolution layers equipped with extremely fast conventional transforms in deep neural network. 
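Entry 1490 replaces learned pointwise (1x1) convolutions with a Discrete Walsh-Hadamard Transform across the channel axis, which uses only additions and subtractions; below is a self-contained fast WHT sketch (applying it along the channel dimension of a feature map is the assumed usage).

    import numpy as np

    def fwht(x):
        """Fast Walsh-Hadamard transform along the last axis (length must be a power of 2).
        Uses only additions and subtractions, O(n log n) per vector."""
        x = np.array(x, dtype=np.float64, copy=True)
        n = x.shape[-1]
        h = 1
        while h < n:
            for i in range(0, n, h * 2):
                a = x[..., i:i + h].copy()
                b = x[..., i + h:i + 2 * h].copy()
                x[..., i:i + h] = a + b
                x[..., i + h:i + 2 * h] = a - b
            h *= 2
        return x

    # A 1x1-convolution stand-in: mix information across channels with the transform.
    # feats: (batch, height, width, channels), channels a power of two.
    # mixed = fwht(feats)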
1491,Lattice Representation Learning,"We introduce the notion of , in which the representation for some object of interest is a lattice point in a Euclidean space.Our main contribution is a result for replacing an objective function which employs lattice quantization with an objective function in which quantization is absent, thus allowing optimization techniques based on gradient descent to apply; we call the resulting algorithms algorithms as they are designed explicitly to allow for an optimization procedure where only local information is employed.We also argue that a technique commonly used in Variational Auto-Encoders is tightly connected with the idea of lattice representations, as the quantization error in good high dimensional lattices can be modeled as a Gaussian distribution.We use a traditional encoder/decoder architecture to explore the idea of lattice-valued representations, and provide experimental evidence of the potential of using lattice representations by modifying the generic architecture so that it can implement not only Gaussian dithering of representations, but also the well known straight-through estimator and its application to vector quantization.",We propose to use lattices to represent objects and prove a fundamental result on how to train networks that use them. 1492,Finding a human-like classifier,"There were many attempts to explain the trade-off between accuracy and adversarial robustness.However, there was no clear understanding of the behaviors of a robust classifier which has human-like robustness.We argue why we need to consider adversarial robustness against varying magnitudes of perturbations not only focusing on a fixed perturbation threshold, why we need to use a different method to generate adversarially perturbed samples that can be used to train a robust classifier and measure the robustness of classifiers and why we need to prioritize adversarial accuracies with different magnitudes.We introduce Lexicographical Genuine Robustness of classifiers that combines the above requirements. We also suggest a candidate oracle classifier called ""Optimal Lexicographically Genuinely Robust Classifier"" that prioritizes accuracy on meaningful adversarially perturbed examples generated by smaller magnitude perturbations. The training algorithm for estimating OLGRC requires lexicographical optimization unlike existing adversarial training methods.To apply lexicographical optimization to neural networks, we utilize Gradient Episodic Memory which was originally developed for continual learning by preventing catastrophic forgetting.",We try to design and train a classifier whose adversarial robustness more closely resembles that of humans.
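Entry 1491 mentions the straight-through estimator applied to (vector) quantization; below is the standard PyTorch idiom for the scalar rounding case, shown only as a minimal illustration of the gradient trick, not of the paper's lattice construction.

    import torch

    def straight_through_round(x):
        """Quantize in the forward pass but let gradients flow as if it were the identity.

        The detach trick yields round(x) in the forward pass while backpropagating
        d(round(x))/dx = 1, i.e. the straight-through estimator."""
        return x + (torch.round(x) - x).detach()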
1493,Discrete Wasserstein Generative Adversarial Networks (DWGAN),"Generating complex discrete distributions remains one of the challenging problems in machine learning.Existing techniques for generating complex distributions with high degrees of freedom depend on standard generative models like Generative Adversarial Networks, Wasserstein GAN, and associated variations.Such models are based on an optimization involving the distance between two continuous distributions.We introduce a Discrete Wasserstein GAN model which is based on a dual formulation of the Wasserstein distance between two discrete distributions.We derive a novel training algorithm and corresponding network architecture based on the formulation.Experimental results are provided for both synthetic discrete data, and real discretized data from MNIST handwritten digits.",We propose a Discrete Wasserstein GAN (DWGAN) model which is based on a dual formulation of the Wasserstein distance between two discrete distributions. 1494,Omega: An Architecture for AI Unification,"We introduce the open-ended, modular, self-improving Omega AI unification architecture which is a refinement of Solomonoff's Alpha architecture, as considered from first principles. The architecture embodies several crucial principles of general intelligence including diversity of representations, diversity of data types, integrated memory, modularity, and higher-order cognition. We retain the basic design of a fundamental algorithmic substrate called an AI kernel for problem solving and basic cognitive functions like memory, and a larger, modular architecture that re-uses the kernel in many ways. Omega includes eight representation languages and six classes of neural networks, which are briefly introduced.The architecture is intended to initially address data science automation, hence it includes many problem solving methods for statistical tasks.We review the broad software architecture, higher-order cognition, self-improvement, modular neural architectures, intelligent agents, the process and memory hierarchy, hardware abstraction, peer-to-peer computing, and data abstraction facility.",It's a new AGI architecture for trans-sapient performance.This is a high-level overview of the Omega AGI architecture which is the basis of a data science automation system. Submitted to a workshop. 1495,FlowQA: Grasping Flow in History for Conversational Machine Comprehension,"Conversational machine comprehension requires a deep understanding of the conversation history.To enable traditional, single-turn models to encode the history comprehensively, we introduce Flow, a mechanism that can incorporate intermediate representations generated during the process of answering previous questions, through an alternating parallel processing structure.Compared to shallow approaches that concatenate previous questions/answers as input, Flow integrates the latent semantics of the conversation history more deeply.Our model, FlowQA, shows superior performance on two recently proposed conversational challenges.The effectiveness of Flow also shows in other tasks.By reducing sequential instruction understanding to conversational machine comprehension, FlowQA outperforms the best models on all three domains in SCONE, with +1.8% to +4.4% improvement in accuracy.","We propose the Flow mechanism and an end-to-end architecture, FlowQA, that achieves SotA on two conversational QA datasets and a sequential instruction understanding task."
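Entry 1493 trains against a dual formulation of the Wasserstein distance between two discrete distributions; for reference, the primal optimal-transport problem between two discrete distributions can be solved exactly as a linear program, as in the baseline sketch below (this is not the paper's training algorithm).

    import numpy as np
    from scipy.optimize import linprog

    def discrete_wasserstein(p, q, cost):
        """Exact Wasserstein distance between discrete distributions p (length n) and
        q (length m) under an n x m ground-cost matrix, via the primal transport LP."""
        n, m = cost.shape
        c = cost.reshape(-1)
        A_eq = np.zeros((n + m, n * m))
        for i in range(n):                 # row marginals: sum_j T[i, j] = p[i]
            A_eq[i, i * m:(i + 1) * m] = 1.0
        for j in range(m):                 # column marginals: sum_i T[i, j] = q[j]
            A_eq[n + j, j::m] = 1.0
        b_eq = np.concatenate([p, q])
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
        return res.fun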
1496,Residual Loss Prediction: Reinforcement Learning With No Incremental Feedback,"We consider reinforcement learning and bandit structured prediction problems with very sparse loss feedback: only at the end of an episode.We introduce a novel algorithm, RESIDUAL LOSS PREDICTION, that solves such problems by automatically learning an internal representation of a denser reward function.RESLOPE operates as a reduction to contextual bandits, using its learned loss representation to solve the credit assignment problem, and a contextual bandit oracle to trade-off exploration and exploitation.RESLOPE enjoys a no-regret reduction-style theoretical guarantee and outperforms state of the art reinforcement learning algorithms in both MDP environments and bandit structured prediction settings.",We present a novel algorithm for solving reinforcement learning and bandit structured prediction problems with very sparse loss feedback. 1497,Stabilizing Adversarial Nets with Prediction Methods,"Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train.These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function.The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen it is often highly sensitive to learning rates.We propose a simple modification of stochastic gradient descent that stabilizes adversarial networks.We show, both in theory and practice, that the proposed method reliably converges to saddle points.This makes adversarial networks less likely to ""collapse,"" and enables faster training with larger learning rates.","We present a simple modification to the alternating SGD method, called a prediction step, that improves the stability of adversarial networks." 1498,Towards Model-Based Contrastive Explanations for Explainable Planning,"An important type of question that arises in Explainable Planning is a contrastive question, of the form ""Why action A instead of action B?"".These kinds of questions can be answered with a contrastive explanation that compares properties of the original plan containing A against the contrastive plan containing B. An effective explanation of this type serves to highlight the differences between the decisions that have been made by the planner and what the user would expect, as well as to provide further insight into the model and the planning process.Producing this kind of explanation requires the generation of the contrastive plan.This paper introduces domain-independent compilations of user questions into constraints.These constraints are added to the planning model, so that a solution to the new model represents the contrastive plan.We introduce a formal description of the compilation from user question to constraints in a temporal and numeric PDDL2.1 planning setting.",This paper introduces domain-independent compilations of user questions into constraints for contrastive explanations. 
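The prediction step of 1497 is simple enough to show on a toy saddle-point problem. The sketch below (plain NumPy; the bilinear objective, the step size, and which player is extrapolated are illustrative assumptions) extrapolates one player's parameters before the other player's update, which is what stabilizes the alternating steps:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
x, y = rng.standard_normal(5), rng.standard_normal(5)   # min over x, max over y of x.A.y
lr = 0.05

for _ in range(2000):
    x_new = x - lr * (A @ y)       # gradient descent step for the minimizing player
    x_bar = x_new + (x_new - x)    # prediction (extrapolation) of where x is heading
    y = y + lr * (A.T @ x_bar)     # the maximizing player responds to the predicted x
    x = x_new
# Plain alternating gradient steps oscillate on this problem; the prediction step
# damps the oscillation and the iterates approach the saddle point (x, y) = (0, 0).
```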
1499,Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking,"Methods that learn representations of nodes in a graph play a critical role in network analysis since they enable many downstream learning tasks.We propose Graph2Gauss - an approach that can efficiently learn versatile node embeddings on large scale graphs that show strong performance on tasks such as link prediction and node classification.Unlike most approaches that represent nodes as point vectors in a low-dimensional continuous space, we embed each node as a Gaussian distribution, allowing us to capture uncertainty about the representation.Furthermore, we propose an unsupervised method that handles inductive learning scenarios and is applicable to different types of graphs: plain/attributed, directed/undirected.By leveraging both the network structure and the associated node attributes, we are able to generalize to unseen nodes without additional training.To learn the embeddings we adopt a personalized ranking formulation w.r.t. the node distances that exploits the natural ordering of the nodes imposed by the network structure.Experiments on real world networks demonstrate the high performance of our approach, outperforming state-of-the-art network embedding methods on several different tasks.Additionally, we demonstrate the benefits of modeling uncertainty - by analyzing it we can estimate neighborhood diversity and detect the intrinsic latent dimensionality of a graph.", We embed nodes in a graph as Gaussian distributions allowing us to capture uncertainty about their representation. 1500,Logit Regularization Methods for Adversarial Robustness,"While great progress has been made at making neural networks effective across a wide range of tasks, many are surprisingly vulnerable to small, carefully chosen perturbations of their input, known as adversarial examples.In this paper, we advocate for and experimentally investigate the use of logit regularization techniques as an adversarial defense, which can be used in conjunction with other methods for creating adversarial robustness at little to no cost.We demonstrate that much of the effectiveness of one recent adversarial defense mechanism can be attributed to logit regularization and show how to improve its defense against both white-box and black-box attacks, in the process creating a stronger black-box attack against PGD-based models.",Logit regularization methods help explain and improve state of the art adversarial defenses 1501,DeepArchitect: Automatically Designing and Training Deep Architectures,"In deep learning, performance is strongly affected by the choice of architecture and hyperparameters.While there has been extensive work on automatic hyperparameter optimization for simple spaces, complex spaces such as the space of deep architectures remain largely unexplored.As a result, the choice of architecture is done manually by the human expert through a slow trial and error process guided mainly by intuition.In this paper we describe a framework for automatically designing and training deep models.We propose an extensible and modular language that allows the human expert to compactly represent complex search spaces over architectures and their hyperparameters.The resulting search spaces are tree-structured and therefore easy to traverse.Models can be automatically compiled to computational graphs once values for all hyperparameters have been chosen.We can leverage the structure of the search space to introduce different model search algorithms, such 
as random search, Monte Carlo tree search, and sequential model-based optimization.We present experiments comparing the different algorithms on CIFAR-10 and show that MCTS and SMBO outperform random search.We also present experiments on MNIST, showing that the same search space achieves near state-of-the-art performance with a few samples.These experiments show that our framework can be used effectively for model discovery, as it is possible to describe expressive search spaces and discover competitive models without much effort from the human expert.Code for our framework and experiments has been made publicly available.",We describe a modular and composable language for describing expressive search spaces over architectures and simple model search algorithms applied to these search spaces. 1502,Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy,"Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection.The performant systems, however, typically involve big models with numerous parameters.Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems -- the models are compute and memory intensive.Low precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models.In this paper, we study the combination of these two techniques and show that the performance of low precision networks can be significantly improved by using knowledge distillation techniques.We call our approach Apprentice and show state-of-the-art accuracies using ternary precision and 4-bit precision for many variants of the ResNet architecture on the ImageNet dataset.We study three schemes in which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.",We show that knowledge transfer techniques can improve the accuracy of low precision networks and set new state-of-the-art accuracy for ternary and 4-bits precision.
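A generic distillation objective of the kind Apprentice (1502) pairs with a low-precision student can be sketched as below (PyTorch; the temperature T, the weighting alpha, and the exact loss mix are illustrative assumptions rather than the paper's settings):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Hard-label term: standard cross-entropy with the ground truth.
    hard = F.cross_entropy(student_logits, labels)
    # Soft-label term: match the teacher's tempered distribution; T*T rescales the
    # gradient magnitude so the two terms remain comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * soft + (1.0 - alpha) * hard
```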
1503,BNN+: Improved Binary Network Training,"Deep neural networks are widely used in many applications.However, their deployment on edge devices has been difficult because they are resource hungry.Binary neural networks help to alleviate the prohibitive resource requirements of DNN, where both activations and weights are limited to 1-bit.We propose an improved binary training method, by introducing a regularization function that encourages training weights around binary values.In addition to this, to enhance model performance we add trainable scaling factors to our regularization functions.Furthermore, we use an improved approximation of the derivative of the sign activation function in the backward computation.These additions are based on linear operations that are easily implementable into the binary training framework.We show experimental results on CIFAR-10, obtaining an accuracy of 86.5% with AlexNet and 91.3% with a VGG network.On ImageNet, our method also outperforms the traditional BNN method and XNOR-net on AlexNet, by margins of 4% and 2% top-1 accuracy respectively.",The paper presents an improved training mechanism for obtaining binary networks with a smaller accuracy drop that helps close the gap with their full-precision counterparts 1504,HYPE: Human-eYe Perceptual Evaluation of Generative Models,"Generative models often use human evaluations to determine and justify progress.Unfortunately, existing human evaluation methods are ad-hoc: there is currently no standardized, validated evaluation that: measures perceptual fidelity, is reliable, separates models into clear rank order, and ensures high-quality measurement without intractable cost.In response, we construct Human-eYe Perceptual Evaluation, a human metric that is grounded in psychophysics research on perception, is reliable across different sets of randomly sampled outputs from a model, results in separable model performances, and is efficient in cost and time.We introduce two methods.The first, HYPE-Time, measures visual perception under adaptive time constraints to determine the minimum length of time that model output such as a generated face needs to be visible for people to distinguish it as real or fake.The second, HYPE-Infinity, measures human error rate on fake and real images with no time constraints, maintaining stability and drastically reducing time and cost.We test HYPE across four state-of-the-art generative adversarial networks on unconditional image generation using two datasets, the popular CelebA and the newer higher-resolution FFHQ, and two sampling techniques of model outputs.By simulating HYPE's evaluation multiple times, we demonstrate consistent ranking of different models, identifying StyleGAN with truncation trick sampling as superior to StyleGAN without truncation on FFHQ.","HYPE is a reliable human evaluation metric for scoring generative models, starting with human face generation across 4 GANs."
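One plausible reading of the BNN+ ingredients in 1503, sketched in PyTorch; the exact regularizer shape, the single scale per tensor, and the clipped straight-through backward pass are assumptions for illustration, not the paper's precise formulation:

```python
import torch

def binary_regularizer(w: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
    # Encourage weights to sit near the (trainable) scales +alpha / -alpha.
    return torch.abs(alpha - torch.abs(w)).sum()

def sign_ste(x: torch.Tensor) -> torch.Tensor:
    # Sign activation in the forward pass; the backward pass uses the gradient of a
    # clipped identity as a stand-in for the paper's improved derivative approximation.
    hard = torch.sign(x)
    soft = torch.clamp(x, -1.0, 1.0)
    return soft + (hard - soft).detach()
```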
1505,Unsupervised Machine Translation Using Monolingual Corpora Only,"Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora.There have been numerous attempts to extend these successes to low-resource language pairs, yet these still require tens of thousands of parallel sentences.In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data.We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space.By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data.We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.",We propose a new unsupervised machine translation model that can learn without using parallel corpora; experimental results show impressive performance on multiple corpora and pairs of languages. 1506,Intrinsic Social Motivation via Causal Influence in Multi-Agent RL,"We derive a new intrinsic social motivation for multi-agent reinforcement learning, in which agents are rewarded for having causal influence over another agent's actions, where causal influence is assessed using counterfactual reasoning.The reward does not depend on observing another agent's reward function, and is thus a more realistic approach to MARL than taken in previous work.We show that the causal influence reward is related to maximizing the mutual information between agents' actions.We test the approach in challenging social dilemma environments, where it consistently leads to enhanced cooperation between agents and higher collective reward.Moreover, we find that rewarding influence can lead agents to develop emergent communication protocols.Therefore, we also employ influence to train agents to use an explicit communication channel, and find that it leads to more effective communication and higher collective reward.Finally, we show that influence can be computed by equipping each agent with an internal model that predicts the actions of other agents.This allows the social influence reward to be computed without the use of a centralised controller, and as such represents a significantly more general and scalable inductive bias for MARL with independent agents.","We reward agents for having a causal influence on the actions of other agents, and show that this gives rise to better cooperation and more meaningful emergent communication protocols. 
" 1507,Distributed Distributional Deterministic Policy Gradients,"This work adopts the very successful distributional perspective on reinforcement learning and adapts it to the continuous control setting.We combine this within a distributed framework for off-policy learning in order to develop what we call the Distributed Distributional Deep Deterministic Policy Gradient algorithm, D4PG.We also combine this technique with a number of additional, simple improvements such as the use of N-step returns and prioritized experience replay.Experimentally we examine the contribution of each of these individual components, and show how they interact, as well as their combined contributions.Our results show that across a wide variety of simple control tasks, difficult manipulation tasks, and a set of hard obstacle-based locomotion tasks the D4PG algorithm achieves state of the art performance.","We develop an agent that we call the Distributional Deterministic Deep Policy Gradient algorithm, which achieves state of the art performance on a number of challenging continuous control problems." 1508,Learning Gaussian Policies from Smoothed Action Value Functions,"State-action value functions are ubiquitous in reinforcement learning, giving rise to popular algorithms such as SARSA and Q-learning.We propose a new notion of action value defined by a Gaussian smoothed version of the expected Q-value used in SARSA.We show that such smoothed Q-values still satisfy a Bellman equation, making them naturally learnable from experience sampled from an environment.Moreover, the gradients of expected reward with respect to the mean and covariance of a parameterized Gaussian policy can be recovered from the gradient and Hessian of the smoothed Q-value function.Based on these relationships we develop new algorithms for training a Gaussian policy directly from a learned Q-value approximator.The approach is also amenable to proximal optimization techniques by augmenting the objective with a penalty on KL-divergence from a previous policy.We find that the ability to learn both a mean and covariance during training allows this approach to achieve strong results on standard continuous control benchmarks.",We propose a new Q-value function that enables better learning of Gaussian policies. 1509,Graph Constrained Reinforcement Learning for Natural Language Action Spaces,"Interactive Fiction games are text-based simulations in which an agent interacts with the world purely through natural language.They are ideal environments for studying how to extend reinforcement learning agents to meet the challenges of natural language understanding, partial observability, and action generation in combinatorially-large text-based action spaces.We present KG-A2C, an agent that builds a dynamic knowledge graph while exploring and generates actions using a template-based action space.We contend that the dual uses of the knowledge graph to reason about game state and to constrain natural language generation are the keys to scalable exploration of combinatorially large natural language actions.Results across a wide variety of IF games show that KG-A2C outperforms current IF agents despite the exponential increase in action space size.","We present KG-A2C, a reinforcement learning agent that builds a dynamic knowledge graph while exploring and generates natural language using a template-based action space - outperforming all current agents on a wide set of text-based games." 
1510,The power of deeper networks for expressing natural functions,"It is well-known that neural networks are universal approximators, but that deeper networks tend in practice to be more powerful than shallower ones.We shed light on this by proving that the total number of neurons m required to approximate natural classes of multivariate polynomials of n variables grows only linearly with n for deep neural networks, but grows exponentially when merely a single hidden layer is allowed.We also provide evidence that when the number of hidden layers is increased from 1 to k, the neuron requirement grows exponentially not with n but with n^{1/k}, suggesting that the minimum number of layers required for practical expressibility grows only logarithmically with n.",We prove that deep neural networks are exponentially more efficient than shallow ones at approximating sparse multivariate polynomials. 1511,Neuron ranking - an informed way to compress convolutional neural networks,"Convolutional neural networks in recent years have made a dramatic impact in science, technology and industry, yet the theoretical mechanism of CNN architecture design remains surprisingly vague.The CNN neurons, including their distinctive elements, convolutional filters, are known to be learnable features, yet their individual role in producing the output is rather unclear.The thesis of this work is that not all neurons are equally important and some of them contain more useful information to perform a given task.Hence, we propose to quantify and rank neuron importance, and directly incorporate neuron importance in the objective function under two formulations: a game theoretical approach based on the Shapley value, which computes the marginal contribution of each filter; and a probabilistic approach based on what we call the importance switch, using variational inference.Using these two methods we confirm the general theory that some of the neurons are inherently more important than the others.Various experiments illustrate that the learned ranks can be readily used for structured network compression and interpretability of learned features.",We propose CNN neuron ranking with two different methods and show their consistency in producing the result which allows to interpret what network deems important and compress the network by keeping the most relevant nodes. 1512,Learning To Explore Using Active Neural Mapping,"This work presents a modular and hierarchical approach to learn policies for exploring 3D environments.Our approach leverages the strengths of both classical and learning-based methods, by using analytical path planners with learned mappers, and global and local policies.Use of learning provides flexibility with respect to input modalities, leverages structural regularities of the world, and provides robustness to errors in state estimation.Such use of learning within each module retains its benefits, while at the same time, hierarchical decomposition and modular training allow us to sidestep the high sample complexities associated with training end-to-end policies.Our experiments in visually and physically realistic simulated 3D environments demonstrate the effectiveness of our proposed approach over past learning and geometry-based approaches.",A modular and hierarchical approach to learn policies for exploring 3D environments.
1513,SPIGAN: Privileged Adversarial Learning from Simulation,"Deep Learning for Computer Vision depends mainly on the source of supervision.Photo-realistic simulators can generate large-scale automatically labeled synthetic data, but introduce a domain gap negatively impacting performance.We propose a new unsupervised domain adaptation algorithm, called SPIGAN, relying on Simulator Privileged Information and Generative Adversarial Networks.We use internal data from the simulator as PI during the training of a target task network.We experimentally evaluate our approach on semantic segmentation.We train the networks on real-world Cityscapes and Vistas datasets, using only unlabeled real-world images and synthetic labeled data with z-buffer PI from the SYNTHIA dataset.Our method improves over no adaptation and state-of-the-art unsupervised domain adaptation techniques.",An unsupervised sim-to-real domain adaptation method for semantic segmentation using privileged information from a simulator with GAN-based image translation. 1514,Intriguing Properties of Adversarial Training at Scale,"Adversarial training is one of the main defenses against adversarial attacks.In this paper, we provide the first rigorous study on diagnosing elements of large-scale adversarial training on ImageNet, which reveals two intriguing properties.First, we study the role of normalization.Batch normalization is a crucial element for achieving state-of-the-art performance on many vision tasks, but we show it may prevent networks from obtaining strong robustness in adversarial training.One unexpected observation is that, for models trained with BN, simply removing clean images from training data largely boosts adversarial robustness (by 18.3%).We relate this phenomenon to the hypothesis that clean images and adversarial images are drawn from two different domains.This two-domain hypothesis may explain the issue of BN when training with a mixture of clean and adversarial images, as estimating normalization statistics of this mixture distribution is challenging.Guided by this two-domain hypothesis, we show disentangling the mixture distribution for normalization, i.e., applying separate BNs to clean and adversarial images for statistics estimation, achieves much stronger robustness.Additionally, we find that enforcing BNs to behave consistently at training and testing can further enhance robustness.Second, we study the role of network capacity.We find our so-called ""deep"" networks are still shallow for the task of adversarial learning.Unlike traditional classification tasks where accuracy is only marginally improved by adding more layers to ""deep"" networks, adversarial training exhibits a much stronger demand on deeper networks to achieve higher adversarial robustness.This robustness improvement can be observed substantially and consistently even by pushing the network capacity to an unprecedented scale, i.e., ResNet-638. ",The first rigorous diagnosis of large-scale adversarial training on ImageNet 1515,Bias Also Matters: Bias Attribution for Deep Neural Network Explanation,"The gradient of a deep neural network w.r.t. the input provides information that can be used to explain the output prediction in terms of the input features and has been widely studied to assist in interpreting DNNs. 
In a linear model f(x) = wx + b, the gradient-based explanation recovers the weight w but ignores the bias b. We propose a backpropagation-type algorithm, bias back-propagation (BBp), that attributes the bias terms of a deep network to its input features, yielding a gradient (weight) attribution and the bias attribution, providing separate and complementary explanations.We study several possible attribution methods applied to the bias of each layer in BBp.In experiments, we show that BBp can generate complementary and highly interpretable explanations of DNNs in addition to gradient-based attributions.",Attribute the bias terms of deep neural networks to input features by a backpropagation-type algorithm; Generate complementary and highly interpretable explanations of DNNs in addition to gradient-based attributions. 1516,A fully automated periodicity detection in time series,"This paper presents a method to autonomously find periodicities in a signal.It is based on the same idea of using the Fourier Transform and the autocorrelation function presented in Vlachos et al. 2005.While showing interesting results, this method does not perform well on noisy signals or signals with multiple periodicities.Thus, our method adds several extra steps to fix these issues.Experimental results show that the proposed method outperforms the state of the art algorithms.","This paper presents a method to autonomously find multiple periodicities in a signal, using FFT and ACF, and adds three new steps (clustering/filtering/detrending)" 1517,Adversarial Exploration Strategy for Self-Supervised Imitation Learning,"We present an adversarial exploration strategy, a simple yet effective imitation learning scheme that incentivizes exploration of an environment without any extrinsic reward or human demonstration.Our framework consists of a deep reinforcement learning agent and an inverse dynamics model contesting with each other.The former collects training samples for the latter, and its objective is to maximize the error of the latter.The latter is trained with samples collected by the former, and generates rewards for the former when it fails to predict the actual action taken by the former.In such a competitive setting, the DRL agent learns to generate samples that the inverse dynamics model fails to predict correctly, and the inverse dynamics model learns to adapt to the challenging samples.We further propose a reward structure that ensures the DRL agent collects only moderately hard samples and not overly hard ones that prevent the inverse model from imitating effectively.We evaluate the effectiveness of our method on several OpenAI gym robotic arm and hand manipulation tasks against a number of baseline models.Experimental results show that our method is comparable to an agent directly trained with expert demonstrations, and superior to the other baselines even without any human priors.",A simple yet effective imitation learning scheme that incentivizes exploration of an environment without any extrinsic reward or human demonstration.
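A bare-bones version of the FFT-plus-ACF procedure described in 1516 can be sketched as follows (NumPy; the unit sampling period, the top-k candidate selection, and the local-peak validation rule are illustrative assumptions, and the paper's extra clustering/filtering/detrending steps are omitted):

```python
import numpy as np

def candidate_periods(x, top_k=3):
    x = x - x.mean()
    # Strongest non-DC frequencies in the power spectrum give candidate periods.
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0)
    idx = np.argsort(spectrum[1:])[::-1][:top_k] + 1
    candidates = (1.0 / freqs[idx]).round().astype(int)
    # Validate each candidate on the autocorrelation: keep it only if the ACF has a
    # local peak at that lag.
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    return [p for p in candidates
            if 1 < p < len(x) - 1 and acf[p] > acf[p - 1] and acf[p] > acf[p + 1]]
```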
1518,DUAL SPACE LEARNING WITH VARIATIONAL AUTOENCODERS,"This paper proposes a dual variational autoencoder, a framework for generating images corresponding to multiclass labels.Recent research on conditional generative models, such as the Conditional VAE, exhibits image transfer by changing labels.However, when the dimension of multiclass labels is large, these models cannot change images corresponding to labels, because learning multiple distributions of the corresponding classes is necessary to transfer an image.This leads to a lack of training data.Therefore, instead of conditioning with labels, we condition with latent vectors that include label information.DualVAE divides one distribution of the latent space by linear decision boundaries using labels.Consequently, DualVAE can easily transfer an image by moving a latent vector toward a decision boundary and is robust to the missing values of multiclass labels.To evaluate our proposed method, we introduce a conditional inception score for measuring how much an image changes to the target class.We evaluate the images transferred by DualVAE using the CIS on the CelebA dataset and demonstrate state-of-the-art performance in a multiclass setting.", a new framework using dual space for generating images corresponding to multiclass labels when the number of classes is large 1519,Graph Classification with Geometric Scattering,"One of the most notable contributions of deep learning is the application of convolutional neural networks to structured signal classification, and in particular image classification.Beyond their impressive performances in supervised learning, the structure of such networks inspired the development of deep filter banks referred to as scattering transforms.These transforms apply a cascade of wavelet transforms and complex modulus operators to extract features that are invariant to group operations and stable to deformations.Furthermore, ConvNets inspired recent advances in geometric deep learning, which aim to generalize these networks to graph data by applying notions from graph signal processing to learn deep graph filter cascades.We further advance these lines of research by proposing a geometric scattering transform using graph wavelets defined in terms of random walks on the graph.We demonstrate the utility of features extracted with this designed deep filter bank in graph classification of biochemistry and social network data, and in data exploration, where they enable inference of EC exchange preferences in enzyme evolution.","We present a new feed forward graph ConvNet based on generalizing the wavelet scattering transform of Mallat, and demonstrate its utility in graph classification and data exploration tasks."
1520,CORD: A Consolidated Receipt Dataset for Post-OCR Parsing,"OCR is inevitably linked to NLP since its final output is in text.Advances in document intelligence are driving the need for a unified technology that integrates OCR with various NLP tasks, especially semantic parsing.Since OCR and semantic parsing have been studied as separate tasks so far, the datasets for each task on their own are rich, while those for the integrated post-OCR parsing tasks are relatively insufficient.In this study, we publish a consolidated dataset for receipt parsing as the first step towards post-OCR parsing tasks.The dataset consists of thousands of Indonesian receipts, which contain images and box/text annotations for OCR, and multi-level semantic labels for parsing.The proposed dataset can be used to address various OCR and parsing tasks.",We introduce a large-scale receipt dataset for post-OCR parsing tasks. 1521,Pipelined Training with Stale Weights of Deep Convolutional Neural Networks,"The growth in the complexity of Convolutional Neural Networks is increasing interest in partitioning a network across multiple accelerators during training and pipelining the backpropagation computations over the accelerators.Existing approaches avoid or limit the use of stale weights through techniques such as micro-batching or weight stashing.These techniques either underutilize accelerators or increase memory footprint.We explore the impact of stale weights on the statistical efficiency and performance in a pipelined backpropagation scheme that maximizes accelerator utilization and keeps memory overhead modest.We use 4 CNNs and show that when pipelining is limited to early layers in a network, training with stale weights converges and results in models with comparable inference accuracies to those resulting from non-pipelined training on MNIST and CIFAR-10 datasets; a drop in accuracy of 0.4%, 4%, 0.83% and 1.45% for the 4 networks, respectively.However, when pipelining is deeper in the network, inference accuracies drop significantly.We propose combining pipelined and non-pipelined training in a hybrid scheme to address this drop.We demonstrate the implementation and performance of our pipelined backpropagation in PyTorch on 2 GPUs using ResNet, achieving speedups of up to 1.8X over a 1-GPU baseline, with a small drop in inference accuracy.",Accelerating CNN training on a Pipeline of Accelerators with Stale Weights 1522,Excessive Invariance Causes Adversarial Vulnerability,"Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs.One core idea of adversarial example research is to reveal neural network errors under such distribution shifts.We decompose these errors into two complementary sources: sensitivity and invariance.We show deep networks are not only too sensitive to task-irrelevant changes of their input, as is well-known from epsilon-adversarial examples, but are also too invariant to a wide range of task-relevant changes, thus making vast regions in input space vulnerable to adversarial attacks.We show such excessive invariance occurs across various tasks and architecture types.On MNIST and ImageNet one can manipulate the class-specific content of almost any image without changing the hidden activations.We identify an insufficiency of the standard cross-entropy loss as a reason for these failures.Further, we extend this objective based on an information-theoretic analysis so it encourages the model to consider all task-dependent features in 
its decision.This provides the first approach tailored explicitly to overcome excessive invariance and resulting vulnerabilities.","We show deep networks are not only too sensitive to task-irrelevant changes of their input, but also too invariant to a wide range of task-relevant changes, thus making vast regions in input space vulnerable to adversarial attacks." 1523,Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design ,"Flow-based generative models are powerful exact likelihood models with efficient sampling and inference.Despite their computational efficiency, flow-based models generally have much worse density modeling performance compared to state-of-the-art autoregressive models.In this paper, we investigate and improve upon three limiting design choices employed by flow-based models in prior work: the use of uniform noise for dequantization, the use of inexpressive affine flows, and the use of purely convolutional conditioning networks in coupling layers.Based on our findings, we propose Flow++, a new flow-based model that is now the state-of-the-art non-autoregressive model for unconditional density estimation on standard image benchmarks.Our work has begun to close the significant performance gap that has so far existed between autoregressive models and flow-based models.",Improved training of current flow-based generative models (Glow and RealNVP) on density estimation benchmarks 1524,Data augmentation instead of explicit regularization,"Modern deep artificial neural networks have achieved impressive results through models with orders of magnitude more parameters than training examples, which control overfitting with the help of regularization.Regularization can be implicit, as is the case of stochastic gradient descent and parameter sharing in convolutional layers, or explicit.Explicit regularization techniques, the most common forms of which are weight decay and dropout, have proven successful in terms of improved generalization, but they blindly reduce the effective capacity of the model, introduce sensitive hyper-parameters and require deeper and wider architectures to compensate for the reduced capacity.In contrast, data augmentation techniques exploit domain knowledge to increase the number of training examples and improve generalization without reducing the effective capacity and without introducing model-dependent parameters, since it is applied on the training data.In this paper we systematically contrast data augmentation and explicit regularization on three popular architectures and three data sets.Our results demonstrate that data augmentation alone can achieve the same or higher performance than regularized models and exhibits much higher adaptability to changes in the architecture and the amount of training data.",Deep neural networks trained with data augmentation do not require any other explicit regularization (such as weight decay and dropout) and exhibit greater adaptability to changes in the architecture and the amount of training data.
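The dequantization choice that 1523 improves on can be written compactly. For discrete image data, the model's discrete likelihood is lower-bounded by a dequantized continuous one, and uniform dequantization is the special case q(u|x) = 1 (a sketch of the standard bound under the usual assumptions):

```latex
\log P_{\text{model}}(x)
= \log \int_{[0,1)^{D}} p_{\text{model}}(x+u)\,du
\;\ge\; \mathbb{E}_{u \sim q(\cdot \mid x)}\!\left[\log p_{\text{model}}(x+u) - \log q(u \mid x)\right],
```

where Flow++ parameterizes q(u|x) with a flow instead of using the uniform distribution.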
1525,Censoring Representations with Multiple-Adversaries over Random Subspaces,"Adversarial feature learning is one of the promising ways to explicitly constrain neural networks to learn desired representations; for example, AFL could help to learn anonymized representations so as to avoid privacy issues.AFL learns such representations by training the network to deceive an adversary that predicts the sensitive information from the network, and therefore the success of AFL heavily relies on the choice of the adversary.This paper proposes a novel design of the adversary that instantiates the concept of multiple adversaries over random subspaces.The proposed method is motivated by the assumption that deceiving an adversary could fail to give meaningful information if the adversary is easily fooled, and an adversary relying on a single classifier suffers from this issue.In contrast, the proposed method is designed to be less vulnerable by utilizing an ensemble of independent classifiers where each classifier tries to predict sensitive variables from a different random subspace of the representations.The empirical validation on three user-anonymization tasks shows that our proposed method achieves state-of-the-art performance on all three datasets without significantly harming the utility of the data.This is significant because it gives new implications for designing the adversary, which is important to improve the performance of AFL.","This paper improves the quality of the recently proposed adversarial feature learning (AFL) approach for incorporating explicit constraints on representations, by introducing the concept of multiple adversaries over random subspaces. " 1526,Variational inference of latent hierarchical dynamical systems in neuroscience: an application to calcium imaging data,"A key problem in neuroscience, and life sciences more generally, is that data is generated by a hierarchy of dynamical systems.One example of this is in calcium imaging data, where data is generated by a lower-order dynamical system governing calcium flux in neurons, which itself is driven by a higher-order dynamical system of neural computation.Ideally, life scientists would be able to infer the dynamics of both the lower-order systems and the higher-order systems, but this is difficult in high-dimensional regimes.A recent approach using sequential variational auto-encoders demonstrated it was possible to learn the latent dynamics of a single dynamical system for computations during reaching behaviour in the brain, using spiking data modelled as a Poisson process.Here we extend this approach using a ladder method to infer a hierarchy of dynamical systems, allowing us to capture calcium dynamics as well as neural computation.In this approach, spiking events drive lower-order calcium dynamics, and are themselves controlled by a higher-order latent dynamical system.We generate synthetic data by generating firing rates, sampling spike trains, and converting spike trains to fluorescence transients, from two dynamical systems that have been used as key benchmarks in recent literature: a Lorenz attractor, and a chaotic recurrent neural network.We show that our model is better able to reconstruct Lorenz dynamics from fluorescence data than competing methods.However, though our model can reconstruct underlying spike rates and calcium transients from the chaotic neural network well, it does not perform as well at reconstructing firing rates as basic techniques for inferring spikes from calcium data.These results demonstrate that VLAEs are a promising approach for modelling hierarchical 
dynamical systems data in the life sciences, but that inferring the dynamics of lower-order systems can potentially be better achieved with simpler methods.",We extend a successful recurrent variational autoencoder for dynamic systems to model an instance of dynamic systems hierarchy in neuroscience using the ladder method. 1527,Linear Symmetric Quantization of Neural Networks for Low-precision Integer Hardware,"With the proliferation of specialized neural network processors that operate on low-precision integers, the performance of Deep Neural Network inference becomes increasingly dependent on the result of quantization.Despite plenty of prior work on the quantization of weights or activations for neural networks, there is still a wide gap between the software quantizers and the low-precision accelerator implementation, which degrades either the efficiency of networks or that of the hardware due to the lack of software and hardware coordination at design time.In this paper, we propose a learned linear symmetric quantizer for integer neural network processors, which not only quantizes neural parameters and activations to low-bit integers but also accelerates hardware inference by using batch normalization fusion and low-precision accumulators and multipliers.We use a unified way to quantize weights and activations, and the results outperform many previous approaches for various networks such as AlexNet, ResNet, and lightweight models like MobileNet while keeping friendly to the accelerator architecture.Additionally, we apply the method to object detection models and witness high performance and accuracy in YOLO-v2.Finally, we deploy the quantized models on our specialized integer-arithmetic-only DNN accelerator to show the effectiveness of the proposed quantizer.We show that even with linear symmetric quantization, the results can be better than asymmetric or non-linear methods in 4-bit networks.In evaluation, the proposed quantizer induces less than 0.4% accuracy drop in ResNet18, ResNet34, and AlexNet when quantizing the whole network as required by the integer processors.",We introduce an efficient quantization process that allows for performance acceleration on specialized integer-only neural network accelerators. 1528,EnergyNet: Energy-Efficient Dynamic Inference,"The prohibitive energy cost of running high-performance Convolutional Neural Networks has been limiting their deployment on resource-constrained platforms including mobile and wearable devices.We propose a CNN for energy-aware dynamic routing, called the EnergyNet, that achieves adaptive-complexity inference based on the inputs, leading to an overall reduction of run time energy cost without noticeably losing accuracy.That is achieved by proposing an energy loss that captures both computational and data movement costs.We combine it with the accuracy-oriented loss, and learn a dynamic routing policy for skipping certain layers in the network, that optimizes the hybrid loss. Our empirical results demonstrate that, compared to the baseline CNNs, EnergyNet can trim down the energy cost by up to 40% and 65% during inference on the CIFAR-10 and Tiny ImageNet testing sets, respectively, while maintaining the same testing accuracies. 
It is further encouraging to observe that the energy awareness might serve as a training regularization and can even improve prediction accuracy: our models can achieve 0.7% higher top-1 testing accuracy than the baseline on CIFAR-10 when saving up to 27% energy, and 1.0% higher top-5 testing accuracy on Tiny ImageNet when saving up to 50% energy.",This paper proposes a new CNN model that combines energy cost with a dynamic routing strategy to enable adaptive energy-efficient inference. 1529,LSH Softmax: Sub-Linear Learning and Inference of the Softmax Layer in Deep Architectures,"Log-linear models are widely used in machine learning, and in particular are ubiquitous in deep learning architectures in the form of the softmax.While exact inference and learning of these models require linear time, they can be done approximately in sub-linear time with strong concentration guarantees.In this work, we present LSH Softmax, a method to perform sub-linear learning and inference of the softmax layer in the deep learning setting.Our method relies on the popular Locality-Sensitive Hashing to build a well-concentrated gradient estimator, using nearest neighbors and uniform samples.We also present an inference scheme in sub-linear time for LSH Softmax using the Gumbel distribution.On language modeling, we show that Recurrent Neural Networks trained with LSH Softmax perform on par with computing the exact softmax while requiring sub-linear computations.","we present LSH Softmax, a softmax approximation layer for sub-linear learning and inference with strong theoretical guarantees; we showcase both its applicability and efficiency by evaluating on a real-world task: language modeling." 1530,Learning scalable and transferable multi-robot/machine sequential assignment planning via graph embedding,"Can the success of reinforcement learning methods for simple combinatorial optimization problems be extended to multi-robot sequential assignment planning?In addition to the challenge of achieving near-optimal performance in large problems, transferability to an unseen number of robots and tasks is another key challenge for real-world applications.In this paper, we suggest a method that achieves the first success in both challenges for robot/machine scheduling problems. Our method comprises three components.First, we show any robot scheduling problem can be expressed as a random probabilistic graphical model.We develop a mean-field inference method for random PGMs and use it for Q-function inference.Second, we show that transferability can be achieved by carefully designing a two-step sequential encoding of the problem state.Third, we resolve the computational scalability issue of fitted Q-iteration by suggesting a heuristic auction-based Q-iteration fitting method enabled by the transferability we achieved. 
We apply our method to discrete-time, discrete-space problems and scalably achieve 97% optimality with transferability.This optimality is maintained under stochastic contexts.By extending our method to a continuous-time, continuous-space formulation, we claim to be the first learning-based method with scalable performance for any type of multi-machine scheduling problem; our method scalably achieves performance comparable to popular metaheuristics on identical parallel machine scheduling problems.",RL can solve (stochastic) multi-robot/scheduling problems scalably and transferably using graph embedding 1531,Learning of Sophisticated Curriculums by viewing them as Graphs over Tasks,"Curriculum learning consists in learning a difficult task by first training on an easy version of it, then on more and more difficult versions and finally on the difficult task.To make this learning efficient, given a curriculum and the current learning state of an agent, we need to find the good next tasks to train the agent on.Teacher-Student algorithms assume that the good next tasks are the ones on which the agent is making the fastest progress or regress.We first simplify and improve them.However, two problematic situations may occur, where the agent is mainly trained on tasks it can't learn yet or on tasks it has already learnt.Therefore, we introduce a new algorithm using min-max ordered curriculums that assumes that the good next tasks are the ones that are learnable but not learnt yet.It outperforms Teacher-Student algorithms on small curriculums and significantly outperforms them on sophisticated ones with numerous tasks.",We present a new algorithm for learning by curriculum based on the notion of mastering rate that outperforms previous algorithms. 1532,Neocortical plasticity: an unsupervised cake but no free lunch,"The fields of artificial intelligence and neuroscience have a long history of fertile bi-directional interactions.On the one hand, important inspiration for the development of artificial intelligence systems has come from the study of natural systems of intelligence, the mammalian neocortex in particular.On the other, important inspiration for models and theories of the brain has emerged from artificial intelligence research.A central question at the intersection of these two areas is concerned with the processes by which neocortex learns, and the extent to which they are analogous to the back-propagation training algorithm of deep networks.Matching the data efficiency, transfer and generalisation properties of neocortical learning remains an area of active research in the field of deep learning.Recent advances in our understanding of neuronal, synaptic and dendritic physiology of the neocortex suggest new approaches for unsupervised representation learning, perhaps through a new class of objective functions, which could act alongside or in lieu of back-propagation.Such local learning rules have implicit rather than explicit objectives with respect to the training data, facilitating domain adaptation and generalisation. Incorporating them into deep networks for representation learning could better leverage unlabelled datasets to offer significant improvements in data efficiency of downstream supervised readout learning, and reduce susceptibility to adversarial perturbations, at the cost of a more restricted domain of applicability.",Inspiration from local dendritic processes of neocortical learning to make unsupervised learning great again.
1533,Towards Language Agnostic Universal Representations,"When a bilingual student learns to solve word problems in math, we expect the student to be able to solve these problems in both languages the student is fluent in, even if the math lessons were only taught in one language.However, current representations in machine learning are language dependent.In this work, we present a method to decouple the language from the problem by learning language agnostic representations, thereby allowing us to train a model in one language and apply it to a different one in a zero-shot fashion.We learn these representations by taking inspiration from linguistics, specifically the Universal Grammar hypothesis, and learn universal latent representations that are language agnostic.We demonstrate the capabilities of these representations by showing that models trained on a single language using language agnostic representations achieve very similar accuracies in other languages.","By taking inspiration from linguistics, specifically the Universal Grammar hypothesis, we learn language agnostic universal representations which we can utilize to do zero-shot learning across languages." 1534,Improving Gaussian mixture latent variable model convergence with Optimal Transport,"Generative models with both discrete and continuous latent variables are highly motivated by the structure of many real-world data sets.They present, however, subtleties in training, often manifesting in the discrete latent variable not being leveraged.In this paper, we show why such models struggle to train using traditional log-likelihood maximization, and that they are amenable to training using the Optimal Transport framework of Wasserstein Autoencoders.We find our discrete latent variable to be fully leveraged by the model when trained, without any modifications to the objective function or significant fine tuning.Our model generates comparable samples to other approaches while using relatively simple neural networks, since the discrete latent variable carries much of the descriptive burden.Furthermore, the discrete latent variable provides significant control over generation.",This paper shows that the Wasserstein distance objective enables the training of latent variable models with discrete latents in a case where the Variational Autoencoder objective fails to do so.
1535,An Attention-Based Model for Learning Dynamic Interaction Networks,"While machine learning models achieve human-comparable performance on sequential data, exploiting structured knowledge is still a challenging problem.Spatio-temporal graphs have been proved to be a useful tool to abstract interaction graphs, and previous works exploit carefully designed feed-forward architectures to preserve such structure.We argue that to scale such network designs to real-world problems, a model needs to automatically learn a meaningful representation of the possible relations.Learning such interaction structure is not trivial: on the one hand, a model has to discover the hidden relations between different problem factors in an unsupervised way; on the other hand, the mined relations have to be interpretable.In this paper, we propose an attention module able to project a graph sub-structure into a fixed-size embedding, preserving the influence that the neighbours exert on a given vertex.In a comprehensive evaluation on real-world as well as toy tasks, we found our model competitive against strong baselines.",A graph neural network able to automatically learn and leverage a dynamic interactive graph structure 1536,NAMSG: An Efficient Method for Training Neural Networks,"We introduce NAMSG, an adaptive first-order algorithm for training neural networks.The method is efficient in computation and memory, and is straightforward to implement.It computes the gradients at configurable remote observation points, in order to expedite the convergence by adjusting the step size for directions with different curvatures in the stochastic setting.It also scales the updating vector elementwise by a nonincreasing preconditioner to take advantage of AMSGRAD.We analyze the convergence properties for both convex and nonconvex problems by modeling the training process as a dynamic system, and provide a strategy to select the observation factor without grid search.A data-dependent regret bound is proposed to guarantee the convergence in the convex setting.The method can further achieve an improved regret bound for strongly convex functions.Experiments demonstrate that NAMSG works well in practical problems and compares favorably to popular adaptive methods, such as ADAM, NADAM, and AMSGRAD.",A new algorithm for training neural networks that compares favorably to popular adaptive methods. 1537,Dissecting an Adversarial framework for Information Retrieval,"Recent advances in Generative Adversarial Networks, facilitated by improvements to the framework and successful application to various problems, have resulted in extensions to multiple domains.IRGAN attempts to leverage the framework for Information Retrieval, a task that can be described as modeling the correct conditional probability distribution p(d|q) over the documents, given the query.The work that proposes IRGAN claims that optimizing their minimax loss function will result in a generator which can learn the distribution, but their setup and baseline term steer the model away from an exact adversarial formulation, and this work attempts to point out certain inaccuracies in their formulation.Analyzing their loss curves gives insight into possible mistakes in the loss functions and better performance can be obtained by using the co-training like setup we propose, where two models are trained in a co-operative rather than an adversarial fashion.","Points out problems in loss function used in IRGAN, a recently proposed GAN framework for Information Retrieval. 
Further, a model motivated by co-training is proposed, which achieves better performance." 1538,Federated User Representation Learning,"Collaborative personalization, such as through learned user representations, can improve the prediction accuracy of neural-network-based models significantly.We propose Federated User Representation Learning, a simple, scalable, privacy-preserving and resource-efficient way to utilize existing neural personalization techniques in the Federated Learning setting.FURL divides model parameters into federated and private parameters.Private parameters, such as private user embeddings, are trained locally, but unlike federated parameters, they are not transferred to or averaged on the server.We show theoretically that this parameter split does not affect training for most model personalization approaches.Storing user embeddings locally not only preserves user privacy, but also improves memory locality of personalization compared to on-server training.We evaluate FURL on two datasets, demonstrating a significant improvement in model quality with 8% and 51% performance increases, and approximately the same level of performance as centralized training with only 0% and 4% reductions.Furthermore, we show that user embeddings learned in FL and the centralized setting have a very similar structure, indicating that FURL can learn collaboratively through the shared parameters while preserving user privacy.","We propose Federated User Representation Learning (FURL), a simple, scalable, privacy-preserving and bandwidth-efficient way to utilize existing neural personalization techniques in the Federated Learning (FL) setting." 1539,Discrete Autoencoders for Sequence Models,"Recurrent models for sequences have been recently successful at many tasks, especially for language modeling and machine translation.Nevertheless, it remains challenging to extract good representations from these models.For instance, even though language has a clear hierarchical structure going from characters through words to sentences, it is not apparent in current language models.We propose to improve the representation in sequence models by augmenting current approaches with an autoencoder that is forced to compress the sequence through an intermediate discrete latent space.In order to propagate gradients through this discrete representation we introduce an improved semantic hashing technique.We show that this technique performs well on a newly proposed quantitative efficiency measure.We also analyze latent codes produced by the model, showing how they correspond to words and phrases.Finally, we present an application of the autoencoder-augmented model to generating diverse translations.",Autoencoders for text with a new method for using discrete latent space.
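The parameter split in FURL (1538) is easy to picture in code. Below is a minimal sketch of one federated-averaging round, assuming client models are plain dictionaries and that a "private_" name prefix marks the parameters that never leave the client (names and shapes are illustrative, not the paper's):

```python
import numpy as np

def server_round(client_models):
    # Average only the federated parameters across clients; private_* keys
    # (e.g. user embeddings) are never sent to or averaged on the server.
    federated_keys = [k for k in client_models[0] if not k.startswith("private_")]
    averaged = {k: np.mean([m[k] for m in client_models], axis=0)
                for k in federated_keys}
    for m in client_models:
        m.update(averaged)  # broadcast shared weights; private_* remain untouched
    return client_models

clients = [
    {"encoder_w": np.ones((4, 4)) * i, "private_user_embedding": np.full(8, i)}
    for i in range(3)
]
clients = server_round(clients)
# encoder_w is now identical across clients; private_user_embedding is not.
```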
1540,Unaligned Image-to-Sequence Transformation with Loop Consistency,"We tackle the problem of modeling sequential visual phenomena.Given examples of a phenomenon that can be divided into discrete time steps, we aim to take an input from any such time and realize this input at all other time steps in the sequence.Furthermore, we aim to do this without ground-truth aligned sequences --- avoiding the difficulties of gathering aligned data.This generalizes the unpaired image-to-image problem from generating pairs to generating sequences.We extend cycle consistency to loop consistency and alleviate difficulties associated with learning in the resulting long chains of computation.We show competitive results compared to existing image-to-image techniques when modeling several different data sets including the Earth's seasons and aging of human faces.",LoopGAN extends cycle length in CycleGAN to enable unaligned sequential transformation for more than two time steps. 1541,Stochastic Weight Averaging in Parallel: Large-Batch Training That Generalizes Well,"We propose Stochastic Weight Averaging in Parallel, an algorithm to accelerate DNN training.Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models computed independently and in parallel.The resulting models generalize as well as those trained with small mini-batches but are produced in a substantially shorter time.We demonstrate the reduction in training time and the good generalization performance of the resulting models on the computer vision datasets CIFAR10, CIFAR100, and ImageNet.","We propose SWAP, a distributed algorithm for large-batch training of neural networks." 1542,Large Batch Training of Convolutional Networks with Layer-wise Adaptive Rate Scaling,"A common way to speed up training of large convolutional networks is to add computational units.Training is then performed using data-parallel synchronous Stochastic Gradient Descent with a mini-batch divided between computational units.With an increase in the number of nodes, the batch size grows.However, training with a large batch often results in lower model accuracy.We argue that the current recipe for large batch training is not general enough and training may diverge.To overcome these optimization difficulties, we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling.Using LARS, we scaled AlexNet and ResNet-50 to a batch size of 16K.","A new large batch training algorithm based on Layer-wise Adaptive Rate Scaling (LARS); using LARS, we scaled AlexNet and ResNet-50 to a batch of 16K." 
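A minimal numpy sketch of the layer-wise adaptive rate scaling idea from the LARS abstract above: each layer's update is rescaled by the ratio of its weight norm to its gradient norm. The trust coefficient and weight-decay handling shown here are illustrative assumptions, not the exact recipe from the paper.

    import numpy as np

    def lars_update(weights, grads, lr=0.1, trust=0.001, weight_decay=5e-4):
        """One SGD step with a per-layer (per-tensor) adaptive learning rate."""
        new_weights = []
        for w, g in zip(weights, grads):
            g = g + weight_decay * w
            w_norm, g_norm = np.linalg.norm(w), np.linalg.norm(g)
            # Layer-wise trust ratio: keep the step small relative to ||w|| in every layer.
            local_lr = trust * w_norm / (g_norm + 1e-12) if w_norm > 0 else 1.0
            new_weights.append(w - lr * local_lr * g)
        return new_weights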
1543,Learning Compositional Koopman Operators for Model-Based Control,"Finding an embedding space for a linear approximation of a nonlinear dynamical system enables efficient system identification and control synthesis.The Koopman operator theory lays the foundation for identifying the nonlinear-to-linear coordinate transformations with data-driven methods.Recently, researchers have proposed to use deep neural networks as a more expressive class of basis functions for calculating the Koopman operators.These approaches, however, assume a fixed dimensional state space; they are therefore not applicable to scenarios with a variable number of objects.In this paper, we propose to learn compositional Koopman operators, using graph neural networks to encode the state into object-centric embeddings and using a block-wise linear transition matrix to regularize the shared structure across objects.The learned dynamics can quickly adapt to new environments of unknown physical parameters and produce control signals to achieve a specified goal.Our experiments on manipulating ropes and controlling soft robots show that the proposed method has better efficiency and generalization ability than existing baselines.",Learning compositional Koopman operators for efficient system identification and model-based control. 1544,Scalable Gradients and Variational Inference for Stochastic Differential Equations,"We derive reverse-mode automatic differentiation for solutions of stochastic differential equations, allowing time-efficient and constant-memory computation of pathwise gradients, a continuous-time analogue of the reparameterization trick.Specifically, we construct a backward SDE whose solution is the gradient and provide conditions under which numerical solutions converge.We also combine our stochastic adjoint approach with a stochastic variational inference scheme for continuous-time SDE models, allowing us to learn distributions over functions using stochastic gradient descent.Our latent SDE model achieves competitive performance compared to existing approaches on time series modeling.",We present a constant memory gradient computation procedure through solutions of stochastic differential equations (SDEs) and apply the method for learning latent SDE models. 
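To make the compositional Koopman abstract above concrete, here is a minimal numpy sketch of the underlying idea of Koopman-style system identification: map states into an embedding and fit a single linear transition matrix there by least squares. The random-feature encoder is a stand-in assumption, not the graph neural network encoder the paper proposes.

    import numpy as np

    def random_feature_encoder(X, W, b):
        """Stand-in for a learned encoder: maps states to a higher-dimensional embedding."""
        return np.tanh(X @ W + b)

    def fit_koopman_operator(states, next_states, dim_embed=64, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(states.shape[1], dim_embed))
        b = rng.normal(size=dim_embed)
        Z = random_feature_encoder(states, W, b)
        Z_next = random_feature_encoder(next_states, W, b)
        # Least-squares fit of a linear dynamics matrix K such that Z_next ≈ Z @ K.
        K, *_ = np.linalg.lstsq(Z, Z_next, rcond=None)
        return K, (W, b)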
1545,Coordinated Exploration via Intrinsic Rewards for Multi-Agent Reinforcement Learning,"Solving tasks with sparse rewards is one of the most important challenges in reinforcement learning.In the single-agent setting, this challenge has been addressed by introducing intrinsic rewards that motivate agents to explore unseen regions of their state spaces.Applying these techniques naively to the multi-agent setting results in agents exploring independently, without any coordination among themselves.We argue that learning in cooperative multi-agent settings can be accelerated and improved if agents coordinate with respect to what they have explored.In this paper we propose an approach for learning how to dynamically select between different types of intrinsic rewards which consider not just what an individual agent has explored, but all agents, such that the agents can coordinate their exploration and maximize extrinsic returns.Concretely, we formulate the approach as a hierarchical policy where a high-level controller selects among sets of policies trained on different types of intrinsic rewards and the low-level controllers learn the action policies of all agents under these specific rewards.We demonstrate the effectiveness of the proposed approach in a multi-agent gridworld domain with sparse rewards, and then show that our method scales up to more complex settings by evaluating on the VizDoom platform.","We propose several intrinsic reward functions for encouraging coordinated exploration in multi-agent problems, and introduce an approach to dynamically selecting the best exploration method for a given task, online." 1546,Learning Classifier Synthesis for Generalized Few-Shot Learning,"Object recognition in the real world requires handling long-tailed or even open-ended data.An ideal visual system needs to reliably recognize the populated visual concepts and meanwhile efficiently learn about emerging new categories with a few training instances.Class-balanced many-shot learning and few-shot learning tackle one side of this problem, via either learning strong classifiers for populated categories or learning to learn few-shot classifiers for the tail classes.In this paper, we investigate the problem of generalized few-shot learning -- a model during the deployment is required to not only learn about "tail" categories with few shots, but simultaneously classify the "head" and "tail" categories.We propose Classifier Synthesis Learning (CASTLE), a learning framework that learns how to synthesize calibrated few-shot classifiers in addition to the multi-class classifiers of head classes, leveraging a shared neural dictionary.CASTLE sheds light on inductive GFSL by optimizing one clean and effective GFSL learning objective.It demonstrates superior performance to existing GFSL algorithms and strong baselines on MiniImageNet and TieredImageNet data sets.More interestingly, it outperforms previous state-of-the-art methods when evaluated on standard few-shot learning.",We propose to learn to synthesize few-shot classifiers and many-shot classifiers using one single objective function for GFSL. 
1547,Knossos: Compiling AI with AI,"Machine learning workloads are often expensive to train, taking weeks to converge.The current generation of frameworks relies on custom back-ends in order to achieve efficiency, making it impractical to train models on less common hardware where no such back-ends exist.Knossos builds on recent work that avoids the need for hand-written libraries, instead compiling machine learning models in much the same way one would compile other kinds of software.In order to make the resulting code efficient, the Knossos compiler directly optimises the abstract syntax tree of the program.However, in contrast to traditional compilers that employ hand-written optimisation passes, we take a rewriting approach driven by a search algorithm and a learned value function that evaluates the future potential cost reduction of applying various rewriting actions to the program.We show that Knossos can automatically learn optimisations that past compilers had to implement by hand.Furthermore, we demonstrate that Knossos can achieve a wall-time reduction compared to a hand-tuned compiler on a suite of machine learning programs, including basic linear algebra and convolutional networks.The Knossos compiler has minimal dependencies and can be used on any architecture that supports a C++ toolchain.Since the cost model the proposed algorithm optimises can be tailored to a particular hardware architecture, the proposed approach can potentially be applied to a variety of hardware.",We combine A* search with reinforcement learning to speed up machine learning code 1548,Data-efficient Deep Reinforcement Learning for Dexterous Manipulation,"Grasping an object and precisely stacking it on another is a difficult task for traditional robotic control or hand-engineered approaches.Here we examine the problem in simulation and provide techniques aimed at solving it via deep reinforcement learning.We introduce two straightforward extensions to the Deep Deterministic Policy Gradient algorithm, which make it significantly more data-efficient and scalable.Our results show that by making extensive use of off-policy data and replay, it is possible to find high-performance control policies.Further, our results hint that it may soon be feasible to train successful stacking policies by collecting interactions on real robots.",Data-efficient deep reinforcement learning can be used to learn precise stacking policies. 
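A minimal sketch of the kind of value-guided rewriting loop described in the Knossos abstract above: best-first search over program rewrites, scored by measured cost minus a learned estimate of future savings. The callables `rewrites`, `cost`, and `value_fn` are hypothetical placeholders, not the paper's actual interfaces.

    import heapq

    def optimise_program(program, rewrites, cost, value_fn, budget=1000):
        """Best-first search over rewrite sequences.

        rewrites(p) -> iterable of programs reachable from p by one rewrite
        cost(p)     -> measured or estimated cost of program p
        value_fn(p) -> learned estimate of future cost reduction achievable from p
        """
        best = program
        # Priority = current cost minus predicted future savings (lower is better).
        frontier = [(cost(program) - value_fn(program), 0, program)]
        seen, tie = {repr(program)}, 0
        for _ in range(budget):
            if not frontier:
                break
            _, _, p = heapq.heappop(frontier)
            if cost(p) < cost(best):
                best = p
            for q in rewrites(p):
                if repr(q) not in seen:
                    seen.add(repr(q))
                    tie += 1
                    heapq.heappush(frontier, (cost(q) - value_fn(q), tie, q))
        return best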
1549,Policy Gradient For Multidimensional Action Spaces: Action Sampling and Entropy Bonus,"In recent years deep reinforcement learning has been shown to be adept at solving sequential decision processes with high-dimensional state spaces such as in the Atari games.Many reinforcement learning problems, however, involve high-dimensional discrete action spaces as well as high-dimensional state spaces.In this paper, we develop a novel policy gradient methodology for the case of large multidimensional discrete action spaces.We propose two approaches for creating parameterized policies: LSTM parameterization and a Modified MDP giving rise to Feed-Forward Network parameterization.Both of these approaches provide expressive models to which backpropagation can be applied for training.We then consider entropy bonus, which is typically added to the reward function to enhance exploration.In the case of high-dimensional action spaces, calculating the entropy and the gradient of the entropy requires enumerating all the actions in the action space and running forward and backpropagation for each action, which may be computationally infeasible.We develop several novel unbiased estimators for the entropy bonus and its gradient.Finally, we test our algorithms on two environments: a multi-hunter multi-rabbit grid game and a multi-agent multi-arm bandit problem.",policy parameterizations and unbiased policy entropy estimators for MDP with large multidimensional discrete action space 1550,Asynchronous Stochastic Subgradient Methods for General Nonsmooth Nonconvex Optimization,"Asynchronous distributed methods are a popular way to reduce the communication and synchronization costs of large-scale optimization.Yet, for all their success, little is known about their convergence guarantees in the challenging case of general non-smooth, non-convex objectives, beyond cases where closed-form proximal operator solutions are available.This is all the more surprising since these objectives are the ones appearing in the training of deep neural networks.In this paper, we introduce the first convergence analysis covering asynchronous methods in the case of general non-smooth, non-convex objectives.Our analysis applies to stochastic sub-gradient descent methods both with and without block variable partitioning, and both with and without momentum.It is phrased in the context of a general probabilistic model of asynchronous scheduling accurately adapted to modern hardware properties.We validate our analysis experimentally in the context of training deep neural network architectures.We show their overall successful asymptotic convergence as well as exploring how momentum, synchronization, and partitioning all affect performance.",Asymptotic convergence for stochastic subgradien method with momentum under general parallel asynchronous computation for general nonconvex nonsmooth optimization 1551,Low Bias Gradient Estimates for Very Deep Boolean Stochastic Networks,"Stochastic neural networks with discrete random variables are an important class of models for their expressivity and interpretability.Since direct differentiation and backpropagation is not possible, Monte Carlo gradient estimation techniques have been widely employed for training such models.Efficient stochastic gradient estimators, such Straight-Through and Gumbel-Softmax, work well for shallow models with one or two stochastic layers.Their performance, however, suffers with increasing model complexity.In this work we focus on stochastic networks with multiple 
layers of Boolean latent variables.To analyze such networks, we employ the framework of harmonic analysis for Boolean functions. We use it to derive an analytic formulation for the source of bias in the biased Straight-Through estimator.Based on the analysis we propose FouST, a simple gradient estimation algorithm that relies on three simple bias reduction steps.Extensive experiments show that FouST performs favorably compared to state-of-the-art biased estimators, while being much faster than unbiased ones.To the best of our knowledge FouST is the first gradient estimator used to train very deep stochastic neural networks, with up to 80 deterministic and 11 stochastic layers.",We present a low-bias estimator for Boolean stochastic variable models with many stochastic layers. 1552,ManiGAN: Text-Guided Image Manipulation,"We propose a novel generative adversarial network for visual attribute manipulation, which is able to semantically modify the visual attributes of given images using natural language descriptions.The key to our method is to design a novel co-attention module to combine text and image information rather than simply concatenating two features along the channel direction.Also, a detail correction module is proposed to rectify mismatched attributes of the synthetic image, and to reconstruct text-unrelated contents.Finally, we propose a new metric for evaluating manipulation results, in terms of both the generation of text-related attributes and the reconstruction of text-unrelated contents.Extensive experiments on benchmark datasets demonstrate the advantages of our proposed method, regarding the effectiveness of image manipulation and the capability of generating high-quality results.",We propose a novel method to manipulate given images using natural language descriptions. 1553,Probabilistic Planning with Sequential Monte Carlo methods,"In this work, we propose a novel formulation of planning which views it as a probabilistic inference problem over future optimal trajectories.This enables us to use sampling methods, and thus, tackle planning in continuous domains using a fixed computational budget. We design a new algorithm, Sequential Monte Carlo Planning, by leveraging classical methods in Sequential Monte Carlo and Bayesian smoothing in the context of control as inference.Furthermore, we show that Sequential Monte Carlo Planning can capture multimodal policies and can quickly learn continuous control tasks.","Leveraging control as inference and Sequential Monte Carlo methods, we propose a probabilistic planning algorithm." 
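For context on the FouST abstract above, here is a minimal PyTorch sketch of the biased Straight-Through estimator it analyses: Boolean samples are used on the forward pass, while gradients flow through the underlying probabilities on the backward pass. This illustrates the baseline estimator only, not FouST's bias-reduction steps.

    import torch

    def straight_through_bernoulli(logits):
        """Sample hard {0,1} values; backpropagate as if the output were sigmoid(logits)."""
        probs = torch.sigmoid(logits)
        hard = torch.bernoulli(probs)
        # Forward value is `hard`; the gradient w.r.t. logits follows d(probs)/d(logits).
        return hard.detach() + probs - probs.detach()

    logits = torch.zeros(4, requires_grad=True)
    sample = straight_through_bernoulli(logits)
    sample.sum().backward()
    print(sample, logits.grad)  # gradients exist despite the discrete sampling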
1554,On Self Modulation for Generative Adversarial Networks,"Training Generative Adversarial Networks is notoriously challenging.We propose and study an architectural modification, self-modulation, which improves GAN performance across different data sets, architectures, losses, regularizers, and hyperparameter settings.Intuitively, self-modulation allows the intermediate feature maps of a generator to change as a function of the input noise vector.While reminiscent of other conditioning techniques, it requires no labeled data.In a large-scale empirical study we observe a relative decrease of 5%-35% in FID.Furthermore, all else being equal, adding this modification to the generator leads to improved performance in 124/144 of the studied settings.Self-modulation is a simple architectural change that requires no additional parameter tuning, which suggests that it can be applied readily to any GAN.","A simple GAN modification that improves performance across many losses, architectures, regularization schemes, and datasets. " 1555,A Deep Dive into Count-Min Sketch for Extreme Classification in Logarithmic Memory,"Extreme Classification Methods have become of paramount importance, particularly for Information Retrieval problems, owing to the development of smart algorithms that are scalable to industry challenges.One of the prime classes of models that aim to solve the memory and speed challenge of extreme multi-label learning is Group Testing.Multi-label Group Testing methods construct label groups by grouping original labels either randomly or based on some similarity and then train smaller classifiers to first predict the groups and then recover the original label vectors.Recently, a novel approach called MACH was proposed which projects the huge label vectors to a small and manageable count-min sketch matrix and then learns to predict this matrix to recover the original prediction probabilities.Thereby, the model memory scales as O(log K) for K classes.MACH is a simple algorithm which works exceptionally well in practice.Despite the simplicity of MACH, there is a big gap in the theoretical understanding of the trade-offs it makes.In this paper we fill this gap.Leveraging the theory of the count-min sketch, we provide a precise quantification of the memory-identifiability trade-offs.We extend the theory to the case of multi-label classification, where the dependencies make the estimators hard to calculate in closed form.To mitigate this issue, we propose a novel quadratic approximation using the Inclusion-Exclusion Principle.Our estimator has significantly lower reconstruction error than the typical CMS estimator across various values of the number of classes K, label sparsity and compression ratio.",How to estimate the original probability vector for millions of classes from count-min sketch measurements: a theoretical and practical setup. 
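A minimal PyTorch sketch of the self-modulation idea described in the abstract above: the scale and shift applied to a generator's intermediate feature maps are produced from the input noise vector z rather than from labels. The layer sizes and the placement after batch normalization are illustrative assumptions.

    import torch
    import torch.nn as nn

    class SelfModulatedBlock(nn.Module):
        """Generator block whose normalization parameters are functions of the noise z."""
        def __init__(self, channels, z_dim):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, 3, padding=1)
            self.norm = nn.BatchNorm2d(channels, affine=False)
            self.to_gamma = nn.Linear(z_dim, channels)   # scale as a function of z
            self.to_beta = nn.Linear(z_dim, channels)    # shift as a function of z

        def forward(self, h, z):
            h = self.norm(self.conv(h))
            gamma = self.to_gamma(z).unsqueeze(-1).unsqueeze(-1)
            beta = self.to_beta(z).unsqueeze(-1).unsqueeze(-1)
            return torch.relu((1 + gamma) * h + beta)

    block = SelfModulatedBlock(channels=64, z_dim=128)
    out = block(torch.randn(2, 64, 8, 8), torch.randn(2, 128))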
1556,Fix your classifier: the marginal value of training the last weight layer,"Neural networks are commonly used as models for classification for a wide variety of tasks.Typically, a learned affine transformation is placed at the end of such models, yielding a per-class value used for classification.This classifier can have a vast number of parameters, which grows linearly with the number of possible classes, thus requiring increasingly more resources.In this work we argue that this classifier can be fixed, up to a global scale constant, with little or no loss of accuracy for most tasks, allowing memory and computational benefits.Moreover, we show that by initializing the classifier with a Hadamard matrix we can speed up inference as well.We discuss the implications for current understanding of neural network models.",You can fix the classifier in neural networks without losing accuracy 1557,Found by NEMO: Unsupervised Object Detection from Negative Examples and Motion,"This paper introduces NEMO, an approach to unsupervised object detection that uses motion---instead of image labels---as a cue to learn object detection.To discriminate between motion of the target object and other changes in the image, it relies on negative examples that show the scene without the object.The required data can be collected very easily by recording two short videos, a positive one showing the object in motion and a negative one showing the scene without the object.Without any additional form of pretraining or supervision and despite of occlusions, distractions, camera motion, and adverse lighting, those videos are sufficient to learn object detectors that can be applied to new videos and even generalize to unseen scenes and camera angles.In a baseline comparison, unsupervised object detection outperforms off-the shelf template matching and tracking approaches that are given an initial bounding box of the object.The learned object representations are also shown to be accurate enough to capture the relevant information from manipulation task demonstrations, which makes them applicable to learning from demonstration in robotics.An example of object detection that was learned from 3 minutes of video can be found here: http://y2u.be/u_jyz9_ETz4",Learning to detect objects without image labels from 3 minutes of video 1558,ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning,"Recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension.It is time to introduce more challenging datasets to push the development of this field towards more comprehensive reasoning of text.In this paper, we introduce a new Reading Comprehension dataset requiring logical reasoning extracted from standardized graduate admission examinations.As earlier studies suggest, human-annotated datasets usually contain biases, which are often exploited by models to achieve high accuracy without truly understanding the text.In order to comprehensively evaluate the logical reasoning ability of models on ReClor, we propose to identify biased data points and separate them into EASY set while the rest as HARD set.Empirical results show that the state-of-the-art models have an outstanding ability to capture biases contained in the dataset with high accuracy on EASY set.However, they struggle on HARD set with poor performance near that of random guess, indicating more research is needed to essentially enhance the logical reasoning ability of current models.","We 
introduce ReClor, a reading comprehension dataset requiring logical reasoning, and find that current state-of-the-art models struggle with real logical reasoning with poor performance near that of random guess." 1559,"Garbage in, model out: Weight theft with just noise","This paper explores the scenarios under which an attacker can claim that ‘Noise and access to the softmax layer of the model is all you need’ to steal the weights of a convolutional neural network whose architecture is already known.We were able to achieve 96% test accuracy using the stolen MNIST model and 82% accuracy using the stolen KMNIST model learned using only i.i.d. Bernoulli noise inputs.We posit that this theft-susceptibility of the weights is indicative of the complexity of the dataset and propose a new metric that captures the same.The goal of this dissemination is to not just showcase how far knowing the architecture can take you in terms of model stealing, but to also draw attention to these rather idiosyncratic weight learnability aspects of CNNs spurred by i.i.d. noise input.We also disseminate some initial results obtained using the Ising probability distribution in lieu of the i.i.d. Bernoulli distribution","Input only noise, glean the softmax outputs, steal the weights" 1560,Implicit λ-Jeffreys Autoencoders: Taking the Best of Both Worlds,"We propose a new form of an autoencoding model which incorporates the best properties of variational autoencoders and generative adversarial networks.It is known that GANs can produce very realistic samples while VAEs do not suffer from the mode collapse problem.Our model optimizes the λ-Jeffreys divergence between the model distribution and the true data distribution.We show that it takes the best properties of VAE and GAN objectives.It consists of two parts.One of these parts can be optimized by using the standard adversarial training, and the second one is the very objective of the VAE model.However, the straightforward way of substituting the VAE loss does not work well if we use an explicit likelihood such as a Gaussian or Laplace, which has limited flexibility in high dimensions and is unnatural for modelling images in the space of pixels.To tackle this problem we propose a novel approach to train the VAE model with an implicit likelihood by an adversarially trained discriminator.In an extensive set of experiments on the CIFAR-10 and TinyImageNet datasets, we show that our model achieves state-of-the-art generation and reconstruction quality and demonstrate how we can balance between mode-seeking and mode-covering behaviour of our model by adjusting the weight λ in our objective.",We propose a new form of an autoencoding model which incorporates the best properties of variational autoencoders (VAE) and generative adversarial networks (GAN) 1561,Modulating transfer between tasks in gradient-based meta-learning,"Learning-to-learn or meta-learning leverages data-driven inductive bias to increase the efficiency of learning on a novel task.This approach encounters difficulty when transfer is not mutually beneficial, for instance, when tasks are sufficiently dissimilar or change over time.Here, we use the connection between gradient-based meta-learning and hierarchical Bayes to propose a mixture of hierarchical Bayesian models over the parameters of an arbitrary function approximator such as a neural network.Generalizing the model-agnostic meta-learning algorithm, we present a stochastic expectation maximization procedure to jointly estimate parameter initializations for gradient descent as 
well as a latent assignment of tasks to initializations.This approach better captures the diversity of training tasks as opposed to consolidating inductive biases into a single set of hyperparameters.Our experiments demonstrate better generalization on the standard miniImageNet benchmark for 1-shot classification.We further derive a novel and scalable non-parametric variant of our method that captures the evolution of a task distribution over time as demonstrated on a set of few-shot regression tasks.",We use the connection between gradient-based meta-learning and hierarchical Bayes to learn a mixture of meta-learners that is appropriate for a heterogeneous and evolving task distribution. 1562,Capsules with Inverted Dot-Product Attention Routing,"We introduce a new routing algorithm for capsule networks, in which a child capsule is routed to a parent based only on agreement between the parent's state and the child's vote.Unlike previously proposed routing algorithms, the parent's ability to reconstruct the child is not explicitly taken into account to update the routing probabilities.This simplifies the routing procedure and improves performance on benchmark datasets such as CIFAR-10 and CIFAR-100.The new mechanism 1) designs routing via inverted dot-product attention; 2) imposes Layer Normalization as normalization; and 3) replaces sequential iterative routing with concurrent iterative routing.Besides outperforming existing capsule networks, our model performs on par with a powerful CNN, using less than 25% of the parameters. On a different task of recognizing digits from overlaid digit images, the proposed capsule model performs favorably against CNNs given the same number of layers and neurons per layer. We believe that our work raises the possibility of applying capsule networks to complex real-world tasks.","We present a new routing method for Capsule networks, and it performs on par with ResNet-18 on CIFAR-10/CIFAR-100." 1563,Doc2Dial: a Framework for Dialogue Composition Grounded in Business Documents,"We introduce Doc2Dial, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing.Such data can be used to train automated dialogue agents performing customer care tasks for enterprises or organizations.In particular, the framework takes the documents as input and generates the tasks for obtaining the annotations for simulating dialog flows.The dialog flows are used to guide the collection of utterances produced by crowd workers.The outcomes include dialogue data grounded in the given documents, as well as various types of annotations that help ensure the quality of the data and the flexibility to compose dialogues.","We introduce Doc2Dial, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing for training automated dialogue agents" 1564,MelNet: A Generative Model for Audio in the Frequency Domain,"Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps. While long-range dependencies are difficult to model directly in the time domain, we show that they can be more tractably modelled in two-dimensional time-frequency representations such as spectrograms. 
By leveraging this representational advantage, in conjunction with a highly expressive probabilistic model and a multiscale generation procedure, we design a model capable of generating high-fidelity audio samples which capture structure at timescales which time-domain models have yet to achieve. We demonstrate that our model captures longer-range dependencies than time-domain models such as WaveNet across a diverse set of unconditional generation tasks, including single-speaker speech generation, multi-speaker speech generation, and music generation.",We introduce an autoregressive generative model for spectrograms and demonstrate applications to speech and music generation 1565,Why do deep convolutional networks generalize so poorly to small image transformations?,"Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations.In this paper we show that modern CNNs can drastically change their output when an image is translated in the image plane by a few pixels, and that this failure of generalization also happens with other realistic small image transformations.Furthermore, we see these failures to generalize more frequently in more modern networks.We show that these failures are related to the fact that the architecture of modern CNNs ignores the classical sampling theorem so that generalization is not guaranteed.We also show that biases in the statistics of commonly used image datasets makes it unlikely that CNNs will learn to be invariant to these transformations.Taken together our results suggest that the performance of CNNs in object recognition falls far short of the generalization capabilities of humans.","Modern deep CNNs are not invariant to translations, scalings and other realistic image transformations, and this lack of invariance is related to the subsampling operation and the biases contained in image datasets." 1566,Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis,"We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network.Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network.Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running.In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion.The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training.Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames of new complex human motion w.r.t. 
different styles.",Synthesize complex and extended human motions using an auto-conditioned LSTM network 1567,A Simple Technique to Enable Saliency Methods to Pass the Sanity Checks,"Saliency methods attempt to explain a deep net's decision by assigning a score to each feature/pixel in the input, often doing this credit-assignment via the gradient of the output with respect to the input.Recent work questioned the validity of many of these methods since they do not pass simple sanity checks, which test whether the scores shift/vanish when layers of the trained net are randomized, or when the net is retrained using random labels for the inputs.Surprisingly, the tested methods did not pass these checks: the explanations were relatively unchanged.We propose a simple fix to existing saliency methods that helps them pass sanity checks, which we call competition among pixels.This involves computing saliency maps for all possible labels in the classification task, and using a simple competition among them to identify and remove less relevant pixels from the map.Some theoretical justification is provided for it and its performance is empirically demonstrated on several popular methods.",We devise a mechanism called competition among pixels that allows (approximately) complete saliency methods to pass the sanity checks. 1568,Are You Sure You Want To Do That? Classification with Interpretable Queries,"Classification systems typically act in isolation, meaning they are required to implicitly memorize the characteristics of all candidate classes in order to classify.The cost of this is increased memory usage and poor sample efficiency.We propose a model which instead verifies using reference images during the classification process, reducing the burden of memorization.The model uses iterative non-differentiable queries in order to classify an image.We demonstrate that such a model is feasible to train and can match baseline accuracy while being more parameter efficient.However, we show that finding the correct balance between image recognition and verification is essential to pushing the model towards desired behavior, suggesting that a pipeline of recognition followed by verification is a more promising approach towards designing more powerful networks with simpler architectures.",Image classification via iteratively querying for a reference image from a candidate class with an RNN and using a CNN to compare it to the input image 1569,A Main/Subsidiary Network Framework for Simplifying Binary Neural Networks,"To reduce memory footprint and run-time latency, techniques such as neural network pruning and binarization have been explored separately. However, it is unclear how to combine the best of the two worlds to get extremely small and efficient models. In this paper, we, for the first time, define the filter-level pruning problem for binary neural networks, which cannot be solved by simply migrating existing structural pruning methods for full-precision models. A novel learning-based approach is proposed to prune filters in our main/subsidiary network framework, where the main network is responsible for learning representative features to optimize the prediction performance, and the subsidiary component works as a filter selector on the main network. To avoid gradient mismatch when training the subsidiary component, we propose a layer-wise and bottom-up scheme. We also provide a theoretical and experimental comparison between our learning-based and greedy rule-based methods. 
Finally, we empirically demonstrate the effectiveness of our approach applied to several binary models, including binarized NIN, VGG-11, and ResNet-18, on various image classification datasets. For binary ResNet-18 on ImageNet, we use 78.6% of the filters but achieve a slightly better test error (49.87%) than the original model",We define the filter-level pruning problem for binary neural networks for the first time and propose a method to solve it. 1570,AntMan: Sparse Low-Rank Compression To Accelerate RNN Inference,"Wide adoption of complex RNN-based models is hindered by their inference performance, cost and memory requirements.To address this issue, we develop AntMan, combining structured sparsity with low-rank decomposition synergistically, to reduce model computation, size and execution time of RNNs while attaining desired accuracy.AntMan extends knowledge distillation based training to learn the compressed models efficiently.Our evaluation shows that AntMan offers up to 100x computation reduction with less than 1pt accuracy drop for language and machine reading comprehension models.Our evaluation also shows that for a given accuracy target, AntMan produces 5x smaller models than the state of the art.Lastly, we show that AntMan offers super-linear speed gains compared to theoretical speedup, demonstrating its practical value on commodity hardware.","Reducing computation and memory complexity of RNN models by up to 100x using sparse low-rank compression modules, trained via knowledge distillation." 1571,Residual Gated Graph ConvNets,"Graph-structured data such as social networks, functional brain networks, gene regulatory networks, and communication networks have brought interest in generalizing deep learning techniques to graph domains.In this paper, we are interested in designing neural networks for graphs with variable length in order to solve learning problems such as vertex classification, graph classification, graph regression, and graph generative tasks.Most existing works have focused on recurrent neural networks to learn meaningful representations of graphs, and more recently new convolutional neural networks have been introduced.In this work, we want to compare rigorously these two fundamental families of architectures to solve graph learning tasks.We review existing graph RNN and ConvNet architectures, and propose natural extensions of LSTMs and ConvNets to graphs of arbitrary size.Then, we design a set of analytically controlled experiments on two basic graph problems, i.e. subgraph matching and graph clustering, to test the different architectures. Numerical results show that the proposed graph ConvNets are 3-17% more accurate and 1.5-4x faster than graph RNNs.Graph ConvNets are also 36% more accurate than variational techniques.Finally, the most effective graph ConvNet architecture uses gated edges and residuality.Residuality plays an essential role in learning multi-layer architectures, providing a 10% gain in performance.","We compare graph RNNs and graph ConvNets, and we consider the most generic class of graph ConvNets with residuality." 
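A minimal numpy sketch of a gated, residual graph convolution in the spirit of the Residual Gated Graph ConvNets abstract above: each vertex aggregates neighbour features weighted by learned edge gates and adds the result to its input. The exact parameterization (single weight matrices, sigmoid gates, ReLU update) is an illustrative assumption.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def residual_gated_graph_conv(H, A, U, V, Wg_src, Wg_dst):
        """H: (n, d) vertex features, A: (n, n) adjacency; returns updated (n, d) features."""
        n = H.shape[0]
        out = H.copy()                              # residual connection
        msgs = np.zeros_like(H)
        for i in range(n):
            for j in range(n):
                if A[i, j]:
                    gate = sigmoid(H[i] @ Wg_src + H[j] @ Wg_dst)   # edge gate
                    msgs[i] += gate * (H[j] @ V)                    # gated neighbour message
        out += np.maximum(0.0, H @ U + msgs)                        # ReLU update + residual
        return out

    rng = np.random.default_rng(0)
    H = rng.normal(size=(5, 8))
    A = (rng.random((5, 5)) < 0.4).astype(float)
    params = [rng.normal(size=(8, 8)) * 0.1 for _ in range(4)]
    H_next = residual_gated_graph_conv(H, A, *params)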
1572,Complex- and Real-Valued Neural Network Architectures,"Complex-valued neural networks are not a new concept; however, the use of real values has often been favoured over complex values due to difficulties in training and accuracy of results.Existing literature ignores the number of parameters used.We compared complex- and real-valued neural networks using five activation functions.We found that when real and complex neural networks are compared using simple classification tasks, complex neural networks perform equally to or slightly worse than real-valued neural networks.However, when a specialised architecture is used, complex-valued neural networks outperform real-valued neural networks.Therefore, complex-valued neural networks should be used when the input data is also complex or can be meaningfully mapped to the complex plane, or when the network architecture uses the structure defined by using complex numbers.",Comparison of complex- and real-valued multi-layer perceptrons with respect to the number of real-valued parameters. 1573,CayleyNets: Spectral Graph CNNs with Complex Rational Filters,"The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with the resounding success of deep learning in various applications, has brought interest in generalizing deep learning models to non-Euclidean domains.In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs.The core ingredient of our model is a new class of parametric rational complex functions allowing efficient computation of spectral filters on graphs that specialize in frequency bands of interest.Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely-connected graphs, and can handle different constructions of Laplacian operators.Extensive experimental results show the superior performance of our approach on spectral image classification, community detection, vertex classification and matrix completion tasks.",A spectral graph convolutional neural network with spectral zoom properties. 
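To illustrate what a complex-valued layer from the abstract above looks like when parameter counts are compared with real-valued layers, here is a minimal numpy sketch of a complex dense layer with a split activation. Applying the nonlinearity separately to real and imaginary parts is one common convention, assumed here for simplicity.

    import numpy as np

    def complex_dense(x, W, b):
        """Complex affine map y = W x + b; W carries 2x the real parameters of a real W."""
        return W @ x + b

    def split_relu(z):
        """Apply ReLU separately to the real and imaginary parts."""
        return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

    rng = np.random.default_rng(0)
    x = rng.normal(size=4) + 1j * rng.normal(size=4)
    W = (rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))) / np.sqrt(4)
    b = np.zeros(3, dtype=complex)
    y = split_relu(complex_dense(x, W, b))
    print(y)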
1574,FasterSeg: Searching for Faster Real-time Semantic Segmentation,"We present FasterSeg, an automatically designed semantic segmentation network with not only state-of-the-art performance but also faster speed than current methods.Utilizing neural architecture search, FasterSeg is discovered from a novel and broader search space integrating multi-resolution branches, that has been recently found to be vital in manually designed segmentation models.To better calibrate the balance between the goals of high accuracy and low latency, we propose a decoupled and fine-grained latency regularization, that effectively overcomes our observed phenomenons that the searched networks are prone to ""collapsing"" to low-latency yet poor-accuracy models.Moreover, we seamlessly extend FasterSeg to a new collaborative search framework, simultaneously searching for a teacher and a student network in the same single run.The teacher-student distillation further boosts the student model’s accuracy.Experiments on popular segmentation benchmarks demonstrate the competency of FasterSeg.For example, FasterSeg can run over 30% faster than the closest manually designed competitor on Cityscapes, while maintaining comparable accuracy.","We present a real-time segmentation model automatically discovered by a multi-scale NAS framework, achieving 30% faster than state-of-the-art models." 1575,Making Efficient Use of Demonstrations to Solve Hard Exploration Problems,"This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions.We also introduce a suite of eight tasks that combine these three properties, and show that R2D3 can solve several of the tasks where other state of the art methods fail to see even a single successful trajectory after tens of billions of steps of exploration.","We introduce R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions." 1576,An Investigation of Memory in Recurrent Neural Networks,"We investigate the learned dynamical landscape of a recurrent neural network solving a simple task requiring the interaction of two memory mechanisms: long- and short-term.Our results show that while long-term memory is implemented by asymptotic attractors, sequential recall is now additionally implemented by oscillatory dynamics in a transverse subspace to the basins of attraction of these stable steady states.Based on our observations, we propose how different types of memory mechanisms can coexist and work together in a single neural network, and discuss possible applications to the fields of artificial intelligence and neuroscience.",We investigate how a recurrent neural network successfully learns a task combining long-term memory and sequential recall. 
1577,Count-Based Exploration with the Successor Representation,"The problem of exploration in reinforcement learning is well-understood in the tabular case and many sample-efficient algorithms are known.Nevertheless, it is often unclear how the algorithms in the tabular setting can be extended to tasks with large state-spaces where generalization is required.Recent promising developments generally depend on problem-specific density models or handcrafted features.In this paper we introduce a simple approach for exploration that allows us to develop theoretically justified algorithms in the tabular case but that also give us intuitions for new algorithms applicable to settings where function approximation is required.Our approach and its underlying theory is based on the substochastic successor representation, a concept we develop here.While the traditional successor representation is a representation that defines state generalization by the similarity of successor states, the substochastic successor representation is also able to implicitly count the number of times each state has been observed.This extension connects two until now disjoint areas of research.We show in traditional tabular domains that our algorithm empirically performs as well as other sample-efficient algorithms.We then describe a deep reinforcement learning algorithm inspired by these ideas and show that it matches the performance of recent pseudo-count-based methods in hard exploration Atari 2600 games.","We propose the idea of using the norm of the successor representation an exploration bonus in reinforcement learning. In hard exploration Atari games, our the deep RL algorithm matches the performance of recent pseudo-count-based methods." 1578,Likelihood Contribution based Multi-scale Architecture for Generative Flows,"Deep generative modeling using flows has gained popularity owing to the tractable exact log-likelihood estimation with efficient training and synthesis process.However, flow models suffer from the challenge of having high dimensional latent space, same in dimension as the input space.An effective solution to the above challenge as proposed by Dinh et al. 
is a multi-scale architecture, which is based on iterative early factorization of a part of the total dimensions at regular intervals.Prior works on generative flows involving a multi-scale architecture perform the dimension factorization based on a static masking.We propose a novel multi-scale architecture that performs data-dependent factorization to decide which dimensions should pass through more flow layers.To facilitate the same, we introduce a heuristic based on the contribution of each dimension to the total log-likelihood, which encodes the importance of the dimensions.Our proposed heuristic is readily obtained as part of the flow training process, enabling versatile implementation of our likelihood-contribution-based multi-scale architecture for generic flow models.We present such an implementation for the original flow introduced in Dinh et al., and demonstrate improvements in log-likelihood score and sampling quality on standard image benchmarks.We also conduct ablation studies to compare the proposed method with other options for dimension factorization.",Data-dependent factorization of dimensions in a multi-scale architecture based on contribution to the total log-likelihood 1579,Towards better understanding of gradient-based attribution methods for Deep Neural Networks,"Understanding the flow of information in Deep Neural Networks is a challenging problem that has gained increasing attention over the last few years.While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective.What is more, no exhaustive empirical comparison has been performed in the past.In this work we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them.By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation.Finally, we propose a novel evaluation metric, called Sensitivity-n, and test the gradient-based attribution methods alongside a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures.",Four existing backpropagation-based attribution methods are fundamentally similar. How to assess it? 
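As a concrete reference point for the attribution-methods abstract above, here is a minimal PyTorch sketch of one of the simplest gradient-based attribution methods in this family, gradient × input. The toy model is a placeholder; the paper compares several such methods and their relationships.

    import torch
    import torch.nn as nn

    def gradient_times_input(model, x, target_class):
        """Attribution score per input feature: (d output_c / d x_i) * x_i."""
        x = x.clone().detach().requires_grad_(True)
        score = model(x)[0, target_class]
        score.backward()
        return (x.grad * x).detach()

    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))
    x = torch.randn(1, 10)
    attributions = gradient_times_input(model, x, target_class=2)
    print(attributions)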
1580,Stochastic algorithms under single spiked models,"We study SGD and Adam for estimating a rank one signal planted in matrix or tensor noise.The extreme simplicity of the problem setup allows us to isolate the effects of various factors: signal-to-noise ratio, density of critical points, stochasticity and initialization.We observe a surprising phenomenon: Adam seems to get stuck in local minima as soon as polynomially many critical points appear, while SGD escapes those.However, when the number of critical points grows exponentially, then both algorithms get trapped.Theory tells us that at fixed SNR the problem becomes intractable for large problem dimensions, and in our experiments SGD does not escape this.We exhibit the benefits of warm starting in those situations.We conclude that in this class of problems, warm starting cannot be replaced by stochasticity in gradients to find the basin of attraction.",SGD and Adam under the single spiked model for tensor PCA 1581,Explaining Neural Networks Semantically and Quantitatively,"This paper presents a method to explain the knowledge encoded in a convolutional neural network quantitatively and semantically.How to analyze the specific rationale of each prediction made by the CNN is one of the key issues in understanding neural networks, and it is also of significant practical value in certain applications.In this study, we propose to distill knowledge from the CNN into an explainable additive model, so that we can use the explainable model to provide a quantitative explanation for the CNN prediction.We analyze the typical bias-interpreting problem of the explainable model and develop prior losses to guide the learning of the explainable additive model.Experimental results have demonstrated the effectiveness of our method.",This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically. 1582,cGANs with Projection Discriminator,"We propose a novel, projection-based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model.This approach is in contrast with most frameworks of conditional GANs used in applications today, which use the conditional information by concatenating the conditional vector to the feature vectors.With this modification, we were able to significantly improve the quality of class-conditional image generation on the ILSVRC2012 dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator.We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images.This new structure also enabled high-quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator.","We propose a novel, projection-based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model." 
1583,The Frechet Distance of training and test distribution predicts the generalization gap,"Learning theory tells us that more data is better when minimizing the generalization error of identically distributed training and test sets.However, when training and test distribution differ, this distribution shift can have a significant effect.With a novel perspective on function transfer learning, we are able to lower bound the change of performance when transferring from training to test set with the Wasserstein distance between the embedded training and test set distribution.We find that there is a trade-off affecting performance between how invariant a function is to changes in training and test distribution and how large this shift in distribution is.Empirically across several data domains, we substantiate this viewpoint by showing that test performance correlates strongly with the distance in data distributions between training and test set.Complementary to the popular belief that more data is always better, our results highlight the utility of also choosing a training data distribution that is close to the test data distribution when the learned function is not invariant to such changes.",The Frechet Distance between train and test distribution correlates with the change in performance for functions that are not invariant to the shift. 1584,Spiroplots: a new discrete-time dynamical system to generate curve patterns,"We introduce a new procedural dynamic system that can generate a variety of shapes that often appear as curves, but technically, the figures are plots of many points.We name them spiroplots and show how this new system relates to other procedures or processes that generate figures.Spiroplots are an extremely simple process but with a surprising visual variety.We prove some fundamental properties and analyze some instances to see how the geometry or topology of the input determines the generated figures.We show that some spiroplots have a finite cycle and return to the initial situation, whereas others will produce new points infinitely often.This paper is accompanied by a JavaScript app that allows anyone to generate spiroplots.","A new, very simple dynamic system is introduced that generates pretty patterns; properties are proved and possibilities are explored" 1585,GMM-UNIT: Unsupervised Multi-Domain and Multi-Modal Image-to-Image Translation via Attribute Gaussian Mixture Modelling,"Unsupervised image-to-image translation aims to learn a mapping between several visual domains by using unpaired training pairs.Recent studies have shown remarkable success in image-to-image translation for multiple domains but they suffer from two main limitations: they are either built from several two-domain mappings that are required to be learned independently and/or they generate low-diversity results, a phenomenon known as model collapse.To overcome these limitations, we propose a method named GMM-UNIT based on a content-attribute disentangled representation, where the attribute space is fitted with a GMM.Each GMM component represents a domain, and this simple assumption has two prominent advantages.First, the dimension of the attribute space does not grow linearly with the number of domains, as it is the case in the literature.Second, the continuous domain encoding allows for interpolation between domains and for extrapolation to unseen domains.Additionally, we show how GMM-UNIT can be constrained down to different methods in the literature, meaning that GMM-UNIT is a unifying framework 
for unsupervised image-to-image translation.",GMM-UNIT is an image-to-image translation model that maps an image to multiple domains in a stochastic fashion. 1586,Compositional Attention Networks for Machine Reasoning,"We present Compositional Attention Networks, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning.While many types of neural networks are effective at learning and generalizing from massive quantities of data, this model moves away from monolithic black-box architectures towards a design that provides a strong prior for iterative reasoning, enabling it to support explainable and structured learning, as well as generalization from a modest amount of data.The model builds on the great success of existing recurrent cells such as LSTMs: It sequences a single recurrent Memory, Attention, and Control cell, and by careful design imposes structural constraints on the operation of each cell and the interactions between them, incorporating explicit control and soft attention mechanisms into their interfaces.We demonstrate the model's strength and robustness on the challenging CLEVR dataset for visual reasoning, achieving a new state-of-the-art 98.9% accuracy, halving the error rate of the previous best model.More importantly, we show that the new model is more computationally efficient, data-efficient, and requires an order of magnitude less time and/or data to achieve good results.","We present a novel architecture, based on dynamic memory, attention and composition for the task of machine reasoning." 1587,Imagining the Latent Space of a Variational Auto-Encoder,"Variational Auto-Encoders are designed to capture compressible information about a dataset. As a consequence, the information stored in the latent space is seldom sufficient to reconstruct a particular image. To help understand the type of information stored in the latent space we train a GAN-style decoder constrained to produce images that the VAE encoder will map to the same region of latent space.This allows us to imagine the information captured in the latent space.We argue that this is necessary to make a VAE into a truly generative model. We use our GAN to visualise the latent space of a standard VAE and of a β-VAE.","To understand the information stored in the latent space, we train a GAN-style decoder constrained to produce images that the VAE encoder will map to the same region of latent space." 
1588,Hallucinating brains with artificial brains,"Human brain function, as measured by functional magnetic resonance imaging, exhibits a rich diversity.In response, understanding the individual variability of brain function and its association with behavior has become one of the major concerns in modern cognitive neuroscience.Our work is motivated by the view that generative models provide a useful tool for understanding this variability.To this end, this manuscript presents two novel generative models trained on real neuroimaging data which synthesize task-dependent functional brain images.Brain images are high dimensional tensors which exhibit structured spatial correlations.Thus, both models are 3D conditional Generative Adversarial networks which apply Convolutional Neural Networks to learn an abstraction of brain image representations.Our results show that the generated brain images are diverse, yet task dependent.In addition to qualitative evaluation, we utilize the generated synthetic brain volumes as additional training data to improve downstream fMRI classifiers.Our approach achieves significant improvements for a variety of datasets, classification tasks and evaluation scores.Our classification results provide a quantitative evaluation of the quality of the generated images, and also serve as an additional contribution of this manuscript.",Two novel GANs are constructed to generate high-quality 3D fMRI brain images and synthetic brain images greatly help to improve downstream classification tasks. 1589,DARTS: Differentiable Architecture Search,"This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner.Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent.Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.","We propose a differentiable architecture search algorithm for both convolutional and recurrent networks, achieving competitive performance with the state of the art using orders of magnitude less computation resources." 1590,The Curious Case of Neural Text Degeneration,"Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model. 
The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration — output text that is bland, incoherent, or gets stuck in repetitive loops.To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher quality text out of neural language models.Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass.To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition.Our results show that maximization is an inappropriate decoding objective for open-ended text generation, the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation and Nucleus Sampling is the best decoding strategy for generating long-form text that is both high-quality — as measured by human evaluation — and as diverse as human-written text.",Current language generation systems either aim for high likelihood and devolve into generic repetition or miscalibrate their stochasticity—we provide evidence of both and propose a solution: Nucleus Sampling. 1591,Natural Language Adversarial Attack and Defense in Word Level,"Up until very recently, inspired by a mass of researches on adversarial examples for computer vision, there has been a growing interest in designing adversarial attacks for Natural Language Processing tasks, followed by very few works of adversarial defenses for NLP.To our knowledge, there exists no defense method against the successful synonym substitution based attacks that aim to satisfy all the lexical, grammatical, semantic constraints and thus are hard to perceived by humans.We contribute to fill this gap and propose a novel adversarial defense method called Synonym Encoding Method, which inserts an encoder before the input layer of the model and then trains the model to eliminate adversarial perturbations.Extensive experiments demonstrate that SEM can efficiently defend current best synonym substitution based adversarial attacks with little decay on the accuracy for benign examples.To better evaluate SEM, we also design a strong attack method called Improved Genetic Algorithm that adopts the genetic metaheuristic for synonym substitution based attacks.Compared with existing genetic based adversarial attack, IGA can achieve higher attack success rate while maintaining the transferability of the adversarial examples.","The first text adversarial defense method in word level, and the improved generic based attack method against synonyms substitution based attacks." 1592,Sparse Attentive Backtracking: Long-Range Credit Assignment in Recurrent Networks,"A major drawback of backpropagation through time is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation.This makes BPTT both computationally impractical and biologically implausible.For this reason, full backpropagation through time is rarely used on long sequences, and truncated backpropagation through time is used as a heuristic. 
However, this usually leads to biased estimates of the gradient in which longer term dependencies are ignored. Addressing this issue, we propose an alternative algorithm, Sparse Attentive Backtracking, which might also be related to principles used by brains to learn long-term dependencies.Sparse Attentive Backtracking learns an attention mechanism over the hidden states of the past and selectively backpropagates through paths with high attention weights. This allows the model to learn long term dependencies while only backtracking for a small number of time steps, not just from the recent past but also from attended relevant past states. ",Towards Efficient Credit Assignment in Recurrent Networks without Backpropagation Through Time 1593,Learning Discriminators as Energy Networks in Adversarial Learning,"We propose a novel adversarial learning framework in this work.Existing adversarial learning methods involve two separate networks, i.e., the structured prediction models and the discriminative models, in the training.The information captured by discriminative models complements that in the structured prediction models, but few existing researches have studied on utilizing such information to improve structured prediction models at the inference stage.In this work, we propose to refine the predictions of structured prediction models by effectively integrating discriminative models into the prediction.Discriminative models are treated as energy-based models.Similar to the adversarial learning, discriminative models are trained to estimate scores which measure the quality of predicted outputs, while structured prediction models are trained to predict contrastive outputs with maximal energy scores.In this way, the gradient vanishing problem is ameliorated, and thus we are able to perform inference by following the ascent gradient directions of discriminative models to refine structured prediction models.The proposed method is able to handle a range of tasks, , multi-label classification and image segmentation. Empirical results on these two tasks validate the effectiveness of our learning method.","We propose a novel adversarial learning framework for structured prediction, in which discriminative models can be used to refine structured prediction models at the inference stage. " 1594,RaPP: Novelty Detection with Reconstruction along Projection Pathway,"We propose RaPP, a new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder.Precisely, RaPP compares input and its autoencoder reconstruction not only in the input space but also in the hidden spaces.We show that if we feed a reconstructed input to the same autoencoder again, its activated values in a hidden space are equivalent to the corresponding reconstruction in that hidden space given the original input.In order to aggregate the hidden space activation values, we propose two metrics, which enhance the novelty detection performance.Through extensive experiments using diverse datasets, we validate that RaPP improves novelty detection performances of autoencoder-based approaches.Besides, we show that RaPP outperforms recent novelty detection methods evaluated on popular benchmarks.",A new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder. 
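A hedged sketch of the truncation rule described in record 1590 above ("The Curious Case of Neural Text Degeneration"): Nucleus Sampling keeps the smallest set of tokens whose cumulative probability exceeds p, renormalizes, and samples from that set. The threshold p=0.9 and the single-vector interface are assumptions of this illustration, not the paper's reference implementation:

```python
import torch

def nucleus_sample(logits, p=0.9):
    """Sample one token id from the smallest ('nucleus') set of tokens whose
    cumulative probability exceeds p; the remaining tail is truncated."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    cutoff = int((cumulative > p).nonzero()[0])          # first index past mass p
    nucleus_probs = sorted_probs[: cutoff + 1]
    nucleus_probs = nucleus_probs / nucleus_probs.sum()  # renormalize kept mass
    choice = torch.multinomial(nucleus_probs, 1)
    return int(sorted_idx[choice])
```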
1595,From Language to Goals: Inverse Reinforcement Learning for Vision-Based Instruction Following,"Reinforcement learning is a promising framework for solving control problems, but its use in practical situations is hampered by the fact that reward functions are often difficult to engineer.Specifying goals and tasks for autonomous machines, such as robots, is a significant challenge: conventionally, reward functions and goal states have been used to communicate objectives.But people can communicate objectives to each other simply by describing or demonstrating them.How can we build learning algorithms that will allow us to tell machines what we want them to do?In this work, we investigate the problem of grounding language commands as reward functions using inverse reinforcement learning, and argue that language-conditioned rewards are more transferable than language-conditioned policies to new environments.We propose language-conditioned reward learning, which grounds language commands as a reward function represented by a deep neural network.We demonstrate that our model learns rewards that transfer to novel tasks and environments on realistic, high-dimensional visual environments with natural language commands, whereas directly learning a language-conditioned policy leads to poor performance.",We ground language commands in a high-dimensional visual environment by learning language-conditioned rewards using inverse reinforcement learning. 1596,Additive function approximation in the brain,"Many biological learning systems such as the mushroom body, hippocampus, and cerebellum are built from sparsely connected networks of neurons.For a new understanding of such networks, we study the function spaces induced by sparse random features and characterize what functions may and may not be learned.A network with d inputs per neuron is found to be equivalent to an additive model of order d, whereas with a degree distribution the network combines additive terms of different orders.We identify three specific advantages of sparsity: additive function approximation is a powerful inductive bias that limits the curse of dimensionality, sparse networks are stable to outlier noise in the inputs, and sparse random features are scalable.Thus, even simple brain architectures can be powerful function approximators.Finally, we hope that this work helps popularize kernel theories of networks among computational neuroscientists.","We advocate for random features as a theory of biological neural networks, focusing on sparsely connected networks" 1597,Prob2Vec: Mathematical Semantic Embedding for Problem Retrieval in Adaptive Tutoring,"We propose a new application of embedding techniques to problem retrieval in adaptive tutoring.The objective is to retrieve problems similar in mathematical concepts.There are two challenges: First, like sentences, problems helpful to tutoring are never exactly the same in terms of the underlying concepts.Instead, good problems mix concepts in innovative ways, while still displaying continuity in their relationships.Second, it is difficult for humans to determine a similarity score consistent across a large enough training set.We propose a hierarchical problem embedding algorithm, called Prob2Vec, that consists of an abstraction and an embedding step.Prob2Vec achieves 96.88% accuracy on a problem similarity test, in contrast to 75% from directly applying state-of-the-art sentence embedding methods.It is surprising that Prob2Vec is able to distinguish very fine-grained differences 
among problems, an ability humans need time and effort to acquire.In addition, the sub-problem of concept labeling with imbalanced training data set is interesting in its own right.It is a multi-label problem suffering from dimensionality explosion, which we propose ways to ameliorate.We propose the novel negative pre-training algorithm that dramatically reduces false negative and positive ratios for classification, using an imbalanced training data set.","We propose the Prob2Vec method for problem embedding used in a personalized e-learning tool in addition to a data level classification method, called negative pre-training, for cases where the training data set is imbalanced." 1598,CrescendoNet: A Simple Deep Convolutional Neural Network with Ensemble Behavior,"We introduce a new deep convolutional neural network, CrescendoNet, by stacking simple building blocks without residual connections.Each Crescendo block contains independent convolution paths with increased depths.The numbers of convolution layers and parameters are only increased linearly in Crescendo blocks.In experiments, CrescendoNet with only 15 layers outperforms almost all networks without residual connections on benchmark datasets, CIFAR10, CIFAR100, and SVHN.Given sufficient amount of data as in SVHN dataset, CrescendoNet with 15 layers and 4.1M parameters can match the performance of DenseNet-BC with 250 layers and 15.3M parameters.CrescendoNet provides a new way to construct high performance deep convolutional neural networks without residual connections.Moreover, through investigating the behavior and performance of subnetworks in CrescendoNet, we note that the high performance of CrescendoNet may come from its implicit ensemble behavior, which differs from the FractalNet that is also a deep convolutional neural network without residual connections.Furthermore, the independence between paths in CrescendoNet allows us to introduce a new path-wise training procedure, which can reduce the memory needed for training.","We introduce CrescendoNet, a deep CNN architecture by stacking simple building blocks without residual connections." 1599,Deep Random Splines for Point Process Intensity Estimation,"Gaussian processes are the leading class of distributions on random functions, but they suffer from well known issues including difficulty scaling and inflexibility with respect to certain shape constraints.Here we propose Deep Random Splines, a flexible class of random functions obtained by transforming Gaussian noise through a deep neural network whose output are the parameters of a spline.Unlike Gaussian processes, Deep Random Splines allow us to readily enforce shape constraints while inheriting the richness and tractability of deep generative models.We also present an observational model for point process data which uses Deep Random Splines to model the intensity function of each point process and apply it to neuroscience data to obtain a low-dimensional representation of spiking activity.Inference is performed via a variational autoencoder that uses a novel recurrent encoder architecture that can handle multiple point processes as input.",We combine splines with neural networks to obtain a novel distribution over functions and use it to model intensity functions of point processes. 
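A small illustration of the sparse random-feature networks analyzed in record 1596 above ("Additive function approximation in the brain"): when each hidden unit sees only d inputs, a linear readout over the features realizes an order-d additive model. The Gaussian weights, ReLU nonlinearity, and ridge readout are assumptions of this sketch rather than details taken from the paper:

```python
import numpy as np

def sparse_relu_features(X, n_features=1000, fan_in=2, seed=0):
    """Random ReLU features where each hidden unit is wired to only
    `fan_in` of the input dimensions (sparse connectivity)."""
    rng = np.random.default_rng(seed)
    n_samples, n_inputs = X.shape
    H = np.zeros((n_samples, n_features))
    for j in range(n_features):
        idx = rng.choice(n_inputs, size=fan_in, replace=False)
        w, b = rng.normal(size=fan_in), rng.normal()
        H[:, j] = np.maximum(X[:, idx] @ w + b, 0.0)
    return H

# A linear (ridge) readout on top of H then gives an order-`fan_in` additive model:
#   H = sparse_relu_features(X_train)
#   beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(H.shape[1]), H.T @ y_train)
```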
1600,MobileBERT: Task-Agnostic Compression of BERT by Progressive Knowledge Transfer,"The recent development of Natural Language Processing has achieved great success using large pre-trained models with hundreds of millions of parameters.However, these models suffer from the heavy model size and high latency such that we cannot directly deploy them to resource-limited mobile devices.In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model.Like BERT, MobileBERT is task-agnostic; that is, it can be universally applied to various downstream NLP tasks via fine-tuning.MobileBERT is a slimmed version of BERT-LARGE augmented with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.To train MobileBERT, we use a bottom-to-top progressive scheme to transfer the intrinsic knowledge of a specially designed Inverted Bottleneck BERT-LARGE teacher to it.Empirical studies show that MobileBERT is 4.3x smaller and 4.0x faster than original BERT-BASE while achieving competitive results on well-known NLP benchmarks.On the natural language inference tasks of GLUE, MobileBERT achieves a GLUE score degradation of only 0.6, and 367 ms latency on a Pixel 3 phone.On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a 90.0/79.2 dev F1 score, which is 1.5/2.1 higher than BERT-BASE.","We develop a task-agnostically compressed BERT, which is 4.3x smaller and 4.0x faster than BERT-BASE while achieving competitive performance on GLUE and SQuAD." 1601,On importance-weighted autoencoders,"The importance weighted autoencoder is a popular variational-inference method which achieves a tighter evidence bound than standard variational autoencoders by optimising a multi-sample objective, i.e. an objective that is expressible as an integral over Monte Carlo samples.Unfortunately, IWAE crucially relies on the availability of reparametrisations and even if these exist, the multi-sample objective leads to inference-network gradients which break down as the number of samples is increased.This breakdown can only be circumvented by removing high-variance score-function terms, either by heuristically ignoring them (the STL gradient from Roeder et al.) or through an identity from Tucker et al. (the DREG gradient).In this work, we argue that directly optimising the proposal distribution in importance sampling as in the reweighted wake-sleep algorithm from Bornschein & Bengio is preferable to optimising IWAE-type multi-sample objectives.To formalise this argument, we introduce an adaptive-importance sampling framework termed adaptive importance sampling for learning (AISLE), which slightly generalises the RWS algorithm.We then show that AISLE admits IWAE-STL and IWAE-DREG as special cases.",We show that most variants of importance-weighted autoencoders can be derived in a more principled manner as special cases of adaptive importance-sampling approaches like the reweighted wake-sleep algorithm. 1602,Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization,"As the size and complexity of models and datasets grow, so does the need for communication-efficient variants of stochastic gradient descent that can be deployed on clusters to perform model fitting in parallel.Alistarh et al. 
describe two variants of data-parallel SGD that quantize and encode gradients to lessen communication costs.For the first variant, QSGD, they provide strong theoretical guarantees.For the second variant, which we call QSGDinf, they demonstrate impressive empirical gains for distributed training of large neural networks.Building on their work, we propose an alternative scheme for quantizing gradients and show that it yields stronger theoretical guarantees than exist for QSGD while matching the empirical performance of QSGDinf.",NUQSGD closes the gap between the theoretical guarantees of QSGD and the empirical performance of QSGDinf. 1603,Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity,"The impressive lifelong learning in animal brains is primarily enabled by plastic changes in synaptic connectivity.Importantly, these changes are not passive, but are actively controlled by neuromodulation, which is itself under the control of the brain.The resulting self-modifying abilities of the brain play an important role in learning and adaptation, and are a major basis for biological reinforcement learning.Here we show for the first time that artificial neural networks with such neuromodulated plasticity can be trained with gradient descent.Extending previous work on differentiable Hebbian plasticity, we propose a differentiable formulation for the neuromodulation of plasticity.We show that neuromodulated plasticity improves the performance of neural networks on both reinforcement learning and supervised learning tasks.In one task, neuromodulated plastic LSTMs with millions of parameters outperform standard LSTMs on a benchmark language modeling task.We conclude that differentiable neuromodulation of plasticity offers a powerful new framework for training neural networks.","Neural networks can be trained to modify their own connectivity, improving their online learning performance on challenging tasks." 1604,Few-Shot Learning with Simplex,"Deep learning has made remarkable achievements in many fields.However, learning the parameters of neural networks usually demands a large amount of labeled data.The algorithms of deep learning, therefore, encounter difficulties when applied to supervised learning where only little data are available.This specific task is called few-shot learning.To address it, we propose a novel algorithm for few-shot learning using discrete geometry, in the sense that the samples in a class are modeled as a reduced simplex.The volume of the simplex is used for the measurement of class scatter.During testing, combined with the test sample and the points in the class, a new simplex is formed.Then the similarity between the test sample and the class can be quantified with the ratio of volumes of the new simplex to the original class simplex.Moreover, we present an approach to constructing simplices using local regions of feature maps yielded by convolutional neural networks.Experiments on Omniglot and miniImageNet verify the effectiveness of our simplex algorithm on few-shot learning.",A simplex-based geometric method is proposed to cope with few-shot learning problems. 
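Record 1604 above scores a query by how much its addition inflates the volume of the class simplex. A hedged sketch of that ratio using the Gram-determinant formula for simplex content; working with squared contents, the epsilon, and the "lower is more similar" convention are assumptions of this illustration:

```python
import math
import numpy as np

def simplex_content_sq(vertices):
    """Squared content (generalised volume) of the simplex whose vertices are
    the rows of `vertices`, via the Gram determinant of its edge vectors."""
    edges = vertices[1:] - vertices[0]
    gram = edges @ edges.T
    k = edges.shape[0]
    return abs(np.linalg.det(gram)) / (math.factorial(k) ** 2)

def simplex_score(class_feats, query_feat, eps=1e-12):
    """Ratio of (squared) contents of the query-augmented simplex to the
    class simplex; smaller values indicate the query fits the class better."""
    base = simplex_content_sq(class_feats)
    extended = simplex_content_sq(np.vstack([class_feats, query_feat]))
    return extended / (base + eps)
```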
1605,Working memory facilitates reward-modulated Hebbian learning in recurrent neural networks,"Reservoir computing is a powerful tool to explain how the brain learns temporal sequences, such as movements, but existing learning schemes are either biologically implausible or too inefficient to explain animal performance.We show that a network can learn complicated sequences with a reward-modulated Hebbian learning rule if the network of reservoir neurons is combined with a second network that serves as a dynamic working memory and provides a spatio-temporal backbone signal to the reservoir.In combination with the working memory, reward-modulated Hebbian learning of the readout neurons performs as well as FORCE learning, but with the advantage of a biologically plausible interpretation of both the learning rule and the learning paradigm.",We show that a working memory input to a reservoir network makes a local reward-modulated Hebbian rule perform as well as recursive least-squares (aka FORCE) 1606,STCN: Stochastic Temporal Convolutional Networks,"Convolutional architectures have recently been shown to be competitive on manysequence modelling tasks when compared to the de-facto standard of recurrent neural networks while providing computational and modelling advantages due to inherent parallelism.However, currently, there remains a performancegap to more expressive stochastic RNN variants, especially those with several layers of dependent random variables.In this work, we propose stochastic temporal convolutional networks, a novel architecture that combines the computational advantages of temporal convolutional networks with the representational power and robustness of stochastic latent spaces.In particular, we propose a hierarchy of stochastic latent variables that captures temporal dependencies at different time-scales.The architecture is modular and flexible due to the decoupling of the deterministic and stochastic layers.We show that the proposed architecture achieves state of the art log-likelihoods across several tasks.Finally, the model is capable of predicting high-quality synthetic samples over a long-range temporal horizon in modelling of handwritten text.",We combine the computational advantages of temporal convolutional architectures with the expressiveness of stochastic latent variables. 1607,D3PG: Deep Differentiable Deterministic Policy Gradients,"Over the last decade, two competing control strategies have emerged for solving complex control tasks with high efficacy.Model-based control algorithms, such as model-predictive control and trajectory optimization, peer into the gradients of underlying system dynamics in order to solve control tasks with high sample efficiency. However, like all gradient-based numerical optimization methods,model-based control methods are sensitive to intializations and are prone to becoming trapped in local minima.Deep reinforcement learning, on the other hand, can somewhat alleviate these issues by exploring the solution space through sampling — at the expense of computational cost.In this paper, we present a hybrid method that combines the best aspects of gradient-based methods and DRL.We base our algorithm on the deep deterministic policy gradients algorithm and propose a simple modification that uses true gradients from a differentiable physical simulator to increase the convergence rate of both the actor and the critic. 
We demonstrate our algorithm on seven 2D robot control tasks, with the most complex one being a differentiable half cheetah with hard contact constraints.Empirical results show that our method boosts the performance of DDPG without sacrificing its robustness to local minima.",We propose a novel method that leverages the gradients from differentiable simulators to improve the performance of RL for robotics control 1608,Bayesian Hypernetworks,"We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks.A Bayesian hypernetwork, h, is a neural network which learns to transform a simple noise distribution, p(ε) = N(0, I), to a distribution q(θ) := q(h(ε)) over the parameters θ of another neural network.We train q with variational inference, using an invertible h to enable efficient estimation of the variational lower bound on the posterior p(θ | D) via sampling.In contrast to most methods for Bayesian deep learning, Bayesian hypernets can represent a complex multimodal approximate posterior with correlations between parameters, while enabling cheap iid sampling of q(θ). In practice, Bayesian hypernets provide a better defense against adversarial examples than dropout, and also exhibit competitive performance on a suite of tasks which evaluate model uncertainty, including regularization, active learning, and anomaly detection.",We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks. 1609,Deep Innovation Protection,"Evolutionary-based optimization approaches have recently shown promising results in domains such as Atari and robot locomotion but less so in solving 3D tasks directly from pixels.This paper presents a method called Deep Innovation Protection that allows training complex world models end-to-end for such 3D environments.The main idea behind the approach is to employ multiobjective optimization to temporally reduce the selection pressure on specific components in a world model, allowing other components to adapt.We investigate the emergent representations of these evolved networks, which learn a model of the world without the need for a specific forward-prediction loss.",Deep Innovation Protection allows evolving complex world models end-to-end for 3D tasks. 1610,MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius,"Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly.In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses.Recent work shows that randomized smoothing can be used to provide certified l2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius.The attack-free characteristic makes MACER faster to train and easier to optimize.In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN.For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve larger average certified radius.",We propose MACER: a provable defense algorithm that trains robust models by maximizing the certified radius. It does not use adversarial training but performs better than all existing provable l2-defenses. 
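Record 1610 above builds on the certified l2 radius of a Gaussian-smoothed classifier. A hedged sketch of that radius estimated with the commonly used bound R = sigma/2 * (Phi^-1(p_top) - Phi^-1(p_runner_up)); the Monte-Carlo sample count, soft-probability averaging, and noise level are illustrative assumptions, and MACER's actual training objective (maximizing a surrogate of this radius) is not reproduced here:

```python
import torch
from torch.distributions import Normal

def certified_radius(model, x, sigma=0.25, n_samples=100):
    """Monte-Carlo estimate of the certified l2 radius of a Gaussian-smoothed
    classifier at x: R = sigma/2 * (Phi^-1(p_top) - Phi^-1(p_runner_up))."""
    with torch.no_grad():
        noise = sigma * torch.randn(n_samples, *x.shape)
        probs = torch.softmax(model(x.unsqueeze(0) + noise), dim=-1).mean(0)
    probs = probs.clamp(1e-6, 1 - 1e-6)               # keep Phi^-1 finite
    top2 = torch.topk(probs, 2).values
    std_normal = Normal(0.0, 1.0)
    radius = sigma / 2 * (std_normal.icdf(top2[0]) - std_normal.icdf(top2[1]))
    return radius.clamp(min=0.0)
```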
1611,Guide Actor-Critic for Continuous Control,"Actor-critic methods solve reinforcement learning problems by updating a parameterized policy known as an actor in a direction that increases an estimate of the expected return known as a critic.However, existing actor-critic methods only use values or gradients of the critic to update the policy parameter.In this paper, we propose a novel actor-critic method called the guide actor-critic.GAC firstly learns a guide actor that locally maximizes the critic and then it updates the policy parameter based on the guide actor by supervised learning.Our main theoretical contributions are twofold.First, we show that GAC updates the guide actor by performing second-order optimization in the action space where the curvature matrix is based on the Hessians of the critic.Second, we show that the deterministic policy gradient method is a special case of GAC when the Hessians are ignored.Through experiments, we show that our method is a promising reinforcement learning method for continuous control.",This paper proposes a novel actor-critic method that uses Hessians of a critic to update an actor. 1612,Reject Illegal Inputs: Scaling Generative Classifiers with Supervised Deep Infomax,"Deep Infomax is an unsupervised representation learning framework that maximizes the mutual information between the inputs and the outputs of an encoder, while probabilistic constraints are imposed on the outputs.In this paper, we propose Supervised Deep InfoMax (SDIM), which introduces supervised probabilistic constraints to the encoder outputs.The supervised probabilistic constraints are equivalent to a generative classifier on high-level data representations, where class conditional log-likelihoods of samples can be evaluated.Unlike other works building generative classifiers with conditional generative models, SDIMs scale on complex datasets, and can achieve comparable performance with discriminative counterparts.With SDIM, we could perform classification with rejection.Instead of always reporting a class label, SDIM only makes predictions when test samples' largest logits surpass some pre-chosen thresholds, otherwise they will be deemed as out of the data distributions, and be rejected.Our experiments show that SDIM with rejection policy can effectively reject illegal inputs including out-of-distribution samples and adversarial examples.","scale generative classifiers on complex datasets, and evaluate their effectiveness to reject illegal inputs including out-of-distribution samples and adversarial examples." 1613,"Scaling Laws for the Principled Design, Initialization, and Preconditioning of ReLU Networks","In this work, we describe a set of rules for the design and initialization of well-conditioned neural networks, guided by the goal of naturally balancing the diagonal blocks of the Hessian at the start of training.We show how our measure of conditioning of a block relates to another natural measure of conditioning, the ratio of weight gradients to the weights.We prove that for a ReLU-based deep multilayer perceptron, a simple initialization scheme using the geometric mean of the fan-in and fan-out satisfies our scaling rule.For more sophisticated architectures, we show how our scaling principle can be used to guide design choices to produce well-conditioned neural networks, reducing guess-work.",A theory for initialization and scaling of ReLU neural network layers 1614,Thieves on Sesame Street! 
Model Extraction of BERT-based APIs,"We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model.Assuming that both the adversary and victim model fine-tune a large pretrained language model such as BERT, we show that the adversary does not need any real training data to successfully mount the attack.In fact, the attacker need not even use grammatical or semantically meaningful queries: we show that random sequences of words coupled with task-specific heuristics form effective queries for model extraction on a diverse set of NLP tasks including natural language inference and question answering.Our work thus highlights an exploit only made feasible by the shift towards transfer learning methods within the NLP community: for a query budget of a few hundred dollars, an attacker can extract a model that performs only slightly worse than the victim model.Finally, we study two defense strategies against model extraction—membership classification and API watermarking—which while successful against some adversaries can also be circumvented by more clever ones.","Outputs of modern NLP APIs on nonsensical text provide strong signals about model internals, allowing adversaries to steal the APIs." 1615,SEARNN: Training RNNs with global-local losses,"We propose SEARNN, a novel training algorithm for recurrent neural networks inspired by the ""learning to search"" approach to structured prediction.RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation.Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses.Further, it introduces discrepancies between training and predicting that may hurt test performance.Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error.We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction.Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes.This allows us to validate the benefits of our approach on a machine translation task.","We introduce SeaRNN, a novel algorithm for RNN training, inspired by the learning to search approach to structured prediction, in order to avoid the limitations of MLE training." 
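Record 1612, two records above, predicts a class only when the largest logit clears a pre-chosen threshold and rejects the input otherwise. A minimal sketch of that decision rule; the per-class thresholds (e.g., chosen on validation data) and the batch interface are assumptions of this illustration:

```python
import torch

def predict_or_reject(logits, thresholds):
    """Return the argmax class when its logit clears that class's threshold,
    otherwise -1 to signal that the input is rejected as 'illegal'."""
    pred = logits.argmax(dim=-1)                          # (batch,)
    top_logit = logits.gather(-1, pred.unsqueeze(-1)).squeeze(-1)
    accept = top_logit >= thresholds[pred]                # per-class threshold
    return torch.where(accept, pred, torch.full_like(pred, -1))
```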
1616,Progressive Reinforcement Learning with Distillation for Multi-Skilled Motion Control,"Deep reinforcement learning has demonstrated increasing capabilities for continuous control problems, including agents that can move with skill and agility through their environment.An open problem in this setting is that of developing good strategies for integrating or merging policies for multiple skills, where each individual policy is a specialist in a specific skill and its associated state distribution.We extend policy distillation methods to the continuous action setting and leverage this technique to combine expert policies, as evaluated in the domain of simulated bipedal locomotion across different classes of terrain.We also introduce an input injection method for augmenting an existing policy network to exploit new input features.Lastly, our method uses transfer learning to assist in the efficient acquisition of new skills.The combination of these methods allows a policy to be incrementally augmented with new skills.We compare our progressive learning and integration via distillation method against three alternative baselines.",A continual learning method that uses distillation to combine expert policies and transfer learning to accelerate learning new skills. 1617,MoET: Interpretable and Verifiable Reinforcement Learning via Mixture of Expert Trees,"Deep Reinforcement Learning has led to many recent breakthroughs on complex control tasks, such as defeating the best human player in the game of Go.However, decisions made by the DRL agent are not explainable, hindering its applicability in safety-critical settings.Viper, a recently proposed technique, constructs a decision tree policy by mimicking the DRL agent.Decision trees are interpretable as each action made can be traced back to the decision rule path that led to it.However, one global decision tree approximating the DRL policy has significant limitations with respect to the geometry of decision boundaries.We propose MoET, a more expressive, yet still interpretable model based on Mixture of Experts, consisting of a gating function that partitions the state space, and multiple decision tree experts that specialize on different partitions.We propose a training procedure to support non-differentiable decision tree experts and integrate it into the imitation learning procedure of Viper.We evaluate our algorithm on four OpenAI gym environments, and show that the policy constructed in such a way is more performant and better mimics the DRL agent by lowering mispredictions and increasing the reward.We also show that MoET policies are amenable for verification using off-the-shelf automated theorem provers such as Z3.",Explainable reinforcement learning model using novel combination of mixture of experts with non-differentiable decision tree experts. 1618,A Stochastic Derivative Free Optimization Method with Momentum,"We consider the problem of unconstrained minimization of a smooth objective function in the setting where only function evaluations are possible.We propose and analyze a stochastic zeroth-order method with heavy ball momentum.In particular, we propose SMTP, a momentum version of the stochastic three-point method (STP) of Bergou et al.We show new complexity results for non-convex, convex and strongly convex functions.We test our method on a collection of continuous control tasks on several MuJoCo (Todorov et al.) 
environments with varying difficulty and compare against STP, other state-of-the-art derivative-free optimization algorithms and against policy gradient methods.SMTP significantly outperforms STP and all other methods that we considered in our numerical experiments.Our second contribution is SMTP with importance sampling which we call SMTP_IS.We provide a convergence analysis of this method for non-convex, convex and strongly convex objectives.",We develop and analyze a new derivative free optimization algorithm with momentum and importance sampling with applications to continuous control. 1619,SHREWD: Semantic Hierarchy Based Relational Embeddings For Weakly-Supervised Deep Hashing,"Using class labels to represent class similarity is a typical approach to training deep hashing systems for retrieval; samples from the same or different classes take binary 1 or 0 similarity values.This similarity does not model the full rich knowledge of semantic relations that may be present between data points.In this work we build upon the idea of using semantic hierarchies to form distance metrics between all available sample labels; for example cat to dog has a smaller distance than cat to guitar.We combine this type of semantic distance into a loss function to promote similar distances between the deep neural network embeddings.We also introduce an empirical Kullback-Leibler divergence loss term to promote binarization and uniformity of the embeddings.We test the resulting SHREWD method and demonstrate improvements in hierarchical retrieval scores using compact, binary hash codes instead of real valued ones, and show that in a weakly supervised hashing setting we are able to learn competitively without explicitly relying on class labels, but instead on similarities between labels.",We propose a new method for training deep hashing for image retrieval using only a relational distance metric between samples 1620,Local Editing of Cross-Surface Mappings with Iterative Least Squares Conformal Maps,"In this paper, we propose a novel approach to improve a given surface mapping through local refinement.The approach receives an established mapping between two surfaces and follows four phases: inspection of the mapping and creation of a sparse set of landmarks in mismatching regions; segmentation with a low-distortion region-growing process based on flattening the segmented parts; optimization of the deformation of segmented parts to align the landmarks in the planar parameterization domain; and aggregation of the mappings from segments to update the surface mapping.In addition, we propose a new method to deform the mesh in order to meet user constraints.We incrementally adjust the cotangent weights for the constraints and apply the deformation in a fashion that guarantees that the deformed mesh will be free of flipped faces and will have low conformal distortion.Our new deformation approach, Iterative Least Squares Conformal Mapping, outperforms other low-distortion deformation methods.The approach is general, and we tested it by improving the mappings from different existing surface mapping methods.We also tested its effectiveness by editing the mappings for a variety of 3D objects.",We propose a novel approach to improve a given cross-surface mapping through local refinement with a new iterative method to deform the mesh in order to meet user constraints. 
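Record 1618 above builds on the stochastic three-point method (STP) of Bergou et al. A hedged sketch of one STP iteration, which needs only function evaluations; the unit-norm Gaussian direction and fixed step size are simplifications, and the heavy-ball momentum term that turns STP into SMTP is omitted here:

```python
import numpy as np

def stp_step(f, x, alpha, rng=None):
    """One stochastic three-point iteration: evaluate f at x and at
    x +/- alpha * s for a random unit direction s, and keep the best point."""
    rng = rng or np.random.default_rng()
    s = rng.normal(size=x.shape)
    s /= np.linalg.norm(s)
    candidates = (x, x + alpha * s, x - alpha * s)
    return min(candidates, key=f)
```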
1621,REPRESENTATION COMPRESSION AND GENERALIZATION IN DEEP NEURAL NETWORKS,"Understanding the groundbreaking performance of Deep Neural Networks is one of the greatest challenges to the scientific community today.In this work, we introduce an information theoretic viewpoint on the behavior of deep networks' optimization processes and their generalization abilities.We do so by studying the Information Plane: the plane of the mutual information between the input variable and the desired label for each hidden layer.Specifically, we show that the training of the network is characterized by a rapid increase in the mutual information between the layers and the target label, followed by a longer decrease in the MI between the layers and the input variable.Further, we explicitly show that these two fundamental information-theoretic quantities correspond to the generalization error of the network, as a result of introducing a new generalization bound that is exponential in the representation compression.The analysis focuses on typical patterns of large-scale problems.For this purpose, we introduce a novel analytic bound on the mutual information between consecutive layers in the network.An important consequence of our analysis is a super-linear boost in training time with the number of non-degenerate hidden layers, demonstrating the computational benefit of the hidden layers.",Introduce an information theoretic viewpoint on the behavior of deep networks' optimization processes and their generalization abilities 1622,Learning concise representations for regression by evolving networks of trees,"We propose and study a method for learning interpretable representations for the task of regression.Features are represented as networks of multi-type expression trees comprised of activation functions common in neural networks in addition to other elementary functions.Differentiable features are trained via gradient descent, and the performance of features in a linear model is used to weight the rate of change among subcomponents of each representation.The search process maintains an archive of representations with accuracy-complexity trade-offs to assist in generalization and interpretation.We compare several stochastic optimization approaches within this framework.We benchmark these variants on 100 open-source regression problems in comparison to state-of-the-art machine learning approaches.Our main finding is that this approach produces the highest average test scores across problems while producing representations that are orders of magnitude smaller than the next best performing method.We also report a negative result in which attempts to directly optimize the disentanglement of the representation result in more highly correlated features.",Representing the network architecture as a set of syntax trees and optimizing their structure leads to accurate and concise regression models. 
1623,Toward Understanding the Impact of Staleness in Distributed Machine Learning,"Most distributed machine learning systems store a copy of the model parameters locally on each machine to minimize network communication.In practice, in order to reduce synchronization waiting time, these copies of the model are not necessarily updated in lock-step, and can become stale.Despite much development in large-scale ML, the effect of staleness on the learning efficiency is inconclusive, mainly because it is challenging to control or monitor the staleness in complex distributed environments.In this work, we study the convergence behaviors of a wide array of ML models and algorithms under delayed updates.Our extensive experiments reveal the rich diversity of the effects of staleness on the convergence of ML algorithms and offer insights into seemingly contradictory reports in the literature.The empirical findings also inspire a new convergence analysis of SGD in non-convex optimization under staleness, matching the best-known convergence rate.",Empirical and theoretical study of the effects of staleness in non-synchronous execution on machine learning algorithms. 1624,Non-linear System Identification from Partial Observations via Iterative Smoothing and Learning,"System identification is the process of building a mathematical model of an unknown system from measurements of its inputs and outputs.It is a key step for model-based control, estimator design, and output prediction.This work presents an algorithm for non-linear offline system identification from partial observations, i.e. situations in which the system's full state is not directly observable.The algorithm presented, called SISL, iteratively infers the system's full state through non-linear optimization and then updates the model parameters.We test our algorithm on a simulated system of coupled Lorenz attractors, showing our algorithm's ability to identify high-dimensional systems that prove intractable for particle-based approaches.We also use SISL to identify the dynamics of an aerobatic helicopter.By augmenting the state with unobserved fluid states, we learn a model that predicts the acceleration of the helicopter better than state-of-the-art approaches.",This work presents a scalable algorithm for non-linear offline system identification from partial observations. 1625,On Stochastic Sign Descent Methods,"Various gradient compression schemes have been proposed to mitigate the communication cost in distributed training of large scale machine learning models.Sign-based methods, such as signSGD, have recently been gaining popularity because of their simple compression rule and connection to adaptive gradient methods, like ADAM.In this paper, we perform a general analysis of sign-based methods for non-convex optimization.Our analysis is built on intuitive bounds on success probabilities and does not rely on special noise distributions nor on the boundedness of the variance of stochastic gradients.Extending the theory to the distributed setting within a parameter server framework, we assure exponentially fast variance reduction with respect to the number of nodes, maintaining 1-bit compression in both directions and using small mini-batch sizes.We validate our theoretical findings experimentally.","General analysis of sign-based methods (e.g. signSGD) for non-convex optimization, built on intuitive bounds on success probabilities." 
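Record 1625 above analyzes sign-based compression with 1-bit communication in both directions. A hedged sketch of one parameter-server step in that spirit (sign gradients from the workers, element-wise majority vote at the server); the learning rate and the dense numpy representation are illustrative, not an actual communication layer:

```python
import numpy as np

def majority_vote_sign_step(params, worker_grads, lr):
    """One sign-based distributed step: each worker communicates only the
    sign of its stochastic gradient; the server takes an element-wise
    majority vote and broadcasts the resulting 1-bit direction."""
    vote = np.sign(sum(np.sign(g) for g in worker_grads))
    return params - lr * vote
```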
1626,Variance Regularized Counterfactual Risk Minimization via Variational Divergence Minimization,"Off-policy learning, the task of evaluating and improving policies using historic data collected from a logging policy, is important because on-policy evaluation is usually expensive and has adverse impacts.One of the major challenge of off-policy learning is to derive counterfactual estimators that also has low variance and thus low generalization error.In this work, inspired by learning bounds for importance sampling problems, we present a new counterfactual learning principle for off-policy learning with bandit feedbacks.Our method regularizes the generalization error by minimizing the distribution divergence between the logging policy and the new policy, and removes the need for iterating through all training samples to compute sample variance regularization in prior work.With neural network policies, our end-to-end training algorithms using variational divergence minimization showed significant improvement over conventional baseline algorithms and is also consistent with our theoretical results.","For off-policy learning with bandit feedbacks, we propose a new variance regularized counterfactual learning algorithm, which has both theoretical foundations and superior empirical performance." 1627,Learned imaging with constraints and uncertainty quantification,"We outline new approaches to incorporate ideas from deep learning into wave-based least-squares imaging.The aim, and main contribution of this work, is the combination of handcrafted constraints with deep convolutional neural networks, as a way to harness their remarkable ease of generating natural images.The mathematical basis underlying our method is the expectation-maximization framework, where data are divided in batches and coupled to additional ""latent"" unknowns.These unknowns are pairs of elements from the original unknown space and network inputs.In this setting, the neural network controls the similarity between these additional parameters, acting as a ""center"" variable.The resulting problem amounts to a maximum-likelihood estimation of the network parameters when the augmented data model is marginalized over the latent variables.","We combine hard handcrafted constraints with a deep prior weak constraint to perform seismic imaging and reap information on the ""posterior"" distribution leveraging multiplicity in the data." 1628,RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers,"When translating natural language questions into SQL queries to answer questions from a database, contemporary semantic parsing models struggle to generalize to unseen database schemas. The generalization challenge lies in encoding the database relations in an accessible way for the semantic parser, and modeling alignment between database columns and their mentions in a given query. We present a unified framework, based on the relation-aware self-attention mechanism,to address schema encoding, schema linking, and feature representation within a text-to-SQL encoder.On the challenging Spider dataset this framework boosts the exact match accuracy to 53.7%, compared to 47.4% for the previous state-of-the-art model unaugmented with BERT embeddings.In addition, we observe qualitative improvements in the model’s understanding of schema linking and alignment.",State of the art in complex text-to-SQL parsing by combining hard and soft relational reasoning in schema/question encoding. 
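Record 1626 above starts from the standard importance-sampling (counterfactual) estimator of a new policy's value from logged bandit feedback, whose variance the paper then controls through a divergence between the logging and target policies. A minimal sketch of that base estimator under the assumption that per-example rewards and action propensities under both policies are available; the variance is returned only for inspection:

```python
import numpy as np

def ips_value(rewards, logging_propensities, target_propensities):
    """Importance-sampling (counterfactual) estimate of a target policy's
    value from logged bandit feedback, plus the per-sample variance that
    variance-regularized objectives try to keep small."""
    weights = target_propensities / logging_propensities
    weighted = weights * rewards
    return float(weighted.mean()), float(weighted.var())
```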
1629,Task-agnostic Continual Learning via Growing Long-Term Memory Networks,"As our experience shows, humans can learn and deploy a myriad of different skills to tackle the situations they encounter daily.Neural networks, in contrast, have a fixed memory capacity that prevents them from learning more than a few sets of skills before starting to forget them.In this work, we make a step to bridge neural networks with human-like learning capabilities.For this, we propose a model with a growing and open-bounded memory capacity that can be accessed based on the model’s current demands.To test this system, we introduce a continual learning task based on language modelling where the model is exposed to multiple languages and domains in sequence, without providing any explicit signal on the type of input it is currently dealing with.The proposed system exhibits improved adaptation skills in that it can recover faster than comparable baselines after a switch in the input language or domain.",We introduce a continual learning setup based on language modelling where no explicit task segmentation signal is given and propose a neural network model with growing long term memory to tackle it. 1630,Learning temporal evolution of probability distribution with Recurrent Neural Network,"We propose to tackle a time series regression problem by computing temporal evolution of a probability density function to provide a probabilistic forecast.A Recurrent Neural Network based model is employed to learn a nonlinear operator for temporal evolution of a probability density function.We use a softmax layer for a numerical discretization of a smooth probability density functions, which transforms a function approximation problem to a classification task.Explicit and implicit regularization strategies are introduced to impose a smoothness condition on the estimated probability distribution.A Monte Carlo procedure to compute the temporal evolution of the distribution for a multiple-step forecast is presented.The evaluation of the proposed algorithm on three synthetic and two real data sets shows advantage over the compared baselines.",Proposed RNN-based algorithm to estimate predictive distribution in one- and multi-step forecasts in time series prediction problems 1631,Modelling Working Memory using Deep Recurrent Reinforcement Learning,"In cognitive systems, the role of a working memory is crucial for visual reasoning and decision making.Tremendous progress has been made in understanding the mechanisms of the human/animal working memory, as well as in formulating different frameworks of artificial neural networks. 
In the case of humans, the visual working memory task is a standard one in which the subjects are presented with a sequence of images, each of which needs to be identified as to whether it was already seen or not.Our work is a study of multiple ways to learn a working memory model using recurrent neural networks that learn to remember input images across timesteps.We train these neural networks to solve the working memory task by training them with a sequence of images in supervised and reinforcement learning settings.The supervised setting uses image sequences with their corresponding labels.The reinforcement learning setting is inspired by the popular view in neuroscience that the working memory in the prefrontal cortex is modulated by a dopaminergic mechanism.We consider the VWM task as an environment that rewards the agent when it remembers past information and penalizes it for forgetting.We quantitatively estimate the performance of these models on sequences of images from a standard image dataset.Further, we evaluate their ability to remember and recall as they are increasingly trained over episodes.Based on our analysis, we establish that a gated recurrent neural network model with long short-term memory units trained using reinforcement learning is powerful and more efficient in temporally consolidating the input spatial information.This work is an initial analysis as a part of our ultimate goal to use artificial neural networks to model the behavior and information processing of the working memory of the brain and to use brain imaging data captured from human subjects during the VWM cognitive task to understand various memory mechanisms of the brain.","LSTMs can more effectively model the working memory if they are learned using reinforcement learning, much like the dopamine system that modulates the memory in the prefrontal cortex " 1632,From Hard to Soft: Understanding Deep Network Nonlinearities via Vector Quantization and Statistical Inference,"Nonlinearity is crucial to the performance of a deep network.To date there has been little progress understanding the menagerie of available nonlinearities, but recently progress has been made on understanding the role played by piecewise affine and convex nonlinearities like the ReLU and absolute value activation functions and max-pooling.In particular, DN layers constructed from these operations can be interpreted as max-affine spline operators (MASOs) that have an elegant link to vector quantization (VQ) and K-means.While this is good theoretical progress, the entire MASO approach is predicated on the requirement that the nonlinearities be piecewise affine and convex, which precludes important activation functions like the sigmoid, hyperbolic tangent, and softmax.We show that, under a GMM, piecewise affine, convex nonlinearities like ReLU, absolute value, and max-pooling can be interpreted as solutions to certain natural hard VQ inference problems, while sigmoid, hyperbolic tangent, and softmax can be interpreted as solutions to corresponding soft VQ inference problems.We further extend the framework by hybridizing the hard and soft VQ optimizations to create a β-VQ inference that interpolates between hard, soft, and linear VQ inference.A prime example of a β-VQ DN nonlinearity is the swish nonlinearity, which offers state-of-the-art performance in a range of computer vision tasks but was developed ad hoc by experimentation.Finally, we validate with experiments an important assertion of our theory, namely that DN performance can be significantly improved by enforcing 
orthogonality in its linear filters.",Reformulate deep network nonlinearities from a vector quantization scope and bridge most known nonlinearities together. 1633,Generative Models for Graph-Based Protein Design,"Engineered proteins offer the potential to solve many problems in biomedicine, energy, and materials science, but creating designs that succeed is difficult in practice.A significant aspect of this challenge is the complex coupling between protein sequence and 3D structure, and the task of finding a viable design is often referred to as the inverse protein folding problem.We develop generative models for protein sequences conditioned on a graph-structured specification of the design target.Our approach efficiently captures the complex dependencies in proteins by focusing on those that are long-range in sequence but local in 3D space.Our framework significantly improves upon prior parametric models of protein sequences given structure, and takes a step toward rapid and targeted biomolecular design with the aid of deep generative models.","We learn to conditionally generate protein sequences given structures with a model that captures sparse, long-range dependencies." 1634,Deep Layers as Stochastic Solvers,"We provide a novel perspective on the forward pass through a block of layers in a deep network.In particular, we show that a forward pass through a standard dropout layer followed by a linear layer and a non-linear activation is equivalent to optimizing a convex objective with a single iteration of a τ-nice Proximal Stochastic Gradient method.We further show that replacing standard Bernoulli dropout with additive dropout is equivalent to optimizing the same convex objective with a variance-reduced proximal method.By expressing both fully-connected and convolutional layers as special cases of a high-order tensor product, we unify the underlying convex optimization problem in the tensor setting and derive a formula for the Lipschitz constant used to determine the optimal step size of the above proximal methods.We conduct experiments with standard convolutional networks applied to the CIFAR-10 and CIFAR-100 datasets and show that replacing a block of layers with multiple iterations of the corresponding solver, with step size set via this Lipschitz constant, consistently improves classification accuracy.",A framework that links deep network layers to stochastic optimization algorithms; can be used to improve model accuracy and inform network design.
1635,LEARNED STEP SIZE QUANTIZATION,"Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases.Here, we present a method for training such networks, Learned Step Size Quantization, that achieves the highest accuracy to date on the ImageNet dataset when using models, from a variety of architectures, with weights and activations quantized to 2-, 3- or 4-bits of precision, and that can train 3-bit models that reach full precision baseline accuracy.Our approach builds upon existing methods for learning weights in quantized networks by improving how the quantizer itself is configured.""Specifically, we introduce a novel means to estimate and scale the task loss gradient at each weight and activation layer's quantizer step size, such that it can be learned in conjunction with other network parameters."", 'This approach works using different levels of precision as needed for a given system and requires only a simple modification of existing training code.",A method for learning quantization configuration for low precision networks that achieves state of the art performance for quantized networks. 1636,Simple but effective techniques to reduce dataset biases,"There have been several studies recently showing that strong natural language understanding models are prone to relying on unwanted dataset biases without learning the underlying task, resulting in models which fail to generalize to out-of-domain datasets, and are likely to perform poorly in real-world scenarios.We propose several learning strategies to train neural models which are more robust to such biases and transfer better to out-of-domain datasets.We introduce an additional lightweight bias-only model which learns dataset biases and uses its prediction to adjust the loss of the base model to reduce the biases.In other words, our methods down-weight the importance of the biased examples, and focus training on hard examples, i.e. examples that cannot be correctly classified by only relying on biases.Our approaches are model agnostic and simple to implement. We experiment on large-scale natural language inference and fact verification datasets and their out-of-domain datasets and show that our debiased models significantly improve the robustness in all settings, including gaining 9.76 points on the FEVER symmetric evaluation dataset, 5.45 on the HANS dataset and 4.78 points on the SNLI hard set. These datasets are specifically designed to assess the robustness of models in the out-of-domain setting where typical biases in the training data do not exist in the evaluation set.",We propose several general debiasing strategies to address common biases seen in different datasets and obtain substantial improved out-of-domain performance in all settings. 
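A minimal sketch of one common way to implement the bias-only loss adjustment described in entry 1636, using a product-of-experts combination in log space; the paper may use a different exact formulation, and all names below are illustrative:

```python
import torch
import torch.nn.functional as F

def debiased_loss(main_logits, bias_logits, labels):
    # Combine the base model and a frozen bias-only model in log space,
    # then apply cross-entropy. Detaching the bias-only term means only
    # the base model is updated, so it is pushed to explain what the
    # biases alone cannot (hard examples carry more of the gradient).
    combined = F.log_softmax(main_logits, dim=-1) + F.log_softmax(bias_logits, dim=-1).detach()
    return F.cross_entropy(combined, labels)
```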
1637,Extreme Few-view CT Reconstruction using Deep Inference,"Reconstruction of few-view x-ray Computed Tomography data is a highly ill-posed problem.It is often used in applications that require low radiation dose in clinical CT, rapid industrial scanning, or fixed-gantry CT.Existing analytic or iterative algorithms generally produce poorly reconstructed images, severely deteriorated by artifacts and noise, especially when the number of x-ray projections is considerably low.This paper presents a deep network-driven approach to address extreme few-view CT by incorporating convolutional neural network-based inference into state-of-the-art iterative reconstruction.The proposed method interprets few-view sinogram data using attention-based deep networks to infer the reconstructed image.The predicted image is then used as prior knowledge in the iterative algorithm for final reconstruction.We demonstrate effectiveness of the proposed approach by performing reconstruction experiments on a chest CT dataset.",We present a CNN inference-based reconstruction algorithm to address extremely few-view CT. 1638,Wizard of Wikipedia: Knowledge-Powered Conversational Agents,"In open-domain dialogue intelligent agents should exhibit the use of knowledge, however there are few convincing demonstrations of this to date.The most popular sequence to sequence models typically “generate and hope” generic utterances that can be memorized in the weights of the model when mapping from input utterance to output, rather than employing recalled knowledge as context.Use of knowledge has so far proved difficult, in part because of the lack of a supervised learning benchmark task which exhibits knowledgeable open dialogue with clear grounding.To that end we collect and release a large dataset with conversations directly grounded with knowledge retrieved from Wikipedia. We then design architectures capable of retrieving knowledge, reading and conditioning on it, and finally generating natural responses.Our best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while our new benchmark allows for measuring further improvements in this important research direction.",We build knowledgeable conversational agents by conditioning on Wikipedia + a new supervised task. 1639,Online Semi-Supervised Learning with Bandit Feedback,"We formulate a new problem at the intersection of semi-supervised learning and contextual bandits, motivated by several applications including clinical trials and dialog systems.We demonstrate how contextual bandit and graph convolutional networks can be adjusted to the new problem formulation.We then take the best of both approaches to develop multi-GCN embedded contextual bandit.Our algorithms are verified on several real world datasets.",Synthesis of GCN and LINUCB algorithms for online learning with missing feedbacks 1640,How to measure the consistency of the tagging of scientific papers?," A collection of scientific papers is often accompanied by tags: keywords, topics, concepts etc., associated with each paper. Sometimes these tags are human-generated, sometimes they are machine-generated. We propose a simple measure of the consistency of the tagging of scientific papers: whether these tags are predictive for the citation graph links. Since the authors tend to cite papers about the topics close to those of their publications, a consistent tagging system could predict citations. 
We present an algorithm to calculate consistency, and experiments with human- and machine-generated tags. We show that augmentation, i.e. the combination of the manual tags with the machine-generated ones, can enhance the consistency of the tags. We further introduce cross-consistency, the ability to predict citation links between papers tagged by different taggers, e.g. manually and by a machine. Cross-consistency can be used to evaluate the tagging quality when the amount of labeled data is limited.",A good tagger gives similar tags to a given paper and the papers it cites 1641,Defensive Quantization Layer For Convolutional Network Against Adversarial Attack,"Recent research has intensively revealed the vulnerability of deep neural networks, especially for convolutional neural networks on the task of image recognition, through creating adversarial samples which “slightly” differ from legitimate samples.This vulnerability indicates that these powerful models are sensitive to specific perturbations and cannot filter out these adversarial perturbations.In this work, we propose a quantization-based method which enables a CNN to filter out adversarial perturbations effectively.Notably, different from prior work on input quantization, we apply the quantization in the intermediate layers of a CNN.Our approach is naturally aligned with the clustering of the coarse-grained semantic information learned by a CNN.Furthermore, to compensate for the loss of information which is inevitably caused by the quantization, we propose the multi-head quantization, where we project data points to different sub-spaces and perform quantization within each sub-space.We enclose our design in a quantization layer named as the Q-Layer.The results obtained on MNIST and Fashion-MNIST datasets demonstrate that only adding one Q-Layer into a CNN could significantly improve its robustness against both white-box and black-box attacks.",We propose a quantization-based method which regularizes a CNN's learned representations to be automatically aligned with a trainable concept matrix and hence effectively filters out adversarial perturbations.
1642,Invariant and Equivariant Graph Networks,"Invariant and equivariant networks have been successfully used for learning images, sets, point clouds, and graphs.A basic challenge in developing such networks is finding the maximal collection of invariant and equivariant layers.Although this question is answered for the first three examples, a full characterization of invariant and equivariant linear layers for graphs is not known.In this paper we provide a characterization of all permutation invariant and equivariant linear layers for graph data, and show that their dimension, in case of edge-value graph data, is 2 and 15, respectively.More generally, for graph data defined on k-tuples of nodes, the dimensions are the k-th and 2k-th Bell numbers.Orthogonal bases for the layers are computed, including generalization to multi-graph data.The constant number of basis elements and their characteristics allow successfully applying the networks to different size graphs.From the theoretical point of view, our results generalize and unify recent advancement in equivariant deep learning.In particular, we show that our model is capable of approximating any message passing neural network.Applying these new linear layers in a simple deep neural network framework is shown to achieve comparable results to state-of-the-art and to have better expressivity than previous invariant and equivariant bases.",The paper provides a full characterization of permutation invariant and equivariant linear layers for graph data. 1643,Causally Correct Partial Models for Reinforcement Learning,"In reinforcement learning, we can learn a model of future observations and rewards, and use it to plan the agent's next actions.However, jointly modeling future observations can be computationally expensive or even intractable if the observations are high-dimensional.For this reason, previous works have considered partial models, which model only part of the observation.In this paper, we show that partial models can be causally incorrect: they are confounded by the observations they don't model, and can therefore lead to incorrect planning.To address this, we introduce a general family of partial models that are provably causally correct, but avoid the need to fully model future observations.",Causally correct partial models do not have to generate the whole observation to remain causally correct in stochastic environments.
1644,Efficient Lifelong Learning with A-GEM,"In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task.In this work, we investigate the efficiency of current lifelong approaches, in terms of sample complexity, computational and memory cost.Towards this end, we first introduce a new and a more realistic evaluation protocol, whereby learners observe each example only once and hyper-parameter selection is done on a small and disjoint set of tasks, which is not used for the actual learning experience and evaluation.Second, we introduce a new metric measuring how quickly a learner acquires a new skill.Third, we propose an improved version of GEM, dubbed Averaged GEM, which enjoys the same or even better performance as GEM, while being almost as computationally and memory efficient as EWC and other regularization-based methods.Finally, we show that all algorithms including A-GEM can learn even more quickly if they are provided with task descriptors specifying the classification tasks under consideration.Our experiments on several standard lifelong learning benchmarks demonstrate that A-GEM has the best trade-off between accuracy and efficiency",An efficient lifelong learning algorithm that provides a better trade-off between accuracy and time/ memory complexity compared to other algorithms. 1645,Mixed Precision Training With 8-bit Floating Point,"Reduced precision computation is one of the key areas addressing the widening’compute gap’, driven by an exponential growth in deep learning applications.In recent years, deep neural network training has largely migrated to 16-bit precision,with significant gains in performance and energy efficiency.However, attempts to train DNNs at 8-bit precision have met with significant challenges, because of the higher precision and dynamic range requirements of back-propagation. In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. We demonstrate state-of-the-art accuracy across multiple data setsand a broader set of workloads than previously reported. We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point, to improve error propagation.We also examine the impact of quantization noise on generalization, and propose a stochastic rounding technique to address gradient noise.As a result of applying all these techniques, we report slightly higher validation accuracy compared to full precision baseline.","We demonstrated state-of-the-art training results using 8-bit floating point representation, across Resnet, GNMT, Transformer." 
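A minimal loss-scaling sketch in the spirit of entry 1645; the paper proposes an enhanced scaling scheme that is not reproduced here, and the scale constant and helper names below are illustrative assumptions:

```python
import torch

LOSS_SCALE = 2.0 ** 12  # illustrative static scale; tuned or adaptive in practice

def scaled_backward_step(loss, model, optimizer):
    # Scale the loss so that small gradients remain representable in a
    # reduced-precision format, then undo the scaling before updating.
    (loss * LOSS_SCALE).backward()
    for p in model.parameters():
        if p.grad is not None:
            p.grad.div_(LOSS_SCALE)
    optimizer.step()
    optimizer.zero_grad()
```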
1646,INSTANCE CROSS ENTROPY FOR DEEP METRIC LEARNING,"Loss functions play a crucial role in deep metric learning thus a variety of them have been proposed.Some supervise the learning process by pairwise or tripletwise similarity constraints while others take advantage of structured similarity information among multiple data points.In this work, we approach deep metric learning from a novel perspective.We propose instance cross entropy which measures the difference between an estimated instance-level matching distribution and its ground-truth one.ICE has three main appealing properties.Firstly, similar to categorical cross entropy, ICE has clear probabilistic interpretation and exploits structured semantic similarity information for learning supervision.Secondly, ICE is scalable to infinite training data as it learns on mini-batches iteratively and is independent of the training set size.Thirdly, motivated by our relative weight analysis, seamless sample reweighting is incorporated.It rescales samples’ gradients to control the differentiation degree over training examples instead of truncating them by sample mining.In addition to its simplicity and intuitiveness, extensive experiments on three real-world benchmarks demonstrate the superiority of ICE.",We propose instance cross entropy (ICE) which measures the difference between an estimated instance-level matching distribution and its ground-truth one. 1647,Modeling the Long Term Future in Model-Based Reinforcement Learning,"In model-based reinforcement learning, the agent interleaves between model learning and planning. These two components are inextricably intertwined.If the model is not able to provide sensible long-term prediction, the executed planner would exploit model flaws, which can yield catastrophic failures.This paper focuses on building a model that reasons about the long-term future and demonstrates how to use this for efficient planning and exploration.To this end, we build a latent-variable autoregressive model by leveraging recent ideas in variational inference.We argue that forcing latent variables to carry future information through an auxiliary task substantially improves long-term predictions.Moreover, by planning in the latent space, the planner's solution is ensured to be within regions where the model is valid.An exploration strategy can be devised by searching for unlikely trajectories under the model.Our method achieves higher reward faster compared to baselines on a variety of tasks and environments in both the imitation learning and model-based reinforcement learning settings.","Incorporating, in the model, latent variables that encode future content improves the long-term prediction accuracy, which is critical for better planning in model-based RL."
1648,Integrative Tensor-based Anomaly Detection System For Satellites,"Detecting anomalies is of growing importance for various industrial applications and mission-critical infrastructures, including satellite systems.Although there have been several studies in detecting anomalies based on rule-based or machine learning-based approaches for satellite systems, a tensor-based decomposition method has not been extensively explored for anomaly detection.In this work, we introduce an Integrative Tensor-based Anomaly Detection framework to detect anomalies in a satellite system.Because of the high risk and cost, detecting anomalies in a satellite system is crucial.We construct 3rd-order tensors with telemetry data collected from Korea Multi-Purpose Satellite-2 and calculate the anomaly score using one of the component matrices obtained by applying CANDECOMP/PARAFAC decomposition to detect anomalies.Our result shows that our tensor-based approach can be effective in achieving higher accuracy and reducing false positives in detecting anomalies as compared to other existing approaches.",Integrative Tensor-based Anomaly Detection (ITAD) framework for a satellite system. 1649,Information asymmetry in KL-regularized RL,"Many real world tasks exhibit rich structure that is repeated across different parts of the state space or in time.In this work we study the possibility of leveraging such repeated structure to speed up and regularize learning.We start from the KL regularized expected reward objective which introduces an additional component, a default policy.Instead of relying on a fixed default policy, we learn it from data.But crucially, we restrict the amount of information the default policy receives, forcing it to learn reusable behaviors that help the policy learn faster.We formalize this strategy and discuss connections to information bottleneck approaches and to the variational EM algorithm.We present empirical results in both discrete and continuous action domains and demonstrate that, for certain tasks, learning a default policy alongside the policy can significantly speed up and improve learning.Please watch the video demonstrating learned experts and default policies on several continuous control tasks.","Limiting state information for the default policy can improve performance, in a KL-regularized RL framework where both agent and default policy are optimized together" 1650,Explaining Image Classifiers by Counterfactual Generation,"When an image classifier makes a prediction, which parts of the image are relevant and why?We can rephrase this question to ask: which parts of the image, if they were not seen by the classifier, would most change its decision?Producing an answer requires marginalizing over images that could have been seen but weren't.We can sample plausible image in-fills by conditioning a generative model on the rest of the image.We then optimize to find the image regions that most change the classifier's decision after in-fill.Our approach contrasts with ad-hoc in-filling approaches, such as blurring or injecting noise, which generate inputs far from the data distribution, and ignore informative relationships between different parts of the image.Our method produces more compact and relevant saliency maps, with fewer artifacts compared to previous methods.","We compute saliency by using a strong generative model to efficiently marginalize over plausible alternative inputs, revealing concentrated pixel areas that preserve label information."
1651,Variation Network: Learning High-level Attributes for Controlled Input Manipulation,"This paper presents the Variation Network, a generative model providing means to manipulate the high-level attributes of a given input.The originality of our approach is that VarNet is not only capable of handling pre-defined attributes but can also learn the relevant attributes of the dataset by itself. These two settings can be easily combined which makes VarNet applicable for a wide variety of tasks.Further, VarNet has a sound probabilistic interpretation which grants us with a novel way to navigate in the latent spaces as well as means to control how the attributes are learned.We demonstrate experimentally that this model is capable of performing interesting input manipulation and that the learned attributes are relevant and interpretable.",The Variation Network is a generative model able to learn high-level attributes without supervision that can then be used for controlled input manipulation. 1652,Learning The Difference That Makes A Difference With Counterfactually-Augmented Data,"Despite alarm over the reliance of machine learning systems on so-called spurious patterns in training data, the term lacks coherent meaning in standard statistical frameworks.However, the language of causality offers clarity: spurious associations are those due to a common cause vs direct or indirect effects.In this paper, we focus on NLP, introducing methods and resources for training models insensitive to spurious patterns.Given documents and their initial labels, we task humans with revise each document to accord with a counterfactual target label, asking that the revised documents be internally coherent while avoiding any gratuitous changes.Interestingly, on sentiment analysis and natural language inference tasks, classifiers trained on original data fail on their counterfactually-revised counterparts and vice versa.Classifiers trained on combined datasets perform remarkably well, just shy of those specialized to either domain.While classifiers trained on either original or manipulated data alone are sensitive to spurious features, models trained on the combined data are insensitive to this signal.We will publicly release both datasets.","Humans in the loop revise documents to accord with counterfactual labels, resulting resource helps to reduce reliance on spurious associations." 1653,Evaluations and Methods for Explanation through Robustness Analysis,"Among multiple ways of interpreting a machine learning model, measuring the importance of a set of features tied to a prediction is probably one of the most intuitive way to explain a model.In this paper, we establish the link between a set of features to a prediction with a new evaluation criteria, robustness analysis, which measures the minimum tolerance of adversarial perturbation.By measuring the tolerance level for an adversarial attack, we can extract a set of features that provides most robust support for a current prediction, and also can extract a set of features that contrasts the current prediction to a target class by setting a targeted adversarial attack.By applying this methodology to various prediction tasks across multiple domains, we observed the derived explanations are indeed capturing the significant feature set qualitatively and quantitatively.",We propose new objective measurement for evaluating explanations based on the notion of adversarial robustness. 
The evaluation criteria further allows us to derive new explanations which capture pertinent features qualitatively and quantitatively. 1654,MisGAN: Learning from Incomplete Data with Generative Adversarial Networks,"Generative adversarial networks have been shown to provide an effective way to model complex distributions and have obtained impressive results on various challenging tasks.However, typical GANs require fully-observed data during training.In this paper, we present a GAN-based framework for learning from complex, high-dimensional incomplete data.The proposed framework learns a complete data generator along with a mask generator that models the missing data distribution.We further demonstrate how to impute missing data by equipping our framework with an adversarially trained imputer.We evaluate the proposed framework using a series of experiments with several types of missing data processes under the missing completely at random assumption.",This paper presents a GAN-based framework for learning the distribution from high-dimensional incomplete data. 1655,An Inter-Layer Weight Prediction and Quantization for Deep Neural Networks based on Smoothly Varying Weight Hypothesis,"Due to a resource-constrained environment, network compression has become an important part of deep neural networks research.In this paper, we propose a new compression method, Inter-Layer Weight Prediction and quantization method which quantize the predicted residuals between the weights in all convolution layers based on an inter-frame prediction method in conventional video coding schemes.Furthermore, we found a phenomenon Smoothly Varying Weight Hypothesis which is that the weights in adjacent convolution layers share strong similarity in shapes and values, i.e., the weights tend to vary smoothly along with the layers.Based on SVWH, we propose a second ILWP and quantization method which quantize the predicted residuals between the weights in adjacent convolution layers.Since the predicted weight residuals tend to follow Laplace distributions with very low variance, the weight quantization can more effectively be applied, thus producing more zero weights and enhancing the weight compression ratio.In addition, we propose a new inter-layer loss for eliminating non-texture bits, which enabled us to more effectively store only texture bits.That is, the proposed loss regularizes the weights such that the collocated weights between the adjacent two layers have the same values.Finally, we propose an ILWP with an inter-layer loss and quantization method.Our comprehensive experiments show that the proposed method achieves a much higher weight compression rate at the same accuracy level compared with the previous quantization-based compression methods in deep neural networks.","We propose a new compression method, Inter-Layer Weight Prediction (ILWP) and quantization method which quantize the predicted residuals between the weights in convolution layers." 
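A minimal sketch of inter-layer residual quantization in the spirit of entry 1655, under the assumption that adjacent convolution kernels share a shape; the step size and helper names are illustrative, not the paper's exact procedure:

```python
import torch

def quantize_interlayer_residuals(kernels, step=0.02):
    # Quantize the difference between each kernel and the reconstructed
    # previous one; smoothly varying weights give near-zero residuals,
    # which compress well after uniform quantization.
    reference = torch.zeros_like(kernels[0])
    reconstructed = []
    for w in kernels:
        residual = w - reference
        q = torch.round(residual / step) * step   # uniform quantization of the residual
        reference = reference + q                 # running reconstruction used as next predictor
        reconstructed.append(reference.clone())
    return reconstructed
```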
1656,FLOPs as a Direct Optimization Objective for Learning Sparse Neural Networks,"There exists a plethora of techniques for inducing structured sparsity in parametric models during the optimization process, with the final goal of resource-efficient inference.However, to the best of our knowledge, none target a specific number of floating-point operations as part of a single end-to-end optimization objective, despite reporting FLOPs as part of the results.Furthermore, a one-size-fits-all approach ignores realistic system constraints, which differ significantly between, say, a GPU and a mobile phone -- FLOPs on the former incur less latency than on the latter; thus, it is important for practitioners to be able to specify a target number of FLOPs during model compression.In this work, we extend a state-of-the-art technique to directly incorporate FLOPs as part of the optimization objective and show that, given a desired FLOPs requirement, different neural networks can be successfully trained for image classification.","We extend a state-of-the-art technique to directly incorporate FLOPs as part of the optimization objective, and we show that, given a desired FLOPs requirement, different neural networks are successfully trained." 1657,Hierarchical Image-to-image Translation with Nested Distributions Modeling,"Unpaired image-to-image translation among category domains has achieved remarkable success in past decades.Recent studies mainly focus on two challenges.For one thing, such translation is inherently multimodal due to variations of domain-specific information.For another, existing multimodal approaches have limitations in handling more than two domains, i.e. they have to independently build one model for every pair of domains.To address these problems, we propose the Hierarchical Image-to-image Translation method which jointly formulates the multimodal and multi-domain problem in a semantic hierarchy structure, and can further control the uncertainty of multimodal.Specifically, we regard the domain-specific variations as the result of the multi-granularity property of domains, and one can control the granularity of the multimodal translation by dividing a domain with large variations into multiple subdomains which capture local and fine-grained variations.With the assumption of Gaussian prior, variations of domains are modeled in a common space such that translations can further be done among multiple domains within one model.To learn such complicated space, we propose to leverage the inclusion relation among domains to constrain distributions of parent and children to be nested.Experiments on several datasets validate the promising results and competitive performance against state-of-the-arts.",Granularity controled multi-domain and multimodal image to image translation method 1658,Generalized Tensor Models for Recurrent Neural Networks,"Recurrent Neural Networks are very successful at solving challenging problems with sequential data.However, this observed efficiency is not yet entirely explained by theory.It is known that a certain class of multiplicative RNNs enjoys the property of depth efficiency --- a shallow network of exponentially large width is necessary to realize the same score function as computed by such an RNN.Such networks, however, are not very often applied to real life tasks.In this work, we attempt to reduce the gap between theory and practice by extending the theoretical analysis to RNNs which employ various nonlinearities, such as Rectified Linear Unit, and show 
that they also benefit from properties of universality and depth efficiency.Our theoretical results are verified by a series of extensive computational experiments.",Analysis of expressivity and generality of recurrent neural networks with ReLU nonlinearities using Tensor-Train decomposition. 1659,ADef: an Iterative Algorithm to Construct Adversarial Deformations,"While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood.In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, which are created by additively perturbing the correctly classified image.In this paper, we propose the ADef algorithm to construct a different kind of adversarial attack created by iteratively applying small deformations to the image, found through a gradient descent step.We demonstrate our results on MNIST with convolutional neural networks and on ImageNet with Inception-v3 and ResNet-101.","We propose a new, efficient algorithm to construct adversarial examples by means of deformations, rather than additive perturbations." 1660,"Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow","Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable.Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients.In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck.By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients.We demonstrate that our proposed variational discriminator bottleneck leads to significant improvements across three distinct application areas for adversarial learning algorithms.Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running.We show that our method can learn such skills directly from raw video demonstrations, substantially outperforming prior adversarial imitation learning methods.The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings.Finally, we demonstrate that VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods.","Regularizing adversarial learning with an information bottleneck, applied to imitation learning, inverse reinforcement learning, and generative adversarial networks." 1661,Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation,"Deploying machine learning systems in the real world requires both high accuracy on clean data and robustness to naturally occurring corruptions.While architectural advances have led to improved accuracy, building robust models remains challenging, involving major changes in training procedure and datasets.
Prior work has argued that there is an inherent trade-off between robustness and accuracy, as exemplified by standard data augmentation techniques such as Cutout, which improves clean accuracy but not robustness, and additive Gaussian noise, which improves robustness but hurts accuracy.We introduce Patch Gaussian, a simple augmentation scheme that adds noise to randomly selected patches in an input image. Models trained with Patch Gaussian achieve state of the art on the CIFAR-10 and ImageNet Common Corruptions benchmarks while also maintaining accuracy on clean data.We find that this augmentation leads to reduced sensitivity to high frequency noise while retaining the ability to take advantage of relevant high frequency information in the image.We show it can be used in conjunction with other regularization methods and data augmentation policies such as AutoAugment. Finally, we find that the idea of restricting perturbations to patches can also be useful in the context of adversarial learning, yielding models without the loss in accuracy that is found with unconstrained adversarial training.",Simple augmentation method overcomes robustness/accuracy trade-off observed in literature and opens questions about the effect of training distribution on out-of-distribution generalization. 1662,Mixture Density Networks Find Viewpoint the Dominant Factor for Accurate Spatial Offset Regression,"Offset regression is a standard method for spatial localization in many vision tasks, including human pose estimation, object detection, and instance segmentation.However, if high localization accuracy is crucial for a task, convolutional neural networks with offset regression usually struggle to deliver it. This can be attributed to the locality of the convolution operation, exacerbated by variance in scale, clutter, and viewpoint.An even more fundamental issue is the multi-modality of real-world images.As a consequence, they cannot be approximated adequately using a single-mode model. Instead, we propose to use mixture density networks for offset regression, allowing the model to manage various modes efficiently and learning to predict the full conditional density of the outputs given the input.On 2D human pose estimation in the wild, which requires accurate localisation of body keypoints, we show that this yields significant improvement in localization accuracy.In particular, our experiments reveal viewpoint variation as the dominant multi-modal factor.Further, by carefully initializing MDN parameters, we do not face any instabilities in training, which are known to be a big obstacle for widespread deployment of MDN.The method can be readily applied to any task with a spatial regression component.Our findings highlight the multi-modal nature of real-world vision, and the significance of explicitly accounting for viewpoint variation, at least when spatial localization is concerned.",We use mixture density networks to do full conditional density estimation for spatial offset regression and apply it to the human pose estimation task. 1663,Visualizing Music Transformer,"Like language, music can be represented as a sequence of discrete symbols that form a hierarchical syntax, with notes being roughly like characters and motifs of notes like words. Unlike text however, music relies heavily on repetition on multiple timescales to build structure and meaning.The Music Transformer has shown compelling results in generating music with structure.
In this paper, we introduce a tool for visualizing self-attention on polyphonic music with an interactive pianoroll. We use the Music Transformer as both a descriptive tool and a generative model. For the former, we use it to analyze existing music to see if the resulting self-attention structure corroborates the musical structure known from music theory. For the latter, we inspect the model's self-attention during generation, in order to understand how past notes affect future ones. We also compare and contrast the attention structure of regular attention to that of relative attention, and examine its impact on the resulting generated music. For example, for the JSB Chorales dataset, a model trained with relative attention is more consistent in attending to all the voices in the preceding timestep and the chords before, and at cadences to the beginning of a phrase, allowing it to create an arc. We hope that our analyses will offer more evidence for relative self-attention as a powerful inductive bias for modeling music. We invite the reader to explore our video animations of music attention and to interact with the visualizations at https://storage.googleapis.com/nips-workshop-visualization/index.html.",Visualizing the differences between regular and relative attention for Music Transformer. 1664,Three factors influencing minima in SGD,"We study the statistical properties of the endpoint of stochastic gradient descent.We approximate SGD as a stochastic differential equation and consider its Boltzmann-Gibbs equilibrium distribution under the assumption of isotropic variance in loss gradients.Through this analysis, we find that three factors – learning rate, batch size and the variance of the loss gradients – control the trade-off between the depth and width of the minima found by SGD, with wider minima favoured by a higher ratio of learning rate to batch size. In the equilibrium distribution only the ratio of learning rate to batch size appears, implying that it’s invariant under a simultaneous rescaling of each by the same amount.We experimentally show how learning rate and batch size affect SGD from two perspectives: the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the experiments suggest the endpoint of SGD is similar under simultaneous rescaling of batch size and learning rate, and also that a higher ratio leads to flatter minima; both findings are consistent with our theoretical analysis. We note experimentally that the dynamics also seem to be similar under the same rescaling of learning rate and batch size, which we explore showing that one can exchange batch size and learning rate in a cyclical learning rate schedule. Next, we illustrate how noise affects memorization, showing that high noise levels lead to better generalization. Finally, we find experimentally that the similarity under simultaneous rescaling of learning rate and batch size breaks down if the learning rate gets too large or the batch size gets too small.","Three factors (batch size, learning rate, gradient noise) change in a predictable way the properties (e.g. sharpness) of minima found by SGD."
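A short sketch of the standard SDE view behind entry 1664, under the isotropic gradient-noise assumption stated there; the notation is generic and not necessarily the paper's:

```latex
% Mini-batch SGD with batch size B, learning rate \eta, per-sample gradient variance \sigma^2 I:
\theta_{t+1} = \theta_t - \eta \nabla L(\theta_t) + \eta\,\xi_t,
\qquad \xi_t \sim \mathcal{N}\!\left(0, \tfrac{\sigma^2}{B} I\right).
% Treated as a discretized Langevin SDE, the stationary (Boltzmann-Gibbs) density is
P(\theta) \propto \exp\!\left(-\frac{L(\theta)}{T}\right),
\qquad T = \frac{\eta\,\sigma^{2}}{2B},
% so only the ratio \eta / B enters the equilibrium distribution.
```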
1665,A closer look at the word analogy problem,"Although word analogy problems have become a standard tool for evaluating word vectors, little is known about why word vectors are so good at solving these problems.In this paper, I attempt to further our understanding of the subject, by developing a simple, but highly accurate generative approach to solve the word analogy problem for the case when all terms involved in the problem are nouns.My results demonstrate the ambiguities associated with learning the relationship between a word pair, and the role of the training dataset in determining the relationship which gets most highlighted.Furthermore, my results show that the ability of a model to accurately solve the word analogy problem may not be indicative of a model’s ability to learn the relationship between a word pair the way a human does.","Simple generative approach to solve the word analogy problem which yields insights into word relationships, and the problems with estimating them" 1666,Rethinking the Hyperparameters for Fine-tuning,"Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks.Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyper-parameters and keeping them fixed to values normally used for training from scratch.This paper re-examines several common practices of setting hyper-parameters for fine-tuning.Our findings are based on extensive empirical evaluation for fine-tuning on various transfer learning benchmarks. While prior works have thoroughly investigated learning rate and batch size, momentum for fine-tuning is a relatively unexplored parameter.We find that picking the right value for momentum is critical for fine-tuning performance and connect it with previous theoretical findings. Optimal hyper-parameters for fine-tuning in particular the effective learning rate are not only dataset dependent but also sensitive to the similarity between the source domain and target domain.This is in contrast to hyper-parameters for training from scratch. Reference-based regularization that keeps models close to the initial model does not necessarily apply for ""dissimilar"" datasets.Our findings challenge common practices of fine- tuning and encourages deep learning practitioners to rethink the hyper-parameters for fine-tuning.",This paper re-examines several common practices of setting hyper-parameters for fine-tuning. 
1667,DELTA: DEEP LEARNING TRANSFER USING FEATURE MAP WITH ATTENTION FOR CONVOLUTIONAL NETWORKS,"Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly accelerate training while the accuracy is frequently bottlenecked by the limited dataset size of the new target task.To solve the problem, some regularization methods, constraining the outer layer weights of the target network using the starting point as references, have been studied.In this paper, we propose a novel regularized transfer learning framework DELTA, namely DEep Learning Transfer using Feature Map with Attention.Instead of constraining the weights of neural network, DELTA aims to preserve the outer layer outputs of the target network.Specifically, in addition to minimizing the empirical loss, DELTA intends to align the outer layer outputs of two networks, through constraining a subset of feature maps that are precisely selected by attention that has been learned in an supervised learning manner.We evaluate DELTA with the state-of-the-art algorithms, including L2 and L2-SP.The experiment results show that our proposed method outperforms these baselines with higher accuracy for new tasks.",improving deep transfer learning with regularization using attention based feature maps 1668,Multi-Dimensional Explanation of Reviews,"Neural models achieved considerable improvement for many natural language processing tasks, but they offer little transparency, and interpretability comes at a cost.In some domains, automated predictions without justifications have limited applicability.Recently, progress has been made regarding single-aspect sentiment analysis for reviews, where the ambiguity of a justification is minimal.In this context, a justification, or mask, consists of word sequences from the input text, which suffice to make the prediction.Existing models cannot handle more than one aspect in one training and induce binary masks that might be ambiguous.In our work, we propose a neural model for predicting multi-aspect sentiments for reviews and generates a probabilistic multi-dimensional mask simultaneously, in an unsupervised and multi-task learning manner.Our evaluation shows that on three datasets, in the beer and hotel domain, our model outperforms strong baselines and generates masks that are: strong feature predictors, meaningful, and interpretable.","Neural model predicting multi-aspect sentiments and generating a probabilistic multi-dimensional mask simultaneously. Model outperforms strong baselines and generates masks that are: strong feature predictors, meaningful, and interpretable." 
1669,Alpha-divergence bridges maximum likelihood and reinforcement learning in neural sequence generation,"Neural sequence generation is commonly approached by using maximum-likelihood estimation or reinforcement learning.However, it is known that they have their own shortcomings; ML presents training/testing discrepancy, whereas RL suffers from sample inefficiency.We point out that it is difficult to resolve all of the shortcomings simultaneously because of a tradeoff between ML and RL.In order to counteract these problems, we propose an objective function for sequence generation using α-divergence, which leads to an ML-RL integrated method that exploits better parts of ML and RL.We demonstrate that the proposed objective function generalizes ML and RL objective functions because it includes both as its special cases.We provide a proposition stating that the difference between the RL objective function and the proposed one monotonically decreases with increasing α.Experimental results on machine translation tasks show that minimizing the proposed objective function achieves better sequence generation performance than ML-based methods.",Propose new objective function for neural sequence generation which integrates ML-based and RL-based objective functions. 1670,Siamese Capsule Networks,"Capsule Networks have shown encouraging results on benchmark computer vision datasets such as MNIST, CIFAR and smallNORB.However, they have yet to be tested on tasks where the entities detected inherently have more complex internal representations and there are very few instances per class to learn from and where point-wise classification is not suitable.Hence, this paper carries out experiments on face verification in both controlled and uncontrolled settings that together address these points.In doing so we introduce Siamese Capsule Networks, a new variant that can be used for pairwise learning tasks.We find that the model improves over baselines in the few-shot learning setting, suggesting that capsule networks are efficient at learning discriminative representations when given few samples.
We find that Siamese Capsule Networks perform well against strong baselines on both pairwise learning datasets when trained using a contrastive loss with normalized capsule encoded pose features, yielding best results in the few-shot learning setting where image pairs in the test set contain unseen subjects.",A pairwise learned capsule network that performs well on face verification tasks given limited labeled data 1671,The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning,"In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN and Categorical DQN, while giving better run-time performance than A3C.Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting.The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones.Next, we introduce the β-leave-one-out policy gradient algorithm which improves the trade-off between variance and bias by using action values as a baseline.Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization.Using the Atari 2600 benchmarks, we show that each of these innovations contribute to both the sample efficiency and final agent performance.Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.",Reactor combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN while giving better run-time performance than A3C. 1672,Parsing-based Approaches for Verification and Recognition of Hierarchical Plans,"Hierarchical planning, in particular, Hierarchical Task Networks, was proposed as a method to describe plans by decomposition of tasks to sub-tasks until primitive tasks, actions, are obtained.Plan verification assumes a complete plan as input, and the objective is finding a task that decomposes to this plan.In plan recognition, a prefix of the plan is given and the objective is finding a task that decomposes to the plan with the given prefix.This paper describes how to verify and recognize plans using a common method known from formal grammars, by parsing.",The paper describes methods to verify and recognize HTN plans by parsing of attribute grammars.
1673,EXPLORING NEURAL ARCHITECTURE SEARCH FOR LANGUAGE TASKS,"Neural architecture search, the task of finding neural architectures automatically, has recently emerged as a promising approach for unveiling better models over human-designed ones.However, most success stories are for vision tasks and have been quite limited for text, except for a small language modeling setup.In this paper, we explore NAS for text sequences at scale, by first focusing on the task of language translation and later extending to reading comprehension.From a standard sequence-to-sequence models for translation, we conduct extensive searches over the recurrent cells and attention similarity functions across two translation tasks, IWSLT English-Vietnamese and WMT German-English.We report challenges in performing cell searches as well as demonstrate initial success on attention searches with translation improvements over strong baselines.In addition, we show that results on attention searches are transferable to reading comprehension on the SQuAD dataset.","We explore neural architecture search for language tasks. Recurrent cell search is challenging for NMT, but attention mechanism search works. The result of attention search on translation is transferable to reading comprehension." 1674,Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer,"Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code.In some cases, autoencoders can ""interpolate"": By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints.In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data.We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting.We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations.",We propose a regularizer that improves interpolation and autoencoders and show that it also improves the learned representation for downstream tasks. 
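A minimal sketch of the kind of critic-based interpolation regularizer described in entry 1674; the [0, 0.5] mixing range, loss forms, and module names are illustrative assumptions, and the two losses would be applied with separate optimizer steps (or appropriate detaching) in practice:

```python
import torch

def interpolation_losses(encoder, decoder, critic, x1, x2):
    # Decode a convex combination of two latent codes; a critic is trained
    # to recover the mixing coefficient from the decoded image, while the
    # autoencoder is regularized to drive that prediction toward zero so
    # interpolants look like ordinary reconstructions.
    alpha = 0.5 * torch.rand(x1.size(0), 1, device=x1.device)   # assumes 2-D latent codes
    z_mix = alpha * encoder(x1) + (1 - alpha) * encoder(x2)
    x_mix = decoder(z_mix)
    pred = critic(x_mix).view(-1, 1)          # assumes one scalar per example
    critic_loss = ((pred - alpha) ** 2).mean()  # critic: regress alpha
    ae_regularizer = (pred ** 2).mean()         # autoencoder: fool critic toward 0
    return critic_loss, ae_regularizer
```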
1675,From Here to There: Video Inbetweening Using Direct 3D Convolutions,"We consider the problem of generating plausible and diverse video sequences, when we are only given a start and an end frame.This task is also known as inbetweening, and it belongs to the broader area of stochastic video generation, which is generally approached by means of recurrent neural networks.In this paper, we propose instead a fully convolutional model to generate video sequences directly in the pixel domain.We first obtain a latent video representation using a stochastic fusion mechanism that learns how to incorporate information from the start and end frames.Our model learns to produce such latent representation by progressively increasing the temporal resolution, and then decode in the spatiotemporal domain using 3D convolutions.The model is trained end-to-end by minimizing an adversarial loss.Experiments on several widely-used benchmark datasets show that it is able to generate meaningful and diverse in-between video sequences, according to both quantitative and qualitative evaluations.","This paper presents method for stochastically generating in-between video frames from given key frames, using direct 3D convolutions." 1676,Weakly-supervised Knowledge Graph Alignment with Adversarial Learning,"Aligning knowledge graphs from different sources or languages, which aims to align both the entity and relation, is critical to a variety of applications such as knowledge graph construction and question answering.Existing methods of knowledge graph alignment usually rely on a large number of aligned knowledge triplets to train effective models.However, these aligned triplets may not be available or are expensive to obtain for many domains.Therefore, in this paper we study how to design fully-unsupervised methods or weakly-supervised methods, i.e., to align knowledge graphs without or with only a few aligned triplets.We propose an unsupervised framework based on adversarial training, which is able to map the entities and relations in a source knowledge graph to those in a target knowledge graph.This framework can be further seamlessly integrated with existing supervised methods, where only a limited number of aligned triplets are utilized as guidance.Experiments on real-world datasets prove the effectiveness of our proposed approach in both the weakly-supervised and unsupervised settings.",This paper studies weakly-supervised knowledge graph alignment with adversarial training frameworks. 
1677,Meta-Learning to Guide Segmentation,"There are myriad kinds of segmentation, and ultimately the ""right"" segmentation of a given scene is in the eye of the annotator.Standard approaches require large amounts of labeled data to learn just one particular kind of segmentation.As a first step towards relieving this annotation burden, we propose the problem of guided segmentation: given varying amounts of pixel-wise labels, segment unannotated pixels by propagating supervision locally and non-locally.We propose guided networks, which extract a latent task representation---guidance---from variable amounts and classes of pixel supervision and optimize our architecture end-to-end for fast, accurate, and data-efficient segmentation by meta-learning.To span the few-shot and many-shot learning regimes, we examine guidance from as little as one pixel per concept to as much as 1000+ images, and compare to full gradient optimization at both extremes.To explore generalization, we analyze guidance as a bridge between different levels of supervision to segment classes as the union of instances.Our segmentor concentrates different amounts of supervision of different types of classes into an efficient latent representation, non-locally propagates this supervision across images, and can be updated quickly and cumulatively when given more supervision.",We propose a meta-learning approach for guiding visual segmentation tasks from varying amounts of supervision. 1678,LOGAN: Latent Optimisation for Generative Adversarial Networks,"Training generative adversarial networks requires balancing of delicate adversarial dynamics.Even with careful tuning, training may diverge or end up in a bad equilibrium with dropped modes.In this work, we introduce a new form of latent optimisation inspired by the CS-GAN and show that it improves adversarial dynamics by enhancing interactions between the discriminator and the generator.We develop supporting theoretical analysis from the perspectives of differentiable games and stochastic approximation.Our experiments demonstrate that latent optimisation can significantly improve GAN training, obtaining state-of-the-art performance for the ImageNet dataset.Our model achieves an Inception Score of 148 and a Frechet Inception Distance of 3.4, an improvement of 17% and 32% in IS and FID respectively, compared with the baseline BigGAN-deep model with the same architecture and number of parameters.",Latent optimisation improves adversarial training dynamics. We present both theoretical analysis and state-of-the-art image generation with ImageNet 128x128. 
1679,Theoretical properties of the global optimizer of two-layer Neural Network,"In this paper, we study the problem of optimizing a two-layer artificial neural network that best fits a training dataset.We look at this problem in the setting where the number of parameters is greater than the number of sampled points.We show that for a wide class of differentiable activation functions, we have that arbitrary first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular.We essentially show that these non-singular hidden layer matrices satisfy a ""good"" property for this broad class of activation functions.Techniques involved in proving this result inspire us to look at a new algorithmic framework, where in between two gradient steps on the hidden layer, we add a stochastic gradient descent step on the output layer.In this new algorithmic framework, we extend our earlier result and show that for all finite iterations the hidden layer satisfies the ""good"" property mentioned earlier, therefore partially explaining the success of noisy gradient methods and addressing the issue of data independence of our earlier result.Both of these results are easily extended from hidden layers given by a square matrix to those given by a flat matrix.Results are applicable even if the network has more than one hidden layer, provided all inner hidden layers are arbitrary, satisfy non-singularity, all activations are from the given class of differentiable functions and optimization is only with respect to the outermost hidden layer.Separately, we also study the smoothness properties of the objective function and show that it is actually Lipschitz smooth, i.e., its gradients do not change sharply.We use smoothness properties to guarantee asymptotic convergence to a first-order optimal solution.",This paper discusses theoretical properties of first-order optimal points of two-layer neural networks in the over-parametrized case 1680,Nonlinear Channels Aggregation Networks for Deep Action Recognition,"We introduce the concept of channel aggregation in ConvNet architecture, a novel compact representation of CNN features useful for explicitly modeling the nonlinear channels encoding especially when the new unit is embedded inside of deep architectures for action recognition.The channel aggregation is based on multiple-channels features of ConvNet and aims to quickly find the optimal convergence path.We name our proposed convolutional architecture “nonlinear channels aggregation networks” and its new layer “nonlinear channels aggregation layer”.We theoretically motivate channels aggregation functions and empirically study their effect on convergence speed and classification accuracy.Another contribution in this work is an efficient and effective implementation of the NCAL, speeding it up orders of magnitude.We evaluate its performance on the standard benchmarks UCF101 and HMDB51, and experimental results demonstrate that this formulation not only obtains fast convergence but also stronger generalization capability without sacrificing performance.",An architecture that enables CNNs trained on video sequences to converge rapidly 1681,Black-Box Adversarial Attack with Transferable Model-based Embedding,"We present a new method for black-box adversarial attack.Unlike previous methods that combined transfer-based and score-based methods by using the gradient or initialization of a surrogate white-box model, this new method tries to learn a low-dimensional embedding using a pretrained model, and then performs
efficient search within the embedding space to attack an unknown target network.The method produces adversarial perturbations with high level semantic patterns that are easily transferable.We show that this approach can greatly improve the query efficiency of black-box adversarial attack across different target network architectures.We evaluate our approach on MNIST, ImageNet and Google Cloud Vision API, resulting in a significant reduction in the number of queries.We also attack adversarially defended networks on CIFAR10 and ImageNet, where our method not only reduces the number of queries, but also improves the attack success rate.","We present a new method that combines transfer-based and score-based black-box adversarial attack, improving the success rate and query efficiency of black-box adversarial attack across different network architectures." 1682,A Neuro-AI Interface: Learning DNNs from the Human Brain,"Deep neural networks are inspired by the human brain, and the interconnection between the two has been widely studied in the literature. However, it is still an open question whether DNNs are able to make decisions like the brain. Previous work has demonstrated that DNNs, trained by matching the neural responses from the inferior temporal cortex in the monkey's brain, are able to achieve human-level performance on image object recognition tasks. This indicates that neural dynamics can provide informative knowledge to help DNNs accomplish specific tasks. In this paper, we introduce the concept of a neuro-AI interface, which aims to use humans' neural responses as supervised information for helping AI systems solve a task that is difficult when using traditional machine learning strategies. In order to deliver the idea of neuro-AI interfaces, we focus on deploying it to one of the fundamental problems in generative adversarial networks: designing a proper evaluation metric to evaluate the quality of images produced by GANs. ",Describe a neuro-AI interface technique to evaluate generative adversarial networks 1683,A Scalable Risk-based Framework for Rigorous Autonomous Vehicle Evaluation,"While recent developments in autonomous vehicle technology highlight substantial progress, we lack tools for rigorous and scalable testing.Real-world testing, the de facto evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of miles in order to statistically validate performance claims.We implement a simulation framework that can test an entire modern autonomous driving system, including, in particular, systems that employ deep-learning perception and control algorithms.Using adaptive sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior.We demonstrate our framework on a highway scenario.","Using adaptive sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior. 
" 1684,Learning To Avoid Negative Transfer in Few Shot Transfer Learning,"Many tasks in natural language understanding require learning relationships between two sequences for various tasks such as natural language inference, paraphrasing and entailment.These aforementioned tasks are similar in nature, yet they are often modeled individually.Knowledge transfer can be effective for closely related tasks, which is usually carried out using parameter transfer in neural networks.However, transferring all parameters, some of which irrelevant for a target task, can lead to sub-optimal results and can have a negative effect on performance, referred to as transfer.Hence, this paper focuses on the transferability of both instances and parameters across natural language understanding tasks by proposing an ensemble-based transfer learning method in the context of few-shot learning.Our main contribution is a method for mitigating negative transfer across tasks when using neural networks, which involves dynamically bagging small recurrent neural networks trained on different subsets of the source task/s.We present a straightforward yet novel approach for incorporating these networks to a target task for few-shot learning by using a decaying parameter chosen according to the slope changes of a smoothed spline error curve at sub-intervals during training.Our proposed method show improvements over hard and soft parameter sharing transfer methods in the few-shot learning case and shows competitive performance against models that are trained given full supervision on the target task, from only few examples.",A dynamic bagging methods approach to avoiding negatve transfer in neural network few-shot transfer learning 1685,Modeling treatment events in disease progression,"Ability to quantify and predict progression of a disease is fundamental for selecting an appropriate treatment.Many clinical metrics cannot be acquired frequently either because of their cost or because they are inconvenient or harmful to a patient.In such scenarios, in order to estimate individual trajectories of disease progression, it is advantageous to leverage similarities between patients, i.e. 
the covariance of trajectories, and find a latent representation of progression.Most existing methods for estimating trajectories do not account for events in-between observations, which dramatically decreases their adequacy for clinical practice.In this study, we develop a machine learning framework named Coordinatewise-Soft-Impute for analyzing disease progression from sparse observations in the presence of confounding events.CSI is guaranteed to converge to the global minimum of the corresponding optimization problem.Experimental results also demonstrate the effectiveness of CSI on both simulated and real datasets.",A novel matrix completion based algorithm to model disease progression with events 1686,The Missing Ingredient in Zero-Shot Neural Machine Translation,"Multilingual Neural Machine Translation systems are capable of translating between multiple source and target languages within a single system.An important indicator of generalization within these systems is the quality of zero-shot translation - translating between language pairs that the system has never seen during training.However, until now, the zero-shot performance of multilingual models has lagged far behind the quality that can be achieved by using a two step translation process that pivots through an intermediate language.In this work, we diagnose why multilingual models under-perform in zero shot settings.We propose explicit language invariance losses that guide an NMT encoder towards learning language agnostic representations.Our proposed strategies significantly improve zero-shot translation performance on WMT English-French-German and on the IWSLT 2017 shared task, and for the first time, match the performance of pivoting approaches while maintaining performance on supervised directions.",Simple similarity constraints on top of multilingual NMT enable high quality translation between unseen language pairs for the first time. 1687,Finite Depth and Width Corrections to the Neural Tangent Kernel,"We prove the precise scaling, at finite depth and width, for the mean and variance of the neural tangent kernel in a randomly initialized ReLU network.The standard deviation is exponential in the ratio of network depth to width.Thus, even in the limit of infinite overparameterization, the NTK is not deterministic if depth and width simultaneously tend to infinity.Moreover, we prove that for such deep and wide networks, the NTK has a non-trivial evolution during training by showing that the mean of its first SGD update is also exponential in the ratio of network depth to width.This is in sharp contrast to the regime where depth is fixed and network width is very large.Our results suggest that, unlike relatively shallow and wide networks, deep and wide ReLU networks are capable of learning data-dependent features even in the so-called lazy training regime.",The neural tangent kernel in a randomly initialized ReLU net has non-trivial fluctuations as long as the depth and width are comparable. 
1688,Tensor Decompositions for Temporal Knowledge Base Completion,"Most algorithms for representation learning and link prediction in relational data have been designed for static data.However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems.This is also the case for knowledge bases, which contain facts that are valid only at certain points in time.For the problem of link prediction under temporal constraints, i.e., answering temporally qualified queries, we propose a solution inspired by the canonical decomposition of tensors of order 4.We introduce new regularization schemes and present an extension of ComplEx that achieves state-of-the-art performance.Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods.",We propose new tensor decompositions and associated regularizers to obtain state-of-the-art performance on temporal knowledge base completion. 1689,Beyond Greedy Ranking: Slate Optimization via List-CVAE,"The conventional approach to solving the recommendation problem greedily ranks individual document candidates by prediction scores.However, this method fails to optimize the slate as a whole, and hence, often struggles to capture biases caused by the page layout and document interdependencies.The slate recommendation problem aims to directly find the optimally ordered subset of documents that best serve users’ interests.Solving this problem is hard due to the combinatorial explosion of document candidates and their display positions on the page.Therefore we propose a paradigm shift from the traditional viewpoint of solving a ranking problem to a direct slate generation framework.In this paper, we introduce List Conditional Variational Auto-Encoders, which learn the joint distribution of documents on the slate conditioned on user responses, and directly generate full slates.Experiments on simulated and real-world data show that List-CVAE outperforms greedy ranking methods consistently on document corpora of various scales.",We use a CVAE-type model structure to learn to directly generate slates/whole pages for recommendation systems. 
1690,EvoNet: A Neural Network for Predicting the Evolution of Dynamic Graphs,"Neural networks for structured data like graphs have been studied extensively in recent years.To date, the bulk of research activity has focused mainly on static graphs.However, most real-world networks are dynamic since their topology tends to change over time.Predicting the evolution of dynamic graphs is a task of high significance in the area of graph mining.Despite its practical importance, the task has not been explored in depth so far, mainly due to its challenging nature.In this paper, we propose a model that predicts the evolution of dynamic graphs.Specifically, we use a graph neural network along with a recurrent architecture to capture the temporal evolution patterns of dynamic graphs.Then, we employ a generative model which predicts the topology of the graph at the next time step and constructs a graph instance that corresponds to that topology.We evaluate the proposed model on several artificial datasets following common network evolving dynamics, as well as on real-world datasets.Results demonstrate the effectiveness of the proposed model.","Combining graph neural networks and the RNN graph generative model, we propose a novel architecture that is able to learn from a sequence of evolving graphs and predict the graph topology evolution for the future timesteps" 1691,Learning to Decompose Compound Questions with Reinforcement Learning,"As for knowledge-based question answering, a fundamental problem is to relax the assumption of answerable questions from simple questions to compound questions.Traditional approaches firstly detect topic entity mentioned in questions, then traverse the knowledge graph to find relations as a multi-hop path to answers, while we propose a novel approach to leverage simple-question answerers to answer compound questions.Our model consists of two parts: a novel learning-to-decompose agent that learns a policy to decompose a compound question into simple questions and three independent simple-question answerers that classify the corresponding relations for each simple question.Experiments demonstrate that our model learns complex rules of compositionality as stochastic policy, which benefits simple neural networks to achieve state-of-the-art results on WebQuestions and MetaQA.We analyze the interpretable decomposition process as well as generated partitions.",We propose a learning-to-decompose agent that helps simple-question answerers to answer compound question over knowledge graph. 
1692,Matrix Product Operator Restricted Boltzmann Machines,"A restricted Boltzmann machine learns a probabilistic distribution over its input samples and has numerous uses like dimensionality reduction, classification and generative modeling.Conventional RBMs accept vectorized data that dismisses potentially important structural information in the original tensor input.Matrix-variate and tensor-variate RBMs, named MvRBM and TvRBM, have been proposed but are all restrictive by construction.This work presents the matrix product operator RBM that utilizes a tensor network generalization of Mv/TvRBM, preserves input formats in both the visible and hidden layers, and results in higher expressive power.A novel training algorithm integrating contrastive divergence and an alternating optimization procedure is also developed.",Propose a general tensor-based RBM model which can compress the model greatly while keeping a strong model expression capacity 1693,Robust Reinforcement Learning for Autonomous Driving ,"Autonomous driving is still considered an “unsolved problem” given its inherently large variability and the fact that many processes associated with its development, like vehicle control and scene recognition, remain open issues.Although reinforcement learning algorithms have achieved notable results in games and some robotic manipulations, this technique has not been widely scaled up to the more challenging real world applications like autonomous driving.In this work, we propose a deep reinforcement learning algorithm embedding an actor critic architecture with multi-step returns to achieve better robustness of the agent's learning strategies when acting in complex and unstable environments.The experiment is conducted with the Carla simulator, offering customizable and realistic urban driving conditions.The developed deep actor RL guided by a policy-evaluator critic distinctly surpasses the performance of a standard deep RL agent.",An actor-critic reinforcement learning approach with multi-step returns applied to autonomous driving with the Carla simulator. 
1694,A Classification-Based Perspective on GAN Distributions,"A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks is whether GANs are actually able to capture the key characteristics of the datasets they are trained on.The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability.In this paper, we propose new techniques that employ classification-based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data.These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets.They also indicate that GANs have significant problems in reproducing the more distributional properties of the training dataset.In particular, the diversity of such synthetic data is orders of magnitude smaller than that of the original data.",We propose new methods for evaluating and quantifying the quality of synthetic GAN distributions from the perspective of classification tasks 1695,A Deep Learning Approach for Survival Clustering without End-of-life Signals,"The goal of survival clustering is to map subjects to clusters ranging from low-risk to high-risk.Existing survival methods assume the presence of clear signals or introduce them artificially using a pre-defined timeout.In this paper, we forego this assumption and introduce a loss function that differentiates between the empirical lifetime distributions of the clusters using a modified Kuiper statistic.We learn a deep neural network by optimizing this loss, that performs a soft clustering of users into survival groups.We apply our method to a social network dataset with over 1M subjects, and show significant improvement in C-index compared to alternatives.","The goal of survival clustering is to map subjects into clusters. Without end-of-life signals, this is a challenging task. To address this task we propose a new loss function by modifying the Kuiper statistics." 1696,A Copula approach for hyperparameter transfer learning,"Bayesian optimization is a popular methodology to tune the hyperparameters of expensive black-box functions.Despite its success, standard BO focuses on a single task at a time and is not designed to leverage information from related functions, such as tuning performance metrics of the same algorithm across multiple datasets.In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different metrics.The main idea is to regress the mapping from hyperparameter to metric quantiles with a semi-parametric Gaussian Copula distribution, which provides robustness against different scales or outliers that can occur in different tasks.We introduce two methods to leverage this estimation: a Thompson sampling strategy as well as a Gaussian Copula process using such quantile estimate as a prior.We show that these strategies can combine the estimation of multiple metrics such as runtime and accuracy, steering the optimization toward cheaper hyperparameters for the same level of accuracy.Experiments on an extensive set of hyperparameter tuning tasks demonstrate significant improvements over state-of-the-art methods.",We show how using semi-parametric prior estimations can speed up HPO significantly across datasets and metrics. 
1697,An Etching Latte Art Support System by Tracing the Making Procedure Based on Projection Mapping,"It is difficult for the beginners of etching latte art to make well-balanced patterns by using two fluids with different viscosities such as foamed milk and syrup.Even though making etching latte art while watching making videos which show the procedure, it is difficult to keep balance.Thus well-balanced etching latte art cannot be made easily.In this paper, we propose a system which supports the beginners to make well-balanced etching latte art by projecting a making procedure of etching latte art directly onto a cappuccino.The experiment results show the progress by using our system. We also discuss about the similarity of the etching latte art and the design templates by using background subtraction.",We have developed an etching latte art support system which projects the making procedure directly onto a cappuccino to help the beginners to make well-balanced etching latte art. 1698,Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation,"We focus on temporal self-supervision for GAN-based video generation tasks.While adversarial training successfully yields generative models for a variety of areas, temporal relationship in the generated data is much less explored.This is crucial for sequential generation tasks, e.g. video super-resolution and unpaired video translation.For the former, state-of-the-art methods often favor simpler norm losses such as L2 over adversarial training.However, their averaging nature easily leads to temporally smooth results with an undesirable lack of spatial detail.For unpaired video translation, existing approaches modify the generator networks to form spatio-temporal cycle consistencies.In contrast, we focus on improving the learning objectives and propose a temporally self-supervised algorithm.For both tasks, we show that temporal adversarial learning is key to achieving temporally coherent solutions without sacrificing spatial detail.We also propose a novel Ping-Pong loss to improve the long-term temporal consistency.It effectively prevents recurrent networks from accumulating artifacts temporally without depressing detailed features.We also propose a first set of metrics to quantitatively evaluate the accuracy as well as the perceptual quality of the temporal evolution.A series of user studies confirms the rankings computed with these metrics.",We propose temporal self-supervisions for learning stable temporal functions with GANs. 1699,Bridging HMMs and RNNs through Architectural Transformations,"A distinct commonality between HMMs and RNNs is that they both learn hidden representations for sequential data.In addition, it has been noted that the backward computation of the Baum-Welch algorithm for HMMs is a special case of the back propagation algorithm used for neural networks). Do these observations suggest that, despite their many apparent differences, HMMs are a special case of RNNs? In this paper, we investigate a series of architectural transformations between HMMs and RNNs, both through theoretical derivations and empirical hybridization, to answer this question.In particular, we investigate three key design factors—independence assumptions between the hidden states and the observation, the placement of softmax, and the use of non-linearity—in order to pin down their empirical effects. 
We present a comprehensive empirical study to provide insights on the interplay between expressivity and interpretability with respect to language modeling and parts-of-speech induction.","Are HMMs a special case of RNNs? We investigate a series of architectural transformations between HMMs and RNNs, both through theoretical derivations and empirical hybridization and provide new insights." 1700,An information-theoretic analysis of deep latent-variable models,"We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variables models using variational inference.This framework emphasizes the need to consider latent-variable models along two dimensions: the ability to reconstruct inputs and the communication cost.We derive the optimal frontier of generative models in the two-dimensional rate-distortion plane, and show how the standard evidence lower bound objective is insufficient to select between points along this frontier.However, by performing targeted optimization to learn generative models with different rates, we are able to learn many models that can achieve similar generative performance but make vastly different trade-offs in terms of the usage of the latent variable.Through experiments on MNIST and Omniglot with a variety of architectures, we show how our framework sheds light on many recent proposed extensions to the variational autoencoder family.",We provide an information theoretic and experimental analysis of state-of-the-art variational autoencoders. 1701,How Powerful are Graph Neural Networks?,"Graph Neural Networks are an effective framework for representation learning of graphs.GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes.Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks.However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations.Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures.Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures.We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test.We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.",We develop theoretical foundations for the expressive power of GNNs and design a provably most powerful GNN. 
1702,Tasks Without Borders: A New Approach to Online Multi-Task Learning,"We introduce MTLAB, a new algorithm for learning multiple related tasks with strong theoretical guarantees.Its key idea is to perform learning sequentially over the data of all tasks, without interruptions or restarts at task boundaries.Predictors for individual tasks are derived from this process by an additional online-to-batch conversion step.By learning across task boundaries, MTLAB achieves a sublinear regret of true risks in the number of tasks.In the lifelong learning setting, this leads to an improved generalization bound that converges with the total number of samples across all observed tasks, instead of the number of examples per tasks or the number of tasks independently.At the same time, it is widely applicable: it can handle finite sets of tasks, as common in multi-task learning, as well as stochastic task sequences, as studied in lifelong learning.",A new algorithm for online multi-task learning that learns without restarts at the task borders 1703,Luck Matters: Understanding Training Dynamics of Deep ReLU Networks,"We analyze the dynamics of training deep ReLU networks and their implications on generalization capability.Using a teacher-student setting, we discovered a novel relationship between the gradient received by hidden student nodes and the activations of teacher nodes for deep ReLU networks.With this relationship and the assumption of small overlapping teacher node activations, we prove that student nodes whose weights are initialized to be close to teacher nodes converge to them at a faster rate, and in over-parameterized regimes and 2-layer case, while a small set of lucky nodes do converge to the teacher nodes, the fan-out weights of other nodes converge to zero.This framework provides insight into multiple puzzling phenomena in deep learning like over-parameterization, implicit regularization, lottery tickets, etc.We verify our assumption by showing that the majority of BatchNorm biases of pre-trained VGG11/16 models are negative.Experiments on random deep teacher networks with Gaussian inputs, teacher network pre-trained on CIFAR-10 and extensive ablation studies validate our multiple theoretical predictions.","A theoretical framework for deep ReLU network that can explains multiple puzzling phenomena like over-parameterization, implicit regularization, lottery tickets, etc. 
" 1704,Word translation without parallel data,"State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora.Recent studies showed that the need for parallel data supervision can be alleviated with character-level information.While these methods showed encouraging results, they are not on par with their supervised counterparts and are limited to pairs of languages sharing a common alphabet.In this work, we show that we can build a bilingual dictionary between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way.Without using any character information, our model even outperforms existing supervised methods on cross-lingual tasks for some language pairs.Our experiments demonstrate that our method works very well also for distant language pairs, like English-Russian or English-Chinese.We finally describe experiments on the English-Esperanto low-resource language pair, on which there only exists a limited amount of parallel data, to show the potential impact of our method in fully unsupervised machine translation.Our code, embeddings and dictionaries are publicly available.","Aligning languages without the Rosetta Stone: with no parallel data, we construct bilingual dictionaries using adversarial training, cross-domain local scaling, and an accurate proxy criterion for cross-validation." 1705,Interpretable Counting for Visual Question Answering,"Questions that require counting a variety of objects in images remain a major challenge in visual question answering.The most common approaches to VQA involve either classifying answers based on fixed length representations of both the image and question or summing fractional counts estimated from each section of the image.In contrast, we treat counting as a sequential decision process and force our model to make discrete choices of what to count.Specifically, the model sequentially selects from detected objects and learns interactions between objects that influence subsequent selections.A distinction of our approach is its intuitive and interpretable output, as discrete counts are automatically grounded in the image.Furthermore, our method outperforms the state of the art architecture for VQA on multiple metrics that evaluate counting.",We perform counting for visual question answering; our model produces interpretable outputs by counting directly from detected objects. 1706,Learning Deep Generative Models of Graphs,"Graphs are fundamental data structures required to model many important real-world data, from knowledge graphs, physical and social interactions to molecules and proteins.In this paper, we study the problem of learning generative models of graphs from a dataset of graphs of interest.After learning, these models can be used to generate samples with similar properties as the ones in the dataset. Such models can be useful in a lot of applications, e.g. drug discovery and knowledge graph construction.The task of learning generative models of graphs, however, has its unique challenges.In particular, how to handle symmetries in graphs and ordering of its elements during the generation process are important issues.We propose a generic graph neural net based model that is capable of generating any arbitrary graph. We study its performance on a few graph generation tasks compared to baselines that exploit domain knowledge. 
We discuss potential issues and open problems for such generative models going forward.",We study the graph generation problem and propose a powerful deep generative model capable of generating arbitrary graphs. 1707,Jointly Learning Sentence Embeddings and Syntax with Unsupervised Tree-LSTMs,"We introduce a neural network that represents sentences by composing their words according to induced binary parse trees.We use Tree-LSTM as our composition function, applied along a tree structure found by a fully differentiable natural language chart parser.Our model simultaneously optimises both the composition function and the parser, thus eliminating the need for externally-provided parse trees which are normally required for Tree-LSTM.It can therefore be seen as a tree-based RNN that is unsupervised with respect to the parse trees.As it is fully differentiable, our model is easily trained with an off-the-shelf gradient descent method and backpropagation.We demonstrate that it achieves better performance compared to various supervised Tree-LSTM architectures on a textual entailment task and a reverse dictionary task.Finally, we show how performance can be improved with an attention mechanism which fully exploits the parse chart, by attending over all possible subspans of the sentence.",Represent sentences by composing them with Tree-LSTMs according to automatically induced parse trees. 1708,On Learning Wire-Length Efficient Neural Networks,"Pruning neural networks for wiring length efficiency is considered.Three techniques are proposed and experimentally tested: distance-based regularization, nested-rank pruning, and layer-by-layer bipartite matching.The first two algorithms are used in the training and pruning phases, respectively, and the third is used in the arranging neurons phase.Experiments show that distance-based regularization with weight based pruning tends to perform the best, with or without layer-by-layer bipartite matching.These results suggest that these techniques may be useful in creating neural networks for implementation in widely deployed specialized circuits.","Three new algorithms with ablation studies to prune neural network to optimize for wiring length, as opposed to number of remaining weights." 
1709,Stacking for Transfer Learning,"In machine learning tasks, overfitting frequently crops up when the number of samples from the target domain is insufficient, since the generalization ability of the classifier is poor in this circumstance.To solve this problem, transfer learning utilizes the knowledge of similar domains to improve the robustness of the learner.The main idea of existing transfer learning algorithms is to reduce the difference between domains by sample selection or domain adaptation.However, no matter what transfer learning algorithm we use, the difference always exists and the hybrid training of source and target data leads to reduced fitting capability of the learner on the target domain.Moreover, when the relatedness between domains is too low, negative transfer is more likely to occur.To tackle the problem, we propose a two-phase transfer learning architecture based on ensemble learning, which uses the existing transfer learning algorithms to train the weak learners in the first stage, and uses the predictions of target data to train the final learner in the second stage.Under this architecture, the fitting capability and generalization capability can be guaranteed at the same time.We evaluated the proposed method on public datasets, which demonstrates the effectiveness and robustness of our proposed method.",How to use stacked generalization to improve the performance of existing transfer learning algorithms when limited labeled data is available. 1710,Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning,"Deep learning has achieved astonishing results on many tasks with large amounts of data and generalization within the proximity of training data.For many important real-world applications, these requirements are unfeasible and additional prior knowledge on the task domain is required to overcome the resulting problems.In particular, learning physics models for model-based control requires robust extrapolation from fewer samples – often collected online in real-time – and model errors may lead to drastic damage to the system.Directly incorporating physical insight has enabled us to obtain a novel deep model learning approach that extrapolates well while requiring fewer samples.As a first example, we propose Deep Lagrangian Networks as a deep network structure upon which Lagrangian Mechanics have been imposed.DeLaN can learn the equations of motion of a mechanical system with a deep network efficiently while ensuring physical plausibility.The resulting DeLaN network performs very well at robot tracking control.The proposed method not only outperforms previous model learning approaches in learning speed but also exhibits substantially improved and more robust extrapolation to novel trajectories and learns online in real-time.",This paper introduces a physics prior for Deep Learning and applies the resulting network topology for model-based control. 
1711,Iterative Target Augmentation for Effective Conditional Generation,"Many challenging prediction problems, from molecular optimization to program synthesis, involve creating complex structured objects as outputs.However, available training data may not be sufficient for a generative model to learn all possible complex transformations.By leveraging the idea that evaluation is easier than generation, we show how a simple, broadly applicable, iterative target augmentation scheme can be surprisingly effective in guiding the training and use of such models.Our scheme views the generative model as a prior distribution, and employs a separately trained filter as the likelihood.In each augmentation step, we filter the model's outputs to obtain additional prediction targets for the next training epoch.Our method is applicable in the supervised as well as semi-supervised settings.We demonstrate that our approach yields significant gains over strong baselines both in molecular optimization and program synthesis.In particular, our augmented model outperforms the previous state-of-the-art in molecular optimization by over 10% in absolute gain.",We improve generative models by proposing a meta-algorithm that filters new training data from the model's outputs. 1712,Learning Protein Structure with a Differentiable Simulator,"The Boltzmann distribution is a natural model for many systems, from brains to materials and biomolecules, but is often of limited utility for fitting data because Monte Carlo algorithms are unable to simulate it in available time.This gap between the expressive capabilities and sampling practicalities of energy-based models is exemplified by the protein folding problem, since energy landscapes underlie contemporary knowledge of protein biophysics but computer simulations are challenged to fold all but the smallest proteins from first principles.In this work we aim to bridge the gap between the expressive capacity of energy functions and the practical capabilities of their simulators by using an unrolled Monte Carlo simulation as a model for data.We compose a neural energy function with a novel and efficient simulator based on Langevin dynamics to build an end-to-end-differentiable model of atomic protein structure given amino acid sequence information.We introduce techniques for stabilizing backpropagation under long roll-outs and demonstrate the model's capacity to make multimodal predictions and to, in some cases, generalize to unobserved protein fold types when trained on a large corpus of protein structures.",We use an unrolled simulator as an end-to-end differentiable model of protein structure and show it can (sometimes) hierarchically generalize to unseen fold topologies. 
1713,Automated Animal Training and Iterative Inference of Latent Learning Policy,"Progress in understanding how individual animals learn requires high-throughput standardized methods for behavioral training and ways of adapting training.During the course of training with hundreds or thousands of trials, an animal may change its underlying strategy abruptly, and capturing these changes requires real-time inference of the animal’s latent decision-making strategy.To address this challenge, we have developed an integrated platform for automated animal training, and an iterative decision-inference model that is able to infer the momentary decision-making policy, and predict the animal’s choice on each trial with an accuracy of ~80%, even when the animal is performing poorly.We also combined decision predictions at single-trial resolution with automated pose estimation to assess movement trajectories.Analysis of these features revealed categories of movement trajectories that associate with decision confidence.",Automated mice training for neuroscience with online iterative latent strategy inference for behavior prediction 1714,Line attractor dynamics in recurrent networks for sentiment classification,"Recurrent neural networks are a powerful tool for modeling sequential data.Despite their widespread usage, understanding how RNNs solve complex problems remains elusive. Here, we characterize how popular RNN architectures perform document-level sentiment classification.Despite their theoretical capacity to implement complex, high-dimensional computations, we find that trained networks converge to highly interpretable, low-dimensional representations. We identify a simple mechanism, integration along an approximate line attractor, and find this mechanism present across RNN architectures.Overall, these results demonstrate that surprisingly universal and human interpretable computations can arise across a range of recurrent networks.","We analyze recurrent networks trained on sentiment classification, and find that they all exhibit approximate line attractor dynamics when solving this task." 1715,Deep neuroethology of a virtual rodent,"Parallel developments in neuroscience and deep learning have led to mutually productive exchanges, pushing our understanding of real and artificial neural networks in sensory and cognitive systems.However, this interaction between fields is less developed in the study of motor control.In this work, we develop a virtual rodent as a platform for the grounded study of motor activity in artificial models of embodied control.We then use this platform to study motor activity across contexts by training a model to solve four complex tasks.Using methods familiar to neuroscientists, we describe the behavioral representations and algorithms employed by different layers of the network using a neuroethological approach to characterize motor activity relative to the rodent's behavior and goals.We find that the model uses two classes of representations which respectively encode the task-specific behavioral strategies and task-invariant behavioral kinematics.These representations are reflected in the sequential activity and population dynamics of neural subpopulations.Overall, the virtual rodent facilitates grounded collaborations between deep reinforcement learning and motor neuroscience.","We built a physical simulation of a rodent, trained it to solve a set of tasks, and analyzed the resulting networks." 
1716,Improving GANs Using Optimal Transport,"We present Optimal Transport GAN, a variant of generative adversarial nets minimizing a new metric measuring the distance between the generator distribution and the data distribution.This metric, which we call mini-batch energy distance, combines optimal transport in primal form with an energy distance defined in an adversarially learned feature space, resulting in a highly discriminative distance function with unbiased mini-batch gradients.Experimentally we show OT-GAN to be highly stable when trained with large mini-batches, and we present state-of-the-art results on several popular benchmark problems for image generation.",An extension of GANs combining optimal transport in primal form with an energy distance defined in an adversarially learned feature space. 1717,Interactive Grounded Language Acquisition and Generalization in a 2D World,"We build a virtual agent for learning language in a 2D maze-like world.The agent sees images of the surrounding environment, listens to a virtual teacher, and takes actions to receive rewards.It interactively learns the teacher’s language from scratch based on two language use cases: sentence-directed navigation and question answering.It learns simultaneously the visual representations of the world, the language, and the action control.By disentangling language grounding from other computational routines and sharing a concept detection function between language grounding and prediction, the agent reliably interpolates and extrapolates to interpret sentences that contain new word combinations or new words missing from training sentences.The new words are transferred from the answers of language prediction.Such a language ability is trained and evaluated on a population of over 1.6 million distinct sentences consisting of 119 object words, 8 color words, 9 spatial-relation words, and 50 grammatical words.The proposed model significantly outperforms five comparison methods for interpreting zero-shot sentences.In addition, we demonstrate human-interpretable intermediate outputs of the model in the appendix.",Training an agent in a 2D virtual world for grounded language acquisition and generalization. 
1718,Wasserstein Robust Reinforcement Learning,"Reinforcement learning algorithms, though successful, tend to over-fit to training environments, thereby hampering their application to the real world.This paper proposes Wasserstein Robust Reinforcement Learning, a robust reinforcement learning algorithm with significant robust performance on low- and high-dimensional control tasks.Our method formalises robust reinforcement learning as a novel min-max game with a Wasserstein constraint for a correct and convergent solver.Apart from the formulation, we also propose an efficient and scalable solver following a novel zero-order optimisation method that we believe can be useful to numerical optimisation in general.We empirically demonstrate significant gains compared to standard and robust state-of-the-art algorithms on high-dimensional MuJoCo environments",An RL algorithm that learns to be robust to changes in dynamics 1719,Strategy Synthesis in POMDPs via Game-Based Abstractions,"Partially observable Markov decision processes are a natural model for scenarios where one has to deal with incomplete knowledge and random events.Applications include, but are not limited to, robotics and motion planning.However, many relevant properties of POMDPs are either undecidable or very expensive to compute in terms of both runtime and memory consumption.In our work, we develop a game-based abstraction method that is able to deliver safe bounds and tight approximations for important sub-classes of such properties.We discuss the theoretical implications and showcase the applicability of our results on a broad spectrum of benchmarks.",This paper provides a game-based abstraction scheme to compute provably sound policies for POMDPs. 1720,Critical Percolation as a Framework to Analyze the Training of Deep Networks,"In this paper we approach two relevant deep learning topics: i) tackling of graph structured input data and ii) a better understanding and analysis of deep networks and related learning algorithms.With this in mind we focus on the topological classification of reachability in a particular subset of planar graphs.Doing so, we are able to model the topology of data while staying in Euclidean space, thus allowing its processing with standard CNN architectures.We suggest a suitable architecture for this problem and show that it can express a perfect solution to the classification task.The shape of the cost function around this solution is also derived and, remarkably, does not depend on the size of the maze in the large maze limit.Responsible for this behavior are rare events in the dataset which strongly regulate the shape of the cost function near this global minimum.We further identify an obstacle to learning in the form of poorly performing local minima in which the network chooses to ignore some of the inputs.We further support our claims with training experiments and numerical analysis of the cost function on networks with up to layers.",A toy dataset based on critical percolation in a planar graph provides an analytical window to the training dynamics of deep neural networks 1721,Out-of-Sample Extrapolation with Neuron Editing,"While neural networks can be trained to map from one specific dataset to another, they usually do not learn a generalized transformation that can extrapolate accurately outside the space of training.For instance, a generative adversarial network exclusively trained to transform images of cars from light to dark might not have the same effect on images of horses.This is because neural networks are good at generation
within the manifold of the data that they are trained on.However, generating new samples outside of the manifold or extrapolating ""out-of-sample"" is a much harder problem that has been less well studied.To address this, we introduce a technique called neuron editing that learns how neurons encode an edit for a particular transformation in a latent space.We use an autoencoder to decompose the variation within the dataset into activations of different neurons and generate transformed data by defining an editing transformation on those neurons.By performing the transformation in a trained latent space, we encode fairly complex and non-linear transformations to the data with much simpler distribution shifts to the neuron's activations.We showcase our technique on image domain/style transfer and two biological applications: removal of batch artifacts representing unwanted noise and modeling the effect of drug treatments to predict synergy between drugs.","We reframe the generation problem as one of editing existing points, and as a result extrapolate better than traditional GANs." 1722,Learning sparse relational transition models,"We present a representation for describing transition models in complex uncertain domains using relational rules. For any action, a rule selects a set of relevant objects and computes a distribution over properties of just those objects in the resulting state given their properties in the previous state. An iterative greedy algorithm is used to construct a set of deictic references that determine which objects are relevant in any given state. Feed-forward neural networks are used to learn the transition distribution on the relevant objects' properties.This strategy is demonstrated to be both more versatile and more sample efficient than learning a monolithic transition model in a simulated domain in which a robot pushes stacks of objects on a cluttered table.",A new approach that learns a representation for describing transition models in complex uncertain domains using relational rules. 1723,Graph-based motion planning networks,"Differentiable planning network architectures have been shown to be powerful in solving transfer planning tasks while possessing a simple end-to-end training feature.Many planning architectures proposed later in the literature are inspired by this design principle, in which a recursive network architecture is applied to emulate backup operations of a value iteration algorithm.However, existing frameworks can only learn and plan effectively on domains with a lattice structure, i.e. regular graphs embedded in a certain Euclidean space.In this paper, we propose a general planning network, called Graph-based Motion Planning Networks, that is able to i) learn and plan on general irregular graphs, hence ii) render existing planning network architectures special cases.The proposed GrMPN framework is invariant to task graph permutation, i.e. graph isomorphism.As a result, GrMPN possesses generalization strength and data efficiency.We demonstrate the performance of the proposed GrMPN method against other baselines on three domains: 2D mazes, path planning on irregular graphs, and motion planning.",We propose an end-to-end differentiable planning network for graphs.
This can be applicable to many motion planning problems 1724,Improved Self-Supervised Deep Image Denoising,"We describe techniques for training high-quality image denoising models that require only single instances of corrupted images as training data.Inspired by a recent technique that removes the need for supervision through image pairs by employing networks with a ""blind spot"" in the receptive field, we address two of its shortcomings: inefficient training and poor final denoising performance.This is achieved through a novel blind-spot convolutional network architecture that allows efficient self-supervised training, as well as application of Bayesian distribution prediction on output colors.Together, they bring the self-supervised model on par with fully supervised deep learning techniques in terms of both quality and training speed in the case of i.i.d. Gaussian noise.",We learn high-quality denoising using only single instances of corrupted images as training data. 1725,Reinforcement Learning on Web Interfaces using Workflow-Guided Exploration,"Reinforcement learning agents improve through trial-and-error, but when reward is sparse and the agent cannot discover successful action sequences, learning stagnates.This has been a notable problem in training deep RL agents to perform web-based tasks, such as booking flights or replying to emails, where a single mistake can ruin the entire sequence of actions.A common remedy is to ""warm-start"" the agent by pre-training it to mimic expert demonstrations, but this is prone to overfitting.Instead, we propose to constrain exploration using demonstrations.From each demonstration, we induce high-level ""workflows"" which constrain the allowable actions at each time step to be similar to those in the demonstration.Our exploration policy then learns to identify successful workflows and samples actions that satisfy these workflows.Workflows prune out bad exploration directions and accelerate the agent’s ability to discover rewards.We use our approach to train a novel neural policy designed to handle the semi-structured nature of websites, and evaluate on a suite of web tasks, including the recent World of Bits benchmark.We achieve new state-of-the-art results, and show that workflow-guided exploration improves sample efficiency over behavioral cloning by more than 100x.",We solve the sparse rewards problem on web UI tasks using exploration guided by demonstrations 1726,Embedded Deep Learning for Face Detection and Emotion Recognition with Intel© Movidius (TM) Neural Compute Stick,"Nowadays deep learning is one of the main topics in almost every field.It helped to get amazing results in a great number of tasks.The main problem is that this kind of learning and consequently neural networks, that can be defined deep, are resource intensive.They need specialized hardware to perform a computation in a reasonable time.Unfortunately, it is not sufficient to make deep learning ""usable"" in real life.Many tasks are mandatory to be as much as possible real-time.So it is needed to optimize many components such as code, algorithms, numeric accuracy and hardware, to make them ""efficient and usable"".All these optimizations can help us to produce incredibly accurate and fast learning models.",Embedded architecture for deep learning on optimized devices for face detection and emotion recognition 1727,Learning Covariate-Specific Embeddings with Tensor Decompositions,"Word embedding is a useful approach to capture co-occurrence structures in a large corpus of 
text.In addition to the text data itself, we often have additional covariates associated with individual documents in the corpus---e.g. the demographic of the author, time and venue of publication, etc.---and we would like the embedding to naturally capture the information of the covariates.In this paper, we propose a new tensor decomposition model for word embeddings with covariates.Our model jointly learns an embedding for all the words as well as a weighted diagonal transformation to model how each covariate modifies the base embedding.To obtain the specific embedding for a particular author or venue, for example, we can then simply multiply the base embedding by the transformation matrix associated with that time or venue.The main advantages of our approach are data efficiency and interpretability of the covariate transformation matrix.Our experiments demonstrate that our joint model learns substantially better embeddings conditioned on each covariate compared to the standard approach of learning a separate embedding for each covariate using only the relevant subset of data.Furthermore, our model encourages the embeddings to be topic-aligned in the sense that the dimensions have specific independent meanings.This allows our covariate-specific embeddings to be compared by topic, enabling downstream differential analysis.We empirically evaluate the benefits of our algorithm on several datasets, and demonstrate how it can be used to address many natural questions about the effects of covariates.","Using the same embedding across covariates doesn't make sense; we show that a tensor decomposition algorithm learns sparse covariate-specific embeddings and naturally separable topics jointly and data-efficiently." 1728,Principled Deep Neural Network Training through Linear Programming,"Deep Learning has received significant attention due to its impressive performance in many state-of-the-art learning tasks.Unfortunately, while very powerful, Deep Learning is not well understood theoretically and in particular only recently results for the complexity of training deep neural networks have been obtained.In this work we show that large classes of deep neural networks with various architectures, activation functions, and loss functions can be trained to near optimality with desired target accuracy using linear programming in time that is exponential in the input data and parameter space dimension and polynomial in the size of the data set; improvements of the dependence in the input dimension are known to be unlikely assuming, and improving the dependence on the parameter space dimension remains open.In particular, we obtain polynomial time algorithms for training for a given fixed network architecture.Our work applies more broadly to empirical risk minimization problems which allows us to generalize various previous results and obtain new complexity results for previously unstudied architectures in the proper learning setting.",Using linear programming we show that the computational complexity of approximate Deep Neural Network training depends polynomially on the data size for several architectures 1729,Global Approximate Inference via Local Linearisation for Temporal Gaussian Processes,"The extended Kalman filter is a classical signal processing algorithm which performs efficient approximate Bayesian inference in non-conjugate models by linearising the local measurement function, avoiding the need to compute intractable integrals when calculating the posterior.In some cases the EKF
outperforms methods which rely on cubature to solve such integrals, especially in time-critical real-world problems.The drawback of the EKF is its local nature, whereas state-of-the-art methods such as variational inference or expectation propagation are considered global approximations.We formulate power EP as a nonlinear Kalman filter, before showing that linearisation results in a globally iterated algorithm that exactly matches the EKF on the first pass through the data, and iteratively improves the linearisation on subsequent passes.An additional benefit is the ability to calculate the limit as the EP power tends to zero, which removes the instability of the EP-like algorithm.The resulting inference scheme solves non-conjugate temporal Gaussian process models in linear time, and in closed form.",We unify the extended Kalman filter (EKF) and the state space approach to power expectation propagation (PEP) by solving the intractable moment matching integrals in PEP via linearisation. This leads to a globally iterated extension of the EKF. 1730,Towards Interpretable Evaluations: A Case Study of Named Entity Recognition," With the proliferation of models for natural language processing tasks, it is even harder to understand the differences between models and their relative merits.Simply looking at differences between holistic metrics such as accuracy, BLEU, or F1 does not tell us why a particular method is better and how dataset biases influence the choices of model design. In this paper, we present a general methodology for evaluation of NLP systems and choose the task of named entity recognition as a case study, which is a core task of identifying people, places, or organizations in text.The proposed evaluation method enables us to interpret the model biases, the dataset biases, and how they affect the design of the models, identifying the strengths and weaknesses of current approaches.By making our tool available, we make it easy for future researchers to run similar analyses and drive the progress in this area.","We propose a generalized evaluation methodology to interpret model biases, dataset biases, and their correlation."
1731,Mind Your Language: Learning Visually Grounded Dialog in a Multi-Agent Setting,"The task of visually grounded dialog involves learning goal-oriented cooperative dialog between autonomous agents who exchange information about a scene through several rounds of questions and answers.We posit that requiring agents to adhere to rules of human language while also maximizing information exchange is an ill-posed problem, and observe that humans do not stray from a common language, because they are social creatures and have to communicate with many people everyday, and it is far easier to stick to a common language even at the cost of some efficiency loss.Using this as inspiration, we propose and evaluate a multi-agent dialog framework where each agent interacts with, and learns from, multiple agents, and show that this results in more relevant and coherent dialog without sacrificing task performance.",Social agents learn to talk to each other in natural language towards a goal 1732,Understanding Posterior Collapse in Generative Latent Variable Models,"Posterior collapse in Variational Autoencoders arises when the variational distribution closely matches the uninformative prior for a subset of latent variables.This paper presents a simple and intuitive explanation for posterior collapse through the analysis of linear VAEs and their direct correspondence with Probabilistic PCA.We identify how local maxima can emerge from the marginal log-likelihood of pPCA, which yields similar local maxima for the evidence lower bound.We show that training a linear VAE with variational inference recovers a uniquely identifiable global maximum corresponding to the principal component directions.We provide empirical evidence that the presence of local maxima causes posterior collapse in deep non-linear VAEs.Our findings help to explain a wide range of heuristic approaches in the literature that attempt to diminish the effect of the KL term in the ELBO to reduce posterior collapse.",We show that posterior collapse in linear VAEs is caused entirely by marginal log-likelihood (not ELBO). Experiments on deep VAEs suggest a similar phenomenon is at play. 1733,MUSE: Multi-Scale Attention Model for Sequence to Sequence Learning,"Transformers have achieved state-of-the-art results on a variety of natural language processing tasks.Despite good performance, Transformers are still weak in long sentence modeling where the global attention map is too dispersed to capture valuable information.In such case, the local/token features that are also significant to sequence modeling are omitted to some extent.To address this problem, we propose a Multi-scale attention model by concatenating attention networks with convolutional networks and position-wise feed-forward networks to explicitly capture local and token features.Considering the parameter size and computation efficiency, we re-use the feed-forward layer in the original Transformer and adopt a lightweight dynamic convolution as implementation.Experimental results show that the proposed model achieves substantial performance improvements over Transformer, especially on long sentences, and pushes the state-of-the-art from 35.6 to 36.2 on IWSLT 2014 German to English translation task, from 30.6 to 31.3 on IWSLT 2015 English to Vietnamese translation task.We also reach the state-of-art performance on WMT 2014 English to French translation dataset, with a BLEU score of 43.2.",This paper propose a new model which combines multi scale information for sequence to sequence learning. 
1734,Towards Stable and Efficient Training of Verifiably Robust Neural Networks,"Training neural networks with verifiable robustness guarantees is challenging.Several existing approaches utilize linear relaxation based neural network output bounds under perturbation, but they can slow down training by a factor of hundreds depending on the underlying network architectures.Meanwhile, interval bound propagation based training is efficient and significantly outperforms linear relaxation based methods on many tasks, yet it may suffer from stability issues since the bounds are much looser especially at the beginning of training.In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass.CROWN-IBP is computationally efficient and consistently outperforms IBP baselines on training verifiably robust neural networks.We conduct large scale experiments on MNIST and CIFAR datasets, and outperform all previous linear relaxation and bound propagation based certified defenses in L_inf robustness.Notably, we achieve 7.02% verified test error on MNIST at epsilon=0.3, and 66.94% on CIFAR-10 with epsilon=8/255.","We propose a new certified adversarial training method, CROWN-IBP, that achieves state-of-the-art robustness for L_inf norm adversarial perturbations." 1735,Interactive Shape Based Brushing Technique for Trail Sets,"Brushing techniques have a long history with the first interactive selection tools appearing in the 1990's.Since then, many additional techniques have been developed to address selection accuracy, scalability and flexibility issues.Selection is especially difficult in large datasets where many visual items tangle and create overlapping.This paper investigates a novel brushing technique which not only relies on the actual brushing location but also on the shape of the brushed area.Firstly, the user brushes the region where trajectories of interest are visible.Secondly, the shape of the brushed area is used to select similar items.Thirdly, the user can adjust the degree of similarity to filter out the requested trajectories.This technique encompasses two types of comparison metrics, the piece-wise Pearson correlation and the similarity measurement based on information geometry.We apply it to concrete scenarios with datasets from air traffic control, eye-tracking data and GPS trajectories.",Interactive technique to improve brushing in dense trajectory datasets by taking into account the shape of the brush.
1736,Compositional GAN: Learning Conditional Image Composition,"Generative Adversarial Networks can produce images of surprising complexity and realism, but are generally structured to sample from a single latent source ignoring the explicit spatial interaction between multiple entities that could be present in a scene.Capturing such complex interactions between different objects in the world, including their relative scaling, spatial layout, occlusion, or viewpoint transformation is a challenging problem.In this work, we propose to model object composition in a GAN framework as a self-consistent composition-decomposition network.Our model is conditioned on the object images from their marginal distributions and can generate a realistic image from their joint distribution.We evaluate our model through qualitative experiments and user evaluations in scenarios when either paired or unpaired examples for the individual object images and the joint scenes are given during training.Our results reveal that the learned model captures potential interactions between the two object domains given as input to output new instances of composed scene at test time in a reasonable fashion.",We develop a novel approach to model object compositionality in images in a GAN framework. 1737,FAKE CAN BE REAL IN GANS,"In order to alleviate the notorious mode collapse phenomenon in generative adversarial networks, we propose a novel training method of GANs in which certain fake samples can be reconsidered as real ones during the training process.This strategy can reduce the gradient value that the generator receives in the region where gradient exploding happens.We show that the theoretical equilibrium between the generators and discriminators can actually seldom be realized in practice.And this results in an unbalanced generated distribution that deviates from the target one, when fake datapoints overfit to real ones, which explains the non-stability of GANs.We also prove that, by penalizing the difference between discriminator outputs and considering certain fake datapoints as real for adjacent real and fake sample pairs, gradient exploding can be alleviated.Accordingly, a modified GAN training method is proposed with a more stable training process and a better generalization.Experiments on different datasets verify our theoretical analysis.", We propose a novel GAN training method by considering certain fake samples as real to alleviate mode collapse and stabilize training process. 1738,Interactive Visual Exploration of Latent Space (IVELS) for peptide auto-encoder model selection,"We present a tool for Interactive Visual Exploration of Latent Space for model selection. Evaluating generative models of discrete sequences from a continuous latent space is a challenging problem, since their optimization involves multiple competing objective terms. We introduce a model-selection pipeline to compare and filter models throughout consecutive stages of more complex and expensive metrics.We present the pipeline in an interactive visual tool to enable the exploration of the metrics, analysis of the learned latent space, and selection of the best model for a given task.
We focus specifically on the variational auto-encoder family in a case study of modeling peptide sequences, which are short sequences of amino acids.This task is especially interesting due to the presence of multiple attributes we want to model.We demonstrate how an interactive visual comparison can assist in evaluating how well an unsupervised auto-encoder meaningfully captures the attributes of interest in its latent space.",We present a visual tool to interactively explore the latent space of an auto-encoder for peptide sequences and their attributes. 1739,Discovering the mechanics of hidden neurons,"Neural networks trained through stochastic gradient descent have been around for more than 30 years, but they still escape our understanding.This paper takes an experimental approach, with a divide-and-conquer strategy in mind: we start by studying what happens in single neurons.While being the core building block of deep neural networks, the way they encode information about the inputs and how such encodings emerge is still unknown.We report experiments providing strong evidence that hidden neurons behave like binary classifiers during training and testing.During training, analysis of the gradients reveals that a neuron separates two categories of inputs, which are impressively constant across training.During testing, we show that the fuzzy, binary partition described above embeds the core information used by the network for its prediction.These observations bring to light some of the core internal mechanics of deep neural networks, and have the potential to guide the next theoretical and practical developments.",We report experiments providing strong evidence that a neuron behaves like a binary classifier during training and testing 1740,ENRICHMENT OF FEATURES FOR CLASSIFICATION USING AN OPTIMIZED LINEAR/NON-LINEAR COMBINATION OF INPUT FEATURES,"Automatic classification of objects is one of the most important tasks in engineering and data mining applications.Although using more complex and advanced classifiers can help to improve the accuracy of classification systems, it can be done by analyzing data sets and their features for a particular problem.Feature combination is the one which can improve the quality of the features.In this paper, a structure similar to Feed-Forward Neural Network is used to generate an optimized linear or non-linear combination of features for classification.Genetic Algorithm is applied to update weights and biases.Since nature of data sets and their features impact on the effectiveness of combination and classification system, linear and non-linear activation functions are used to achieve more reliable system.Experiments of several UCI data sets and using minimum distance classifier as a simple classifier indicate that proposed linear and non-linear intelligent FFNN-based feature combination can present more reliable and promising results.By using such a feature combination method, there is no need to use more powerful and complex classifier anymore.",A method for enriching and combining features to improve classification accuracy 1741,Generating Multiple Objects at Spatially Distinct Locations,"Recent improvements to Generative Adversarial Networks have made it possible to generate realistic images in high resolution based on natural language descriptions such as image captions.Furthermore, conditional GANs allow us to control the image generation process through labels or even natural language descriptions.However, fine-grained control of the image layout, i.e.
where in the image specific objects should be located, is still difficult to achieve.This is especially true for images that should contain multiple distinct objects at different spatial locations.We introduce a new approach which allows us to control the location of arbitrarily many objects within an image by adding an object pathway to both the generator and the discriminator.Our approach does not need a detailed semantic layout but only bounding boxes and the respective labels of the desired objects are needed.The object pathway focuses solely on the individual objects and is iteratively applied at the locations specified by the bounding boxes.The global pathway focuses on the image background and the general image layout.We perform experiments on the Multi-MNIST, CLEVR, and the more complex MS-COCO data set.Our experiments show that through the use of the object pathway we can control object locations within images and can model complex scenes with multiple objects at various locations.We further show that the object pathway focuses on the individual objects and learns features relevant for these, while the global pathway focuses on global image characteristics and the image background.",Extend GAN architecture to obtain control over locations and identities of multiple objects within generated images. 1742,Abstractive Dialog Summarization with Semantic Scaffolds,"The demand for abstractive dialog summary is growing in real-world applications.For example, customer service center or hospitals would like to summarize customer service interaction and doctor-patient interaction.However, few researchers explored abstractive summarization on dialogs due to the lack of suitable datasets.We propose an abstractive dialog summarization dataset based on MultiWOZ.If we directly apply previous state-of-the-art document summarization methods on dialogs, there are two significant drawbacks: the informative entities such as restaurant names are difficult to preserve, and the contents from different dialog domains are sometimes mismatched.To address these two drawbacks, we propose Scaffold Pointer Network to utilize the existing annotation on speaker role, semantic slot and dialog domain.SPNet incorporates these semantic scaffolds for dialog summarization.Since ROUGE cannot capture the two drawbacks mentioned, we also propose a new evaluation metric that considers critical informative entities in the text.On MultiWOZ, our proposed SPNet outperforms state-of-the-art abstractive summarization methods on all the automatic and human evaluation metrics.",We propose a novel end-to-end model (SPNet) to incorporate semantic scaffolds for improving abstractive dialog summarization. 
1743,Go for a Walk and Arrive at the Answer: Reasoning Over Paths in Knowledge Bases using Reinforcement Learning,"Knowledge bases, both automatically and manually constructed, are often incomplete --- many valid facts can be inferred from the KB by synthesizing existing information.A popular approach to KB completion is to infer new relations by combinatory reasoning over the information found along other paths connecting a pair of entities.Given the enormous size of KBs and the exponential number of paths, previous path-based models have considered only the problem of predicting a missing relation given two entities, or evaluating the truth of a proposed triple.Additionally, these methods have traditionally used random paths between fixed entity pairs or more recently learned to pick paths between them.We propose a new algorithm, MINERVA, which addresses the much more difficult and practical task of answering questions where the relation is known, but only one entity.Since random walks are impractical in a setting with unknown destination and combinatorially many paths from a start node, we present a neural reinforcement learning approach which learns how to navigate the graph conditioned on the input query to find predictive paths.On a comprehensive evaluation on seven knowledge base datasets, we found MINERVA to be competitive with many current state-of-the-art methods.",We present an RL agent MINERVA which learns to walk on a knowledge graph and answer queries 1744,Significance of feedforward architectural differences between the ventral visual stream and DenseNet,"There are many differences between convolutional networks and the ventral visual streams of primates.For example, standard convolutional networks lack recurrent and lateral connections, cell dynamics, etc.However, their feedforward architectures are somewhat similar to the ventral stream, and warrant a more detailed comparison.A recent study found that the feedforward architecture of the visual cortex could be closely approximated as a convolutional network, but the resulting architecture differed from widely used deep networks in several ways.The same study also found, somewhat surprisingly, that training the ventral stream of this network for object recognition resulted in poor performance.This paper examines the performance of this network in more detail.In particular, I made a number of changes to the ventral-stream-based architecture, to make it more like a DenseNet, and tested performance at each step.I chose DenseNet because it has a high BrainScore, and because it has some cortex-like architectural features such as large in-degrees and long skip connections.Most of the changes improved performance.Further work is needed to better understand these results.One possibility is that details of the ventral-stream architecture may be ill-suited to feedforward computation, simple processing units, and/or backpropagation, which could suggest differences between the way high-performance deep networks and the brain approach core object recognition.","An approximation of primate ventral stream as a convolutional network performs poorly on object recognition, and multiple architectural features contribute to this.
" 1745,Time Limits in Reinforcement Learning,"In reinforcement learning, it is common to let an agent interact with its environment for a fixed amount of time before resetting the environment and repeating the process in a series of episodes.The task that the agent has to learn can either be to maximize its performance over that fixed amount of time, or an indefinite period where the time limit is only used during training.In this paper, we investigate theoretically how time limits could effectively be handled in each of the two cases.In the first one, we argue that the terminations due to time limits are in fact part of the environment, and propose to include a notion of the remaining time as part of the agent's input.In the second case, the time limits are not part of the environment and are only used to facilitate learning.We argue that such terminations should not be treated as environmental ones and propose a method, specific to value-based algorithms, that incorporates this insight by continuing to bootstrap at the end of each partial episode.To illustrate the significance of our proposals, we perform several experiments on a range of environments from simple few-state transition graphs to complex control tasks, including novel and standard benchmark domains.Our results show that the proposed methods improve the performance and stability of existing reinforcement learning algorithms.",We consider the problem of learning optimal policies in time-limited and time-unlimited domains using time-limited interactions. 1746,Robust Learning with Jacobian Regularization,"Design of reliable systems must guarantee stability against input perturbations.In machine learning, such guarantee entails preventing overfitting and ensuring robustness of models against corruption of input data.In order to maximize stability, we analyze and develop a computationally efficient implementation of Jacobian regularization that increases classification margins of neural networks.The stabilizing effect of the Jacobian regularizer leads to significant improvements in robustness, as measured against both random and adversarial input perturbations, without severely degrading generalization properties on clean data.",We analyze and develop a computationally efficient implementation of Jacobian regularization that increases the classification margins of neural networks.
1747,Penetrating the Fog: the Path to Efficient CNN Models,"With the increasing demand to deploy convolutional neural networks on mobile platforms, the sparse kernel approach was proposed, which could save more parameters than the standard convolution while maintaining accuracy.However, despite the great potential, no prior research has pointed out how to craft a sparse kernel design with such potential, and all prior works just adopt simple combinations of existing sparse kernels such as group convolution.Meanwhile, due to the large design space, it is also impossible to try all combinations of existing sparse kernels.In this paper, we are the first in the field to consider how to craft an effective sparse kernel design by eliminating the large design space.Specifically, we present a sparse kernel scheme to illustrate how to reduce the space from three aspects.First, in terms of composition we remove designs composed of repeated layers.Second, to remove designs with large accuracy degradation, we find a unified property behind various sparse kernel designs, which could directly indicate the final accuracy.Last, we remove designs in two cases where a better parameter efficiency could be achieved.Additionally, we provide detailed efficiency analysis on the final 4 designs in our scheme.Experimental results validate the idea of our scheme by showing that our scheme is able to find designs which are more efficient in using parameters and computation with similar or higher accuracy.","We are the first in the field to show how to craft an effective sparse kernel design from three aspects: composition, performance and efficiency." 1748,MARGINALIZED AVERAGE ATTENTIONAL NETWORK FOR WEAKLY-SUPERVISED LEARNING,"In weakly-supervised temporal action localization, previous works have failed to locate dense and integral regions for each entire action due to the overestimation of the most salient regions.To alleviate this issue, we propose a marginalized average attentional network to suppress the dominant response of the most salient regions in a principled manner.The MAAN employs a novel marginalized average aggregation module and learns a set of latent discriminative probabilities in an end-to-end fashion. MAA samples multiple subsets from the video snippet features according to a set of latent discriminative probabilities and takes the expectation over all the averaged subset features. Theoretically, we prove that the MAA module with learned latent discriminative probabilities successfully reduces the difference in responses between the most salient regions and the others. Therefore, MAAN is able to generate better class activation sequences and identify dense and integral action regions in the videos. Moreover, we propose a fast algorithm to reduce the complexity of constructing MAA from to.Extensive experiments on two large-scale video datasets show that our MAAN achieves a superior performance on weakly-supervised temporal action localization.",A novel marginalized average attentional network for weakly-supervised temporal action localization 1749,"Manifold Modeling in Embedded Space: A Perspective for Interpreting ""Deep Image Prior""","Deep image prior, which utilizes a deep convolutional network structure itself as an image prior, has attracted huge attention in the computer vision community. It empirically shows the effectiveness of ConvNet structure for various image restoration applications.
However, why the DIP works so well is still unknown, and why the convolution operation is essential for image reconstruction or enhancement is not very clear.In this study, we tackle these questions.The proposed approach divides the convolution into delay-embedding and transformation, and proposes a simple, but essential, image/tensor modeling method which is closely related to dynamical systems and self-similarity.The proposed method, named manifold modeling in embedded space, is implemented by using a novel denoising-auto-encoder in combination with multi-way delay-embedding transform.In spite of its simplicity, the image/tensor completion and super-resolution results of MMES are quite similar and even competitive to DIP in our extensive experiments, and these results would help us in reinterpreting/characterizing the DIP from a perspective of low-dimensional patch-manifold prior.",We propose a new auto-encoder incorporated with multiway delay-embedding transform toward interpreting deep image prior. 1750,Differentially Private Federated Learning: A Client Level Perspective,"Federated learning is a recent advance in privacy protection.In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients.The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data.However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization.In such an attack, a client's contribution during training and information about their data set is revealed through analyzing the distributed model.We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization.The aim is to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance.Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.",Ensuring that models learned in federated fashion do not reveal a client's participation.
1751,Low Shot Learning with Untrained Neural Networks for Imaging Inverse Problems,"Employing deep neural networks as natural image priors to solve inverse problems either requires large amounts of data to sufficiently train expressive generative models or can succeed with no data via untrained neural networks.However, very few works have considered how to interpolate between these no- to high-data regimes.In particular, how can one use the availability of a small amount of data to one's advantage in solving these inverse problems and can a system's performance increase as the amount of data increases as well?In this work, we consider solving linear inverse problems when given a small number of examples of images that are drawn from the same distribution as the image of interest.Comparing to untrained neural networks that use no data, we show how one can pre-train a neural network with a few given examples to improve reconstruction results in compressed sensing and semantic image recovery problems such as colorization.Our approach leads to improved reconstruction as the amount of available data increases and is on par with fully trained generative models, while requiring less than 1% of the data needed to train a generative model.",We show how pre-training an untrained neural network with as few as 5-25 examples can improve reconstruction results in compressed sensing and semantic recovery problems like colorization. 1752,CoT: Cooperative Training for Generative Modeling of Discrete Data,"We propose Cooperative Training for training generative models that measure a tractable density for discrete data.CoT coordinately trains a generator G and an auxiliary predictive mediator M. The training target of M is to estimate a mixture density of the learned distribution G and the target distribution P, and that of G is to minimize the Jensen-Shannon divergence estimated through M. CoT achieves independent success without the necessity of pre-training via Maximum Likelihood Estimation or involving high-variance algorithms like REINFORCE.This low-variance algorithm is theoretically proved to be superior for both sample generation and likelihood prediction.We also theoretically and empirically show the superiority of CoT over most previous algorithms in terms of generative quality and diversity, predictive generalization ability and computational cost.","We proposed Cooperative Training, a novel training algorithm for generative modeling of discrete data."
1753,Perception-Driven Curiosity with Bayesian Surprise,"Intrinsic rewards in reinforcement learning provide a powerful algorithmic capability for agents to learn how to interact with their environment in a task-generic way.However, increased incentives for motivation can come at the cost of increased fragility to stochasticity.We introduce a method for computing an intrinsic reward for curiosity using metrics derived from sampling a latent variable model used to estimate dynamics.Ultimately, an estimate of the conditional probability of observed states is used as our intrinsic reward for curiosity.In our experiments, a video game agent uses our model to autonomously learn how to play Atari games using our curiosity reward in combination with extrinsic rewards from the game to achieve improved performance on games with sparse extrinsic rewards.When stochasticity is introduced in the environment, our method still demonstrates improved performance over the baseline.",We introduce a method for computing an intrinsic reward for curiosity using metrics derived from sampling a latent variable model used to estimate dynamics. 1754,Understanding Composition of Word Embeddings via Tensor Decomposition,"Word embedding is a powerful tool in natural language processing.In this paper we consider the problem of word embedding composition --- given vector representations of two words, compute a vector for the entire phrase.We give a generative model that can capture specific syntactic relations between words.Under our model, we prove that the correlations between three words form a tensor that has an approximate low rank Tucker decomposition.The result of the Tucker decomposition gives the word embeddings as well as a core tensor, which can be used to produce better compositions of the word embeddings.We also complement our theoretical results with experiments that verify our assumptions, and demonstrate the effectiveness of the new composition method.","We present a generative model for compositional word embeddings that captures syntactic relations, and provide empirical verification and evaluation." 1755,Learning and Planning with a Semantic Model,"Building deep reinforcement learning agents that can generalize and adapt to unseen environments remains a fundamental challenge for AI.This paper describes progress on this challenge in the context of man-made environments, which are visually diverse but contain intrinsic semantic regularities.We propose a hybrid model-based and model-free approach, LEArning and Planning with Semantics, consisting of a multi-target sub-policy that acts on visual inputs, and a Bayesian model over semantic structures.When placed in an unseen environment, the agent plans with the semantic model to make high-level decisions, proposes the next sub-target for the sub-policy to execute, and updates the semantic model based on new observations.We perform experiments in visual navigation tasks using House3D, a 3D environment that contains diverse human-designed indoor scenes with real-world objects.LEAPS outperforms strong baselines that do not explicitly plan using the semantic content.",We propose a hybrid model-based & model-free approach using semantic information to improve DRL generalization in man-made environments.
1756,A Data-Driven and Distributed Approach to Sparse Signal Representation and Recovery,"In this paper, we focus on two challenges which offset the promise of sparse signal representation, sensing, and recovery.First, real-world signals can seldom be described as perfectly sparse vectors in a known basis, and traditionally used random measurement schemes are seldom optimal for sensing them.Second, existing signal recovery algorithms are usually not fast enough to make them applicable to real-time problems.In this paper, we address these two challenges by presenting a novel framework based on deep learning.For the first challenge, we cast the problem of finding informative measurements by using a maximum likelihood formulation and show how we can build a data-driven dimensionality reduction protocol for sensing signals using convolutional architectures.For the second challenge, we discuss and analyze a novel parallelization scheme and show it significantly speeds-up the signal recovery process.We demonstrate the significant improvement our method obtains over competing methods through a series of experiments.",We use deep learning techniques to solve the sparse signal representation and recovery problem. 1757,Dream to Control: Learning Behaviors by Latent Imagination,"To select effective actions in complex environments, intelligent agents need to generalize from past experience.World models can represent knowledge about the environment to facilitate such generalization.While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways for deriving behaviors from them.We present Dreamer, a reinforcement learning agent that solves long-horizon tasks purely by latent imagination.We efficiently learn behaviors by backpropagating analytic gradients of learned state values through trajectories imagined in the compact state space of a learned world model.On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance.","We present Dreamer, an agent that learns long-horizon behaviors purely by latent imagination using analytic value gradients." 
1758,MULTIPOLAR: Multi-Source Policy Aggregation for Transfer Reinforcement Learning between Diverse Environmental Dynamics,"Transfer reinforcement learning aims at improving learning efficiency of an agent by exploiting knowledge from other source agents trained on relevant tasks.However, it remains challenging to transfer knowledge between different environmental dynamics without having access to the source environments.In this work, we explore a new challenge in transfer RL, where only a set of source policies collected under unknown diverse dynamics is available for learning a target task efficiently.To address this problem, the proposed approach, MULTI-source POLicy AggRegation, comprises two key techniques.We learn to aggregate the actions provided by the source policies adaptively to maximize the target task performance.Meanwhile, we learn an auxiliary network that predicts residuals around the aggregated actions, which ensures the target policy's expressiveness even when some of the source policies perform poorly.We demonstrated the effectiveness of MULTIPOLAR through an extensive experimental evaluation across six simulated environments ranging from classic control problems to challenging robotics simulations, under both continuous and discrete action spaces.","We propose MULTIPOLAR, a transfer RL method that leverages a set of source policies collected under unknown diverse environmental dynamics to efficiently learn a target policy in another dynamics." 1759,Large-Scale Study of Curiosity-Driven Learning,"Reinforcement learning algorithms rely on carefully engineered rewards from the environment that are extrinsic to the agent.However, annotating each environment with hand-designed, dense rewards is difficult and not scalable, motivating the need for developing reward functions that are intrinsic to the agent.Curiosity is one such intrinsic reward function, which uses prediction error as a reward signal.In this paper: We perform the first large-scale study of purely curiosity-driven learning, i.e., without any extrinsic reward, across standard benchmark environments, including the Atari game suite.Our results show surprisingly good performance as well as a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many games. We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better. We demonstrate limitations of the prediction-based rewards in stochastic setups.Game-play videos and code are at https://doubleblindsupplementary.github.io/large-curiosity/.","An agent trained only with curiosity, and no extrinsic reward, does surprisingly well on 54 popular environments, including the suite of Atari games, Mario etc." 1760,Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness,"This work provides theoretical and empirical evidence that invariance-inducing regularizers can increase predictive accuracy for worst-case spatial transformations. Evaluated on these adversarially transformed examples, we demonstrate that adding regularization on top of standard or adversarial training reduces the relative error by 20% for CIFAR10 without increasing the computational cost.
This outperforms handcrafted networks that were explicitly designed to be spatial-equivariant.Furthermore, we observe for SVHN, known to have inherent variance in orientation, that robust training also improves standard accuracy on the test set.",for spatial transformations robust minimizer also minimizes standard accuracy; invariance-inducing regularization leads to better robustness than specialized architectures 1761,Order Learning and Its Application to Age Estimation,"We propose order learning to determine the order graph of classes, representing ranks or priorities, and classify an object instance into one of the classes.To this end, we design a pairwise comparator to categorize the relationship between two instances into one of three cases: one instance is `greater than,' `similar to,' or `smaller than' the other.Then, by comparing an input instance with reference instances and maximizing the consistency among the comparison results, the class of the input can be estimated reliably.We apply order learning to develop a facial age estimator, which provides the state-of-the-art performance.Moreover, the performance is further improved when the order graph is divided into disjoint chains using gender and ethnic group information or even in an unsupervised manner.",The notion of order learning is proposed and it is applied to regression problems in computer vision 1762,Topology of deep neural networks,"We study how the topology of a data set comprising two components representing two classes of objects in a binary classification problem changes as it passes through the layers of a well-trained neural network, i.e., one with perfect accuracy on the training set and a generalization error of less than 1%.The goal is to shed light on two well-known mysteries in deep neural networks: a nonsmooth activation function like ReLU outperforms a smooth one like hyperbolic tangent; successful neural network architectures rely on having many layers, despite the fact that a shallow network is able to approximate any function arbitrarily well.We performed extensive experiments on persistent homology of a range of point cloud data sets.The results consistently demonstrate the following: Neural networks operate by changing topology, transforming a topologically complicated data set into a topologically simple one as it passes through the layers.No matter how complicated the topology of the data set we begin with, when passed through a well-trained neural network, the Betti numbers of both components invariably reduce to their lowest possible values: zeroth Betti number is one and all higher Betti numbers are zero.Furthermore, the reduction in Betti numbers is significantly faster for ReLU activation compared to hyperbolic tangent activation --- consistent with the fact that the former define nonhomeomorphic maps whereas the latter define homeomorphic maps. Lastly, shallow and deep networks process the same data set differently --- a shallow network operates mainly through changing geometry and changes topology only in its final layers, a deep network spreads topological changes more evenly across all its layers.",We show that neural networks operate by changing topology of a data set and explore how architectural choices affect this change.
1763,"A Closer Look at Deep Learning Heuristics: Learning rate restarts, Warmup and Distillation","The convergence rate and final performance of common deep learning models have significantly benefited from recently proposed heuristics such as learning rate schedules, knowledge distillation, skip connections and normalization layers.In the absence of theoretical underpinnings, controlled experiments aimed at explaining the efficacy of these strategies can aid our understanding of deep learning landscapes and the training dynamics.Existing approaches for empirical analysis rely on tools of linear interpolation and visualizations with dimensionality reduction, each with their limitations.Instead, we revisit the empirical analysis of heuristics through the lens of recently proposed methods for loss surface and representation analysis, viz. mode connectivity and canonical correlation analysis, and hypothesize reasons why the heuristics succeed.In particular, we explore knowledge distillation and learning rate heuristics of restarts and warmup using mode connectivity and CCA. Our empirical analysis suggests that: the reasons often quoted for the success of cosine annealing are not evidenced in practice; that the effect of learning rate warmup is to prevent the deeper layers from creating training instability; and that the latent knowledge shared by the teacher is primarily disbursed in the deeper layers.","We use empirical tools of mode connectivity and SVCCA to investigate neural network training heuristics of learning rate restarts, warmup and knowledge distillation." 1764,Discrete-Valued Neural Networks Using Variational Inference,"The increasing demand for neural networks being employed on embedded devices has led to plenty of research investigating methods for training low precision NNs.While most methods involve a quantization step, we propose a principled Bayesian approach where we first infer a distribution over a discrete weight space from which we subsequently derive hardware-friendly low precision NNs.To this end, we introduce a probabilistic forward pass to approximate the intractable variational objective that allows us to optimize over discrete-valued weight distributions for NNs with sign activation functions.In our experiments, we show that our model achieves state of the art performance on several real world data sets.In addition, the resulting models exhibit a substantial amount of sparsity that can be utilized to further reduce the computational costs for inference.",Variational Inference for infering a discrete distribution from which a low-precision neural network is derived 1765,Tensor Graph Convolutional Networks for Prediction on Dynamic Graphs,"Many irregular domains such as social networks, financial transactions, neuron connections, and natural language structures are represented as graphs.In recent years, a variety of graph neural networks have been successfully applied for representation learning and prediction on such graphs.However, in many of the applications, the underlying graph changes over time and existing GNNs are inadequate for handling such dynamic graphs.In this paper we propose a novel technique for learning embeddings of dynamic graphs based on a tensor algebra framework.Our method extends the popular graph convolutional network for learning representations of dynamic graphs using the recently proposed tensor M-product technique.Theoretical results that establish the connection between the proposed tensor approach and spectral convolution of tensors 
are developed.Numerical experiments on real datasets demonstrate the usefulness of the proposed method for an edge classification task on dynamic graphs.",We propose a novel tensor based method for graph convolutional networks on dynamic graphs 1766,CrystalGAN: Learning to Discover Crystallographic Structures with Generative Adversarial Networks,"Our main motivation is to propose an efficient approach to generate novel multi-element stable chemical compounds that can be used in real world applications.This task can be formulated as a combinatorial problem, and it takes human experts many hours to construct and evaluate new data.Unsupervised learning methods such as Generative Adversarial Networks can be efficiently used to produce new data. Cross-domain Generative Adversarial Networks were reported to achieve exciting results in image processing applications.However, in the domain of materials science, there is a need to synthesize data with higher order complexity compared to observed samples, and the state-of-the-art cross-domain GANs can not be adapted directly.In this contribution, we propose a novel GAN called CrystalGAN which generates new chemically stable crystallographic structures with increased domain complexity.We introduce an original architecture, we provide the corresponding loss functions, and we show that the CrystalGAN generates very reasonable data.We illustrate the efficiency of the proposed method on a real original problem of novel hydrides discovery that can be further used in development of hydrogen storage materials.","Generating new chemical materials using novel cross-domain GANs." 1767,Data Enrichment: Multi-task Learning in High Dimension with Theoretical Guarantees,"Given samples from a group of related regression tasks, a data-enriched model describes observations by common and per-group individual parameters.In the high-dimensional regime, each parameter has its own structure such as sparsity or group sparsity.In this paper, we consider the general form of data enrichment where data comes in a fixed but arbitrary number of tasks and any convex function, e.g., norm, can characterize the structure of both common and individual parameters. We propose an estimator for the high-dimensional data enriched model and investigate its statistical properties. We delineate the sample complexity of our estimator and provide high probability non-asymptotic bound for estimation error of all parameters under a condition weaker than the state-of-the-art.We propose an iterative estimation algorithm with a geometric convergence rate.Overall, we present a first thorough statistical and computational analysis of inference in the data enriched model.",We provide an estimator and an estimation algorithm for a class of multi-task regression problems and provide statistical and computational analysis. 1768,Autonomous Vehicle Fleet Coordination With Deep Reinforcement Learning,"Autonomous vehicles are becoming more common in city transportation. Companies will begin to find a need to teach these vehicles smart city fleet coordination. Currently, simulation based modeling along with hand coded rules dictate the decision making of these autonomous vehicles.We believe that complex intelligent behavior can be learned by these agents through Reinforcement Learning.In this paper, we discuss our work for solving this system by adapting the Deep Q-Learning model to the multi-agent setting.
Our approach applies deep reinforcement learning by combining convolutional neural networks with DQN to teach agents to fulfill customer demand in an environment that is partially observable to them.We also demonstrate how to utilize transfer learning to teach agents to balance multiple objectives such as navigating to a charging station when its energy level is low.The two evaluations presented show that we are successfully able to teach agents cooperation policies while balancing multiple objectives.",Utilized Deep Reinforcement Learning to teach agents ride-sharing fleet-style coordination. 1769,Diffusion Scattering Transforms on Graphs,"Stability is a key aspect of data analysis.In many applications, the natural notion of stability is geometric, as illustrated for example in computer vision.Scattering transforms construct deep convolutional representations which are certified stable to input deformations.This stability to deformations can be interpreted as stability with respect to changes in the metric structure of the domain.In this work, we show that scattering transforms can be generalized to non-Euclidean domains using diffusion wavelets, while preserving a notion of stability with respect to metric changes in the domain, measured with diffusion maps.The resulting representation is stable to metric perturbations of the domain while being able to capture high-frequency information, akin to the Euclidean Scattering.",Stability of scattering transform representations of graph data to deformations of the underlying graph support. 1770,Lifelong Learning with Dynamically Expandable Networks,"We propose a novel deep network architecture for lifelong learning which we refer to as Dynamically Expandable Network, that can dynamically decide its network capacity as it trains on a sequence of tasks, to learn a compact overlapping knowledge sharing structure among tasks.DEN is efficiently trained in an online manner by performing selective retraining, dynamically expands network capacity upon arrival of each task with only the necessary number of units, and effectively prevents semantic drift by splitting/duplicating units and timestamping them.We validate DEN on multiple public datasets in lifelong learning scenarios, on which it not only significantly outperforms existing lifelong learning methods for deep networks, but also achieves the same level of performance as the batch model with substantially fewer parameters.",We propose a novel deep network architecture that can dynamically decide its network capacity as it trains on a lifelong learning scenario. 1771,On learning visual odometry errors,"This paper fosters the idea that deep learning methods can be paired with classical visual odometry pipelines to improve their accuracy and to produce uncertainty models for their estimations.We show that the biases inherent to the visual odometry process can be faithfully learnt and compensated for, and that a learning architecture associated with a probabilistic loss function can jointly estimate a full covariance matrix of the residual errors, defining a heteroscedastic error model.Experiments on autonomous driving image sequences and micro aerial vehicle camera acquisitions assess the possibility to concurrently improve visual odometry and estimate an error associated with its outputs.",This paper discusses different methods of pairing VO with deep learning and proposes a simultaneous prediction of corrections and uncertainty.
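The visual odometry entry above ("On learning visual odometry errors") trains a network to predict both a correction and a covariance over residual errors through a probabilistic loss. Below is a minimal sketch of such a heteroscedastic Gaussian negative log-likelihood, simplified to a diagonal covariance parameterized by predicted log-variances; the paper estimates a full covariance matrix, so the function and names here are illustrative assumptions rather than the authors' exact loss.

```python
import numpy as np

def heteroscedastic_nll(pred, log_var, target):
    """Diagonal-covariance Gaussian negative log-likelihood.

    pred, log_var, target: arrays of shape (batch, dim).
    The network predicts both the correction `pred` and a per-dimension
    log-variance `log_var`; a large predicted variance down-weights the
    squared error but is penalized by the log-variance term.
    """
    residual = target - pred
    nll = 0.5 * (log_var + residual ** 2 / np.exp(log_var) + np.log(2 * np.pi))
    return nll.sum(axis=-1).mean()

# Toy usage: perfect mean prediction with unit predicted variance.
pred = np.zeros((4, 6))
target = np.zeros((4, 6))
log_var = np.zeros((4, 6))
print(heteroscedastic_nll(pred, log_var, target))  # about 0.919 per dimension
```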
1772,DEEP DENSITY NETWORKS AND UNCERTAINTY IN RECOMMENDER SYSTEMS,"Building robust online content recommendation systems requires learning complex interactions between user preferences and content features.The field has evolved rapidly in recent years from traditional multi-arm bandit and collaborative filtering techniques, with new methods integrating Deep Learning models that enable capturing non-linear feature interactions.Despite progress, the dynamic nature of online recommendations still poses great challenges, such as finding the delicate balance between exploration and exploitation.In this paper we provide a novel method, Deep Density Networks, which deconvolves measurement and data uncertainty and predicts probability densities of CTR, enabling us to perform more efficient exploration of the feature space.We show the usefulness of using DDN online in a real world content recommendation system that serves billions of recommendations per day, and present online and offline results to evaluate the benefit of using DDN.","We have introduced Deep Density Network, a unified DNN model to estimate uncertainty for exploration/exploitation in recommendation systems." 1773,Learning Twitter User Sentiments on Climate Change with Limited Labeled Data,"While it is well-documented that climate change accepters and deniers have become increasingly polarized in the United States over time, there has been no large-scale examination of whether these individuals are prone to changing their opinions as a result of natural external occurrences.On the sub-population of Twitter users, we examine whether climate change sentiment changes in response to five separate natural disasters occurring in the U.S. in 2018.We begin by showing that tweets can be classified with over 75% accuracy as either accepting or denying climate change when using our methodology to compensate for limited labelled data; results are robust across several machine learning models and yield geographic-level results in line with prior research.We then apply RNNs to conduct a cohort-level analysis showing that the 2018 hurricanes yielded a statistically significant increase in average tweet sentiment affirming climate change.However, this effect does not hold for the 2018 blizzard and wildfires studied, implying that Twitter users' opinions on climate change are fairly ingrained on this subset of natural disasters.",We train RNNs on famous Twitter users to determine whether the general Twitter population is more likely to believe in climate change after a natural disaster.
1774,Towards Provable Control for Unknown Linear Dynamical Systems,"We study the control of symmetric linear dynamical systems with unknown dynamics and a hidden state.Using a recent spectral filtering technique for concisely representing such systems in a linear basis, we formulate optimal control in this setting as a convex program.This approach eliminates the need to solve the non-convex problem of explicit identification of the system and its latent state, and allows for provable optimality guarantees for the control signal.We give the first efficient algorithm for finding the optimal control signal with an arbitrary time horizon T, with sample complexity polynomial only in log and other relevant parameters.","Using a novel representation of symmetric linear dynamical systems with a latent state, we formulate optimal control as a convex program, giving the first polynomial-time algorithm that solves optimal control with sample complexity only polylogarithmic in the time horizon." 1775,PolyGAN: High-Order Polynomial Generators,"Generative Adversarial Networks have become the gold standard when it comes to learning generative models for high-dimensional distributions.Since their advent, numerous variations of GANs have been introduced in the literature, primarily focusing on utilization of novel loss functions, optimization/regularization strategies and network architectures.In this paper, we turn our attention to the generator and investigate the use of high-order polynomials as an alternative class of universal function approximators.Concretely, we propose PolyGAN, where we model the data generator by means of a high-order polynomial whose unknown parameters are naturally represented by high-order tensors.We introduce two tensor decompositions that significantly reduce the number of parameters and show how they can be efficiently implemented by hierarchical neural networks that only employ linear/convolutional blocks.We exhibit for the first time that by using our approach a GAN generator can approximate the data distribution without using any activation functions.Thorough experimental evaluation on both synthetic and real data demonstrates the merits of PolyGAN against the state of the art.",We model the data generator (in GAN) by means of a high-order polynomial represented by high-order tensors. 
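The PolyGAN entry above models the data generator as a high-order polynomial whose coefficient tensors are factorized to keep the parameter count manageable. Below is a minimal second-order sketch with a rank-R, CP-style factorization of the quadratic term and no activation functions; the dimensions, names, and the specific decomposition are illustrative assumptions, not the paper's hierarchical architecture.

```python
import numpy as np

def poly_generator(z, b, W, A, U, V):
    """Degree-2 polynomial generator with a rank-R factored quadratic term.

    z: latent code, shape (k,)
    b: bias, shape (d,);  W: linear map, shape (d, k)
    A: output factors, shape (d, R);  U, V: input factors, shape (R, k)
    Output: b + W z + sum_r A[:, r] * (U[r] . z) * (V[r] . z)
    """
    linear = b + W @ z
    quadratic = A @ ((U @ z) * (V @ z))   # factored second-order term
    return linear + quadratic

rng = np.random.default_rng(0)
k, d, R = 8, 32, 4
z = rng.normal(size=k)
x = poly_generator(z, rng.normal(size=d), rng.normal(size=(d, k)),
                   rng.normal(size=(d, R)), rng.normal(size=(R, k)),
                   rng.normal(size=(R, k)))
print(x.shape)  # (32,)
```

Note that the sketch uses only linear maps and elementwise products, mirroring the abstract's claim that such generators need no activation functions.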
1776,Deep Learning is Robust to Massive Label Noise,"Deep neural networks trained on large supervised datasets have led to impressive results in recent years.However, since well-annotated datasets can be prohibitively expensive and time-consuming to collect, recent work has explored the use of larger but noisy datasets that can be more easily obtained.In this paper, we investigate the behavior of deep neural networks on training sets with massively noisy labels.We show on multiple datasets such as MNIST, CIFAR-10 and ImageNet that successful learning is possible even with an essentially arbitrary amount of noise.For example, on MNIST we find that accuracy of above 90 percent is still attainable even when the dataset has been diluted with 100 noisy examples for each clean example.Such behavior holds across multiple patterns of label noise, even when noisy labels are biased towards confusing classes.Further, we show how the required dataset size for successful training increases with higher label noise.Finally, we present simple actionable techniques for improving learning in the regime of high label noise.",We show that deep neural networks are able to learn from data that has been diluted by an arbitrary amount of noise. 1777,Meta-Learning for Low-Resource Neural Machine Translation,"In this paper, we propose to extend the recently introduced model-agnostic meta-learning algorithm for low-resource neural machine translation.We frame low-resource translation as a meta-learning problem, and we learn to adapt to low-resource languages based on multilingual high-resource language tasks.We use the universal lexical representation to overcome the input-output mismatch across different languages.We evaluate the proposed meta-learning strategy using eighteen European languages as source tasks and five diverse languages as target tasks.We show that the proposed approach significantly outperforms the multilingual, transfer learning based approach and enables us to train a competitive NMT system with only a fraction of training examples.For instance, the proposed approach can achieve as high as 22.04 BLEU on Romanian-English WMT’16 by seeing only 16,000 translated words.",we propose a meta-learning approach for low-resource neural machine translation that can rapidly learn to translate on a new language 1778,UaiNets: From Unsupervised to Active Deep Anomaly Detection,"This work presents a method for active anomaly detection which can be built upon existing deep learning solutions for unsupervised anomaly detection.We show that a prior needs to be assumed on what the anomalies are, in order to have performance guarantees in unsupervised anomaly detection.We argue that active anomaly detection has, in practice, the same cost as unsupervised anomaly detection but with the possibility of much better results.To solve this problem, we present a new layer that can be attached to any deep learning model designed for unsupervised anomaly detection to transform it into an active method, presenting results on both synthetic and real anomaly detection datasets.",A method for active anomaly detection. We present a new layer that can be attached to any deep learning model designed for unsupervised anomaly detection to transform it into an active method.
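The label-noise entry above ("Deep Learning is Robust to Massive Label Noise") dilutes a clean training set with many uniformly mislabeled examples per clean example. A small sketch of that dilution protocol is below; reusing clean inputs as the source of noisy examples and the specific array shapes are assumptions for illustration, and the paper also studies other noise patterns.

```python
import numpy as np

def dilute_with_noisy_labels(X, y, num_classes, alpha, seed=0):
    """Return a training set with `alpha` uniformly-mislabeled copies per clean example."""
    rng = np.random.default_rng(seed)
    n = len(X)
    idx = rng.integers(0, n, size=alpha * n)            # inputs reused for noisy examples
    noisy_labels = rng.integers(0, num_classes, size=alpha * n)
    X_out = np.concatenate([X, X[idx]], axis=0)
    y_out = np.concatenate([y, noisy_labels], axis=0)
    return X_out, y_out

X = np.random.rand(100, 784)                            # stand-in for flattened MNIST images
y = np.random.randint(0, 10, size=100)
X_noisy, y_noisy = dilute_with_noisy_labels(X, y, num_classes=10, alpha=100)
print(X_noisy.shape)  # (10100, 784): 100 noisy examples per clean one
```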
1779,Auto-Encoding Explanatory Examples,"In this paper, we ask for the main factors that determine a classifier's decision making and uncover such factors by studying latent codes produced by auto-encoding frameworks.To deliver an explanation of a classifier's behaviour, we propose a method that provides a series of examples highlighting semantic differences between the classifier's decisions.We generate these examples through interpolations in latent space.We introduce and formalize the notion of a semantic stochastic path, as a suitable stochastic process defined in feature space via latent code interpolations.We then introduce the concept of semantic Lagrangians as a way to incorporate the desired classifier's behaviour and find that the solution of the associated variational problem allows for highlighting differences in the classifier decision.Very importantly, within our framework the classifier is used as a black-box, and only its evaluation is required.",We generate examples to explain a classifier decision via interpolations in latent space. The variational autoencoder cost is extended with a functional of the classifier over the generated example path in data space. 1780,Incremental Learning of Action Models for Planning,"The soundness and optimality of a plan depend on the correctness of the domain model.In real-world applications, specifying complete domain models is difficult as the interactions between the agent and its environment can be quite complex.We propose a framework to learn a PPDDL representation of the model incrementally over multiple planning problems using only experiences from the current planning problem, which suits non-stationary environments.We introduce the novel concept of reliability as an intrinsic motivation for reinforcement learning, and as a means of learning from failure to prevent repeated instances of similar failures.Our motivation is to improve both learning efficiency and goal-directedness.We evaluate our work with experimental results for three planning domains.",Introduce an approach to allow agents to learn PPDDL action models incrementally over multiple planning problems under the framework of reinforcement learning. 1781,Towards Simplicity in Deep Reinforcement Learning: Streamlined Off-Policy Learning,"The field of Deep Reinforcement Learning has recently seen a surge in the popularity of maximum entropy reinforcement learning algorithms. Their popularity stems from the intuitive interpretation of the maximum entropy objective and their superior sample efficiency on standard benchmarks.In this paper, we seek to understand the primary contribution of the entropy term to the performance of maximum entropy algorithms.For the Mujoco benchmark, we demonstrate that the entropy term in Soft Actor Critic principally addresses the bounded nature of the action spaces.With this insight, we propose a simple normalization scheme which allows a streamlined algorithm without entropy maximization to match the performance of SAC.Our experimental results demonstrate a need to revisit the benefits of entropy regularization in DRL.We also propose a simple non-uniform sampling method for selecting transitions from the replay buffer during training. We further show that the streamlined algorithm with the simple non-uniform sampling scheme outperforms SAC and achieves state-of-the-art performance on challenging continuous control tasks.",We propose a new DRL off-policy algorithm achieving state-of-the-art performance.
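The streamlined off-policy entry above proposes a simple non-uniform scheme for sampling transitions from the replay buffer. The abstract does not spell out the exact rule, so the sketch below shows a generic recency-weighted sampler only as one plausible illustration of non-uniform replay sampling; the weighting, temperature, and names are all assumptions of this example.

```python
import numpy as np

def sample_recency_weighted(buffer_size, batch_size, temperature=2.0, rng=None):
    """Sample transition indices with probability increasing in recency.

    Index buffer_size - 1 is the newest transition. `temperature` > 0 controls
    how strongly recent transitions are favored (an assumed weighting, not the
    paper's exact rule).
    """
    rng = rng or np.random.default_rng()
    ranks = np.arange(1, buffer_size + 1)          # oldest -> newest
    probs = ranks.astype(float) ** temperature
    probs /= probs.sum()
    return rng.choice(buffer_size, size=batch_size, p=probs)

idx = sample_recency_weighted(buffer_size=100_000, batch_size=256)
print(idx.min(), idx.max())  # indices skewed toward the newest transitions
```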
1782,Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering,"Very recently, it has become a popular approach to answer open-domain questions by first searching for question-related passages and then applying reading comprehension models to extract answers.Existing works usually extract answers from single passages independently, and thus do not fully make use of the multiple searched passages, especially for questions that require several pieces of evidence, which can appear in different passages, to be answered.The above observations raise the problem of evidence aggregation from multiple passages.In this paper, we deal with this problem as answer re-ranking.Specifically, based on the answer candidates generated from the existing state-of-the-art QA model, we propose two different re-ranking methods, strength-based and coverage-based re-rankers, which make use of the aggregated evidence from different passages to help entail the ground-truth answer for the question.Our model achieves state-of-the-art results on three public open-domain QA datasets, Quasar-T, SearchQA and the open-domain version of TriviaQA, with about 8% improvement on the former two datasets.",We propose a method that can make use of information from multiple passages for open-domain QA. 1783,MODiR: Multi-Objective Dimensionality Reduction for Joint Data Visualisation,"Many large text collections exhibit graph structures, either inherent to the content itself or encoded in the metadata of the individual documents.Example graphs extracted from document collections are co-author networks, citation networks, or named-entity-cooccurrence networks.Furthermore, social networks can be extracted from email corpora, tweets, or social media.When it comes to visualising these large corpora, either the textual content or the network graph is used.In this paper, we propose to incorporate both, text and graph, to not only visualise the semantic information encoded in the documents' content but also the relationships expressed by the inherent network structure.To this end, we introduce a novel algorithm based on multi-objective optimisation to jointly position embedded documents and graph nodes in a two-dimensional landscape.We illustrate the effectiveness of our approach with real-world datasets and show that we can capture the semantics of large document collections better than other visualisations based on either the content or the network information.","Dimensionality reduction algorithm to visualise text with network information, for example an email corpus or co-authorships." 1784,Identifying Bias in AI using Simulation,"Machine learned models exhibit bias, often because the datasets used to train them are biased.This presents a serious problem for the deployment of such technology, as the resulting models might perform poorly on populations that are minorities within the training set and ultimately present higher risks to them.We propose to use high-fidelity computer simulations to interrogate and diagnose biases within ML classifiers.We present a framework that leverages Bayesian parameter search to efficiently characterize the high dimensional feature space and more quickly identify weakness in performance.We apply our approach to an example domain, face detection, and show that it can be used to help identify demographic biases in commercial face application programming interfaces.",We present a framework that leverages high-fidelity computer simulations to interrogate and diagnose biases within ML classifiers.
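The evidence-aggregation entry above re-ranks answer candidates using evidence pooled across passages. Below is a toy count-and-score aggregation in the spirit of a strength-based re-ranker; the normalization of answer strings, the scoring rule, and the data layout are assumptions for illustration, not the paper's model.

```python
from collections import defaultdict

def strength_rerank(candidates):
    """Aggregate per-passage answer candidates into a single ranking.

    candidates: list of (answer_text, reader_score) pairs, one per passage.
    Each answer's strength is its number of supporting passages plus the sum
    of its reader scores, so answers backed by multiple passages move up.
    """
    counts = defaultdict(int)
    scores = defaultdict(float)
    for answer, score in candidates:
        key = answer.strip().lower()      # crude normalization of answer strings
        counts[key] += 1
        scores[key] += score
    return sorted(counts, key=lambda a: (counts[a], scores[a]), reverse=True)

cands = [("Barack Obama", 0.71), ("barack obama", 0.55), ("Joe Biden", 0.80)]
print(strength_rerank(cands))  # ['barack obama', 'joe biden']
```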
1785,Sample-Based Point Cloud Decoder Networks,"Point clouds are a flexible and ubiquitous way to represent 3D objects with arbitrary resolution and precision.Previous work has shown that adapting encoder networks to match the semantics of their input point clouds can significantly improve their effectiveness over naive feedforward alternatives.However, the vast majority of work on point-cloud decoders are still based on fully-connected networks that map shape representations to a fixed number of output points.In this work, we investigate decoder architectures that more closely match the semantics of variable sized point clouds.Specifically, we study sample-based point-cloud decoders that map a shape representation to a point feature distribution, allowing an arbitrary number of sampled features to be transformed into individual output points.We develop three sample-based decoder architectures and compare their performance to each other and show their improved effectiveness over feedforward architectures.In addition, we investigate the learned distributions to gain insight into the output transformation.Our work is available as an extensible software platform to reproduce these results and serve as a baseline for future work.",We present and evaluate sampling-based point cloud decoders that outperform the baseline MLP approach by better matching the semantics of point clouds. 1786,Reinforced Genetic Algorithm Learning for Optimizing Computation Graphs,"We present a deep reinforcement learning approach to minimizing the execution cost of neural network computation graphs in an optimizing compiler.Unlike earlier learning-based works that require training the optimizer on the same graph to be optimized, we propose a learning approach that trains an optimizer offline and then generalizes to previously unseen graphs without further training.This allows our approach to produce high-quality execution decisions on real-world TensorFlow graphs in seconds instead of hours.We consider two optimization tasks for computation graphs: minimizing running time and peak memory usage.In comparison to an extensive set of baselines, our approach achieves significant improvements over classical and other learning-based methods on these two tasks.","We use deep RL to learn a policy that directs the search of a genetic algorithm to better optimize the execution cost of computation graphs, and show improved results on real-world TensorFlow graphs." 
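The point-cloud entry above ("Sample-Based Point Cloud Decoder Networks") decodes a shape code into an arbitrary number of points by transforming sampled features. A minimal PyTorch sketch of one such sample-based decoder follows; the layer sizes, the Gaussian noise source, and conditioning by concatenation are assumptions of this sketch, and the paper compares three different architectures of this kind.

```python
import torch
import torch.nn as nn

class SampleBasedDecoder(nn.Module):
    """Map a shape code plus per-point noise samples to 3D points."""

    def __init__(self, code_dim=128, noise_dim=16, hidden=256):
        super().__init__()
        self.noise_dim = noise_dim
        self.mlp = nn.Sequential(
            nn.Linear(code_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, code, num_points):
        # One noise sample per output point, all conditioned on the same shape code.
        noise = torch.randn(num_points, self.noise_dim, device=code.device)
        tiled = code.unsqueeze(0).expand(num_points, -1)
        return self.mlp(torch.cat([tiled, noise], dim=-1))   # (num_points, 3)

decoder = SampleBasedDecoder()
points = decoder(torch.randn(128), num_points=2048)
print(points.shape)  # torch.Size([2048, 3])
```

Because the number of output points is just the number of noise samples drawn, the same decoder can emit clouds of any resolution, which is the property the entry contrasts with fixed-size fully-connected decoders.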
1787,Visual Representation Learning with 3D View-Contrastive Inverse Graphics Networks,"Predictive coding theories suggest that the brain learns by predicting observations at various levels of abstraction.One of the most basic prediction tasks is view prediction: how would a given scene look from an alternative viewpoint?Humans excel at this task.Our ability to imagine and fill in missing visual information is tightly coupled with perception: we feel as if we see the world in 3 dimensions, while in fact, information from only the front surface of the world hits our retinas.This paper explores the connection between view-predictive representation learning and its role in the development of 3D visual recognition.We propose inverse graphics networks, which take as input 2.5D video streams captured by a moving camera, and map to stable 3D feature maps of the scene, by disentangling the scene content from the motion of the camera.The model can also project its 3D feature maps to novel viewpoints, to predict and match against target views.We propose contrastive prediction losses that can handle stochasticity of the visual input and can scale view-predictive learning to more photorealistic scenes than those considered in previous works.We show that the proposed model learns 3D visual representations useful for semi-supervised learning of 3D object detectors, and unsupervised learning of 3D moving object detectors, by estimating motion of the inferred 3D feature maps in videos of dynamic scenes.To the best of our knowledge, this is the first work that empirically shows view prediction to be a useful and scalable self-supervised task beneficial to 3D object detection. ","We show that with the right loss and architecture, view-predictive learning improves 3D object detection" 1788,Neural TTS Stylization with Adversarial and Collaborative Games,"The modeling of style when synthesizing natural human speech from text has been the focus of significant attention.Some state-of-the-art approaches train an encoder-decoder network on paired text and audio samples (x_txt, x_aud) by encouraging its output to reconstruct x_aud.The synthesized audio waveform is expected to contain the verbal content of x_txt and the auditory style of x_aud.Unfortunately, modeling style in TTS is somewhat under-determined and training models with a reconstruction loss alone is insufficient to disentangle content and style from other factors of variation.In this work, we introduce an end-to-end TTS model that offers enhanced content-style disentanglement ability and controllability.We achieve this by combining a pairwise training procedure, an adversarial game, and a collaborative game into one training scheme.The adversarial game concentrates the true data distribution, and the collaborative game minimizes the distance between real samples and generated samples in both the original space and the latent space.As a result, the proposed model delivers a highly controllable generator, and a disentangled representation.Benefiting from the separate modeling of style and content, our model can generate human-fidelity speech that satisfies the desired style conditions.Our model achieves state-of-the-art results across multiple tasks, including style transfer, emotion modeling, and identity transfer.",a generative adversarial network for style modeling in a text-to-speech system 1789,Over-parameterization Improves Generalization in the XOR Detection Problem,"Empirical evidence suggests that neural networks with ReLU activations generalize better with
over-parameterization.However, there is currently no theoretical analysis that explains this observation.In this work, we study a simplified learning task with over-parameterized convolutional networks that empirically exhibits the same qualitative phenomenon. For this setting, we provide a theoretical analysis of the optimization and generalization performance of gradient descent.Specifically, we prove data-dependent sample complexity bounds which show that over-parameterization improves the generalization performance of gradient descent.",We show in a simplified learning task that over-parameterization improves generalization of a convnet that is trained with gradient descent. 1790,When Agents Talk Back: Rebellious Explanations,"As the area of Explainable AI, and Explainable AI Planning, matures, the ability for agents to generate and curate explanations will likewise grow.We propose a new challenge area in the form of rebellious and deceptive explanations.We discuss how these explanations might be generated and then briefly discuss evaluation criteria.",Position paper proposing rebellious and deceptive explanations for agents. 1791,Learning Latent Superstructures in Variational Autoencoders for Deep Multidimensional Clustering,"We investigate a variant of variational autoencoders where there is a superstructure of discrete latent variables on top of the latent features.In general, our superstructure is a tree structure of multiple super latent variables and it is automatically learned from data.When there is only one latent variable in the superstructure, our model reduces to one that assumes the latent features to be generated from a Gaussian mixture model.We call our model the latent tree variational autoencoder.Whereas previous deep learning methods for clustering produce only one partition of data, LTVAE produces multiple partitions of data, each being given by one super latent variable.This is desirable because high dimensional data usually have many different natural facets and can be meaningfully partitioned in multiple ways.",We investigate a variant of variational autoencoders where there is a superstructure of discrete latent variables on top of the latent features. 
1792,Learning Latent Representations for Inverse Dynamics using Generalized Experiences,"Many practical robot locomotion tasks require agents to use control policies that can be parameterized by goals.Popular deep reinforcement learning approaches in this direction involve learning goal-conditioned policies or value functions, or Inverse Dynamics Models.IDMs map an agent’s current state and desired goal to the required actions.We show that the key to achieving good performance with IDMs lies in learning the information shared between equivalent experiences, so that they can be generalized to unseen scenarios.We design a training process that guides the learning of latent representations to encode this shared information.Using a limited number of environment interactions, our agent is able to efficiently navigate to arbitrary points in the goal space.We demonstrate the effectiveness of our approach in high-dimensional locomotion environments such as the Mujoco Ant, PyBullet Humanoid, and PyBullet Minitaur.We provide quantitative and qualitative results to show that our method clearly outperforms competing baseline approaches.","We show that the key to achieving good performance with IDMs lies in learning latent representations to encode the information shared between equivalent experiences, so that they can be generalized to unseen scenarios." 1793,Linearly Constrained Weights: Resolving the Vanishing Gradient Problem by Reducing Angle Bias,"In this paper, we first identify angle bias, a simple but remarkable phenomenon that causes the vanishing gradient problem in a multilayer perceptron with sigmoid activation functions.We then propose linearly constrained weights (LCW) to reduce the angle bias in a neural network, by training the network under the constraint that the sum of the elements of each weight vector is zero.A reparameterization technique is presented to efficiently train a model with LCW by embedding the constraints on weight vectors into the structure of the network.Interestingly, batch normalization can be viewed as a mechanism to correct angle bias.Preliminary experiments show that LCW helps train a 100-layered MLP more efficiently than does batch normalization.",We identify angle bias that causes the vanishing gradient problem in deep nets and propose an efficient method to reduce the bias. 1794,Efficient Probabilistic Logic Reasoning with Graph Neural Networks,"Markov Logic Networks, which elegantly combine logic rules and probabilistic graphical models, can be used to address many knowledge graph problems.However, inference in MLN is computationally intensive, making the industrial-scale application of MLN very difficult.In recent years, graph neural networks have emerged as efficient and effective tools for large-scale graph problems.Nevertheless, GNNs do not explicitly incorporate prior logic rules into the models, and may require many labeled examples for a target task.In this paper, we explore the combination of MLNs and GNNs, and use graph neural networks for variational inference in MLN.We propose a GNN variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.Our extensive experiments on several benchmark datasets demonstrate that ExpressGNN leads to effective and efficient probabilistic logic reasoning.",We employ graph neural networks in the variational EM framework for efficient inference and learning of Markov Logic Networks.
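The LCW entry above trains networks under the constraint that each weight vector sums to zero, via a reparameterization embedded in the network structure. Below is a small PyTorch sketch of one way to realize such a reparameterization (centering an unconstrained parameter at forward time); the specific layer and initialization are illustrative assumptions, not necessarily the authors' exact construction.

```python
import torch
import torch.nn as nn

class LinearLCW(nn.Module):
    """Linear layer whose weight rows each sum to zero (linearly constrained weights).

    The free parameter `v` is unconstrained; subtracting each row's mean at
    forward time enforces sum_j w[i, j] = 0, so gradient updates on `v` keep
    the effective weights inside the constraint set automatically.
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        self.v = nn.Parameter(torch.randn(out_features, in_features) * 0.05)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        w = self.v - self.v.mean(dim=1, keepdim=True)   # zero-sum rows
        return x @ w.t() + self.bias

layer = LinearLCW(100, 50)
print(layer(torch.randn(8, 100)).shape)                             # torch.Size([8, 50])
print((layer.v - layer.v.mean(1, keepdim=True)).sum(1).abs().max()) # ~0, constraint holds
```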
1795,Meta-Reinforcement Learning for Adaptive Autonomous Driving,"Reinforcement learning methods achieved major advances in multiple tasks surpassing human performance.However, most of RL strategies show a certain degree of weakness and may become computationally intractable when dealing with high-dimensional and non-stationary environments.In this paper, we build a meta-reinforcement learning method embedding an adaptive neural network controller for efficient policy iteration in changing task conditions.Our main goal is to extend RL application to the challenging task of urban autonomous driving in CARLA simulator.",A meta-reinforcement learning approach embedding a neural network controller applied to autonomous driving with Carla simulator. 1796,Learning Representations in Reinforcement Learning: an Information Bottleneck Approach,"The information bottleneck principle is an elegant and useful approach to representation learning.In this paper, we investigate the problem of representation learning in the context of reinforcement learning using the information bottleneck framework, aiming at improving the sample efficiency of the learning algorithms.We analytically derive the optimal conditional distribution of the representation, and provide a variational lower bound.Then, we maximize this lower bound with the Stein variational gradient method.We incorporate this framework in the advantageous actor critic algorithm and the proximal policy optimization algorithm.Our experimental results show that our framework can improve the sample efficiency of vanilla A2C and PPO significantly.Finally, we study the information-bottleneck perspective in deep RL with the algorithm called mutual information neural estimation.We experimentally verify that the information extraction-compression process also exists in deep RL and our framework is capable of accelerating this process.We also analyze the relationship between MINE and our method, through this relationship, we theoretically derive an algorithm to optimize our IB framework without constructing the lower bound.",Derive an information bottleneck framework in reinforcement learning and some simple relevant theories and tools. 1797,Modular Continual Learning in a Unified Visual Environment," A core aspect of human intelligence is the ability to learn new tasks quickly and switch between them flexibly.Here, we describe a modular continual reinforcement learning paradigm inspired by these abilities.We first introduce a visual interaction environment that allows many types of tasks to be unified in a single framework.We then describe a reward map prediction scheme that learns new tasks robustly in the very large state and action spaces required by such an environment.We investigate how properties of module architecture influence efficiency of task learning, showing that a module motif incorporating specific design principles significantly outperforms more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance.Finally, we present a meta-controller architecture for task switching based on a dynamic neural voting scheme, which allows new modules to use information learned from previously-seen tasks to substantially improve their own learning efficiency.",We propose a neural module approach to continual learning using a unified visual environment with a large action space. 
1798,Thinking like a machine — generating visual rationales through latent space optimization,"Interpretability and small labelled datasets are key issues in the practical application of deep learning, particularly in areas such as medicine.In this paper, we present a semi-supervised technique that addresses both these issues simultaneously.We learn dense representations from large unlabelled image datasets, then use those representations to both learn classifiers from small labeled sets and generate visual rationales explaining the predictions.Using chest radiography diagnosis as a motivating application, we show our method has good generalization ability by learning to represent our chest radiography dataset while training a classifier on a separate set from a different institution.Our method identifies heart failure and other thoracic diseases.For each prediction, we generate visual rationales for positive classifications by optimizing a latent representation to minimize the probability of disease while constrained by a similarity measure in image space.Decoding the resultant latent representation produces an image without apparent disease.The difference between the original and the altered image forms an interpretable visual rationale for the algorithm's prediction.Our method simultaneously produces visual rationales that compare favourably to previous techniques and a classifier that outperforms the current state-of-the-art.",We propose a method of using GANs to generate high quality visual rationales to help explain model predictions. 1799,Improving Evolutionary Strategies with Generative Neural Networks,"Evolutionary Strategies are a popular family of black-box zeroth-order optimization algorithms which rely on search distributions to efficiently optimize a large variety of objective functions.This paper investigates the potential benefits of using highly flexible search distributions in ES algorithms, in contrast to standard ones.We model such distributions with Generative Neural Networks and introduce a new ES algorithm that leverages their expressiveness to accelerate the stochastic search.Because it acts as a plug-in, our approach allows augmenting virtually any standard ES algorithm with flexible search distributions.We demonstrate the empirical advantages of this method on a diversity of objective functions.",We propose a new algorithm leveraging the expressiveness of Generative Neural Networks to improve Evolutionary Strategies algorithms.
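The entry above ("Improving Evolutionary Strategies with Generative Neural Networks") plugs generative-network search distributions into Evolutionary Strategies. For context, here is a bare-bones isotropic-Gaussian ES loop of the kind such a flexible distribution would replace; the population size, step sizes, and quadratic test objective are arbitrary choices made only for this sketch.

```python
import numpy as np

def gaussian_es(objective, dim, iters=300, pop=64, sigma=0.1, lr=0.05, seed=0):
    """Minimize `objective` with a fixed isotropic-Gaussian search distribution."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=dim)
    for _ in range(iters):
        eps = rng.normal(size=(pop, dim))                    # search-distribution samples
        losses = np.array([objective(theta + sigma * e) for e in eps])
        centered = losses - losses.mean()                    # baseline-subtracted returns
        grad = (centered[:, None] * eps).mean(axis=0) / sigma
        theta -= lr * grad                                   # descend the estimated gradient
    return theta

sphere = lambda x: float(np.sum(x ** 2))
print(np.round(gaussian_es(sphere, dim=10), 2))              # entries close to zero
```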
1800,Trace norm regularization and faster inference for embedded speech recognition RNNs,"We propose and evaluate new techniques for compressing and speeding up dense matrix multiplications as found in the fully connected and recurrent layers of neural networks for embedded large vocabulary continuous speech recognition.For compression, we introduce and study a trace norm regularization technique for training low rank factored versions of matrix multiplications.Compared to standard low rank training, we show that our method leads to good accuracy versus number of parameters trade-offs and can be used to speed up training of large models.For speedup, we enable faster inference on ARM processors through new open sourced kernels optimized for small batch sizes, resulting in 3x to 7x speed ups over the widely used gemmlowp library.Beyond LVCSR, we expect our techniques and kernels to be more generally applicable to embedded neural networks with large fully connected or recurrent layers.",We compress and speed up speech recognition models on embedded devices through a trace norm regularization technique and optimized kernels. 1801,Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets,"Training activation quantized neural networks involves minimizing a piecewise constant training loss whose gradient vanishes almost everywhere, which is undesirable for the standard back-propagation or chain rule.An empirical way around this issue is to use a straight-through estimator in the backward pass only, so that the ""gradient"" through the modified chain rule becomes non-trivial.Since this unusual ""gradient"" is certainly not the gradient of the loss function, the following question arises: why does searching in its negative direction minimize the training loss?In this paper, we provide the theoretical justification of the concept of STE by answering this question.We consider the problem of learning a two-linear-layer network with binarized ReLU activation and Gaussian input data.We shall refer to the unusual ""gradient"" given by the STE-modified chain rule as coarse gradient.The choice of STE is not unique.We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient, and its negation is a descent direction for minimizing the population loss.We further show the associated coarse gradient descent algorithm converges to a critical point of the population loss minimization problem. Moreover, we show that a poor choice of STE leads to instability of the training algorithm near certain local minima, which is verified with CIFAR-10 experiments.",We provide theoretical justification for the concept of the straight-through estimator.
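The STE entry above analyzes the "coarse gradient" obtained by replacing the zero derivative of a binarized activation with a surrogate in the backward pass. Below is a standard clipped-identity straight-through estimator in PyTorch as an illustration; the paper studies which surrogate choices provably yield a descent direction, and the particular clipping used here is one common choice rather than necessarily the one the authors recommend.

```python
import torch

class BinarizedReLU(torch.autograd.Function):
    """Forward: hard threshold 1{x > 0}.  Backward: clipped-identity surrogate."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass the gradient through only where the pre-activation lies in [0, 1];
        # the true derivative of the step function is zero almost everywhere.
        return grad_output * ((x >= 0) & (x <= 1)).float()

x = torch.randn(5, requires_grad=True)
y = BinarizedReLU.apply(x).sum()
y.backward()
print(x.grad)   # nonzero only for entries of x inside [0, 1]
```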
1802,GumbelClip: Off-Policy Actor-Critic Using Experience Replay,"This paper presents GumbelClip, a set of modifications to the actor-critic algorithm, for off-policy reinforcement learning.GumbelClip uses the concepts of truncated importance sampling along with additive noise to produce a loss function enabling the use of off-policy samples.The modified algorithm achieves an increase in convergence speed and sample efficiency compared to on-policy algorithms and is competitive with existing off-policy policy gradient methods while being significantly simpler to implement.The effectiveness of GumbelClip is demonstrated against existing on-policy and off-policy actor-critic algorithms on a subset of the Atari domain.","With a set of modifications, under 10 LOC, to A2C you get an off-policy actor-critic that outperforms A2C and performs similarly to ACER. The modifications are large batchsizes, aggressive clamping, and policy ""forcing"" with gumbel noise." 1803,Skip-Thought GAN: Generating Text through Adversarial Training using Skip-Thought Vectors,"In the past few years, various advancements have been made in generative models owing to the formulation of Generative Adversarial Networks.GANs have been shown to perform exceedingly well on a wide variety of tasks pertaining to image generation and style transfer.In the field of Natural Language Processing, word embeddings such as word2vec and GLoVe are state-of-the-art methods for applying neural network models on textual data.Attempts have been made for utilizing GANs with word embeddings for text generation.This work presents an approach to text generation using Skip-Thought sentence embeddings in conjunction with GANs based on gradient penalty functions and f-measures.The results of using sentence embeddings with GANs for generating text conditioned on input information are comparable to the approaches where word embeddings are used.",Generating text using sentence embeddings from Skip-Thought Vectors with the help of Generative Adversarial Networks. 1804,Neural Subgraph Isomorphism Counting,"In this paper, we study a new graph learning problem: learning to count subgraph isomorphisms.Although the learning based approach is inexact, we are able to generalize to count large patterns and data graphs in polynomial time compared to the exponential time of the original NP-complete problem.Different from other traditional graph learning problems such as node classification and link prediction, subgraph isomorphism counting requires more global inference to oversee the whole graph.To tackle this problem, we propose a dynamic intermedium attention memory network which augments different representation learning architectures and iteratively attends pattern and target data graphs to memorize different subgraph isomorphisms for the global counting.We develop both small graphs and large graphs sets to evaluate different models.Experimental results show that learning based subgraph isomorphism counting can help reduce the time complexity with acceptable accuracy.Our DIAMNet can further improve existing representation learning models for this more global problem.","In this paper, we study a new graph learning problem: learning to count subgraph isomorphisms." 
1805,Zero-Shot Policy Transfer with Disentangled Attention,"Domain adaptation is an open problem in deep reinforcement learning.Often, agents are asked to perform in environments where data is difficult to obtain.In such settings, agents are trained in similar environments, such as simulators, and are then transferred to the original environment.The gap between visual observations of the source and target environments often causes the agent to fail in the target environment.We present a new RL agent, SADALA.SADALA first learns a compressed state representation.It then jointly learns to ignore distracting features and solve the task presented.SADALA's separation of important and unimportant visual features leads to robust domain transfer.SADALA outperforms both prior disentangled-representation based RL and domain randomization approaches across RL environments.",We present an agent that uses a beta-vae to extract visual features and an attention mechanism to ignore irrelevant features from visual observations to enable robust transfer between visual domains. 1806,Robustness Verification for Transformers,"Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding the behavior of a given model and for obtaining safety guarantees.However, previous methods are usually limited to relatively simple neural networks.In this paper, we consider the robustness verification problem for Transformers.Transformers have complex self-attention layers that pose many challenges for verification, including cross-nonlinearity and cross-position dependency, which have not been discussed in previous work.We resolve these challenges and develop the first verification algorithm for Transformers.The certified robustness bounds computed by our method are significantly tighter than those by naive Interval Bound Propagation.These bounds also shed light on interpreting Transformers as they consistently reflect the importance of words in sentiment analysis.",We propose the first algorithm for verifying the robustness of Transformers. 1807,Generalization Puzzles in Deep Networks,"In the last few years, deep learning has been tremendously successful in many applications.However, our theoretical understanding of deep learning, and thus the ability to provide principled improvements, seems to lag behind.A theoretical puzzle concerns the ability of deep networks to predict well despite their intriguing apparent lack of generalization: their classification accuracy on the training set is not a proxy for their performance on a test set.How is it possible that training performance is independent of testing performance?Do deep networks indeed require a drastically new theory of generalization?Or are there measurements based on the training data that are predictive of the network performance on future data?Here we show that when performance is measured appropriately, the training performance is in fact predictive of expected performance, consistently with classical machine learning theory.","Contrary to previous beliefs, the training performance of deep networks, when measured appropriately, is predictive of test performance, consistent with classical machine learning theory."
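The verification entry above ("Robustness Verification for Transformers") compares its bounds against naive Interval Bound Propagation. For context, here is a sketch of IBP through a single affine layer followed by ReLU, the building block such baselines repeat layer by layer; the shapes and the tiny example network are illustrative assumptions.

```python
import numpy as np

def ibp_affine_relu(lower, upper, W, b):
    """Propagate an axis-aligned box [lower, upper] through x -> relu(W x + b)."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius                       # worst case over the input box
    new_lower = np.maximum(out_center - out_radius, 0.0)  # ReLU is monotone, so clip both ends
    new_upper = np.maximum(out_center + out_radius, 0.0)
    return new_lower, new_upper

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
lo, hi = ibp_affine_relu(x - 0.1, x + 0.1, W, b)          # eps = 0.1 perturbation ball
nominal = np.maximum(W @ x + b, 0.0)
print(lo <= nominal, nominal <= hi)                       # the clean output lies inside the bounds
```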
1808,"Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control","We propose a ""plan online and learn offline"" framework for the setting where an agent, with an internal model, needs to continually act and learn in the world.Our work builds on the synergistic relationship between local model-based control, global value function learning, and exploration.We study how local trajectory optimization can cope with approximation errors in the value function, and can stabilize and accelerate value function learning.Conversely, we also study how approximate value functions can help reduce the planning horizon and allow for better policies beyond local solutions.Finally, we also demonstrate how trajectory optimization can be used to perform temporally coordinated exploration in conjunction with estimating uncertainty in value function approximation.This exploration is critical for fast and stable learning of the value function.Combining these components enable solutions to complex control tasks, like humanoid locomotion and dexterous in-hand manipulation, in the equivalent of a few minutes of experience in the real world.",We propose a framework that incorporates planning for efficient exploration and learning in complex environments. 1809,On the Generalization Effects of DenseNet Model Structures ,"Modern neural network architectures take advantage of increasingly deeper layers, and various advances in their structure to achieve better performance.While traditional explicit regularization techniques like dropout, weight decay, and data augmentation are still being used in these new models, little about the regularization and generalization effects of these new structures have been studied.""Besides being deeper than their predecessors, could newer architectures like ResNet and DenseNet also benefit from their structures' implicit regularization properties?"", ""In this work, we investigate the skip connection's effect on network's generalization features."", 'Through experiments, we show that certain neural network architectures contribute to their generalization abilities.""Specifically, we study the effect that low-level features have on generalization performance when they are introduced to deeper layers in DenseNet, ResNet as well as networks with 'skip connections'."", 'We show that these low-level representations do help with generalization in multiple settings when both the quality and quantity of training data is decreased.","Our paper analyses the tremendous representational power of networks especially with 'skip connections', which may be used as a method for better generalization." 
1810,Bridging ELBO objective and MMD,"One of the challenges in training generative models such as the variational autoencoder is avoiding posterior collapse.When the generator has too much capacity, it is prone to ignoring the latent code.This problem is exacerbated when the dataset is small, and the latent dimension is high.The root of the problem is the ELBO objective, specifically the Kullback–Leibler divergence term in the objective function.This paper proposes a new objective function to replace the KL term with one that emulates the maximum mean discrepancy objective.It also introduces a new technique, named latent clipping, that is used to control the distance between samples in latent space.A probabilistic autoencoder model, named-VAE, is designed and trained on MNIST and MNIST Fashion datasets, using the new objective function, and is shown to outperform models trained with the ELBO and-VAE objective.The-VAE is less prone to posterior collapse, and can generate reconstructions and new samples of good quality.Latent representations learned by-VAE are shown to be good and can be used for downstream tasks such as classification. ",This paper proposes a new objective function to replace the KL term with one that emulates the maximum mean discrepancy (MMD) objective. 1811,Input Complexity and Out-of-distribution Detection with Likelihood-based Generative Models,"Likelihood-based generative models are a promising resource to detect out-of-distribution inputs which could compromise the robustness or reliability of a machine learning system.However, likelihoods derived from such models have been shown to be problematic for detecting certain types of inputs that significantly differ from training data.In this paper, we pose that this problem is due to the excessive influence that input complexity has on generative models' likelihoods.We report a set of experiments supporting this hypothesis, and use an estimate of input complexity to derive an efficient and parameter-free OOD score, which can be seen as a likelihood-ratio, akin to Bayesian model comparison.We find this score to perform comparably to, or even better than, existing OOD detection approaches under a wide range of data sets, models, model sizes, and complexity estimates.","We pose that generative models' likelihoods are excessively influenced by the input's complexity, and propose a way to compensate for it when detecting out-of-distribution inputs" 1812,On the Convergence of Adam and Beyond," Several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as RMSProp, Adam, Adadelta, Nadam are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients.In many applications, e.g.
learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution.We show that one cause for such failures is the exponential moving average used in the algorithms.We provide an explicit example of a simple convex optimization setting where Adam does not converge to the optimal solution, and describe the precise problems with the previous analysis of the Adam algorithm.Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with long-term memory of past gradients, and we propose new variants of the Adam algorithm which not only fix the convergence issues but often also lead to improved empirical performance.","We investigate the convergence of popular optimization algorithms like Adam and RMSProp, and propose new variants of these methods which provably converge to the optimal solution in convex settings. " 1813,Abstract Diagrammatic Reasoning with Multiplex Graph Networks,"Abstract reasoning, particularly in the visual domain, is a complex human ability, but it remains a challenging problem for artificial neural learning systems.In this work we propose MXGNet, a multilayer graph neural network for multi-panel diagrammatic reasoning tasks.MXGNet combines three powerful concepts, namely, object-level representation, graph neural networks and multiplex graphs, for solving visual reasoning tasks.MXGNet first extracts object-level representations for each element in all panels of the diagrams, and then forms a multi-layer multiplex graph capturing multiple relations between objects across different diagram panels.MXGNet summarises the multiple graphs extracted from the diagrams of the task, and uses this summarisation to pick the most probable answer from the given candidates.We have tested MXGNet on two types of diagrammatic reasoning tasks, namely Diagram Syllogisms and Raven Progressive Matrices.For an Euler Diagram Syllogism task MXGNet achieves state-of-the-art accuracy of 99.8%. For PGM and RAVEN, two comprehensive datasets for RPM reasoning, MXGNet outperforms the state-of-the-art models by a considerable margin.","MXGNet is a multilayer, multiplex graph based architecture which achieves good performance on various diagrammatic reasoning tasks."
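The Adam-convergence entry above fixes the exponential-moving-average issue by giving the second-moment estimate long-term memory. Below is a minimal sketch of that style of update (AMSGrad-like: keep the running maximum of the second-moment estimate); the hyperparameters and the omission of bias correction are assumptions of this sketch rather than a faithful reproduction of the paper's variants.

```python
import numpy as np

def amsgrad_step(theta, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One parameter update with long-term memory of past squared gradients."""
    m, v, v_max = state
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    v_max = np.maximum(v_max, v)            # never let the effective step size grow back
    theta = theta - lr * m / (np.sqrt(v_max) + eps)
    return theta, (m, v, v_max)

theta = np.array([1.0])
state = (np.zeros(1), np.zeros(1), np.zeros(1))
for _ in range(5000):                        # minimize f(x) = x^2, gradient is 2x
    theta, state = amsgrad_step(theta, 2 * theta, state)
print(theta)                                 # decays toward 0
```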
1814,Semantic Structure Extraction for Spreadsheet Tables with a Multi-task Learning Architecture,"Semantic structure extraction for spreadsheets includes detecting table regions, recognizing structural components and classifying cell types.Automatic semantic structure extraction is key to automatic data transformation from various table structures into canonical schema so as to enable data analysis and knowledge discovery.However, these tasks are challenged by the diverse table structures and the spatially correlated semantics on cell grids.To learn spatial correlations and capture semantics on spreadsheets, we have developed a novel learning-based framework for spreadsheet semantic structure extraction.First, we propose a multi-task framework that learns table region, structural components and cell types jointly; second, we leverage the advances of the recent language model to capture semantics in each cell value; third, we build a large human-labeled dataset with broad coverage of table structures.Our evaluation shows that our proposed multi-task framework is highly effective and outperforms the results of training each task separately.","We propose a novel multi-task framework that learns table detection, semantic component recognition and cell type classification for spreadsheet tables with promising results." 1815,Towards Holistic and Automatic Evaluation of Open-Domain Dialogue Generation,"Open-domain dialogue generation has gained increasing attention in Natural Language Processing.Comparing these methods requires a holistic means of dialogue evaluation.Human ratings are deemed the gold standard.As human evaluation is inefficient and costly, an automated substitute is desirable.In this paper, we propose holistic evaluation metrics which capture both the quality and diversity of dialogues.Our metrics consist of GPT-2-based context coherence between sentences in a dialogue, GPT-2-based fluency in phrasing, and n-gram-based diversity in responses to augmented queries.The empirical validity of our metrics is demonstrated by strong correlation with human judgments.We provide the associated code, datasets and human ratings.",We propose automatic metrics to holistically evaluate open-dialogue generation and they strongly correlate with human evaluation. 1816,Learning Graph Convolution Filters from Data Manifold,"The Convolutional Neural Network has gained tremendous success in computer vision tasks with its outstanding ability to capture the local latent features.Recently, there has been an increasing interest in extending CNNs to the general spatial domain.Although various types of graph convolution and geometric convolution methods have been proposed, their connections to traditional 2D-convolution are not well-understood.In this paper, we show that depthwise separable convolution is a path to unify the two kinds of convolution methods in one mathematical view, based on which we derive a novel Depthwise Separable Graph Convolution that subsumes existing graph convolution methods as special cases of our formulation.Experiments show that the proposed approach consistently outperforms other graph convolution and geometric convolution baselines on benchmark datasets in multiple domains.","We devise a novel Depthwise Separable Graph Convolution (DSGC) for the generic spatial domain data, which is highly compatible with depthwise separable convolution."
1817,Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset,"Generating musical audio directly with neural networks is notoriously difficult because it requires coherently modeling structure at many different timescales.Fortunately, most music is also highly structured and can be represented as discrete note events played on musical instruments.Herein, we show that by using notes as an intermediate representation, we can train a suite of models capable of transcribing, composing, and synthesizing audio waveforms with coherent musical structure on timescales spanning six orders of magnitude, a process we call Wave2Midi2Wave.This large advance in the state of the art is enabled by our release of the new MAESTRO dataset, composed of over 172 hours of virtuosic piano performances captured with fine alignment between note labels and audio waveforms.The networks and the dataset together present a promising approach toward creating new expressive and interpretable neural models of music.","We train a suite of models capable of transcribing, composing, and synthesizing audio waveforms with coherent musical structure, enabled by the new MAESTRO dataset." 1818,Challenges in Computing and Optimizing Upper Bounds of Marginal Likelihood based on Chi-Square Divergences,"Variational inference based on chi-square divergence minimization (CHIVI) provides a way to approximate a model's posterior while obtaining an upper bound on the marginal likelihood.However, in practice CHIVI relies on Monte Carlo estimates of an upper bound objective that at modest sample sizes are not guaranteed to be true bounds on the marginal likelihood.This paper provides an empirical study of CHIVI performance on a series of synthetic inference tasks.We show that CHIVI is far more sensitive to initialization than classic VI based on KL minimization, often needs a very large number of samples, and may not be a reliable upper bound.We also suggest possible ways to detect and alleviate some of these pathologies, including diagnostic bounds and initialization strategies.","An empirical study of variational inference based on chi-square divergence minimization, showing that minimizing the CUBO is trickier than maximizing the ELBO" 1819,Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks,"It has been widely recognized that adversarial examples can be easily crafted to fool deep networks, which mainly stems from the locally non-linear behavior near input examples.Applying mixup in training provides an effective mechanism to improve generalization performance and model robustness against adversarial perturbations, which introduces the globally linear behavior in-between training examples.However, in previous work, the mixup-trained models only passively defend adversarial attacks in inference by directly classifying the inputs, where the induced global linearity is not well exploited.Namely, given the locality of the adversarial perturbations, it would be more efficient to actively break the locality via the globality of the model predictions.Inspired by simple geometric intuition, we develop an inference principle, named mixup inference, for mixup-trained models.MI mixes up the input with other random clean samples, which can shrink and transfer the equivalent perturbation if the input is adversarial.Our experiments on CIFAR-10 and CIFAR-100 demonstrate that MI can further improve the adversarial robustness for the models trained by mixup and its variants.",We exploit the global linearity of the
mixup-trained models in inference to break the locality of the adversarial perturbations. 1820,BERT Goes to Law School: Quantifying the Competitive Advantage of Access to Large Legal Corpora in Contract Understanding,"Fine-tuning language models, such as BERT, on domain-specific corpora has proven to be valuable in domains like scientific papers and biomedical text. In this paper, we show that fine-tuning BERT on legal documents similarly provides valuable improvements on NLP tasks in the legal domain. Demonstrating this outcome is significant for analyzing commercial agreements, because obtaining large legal corpora is challenging due to their confidential nature. As such, we show that having access to large legal corpora is a competitive advantage for commercial applications and academic research on analyzing contracts.","Fine-tuning BERT on legal corpora provides marginal, but valuable, improvements on NLP tasks in the legal domain." 1821,A Probabilistic Formulation of Unsupervised Text Style Transfer,"We present a deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques. Our probabilistic approach models non-parallel data from two domains as a partially observed parallel corpus. By hypothesizing a parallel latent sequence that generates each observed sequence, our model learns to transform sequences from one domain to another in a completely unsupervised fashion. In contrast with traditional generative sequence models, our model makes few assumptions about the data it generates: it uses a recurrent language model as a prior and an encoder-decoder as a transduction distribution. While computation of the marginal data likelihood is intractable in this model class, we show that amortized variational inference admits a practical surrogate. Further, by drawing connections between our variational objective and other recent unsupervised style transfer and machine translation techniques, we show how our probabilistic view can unify some known non-generative objectives such as backtranslation and adversarial loss. Finally, we demonstrate the effectiveness of our method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related language translation. Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes. Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art.","We formulate a probabilistic latent sequence model to tackle unsupervised text style transfer, and show its effectiveness across a suite of unsupervised text style transfer tasks.
" 1822,Compressed Sensing and Overparametrized Networks: Overfitting Peaks in a Model of Misparametrized Sparse Regression in the Interpolation Limit,"Current practice in machine learning is to employ deep nets in an overparametrized limit, with the nominal number of parameters typically exceeding the number of measurements.This resembles the situation in compressed sensing, or in sparse regression with penalty terms, and provides a theoretical avenue for understanding phenomena that arise in the context of deep nets.One such phenonemon is the success of deep nets in providing good generalization in an interpolating regime with zero training error.Traditional statistical practice calls for regularization or smoothing to prevent ""overfitting"".However, recent work shows that there exist data interpolation procedures which are statistically consistent and provide good generalization performance.In this context, it has been suggested that ""classical"" and ""modern"" regimes for machine learning are separated by a peak in the generalization error curve, a phenomenon dubbed ""double descent"".While such overfitting peaks do exist and arise from ill-conditioned design matrices, here we challenge the interpretation of the overfitting peak as demarcating the regime where good generalization occurs under overparametrization.We propose a model of Misparamatrized Sparse Regression and analytically compute the GE curves for and penalties.We show that the overfitting peak arising in the interpolation limit is dissociated from the regime of good generalization.The analytical expressions are obtained in the so called ""thermodynamic"" limit.We find an additional interesting phenomenon: increasing overparametrization in the fitting model increases sparsity, which should intuitively improve performance of penalized regression.However, at the same time, the relative number of measurements decrease compared to the number of fitting parameters, and eventually overparametrization does lead to poor generalization.Nevertheless, penalized regression can show good generalization performance under conditions of data interpolation even with a large amount of overparametrization.These results provide a theoretical avenue into studying inverse problems in the interpolating regime using overparametrized fitting functions such as deep nets.","Proposes an analytically tractable model and inference procedure (misparametrized sparse regression, inferred using L_1 penalty and studied in the data-interpolation limit) to study deep-net related phenomena in the context of inverse problems. 
" 1823,Variational Hashing-based Collaborative Filtering with Self-Masking,"Hashing-based collaborative filtering learns binary vector representations of users and items, such that recommendations can be computed very efficiently using the Hamming distance, which is simply the sum of differing bits between two hash codes.A problem with hashing-based collaborative filtering using the Hamming distance, is that each bit is equally weighted in the distance computation, but in practice some bits might encode more important properties than other bits, where the importance depends on the user.""To this end, we propose an end-to-end trainable variational hashing-based collaborative filtering approach that uses the novel concept of self-masking: the user hash code acts as a mask on the items, such that it learns to encode which bits are important to the user, rather than the user's preference towards the underlying item property that the bits represent."", 'This allows a binary user-level importance weighting of each item without the need to store additional weights for each user.We experimentally evaluate our approach against state-of-the-art baselines on 4 datasets, and obtain significant gains of up to 12% in NDCG.We also make available an efficient implementation of self-masking, which experimentally yields <4% runtime overhead compared to the standard Hamming distance.","We propose a new variational hashing-based collaborative filtering approach optimized for a novel self-mask variant of the Hamming distance, which outperforms state-of-the-art by up to 12% on NDCG." 1824,A Resizable Mini-batch Gradient Descent based on a Multi-Armed Bandit,"Determining the appropriate batch size for mini-batch gradient descent is always time consuming as it often relies on grid search.This paper considers a resizable mini-batch gradient descent algorithm based on a multi-armed bandit that achieves performance equivalent to that of best fixed batch-size.At each epoch, the RMGD samples a batch size according to a certain probability distribution proportional to a batch being successful in reducing the loss function.Sampling from this probability provides a mechanism for exploring different batch size and exploiting batch sizes with history of success. After obtaining the validation loss at each epoch with the sampled batch size, the probability distribution is updated to incorporate the effectiveness of the sampled batch size.Experimental results show that the RMGD achieves performance better than the best performing single batch size.It is surprising that the RMGD achieves better performance than grid search.Furthermore, it attains this performance in a shorter amount of time than grid search.",An optimization algorithm that explores various batch sizes based on probability and automatically exploits successful batch size which minimizes validation loss. 
1825,NoiGAN: NOISE AWARE KNOWLEDGE GRAPH EMBEDDING WITH GAN,"Knowledge graphs have gained increasing attention in recent years for their successful application to numerous tasks. Despite the rapid growth of knowledge graph construction, knowledge graphs still suffer from severe incompleteness and inevitably involve various kinds of errors. Several attempts have been made to complete knowledge graphs as well as to detect noise. However, none of them considers unifying these two tasks, even though they are inter-dependent and can mutually boost each other's performance. In this paper, we propose to jointly combine these two tasks within a unified Generative Adversarial Network framework to learn noise-aware knowledge graph embeddings. Extensive experiments have demonstrated that our approach is superior to existing state-of-the-art algorithms with regard to both knowledge graph completion and error detection.",We propose a unified Generative Adversarial Network (GAN) framework to learn noise-aware knowledge graph embedding. 1826,Residual EBMs: Does Real vs. Fake Text Discrimination Generalize?,"Energy-based models (EBMs), a.k.a. un-normalized models, have had recent successes in continuous spaces. However, they have not been successfully applied to model text sequences. While decreasing the energy at training samples is straightforward, mining samples where the energy should be increased is difficult. In part, this is because standard gradient-based methods are not readily applicable when the input is high-dimensional and discrete. Here, we side-step this issue by generating negatives using pre-trained auto-regressive language models. The EBM then works in the residual of the language model, and is trained to discriminate real text from text generated by the auto-regressive models. We investigate the generalization ability of residual EBMs, a pre-requisite for using them in other applications. We extensively analyze generalization for the task of classifying whether an input is machine or human generated, a natural task given the training loss and how we mine negatives. Overall, we observe that EBMs can generalize remarkably well to changes in the architecture of the generators producing negatives. However, EBMs exhibit more sensitivity to the training set used by such generators.",A residual EBM for text whose formulation is equivalent to discriminating between human and machine generated text. We study its generalization behavior. 1827,Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning,"Semi-supervised learning, i.e.
jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances in learning from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias, and demonstrate that mixup augmentation and setting a minimum number of labeled samples per mini-batch are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results in CIFAR-10/100 and Mini-ImageNet despite being much simpler than other state-of-the-art methods. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, while the opposite was assumed in previous work. Code will be made available.","Pseudo-labeling has been shown to be a weak alternative for semi-supervised learning. We, conversely, demonstrate that dealing with confirmation bias via several regularizations makes pseudo-labeling a suitable approach." 1828,Temporal Difference Models: Model-Free Deep RL for Model-Based Control,"Model-free reinforcement learning has been proven to be a powerful, general tool for learning complex behaviors. However, its sample complexity is often impractically large for solving challenging real-world problems, even for off-policy algorithms such as Q-learning. A limiting factor in classic model-free RL is that the learning signal consists only of scalar rewards, ignoring much of the rich information contained in state transition tuples. Model-based RL uses this information by training a predictive model, but often does not achieve the same asymptotic performance as model-free RL due to model bias. We introduce temporal difference models (TDMs), a family of goal-conditioned value functions that can be trained with model-free learning and used for model-based control. TDMs combine the benefits of model-free and model-based RL: they leverage the rich information in state transitions to learn very efficiently, while still attaining asymptotic performance that exceeds that of direct model-based RL methods. Our experimental results show that, on a range of continuous control tasks, TDMs provide a substantial improvement in efficiency compared to state-of-the-art model-based and model-free methods.","We show that a special goal-conditioned value function trained with model-free methods can be used within model-based control, resulting in substantially better sample efficiency and performance."
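Entry 1827 pairs soft pseudo-labels taken from the network's own predictions with mixup as a regularizer against confirmation bias. Below is a minimal PyTorch-style sketch of that combination; the function name, the Beta parameter alpha, and using the raw softmax as the soft target are illustrative assumptions, not the authors' full training recipe (which, per the abstract, also enforces a minimum number of labeled samples per mini-batch).

```python
import torch
import torch.nn.functional as F

# Sketch (under stated assumptions) of soft pseudo-labeling with mixup, in the spirit
# of entry 1827: unlabeled samples get the network's softmax as a soft target, and
# mixup is applied to inputs and targets to regularize against confirmation bias.
def pseudo_label_mixup_step(model, x_labeled, y_onehot, x_unlabeled, alpha=1.0):
    with torch.no_grad():
        soft_targets = F.softmax(model(x_unlabeled), dim=1)   # soft pseudo-labels

    x = torch.cat([x_labeled, x_unlabeled], dim=0)
    y = torch.cat([y_onehot, soft_targets], dim=0)

    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]                     # mixup on inputs
    y_mix = lam * y + (1 - lam) * y[perm]                     # mixup on (soft) targets

    log_probs = F.log_softmax(model(x_mix), dim=1)
    return -(y_mix * log_probs).sum(dim=1).mean()             # soft cross-entropy loss
```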
1829,Neural Permutation Processes,"We introduce a neural architecture to perform amortized approximate Bayesian inference over latent random permutations of two sets of objects. The method involves approximating permanents of matrices of pairwise probabilities using recent ideas on functions defined over sets. Each sampled permutation comes with a probability estimate, a quantity unavailable in MCMC approaches. We illustrate the method on sets of 2D points and MNIST images.",A novel neural architecture for efficient amortized inference over latent permutations 1830,Improving Relevance Prediction with Transfer Learning in Large-scale Retrieval Systems,"Machine-learned large-scale retrieval systems require a large amount of training data representing query-item relevance. However, collecting users' explicit feedback is costly. In this paper, we propose to leverage user logs and implicit feedback as auxiliary objectives to improve relevance modeling in retrieval systems. Specifically, we adopt a two-tower neural net architecture to model query-item relevance given both collaborative and content information. By introducing auxiliary tasks trained with much richer implicit user feedback data, we improve the quality and resolution of the learned representations of queries and items. Applying these learned representations to an industrial retrieval system has delivered significant improvements.",We propose a novel two-tower shared-bottom model architecture for transferring knowledge from rich implicit feedback to predict relevance for large-scale retrieval systems. 1831,Learning to Move with Affordance Maps,"The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent, from household robotic vacuums to autonomous vehicles. Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry, but fail to model dynamic objects or semantic constraints. Learning-based RL agents are an attractive alternative because they can incorporate both semantic and geometric information, but are notoriously sample-inefficient, difficult to generalize to novel settings, and hard to interpret. In this paper, we combine the best of both worlds with a modular approach that learns a spatial representation of a scene that is trained to be effective when coupled with traditional geometric planners. Specifically, we design an agent that learns to predict a spatial affordance map that elucidates what parts of a scene are navigable, through active self-supervised experience gathering. In contrast to most simulation environments that assume a static world, we evaluate our approach in the VizDoom simulator, using large-scale randomly-generated maps containing a variety of dynamic actors and hazards. We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.","We address the task of autonomous exploration and navigation using spatial affordance maps that can be learned in a self-supervised manner; these outperform classic geometric baselines while being more sample-efficient than contemporary RL algorithms"
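Entry 1830 scores query-item relevance by encoding queries and items with separate towers and training auxiliary heads on implicit feedback. Below is a minimal sketch of that shared-bottom, two-tower layout; the layer sizes, the dot-product similarity, and the single auxiliary head are illustrative assumptions, not the production system described in the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch (assumed sizes, dot-product similarity, one auxiliary head) of a
# two-tower relevance model in the spirit of entry 1830: the main score models
# query-item relevance, while an auxiliary head is trained on implicit feedback.
class TwoTowerRelevance(nn.Module):
    def __init__(self, query_dim, item_dim, hidden=128, embed=64):
        super().__init__()
        self.query_tower = nn.Sequential(nn.Linear(query_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, embed))
        self.item_tower = nn.Sequential(nn.Linear(item_dim, hidden), nn.ReLU(),
                                        nn.Linear(hidden, embed))
        self.aux_head = nn.Linear(2 * embed, 1)   # auxiliary objective on implicit feedback

    def forward(self, query_feats, item_feats):
        q = self.query_tower(query_feats)
        v = self.item_tower(item_feats)
        relevance = (q * v).sum(dim=1)            # dot-product relevance score
        aux = self.aux_head(torch.cat([q, v], dim=1)).squeeze(1)
        return relevance, aux
```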