[ "Due to the success of deep learning to solving a variety of challenging machine learning tasks, there is a rising interest in understanding loss functions for training neural networks from a theoretical aspect.", "Particularly, the properties of critical points and the landscape around them are of importance to determine the convergence performance of optimization algorithms.", "In this paper, we provide a necessary and sufficient characterization of the analytical forms for the critical points (as well as global minimizers) of the square loss functions for linear neural networks.", "We show that the analytical forms of the critical points characterize the values of the corresponding loss functions as well as the necessary and sufficient conditions to achieve global minimum.", "Furthermore, we exploit the analytical forms of the critical points to characterize the landscape properties for the loss functions of linear neural networks and shallow ReLU networks.", "One particular conclusion is that: While the loss function of linear networks has no spurious local minimum, the loss function of one-hidden-layer nonlinear networks with ReLU activation function does have local minimum that is not global minimum." ]
[ 0, 0, 0, 0, 1, 0 ]
[ 0.3018867874688502, 0.37209301838831804, 0.6037735799216805, 0.571428566430654, 0.7234042503395203, 0.15094339124243522 ]
[ "We provide necessary and sufficient analytical forms for the critical points of the square loss functions for various neural networks, and exploit the analytical forms to characterize the landscape properties for the loss functions of these neural networks." ]
[ "The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain.", "One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways.", "To address this “weight transport problem” (Grossberg, 1987), two biologically-plausible algorithms, proposed by Liao et al. (2016) and Lillicrap et al. (2016), relax BP’s weight symmetry requirements and demonstrate comparable learning capabilities to that of BP on small datasets.", "However, a recent study by Bartunov et al. (2018) finds that although feedback alignment (FA) and some variants of target-propagation (TP) perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet.", "Here, we additionally evaluate the sign-symmetry (SS) algorithm (Liao et al., 2016), which differs from both BP and FA in that the feedback and feedforward weights do not share magnitudes but share signs.", "We examined the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet; RetinaNet for MS COCO).", "Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks.", "These results complement the study by Bartunov et al. (2018) and establish a new benchmark for future biologically-plausible learning algorithms on more difficult datasets and more complex architectures." ]
[ 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0, 0, 0.13043477920604923, 0.14285713922902502, 0, 0.11764705467128042, 0, 0.11111110709876558 ]
[ "Biologically plausible learning algorithms, particularly sign-symmetry, work well on ImageNet" ]
[ "We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors.", "We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning.\n" ]
[ 0, 1 ]
[ 0.33333332839506175, 0.8888888839111112 ]
[ "We introduce the 2-simplicial Transformer and show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning." ]
[ "We present Tensor-Train RNN (TT-RNN), a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics.", "Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations and sensitivity to error propagation.", "Our proposed tensor recurrent architecture addresses these issues by learning the nonlinear dynamics directly using higher order moments and high-order state transition functions.", "Furthermore, we decompose the higher-order structure using the tensor-train (TT) decomposition to reduce the number of parameters while preserving the model performance.", "We theoretically establish the approximation properties of Tensor-Train RNNs for general sequence inputs, and such guarantees are not available for usual RNNs.", "We also demonstrate significant long-term prediction improvements over general RNN and LSTM architectures on a range of simulated environments with nonlinear dynamics, as well on real-world climate and traffic data." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.06666666222222252, 0.06451612466181092, 0.06060605638200213, 0.13793102996432832, 0.06666666222222252, 0.052631575069252354 ]
[ "Accurate forecasting over very long time horizons using tensor-train RNNs" ]
[ "Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret.", "We propose a variational message-passing algorithm for variational inference in such models.", "We make three contributions.", "First, we propose structured inference networks that incorporate the structure of the graphical model in the inference network of variational auto-encoders (VAE).", "Second, we establish conditions under which such inference networks enable fast amortized inference similar to VAE.", "Finally, we derive a variational message passing algorithm to perform efficient natural-gradient inference while retaining the efficiency of the amortized inference.", "By simultaneously enabling structured, amortized, and natural-gradient inference for deep structured models, our method simplifies and generalizes existing methods." ]
[ 0, 1, 0, 0, 0, 0, 0 ]
[ 0.2777777727932099, 0.5714285666581633, 0.09523809215419511, 0.3428571378612245, 0, 0.22222221723765442, 0.1714285664326532 ]
[ "We propose a variational message-passing algorithm for models that contain both the deep model and probabilistic graphical model." ]
[ "Modern deep neural networks have a large amount of weights, which make them difficult to deploy on computation constrained devices such as mobile phones.", "One common approach to reduce the model size and computational cost is to use low-rank factorization to approximate a weight matrix.", "However, performing standard low-rank factorization with a small rank can hurt the model expressiveness and significantly decrease the performance.", "In this work, we propose to use a mixture of multiple low-rank factorizations to model a large weight matrix, and the mixture coefficients are computed dynamically depending on its input.", "We demonstrate the effectiveness of the proposed approach on both language modeling and image classification tasks.", "Experiments show that our method not only improves the computation efficiency but also maintains (sometimes outperforms) its accuracy compared with the full-rank counterparts." ]
[ 0, 0, 0, 0, 1, 0 ]
[ 0.04651162297458138, 0.2105263107894738, 0.1621621571658146, 0.13043477775992457, 0.23529411271626308, 0.09756097063652613 ]
[ "A simple modification to low-rank factorization that improves performances (in both image and language tasks) while still being compact." ]
[ "Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices.", "We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs).", "PCRs deviate from previous formats by leveraging progressive compression to split each training example into multiple examples of increasingly higher fidelity, without adding to the total data size.", "Training examples of similar fidelity are grouped together, which reduces both the system overhead and data bandwidth needed to train a model.", "We show that models can be trained on aggressively compressed representations of the training data and still retain high accuracy, and that PCRs can enable a 2x speedup on average over baseline formats using JPEG compression.", "Our results hold across deep learning architectures for a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ." ]
[ 0, 1, 0, 0, 0, 0 ]
[ 0.2222222172246915, 0.30434782108695657, 0.15999999503200013, 0.2222222172246915, 0.21818181331570258, 0.19047618552154208 ]
[ "We propose a simple, general, and space-efficient data format to accelerate deep learning training by allowing sample fidelity to be dynamically selected at training time" ]
[ "It is fundamental and challenging to train robust and accurate Deep Neural Networks (DNNs) when semantically abnormal examples exist.", "Although great progress has been made, there is still one crucial research question which is not thoroughly explored yet: What training examples should be focused and how much more should they be emphasised to achieve robust learning?", "In this work, we study this question and propose gradient rescaling (GR) to solve it.", "GR modifies the magnitude of logit vector’s gradient to emphasise on relatively easier training data points when noise becomes more severe, which functions as explicit emphasis regularisation to improve the generalisation performance of DNNs.", "Apart from regularisation, we connect GR to examples weighting and designing robust loss functions.", "We empirically demonstrate that GR is highly anomaly-robust and outperforms the state-of-the-art by a large margin, e.g., increasing 7% on CIFAR100 with 40% noisy labels.", "It is also significantly superior to standard regularisers in both clean and abnormal settings.", "Furthermore, we present comprehensive ablation studies to explore the behaviours of GR under different cases, which is informative for applying GR in real-world scenarios." ]
[ 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Generative Adversarial Networks (GANs) have achieved remarkable results in the task of generating realistic natural images.", "In most applications, GAN models share two aspects in common.", "On the one hand, GANs training involves solving a challenging saddle point optimization problem, interpreted as an adversarial game between a generator and a discriminator functions.", "On the other hand, the generator and the discriminator are parametrized in terms of deep convolutional neural networks.", "The goal of this paper is to disentangle the contribution of these two factors to the success of GANs.", "In particular, we introduce Generative Latent Optimization (GLO), a framework to train deep convolutional generators without using discriminators, thus avoiding the instability of adversarial optimization problems.", "Throughout a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: learning from large data, synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors." ]
[ 0, 0, 0, 0, 0, 0, 1 ]
[ 0.08695651720226867, 0.04999999625000029, 0.25925925432098773, 0.173913038941399, 0.13333332888888905, 0.14285713788265325, 0.3548387046826223 ]
[ "Are GANs successful because of adversarial training or the use of ConvNets? We show a ConvNet generator trained with a simple reconstruction loss and learnable noise vectors leads many of the desirable properties of a GAN." ]
[ "In this paper, we propose a novel kind of kernel, random forest kernel, to enhance the empirical performance of MMD GAN.", "Different from common forests with deterministic routings, a probabilistic routing variant is used in our innovated random-forest kernel, which is possible to merge with the CNN frameworks.", "Our proposed random-forest kernel has the following advantages: From the perspective of random forest, the output of GAN discriminator can be viewed as feature inputs to the forest, where each tree gets access to merely a fraction of the features, and thus the entire forest benefits from ensemble learning.", "In the aspect of kernel method, random-forest kernel is proved to be characteristic, and therefore suitable for the MMD structure.", "Besides, being an asymmetric kernel, our random-forest kernel is much more flexible, in terms of capturing the differences between distributions.", "Sharing the advantages of CNN, kernel method, and ensemble learning, our random-forest kernel based MMD GAN obtains desirable empirical performances on CIFAR-10, CelebA and LSUN bedroom data sets.", "Furthermore, for the sake of completeness, we also put forward comprehensive theoretical analysis to support our experimental results." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0.14814814397805223, 0.1818181781450873, 0.12499999722222227, 0.23076922650887582, 0.14285713877551035, 0.17647058463667825, 0 ]
[ "Equip MMD GANs with a new random-forest kernel." ]
[ "Reinforcement learning in an actor-critic setting relies on accurate value estimates of the critic.", "However, the combination of function approximation, temporal difference (TD) learning and off-policy training can lead to an overestimating value function.", "A solution is to use Clipped Double Q-learning (CDQ), which is used in the TD3 algorithm and computes the minimum of two critics in the TD-target. \n", "We show that CDQ induces an underestimation bias and propose a new algorithm that accounts for this by using a weighted average of the target from CDQ and the target coming from a single critic.\n", "The weighting parameter is adjusted during training such that the value estimates match the actual discounted return on the most recent episodes and by that it balances over- and underestimation.\n", "Empirically, we obtain more accurate value estimates and demonstrate state of the art results on several OpenAI gym tasks." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.41666666180555556, 0.06896551272294917, 0.12121211698806258, 0.10526315401662063, 0.05405405010956932, 0.20689654720570758 ]
[ "A method for more accurate critic estimates in reinforcement learning." ]
[ "We introduce a systematic framework for quantifying the robustness of classifiers to naturally occurring perturbations of images found in videos.", "As part of this framework, we construct ImageNet-Vid-Robust, a human-expert--reviewed dataset of 22,668 images grouped into 1,145 sets of perceptually similar images derived from frames in the ImageNet Video Object Detection dataset.", "We evaluate a diverse array of classifiers trained on ImageNet, including models trained for robustness, and show a median classification accuracy drop of 16\\%.", "Additionally, we evaluate the Faster R-CNN and R-FCN models for detection, and show that natural perturbations induce both classification as well as localization errors, leading to a median drop in detection mAP of 14 points.", "Our analysis shows that natural perturbations in the real world are heavily problematic for current CNNs, posing a significant challenge to their deployment in safety-critical environments that require reliable, low-latency predictions." ]
[ 1, 0, 0, 0, 0 ]
[ 0.999999995, 0.2127659526301495, 0.2499999950125001, 0.26923076459319534, 0.24999999521701396 ]
[ "We introduce a systematic framework for quantifying the robustness of classifiers to naturally occurring perturbations of images found in videos." ]
[ "Structured tabular data is the most commonly used form of data in industry according to a Kaggle ML and DS Survey.", "Gradient Boosting Trees, Support Vector Machine, Random Forest, and Logistic Regression are typically used for classification tasks on tabular data.", "The recent work of Super Characters method using two-dimensional word embedding achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach.", "In this paper, we propose the SuperTML method, which borrows the idea of Super Characters method and two-dimensional embedding to address the problem of classification on tabular data.", "For each input of tabular data, the features are first projected into two-dimensional embedding like an image, and then this image is fed into fine-tuned ImageNet CNN models for classification.", "Experimental results have shown that the proposed SuperTML method have achieved state-of-the-art results on both large and small datasets." ]
[ 0, 0, 0, 0, 1, 0 ]
[ 0.12121211643709846, 0.181818177043159, 0.05405404949598286, 0.105263153393352, 0.19047618620181417, 0 ]
[ "Deep learning for structured tabular data machine learning using pre-trained CNN model from ImageNet." ]
[ "Learning rich representations from predictive learning without labels has been a longstanding challenge in the field of machine learning.", "Generative pre-training has so far not been as successful as contrastive methods in modeling representations of raw images.", "In this paper, we propose a neural architecture for self-supervised representation learning on raw images called the PatchFormer which learns to model spatial dependencies across patches in a raw image.", "Our method learns to model the conditional probability distribution of missing patches given the context of surrounding patches.", "We evaluate the utility of the learned representations by fine-tuning the pre-trained model on low data-regime classification tasks.", "Specifically, we benchmark our model on semi-supervised ImageNet classification which has become a popular benchmark recently for semi-supervised and self-supervised learning methods.", "Our model is able to achieve 30.3% and 65.5% top-1 accuracies when trained only using 1% and 10% of the labels on ImageNet showing the promise for generative pre-training methods." ]
[ 0, 0, 1, 0, 0, 0, 0 ]
[ 0.071428566836735, 0.0740740694101512, 0.2631578908587258, 0, 0.0769230721893494, 0.19999999555555567, 0.09999999625000015 ]
[ "Decoding pixels can still work for representation learning on images" ]
[ "Adaptive regularization methods pre-multiply a descent direction by a preconditioning matrix.", "Due to the large number of parameters of machine learning problems, full-matrix preconditioning methods are prohibitively expensive.", "We show how to modify full-matrix adaptive regularization in order to make it practical and effective.", "We also provide novel theoretical analysis\n", "for adaptive regularization in non-convex optimization settings.", "The core of our algorithm, termed GGT, consists of efficient inverse computation of square roots of low-rank matrices.", "Our preliminary experiments underscore improved convergence rate of GGT across a variety of synthetic tasks and standard deep learning benchmarks." ]
[ 0, 0, 0, 0, 1, 0, 0 ]
[ 0, 0.0714285665306126, 0.1481481432098767, 0, 0.4210526269252078, 0, 0 ]
[ "fast, truly scalable full-matrix AdaGrad/Adam, with theory for adaptive stochastic non-convex optimization" ]
[ "Dialogue systems require a great deal of different but complementary expertise to assist, inform, and entertain humans.", "For example, different domains (e.g., restaurant reservation, train ticket booking) of goal-oriented dialogue systems can be viewed as different skills, and so does ordinary chatting abilities of chit-chat dialogue systems.", "In this paper, we propose to learn a dialogue system that independently parameterizes different dialogue skills, and learns to select and combine each of them through Attention over Parameters (AoP).", "The experimental results show that this approach achieves competitive performance on a combined dataset of MultiWOZ (Budzianowski et al., 2018), In-Car Assistant (Eric et al.,2017), and Persona-Chat (Zhang et al., 2018).", "Finally, we demonstrate that each dialogue skill is effectively learned and can be combined with other skills to produce selective responses." ]
[ 0, 0, 1, 0, 0 ]
[ 0.22222221752098775, 0.17857142357142872, 0.9818181768198347, 0.17543859149276717, 0.24489795428571431 ]
[ "In this paper, we propose to learn a dialogue system that independently parameterizes different dialogue skills, and learns to select and combine each of them through Attention over Parameters (AoP). " ]
[ "Model distillation aims to distill the knowledge of a complex model into a simpler one.", "In this paper, we consider an alternative formulation called dataset distillation: we keep the model fixed and instead attempt to distill the knowledge from a large training dataset into a small one.", "The idea is to synthesize a small number of data points that do not need to come from the correct data distribution, but will, when given to the learning algorithm as training data, approximate the model trained on the original data.", "For example, we show that it is possible to compress 60,000 MNIST training images into just 10 synthetic distilled images (one per class) and achieve close to the original performance, given a fixed network initialization.", "We evaluate our method in various initialization settings. ", "Experiments on multiple datasets, MNIST, CIFAR10, PASCAL-VOC, and CUB-200, demonstrate the ad-vantage of our approach compared to alternative methods. ", "Finally, we include a real-world application of dataset distillation to the continual learning setting: we show that storing distilled images as episodic memory of previous tasks can alleviate forgetting more effectively than real images." ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.28571428091428575, 0.2857142808163266, 0.25454544982479343, 0.25925925450617293, 0.13333332913333346, 0.14634145841760873, 0.23076922595414212 ]
[ "We propose to distill a large dataset into a small set of synthetic data that can train networks close to original performance. " ]
[ "We relate the minimax game of generative adversarial networks (GANs) to finding the saddle points of the Lagrangian function for a convex optimization problem, where the discriminator outputs and the distribution of generator outputs play the roles of primal variables and dual variables, respectively.", "This formulation shows the connection between the standard GAN training process and the primal-dual subgradient methods for convex optimization.", "The inherent connection does not only provide a theoretical convergence proof for training GANs in the function space, but also inspires a novel objective function for training.", "The modified objective function forces the distribution of generator outputs to be updated along the direction according to the primal-dual subgradient methods.", "A toy example shows that the proposed method is able to resolve mode collapse, which in this case cannot be avoided by the standard GAN or Wasserstein GAN.", "Experiments on both Gaussian mixture synthetic data and real-world image datasets demonstrate the performance of the proposed method on generating diverse samples." ]
[ 0, 1, 0, 0, 0, 0 ]
[ 0.1632653018742192, 0.3124999950195313, 0.21052631101108046, 0.11764705389273376, 0.14634145877453913, 0.11428570938775531 ]
[ "We propose a primal-dual subgradient method for training GANs and this method effectively alleviates mode collapse." ]
[ "Specifying reward functions is difficult, which motivates the area of reward inference: learning rewards from human behavior.", "The starting assumption in the area is that human behavior is optimal given the desired reward function, but in reality people have many different forms of irrationality, from noise to myopia to risk aversion and beyond.", "This fact seems like it will be strictly harmful to reward inference: it is already hard to infer the reward from rational behavior, and noise and systematic biases make actions have less direct of a relationship to the reward.", "Our insight in this work is that, contrary to expectations, irrationality can actually help rather than hinder reward inference.", "For some types and amounts of irrationality, the expert now produces more varied policies compared to rational behavior, which help disambiguate among different reward parameters -- those that otherwise correspond to the same rational behavior.", "We put this to the test in a systematic analysis of the effect of irrationality on reward inference.", "We start by covering the space of irrationalities as deviations from the Bellman update, simulate expert behavior, and measure the accuracy of inference to contrast the different types and study the gains and losses.", "We provide a mutual information-based analysis of our findings, and wrap up by discussing the need to accurately model irrationality, as well as to what extent we might expect (or be able to train) real people to exhibit helpful irrationalities when teaching rewards to learners." ]
[ 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.062499995000000405, 0.08333332888888913, 0.12499999555555572, 0.17142856646530627, 0.12499999555555572, 0.18749999500000014, 0.13953487904813427, 0.07142856734693902 ]
[ "We find that irrationality from an expert demonstrator can help a learner infer their preferences. " ]
[ "Natural Language Processing models lack a unified approach to robustness testing.", "In this paper we introduce WildNLP - a framework for testing model stability in a natural setting where text corruptions such as keyboard errors or misspelling occur.", "We compare robustness of models from 4 popular NLP tasks: Q&A, NLI, NER and Sentiment Analysis by testing their performance on aspects introduced in the framework.", "In particular, we focus on a comparison between recent state-of-the- art text representations and non-contextualized word embeddings.", "In order to improve robust- ness, we perform adversarial training on se- lected aspects and check its transferability to the improvement of models with various cor- ruption types.", "We find that the high perfor- mance of models does not ensure sufficient robustness, although modern embedding tech- niques help to improve it.", "We release cor- rupted datasets and code for WildNLP frame- work for the community." ]
[ 0, 0, 1, 0, 0, 0, 0 ]
[ 0.17647058385813158, 0.04081632154935504, 0.8571428521615995, 0.09999999511250024, 0.15999999503200013, 0.13043477760869585, 0.11111110649691378 ]
[ "We compare robustness of models from 4 popular NLP tasks: Q&A, NLI, NER and Sentiment Analysis by testing their performance on perturbed inputs." ]
[ "Training generative models like Generative Adversarial Network (GAN) is challenging for noisy data.", "A novel curriculum learning algorithm pertaining to clustering is proposed to address this issue in this paper.", "The curriculum construction is based on the centrality of underlying clusters in data points. ", "The data points of high centrality takes priority of being fed into generative models during training.", "To make our algorithm scalable to large-scale data, the active set is devised, in the sense that every round of training proceeds only on an active subset containing a small fraction of already trained data and the incremental data of lower centrality.", "Moreover, the geometric analysis is presented to interpret the necessity of cluster curriculum for generative models.", "The experiments on cat and human-face data validate that our algorithm is able to learn the optimal generative models (e.g. ProGAN) with respect to specified quality metrics for noisy data.", "An interesting finding is that the optimal cluster curriculum is closely related to the critical point of the geometric percolation process formulated in the paper." ]
[ 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.20689654677764577, 0.5161290272632676, 0.2580645111342353, 0.2580645111342353, 0.23076922650887582, 0.45161289823100936, 0.2666666620839507, 0.27027026536157783 ]
[ "A novel cluster-based algorithm of curriculum learning is proposed to solve the robust training of generative models." ]
[ "Backdoor attacks aim to manipulate a subset of training data by injecting adversarial triggers such that machine learning models trained on the tampered dataset will make arbitrarily (targeted) incorrect prediction on the testset with the same trigger embedded.", "While federated learning (FL) is capable of aggregating information provided by different parties for training a better model, its distributed learning methodology and inherently heterogeneous data distribution across parties may bring new vulnerabilities.", "In addition to recent centralized backdoor attacks on FL where each party embeds the same global trigger during training, we propose the distributed backdoor attack (DBA) --- a novel threat assessment framework developed by fully exploiting the distributed nature of FL.", "DBA decomposes a global trigger pattern into separate local patterns and embed them into the training set of different adversarial parties respectively.", "Compared to standard centralized backdoors, we show that DBA is substantially more persistent and stealthy against FL on diverse datasets such as finance and image data.", "We conduct extensive experiments to show that the attack success rate of DBA is significantly higher than centralized backdoors under different settings.", "Moreover, we find that distributed attacks are indeed more insidious, as DBA can evade two state-of-the-art robust FL algorithms against centralized backdoors.", "We also provide explanations for the effectiveness of DBA via feature visual interpretation and feature importance ranking.\n", "To further explore the properties of DBA, we test the attack performance by varying different trigger factors, including local trigger variations (size, gap, and location), scaling factor in FL, data distribution, and poison ratio and interval.", "Our proposed DBA and thorough evaluation results shed lights on characterizing the robustness of FL." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.19999999500000015, 0.21212120713957774, 0.28169013584606234, 0.07142856674107174, 0.33333332847222225, 0.2456140303477994, 0.2105263110495538, 0.11538461098372799, 0.08955223381599492, 0.15999999580000013 ]
[ "We proposed a novel distributed backdoor attack on federated learning and show that it is not only more effective compared with standard centralized attacks, but also harder to be defended by existing robust FL methods" ]
[ "Graph networks have recently attracted considerable interest, and in particular in the context of semi-supervised learning.", "These methods typically work by generating node representations that are propagated throughout a given weighted graph.\n\n", "Here we argue that for semi-supervised learning, it is more natural to consider propagating labels in the graph instead.", "Towards this end, we propose a differentiable neural version of the classic Label Propagation (LP) algorithm.", "This formulation can be used for learning edge weights, unlike other methods where weights are set heuristically.", "Starting from a layer implementing a single iteration of LP, we proceed by adding several important non-linear steps that significantly enhance the label-propagating mechanism.\n\n", "Experiments in two distinct settings demonstrate the utility of our approach.\n" ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.19354838210197725, 0.06060605561065239, 0.17142856646530627, 0.062499995000000405, 0.06060605561065239, 0.049999995200000466, 0.0714285665306126 ]
[ "Neural net for graph-based semi-supervised learning; revisits the classics and propagates *labels* rather than feature representations" ]
[ "Neural architecture search (NAS) has made rapid progress incomputervision,wherebynewstate-of-the-artresultshave beenachievedinaseriesoftaskswithautomaticallysearched neural network (NN) architectures.", "In contrast, NAS has not made comparable advances in natural language understanding (NLU).", "Corresponding to encoder-aggregator meta architecture of typical neural networks models for NLU tasks (Gong et al. 2018), we re-define the search space, by splittingitinto twoparts:encodersearchspace,andaggregator search space.", "Encoder search space contains basic operations such as convolutions, RNNs, multi-head attention and its sparse variants, star-transformers.", "Dynamic routing is included in the aggregator search space, along with max (avg) pooling and self-attention pooling.", "Our search algorithm is then fulfilled via DARTS, a differentiable neural architecture search framework.", "We progressively reduce the search space every few epochs, which further reduces the search time and resource costs.", "Experiments on five benchmark data-sets show that, the new neural networks we generate can achieve performances comparable to the state-of-the-art models that does not involve language model pre-training.\n" ]
[ 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.15789473218836578, 0, 0.39999999500800004, 0.09756097075550292, 0.09999999520000023, 0.21621621165814472, 0.14999999520000015, 0.1538461488757398 ]
[ "Neural Architecture Search for a series of Natural Language Understanding tasks. We design the search space for NLU tasks and apply differentiable architecture search to discover new models" ]
[ "Network embedding (NE) methods aim to learn low-dimensional representations of network nodes as vectors, typically in Euclidean space.", "These representations are then used for a variety of downstream prediction tasks.", "Link prediction is one of the most popular choices for assessing the performance of NE methods.", "However, the complexity of link prediction requires a carefully designed evaluation pipeline to provide consistent, reproducible and comparable results.", "We argue this has not been considered sufficiently in recent works.", "The main goal of this paper is to overcome difficulties associated with evaluation pipelines and reproducibility of results.", "We introduce EvalNE, an evaluation framework to transparently assess and compare the performance of NE methods on link prediction.", "EvalNE provides automation and abstraction for tasks such as hyper-parameter tuning, model validation, edge sampling and computation of edge embeddings.", "The framework integrates efficient procedures for edge and non-edge sampling and can be used to easily evaluate any off-the-shelf embedding method.", "The framework is freely available as a Python toolbox.", "Finally, demonstrating the usefulness of EvalNE in practice, we conduct an empirical study in which we try to replicate and analyse experimental sections of several influential papers." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.19047618557823143, 0.22222221777777784, 0.26315789008310253, 0.3720930183234181, 0.05714285283265339, 0.341463409779893, 0.4651162741373716, 0.13953487878853452, 0.13636363140495886, 0.18181817785123974, 0.16666666166666683 ]
[ "In this paper we introduce EvalNE, a Python toolbox for automating the evaluation of network embedding methods on link prediction and ensuring the reproducibility of results." ]
[ "Deep learning models can be efficiently optimized via stochastic gradient descent, but there is little theoretical evidence to support this.", "A key question in optimization is to understand when the optimization landscape of a neural network is amenable to gradient-based optimization.", "We focus on a simple two-layer ReLU network with two hidden units, and show that all local minimizers are global.", "This, combined with recent work of Lee et al. (2017); Lee et al. (2016), shows that gradient descent converges to the global minimizer." ]
[ 0, 0, 1, 0 ]
[ 0.13043477769376202, 0.18604650684694443, 0.3829787184608421, 0.17391303856332718 ]
[ "Recovery guarantee of stochastic gradient descent with random initialization for learning a two-layer neural network with two hidden nodes, unit-norm weights, ReLU activation functions and Gaussian inputs." ]
[ "Dropout is a simple yet effective technique to improve generalization performance and prevent overfitting in deep neural networks (DNNs).", "In this paper, we discuss three novel observations about dropout to better understand the generalization of DNNs with rectified linear unit (ReLU) activations: 1) dropout is a smoothing technique that encourages each local linear model of a DNN to be trained on data points from nearby regions; 2) a constant dropout rate can result in effective neural-deactivation rates that are significantly different for layers with different fractions of activated neurons; and 3) the rescaling factor of dropout causes an inconsistency to occur between the normalization during training and testing conditions when batch normalization is also used. ", "The above leads to three simple but nontrivial improvements to dropout resulting in our proposed method \"Jumpout.", "\"", "Jumpout samples the dropout rate using a monotone decreasing distribution (such as the right part of a truncated Gaussian), so the local linear model at each data point is trained, with high probability, to work better for data points from nearby than from more distant regions.", "Instead of tuning a dropout rate for each layer and applying it to all samples, jumpout moreover adaptively normalizes the dropout rate at each layer and every training sample/batch, so the effective dropout rate applied to the activated neurons is kept the same.", "Moreover, we rescale the outputs of jumpout for a better trade-off that keeps both the variance and mean of neurons more consistent between training and test phases, which mitigates the incompatibility between dropout and batch normalization.", "Compared to the original dropout, jumpout shows significantly improved performance on CIFAR10, CIFAR100, Fashion-MNIST, STL10, SVHN, ImageNet-1k, etc., while introducing negligible additional memory and computation costs." ]
[ 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.32558139041644135, 0.2549019571856978, 0.1999999952000001, 0.21538461072662735, 0.14285713795918387, 0.07407406913580279, 0.19230768733727824 ]
[ "Jumpout applies three simple yet effective modifications to dropout, based on novel understandings about the generalization performance of DNNs with ReLU activations in local regions." ]
[ "Concerns about interpretability, computational resources, and principled inductive priors have motivated efforts to engineer sparse neural models for NLP tasks.", "If sparsity is important for NLP, might well-trained neural models naturally become roughly sparse?", "Using the Taxi-Euclidean norm to measure sparsity, we find that frequent input words are associated with concentrated or sparse activations, while frequent target words are associated with dispersed activations but concentrated gradients.", "We find that gradients associated with function words are more concentrated than the gradients of content words, even controlling for word frequency." ]
[ 0, 0, 0, 1 ]
[ 0.09523809024943337, 0.11111110635802489, 0.1249999950347224, 0.232558134537588 ]
[ "We study the natural emergence of sparsity in the activations and gradients for some layers of a dense LSTM language model, over the course of training." ]
[ "The integration of a Knowledge Base (KB) into a neural dialogue agent is one of the key challenges in Conversational AI.", "Memory networks have proven effective at encoding KB information into an external memory and thus generating more fluent and informed responses.", "Unfortunately, such memory becomes full of latent representations during training, so the most common strategy is to overwrite old memory entries randomly. \n\n", "In this paper, we question this approach and provide experimental evidence showing that conventional memory networks generate many redundant latent vectors resulting in overfitting and the need for larger memories.", "We introduce memory dropout as an automatic technique that encourages diversity in the latent space by", "1) Aging redundant memories to increase their probability of being overwritten during training", "2) Sampling new memories that summarize the knowledge acquired by redundant memories.", "This technique allows us to incorporate Knowledge Bases to achieve state-of-the-art dialogue generation in the Stanford Multi-Turn Dialogue dataset.", "Considering the same architecture, its use provides an improvement of +2.2 BLEU points for the automatic generation of responses and an increase of +8.1% in the recognition of named entities." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.08510637816206455, 0.20408162775510216, 0.1199999950720002, 0.6071428521428572, 0.6818181771900826, 0.09756097127900079, 0.20512820107823804, 0.13043477784499072, 0.22222221722908106 ]
[ "Conventional memory networks generate many redundant latent vectors resulting in overfitting and the need for larger memories. We introduce memory dropout as an automatic technique that encourages diversity in the latent space." ]
[ "Su-Boyd-Candes (2014) made a connection between Nesterov's method and an ordinary differential equation (ODE). ", "We show if a Hessian damping term is added to the ODE from Su-Boyd-Candes (2014), then Nesterov's method arises as a straightforward discretization of the modified ODE.", "Analogously, in the strongly convex case, a Hessian damping term is added to Polyak's ODE, which is then discretized to yield Nesterov's method for strongly convex functions. ", "Despite the Hessian term, both second order ODEs can be represented as first order systems.\n\n", "Established Liapunov analysis is used to recover the accelerated rates of convergence in both continuous and discrete time. ", "Moreover, the Liapunov analysis can be extended to the case of stochastic gradients which allows the full gradient case to be considered as a special case of the stochastic case. ", "The result is a unified approach to convex acceleration in both continuous and discrete time and in both the stochastic and full gradient cases. \n" ]
[ 0, 1, 0, 0, 0, 0, 0 ]
[ 0.31578946890581727, 0.5531914843639657, 0.2127659524490721, 0.10526315311634371, 0.19047618552154208, 0.27272726773760336, 0.27272726773760336 ]
[ "We show that Nesterov's method arises as a straightforward discretization of an ODE different from the one in Su-Boyd-Candes, and prove acceleration in the stochastic case" ]
[ "We propose learning to transfer learn (L2TL) to improve transfer learning on a target dataset by judicious extraction of information from a source dataset.", "L2TL considers joint optimization of vastly-shared weights between models for source and target tasks, and employs adaptive weights for scaling of constituent losses.", "The adaptation of the weights is based on reinforcement learning, guided with a performance metric on the target validation set.", "We demonstrate state-of-the-art performance of L2TL given fixed models, consistently outperforming fine-tuning baselines on various datasets.", "In the regimes of small-scale target datasets and significant label mismatch between source and target datasets, L2TL outperforms previous work by an even larger margin." ]
[ 1, 0, 0, 0, 0 ]
[ 0.999999995, 0.15789473184210542, 0.21621621121986864, 0.17142856646530627, 0.19047618552154208 ]
[ "We propose learning to transfer learn (L2TL) to improve transfer learning on a target dataset by judicious extraction of information from a source dataset." ]
[ "In many partially observable scenarios, Reinforcement Learning (RL) agents must rely on long-term memory in order to learn an optimal policy.", "We demonstrate that using techniques from NLP and supervised learning fails at RL tasks due to stochasticity from the environment and from exploration.", "Utilizing our insights on the limitations of traditional memory methods in RL, we propose AMRL, a class of models that can learn better policies with greater sample efficiency and are resilient to noisy inputs.", "Specifically, our models use a standard memory module to summarize short-term context, and then aggregate all prior states from the standard model without respect to order.", "We show that this provides advantages both in terms of gradient decay and signal-to-noise ratio over time.", "Evaluating in Minecraft and maze environments that test long-term memory, we find that our model improves average return by 19% over a baseline that has the same number of parameters and by 9% over a stronger baseline that has far more parameters." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.19047618547619058, 0.09756097061273078, 0.25925925450617293, 0.17777777280000015, 0.21052631084487544, 0.07547169332858698 ]
[ "In Deep RL, order-invariant functions can be used in conjunction with standard memory modules to improve gradient decay and resilience to noise." ]
[ "Optimization on manifold has been widely used in machine learning, to handle optimization problems with constraints.", "Most previous works focus on the case with a single manifold.", "However, in practice it is quite common that the optimization problem involves more than one constraint (each constraint corresponding to one manifold).", "It is not clear in general how to optimize on multiple manifolds effectively and provably, especially when the intersection of multiple manifolds is not a manifold or cannot be easily calculated.", "We propose a unified algorithm framework to handle the optimization on multiple manifolds.", "Specifically, we integrate information from multiple manifolds and move along an ensemble direction by viewing the information from each manifold as a drift and adding them together.", "We prove the convergence properties of the proposed algorithms.", "We also apply the algorithms to training neural networks with batch normalization layers and achieve preferable empirical results." ]
[ 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.31249999500000003, 0.14814814331961607, 0.16216215725346983, 0.186046506955111, 0.34482758126040436, 0.14999999520000015, 0.08333332888888913, 0.05882352442906617 ]
[ "This paper introduces an algorithm framework to handle optimization problems with multiple constraints from a manifold perspective." ]
[ "It has long been assumed that high dimensional continuous control problems cannot be solved effectively by discretizing individual dimensions of the action space due to the exponentially large number of bins over which policies would have to be learned.", "In this paper, we draw inspiration from the recent success of sequence-to-sequence models for structured prediction problems to develop policies over discretized spaces.", "Central to this method is the realization that complex functions over high dimensional spaces can be modeled by neural networks that predict one dimension at a time.", "Specifically, we show how Q-values and policies over continuous spaces can be modeled using a next step prediction model over discretized dimensions.", "With this parameterization, it is possible to both leverage the compositional structure of action spaces during learning, as well as compute maxima over action spaces (approximately).", "On a simple example task we demonstrate empirically that our method can perform global search, which effectively gets around the local optimization issues that plague DDPG.", "We apply the technique to off-policy (Q-learning) methods and show that our method can achieve the state-of-the-art for off-policy methods on several continuous control tasks." ]
[ 0, 0, 1, 0, 0, 0, 0 ]
[ 0.19230768790680483, 0.1999999951125001, 0.23255813475392115, 0.21052631084487544, 0.1999999951125001, 0.09523809041950139, 0.20512820021038802 ]
[ "A method to do Q-learning on continuous action spaces by predicting a sequence of discretized 1-D actions." ]
[ "Model-based reinforcement learning (MBRL) aims to learn a dynamics model to reduce the number of interactions with real-world environments.", "However, due to estimation error, rollouts in the learned model, especially those of long horizon, fail to match the ones in real-world environments.", "This mismatch has seriously impacted the sample complexity of MBRL.", "The phenomenon can be attributed to the fact that previous works employ supervised learning to learn the one-step transition models, which has inherent difficulty ensuring the matching of distributions from multi-step rollouts.", "Based on this claim, we propose to learn the synthesized model by matching the distributions of multi-step rollouts sampled from the synthesized model and the real ones via WGAN.", "We theoretically show that matching the two can minimize the difference of cumulative rewards between the real transition and the learned one.", "Our experiments also show that the proposed model imitation method outperforms the state-of-the-art in terms of sample complexity and average return." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0.1333333285333335, 0.062499995312500355, 0, 0.19512194707911967, 0.1714285669224491, 0.12903225331945908, 0.12499999531250018 ]
[ "Our method incorporates WGAN to achieve occupancy measure matching for transition learning." ]
[ "Batch Normalization (BN) and its variants have seen widespread adoption in the deep learning community because they improve the training of deep neural networks.", "Discussions of why this normalization works so well remain unsettled. ", "We make explicit the relationship between ordinary least squares and partial derivatives computed when back-propagating through BN.", "We recast the back-propagation of BN as a least squares fit, which zero-centers and decorrelates partial derivatives from normalized activations.", "This view, which we term {\\em gradient-least-squares}, is an extensible and arithmetically accurate description of BN.", "To further explore this perspective, we motivate, interpret, and evaluate two adjustments to BN." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.05128204636423453, 0.07142856665816359, 0.17647058323529427, 0.5405405355734113, 0.12121211621671278, 0.06451612407908468 ]
[ "Gaussian normalization performs a least-squares fit during back-propagation, which zero-centers and decorrelates partial derivatives from normalized activations." ]
[ "Batch Normalization (BN) has become a cornerstone of deep learning across diverse architectures, appearing to help optimization as well as generalization.", "While the idea makes intuitive sense, theoretical analysis of its effectiveness has been lacking.", "Here theoretical support is provided for one of its conjectured properties, namely, the ability to allow gradient descent to succeed with less tuning of learning rates.", "It is shown that even if we fix the learning rate of scale-invariant parameters (e.g., weights of each layer with BN) to a constant (say, 0.3), gradient descent still approaches a stationary point (i.e., a solution where gradient is zero) in the rate of T^{−1/2} in T iterations, asymptotically matching the best bound for gradient descent with well-tuned learning rates.", "A similar result with convergence rate T^{−1/4} is also shown for stochastic gradient descent." ]
[ 0, 0, 1, 0, 0 ]
[ 0.2325581345592213, 0.2162162115120527, 0.2978723354277954, 0.21917807787577412, 0.05405404934989084 ]
[ "We give a theoretical analysis of the ability of batch normalization to automatically tune learning rates, in the context of finding stationary points for a deep learning objective." ]
[ "Generative models of natural images have progressed towards high fidelity samples by the strong leveraging of scale.", "We attempt to carry this success to the field of video modeling by showing that large Generative Adversarial Networks trained on the complex Kinetics-600 dataset are able to produce video samples of substantially higher complexity and fidelity than previous work. ", "Our proposed model, Dual Video Discriminator GAN (DVD-GAN), scales to longer and higher resolution videos by leveraging a computationally efficient decomposition of its discriminator.", "We evaluate on the related tasks of video synthesis and video prediction, and achieve new state-of-the-art Fréchet Inception Distance for prediction for Kinetics-600, as well as state-of-the-art Inception Score for synthesis on the UCF-101 dataset, alongside establishing a strong baseline for synthesis on Kinetics-600." ]
[ 0, 1, 0, 0 ]
[ 0.09302325114115761, 0.31746031256235835, 0.15686274011534043, 0.2758620639892985 ]
[ "We propose DVD-GAN, a large video generative model that is state of the art on several tasks and produces highly complex videos when trained on large real world datasets." ]
[ "Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated.", "In this work, we introduce Neural Process Networks to understand procedural text through (neural) simulation of action dynamics. ", "Our model complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers.", "The model updates the states of the entities by executing learned action operators.", "Empirical results demonstrate that our proposed model can reason about the unstated causal effects of actions, allowing it to provide more accurate contextual information for understanding and generating procedural text, all while offering more interpretable internal representations than existing alternatives." ]
[ 0, 0, 0, 1, 0 ]
[ 0.20512820021038802, 0.04878048283164834, 0.20512820021038802, 0.23529411307958487, 0.19672130686374642 ]
[ "We propose a new recurrent memory architecture that can track common sense state changes of entities by simulating the causal effects of actions." ]
[ "There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with an aim for faster execution, better accuracy, or greater compression. ", "In this setting, a neural data structure is instantiated by training a network over many epochs of its inputs until convergence.", "In many applications this expensive initialization is not practical, for example streaming algorithms --- where inputs are ephemeral and can only be inspected a small number of times. ", "In this paper we explore the learning of approximate set membership over a stream of data in one-shot via meta-learning.", "We propose a novel memory architecture, the Neural Bloom Filter, which we show to be more compressive than Bloom Filters and several existing memory-augmented neural networks in scenarios of skewed data or structured sets." ]
[ 0, 0, 0, 1, 0 ]
[ 0.045454541291322696, 0.12121211643709846, 0.04761904334467159, 0.3124999951757813, 0.2173913002930057 ]
[ "We investigate the space efficiency of memory-augmented neural nets when learning set membership." ]
[ "We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network.", "Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them.", "We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network.", "We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks.", "Our approach only requires calculating two square curvature factor matrices for each layer.", "Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage.", "We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture." ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.6222222172444445, 0.13333332835555575, 0.34999999501250006, 0.27906976244456466, 0.058823524688581694, 0.13953487872363457, 0.17647058351211087 ]
[ "We construct a Kronecker factored Laplace approximation for neural networks that leads to an efficient matrix normal distribution over the weights." ]
[ "Spectral embedding is a popular technique for the representation of graph data.", "Several regularization techniques have been proposed to improve the quality of the embedding with respect to downstream tasks like clustering.", "In this paper, we explain on a simple block model the impact of the complete graph regularization, whereby a constant is added to all entries of the adjacency matrix.", "Specifically, we show that the regularization forces the spectral embedding to focus on the largest blocks, making the representation less sensitive to noise or outliers.", "We illustrate these results on both synthetic and real data, showing how regularization improves standard clustering scores." ]
[ 0, 0, 0, 1, 0 ]
[ 0.20689654687277062, 0.2285714235755103, 0.14285713803854888, 0.7368421003185596, 0.11764705382352963 ]
[ "Graph regularization forces spectral embedding to focus on the largest clusters, making the representation less sensitive to noise. " ]
[ "The exposure bias problem refers to the training-inference discrepancy caused by teacher forcing in maximum likelihood estimation (MLE) training for auto-regressive neural network language models (LM).", "It has been regarded as a central problem for natural language generation (NLG) model training.", "Although a lot of algorithms have been proposed to avoid teacher forcing and therefore to alleviate exposure bias, there is little work showing how serious the exposure bias problem is.", "In this work, we first identify the auto-recovery ability of MLE-trained LM, which casts doubt on the seriousness of exposure bias.", "We then develop a precise, quantifiable definition for exposure bias.", "However, according to our measurements in controlled experiments, there's only around 3% performance gain when the training-inference discrepancy is completely removed.", "Our results suggest the exposure bias problem could be much less serious than it is currently assumed to be." ]
[ 0, 0, 0, 0, 0, 0, 1 ]
[ 0.21739129943289237, 0.11428570938775531, 0.21276595255771855, 0.10256409756739011, 0.2666666622222223, 0.09756097061273078, 0.68421052132964 ]
[ "We show that exposure bias could be much less serious than it is currently assumed to be for MLE LM training." ]
[ "The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks.", "Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents on referential communication games.", "We extend previous work, in which agents were trained in symbolic environments, by developing agents which are able to learn from raw pixel data, a more challenging and realistic input representation.", "We find that the degree of structure found in the input data affects the nature of the emerged protocols, and thereby corroborate the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured. " ]
[ 1, 0, 0, 0 ]
[ 0.35897435424063123, 0.05405404923301723, 0.0930232512709575, 0.1632653018742192 ]
[ "A controlled study of how the structure of the environment affects the properties of emergent communication protocols." ]
[ "For understanding generic documents, information like font sizes, column layout, and generally the positioning of words may carry semantic information that is crucial for solving a downstream document intelligence task.", "Our novel BERTgrid, which is based on Chargrid by Katti et al. (2018), represents a document as a grid of contextualized word piece embedding vectors, thereby making its spatial structure and semantics accessible to the processing neural network.", "The contextualized embedding vectors are retrieved from a BERT language model.", "We use BERTgrid in combination with a fully convolutional network on a semantic instance segmentation task for extracting fields from invoices.", "We demonstrate its performance on tabulated line item and document header field extraction." ]
[ 0, 0, 1, 0, 0 ]
[ 0.09999999601250016, 0.12499999646701399, 0.2727272677272728, 0.1290322534859523, 0.08333332836805586 ]
[ "Grid-based document representation with contextualized embedding vectors for documents with 2D layouts" ]
[ "Deep reinforcement learning (RL) policies are known to be vulnerable to adversarial perturbations to their observations, similar to adversarial examples for classifiers.", "However, an attacker is not usually able to directly modify another agent's observations.", "This might lead one to wonder: is it possible to attack an RL agent simply by choosing an adversarial policy acting in a multi-agent environment so as to create natural observations that are adversarial?", "We demonstrate the existence of adversarial policies in zero-sum games between simulated humanoid robots with proprioceptive observations, against state-of-the-art victims trained via self-play to be robust to opponents.", "The adversarial policies reliably win against the victims but generate seemingly random and uncoordinated behavior.", "We find that these policies are more successful in high-dimensional environments, and induce substantially different activations in the victim policy network than when the victim plays against a normal opponent.", "Videos are available at" ]
[ 0, 0, 1, 0, 0, 0, 0 ]
[ 0.31578946869806096, 0.12121211643709846, 0.4313725442522107, 0.17021276106835687, 0.11428570938775531, 0.12765956957899524, 0.07407407023319637 ]
[ "Deep RL policies can be attacked by other agents taking actions so as to create natural observations that are adversarial." ]
[ "GloVe and Skip-gram word embedding methods learn word vectors by decomposing a denoised matrix of word co-occurrences into a product of low-rank matrices.", "In this work, we propose an iterative algorithm for computing word vectors based on modeling word co-occurrence matrices with Generalized Low Rank Models.", "Our algorithm generalizes both Skip-gram and GloVe as well as giving rise to other embedding methods based on the specified co-occurrence matrix, distribution of co-occurrences, and the number of iterations in the iterative algorithm.", "For example, using a Tweedie distribution with one iteration results in GloVe and using a Multinomial distribution with full-convergence mode results in Skip-gram.", "Experimental results demonstrate that multiple iterations of our algorithm improve results over the GloVe method on the Google word analogy similarity task." ]
[ 0, 1, 0, 0, 0 ]
[ 0.21621621121986864, 0.34999999505000007, 0.26086956045368626, 0.11428570928979613, 0.15789473185595584 ]
[ "We present a novel iterative algorithm based on generalized low rank models for computing and interpreting word embedding models." ]
[ "Deterministic models are approximations of reality that are often easier to build and interpret than stochastic alternatives. \n", "Unfortunately, as nature is capricious, observational data can never be fully explained by deterministic models in practice. \n", "Observation and process noise need to be added to adapt deterministic models to behave stochastically, such that they are capable of explaining and extrapolating from noisy data.\n", "Adding process noise to deterministic simulators can induce a failure in the simulator resulting in no return value for certain inputs -- a property we describe as ``brittle.''\n", "We investigate and address the wasted computation that arises from these failures, and the effect of such failures on downstream inference tasks.\n", "We show that performing inference in this space can be viewed as rejection sampling, and train a conditional normalizing flow as a proposal over noise values such that there is a low probability that the simulator crashes, increasing computational efficiency and inference fidelity for a fixed sample budget when used as the proposal in an approximate inference algorithm." ]
[ 0, 0, 0, 0, 0, 1 ]
[ 0.11764705382352963, 0, 0.09523809041950139, 0.1818181770764464, 0.1578947318975071, 0.22580644763267432 ]
[ "We learn a conditional autoregressive flow to propose perturbations that don't induce simulator failure, improving inference performance." ]
[ "Multi-hop question answering requires models to gather information from different parts of a text to answer a question.", "Most current approaches learn to address this task in an end-to-end way with neural networks, without maintaining an explicit representation of the reasoning process.", "We propose a method to extract a discrete reasoning chain over the text, which consists of a series of sentences leading to the answer.", "We then feed the extracted chains to a BERT-based QA model to do final answer prediction.", "Critically, we do not rely on gold annotated chains or ``supporting facts:'' at training time, we derive pseudogold reasoning chains using heuristics based on named entity recognition and coreference resolution.", "Nor do we rely on these annotations at test time, as our model learns to extract chains from raw text alone. ", "We test our approach on two recently proposed large multi-hop question answering datasets: WikiHop and HotpotQA, and achieve state-of-the-art performance on WikiHop and strong performance on HotpotQA.", "Our analysis shows the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is dealing with each candidate sentence in a context-aware way.", "Furthermore, human evaluation shows that our extracted chains allow humans to give answers with high confidence, indicating that these are a strong intermediate abstraction for this task." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.1379310294887041, 0.16216215745799867, 0.30303029814508725, 0.06896551224732497, 0.048780483307555446, 0, 0.17142856662857156, 0.0930232514223907, 0.09999999545000023 ]
[ "We improve answering of questions that require multi-hop reasoning by extracting an intermediate chain of sentences." ]
[ "Computing the normalizing constant (also called the partition function, Bayesian evidence, or marginal likelihood) is one of the central goals of Bayesian inference, yet most existing methods are both expensive and inaccurate.", "Here we develop a new approach, starting from posterior samples obtained with a standard Markov Chain Monte Carlo (MCMC).", "We apply a novel Normalizing Flow (NF) approach to obtain an analytic density estimator from these samples, followed by Optimal Bridge Sampling (OBS) to obtain the normalizing constant.", "We compare our method, which we call Gaussianized Bridge Sampling (GBS), to existing methods such as Nested Sampling (NS) and Annealed Importance Sampling (AIS) on several examples, showing our method is both significantly faster and substantially more accurate than these methods, and comes with a reliable error estimate." ]
[ 0, 0, 0, 1 ]
[ 0.23728813062912965, 0.1199999953920002, 0.3103448226397147, 0.3243243194156319 ]
[ "We develop a new method for normalizing constant (Bayesian evidence) estimation using Optimal Bridge Sampling and a novel Normalizing Flow, which is shown to outperform existing methods in terms of accuracy and computational time." ]
[ "We present a large-scale empirical study of catastrophic forgetting (CF) in modern Deep Neural Network (DNN) models that perform sequential (or: incremental) learning.\n", "A new experimental protocol is proposed that takes into account typical constraints encountered in application scenarios.\n", "As the investigation is empirical, we evaluate CF behavior on the hitherto largest number of visual classification datasets, from each of which we construct a representative number of Sequential Learning Tasks (SLTs) in close alignment to previous works on CF.\n", "Our results clearly indicate that there is no model that avoids CF for all investigated datasets and SLTs under application conditions.", "We conclude with a discussion of potential solutions and workarounds to CF, notably for the EWC and IMM models." ]
[ 1, 0, 0, 0, 0 ]
[ 0.27272726776859507, 0.21621621124908705, 0.03703703237311445, 0.19999999500000015, 0.2631578897506926 ]
[ "We check DNN models for catastrophic forgetting using a new evaluation scheme that reflects typical application conditions, with surprising results." ]
[ "Federated Learning (FL) refers to learning a high-quality global model based on decentralized data storage, without ever copying the raw data.", "A natural scenario arises with data created on mobile phones by the activity of their users.", "Given the typical data heterogeneity in such situations, it is natural to ask how the global model can be personalized for each such device, individually.", "In this work, we point out that the setting of Model Agnostic Meta Learning (MAML), where one optimizes for a fast, gradient-based, few-shot adaptation to a heterogeneous distribution of tasks, has a number of similarities with the objective of personalization for FL.", "We present FL as a natural source of practical applications for MAML algorithms, and make the following observations.", "1) The popular FL algorithm, Federated Averaging, can be interpreted as a meta learning algorithm.", "2) Careful fine-tuning can yield a global model with higher accuracy, which is at the same time easier to personalize.", "However, solely optimizing for the global model accuracy yields a weaker personalization result.", "3) A model trained using a standard datacenter optimization method is much harder to personalize, compared to one trained using Federated Averaging, supporting the first claim.", "These results raise new questions for FL, MAML, and broader ML research." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.21621621130752386, 0, 0.10256409772518103, 0.15686274079200319, 0.05882352442906617, 0.19354838210197725, 0.22222221728395072, 0.0689655122948874, 0.25641025157133474, 0 ]
[ "Federated Averaging already is a Meta Learning algorithm, while datacenter-trained methods are significantly harder to personalize." ]
[ "Memorization of data in deep neural networks has become a subject of significant research interest. \n", "In this paper, we link memorization of images in deep convolutional autoencoders to downsampling through strided convolution. ", "To analyze this mechanism in a simpler setting, we train linear convolutional autoencoders and show that linear combinations of training data are stored as eigenvectors in the linear operator corresponding to the network when downsampling is used. ", "On the other hand, networks without downsampling do not memorize training data. ", "We provide further evidence that the same effect happens in nonlinear networks. ", "Moreover, downsampling in nonlinear networks causes the model to memorize not just linear combinations of images, but individual training images. ", "Since convolutional autoencoder components are building blocks of deep convolutional networks, we envision that our findings will shed light on the important phenomenon of memorization in over-parameterized deep networks. \n" ]
[ 0, 1, 0, 0, 0, 0, 0 ]
[ 0.1538461489644972, 0.3448275814982165, 0.26666666297283953, 0.08333332836805586, 0.16666666170138905, 0.12121211676767694, 0.15789473272853197 ]
[ "We identify downsampling as a mechanism for memorization in convolutional autoencoders." ]
[ "Reinforcement learning provides a powerful and general framework for decision\n", "making and control, but its application in practice is often hindered by the need\n", "for extensive feature and reward engineering.", "Deep reinforcement learning methods\n", "can remove the need for explicit engineering of policy or value features, but\n", "still require a manually specified reward function.", "Inverse reinforcement learning\n", "holds the promise of automatic reward acquisition, but has proven exceptionally\n", "difficult to apply to large, high-dimensional problems with unknown dynamics.", "In\n", "this work, we propose AIRL, a practical and scalable inverse reinforcement learning\n", "algorithm based on an adversarial reward learning formulation that is competitive\n", "with direct imitation learning algorithms.", "Additionally, we show that AIRL is\n", "able to recover portable reward functions that are robust to changes in dynamics,\n", "enabling us to learn policies even under significant variation in the environment\n", "seen during training." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.06896551272294917, 0, 0.07999999635200017, 0.17391304060491497, 0.12499999517578143, 0.07692307298816588, 0.18181817946280993, 0.13333332868888906, 0.0714285670663268, 0.2580645113839751, 0.33333332868888893, 0.08333333003472235, 0, 0.19354838235171706, 0.06451612428720119, 0 ]
[ "We propose an adversarial inverse reinforcement learning algorithm capable of learning reward functions which can transfer to new, unseen environments." ]
[ "We consider two questions at the heart of machine learning; how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well?", "Our work responds to \\citet{zhang2016understanding}, who showed deep neural networks can easily memorize randomly labeled training data, despite generalizing well on real labels of the same inputs.", "We show that the same phenomenon occurs in small linear models.", "These observations are explained by the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization.", "We also demonstrate that, when one holds the learning rate fixed, there is an optimum batch size which maximizes the test set accuracy.", "We propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large.", "Interpreting stochastic gradient descent as a stochastic differential equation, we identify the ``noise scale\" $g = \\epsilon (\\frac{N}{B} - 1) \\approx \\epsilon N/B$, where $\\epsilon$ is the learning rate, $N$ the training set size and $B$ the batch size.", "Consequently the optimum batch size is proportional to both the learning rate and the size of the training set, $B_{opt} \\propto \\epsilon N$.", "We verify these predictions empirically." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.15999999539200013, 0.044444439644444965, 0.06896551253269949, 0.2777777727777779, 0.09999999505000023, 0.514285709289796, 0.1568627405305653, 0.1621621571658146, 0 ]
[ "Generalization is strongly correlated with the Bayesian evidence, and gradient noise drives SGD towards minima whose evidence is large." ]
[ "In the industrial field, positron annihilation is not affected by complex environments, and gamma-ray photons are highly penetrating, so nondestructive detection of industrial parts can be realized.", "Due to the poor image quality caused by gamma-ray photon scattering, attenuation and short sampling times in the positron process, we propose combining deep learning with adversarial nets to generate positron images with good quality and clear details.", "The structure of the paper is as follows: firstly, we encode medical CT images into hidden vectors based on transfer learning, and use PCA to extract positron image features.", "Secondly, we construct a positron image memory based on an attention mechanism as the input to the adversarial nets, which use the medical hidden variables as a query.", "Finally, we train the whole model jointly and update the input parameters until convergence.", "Experiments have demonstrated the possibility of generating rare positron images for industrial non-destructive testing using adversarial networks, and good imaging results have been achieved." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.060606056932966244, 0.09523809215419511, 0.05405405066471898, 0.18749999625000005, 0, 0.06451612520291386 ]
[ "adversarial nets, attention mechanism, positron images, data scarcity" ]
[ "We revisit the Recurrent Attention Model (RAM, Mnih et al. (2014)), a recurrent neural network for visual attention, from an active information sampling perspective. \n\n", "We borrow ideas from neuroscience research on the role of active information sampling in the context of visual attention and gaze (Gottlieb, 2018), where the author suggested three types of motives for active information sampling strategies.", "We find the original RAM model only implements one of them.\n\n", "We identify three key weaknesses of the original RAM and provide a simple solution by adding two extra terms to the objective function.", "The modified RAM", "1) achieves faster convergence,", "2) allows dynamic decision making per sample without loss of accuracy, and", "3) generalizes much better on longer sequences of glimpses than it was trained for, compared with the original RAM. \n" ]
[ 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.0869565167769379, 0.2399999951280001, 0.18181817719008275, 0.5581395298864251, 0, 0, 0.060606055977961794, 0.14634145841760873 ]
[ "Inspired by neuroscience research, we solve three key weaknesses of the widely-cited recurrent attention model by simply adding two terms to the objective function." ]
[ "Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning research on graph-structured data.", "However, a large quantity of labeled graphs is difficult to obtain, which significantly limits the true success of GNNs.", "Although active learning has been widely studied for addressing label-sparse issues with other data types like text, images, etc., how to make it effective over graphs is an open question for research. ", "In this paper, we present an investigation of active learning with GNNs for node classification tasks. ", "Specifically, we propose a new method, which uses node feature propagation followed by K-Medoids clustering of the nodes for instance selection in active learning.", "With a theoretical bound analysis we justify the design choice of our approach.", "In our experiments on four benchmark datasets, the proposed method outperforms other representative baseline methods consistently and significantly." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0.05882352525951587, 0.14285713826530627, 0.1395348801514333, 0.22222221755829916, 0.17647058408304508, 0.08695651682419688, 0.071428566836735 ]
[ "This paper introduces a clustering-based active learning algorithm on graphs." ]
[ "Continuous Normalizing Flows (CNFs) have emerged as promising deep generative models for a wide range of tasks thanks to their invertibility and exact likelihood estimation.", "However, conditioning CNFs on signals of interest for conditional image generation and downstream predictive tasks is inefficient due to the high-dimensional latent code generated by the model, which needs to be of the same size as the input data.", "In this paper, we propose InfoCNF, an efficient conditional CNF that partitions the latent space into a class-specific supervised code and an unsupervised code that is shared among all classes for efficient use of labeled information.", "Since the partitioning strategy (slightly) increases the number of function evaluations (NFEs), InfoCNF also employs gating networks to learn the error tolerances of its ordinary differential equation (ODE) solvers for better speed and performance.", "We show empirically that InfoCNF improves the test accuracy over the baseline while yielding comparable likelihood scores and reducing the NFEs on CIFAR10.", "Furthermore, applying the same partitioning strategy in InfoCNF on time-series data helps improve extrapolation performance." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.09090908600206637, 0.15094339162691361, 0.35999999528800003, 0.399999995288, 0.14999999501250016, 0.058823524480969266 ]
[ "We propose the InfoCNF, an efficient conditional CNF that employs gating networks to learn the error tolerances of the ODE solvers " ]
[ "A central goal of unsupervised learning is to acquire representations from unlabeled data or experience that can be used for more effective learning of downstream tasks from modest amounts of labeled data.", "Many prior unsupervised learning works aim to do so by developing proxy objectives based on reconstruction, disentanglement, prediction, and other metrics.", "Instead, we develop an unsupervised meta-learning method that explicitly optimizes for the ability to learn a variety of tasks from small amounts of data.", "To do so, we construct tasks from unlabeled data in an automatic way and run meta-learning over the constructed tasks.", "Surprisingly, we find that, when integrated with meta-learning, relatively simple task construction mechanisms, such as clustering embeddings, lead to good performance on a variety of downstream, human-specified tasks.", "Our experiments across four image datasets indicate that our unsupervised meta-learning approach acquires a learning algorithm without any labeled data that is applicable to a wide range of downstream classification tasks, improving upon the embedding learned by four prior unsupervised learning methods." ]
[ 0, 0, 0, 0, 0, 1 ]
[ 0.2666666618666667, 0.1538461488757398, 0.2926829219036289, 0.05405404905770682, 0.0869565169754256, 0.3999999955966942 ]
[ "An unsupervised learning method that uses meta-learning to enable efficient learning of downstream image classification tasks, outperforming state-of-the-art methods." ]
[ "Domain transfer is an exciting and challenging branch of machine learning because models must learn to smoothly transfer between domains, preserving local variations and capturing many aspects of variation without labels. \n", "However, most successful applications to date require the two domains to be closely related (ex. image-to-image, video-to-video), \n", "utilizing similar or shared networks to transform domain specific properties like texture, coloring, and line shapes. \n", "Here, we demonstrate that it is possible to transfer across modalities (ex. image-to-audio) by first abstracting the data with latent generative models and then learning transformations between latent spaces. \n", "We find that a simple variational autoencoder is able to learn a shared latent space to bridge between two generative models in an unsupervised fashion, and even between different types of models (ex. variational autoencoder and a generative adversarial network). \n", "We can further impose desired semantic alignment of attributes with a linear classifier in the shared latent space. \n", "The proposed variational autoencoder enables preserving both locality and semantic alignment through the transfer process, as shown in the qualitative and quantitative evaluations.\n", "Finally, the hierarchical structure decouples the cost of training the base generative models and semantic alignments, enabling computationally efficient and data efficient retraining of personalized mapping functions." ]
[ 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.2307692258357989, 0.051282046443129975, 0.04999999511250048, 0.30769230275887577, 0.2909090860429753, 0.19047618552154208, 0.31111110611358034, 0.2222222172246915 ]
[ "Conditional VAE on top of latent spaces of pre-trained generative models that enables transfer between drastically different domains while preserving locality and semantic alignment." ]
[ "We propose Adversarial Inductive Transfer Learning (AITL), a method for addressing discrepancies in input and output spaces between source and target domains.", "AITL utilizes adversarial domain adaptation and multi-task learning to address these discrepancies.", "Our motivating application is pharmacogenomics where the goal is to predict drug response in patients using their genomic information.", "The challenge is that clinical data (i.e. patients) with drug response outcome is very limited, creating a need for transfer learning to bridge the gap between large pre-clinical pharmacogenomics datasets (e.g. cancer cell lines) and clinical datasets.", "Discrepancies exist between", "1) the genomic data of pre-clinical and clinical datasets (the input space), and", "2) the different measures of the drug response (the output space).", "To the best of our knowledge, AITL is the first adversarial inductive transfer learning method to address both input and output discrepancies.", "Experimental results indicate that AITL outperforms state-of-the-art pharmacogenomics and transfer learning baselines and may guide precision oncology more accurately." ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.24390243402736475, 0.3749999953125, 0.15789473185595584, 0.21428570969387764, 0, 0.24999999531250006, 0.19999999555555567, 0.5853658486615111, 0.21052631080332423 ]
[ "A novel method of inductive transfer learning that employs adversarial learning and multi-task learning to address the discrepancy in input and output space" ]
[ "Named entity recognition (NER) and relation extraction (RE) are two important tasks in information extraction and retrieval (IE & IR).", "Recent work has demonstrated that it is beneficial to learn these tasks jointly, which avoids the propagation of error inherent in pipeline-based systems and improves performance.", "However, state-of-the-art joint models typically rely on external natural language processing (NLP) tools, such as dependency parsers, limiting their usefulness to domains (e.g. news) where those tools perform well.", "The few neural, end-to-end models that have been proposed are trained almost completely from scratch.", "In this paper, we propose a neural, end-to-end model for jointly extracting entities and their relations which does not rely on external NLP tools and which integrates a large, pre-trained language model.", "Because the bulk of our model's parameters are pre-trained and we eschew recurrence for self-attention, our model is fast to train.", "On 5 datasets across 3 domains, our model matches or exceeds state-of-the-art performance, sometimes by a large margin." ]
[ 0, 0, 0, 0, 0, 1, 0 ]
[ 0.2857142807183674, 0.18604650684694443, 0.042553186871888235, 0.12499999501953145, 0.13333332863209893, 0.3243243193571951, 0 ]
[ "A novel, high-performing architecture for end-to-end named entity recognition and relation extraction that is fast to train." ]
[ "In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks.\n", "Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, also reducing the number of parameters by 80\%.\n", "Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs.", "We incorporate local gradient information into the approximate posterior to sharpen it around the current batch statistics.", "We show how this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks.\n", "We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improves our model over a variety of other schemes for training them.", "We also introduce a new benchmark for studying uncertainty for language models so future methods can be easily compared." ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.5454545411157026, 0, 0, 0, 0, 0.04444444181728411, 0.07999999596800021 ]
[ " Variational Bayes scheme for Recurrent Neural Networks" ]
[ "Over time, Unmanned Autonomous Vehicles (UAVs), especially\n", "autonomous flying drones, have grabbed a lot of attention in Artificial Intelligence.\n", "Since electronic technology is getting smaller, cheaper and more efficient, huge\n", "advancement in the study of UAVs has been observed recently.", "From monitoring\n", "floods, discerning the spread of algae in water bodies to detecting forest trails, their\n", "application is far and wide.", "Our work is mainly focused on autonomous flying\n", "drones where we establish a case study towards efficiency, robustness and accuracy\n", "of UAVs where we showed our results well supported through experiments.\n", "We provide details of the software and hardware architecture used in the study.", "We\n", "further discuss our implementation algorithms and present experiments that\n", "provide a comparison between three different state-of-the-art algorithms namely\n", "TrailNet, InceptionResnet and MobileNet in terms of accuracy, robustness, power\n", "consumption and inference time.", "In our study, we have shown that MobileNet has\n", "produced better results with much lower computational requirements and power consumption.\n", "We have also reported the challenges we have faced during our work\n", "as well as a brief discussion on our future work to improve safety features and\n", "performance." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0, 0.21052631080332423, 0, 0, 0.11764705384083066, 0.19047618557823143, 0.09523809034013632, 0.09523809034013632, 0, 0, 0, 0, 0, 0, 0, 0.0869565169754256 ]
[ "case study on optimal deep learning model for UAVs" ]
[ "Music relies heavily on repetition to build structure and meaning. ", "Self-reference occurs on multiple timescales, from motifs to phrases to reusing of entire sections of music, such as in pieces with ABA structure. ", "The Transformer (Vaswani et al., 2017), a sequence model based on self-attention, has achieved compelling results in many generation tasks that require maintaining long-range coherence.", "This suggests that self-attention might also be well-suited to modeling music.", "In musical composition and performance, however, relative timing is critically important. ", "Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance (Shaw et al., 2018). ", "This is impractical for long sequences such as musical compositions since their memory complexity is quadratic in the sequence length. ", "We propose an algorithm that reduces the intermediate memory requirements to linear in the sequence length.", "This enables us to demonstrate that a Transformer with our modified relative attention mechanism can generate minute-long (thousands of steps) compositions with compelling structure, generate continuations that coherently elaborate on a given motif, and in a seq2seq setup generate accompaniments conditioned on melodies. ", "We evaluate the Transformer with our relative attention mechanism on two datasets, JSB Chorales and Piano-e-competition, and obtain state-of-the-art results on the latter." ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.14814814331961607, 0.21052631091412755, 0.1428571381405897, 0.14814814331961607, 0.0714285665306126, 0.21052631091412755, 0.16666666172839517, 0.2580645111342353, 0.18867924106799583, 0.16666666172839517 ]
[ "We show the first successful use of Transformer in generating music that exhibits long-term structure. " ]
[ "Sequential decision problems for real-world applications often need to be solved in real-time, requiring algorithms to perform well with a restricted computational budget.", "Width-based lookaheads have shown state-of-the-art performance in classical planning problems as well as over the Atari games with tight budgets.", "In this work we investigate width-based lookaheads over Stochastic Shortest Path (SSP) problems.", "We analyse why width-based algorithms perform poorly over SSP problems, and overcome these pitfalls by proposing a method to estimate costs-to-go.", "We formalize width-based lookaheads as an instance of the rollout algorithm, give a definition of width for SSP problems and explain its sample complexity.", "Our experimental results over a variety of SSP benchmarks show the algorithm to outperform other state-of-the-art rollout algorithms such as UCT and RTDP." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.09756097063652613, 0, 0.06451612428720119, 0.20512820013149255, 0.19047618552154208, 0.19047618552154208 ]
[ "We propose a new Monte Carlo Tree Search / rollout algorithm that relies on width-based search to construct a lookahead." ]
[ "Deep Neural Networks (DNNs) are known for excellent performance in supervised tasks such as classification.", "Convolutional Neural Networks (CNNs), in particular, can learn effective features and build high-level representations that can be used for\n", "classification, but also for querying and nearest neighbor search.", "However, CNNs have also been shown to suffer from a performance drop when the distribution of the data changes from training to test data.", "In this paper we analyze the internal\n", "representations of CNNs and observe that the representations of unseen data in each class, spread more (with higher variance) in the embedding space of the CNN compared to representations of the training data.", "More importantly, this difference is more extreme if the unseen data comes from a shifted distribution.", "Based on this observation, we objectively evaluate the degree of representation’s variance in each class by applying eigenvalue decomposition on the within-class covariance of the internal representations of CNNs and observe the same behaviour.", "This can be problematic as larger variances might lead to mis-classification if the sample crosses the decision boundary of its class.", "We apply nearest neighbor classification on the representations and empirically show that the embeddings with the high variance actually have significantly worse KNN classification performances, although this could not be foreseen from their end-to-end classification results.", "To tackle this problem, we propose Deep Within-Class Covariance Analysis (DWCCA), a deep neural network layer that significantly reduces the within-class covariance of a DNN’s representation, improving performance on unseen test data from a shifted distribution.", "We empirically evaluate DWCCA on two datasets for Acoustic Scene Classification (DCASE2016 and DCASE2017).", "We demonstrate that not only does DWCCA significantly improve the network’s internal representation, it\n", "also increases the end-to-end classification accuracy, especially when the test set exhibits a slight distribution shift.", "By adding DWCCA to a VGG neural network, we achieve around 6 percentage points improvement in the case of a distribution\nmismatch." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.09999999531250023, 0.18604650676041115, 0.0588235255190314, 0.1333333283950619, 0.12499999658203134, 0.20833332834201398, 0.09756097085068434, 0.2641509384122464, 0.08888888395061757, 0.2105263108648816, 0.4406779612180408, 0.1025640979618674, 0.25641025180802113, 0.09999999531250023, 0.22222221728395072 ]
[ "We propose a novel deep neural network layer for normalising within-class covariance of an internal representation in a neural network that results in significantly improving the generalisation of the learned representations." ]
[ "Generative models have proven to be an outstanding tool for representing high-dimensional probability distributions and generating realistic-looking images.", "A fundamental characteristic of generative models is their ability to produce multi-modal outputs.", "However, while training, they are often susceptible to mode collapse, which means that the model is limited in mapping the input noise to only a few modes of the true data distribution.", "In this paper, we draw inspiration from Determinantal Point Processes (DPPs) to devise a generative model that alleviates mode collapse while producing higher quality samples.", "DPP is an elegant probabilistic measure used to model negative correlations within a subset and hence quantify its diversity.", "We use the DPP kernel to model the diversity in real data as well as in synthetic data.", "Then, we devise a generation penalty term that encourages the generator to synthesize data with a similar diversity to real data.", "In contrast to previous state-of-the-art generative models that tend to use additional trainable parameters or complex training paradigms, our method does not change the original training scheme.", "Embedded in adversarial training and a variational autoencoder, our Generative DPP approach shows a consistent resistance to mode collapse on a wide variety of synthetic data and natural image datasets including MNIST, CIFAR10, and CelebA, while outperforming state-of-the-art methods for data-efficiency, convergence-time, and generation quality.", "Our code will be made publicly available." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.09756097063652613, 0.11428570961632674, 0.23529411274125347, 0.21276595246717986, 0.24390243405116013, 0.2777777730246914, 0.1999999950500001, 0.08510637799909491, 0.19672130686374642, 0 ]
[ "The addition of a diversity criterion inspired from DPP in the GAN objective avoids mode collapse and leads to better generations. " ]
[ "Despite existing work on ensuring generalization of neural networks in terms of scale sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization.", "In this work we suggest a novel complexity measure based on unit-wise capacities resulting in a tighter generalization bound for two layer ReLU networks.", "Our capacity bound correlates with the behavior of test error with increasing network sizes (within the range reported in the experiments), and could partly explain the improvement in generalization with over-parametrization.", "We further present a matching lower bound for the Rademacher complexity that improves over previous capacity lower bounds for neural networks." ]
[ 0, 0, 1, 0 ]
[ 0.17021276177455874, 0.27027026556610667, 0.5128205082182776, 0.30303029814508725 ]
[ "We suggest a generalization bound that could partly explain the improvement in generalization with over-parametrization." ]
[ "We introduce three generic point cloud processing blocks that improve both accuracy and memory consumption of multiple state-of-the-art networks, thus allowing to design deeper and more accurate networks.\n\n", "The novel processing blocks that facilitate efficient information flow are a convolution-type operation block for point sets that blends neighborhood information in a memory-efficient manner; a multi-resolution point cloud processing block; and a crosslink block that efficiently shares information across low- and high-resolution processing branches.", "Combining these blocks, we design significantly wider and deeper architectures.\n\n", "We extensively evaluate the proposed architectures on multiple point segmentation benchmarks (ShapeNetPart, ScanNet, PartNet) and report systematic improvements in terms of both accuracy and memory consumption by using our generic modules in conjunction with multiple recent architectures (PointNet++, DGCNN, SpiderCNN, PointCNN).", "We report a 9.7% increase in IoU on the PartNet dataset, which is the most complex, while decreasing memory footprint by 57%." ]
[ 1, 0, 0, 0, 0 ]
[ 0.9818181768198347, 0.1999999950500001, 0.15789473272853197, 0.3124999951220704, 0.08163264811328642 ]
[ "We introduce three generic point cloud processing blocks that improve both accuracy and memory consumption of multiple state-of-the-art networks, thus allowing to design deeper and more accurate networks." ]
[ "End-to-end acoustic-to-word speech recognition models have recently gained popularity because they are easy to train, scale well to large amounts of training data, and do not require a lexicon.", "In addition, word models may also be easier to integrate with downstream tasks such as spoken language understanding, because inference (search) is much simplified compared to phoneme, character or any other sort of sub-word units.", "In this paper, we describe methods to construct contextual acoustic word embeddings directly from a supervised sequence-to-sequence acoustic-to-word speech recognition model using the learned attention distribution.", "On a suite of 16 standard sentence evaluation tasks, our embeddings show competitive performance against a word2vec model trained on the speech transcriptions.", "In addition, we evaluate these embeddings on a spoken language understanding task and observe that our embeddings match the performance of text-based embeddings in a pipeline of first performing speech recognition and then constructing word embeddings from transcriptions." ]
[ 0, 0, 1, 0, 0 ]
[ 0.13043477784499072, 0.11538461085798835, 0.4090909042561984, 0.14999999505000017, 0.2799999953920001 ]
[ "Methods to learn contextual acoustic word embeddings from an end-to-end speech recognition model that perform competitively with text-based word embeddings." ]
[ "Unsupervised monocular depth estimation has made great progress after deep\n", "learning is involved.", "Training with binocular stereo images is considered as a\n", "good option as the data can be easily obtained.", "However, the depth or disparity\n", "prediction results show poor performance for the object boundaries.", "The main\n", "reason is related to the handling of occlusion areas during the training.", "In this paper,\n", "we propose a novel method to overcome this issue.", "Exploiting disparity maps\n", "property, we generate an occlusion mask to block the back-propagation of the occlusion\n", "areas during image warping.", "We also design new networks with flipped\n", "stereo images to induce the networks to learn occluded boundaries.", "It shows that\n", "our method achieves clearer boundaries and better evaluation results on KITTI\n", "driving dataset and Virtual KITTI dataset." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19999999555555567, 0, 0.06896551296076128, 0.06896551296076128, 0.15999999680000007, 0.13793103020214045, 0.19354838251821027, 0, 0.2068965474435197, 0, 0.25806451155046833, 0, 0, 0.06896551296076128, 0, 0.1290322534859523, 0 ]
[ "This paper propose a mask method which solves the previous blurred results of unsupervised monocular depth estimation caused by occlusion" ]
[ "Graph classification is currently dominated by graph kernels, which, while powerful, suffer some significant limitations.", "Convolutional Neural Networks (CNNs) offer a very appealing alternative.", "However, processing graphs with CNNs is not trivial.", "To address this challenge, many sophisticated extensions of CNNs have recently been proposed.", "In this paper, we reverse the problem: rather than proposing yet another graph CNN model, we introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs.", "Despite its simplicity, our method proves very competitive to state-of-the-art graph kernels and graph CNNs, and outperforms them by a wide margin on some datasets.", "It is also preferable to graph kernels in terms of time complexity.", "Code and data are publicly available." ]
[ 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.055555550694444865, 0.06666666246666693, 0.13793103048751498, 0.058823524688581694, 0.7142857095982144, 0.1818181768285125, 0.060606055977961794, 0 ]
[ "We introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs." ]
[ "The key attribute that drives the unprecedented success of modern Recurrent Neural Networks (RNNs) on learning tasks which involve sequential data, is their ever-improving ability to model intricate long-term temporal dependencies.", "However, a well established measure of RNNs' long-term memory capacity is lacking, and thus formal understanding of their ability to correlate data throughout time is limited.", "Though depth efficiency in convolutional networks is well established by now, it does not suffice in order to account for the success of deep RNNs on inputs of varying lengths, and the need to address their 'time-series expressive power' arises.", "In this paper, we analyze the effect of depth on the ability of recurrent networks to express correlations ranging over long time-scales.", "To meet the above need, we introduce a measure of the information flow across time that can be supported by the network, referred to as the Start-End separation rank.", "Essentially, this measure reflects the distance of the function realized by the recurrent network from a function that models no interaction whatsoever between the beginning and end of the input sequence.", "We prove that deep recurrent networks support Start-End separation ranks which are exponentially higher than those supported by their shallow counterparts.", "Moreover, we show that the ability of deep recurrent networks to correlate different parts of the input sequence increases exponentially as the input sequence extends, while that of vanilla shallow recurrent networks does not adapt to the sequence length at all.", "Thus, we establish that depth brings forth an overwhelming advantage in the ability of recurrent networks to model long-term dependencies, and provide an exemplar of quantifying this key attribute which may be readily extended to other RNN architectures of interest, e.g. variants of LSTM networks.", "We obtain our results by considering a class of recurrent networks referred to as Recurrent Arithmetic Circuits (RACs), which merge the hidden state with the input via the Multiplicative Integration operation." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.2545454496264464, 0.29166666166666677, 0.16666666186666684, 0.18181817685950424, 0.19999999500800014, 0.24489795418575602, 0.3999999950222222, 0.26415093844072635, 0.24999999531250006, 0.22641508938412258 ]
[ "We propose a measure of long-term memory and prove that deep recurrent networks are much better fit to model long-term temporal dependencies than shallow ones." ]
[ "Holistically exploring the perceptual and neural representations underlying animal communication has traditionally been very difficult because of the complexity of the underlying signal.", "We present here a novel set of techniques to project entire communicative repertoires into low dimensional spaces that can be systematically sampled from, exploring the relationship between perceptual representations, neural representations, and the latent representational spaces learned by machine learning algorithms.", "We showcase this method in one ongoing experiment studying sequential and temporal maintenance of context in songbird neural and perceptual representations of syllables.", "We further discuss how studying the neural mechanisms underlying the maintenance of the long-range information content present in birdsong can inform and be informed by machine sequence modeling." ]
[ 1, 0, 0, 0 ]
[ 0.28571428075102046, 0.14814814397805223, 0.22222221728395072, 0.19047618575963732 ]
[ "We compare perceptual, neural, and modeled representations of animal communication using machine learning, behavior, and physiology. " ]
[ "The information bottleneck principle (Shwartz-Ziv & Tishby, 2017) suggests that SGD-based training of deep neural networks results in optimally compressed hidden layers, from an information theoretic perspective.", "However, this claim was established on toy data.", "The goal of the work we present here is to test these claims in a realistic setting using a larger and deeper convolutional architecture, a ResNet model.", "We trained PixelCNN++ models as inverse representation decoders to measure the mutual information between hidden layers of a ResNet and input image data, when trained for (1) classification and (2) autoencoding.", "We find that two stages of learning happen for both training regimes, and that compression does occur, even for an autoencoder.", "Sampling images by conditioning on hidden layers’ activations offers an intuitive visualisation to understand what a ResNets learns to forget." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.08888888400987681, 0, 0.18181817691115715, 0.2916666618836806, 0.10526315289473707, 0.10526315289473707 ]
[ "The Information Bottleneck Principle applied to ResNets, using PixelCNN++ models to decode mutual information and conditionally generate images for information illustration" ]
[ "We study the problem of safe adaptation: given a model trained on a variety of past experiences for some task, can this model learn to perform that task in a new situation while avoiding catastrophic failure?", "This problem setting occurs frequently in real-world reinforcement learning scenarios such as a vehicle adapting to drive in a new city, or a robotic drone adapting a policy trained only in simulation.", "While learning without catastrophic failures is exceptionally difficult, prior experience can allow us to learn models that make this much easier.", "These models might not directly transfer to new settings, but can enable cautious adaptation that is substantially safer than na\\\"{i}ve adaptation as well as learning from scratch.", "Building on this intuition, we propose risk-averse domain adaptation (RADA).", "RADA works in two steps: it first trains probabilistic model-based RL agents in a population of source domains to gain experience and capture epistemic uncertainty about the environment dynamics.", "Then, when dropped into a new environment, it employs a pessimistic exploration policy, selecting actions that have the best worst-case performance as forecasted by the probabilistic model.", "We show that this simple maximin policy accelerates domain adaptation in a safety-critical driving environment with varying vehicle sizes.", "We compare our approach against other approaches for adapting to new environments, including meta-reinforcement learning." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.16393442124160187, 0.0727272677421491, 0.1199999951280002, 0.11111110613854619, 0.15384615003287322, 0.2807017493875039, 0.11111110613854619, 0.20833332855034734, 0 ]
[ "Adaptation of an RL agent in a target environment with unknown dynamics is fast and safe when we transfer prior experience in a variety of environments and then select risk-averse actions during adaptation." ]
[ "We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns by simply looking at images and reading paired questions and answers.", "Our model builds an object-based scene representation and translates sentences into executable, symbolic programs.", "To bridge the learning of two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation.", "Analogical to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to.", "Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences.", "We use curriculum learning to guide the searching over the large compositional space of images and language.", "Extensive experiments demonstrate the accuracy and efficiency of our model on learning visual concepts, word representations, and semantic parsing of sentences.", "Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even new program domains.", "It also empowers applications including visual question answering and bidirectional image-text retrieval." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.7868852410642301, 0.1538461492439186, 0.21739129938563337, 0.2272727223657026, 0.2702702658875092, 0.19512194646044032, 0.45454544963842974, 0.09090908600206637, 0.10810810372534715 ]
[ "We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them." ]
[ "Bayesian inference offers a theoretically grounded and general way to train neural networks and can potentially give calibrated uncertainty.", "However, it is challenging to specify a meaningful and tractable prior over the network parameters, and deal with the weight correlations in the posterior.", "To this end, this paper introduces two innovations:", "(i) a Gaussian process-based hierarchical model for the network parameters based on recently introduced unit embeddings that can flexibly encode weight structures, and", "(ii) input-dependent contextual variables for the weight prior that can provide convenient ways to regularize the function space being modeled by the network through the use of kernels. \n", "We show these models provide desirable test-time uncertainty estimates, demonstrate cases of modeling inductive biases for neural networks with kernels and demonstrate competitive predictive performance on an active learning benchmark." ]
[ 0, 1, 0, 0, 0, 0 ]
[ 0.21276595272068824, 0.31999999512800004, 0, 0.2307692258357989, 0.14545454046942166, 0.17241378810344843 ]
[ "We introduce a Gaussian Process Prior over weights in a neural network and explore its ability to model input-dependent weights with benefits to various tasks, including uncertainty estimation and generalization in the low-sample setting." ]
[ "We perform an in-depth investigation of the suitability of self-attention models for character-level neural machine translation.", "We test the standard transformer model, as well as a novel variant in which the encoder block combines information from nearby characters using convolution.", "We perform extensive experiments on WMT and UN datasets, testing both bilingual and multilingual translation to English using up to three input languages (French, Spanish, and Chinese).", "Our transformer variant consistently outperforms the standard transformer at the character-level and converges faster while learning more robust character-level alignments." ]
[ 1, 0, 0, 0 ]
[ 0.999999995, 0.10810810328707107, 0.1538461491124262, 0.12499999501953145 ]
[ "We perform an in-depth investigation of the suitability of self-attention models for character-level neural machine translation." ]
[ "The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures.", "Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies.", "Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset.", "This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples -- ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks.", "We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training.", "Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice." ]
[ 0, 0, 0, 0, 1, 0 ]
[ 0.0999999956125002, 0.16666666205246927, 0.10810810355003672, 0.11764705502499051, 0.30188678875044506, 0.07692307192307725 ]
[ "we present the state-of-the-art results of using neural networks to diagnose chest x-rays" ]
[ "Semmelhack et al. (2014) have achieved high classification accuracy in distinguishing swim bouts of zebrafish using a Support Vector Machine (SVM).", "Convolutional Neural Networks (CNNs) have reached superior performance in various image recognition tasks over SVMs, but these powerful networks remain a black box.", "Reaching better transparency helps to build trust in their classifications and makes learned features interpretable to experts.", "Using a recently developed technique called Deep Taylor Decomposition, we generated heatmaps to highlight input regions of high relevance for predictions.", "We find that our CNN makes predictions by analyzing the steadiness of the tail's trunk, which markedly differs from the manually extracted features used by Semmelhack et al. (2014).", "We further uncovered that the network paid attention to experimental artifacts.", "Removing these artifacts ensured the validity of predictions.", "After correction, our best CNN beats the SVM by 6.12%, achieving a classification accuracy of 96.32%.", "Our work thus demonstrates the utility of AI explainability for CNNs." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ 0.19047618547619058, 0.045454540464876576, 0.10810810319941586, 0.14285713785714302, 0.255319143992757, 0.12499999548828142, 0.13793103048751498, 0.3076923027218935, 0.3124999954882813 ]
[ "We demonstrate the utility of a recent AI explainability technique by visualizing the learned features of a CNN trained on binary classification of zebrafish movements." ]
[ "When communicating, humans rely on internally-consistent language representations.", "That is, as speakers, we expect listeners to behave the same way we do when we listen.", "This work proposes several methods for encouraging such internal consistency in dialog agents in an emergent communication setting.", "We consider two hypotheses about the effect of internal-consistency constraints:", "1) that they improve agents’ ability to refer to unseen referents, and", "2) that they improve agents’ ability to generalize across communicative roles (e.g. performing as a speaker de- spite only being trained as a listener).", "While we do not find evidence in favor of the former, our results show significant support for the latter." ]
[ 0, 0, 0, 0, 0, 1, 0 ]
[ 0, 0.06896551224732497, 0.12903225311134256, 0, 0.31999999507200005, 0.43243242772826884, 0 ]
[ "Internal-consistency constraints improve agents ability to develop emergent protocols that generalize across communicative roles." ]
[ "Neural networks (NNs) are able to perform tasks that rely on compositional structure even though they lack obvious mechanisms for representing this structure.", "To analyze the internal representations that enable such success, we propose ROLE, a technique that detects whether these representations implicitly encode symbolic structure.", "ROLE learns to approximate the representations of a target encoder E by learning a symbolic constituent structure and an embedding of that structure into E’s representational vector space.", "The constituents of the approximating symbol structure are defined by structural positions — roles — that can be filled by symbols.", "We show that when E is constructed to explicitly embed a particular type of structure (e.g., string or tree), ROLE successfully extracts the ground-truth roles defining that structure.", "We then analyze a seq2seq network trained to perform a more complex compositional task (SCAN), where there is no ground truth role scheme available.", "For this model, ROLE successfully discovers an interpretable symbolic structure that the model implicitly uses to perform the SCAN task, providing a comprehensive account of the link between the representations and the behavior of a notoriously hard-to-interpret type of model.", "We verify the causal importance of the discovered symbolic structure by showing that, when we systematically manipulate hidden embeddings based on this symbolic structure, the model’s output is also changed in the way predicted by our analysis.", "Finally, we use ROLE to explore whether popular sentence embedding models are capturing compositional structure and find evidence that they are not; we conclude by discussing how insights from ROLE can be used to impart new inductive biases that will improve the compositional abilities of such models." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.20512820021038802, 0.21052631084487544, 0.14285713803854888, 0.11111110612654343, 0.17777777307654333, 0.14999999511250017, 0.28571428118284053, 0.1632653015910039, 0.1403508730070792 ]
[ "We introduce a new analysis technique that discovers interpretable compositional structure in notoriously hard-to-interpret recurrent neural networks." ]
[ "The vertebrate visual system is hierarchically organized to process visual information in successive stages.", "Neural representations vary drastically across the first stages of visual processing: at the output of the retina, ganglion cell receptive fields (RFs) exhibit a clear antagonistic center-surround structure, whereas in the primary visual cortex (V1), typical RFs are sharply tuned to a precise orientation.", "There is currently no unified theory explaining these differences in representations across layers.", "Here, using a deep convolutional neural network trained on image recognition as a model of the visual system, we show that such differences in representation can emerge as a direct consequence of different neural resource constraints on the retinal and cortical networks, and for the first time we find a single model from which both geometries spontaneously emerge at the appropriate stages of visual processing.", "The key constraint is a reduced number of neurons at the retinal output, consistent with the anatomy of the optic nerve as a stringent bottleneck.", "Second, we find that, for simple downstream cortical networks, visual representations at the retinal output emerge as nonlinear and lossy feature detectors, whereas they emerge as linear and faithful encoders of the visual scene for more complex cortical networks.", "This result predicts that the retinas of small vertebrates (e.g. salamander, frog) should perform sophisticated nonlinear computations, extracting features directly relevant to behavior, whereas retinas of large animals such as primates should mostly encode the visual scene linearly and respond to a much broader range of stimuli.", "These predictions could reconcile the two seemingly incompatible views of the retina as either performing feature extraction or efficient coding of natural scenes, by suggesting that all vertebrates lie on a spectrum between these two objectives, depending on the degree of neural resources allocated to their visual system." ]
[ 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.12903225319458916, 0.1428571384948981, 0.12903225319458916, 0.2686567124883048, 0.05128204631163757, 0.07999999539200027, 0.06666666246666693, 0.16666666246666675 ]
[ "We reproduced neural representations found in biological visual systems by simulating their neural resource constraints in a deep convolutional model." ]
[ "While it has not yet been proven, empirical evidence suggests that model generalization is related to local properties of the optima which can be described via the Hessian.", "We connect model generalization with the local property of a solution under the PAC-Bayes paradigm.", "In particular, we prove that model generalization ability is related to the Hessian, the higher-order \"smoothness\" terms characterized by the Lipschitz constant of the Hessian, and the scales of the parameters.", "Guided by the proof, we propose a metric to score the generalization capability of the model, as well as an algorithm that optimizes the perturbed model accordingly." ]
[ 0, 1, 0, 0 ]
[ 0.2631578906232687, 0.479999995072, 0.28571428140408167, 0.29411764268166096 ]
[ "a theory connecting Hessian of the solution and the generalization power of the model" ]
[ "Unsupervised learning is about capturing dependencies between variables and is driven by the contrast between the probable vs improbable configurations of these variables, often either via a generative model which only samples probable ones or with an energy function (unnormalized log-density) which is low for probable ones and high for improbable ones.", "Here we consider learning both an energy function and an efficient approximate sampling mechanism for the corresponding distribution.", "Whereas the critic (or discriminator) in generative adversarial networks (GANs) learns to separate data and generator samples, introducing an entropy maximization regularizer on the generator can turn the interpretation of the critic into an energy function, which separates the training distribution from everything else, and thus can be used for tasks like anomaly or novelty detection. \n\n", "This paper is motivated by the older idea of sampling in latent space rather than data space because running a Monte-Carlo Markov Chain (MCMC) in latent space has been found to be easier and more efficient, and because a GAN-like generator can convert latent space samples to data space samples.", "For this purpose, we show how a Markov chain can be run in latent space whose samples can be mapped to data space, producing better samples.", "These samples are also used for the negative phase gradient required to estimate the log-likelihood gradient of the data space energy function.", "To maximize entropy at the output of the generator, we take advantage of recently introduced neural estimators of mutual information.", "We find that in addition to producing a useful scoring function for anomaly detection, the resulting approach produces sharp samples (like GANs) while covering the modes well, leading to high Inception and Fréchet scores.\n" ]
[ 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.21428571020408166, 0.2424242374288339, 0.24999999625000005, 0.1509433920113921, 0.10256409772518103, 0.28571428075102046, 0.2424242374288339, 0.24489795478550605 ]
[ "We introduced entropy maximization to GANs, leading to a reinterpretation of the critic as an energy function." ]
[ "Neural Style Transfer has become a popular technique for\n", "generating images of distinct artistic styles using convolutional neural networks.", "This\n", "recent success in image style transfer has raised the question of\n", "whether similar methods can be leveraged to alter the “style” of musical\n", "audio.", "In this work, we attempt long time-scale high-quality audio transfer\n", "and texture synthesis in the time-domain that captures harmonic,\n", "rhythmic, and timbral elements related to musical style, using examples that\n", "may have different lengths and musical keys.", "We demonstrate the ability\n", "to use randomly initialized convolutional neural networks to transfer\n", "these aspects of musical style from one piece onto another using 3\n", "different representations of audio: the log-magnitude of the Short Time\n", "Fourier Transform (STFT), the Mel spectrogram, and the Constant-Q Transform\n", "spectrogram.", "We propose using these representations as a way of\n", "generating and modifying perceptually significant characteristics of\n", "musical audio content.", "We demonstrate each representation's\n", "shortcomings and advantages over others by carefully designing\n", "neural network structures that complement the nature of musical audio.", "Finally, we show that the most\n", "compelling “style” transfer examples make use of an ensemble of these\n", "representations to help capture the varying desired characteristics of\n", "audio signals." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]

Dataset Card for SciTLDR

Dataset Summary

SciTLDR: Extreme Summarization of Scientific Documents

SciTLDR is a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden.

Supported Tasks and Leaderboards




Dataset Structure

SciTLDR is split into a 60/20/20 train/dev/test split. For each file, each line is a JSON object, formatted as follows:

   {
      "source": [list of source sentences, one sentence per entry],
      "source_labels": [binary list in which 1 marks the oracle sentence],
      "rouge_scores": [precomputed ROUGE-1 scores, one per source sentence],
      "paper_id": "arXiv paper ID",
      "target": [list of gold TLDR summaries],
      "title": "title of the paper"
   }

The keys rouge_scores and source_labels are not required for any code to run; the precomputed ROUGE-1 scores are provided for future research.
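As a minimal sketch, reading one line of a split file and recovering the oracle sentence looks like the following (the record below is an abbreviated, hypothetical example in the format described above, not an actual dataset entry):

```python
import json

# Abbreviated example record in the SciTLDR JSONL format.
line = json.dumps({
    "source": [
        "First background sentence.",
        "We introduce adaptive loss scaling.",
        "Experimental results follow.",
    ],
    "source_labels": [0, 1, 0],
    "rouge_scores": [0.12, 0.38, 0.20],
    "paper_id": "rJlnfaNYvB",
    "target": ["We devise adaptive loss scaling."],
    "title": "Adaptive Loss Scaling for Mixed Precision Training",
})

record = json.loads(line)

# The oracle sentence is the source sentence whose source_labels entry is 1.
oracle_idx = record["source_labels"].index(1)
oracle_sentence = record["source"][oracle_idx]
print(oracle_sentence)  # → "We introduce adaptive loss scaling."
```

In the real files, each line of train/dev/test is one such record, so iterating over a file with `json.loads` per line yields the full split.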

Data Instances

{ "source": [ "Mixed precision training (MPT) is becoming a practical technique to improve the speed and energy efficiency of training deep neural networks by leveraging the fast hardware support for IEEE half-precision floating point that is available in existing GPUs.", "MPT is typically used in combination with a technique called loss scaling, that works by scaling up the loss value up before the start of backpropagation in order to minimize the impact of numerical underflow on training.", "Unfortunately, existing methods make this loss scale value a hyperparameter that needs to be tuned per-model, and a single scale cannot be adapted to different layers at different training stages.", "We introduce a loss scaling-based training method called adaptive loss scaling that makes MPT easier and more practical to use, by removing the need to tune a model-specific loss scale hyperparameter.", "We achieve this by introducing layer-wise loss scale values which are automatically computed during training to deal with underflow more effectively than existing methods.", "We present experimental results on a variety of networks and tasks that show our approach can shorten the time to convergence and improve accuracy, compared with using the existing state-of-the-art MPT and single-precision floating point." ], "source_labels": [ 0, 0, 0, 1, 0, 0 ], "rouge_scores": [ 0.2399999958000001, 0.26086956082230633, 0.19999999531250012, 0.38095237636054424, 0.2051282003944774, 0.2978723360796741 ], "paper_id": "rJlnfaNYvB", "target": [ "We devise adaptive loss scaling to improve mixed precision training that surpass the state-of-the-art results.", "Proposal for an adaptive loss scaling method during backpropagation for mix precision training where scale rate is decided automatically to reduce the underflow.", "The authors propose a method to train models in FP16 precision that adopts a more elaborate way to minimize underflow in every layer simultaneously and automatically." 
], "title": "Adaptive Loss Scaling for Mixed Precision Training" }

Data Fields

  • source: The Abstract, Introduction and Conclusion (AIC) or Full text of the paper, with one sentence per line.
  • source_labels: Binary 0 or 1, 1 denotes the oracle sentence.
  • rouge_scores: Precomputed ROUGE baseline scores for each sentence.
  • paper_id: Arxiv Paper ID.
  • target: Multiple gold TLDR summaries for each paper, one summary per line.
  • title: Title of the paper.

Data Splits

                   train  valid  test
SciTLDR-A           1992    618   619
SciTLDR-AIC         1992    618   619
SciTLDR-FullText    1992    618   619

Dataset Creation

[More Information Needed]

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotation process

Given the title and the first 128 words of a reviewer comment about a paper, annotators rewrite the summary (if one exists) into a single sentence or an incomplete phrase. Summaries must be no longer than one sentence. Most summaries are between 15 and 25 words; the average rewritten summary is 20 words long.

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

To encourage further research in the area of extreme summarization of scientific documents.

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

Apache License 2.0

Citation Information

@article{cachola2020tldr,
  title={{TLDR}: Extreme Summarization of Scientific Documents},
  author={Isabel Cachola and Kyle Lo and Arman Cohan and Daniel S. Weld},
  journal={arXiv:2004.15011},
  year={2020},
}


Thanks to @Bharat123rox for adding this dataset.
