source: sequence
source_labels: sequence
rouge_scores: sequence
paper_id: string (lengths 9 to 11)
target: sequence
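This column layout matches the SciTLDR extreme-summarization schema: source holds an abstract split into sentences, source_labels flags the sentence closest to the target TLDR, rouge_scores gives each sentence's overlap with the target, paper_id is an OpenReview-style submission id, and target holds the TLDR itself. A minimal inspection sketch follows; the dataset identifier "allenai/scitldr" and the "Abstract" configuration are assumptions inferred from the columns, since the dump does not name its origin.

```python
# Minimal sketch for loading and inspecting records with this schema.
# The identifier "allenai/scitldr" and the "Abstract" config are assumptions
# inferred from the column layout; the dump itself does not name its origin.
from datasets import load_dataset

ds = load_dataset("allenai/scitldr", "Abstract", split="train")
record = ds[0]
print(record["paper_id"])  # OpenReview-style id, 9 to 11 characters
print(record["target"])    # list holding the TLDR summary
for sentence, label, score in zip(record["source"],
                                  record["source_labels"],
                                  record["rouge_scores"]):
    print(label, round(score, 3), sentence[:60])
```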
[ "The vulnerabilities of deep neural networks against adversarial examples have become a significant concern for deploying these models in sensitive domains.", "Devising a definitive defense against such attacks is proven to be challenging, and the methods relying on detecting adversarial samples are only valid when the attacker is oblivious to the detection mechanism.", "In this paper, we consider the adversarial detection problem under the robust optimization framework.", "We partition the input space into subspaces and train adversarial robust subspace detectors using asymmetrical adversarial training (AAT).", "The integration of the classifier and detectors presents a detection mechanism that provides a performance guarantee to the adversary it considered.", "We demonstrate that AAT promotes the learning of class-conditional distributions, which further gives rise to generative detection/classification approaches that are both robust and more interpretable.", "We provide comprehensive evaluations of the above methods, and demonstrate their competitive performances and compelling properties on adversarial detection and robust classification problems." ]
[ 0, 0, 0, 0, 0, 0, 1 ]
[ 0.051282044500112534, 0.21739129722118378, 0.19354838132858276, 0.22857142984867096, 0.1621621549129486, 0.1904761791229248, 0.307692289352417 ]
SJeQEp4YDH
[ "A new generative modeling technique based on asymmetrical adversarial training, and its applications to adversarial example detection and robust classification" ]
[ "Exploration is a key component of successful reinforcement learning, but optimal approaches are computationally intractable, so researchers have focused on hand-designing mechanisms based on exploration bonuses and intrinsic reward, some inspired by curious behavior in natural systems. ", "In this work, we propose a strategy for encoding curiosity algorithms as programs in a domain-specific language and searching, during a meta-learning phase, for algorithms that enable RL agents to perform well in new domains. ", "Our rich language of programs, which can combine neural networks with other building blocks including nearest-neighbor modules and can choose its own loss functions, enables the expression of highly generalizable programs that perform well in domains as disparate as grid navigation with image input, acrobot, lunar lander, ant and hopper. ", "To make this approach feasible, we develop several pruning techniques, including learning to predict a program's success based on its syntactic properties. ", "We demonstrate the effectiveness of the approach empirically, finding curiosity strategies that are similar to those in published literature, as well as novel strategies that are competitive with them and generalize well." ]
[ 0, 1, 0, 0, 0 ]
[ 0.13793103396892548, 0.23076923191547394, 0.1492537260055542, 0.045454539358615875, 0.21276594698429108 ]
BygdyxHFDS
[ "Meta-learning curiosity algorithms by searching through a rich space of programs yields novel mechanisms that generalize across very different reinforcement-learning domains." ]
[ "Many machine learning algorithms represent input data with vector embeddings or discrete codes.", "When inputs exhibit compositional structure (e.g. objects built from parts or procedures from subroutines), it is natural to ask whether this compositional structure is reflected in the the inputs’ learned representations.", "While the assessment of compositionality in languages has received significant attention in linguistics and adjacent fields, the machine learning literature lacks general-purpose tools for producing graded measurements of compositional structure in more general (e.g. vector-valued) representation spaces.", "We describe a procedure for evaluating compositionality by measuring how well the true representation-producing model can be approximated by a model that explicitly composes a collection of inferred representational primitives.", "We use the procedure to provide formal and empirical characterizations of compositional structure in a variety of settings, exploring the relationship between compositionality and learning dynamics, human judgments, representational similarity, and generalization." ]
[ 0, 0, 0, 0, 1 ]
[ 0.05405404791235924, 0.23529411852359772, 0.3103448152542114, 0.2800000011920929, 0.42307692766189575 ]
HJz05o0qK7
[ "This paper proposes a simple procedure for evaluating compositional structure in learned representations, and uses the procedure to explore the role of compositionality in four learning problems." ]
[ "In this paper, we propose an end-to-end deep learning model, called E2Efold, for RNA secondary structure prediction which can effectively take into account the inherent constraints in the problem.", "The key idea of E2Efold is to directly predict the RNA base-pairing matrix, and use an unrolled constrained programming algorithm as a building block in the architecture to enforce constraints.", "With comprehensive experiments on benchmark datasets, we demonstrate the superior performance of E2Efold: it predicts significantly better structures compared to previous SOTA (29.7% improvement in some cases in F1 scores and even larger improvement for pseudoknotted structures) and runs as efficient as the fastest algorithms in terms of inference time." ]
[ 0, 1, 0 ]
[ 0.3829787075519562, 0.42553192377090454, 0.1269841194152832 ]
S1eALyrYDH
[ "A DL model for RNA secondary structure prediction, which uses an unrolled algorithm in the architecture to enforce constraints." ]
[ "Learning in recurrent neural networks (RNNs) is most often implemented by gradient descent using backpropagation through time (BPTT), but BPTT does not model accurately how the brain learns.", "Instead, many experimental results on synaptic plasticity can be summarized as three-factor learning rules involving eligibility traces of the local neural activity and a third factor.", "We present here eligibility propagation (e-prop), a new factorization of the loss gradients in RNNs that fits the framework of three factor learning rules when derived for biophysical spiking neuron models.", "When tested on the TIMIT speech recognition benchmark, it is competitive with BPTT both for training artificial LSTM networks and spiking RNNs.", "Further analysis suggests that the diversity of learning signals and the consideration of slow internal neural dynamics are decisive to the learning efficiency of e-prop." ]
[ 0, 1, 0, 0, 0 ]
[ 0.07999999821186066, 0.2916666567325592, 0.23529411852359772, 0.22727271914482117, 0.1904761791229248 ]
SkxJ4QKIIS
[ "We present eligibility propagation an alternative to BPTT that is compatible with experimental data on synaptic plasticity and competes with BPTT on machine learning benchmarks." ]
[ "Recurrent neural networks (RNNs) are an effective representation of control policies for a wide range of reinforcement and imitation learning problems.", "RNN policies, however, are particularly difficult to explain, understand, and analyze due to their use of continuous-valued memory vectors and observation features.", "In this paper, we introduce a new technique, Quantized Bottleneck Insertion, to learn finite representations of these vectors and features.", "The result is a quantized representation of the RNN that can be analyzed to improve our understanding of memory use and general behavior.", "We present results of this approach on synthetic environments and six Atari games.", "The resulting finite representations are surprisingly small in some cases, using as few as 3 discrete memory states and 10 observations for a perfect Pong policy.", "We also show that these finite policy representations lead to improved interpretability." ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.19999998807907104, 0.04999999329447746, 0.14999999105930328, 0.1428571343421936, 0.1818181723356247, 0.13333332538604736, 0.1249999925494194 ]
S1gOpsCctm
[ "Extracting a finite state machine from a recurrent neural network via quantization for the purpose of interpretability with experiments on Atari." ]
[ "Building upon the recent success of deep reinforcement learning methods, we investigate the possibility of on-policy reinforcement learning improvement by reusing the data from several consecutive policies.", "On-policy methods bring many benefits, such as ability to evaluate each resulting policy.", "However, they usually discard all the information about the policies which existed before.", "In this work, we propose adaptation of the replay buffer concept, borrowed from the off-policy learning setting, to the on-policy algorithms.", "To achieve this, the proposed algorithm generalises the Q-, value and advantage functions for data from multiple policies.", "The method uses trust region optimisation, while avoiding some of the common problems of the algorithms such as TRPO or ACKTR: it uses hyperparameters to replace the trust region selection heuristics, as well as the trainable covariance matrix instead of the fixed one.", "In many cases, the method not only improves the results comparing to the state-of-the-art trust region on-policy learning algorithms such as ACKTR and TRPO, but also with respect to their off-policy counterpart DDPG. " ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.6829268336296082, 0, 0.12903225421905518, 0.2631579041481018, 0.277777761220932, 0.07843136787414551, 0.1599999964237213 ]
B1MB5oRqtQ
[ "We investigate the theoretical and practical evidence of on-policy reinforcement learning improvement by reusing the data from several consecutive policies." ]
[ "Convolutional Neural Networks (CNN) have been successful in processing data signals that are uniformly sampled in the spatial domain (e.g., images).", "However, most data signals do not natively exist on a grid, and in the process of being sampled onto a uniform physical grid suffer significant aliasing error and information loss.", "Moreover, signals can exist in different topological structures as, for example, points, lines, surfaces and volumes.", "It has been challenging to analyze signals with mixed topologies (for example, point cloud with surface mesh).", "To this end, we develop mathematical formulations for Non-Uniform Fourier Transforms (NUFT) to directly, and optimally, sample nonuniform data signals of different topologies defined on a simplex mesh into the spectral domain with no spatial sampling error.", "The spectral transform is performed in the Euclidean space, which removes the translation ambiguity from works on the graph spectrum.", "Our representation has four distinct advantages: (1) the process causes no spatial sampling error during initial sampling, (2) the generality of this approach provides a unified framework for using CNNs to analyze signals of mixed topologies, (3) it allows us to leverage state-of-the-art backbone CNN architectures for effective learning without having to design a particular architecture for a particular data structure in an ad-hoc fashion, and (4) the representation allows weighted meshes where each element has a different weight (i.e., texture) indicating local properties.", "We achieve good results on-par with state-of-the-art for 3D shape retrieval task, and new state-of-the-art for point cloud to surface reconstruction task." ]
[ 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.039215680211782455, 0.10526315122842789, 0.08888888359069824, 0, 0.1818181723356247, 0.04255318641662598, 0.11881187558174133, 0.16326530277729034 ]
B1G5ViAqFm
[ "We use non-Euclidean Fourier Transformation of shapes defined by a simplicial complex for deep learning, achieving significantly better results than point-based sampling techiques used in current 3D learning literature." ]
[ "Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences.", "Traditional score-based casual discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function.", "While these methods, e.g., greedy equivalence search, may have attractive results with infinite samples and certain model assumptions, they are less satisfactory in practice due to finite data and possible violation of assumptions.", "Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best scoring.", "Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards.", "The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity.", "In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy and our final output would be the graph, among all graphs generated during training, that achieves the best reward.", "We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows for a flexible score function under the acyclicity constraint." ]
[ 0, 0, 0, 0, 0, 0, 0, 1 ]
[ 0.0624999962747097, 0.20512819290161133, 0.11764705181121826, 0.05128204822540283, 0.1111111044883728, 0.1249999925494194, 0.07547169178724289, 0.2448979616165161 ]
S1g2skStPB
[ "We apply reinforcement learning to score-based causal discovery and achieve promising results on both synthetic and real datasets" ]
[ "The topic modeling discovers the latent topic probability of given the text documents.", "To generate the more meaningful topic that better represents the given document, we proposed a universal method which can be used in the data preprocessing stage.", "The method consists of three steps.", "First, it generates the word/word-pair from every single document.", "Second, it applies a two way parallel TF-IDF algorithm to word/word-pair for semantic filtering.", "Third, it uses the k-means algorithm to merge the word pairs that have the similar semantic meaning.\n\n", "Experiments are carried out on the Open Movie Database (OMDb), Reuters Dataset and 20NewsGroup Dataset and use the mean Average Precision score as the evaluation metric.", "Comparing our results with other state-of-the-art topic models, such as Latent Dirichlet allocation and traditional Restricted Boltzmann Machines.", "Our proposed data preprocessing can improve the generated topic accuracy by up to 12.99\\%.", "How the number of clusters and the number of word pairs should be adjusted for different type of text document is also discussed.\n" ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17142856121063232, 0.875, 0.06666666269302368, 0.12121211737394333, 0.10526315122842789, 0.14999999105930328, 0.04347825422883034, 0.0476190410554409, 0.3589743673801422, 0.13636362552642822 ]
Byni8NLHf
[ "We proposed a universal method which can be used in the data preprocessing stage to generate the more meaningful topic that better represents the given document" ]
[ "Multi-task learning has been successful in modeling multiple related tasks with large, carefully curated labeled datasets.", "By leveraging the relationships among different tasks, multi-task learning framework can improve the performance significantly.", "However, most of the existing works are under the assumption that the predefined tasks are related to each other.", "Thus, their applications on real-world are limited, because rare real-world problems are closely related.", "Besides, the understanding of relationships among tasks has been ignored by most of the current methods.", "Along this line, we propose a novel multi-task learning framework - Learning To Transfer Via Modelling Multi-level Task Dependency, which constructed attention based dependency relationships among different tasks.", "At the same time, the dependency relationship can be used to guide what knowledge should be transferred, thus the performance of our model also be improved.", "To show the effectiveness of our model and the importance of considering multi-level dependency relationship, we conduct experiments on several public datasets, on which we obtain significant improvements over current methods." ]
[ 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.09999999403953552, 0.31578946113586426, 0.14999999105930328, 0, 0.15789473056793213, 0.42307692766189575, 0.260869562625885, 0.15686273574829102 ]
BklhsgSFvB
[ "We propose a novel multi-task learning framework which extracts multi-view dependency relationship automatically and use it to guide the knowledge transfer among different tasks." ]
[ " We design simple and quantifiable testing of global translation-invariance in deep learning models trained on the MNIST dataset.", "Experiments on convolutional and capsules neural networks show that both models have poor performance in dealing with global translation-invariance; however, the performance improved by using data augmentation.", "Although the capsule network is better on the MNIST testing dataset, the convolutional neural network generally has better performance on the translation-invariance." ]
[ 1, 0, 0 ]
[ 0.2857142686843872, 0.1666666567325592, 0 ]
SJlgOjAqYQ
[ "Testing of global translational invariance in Convolutional and Capsule Networks" ]
[ "Gaussian processes are ubiquitous in nature and engineering.", "A case in point is a class of neural networks in the infinite-width limit, whose priors correspond to Gaussian processes.", "Here we perturbatively extend this correspondence to finite-width neural networks, yielding non-Gaussian processes as priors.", "The methodology developed herein allows us to track the flow of preactivation distributions by progressively integrating out random variables from lower to higher layers, reminiscent of renormalization-group flow.", "We further develop a perturbative prescription to perform Bayesian inference with weakly non-Gaussian priors." ]
[ 0, 0, 0, 0, 1 ]
[ 0.06666666269302368, 0.24390242993831635, 0.1621621549129486, 0.21276594698429108, 0.277777761220932 ]
HygP3TVFvS
[ "We develop an analytical method to study Bayesian inference of finite-width neural networks and find that the renormalization-group flow picture naturally emerges." ]
[ "Distillation is a method to transfer knowledge from one model to another and often achieves higher accuracy with the same capacity.", "In this paper, we aim to provide a theoretical understanding on what mainly helps with the distillation.", "Our answer is \"early stopping\".", "Assuming that the teacher network is overparameterized, we argue that the teacher network is essentially harvesting dark knowledge from the data via early stopping.", "This can be justified by a new concept, Anisotropic In- formation Retrieval (AIR), which means that the neural network tends to fit the informative information first and the non-informative information (including noise) later.", "Motivated by the recent development on theoretically analyzing overparame- terized neural networks, we can characterize AIR by the eigenspace of the Neural Tangent Kernel(NTK).", "AIR facilities a new understanding of distillation.", "With that, we further utilize distillation to refine noisy labels.", "We propose a self-distillation al- gorithm to sequentially distill knowledge from the network in the previous training epoch to avoid memorizing the wrong labels.", "We also demonstrate, both theoret- ically and empirically, that self-distillation can benefit from more than just early stopping.", "Theoretically, we prove convergence of the proposed algorithm to the ground truth labels for randomly initialized overparameterized neural networks in terms of l2 distance, while the previous result was on convergence in 0-1 loss.", "The theoretical result ensures the learned neural network enjoy a margin on the training data which leads to better generalization.", "Empirically, we achieve better testing accuracy and entirely avoid early stopping which makes the algorithm more user-friendly.\n" ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19999998807907104, 0.2978723347187042, 0.05714285373687744, 0.25, 0.09999999403953552, 0.15686273574829102, 0.1621621549129486, 0.09999999403953552, 0.15686273574829102, 0.1666666567325592, 0.16949151456356049, 0.12244897335767746, 0.1666666567325592 ]
HJlF3h4FvB
[ "theoretically understand the regularization effect of distillation. We show that early stopping is essential in this process. From this perspective, we developed a distillation method for learning with corrupted Label with theoretical guarantees." ]
[ "Lifelong learning poses considerable challenges in terms of effectiveness (minimizing prediction errors for all tasks) and overall computational tractability for real-time performance. ", "This paper addresses continuous lifelong multitask learning by jointly re-estimating the inter-task relations (\\textit{output} kernel) and the per-task model parameters at each round, assuming data arrives in a streaming fashion.", "We propose a novel algorithm called \\textit{Online Output Kernel Learning Algorithm} (OOKLA) for lifelong learning setting.", "To avoid the memory explosion, we propose a robust budget-limited versions of the proposed algorithm that efficiently utilize the relationship between the tasks to bound the total number of representative examples in the support set. ", "In addition, we propose a two-stage budgeted scheme for efficiently tackling the task-specific budget constraints in lifelong learning.", "Our empirical results over three datasets indicate superior AUC performance for OOKLA and its budget-limited cousins over strong baselines." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.1249999925494194, 0.1538461446762085, 0.38461539149284363, 0.04999999701976776, 0.2857142686843872, 0.0714285671710968 ]
H1Ww66x0-
[ "a novel approach for online lifelong learning using output kernels." ]
[ "Minecraft is a videogame that offers many interesting challenges for AI systems.", "In this paper, we focus in construction scenarios where an agent must build a complex structure made of individual blocks.", "As higher-level objects are formed of lower-level objects, the construction can naturally be modelled as a hierarchical task network.", "We model a house-construction scenario in classical and HTN planning and compare the advantages and disadvantages of both kinds of models." ]
[ 0, 0, 0, 1 ]
[ 0.12903225421905518, 0.1538461446762085, 0.15789473056793213, 0.9729729890823364 ]
BkgyvHSWFV
[ "We model a house-construction scenario in Minecraft in classical and HTN planning and compare the advantages and disadvantages of both kinds of models." ]
[ "Attacks on natural language models are difficult to compare due to their different definitions of what constitutes a successful attack.", "We present a taxonomy of constraints to categorize these attacks.", "For each constraint, we present a real-world use case and a way to measure how well generated samples enforce the constraint.", "We then employ our framework to evaluate two state-of-the art attacks which fool models with synonym substitution.", "These attacks claim their adversarial perturbations preserve the semantics and syntactical correctness of the inputs, but our analysis shows these constraints are not strongly enforced.", "For a significant portion of these adversarial examples, a grammar checker detects an increase in errors.", "Additionally, human studies indicate that many of these adversarial examples diverge in semantic meaning from the input or do not appear to be human-written.", "Finally, we highlight the need for standardized evaluation of attacks that share constraints.", "Without shared evaluation metrics, it is up to researchers to set thresholds that determine the trade-off between attack quality and attack success.", "We recommend well-designed human studies to determine the best threshold to approximate human judgement." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.1860465109348297, 0.1764705777168274, 0.1818181723356247, 0.09756097197532654, 0.1666666567325592, 0.1538461446762085, 0.25, 0.10810810327529907, 0.09090908616781235, 0.0555555522441864 ]
BkxmKgHtwH
[ "We present a framework for evaluating adversarial examples in natural language processing and demonstrate that generated adversarial examples are often not semantics-preserving, syntactically correct, or non-suspicious." ]
[ "In order for machine learning to be deployed and trusted in many applications, it is crucial to be able to reliably explain why the machine learning algorithm makes certain predictions.", "For example, if an algorithm classifies a given pathology image to be a malignant tumor, then the doctor may need to know which parts of the image led the algorithm to this classification.", "How to interpret black-box predictors is thus an important and active area of research. ", "A fundamental question is: how much can we trust the interpretation itself?", "In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different}interpretations.", "We systematically characterize the fragility of the interpretations generated by several widely-used feature-importance interpretation methods (saliency maps, integrated gradient, and DeepLIFT) on ImageNet and CIFAR-10.", "Our experiments show that even small random perturbation can change the feature importance and new systematic perturbations can lead to dramatically different interpretations without changing the label.", "We extend these results to show that interpretations based on exemplars (e.g. influence functions) are similarly fragile.", "Our analysis of the geometry of the Hessian matrix gives insight on why fragility could be a fundamental challenge to the current interpretation approaches." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.1071428507566452, 0.10526315122842789, 0.08695651590824127, 0.1395348757505417, 0.09677419066429138, 0.18518517911434174, 0.0714285671710968, 0.08163265138864517, 0.11538460850715637 ]
H1xJjlbAZ
[ "Can we trust a neural network's explanation for its prediction? We examine the robustness of several popular notions of interpretability of neural networks including saliency maps and influence functions and design adversarial examples against them." ]
[ "Stochastic AUC maximization has garnered an increasing interest due to better fit to imbalanced data classification.", "However, existing works are limited to stochastic AUC maximization with a linear predictive model, which restricts its predictive power when dealing with extremely complex data.", "In this paper, we consider stochastic AUC maximization problem with a deep neural network as the predictive model.", "Building on the saddle point reformulation of a surrogated loss of AUC, the problem can be cast into a {\\it non-convex concave} min-max problem.", "The main contribution made in this paper is to make stochastic AUC maximization more practical for deep neural networks and big data with theoretical insights as well.", "In particular, we propose to explore Polyak-\\L{}ojasiewicz (PL) condition that has been proved and observed in deep learning, which enables us to develop new stochastic algorithms with even faster convergence rate and more practical step size scheme.", "An AdaGrad-style algorithm is also analyzed under the PL condition with adaptive convergence rate.", "Our experimental results demonstrate the effectiveness of the proposed algorithms." ]
[ 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.09090908616781235, 0.3461538553237915, 0.4680851101875305, 0.08163265138864517, 0.3571428656578064, 0.15625, 0.1395348757505417, 0.10526315122842789 ]
HJepXaVYDr
[ "The paper designs two algorithms for the stochastic AUC maximization problem with state-of-the-art complexities when using deep neural network as predictive model, which are also verified by empirical studies." ]
[ "Designing rewards for Reinforcement Learning (RL) is challenging because it needs to convey the desired task, be efficient to optimize, and be easy to compute.", "The latter is particularly problematic when applying RL to robotics, where detecting whether the desired configuration is reached might require considerable supervision and instrumentation.", "Furthermore, we are often interested in being able to reach a wide range of configurations, hence setting up a different reward every time might be unpractical.", "Methods like Hindsight Experience Replay (HER) have recently shown promise to learn policies able to reach many goals, without the need of a reward.", "Unfortunately, without tricks like resetting to points along the trajectory, HER might take a very long time to discover how to reach certain areas of the state-space.", "In this work we investigate different approaches to incorporate demonstrations to drastically speed up the convergence to a policy able to reach any goal, also surpassing the performance of an agent trained with other Imitation Learning algorithms.", "Furthermore, our method can be used when only trajectories without expert actions are available, which can leverage kinestetic or third person demonstration." ]
[ 0, 0, 0, 0, 0, 1, 0 ]
[ 0.13333332538604736, 0.08695651590824127, 0, 0.17391303181648254, 0.04255318641662598, 0.178571417927742, 0 ]
HkglHcSj2N
[ "We tackle goal-conditioned tasks by combining Hindsight Experience Replay and Imitation Learning algorithms, showing faster convergence than the first and higher final performance than the second." ]
[ "Bayesian neural networks, which both use the negative log-likelihood loss function and average their predictions using a learned posterior over the parameters, have been used successfully across many scientific fields, partly due to their ability to `effortlessly' extract desired representations from many large-scale datasets.", "However, generalization bounds for this setting is still missing.\n", "In this paper, we present a new PAC-Bayesian generalization bound for the negative log-likelihood loss which utilizes the \\emph{Herbst Argument} for the log-Sobolev inequality to bound the moment generating function of the learners risk." ]
[ 0, 0, 1 ]
[ 0.072727270424366, 0.1599999964237213, 0.23255813121795654 ]
HkgR8erKwB
[ "We derive a new PAC-Bayesian Bound for unbounded loss functions (e.g. Negative Log-Likelihood). " ]
[ "Data augmentation techniques, e.g., flipping or cropping, which systematically enlarge the training dataset by explicitly generating more training samples, are effective in improving the generalization performance of deep neural networks.", "In the supervised setting, a common practice for data augmentation is to assign the same label to all augmented samples of the same source.", "However, if the augmentation results in large distributional discrepancy among them (e.g., rotations), forcing their label invariance may be too difficult to solve and often hurts the performance.", "To tackle this challenge, we suggest a simple yet effective idea of learning the joint distribution of the original and self-supervised labels of augmented samples.", "The joint learning framework is easier to train, and enables an aggregated inference combining the predictions from different augmented samples for improving the performance.", "Further, to speed up the aggregation process, we also propose a knowledge transfer technique, self-distillation, which transfers the knowledge of augmentation into the model itself.", "We demonstrate the effectiveness of our data augmentation framework on various fully-supervised settings including the few-shot and imbalanced classification scenarios." ]
[ 0, 0, 0, 0, 0, 0, 1 ]
[ 0.1599999964237213, 0.19999998807907104, 0.12244897335767746, 0.2857142686843872, 0.1395348757505417, 0.2380952388048172, 0.5641025304794312 ]
SkliR1SKDS
[ "We propose a simple self-supervised data augmentation technique which improves performance of fully-supervised scenarios including few-shot learning and imbalanced classification." ]
[ "Long short-term memory (LSTM) networks allow to exhibit temporal dynamic behavior with feedback connections and seem a natural choice for learning sequences of 3D meshes.", "We introduce an approach for dynamic mesh representations as used for numerical simulations of car crashes.", "To bypass the complication of using 3D meshes, we transform the surface mesh sequences into spectral descriptors that efficiently encode the shape.", "A two branch LSTM based network architecture is chosen to learn the representations and dynamics of the crash during the simulation.", "The architecture is based on unsupervised video prediction by an LSTM without any convolutional layer.", "It uses an encoder LSTM to map an input sequence into a fixed length vector representation.", "On this representation one decoder LSTM performs the reconstruction of the input sequence, while the other decoder LSTM predicts the future behavior by receiving initial steps of the sequence as seed.", "The spatio-temporal error behavior of the model is analysed to study how well the model can extrapolate the learned spectral descriptors into the future, that is, how well it has learned to represent the underlying dynamical structural mechanics.", "Considering that only a few training examples are available, which is the typical case for numerical simulations, the network performs very well." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.1860465109348297, 0.1818181723356247, 0.15789473056793213, 0.6486486196517944, 0.1818181723356247, 0.12121211737394333, 0.1904761791229248, 0.08510638028383255, 0.1538461446762085 ]
BklekANtwr
[ "A two branch LSTM based network architecture learns the representation and dynamics of 3D meshes of numerical crash simulations." ]
[ "The purpose of an encoding model is to predict brain activity given a stimulus.", "In this contribution, we attempt at estimating a whole brain encoding model of auditory perception in a naturalistic stimulation setting.", "We analyze data from an open dataset, in which 16 subjects watched a short movie while their brain activity was being measured using functional MRI.", "We extracted feature vectors aligned with the timing of the audio from the movie, at different layers of a Deep Neural Network pretrained on the classification of auditory scenes.", "fMRI data was parcellated using hierarchical clustering on 500 parcels, and encoding models were estimated using a fully connected neural network with one hidden layer, trained to predict the signals for each parcel from the DNN features.", "Individual encoding models were successfully trained and predicted brain activity on unseen data, in parcels located in the superior temporal lobe, as well as dorsolateral prefrontal regions, which are usually considered as areas involved in auditory and language processing.", "Taken together, this contribution extends previous attempts on estimating encoding models, by showing the ability to model brain activity using a generic DNN (ie not specifically trained for this purpose) to extract auditory features, suggesting a degree of similarity between internal DNN representations and brain activity in naturalistic settings." ]
[ 0, 0, 1, 0, 0, 0, 0 ]
[ 0.3030303120613098, 0.2631579041481018, 0.3181818127632141, 0.23255813121795654, 0.14814814925193787, 0.22641508281230927, 0.22580644488334656 ]
SyxENQtL8H
[ "Feature vectors from SoundNet can predict brain activity of subjects watching a movie in auditory and language related brain regions." ]
[ "In this paper, we describe the \"implicit autoencoder\" (IAE), a generative autoencoder in which both the generative path and the recognition path are parametrized by implicit distributions.", "We use two generative adversarial networks to define the reconstruction and the regularization cost functions of the implicit autoencoder, and derive the learning rules based on maximum-likelihood learning.", "Using implicit distributions allows us to learn more expressive posterior and conditional likelihood distributions for the autoencoder.", "Learning an expressive conditional likelihood distribution enables the latent code to only capture the abstract and high-level information of the data, while the remaining information is captured by the implicit conditional likelihood distribution.", "For example, we show that implicit autoencoders can disentangle the global and local information, and perform deterministic or stochastic reconstructions of the images.", "We further show that implicit autoencoders can disentangle discrete underlying factors of variation from the continuous factors in an unsupervised fashion, and perform clustering and semi-supervised learning." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.2978723347187042, 0.25531914830207825, 0.5, 0.2857142686843872, 0.2666666507720947, 0.2857142686843872 ]
HyMRaoAqKX
[ "We propose a generative autoencoder that can learn expressive posterior and conditional likelihood distributions using implicit distributions, and train the model using a new formulation of the ELBO." ]
[ "Strong inductive biases allow children to learn in fast and adaptable ways.", "Children use the mutual exclusivity (ME) bias to help disambiguate how words map to referents, assuming that if an object has one label then it does not need another.", "In this paper, we investigate whether or not standard neural architectures have a ME bias, demonstrating that they lack this learning assumption.", "Moreover, we show that their inductive biases are poorly matched to lifelong learning formulations of classification and translation.", "We demonstrate that there is a compelling case for designing neural networks that reason by mutual exclusivity, which remains an open challenge." ]
[ 0, 1, 0, 0, 0 ]
[ 0.15789473056793213, 0.29629629850387573, 0.1702127605676651, 0.1818181723356247, 0.08510638028383255 ]
S1lvn0NtwH
[ "Children use the mutual exclusivity (ME) bias to learn new words, while standard neural nets show the opposite bias, hindering learning in naturalistic scenarios such as lifelong learning." ]
[ "Cortical neurons process and integrate information on multiple timescales.", "In addition, these timescales or temporal receptive fields display functional and hierarchical organization.", "For instance, areas important for working memory (WM), such as prefrontal cortex, utilize neurons with stable temporal receptive fields and long timescales to support reliable representations of stimuli.", "Despite of the recent advances in experimental techniques, the underlying mechanisms for the emergence of neuronal timescales long enough to support WM are unclear and challenging to investigate experimentally.", "Here, we demonstrate that spiking recurrent neural networks (RNNs) designed to perform a WM task reproduce previously observed experimental findings and that these models could be utilized in the future to study how neuronal timescales specific to WM emerge." ]
[ 0, 0, 0, 0, 1 ]
[ 0, 0, 0.2448979616165161, 0.1304347813129425, 0.2857142686843872 ]
B1em4mFL8H
[ "Spiking recurrent neural networks performing a working memory task utilize long heterogeneous timescales, strikingly similar to those observed in prefrontal cortex." ]
[ "Conventional deep learning classifiers are static in the sense that they are trained on\n", "a predefined set of classes and learning to classify a novel class typically requires\n", "re-training.", "In this work, we address the problem of Low-shot network-expansion\n", "learning.", "We introduce a learning framework which enables expanding a pre-trained\n", "(base) deep network to classify novel classes when the number of examples for the\n", "novel classes is particularly small.", "We present a simple yet powerful distillation\n", "method where the base network is augmented with additional weights to classify\n", "the novel classes, while keeping the weights of the base network unchanged.", "We\n", "term this learning hard distillation, since we preserve the response of the network\n", "on the old classes to be equal in both the base and the expanded network.", "We\n", "show that since only a small number of weights needs to be trained, the hard\n", "distillation excels for low-shot training scenarios.", "Furthermore, hard distillation\n", "avoids detriment to classification performance on the base classes.", "Finally, we\n", "show that low-shot network expansion can be done with a very small memory\n", "footprint by using a compact generative model of the base classes training data\n", "with only a negligible degradation relative to learning with the full training set." ]
[ 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.1666666567325592, 0.1666666567325592, 0.8571428656578064, 0.09999999403953552, 0.1666666567325592, 0, 0, 0.08695651590824127, 0.1904761791229248, 0.43478259444236755, 0.0833333283662796, 0.1538461446762085, 0, 0, 0.09999999403953552, 0, 0.1666666567325592, 0.17391303181648254 ]
SJw03ceRW
[ " In this paper, we address the problem of Low-shot network-expansion learning" ]
[ "People ask questions that are far richer, more informative, and more creative than current AI systems.", "We propose a neural program generation framework for modeling human question asking, which represents questions as formal programs and generates programs with an encoder-decoder based deep neural network.", "From extensive experiments using an information-search game, we show that our method can ask optimal questions in synthetic settings, and predict which questions humans are likely to ask in unconstrained settings.", "We also propose a novel grammar-based question generation framework trained with reinforcement learning, which is able to generate creative questions without supervised data." ]
[ 0, 0, 0, 1 ]
[ 0.1428571343421936, 0.3396226465702057, 0.2181818187236786, 0.4000000059604645 ]
SylR-CEKDS
[ "We introduce a model of human question asking that combines neural networks and symbolic programs, which can learn to generate good questions with or without supervised examples." ]
[ "The classification of images taken in special imaging environments except air is the first challenge in extending the applications of deep learning.", "We report on an UW-Net (Underwater Network), a new convolutional neural network (CNN) based network for underwater image classification.", "In this model, we simulate the visual correlation of background attention with image understanding for special environments, such as fog and underwater by constructing an inception-attention (I-A) module.", "The experimental results demonstrate that the proposed UW-Net achieves an accuracy of 99.3% on underwater image classification, which is significantly better than other image classification networks, such as AlexNet, InceptionV3, ResNet and Se-ResNet.", "Moreover, we demonstrate the proposed IA module can be used to boost the performance of the existing object recognition networks.", "By substituting the inception module with the I-A module, the Inception-ResnetV2 network achieves a 10.7% top1 error rate and a 0% top5 error rate on the subset of ILSVRC-2012, which further illustrates the function of the background attention in the image classifications." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.07692307233810425, 0.07999999821186066, 0.22857142984867096, 0, 0, 0 ]
HklCmaVtPS
[ "A visual understanding mechanism for special environment" ]
[ "Deep networks have achieved impressive results across a variety of important tasks.", "However, a known weakness is a failure to perform well when evaluated on data which differ from the training distribution, even if these differences are very small, as is the case with adversarial examples. ", "We propose \\emph{Fortified Networks}, a simple transformation of existing networks, which “fortifies” the hidden layers in a deep network by identifying when the hidden states are off of the data manifold, and maps these hidden states back to parts of the data manifold where the network performs well.", "Our principal contribution is to show that fortifying these hidden states improves the robustness of deep networks and our experiments (i) demonstrate improved robustness to standard adversarial attacks in both black-box and white-box threat models; (ii) suggest that our improvements are not primarily due to the problem of deceptively good results due to degraded quality in the gradient signal (the gradient masking problem) and (iii) show the advantage of doing this fortification in the hidden layers instead of the input space. ", "We demonstrate improvements in adversarial robustness on three datasets (MNIST, Fashion MNIST, CIFAR10), across several attack parameters, both white-box and black-box settings, and the most widely studied attacks (FGSM, PGD, Carlini-Wagner). ", "We show that these improvements are achieved across a wide variety of hyperparameters. " ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0, 0.2857142686843872, 0.3396226465702057, 0.1818181723356247, 0.1666666567325592, 0.06451612710952759 ]
SkgVRiC9Km
[ "Better adversarial training by learning to map back to the data manifold with autoencoders in the hidden states. " ]
[ "Neural networks could misclassify inputs that are slightly different from their training data, which indicates a small margin between their decision boundaries and the training dataset.", "In this work, we study the binary classification of linearly separable datasets and show that linear classifiers could also have decision boundaries that lie close to their training dataset if cross-entropy loss is used for training.", "In particular, we show that if the features of the training dataset lie in a low-dimensional affine subspace and the cross-entropy loss is minimized by using a gradient method, the margin between the training points and the decision boundary could be much smaller than the optimal value.", "This result is contrary to the conclusions of recent related works such as (Soudry et al., 2018), and we identify the reason for this contradiction.", "In order to improve the margin, we introduce differential training, which is a training paradigm that uses a loss function defined on pairs of points from each class.", "We show that the decision boundary of a linear classifier trained with differential training indeed achieves the maximum margin.", "The results reveal the use of cross-entropy loss as one of the hidden culprits of adversarial examples and introduces a new direction to make neural networks robust against them." ]
[ 0, 0, 1, 0, 0, 0, 0 ]
[ 0.23999999463558197, 0.36666667461395264, 0.5625, 0.11764705181121826, 0.2641509473323822, 0.3181818127632141, 0.23076923191547394 ]
ByfbnsA9Km
[ "We show that minimizing the cross-entropy loss by using a gradient method could lead to a very poor margin if the features of the dataset lie on a low-dimensional subspace." ]
[ "The concepts of unitary evolution matrices and associative memory have boosted the field of Recurrent Neural Networks (RNN) to state-of-the-art performance in a variety of sequential tasks. ", "However, RNN still has a limited capacity to manipulate long-term memory. ", "To bypass this weakness the most successful applications of RNN use external techniques such as attention mechanisms.", "In this paper we propose a novel RNN model that unifies the state-of-the-art approaches: Rotational Unit of Memory (RUM).", "The core of RUM is its rotational operation, which is, naturally, a unitary matrix, providing architectures with the power to learn long-term dependencies by overcoming the vanishing and exploding gradients problem. ", "Moreover, the rotational unit also serves as associative memory.", "We evaluate our model on synthetic memorization, question answering and language modeling tasks. ", "RUM learns the Copying Memory task completely and improves the state-of-the-art result in the Recall task. ", "RUM’s performance in the bAbI Question Answering task is comparable to that of models with attention mechanism.", "We also improve the state-of-the-art result to 1.189 bits-per-character (BPC) loss in the Character Level Penn Treebank (PTB) task, which is to signify the applications of RUM to real-world sequential data.", "The universality of our construction, at the core of RNN, establishes RUM as a promising approach to language modeling, speech recognition and machine translation." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3255814015865326, 0.13793103396892548, 0.1764705777168274, 0.3333333134651184, 0.1666666567325592, 0.07692307233810425, 0.12903225421905518, 0.12903225421905518, 0.23529411852359772, 0.2222222238779068, 0.14999999105930328 ]
Sk4w0A0Tb
[ "A novel RNN model which outperforms significantly the current frontier of models in a variety of sequential tasks." ]
[ "While many recent advances in deep reinforcement learning rely on model-free methods, model-based approaches remain an alluring prospect for their potential to exploit unsupervised data to learn environment dynamics.", "One prospect is to pursue hybrid approaches, as in AlphaGo, which combines Monte-Carlo Tree Search (MCTS)—a model-based method—with deep-Q networks (DQNs)—a model-free method.", "MCTS requires generating rollouts, which is computationally expensive.", "In this paper, we propose to simulate roll-outs, exploiting the latest breakthroughs in image-to-image transduction, namely Pix2Pix GANs, to predict the dynamics of the environment.", "Our proposed algorithm, generative adversarial tree search (GATS), simulates rollouts up to a specified depth using both a GAN- based dynamics model and a reward predictor.", "GATS employs MCTS for planning over the simulated samples and uses DQN to estimate the Q-function at the leaf states.", "Our theoretical analysis establishes some favorable properties of GATS vis-a-vis the bias-variance trade-off and empirical results show that on 5 popular Atari games, the dynamics and reward predictors converge quickly to accurate solutions.", "However, GATS fails to outperform DQNs in 4 out of 5 games.", "Notably, in these experiments, MCTS has only short rollouts (up to tree depth 4), while previous successes of MCTS have involved tree depth in the hundreds.", "We present a hypothesis for why tree search with short rollouts can fail even given perfect modeling." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10810810327529907, 0, 0, 0, 0, 0, 0.09999999403953552, 0, 0, 0 ]
BJl4f2A5tQ
[ "Surprising negative results on Model Based + Model deep RL" ]
[ "We present a novel architecture of GAN for a disentangled representation learning.", "The new model architecture is inspired by Information Bottleneck (IB) theory thereby named IB-GAN.", "IB-GAN objective is similar to that of InfoGAN but has a crucial difference; a capacity regularization for mutual information is adopted, thanks to which the generator of IB-GAN can harness a latent representation in disentangled and interpretable manner.", "To facilitate the optimization of IB-GAN in practice, a new variational upper-bound is derived.", "With experiments on CelebA, 3DChairs, and dSprites datasets, we demonstrate that the visual quality of samples generated by IB-GAN is often better than those by β-VAEs.", "Moreover, IB-GAN achieves much higher disentanglement metrics score than β-VAEs or InfoGAN on the dSprites dataset." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.5925925970077515, 0.3333333134651184, 0.2083333283662796, 0.19999998807907104, 0.1463414579629898, 0 ]
ryljV2A5KX
[ "Inspired by Information Bottleneck theory, we propose a new architecture of GAN for a disentangled representation learning" ]
[ "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs.", "As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters.", "We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic.", "Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs.", "In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance.", "We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training." ]
[ 0, 0, 0, 0, 1, 0 ]
[ 0.1621621549129486, 0.1875, 0.15789473056793213, 0.12121211737394333, 0.2857142686843872, 0.0476190410554409 ]
r1lUOzWCW
[ "Explain bias situation with MMD GANs; MMD GANs work with smaller critic networks than WGAN-GPs; new GAN evaluation metric." ]
[ "We extend the Consensus Network framework to Transductive Consensus Network (TCN), a semi-supervised multi-modal classification framework, and identify its two mechanisms: consensus and classification.", "By putting forward three variants as ablation studies, we show both mechanisms should be functioning together.", "Overall, TCNs outperform or align with the best benchmark algorithms when only 20 to 200 labeled data points are available." ]
[ 1, 0, 0 ]
[ 0.2666666507720947, 0, 0 ]
HyeWvcQOKm
[ "A semi-supervised multi-modal classification framework, TCN, that outperforms various benchmarks." ]
[ "Separating mixed distributions is a long standing challenge for machine learning and signal processing.", "Applications include: single-channel multi-speaker separation (cocktail party problem), singing voice separation and separating reflections from images.", "Most current methods either rely on making strong assumptions on the source distributions (e.g. sparsity, low rank, repetitiveness) or rely on having training samples of each source in the mixture.", "In this work, we tackle the scenario of extracting an unobserved distribution additively mixed with a signal from an observed (arbitrary) distribution.", "We introduce a new method: Neural Egg Separation - an iterative method that learns to separate the known distribution from progressively finer estimates of the unknown distribution.", "In some settings, Neural Egg Separation is initialization sensitive, we therefore introduce GLO Masking which ensures a good initialization.", "Extensive experiments show that our method outperforms current methods that use the same level of supervision and often achieves similar performance to full supervision." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0.1428571343421936, 0, 0, 0.23529411852359772, 0.1538461446762085, 0, 0.1111111044883728 ]
SkelJnRqt7
[ "An iterative neural method for extracting signals that are only observed mixed with other signals" ]
[ "In health, machine learning is increasingly common, yet neural network embedding (representation) learning is arguably under-utilized for physiological signals. ", "This inadequacy stands out in stark contrast to more traditional computer science domains, such as computer vision (CV), and natural language processing (NLP). ", "For physiological signals, learning feature embeddings is a natural solution to data insufficiency caused by patient privacy concerns -- rather than share data, researchers may share informative embedding models (i.e., representation models), which map patient data to an output embedding. ", "Here, we present the PHASE (PHysiologicAl Signal Embeddings) framework, which consists of three components: i) learning neural network embeddings of physiological signals, ii) predicting outcomes based on the learned embedding, and iii) interpreting the prediction results by estimating feature attributions in the \"stacked\" models (i.e., feature embedding model followed by prediction model). ", "PHASE is novel in three ways: 1) To our knowledge, PHASE is the first instance of transferal of neural networks to create physiological signal embeddings.", "2) We present a tractable method to obtain feature attributions through stacked models. ", "We prove that our stacked model attributions can approximate Shapley values -- attributions known to have desirable properties -- for arbitrary sets of models.", "3) PHASE was extensively tested in a cross-hospital setting including publicly available data. ", "In our experiments, we show that PHASE significantly outperforms alternative embeddings -- such as raw, exponential moving average/variance, and autoencoder -- currently in use.", "Furthermore, we provide evidence that transferring neural network embedding/representation learners between distinct hospitals still yields performant embeddings and offer recommendations when transference is ineffective." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.0555555522441864, 0.04878048226237297, 0.1071428507566452, 0.12121211737394333, 0.09999999403953552, 0.25, 0.19999998807907104, 0.0624999962747097, 0.09756097197532654, 0.1428571343421936 ]
SygInj05Fm
[ "Physiological signal embeddings for prediction performance and hospital transference with a general Shapley value interpretability method for stacked models." ]
[ "We consider the dictionary learning problem, where the aim is to model the given data as a linear combination of a few columns of a matrix known as a dictionary, where the sparse weights forming the linear combination are known as coefficients.", "Since the dictionary and coefficients, parameterizing the linear model are unknown, the corresponding optimization is inherently non-convex.", "This was a major challenge until recently, when provable algorithms for dictionary learning were proposed.", "Yet, these provide guarantees only on the recovery of the dictionary, without explicit recovery guarantees on the coefficients.", "Moreover, any estimation error in the dictionary adversely impacts the ability to successfully localize and estimate the coefficients.", "This potentially limits the utility of existing provable dictionary learning methods in applications where coefficient recovery is of interest.", "To this end, we develop NOODL: a simple Neurally plausible alternating Optimization-based Online Dictionary Learning algorithm, which recovers both the dictionary and coefficients exactly at a geometric rate, when initialized appropriately.", "Our algorithm, NOODL, is also scalable and amenable for large scale distributed implementations in neural architectures, by which we mean that it only involves simple linear and non-linear operations.", "Finally, we corroborate these theoretical results via experimental evaluation of the proposed algorithm with the current state-of-the-art techniques." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.3181818127632141, 0.19354838132858276, 0.32258063554763794, 0.13793103396892548, 0.1249999925494194, 0.29411762952804565, 0.21739129722118378, 0.045454539358615875, 0.1818181723356247 ]
HJeu43ActQ
[ "We present a provable algorithm for exactly recovering both factors of the dictionary learning model. " ]
[ "Real-world Relation Extraction (RE) tasks are challenging to deal with, either due to limited training data or class imbalance issues.", "In this work, we present Data Augmented Relation Extraction (DARE), a simple method to augment training data by properly finetuning GPT2 to generate examples for specific relation types.", "The generated training data is then used in combination with the gold dataset to train a BERT-based RE classifier.", "In a series of experiments we show the advantages of our method, which leads in improvements of up to 11 F1 score points compared to a strong baseline.", "Also, DARE achieves new state-of-the-art in three widely used biomedical RE datasets surpassing the previous best results by 4.7 F1 points on average." ]
[ 0, 1, 0, 0, 0 ]
[ 0.1599999964237213, 0.24242423474788666, 0.07999999821186066, 0, 0 ]
rJedNwij2r
[ "Data Augmented Relation Extraction with GPT-2" ]
[ "This paper addresses the problem of representing a system's belief using multi-variate normal distributions (MND) where the underlying model is based on a deep neural network (DNN).", "The major challenge with DNNs is the computational complexity that is needed to obtain model uncertainty using MNDs.", "To achieve a scalable method, we propose a novel approach that expresses the parameter posterior in sparse information form.", "Our inference algorithm is based on a novel Laplace Approximation scheme, which involves a diagonal correction of the Kronecker-factored eigenbasis.", "As this makes the inversion of the information matrix intractable - an operation that is required for full Bayesian analysis, we devise a low-rank approximation of this eigenbasis and a memory-efficient sampling scheme.", "We provide both a theoretical analysis and an empirical evaluation on various benchmark data sets, showing the superiority of our approach over existing methods." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.0624999962747097, 0, 0, 0.1538461446762085, 0.0555555522441864, 0 ]
Bkxd9JBYPH
[ "An approximate inference algorithm for deep learning" ]
[ "The ever-increasing size of modern datasets combined with the difficulty of obtaining label information has made semi-supervised learning of significant practical importance in modern machine learning applications.", "In comparison to supervised learning, the key difficulty in semi-supervised learning is how to make full use of the unlabeled data.", "In order to utilize manifold information provided by unlabeled data, we propose a novel regularization called the tangent-normal adversarial regularization, which is composed by two parts.", "The two parts complement with each other and jointly enforce the smoothness along two different directions that are crucial for semi-supervised learning.", "One is applied along the tangent space of the data manifold, aiming to enforce local invariance of the classifier on the manifold, while the other is performed on the normal space orthogonal to the tangent space, intending to impose robustness on the classifier against the noise causing the observed data deviating from the underlying data manifold. ", "Both of the two regularizers are achieved by the strategy of virtual adversarial training.", "Our method has achieved state-of-the-art performance on semi-supervised learning tasks on both artificial dataset and practical datasets." ]
[ 0, 0, 1, 0, 0, 0, 0 ]
[ 0.1860465109348297, 0.20512819290161133, 0.35555556416511536, 0.1463414579629898, 0.145454540848732, 0.25, 0.2222222238779068 ]
BJxz5jRcFm
[ "We propose a novel manifold regularization strategy based on adversarial training, which can significantly improve the performance of semi-supervised learning." ]
[ "Universal approximation property of neural networks is one of the motivations to use these models in various real-world problems.", "However, this property is not the only characteristic that makes neural networks unique as there is a wide range of other approaches with similar property.", "Another characteristic which makes these models interesting is that they can be trained with the backpropagation algorithm which allows an efficient gradient computation and gives these universal approximators the ability to efficiently learn complex manifolds from a large amount of data in different domains.", "Despite their abundant use in practice, neural networks are still not well understood and a broad range of ongoing research is to study the interpretability of neural networks.", "On the other hand, topological data analysis (TDA) relies on strong theoretical framework of (algebraic) topology along with other mathematical tools for analyzing possibly complex datasets.", "In this work, we leverage a universal approximation theorem originating from algebraic topology to build a connection between TDA and common neural network training framework.", "We introduce the notion of automatic subdivisioning and devise a particular type of neural networks for regression tasks: Simplicial Complex Networks (SCNs).", "SCN's architecture is defined with a set of bias functions along with a particular policy during the forward pass which alternates the common architecture search framework in neural networks.", "We believe the view of SCNs can be used as a step towards building interpretable deep learning models.", "Finally, we verify its performance on a set of regression problems." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.12121211737394333, 0.10526315122842789, 0.0714285671710968, 0.04999999701976776, 0.19999998807907104, 0.05128204822540283, 0.1666666567325592, 0.14999999105930328, 0.12121211737394333, 0 ]
SJlRDCVtwr
[ "A novel method for supervised learning through subdivisioning the input space along with function approximation." ]
[ "The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks.", "However, despite the proliferation of such methods there is currently no study of their robustness to adversarial attacks.", "We provide the first adversarial vulnerability analysis on the widely used family of methods based on random walks.", "We derive efficient adversarial perturbations that poison the network structure and have a negative effect on both the quality of the embeddings and the downstream tasks.", "We further show that our attacks are transferable since they generalize to many models, and are successful even when the attacker is restricted." ]
[ 0, 0, 1, 0, 0 ]
[ 0.11764705181121826, 0.07407406717538834, 0.1538461446762085, 0.1249999925494194, 0.0624999962747097 ]
Sye7qoC5FQ
[ "Adversarial attacks on unsupervised node embeddings based on eigenvalue perturbation theory." ]
[ "Hierarchical reinforcement learning methods offer a powerful means of planning flexible behavior in complicated domains.", "However, learning an appropriate hierarchical decomposition of a domain into subtasks remains a substantial challenge.", "We present a novel algorithm for subtask discovery, based on the recently introduced multitask linearly-solvable Markov decision process (MLMDP) framework.", "The MLMDP can perform never-before-seen tasks by representing them as a linear combination of a previously learned basis set of tasks.", "In this setting, the subtask discovery problem can naturally be posed as finding an optimal low-rank approximation of the set of tasks the agent will face in a domain.", "We use non-negative matrix factorization to discover this minimal basis set of tasks, and show that the technique learns intuitive decompositions in a variety of domains.", "Our method has several qualitatively desirable features: it is not limited to learning subtasks with single goal states, instead learning distributed patterns of preferred states; it learns qualitatively different hierarchical decompositions in the same domain depending on the ensemble of tasks the agent will face; and it may be straightforwardly iterated to obtain deeper hierarchical decompositions." ]
[ 0, 0, 1, 0, 0, 0, 0 ]
[ 0.060606054961681366, 0.1249999925494194, 0.6842105388641357, 0.1111111044883728, 0.1818181723356247, 0.1395348757505417, 0.0624999962747097 ]
ry80wMW0W
[ "We present a novel algorithm for hierarchical subtask discovery which leverages the multitask linear Markov decision process framework." ]
[ "This paper introduces a framework for solving combinatorial optimization problems by learning from input-output examples of optimization problems.", "We introduce a new memory augmented neural model in which the memory is not resettable (i.e the information stored in the memory after processing an input example is kept for the next seen examples).", "We used deep reinforcement learning to train a memory controller agent to store useful memories.", "Our model was able to outperform hand-crafted solver on Binary Linear Programming (Binary LP).", "The proposed model is tested on different Binary LP instances with large number of variables (up to 1000 variables) and constrains (up to 700 constrains)." ]
[ 0, 1, 0, 0, 0 ]
[ 0.1111111044883728, 0.3333333432674408, 0.23529411852359772, 0.1764705777168274, 0.2790697515010834 ]
Bk_fs6gA-
[ "We propose a memory network model to solve Binary LP instances where the memory information is perseved for long-term use. " ]
[ "Sequence-to-sequence (Seq2Seq) models with attention have excelled at tasks which involve generating natural language sentences such as machine translation, image captioning and speech recognition.", "Performance has further been improved by leveraging unlabeled data, often in the form of a language model.", "In this work, we present the Cold Fusion method, which leverages a pre-trained language model during training, and show its effectiveness on the speech recognition task.", "We show that Seq2Seq models with Cold Fusion are able to better utilize language information enjoying", "i) faster convergence and better generalization, and", "ii) almost complete transfer to a new domain while using less than 10% of the labeled training data." ]
[ 0, 0, 0, 0, 0, 1 ]
[ 0.14814814925193787, 0.12765957415103912, 0.1090909019112587, 0.3478260934352875, 0.1111111119389534, 0.5416666865348816 ]
rybAWfx0b
[ "We introduce a novel method to train Seq2Seq models with language models that converge faster, generalize better and can almost completely transfer to a new domain using less than 10% of labeled data." ]
[ "A central capability of intelligent systems is the ability to continuously build upon previous experiences to speed up and enhance learning of new tasks.", "Two distinct research paradigms have studied this question.", "Meta-learning views this problem as learning a prior over model parameters that is amenable for fast adaptation on a new task, but typically assumes the set of tasks are available together as a batch.", "In contrast, online (regret based) learning considers a sequential setting in which problems are revealed one after the other, but conventionally train only a single model without any task-specific adaptation.", "This work introduces an online meta-learning setting, which merges ideas from both the aforementioned paradigms to better capture the spirit and practice of continual lifelong learning.", "We propose the follow the meta leader (FTML) algorithm which extends the MAML algorithm to this setting.", "Theoretically, this work provides an O(logT) regret guarantee for the FTML algorithm.", "Our experimental evaluation on three different large-scale tasks suggest that the proposed algorithm significantly outperforms alternatives based on traditional online learning approaches." ]
[ 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.25641024112701416, 0, 0.1666666567325592, 0.17391303181648254, 0.5714285373687744, 0.32258063554763794, 0.06896550953388214, 0.15789473056793213 ]
HkxzljA4_N
[ "We introduce the online meta learning problem setting to better capture the spirit and practice of continual lifelong learning." ]
[ "We investigate the extent to which individual attention heads in pretrained transformer language models, such as BERT and RoBERTa, implicitly capture syntactic dependency relations.", "We employ two methods—taking the maximum attention weight and computing the maximum spanning tree—to extract implicit dependency relations from the attention weights of each layer/head, and compare them to the ground-truth Universal Dependency (UD) trees.", "We show that, for some UD relation types, there exist heads that can recover the dependency type significantly better than baselines on parsed English text, suggesting that some self-attention heads act as a proxy for syntactic structure.", "We also analyze BERT fine-tuned on two datasets—the syntax-oriented CoLA and the semantics-oriented MNLI—to investigate whether fine-tuning affects the patterns of their self-attention, but we do not observe substantial differences in the overall dependency relations extracted using our methods.", "Our results suggest that these models have some specialist attention heads that track individual dependency types, but no generalist head that performs holistic parsing significantly better than a trivial baseline, and that analyzing attention weights directly may not reveal much of the syntactic knowledge that BERT-style models are known to learn." ]
[ 1, 0, 0, 0, 0 ]
[ 0.05882352590560913, 0.05128204822540283, 0, 0.04255318641662598, 0.036363635212183 ]
rJgoYekkgB
[ "Attention weights don't fully expose what BERT knows about syntax." ]
[ "State-of-the-art face super-resolution methods employ deep convolutional neural networks to learn a mapping between low- and high-resolution facial patterns by exploring local appearance knowledge.", "However, most of these methods do not well exploit facial structures and identity information, and struggle to deal with facial images that exhibit large pose variation and misalignment.", "In this paper, we propose a novel face super-resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures.", "Firstly, the 3D face rendering branch is set up to obtain 3D priors of salient facial structures and identity knowledge.", "Secondly, the Spatial Attention Mechanism is used to better exploit this hierarchical information (i.e. intensity similarity, 3D facial structure, identity content) for the super-resolution problem.", "Extensive experiments demonstrate that the proposed algorithm achieves superior face super-resolution results and outperforms the state-of-the-art." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.1395348757505417, 0.13636362552642822, 0.800000011920929, 0.31578946113586426, 0.13636362552642822, 0.1764705777168274 ]
HJeOHJHFPH
[ "We propose a novel face super resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures." ]
[ "We present Cross-View Training (CVT), a simple but effective method for deep semi-supervised learning.", "On labeled examples, the model is trained with standard cross-entropy loss.", "On an unlabeled example, the model first performs inference (acting as a \"teacher\") to produce soft targets.", "The model then learns from these soft targets (acting as a ``\"student\").", "We deviate from prior work by adding multiple auxiliary student prediction layers to the model.", "The input to each student layer is a sub-network of the full model that has a restricted view of the input (e.g., only seeing one region of an image).", "The students can learn from the teacher (the full model) because the teacher sees more of each example.", "Concurrently, the students improve the quality of the representations used by the teacher as they learn to make predictions with limited data.", "When combined with Virtual Adversarial Training, CVT improves upon the current state-of-the-art on semi-supervised CIFAR-10 and semi-supervised SVHN.", "We also apply CVT to train models on five natural language processing tasks using hundreds of millions of sentences of unlabeled data.", "On all tasks CVT substantially outperforms supervised learning alone, resulting in models that improve upon or are competitive with the current state-of-the-art.\n" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.12121211737394333, 0.13333332538604736, 0.0555555522441864, 0, 0.05882352590560913, 0.13333332538604736, 0.11428570747375488, 0.15789473056793213, 0.2222222238779068, 0.051282044500112534, 0.0952380895614624 ]
BJubPWZRW
[ "Self-training with different views of the input gives excellent results for semi-supervised image recognition, sequence tagging, and dependency parsing." ]
[ "I show how it can be beneficial to express Metropolis accept/reject decisions in terms of comparison with a uniform [0,1] value, and to then update this uniform value non-reversibly, as part of the Markov chain state, rather than sampling it independently each iteration.", "This provides a small improvement for random walk Metropolis and Langevin updates in high dimensions. ", "It produces a larger improvement when using Langevin updates with persistent momentum, giving performance comparable to that of Hamiltonian Monte Carlo (HMC) with long trajectories. ", "This is of significance when some variables are updated by other methods, since if HMC is used, these updates can be done only between trajectories, whereas they can be done more often with Langevin updates. ", "This is seen for a Bayesian neural network model, in which connection weights are updated by persistent Langevin or HMC, while hyperparameters are updated by Gibbs sampling.\n" ]
[ 1, 0, 0, 0, 0 ]
[ 0.2448979616165161, 0, 0.05714285373687744, 0.1463414579629898, 0 ]
Hked5J2EKr
[ "A non-reversible way of making accept/reject decisions can be beneficial" ]
[ "We introduce ES-MAML, a new framework for solving the model agnostic meta learning (MAML) problem based on Evolution Strategies (ES).", "Existing algorithms for MAML are based on policy gradients, and incur significant difficulties when attempting to estimate second derivatives using backpropagation on stochastic policies.", "We show how ES can be applied to MAML to obtain an algorithm which avoids the problem of estimating second derivatives, and is also conceptually simple and easy to implement.", "Moreover, ES-MAML can handle new types of nonsmooth adaptation operators, and other techniques for improving performance and estimation of ES methods become applicable.", "We show empirically that ES-MAML is competitive with existing methods and often yields better adaptation with fewer queries." ]
[ 0, 0, 0, 0, 1 ]
[ 0.27272728085517883, 0.12765957415103912, 0.19607841968536377, 0.17777776718139648, 0.2926829159259796 ]
S1exA2NtDB
[ "We provide a new framework for MAML in the ES/blackbox setting, and show that it allows deterministic and linear policies, better exploration, and non-differentiable adaptation operators." ]
[ "Transforming one probability distribution to another is a powerful tool in Bayesian inference and machine learning.", "Some prominent examples are constrained-to-unconstrained transformations of distributions for use in Hamiltonian Monte-Carlo and constructing flexible and learnable densities such as normalizing flows.", "We present Bijectors.jl, a software package for transforming distributions implemented in Julia, available at github.com/TuringLang/Bijectors.jl.", "The package provides a flexible and composable way of implementing transformations of distributions without being tied to a computational framework. \n\n", "We demonstrate the use of Bijectors.jl on improving variational inference by encoding known statistical dependencies into the variational posterior using normalizing flows, providing a general approach to relaxing the mean-field assumption usually made in variational inference." ]
[ 0, 0, 0, 0, 1 ]
[ 0.2083333283662796, 0.25925925374031067, 0.3199999928474426, 0.23529411852359772, 0.40625 ]
BklKK1nEFH
[ "We present a software framework for transforming distributions and demonstrate its flexibility on relaxing mean-field assumptions in variational inference with the use of coupling flows to replicate structure from the target generative model." ]
[ "Multi-domain learning (MDL) aims at obtaining a model with minimal average risk across multiple domains.", "Our empirical motivation is automated microscopy data, where cultured cells are imaged after being exposed to known and unknown chemical perturbations, and each dataset displays significant experimental bias.", "This paper presents a multi-domain adversarial learning approach, MuLANN, to leverage multiple datasets with overlapping but distinct class sets, in a semi-supervised setting.", "Our contributions include:", "i) a bound on the average- and worst-domain risk in MDL, obtained using the H-divergence;", "ii) a new loss to accommodate semi-supervised multi-domain learning and domain adaptation;", "iii) the experimental validation of the approach, improving on the state of the art on two standard image benchmarks, and a novel bioimage dataset, Cell." ]
[ 0, 0, 0, 0, 0, 1, 0 ]
[ 0.12121211737394333, 0.08888888359069824, 0.25, 0, 0.25, 0.4000000059604645, 0.15789473056793213 ]
Sklv5iRqYX
[ "Adversarial Domain adaptation and Multi-domain learning: a new loss to handle multi- and single-domain classes in the semi-supervised setting." ]
[ "We introduce our Distribution Regression Network (DRN) which performs regression from input probability distributions to output probability distributions.", "Compared to existing methods, DRN learns with fewer model parameters and easily extends to multiple input and multiple output distributions.", "On synthetic and real-world datasets, DRN performs similarly or better than the state-of-the-art.", "Furthermore, DRN generalizes the conventional multilayer perceptron (MLP).", "In the framework of MLP, each node encodes a real number, whereas in DRN, each node encodes a probability distribution." ]
[ 1, 0, 0, 0, 0 ]
[ 0.2142857164144516, 0.06896550953388214, 0.07999999821186066, 0.19999998807907104, 0.1428571343421936 ]
ByYPLJA6W
[ "A learning network which generalizes the MLP framework to perform distribution-to-distribution regression" ]
[ "Existing sequence prediction methods are mostly concerned with time-independent sequences, in which the actual time span between events is irrelevant and the distance between events is simply the difference between their order positions in the sequence.", "While this time-independent view of sequences is applicable for data such as natural languages, e.g., dealing with words in a sentence, it is inappropriate and inefficient for many real world events that are observed and collected at unequally spaced points of time as they naturally arise, e.g., when a person goes to a grocery store or makes a phone call.", "The time span between events can carry important information about the sequence dependence of human behaviors.", "In this work, we propose a set of methods for using time in sequence prediction.", "Because neural sequence models such as RNN are more amenable for handling token-like input, we propose two methods for time-dependent event representation, based on the intuition on how time is tokenized in everyday life and previous work on embedding contextualization.", "We also introduce two methods for using next event duration as regularization for training a sequence prediction model.", "We discuss these methods based on recurrent neural nets.", "We evaluate these methods as well as baseline models on five datasets that resemble a variety of sequence prediction tasks.", "The experiments revealed that the proposed methods offer accuracy gain over baseline models in a range of settings." ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.16326530277729034, 0.13333332538604736, 0.10526315122842789, 0.3243243098258972, 0.23728813230991364, 0.3589743673801422, 0.19354838132858276, 0.5365853905677795, 0.25 ]
SJDJNzWAZ
[ "Proposed methods for time-dependent event representation and regularization for sequence prediction; Evaluated these methods on five datasets that involve a range of sequence prediction tasks." ]
[ "Off-policy reinforcement learning algorithms promise to be applicable in settings where only a fixed data-set (batch) of environment interactions is available and no new experience can be acquired.", "This property makes these algorithms appealing for real world problems such as robot control.", "In practice, however, standard off-policy algorithms fail in the batch setting for continuous control.", "In this paper, we propose a simple solution to this problem.", "It admits the use of data generated by arbitrary behavior policies and uses a learned prior -- the advantage-weighted behavior model (ABM) -- to bias the RL policy towards actions that have previously been executed and are likely to be successful on the new task.", "Our method can be seen as an extension of recent work on batch-RL that enables stable learning from conflicting data-sources.", "We find improvements on competitive baselines in a variety of RL tasks -- including standard continuous control benchmarks and multi-task learning for simulated and real-world robots." ]
[ 0, 0, 0, 0, 1, 0, 0 ]
[ 0.22641508281230927, 0.04999999701976776, 0.09999999403953552, 0.1111111044883728, 0.3125, 0.21739129722118378, 0.23529411852359772 ]
rke7geHtwH
[ "We develop a method for stable offline reinforcement learning from logged data. The key is to regularize the RL policy towards a learned \"advantage weighted\" model of the data." ]
[ "One of the main challenges in applying graph convolutional neural networks on gene-interaction data is the lack of understanding of the vector space to which they belong and also the inherent difficulties involved in representing those interactions on a significantly lower dimension, viz Euclidean spaces.", "The challenge becomes more prevalent when dealing with various types of heterogeneous data.", "We introduce a systematic, generalized method, called iSOM-GSN, used to transform ``multi-omic'' data with higher dimensions onto a two-dimensional grid.", "Afterwards, we apply a convolutional neural network to predict disease states of various types.", "Based on the idea of Kohonen's self-organizing map, we generate a two-dimensional grid for each sample for a given set of genes that represent a gene similarity network. ", "We have tested the model to predict breast and prostate cancer using gene expression, DNA methylation and copy number alteration, yielding prediction accuracies in the 94-98% range for tumor stages of breast cancer and calculated Gleason scores of prostate cancer with just 11 input genes for both cases.", "The scheme not only outputs nearly perfect classification accuracy, but also provides an enhanced scheme for representation learning, visualization, dimensionality reduction, and interpretation of the results." ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.24137930572032928, 0.12121211737394333, 0.10256409645080566, 0.23529411852359772, 0.2222222238779068, 0.1355932205915451, 0.17777776718139648 ]
BJgRjTNtPH
[ "This paper presents a deep learning model that combines self-organizing maps and convolutional neural networks for representation learning of multi-omics data" ]
[ "State-of-the-art performances on language comprehension tasks are achieved by huge language models pre-trained on massive unlabeled text corpora, with very light subsequent fine-tuning in a task-specific supervised manner.", "It seems the pre-training procedure learns a very good common initialization for further training on various natural language understanding tasks, such that only few steps need to be taken in the parameter space to learn each task.", "In this work, using Bidirectional Encoder Representations from Transformers (BERT) as an example, we verify this hypothesis by showing that task-specific fine-tuned language models are highly close in parameter space to the pre-trained one.", "Taking advantage of such observations, we further show that the fine-tuned versions of these huge models, having on the order of $10^8$ floating-point parameters, can be made very computationally efficient.", "First, fine-tuning only a fraction of critical layers suffices.", "Second, fine-tuning can be adequately performed by learning a binary multiplicative mask on pre-trained weights, \\textit{i.e.} by parameter-sparsification.", "As a result, with a single effort, we achieve three desired outcomes: (1) learning to perform specific tasks, (2) saving memory by storing only binary masks of certain layers for each task, and (3) saving compute on appropriate hardware by performing sparse operations with model parameters. " ]
[ 0, 0, 0, 0, 1, 0, 0 ]
[ 0.1875, 0.04878048598766327, 0.1538461446762085, 0.060606058686971664, 0.2666666507720947, 0.07999999821186066, 0.040816325694322586 ]
SJx7004FPH
[ "Sparsification as fine-tuning of language models" ]
[ "We present a simple and effective algorithm designed to address the covariate shift problem in imitation learning.", "It operates by training an ensemble of policies on the expert demonstration data, and using the variance of their predictions as a cost which is minimized with RL together with a supervised behavioral cloning cost.", "Unlike adversarial imitation methods, it uses a fixed reward function which is easy to optimize.", "We prove a regret bound for the algorithm in the tabular setting which is linear in the time horizon multiplied by a coefficient which we show to be low for certain problems in which behavioral cloning fails.", "We evaluate our algorithm empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning." ]
[ 1, 0, 0, 0, 0 ]
[ 0.3571428656578064, 0.09756097197532654, 0.07692307233810425, 0.09999999403953552, 0.10526315122842789 ]
rkgbYyHtwB
[ "Method for addressing covariate shift in imitation learning using ensemble uncertainty" ]
[ "We present and discuss a simple image preprocessing method for learning disentangled latent factors. \n", "In particular, we utilize the implicit inductive bias contained in features from networks pretrained on the ImageNet database. \n", "We enhance this bias by explicitly fine-tuning such pretrained networks on tasks useful for the NeurIPS2019 disentanglement challenge, such as angle and position estimation or color classification.\n", "Furthermore, we train a VAE on regionally aggregate feature maps, and discuss its disentanglement performance using metrics proposed in recent literature." ]
[ 0, 1, 0, 0 ]
[ 0.06666666269302368, 0.12121211737394333, 0.0952380895614624, 0.0555555522441864 ]
S1gHsYFhsB
[ "We use supervised finetuning of feature vectors to improve transfer from simulation to the real world" ]
[ "A reinforcement learning agent that needs to pursue different goals across episodes requires a goal-conditional policy.", "In addition to their potential to generalize desirable behavior to unseen goals, such policies may also enable higher-level planning based on subgoals.", "In sparse-reward environments, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended appears crucial to enable sample efficient learning.", "However, reinforcement learning agents have only recently been endowed with such capacity for hindsight.", "In this paper, we demonstrate how hindsight can be introduced to policy gradient methods, generalizing this idea to a broad class of successful algorithms.", "Our experiments on a diverse selection of sparse-reward environments show that hindsight leads to a remarkable increase in sample efficiency." ]
[ 0, 0, 1, 0, 0, 0 ]
[ 0.10256409645080566, 0.04651162400841713, 0.7199999690055847, 0.10810810327529907, 0.13333332538604736, 0.0476190410554409 ]
Bkg2viA5FQ
[ "We introduce the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended to policy gradient methods." ]
[ "Computer vision has undergone a dramatic revolution in performance, driven in large part through deep features trained on large-scale supervised datasets.", "However, much of these improvements have focused on static image analysis; video understanding has seen rather modest improvements.", "Even though new datasets and spatiotemporal models have been proposed, simple frame-by-frame classification methods often still remain competitive.", "We posit that current video datasets are plagued with implicit biases over scene and object structure that can dwarf variations in temporal structure.", "In this work, we build a video dataset with fully observable and controllable object and scene bias, and which truly requires spatiotemporal understanding in order to be solved.", "Our dataset, named CATER, is rendered synthetically using a library of standard 3D objects, and tests the ability to recognize compositions of object movements that require long-term reasoning.", "In addition to being a challenging dataset, CATER also provides a plethora of diagnostic tools to analyze modern spatiotemporal video architectures by being completely observable and controllable.", "Using CATER, we provide insights into some of the most recent state of the art deep video architectures." ]
[ 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.09756097197532654, 0.10526315122842789, 0.10256409645080566, 0.2857142686843872, 0.25531914830207825, 0.2083333283662796, 0.13333332538604736, 0.10810810327529907 ]
HJgzt2VKPB
[ "We propose a new video understanding benchmark, with tasks that by-design require temporal reasoning to be solved, unlike most existing video datasets." ]
[ "We address the efficiency issues caused by the straggler effect in the recently emerged federated learning, which collaboratively trains a model on decentralized non-i.i.d. (non-independent and identically distributed) data across massive worker devices without exchanging training data in the unreliable and heterogeneous networks.", "We propose a novel two-stage analysis on the error bounds of general federated learning, which provides practical insights into optimization.", "As a result, we propose a novel easy-to-implement federated learning algorithm that uses asynchronous settings and strategies to control discrepancies between the global model and delayed models and adjust the number of local epochs with the estimation of staleness to accelerate convergence and resist performance deterioration caused by stragglers.", "Experiment results show that our algorithm converges fast and robust on the existence of massive stragglers." ]
[ 0, 0, 0, 1 ]
[ 0.18518517911434174, 0.34285715222358704, 0.3214285671710968, 0.5161290168762207 ]
B1lL9grYDS
[ "We propose an efficient and robust asynchronous federated learning algorithm on the existence of stragglers" ]
[ "Long Short-Term Memory (LSTM) units have the ability to memorise and use long-term dependencies between inputs to generate predictions on time series data.", "We introduce the concept of modifying the cell state (memory) of LSTMs using rotation matrices parametrised by a new set of trainable weights.", "This addition shows significant increases of performance on some of the tasks from the bAbI dataset." ]
[ 0, 0, 1 ]
[ 0.1463414579629898, 0.3589743673801422, 0.42424240708351135 ]
ByUEelW0-
[ "Adding a new set of weights to the LSTM that rotate the cell memory improves performance on some bAbI tasks." ]
[ "We address the problem of marginal inference for an exponential family defined over the set of permutation matrices.", "This problem is known to quickly become intractable as the size of the permutation increases, since its involves the computation of the permanent of a matrix, a #P-hard problem.", "We introduce Sinkhorn variational marginal inference as a scalable alternative, a method whose validity is ultimately justified by the so-called Sinkhorn approximation of the permanent.", "We demonstrate the efectiveness of our method in the problem of probabilistic identification of neurons in the worm C.elegans" ]
[ 0, 0, 1, 0 ]
[ 0.24242423474788666, 0.10256409645080566, 0.25641024112701416, 0.25 ]
HkxPtJh4YB
[ "New methodology for variational marginal inference of permutations based on Sinkhorn algorithm, applied to probabilistic identification of neurons" ]
[ "The robustness of neural networks to adversarial examples has received great attention due to security implications.", "Despite various attack approaches to crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive measure of robustness.", "In this paper, we provide theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and propose to use the Extreme Value Theory for efficient evaluation.", "Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.", "The proposed CLEVER score is attack-agnostic and is computationally feasible for large neural networks.", "Experimental results on various networks, including ResNet, Inception-v3 and MobileNet, show that", "(i) CLEVER is aligned with the robustness indication measured by the $\\ell_2$ and $\\ell_\\infty$ norms of adversarial examples from powerful attacks, and", "(ii) defended networks using defensive distillation or bounded ReLU indeed give better CLEVER scores.", "To the best of our knowledge, CLEVER is the first attack-independent robustness metric that can be applied to any neural network classifiers.\n\n" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ 0.1764705777168274, 0.1538461446762085, 0.21276594698429108, 0.15789473056793213, 0.0624999962747097, 0.06451612710952759, 0.10256409645080566, 0, 0.5853658318519592 ]
BkUHlMZ0b
[ "We propose the first attack-independent robustness metric, a.k.a CLEVER, that can be applied to any neural network classifier." ]
[ "Multi-agent collaboration is required by numerous real-world problems.", "Although distributed setting is usually adopted by practical systems, local range communication and information aggregation still matter in fulfilling complex tasks.", "For multi-agent reinforcement learning, many previous studies have been dedicated to design an effective communication architecture.", "However, existing models usually suffer from an ossified communication structure, e.g., most of them predefine a particular communication mode by specifying a fixed time frequency and spatial scope for agents to communicate regardless of necessity.", "Such design is incapable of dealing with multi-agent scenarios that are capricious and complicated, especially when only partial information is available.", "Motivated by this, we argue that the solution is to build a spontaneous and self-organizing communication (SSoC) learning scheme.", "By treating the communication behaviour as an explicit action, SSoC learns to organize communication in an effective and efficient way.", "Particularly, it enables each agent to spontaneously decide when and who to send messages based on its observed states.", "In this way, a dynamic inter-agent communication channel is established in an online and self-organizing manner.", "The agents also learn how to adaptively aggregate the received messages and its own hidden states to execute actions.", "Various experiments have been conducted to demonstrate that SSoC really learns intelligent message passing among agents located far apart.", "With such agile communications, we observe that effective collaboration tactics emerge which have not been mastered by the compared baselines." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.1666666567325592, 0.12903225421905518, 0.16326530277729034, 0.11428570747375488, 0.47058823704719543, 0.12121211737394333, 0.060606054961681366, 0.25806450843811035, 0.060606054961681366, 0, 0 ]
rJ4vlh0qtm
[ "This paper proposes a spontaneous and self-organizing communication (SSoC) learning scheme for multi-agent RL tasks." ]
[ "We study the BERT language representation model and the sequence generation model with BERT encoder for multi-label text classification task.", "We experiment with both models and explore their special qualities for this setting.", "We also introduce and examine experimentally a mixed model, which is an ensemble of multi-label BERT and sequence generating BERT models.", "Our experiments demonstrated that BERT-based models and the mixed model, in particular, outperform current baselines in several metrics achieving state-of-the-art results on three well-studied multi-label classification datasets with English texts and two private Yandex Taxi datasets with Russian texts." ]
[ 1, 0, 0, 0 ]
[ 0.42424240708351135, 0.06896550953388214, 0.22857142984867096, 0.11999999731779099 ]
BJeHFlBYvB
[ "On using BERT as an encoder for sequential prediction of labels in multi-label text classification task" ]
[ "Click Through Rate (CTR) prediction is a critical task in industrial applications, especially for online social and commerce applications.", "It is challenging to find a proper way to automatically discover the effective cross features in CTR tasks.", "We propose a novel model for CTR tasks, called Deep neural networks with Encoder enhanced Factorization Machine (DeepEnFM).", "Instead of learning the cross features directly, DeepEnFM adopts the Transformer encoder as a backbone to align the feature embeddings with the clues of other fields.", "The embeddings generated from encoder are beneficial for the further feature interactions.", "Particularly, DeepEnFM utilizes a bilinear approach to generate different similarity functions with respect to different field pairs.", "Furthermore, the max-pooling method makes DeepEnFM feasible to capture both the supplementary and suppressing information among different attention heads.", "Our model is validated on the Criteo and Avazu datasets, and achieves state-of-art performance." ]
[ 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.13333332538604736, 0.0714285671710968, 0.3448275923728943, 0.060606054961681366, 0.08695651590824127, 0.1538461446762085, 0.20689654350280762, 0.0833333283662796 ]
SJlyta4YPS
[ "DNN and Encoder enhanced FM with bilinear attention and max-pooling for CTR" ]
[ "For autonomous agents to successfully operate in the real world, the ability to anticipate future scene states is a key competence.", "In real-world scenarios, future states become increasingly uncertain and multi-modal, particularly on long time horizons.", "Dropout based Bayesian inference provides a computationally tractable, theoretically well grounded approach to learn different hypotheses/models to deal with uncertain futures and make predictions that correspond well to observations -- are well calibrated.", "However, it turns out that such approaches fall short to capture complex real-world scenes, even falling behind in accuracy when compared to the plain deterministic approaches.", "This is because the used log-likelihood estimate discourages diversity.", "In this work, we propose a novel Bayesian formulation for anticipating future scene states which leverages synthetic likelihoods that encourage the learning of diverse models to accurately capture the multi-modal nature of future scene states.", "We show that our approach achieves accurate state-of-the-art predictions and calibrated probabilities through extensive experiments for scene anticipation on Cityscapes dataset.", "Moreover, we show that our approach generalizes across diverse tasks such as digit generation and precipitation forecasting." ]
[ 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.17142856121063232, 0.12903225421905518, 0.35555556416511536, 0.04999999701976776, 0.07999999821186066, 0.1304347813129425, 0.21621620655059814, 0.12121211737394333 ]
rkgK3oC5Fm
[ "Dropout based Bayesian inference is extended to deal with multi-modality and is evaluated on scene anticipation tasks." ]
[ "Conditional generative adversarial networks (cGAN) have led to large improvements in the task of conditional image generation, which lies at the heart of computer vision.", "The major focus so far has been on performance improvement, while there has been little effort in making cGAN more robust to noise.", "The regression (of the generator) might lead to arbitrarily large errors in the output, which makes cGAN unreliable for real-world applications.", "In this work, we introduce a novel conditional GAN model, called RoCGAN, which leverages structure in the target space of the model to address the issue.", "Our model augments the generator with an unsupervised pathway, which promotes the outputs of the generator to span the target manifold even in the presence of intense noise.", "We prove that RoCGAN share similar theoretical properties as GAN and experimentally verify that our model outperforms existing state-of-the-art cGAN architectures by a large margin in a variety of domains including images from natural scenes and faces." ]
[ 0, 0, 0, 1, 0, 0 ]
[ 0.25, 0.08695651590824127, 0.17777776718139648, 0.44897958636283875, 0.3829787075519562, 0.1355932205915451 ]
Byg0DsCqYQ
[ "We introduce a new type of conditional GAN, which aims to leverage structure in the target space of the generator. We augment the generator with a new, unsupervised pathway to learn the target structure. " ]
[ "Though deep neural networks have achieved the state of the art performance in visual classification, recent studies have shown that they are all vulnerable to the attack of adversarial examples.", "To solve the problem, some regularization adversarial training methods, constraining the output label or logit, have been studied.", "In this paper, we propose a novel regularized adversarial training framework ATLPA,namely Adversarial Tolerant Logit Pairing with Attention.", "Instead of constraining a hard distribution (e.g., one-hot vectors or logit) in adversarial training, ATLPA uses Tolerant Logit which consists of confidence distribution on top-k classes and captures inter-class similarities at the image level.", "Specifically, in addition to minimizing the empirical loss, ATLPA encourages attention map for pairs of examples to be similar.", "When applied to clean examples and their adversarial counterparts, ATLPA improves accuracy on adversarial examples over adversarial training.", "We evaluate ATLPA with the state of the art algorithms, the experiment results show that our method outperforms these baselines with higher accuracy.", "Compared with previous work, our work is evaluated under highly challenging PGD attack: the maximum perturbation $\\epsilon$ is 64 and 128 with 10 to 200 attack iterations." ]
[ 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.045454539358615875, 0.11428570747375488, 1, 0.1538461446762085, 0, 0.12121211737394333, 0.052631575614213943, 0.04651162400841713 ]
HJx0Yn4FPB
[ "In this paper, we propose a novel regularized adversarial training framework ATLPA,namely Adversarial Tolerant Logit Pairing with Attention." ]
[ "A fundamental trait of intelligence is the ability to achieve goals in the face of novel circumstances.", "In this work, we address one such setting which requires solving a task with a novel set of actions.", "Empowering machines with this ability requires generalization in the way an agent perceives its available actions along with the way it uses these actions to solve tasks.", "Hence, we propose a framework to enable generalization over both these aspects: understanding an action’s functionality, and using actions to solve tasks through reinforcement learning.", "Specifically, an agent interprets an action’s behavior using unsupervised representation learning over a collection of data samples reflecting the diverse properties of that action.", "We employ a reinforcement learning architecture which works over these action representations, and propose regularization metrics essential for enabling generalization in a policy.", "We illustrate the generalizability of the representation learning method and policy, to enable zero-shot generalization to previously unseen actions on challenging sequential decision-making environments.", "Our results and videos can be found at sites.google.com/view/action-generalization/" ]
[ 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.2222222238779068, 0.13333332538604736, 0.17142856121063232, 0.2222222238779068, 0.23529411852359772, 0.29411762952804565, 0.4117647111415863, 0 ]
rkx35lHKwB
[ "We address the problem of generalization of reinforcement learning to unseen action spaces." ]
[ "Temporal point processes are the dominant paradigm for modeling sequences of events happening at irregular intervals.", "The standard way of learning in such models is by estimating the conditional intensity function. ", "However, parameterizing the intensity function usually incurs several trade-offs.", "We show how to overcome the limitations of intensity-based approaches by directly modeling the conditional distribution of inter-event times. ", "We draw on the literature on normalizing flows to design models that are flexible and efficient.", "We additionally propose a simple mixture model that matches the flexibility of flow-based models, but also permits sampling and computing moments in closed form. ", "The proposed models achieve state-of-the-art performance in standard prediction tasks and are suitable for novel applications, such as learning sequence embeddings and imputing missing data." ]
[ 0, 1, 0, 0, 0, 0, 0 ]
[ 0.2857142686843872, 0.3571428656578064, 0.1904761791229248, 0.2666666507720947, 0.07407406717538834, 0.10810810327529907, 0.0555555522441864 ]
HygOjhEYDH
[ "Learn in temporal point processes by modeling the conditional density, not the conditional intensity." ]
[ "We propose a novel yet simple neural network architecture for topic modelling.", "The method is based on training an autoencoder structure where the bottleneck represents the space of the topics distribution and the decoder outputs represent the space of the words distributions over the topics.", "We exploit an auxiliary decoder to prevent mode collapsing in our model. ", "A key feature for an effective topic modelling method is having sparse topics and words distributions, where there is a trade-off between the sparsity level of topics and words.", "This feature is implemented in our model by L-2 regularization and the model hyperparameters take care of the trade-off. ", "We show in our experiments that our model achieves competitive results compared to the state-of-the-art deep models for topic modelling, despite its simple architecture and training procedure.", "The “New York Times” and “20 Newsgroups” datasets are used in the experiments.\n\n" ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 0.3333333432674408, 0, 0.10526315122842789, 0.25806450843811035, 0.0833333283662796, 0.25, 0 ]
BJluV5tjiQ
[ "A deep model for topic modelling" ]
[ "Voice Conversion (VC) is a task of converting perceived speaker identity from a source speaker to a particular target speaker.", "Earlier approaches in the literature primarily find a mapping between the given source-target speaker-pairs.", "Developing mapping techniques for many-to-many VC using non-parallel data, including zero-shot learning remains less explored areas in VC.", "Most of the many-to-many VC architectures require training data from all the target speakers for whom we want to convert the voices.", "In this paper, we propose a novel style transfer architecture, which can also be extended to generate voices even for target speakers whose data were not used in the training (i.e., case of zero-shot learning).", "In particular, propose Adaptive Generative Adversarial Network (AdaGAN), new architectural training procedure help in learning normalized speaker-independent latent representation, which will be used to generate speech with different speaking styles in the context of VC.", "We compare our results with the state-of-the-art StarGAN-VC architecture.", "In particular, the AdaGAN achieves 31.73%, and 10.37% relative improvement compared to the StarGAN in MOS tests for speech quality and speaker similarity, respectively.", "The key strength of the proposed architectures is that it yields these results with less computational complexity.", "AdaGAN is 88.6% less complex than StarGAN-VC in terms of FLoating Operation Per Second (FLOPS), and 85.46% less complex in terms of trainable parameters. " ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0.25, 0.17142856121063232, 0.07692307233810425, 0.04081632196903229, 0, 0.10256409645080566, 0, 0.10810810327529907 ]
HJlk-eHFwH
[ "Novel adaptive instance normalization based GAN framework for non parallel many-to-many and zero-shot VC. " ]
[ "Self-attention-based Transformer has demonstrated the state-of-the-art performances in a number of natural language processing tasks.", "Self attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context.", "To tackle the problem, we propose a novel model called Sparse Transformer.", "Sparse Transformer is able to improve the concentration of attention on the global context through an explicit selection of the most relevant segments.", "Extensive experimental results on a series of natural language processing tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Sparse Transformer in model performance. \n ", "Sparse Transformer reaches the state-of-the-art performances in the IWSLT 2015 English-to-Vietnamese translation and IWSLT 2014 German-to-English translation.", "In addition, we conduct qualitative analysis to account for Sparse Transformer's superior performance." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0.14999999105930328, 0.2222222238779068, 0.21621620655059814, 0.800000011920929, 0.2222222238779068, 0.1538461446762085, 0.15789473056793213 ]
Hye87grYDH
[ "This work propose Sparse Transformer to improve the concentration of attention on the global context through an explicit selection of the most relevant segments for sequence to sequence learning. " ]
[ "Human observers can learn to recognize new categories of objects from a handful of examples, yet doing so with machine perception remains an open challenge.", "We hypothesize that data-efficient recognition is enabled by representations which make the variability in natural signals more predictable, as suggested by recent perceptual evidence.", "We therefore revisit and improve Contrastive Predictive Coding, a recently-proposed unsupervised learning framework, and arrive at a representation which enables generalization from small amounts of labeled data.", "When provided with only 1% of ImageNet labels (i.e. 13 per class), this model retains a strong classification performance, 73% Top-5 accuracy, outperforming supervised networks by 28% (a 65% relative improvement) and state-of-the-art semi-supervised methods by 14%.", "We also find this representation to serve as a useful substrate for object detection on the PASCAL-VOC 2007 dataset, approaching the performance of representations trained with a fully annotated ImageNet dataset." ]
[ 0, 1, 0, 0, 0 ]
[ 0.05714285373687744, 0.11764705181121826, 0.1111111044883728, 0.0833333283662796, 0.09999999403953552 ]
rJerHlrYwH
[ "Unsupervised representations learned with Contrastive Predictive Coding enable data-efficient image classification." ]
[ "We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels.", "We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently.", "Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer.", "Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights.", "We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms.", "Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training.", "In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network." ]
[ 0, 0, 0, 0, 0, 1, 0 ]
[ 0.3928571343421936, 0.2142857164144516, 0.2978723347187042, 0.35555556416511536, 0.2142857164144516, 0.4313725531101227, 0.23999999463558197 ]
ByeSYa4KPS
[ "Redistributing and growing weights according to the momentum magnitude enables the training of sparse networks from random initializations that can reach dense performance levels with 5% to 50% weights while accelerating training by up to 5.6x." ]
[ "To provide principled ways of designing proper Deep Neural Network (DNN) models, it is essential to understand the loss surface of DNNs under realistic assumptions.", "We introduce interesting aspects for understanding the local minima and overall structure of the loss surface.", "The parameter domain of the loss surface can be decomposed into regions in which activation values (zero or one for rectified linear units) are consistent.", "We found that, in each region, the loss surface have properties similar to that of linear neural networks where every local minimum is a global minimum.", "This means that every differentiable local minimum is the global minimum of the corresponding region.", "We prove that for a neural network with one hidden layer using rectified linear units under realistic assumptions.", "There are poor regions that lead to poor local minima, and we explain why such regions exist even in the overparameterized DNNs." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0.23255813121795654, 0.29411762952804565, 0.27272728085517883, 0.5909090638160706, 0.5625, 0.10810810327529907, 0.1538461446762085 ]
SJDYgPgCZ
[ "The loss surface of neural networks is a disjoint union of regions where every local minimum is a global minimum of the corresponding region." ]
[ "Contextualized word representations, such as ELMo and BERT, were shown to perform well on a various of semantic and structural (syntactic) task.", "In this work, we tackle the task of unsupervised disentanglement between semantics and structure in neural language representations: we aim to learn a transformation of the contextualized vectors, that discards the lexical semantics, but keeps the structural information.", "To this end, we automatically generate groups of sentences which are structurally similar but semantically different, and use metric-learning approach to learn a transformation that emphasizes the structural component that is encoded in the vectors.", "We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.", "Finally, we demonstrate the utility of our distilled representations by showing that they outperform the original contextualized representations in few-shot parsing setting." ]
[ 0, 0, 0, 1, 0 ]
[ 0, 0.09090908616781235, 0, 0.14814814925193787, 0.12903225421905518 ]
HJlRFlHFPS
[ "We distill language models representations for syntax by unsupervised metric learning" ]
[ "We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals.", "The proposed semi-parametric topological memory (SPTM) consists of a (non-parametric) graph with nodes corresponding to locations in the environment and a (parametric) deep network capable of retrieving nodes from the graph based on observations.", "The graph stores no metric information, only connectivity of locations corresponding to the nodes.", "We use SPTM as a planning module in a navigation system.", "Given only 5 minutes of footage of a previously unseen maze, an SPTM-based navigation agent can build a topological map of the environment and use it to confidently navigate towards goals.", "The average success rate of the SPTM agent in goal-directed navigation across test environments is higher than the best-performing baseline by a factor of three." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 1, 0.13333332538604736, 0, 0.307692289352417, 0.1818181723356247, 0.20512819290161133 ]
SygwwGbRW
[ "We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals." ]
[ "The available resolution in our visual world is extremely high, if not infinite.", "Existing CNNs can be applied in a fully convolutional way to images of arbitrary resolution, but as the size of the input increases, they can not capture contextual information.", "In addition, computational requirements scale linearly to the number of input pixels, and resources are allocated uniformly across the input, no matter how informative different image regions are.", "We attempt to address these problems by proposing a novel architecture that traverses an image pyramid in a top-down fashion, while it uses a hard attention mechanism to selectively process only the most informative image parts.", "We conduct experiments on MNIST and ImageNet datasets, and we show that our models can significantly outperform fully convolutional counterparts, when the resolution of the input is that big that the receptive field of the baselines can not adequately cover the objects of interest.", "Gains in performance come for less FLOPs, because of the selective processing that we follow.", "Furthermore, our attention mechanism makes our predictions more interpretable, and creates a trade-off between accuracy and complexity that can be tuned both during training and testing time." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0.0555555522441864, 0.16326530277729034, 0.16326530277729034, 0.6545454263687134, 0.10526315122842789, 0.15789473056793213, 0.08510638028383255 ]
rkeH6AEYvr
[ "We propose a novel architecture that traverses an image pyramid in a top-down fashion, while it visits only the most informative regions along the way." ]
[ "Hyperparameter optimization can be formulated as a bilevel optimization problem, where the optimal parameters on the training set depend on the hyperparameters.", "We aim to adapt regularization hyperparameters for neural networks by fitting compact approximations to the best-response function, which maps hyperparameters to optimal weights and biases.", "We show how to construct scalable best-response approximations for neural networks by modeling the best-response as a single network whose hidden units are gated conditionally on the regularizer.", "We justify this approximation by showing the exact best-response for a shallow linear network with L2-regularized Jacobian can be represented by a similar gating mechanism.", "We fit this model using a gradient-based hyperparameter optimization algorithm which alternates between approximating the best-response around the current hyperparameters and optimizing the hyperparameters using the approximate best-response function.", "Unlike other gradient-based approaches, we do not require differentiating the training loss with respect to the hyperparameters, allowing us to tune discrete hyperparameters, data augmentation hyperparameters, and dropout probabilities.", "Because the hyperparameters are adapted online, our approach discovers hyperparameter schedules that can outperform fixed hyperparameter values.", "Empirically, our approach outperforms competing hyperparameter optimization methods on large-scale deep learning problems.", "We call our networks, which update their own hyperparameters online during training, Self-Tuning Networks (STNs)." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.12121211737394333, 0.2702702581882477, 0.1463414579629898, 0.10526315122842789, 0.15789473056793213, 0.14999999105930328, 0, 0, 0.06666666269302368 ]
r1eEG20qKQ
[ "We use a hypernetwork to predict optimal weights given hyperparameters, and jointly train everything together." ]
[ "Conditional Generative Adversarial Networks (cGANs) are finding increasingly widespread use in many application domains.", "Despite outstanding progress, quantitative evaluation of such models often involves multiple distinct metrics to assess different desirable properties, such as image quality, conditional consistency, and intra-conditioning diversity.", "In this setting, model benchmarking becomes a challenge, as each metric may indicate a different \"best\" model.", "In this paper, we propose the Frechet Joint Distance (FJD), which is defined as the Frechet distance between joint distributions of images and conditioning, allowing it to implicitly capture the aforementioned properties in a single metric.", "We conduct proof-of-concept experiments on a controllable synthetic dataset, which consistently highlight the benefits of FJD when compared to currently established metrics.", "Moreover, we use the newly introduced metric to compare existing cGAN-based models for a variety of conditioning modalities (e.g. class labels, object masks, bounding boxes, images, and text captions).", "We show that FJD can be used as a promising single metric for model benchmarking." ]
[ 0, 0, 0, 0, 0, 0, 1 ]
[ 0.05882352590560913, 0.30434781312942505, 0.11428570747375488, 0.22641508281230927, 0.0952380895614624, 0.1599999964237213, 0.34285715222358704 ]
rylxpA4YwH
[ "We propose a new metric for evaluating conditional GANs that captures image quality, conditional consistency, and intra-conditioning diversity in a single measure." ]
[ "Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL.", "On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori.", "However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning.", "To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions.", "TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values.", "We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network.", "Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the tree.", "We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games.", "Furthermore, we present ablation studies that demonstrate the effect of different auxiliary losses on learning transition models." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2666666507720947, 0.09302324801683426, 0.1666666567325592, 0.18518517911434174, 0.1538461446762085, 0.2222222238779068, 0.22727271914482117, 0.15094339847564697, 0.1860465109348297 ]
H1dh6Ax0Z
[ "We present TreeQN and ATreeC, new architectures for deep reinforcement learning in discrete-action domains that integrate differentiable on-line tree planning into the action-value function or policy." ]
[ "Multi-label classification (MLC) is the task of assigning a set of target labels for a given sample.", "Modeling the combinatorial label interactions in MLC has been a long-haul challenge.", "Recurrent neural network (RNN) based encoder-decoder models have shown state-of-the-art performance for solving MLC.", "However, the sequential nature of modeling label dependencies through an RNN limits its ability in parallel computation, predicting dense labels, and providing interpretable results.", "In this paper, we propose Message Passing Encoder-Decoder (MPED) Networks, aiming to provide fast, accurate, and interpretable MLC.", "MPED networks model the joint prediction of labels by replacing all RNNs in the encoder-decoder architecture with message passing mechanisms and dispense with autoregressive inference entirely. ", "The proposed models are simple, fast, accurate, interpretable, and structure-agnostic (can be used on known or unknown structured data).", "Experiments on seven real-world MLC datasets show the proposed models outperform autoregressive RNN models across five different metrics with a significant speedup during training and testing time." ]
[ 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.24242423474788666, 0.13333332538604736, 0.0624999962747097, 0.1904761791229248, 0.2222222238779068, 0.1395348757505417, 0.05405404791235924, 0.09090908616781235 ]
r1xYr3C5t7
[ "We propose Message Passing Encoder-Decode networks for a fast and accurate way of modelling label dependencies for multi-label classification." ]
[ "Recent few-shot learning algorithms have enabled models to quickly adapt to new tasks based on only a few training samples.", "Previous few-shot learning works have mainly focused on classification and reinforcement learning.", "In this paper, we propose a few-shot meta-learning system that focuses exclusively on regression tasks.", "Our model is based on the idea that the degree of freedom of the unknown function can be significantly reduced if it is represented as a linear combination of a set of sparsifying basis functions.", "This enables a few labeled samples to approximate the function.", "We design a Basis Function Learner network to encode basis functions for a task distribution, and a Weights Generator network to generate the weight vector for a novel task.", "We show that our model outperforms the current state of the art meta-learning methods in various regression tasks." ]
[ 0, 0, 0, 1, 0, 0, 0 ]
[ 0.21621620655059814, 0.13793103396892548, 0.24242423474788666, 0.30434781312942505, 0.2857142686843872, 0.29999998211860657, 0.22857142984867096 ]
BJxDNxSFDH
[ "We propose a method of doing few-shot regression by learning a set of basis functions to represent the function distribution." ]
[ "Large-scale pre-trained language model, such as BERT, has recently achieved great success in a wide range of language understanding tasks.", "However, it remains an open question how to utilize BERT for text generation tasks.", "In this paper, we present a novel approach to addressing this challenge in a generic sequence-to-sequence (Seq2Seq) setting.", "We first propose a new task, Conditional Masked Language Modeling (C-MLM), to enable fine-tuning of BERT on target text-generation dataset.", "The fine-tuned BERT (i.e., teacher) is then exploited as extra supervision to improve conventional Seq2Seq models (i.e., student) for text generation.", "By leveraging BERT's idiosyncratic bidirectional nature, distilling the knowledge learned from BERT can encourage auto-regressive Seq2Seq models to plan ahead, imposing global sequence-level supervision for coherent text generation.", "Experiments show that the proposed approach significantly outperforms strong baselines of Transformer on multiple text generation tasks, including machine translation (MT) and text summarization.", "Our proposed model also achieves new state-of-the-art results on the IWSLT German-English and English-Vietnamese MT datasets." ]
[ 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.09999999403953552, 0.34285715222358704, 0.10810810327529907, 0.2926829159259796, 0.23255813121795654, 0.20408162474632263, 0.22727271914482117, 0.1621621549129486 ]
Bkgz_krKPB
[ "We propose a model-agnostic way to leverage BERT for text generation and achieve improvements over Transformer on 2 tasks over 4 datasets." ]
[ "Humans have the remarkable ability to correctly classify images despite possible degradation.", "Many studies have suggested that this hallmark of human vision results from the interaction between feedforward signals from bottom-up pathways of the visual cortex and feedback signals provided by top-down pathways.", "Motivated by such interaction, we propose a new neuro-inspired model, namely Convolutional Neural Networks with Feedback (CNN-F).", "CNN-F extends CNN with a feedback generative network, combining bottom-up and top-down inference to perform approximate loopy belief propagation. ", "We show that CNN-F's iterative inference allows for disentanglement of latent variables across layers.", "We validate the advantages of CNN-F over the baseline CNN.", "Our experimental results suggest that the CNN-F is more robust to image degradation such as pixel noise, occlusion, and blur. ", "Furthermore, we show that the CNN-F is capable of restoring original images from the degraded ones with high reconstruction accuracy while introducing negligible artifacts." ]
[ 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0, 0.10810810327529907, 0.1428571343421936, 0.4516128897666931, 0.07999999821186066, 0.19999998807907104, 0.1249999925494194, 0.11764705181121826 ]
rylU4mtUIS
[ "CNN-F extends CNN with a feedback generative network for robust vision." ]
[ "We develop new approximation and statistical learning theories of convolutional neural networks (CNNs) via the ResNet-type structure where the channel size, filter size, and width are fixed.", "It is shown that a ResNet-type CNN is a universal approximator and its expression ability is no worse than fully-connected neural networks (FNNs) with a \\textit{block-sparse} structure even if the size of each layer in the CNN is fixed.", "Our result is general in the sense that we can automatically translate any approximation rate achieved by block-sparse FNNs into that by CNNs.", "Thanks to the general theory, it is shown that learning on CNNs satisfies optimality in approximation and estimation of several important function classes.\n\n", "As applications, we consider two types of function classes to be estimated: the Barron class and H\\\"older class.", "We prove the clipped empirical risk minimization (ERM) estimator can achieve the same rate as FNNs even the channel size, filter size, and width of CNNs are constant with respect to the sample size.", "This is minimax optimal (up to logarithmic factors) for the H\\\"older class.", "Our proof is based on sophisticated evaluations of the covering number of CNNs and the non-trivial parameter rescaling technique to control the Lipschitz constant of CNNs to be constructed." ]
[ 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.3050847351551056, 0.8955223560333252, 0.178571417927742, 0.2711864411830902, 0.11538460850715637, 0.2461538463830948, 0.08510638028383255, 0.17241378128528595 ]
HklnzhR9YQ
[ "It is shown that ResNet-type CNNs are a universal approximator and its expression ability is not worse than fully connected neural networks (FNNs) with a \\textit{block-sparse} structure even if the size of each layer in the CNN is fixed." ]
[ "Few shot image classification aims at learning a classifier from limited labeled data.", "Generating the classification weights has been applied in many meta-learning approaches for few shot image classification due to its simplicity and effectiveness.", "However, we argue that it is difficult to generate the exact and universal classification weights for all the diverse query samples from very few training samples.", "In this work, we introduce Attentive Weights Generation for few shot learning via Information Maximization (AWGIM), which addresses current issues by two novel contributions.", "i) AWGIM generates different classification weights for different query samples by letting each of query samples attends to the whole support set.", "ii) To guarantee the generated weights adaptive to different query sample, we re-formulate the problem to maximize the lower bound of mutual information between generated weights and query as well as support data.", "As far as we can see, this is the first attempt to unify information maximization into few shot learning.", "Both two contributions are proved to be effective in the extensive experiments and we show that AWGIM is able to achieve state-of-the-art performance on benchmark datasets." ]
[ 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.2222222238779068, 0.2857142686843872, 0.2631579041481018, 0.2631579041481018, 0.1818181723356247, 0.14999999105930328, 0.3636363446712494, 0.05128204822540283 ]
BJxpIJHKwB
[ "A novel few shot learning method to generate query-specific classification weights via information maximization." ]
[ "Conversational question answering (CQA) is a novel QA task that requires the understanding of dialogue context.", "Different from traditional single-turn machine reading comprehension (MRC), CQA is a comprehensive task comprised of passage reading, coreference resolution, and contextual understanding.", "In this paper, we propose an innovative contextualized attention-based deep neural network, SDNet, to fuse context into traditional MRC models.", "Our model leverages both inter-attention and self-attention to comprehend the conversation and passage.", "Furthermore, we demonstrate a novel method to integrate the BERT contextual model as a sub-module in our network.", "Empirical results show the effectiveness of SDNet.", "On the CoQA leaderboard, it outperforms the previous best model's F1 score by 1.6%.", "Our ensemble model further improves the F1 score by 2.7%." ]
[ 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.2857142686843872, 0.19512194395065308, 0.051282044500112534, 0.06451612710952759, 0.3333333134651184, 0.07692307233810425, 0, 0 ]
SJx0F2VtwB
[ "A neural method for conversational question answering with attention mechanism and a novel usage of BERT as contextual embedder" ]
[ "Generative adversarial networks (GANs) are one of the most popular approaches when it comes to training generative models, among which variants of Wasserstein GANs are considered superior to the standard GAN formulation in terms of learning stability and sample quality.", "However, Wasserstein GANs require the critic to be 1-Lipschitz, which is often enforced implicitly by penalizing the norm of its gradient, or by globally restricting its Lipschitz constant via weight normalization techniques.", "Training with a regularization term penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient, was found to be practically infeasible in most situations.", "Inspired by Virtual Adversarial Training, we propose a method called Adversarial Lipschitz Regularization, and show that using an explicit Lipschitz penalty is indeed viable and leads to competitive performance when applied to Wasserstein GANs, highlighting an important connection between Lipschitz regularization and adversarial training." ]
[ 0, 0, 0, 1 ]
[ 0.05128204822540283, 0.060606058686971664, 0.06896551698446274, 0.09756097197532654 ]
Bke_DertPB
[ "alternative to gradient penalty" ]
[ "Multi-task learning promises to use less data, parameters, and time than training separate single-task models.", "But realizing these benefits in practice is challenging.", "In particular, it is difficult to define a suitable architecture that has enough capacity to support many tasks while not requiring excessive compute for each individual task.", "There are difficult trade-offs when deciding how to allocate parameters and layers across a large set of tasks.", "To address this, we propose a method for automatically searching over multi-task architectures that accounts for resource constraints.", "We define a parameterization of feature sharing strategies for effective coverage and sampling of architectures.", "We also present a method for quick evaluation of such architectures with feature distillation.", "Together these contributions allow us to quickly optimize for parameter-efficient multi-task models.", "We benchmark on Visual Decathlon, demonstrating that we can automatically search for and identify architectures that effectively make trade-offs between task resource requirements while maintaining a high level of final performance." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.07999999821186066, 0, 0.1111111044883728, 0, 0.29629629850387573, 0.25, 0.25, 0.1818181723356247, 0.20000000298023224 ]
B1eoyAVFwH
[ "automatic search for multi-task architectures that reduce per-task feature use" ]
[ "As distributed approaches to natural language semantics have developed and diversified, embedders for linguistic units larger than words (e.g., sentences) have come to play an increasingly important role. ", "To date, such embedders have been evaluated using benchmark tasks (e.g., GLUE) and linguistic probes. ", "We propose a comparative approach, nearest neighbor overlap (N2O), that quantifies similarity between embedders in a task-agnostic manner. ", "N2O requires only a collection of examples and is simple to understand: two embedders are more similar if, for the same set of inputs, there is greater overlap between the inputs' nearest neighbors. ", "We use N2O to compare 21 sentence embedders and show the effects of different design choices and architectures." ]
[ 0, 0, 1, 0, 0 ]
[ 0.11764705181121826, 0.09999999403953552, 0.550000011920929, 0.22641508281230927, 0.41025641560554504 ]
ByePEC4KDS
[ "We propose nearest neighbor overlap, a procedure which quantifies similarity between embedders in a task-agnostic manner, and use it to compare 21 sentence embedders." ]
[ "Generative Adversarial Networks (GANs) can achieve state-of-the-art sample quality in generative modelling tasks but suffer from the mode collapse problem.", "Variational Autoencoders (VAE) on the other hand explicitly maximize a reconstruction-based data log-likelihood forcing it to cover all modes, but suffer from poorer sample quality.", "Recent works have proposed hybrid VAE-GAN frameworks which integrate a GAN-based synthetic likelihood to the VAE objective to address both the mode collapse and sample quality issues, with limited success.", "This is because the VAE objective forces a trade-off between the data log-likelihood and divergence to the latent prior.", "The synthetic likelihood ratio term also shows instability during training.", "We propose a novel objective with a ``\"Best-of-Many-Samples\" reconstruction cost and a stable direct estimate of the synthetic likelihood.", "This enables our hybrid VAE-GAN framework to achieve high data log-likelihood and low divergence to the latent prior at the same time and shows significant improvement over both hybrid VAE-GANS and plain GANs in mode coverage and quality." ]
[ 0, 0, 0, 0, 0, 0, 1 ]
[ 0.1538461446762085, 0.13636362552642822, 0.3404255211353302, 0.2222222238779068, 0.06896550953388214, 0.277777761220932, 0.3529411852359772 ]
S1lk61BtvB
[ "We propose a new objective for training hybrid VAE-GANs which lead to significant improvement in mode coverage and quality." ]