input (string, 331–3.18k characters) | output (sequence)
---|---
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Emoji suggestion systems based on typed text have been proposed to encourage emoji usage and enrich text messaging; however, such systems’ actual effects on the chat experience remain unknown.', 'We built an Android keyboard with both lexical (word-based) and semantic (meaning-based) emoji suggestion capabilities and compared these in two different studies.', 'To investigate the effect of emoji suggestion in online conversations, we conducted a laboratory text-messaging study with 24 participants, and also a 15-day longitudinal field deployment with 18 participants.', 'We found that lexical emoji suggestions increased emoji usage by 31.5% over a keyboard without suggestions, while semantic suggestions increased emoji usage by 125.1%.', 'However, suggestion mechanisms did not affect the chatting experience significantly.', 'From these studies, we formulate a set of design guidelines for future emoji suggestion systems that better support users’ needs.']
### SUMMARY:
| [
"We built an Android keyboard with both lexical (word-based) and semantic (meaning-based) emoji suggestion capabilities and compared their effects in two different chat studies. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Conventional Generative Adversarial Networks (GANs) for text generation tend to have issues of reward sparsity and mode collapse that affect the quality and diversity of generated samples.', "To address the issues, we propose a novel self-adversarial learning (SAL) paradigm for improving GANs' performance in text generation.", 'In contrast to standard GANs that use a binary classifier as its discriminator to predict whether a sample is real or generated, SAL employs a comparative discriminator which is a pairwise classifier for comparing the text quality between a pair of samples.', 'During training, SAL rewards the generator when its currently generated sentence is found to be better than its previously generated samples.', 'This self-improvement reward mechanism allows the model to receive credits more easily and avoid collapsing towards the limited number of real samples, which not only helps alleviate the reward sparsity issue but also reduces the risk of mode collapse.', 'Experiments on text generation benchmark datasets show that our proposed approach substantially improves both the quality and the diversity, and yields more stable performance compared to the previous GANs for text generation.']
### SUMMARY:
| [
"We propose a self-adversarial learning (SAL) paradigm which improves the generator in a self-play fashion for improving GANs' performance in text generation."
] |
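The comparative-discriminator idea in the SAL example above lends itself to a small illustration. The PyTorch sketch below is a hypothetical rendering of the mechanism, not the authors' code: a pairwise classifier scores which of two sentence representations reads better, and the generator is rewarded whenever its current sample beats its own earlier sample.

```python
import torch
import torch.nn as nn

class ComparativeDiscriminator(nn.Module):
    """Pairwise classifier: probability that sample a is better than sample b."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, a, b):
        return torch.sigmoid(self.net(torch.cat([a, b], dim=-1)))

# Self-improvement reward: compare the current generation against the
# generator's own previous generation rather than against real data.
disc = ComparativeDiscriminator(dim=128)
current = torch.randn(32, 128)    # embeddings of freshly generated sentences
previous = torch.randn(32, 128)   # embeddings of earlier generated sentences
reward = disc(current, previous).squeeze(-1)   # high when judged "better than before"
print(reward.shape)   # torch.Size([32])
```

In a full training loop this reward would stand in for the usual real/fake signal in the generator's policy-gradient update.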
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Determining the number of latent dimensions is a ubiquitous problem in machine\n', 'learning.', 'In this study, we introduce a novel method that relies on SVD to discover\n', 'the number of latent dimensions.', 'The general principle behind the method is to\n', 'compare the curve of singular values of the SVD decomposition of a data set with\n', 'the randomized data set curve.', 'The inferred number of latent dimensions corresponds\n', 'to the crossing point of the two curves.', 'To evaluate our methodology, we\n', "compare it with competing methods such as Kaiser's eigenvalue-greater-than-one\n", "rule (K1), Parallel Analysis (PA), and Velicer's MAP test (Minimum Average Partial).\n", 'We also compare our method with the Silhouette Width (SW) technique, which is\n', 'used in different clustering methods to determine the optimal number of clusters.\n', 'The results on synthetic data show that Parallel Analysis and our method give\n', 'similar results and are more accurate than the other methods, and that our method gives\n', 'slightly better results than Parallel Analysis for sparse data sets.']
### SUMMARY:
| [
"In this study, we introduce a novel method that relies on SVD to discover the number of latent dimensions."
] |
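The crossing-point procedure described in the example above is simple enough to sketch directly. The NumPy snippet below is an illustrative reconstruction from the abstract (the function name and the column-shuffling choice are assumptions, not the authors' code): compare the singular values of the data with those of a randomized copy and report the first index where the randomized curve overtakes the original.

```python
import numpy as np

def estimate_latent_dims(X, seed=0):
    """Estimate latent dimensionality as the crossing point of the singular-value
    curves of X and a randomized (column-shuffled) copy of X."""
    rng = np.random.default_rng(seed)
    s_data = np.linalg.svd(X, compute_uv=False)
    X_rand = np.apply_along_axis(rng.permutation, 0, X)  # shuffle each column independently
    s_rand = np.linalg.svd(X_rand, compute_uv=False)
    mask = s_rand >= s_data                              # where the randomized curve overtakes
    return int(np.argmax(mask)) if mask.any() else len(s_data)

# Toy usage: 5 true latent dimensions plus noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 40)) + 0.1 * rng.normal(size=(500, 40))
print(estimate_latent_dims(X))   # typically close to 5
```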
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Deep learning models are often sensitive to adversarial attacks, where carefully-designed input samples can cause the system to produce incorrect decisions.', 'Here we focus on the problem of detecting attacks, rather than robust classification, since detecting that an attack occurs may be even more important than avoiding misclassification.', 'We build on advances in explainability, where activity-map-like explanations are used to justify and validate decisions, by highlighting features that are involved with a classification decision.', 'The key observation is that it is hard to create explanations for incorrect decisions.', ' We propose EXAID, a novel attack-detection approach, which uses model explainability to identify images whose explanations are inconsistent with the predicted class', '. Specifically, we use SHAP, which uses Shapley values in the space of the input image, to identify which input features contribute to a class decision', '. Interestingly, this approach does not require to modify the attacked model, and it can be applied without modelling a specific attack', '. It can therefore be applied successfully to detect unfamiliar attacks, that were unknown at the time the detection model was designed', '. We evaluate EXAID on two benchmark datasets CIFAR-10 and SVHN, and against three leading attack techniques, FGSM, PGD and C&W.', 'We find that EXAID improves over the SoTA detection methods by a large margin across a wide range of noise levels, improving detection from 70% to over 90% for small perturbations.']
### SUMMARY:
| [
"A novel adversarial detection approach, which uses explainability methods to identify images whose explanations are inconsistent with the predicted class. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We describe a simple and general neural network weight compression approach, in which the network parameters (weights and biases) are represented in a “latent” space, amounting to a reparameterization.', 'This space is equipped with a learned probability model, which is used to impose an entropy penalty on the parameter representation during training, and to compress the representation using a simple arithmetic coder after training.', 'Classification accuracy and model compressibility is maximized jointly, with the bitrate--accuracy trade-off specified by a hyperparameter.', 'We evaluate the method on the MNIST, CIFAR-10 and ImageNet classification benchmarks using six distinct model architectures.', 'Our results show that state-of-the-art model compression can be achieved in a scalable and general way without requiring complex procedures such as multi-stage training.']
### SUMMARY:
| [
"An end-to-end trainable model compression method optimizing accuracy jointly with the expected model size."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Neural networks can converge faster with help from a smarter batch selection strategy.', 'In this regard, we propose Ada-Boundary, a novel adaptive-batch selection algorithm that constructs an effective mini-batch according to the learning progress of the model. Our key idea is to present confusing samples for which the true label is hard to predict.', 'Thus, the samples near the current decision boundary are considered as the most effective to expedite convergence.', 'Taking advantage of our design, Ada-Boundary maintains its dominance across various degrees of training difficulty.', 'We demonstrate the advantage of Ada-Boundary by extensive experiments using two convolutional neural networks for three benchmark data sets.', 'The experimental results show that Ada-Boundary improves the training time by up to 31.7% compared with the state-of-the-art strategy and by up to 33.5% compared with the baseline strategy.']
### SUMMARY:
| [
"We suggest a smart batch selection technique called Ada-Boundary."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['State-of-the-art sound event classification relies on neural networks to learn the associations between class labels and audio recordings within a dataset.', 'These datasets typically define an ontology to create a structure that relates these sound classes with more abstract super classes.', 'Hence, the ontology serves as a source of domain knowledge representation of sounds.', 'However, the ontology information is rarely considered, and especially underexplored for modeling neural network architectures.\n', 'We propose two ontology-based neural network architectures for sound event classification.', 'We defined a framework to design simple network architectures that preserve an ontological structure.', 'The networks are trained and evaluated using two of the most common sound event classification datasets.', 'Results show an improvement in classification performance, demonstrating the benefits of including the ontological information.']
### SUMMARY:
| [
"We present ontology-based neural network architectures for sound event classification."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In contrast to fully connected networks, Convolutional Neural Networks (CNNs) achieve efficiency by learning weights associated with local filters with a finite spatial extent.', 'An implication of this is that a filter may know what it is looking at, but not where it is positioned in the image.', 'Information concerning absolute position is inherently useful, and it is reasonable to assume that deep CNNs may implicitly learn to encode this information if there is a means to do so.', 'In this paper, we test this hypothesis revealing the surprising degree of absolute position information that is encoded in commonly used neural networks.', 'A comprehensive set of experiments show the validity of this hypothesis and shed light on how and where this information is represented while offering clues to where positional information is derived from in deep CNNs.']
### SUMMARY:
| [
"Our work shows positional information has been implicitly encoded in a network. This information is important for detecting position-dependent features, e.g. semantic and saliency."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Semantic parsing, which maps a natural language sentence into a formal machine-readable representation of its meaning, is highly constrained by the limited annotated training data.', 'Inspired by the idea of coarse-to-fine, we propose a general-to-detailed neural network (GDNN) by incorporating a cross-domain sketch (CDS) among utterances and their logic forms.', 'For utterances in different domains, the General Network will extract CDS using an encoder-decoder model in a multi-task learning setup.', 'Then for some utterances in a specific domain, the Detailed Network will generate the detailed target parts using sequence-to-sequence architecture with advanced attention to both utterance and generated CDS.', "Our experiments show that compared to direct multi-task learning, CDS has improved the performance in semantic parsing task which converts users' requests into meaning representation language (MRL).", 'We also use experiments to illustrate that CDS works by adding some constraints to the target decoding process, which further proves the effectiveness and rationality of CDS.']
### SUMMARY:
| [
"General-to-detailed neural network(GDNN) with Multi-Task Learning by incorporating cross-domain sketch(CDS) for semantic parsing"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The learnability of different neural architectures can be characterized directly by computable measures of data complexity.', 'In this paper, we reframe the problem of architecture selection as understanding how data determines the most expressive and generalizable architectures suited to that data, beyond inductive bias.', 'After suggesting algebraic topology as a measure for data complexity, we show that the power of a network to express the topological complexity of a dataset in its decision boundary is a strictly limiting factor in its ability to generalize.', 'We then provide the first empirical characterization of the topological capacity of neural networks.', 'Our empirical analysis shows that at every level of dataset complexity, neural networks exhibit topological phase transitions and stratification.', 'This observation allowed us to connect existing theory to empirically driven conjectures on the choice of architectures for a single hidden layer neural networks.']
### SUMMARY:
| [
"We show that the learnability of different neural architectures can be characterized directly by computable measures of data complexity."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Challenges in natural sciences can often be phrased as optimization problems.', 'Machine learning techniques have recently been applied to solve such problems.', 'One example in chemistry is the design of tailor-made organic materials and molecules, which requires efficient methods to explore the chemical space.', 'We present a genetic algorithm (GA) that is enhanced with a neural network (DNN) based discriminator model to improve the diversity of generated molecules and at the same time steer the GA.', 'We show that our algorithm outperforms other generative models in optimization tasks.', 'We furthermore present a way to increase interpretability of genetic algorithms, which helped us to derive design principles']
### SUMMARY:
| [
"Tackling inverse design via genetic algorithms augmented with deep neural networks. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Bidirectional Encoder Representations from Transformers (BERT) reach state-of-the-art results in a variety of Natural Language Processing tasks.', 'However, understanding of their internal functioning is still insufficient and unsatisfactory.', "In order to better understand BERT and other Transformer-based models, we present a layer-wise analysis of BERT's hidden states.", 'Unlike previous research, which mainly focuses on explaining Transformer models by their \\hbox{attention} weights, we argue that hidden states contain equally valuable information.', 'Specifically, our analysis focuses on models fine-tuned on the task of Question Answering (QA) as an example of a complex downstream task.', 'We inspect how QA models transform token vectors in order to find the correct answer.', 'To this end, we apply a set of general and QA-specific probing tasks that reveal the information stored in each representation layer.', "Our qualitative analysis of hidden state visualizations provides additional insights into BERT's reasoning process.", 'Our results show that the transformations within BERT go through phases that are related to traditional pipeline tasks.', 'The system can therefore implicitly incorporate task-specific information into its token representations.', "Furthermore, our analysis reveals that fine-tuning has little impact on the models' semantic abilities and that prediction errors can be recognized in the vector representations of even early layers."]
### SUMMARY:
| [
"We investigate hidden state activations of Transformer Models in Question Answering Tasks."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We propose a general deep reinforcement learning method and apply it to robot manipulation tasks.', 'Our approach leverages demonstration data to assist a reinforcement learning agent in learning to solve a wide range of tasks, many of them previously unsolved.', 'We train visuomotor policies end-to-end to learn a direct mapping from RGB camera inputs to joint velocities.', 'Our experiments indicate that our reinforcement and imitation approach can solve contact-rich robot manipulation tasks that neither the state-of-the-art reinforcement nor imitation learning method can solve alone.', 'We also illustrate that these policies achieved zero-shot sim2real transfer by training with large visual and dynamics variations.']
### SUMMARY:
| [
"combine reinforcement learning and imitation learning to solve complex robot manipulation tasks from pixels"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Ensembles, where multiple neural networks are trained individually and their predictions are averaged, have been shown to be widely successful for improving both the accuracy and predictive uncertainty of single neural networks.', "However, an ensemble's cost for both training and testing increases linearly with the number of networks.\n", 'In this paper, we propose BatchEnsemble, an ensemble method whose computational and memory costs are significantly lower than typical ensembles.', 'BatchEnsemble achieves this by defining each weight matrix to be the Hadamard product of a shared weight among all ensemble members and a rank-one matrix per member.', 'Unlike ensembles, BatchEnsemble is not only parallelizable across devices, where one device trains one member, but also parallelizable within a device, where multiple ensemble members are updated simultaneously for a given mini-batch.', 'Across CIFAR-10, CIFAR-100, WMT14 EN-DE/EN-FR translation, and contextual bandits tasks, BatchEnsemble yields competitive accuracy and uncertainties as typical ensembles; the speedup at test time is 3X and memory reduction is 3X at an ensemble of size 4.', 'We also apply BatchEnsemble to lifelong learning, where on Split-CIFAR-100, BatchEnsemble yields comparable performance to progressive neural networks while having a much lower computational and memory costs.', 'We further show that BatchEnsemble can easily scale up to lifelong learning on Split-ImageNet which involves 100 sequential learning tasks.']
### SUMMARY:
| [
"We introduced BatchEnsemble, an efficient method for ensembling and lifelong learning which can be used to improve the accuracy and uncertainty of any neural network like typical ensemble methods."
] |
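The rank-one factorization behind BatchEnsemble, as described in the example above, can be written in a few lines. The PyTorch sketch below is a minimal illustration under assumed shapes and names (it is not the official implementation): each member's weight is the shared weight Hadamard-multiplied by an outer product s rᵀ, so member outputs need only elementwise scalings around one shared matrix multiply.

```python
import torch
import torch.nn as nn

class BatchEnsembleLinear(nn.Module):
    """Linear layer whose per-member weight is W_shared * (s_i r_i^T)."""
    def __init__(self, d_in, d_out, n_members):
        super().__init__()
        self.w = nn.Parameter(torch.randn(d_out, d_in) * 0.02)  # shared weight
        self.r = nn.Parameter(torch.ones(n_members, d_in))      # per-member input scaling
        self.s = nn.Parameter(torch.ones(n_members, d_out))     # per-member output scaling

    def forward(self, x, member):
        # x: (batch, d_in); member: index of the ensemble member to use.
        # ((x * r) @ W^T) * s  ==  x @ (W * s r^T)^T, but needs only vector ops.
        return ((x * self.r[member]) @ self.w.t()) * self.s[member]

layer = BatchEnsembleLinear(16, 8, n_members=4)
x = torch.randn(32, 16)
outs = torch.stack([layer(x, m) for m in range(4)])   # predictions of all members
print(outs.mean(0).shape)                              # ensemble-averaged output: (32, 8)
```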
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Reinforcement learning typically requires carefully designed reward functions in order to learn the desired behavior.', 'We present a novel reward estimation method that is based on a finite sample of optimal state trajectories from expert demonstrations and can be used for guiding an agent to mimic the expert behavior.', 'The optimal state trajectories are used to learn a generative or predictive model of the “good” states distribution.', 'The reward signal is computed by a function of the difference between the actual next state acquired by the agent and the predicted next state given by the learned generative or predictive model.', 'With this inferred reward function, we perform standard reinforcement learning in the inner loop to guide the agent to learn the given task.', 'Experimental evaluations across a range of tasks demonstrate that the proposed method produces superior performance compared to standard reinforcement learning with both complete or sparse hand engineered rewards.', 'Furthermore, we show that our method successfully enables an agent to learn good actions directly from expert player video of games such as the Super Mario Bros and Flappy Bird.']
### SUMMARY:
| [
"Reward Estimation from Game Videos"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Deep neural networks have achieved impressive performance in handling complicated semantics in natural language, while mostly treated as black boxes.', 'To explain how the model handles compositional semantics of words and phrases, we study the hierarchical explanation problem.', 'We highlight the key challenge is to compute non-additive and context-independent importance for individual words and phrases.', 'We show some prior efforts on hierarchical explanations, e.g. contextual decomposition, do not satisfy the desired properties mathematically, leading to inconsistent explanation quality in different models.', 'In this paper, we propose a formal way to quantify the importance of each word or phrase to generate hierarchical explanations.', 'We modify contextual decomposition algorithms according to our formulation, and propose a model-agnostic explanation algorithm with competitive performance.', 'Human evaluation and automatic metrics evaluation on both LSTM models and fine-tuned BERT Transformer models on multiple datasets show that our algorithms robustly outperform prior works on hierarchical explanations.', 'We show our algorithms help explain compositionality of semantics, extract classification rules, and improve human trust of models.']
### SUMMARY:
| [
"We propose measurement of phrase importance and algorithms for hierarchical explanation of neural sequence model predictions"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Stochastic gradient descent (SGD) with stochastic momentum is popular in nonconvex stochastic optimization and particularly for the training of deep neural networks.', "In standard SGD, parameters are updated by improving along the path of the gradient at the current iterate on a batch of examples, where the addition of a ``momentum'' term biases the update in the direction of the previous change in parameters.", 'In non-stochastic convex optimization one can show that a momentum adjustment provably reduces convergence time in many settings, yet such results have been elusive in the stochastic and non-convex settings.', 'At the same time, a widely-observed empirical phenomenon is that in training deep networks stochastic momentum appears to significantly improve convergence time, variants of it have flourished in the development of other popular update methods, e.g. ADAM, AMSGrad, etc.', 'Yet theoretical justification for the use of stochastic momentum has remained a significant open question.', 'In this paper we propose an answer: stochastic momentum improves deep network training because it modifies SGD to escape saddle points faster and, consequently, to more quickly find a second order stationary point.', 'Our theoretical results also shed light on the related question of how to choose the ideal momentum parameter--our analysis suggests that $\\beta \\in [0,1)$ should be large (close to 1), which comports with empirical findings.', 'We also provide experimental findings that further validate these conclusions.']
### SUMMARY:
| [
"Higher momentum parameter $\\beta$ helps for escaping saddle points faster"
] |
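For reference, the stochastic heavy-ball update discussed in the example above is the standard one, sketched here in NumPy on a toy quadratic (the learning rate and β values are arbitrary illustrations).

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.05, beta=0.95):
    """Heavy-ball update: v <- beta * v - lr * grad;  w <- w + v."""
    v = beta * v - lr * grad
    return w + v, v

# Toy quadratic f(w) = 0.5 * ||w||^2, whose gradient is simply w.
w, v = np.array([5.0, -3.0]), np.zeros(2)
for _ in range(500):
    w, v = sgd_momentum_step(w, v, grad=w)
print(np.round(w, 4))   # converges toward the minimizer [0, 0]
```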
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['GANs provide a framework for training generative models which mimic a data distribution.', 'However, in many cases we wish to train a generative model to optimize some auxiliary objective function within the data it generates, such as making more aesthetically pleasing images.', 'In some cases, these objective functions are difficult to evaluate, e.g. they may require human interaction.', 'Here, we develop a system for efficiently training a GAN to increase a generic rate of positive user interactions, for example aesthetic ratings.', 'To do this, we build a model of human behavior in the targeted domain from a relatively small set of interactions, and then use this behavioral model as an auxiliary loss function to improve the generative model.', 'As a proof of concept, we demonstrate that this system is successful at improving positive interaction rates simulated from a variety of objectives, and characterize s']
### SUMMARY:
| [
"We describe how to improve an image generative model according to a slow- or difficult-to-evaluate objective, such as human feedback, which could have many applications, like making more aesthetic images."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Semi-supervised learning (SSL) is a study that efficiently exploits a large amount of unlabeled data to improve performance in conditions of limited labeled data.', 'Most of the conventional SSL methods assume that the classes of unlabeled data are included in the set of classes of labeled data.', 'In addition, these methods do not sort out useless unlabeled samples and use all the unlabeled data for learning, which is not suitable for realistic situations.', 'In this paper, we propose an SSL method called selective self-training (SST), which selectively decides whether to include each unlabeled sample in the training process.', 'It is also designed to be applied to a more real situation where classes of unlabeled data are different from the ones of the labeled data.', 'For the conventional SSL problems which deal with data where both the labeled and unlabeled samples share the same class categories, the proposed method not only performs comparable to other conventional SSL algorithms but also can be combined with other SSL algorithms.', 'While the conventional methods cannot be applied to the new SSL problems where the separated data do not share the classes, our method does not show any performance degradation even if the classes of unlabeled data are different from those of the labeled data.']
### SUMMARY:
| [
"Our proposed algorithm does not use all of the unlabeled data for the training, and it rather uses them selectively."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions.', 'Conditional generation enables interactive control, but creating new controls often requires expensive retraining.', 'In this paper, we develop a method to condition generation without retraining the model.', 'By post-hoc learning latent constraints (value functions that identify regions in latent space that generate outputs with desired attributes), we can conditionally sample from these regions with gradient-based optimization or amortized actor functions.', 'Combining attribute constraints with a universal “realism” constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder.', 'Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image.', 'Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function.']
### SUMMARY:
| [
"A new approach to conditional generation by constraining the latent space of an unconditional generative model."
] |
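The post-hoc conditioning described in the example above amounts to optimizing a latent code against a learned value function while leaving the generator untouched. The PyTorch sketch below is a toy illustration of that loop; the value function here is a stand-in, not the paper's critic.

```python
import torch

def constrain_latent(z, value_fn, steps=100, lr=0.1):
    """Nudge latent codes so a learned value function (attribute and/or
    'realism' critic) rates their decoded outputs highly, without retraining
    the generator. value_fn maps latent codes to scores in [0, 1]."""
    z = z.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = -value_fn(z).mean()   # maximize the constraint score
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()

# Toy stand-in: "attribute present" means the first latent coordinate is large.
value_fn = lambda z: torch.sigmoid(5.0 * z[:, 0])
z0 = torch.randn(8, 16)
z_star = constrain_latent(z0, value_fn)
print(z0[:, 0].mean().item(), z_star[:, 0].mean().item())   # first coordinate increases
```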
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data.', 'These models have obtained notable gains in accuracy across many NLP tasks.', 'However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption.', 'As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware.', 'In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP.', 'Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice.']
### SUMMARY:
| [
"We quantify the energy cost in terms of money (cloud credits) and carbon footprint of training recently successful neural network models for NLP. Costs are high."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Many models based on the Variational Autoencoder are proposed to achieve disentangled latent variables in inference.', 'However, most current work is focusing on designing powerful disentangling regularizers, while the given number of dimensions for the latent representation at initialization could severely influence the disentanglement.', 'Thus, a pruning mechanism is introduced, aiming at automatically seeking the intrinsic dimension of the data while promoting disentangled representations.', 'The proposed method is validated on MPI3D and MNIST, where it advances state-of-the-art methods in disentanglement, reconstruction, and robustness.', 'The code is provided at https://github.com/WeyShi/FYP-of-Disentanglement.']
### SUMMARY:
| [
"The Pruning VAE is proposed to search for disentangled variables with intrinsic dimension."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We explore the properties of byte-level recurrent language models.', 'When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts.', 'Specifically, we find a single unit which performs sentiment analysis.', 'These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank.', 'They are also very data efficient.', 'When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets.', 'We also demonstrate the sentiment unit has a direct influence on the generative process of the model.', 'Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment.']
### SUMMARY:
| [
"Byte-level recurrent language models learn high-quality domain specific representations of text."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
[' Discrete latent-variable models, while applicable in a variety of settings, can often be difficult to learn.', 'Sampling discrete latent variables can result in high-variance gradient estimators for two primary reasons:', '1) branching on the samples within the model, and', '2) the lack of a pathwise derivative for the samples.', 'While current state-of-the-art methods employ control-variate schemes for the former and continuous-relaxation methods for the latter, their utility is limited by the complexities of implementing and training effective control-variate schemes and the necessity of evaluating (potentially exponentially) many branch paths in the model.', 'Here, we revisit the Reweighted Wake Sleep (RWS; Bornschein and Bengio, 2015) algorithm, and through extensive evaluations, show that it circumvents both these issues, outperforming current state-of-the-art methods in learning discrete latent-variable models.', 'Moreover, we observe that, unlike the Importance-weighted Autoencoder, RWS learns better models and inference networks with increasing numbers of particles, and that its benefits extend to continuous latent-variable models as well.', 'Our results suggest that RWS is a competitive, often preferable, alternative for learning deep generative models.']
### SUMMARY:
| [
"Empirical analysis and explanation of particle-based gradient estimators for approximate inference with deep generative models."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The objective in deep extreme multi-label learning is to jointly learn feature representations and classifiers to automatically tag data points with the most relevant subset of labels from an extremely large label set.', 'Unfortunately, state-of-the-art deep extreme classifiers are either not scalable or inaccurate for short text documents.', ' This paper develops the DeepXML algorithm which addresses both limitations by introducing a novel architecture that splits training of head and tail labels', '. DeepXML increases accuracy', 'by (a) learning word embeddings on head labels and transferring them through a novel residual connection to data impoverished tail labels', '; (b) increasing the amount of negative training data available by extending state-of-the-art negative sub-sampling techniques; and', '(c) re-ranking the set of predicted labels to eliminate the hardest negatives for the original classifier.', 'All of these contributions are implemented efficiently by extending the highly scalable Slice algorithm for pretrained embeddings to learn the proposed DeepXML architecture.', 'As a result, DeepXML could efficiently scale to problems involving millions of labels that were beyond the pale of state-of-the-art deep extreme classifiers as it could be more than 10x faster at training than XML-CNN and AttentionXML.', 'At the same time, DeepXML was also empirically determined to be up to 19% more accurate than leading techniques for matching search engine queries to advertiser bid phrases.']
### SUMMARY:
| [
"Scalable and accurate deep multi label learning with millions of labels."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
["Robust estimation under Huber's $\\epsilon$-contamination model has become an important topic in statistics and theoretical computer science.", "Rate-optimal procedures such as Tukey's median and other estimators based on statistical depth functions are impractical because of their computational intractability.", 'In this paper, we establish an intriguing connection between f-GANs and various depth functions through the lens of f-Learning.', 'Similar to the derivation of f-GAN, we show that these depth functions that lead to rate-optimal robust estimators can all be viewed as variational lower bounds of the total variation distance in the framework of f-Learning.', 'This connection opens the door of computing robust estimators using tools developed for training GANs.', "In particular, we show that a JS-GAN that uses a neural network discriminator with at least one hidden layer is able to achieve the minimax rate of robust mean estimation under Huber's $\\epsilon$-contamination model.", 'Interestingly, the hidden layers of the neural net structure in the discriminator class are shown to be necessary for robust estimation.']
### SUMMARY:
| [
"GANs are shown to provide us a new effective robust mean estimate against agnostic contaminations with both statistical optimality and practical tractability."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Long Short-Term Memory (LSTM) is one of the most powerful sequence models.', 'Despite the strong performance, however, it lacks the nice interpretability as in state space models.', 'In this paper, we present a way to combine the best of both worlds by introducing State Space LSTM (SSL), which generalizes the earlier work \\cite{zaheer2017latent} of combining topic models with LSTM.', 'However, unlike \\cite{zaheer2017latent}, we do not make any factorization assumptions in our inference algorithm.', 'We present an efficient sampler based on the sequential Monte Carlo (SMC) method that draws from the joint posterior directly.', 'Experimental results confirm the superiority and stability of this SMC inference algorithm on a variety of domains.']
### SUMMARY:
| [
"We present State Space LSTM models, a combination of state space models and LSTMs, and propose an inference algorithm based on sequential Monte Carlo. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Given a large database of concepts but only one or a few examples of each, can we learn models for each concept that are not only generalisable, but interpretable?', 'In this work, we aim to tackle this problem through hierarchical Bayesian program induction.', 'We present a novel learning algorithm which can infer concepts as short, generative, stochastic programs, while learning a global prior over programs to improve generalisation and a recognition network for efficient inference.', 'Our algorithm, Wake-Sleep-Remember (WSR), combines gradient learning for continuous parameters with neurally-guided search over programs.', 'We show that WSR learns compelling latent programs in two tough symbolic domains: cellular automata and Gaussian process kernels.', 'We also collect and evaluate on a new dataset, Text-Concepts, for discovering structured patterns in natural text data.']
### SUMMARY:
| [
"We extend the wake-sleep algorithm and use it to learn to learn structured models from few examples, "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The knowledge regarding the function of proteins is necessary as it gives a clear picture of biological processes.', 'Nevertheless, there are many protein sequences found and added to the databases but lacking functional annotation.', 'The laboratory experiments take a considerable amount of time for annotation of the sequences.', 'This gives rise to the need to use computational techniques to classify proteins based on their functions.', 'In our work, we have collected the data from Swiss-Prot containing 40433 proteins which are grouped into 30 families.', 'We pass it to recurrent neural network (RNN), long short-term memory (LSTM) and gated recurrent unit (GRU) models and compare them by applying trigram with deep neural network and shallow neural network on the same dataset.', 'Through this approach, we could achieve a maximum of around 78% accuracy for the classification of protein families. \n']
### SUMMARY:
| [
"Proteins, amino-acid sequences, machine learning, deep learning, recurrent neural network(RNN), long short term memory(LSTM), gated recurrent unit(GRU), deep neural networks"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Spoken term detection (STD) is the task of determining whether and where a given word or phrase appears in a given segment of speech.', 'Algorithms for STD are often aimed at maximizing the gap between the scores of positive and negative examples.', 'As such they are focused on ensuring that utterances where the term appears are ranked higher than utterances where the term does not appear.', 'However, they do not determine a detection threshold between the two.', 'In this paper, we propose a new approach for setting an absolute detection threshold for all terms by introducing a new calibrated loss function.', 'The advantage of minimizing this loss function during training is that it aims at maximizing not only the relative ranking scores, but also adjusts the system to use a fixed threshold and thus enhances system robustness and maximizes the detection accuracy rates.', 'We use the new loss function in the structured prediction setting and extend the discriminative keyword spotting algorithm for learning the spoken term detector with a single threshold for all terms.', 'We further demonstrate the effectiveness of the new loss function by applying it on a deep neural Siamese network in a weakly supervised setting for template-based spoken term detection, again with a single fixed threshold.', 'Experiments with the TIMIT, WSJ and Switchboard corpora showed that our approach not only improved the accuracy rates when a fixed threshold was used but also obtained higher Area Under Curve (AUC).']
### SUMMARY:
| [
"Spoken Term Detection, using structured prediction and deep networks, implementing a new loss function that both maximizes AUC and ranks according to a predefined threshold."
] |
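One way to picture the calibrated loss described in the example above is as a hinge loss with an extra pair of terms tying scores to a single global threshold. The PyTorch sketch below is an illustrative guess at such a loss; the exact form and margin are assumptions, not the paper's definition.

```python
import torch

def calibrated_ranking_loss(pos_scores, neg_scores, b=0.0, margin=1.0):
    """Encourage pos > neg (ranking) and pos > b > neg (calibration to a single
    detection threshold b), using hinge terms."""
    rank = torch.clamp(margin - (pos_scores[:, None] - neg_scores[None, :]), min=0).mean()
    cal_pos = torch.clamp(margin - (pos_scores - b), min=0).mean()
    cal_neg = torch.clamp(margin - (b - neg_scores), min=0).mean()
    return rank + cal_pos + cal_neg

pos = torch.randn(5) + 1.0   # scores of utterances containing the term
neg = torch.randn(7) - 1.0   # scores of utterances without the term
print(calibrated_ranking_loss(pos, neg).item())
```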
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Federated learning involves jointly learning over massively distributed partitions of data generated on remote devices.', 'Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices.', 'In this work, we propose q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by resource allocation strategies in wireless networks that encourages a more fair accuracy distribution across devices in federated networks.', 'To solve q-FFL, we devise a scalable method, q-FedAvg, that can run in federated networks.', 'We validate both the improved fairness and flexibility of q-FFL and the efficiency of q-FedAvg through simulations on federated datasets.']
### SUMMARY:
| [
"We propose a novel optimization objective that encourages fairness in heterogeneous federated networks, and develop a scalable method to solve it."
] |
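The q-FFL objective described in the example above is usually written as f_q(w) = Σ_k p_k F_k(w)^(q+1) / (q+1), which reduces to the standard weighted average at q = 0 and increasingly emphasizes the worst-off devices as q grows. The snippet below evaluates this aggregate on toy per-device losses; the exact normalization is our reading of the description and should be treated as an assumption.

```python
import numpy as np

def q_ffl_objective(device_losses, p, q):
    """q-FFL aggregate: sum_k p_k * F_k^(q+1) / (q+1).
    q = 0 recovers the usual weighted average; larger q emphasizes devices with
    high loss, encouraging a fairer accuracy distribution."""
    F = np.asarray(device_losses, dtype=float)
    return np.sum(np.asarray(p) * F ** (q + 1)) / (q + 1)

losses = [0.2, 0.4, 1.5]        # per-device empirical losses
p = [1 / 3, 1 / 3, 1 / 3]       # device weights (e.g., data fractions)
for q in (0.0, 1.0, 5.0):
    print(q, round(q_ffl_objective(losses, p, q), 4))
```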
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We propose a novel autoencoding model called Pairwise Augmented GANs.', 'We train a generator and an encoder jointly and in an adversarial manner.', 'The generator network learns to sample realistic objects.', 'In turn, the encoder network at the same time is trained to map the true data distribution to the prior in latent space.', 'To ensure good reconstructions, we introduce an augmented adversarial reconstruction loss.', 'Here we train a discriminator to distinguish two types of pairs: an object with its augmentation and the one with its reconstruction.', 'We show that such adversarial loss compares objects based on the content rather than on the exact match.', 'We experimentally demonstrate that our model generates samples and reconstructions of quality competitive with state-of-the-art on datasets MNIST, CIFAR10, CelebA and achieves good quantitative results on CIFAR10.']
### SUMMARY:
| [
"We propose a novel autoencoding model with augmented adversarial reconstruction loss. We intoduce new metric for content-based assessment of reconstructions. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We propose a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables.', 'Inspired by dropout, a popular tool for regularization and model ensemble, we assign sparse priors to the weights in deep neural networks (DNN) in order to achieve automatic “dropout” and avoid over-fitting.', 'By alternatively sampling from posterior distribution through stochastic gradient Markov Chain Monte Carlo (SG-MCMC) and optimizing latent variables via stochastic approximation (SA), the trajectory of the target weights is proved to converge to the true posterior distribution conditioned on optimal latent variables.', 'This ensures a stronger regularization on the over-fitted parameter space and more accurate uncertainty quantification on the decisive variables.', 'Simulations from large-p-small-n regressions showcase the robustness of this method when applied to models with latent variables.', 'Additionally, its application on the convolutional neural networks (CNN) leads to state-of-the-art performance on MNIST and Fashion MNIST datasets and improved resistance to adversarial attacks.']
### SUMMARY:
| [
"a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Activity of populations of sensory neurons carries stimulus information in both the temporal and the spatial dimensions.', 'This poses the question of how to compactly represent all the information that the population codes carry across all these dimensions.', "Here, we developed an analytical method to factorize a large number of retinal ganglion cells' spike trains into a robust low-dimensional representation that captures efficiently both their spatial and temporal information.", 'In particular, we extended previously used single-trial space-by-time tensor decomposition based on non-negative matrix factorization to efficiently discount pre-stimulus baseline activity.', 'On data recorded from retinal ganglion cells with strong pre-stimulus baseline, we showed that in situations where the stimulus elicits a strong change in firing rate, our extensions yield a boost in stimulus decoding performance.', 'Our results thus suggest that taking into account the baseline can be important for finding a compact information-rich representation of neural activity.']
### SUMMARY:
| [
"We extended single-trial space-by-time tensor decomposition based on non-negative matrix factorization to efficiently discount pre-stimulus baseline activity that improves decoding performance on data with non-negligible baselines."
] |
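A rough picture of the space-by-time decomposition with baseline discounting described above: subtract the mean pre-stimulus rate, keep the result non-negative, and factorize with NMF. The snippet below uses scikit-learn's NMF on synthetic firing rates; the specific baseline-handling step (subtract and clip at zero) is an illustrative simplification, not the paper's exact extension.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Toy data: trials x (neurons * time bins) firing-rate matrix with a baseline.
rates = rng.poisson(lam=5.0, size=(60, 200)).astype(float)       # stimulus window
baseline = rng.poisson(lam=4.0, size=(60, 200)).mean(axis=0)     # mean pre-stimulus rate

# Discount the baseline, keeping the matrix non-negative for NMF.
discounted = np.clip(rates - baseline, 0.0, None)

model = NMF(n_components=5, init="nndsvda", max_iter=500)
trial_coefficients = model.fit_transform(discounted)   # low-dimensional single-trial code
modules = model.components_                             # spatiotemporal modules
print(trial_coefficients.shape, modules.shape)          # (60, 5), (5, 200)
```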
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We propose a framework for extreme learned image compression based on Generative Adversarial Networks (GANs), obtaining visually pleasing images at significantly lower bitrates than previous methods.', 'This is made possible through our GAN formulation of learned compression combined with a generator/decoder which operates on the full-resolution image and is trained in combination with a multi-scale discriminator.', 'Additionally, if a semantic label map of the original image is available, our method can fully synthesize unimportant regions in the decoded image such as streets and trees from the label map, therefore only requiring the storage of the preserved region and the semantic label map.', 'A user study confirms that for low bitrates, our approach is preferred to state-of-the-art methods, even when they use more than double the bits.']
### SUMMARY:
| [
"GAN-based extreme image compression method using less than half the bits of the SOTA engineered codec while preserving visual quality"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In learning to rank, one is interested in optimising the global ordering of a list of items according to their utility for users.', 'Popular approaches learn a scoring function that scores items individually (i.e. without the context of other items in the list) by optimising a pointwise, pairwise or listwise loss.', 'The list is then sorted in the descending order of the scores.', 'Possible interactions between items present in the same list are taken into account in the training phase at the loss level.', 'However, during inference, items are scored individually, and possible interactions between them are not considered.', 'In this paper, we propose a context-aware neural network model that learns item scores by applying a self-attention mechanism.', 'The relevance of a given item is thus determined in the context of all other items present in the list, both in training and in inference.', 'Finally, we empirically demonstrate significant performance gains of self-attention based neural architecture over Multi-Layer Perceptron baselines.', 'This effect is consistent across popular pointwise, pairwise and listwise losses on datasets with both implicit and explicit relevance feedback.']
### SUMMARY:
| [
"Learning to rank using the Transformer architecture."
] |
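The context-aware scoring idea in the example above can be sketched with a single self-attention layer that lets every item attend to the rest of the list before being scored. The PyTorch snippet below is a minimal single-head illustration; the dimensions and scoring head are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SelfAttentionRanker(nn.Module):
    """Score each item of a list in the context of all other items in that list."""
    def __init__(self, d_feat, d_model=64):
        super().__init__()
        self.embed = nn.Linear(d_feat, d_model)
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.score = nn.Linear(d_model, 1)

    def forward(self, items):                        # items: (batch, list_len, d_feat)
        h = self.embed(items)
        attn = torch.softmax(
            self.q(h) @ self.k(h).transpose(-2, -1) / h.shape[-1] ** 0.5, dim=-1)
        h = attn @ self.v(h)                         # each item attends to the whole list
        return self.score(h).squeeze(-1)             # (batch, list_len) relevance scores

ranker = SelfAttentionRanker(d_feat=10)
scores = ranker(torch.randn(2, 8, 10))               # 2 lists of 8 candidate items
print(scores.argsort(dim=-1, descending=True))       # predicted ranking per list
```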
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We propose an active learning algorithmic architecture, capable of organizing its learning process in order to achieve a field of complex tasks by learning sequences of primitive motor policies : Socially Guided Intrinsic Motivation with Procedure Babbling (SGIM-PB).', 'The learner can generalize over its experience to continuously learn new outcomes, by choosing actively what and how to learn guided by empirical measures of its own progress.', 'In this paper, we are considering the learning of a set of interrelated complex outcomes hierarchically organized.\n\n', 'We introduce a new framework called "procedures", which enables the autonomous discovery of how to combine previously learned skills in order to learn increasingly more complex motor policies (combinations of primitive motor policies).', 'Our architecture can actively decide which outcome to focus on and which exploration strategy to apply.', "Those strategies could be autonomous exploration, or active social guidance, where it relies on the expertise of a human teacher providing demonstrations at the learner's request.", 'We show on a simulated environment that our new architecture is capable of tackling the learning of complex motor policies, to adapt the complexity of its policies to the task at hand.', 'We also show that our "procedures" increases the agent\'s capability to learn complex tasks.']
### SUMMARY:
| [
"The paper describes a strategic intrinsically motivated learning algorithm which tackles the learning of complex motor policies."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Monte Carlo Tree Search (MCTS) has achieved impressive results on a range of discrete environments, such as Go, Mario and Arcade games, but it has not yet fulfilled its true potential in continuous domains.', 'In this work, we introduce TPO, a tree search based policy optimization method for continuous environments.', 'TPO takes a hybrid approach to policy optimization. ', 'Building the MCTS tree in a continuous action space and updating the policy gradient using off-policy MCTS trajectories are non-trivial.', 'To overcome these challenges, we propose limiting the tree search branching factor by drawing only a few action samples from the policy distribution and define a new loss function based on the trajectories’ mean and standard deviations. ', 'Our approach led to some non-intuitive findings. ', 'MCTS training generally requires a large number of samples and simulations.', 'However, we observed that bootstrapping tree search with a pre-trained policy allows us to achieve high quality results with a low MCTS branching factor and a small number of simulations.', 'Without the proposed policy bootstrapping, continuous MCTS would require a much larger branching factor and simulation count, rendering it prohibitively expensive.', 'In our experiments, we use PPO as our baseline policy optimization algorithm.', 'TPO significantly improves the policy on nearly all of our benchmarks. ', 'For example, in complex environments such as Humanoid, we achieve a 2.5× improvement over the baseline algorithm.']
### SUMMARY:
| [
"We use MCTS to further optimize a bootstrapped policy for continuous action spaces under a policy iteration setting."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The variational autoencoder (VAE) has found success in modelling the manifold of natural images on certain datasets, allowing meaningful images to be generated while interpolating or extrapolating in the latent code space, but it is unclear whether similar capabilities are feasible for text considering its discrete nature.', 'In this work, we investigate the reason why unsupervised learning of controllable representations fails for text.', 'We find that traditional sequence VAEs can learn disentangled representations through their latent codes to some extent, but they often fail to properly decode when the latent factor is being manipulated, because the manipulated codes often land in holes or vacant regions in the aggregated posterior latent space, which the decoding network is not trained to process.', 'Both as a validation of the explanation and as a fix to the problem, we propose to constrain the posterior mean to a learned probability simplex, and performs manipulation within this simplex.', 'Our proposed method mitigates the latent vacancy problem and achieves the first success in unsupervised learning of controllable representations for text.', 'Empirically, our method significantly outperforms unsupervised baselines and is competitive with strong supervised approaches on text style transfer.', 'Furthermore, when switching the latent factor (e.g., topic) during a long sentence generation, our proposed framework can often complete the sentence in a seemingly natural way -- a capability that has never been attempted by previous methods.']
### SUMMARY:
| [
"why previous VAEs on text cannot learn controllable latent representation as on images, as well as a fix to enable the first success towards controlled text generation without supervision"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this paper we developed a hierarchical network model, called Hierarchical Prediction Network (HPNet) to understand how spatiotemporal memories might be learned and encoded in a representational hierarchy for predicting future video frames.', 'The model is inspired by the feedforward, feedback and lateral recurrent circuits in the mammalian hierarchical visual system.', 'It assumes that spatiotemporal memories are encoded in the recurrent connections within each level and between different levels of the hierarchy.', 'The model contains a feed-forward path that computes and encodes spatiotemporal features of successive complexity and a feedback path that projects interpretation from a higher level to the level below.', "Within each level, the feed-forward path and the feedback path intersect in a recurrent gated circuit that integrates their signals as well as the circuit's internal memory states to generate a prediction of the incoming signals.", 'The network learns by comparing the incoming signals with its prediction, updating its internal model of the world by minimizing the prediction errors at each level of the hierarchy in the style of {\\em predictive self-supervised learning}. The network processes data in blocks of video frames rather than a frame-to-frame basis. ', 'This allows it to learn relationships among movement patterns, yielding state-of-the-art performance in long range video sequence predictions in benchmark datasets.', 'We observed that hierarchical interaction in the network introduces sensitivity to memories of global movement patterns even in the population representation of the units in the earliest level.', 'Finally, we provided neurophysiological evidence, showing that neurons in the early visual cortex of awake monkeys exhibit very similar sensitivity and behaviors.', 'These findings suggest that predictive self-supervised learning might be an important principle for representational learning in the visual cortex. ']
### SUMMARY:
| [
"A new hierarchical cortical model for encoding spatiotemporal memory and video prediction"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Saliency maps are often used to suggest explanations of the behavior of deep reinforcement learning (RL) agents.', 'However, the explanations derived from saliency maps are often unfalsifiable and can be highly subjective.', 'We introduce an empirical approach grounded in counterfactual reasoning to test the hypotheses generated from saliency maps and show that explanations suggested by saliency maps are often not supported by experiments.', 'Our experiments suggest that saliency maps are best viewed as an exploratory tool rather than an explanatory tool.']
### SUMMARY:
| [
"Proposing a new counterfactual-based methodology to evaluate the hypotheses generated from saliency maps about deep RL agent behavior. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['One of the unresolved questions in deep learning is the nature of the solutions that are being discovered.', 'We investigate the collection of solutions reached by the same network architecture, with different random initialization of weights and random mini-batches.', 'These solutions are shown to be rather similar - more often than not, each train and test example is either classified correctly by all the networks, or by none at all. ', 'Surprisingly, all the network instances seem to share the same learning dynamics, whereby initially the same train and test examples are correctly recognized by the learned model, followed by other examples which are learned in roughly the same order.', 'When extending the investigation to heterogeneous collections of neural network architectures, once again examples are seen to be learned in the same order irrespective of architecture, although the more powerful architecture may continue to learn and thus achieve higher accuracy.', 'This pattern of results remains true even when the composition of classes in the test set is unrelated to the train set, for example, when using out of sample natural images or even artificial images.', 'To show the robustness of these phenomena we provide an extensive summary of our empirical study, which includes hundreds of graphs describing tens of thousands of networks with varying NN architectures, hyper-parameters and domains.', 'We also discuss cases where this pattern of similarity breaks down, which show that the reported similarity is not an artifact of optimization by gradient descent.', 'Rather, the observed pattern of similarity is characteristic of learning complex problems with big networks.', 'Finally, we show that this pattern of similarity seems to be strongly correlated with effective generalization.']
### SUMMARY:
| [
"Most neural networks approximate the same classification function, even across architectures, through all stages of learning."
] |
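The record above hinges on a simple per-example measurement: across independently trained networks, how often is a given test example classified correctly by all of them, or by none? The sketch below shows only how that agreement statistic is computed; a small scikit-learn MLP on synthetic data is a hypothetical stand-in for the paper's deep networks and image datasets.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Train the same architecture with several random seeds and measure
# per-example agreement on the test set (illustrative setup only).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

runs = []
for seed in range(5):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                        random_state=seed).fit(X_tr, y_tr)
    runs.append(clf.predict(X_te) == y_te)

correct = np.stack(runs)                          # (n_runs, n_test) booleans
print("solved by every run:", correct.all(axis=0).mean())
print("solved by no run:   ", (~correct).all(axis=0).mean())
print("per-run accuracies: ", correct.mean(axis=1).round(3))
```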
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We propose a unified framework for building unsupervised representations of individual objects or entities (and their compositions), by associating with each object both a distributional as well as a point estimate (vector embedding).', 'This is made possible by the use of optimal transport, which allows us to build these associated estimates while harnessing the underlying geometry of the ground space.', 'Our method gives a novel perspective for building rich and powerful feature representations that simultaneously capture uncertainty (via a distributional estimate) and interpretability (with the optimal transport map).', 'As a guiding example, we formulate unsupervised representations for text, in particular for sentence representation and entailment detection.', 'Empirical results show strong advantages gained through the proposed framework.', 'This approach can be used for any unsupervised or supervised problem (on text or other modalities) with a co-occurrence structure, such as any sequence data.', 'The key tools underlying the framework are Wasserstein distances and Wasserstein barycenters (and, hence the title!).']
### SUMMARY:
| [
"Represent each entity based on its histogram of contexts and then Wasserstein is all you need!"
] |
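To make the "histogram of contexts" view above concrete, the sketch below compares entities by an optimal-transport distance between their context histograms. It uses SciPy's one-dimensional `wasserstein_distance` with a made-up 1-D ground space and made-up co-occurrence counts; the paper's framework uses richer ground metrics and Wasserstein barycenters.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Toy ground space: 5 context words placed at scalar coordinates
# (e.g. a 1-D embedding). All numbers below are invented for illustration.
context_coords = np.array([0.0, 0.5, 1.0, 2.0, 3.0])

# Normalized co-occurrence histograms for three entities.
p_cat = np.array([10, 6, 3, 1, 0], float); p_cat /= p_cat.sum()
p_dog = np.array([8, 7, 4, 1, 0], float); p_dog /= p_dog.sum()
p_car = np.array([0, 1, 2, 7, 9], float); p_car /= p_car.sum()

d = lambda p, q: wasserstein_distance(context_coords, context_coords, p, q)
print("cat vs dog:", round(d(p_cat, p_dog), 3))   # similar contexts -> small distance
print("cat vs car:", round(d(p_cat, p_car), 3))   # different contexts -> large distance
```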
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this paper, we propose two methods, namely Trace-norm regression (TNR) and Stable Trace-norm Analysis (StaTNA), to improve performances of recommender systems with side information.', 'Our trace-norm regression approach extracts low-rank latent factors underlying the side information that drives user preference under different contexts.', 'Furthermore, our novel recommender framework StaTNA not only captures latent low-rank common drivers for user preferences, but also considers idiosyncratic taste for individual users.', 'We compare performances of TNR and StaTNA on the MovieLens datasets against state-of-the-art models, and demonstrate that StaTNA and TNR in general outperform these methods.']
### SUMMARY:
| [
"Methodologies for recommender systems with side information based on trace-norm regularization"
] |
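The computational core of trace-norm (nuclear-norm) regularization, as in the TNR record above, is singular-value soft-thresholding inside a proximal-gradient loop. The sketch below runs that loop on a toy low-rank regression problem; the data, regularization strength, and step size are invented, and the paper's TNR/StaTNA models are richer than this.

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: proximal operator of tau * ||M||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy trace-norm regression: recover a low-rank W from Y ~ X @ W by
# proximal gradient descent on 0.5*||Y - XW||_F^2 + lam*||W||_*.
rng = np.random.default_rng(0)
n, d, k, rank = 200, 30, 15, 3
W_true = rng.normal(size=(d, rank)) @ rng.normal(size=(rank, k))
X = rng.normal(size=(n, d))
Y = X @ W_true + 0.1 * rng.normal(size=(n, k))

lam = 20.0
step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant of the gradient
W = np.zeros((d, k))
for _ in range(300):
    grad = X.T @ (X @ W - Y)
    W = svt(W - step * grad, step * lam)

print("rank of estimate:", np.linalg.matrix_rank(W))
print("relative error:  ", np.linalg.norm(W - W_true) / np.linalg.norm(W_true))
```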
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.', 'Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text.', 'These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents.', 'Moreover, they provide a learning environment in which these skills can be acquired through interactions with an environment rather than using fixed corpora. \n', 'One aspect that makes these games particularly challenging for learning agents is the combinatorially large action space.\n', 'Existing methods for solving text-based games are limited to games that are either very simple or have an action space restricted to a predetermined set of admissible actions.', 'In this work, we propose to use the exploration approach of Go-Explore (Ecoffet et al., 2019) for solving text-based games.', 'More specifically, in an initial exploration phase, we first extract trajectories with high rewards, after which we train a policy to solve the game by imitating these trajectories.\n', 'Our experiments show that this approach outperforms existing solutions in solving text-based games, and it is more sample efficient in terms of the number of interactions with the environment.', 'Moreover, we show that the learned policy can generalize better than existing solutions to unseen games without using any restriction on the action space.']
### SUMMARY:
| [
"This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The recent “Lottery Ticket Hypothesis” paper by Frankle & Carbin showed that a simple approach to creating sparse networks (keep the large weights) results in models that are trainable from scratch, but only when starting from the same initial weights.', 'The performance of these networks often exceeds the performance of the non-sparse base model, but for reasons that were not well understood.', 'In this paper we study the three critical components of the Lottery Ticket (LT) algorithm, showing that each may be varied significantly without impacting the overall results.', 'Ablating these factors leads to new insights for why LT networks perform as well as they do.', 'We show why setting weights to zero is important, how signs are all you need to make the re-initialized network train, and why masking behaves like training.', 'Finally, we discover the existence of Supermasks, or masks that can be applied to an untrained, randomly initialized network to produce a model with performance far better than chance (86% on MNIST, 41% on CIFAR-10).']
### SUMMARY:
| [
"In neural network pruning, zeroing pruned weights is important, sign of initialization is key, and masking can be thought of as training."
] |
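The Lottery Ticket procedure referenced above has three mechanical steps: save the initial weights, train, keep the largest-magnitude weights as a mask, then rewind to the saved init and retrain under that mask. The sketch below walks through those steps on a tiny NumPy MLP and a two-blob toy task; it is illustrative only and makes no claim about reproducing the paper's accuracy results or its Supermask findings.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.r_[np.zeros(200), np.ones(200)]

def init():
    return {"W1": rng.normal(0, 0.5, (2, 16)), "W2": rng.normal(0, 0.5, (16, 1))}

def train(params, mask, steps=400, lr=0.1):
    """Train a one-hidden-layer net with pruned weights held at zero."""
    W1, W2 = params["W1"].copy(), params["W2"].copy()
    for _ in range(steps):
        W1 *= mask["W1"]; W2 *= mask["W2"]
        h = np.maximum(X @ W1, 0)                    # ReLU hidden layer
        p = 1 / (1 + np.exp(-(h @ W2).ravel()))      # sigmoid output
        g = (p - y)[:, None] / len(y)                # grad of mean log-loss wrt logits
        dW2 = h.T @ g
        dW1 = X.T @ ((g @ W2.T) * (h > 0))
        W1 -= lr * dW1 * mask["W1"]; W2 -= lr * dW2 * mask["W2"]
    return {"W1": W1, "W2": W2}, ((p > 0.5) == y).mean()

init_params = init()
ones = {k: np.ones_like(v) for k, v in init_params.items()}
dense, acc_dense = train(init_params, ones)

# Keep the top 20% largest-magnitude weights of the trained dense network.
mask = {k: (np.abs(v) >= np.quantile(np.abs(v), 0.8)).astype(float)
        for k, v in dense.items()}
_, acc_ticket = train(init_params, mask)             # rewind to the same init
print(f"dense acc: {acc_dense:.2f}, 80%-pruned ticket acc: {acc_ticket:.2f}")
```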
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Fine-tuning with pre-trained models has achieved exceptional results for many language tasks.', 'In this study, we focused on one such self-attention network model, namely BERT, which has performed well in terms of stacking layers across diverse language-understanding benchmarks.', 'However, in many downstream tasks, information between layers is ignored by BERT for fine-tuning.', 'In addition, although self-attention networks are well-known for their ability to capture global dependencies, room for improvement remains in terms of emphasizing the importance of local contexts.', 'In light of these advantages and disadvantages, this paper proposes SesameBERT, a generalized fine-tuning method that (1) enables the extraction of global information among all layers through Squeeze and Excitation and (2) enriches local information by capturing neighboring contexts via Gaussian blurring.', 'Furthermore, we demonstrated the effectiveness of our approach in the HANS dataset, which is used to determine whether models have adopted shallow heuristics instead of learning underlying generalizations.', 'The experiments revealed that SesameBERT outperformed BERT with respect to GLUE benchmark and the HANS evaluation set.']
### SUMMARY:
| [
"We proposed SesameBERT, a generalized fine-tuning method that enables the extraction of global information among all layers through Squeeze and Excitation and enriches local information by capturing neighboring contexts via Gaussian blurring."
] |
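One of the two ingredients named above is a Squeeze-and-Excitation style gate that pools information across all layers. The sketch below shows the generic squeeze/excite/re-weight pattern over a stack of layer outputs; the shapes, random weights, and aggregation are hypothetical and are not the trained SesameBERT computation (which also adds the Gaussian-blurring branch).

```python
import numpy as np

rng = np.random.default_rng(0)
num_layers, seq_len, hidden = 12, 8, 16
layer_outputs = rng.normal(size=(num_layers, seq_len, hidden))  # stand-in for BERT layers

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Squeeze: summarize each layer by global average pooling over tokens and features.
z = layer_outputs.mean(axis=(1, 2))                   # (num_layers,)

# Excitation: a small bottleneck MLP produces one gate per layer.
W1 = rng.normal(size=(num_layers, num_layers // 4))
W2 = rng.normal(size=(num_layers // 4, num_layers))
gates = sigmoid(np.maximum(z @ W1, 0) @ W2)           # (num_layers,)

# Re-weight and aggregate the layer outputs with the gates.
fused = (gates[:, None, None] * layer_outputs).sum(axis=0)   # (seq_len, hidden)
print("gate values:", np.round(gates, 2))
print("fused representation shape:", fused.shape)
```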
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['A growing number of learning methods are actually differentiable games whose players optimise multiple, interdependent objectives in parallel – from GANs and intrinsic curiosity to multi-agent RL.', 'Opponent shaping is a powerful approach to improve learning dynamics in these games, accounting for player influence on others’ updates.', 'Learning with Opponent-Learning Awareness (LOLA) is a recent algorithm that exploits this response and leads to cooperation in settings like the Iterated Prisoner’s Dilemma.', 'Although experimentally successful, we show that LOLA agents can exhibit ‘arrogant’ behaviour directly at odds with convergence.', 'In fact, remarkably few algorithms have theoretical guarantees applying across all (n-player, non-convex) games.', 'In this paper we present Stable Opponent Shaping (SOS), a new method that interpolates between LOLA and a stable variant named LookAhead.', 'We prove that LookAhead converges locally to equilibria and avoids strict saddles in all differentiable games.', 'SOS inherits these essential guarantees, while also shaping the learning of opponents and consistently either matching or outperforming LOLA experimentally.']
### SUMMARY:
| [
"Opponent shaping is a powerful approach to multi-agent learning but can prevent convergence; our SOS algorithm fixes this with strong guarantees in all differentiable games."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The rate at which medical questions are asked online significantly exceeds the capacity of qualified people to answer them, leaving many questions unanswered or inadequately answered.', 'Many of these questions are not unique, and reliable identification of similar questions would enable more efficient and effective question answering schema.', 'While many research efforts have focused on the problem of general question similarity, these approaches do not generalize well to the medical domain, where medical expertise is often required to determine semantic similarity.', 'In this paper, we show how a semi-supervised approach of pre-training a neural network on medical question-answer pairs is a particularly useful intermediate task for the ultimate goal of determining medical question similarity.', 'While other pre-training tasks yield an accuracy below 78.7% on this task, our model achieves an accuracy of 82.6% with the same number of training examples, an accuracy of 80.0% with a much smaller training set, and an accuracy of 84.5% when the full corpus of medical question-answer data is used.']
### SUMMARY:
| [
"We show that question-answer matching is a particularly good pre-training task for question-similarity and release a dataset for medical question similarity"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning.', 'We leverage the assumption that learning from different tasks, sharing common properties, is helpful to generalize the knowledge across them, resulting in a more effective feature extraction compared to learning a single task.', 'Intuitively, the resulting set of features offers performance benefits when used by Reinforcement Learning algorithms.', 'We prove this by providing theoretical guarantees that highlight the conditions for which it is convenient to share representations among tasks, extending the well-known finite-time bounds of Approximate Value-Iteration to the multi-task setting.', 'In addition, we complement our analysis by proposing multi-task extensions of three Reinforcement Learning algorithms that we empirically evaluate on widely used Reinforcement Learning benchmarks, showing significant improvements over the single-task counterparts in terms of sample efficiency and performance.']
### SUMMARY:
| [
"A study on the benefit of sharing representation in Multi-Task Reinforcement Learning."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We present a 3D capsule architecture for processing of point clouds that is equivariant with respect to the SO(3) rotation group, translation and permutation of the unordered input sets.', 'The network operates on a sparse set of local reference frames, computed from an input point cloud and establishes end-to-end equivariance through a novel 3D quaternion group capsule layer, including an equivariant dynamic routing procedure.', 'The capsule layer enables us to disentangle geometry from pose, paving the way for more informative descriptions and a structured latent space.', 'In the process, we theoretically connect the process of dynamic routing between capsules to the well-known Weiszfeld algorithm, a scheme for solving iterative re-weighted least squares (IRLS) problems with provable convergence properties, enabling robust pose estimation between capsule layers.', 'Due to the sparse equivariant quaternion capsules, our architecture allows joint object classification and orientation estimation, which we validate empirically on common benchmark datasets. \n\n']
### SUMMARY:
| [
"Deep architectures for 3D point clouds that are equivariant to SO(3) rotations, as well as translations and permutations. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Vector semantics, especially sentence vectors, have recently been used successfully in many areas of natural language processing.', 'However, relatively little work has explored the internal structure and properties of spaces of sentence vectors.', 'In this paper, we will explore the properties of sentence vectors by studying a particular real-world application: Automatic Summarization.', 'In particular, we show that cosine similarity between sentence vectors and document vectors is strongly correlated with sentence importance and that vector semantics can identify and correct gaps between the sentences chosen so far and the document.', 'In addition, we identify specific dimensions which are linked to effective summaries.', 'To our knowledge, this is the first time specific dimensions of sentence embeddings have been connected to sentence properties.', 'We also compare the features of different methods of sentence embeddings.', 'Many of these insights have applications in uses of sentence embeddings far beyond summarization.']
### SUMMARY:
| [
"A comparison and detailed analysis of various sentence embedding models through the real-world task of automatic summarization."
] |
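The key observation in the record above, that cosine similarity between a sentence vector and the document vector tracks sentence importance, reduces to a few lines of code. In the sketch below the "sentence embedding" is just a mean of random word vectors over a made-up vocabulary; the paper studies real sentence-embedding models, so this only illustrates the ranking heuristic.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=16) for w in
         "the cat sat on mat dogs bark loudly stock prices fell sharply today".split()}

def embed(sentence):
    """Toy sentence vector: mean of the (random) word vectors."""
    return np.mean([vocab[w] for w in sentence.split() if w in vocab], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

doc = ["the cat sat on the mat", "dogs bark loudly", "stock prices fell sharply today"]
doc_vec = np.mean([embed(s) for s in doc], axis=0)

# Rank sentences by similarity to the whole-document vector.
ranked = sorted(doc, key=lambda s: cosine(embed(s), doc_vec), reverse=True)
print("most document-central sentence:", ranked[0])
```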
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We present Value Propagation (VProp), a parameter-efficient differentiable planning module built on Value Iteration which can successfully be trained in a reinforcement learning fashion to solve unseen tasks, has the capability to generalize to larger map sizes, and can learn to navigate in dynamic environments.', 'We evaluate on configurations of MazeBase grid-worlds, with randomly generated environments of several different sizes.', 'Furthermore, we show that the module enables learning to plan when the environment also includes stochastic elements, providing a cost-efficient learning system to build low-level size-invariant planners for a variety of interactive navigation problems.']
### SUMMARY:
| [
"We propose Value Propagation, a novel end-to-end planner which can learn to solve 2D navigation tasks via Reinforcement Learning, and that generalizes to larger and dynamic environments."
] |
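VProp, as described above, embeds value-iteration-style computation in a differentiable module. For reference, the sketch below runs plain (non-differentiable) value iteration on a tiny grid maze, which is the underlying computation such modules learn to approximate and generalize; the maze, reward, and discount factor are made up.

```python
import numpy as np

maze = np.array([[0, 0, 0, 1],
                 [1, 1, 0, 1],
                 [0, 0, 0, 0],
                 [0, 1, 1, 0]])           # 1 = wall
goal, gamma = (3, 3), 0.9
V = np.zeros(maze.shape)

for _ in range(50):                        # iterate until values converge
    V_new = np.full(maze.shape, -np.inf)
    for r in range(4):
        for c in range(4):
            if maze[r, c] == 1:
                V_new[r, c] = 0.0          # walls carry no value
                continue
            if (r, c) == goal:
                V_new[r, c] = 1.0          # terminal reward at the goal
                continue
            for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                nr, nc = r + dr, c + dc
                if 0 <= nr < 4 and 0 <= nc < 4 and maze[nr, nc] == 0:
                    V_new[r, c] = max(V_new[r, c], gamma * V[nr, nc])
            if V_new[r, c] == -np.inf:     # no legal move from this cell
                V_new[r, c] = 0.0
    V = V_new

print(np.round(V, 2))                      # values decay with distance to the goal
```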
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recommendation is a prevalent application of machine learning that affects many users; therefore, it is crucial for recommender models to be accurate and interpretable.', 'In this work, we propose a method to both interpret and augment the predictions of black-box recommender systems.', 'In particular, we propose to extract feature interaction interpretations from a source recommender model and explicitly encode these interactions in a target recommender model, where both source and target models are black-boxes.', 'By not assuming the structure of the recommender system, our approach can be used in general settings. ', 'In our experiments, we focus on a prominent use of machine learning recommendation: ad-click prediction.', 'We found that our interaction interpretations are both informative and predictive, i.e., significantly outperforming existing recommender models.', "What's more, the same approach to interpreting interactions can provide new insights into domains even beyond recommendation."]
### SUMMARY:
| [
"Proposed a method to extract and leverage interpretations of feature interactions"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Rectified linear units, or ReLUs, have become a preferred activation function for artificial neural networks.', 'In this paper we consider the problem of learning a generative model in the presence of nonlinearity (modeled by the ReLU functions).', 'Given a set of signal vectors $\\mathbf{y}^i \\in \\mathbb{R}^d, i =1, 2, \\dots , n$, we aim to learn the network parameters, i.e., the $d\\times k$ matrix $A$, under the model $\\mathbf{y}^i = \\mathrm{ReLU}(A\\mathbf{c}^i +\\mathbf{b})$, where $\\mathbf{b}\\in \\mathbb{R}^d$ is a random bias vector, and {$\\mathbf{c}^i \\in \\mathbb{R}^k$ are arbitrary unknown latent vectors}.', 'We show that it is possible to recover the column space of $A$ within an error of $O(d)$ (in Frobenius norm) under certain conditions on the distribution of $\\mathbf{b}$.']
### SUMMARY:
| [
"We show that it is possible to recover the parameters of a 1-layer ReLU generative model from looking at samples generated by it"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Methods that calculate dense vector representations for features in unstructured data—such as words in a document—have proven to be very successful for knowledge representation.', 'We study how to estimate dense representations when multiple feature types exist within a dataset for supervised learning where explicit labels are available, as well as for unsupervised learning where there are no labels.', 'Feat2Vec calculates embeddings for data with multiple feature types enforcing that all different feature types exist in a common space.', 'In the supervised case, we show that our method has advantages over recently proposed methods, such as enabling higher prediction accuracy, and providing a way to avoid the cold-start\n', 'problem.', 'In the unsupervised case, our experiments suggest that Feat2Vec significantly outperforms existing algorithms that do not leverage the structure of the data.', 'We believe that we are the first to propose a method for learning unsupervised embeddings that leverage the structure of multiple feature types.']
### SUMMARY:
| [
"Learn dense vector representations of arbitrary types of features in labeled and unlabeled datasets"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language.', 'Specifically, we train an RNN on positive and negative examples from a regular language, and ask if there is a simple decoding function that maps states of this RNN to states of the minimal deterministic finite automaton (MDFA) for the language.', "Our experiments show that such a decoding function indeed exists, and that it maps states of the RNN not to MDFA states, but to states of an {\\em abstraction} obtained by clustering small sets of MDFA states into 'superstates'.", 'A qualitative analysis reveals that the abstraction often has a simple interpretation.', 'Overall, the results suggest a strong structural relationship between internal representations used by RNNs and finite automata, and explain the well-known ability of RNNs to recognize formal grammatical structure. \n']
### SUMMARY:
| [
"Finite Automata Can be Linearly decoded from Language-Recognizing RNNs using low coarseness abstraction functions and high accuracy decoders. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Designing accurate and efficient convolutional neural architectures for a vast amount of hardware is challenging because hardware designs are complex and diverse.', 'This paper addresses the hardware diversity challenge in Neural Architecture Search (NAS).', 'Unlike previous approaches that apply search algorithms on a small, human-designed search space without considering hardware diversity, we propose HURRICANE that explores the automatic hardware-aware search over a much larger search space and a multistep search scheme in coordinate ascent framework, to generate tailored models for different types of hardware.', 'Extensive experiments on ImageNet show that our algorithm consistently achieves a much lower inference latency with a similar or better accuracy than state-of-the-art NAS methods on three types of hardware.', 'Remarkably, HURRICANE achieves a 76.63% top-1 accuracy on ImageNet with an inference latency of only 16.5 ms for DSP, which is a 3.4% higher accuracy and a 6.35x inference speedup than FBNet-iPhoneX.', 'For VPU, HURRICANE achieves a 0.53% higher top-1 accuracy than Proxyless-mobile with a 1.49x speedup.', 'Even for well-studied mobile CPU, HURRICANE achieves a 1.63% higher top-1 accuracy than FBNet-iPhoneX with a comparable inference latency.', 'HURRICANE also reduces the training time by 54.7% on average compared to SinglePath-Oneshot.']
### SUMMARY:
| [
"We propose HURRICANE to address the challenge of hardware diversity in one-shot neural architecture search"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this paper, we present Neural Phrase-based Machine Translation (NPMT).', 'Our method explicitly models the phrase structures in output sequences using Sleep-WAke Networks (SWAN), a recently proposed segmentation-based sequence modeling method.', 'To mitigate the monotonic alignment requirement of SWAN, we introduce a new layer to perform (soft) local reordering of input sequences.', 'Different from existing neural machine translation (NMT) approaches, NPMT does not use attention-based decoding mechanisms. ', 'Instead, it directly outputs phrases in a sequential order and can decode in linear time.', 'Our experiments show that NPMT achieves superior performances on IWSLT 2014 German-English/English-German and IWSLT 2015 English-Vietnamese machine translation tasks compared with strong NMT baselines.', 'We also observe that our method produces meaningful phrases in output languages.']
### SUMMARY:
| [
"Neural phrase-based machine translation with linear decoding time"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Generative Adversarial Networks (GANs) have shown impressive results in modeling distributions over complicated manifolds such as those of natural images.', 'However, GANs often suffer from mode collapse, which means they are prone to characterize only a single or a few modes of the data distribution.', 'In order to address this problem, we propose a novel framework called LDMGAN.', 'We first introduce Latent Distribution Matching (LDM) constraint which regularizes the generator by aligning distribution of generated samples with that of real samples in latent space.', 'To make use of such latent space, we propose a regularized AutoEncoder (AE) that maps the data distribution to prior distribution in encoded space.', 'Extensive experiments on synthetic data and real world datasets show that our proposed framework significantly improves GAN’s stability and diversity.']
### SUMMARY:
| [
"We propose an AE-based GAN that alleviates mode collapse in GANs."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Achieving faster execution with shorter compilation time can foster further diversity and innovation in neural networks.', 'However, the current paradigm of executing neural networks either relies on hand-optimized libraries, traditional compilation heuristics, or very recently genetic algorithms and other stochastic methods.', 'These methods suffer from frequent costly hardware measurements rendering them not only too time consuming but also suboptimal.', 'As such, we devise a solution that can learn to quickly adapt to a previously unseen design space for code optimization, both accelerating the search and improving the output performance.', 'This solution dubbed CHAMELEON leverages reinforcement learning whose solution takes fewer steps to converge, and develops an adaptive sampling algorithm that not only focuses on the costly samples (real hardware measurements) on representative points but also uses a domain knowledge inspired logic to improve the samples itself.', 'Experimentation with real hardware shows that CHAMELEON provides 4.45×speed up in optimization time over AutoTVM, while also improving inference time of the modern deep networks by 5.6%.']
### SUMMARY:
| [
"Reinforcement learning and Adaptive Sampling for Optimized Compilation of Deep Neural Networks."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this paper, we propose a differentiable adversarial grammar model for future prediction.', 'The objective is to model a formal grammar in terms of differentiable functions and latent representations, so that their learning is possible through standard backpropagation.', 'Learning a formal grammar represented with latent terminals, non-terminals, and productions rules allows capturing sequential structures with multiple possibilities from data.\n\n', 'The adversarial grammar is designed so that it can learn stochastic production rules from the data distribution.', 'Being able to select multiple production rules leads to different predicted outcomes, thus efficiently modeling many plausible futures. ', 'We confirm the benefit of the adversarial grammar on two diverse tasks: future 3D human pose prediction and future activity prediction.', 'For all settings, the proposed adversarial grammar outperforms the state-of-the-art approaches, being able to predict much more accurately and further in the future, than prior work.']
### SUMMARY:
| [
"We design a grammar that is learned in an adversarial setting and apply it to future prediction in video."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We consider a simple and overarching representation for permutation-invariant functions of sequences (or set functions).', 'Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence.', 'This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions.', 'If carried out naively, Janossy pooling can be computationally prohibitive.', 'To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations.', 'Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions.', 'We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods.']
### SUMMARY:
| [
"We propose Janossy pooling, a method for learning deep permutation invariant functions designed to exploit relationships within the input sequence and tractable inference strategies such as a stochastic optimization procedure we call piSGD"
] |
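The Janossy pooling definition above, averaging a permutation-sensitive function over reorderings of the input, and its sampled-permutation approximation can both be shown in a few lines. The permutation-sensitive function below is a made-up tanh recurrence standing in for an RNN; the canonical-ordering and k-order-interaction approximations from the record are not shown.

```python
import itertools
import numpy as np

def f(seq):
    """Order-sensitive scan (toy stand-in for an RNN): h <- tanh(2*h + x_t)."""
    h = 0.0
    for x in seq:
        h = np.tanh(2 * h + x)
    return h

def janossy_exact(x):
    """Average f over all |x|! reorderings: exactly permutation-invariant."""
    return np.mean([f(p) for p in itertools.permutations(x)])

def janossy_sampled(x, k=20, seed=0):
    """Stochastic approximation: average f over k random permutations."""
    rng = np.random.default_rng(seed)
    return np.mean([f(rng.permutation(x)) for _ in range(k)])

x = [0.3, -1.2, 0.7, 2.0, -0.5]
print("f is order sensitive:        ", f(x), "vs", f(x[::-1]))
print("exact Janossy, two orderings:", janossy_exact(x), janossy_exact(x[::-1]))
print("sampled approximation:       ", janossy_sampled(x))
```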
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['While tasks can come with varying numbers of instances and classes in realistic settings, existing meta-learning approaches for few-shot classification assume that the number of instances per task and class is fixed.', 'Due to this restriction, they learn to equally utilize the meta-knowledge across all the tasks, even when the number of instances per task and class largely varies.', 'Moreover, they do not consider distributional difference in unseen tasks, on which the meta-knowledge may have less usefulness depending on the task relatedness.', 'To overcome these limitations, we propose a novel meta-learning model that adaptively balances the effect of the meta-learning and task-specific learning within each task.', 'Through the learning of the balancing variables, we can decide whether to obtain a solution by relying on the meta-knowledge or task-specific learning.', 'We formulate this objective into a Bayesian inference framework and tackle it using variational inference.', 'We validate our Bayesian Task-Adaptive Meta-Learning (Bayesian TAML) on two realistic task- and class-imbalanced datasets, on which it significantly outperforms existing meta-learning approaches.', 'Further ablation study confirms the effectiveness of each balancing component and the Bayesian learning framework.']
### SUMMARY:
| [
"A novel meta-learning model that adaptively balances the effect of the meta-learning and task-specific learning, and also class-specific learning within each task."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Many tasks in artificial intelligence require the collaboration of multiple agents.', 'We examine deep reinforcement learning for multi-agent domains.', 'Recent research efforts often take the form of two seemingly conflicting perspectives, the decentralized perspective, where each agent is supposed to have its own controller; and the centralized perspective, where one assumes there is a larger model controlling all agents.', 'In this regard, we revisit the idea of the master-slave architecture by incorporating both perspectives within one framework.', 'Such a hierarchical structure naturally leverages advantages from one another.', 'The idea of combining both perspectives is intuitive and can be well motivated from many real world systems, however, out of a variety of possible realizations, we highlight three key ingredients, i.e. composed action representation, learnable communication and independent reasoning.', 'With network designs to facilitate these explicitly, our proposal consistently outperforms the latest competing methods both in synthetic experiments and when applied to challenging StarCraft micromanagement tasks.']
### SUMMARY:
| [
"We revisit the idea of the master-slave architecture in multi-agent deep reinforcement learning and outperforms state-of-the-arts."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We study the implicit bias of gradient descent methods in solving a binary classification problem over a linearly separable dataset.', 'The classifier is described by a nonlinear ReLU model and the objective function adopts the exponential loss function.', 'We first characterize the landscape of the loss function and show that there can exist spurious asymptotic local minima besides asymptotic global minima.', 'We then show that gradient descent (GD) can converge to either a global or a local max-margin direction, or may diverge from the desired max-margin direction in a general context.', 'For stochastic gradient descent (SGD), we show that it converges in expectation to either the global or the local max-margin direction if SGD converges.', 'We further explore the implicit bias of these algorithms in learning a multi-neuron network under certain stationary conditions, and show that the learned classifier maximizes the margins of each sample pattern partition under the ReLU activation.']
### SUMMARY:
| [
"We study the implicit bias of gradient methods in solving a binary classification problem with nonlinear ReLU models."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions in resource-limited scenarios.', 'A widely-used practice in relevant work assumes that a smaller-norm parameter or feature plays a less informative role at the inference time.', 'In this paper, we propose a channel pruning technique for accelerating the computations of deep convolutional neural networks (CNNs) that does not critically rely on this assumption.', 'Instead, it focuses on direct simplification of the channel-to-channel computation graph of a CNN without the need of performing a computationally difficult and not-always-useful task of making high-dimensional tensors of CNN structured sparse.', 'Our approach takes two stages: first to adopt an end-to-end stochastic training method that eventually forces the outputs of some channels to be constant, and then to prune those constant channels from the original neural network by adjusting the biases of their impacting layers such that the resulting compact model can be quickly fine-tuned.', 'Our approach is mathematically appealing from an optimization perspective and easy to reproduce.', 'We experimented with our approach on several image learning benchmarks and demonstrate its interesting aspects and competitive performance.']
### SUMMARY:
| [
"A CNN model pruning method using ISTA and rescaling trick to enforce sparsity of scaling parameters in batch normalization."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Stochastic gradient descent (SGD) has been the dominant optimization method for training deep neural networks due to its many desirable properties.', 'One of the more remarkable and least understood qualities of SGD is that it generalizes relatively well\n', 'on unseen data even when the neural network has millions of parameters.', 'We hypothesize that in certain cases it is desirable to relax its intrinsic generalization properties and introduce an extension of SGD called deep gradient boosting (DGB).', 'The key idea of DGB is that back-propagated gradients inferred using the chain rule can be viewed as pseudo-residual targets of a gradient boosting problem.', 'Thus at each layer of a neural network the weight update is calculated by solving the corresponding boosting problem using a linear base learner.', 'The resulting weight update formula can also be viewed as a normalization procedure of the data that arrives at each layer during the forward pass.', 'When implemented as a separate input normalization layer (INN) the new architecture shows improved performance on image recognition tasks when compared to the same architecture without normalization layers.', 'As opposed to batch normalization (BN), INN has no learnable parameters; however, it matches its performance on CIFAR10 and ImageNet classification tasks.']
### SUMMARY:
| [
"What can we learn about training neural networks if we treat each layer as a gradient boosting problem?"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We propose to extend existing deep reinforcement learning (Deep RL) algorithms by allowing them to additionally choose sequences of actions as a part of their policy. ', 'This modification forces the network to anticipate the reward of action sequences, which, as we show, improves the exploration leading to better convergence.', 'Our proposal is simple, flexible, and can be easily incorporated into any Deep RL framework.', 'We show the power of our scheme by consistently outperforming the state-of-the-art GA3C algorithm on several popular Atari Games.']
### SUMMARY:
| [
"Anticipation improves convergence of deep reinforcement learning."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['To optimize a neural network one often thinks of optimizing its parameters, but it is ultimately a matter of optimizing the function that maps inputs to outputs.', 'Since a change in the parameters might serve as a poor proxy for the change in the function, it is of some concern that primacy is given to parameters but that the correspondence has not been tested.', 'Here, we show that it is simple and computationally feasible to calculate distances between functions in a $L^2$ Hilbert space.', 'We examine how typical networks behave in this space, and compare how parameter $\\ell^2$ distances compare to function $L^2$ distances between various points of an optimization trajectory.', 'We find that the two distances are nontrivially related.', 'In particular, the $L^2/\\ell^2$ ratio decreases throughout optimization, reaching a steady value around when test error plateaus.', 'We then investigate how the $L^2$ distance could be applied directly to optimization.', 'We first propose that in multitask learning, one can avoid catastrophic forgetting by directly limiting how much the input/output function changes between tasks.', 'Secondly, we propose a new learning rule that constrains the distance a network can travel through $L^2$-space in any one update.', 'This allows new examples to be learned in a way that minimally interferes with what has previously been learned.', 'These applications demonstrate how one can measure and regularize function distances directly, without relying on parameters or local approximations like loss curvature.']
### SUMMARY:
| [
"We find movement in function space is not proportional to movement in parameter space during optimization. We propose a new natural-gradient style optimizer to address this."
] |
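The central quantity in the record above, the L² distance between two networks as functions, can be estimated by Monte Carlo over an input distribution and compared with the ℓ² distance between their parameters. The sketch below does exactly that for two small random networks; the architecture, input distribution, and perturbation sizes are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(base=None, perturb=0.0):
    """A random 2-layer ReLU net, optionally a perturbed copy of `base`."""
    if base is None:
        return {"W1": rng.normal(size=(10, 32)), "W2": rng.normal(size=(32, 1))}
    return {k: v + perturb * rng.normal(size=v.shape) for k, v in base.items()}

def forward(net, X):
    return np.maximum(X @ net["W1"], 0) @ net["W2"]

def param_l2(a, b):
    return np.sqrt(sum(np.sum((a[k] - b[k]) ** 2) for k in a))

def function_L2(a, b, n=10000):
    """Monte Carlo estimate of E_x ||f_a(x) - f_b(x)||^2, square-rooted."""
    X = rng.normal(size=(n, 10))
    return np.sqrt(np.mean((forward(a, X) - forward(b, X)) ** 2))

net = make_net()
for eps in [0.01, 0.1, 1.0]:
    other = make_net(base=net, perturb=eps)
    print(f"perturbation {eps}: param l2 = {param_l2(net, other):8.2f}, "
          f"function L2 = {function_L2(net, other):8.2f}")
```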
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The information bottleneck (IB) problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem which maximizes the information the representation has about the task, I(T;Y), while ensuring that a minimum level of compression r is achieved (i.e., I(X;T) <= r).', 'For practical reasons the problem is usually solved by maximizing the IB Lagrangian for many values of the Lagrange multiplier, therefore drawing the IB curve (i.e., the curve of maximal I(T;Y) for a given I(X;T)) and selecting the representation of desired predictability and compression.', 'It is known that when Y is a deterministic function of X, the IB curve cannot be explored and other Lagrangians have been proposed to tackle this problem (e.g., the squared IB Lagrangian).', 'In this paper we', '(i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios;', '(ii) prove that if these Lagrangians are used, there is a one-to-one mapping between the Lagrange multiplier and the desired compression rate r for known IB curve shapes, hence, freeing from the burden of solving the optimization problem for many values of the Lagrange multiplier.']
### SUMMARY:
| [
"We introduce a general family of Lagrangians that allow exploring the IB curve in all scenarios. When these are used, and the IB curve is known, one can optimize directly for a performance/compression level directly."
] |
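One plausible way to write the family of Lagrangians referred to in item (i) above is to deform the compression term with a strictly convex, increasing function. This is a hedged reconstruction from the record's description, not a quotation of the paper's exact definition.

```latex
% Hedged sketch of the family described above (not quoted from the paper):
\mathcal{L}^{h}_{\mathrm{IB}}(T;\beta) \;=\; I(T;Y) \;-\; \beta\, h\big(I(X;T)\big),
\qquad h \ \text{monotonically increasing and strictly convex}.
```

Here h(r) = r recovers the standard IB Lagrangian, and h(r) = r^2 recovers the squared IB Lagrangian mentioned in the record.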
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning.', 'NLMs exploit the power of both neural networks---as function approximators, and logic programming---as a symbolic processor for objects with properties, relations, logic connectives, and quantifiers. ', 'After being trained on small-scale tasks (such as sorting short arrays), NLMs can recover lifted rules, and generalize to large-scale tasks (such as sorting longer arrays).', 'In our experiments, NLMs achieve perfect generalization in a number of tasks, from relational reasoning tasks on the family tree and general graphs, to decision making tasks including sorting arrays, finding shortest paths, and playing the blocks world.', 'Most of these tasks are hard to accomplish for neural networks or inductive logic programming alone.']
### SUMMARY:
| [
"We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Sequence-to-sequence (seq2seq) neural models have been actively investigated for abstractive summarization.', 'Nevertheless, existing neural abstractive systems frequently generate factually incorrect summaries and are vulnerable to adversarial information, suggesting a crucial lack of semantic understanding.', 'In this paper, we propose a novel semantic-aware neural abstractive summarization model that learns to generate high quality summaries through semantic interpretation over salient content.', 'A novel evaluation scheme with adversarial samples is introduced to measure how well a model identifies off-topic information, where our model yields significantly better performance than the popular pointer-generator summarizer.', 'Human evaluation also confirms that our system summaries are uniformly more informative and faithful as well as less redundant than the seq2seq model.']
### SUMMARY:
| [
"We propose a semantic-aware neural abstractive summarization model and a novel automatic summarization evaluation scheme that measures how well a model identifies off-topic information from adversarial samples."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The recent work of Super Characters method using two-dimensional word embedding achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach.', 'This paper borrows the idea of Super Characters method and two-dimensional embedding, and proposes a method of generating conversational response for open domain dialogues.', 'The experimental results on a public dataset shows that the proposed SuperChat method generates high quality responses.', 'An interactive demo is ready to show at the workshop.', 'And code will be available at github soon.']
### SUMMARY:
| [
"Print the input sentence and current response sentence onto an image and use fine-tuned ImageNet CNN model to predict the next response word."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Convolutional neural networks (CNNs) have been generally acknowledged as one of the driving forces for the advancement of computer vision.', 'Despite their promising performances on many tasks, CNNs still face major obstacles on the road to achieving ideal machine intelligence.', 'One is that CNNs are complex and hard to interpret.', 'Another is that standard CNNs require large amounts of annotated data, which is sometimes very hard to obtain, and it is desirable to be able to learn them from few examples.', 'In this work, we address these limitations of CNNs by developing novel, simple, and interpretable models for few-shot learning. Our models are based on the idea of encoding objects in terms of visual concepts, which are interpretable visual cues represented by the feature vectors within CNNs.', 'We first adapt the learning of visual concepts to the few-shot setting, and then uncover two key properties of feature encoding using visual concepts, which we call category sensitivity and spatial pattern.', 'Motivated by these properties, we present two intuitive models for the problem of few-shot learning.', 'Experiments show that our models achieve competitive performances, while being much more flexible and interpretable than alternative state-of-the-art few-shot learning methods.', 'We conclude that using visual concepts helps expose the natural capability of CNNs for few-shot learning.']
### SUMMARY:
| [
"We enable ordinary CNNs for few-shot learning by exploiting visual concepts which are interpretable visual cues learnt within CNNs."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recently various neural networks have been proposed for irregularly structured data such as graphs and manifolds.', 'To our knowledge, all existing graph networks have discrete depth.', 'Inspired by neural ordinary differential equation (NODE) for data in the Euclidean domain, we extend the idea of continuous-depth models to graph data, and propose graph ordinary differential equation (GODE).', 'The derivative of hidden node states are parameterized with a graph neural network, and the output states are the solution to this ordinary differential equation.', 'We demonstrate two end-to-end methods for efficient training of GODE: (1) indirect back-propagation with the adjoint method; (2) direct back-propagation through the ODE solver, which accurately computes the gradient.', 'We demonstrate that direct backprop outperforms the adjoint method in experiments.', 'We then introduce a family of bijective blocks, which enables $\\mathcal{O}(1)$ memory consumption.', 'We demonstrate that GODE can be easily adapted to different existing graph neural networks and improve accuracy.', 'We validate the performance of GODE in both semi-supervised node classification tasks and graph classification tasks.', 'Our GODE model achieves a continuous model in time, memory efficiency, accurate gradient estimation, and generalizability with different graph networks.']
### SUMMARY:
| [
"Apply ordinary differential equation model on graph structured data"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Unsupervised image-to-image translation is a recently proposed task of translating an image to a different style or domain given only unpaired image examples at training time.', 'In this paper, we formulate a new task of unsupervised video-to-video translation, which poses its own unique challenges.', 'Translating video implies learning not only the appearance of objects and scenes but also realistic motion and transitions between consecutive frames.', 'We investigate the performance of per-frame video-to-video translation using existing image-to-image translation networks, and propose a spatio-temporal 3D translator as an alternative solution to this problem.', 'We evaluate our 3D method on multiple synthetic datasets, such as moving colorized digits, as well as the realistic segmentation-to-video GTA dataset and a new CT-to-MRI volumetric images translation dataset.', 'Our results show that frame-wise translation produces realistic results on a single frame level but underperforms significantly on the scale of the whole video compared to our three-dimensional translation approach, which is better able to learn the complex structure of video and motion and continuity of object appearance.']
### SUMMARY:
| [
"Proposed new task, datasets and baselines; 3D Conv CycleGAN preserves object properties across frames; batch structure in frame-level methods matters."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Capsule networks are constrained by the parameter-expensive nature of their layers, and the general lack of provable equivariance guarantees.', 'We present a variation of capsule networks that aims to remedy this.', 'We identify that learning all pair-wise part-whole relationships between capsules of successive layers is inefficient.', 'Further, we also realise that the choice of prediction networks and the routing mechanism are both key to equivariance.', 'Based on these, we propose an alternative framework for capsule networks that learns to projectively encode the manifold of pose-variations, termed the space-of-variation (SOV), for every capsule-type of each layer.', 'This is done using a trainable, equivariant function defined over a grid of group-transformations.', 'Thus, the prediction-phase of routing involves projection into the SOV of a deeper capsule using the corresponding function.', 'As a specific instantiation of this idea, and also in order to reap the benefits of increased parameter-sharing, we use type-homogeneous group-equivariant convolutions of shallower capsules in this phase.', 'We also introduce an equivariant routing mechanism based on degree-centrality.', 'We show that this particular instance of our general model is equivariant, and hence preserves the compositional representation of an input under transformations.', 'We conduct several experiments on standard object-classification datasets that showcase the increased transformation-robustness, as well as general performance, of our model to several capsule baselines.']
### SUMMARY:
| [
"A new scalable, group-equivariant model for capsule networks that preserves compositionality under transformations, and is empirically more transformation-robust to older capsule network models."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recently deep neural networks have shown their capacity to memorize training data, even with noisy labels, which hurts generalization performance.', 'To mitigate this issue, we propose a simple but effective method that is robust to noisy labels, even with severe noise. ', 'Our objective involves a variance regularization term that implicitly penalizes the Jacobian norm of the neural network on the whole training set (including the noisy-labeled data), which encourages generalization and prevents overfitting to the corrupted labels.', 'Experiments on noisy benchmarks demonstrate that our approach achieves state-of-the-art performance with a high tolerance to severe noise.']
### SUMMARY:
| [
"The paper proposed a simple yet effective baseline for learning with noisy labels."
] |
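The record above says the proposed objective implicitly penalizes the Jacobian norm of the network over the training set. A generic way to approximate such a penalty is a finite-difference estimator over random input perturbations, sketched below on a toy network; this is an illustration of the regularization idea, not the paper's exact variance-regularization term.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(20, 64)), rng.normal(size=(64, 10))

def f(X):
    """Toy 2-layer ReLU network standing in for the real model."""
    return np.maximum(X @ W1, 0) @ W2

def jacobian_penalty(X, eps=1e-3, n_dirs=4):
    """Estimate E_x ||f(x + d) - f(x)||^2 / ||d||^2 over random directions d."""
    pen = 0.0
    for _ in range(n_dirs):
        delta = rng.normal(size=X.shape)
        delta *= eps / np.linalg.norm(delta, axis=1, keepdims=True)
        diff = f(X + delta) - f(X)
        pen += np.mean(np.sum(diff ** 2, axis=1)) / eps ** 2
    return pen / n_dirs

X = rng.normal(size=(128, 20))
print("estimated squared Jacobian norm (batch average):", jacobian_penalty(X))
# In training, this term would be added to the classification loss so that the
# model cannot fit noisy labels by becoming sharp around individual examples.
```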
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recent research suggests that neural machine translation achieves parity with professional human translation on the WMT Chinese--English news translation task.', 'We empirically test this claim with alternative evaluation protocols, contrasting the evaluation of single sentences and entire documents.', 'In a pairwise ranking experiment, human raters assessing adequacy and fluency show a stronger preference for human over machine translation when evaluating documents as compared to isolated sentences.', 'Our findings emphasise the need to shift towards document-level evaluation as machine translation improves to the degree that errors which are hard or impossible to spot at the sentence-level become decisive in discriminating quality of different translation outputs.']
### SUMMARY:
| [
"Raters prefer adequacy in human over machine translation when evaluating entire documents, but not when evaluating single sentences."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Imitation learning aims to inversely learn a policy from expert demonstrations, which has been extensively studied in the literature for both single-agent setting with Markov decision process (MDP) model, and multi-agent setting with Markov game (MG) model.', 'However, existing approaches for general multi-agent Markov games are not applicable to multi-agent extensive Markov games, where agents make asynchronous decisions following a certain order, rather than simultaneous decisions.', 'We propose a novel framework for asynchronous multi-agent generative adversarial imitation learning (AMAGAIL) under general extensive Markov game settings, and the learned expert policies are proven to guarantee subgame perfect equilibrium (SPE), a more general and stronger equilibrium than Nash equilibrium (NE).', 'The experiment results demonstrate that compared to state-of-the-art baselines, our AMAGAIL model can better infer the policy of each expert agent using their demonstration data collected from asynchronous decision-making scenarios (i.e., extensive Markov games).']
### SUMMARY:
| [
"This paper extends the multi-agent generative adversarial imitation learning to extensive-form Markov games."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Self-training is one of the earliest and simplest semi-supervised methods.', 'The key idea is to augment the original labeled dataset with unlabeled data paired with the model’s prediction.', 'Self-training has mostly been studied for classification problems.', 'However, in complex sequence generation tasks such as machine translation, it is still not clear how self-training works due to the compositionality of the target space.', 'In this work, we first show that it is not only possible but recommended to apply self-training in sequence generation.', 'Through careful examination of the performance gains, we find that the noise added on the hidden states (e.g. dropout) is critical to the success of self-training, as this acts like a regularizer which forces the model to yield similar predictions for similar inputs from unlabeled data.', 'To further encourage this mechanism, we propose to inject noise into the input space, resulting in a “noisy” version of self-training.', 'Empirical study on standard benchmarks across machine translation and text summarization tasks under different resource settings shows that noisy self-training is able to effectively utilize unlabeled data and improve the baseline performance by a large margin.']
### SUMMARY:
| [
"We revisit self-training as a semi-supervised learning method for neural sequence generation problem, and show that self-training can be quite successful with injected noise."
] |
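The noisy self-training loop described above, fit on labeled data, pseudo-label the unlabeled pool, then refit on both with noise injected into the pseudo-labeled inputs, is easy to sketch. Below, a scikit-learn classifier on synthetic data is a hypothetical stand-in for the paper's sequence-generation models, and the split sizes and noise level are made up.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=20, flip_y=0.05, random_state=0)
X_lab, y_lab = X[:100], y[:100]            # small labeled set
X_unl = X[100:2500]                        # unlabeled pool
X_te, y_te = X[2500:], y[2500:]            # held-out test set

model = LogisticRegression(max_iter=500).fit(X_lab, y_lab)
print("baseline accuracy:", model.score(X_te, y_te))

rng = np.random.default_rng(0)
for it in range(3):
    pseudo = model.predict(X_unl)                          # pseudo-labels
    X_noisy = X_unl + 0.3 * rng.normal(size=X_unl.shape)   # input noise ("noisy" variant)
    X_aug = np.vstack([X_lab, X_noisy])
    y_aug = np.concatenate([y_lab, pseudo])
    model = LogisticRegression(max_iter=500).fit(X_aug, y_aug)
    print(f"after self-training round {it + 1}:", model.score(X_te, y_te))
```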
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We present an end-to-end trained memory system that quickly adapts to new data and generates samples like them.', "Inspired by Kanerva's sparse distributed memory, it has a robust distributed reading and writing mechanism.", 'The memory is analytically tractable, which enables optimal on-line compression via a Bayesian update-rule.', 'We formulate it as a hierarchical conditional generative model, where memory provides a rich data-dependent prior distribution.', 'Consequently, the top-down memory and bottom-up perception are combined to produce the code representing an observation.', 'Empirically, we demonstrate that the adaptive memory significantly improves generative models trained on both the Omniglot and CIFAR datasets.', 'Compared with the Differentiable Neural Computer (DNC) and its variants, our memory model has greater capacity and is significantly easier to train.']
### SUMMARY:
| [
"A generative memory model that combines slow-learning neural networks and a fast-adapting linear Gaussian model as memory."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity.', 'In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility.', 'In this work, we present a new approach that prunes a given network once at initialization prior to training.', 'To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task.', 'This eliminates the need for both pretraining and the complex pruning schedule while making it robust to architecture variations.', 'After pruning, the sparse network is trained in the standard way.', 'Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual and recurrent networks.', 'Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task.']
### SUMMARY:
| [
"We present a new approach, SNIP, that is simple, versatile and interpretable; it prunes irrelevant connections for a given task at single-shot prior to training and is applicable to a variety of neural network models without modifications."
] |
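The single-shot pruning-at-initialization idea above can be sketched in a few lines of PyTorch. One common way to instantiate a connection-sensitivity saliency is |∂L/∂c| with multiplicative connection indicators c, which reduces to |g ⊙ w| at initialization; the snippet below uses that form on a single mini-batch and should be read as an illustrative sketch rather than the authors' released code.

```python
import torch
import torch.nn as nn

def snip_masks(model: nn.Module, inputs, targets, keep_ratio=0.05):
    """Compute one binary keep-mask per weight tensor from a single mini-batch."""
    loss = nn.functional.cross_entropy(model(inputs), targets)
    weights = [p for p in model.parameters() if p.dim() > 1]   # prune weights, not biases
    grads = torch.autograd.grad(loss, weights)

    saliency = torch.cat([(g * w).abs().flatten()              # connection sensitivity
                          for g, w in zip(grads, weights)])
    k = int(keep_ratio * saliency.numel())
    threshold = torch.topk(saliency, k).values.min()           # keep the top-k connections

    return [((g * w).abs() >= threshold).float() for g, w in zip(grads, weights)]

# Usage sketch: multiply each mask into its weight tensor before standard training.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
x, y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
masks = snip_masks(model, x, y, keep_ratio=0.05)
```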
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Transfer learning uses trained weights from a source model as the initial weights for the training of a target dataset. ', 'A well-chosen source with a large number of labeled examples leads to significant improvement in accuracy. ', 'We demonstrate a technique that automatically labels large unlabeled datasets so that they can train source models for transfer learning.', 'We experimentally evaluate this method, using a baseline dataset of human-annotated ImageNet1K labels, against five variations of this technique. ', 'We show that the performance of these automatically trained models comes within 17% of the baseline on average.']
### SUMMARY:
| [
"A technique for automatically labeling large unlabeled datasets so that they can train source models for transfer learning and its experimental evaluation. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recent studies in attention modules have enabled higher performance in computer vision tasks by capturing global contexts and accordingly attending to important features.', 'In this paper, we propose a simple and highly parameter-efficient module named Tree-structured Attention Module (TAM) which recursively encourages neighboring channels to collaborate in order to produce a spatial attention map as an output.', 'Unlike other attention modules which try to capture long-range dependencies at each channel, our module focuses on imposing non-linearities between channels by utilizing point-wise group convolution.', 'This module not only strengthens the representational power of a model but also acts as a gate which controls signal flow.', 'Our module allows a model to achieve higher performance in a highly parameter-efficient manner.', 'We empirically validate the effectiveness of our module with extensive experiments on CIFAR-10/100 and SVHN datasets.', 'With our proposed attention module employed, ResNet50 and ResNet101 models gain 2.3% and 1.2% accuracy improvement with less than 1.5% parameter overhead.', 'Our PyTorch implementation code is publicly available.']
### SUMMARY:
| [
"Our paper proposes an attention module which captures inter-channel relationships and offers large performance gains."
] |
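The TAM entry above only describes the module at a high level (neighboring channels collaborating through point-wise group convolutions until a single-channel spatial attention map remains). The sketch below is one plausible reading of that description written as a PyTorch module; the branching factor, activation placement, and element-wise gating are assumptions, not the authors' published code.

```python
import torch
import torch.nn as nn

class TreeAttentionSketch(nn.Module):
    """Recursively merge neighboring channels with 1x1 group convolutions
    until a single-channel spatial attention map remains.
    Assumes the channel count divides evenly at every level (e.g. a power of two)."""
    def __init__(self, channels: int, branch: int = 2):
        super().__init__()
        layers, c = [], channels
        while c > 1:
            groups = c // branch                   # each group merges `branch` neighboring channels
            layers.append(nn.Conv2d(c, groups, kernel_size=1, groups=groups))
            if groups > 1:
                layers.append(nn.ReLU())
            c = groups
        self.tree = nn.Sequential(*layers)

    def forward(self, x):
        attn = torch.sigmoid(self.tree(x))         # (N, 1, H, W) spatial attention map
        return x * attn                            # gate the input features

# Toy usage on a 64-channel feature map.
feat = torch.randn(2, 64, 8, 8)
out = TreeAttentionSketch(64)(feat)
```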
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['A significant challenge for the practical application of reinforcement learning to real-world problems is the need to specify an oracle reward function that correctly defines a task.', 'Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. ', 'While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g. opening any type of door).', 'Thus in practice, IRL must commonly be performed with only a limited set of demonstrations, where it can be exceedingly difficult to unambiguously recover a reward function.', 'In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a "prior" that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. ', 'We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.']
### SUMMARY:
| [
"The applicability of inverse reinforcement learning is often hampered by the expense of collecting expert demonstrations; this paper seeks to broaden its applicability by incorporating prior task information through meta-learning."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recent work has focused on combining kernel methods and deep learning.', 'With this in mind, we introduce Deepström networks -- a new architecture of neural networks in which we replace the top dense layers of standard convolutional architectures with an approximation of a kernel function by relying on the Nyström approximation. \n', 'Our approach is simple and highly flexible.', 'It is compatible with any kernel function and it allows exploiting multiple kernels. \n', 'We show that Deepström networks reach state-of-the-art performance on standard datasets like SVHN and CIFAR100.', 'One benefit of the method lies in its limited number of learnable parameters, which makes it particularly suited for small training set sizes, e.g. from 5 to 20 samples per class.', 'Finally, we illustrate two ways of using multiple kernels, including a multiple Deepström setting that exploits a kernel on each feature map output by the convolutional part of the model. ']
### SUMMARY:
| [
"A new neural architecture where top dense layers of standard convolutional architectures are replaced with an approximation of a kernel function by relying on the Nyström approximation."
] |
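A Deepström-style layer replaces a dense head with a Nyström approximation of a kernel feature map. The sketch below shows the standard Nyström construction on top of fixed convolutional features; the choice of an RBF kernel, the number of landmarks, and the downstream linear classifier are assumptions made for illustration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise RBF kernel between rows of A (n, d) and B (m, d).
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom_features(X, landmarks, gamma=0.1, eps=1e-6):
    """Map inputs X (n, d) to Nystrom features using m landmark points (m, d)."""
    K_zz = rbf_kernel(landmarks, landmarks, gamma)   # (m, m) landmark Gram matrix
    K_xz = rbf_kernel(X, landmarks, gamma)           # (n, m) cross kernel
    eigval, eigvec = np.linalg.eigh(K_zz)
    inv_sqrt = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
    return K_xz @ inv_sqrt                           # (n, m) approximate kernel feature map

# Toy usage: stand-ins for conv features; a small dense/softmax layer on top would classify phi.
conv_feats = np.random.randn(500, 64)
landmarks = conv_feats[np.random.choice(500, 32, replace=False)]
phi = nystrom_features(conv_feats, landmarks)
```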
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The main goal of this short paper is to inform the neural art community at large on the ethical ramifications of using models trained on the ImageNet dataset, or using seed images from classes 445 -n02892767- [’bikini, two-piece’] and 459 -n02837789- [’brassiere, bra, bandeau’] of the same.', 'We discovered that many of the images belonging to these classes were verifiably pornographic, shot in a non-consensual setting, voyeuristic, and also entailed underage nudity.', 'Akin to the \\textit{ivory carving-illegal poaching} and \\textit{diamond jewelry art-blood diamond} nexuses, we posit there is a similar moral conundrum at play here and would like to instigate a conversation amongst the neural artists in the community.']
### SUMMARY:
| [
"There's non-consensual and pornographic images in the ImageNet dataset"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Inductive and unsupervised graph learning is a critical technique for predictive or information retrieval tasks where label information is difficult to obtain.', 'It is also challenging to make graph learning inductive and unsupervised at the same time, as learning processes guided by reconstruction error based loss functions inevitably demand graph similarity evaluation that is usually computationally intractable.', 'In this paper, we propose a general framework SEED (Sampling, Encoding, and Embedding Distributions) for inductive and unsupervised representation learning on graph structured objects.', 'Instead of directly dealing with the computational challenges raised by graph similarity evaluation, given an input graph, the SEED framework samples a number of subgraphs whose reconstruction errors could be efficiently evaluated, encodes the subgraph samples into a collection of subgraph vectors, and employs the embedding of the subgraph vector distribution as the output vector representation for the input graph.', 'By theoretical analysis, we demonstrate the close connection between SEED and graph isomorphism.', 'Using public benchmark datasets, our empirical study suggests the proposed SEED framework is able to achieve up to 10% improvement, compared with competitive baseline methods.']
### SUMMARY:
| [
"This paper proposed a novel framework for graph similarity learning in inductive and unsupervised scenario."
] |
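SEED's pipeline (sample subgraphs, encode each sample, embed the distribution of subgraph codes) can be sketched as below. The random-walk sampler, the hand-rolled "encoder" (here just a degree histogram), and the mean-pooling used as the distribution embedding are stand-ins chosen only so the sketch runs without a learned model; the actual framework learns the encoder by minimizing subgraph reconstruction error.

```python
import random
import numpy as np

def sample_subgraph(adj: dict, walk_len=8):
    """Random-walk subgraph sample from an adjacency dict {node: [neighbors]}."""
    node = random.choice(list(adj))
    nodes = {node}
    for _ in range(walk_len):
        if not adj[node]:
            break
        node = random.choice(adj[node])
        nodes.add(node)
    return nodes

def encode_subgraph(adj, nodes, max_deg=8):
    """Placeholder encoder: histogram of node degrees inside the sampled subgraph."""
    degs = [sum(n in nodes for n in adj[v]) for v in nodes]
    hist, _ = np.histogram(degs, bins=max_deg, range=(0, max_deg))
    return hist / max(len(nodes), 1)

def seed_embedding(adj, num_samples=64):
    codes = [encode_subgraph(adj, sample_subgraph(adj)) for _ in range(num_samples)]
    return np.mean(codes, axis=0)      # embedding of the subgraph-code distribution

# Toy graph: a 4-cycle.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
vec = seed_embedding(graph)
```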
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Neural population responses to sensory stimuli can exhibit both nonlinear stimulus- dependence and richly structured shared variability.', 'Here, we show how adversarial training can be used to optimize neural encoding models to capture both the deterministic and stochastic components of neural population data.', 'To account for the discrete nature of neural spike trains, we use the REBAR method to estimate unbiased gradients for adversarial optimization of neural encoding models.', 'We illustrate our approach on population recordings from primary visual cortex.', 'We show that adding latent noise-sources to a convolutional neural network yields a model which captures both the stimulus-dependence and noise correlations of the population activity.']
### SUMMARY:
| [
"We show how neural encoding models can be trained to capture both the signal and spiking variability of neural population data using GANs."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['A weakly supervised learning based clustering framework is proposed in this paper.', 'As the core of this framework, we introduce a novel multiple instance learning task based on a bag level label called unique class count (ucc), which is the number of unique classes among all instances inside the bag.', 'In this task, no annotations on individual instances inside the bag are needed during training of the models.', 'We mathematically prove that with a perfect ucc classifier, perfect clustering of individual instances inside the bags is possible even when no annotations on individual instances are given during training.', 'We have constructed a neural network based ucc classifier and experimentally shown that the clustering performance of our framework with our weakly supervised ucc classifier is comparable to that of fully supervised learning models where labels for all instances are known.', 'Furthermore, we have tested the applicability of our framework to a real world task of semantic segmentation of breast cancer metastases in histological lymph node sections and shown that the performance of our weakly supervised framework is comparable to the performance of a fully supervised Unet model.']
### SUMMARY:
| [
"A weakly supervised learning based clustering framework performs comparable to that of fully supervised learning models by exploiting unique class count."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Model-free reinforcement learning (RL) methods are succeeding in a growing number of tasks, aided by recent advances in deep learning. ', 'However, they tend to suffer from high sample complexity, which hinders their use in real-world domains. ', 'Alternatively, model-based reinforcement learning promises to reduce sample complexity, but tends to require careful tuning and to date has succeeded mainly in restrictive domains where simple models are sufficient for learning.', 'In this paper, we analyze the behavior of vanilla model-based reinforcement learning methods when deep neural networks are used to learn both the model and the policy, and show that the learned policy tends to exploit regions where insufficient data is available for the model to be learned, causing instability in training.', 'To overcome this issue, we propose to use an ensemble of models to maintain the model uncertainty and regularize the learning process.', 'We further show that the use of likelihood ratio derivatives yields much more stable learning than backpropagation through time.', 'Altogether, our approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO) significantly reduces the sample complexity compared to model-free deep RL methods on challenging continuous control benchmark tasks.']
### SUMMARY:
| [
"Deep Model-Based RL that works well."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The high computational and parameter complexity of neural networks makes their training very slow and difficult to deploy on energy and storage-constrained computing systems.', 'Many network complexity reduction techniques have been proposed including fixed-point implementation.', 'However, a systematic approach for designing full fixed-point training and inference of deep neural networks remains elusive.', 'We describe a precision assignment methodology for neural network training in which all network parameters, i.e., activations and weights in the feedforward path, gradients and weight accumulators in the feedback path, are assigned close to minimal precision.', 'The precision assignment is derived analytically and enables tracking the convergence behavior of the full precision training, known to converge a priori.', 'Thus, our work leads to a systematic methodology of determining suitable precision for fixed-point training.', 'The near optimality (minimality) of the resulting precision assignment is validated empirically for four networks on the CIFAR-10, CIFAR-100, and SVHN datasets.', 'The complexity reduction arising from our approach is compared with other fixed-point neural network designs.']
### SUMMARY:
| [
"We analyze and determine the precision requirements for training neural networks when all tensors, including back-propagated signals and weight accumulators, are quantized to fixed-point format."
] |
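The entry above is about choosing per-tensor precisions; the basic building block it relies on is fixed-point quantization of every tensor in the training graph. The helper below is a generic symmetric fixed-point quantizer (saturating round-to-nearest) as a minimal sketch; the specific bit-widths per tensor are exactly what the paper's analysis would assign, so the values used here are placeholders.

```python
import numpy as np

def to_fixed_point(x, total_bits=8, frac_bits=4):
    """Quantize to a signed fixed-point format with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    max_q = 2 ** (total_bits - 1) - 1                   # saturation limits of the format
    q = np.clip(np.round(x * scale), -max_q - 1, max_q)
    return q / scale

# Example: weights, activations, and gradients would each get their own format.
w = np.random.randn(4, 4)
w_q = to_fixed_point(w, total_bits=8, frac_bits=6)      # placeholder precision assignment
print(np.max(np.abs(w - w_q)))                          # at most half an LSB unless a value saturates
```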
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Machine learning (ML) research has investigated prototypes: examples that are representative of the behavior to be learned.', 'We systematically evaluate five methods for identifying prototypes, both ones previously introduced as well as new ones we propose, finding all of them to provide meaningful but different interpretations.', 'Through a human study, we confirm that all five metrics are well matched to human intuition.', 'Examining cases where the metrics disagree offers an informative perspective on the properties of data and algorithms used in learning, with implications for data-corpus construction, efficiency, adversarial robustness, interpretability, and other ML aspects.', 'In particular, we confirm that the "train on hard" curriculum approach can improve accuracy on many datasets and tasks, but that it is strictly worse when there are many mislabeled or ambiguous examples.']
### SUMMARY:
| [
"We can identify prototypical and outlier examples in machine learning that are quantifiably very different, and make use of them to improve many aspects of neural networks."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this work, we propose the Sparse Deep Scattering Croisé Network (SDCSN), a novel architecture based on the Deep Scattering Network (DSN).', 'The DSN is achieved by cascading wavelet transform convolutions with a complex modulus and a time-invariant operator.', 'We extend this work by, first,\n', 'crossing multiple wavelet family transforms to increase the feature diversity while avoiding any learning.', 'This provides a more informative latent representation and benefits from the development of highly specialized wavelet filters over the last decades.', 'Besides, by combining all the different wavelet representations, we reduce the amount of prior information needed regarding the signals at hand.\n', 'Secondly, we develop an optimal thresholding strategy for over-complete filter banks that regularizes the network and controls instabilities such as inherent non-stationary noise in the signal.', 'Our systematic and principled solution sparsifies the latent representation of the network by acting as a local mask distinguishing between activity and noise.', 'Thus, we propose to enhance the DSN by increasing the variance of the scattering coefficients representation as well as improving its robustness with respect to non-stationary noise.\n', 'We show that our new approach is more robust and outperforms the DSN on a bird detection task.']
### SUMMARY:
| [
"We propose to enhance the Deep Scattering Network in order to improve control and stability of any given machine learning pipeline by proposing a continuous wavelet thresholding scheme"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We propose a neural clustering model that jointly learns both latent features and how they cluster.', 'Unlike similar methods, our model does not require a predefined number of clusters.', 'Using a supervised approach, we agglomerate latent features towards randomly sampled targets within the same space whilst progressively removing the targets until we are left with only targets which represent cluster centroids.', 'To show the behavior of our model across different modalities, we apply our model to both text and image data and achieve very competitive results on MNIST.', 'Finally, we also provide results against baseline models for fashion-MNIST, the 20 newsgroups dataset, and a Twitter dataset we ourselves create.']
### SUMMARY:
| [
"Neural clustering without needing a number of clusters"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recent work on explanation generation for decision-making problems has viewed the explanation process as one of model reconciliation where an AI agent brings the human mental model (of its capabilities, beliefs, and goals) to the same page with regards to a task at hand.', 'This formulation succinctly captures many possible types of explanations, as well as explicitly addresses the various properties -- e.g. the social aspects, contrastiveness, and selectiveness -- of explanations studied in social sciences among human-human interactions.', 'However, it turns out that the same process can be hijacked into producing "alternative explanations" -- i.e. explanations that are not true but still satisfy all the properties of a proper explanation.', 'In previous work, we have looked at how such explanations may be perceived by the human in the loop and alluded to one possible way of generating them.', 'In this paper, we go into more details of this curious feature of the model reconciliation process and discuss similar implications to the overall notion of explainable decision-making.']
### SUMMARY:
| [
"Model Reconciliation is an established framework for plan explanations, but can be easily hijacked to produce lies."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We consider new variants of optimization algorithms.', 'Our algorithms are based on the observation that mini-batch stochastic gradients in consecutive iterations do not change drastically and consequently may be predictable.', 'Inspired by the similar setting in the online learning literature called Optimistic Online Learning, we propose two new optimistic algorithms for AMSGrad and Adam, respectively, by exploiting the predictability of gradients. ', 'The new algorithms combine the ideas of the momentum method, adaptive gradient methods, and algorithms in Optimistic Online Learning, which leads to a speed-up in training deep neural nets in practice.']
### SUMMARY:
| [
"We consider new variants of optimization algorithms for training deep nets."
] |
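One simple way to read the entry above: keep an explicit prediction of the next gradient (for instance the most recent gradient, since consecutive mini-batch gradients change slowly) and take an extra "optimistic" step with it. The sketch below is a toy single-parameter-vector version in that spirit; the precise optimistic AMSGrad/Adam updates in the paper differ in detail, so treat this purely as an illustration of the two-step update.

```python
import numpy as np

def optimistic_adam_step(state, grad, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One optimistic step: advance an auxiliary iterate with the observed gradient,
    then step the actual parameters with a *prediction* of the next gradient
    (here simply the current gradient)."""
    m = state["m"] = b1 * state["m"] + (1 - b1) * grad
    v = state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2
    state["w_aux"] = state["w_aux"] - lr * m / (np.sqrt(v) + eps)    # standard Adam-like step
    pred = grad                                                      # assumed gradient prediction
    m_hat = b1 * m + (1 - b1) * pred
    state["w"] = state["w_aux"] - lr * m_hat / (np.sqrt(v) + eps)    # optimistic half-step
    return state

# Toy usage: minimize ||w||^2.
state = {"w": np.ones(3), "w_aux": np.ones(3), "m": np.zeros(3), "v": np.zeros(3)}
for _ in range(100):
    state = optimistic_adam_step(state, grad=2 * state["w"])
print(state["w"])
```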
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Using Recurrent Neural Networks (RNNs) in sequence modeling tasks is promising for delivering high-quality results, but it is challenging to meet stringent latency requirements because of the memory-bound execution pattern of RNNs.', 'We propose a big-little dual-module inference approach to dynamically skip unnecessary memory accesses and computation to speed up RNN inference.', 'Leveraging the error-resilient feature of nonlinear activation functions used in RNNs, we propose to use a lightweight little module that approximates the original RNN layer, which is referred to as the big module, to compute activations of the insensitive region that are more error-resilient.', 'The expensive memory access and computation of the big module can be reduced as its results are only used in the sensitive region.', 'Our method can reduce the overall memory access by 40% on average and achieve 1.54x to 1.75x speedup on a CPU-based server platform with negligible impact on model quality.']
### SUMMARY:
| [
"We accelerate RNN inference by dynamically reducing redundant memory access using a mixture of accurate and approximate modules."
] |
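The dual-module idea above can be illustrated on a single gated unit: a cheap "little" matrix produces approximate pre-activations everywhere, and the expensive "big" matrix is only evaluated where the activation is sensitive (near the steep region of the nonlinearity, since saturated outputs tolerate approximation). The threshold, the low-rank construction of the little module, and the sigmoid gate are assumptions made so this minimal sketch is concrete.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dual_module_gate(x, W_big, rank=4, threshold=2.0):
    """Compute sigmoid(W_big @ x), but only evaluate W_big for the sensitive outputs."""
    # Little module: a low-rank approximation of the big weight matrix (assumed construction).
    U, s, Vt = np.linalg.svd(W_big, full_matrices=False)
    W_little = (U[:, :rank] * s[:rank]) @ Vt[:rank]

    z_little = W_little @ x                      # cheap approximate pre-activations
    sensitive = np.abs(z_little) < threshold     # saturated (large-magnitude) outputs are error-resilient
    z = z_little.copy()
    z[sensitive] = W_big[sensitive] @ x          # recompute only the sensitive rows with the big module
    return sigmoid(z), sensitive.mean()          # gate values + fraction of "big" work actually done

# Toy usage with a 64x64 recurrent gate weight.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
h, frac_big = dual_module_gate(rng.normal(size=64), W)
```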