input (string, 331 to 3.18k chars) | output (sequence)
---|---|
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Fine-grained Entity Recognition (FgER) is the task of detecting and classifying entity mentions to a large set of types spanning diverse domains such as biomedical, finance and sports. ', 'We observe that when the type set spans several domains, detection of entity mentions becomes a limitation for supervised learning models. ', 'The primary reason is the lack of datasets where entity boundaries are properly annotated while covering a large spectrum of entity types. ', 'Our work directly addresses this issue. ', 'We propose the Heuristics Allied with Distant Supervision (HAnDS) framework to automatically construct a quality dataset suitable for the FgER task. ', 'The HAnDS framework exploits the high interlinking between Wikipedia and Freebase in a pipelined manner, reducing annotation errors introduced by naively using the distant supervision approach. ', 'Using the HAnDS framework, we create two datasets, one suitable for building FgER systems recognizing up to 118 entity types based on the FIGER type hierarchy and another for up to 1115 entity types based on the TypeNet hierarchy. ', 'Our extensive empirical experimentation warrants the quality of the generated datasets. ', 'Along with this, we also provide a manually annotated dataset for benchmarking FgER systems.']
### SUMMARY:
| [
"We initiate a push towards building ER systems to recognize thousands of types by providing a method to automatically construct suitable datasets based on the type hierarchy. "
] |
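The core of the distant-supervision step this abstract describes is linking Wikipedia anchor text to Freebase entities and inheriting their fine-grained types. A minimal sketch of that linking step, with toy dictionaries standing in for the real Wikipedia link graph and Freebase type assignments (HAnDS adds heuristics on top of this to reduce annotation errors, which are omitted here):

```python
# Toy stand-ins for Wikipedia anchor links and Freebase type assignments.
anchor_to_entity = {"Einstein": "/m/0jcx", "CERN": "/m/01g_k3"}
entity_to_types = {"/m/0jcx": ["/person", "/person/scientist"],
                   "/m/01g_k3": ["/organization", "/organization/research_lab"]}

def annotate(tokens):
    """Label each token that matches a known anchor with its fine-grained types."""
    annotations = []
    for i, tok in enumerate(tokens):
        entity = anchor_to_entity.get(tok)
        if entity is not None:
            annotations.append((i, i + 1, entity_to_types[entity]))
    return annotations

print(annotate("Einstein visited CERN in 1954".split()))
# [(0, 1, ['/person', '/person/scientist']), (2, 3, ['/organization', '/organization/research_lab'])]
```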
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Implementing correct method invocations is an important task for software developers.', 'However, this is challenging work, since the structure of a method invocation can be complicated.', 'In this paper, we propose InvocMap, a code completion tool that allows developers to obtain an implementation of multiple method invocations from a list of method names inside a code context.', 'InvocMap is able to predict nested method invocations whose names do not appear in the list of input method names given by developers.', 'To achieve this, we analyze Method Invocations by four levels of abstraction.', 'We build a Machine Translation engine to learn the mapping from the first level to the third level of abstraction of multiple method invocations, which only requires developers to manually add local variables to the generated expression to get the final code.', 'We evaluate our proposed approach on six popular libraries: JDK, Android, GWT, Joda-Time, Hibernate, and Xstream.', 'With a training corpus of 2.86 million method invocations extracted from 1000 Java Github projects and a testing corpus extracted from 120 online forum code snippets, InvocMap achieves an accuracy of up to 84% in F1-score depending on how much context information is provided along with the method names, which shows its potential for automatic code completion.']
### SUMMARY:
| [
"This paper proposes a theory of classifying Method Invocations by different abstraction levels and conducting a statistical approach for code completion from method name to method invocation."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Adversaries in neural networks have drawn much attention since their first debut. \n', 'While most existing methods aim at deceiving image classification models into misclassification or crafting attacks for specific object instances in object detection tasks, we focus on creating universal adversaries to fool object detectors and hide objects from the detectors. \n', 'The adversaries we examine are universal in three ways: \n', '(1) They are not specific for specific object instances; \n', '(2) They are image-independent; \n', '(3) They can further transfer to different unknown models. \n', 'To achieve this, we propose two novel techniques to improve the transferability of the adversaries: \\textit{piling-up} and \\textit{monochromatization}. \n', 'Both techniques prove to simplify the patterns of generated adversaries, and ultimately result in higher transferability.']
### SUMMARY:
| [
"We focus on creating universal adversaries to fool object detectors and hide objects from the detectors. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['This work presents the Poincaré Wasserstein Autoencoder, a reformulation of\n', 'the recently proposed Wasserstein autoencoder framework on a non-Euclidean\n', 'manifold, the Poincaré ball model of the hyperbolic space H^n.', 'By assuming the\n', 'latent space to be hyperbolic, we can use its intrinsic hierarchy to impose structure\n', 'on the learned latent space representations.', 'We show that for datasets with latent\n', 'hierarchies, we can recover the structure in a low-dimensional latent space.', 'We\n', 'also demonstrate the model in the visual domain to analyze some of its properties\n', 'and show competitive results on a graph link prediction task.']
### SUMMARY:
| [
"Wasserstein Autoencoder with hyperbolic latent space"
] |
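For reference, the geodesic distance on the Poincaré ball that underlies such hyperbolic latent spaces has a well-known closed form; a minimal numpy sketch (illustrative only, not the paper's code):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between points u, v inside the unit Poincare ball."""
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / (denom + eps))

print(poincare_distance(np.array([0.0, 0.0]), np.array([0.5, 0.0])))
```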
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this paper, we introduce a method to compress intermediate feature maps of deep neural networks (DNNs) to decrease memory storage and bandwidth requirements during inference.', 'Unlike previous works, the proposed method is based on converting fixed-point activations into vectors over the smallest GF(2) finite field followed by nonlinear dimensionality reduction (NDR) layers embedded into a DNN.', 'Such an end-to-end learned representation finds more compact feature maps by exploiting quantization redundancies within the fixed-point activations along the channel or spatial dimensions.', 'We apply the proposed network architecture to the tasks of ImageNet classification and PASCAL VOC object detection.', 'Compared to prior approaches, the conducted experiments show a factor of 2 decrease in memory requirements with minor degradation in accuracy while adding only bitwise computations.']
### SUMMARY:
| [
"Feature map compression method that converts quantized activations into binary vectors followed by nonlinear dimensionality reduction layers embedded into a DNN"
] |
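Converting fixed-point activations into vectors over GF(2), as this abstract describes, amounts to unpacking each quantized value into its bit planes; the learned NDR layers then compress that binary representation. A numpy sketch of the unpacking step only (the NDR layers are trained end-to-end and omitted here):

```python
import numpy as np

def to_gf2_bit_planes(activations):
    """Unpack uint8 fixed-point activations of shape (C, H, W) into binary
    vectors over GF(2) along the channel dimension -> shape (C * 8, H, W)."""
    a = activations.astype(np.uint8)
    bit_planes = np.unpackbits(a[:, None, :, :], axis=1)  # (C, 8, H, W)
    return bit_planes.reshape(-1, a.shape[1], a.shape[2])

x = (np.random.rand(16, 8, 8) * 255).astype(np.uint8)
print(to_gf2_bit_planes(x).shape)  # (128, 8, 8)
```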
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Adversarial training is one of the strongest defenses against adversarial attacks, but it requires adversarial examples to be generated for every mini-batch during optimization. ', 'The expense of producing these examples during training often precludes adversarial training from use on complex image datasets. \n', 'In this study, we explore the mechanisms by which adversarial training improves classifier robustness, and show that these mechanisms can be effectively mimicked using simple regularization methods, including label smoothing and logit squeezing. \n', 'Remarkably, using these simple regularization methods in combination with Gaussian noise injection, we are able to achieve strong adversarial robustness -- often exceeding that of adversarial training -- using no adversarial examples.']
### SUMMARY:
| [
"Achieving strong adversarial robustness comparable to adversarial training without training on adversarial examples"
] |
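The regularization recipe in this abstract (label smoothing, logit squeezing, Gaussian noise injection) is easy to reproduce; a minimal PyTorch sketch, where the coefficient values are arbitrary assumptions rather than the paper's settings:

```python
import torch
import torch.nn.functional as F

def robust_regularized_loss(model, x, y, num_classes,
                            smooth=0.1, squeeze=0.05, noise_std=0.5):
    x_noisy = x + noise_std * torch.randn_like(x)          # Gaussian noise injection
    logits = model(x_noisy)
    target = F.one_hot(y, num_classes).float()
    target = target * (1 - smooth) + smooth / num_classes  # label smoothing
    ce = -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    return ce + squeeze * logits.pow(2).sum(dim=1).mean()  # logit squeezing penalty
```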
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training.', 'Typically, this involves minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect.', 'In this work, we propose instead to directly target later desired tasks by meta-learning an unsupervised learning rule which leads to representations useful for those tasks. ', 'Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.', 'Additionally, we constrain our unsupervised update rule to a be a biologically-motivated, neuron-local function, which enables it to generalize to different neural network architectures, datasets, and data modalities.', 'We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques.', 'We further show that the meta-learned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities.', 'It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.']
### SUMMARY:
| [
"We learn an unsupervised learning algorithm that produces useful representations from a set of supervised tasks. At test-time, we apply this algorithm to new tasks without any supervision and show performance comparable to a VAE."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['There is significant recent evidence in supervised learning that, in the over-parametrized setting, wider networks achieve better test error.', 'In other words, the bias-variance tradeoff is not directly observable when increasing network width arbitrarily.', 'We investigate whether a corresponding phenomenon is present in reinforcement learning.', 'We experiment on four OpenAI Gym environments, increasing the width of the value and policy networks beyond their prescribed values.', 'Our empirical results lend support to this hypothesis.', 'However, tuning the hyperparameters of each network width separately remains important future work in environments/algorithms where the optimal hyperparameters vary noticeably across widths, confounding the results when the same hyperparameters are used for all widths.']
### SUMMARY:
| [
"Over-parametrization in width seems to help in deep reinforcement learning, just as it does in supervised learning."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Learning disentangled representations of data is one of the central themes in unsupervised learning in general and generative modelling in particular. ', 'In this work, we tackle a slightly more intricate scenario where the observations are generated from a conditional distribution of some known control variate and some latent noise variate. ', 'To this end, we present a hierarchical model and a training method (CZ-GEM) that leverages some of the recent developments in likelihood-based and likelihood-free generative models. ', 'We show that by formulation, CZ-GEM introduces the right inductive biases that ensure the disentanglement of the control from the noise variables, while also keeping the components of the control variate disentangled.', 'This is achieved without compromising on the quality of the generated samples.', 'Our approach is simple, general, and can be applied both in supervised and unsupervised settings.']
### SUMMARY:
| [
"Hierarchical generative model (hybrid of VAE and GAN) that learns a disentangled representation of data without compromising the generative quality."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Deep learning has yielded state-of-the-art performance on many natural language processing tasks including named entity recognition (NER).', 'However, this typically requires large amounts of labeled data.', 'In this work, we demonstrate that the amount of labeled training data can be drastically reduced when deep learning is combined with active learning.', 'While active learning is sample-efficient, it can be computationally expensive since it requires iterative retraining.', 'To speed this up, we introduce a lightweight architecture for NER, viz., the CNN-CNN-LSTM model consisting of convolutional character and word encoders and a long short term memory (LSTM) tag decoder.', 'The model achieves nearly state-of-the-art performance on standard datasets for the task while being computationally much more efficient than best performing models.', 'We carry out incremental active learning, during the training process, and are able to nearly match state-of-the-art performance with just 25\\% of the original training data.']
### SUMMARY:
| [
"We introduce a lightweight architecture for named entity recognition and carry out incremental active learning, which is able to match state-of-the-art performance with just 25% of the original training data."
] |
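The incremental active-learning loop above repeatedly queries labels for the examples the model is least sure about. A minimal least-confidence acquisition sketch (the paper scores whole tag sequences for NER; per-example class probabilities are used here for brevity):

```python
import numpy as np

def least_confident(probs, k):
    """probs: (n_examples, n_classes) model probabilities on the unlabeled pool.
    Returns indices of the k examples with the lowest top-class probability."""
    return np.argsort(probs.max(axis=1))[:k]

pool_probs = np.random.dirichlet(np.ones(5), size=100)  # stand-in predictions
to_label = least_confident(pool_probs, k=10)            # send these to the annotator
```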
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Network quantization is a model compression and acceleration technique that has become essential to neural network deployment.', 'Most quantization methods perform fine-tuning on a pretrained network, but this sometimes results in a large loss in accuracy compared to the original network.', 'We introduce a new technique to train quantization-friendly networks, which can be directly converted to an accurate quantized network without the need for additional fine-tuning.', 'Our technique allows quantizing the weights and activations of all network layers down to 4 bits, achieving high efficiency and facilitating deployment in practical settings.', 'Compared to other fully quantized networks operating at 4 bits, we show substantial improvements in accuracy, for example 66.68% top-1 accuracy on ImageNet using ResNet-18, compared to the previous state-of-the-art accuracy of 61.52% (Louizos et al., 2019) and a full precision reference accuracy of 69.76%.', 'We performed a thorough set of experiments to test the efficacy of our method and also conducted ablation studies on different aspects of the method and techniques to improve training stability and accuracy.', 'Our codebase and trained models are available on GitHub.']
### SUMMARY:
| [
"We train accurate fully quantized networks using a loss function maximizing full precision model accuracy and minimizing the difference between the full precision and quantized networks."
] |
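For context, converting a network to b-bit values typically uses a uniform affine quantizer like the sketch below; the paper's contribution is training the network so this conversion loses little accuracy, which the sketch itself does not capture:

```python
import numpy as np

def uniform_quantize(x, bits=4, lo=-1.0, hi=1.0):
    """Map x onto a (2^bits)-level uniform grid over [lo, hi] and back."""
    levels = 2 ** bits - 1
    x = np.clip(x, lo, hi)
    q = np.round((x - lo) / (hi - lo) * levels)  # integer code in [0, levels]
    return q / levels * (hi - lo) + lo           # dequantized value

w = np.random.randn(5).clip(-1, 1)
print(w, uniform_quantize(w, bits=4))
```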
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['While much of the work in the design of convolutional networks over the last five years has revolved around the empirical investigation of the importance of depth, filter sizes, and number of feature channels, recent studies have shown that branching, i.e., splitting the computation along parallel but distinct threads and then aggregating their outputs, represents a new promising dimension for significant improvements in performance.', 'To combat the complexity of design choices in multi-branch architectures, prior work has adopted simple strategies, such as a fixed branching factor, the same input being fed to all parallel branches, and an additive combination of the outputs produced by all branches at aggregation points. \n\n', 'In this work we remove these predefined choices and propose an algorithm to learn the connections between branches in the network.', 'Instead of being chosen a priori by the human designer, the multi-branch connectivity is learned simultaneously with the weights of the network by optimizing a single loss function defined with respect to the end task.', "We demonstrate our approach on the problem of multi-class image classification using four different datasets where it yields consistently higher accuracy compared to the state-of-the-art ``ResNeXt'' multi-branch network given the same learning capacity."]
### SUMMARY:
| [
"In this paper we introduced an algorithm to learn the connectivity of deep multi-branch networks. The approach is evaluated on image categorization where it consistently yields accuracy gains over state-of-the-art models that use fixed connectivity."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Although deep convolutional networks have achieved improved performance in many natural language tasks, they have been treated as black boxes because they are difficult to interpret.', 'Especially, little is known about how they represent language in their intermediate layers.', 'In an attempt to understand the representations of deep convolutional networks trained on language tasks, we show that individual units are selectively responsive to specific morphemes, words, and phrases, rather than responding to arbitrary and uninterpretable patterns.', 'In order to quantitatively analyze such intriguing phenomenon, we propose a concept alignment method based on how units respond to replicated text.', 'We conduct analyses with different architectures on multiple datasets for classification and translation tasks and provide new insights into how deep models understand natural language.']
### SUMMARY:
| [
"We show that individual units in CNN representations learned in NLP tasks are selectively responsive to natural language concepts."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Applying reinforcement learning (RL) to real-world problems will require reasoning about action-reward correlation over long time horizons.', 'Hierarchical reinforcement learning (HRL) methods handle this by dividing the task into hierarchies, often with hand-tuned network structure or pre-defined subgoals.', 'We propose a novel HRL framework TAIC, which learns the temporal abstraction from past experience or expert demonstrations without task-specific knowledge.', 'We formulate the temporal abstraction problem as learning latent representations of action sequences and present a novel approach of regularizing the latent space by adding information-theoretic constraints.', 'Specifically, we maximize the mutual information between the latent variables and the state changes.\n', 'A visualization of the latent space demonstrates that our algorithm learns an effective abstraction of the long action sequences.', 'The learned abstraction allows us to learn new tasks on higher level more efficiently.', 'We convey a significant speedup in convergence over benchmark learning problems.', 'These results demonstrate that learning temporal abstractions is an effective technique in increasing the convergence rate and sample efficiency of RL algorithms.']
### SUMMARY:
| [
"We propose a novel HRL framework, in which we formulate the temporal abstraction problem as learning a latent representation of action sequence."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recurrent neural networks (RNNs) have achieved state-of-the-art performance on many diverse tasks, from machine translation to surgical activity recognition, yet training RNNs to capture long-term dependencies remains difficult.', 'To date, the vast majority of successful RNN architectures alleviate this problem using nearly-additive connections between states, as introduced by long short-term memory (LSTM).', 'We take an orthogonal approach and introduce MIST RNNs, a NARX RNN architecture that allows direct connections from the very distant past.', 'We show that MIST RNNs', '1) exhibit superior vanishing-gradient properties in comparison to LSTM and previously-proposed NARX RNNs;', '2) are far more efficient than previously-proposed NARX RNN architectures, requiring even fewer computations than LSTM; and', '3) improve performance substantially over LSTM and Clockwork RNNs on tasks requiring very long-term dependencies.']
### SUMMARY:
| [
"We introduce MIST RNNs, which a) exhibit superior vanishing-gradient properties in comparison to LSTM; b) improve performance substantially over LSTM and Clockwork RNNs on tasks requiring very long-term dependencies; and c) are much more efficient than previously-proposed NARX RNNs, with even fewer parameters and operations than LSTM."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['A well-trained model should classify objects with unanimous scores for every category.', 'This requires that the high-level semantic features be alike among samples, despite a wide span in resolution, texture, deformation, etc.', 'Previous works focus on re-designing the loss function or proposing new regularization constraints on the loss.', 'In this paper, we address this problem via a new perspective.', 'For each category, it is assumed that there are two sets in the feature space: one with more reliable information and the other with a less reliable source.', 'We argue that the reliable set could guide the feature learning of the less reliable set during training - in the spirit of a student mimicking the teacher’s behavior and thus pushing towards a more compact class centroid in the high-dimensional space.', 'Such a scheme also benefits the reliable set since samples become closer within the same category - implying that it is easier for the classifier to identify them.', 'We refer to this mutual learning process as feature intertwiner and embed the spirit into object detection.', 'It is well-known that objects of low resolution are more difficult to detect due to the loss of detailed information during the network forward pass.', 'We thus regard objects of high resolution as the reliable set and objects of low resolution as the less reliable set.', 'Specifically, an intertwiner is achieved by minimizing the distribution divergence between the two sets.', 'We design a historical buffer to represent all previous samples in the reliable set and utilize them to guide the feature learning of the less reliable set.', 'The design of obtaining an effective feature representation for the reliable set is further investigated, where we introduce the optimal transport (OT) algorithm into the framework.', 'Samples in the less reliable set are better aligned with the reliable set with the aid of the OT metric.', 'Incorporated with such a plug-and-play intertwiner, we achieve an evident improvement over previous state-of-the-arts on the COCO object detection benchmark.']
### SUMMARY:
| [
"(Camera-ready version) A feature intertwiner module to leverage features from one accurate set to help the learning of another less reliable set."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Approaches to continual learning aim to successfully learn a set of related tasks that arrive in an online manner.', 'Recently, several frameworks have been developed which enable deep learning to be deployed in this learning scenario.', 'A key modelling decision is to what extent the architecture should be shared across tasks.', 'On the one hand, separately modelling each task avoids catastrophic forgetting but it does not support transfer learning and leads to large models.', 'On the other hand, rigidly specifying a shared component and a task-specific part enables task transfer and limits the model size, but it is vulnerable to catastrophic forgetting and restricts the form of task-transfer that can occur.', 'Ideally, the network should adaptively identify which parts of the network to share in a data driven way.', 'Here we introduce such an approach called Continual Learning with Adaptive Weights (CLAW), which is based on probabilistic modelling and variational inference.', 'Experiments show that CLAW achieves state-of-the-art performance on six benchmarks in terms of overall continual learning performance, as measured by classification accuracy, and in terms of addressing catastrophic forgetting.']
### SUMMARY:
| [
"A continual learning framework which learns to automatically adapt its architecture based on a proposed variational inference algorithm. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['High-dimensional data often lie in or close to low-dimensional subspaces.', 'Sparse subspace clustering methods with sparsity induced by L0-norm, such as L0-Sparse Subspace Clustering (L0-SSC), are demonstrated to be more effective than its L1 counterpart such as Sparse Subspace Clustering (SSC).', 'However, these L0-norm based subspace clustering methods are restricted to clean data that lie exactly in subspaces.', 'Real data often suffer from noise and they may lie close to subspaces.', 'We propose noisy L0-SSC to handle noisy data so as to improve the robustness.', 'We show that the optimal solution to the optimization problem of noisy L0-SSC achieves subspace detection property (SDP), a key element with which data from different subspaces are separated, under deterministic and randomized models.', 'Our results provide theoretical guarantee on the correctness of noisy L0-SSC in terms of SDP on noisy data.', 'We further propose Noisy-DR-L0-SSC which provably recovers the subspaces on dimensionality reduced data.', 'Noisy-DR-L0-SSC first projects the data onto a lower dimensional space by linear transformation, then performs noisy L0-SSC on the dimensionality reduced data so as to improve the efficiency.', 'The experimental results demonstrate the effectiveness of noisy L0-SSC and Noisy-DR-L0-SSC.']
### SUMMARY:
| [
"We propose Noisy-DR-L0-SSC (Noisy Dimension Reduction L0-Sparse Subspace Clustering) to efficiently partition noisy data in accordance to their underlying subspace structure."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Mode connectivity provides novel geometric insights on analyzing loss landscapes and enables building high-accuracy pathways between well-trained neural networks.', 'In this work, we propose to employ mode connectivity in loss landscapes to study the adversarial robustness of deep neural networks, and provide novel methods for improving this robustness. ', 'Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.', 'When network models are tampered with backdoor or error-injection attacks, our results demonstrate that the path connection learned using limited amount of bonafide data can effectively mitigate adversarial effects while maintaining the original accuracy on clean data.', 'Therefore, mode connectivity provides users with the power to repair backdoored or error-injected models. ', 'We also use mode connectivity to investigate the loss landscapes of regular and robust models against evasion attacks.', 'Experiments show that there exists a barrier in adversarial robustness loss on the path connecting regular and adversarially-trained models. ', 'A high correlation is observed between the adversarial robustness loss and the largest eigenvalue of the input Hessian matrix, for which theoretical justifications are provided. ', 'Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.']
### SUMMARY:
| [
"A novel approach using mode connectivity in loss landscapes to mitigate adversarial effects, repair tampered models and evaluate adversarial robustness"
] |
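Path connection in loss landscapes is commonly parametrized as a quadratic Bezier curve between the two endpoint weight vectors, with the bend point trained on the bonafide data; a sketch of that standard parametrization from the mode-connectivity literature (names and initialization are illustrative):

```python
import numpy as np

def bezier_path(w_a, w_b, theta, t):
    """Quadratic Bezier curve between flattened weight vectors w_a and w_b.
    theta is the learnable bend point; t in [0, 1] indexes points on the path."""
    return (1 - t) ** 2 * w_a + 2 * t * (1 - t) * theta + t ** 2 * w_b

w_a, w_b = np.random.randn(10), np.random.randn(10)
theta = 0.5 * (w_a + w_b)   # typical initialization: the midpoint of the endpoints
path = [bezier_path(w_a, w_b, theta, t) for t in np.linspace(0, 1, 11)]
```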
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Generative adversarial networks (GANs) learn to map samples from a noise distribution to a chosen data distribution.', 'Recent work has demonstrated that GANs are consequently sensitive to, and limited by, the shape of the noise distribution.', 'For example, a single generator struggles to map continuous noise (e.g. a uniform distribution) to discontinuous output (e.g. separate Gaussians) or complex output (e.g. intersecting parabolas).', "We address this problem by learning to generate from multiple models such that the generator's output is actually the combination of several distinct networks.", 'We contribute a novel formulation of multi-generator models where we learn a prior over the generators conditioned on the noise, parameterized by a neural network.', 'Thus, this network not only learns the optimal rate to sample from each generator but also optimally shapes the noise received by each generator.', 'The resulting Noise Prior GAN (NPGAN) achieves expressivity and flexibility that surpasses both single generator models and previous multi-generator models.']
### SUMMARY:
| [
"A multi-generator GAN framework with an additional network to learn a prior over the input noise."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recent advances in Generative Adversarial Networks (GANs) – in architectural design, training strategies, and empirical tricks – have led to nearly photorealistic samples on large-scale datasets such as ImageNet. ', 'In fact, for one model in particular, BigGAN, metrics such as Inception Score or Frechet Inception Distance nearly match those of the dataset, suggesting that these models are close to matching the distribution of the training set. ', 'Given the quality of these models, it is worth understanding to what extent these samples can be used for data augmentation, a task expressed as a long-term goal of the GAN research project. ', 'To that end, we train ResNet-50 classifiers using either purely BigGAN images or mixtures of ImageNet and BigGAN images, and test on the ImageNet validation set.', 'Our preliminary results suggest both a measured view of state-of-the-art GAN quality and highlight limitations of current metrics.', 'Using only BigGAN images, we find that Top-1 and Top-5 error increased by 120% and 384%, respectively, and furthermore, adding more BigGAN data to the ImageNet training set at best only marginally improves classifier performance.', 'Finally, we find that neither Inception Score, nor FID, nor combinations thereof are predictive of classification accuracy. ', 'These results suggest that as GANs are beginning to be deployed in downstream tasks, we should create metrics that better measure downstream task performance. ', 'We propose classification performance as one such metric that, in addition to assessing per-class sample quality, is more suited to such downstream tasks.']
### SUMMARY:
| [
"BigGANs do not capture the ImageNet data distributions and are only modestly successful for data augmentation."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Modern federated networks, such as those comprised of wearable devices, mobile phones, or autonomous vehicles, generate massive amounts of data each day.', 'This wealth of data can help to learn models that can improve the user experience on each device.', 'However, the scale and heterogeneity of federated data presents new challenges in research areas such as federated learning, meta-learning, and multi-task learning.', 'As the machine learning community begins to tackle these challenges, we are at a critical time to ensure that developments made in these areas are grounded with realistic benchmarks.', 'To this end, we propose Leaf, a modular benchmarking framework for learning in federated settings.', 'Leaf includes a suite of open-source federated datasets, a rigorous evaluation framework, and a set of reference implementations, all geared towards capturing the obstacles and intricacies of practical federated environments.']
### SUMMARY:
| [
"We present Leaf, a modular benchmarking framework for learning in federated data, with applications to learning paradigms such as federated learning, meta-learning, and multi-task learning."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Understanding object motion is one of the core problems in computer vision.', 'It requires segmenting and tracking objects over time.', 'Significant progress has been made in instance segmentation, but such models cannot track objects, and more crucially, they are unable to reason in both 3D space and time.\n', 'We propose a new spatio-temporal embedding loss on videos that generates temporally consistent video instance segmentation.', 'Our model includes a temporal network that learns to model temporal context and motion, which is essential to produce smooth embeddings over time.', 'Further, our model also estimates monocular depth, with a self-supervised loss, as the relative distance to an object effectively constrains where it can be next, ensuring a time-consistent embedding.', 'Finally, we show that our model can accurately track and segment instances, even with occlusions and missed detections, advancing the state-of-the-art on the KITTI Multi-Object and Tracking Dataset.']
### SUMMARY:
| [
"We introduce a new spatio-temporal embedding loss on videos that generates temporally consistent video instance segmentation, even with occlusions and missed detections, using appearance, geometry, and temporal context."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['As reinforcement learning continues to drive machine intelligence beyond its conventional boundary, unsubstantial practices in sparse reward environment severely limit further applications in a broader range of advanced fields.', 'Motivated by the demand for an effective deep reinforcement learning algorithm that accommodates sparse reward environment, this paper presents Hindsight Trust Region Policy Optimization (HTRPO), a method that efficiently utilizes interactions in sparse reward conditions to optimize policies within trust region and, in the meantime, maintains learning stability.', 'Firstly, we theoretically adapt the TRPO objective function, in the form of the expected return of the policy, to the distribution of hindsight data generated from the alternative goals.', 'Then, we apply Monte Carlo with importance sampling to estimate KL-divergence between two policies, taking the hindsight data as input.', 'Under the condition that the distributions are sufficiently close, the KL-divergence is approximated by another f-divergence.', 'Such approximation results in the decrease of variance and alleviates the instability during policy update. ', 'Experimental results on both discrete and continuous benchmark tasks demonstrate that HTRPO converges significantly faster than previous policy gradient methods.', 'It achieves effective performances and high data-efficiency for training policies in sparse reward environments.']
### SUMMARY:
| [
"This paper proposes an advanced policy optimization method with hindsight experience for sparse reward reinforcement learning."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Open-domain question answering (QA) is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques.', 'Much of the progress in open-domain QA systems has been realized through advances in information retrieval methods and corpus construction.', 'In this paper, we focus on the recently introduced ARC Challenge dataset, which contains 2,590 multiple choice questions authored for grade-school science exams.', 'These questions are selected to be the most challenging for current QA systems, and current state of the art performance is only slightly better than random chance.', 'We present a system that reformulates a given question into queries that are used to retrieve supporting text from a large corpus of science-related text.', 'Our rewriter is able to incorporate background knowledge from ConceptNet and -- in tandem with a generic textual entailment system trained on SciTail that identifies support in the retrieved results -- outperforms several strong baselines on the end-to-end QA task despite only being trained to identify essential terms in the original source question.', 'We use a generalizable decision methodology over the retrieved evidence and answer candidates to select the best answer.', 'By combining query reformulation, background knowledge, and textual entailment our system is able to outperform several strong baselines on the ARC dataset.']
### SUMMARY:
| [
"We explore how using background knowledge with query reformulation can help retrieve better supporting evidence when answering multiple-choice science questions."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Deep CNNs have achieved state-of-the-art performance for numerous machine learning and computer vision tasks in recent years, but as they have become increasingly deep, the number of parameters they use has also increased, making them hard to deploy in memory-constrained environments and difficult to interpret.', 'Machine learning theory implies that such networks are highly over-parameterised and that it should be possible to reduce their size without sacrificing accuracy, and indeed many recent studies have begun to highlight specific redundancies that can be exploited to achieve this.', 'In this paper, we take a further step in this direction by proposing a filter-sharing approach to compressing deep CNNs that reduces their memory footprint by repeatedly applying a single convolutional mapping of learned filters to simulate a CNN pipeline.', 'We show, via experiments on CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet that this allows us to reduce the parameter counts of networks based on common designs such as VGGNet and ResNet by a factor proportional to their depth, whilst leaving their accuracy largely unaffected.', 'At a broader level, our approach also indicates how the scale-space regularities found in visual signals can be leveraged to build neural architectures that are more parsimonious and interpretable.']
### SUMMARY:
| [
"We compress deep CNNs by reusing a single convolutional layer in an iterative manner, thereby reducing their parameter counts by a factor proportional to their depth, whilst leaving their accuracies largely unaffected"
] |
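The filter-sharing idea, one convolutional mapping applied repeatedly in place of a stack of distinct layers, is easy to express; a PyTorch sketch that ignores the stage-wise downsampling a real VGG/ResNet-style pipeline would need (architecture sizes are arbitrary assumptions):

```python
import torch
import torch.nn as nn

class SharedConvNet(nn.Module):
    """Sketch: a single learned conv mapping reused to simulate a deep pipeline."""
    def __init__(self, channels=64, depth=10, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.shared = nn.Conv2d(channels, channels, 3, padding=1)  # reused weights
        self.depth = depth
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x):
        x = torch.relu(self.stem(x))
        for _ in range(self.depth):          # same parameters at every step
            x = torch.relu(self.shared(x))
        x = x.mean(dim=(2, 3))               # global average pooling
        return self.head(x)

print(SharedConvNet()(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```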
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We extend the recent results of (Arora et al., 2019) by a spectral analysis of representations corresponding to kernel and neural embeddings.', 'They showed that in a simple single layer network, the alignment of the labels to the eigenvectors of the corresponding Gram matrix determines both the convergence of the optimization during training as well as the generalization properties.', 'We generalize their result to kernel and neural representations and show that these extensions improve both optimization and generalization of the basic setup studied in (Arora et al., 2019).']
### SUMMARY:
| [
"Spectral analysis for understanding how different representations can improve optimization and generalization."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Obtaining high-quality uncertainty estimates is essential for many applications of deep neural networks.', 'In this paper, we theoretically justify a scheme for estimating uncertainties, based on sampling from a prior distribution.', 'Crucially, the uncertainty estimates are shown to be conservative in the sense that they never underestimate a posterior uncertainty obtained by a hypothetical Bayesian algorithm.', 'We also show concentration, implying that the uncertainty estimates converge to zero as we get more data.', 'Uncertainty estimates obtained from random priors can be adapted to any deep network architecture and trained using standard supervised learning pipelines.', 'We provide experimental evaluation of random priors on calibration and out-of-distribution detection on typical computer vision tasks, demonstrating that they outperform deep ensembles in practice.']
### SUMMARY:
| [
"We provide theoretical support to uncertainty estimates for deep learning obtained fitting random priors."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this paper, we propose an arbitrarily-conditioned data imputation framework built upon variational autoencoders and normalizing flows.', 'The proposed model is capable of mapping any partial data to a multi-modal latent variational distribution.', 'Sampling from such a distribution leads to stochastic imputation.', 'Preliminary evaluation on MNIST dataset shows promising stochastic imputation conditioned on partial images as input.']
### SUMMARY:
| [
"We propose an arbitrarily-conditioned data imputation framework built upon variational autoencoders and normalizing flows"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['This paper studies \\emph{model inversion attacks}, in which the access to a model is abused to infer information about the training data.', 'Since its first introduction by~\\citet{fredrikson2014privacy}, such attacks have raised serious concerns given that training data usually contain sensitive information.', 'Thus far, successful model inversion attacks have only been demonstrated on simple models, such as linear regression and logistic regression.', 'Previous attempts to invert neural networks, even the ones with simple architectures, have failed to produce convincing results.', 'We present a novel attack method, termed the \\emph{generative model inversion attack}, which can invert deep neural networks with high success rates.', 'Rather than reconstructing private training data from scratch, we leverage partial public information, which can be very generic, to learn a distributional prior via generative adversarial networks (GANs) and use it to guide the inversion process.', "Moreover, we theoretically prove that a model's predictive power and its vulnerability to inversion attacks are indeed two sides of the same coin---highly predictive models are able to establish a strong correlation between features and labels, which coincides exactly with what an adversary exploits to mount the attacks.\n", 'Our experiments demonstrate that the proposed attack improves identification accuracy over the existing work by about $75\\%$ for reconstructing face images from a state-of-the-art face recognition classifier.', 'We also show that differential privacy, in its canonical form, is of little avail to protect against our attacks.']
### SUMMARY:
| [
"We develop a privacy attack that can recover the sensitive input data of a deep net from its output"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Latent space based GAN methods and attention based encoder-decoder architectures have achieved impressive results in text generation and Unsupervised NMT respectively.', 'Leveraging the two domains, we propose an adversarial latent space based architecture capable of generating parallel sentences in two languages concurrently and translating bidirectionally.', 'The bilingual generation goal is achieved by sampling from the latent space that is adversarially constrained to be shared between both languages.', 'First an NMT model is trained, with back-translation and an adversarial setup, to enforce a latent state between the two languages.', 'The encoder and decoder are shared for the two translation directions.', 'Next, a GAN is trained to generate ‘synthetic’ code mimicking the languages’ shared latent space.', 'This code is then fed into the decoder to generate text in either language.', 'We perform our experiments on Europarl and Multi30k datasets, on the English-French language pair, and document our performance using both Supervised and Unsupervised NMT.']
### SUMMARY:
| [
"We present a novel method for Bilingual Text Generation producing parallel concurrent sentences in two languages."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['As Artificial Intelligence (AI) becomes an integral part of our life, the development of explainable AI, embodied in the decision-making process of an AI or robotic agent, becomes imperative. ', 'For a robotic teammate, the ability to generate explanations to explain its behavior is one of the key requirements of an explainable agency.', "Prior work on explanation generation focuses on supporting the reasoning behind the robot's behavior.", 'These approaches, however, fail to consider the mental workload needed to understand the received explanation.', 'In other words, the human teammate is expected to understand any explanation provided, often before the task execution, no matter how much information is presented in the explanation.\n', 'In this work, we argue that an explanation, especially complex ones, should be made in an online fashion during the execution, which helps spread out the information to be explained and thus reducing the mental workload of humans.', 'However, a challenge here is that the different parts of an explanation are dependent on each other, which must be taken into account when generating online explanations.', 'To this end, a general formulation of online explanation generation is presented along with three different implementations satisfying different online properties.', 'We base our explanation generation method on a model reconciliation setting introduced in our prior work.', 'Our approaches are evaluated both with human subjects in a standard planning competition (IPC) domain, using NASA Task Load Index (TLX), as well as in simulation with ten different problems across two IPC domains.\n']
### SUMMARY:
| [
"We introduce online explanation to consider the cognitive requirement of the human for understanding the generated explanation by the agent."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Deep neural networks use deeper and broader structures to achieve better performance and consequently, use increasingly more GPU memory as well.', 'However, limited GPU memory restricts many potential designs of neural networks.', 'In this paper, we propose a reinforcement learning based variable swapping and recomputation algorithm to reduce the memory cost, without sacrificing the accuracy of models.', 'Variable swapping can transfer variables between CPU and GPU memory to reduce variables stored in GPU memory.', 'Recomputation can trade time for space by removing some feature maps during forward propagation.', 'Forward functions are executed once again to get the feature maps before reuse.', 'However, how to automatically decide which variables to be swapped or recomputed remains a challenging problem.', 'To address this issue, we propose to use a deep Q-network (DQN) to make plans.', 'By combining variable swapping and recomputation, our results outperform several well-known benchmarks.']
### SUMMARY:
| [
"We propose a reinforcement learning based variable swapping and recomputation algorithm to reduce the memory cost."
] |
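The recomputation half of this trade-off exists as a primitive in PyTorch: torch.utils.checkpoint frees a segment's intermediate activations after the forward pass and recomputes them during backward. The sketch below shows only that primitive; deciding which segments to checkpoint (or which tensors to swap to CPU) is what the paper's DQN learns:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

blocks = []
for _ in range(4):
    blocks += [nn.Linear(1024, 1024), nn.ReLU()]
segment = nn.Sequential(*blocks)

x = torch.randn(32, 1024, requires_grad=True)
# Activations inside `segment` are discarded after forward and recomputed
# on demand in backward, trading extra compute for lower peak GPU memory.
y = checkpoint(segment, x, use_reentrant=False)
y.sum().backward()
```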
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In vanilla backpropagation (VBP), activation function matters considerably in terms of non-linearity and differentiability.\n', 'Vanishing gradient has been an important problem related to the bad choice of activation function in deep learning (DL).\n', 'This work shows that a differentiable activation function is not necessary any more for error backpropagation. \n', 'The derivative of the activation function can be replaced by an iterative temporal differencing (ITD) using fixed random feedback weight alignment (FBA).\n', 'Using FBA with ITD, we can transform the VBP into a more biologically plausible approach for learning deep neural network architectures.\n', "We don't claim that ITD works completely the same as the spike-time dependent plasticity (STDP) in our brain but this work can be a step toward the integration of STDP-based error backpropagation in deep learning."]
### SUMMARY:
| [
"Iterative temporal differencing with fixed random feedback alignment support spike-time dependent plasticity in vanilla backpropagation for deep learning."
] |
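Fixed random feedback weight alignment replaces the transpose of the forward weights with a fixed random matrix in the backward pass. A numpy sketch of FBA on a one-hidden-layer network (the ITD part, which would further replace the activation derivative with a temporal difference of activations, is omitted; sizes and learning rate are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 784, 256, 10
W1 = rng.normal(0, 0.01, (n_hid, n_in))
W2 = rng.normal(0, 0.01, (n_out, n_hid))
B2 = rng.normal(0, 0.01, (n_hid, n_out))  # fixed random feedback weights, never updated

def train_step(x, y, lr=0.01):
    global W1, W2
    h = np.maximum(0.0, W1 @ x)            # ReLU hidden activations
    err = W2 @ h - y                       # output error (linear readout)
    dh = (B2 @ err) * (h > 0)              # error routed through B2, not W2.T
    W2 -= lr * np.outer(err, h)
    W1 -= lr * np.outer(dh, x)
```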
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Inspired by the recent successes of deep generative models for Text-To-Speech (TTS) such as WaveNet (van den Oord et al., 2016) and Tacotron (Wang et al., 2017), this article proposes the use of a deep generative model tailored for Automatic Speech Recognition (ASR) as the primary acoustic model (AM) for an overall recognition system with a separate language model (LM).', 'Two dimensions of depth are considered: (1) the use of mixture density networks, both autoregressive and non-autoregressive, to generate density functions capable of modeling acoustic input sequences with much more powerful conditioning than the first-generation generative models for ASR, Gaussian Mixture Models / Hidden Markov Models (GMM/HMMs), and (2) the use of standard LSTMs, in the spirit of the original tandem approach, to produce discriminative feature vectors for generative modeling.', 'Combining mixture density networks and deep discriminative features leads to a novel dual-stack LSTM architecture directly related to the RNN Transducer (Graves, 2012), but with the explicit functional form of a density, and combining naturally with a separate language model, using Bayes rule.', 'The generative models discussed here are compared experimentally in terms of log-likelihoods and frame accuracies.']
### SUMMARY:
| [
"This paper proposes the use of a deep generative acoustic model for automatic speech recognition, combining naturally with other deep sequence-to-sequence modules using Bayes' rule."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Recent studies show that widely used Deep neural networks (DNNs) are vulnerable to the carefully crafted adversarial examples.\n', 'Many advanced algorithms have been proposed to generate adversarial examples by leveraging the L_p distance for penalizing perturbations.\n', 'Different defense methods have also been explored to defend against such adversarial attacks. \n', 'While the effectiveness of L_p distance as a metric of perceptual quality remains an active research area, in this paper we will instead focus on a different type of perturbation, namely spatial transformation, as opposed to manipulating the pixel values directly as in prior works.\n', 'Perturbations generated through spatial transformation could result in large L_p distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems.', 'This potentially provides a new direction in adversarial example generation and the design of corresponding defenses.\n', 'We visualize the spatial transformation based perturbation for different examples and show that our technique\n', 'can produce realistic adversarial examples with smooth image deformation.\n', 'Finally, we visualize the attention of deep networks with different types of adversarial examples to better understand how these examples are interpreted.']
### SUMMARY:
| [
"We propose a new approach for generating adversarial examples based on spatial transformation, which produces perceptually realistic examples compared to existing attacks. "
] |
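Instead of perturbing pixel values, a spatially transformed adversarial example optimizes a per-pixel flow field and warps the image through it. A PyTorch sketch of the differentiable warping step via grid_sample; optimizing `flow` against the classifier loss (and the smoothness term such attacks typically add) is omitted:

```python
import torch
import torch.nn.functional as F

def warp(x, flow):
    """x: images (N, C, H, W); flow: per-pixel offsets (N, H, W, 2) in
    normalized [-1, 1] coordinates. Returns the spatially transformed batch."""
    n = x.shape[0]
    theta = torch.eye(2, 3).unsqueeze(0).repeat(n, 1, 1)        # identity transform
    grid = F.affine_grid(theta, list(x.shape), align_corners=False)
    return F.grid_sample(x, grid + flow, align_corners=False)

x = torch.rand(1, 3, 32, 32)
flow = 0.02 * torch.randn(1, 32, 32, 2, requires_grad=True)    # small initial flow
x_adv = warp(x, flow)                                          # differentiable in flow
```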
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Most existing deep reinforcement learning (DRL) frameworks consider action spaces that are either\n', 'discrete or continuous.', 'Motivated by the project of designing Game AI for King of Glory\n', '(KOG), one of the world’s most popular mobile games, we consider the scenario with a discrete-continuous\n', 'hybrid action space.', 'To directly apply existing DRL frameworks, existing approaches\n', 'either approximate the hybrid space by a discrete set or relax it into a continuous set, which is\n', 'usually less efficient and robust.', 'In this paper, we propose a parametrized deep Q-network (P-DQN)\n', 'for the hybrid action space without approximation or relaxation.', 'Our algorithm combines DQN and\n', 'DDPG and can be viewed as an extension of the DQN to hybrid actions.', 'The empirical study on the\n', 'game KOG validates the efficiency and effectiveness of our method.']
### SUMMARY:
| [
"A DQN and DDPG hybrid algorithm is proposed to deal with the discrete-continuous hybrid action space."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Over the past decade, knowledge graphs became popular for capturing structured domain knowledge. \n', 'Relational learning models enable the prediction of missing links inside knowledge graphs.', 'More specifically, latent distance approaches model the relationships among entities via a distance between latent representations.\n', 'Translating embedding models (e.g., TransE) are among the most popular latent distance approaches which use one distance function to learn multiple relation patterns. \n', 'However, they are mostly inefficient in capturing symmetric relations since the representation vector norm for all the symmetric relations becomes equal to zero.', 'They also lose information when learning relations with reflexive patterns since they become symmetric and transitive.\n', 'We propose the Multiple Distance Embedding model (MDE) that addresses these limitations and a framework which enables collaborative combinations of latent distance-based terms (MDE).\n', 'Our solution is based on two principles:', '1) using limit-based loss instead of margin ranking loss and', '2) by learning independent embedding vectors for each of terms we can collectively train and predict using contradicting distance terms.\n', 'We further demonstrate that MDE allows modeling relations with (anti)symmetry, inversion, and composition patterns.', 'We propose MDE as a neural network model which allows us to map non-linear relations between the embedding vectors and the expected output of the score function.\n', 'Our empirical results show that MDE outperforms the state-of-the-art embedding models on several benchmark datasets.']
### SUMMARY:
| [
"A novel method of modelling Knowledge Graphs based on Distance Embeddings and Neural Networks"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Improved generative adversarial network (Improved GAN) is a successful method of using generative adversarial models to solve the problem of semi-supervised learning.', 'However, it suffers from the problem of unstable training.', 'In this paper, we found that the instability is mostly due to the vanishing gradients on the generator.', 'To remedy this issue, we propose a new method to use collaborative training to improve the stability of semi-supervised GAN with the combination of Wasserstein GAN.', 'The experiments have shown that our proposed method is more stable than the original Improved GAN and achieves comparable classification accuracy on different data sets.']
### SUMMARY:
| [
"Improve Training Stability of Semi-supervised Generative Adversarial Networks with Collaborative Training"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['It has long been known that a single-layer fully-connected neural network with an i.i.d.', 'prior over its parameters is equivalent to a Gaussian process (GP), in the limit of infinite network width. ', 'This correspondence enables exact Bayesian inference for infinite width neural networks on regression tasks by means of evaluating the corresponding GP.', 'Recently, kernel functions which mimic multi-layer random neural networks have been developed, but only outside of a Bayesian framework.', 'As such, previous work has not identified that these kernels can be used as covariance functions for GPs and allow fully Bayesian prediction with a deep neural network.\n\n', 'In this work, we derive the exact equivalence between infinitely wide, deep, networks and GPs with a particular covariance function.', 'We further develop a computationally efficient pipeline to compute this covariance function.', 'We then use the resulting GP to perform Bayesian inference for deep neural networks on MNIST and CIFAR-10. ', 'We observe that the trained neural network accuracy approaches that of the corresponding GP with increasing layer width, and that the GP uncertainty is strongly correlated with trained network prediction error.', 'We further find that test performance increases as finite-width trained networks are made wider and more similar to a GP, and that the GP-based predictions typically outperform those of finite-width networks.', 'Finally we connect the prior distribution over weights and variances in our GP formulation to the recent development of signal propagation in random neural networks.']
### SUMMARY:
| [
"We show how to make predictions using deep networks, without training deep networks."
] |
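The covariance function referred to in the abstract above can be computed layer by layer; for ReLU activations the recursion has a closed form (the arccosine kernel). A sketch in which the weight/bias variances `sw2` and `sb2` are hypothetical choices rather than values from the paper:

```python
import numpy as np

def nngp_relu_kernel(x1, x2, depth=3, sw2=1.6, sb2=0.1):
    # Arccosine recursion for a deep ReLU network in the infinite-width
    # limit; sw2/sb2 are assumed weight/bias variances.
    d = x1.shape[0]
    k11 = sb2 + sw2 * (x1 @ x1) / d
    k22 = sb2 + sw2 * (x2 @ x2) / d
    k12 = sb2 + sw2 * (x1 @ x2) / d
    for _ in range(depth):
        c = np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0)
        theta = np.arccos(c)
        k12 = sb2 + sw2 / (2 * np.pi) * np.sqrt(k11 * k22) * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta))
        k11 = sb2 + sw2 * k11 / 2.0  # E[relu(z)^2] = k/2 for z ~ N(0, k)
        k22 = sb2 + sw2 * k22 / 2.0
    return k12

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=16), rng.normal(size=16)
print(nngp_relu_kernel(x1, x2))
```

Exact Bayesian prediction then reduces to ordinary GP regression with this kernel in place of a generic one such as an RBF.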
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Search space is a key consideration for neural architecture search.', 'Recently, Xie et al. (2019a) found that randomly generated networks from the same distribution perform similarly, which suggests we should search for random graph distributions instead of individual graphs.', 'We propose the graphon as a new search space.', 'A graphon is the limit of a Cauchy sequence of graphs and a scale-free probabilistic distribution, from which graphs with different numbers of vertices can be drawn.', 'This property enables us to perform NAS using fast, low-capacity models and scale the found models up when necessary.', 'We develop an algorithm for NAS in the space of graphons and empirically demonstrate that it can find stage-wise graphs that outperform DenseNet and other baselines on ImageNet.']
### SUMMARY:
| [
"Graphon is a good search space for neural architecture search and empirically produces good networks."
] |
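The scale-free property claimed for graphons is direct to demonstrate: a graphon W: [0,1]^2 -> [0,1] can be sampled at any vertex count. A minimal sketch with a hypothetical W:

```python
import numpy as np

def sample_graph(W, n, seed=0):
    """Draw an n-vertex undirected graph from graphon W."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)                      # latent vertex positions
    P = W(u[:, None], u[None, :])                # pairwise edge probabilities
    A = (rng.uniform(size=(n, n)) < P).astype(int)
    A = np.triu(A, 1)                            # no self-loops...
    return A + A.T                               # ...and symmetric

W = lambda x, y: 0.8 * np.exp(-3 * abs(x - y))   # hypothetical graphon
print(sample_graph(W, 8))         # small graph for cheap architecture search
print(sample_graph(W, 64).shape)  # scaled-up graph drawn from the same W
```

Searching over W with small n and then sampling at large n is exactly the cheap-search, scale-up workflow the abstract describes.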
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Adversarial training is by far the most successful strategy for improving robustness of neural networks to adversarial attacks.', 'Despite its success as a defense mechanism, adversarial training fails to generalize well to unperturbed test set.', 'We hypothesize that this poor generalization is a consequence of adversarial training with uniform perturbation radius around every training sample.', 'Samples close to decision boundary can be morphed into a different class under a small perturbation budget, and enforcing large margins around these samples produce poor decision boundaries that generalize poorly.', 'Motivated by this hypothesis, we propose instance adaptive adversarial training -- a technique that enforces sample-specific perturbation margins around every training sample.', 'We show that using our approach, test accuracy on unperturbed samples improve with a marginal drop in robustness.', 'Extensive experiments on CIFAR-10, CIFAR-100 and Imagenet datasets demonstrate the effectiveness of our proposed approach.']
### SUMMARY:
| [
"Instance adaptive adversarial training for improving robustness-accuracy tradeoff"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Machine learning models including traditional models and neural networks can be easily fooled by adversarial examples which are generated from the natural examples with small perturbations. ', 'This poses a critical challenge to machine learning security, and impedes the wide application of machine learning in many important domains such as computer vision and malware detection. ', 'Unfortunately, even state-of-the-art defense approaches such as adversarial training and defensive distillation still suffer from major limitations and can be circumvented. ', 'From a unique angle, we propose to investigate two important research questions in this paper: Are adversarial examples distinguishable from natural examples? ', 'Are adversarial examples generated by different methods distinguishable from each other? ', 'These two questions concern the distinguishability of adversarial examples. ', 'Answering them will potentially lead to a simple yet effective approach, termed as defensive distinction in this paper under the formulation of multi-label classification, for protecting against adversarial examples. ', 'We design and perform experiments using the MNIST dataset to investigate these two questions, and obtain highly positive results demonstrating the strong distinguishability of adversarial examples. ', 'We recommend that this unique defensive distinction approach should be seriously considered to complement other defense approaches.']
### SUMMARY:
| [
"We propose a defensive distinction protection approach and demonstrate the strong distinguishability of adversarial examples."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['For a long time, designing neural architectures that exhibit high performance was considered a dark art that required expert hand-tuning.', 'One of the few well-known guidelines for architecture design is the avoidance of exploding or vanishing gradients.', 'However, even this guideline has remained relatively vague and circumstantial, because there exists no well-defined, gradient-based metric that can be computed {\\it before} training begins and can robustly predict the performance of the network {\\it after} training is complete.\n\n', 'We introduce what is, to the best of our knowledge, the first such metric: the nonlinearity coefficient (NLC).', "Via an extensive empirical study, we show that the NLC, computed in the network's randomly initialized state, is a powerful predictor of test error and that attaining a right-sized NLC is essential for attaining an optimal test error, at least in fully-connected feedforward networks.", 'The NLC is also conceptually simple, cheap to compute, and is robust to a range of confounders and architectural design choices that comparable metrics are not necessarily robust to.', 'Hence, we argue the NLC is an important tool for architecture search and design, as it can robustly predict poor training outcomes before training even begins.']
### SUMMARY:
| [
"We introduce the NLC, a metric that is cheap to compute in the networks randomly initialized state and is highly predictive of generalization, at least in fully-connected networks."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['This paper gives a rigorous analysis of trained Generalized Hamming Networks (GHN) proposed by Fan (2017) and discloses an interesting finding about GHNs, i.e. stacked convolution layers in a GHN are equivalent to a single yet wide convolution layer.', 'The revealed equivalence, on the theoretical side, can be regarded as a constructive manifestation of the universal approximation theorem of Cybenko (1989); Hornik (1991).', 'In practice, it has profound and multi-fold implications.', 'For network visualization, the constructed deep epitomes at each layer provide a visualization of the network internal representation that does not rely on the input data.', 'Moreover, deep epitomes allow the direct extraction of features in just one step, without resorting to the regularized optimizations used in existing visualization tools.']
### SUMMARY:
| [
"bridge the gap in soft computing"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['There are two major paradigms of white-box adversarial attacks that attempt to impose input perturbations. ', 'The first paradigm, called the fix-perturbation attack, crafts adversarial samples within a given perturbation level. ', 'The second paradigm, called the zero-confidence attack, finds the smallest perturbation needed to cause misclassification, also known as the margin of an input feature. ', 'While the former paradigm is well-resolved, the latter is not. ', 'Existing zero-confidence attacks either introduce significant approximation errors, or are too time-consuming. ', 'We therefore propose MarginAttack, a zero-confidence attack framework that is able to compute the margin with improved accuracy and efficiency. ', 'Our experiments show that MarginAttack is able to compute a smaller margin than the state-of-the-art zero-confidence attacks, and matches the state-of-the-art fix-perturbation attacks. ', 'In addition, it runs significantly faster than the Carlini-Wagner attack, currently the most accurate zero-confidence attack algorithm.']
### SUMMARY:
| [
"This paper introduces MarginAttack, a stronger and faster zero-confidence adversarial attack."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Given the variety of the visual world there is not one true scale for recognition: objects may appear at drastically different sizes across the visual field.', 'Rather than enumerate variations across filter channels or pyramid levels, dynamic models locally predict scale and adapt receptive fields accordingly.', 'The degree of variation and diversity of inputs makes this a difficult task.', 'Existing methods either learn a feedforward predictor, which is not itself totally immune to the scale variation it is meant to counter, or select scales by a fixed algorithm, which cannot learn from the given task and data.', 'We extend dynamic scale inference from feedforward prediction to iterative optimization for further adaptivity.', 'We propose a novel entropy minimization objective for inference and optimize over task and structure parameters to tune the model to each input.', 'Optimization during inference improves semantic segmentation accuracy and generalizes better to extreme scale variations that cause feedforward dynamic inference to falter.']
### SUMMARY:
| [
"Unsupervised optimization during inference gives top-down feedback to iteratively adjust feedforward prediction of scale variation for more equivariant recognition."
] |
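The move from feedforward prediction to iterative optimization at inference can be illustrated with a toy stand-in: tune a small set of parameters at test time to minimize the entropy of the model's own output. Here the adapted parameter is a single hypothetical input scale, not the paper's actual scale-inference parameters:

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(32, 10)                 # stand-in for a trained net
x = torch.randn(1, 32)                          # stand-in input
log_scale = torch.zeros(1, requires_grad=True)  # hypothetical scale parameter
opt = torch.optim.Adam([log_scale], lr=0.1)

for step in range(20):                          # iterative test-time inference
    probs = torch.softmax(model(x * log_scale.exp()), dim=-1)
    entropy = -(probs * (probs + 1e-8).log()).sum()
    opt.zero_grad()
    entropy.backward()
    opt.step()
print(log_scale.exp().item())  # the scale this particular input "asked for"
```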
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The Deep Image Prior (DIP, Ulyanov et al., 2017) is a fascinating recent approach for recovering images which appear natural, yet is not fully understood.', 'This work aims at shedding some further light on this approach by investigating the properties of the early outputs of the DIP.', 'First, we show that these early iterations demonstrate invariance to adversarial perturbations by classifying progressive DIP outputs and using a novel saliency map approach.', 'Next we explore using DIP as a defence against adversaries, showing good potential.', 'Finally, we examine the adversarial invariancy of the early DIP outputs, and hypothesize that these outputs may remove non-robust image features.', 'By comparing classification confidence values we show some evidence confirming this hypothesis.']
### SUMMARY:
| [
"We investigate properties of the recently introduced Deep Image Prior (Ulyanov et al, 2017)"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this paper, we address the challenge of limited labeled data and the class imbalance problem for machine learning-based rumor detection on social media.', 'We present an offline data augmentation method based on semantic relatedness for rumor detection.', 'To this end, unlabeled social media data is exploited to augment limited labeled data.', 'A context-aware neural language model and a large credibility-focused Twitter corpus are employed to learn effective representations of rumor tweets for semantic relatedness measurement.', 'A language model fine-tuned with a large domain-specific corpus shows a dramatic improvement on training data augmentation for rumor detection over pretrained language models.', 'We conduct experiments on six different real-world events based on five publicly available data sets and one augmented data set.', 'Our experiments show that the proposed method allows us to generate a larger training data set with reasonable quality via weak supervision.', 'We present preliminary results achieved using a state-of-the-art neural network model with augmented data for rumor detection.']
### SUMMARY:
| [
"We propose a methodology of augmenting publicly available data for rumor studies based on samantic relatedness between limited labeled and unlabeled data."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Self-attention is a useful mechanism to build generative models for language and images.', 'It determines the importance of context elements by comparing each element to the current time step.', 'In this paper, we show that a very lightweight convolution can perform competitively to the best reported self-attention results.', 'Next, we introduce dynamic convolutions which are simpler and more efficient than self-attention.', 'We predict separate convolution kernels based solely on the current time-step in order to determine the importance of context elements.', 'The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic.', 'Experiments on large-scale machine translation, language modeling and abstractive summarization show that dynamic convolutions improve over strong self-attention models.', "On the WMT'14 English-German test set dynamic convolutions achieve a new state of the art of 29.7 BLEU."]
### SUMMARY:
| [
"Dynamic lightweight convolutions are competitive to self-attention on language tasks."
] |
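A dynamic convolution, as described above, predicts a normalized k-tap kernel from the current time step alone and applies it causally, so cost grows linearly in the sequence length. A single-head PyTorch sketch; sharing one kernel across all channels is a simplification of the paper's per-channel-group weights:

```python
import torch
import torch.nn.functional as F

T, d, k = 10, 16, 3                  # sequence length, channels, kernel taps
x = torch.randn(1, T, d)             # (batch, time, channels)
kernel_pred = torch.nn.Linear(d, k)  # predicts a kernel from each time step

w = torch.softmax(kernel_pred(x), dim=-1)  # (1, T, k) normalized kernels
x_pad = F.pad(x, (0, 0, k - 1, 0))         # left-pad the time axis (causal)
windows = x_pad.unfold(1, k, 1)            # (1, T, d, k): k past frames
out = torch.einsum('btdk,btk->btd', windows, w)
print(out.shape)                           # torch.Size([1, 10, 16])
```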
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond.', 'By necessity, most theoretical guarantees revolve around convex-concave (or even linear) problems; however, making theoretical inroads towards efficient GAN training depends crucially on moving beyond this classic framework.', 'To make piecemeal progress along these lines, we analyze the behavior of mirror descent (MD) in a class of non-monotone problems whose solutions coincide with those of a naturally associated variational inequality – a property which we call coherence.', 'We first show that ordinary, “vanilla” MD converges under a strict version of this condition, but not otherwise; in particular, it may fail to converge even in bilinear models with a unique solution.', 'We then show that this deficiency is mitigated by optimism: by taking an “extra-gradient” step, optimistic mirror descent (OMD) converges in all coherent problems.', 'Our analysis generalizes and extends the results of Daskalakis et al. [2018] for optimistic gradient descent (OGD) in bilinear problems, and makes concrete headway for provable convergence beyond convex-concave games.', 'We also provide stochastic analogues of these results, and we validate our analysis by numerical experiments in a wide array of GAN models (including Gaussian mixture models, and the CelebA and CIFAR-10 datasets).']
### SUMMARY:
| [
"We show how the inclusion of an extra-gradient step in first-order GAN training methods can improve stability and lead to improved convergence results."
] |
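The value of the extra-gradient step is easiest to see on the canonical bilinear game min_x max_y xy, where plain gradient descent-ascent spirals away from the solution while the extra-gradient update contracts toward it. A self-contained sketch:

```python
import numpy as np

def gda(x, y, eta=0.1, steps=200):
    for _ in range(steps):
        gx, gy = y, -x                    # gradients for f(x, y) = x * y
        x, y = x - eta * gx, y - eta * gy
    return x, y

def extragradient(x, y, eta=0.1, steps=200):
    for _ in range(steps):
        gx, gy = y, -x                    # 1) optimistic look-ahead step
        xh, yh = x - eta * gx, y - eta * gy
        gx, gy = yh, -xh                  # 2) real update from the look-ahead
        x, y = x - eta * gx, y - eta * gy
    return x, y

print(gda(1.0, 1.0))            # drifts away from the solution (0, 0)
print(extragradient(1.0, 1.0))  # contracts toward (0, 0)
```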
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Batch normalization (BN) is often used in an attempt to stabilize and accelerate training in deep neural networks.', 'In many cases it indeed decreases the number of parameter updates required to achieve low training error.', 'However, it also reduces robustness to small adversarial input perturbations and common corruptions by double-digit percentages, as we show on five standard datasets.', 'Furthermore, we find that substituting weight decay for BN is sufficient to nullify a relationship between adversarial vulnerability and the input dimension.', 'A recent mean-field analysis found that BN induces gradient explosion when used on multiple layers, but this cannot fully explain the vulnerability we observe, given that it occurs already for a single BN layer.', 'We argue that the actual cause is the tilting of the decision boundary with respect to the nearest-centroid classifier along input dimensions of low variance.', 'As a result, the constant introduced for numerical stability in the BN step acts as an important hyperparameter that can be tuned to recover some robustness at the cost of standard test accuracy.', 'We explain this mechanism explicitly on a linear "toy" model and show in experiments that it still holds for nonlinear "real-world" models.']
### SUMMARY:
| [
"Batch normalization reduces robustness at test-time to common corruptions and adversarial examples."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Learning to Optimize is a recently proposed framework for learning optimization algorithms using reinforcement learning.', 'In this paper, we explore learning an optimization algorithm for training shallow neural nets.', 'Such high-dimensional stochastic optimization problems present interesting challenges for existing reinforcement learning algorithms.', 'We develop an extension that is suited to learning optimization algorithms in this setting and demonstrate that the learned optimization algorithm consistently outperforms other known optimization algorithms even on unseen tasks and is robust to changes in stochasticity of gradients and the neural net architecture.', 'More specifically, we show that an optimization algorithm trained with the proposed method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on the Toronto Faces Dataset, CIFAR-10 and CIFAR-100.']
### SUMMARY:
| [
"We learn an optimization algorithm that generalizes to unseen tasks"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The dependency of the generalization error of neural networks on model and dataset size is of critical importance both in practice and for understanding the theory of neural networks.', 'Nevertheless, the functional form of this dependency remains elusive.', 'In this work, we present a functional form which approximates well the generalization error in practice.', 'Capitalizing on the successful concept of model scaling (e.g., width, depth), we are able to simultaneously construct such a form and specify the exact models which can attain it across model/data scales.', 'Our construction follows insights obtained from observations conducted over a range of model/data scales, in various model types and datasets, in vision and language tasks.', 'We show that the form both fits the observations well across scales, and provides accurate predictions from small- to large-scale models and data.']
### SUMMARY:
| [
"We predict the generalization error and specify the model which attains it across model/data scales."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Learning to control an environment without hand-crafted rewards or expert data remains challenging and is at the frontier of reinforcement learning research.', 'We present an unsupervised learning algorithm to train agents to achieve perceptually-specified goals using only a stream of observations and actions.', 'Our agent simultaneously learns a goal-conditioned policy and a goal achievement reward function that measures how similar a state is to the goal state.', 'This dual optimization leads to a co-operative game, giving rise to a learned reward function that reflects similarity in controllable aspects of the environment instead of distance in the space of observations.', 'We demonstrate the efficacy of our agent to learn, in an unsupervised manner, to reach a diverse set of goals on three domains -- Atari, the DeepMind Control Suite and DeepMind Lab.']
### SUMMARY:
| [
"Unsupervised reinforcement learning method for learning a policy to robustly achieve perceptually specified goals."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['State of the art sequence-to-sequence models for large scale tasks perform a fixed number of computations for each input sequence regardless of whether it is easy or hard to process.\n', 'In this paper, we train Transformer models which can make output predictions at different stages of the network and we investigate different ways to predict how much computation is required for a particular sequence.\n', 'Unlike dynamic computation in Universal Transformers, which applies the same set of layers iteratively, we apply different layers at every step to adjust both the amount of computation as well as the model capacity.\n', 'On IWSLT German-English translation our approach matches the accuracy of a well tuned baseline Transformer while using less than a quarter of the decoder layers.']
### SUMMARY:
| [
"Sequence model that dynamically adjusts the amount of computation for each input."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Variational Auto-encoders (VAEs) are deep generative latent variable models consisting of two components: a generative model that captures a data distribution p(x) by transforming a distribution p(z) over latent space, and an inference model that infers likely latent codes for each data point (Kingma and Welling, 2013).', 'Recent work shows that traditional training methods tend to yield solutions that violate modeling desiderata: (1) the learned generative model captures the observed data distribution but does so while ignoring the latent codes, resulting in codes that do not represent the data (e.g. van den Oord et al. (2017); Kim et al. (2018)); (2) the aggregate of the learned latent codes does not match the prior p(z).', 'This mismatch means that the learned generative model will be unable to generate realistic data with samples from p(z)(e.g. Makhzani et al. (2015); Tomczak and Welling (2017)).\n\n', 'In this paper, we demonstrate that both issues stem from the fact that the global optima of the VAE training objective often correspond to undesirable solutions.', 'Our analysis builds on two observations: (1) the generative model is unidentifiable – there exist many generative models that explain the data equally well, each with different (and potentially unwanted) properties and (2) bias in the VAE objective – the VAE objective may prefer generative models that explain the data poorly but have posteriors that are easy to approximate.', 'We present a novel inference method, LiBI, mitigating the problems identified in our analysis.', 'On synthetic datasets, we show that LiBI can learn generative models that capture the data distribution and inference models that better satisfy modeling assumptions when traditional methods struggle to do so.']
### SUMMARY:
| [
"We characterize problematic global optima of the VAE objective and present a novel inference method to avoid such optima."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Modeling hypernymy, such as poodle is-a dog, is an important generalization aid to many NLP tasks, such as entailment, relation extraction, and question answering.', 'Supervised learning from labeled hypernym sources, such as WordNet, limit the coverage of these models, which can be addressed by learning hypernyms from unlabeled text. ', 'Existing unsupervised methods either do not scale to large vocabularies or yield unacceptably poor accuracy. ', 'This paper introduces {\\it distributional inclusion vector embedding (DIVE)}, a simple-to-implement unsupervised method of hypernym discovery via per-word non-negative vector embeddings which preserve the inclusion property of word contexts.', 'In experimental evaluations more comprehensive than any previous literature of which we are aware---evaluating on 11 datasets using multiple existing as well as newly proposed scoring functions---we find that our method provides up to double the precision of previous unsupervised methods, and the highest average performance, using a much more compact word representation, and yielding many new state-of-the-art results.', 'In addition, the meaning of each dimension in DIVE is interpretable, which leads to a novel approach on word sense disambiguation as another promising application of DIVE.']
### SUMMARY:
| [
"We propose a novel unsupervised word embedding which preserves the inclusion property in the context distribution and achieve state-of-the-art results on unsupervised hypernymy detection"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Continual learning aims to learn new tasks without forgetting previously learned ones.', 'This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity.', "Current regularization-based continual learning algorithms need an external representation and extra computation to measure the parameters' \\textit{importance}.", 'In contrast, we propose Uncertainty-guided Continual Bayesian Neural Networks (UCB) where the learning rate adapts according to the uncertainty defined in the probability distribution of the weights in networks.', 'Uncertainty is a natural way to identify \\textit{what to remember} and \\textit{what to change} as we continually learn, and thus mitigate catastrophic forgetting.', 'We also show a variant of our model, which uses uncertainty for weight pruning \n', 'and retains task performance after pruning by saving binary masks per task.', 'We evaluate our UCB approach extensively on diverse object classification datasets with short and long sequences of tasks and report superior or on-par performance compared to existing approaches.', 'Additionally, we show that our model does not necessarily need task information at test time, i.e.~it does not presume knowledge of which task a sample belongs to.']
### SUMMARY:
| [
"A regularization-based approach for continual learning using Bayesian neural networks to predict parameters' importance"
] |
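One reading of "the learning rate adapts according to the uncertainty... of the weights" is a per-parameter step size proportional to the posterior standard deviation, so confident weights are protected while uncertain ones stay plastic. A hedged toy sketch (the exact scaling rule in the paper may differ):

```python
import torch
import torch.nn.functional as F

# Posterior mean and (softplus-parameterized) std of BNN weights;
# all values here are hypothetical stand-ins.
mu = torch.randn(5, requires_grad=True)
sigma = F.softplus(torch.randn(5))

loss = (mu ** 2).sum()  # stand-in for the task loss on the new task
loss.backward()

base_lr = 0.01
with torch.no_grad():
    # Certain weights (small sigma) barely move; uncertain weights
    # (large sigma) remain free to adapt, mitigating forgetting.
    mu -= base_lr * sigma * mu.grad
```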
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Humans have a natural curiosity to imagine what it feels like to exist as someone or something else.', 'This curiosity becomes even stronger for the pets we care for.', 'Humans cannot truly know what it is like to be our pets, but we can deepen our understanding of what it is like to perceive and explore the world like them.', 'We investigate how wearables can offer people animal perspective-taking opportunities to experience the world through animal senses that differ from those biologically natural to us.', 'To assess the potential of wearables in animal perspective-taking, we developed a sensory-augmenting wearable that gives wearers cat-like whiskers.', 'We then created a maze exploration experience where blindfolded participants utilized the whiskers to navigate the maze.', 'We draw on animal behavioral research to evaluate how the whisker activity supported authentically cat-like experiences, and discuss the implications of this work for future learning experiences.']
### SUMMARY:
| [
"This paper explores using wearable sensory augmenting technology to facilitate first-hand perspective-taking of what it is like to have cat-like whiskers."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Generalization error (also known as the out-of-sample error) measures how well the hypothesis learned from training data generalizes to previously unseen data.', 'Proving tight generalization error bounds is a central question in statistical learning theory. ', 'In this paper, we obtain generalization error bounds for learning general non-convex objectives, which has attracted significant attention in recent years. ', 'We develop a new framework, termed Bayes-Stability, for proving algorithm-dependent generalization error bounds. ', 'The new framework combines ideas from both the PAC-Bayesian theory and the notion of algorithmic stability. ', 'Applying the Bayes-Stability method, we obtain new data-dependent generalization bounds for stochastic gradient Langevin dynamics (SGLD) and several other noisy gradient methods (e.g., with momentum, mini-batch and acceleration, Entropy-SGD).', 'Our result recovers (and is typically tighter than) a recent result in Mou et al. (2018) and improves upon the results in Pensia et al. (2018). ', 'Our experiments demonstrate that our data-dependent bounds can distinguish randomly labelled data from normal data, which provides an explanation to the intriguing phenomena observed in Zhang et al. (2017a).', 'We also study the setting where the total loss is the sum of a bounded loss and an additional $\\ell_2$ regularization term.', 'We obtain new generalization bounds for the continuous Langevin dynamics in this setting by developing a new Log-Sobolev inequality for the parameter distribution at any time.', 'Our new bounds are more desirable when the noise level of the process is not very small, and do not become vacuous even when T tends to infinity.']
### SUMMARY:
| [
"We give some generalization error bounds of noisy gradient methods such as SGLD, Langevin dynamics, noisy momentum and so forth."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this paper, a new intrinsic reward generation method for sparse-reward reinforcement learning is proposed based on an ensemble of dynamics models.', 'In the proposed method, the mixture of multiple dynamics models is used to approximate the true unknown transition probability, and the intrinsic reward is designed as the minimum of the surprise seen from each dynamics model to the mixture of the dynamics models.', 'In order to show the effectiveness of the proposed intrinsic reward generation method, a working algorithm is constructed by combining the proposed intrinsic reward generation method with the proximal policy optimization (PPO) algorithm.', 'Numerical results show that for representative locomotion tasks, the proposed model-ensemble-based intrinsic reward generation method outperforms the previous methods based on a single dynamics model.']
### SUMMARY:
| [
"For sparse-reward reinforcement learning, the ensemble of multiple dynamics models is used to generate intrinsic reward designed as the minimum of the surprise."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Given the fast development of analysis techniques for NLP and speech\n', 'processing systems, few systematic studies have been conducted to\n', 'compare the strengths and weaknesses of each method.', ' As a step in\n', 'this direction we study the case of representations of phonology in\n', 'neural network models of spoken language.', 'We use two commonly applied\n', 'analytical techniques, diagnostic classifiers and representational\n', 'similarity analysis, to quantify to what extent neural activation\n', 'patterns encode phonemes and phoneme sequences.', 'We manipulate two\n', 'factors that can affect the outcome of analysis.', 'First, we investigate\n', 'the role of learning by comparing neural activations extracted from\n', 'trained versus randomly-initialized models.', 'Second, we examine the\n', 'temporal scope of the activations by probing both local activations\n', 'corresponding to a few milliseconds of the speech signal, and global\n', 'activations pooled over the whole utterance.', 'We conclude that\n', 'reporting analysis results with randomly initialized models is\n', 'crucial, and that global-scope methods tend to yield more consistent\n', 'and interpretable results and we recommend their use as a complement\n', 'to local-scope diagnostic methods.']
### SUMMARY:
| [
"We study representations of phonology in neural network models of spoken language with several variants of analytical techniques."
] |
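The two analysis techniques named in the abstract are each only a few lines in practice: a diagnostic classifier is a simple probe trained from activations to labels, and representational similarity analysis correlates two pairwise-dissimilarity structures. A sketch on synthetic stand-in data (in real use the activations come from the speech model and the probe is scored on a held-out split):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
acts = rng.normal(size=(40, 128))   # activations for 40 stimuli (stand-in)
labels = rng.integers(0, 6, 40)     # stand-in phoneme labels
onehot = np.eye(6)[labels]

# Diagnostic classifier: probe the activations for the labels.
probe_acc = LogisticRegression(max_iter=1000).fit(acts, labels).score(acts, labels)

# RSA: correlate the two pairwise-dissimilarity structures.
rho, _ = spearmanr(pdist(acts, 'cosine'), pdist(onehot, 'cosine'))
print(probe_acc, rho)
```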
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Super Resolution (SR) is a fundamental and important low-level computer vision (CV) task.', 'Different from traditional SR models, this study concentrates on a specific but realistic SR issue: how can we obtain satisfying SR results from compressed JPG (C-JPG) images, which are widespread on the Internet?', 'In general, C-JPG saves storage space while keeping considerable visual quality.', 'However, further image processing operations, e.g., SR, will suffer from enlarged inner artificial details and result in unacceptable outputs.', 'To address this problem, we propose a novel SR structure with two specifically designed components, as well as a cycle loss.', 'In short, this paper makes three main contributions.', 'First, our research can generate high-quality SR images for prevalent C-JPG images.', 'Second, we propose a functional sub-model to recover information for C-JPG images, instead of taking the perspective of noise elimination in traditional SR approaches.', 'Third, we further integrate the cycle loss into the SR solver to build a hybrid loss function for better SR generation.', 'Experiments show that our approach achieves outstanding performance among state-of-the-art methods.']
### SUMMARY:
| [
"We solve the specific SR issue of low-quality JPG images by functional sub-models."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Keyword spotting—or wakeword detection—is an essential feature for hands-free operation of modern voice-controlled devices.', 'With such devices becoming ubiquitous, users might want to choose a personalized custom wakeword.', 'In this work, we present DONUT, a CTC-based algorithm for online query-by-example keyword spotting that enables custom wakeword detection.', 'The algorithm works by recording a small number of training examples from the user, generating a set of label sequence hypotheses from these training examples, and detecting the wakeword by aggregating the scores of all the hypotheses given a new audio recording.', 'Our method combines the generalization and interpretability of CTC-based keyword spotting with the user-adaptation and convenience of a conventional query-by-example system.', 'DONUT has low computational requirements and is well-suited for both learning and inference on embedded systems without requiring private user data to be uploaded to the cloud.']
### SUMMARY:
| [
"We propose an interpretable model for detecting user-chosen wakewords that learns from the user's examples."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['To flexibly and efficiently reason about temporal sequences, abstract representations that compactly represent the important information in the sequence are needed.', 'One way of constructing such representations is by focusing on the important events in a sequence.', 'In this paper, we propose a model that learns both to discover such key events (or keyframes) as well as to represent the sequence in terms of them. ', 'We do so using a hierarchical Keyframe-Inpainter (KeyIn) model that first generates keyframes and their temporal placement and then inpaints the sequences between keyframes.', 'We propose a fully differentiable formulation for efficiently learning the keyframe placement.', 'We show that KeyIn finds informative keyframes in several datasets with diverse dynamics.', 'When evaluated on a planning task, KeyIn outperforms other recent proposals for learning hierarchical representations.']
### SUMMARY:
| [
"We propose a model that learns to discover informative frames in a future video sequence and represent the video via its keyframes."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['This work investigates unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder.', "Importantly, we show that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation's suitability for downstream tasks.", 'We further control characteristics of the representation by matching to a prior distribution adversarially.', 'Our method, which we call Deep InfoMax (DIM), outperforms a number of popular unsupervised learning methods and compares favorably with fully-supervised learning on several classification tasks with some standard architectures.', 'DIM opens new avenues for unsupervised learning of representations and is an important step towards flexible formulations of representation learning objectives for specific end-goals.']
### SUMMARY:
| [
"We learn deep representation by maximizing mutual information, leveraging structure in the objective, and are able to compute with fully supervised classifiers with comparable architectures"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Estimating the importance of each atom in a molecule is one of the most appealing and challenging problems in chemistry, physics, and material engineering.', 'The most common way to estimate the atomic importance is to compute the electronic structure using density-functional theory (DFT), and then to interpret it using domain knowledge of human experts.', 'However, this conventional approach is impractical for large molecular databases because DFT calculation requires huge computation, specifically, O(n^4) time complexity w.r.t. the number of electrons in a molecule.', 'Furthermore, the calculation results should be interpreted by the human experts to estimate the atomic importance in terms of the target molecular property.', 'To tackle this problem, we first exploit a machine learning-based approach for atomic importance estimation.', 'To this end, we propose reverse self-attention on graph neural networks and integrate it with graph-based molecular description.', 'Our method provides an efficiently-automated and target-directed way to estimate the atomic importance without any domain knowledge on chemistry and physics.']
### SUMMARY:
| [
"We first propose a fully-automated and target-directed atomic importance estimator based on the graph neural networks and a new concept of reverse self-attention."
] |
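The abstract does not spell out the mechanism, but a natural reading of "reverse self-attention" is to score each atom by the attention mass it receives (column sums of the attention matrix) rather than the attention it distributes (rows). A numpy sketch of that hypothetical reading:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8                                  # atoms, feature dimension
H = rng.normal(size=(n, d))                  # node features from a GNN layer
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))

scores = (H @ Wq) @ (H @ Wk).T / np.sqrt(d)
scores -= scores.max(axis=1, keepdims=True)  # numerical stability
A = np.exp(scores)
A /= A.sum(axis=1, keepdims=True)            # row-softmax: standard attention

# Rows of A say how much each atom attends to the others; the column
# sums say how much attention each atom *receives*, read here as its
# importance (an assumed interpretation, not the paper's exact formula).
importance = A.sum(axis=0)
print(importance / importance.sum())
```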
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Spectral clustering is a leading and popular technique in unsupervised data analysis. ', 'Two of its major limitations are scalability and generalization of the spectral embedding (i.e., out-of-sample-extension).', 'In this paper we introduce a deep learning approach to spectral clustering that overcomes the above shortcomings.', 'Our network, which we call SpectralNet, learns a map that embeds input data points into the eigenspace of their associated graph Laplacian matrix and subsequently clusters them.', 'We train SpectralNet using a procedure that involves constrained stochastic optimization.', 'Stochastic optimization allows it to scale to large datasets, while the constraints, which are implemented using a special purpose output layer, allow us to keep the network output orthogonal.', 'Moreover, the map learned by SpectralNet naturally generalizes the spectral embedding to unseen data points.', 'To further improve the quality of the clustering, we replace the standard pairwise Gaussian affinities with affinities learned from unlabeled data using a Siamese network. ', 'Additional improvement can be achieved by applying the network to code representations produced, e.g., by standard autoencoders.', 'Our end-to-end learning procedure is fully unsupervised.', 'In addition, we apply VC dimension theory to derive a lower bound on the size of SpectralNet. ', 'State-of-the-art clustering results are reported for both the MNIST and Reuters datasets.\n']
### SUMMARY:
| [
"Unsupervised spectral clustering using deep neural networks"
] |
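The "special purpose output layer" that keeps the network output orthogonal can be realized with a Cholesky factor of the batch covariance, a construction used in this line of work; a sketch under that assumption:

```python
import numpy as np

def orthogonalize(Y):
    # Cholesky-based output layer: returns Yt with Yt.T @ Yt / m = I,
    # the orthogonality constraint mentioned in the abstract.
    m = Y.shape[0]
    L = np.linalg.cholesky(Y.T @ Y / m)
    return Y @ np.linalg.inv(L).T

Y = np.random.default_rng(0).normal(size=(256, 4))  # batch of net outputs
Yt = orthogonalize(Y)
print(np.round(Yt.T @ Yt / Yt.shape[0], 6))         # ~ identity matrix
```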
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Meta-learning methods, most notably Model-Agnostic Meta-Learning (Finn et al, 2017) or MAML, have achieved great success in adapting to new tasks quickly, after having been trained on similar tasks.\n', 'The mechanism behind their success, however, is poorly understood.\n', 'We begin this work with an experimental analysis of MAML, finding that deep models are crucial for its success, even given sets of simple tasks where a linear model would suffice on any individual task.\n', 'Furthermore, on image-recognition tasks, we find that the early layers of MAML-trained models learn task-invariant features, while later layers are used for adaptation, providing further evidence that these models require greater capacity than is strictly necessary for their individual tasks.\n', 'Following our findings, we propose a method which enables better use of model capacity at inference time by separating the adaptation aspect of meta-learning into parameters that are only used for adaptation but are not part of the forward model.\n', 'We find that our approach enables more effective meta-learning in smaller models, which are suitably sized for the individual tasks.\n']
### SUMMARY:
| [
"We find that deep models are crucial for MAML to work and propose a method which enables effective meta-learning in smaller models."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Convolutional Neural Networks continuously advance the progress of 2D and 3D image and object classification.', 'The steadfast usage of this algorithm requires constant evaluation and upgrading of foundational concepts to maintain progress.', 'Network regularization techniques typically focus on convolutional layer operations, while leaving pooling layer operations without suitable options.', 'We introduce Wavelet Pooling as another alternative to traditional neighborhood pooling.', 'This method applies a second-level wavelet decomposition to the features and discards the first-level subbands to reduce feature dimensions.', 'This method addresses the overfitting problem encountered by max pooling, while reducing features in a more structurally compact manner than pooling via neighborhood regions.', 'Experimental results on four benchmark classification datasets demonstrate our proposed method outperforms or performs comparatively with methods like max, mean, mixed, and stochastic pooling.']
### SUMMARY:
| [
"Pooling is achieved using wavelets instead of traditional neighborhood approaches (max, average, etc)."
] |
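The described pooling, a second-level wavelet decomposition with the first-level subbands discarded, is easy to sketch with PyWavelets (Haar is a hypothetical wavelet choice, and the exact set of retained subbands in the paper may differ):

```python
import numpy as np
import pywt

x = np.random.default_rng(0).normal(size=(16, 16))  # one feature map

# Two-level 2D discrete wavelet transform.
coeffs = pywt.wavedec2(x, 'haar', level=2)
pooled = coeffs[0]  # keep the second-level approximation subband,
                    # discarding the first-level detail, as described above
print(x.shape, '->', pooled.shape)                  # (16, 16) -> (4, 4)
```

Like 2x2 max pooling applied twice, this reduces each spatial dimension by 4, but via a structured transform rather than a local winner-take-all.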
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Dynamic ridesharing services (DRS) play a major role in improving the efficiency of urban transportation.', 'User satisfaction in dynamic ridesharing is determined by multiple factors such as travel time, cost, and social compatibility with co-passengers.', 'Existing DRS optimize profit by maximizing the operational value for service providers or minimize travel time for users but they neglect the social experience of riders, which significantly influences the total value of the service to users.', "We propose DROPS, a dynamic ridesharing framework that factors the riders' social preferences in the matching process so as to improve the quality of the trips formed.", 'Scheduling trips for users is a multi-objective optimization that aims to maximize the operational value for the service provider, while simultaneously maximizing the value of the trip for the users.', 'The user value is estimated based on compatibility between co-passengers and the ride time.', 'We then present a real-time matching algorithm for trip formation.', 'Finally, we evaluate our approach empirically using real-world taxi trips data, and a population model including social preferences based on user surveys.', "The results demonstrate improvement in riders' social compatibility, without significantly affecting the vehicle miles for the service provider and travel time for users."]
### SUMMARY:
| [
"We propose a novel dynamic ridesharing framework to form trips that optimizes both operational value for the service provider and user value to the passengers by factoring the users' social preferences into the decision-making process."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction.', "To better understand such attacks, a characterization is needed of the properties of regions (the so-called `adversarial subspaces') in which adversarial examples lie.", 'We tackle this challenge by characterizing the dimensional properties of adversarial regions, via the use of Local Intrinsic Dimensionality (LID).', 'LID assesses the space-filling capability of the region surrounding a reference example, based on the distance distribution of the example to its neighbors.', 'We first provide explanations about how adversarial perturbation can affect the LID characteristic of adversarial regions, and then show empirically that LID characteristics can facilitate the distinction of adversarial examples generated using state-of-the-art attacks.', 'As a proof-of-concept, we show that a potential application of LID is to distinguish adversarial examples, and the preliminary results show that it can outperform several state-of-the-art detection measures by large margins for five attack strategies considered in this paper across three benchmark datasets.', 'Our analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs.']
### SUMMARY:
| [
"We characterize the dimensional properties of adversarial subspaces in the neighborhood of adversarial examples via the use of Local Intrinsic Dimensionality (LID)."
] |
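LID is typically estimated from the distances of a reference point to its k nearest neighbors with a maximum-likelihood estimator, which is likely what a practical implementation of this characterization uses:

```python
import numpy as np

def lid_mle(x, neighbors):
    # Maximum-likelihood LID estimate from distances to the k nearest
    # neighbors; r[-1] is the distance to the farthest of them.
    r = np.sort(np.linalg.norm(neighbors - x, axis=1))
    return -1.0 / np.mean(np.log(r / r[-1]))

rng = np.random.default_rng(0)
x = rng.normal(size=32)                     # a reference example
nbrs = x + 0.1 * rng.normal(size=(20, 32))  # its 20 nearest neighbors
print(lid_mle(x, nbrs))
```

The detection idea in the abstract is then a threshold or classifier on these per-example LID scores, with adversarial examples tending to sit in regions of higher estimated LID.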
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this paper, we design and analyze a new zeroth-order (ZO) stochastic optimization algorithm, ZO-signSGD, which enjoys dual advantages of gradient-free operations and signSGD.', 'The latter requires only the sign information of gradient estimates but is able to achieve a comparable or even better convergence speed than SGD-type algorithms.', 'Our study shows that ZO signSGD requires $\\sqrt{d}$ times more iterations than signSGD, leading to a convergence rate of $O(\\sqrt{d}/\\sqrt{T})$ under mild conditions, where $d$ is the number of optimization variables, and $T$ is the number of iterations.', 'In addition, we analyze the effects of different types of gradient estimators on the convergence of ZO-signSGD, and propose two variants of ZO-signSGD that at least achieve $O(\\sqrt{d}/\\sqrt{T})$ convergence rate.', 'On the application side we explore the connection between ZO-signSGD and black-box adversarial attacks in robust deep learning. ', 'Our empirical evaluations on image classification datasets MNIST and CIFAR-10 demonstrate the superior performance of ZO-signSGD on the generation of adversarial examples from black-box neural networks.']
### SUMMARY:
| [
"We design and analyze a new zeroth-order stochastic optimization algorithm, ZO-signSGD, and demonstrate its connection and application to black-box adversarial attacks in robust deep learning"
] |
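ZO-signSGD composes the two ingredients named in the abstract: a gradient-free finite-difference estimate along random directions, followed by a sign step. A toy sketch (the step size, smoothing parameter mu, and q direction samples are illustrative choices):

```python
import numpy as np

def zo_sign_step(f, x, rng, lr=0.05, mu=1e-3, q=20):
    # Average q forward-difference random-direction gradient estimates,
    # then keep only the elementwise sign (the ZO-signSGD idea).
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(q):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - fx) / mu * u
    return x - lr * np.sign(g / q)

f = lambda x: np.sum((x - 1.0) ** 2)  # toy black-box objective
rng = np.random.default_rng(0)
x = np.zeros(10)
for _ in range(200):
    x = zo_sign_step(f, x, rng)
print(np.round(x, 1))                 # ~ all ones, up to the fixed step size
```

In the black-box attack setting, f is the attack loss queried through the model's outputs, so no gradients of the network are ever needed.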
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The non-stationarity characteristic of solar power renders traditional point forecasting methods less useful due to large prediction errors.', 'This results in increased uncertainties in the grid operation, thereby negatively affecting the reliability and resulting in increased cost of operation.', 'This research paper proposes a unified architecture for multi-time-horizon solar forecasting for short and long-term predictions using Recurrent Neural Networks (RNN).', 'The paper describes an end-to-end pipeline to implement the architecture along with methods to test and validate the performance of the prediction model.', 'The results demonstrate that the proposed method based on the unified architecture is effective for multi-horizon solar forecasting and achieves a lower root-mean-squared prediction error compared to the previous best performing methods which use one model for each time-horizon.', 'The proposed method enables multi-horizon forecasts with real-time inputs, which have a high potential for practical applications in the evolving smart grid.']
### SUMMARY:
| [
"This paper proposes a Unified Recurrent Neural Network Architecture for short-term multi-time-horizon solar forecasting and validates the forecast performance gains over the previously reported methods"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The ResNet and the batch-normalization (BN) achieved high performance even when only a few labeled data are available.', 'However, the reasons for its high performance are unclear.', 'To clarify the reasons, we analyzed the effect of the skip-connection in ResNet and the BN on the data separation ability, which is an important ability for the classification problem.', 'Our results show that, in the multilayer perceptron with randomly initialized weights, the angle between two input vectors converges to zero in an exponential order of its depth, that the skip-connection makes this exponential decrease into a sub-exponential decrease, and that the BN relaxes this sub-exponential decrease into a reciprocal decrease.', 'Moreover, our analysis shows that the preservation of the angle at initialization encourages trained neural networks to separate points from different classes.', 'These imply that the skip-connection and the BN improve the data separation ability and achieve high performance even when only a few labeled data are available.']
### SUMMARY:
| [
"The Skip-connection in ResNet and the batch-normalization improve the data separation ability and help to train a deep neural network."
] |
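The angle dynamics claimed in the abstract can be checked empirically in a few lines: propagate two random inputs through a randomly initialized ReLU network with and without skip connections and track their cosine. A sketch with He-initialized weights and no BN (adding BN would require a batch of inputs):

```python
import numpy as np

rng = np.random.default_rng(0)
d, depth = 256, 30
x, y = rng.normal(size=d), rng.normal(size=d)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

for skip in (False, True):
    a, b = x.copy(), y.copy()
    for _ in range(depth):
        W = rng.normal(size=(d, d)) * np.sqrt(2.0 / d)  # He initialization
        fa, fb = np.maximum(W @ a, 0), np.maximum(W @ b, 0)
        a, b = (a + fa, b + fb) if skip else (fa, fb)   # residual vs plain
    print('skip' if skip else 'plain', round(float(cosine(a, b)), 3))
```

The plain network drives the cosine toward 1 (the angle collapses), while the skip connection slows the collapse markedly, in line with the exponential versus sub-exponential rates stated above.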
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Learning representations of data is an important issue in machine learning.', 'Though GANs have led to significant improvements in data representations, they still have several problems such as unstable training, a hidden manifold of the data, and huge computational overhead.', 'GANs tend to produce the data simply, without any information about the manifold of the data, which hinders controlling the desired features to generate.', 'Moreover, most GANs have a large latent manifold, resulting in poor scalability.', 'In this paper, we propose a novel GAN to control the latent semantic representation, called LSC-GAN, which allows us to produce desired data and learns a representation of the data efficiently.', 'Unlike conventional GAN models with a hidden distribution of the latent space, we define the distributions explicitly in advance, and they are trained to generate the data based on the corresponding features by inputting latent variables that follow the distribution.', 'As the larger scale of the latent space caused by deploying various distributions in one latent space makes training unstable while maintaining the dimension of the latent space, we need to separate the process of defining the distributions explicitly from the operation of generation.', 'We prove that a VAE is appropriate for the former and modify the loss function of the VAE to map the data into the pre-defined latent space so as to locate the reconstructed data as close as possible to the input data according to its characteristics.', 'Moreover, we add the KL divergence to the loss function of LSC-GAN to include this process.', 'The decoder of the VAE, which generates the data with the corresponding features from the pre-defined latent space, is used as the generator of the LSC-GAN.', 'Several experiments on the CelebA dataset are conducted to verify the usefulness of the proposed method to generate desired data stably and efficiently, achieving a high compression ratio that can hold about 24 pixels of information in each dimension of the latent space.', 'Besides, our model learns the reverse of features such as not laughing (rather frowning) only with data of ordinary and smiling facial expressions.']
### SUMMARY:
| [
"We propose a generative model that not only produces data with desired features from the pre-defined latent space but also fully understands the features of the data to create characteristics that are not in the dataset."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['With the ever increasing demand and the resultant reduced quality of services, the focus has shifted towards easing network congestion to enable more efficient flow in systems like traffic, supply chains and electrical grids.', 'A step in this direction is to re-imagine the traditional heuristics based training of systems as this approach is incapable of modelling the involved dynamics.', 'While one can apply Multi-Agent Reinforcement Learning (MARL) to such problems by considering each vertex in the network as an agent, most MARL-based models assume the agents to be independent.', 'In many real-world tasks, agents need to behave as a group, rather than as a collection of individuals.', 'In this paper, we propose a framework that induces cooperation and coordination amongst agents, connected via an underlying network, using emergent communication in a MARL-based setup.', 'We formulate the problem in a general network setting and demonstrate the utility of communication in networks with the help of a case study on traffic systems.', 'Furthermore, we study the emergent communication protocol and show the formation of distinct communities with grounded vocabulary.', 'To the best of our knowledge, this is the only work that studies emergent language in a networked MARL setting.']
### SUMMARY:
| [
"A framework for studying emergent communication in a networked multi-agent reinforcement learning setup."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data.', 'Translation equivariance is an important inductive bias for many learning problems including time series modelling, spatial data, and images.', 'The model embeds data sets into an infinite-dimensional function space, as opposed to finite-dimensional vector spaces.', 'To formalize this notion, we extend the theory of neural representations of sets to include functional representations, and demonstrate that any translation-equivariant embedding can be represented using a convolutional deep-set.', 'We evaluate ConvCNPs in several settings, demonstrating that they achieve state-of-the-art performance compared to existing NPs.', 'We demonstrate that building in translation equivariance enables zero-shot generalization to challenging, out-of-domain tasks.']
### SUMMARY:
| [
"We extend deep sets to functional embeddings and Neural Processes to include translation equivariant members"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Classical models describe primary visual cortex (V1) as a filter bank of orientation-selective linear-nonlinear (LN) or energy models, but these models fail to predict neural responses to natural stimuli accurately.', 'Recent work shows that convolutional neural networks (CNNs) can be trained to predict V1 activity more accurately, but it remains unclear which features are extracted by V1 neurons beyond orientation selectivity and phase invariance.', 'Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations.', "We present a framework for identifying common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations.", 'We fit this rotation-equivariant CNN to responses of a population of 6000 neurons to natural images recorded in mouse primary visual cortex using two-photon imaging.', 'We show that our rotation-equivariant network outperforms a regular CNN with the same number of feature maps and reveals a number of common features, which are shared by many V1 neurons and are pooled sparsely to predict neural activity.', 'Our findings are a first step towards a powerful new tool to study the nonlinear functional organization of visual cortex.']
### SUMMARY:
| [
"A rotation-equivariant CNN model of V1 that outperforms previous models and suggest functional groupings of V1 neurons."
] |
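A hedged sketch of the weight-sharing trick such rotation-equivariant layers rely on (filter size, interpolation order, and the number of orientations are our assumptions, not the paper's): one learned base filter is replicated at several rotations, so every feature is extracted at multiple orientations without extra parameters.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import convolve2d

def rotated_filter_bank(base_filter, n_orientations=8):
    """Return copies of base_filter rotated to n equally spaced angles."""
    angles = np.linspace(0, 360, n_orientations, endpoint=False)
    return [rotate(base_filter, angle, reshape=False, order=1)
            for angle in angles]

rng = np.random.default_rng(0)
base = rng.standard_normal((7, 7))     # a single set of learnable weights
bank = rotated_filter_bank(base)       # 8 orientation channels, shared weights

image = rng.standard_normal((32, 32))
responses = np.stack(
    [convolve2d(image, f, mode="same") for f in bank]
)  # (8, 32, 32): the same feature measured at 8 orientations
```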
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In classic papers, Zellner (1988, 2002) demonstrated that Bayesian inference could be derived as the solution to an information theoretic functional. ', 'Below we derive a generalized form of this functional as a variational lower bound of a predictive information bottleneck objective. ', 'This generalized functional encompasses most modern inference procedures and suggests novel ones.']
### SUMMARY:
| [
"Rederive a wide class of inference procedures from an global information bottleneck objective."
] |
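For orientation, the textbook information bottleneck Lagrangian that this line of work generalizes; the paper's predictive variant and its variational lower bound differ in detail, so treat this only as the familiar reference point:

```latex
% Trade off compressing the input X into Z against keeping Z predictive of Y:
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta \, I(Z;Y)
```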
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In many applications, it is desirable to extract only the relevant information from complex input data, which involves making a decision about which input features are relevant.\n', 'The information bottleneck method formalizes this as an information-theoretic optimization problem by maintaining an optimal tradeoff between compression (throwing away irrelevant input information) and predicting the target.', 'In many problem settings, including the reinforcement learning problems we consider in this work, we might prefer to compress only part of the input.', "This is typically the case when we have a standard conditioning input, such as a state observation, and a ``privileged'' input, which might correspond to the goal of a task, the output of a costly planning algorithm, or communication with another agent.", 'In such cases, we might prefer to compress the privileged input, either to achieve better generalization (e.g., with respect to goals) or to minimize access to costly information (e.g., in the case of communication).', 'Practical implementations of the information bottleneck based on variational inference require access to the privileged input in order to compute the bottleneck variable, so although they perform compression, this compression operation itself needs unrestricted, lossless access.', 'In this work, we propose the variational bandwidth bottleneck, which, for each example, estimates the value of the privileged information before seeing it, i.e., based only on the standard input, and then stochastically chooses whether or not to access the privileged input.', 'We formulate a tractable approximation to this framework and demonstrate in a series of reinforcement learning experiments that it can improve generalization and reduce access to computationally costly information.']
### SUMMARY:
| [
"Training agents with adaptive computation based on information bottleneck can promote generalization. "
] |
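A minimal PyTorch sketch of the decision structure described above (layer sizes, names, and the action head are our assumptions): the access probability is computed from the standard input alone, and the privileged input enters the computation only when the sampled gate is on.

```python
import torch
import torch.nn as nn

class BandwidthGate(nn.Module):
    def __init__(self, state_dim, priv_dim, hidden=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1), nn.Sigmoid())
        self.priv_enc = nn.Linear(priv_dim, hidden)
        self.policy = nn.Linear(state_dim + hidden, 4)   # e.g. 4 actions

    def forward(self, state, privileged):
        p_access = self.gate(state)            # decided from the state ONLY
        access = torch.bernoulli(p_access)     # stochastic access choice
        # (training the hard gate needs a score-function or straight-through
        #  estimator; omitted in this sketch)
        z = access * self.priv_enc(privileged) # zeroed out when not accessed
        return self.policy(torch.cat([state, z], dim=-1)), p_access

net = BandwidthGate(state_dim=8, priv_dim=16)
logits, p = net(torch.randn(2, 8), torch.randn(2, 16))
```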
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Breathing exercises are an accessible way to manage stress and many mental illness symptoms.', 'Traditionally, learning breathing exercises involved in-person guidance or audio recordings.', 'The shift to mobile devices has led to a new way of learning and engaging in breathing exercises as seen in the rise of multiple mobile applications with different breathing representations.', 'However, limited work has been done to investigate the effectiveness of these visual representations in supporting breathing pace as measured by synchronization.', 'We utilized a within-subjects study to evaluate four common breathing visuals to understand which is most effective in providing breathing exercise guidance.', 'Through controlled lab studies and interviews, we identified two representations with clear advantages over the others.', 'In addition, we found that auditory guidance was not preferred by all users.', 'We identify potential usability issues with the representations and suggest design guidelines for future development of app-supported breathing training.']
### SUMMARY:
| [
"We utilized a within-subjects study to evaluate four paced breathing visuals common in mobile apps to understand which is most effective in providing breathing exercise guidance."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Current work on neural code synthesis consists of increasingly sophisticated architectures being trained on highly simplified domain-specific languages, using uniform sampling across program space of those languages for training.', "By comparison, program space for a C-like language is vast, and extremely sparsely populated in terms of `useful' functionalities; this requires a far more intelligent approach to corpus generation for effective training.", 'We use a genetic programming approach using an iteratively retrained discriminator to produce a population suitable as labelled training data for a neural code synthesis architecture.', 'We demonstrate that use of a discriminator-based training corpus generator, trained using only unlabelled problem specifications in classic Programming-by-Example format, greatly improves network performance compared to current uniform sampling techniques.']
### SUMMARY:
| [
"A way to generate training corpora for neural code synthesis using a discriminator trained on unlabelled data"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Neural reading comprehension models have recently achieved impressive generalisation results, yet still perform poorly when given adversarially selected input.', 'Most prior work has studied semantically invariant text perturbations which cause a model’s prediction to change when it should not.', 'In this work we focus on the complementary problem: excessive prediction undersensitivity where input text is meaningfully changed, and the model’s prediction does not change when it should.', 'We formulate a noisy adversarial attack which searches among semantic variations of comprehension questions for which a model still erroneously produces the same answer as the original question – and with an even higher probability.', 'We show that – despite comprising unanswerable questions – SQuAD2.0 and NewsQA models are vulnerable to this attack and commit a substantial fraction of errors on adversarially generated questions.', 'This indicates that current models—even where they can correctly predict the answer—rely on spurious surface patterns and are not necessarily aware of all information provided in a given comprehension question.', 'Developing this further, we experiment with both data augmentation and adversarial training as defence strategies: both are able to substantially decrease a model’s vulnerability to undersensitivity attacks on held out evaluation data.', 'Finally, we demonstrate that adversarially robust models generalise better in a biased data setting with a train/evaluation distribution mismatch; they are less prone to overly rely on predictive cues only present in the training set and outperform a conventional model in the biased data setting by up to 11% F1.']
### SUMMARY:
| [
"We demonstrate vulnerability to undersensitivity attacks in SQuAD2.0 and NewsQA neural reading comprehension models, where the model predicts the same answer with increased confidence to adversarially chosen questions, and compare defence strategies."
] |
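The attack loop the record describes can be sketched in a few lines; `model` and `generate_variants` below are hypothetical stand-ins for a QA model and a meaning-changing question perturber, not a real API:

```python
def undersensitivity_attack(model, question, context, generate_variants):
    """Return meaning-changed questions the model answers identically,
    with even higher confidence than for the original question."""
    orig_answer, orig_prob = model(question, context)
    hits = []
    for variant in generate_variants(question):   # semantics-altering edits
        answer, prob = model(variant, context)
        if answer == orig_answer and prob > orig_prob:
            hits.append((variant, prob))          # prediction failed to change
    return sorted(hits, key=lambda pair: -pair[1])
```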
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['This paper puts forward a new text-to-tensor representation that relies on information compression techniques to assign shorter codes to the most frequently used characters.', 'This representation is language-independent, needs no pretraining, and produces an encoding with no information loss.', 'It provides an adequate description of the morphology of text, as it is able to represent prefixes, declensions, and inflections with similar vectors and can represent even words unseen in the training dataset.', 'Similarly, as it is compact yet sparse, it is ideal for speeding up training times using tensor processing libraries.', 'As part of this paper, we show that this technique is especially effective when coupled with convolutional neural networks (CNNs) for text classification at the character level.', 'We apply two variants of CNN coupled with it.', 'Experimental results show that it drastically reduces the number of parameters to be optimized, resulting in competitive classification accuracy values in only a fraction of the time spent by one-hot encoding representations, thus enabling training on commodity hardware.']
### SUMMARY:
| [
"Using Compressing tecniques to Encoding of Words is a possibility for faster training of CNN and dimensionality reduction of representation"
] |
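A sketch of the general frequency-based coding technique the record relies on (our reconstruction, not the paper's exact scheme): Huffman coding assigns shorter prefix-free bit codes to more frequent characters, giving a compact, lossless, language-independent encoding that can be packed into tensors.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Map each character to a prefix-free bit string; rarer chars get longer codes."""
    heap = [[freq, [ch, ""]] for ch, freq in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)             # two least frequent subtrees
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]          # left branch prepends 0
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]          # right branch prepends 1
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heapq.heappop(heap)[1:])

codes = huffman_codes("the quick brown fox jumps over the lazy dog")
encoded = "".join(codes[ch] for ch in "the fox")   # lossless bit string
```

Each character's bit string can then be padded to a fixed width to form the compact yet sparse input tensor for a character-level CNN.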
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Sequence prediction models can be learned from example sequences with a variety of training algorithms.', 'Maximum likelihood learning is simple and efficient, yet can suffer from compounding error at test time. \n', 'Reinforcement learning approaches such as policy gradient address the issue but can have prohibitively poor exploration efficiency.', 'A rich set of other algorithms, such as data noising, RAML, and softmax policy gradient, have also been developed from different perspectives. \n', 'In this paper, we present a formalism of entropy regularized policy optimization, and show that the apparently distinct algorithms, including MLE, can be reformulated as special instances of the formulation.', 'The difference between them is characterized by the reward function and two weight hyperparameters.\n', 'The unifying interpretation enables us to systematically compare the algorithms side-by-side, and gain new insights into the trade-offs of the algorithm design.\n', 'The new perspective also leads to an improved approach that dynamically interpolates among the family of algorithms, and learns the model in a scheduled way.', 'Experiments on machine translation, text summarization, and game imitation learning demonstrate the superiority of the proposed approach.']
### SUMMARY:
| [
"An entropy regularized policy optimization formalism subsumes a set of sequence prediction learning algorithms. A new interpolation algorithm with improved results on text generation and game imitation learning."
] |
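Up to notation (see the paper for the exact form), the unifying objective has the shape of an entropy-regularized policy optimization problem; the reward R and the weights (α, β) are the knobs that recover MLE, RAML, data noising, or softmax policy gradient as special cases:

```latex
% Entropy-regularized policy optimization over an auxiliary distribution q
% and model parameters theta (our rendering of the general shape):
\mathcal{L}(q,\theta) \;=\; \mathbb{E}_{q}\big[\, R(\mathbf{y}) \,\big]
  \;-\; \alpha \,\mathrm{KL}\big( q(\mathbf{y}) \,\big\|\, p_\theta(\mathbf{y}) \big)
  \;+\; \beta \, \mathrm{H}(q)
```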
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Origin-Destination (OD) flow data is an important instrument in transportation studies.', 'Precise prediction of customer demands from each original location to a destination given a series of previous snapshots helps ride-sharing platforms to better understand their market mechanism.', 'However, most existing prediction methods ignore the network structure of OD flow data and fail to utilize the topological dependencies among related OD pairs.', 'In this paper, we propose a latent spatial-temporal origin-destination (LSTOD) model, with a novel convolutional neural network (CNN) filter to learn the spatial features of OD pairs from a graph perspective and an attention structure to capture their long-term periodicity.', 'Experiments on a real customer request dataset with available OD information from a ride-sharing platform demonstrate the advantage of LSTOD in achieving at least 6.5% improvement in prediction accuracy over the second best model.']
### SUMMARY:
| [
"We propose a purely convolutional CNN model with attention mechanism to predict spatial-temporal origin-destination flows. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Adversarial training has been demonstrated as one of the most effective methods for training robust models to defend against adversarial examples.', 'However, adversarially trained models often lack adversarially robust generalization on unseen testing data.', 'Recent works show that adversarially trained models are more biased towards global structure features.', 'Instead, in this work, we would like to investigate the relationship between the generalization of adversarial training and the robust local features, as the robust local features generalize well for unseen shape variation.', 'To learn the robust local features, we develop a Random Block Shuffle (RBS) transformation to break up the global structure features on normal adversarial examples.', 'We continue to propose a new approach called Robust Local Features for Adversarial Training (RLFAT), which first learns the robust local features by adversarial training on the RBS-transformed adversarial examples, and then transfers the robust local features into the training of normal adversarial examples.', 'To demonstrate the generality of our argument, we implement RLFAT in currently state-of-the-art adversarial training frameworks.', 'Extensive experiments on STL-10, CIFAR-10 and CIFAR-100 show that RLFAT significantly improves both the adversarially robust generalization and the standard generalization of adversarial training.', 'Additionally, we demonstrate that our models capture more local features of the object on the images, aligning better with human perception.']
### SUMMARY:
| [
"We propose a new stream of adversarial training approach called Robust Local Features for Adversarial Training (RLFAT) that significantly improves both the adversarially robust generalization and the standard generalization."
] |
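A minimal NumPy sketch of a Random Block Shuffle-style transformation as described (the block count and exact shuffling details are our assumptions): splitting the image into k × k blocks and permuting them destroys global structure while leaving local features intact.

```python
import numpy as np

def random_block_shuffle(image, k=2, rng=None):
    """Shuffle k*k image blocks; image is (H, W, C) with H, W divisible by k."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[0] // k, image.shape[1] // k
    blocks = [image[i*h:(i+1)*h, j*w:(j+1)*w] for i in range(k) for j in range(k)]
    order = rng.permutation(len(blocks))                  # random permutation
    rows = [np.concatenate([blocks[order[i*k + j]] for j in range(k)], axis=1)
            for i in range(k)]
    return np.concatenate(rows, axis=0)                   # reassembled image

x = np.arange(32 * 32 * 3, dtype=np.float32).reshape(32, 32, 3)
x_rbs = random_block_shuffle(x, k=2)   # global layout broken, local patches kept
```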
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The verification of planning domain models is crucial to ensure the safety, integrity and correctness of planning-based automated systems.', 'This task is usually performed using model checking techniques. ', 'However, directly applying model checkers to verify planning domain models can result in false positives, i.e. counterexamples that are unreachable by a sound planner when using the domain under verification during a planning task.', 'In this paper, we discuss the downside of unconstrained planning domain model verification.', 'We then propose a fail-safe practice for designing planning domain models that can inherently guarantee the safety of the produced plans in case of undetected errors in domain models. ', 'In addition, we demonstrate how model checkers, as well as state trajectory constraints planning techniques, should be used to verify planning domain models so that unreachable counterexamples are not returned.']
### SUMMARY:
| [
"Why and how to constrain planning domain model verification with planning goals to avoid unreachable counterexamples (false positives verification outcomes)."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
["We've seen tremendous success of image generating models these years.", 'Generating images through a neural network is usually pixel-based, which is fundamentally different from how humans create artwork using brushes.', 'To imitate human drawing, interactions between the environment and the agent is required to allow trials.', 'However, the environment is usually non-differentiable, leading to slow convergence and massive computation.', 'In this paper we try to address the discrete nature of software environment with an intermediate, differentiable simulation.', 'We present StrokeNet, a novel model where the agent is trained upon a well-crafted neural approximation of the painting environment.', 'With this approach, our agent was able to learn to write characters such as MNIST digits faster than reinforcement learning approaches in an unsupervised manner.', 'Our primary contribution is the neural simulation of a real-world environment.', 'Furthermore, the agent trained with the emulated environment is able to directly transfer its skills to real-world software.']
### SUMMARY:
| [
"StrokeNet is a novel architecture where the agent is trained to draw by strokes on a differentiable simulation of the environment, which could effectively exploit the power of back-propagation."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We demonstrate how machine learning is able to model experiments in quantum physics.', 'Quantum entanglement is a cornerstone for upcoming quantum technologies such as quantum computation and quantum cryptography.', 'Of particular interest are complex quantum states with more than two particles and a large number of entangled quantum levels.', 'Given such a multiparticle high-dimensional quantum state, it is usually impossible to reconstruct an experimental setup that produces it.', 'To search for interesting experiments, one thus has to randomly create millions of setups on a computer and calculate the respective output states.', 'In this work, we show that machine learning models can provide significant improvement over random search.', 'We demonstrate that a long short-term memory (LSTM) neural network can successfully learn to model quantum experiments by correctly predicting output state characteristics for given setups without the necessity of computing the states themselves.', 'This approach not only allows for faster search but is also an essential step towards automated design of multiparticle high-dimensional quantum experiments using generative machine learning models.']
### SUMMARY:
| [
"We demonstrate how machine learning is able to model experiments in quantum physics."
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this paper, we propose a nonlinear unsupervised metric learning framework to boost the performance of clustering algorithms.', 'Under our framework, nonlinear distance metric learning and manifold embedding are integrated and conducted simultaneously to increase the natural separations among data samples.', 'The metric learning component is implemented through feature space transformations, regulated by a nonlinear deformable model called Coherent Point Drifting (CPD).', 'Driven by CPD, data points can reach a higher level of linear separability, which is subsequently picked up by the manifold embedding component to generate well-separable sample projections for clustering.', 'Experimental results on synthetic and benchmark datasets show the effectiveness of our proposed approach over the state-of-the-art solutions in unsupervised metric learning.\n']
### SUMMARY:
| [
" a nonlinear unsupervised metric learning framework to boost the performance of clustering algorithms."
] |
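In its standard form (our rendering; the paper's exact parameterization and regularization may differ), Coherent Point Drifting models a smooth displacement field, so nearby points move coherently:

```latex
% A Gaussian-kernel-weighted sum of learned displacement vectors w_j
% anchored at control points c_j, added to the identity map:
\mathcal{T}(\mathbf{x}) \;=\; \mathbf{x} \;+\; \sum_{j=1}^{m} \mathbf{w}_j
  \exp\!\Big( -\tfrac{\lVert \mathbf{x} - \mathbf{c}_j \rVert^2}{2\beta^2} \Big)
```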
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['As machine learning becomes ubiquitous, deployed systems need to be as accurate as they can.', 'As a result, machine learning service providers have a surging need for useful, additional training data that benefits training, without giving up all the details about the trained program.', 'At the same time, data owners would like to trade their data for its value, without having to first give away the data itself before receiving compensation.', 'It is difficult for data providers and model providers to agree on a fair price without first revealing the data or the trained model to the other side.', 'Escrow systems only complicate this further, adding an additional layer of trust required of both parties.', 'Currently, data owners and model owners don’t have a fair pricing system that eliminates the need to trust a third party or to train the model on the data, which', '1) takes a long time to complete, and', '2) does not guarantee that useful data is valued and that useless data isn’t, without trusting the third party with both the model and the data.', 'Existing improvements to secure the transaction focus heavily on encrypting or approximating the data, such as training on encrypted data, and variants of federated learning.', 'As powerful as these methods appear to be, we show them to be impractical in our use case with real-world assumptions for preserving privacy for the data owners when facing black-box models.', 'Thus, a fair pricing scheme that does not rely on secure data encryption and obfuscation is needed before the exchange of data.', 'This paper proposes a novel method for fair pricing using data-model efficacy techniques such as influence functions, model extraction, and model compression methods, thus enabling secure data transactions.', 'We successfully show that without running the data through the model, one can approximate the value of the data; that is, if the data turns out redundant, the pricing is minimal, and if the data leads to proper improvement, its value is properly assessed, without placing strong assumptions on the nature of the model.', 'Future work will be focused on establishing a system with stronger transactional security against adversarial attacks that would reveal details about the model or the data to the other party.']
### SUMMARY:
| [
"Facing complex, black-box models, encrypting the data is not as usable as approximating the model and using it to price a potential transaction."
] |
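One of the named ingredients has a standard closed form: the influence of a training point z on a test loss (the Koh & Liang, 2017 formulation; how the paper combines it with model extraction and compression is not reproduced here) estimates the data's marginal value without retraining:

```latex
% H is the Hessian of the training loss at the fitted parameters;
% the expression approximates how up-weighting z changes the test loss:
\mathcal{I}(z, z_{\mathrm{test}}) \;=\;
  -\,\nabla_{\theta} L(z_{\mathrm{test}}, \hat{\theta})^{\top}
  H_{\hat{\theta}}^{-1}\,
  \nabla_{\theta} L(z, \hat{\theta})
```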
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['We present Line-Storm, an interactive computer system for creative performance.', 'The context we investigated was writing on paper using Line-Storm.', 'We used self-report questionnaires as part of research involving human participants, to evaluate Line-Storm.', 'Line-Storm consisted of a writing stylus and writing pad, augmented with electronics.', 'The writing pad was connected to a contact microphone, and the writing stylus had a small micro-controller board and peripherals attached to it.', 'The signals from these electronic augmentations were fed into the audio-synthesis environment Max/MSP to produce an interactive soundscape.', 'We attempted to discover whether Line-Storm enhanced a self-reported sense of being present and engaged during a writing task, and we compared Line-Storm to a non-interactive control condition.', 'After performing statistical analysis in SPSS, we were unable to support our research hypothesis, that presence and engagement were enhanced by Line-Storm.', 'Participants reported they were, on average, no more present and engaged during the experimental condition than during the control condition.', 'As creativity is subtle, and varies with person, time, context, space and so many other factors, this result was somewhat expected by us.', 'A statistically significant result of our study is that some participants responded to Line-Storm more positively than others.', 'These Preservers of Line-Storm were a group, distinct from other participants, who reported greater presence and engagement and who wrote more words with Line-Storm and during the control condition.', 'We discuss the results of our research and place Line-Storm in an artistic-technological context, drawing upon writings by Martin Heidegger when considering the nature of Line-Storm.', 'Future work includes modifying interactive components, improving aesthetics and using more miniaturized electronics, experimenting with a drawing task instead of a writing task, and collaborating with a composer of electronic music to make a more interesting, immersive, and engaging interactive soundscape for writing or drawing performance.']
### SUMMARY:
| [
"Interactive stylus based sound incorporating writing system"
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Language style transfer is the problem of migrating the content of a source sentence to a target style.', 'In many applications, parallel training data are not available and source sentences to be transferred may have arbitrary and unknown styles.', 'In this paper, we present an encoder-decoder framework under this problem setting.', 'Each sentence is encoded into its content and style latent representations.', 'By recombining the content with the target style, we can decode a sentence aligned in the target domain.', 'To adequately constrain the encoding and decoding functions, we couple them with two loss functions.', 'The first is a style discrepancy loss, enforcing that the style representation accurately encodes the style information guided by the discrepancy between the sentence style and the target style.', 'The second is a cycle consistency loss, which ensures that the transferred sentence should preserve the content of the original sentence disentangled from its style.', 'We validate the effectiveness of our proposed model on two tasks: sentiment modification of restaurant reviews, and dialog response revision with a romantic style.']
### SUMMARY:
| [
"We present an encoder-decoder framework for language style transfer, which allows for the use of non-parallel data and source data with various unknown language styles."
] |
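The cycle consistency constraint mentioned above takes the generic form below (notation ours; the style discrepancy loss is paper-specific and not reproduced): transfer a sentence to the target style and back to its original style, and require the reconstruction to preserve content:

```latex
% G(x; s) decodes content of x recombined with style s; s_x is the
% (unknown) source style and ell a reconstruction loss:
\mathcal{L}_{\mathrm{cyc}} \;=\; \mathbb{E}_{x}\Big[\,
  \ell\big( x,\; G\big( G(x;\, s_{\mathrm{tgt}});\; s_{x} \big) \big) \,\Big]
```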
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['In this paper, we propose a new loss function for performing principal component analysis (PCA) using linear autoencoders (LAEs).', 'Optimizing the standard L2 loss results in a decoder matrix that spans the principal subspace of the sample covariance of the data, but fails to identify the exact eigenvectors.', 'This downside originates from an invariance that cancels out in the global map.', 'Here, we prove that our loss function eliminates this issue, i.e. the decoder converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix.', 'For this new loss, we establish that all local minima are global optima and also show that computing the new loss (and also its gradients) has the same order of complexity as the classical loss.', "We report numerical results on both synthetic simulations and a real-data PCA experiment on MNIST (i.e., a 60,000 x 784 matrix), demonstrating that our approach is practically applicable and rectifies previous LAEs' downsides."]
### SUMMARY:
| [
"A new loss function for PCA with linear autoencoders that provably yields ordered exact eigenvectors "
] |
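The invariance the record refers to is easy to state (a standard observation; notation ours): the encoder–decoder pair can be mixed by any invertible matrix without changing the L2 loss, so only their product — hence the spanned subspace — is pinned down:

```latex
% W_e is the encoder, W_d the decoder, X the data matrix:
\big\lVert X - W_d W_e X \big\rVert_F^2 \;=\;
\big\lVert X - (W_d G)\,(G^{-1} W_e)\, X \big\rVert_F^2
\qquad \text{for any invertible } G,
% so individual decoder columns need not converge to eigenvectors.
```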
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['Users have tremendous potential to aid in the construction and maintenance of knowledge bases (KBs) through the contribution of feedback that identifies incorrect and missing entity attributes and relations.', 'However, as new data is added to the KB, the KB entities, which are constructed by running entity resolution (ER), can change, rendering the intended targets of user feedback unknown–a problem we term identity uncertainty.', 'In this work, we present a framework for integrating user feedback into KBs in the presence of identity uncertainty.', 'Our approach is based on having user feedback participate alongside mentions in ER.', 'We propose a specific representation of user feedback as feedback mentions and introduce a new online algorithm for integrating these mentions into an existing KB.', 'In experiments, we demonstrate that our proposed approach outperforms the baselines in 70% of experimental conditions.']
### SUMMARY:
| [
"This paper develops a framework for integrating user feedback under identity uncertainty in knowledge bases. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
["Machine learning algorithms are vulnerable to poisoning attacks: An adversary can inject malicious points in the training dataset to influence the learning process and degrade the algorithm's performance.", 'Optimal poisoning attacks have already been proposed to evaluate worst-case scenarios, modelling attacks as a bi-level optimization problem.', 'Solving these problems is computationally demanding and has limited applicability for some models such as deep networks.', "In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers generating adversarial training examples, i.e. samples that look like genuine data points but that degrade the classifier's accuracy when used for training.", 'We propose a Generative Adversarial Net with three components: generator, discriminator, and the target classifier.', 'This approach allows us to model naturally the detectability constrains that can be expected in realistic attacks and to identify the regions of the underlying data distribution that can be more vulnerable to data poisoning.', 'Our experimental evaluation shows the effectiveness of our attack to compromise machine learning classifiers, including deep networks.']
### SUMMARY:
| [
"In this paper we propose a novel generative model to craft systematic poisoning attacks with detectability constraints against machine learning classifiers, including deep networks. "
] |
### Instruction->PROVIDE ME WITH SUMMARY FOR THE GIVEN INPUT WHILE KEEPING THE MOST IMPORTANT DETAILS INTACT:
['The Expectation-Maximization (EM) algorithm is a fundamental tool in unsupervised machine learning.', 'It is often used as an efficient way to solve Maximum Likelihood (ML) and Maximum A Posteriori estimation problems, especially for models with latent variables.', 'It is also the algorithm of choice to fit mixture models: generative models that represent unlabelled points originating from $k$ different processes, as samples from $k$ multivariate distributions.', 'In this work we define and use a quantum version of EM to fit a Gaussian Mixture Model.', 'Given quantum access to a dataset of $n$ vectors of dimension $d$, our algorithm has convergence and precision guarantees similar to the classical algorithm, but the runtime is only polylogarithmic in the number of elements in the training set, and is polynomial in other parameters - as the dimension of the feature space, and the number of components in the mixture.', 'We generalize further the algorithm by fitting any mixture model of base distributions in the exponential family.', 'We discuss the performance of the algorithm on datasets that are expected to be classified successfully by those algorithms, arguing that on those cases we can give strong guarantees on the runtime.']
### SUMMARY:
| [
"It's the quantum algorithm for Expectation Maximization. It's fast: the runtime depends only polylogarithmically on the number of elements in the dataset. "
] |
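For reference, a compact NumPy implementation of the classical EM algorithm for Gaussian mixtures that the record's quantum algorithm accelerates (initialization, iteration count, and the covariance ridge are our choices, not the paper's):

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, k=3, n_iter=50, seed=0):
    """Classical EM for a Gaussian Mixture Model (textbook version)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]       # initialize means from data
    sigma = np.stack([np.eye(d)] * k)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        r = np.stack([pi[j] * multivariate_normal.pdf(X, mu[j], sigma[j])
                      for j in range(k)], axis=1)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, covariances
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - mu[j]
            sigma[j] = (r[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return pi, mu, sigma

X = np.vstack([np.random.randn(100, 2) + off for off in ([0, 0], [4, 4], [0, 6])])
pi, mu, sigma = em_gmm(X, k=3)
```

Each iteration touches every data point, which is exactly the linear-in-n cost the quantum variant above claims to reduce to polylogarithmic.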