Columns: source (sequence), source_labels (sequence), rouge_scores (sequence), paper_id (string, lengths 9 to 11), ic (unknown), target (sequence)
[ "In order to mimic the human ability of continual acquisition and transfer of knowledge across various tasks, a learning system needs the capability for life-long learning, effectively utilizing the previously acquired skills.", "As such, the key challenge is to transfer and generalize the knowledge learned from one task to other tasks, avoiding interference from previous knowledge and improving the overall performance.", "In this paper, within the continual learning paradigm, we introduce a method that effectively forgets the less useful data samples continuously across different tasks.", "The method uses statistical leverage score information to measure the importance of the data samples in every task and adopts frequent directions approach to enable a life-long learning property.", "This effectively maintains a constant training size across all tasks.", "We first provide some mathematical intuition for the method and then demonstrate its effectiveness with experiments on variants of MNIST and CIFAR100 datasets.", "It is a typical practice to design and optimize machine learning (ML) models to solve a single task.", "On the other hand, humans, instead of learning over isolated complex tasks, are capable of generalizing and transferring knowledge and skills learned from one task to another.", "This ability to remember, learn and transfer information across tasks is referred to as lifelong learning or continual learning BID16 BID3 BID11 .", "The major challenge for creating ML models with lifelong learning ability is that they are prone to catastrophic forgetting BID9 BID10 .", "ML models tend to forget the knowledge learned from previous tasks when re-trained on new observations corresponding to a different (but related) task.", "Specifically when a deep neural network (DNN) is fed with a sequence of tasks, the ability to solve the first task will decline significantly after training on the following tasks.", "The typical structure of DNNs by design does not possess the capability of preserving previously learned knowledge without interference between tasks or catastrophic forgetting.", "There have been different approaches proposed to address this issue and they can be broadly categorized in three types: I) Regularization: It constrains or regularizes the model parameters by adding some terms in the loss function that prevent the model from deviating significantly from the parameters important to earlier tasks.", "Typical algorithms include elastic weight consolidation (EWC) BID4 and continual learning through synaptic intelligence (SynInt) BID19 .", "II) Architectural modification: It revises the model structure successively after each task in order to provide more memory and additional free parameters in the model for new task input.", "Recent examples in this direction are progressive neural networks BID14 and dynamically expanding networks BID18 .", "III) Memory replay: It stores data samples from previous tasks in a separate memory buffer and retrains the new model based on both the new task input and the memory buffer.", "Popular algorithms here are gradient episodic memory (GEM) BID8 , incremental classifier and representation learning (iCaRL) BID12 .Among", "these approaches, regularization is particularly prone to saturation of learning when the number of tasks is large. The additional", "/ regularization term in the loss function will soon lose its competency when important parameters from different tasks are overlapped too many times. 
Modifications", "on network architectures like progressive networks resolve the saturation issue, but do not scale as number and complexity of tasks increase. The scalability", "problem is also present when using memory replay and often suffer from high computational and memory costs.In this paper, we propose a novel approach to lifelong learning with DNNs that addresses both the learning saturation and high computational complexity issues. In this method,", "we progressively compresses the input information learned thus far along with the input from current task and form more efficiently condensed data samples. The compression", "technique is based on the statistical leverage scores measure, and it uses frequent directions idea in order to connect the series of compression steps for a sequence of tasks. Our approach resembles", "the use of memory replay since it preserves the original input data samples from earlier tasks for further training. However, our method does", "not require extra memory for training and is cost efficient compared to most memory replay methods. Furthermore, unlike the", "importance assigned to model specific parameters when using regularization methods like EWC or SynInt, we assign importance to the training data that is relevant in effectively learning new tasks, while forgetting less important information.", "We presented a new approach in addressing the lifelong learning problem with deep neural networks.", "It is inspired by the randomization and compression techniques typically used in statistical analysis.", "We combined a simple importance sampling technique -leverage score sampling with the frequent directions concept and developed an online effective forgetting or compression mechanism that enables lifelong learning across a sequence of tasks.", "Despite its simple structure, the results on MNIST and CIFAR100 experiments show its effectiveness as compared to recent state of the art." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.24561403691768646, 0.15686273574829102, 0.23529411852359772, 0.9454545378684998, 0.052631575614213943, 0.1599999964237213, 0.22727271914482117, 0.22641508281230927, 0.1666666567325592, 0.08163265138864517, 0.19999998807907104, 0.1818181723356247, 0.07843136787414551, 0.11428570747375488, 0.09090908616781235, 0.22641508281230927, 0.0952380895614624, 0.30188679695129395, 0.08695651590824127, 0.17777776718139648, 0.07692307233810425, 0.11764705181121826, 0.1875, 0.23529411852359772, 0.41379308700561523, 0.19999998807907104, 0.1304347813129425, 0.2666666507720947, 0.2790697515010834, 0.1904761791229248, 0.3050847351551056, 0.1666666567325592 ]
Hygm4cBj24
true
[ "A new method uses statistical leverage score information to measure the importance of the data samples in every task and adopts frequent directions approach to enable a life-long learning property." ]
[ "Convolutional neural networks (CNNs) are inherently equivariant to translation.", "Efforts to embed other forms of equivariance have concentrated solely on rotation.", "We expand the notion of equivariance in CNNs through the Polar Transformer Network (PTN).", "PTN combines ideas from the Spatial Transformer Network (STN) and canonical coordinate representations.", "The result is a network invariant to translation and equivariant to both rotation and scale.", "PTN is trained end-to-end and composed of three distinct stages: a polar origin predictor, the newly introduced polar transformer module and a classifier.", "PTN achieves state-of-the-art on rotated MNIST and the newly introduced SIM2MNIST dataset, an MNIST variation obtained by adding clutter and perturbing digits with translation, rotation and scaling.", "The ideas of PTN are extensible to 3D which we demonstrate through the Cylindrical Transformer Network.", "Whether at the global pattern or local feature level BID8 , the quest for (in/equi)variant representations is as old as the field of computer vision and pattern recognition itself.", "State-of-the-art in \"hand-crafted\" approaches is typified by SIFT (Lowe, 2004) .", "These detector/descriptors identify the intrinsic scale or rotation of a region BID19 BID1 and produce an equivariant descriptor which is normalized for scale and/or rotation invariance.", "The burden of these methods is in the computation of the orbit (i.e. a sampling the transformation space) which is necessary to achieve equivariance.", "This motivated steerable filtering which guarantees transformed filter responses can be interpolated from a finite number of filter responses.", "Steerability was proved for rotations of Gaussian derivatives BID6 and extended to scale and translations in the shiftable pyramid BID31 .", "Use of the orbit and SVD to create a filter basis was proposed by BID26 and in parallel, BID29 proved for certain classes of transformations there exists canonical coordinates where deformation of the input presents as translation of the output.", "Following this work, BID25 and BID10 ; Teo & BID33 proposed a methodology for computing the bases of equivariant spaces given the Lie generators of a transformation.", "and most recently, BID30 proposed the scattering transform which offers representations invariant to translation, scaling, and rotations.The current consensus is representations should be learned not designed.", "Equivariance to translations by convolution and invariance to local deformations by pooling are now textbook BID17 , p.335) but approaches to equivariance of more general deformations are still maturing.", "The main veins are: Spatial Transformer Network (STN) BID13 which similarly to SIFT learn a canonical pose and produce an invariant representation through warping, work which constrains the structure of convolutional filters BID36 and work which uses the filter orbit BID3 to enforce an equivariance to a specific transformation group.In this paper, we propose the Polar Transformer Network (PTN), which combines the ideas of STN and canonical coordinate representations to achieve equivariance to translations, rotations, and dilations.", "The three stage network learns to identify the object center then transforms the input into logpolar coordinates.", "In this coordinate system, planar convolutions correspond to group-convolutions in rotation and scale.", "PTN produces a representation equivariant to rotations and dilations without 
http://github.com/daniilidis-group//polar-transformer-networks Figure 1 : In the log-polar representation, rotations around the origin become vertical shifts, and dilations around the origin become horizontal shifts.", "The distance between the yellow and green lines is proportional to the rotation angle/scale factor.", "Top rows: sequence of rotations, and the corresponding polar images.", "Bottom rows: sequence of dilations, and the corresponding polar images.the challenging parameter regression of STN.", "We enlarge the notion of equivariance in CNNs beyond Harmonic Networks BID36 and Group Convolutions BID3 by capturing both rotations and dilations of arbitrary precision.", "Similar to STN; however, PTN accommodates only global deformations.We present state-of-the-art performance on rotated MNIST and SIM2MNIST, which we introduce.", "To summarize our contributions:• We develop a CNN architecture capable of learning an image representation invariant to translation and equivariant to rotation and dilation.•", "We propose the polar transformer module, which performs a differentiable log-polar transform, amenable to backpropagation training. The", "transform origin is a latent variable.• We", "show how the polar transform origin can be learned effectively as the centroid of a single channel heatmap predicted by a fully convolutional network.", "We have proposed a novel network whose output is invariant to translations and equivariant to the group of dilations/rotations.", "We have combined the idea of learning the translation (similar to the spatial transformer) but providing equivariance for the scaling and rotation, avoiding, thus, fully connected layers required for the pose regression in the spatial transformer.", "Equivariance with respect to dilated rotations is achieved by convolution in this group.", "Such a convolution would require the production of multiple group copies, however, we avoid this by transforming into canonical coordinates.", "We improve the state of the art performance on rotated MNIST by a large margin, and outperform all other tested methods on a new dataset we call SIM2MNIST.", "We expect our approach to be applicable to other problems, where the presence of different orientations and scales hinder the performance of conventional CNNs." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19999998807907104, 0.17391303181648254, 0.0833333283662796, 0.0833333283662796, 0.5, 0.06451612710952759, 0.17142856121063232, 0.07407406717538834, 0.1111111044883728, 0, 0.22857142984867096, 0.0624999962747097, 0, 0.19999998807907104, 0.08888888359069824, 0.11428570747375488, 0.2222222238779068, 0.1111111044883728, 0.11764705926179886, 0.07407406717538834, 0.3333333432674408, 0.1621621549129486, 0.23999999463558197, 0.0952380895614624, 0.07999999821186066, 0.11764705181121826, 0.1875, 0.3529411852359772, 0.1428571343421936, 0.10526315122842789, 0, 0.3448275923728943, 0.14999999105930328, 0.0833333283662796, 0, 0.1111111044883728, 0.1875 ]
HktRlUlAZ
true
[ "We learn feature maps invariant to translation, and equivariant to rotation and scale." ]
[ "Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task.", "Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks.", "Here, we reformulate the model-agnostic meta-learning algorithm (MAML) of Finn et al. (2017) as a method for probabilistic inference in a hierarchical Bayesian model.", "In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference.", "Furthermore, the identification of MAML as hierarchical Bayes provides a way to understand the algorithm’s operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies for efficient inference.", "We use this opportunity to propose an improvement to the MAML algorithm that makes use of techniques from approximate inference and curvature estimation.", "A remarkable aspect of human intelligence is the ability to quickly solve a novel problem and to be able to do so even in the face of limited experience in a novel domain.", "Such fast adaptation is made possible by leveraging prior learning experience in order to improve the efficiency of later learning.", "This capacity for meta-learning also has the potential to enable an artificially intelligent agent to learn more efficiently in situations with little available data or limited computational resources BID45 BID4 BID37 .In", "machine learning, meta-learning is formulated as the extraction of domain-general information that can act as an inductive bias to improve learning efficiency in novel tasks (Caruana, 1998; BID52 . This", "inductive bias has been implemented in various ways: as learned hyperparameters in a hierarchical Bayesian model that regularize task-specific parameters BID18 , as a learned metric space in which to group neighbors BID7 , as a trained recurrent neural network that allows encoding and retrieval of episodic information BID43 , or as an optimization algorithm with learned parameters BID45 BID3 .The model-agnostic", "meta-learning (MAML) of BID12 is an instance of a learned optimization procedure that directly optimizes the standard gradient descent rule. The algorithm estimates", "an initial parameter set to be shared among the task-specific models; the intuition is that gradient descent from the learned initialization provides a favorable inductive bias for fast adaptation. However, this inductive", "bias has been evaluated only empirically in prior work BID12 .In this work, we present", "a novel derivation of and a novel extension to MAML, illustrating that this algorithm can be understood as inference for the parameters of a prior distribution in a hierarchical Bayesian model. The learned prior allows", "for quick adaptation to unseen tasks on the basis of an implicit predictive density over task-specific parameters. 
The reinterpretation as", "hierarchical Bayes gives a principled statistical motivation for MAML as a meta-learning algorithm, and sheds light on the reasons for its favorable performance even among methods with significantly more parameters.More importantly, by casting gradient-based meta-learning within a Bayesian framework, we are able to improve MAML by taking insights from Bayesian posterior estimation as novel augmentations to the gradient-based meta-learning procedure. We experimentally demonstrate", "that this enables better performance on a few-shot learning benchmark.", "We have shown that model-agnostic meta-learning (MAML) estimates the parameters of a prior in a hierarchical Bayesian model.", "By casting gradient-based meta-learning within a Bayesian framework, our analysis opens the door to novel improvements inspired by probabilistic machinery.As a step in this direction, we propose an extension to MAML that employs a Laplace approximation to the posterior distribution over task-specific parameters.", "This technique provides a more accurate estimate of the integral that, in the original MAML algorithm, is approximated via a point estimate.", "We show how to estimate the quantity required by the Laplace approximation using Kroneckerfactored approximate curvature (K-FAC), a method recently proposed to approximate the quadratic curvature of a neural network objective for the purpose of a second-order gradient descent technique.Our contribution illuminates the road to exploring further connections between gradient-based metalearning methods and hierarchical Bayesian modeling.", "For instance, in this work we assume that the predictive distribution over new data-points is narrow and well-approximated by a point estimate.", "We may instead employ methods that make use of the variance of the distribution over task-specific parameters in order to model the predictive density over examples from a novel task.Furthermore, it is known that the Laplace approximation is inaccurate in cases where the integral is highly skewed, or is not unimodal and thus is not amenable to approximation by a single Gaussian mode.", "This could be solved by using a finite mixture of Gaussians, which can approximate many density functions arbitrarily well BID49 BID0 .", "The exploration of additional improvements such as this is an exciting line of future work." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11999999731779099, 0.19999998807907104, 0.2641509473323822, 0.37931033968925476, 0.28070175647735596, 0.47058823704719543, 0.2142857164144516, 0.16326530277729034, 0.13114753365516663, 0.20689654350280762, 0.202531635761261, 0.19230768084526062, 0.20338982343673706, 0.08888888359069824, 0.33898305892944336, 0.07999999821186066, 0.3658536672592163, 0.09999999403953552, 0.2978723347187042, 0.260869562625885, 0.2448979616165161, 0.2631579041481018, 0.19230768084526062, 0.25641024112701416, 0.07843136787414551, 0.13636362552642822 ]
BJ_UL-k0b
true
[ "A specific gradient-based meta-learning algorithm, MAML, is equivalent to an inference procedure in a hierarchical Bayesian model. We use this connection to improve MAML via methods from approximate inference and curvature estimation." ]
[ "This work provides an automatic machine learning (AutoML) modelling architecture called Autostacker.", "Autostacker improves the prediction accuracy of machine learning baselines by utilizing an innovative hierarchical stacking architecture and an efficient parameter search algorithm.", "Neither prior domain knowledge about the data nor feature preprocessing is needed.", "We significantly reduce the time of AutoML with a naturally inspired algorithm - Parallel Hill Climbing (PHC).", "By parallelizing PHC, Autostacker can provide candidate pipelines with sufficient prediction accuracy within a short amount of time.", "These pipelines can be used as is or as a starting point for human experts to build on.", "By focusing on the modelling process, Autostacker breaks the tradition of following fixed order pipelines by exploring not only single model pipeline but also innovative combinations and structures.", "As we will show in the experiment section, Autostacker achieves significantly better performance both in terms of test accuracy and time cost comparing with human initial trials and recent popular AutoML system.", "Machine Learning nowadays is the main approach for people to solve prediction problems by utilizing the power of data and algorithms.", "More and more models have been proposed to solve diverse problems based on the character of these problems.", "More specifically, different learning targets and collected data correspond to different modelling problems.", "To solve them, data scientists not only need to know the advantages and disadvantages of various models, they also need to manually tune the hyperparameters within these models.", "However, understanding thoroughly all of the models and running experiments to tune the hyperparameters involves a lot of effort and cost.", "Thus, automating the modelling procedure is highly desired both in academic areas and industry.An AutoML system aims at providing an automatically generated baseline with better performance to support data scientists and experts with specific domain knowledge to solve machine learning problems with less effort.", "The input to AutoML is a cleanly formatted dataset and the output is one or multiple modelling pipelines which enables the data scientists to begin working from a better starting point.", "There are some pioneering efforts addressing the challenge of finding appropriate configurations of modelling pipelines and providing some mechanisms to automate this process.", "However, these works often rely on fixed order machine learning pipelines which are obtained by mimicking the traditional working pipelines of human experts.", "This initial constraint limits the potential of machine to find better pipelines which may or may not be straightforward, and may or may not have been tried by human experts before.In this work, we present an architecture called Autostacker which borrows the stacking Wolpert (1992) BID1 method from ensemble learning, but allows for the discovery of pipelines made up of simply one model or many models combined in an innovative way.", "All of the automatically generated pipelines from Autostacker will provide a good enough starting point compared with initial trials of human experts.", "However, there are several challenges to accomplish this:• The quality of the datasets.", "Even though we are stepping into a big data era, we have to admit that there are still a lot of problems for which it is hard to collect enough data, especially data with little noise, such as historical 
events, medical research, natural disasters and so on.", "We tackle this challenge by always using the raw dataset in all of the stacking layers Figure 1 : This figure describes the pipeline architecture of Autostacker.", "Autostacker pipelines consists of one or multiple layers and one or multiple nodes inside each layer.", "Each node represents a machine learning primitive model, such as SVM, MLP, etc.", "The number of layers and the number of nodes per layer can be specified beforehand or they can be changeable as part of the hyperparameters.", "In the first layer, the raw dataset is used as input.", "Then in the following layers, the prediction results from each node will be added to the raw dataset as synthetic features (new colors).", "The new dataset generated by each layer will be used as input to the next layer.while also adding synthetic features in each stacking layer to fully use the information in the current dataset.", "More details are provided in the Approach section below.•", "The generalization ability of the AutoML framework. As", "mentioned above, existing AutoML frameworks only allow systems to generate an assembly line from data preprocessing and feature engineering to model selection where only a specific single model will be utilized by plugging in a previous model library. In", "this paper, depending on the computational cost and time cost, we make the number of such primitive models a variable which can be changed dynamically during the pipeline generation process or initialized in the beginning. This", "means that the simplest pipeline could be a single model, and the most complex pipeline could contain hundreds of primitive models as shown in Figure 1 • The large space of variables. The", "second challenge mentioned above leads to this problem naturally. Considering", "the whole AutoML framework, variables include the type of primitive machine learning models, the configuration settings of the framework (for instance, the number of primitive models in each stacking layer) and the hyperparameters in each primitive model. One way to", "address this issue is to treat this as an optimization problem BID3 . Here in this", "paper, we instead treat this challenge as a search problem. We propose to", "use a naturally inspired algorithm, Parallel Hill Climbing (PHC), BID10 to effectively search for appropriate candidate pipelines.To make the definition of the problem clear, we will use the terminology listed below throughout this paper:• Primitive and Pipeline: primitive denotes an existed single machine learning model, for example, a DecisionTree. In addition,", "these also include traditional ensemble learning models, such as Adaboost and Bagging. The pipeline", "is the form of the output of Autostacker, which is a single primitive or a combination of primitives.• Layer and Node", ": Figure 1 shows the architecture of Autostacker which is formed by multiple stacking layers and multiple nodes in each layers. 
Each node represents", "a machine learning primitive model.", "During the experiments and research process, we noticed that Autostacker still has several limitations.", "Here we will describe these limitations and possible future solutions:• The ability to automate the machine learning process for large scale datasets is limited.", "Nowadays, there are more sophisticated models or deep learning approaches which achieve very good results on large scale datasets and multi-task problems.", "Our current primitive library and modelling structure is very limited at solving these problems.", "One of the future solutions could be to incorporate more advanced primitives and to choose to use them when necessary.•", "Autostacker can be made more efficient with better search algorithms. There", "are a lot of modern evolutionary algorithms, and some of them are based on the Parallel Hill Climber that we use in this work. We believe", "that Autostacker could be made faster by incorporating them. We also believe", "traditional methods and knowledge from statistics and probability will be very helpful to better understand the output of Autostacker, such as by answering questions like: why do was a particular pipeline chosen as one of the final candidate pipelines?", "In this work, we contribute to automating the machine learning modelling process by proposing Autostacker, a machine learning system with an innovative architecture for automatic modelling and a well-behaved efficient search algorithm.", "We show how this system works and what the performance of this system is, comparing with human initial trails and related state of art techniques.", "We also demonstrate the scaling and parallelization ability of our system.", "In conclusion, we automate the machine learning modelling process by providing an efficient, flexible and well-behaved system which provides the potential to be generalized into complicated problems and is able to be integrated with data and feature processing modules." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.1428571343421936, 0.4324324131011963, 0, 0.12121211737394333, 0.11764705181121826, 0.060606054961681366, 0.1395348757505417, 0.17391303181648254, 0.1111111044883728, 0.12121211737394333, 0.2142857164144516, 0.09756097197532654, 0.11764705181121826, 0.24561403691768646, 0.1395348757505417, 0.10810810327529907, 0.10526315122842789, 0.1599999964237213, 0.10810810327529907, 0.06896550953388214, 0.10344827175140381, 0, 0.06896550953388214, 0.13793103396892548, 0.05714285373687744, 0, 0.05405404791235924, 0.0476190410554409, 0, 0, 0.11999999731779099, 0.04081632196903229, 0.045454539358615875, 0.07692307233810425, 0.2222222238779068, 0.06896550953388214, 0.13793103396892548, 0.158730149269104, 0.13333332538604736, 0.060606054961681366, 0.052631575614213943, 0.2857142686843872, 0.06666666269302368, 0.19999998807907104, 0.10526315122842789, 0.13333332538604736, 0.11428570747375488, 0.29629629850387573, 0.04999999701976776, 0, 0.11764705181121826, 0.45454543828964233, 0.1621621549129486, 0.14814814925193787, 0.23999999463558197 ]
SyvCD-b0W
true
[ "Automate machine learning system with efficient search algorithm and innovative structure to provide better model baselines." ]
[ "Surrogate models can be used to accelerate approximate Bayesian computation (ABC).", "In one such framework the discrepancy between simulated and observed data is modelled with a Gaussian process.", "So far principled strategies have been proposed only for sequential selection of the simulation locations.", "To address this limitation, we develop Bayesian optimal design strategies to parallellise the expensive simulations.", "We also touch the problem of quantifying the uncertainty of the ABC posterior due to the limited budget of simulations.", "Approximate Bayesian computation (Marin et al., 2012; Lintusaari et al., 2017 ) is used for Bayesian inference when the analytic form of the likelihood function of a statistical model of interest is either unavailable or too costly to evaluate, but simulating the model is feasible.", "Unfortunately, many models e.g. in genomics and epidemiology (Numminen et al., 2013; Marttinen et al., 2015; McKinley et al., 2018) and climate science (Holden et al., 2018) are costly to simulate making sampling-based ABC inference algorithms infeasible.", "To increase sample-efficiency of ABC, various methods using surrogate models such as neural networks (Papamakarios and Murray, 2016; Papamakarios et al., 2019; Lueckmann et al., 2019; Greenberg et al., 2019) and Gaussian processes (Meeds and Welling, 2014; Wilkinson, 2014; Gutmann and Corander, 2016; Järvenpää et al., 2018 Järvenpää et al., , 2019a have been proposed.", "In one promising surrogate-based ABC framework the discrepancy between the observed and simulated data is modelled with a Gaussian process (GP) (Gutmann and Corander, 2016; Järvenpää et al., 2018 Järvenpää et al., , 2019a .", "Sequential Bayesian experimental design (or active learning) methods to select the simulation locations so as to maximise the sample-efficiency in this framework were proposed by Järvenpää et al. (2019a) .", "However, one often has access to multiple computers to run some of the simulations in parallel.", "In this work, motivated by the related problem of batch Bayesian optimisation (Ginsbourger et al., 2010; Desautels et al., 2014; Shah and Ghahramani, 2015; Wu and Frazier, 2016) and the parallel GP-based method by Järvenpää et al. (2019b) for inference tasks where noisy and potentially expensive log-likelihood evaluations can be obtained, we resolve this limitation by developing principled batch simulation methods which considerably decrease the wall-time needed for ABC inference.", "The posterior distribution is often summarised for further decision making using e.g. expectation and variance.", "When the computational resources for ABC inference are limited, it would be important to assess the accuracy of such summaries, but this has not been explicitly acknowledged in earlier work.", "We devise an approximate numerical method to propagate the uncertainty of the discrepancy, represented by the GP model, to the resulting ABC posterior summaries.", "We call our resulting framework as Bayesian ABC in analogy with the related problems of Bayesian quadrature (O'Hagan, 1991; Osborne et al., 2012; Briol et al., 2019) and Bayesian optimisation (BO) (Brochu et al., 2010; Shahriari et al., 2015) ." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.1666666567325592, 0.2857142686843872, 0.25, 0.19999998807907104, 0.25, 0.19672130048274994, 0.0714285671710968, 0.0923076868057251, 0.25925925374031067, 0.23076923191547394, 0.14999999105930328, 0.1975308656692505, 0.1463414579629898, 0.14814814925193787, 0.35555556416511536, 0.24561403691768646 ]
SkgNKyhEtB
true
[ "We propose principled batch Bayesian experimental design strategies and a method for uncertainty quantification of the posterior summaries in a Gaussian process surrogate-based approximate Bayesian computation framework." ]
[ "Randomly initialized first-order optimization algorithms are the method of choice for solving many high-dimensional nonconvex problems in machine learning, yet general theoretical guarantees cannot rule out convergence to critical points of poor objective value.", "For some highly structured nonconvex problems however, the success of gradient descent can be understood by studying the geometry of the objective.", "We study one such problem -- complete orthogonal dictionary learning, and provide converge guarantees for randomly initialized gradient descent to the neighborhood of a global optimum.", "The resulting rates scale as low order polynomials in the dimension even though the objective possesses an exponential number of saddle points.", "This efficient convergence can be viewed as a consequence of negative curvature normal to the stable manifolds associated with saddle points, and we provide evidence that this feature is shared by other nonconvex problems of importance as well.", "Many central problems in machine learning and signal processing are most naturally formulated as optimization problems.", "These problems are often both nonconvex and highdimensional.", "High dimensionality makes the evaluation of second-order information prohibitively expensive, and thus randomly initialized first-order methods are usually employed instead.", "This has prompted great interest in recent years in understanding the behavior of gradient descent on nonconvex objectives (18; 14; 17; 11) .", "General analysis of first-and second-order methods on such problems can provide guarantees for convergence to critical points but these may be highly suboptimal, since nonconvex optimization is in general an NP-hard probem BID3 .", "Outside of a convex setting (28) one must assume additional structure in order to make statements about convergence to optimal or high quality solutions.", "It is a curious fact that for certain classes of problems such as ones that involve sparsification (25; 6) or matrix/tensor recovery (21; 19; 1) first-order methods can be used effectively.", "Even for some highly nonconvex problems where there is no ground truth available such as the training of neural networks first-order methods converge to high-quality solutions (40).Dictionary", "learning is a problem of inferring a sparse representation of data that was originally developed in the neuroscience literature (30), and has since seen a number of important applications including image denoising, compressive signal acquisition and signal classification (13; 26) . In this work", "we study a formulation of the dictionary learning problem that can be solved efficiently using randomly initialized gradient descent despite possessing a number of saddle points exponential in the dimension. A feature that", "appears to enable efficient optimization is the existence of sufficient negative curvature in the directions normal to the stable manifolds of all critical points that are not global minima BID0 . This property", "ensures that the regions of the space that feed into small gradient regions under gradient flow do not dominate the parameter space. FIG0 illustrates", "the value of this property: negative curvature prevents measure from concentrating about the stable manifold. As a consequence", "randomly initialized gradient methods avoid the \"slow region\" of around the saddle point. Negative curvature", "helps gradient descent. Red: \"slow region\"", "of small gradient around a saddle point. 
Green: stable manifold", "associated with the saddle point. Black: points that flow", "to the slow region. Left: global negative curvature", "normal to the stable manifold. Right: positive curvature normal", "to the stable manifold -randomly initialized gradient descent is more likely to encounter the slow region.The main results of this work is a convergence rate for randomly initialized gradient descent for complete orthogonal dictionary learning to the neighborhood of a global minimum of the objective. Our results are probabilistic since", "they rely on initialization in certain regions of the parameter space, yet they allow one to flexibly trade off between the maximal number of iterations in the bound and the probability of the bound holding.While our focus is on dictionary learning, it has been recently shown that for other important nonconvex problems such as phase retrieval BID7 performance guarantees for randomly initialized gradient descent can be obtained as well. In fact, in Appendix C we show that", "negative curvature normal to the stable manifolds of saddle points (illustrated in FIG0 ) is also a feature of the population objective of generalized phase retrieval, and can be used to obtain an efficient convergence rate.", "The above analysis suggests that second-order properties -namely negative curvature normal to the stable manifolds of saddle points -play an important role in the success of randomly initialized gradient descent in the solution of complete orthogonal dictionary learning.", "This was done by furnishing a convergence rate guarantee that holds when the random initialization is not in regions that feed into small gradient regions around saddle points, and bounding the probability of such an initialization.", "In Appendix C we provide an additional example of a nonconvex problem that for which an efficient rate can be obtained based on an analysis that relies on negative curvature normal to stable manifolds of saddles -generalized phase retrieval.", "An interesting direction of further work is to more precisely characterize the class of functions that share this feature.The effect of curvature can be seen in the dependence of the maximal number of iterations T on the parameter ζ 0 .", "This parameter controlled the volume of regions where initialization would lead to slow progress and the failure probability of the bound 1 − P was linear in ζ 0 , while T depended logarithmically on ζ 0 .", "This logarithmic dependence is due to a geometric increase in the distance from the stable manifolds of the saddles during gradient descent, which is a consequence of negative curvature.", "Note that the choice of ζ 0 allows one to flexibly trade off between T and 1 − P. By decreasing ζ 0 , the bound holds with higher probability, at the price of an increase in T .", "This is because the volume of acceptable initializations now contains regions of smaller minimal gradient norm.", "In a sense, the result is an extrapolation of works such as (23) that analyze the ζ 0 = 0 case to finite ζ 0 .Our", "analysis uses precise knowledge of the location of the stable manifolds of saddle points.For less symmetric problems, including variants of sparse blind deconvolution (41) and overcomplete tensor decomposition, there is no closed form expression for the stable manifolds. However", ", it is still possible to coarsely localize them in regions containing negative curvature. 
Understanding", "the implications of this geometric structure for randomly initialized first-order methods is an important direction for future work.One may hope that studying simple model problems and identifying structures (here, negative curvature orthogonal to the stable manifold) that enable efficient optimization will inspire approaches to broader classes of problems. One problem of", "obvious interest is the training of deep neural networks for classification, which shares certain high-level features with the problems discussed in this paper. The objective", "is also highly nonconvex and is conjectured to contain a proliferation of saddle points BID10 , yet these appear to be avoided by first-order methods BID15 for reasons that are still quite poorly understood beyond the two-layer case (39).[19] Prateek Jain,", "Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix", "completion using alternating minimization. DISPLAYFORM0 .", "Thus critical", "points are ones where either tanh( q µ ) = 0 (which cannot happen on S n−1 ) or tanh( q µ ) is in the nullspace of (I − qq * ), which implies tanh( q µ ) = cq for some constant b. The equation", "tanh( x µ ) = bx has either a single solution at the origin or 3 solutions at {0, ±r(b)} for some r(b). Since this equation", "must be solves simultaneously for every element of q, we obtain ∀i ∈ [n] : q i ∈ {0, ±r(b)}. To obtain solutions", "on the sphere, one then uses the freedom we have in choosing b (and thus r(b)) such that q = 1. The resulting set of", "critical points is thus DISPLAYFORM1 To prove the form of the stable manifolds, we first show that for q i such that |q i | = q ∞ and any q j such that |q j | + ∆ = |q i | and sufficiently small ∆ > 0, we have DISPLAYFORM2 For ease of notation we now assume q i , q j > 0 and hence ∆ = q i − q j , otherwise the argument can be repeated exactly with absolute values instead. The above inequality", "can then be written as DISPLAYFORM3 If we now define DISPLAYFORM4 where the O(∆ 2 ) term is bounded. Defining a vector r", "∈ R n by DISPLAYFORM5 we have r 2 = 1. Since tanh(x) is concave", "for x > 0, and |r i | ≤ 1, we find DISPLAYFORM6 From DISPLAYFORM7 and thus q j ≥ 1 √ n − ∆. Using this inequality and", "properties of the hyperbolic secant we obtain DISPLAYFORM8 and plugging in µ = c √ n log n for some c < 1 DISPLAYFORM9 log n + log log n + log 4).We can bound this quantity", "by a constant, say h 2 ≤ 1 2 , by requiring DISPLAYFORM10 ) log n + log log n ≤ − log 8and for and c < 1, using − log n + log log n < 0 we have DISPLAYFORM11 Since ∆ can be taken arbitrarily small, it is clear that c can be chosen in an n-independent manner such that A ≤ − log 8. We then find DISPLAYFORM12", "since this inequality is strict, ∆ can be chosen small enough such that O(∆ 2 ) < ∆(h 1 − h 2 ) and hence h > 0, proving 9.It follows that under negative gradient flow, a point with |q j | < ||q|| ∞ cannot flow to a point q such that |q j | = ||q || ∞ . From the form of the critical", "points, for every such j, q must thus flow to a point such that q j = 0 (the value of the j coordinate cannot pass through 0 to a point where |q j | = ||q || ∞ since from smoothness of the objective this would require passing some q with q j = 0, at which point grad [f Sep ] (q ) j = 0).As for the maximal magnitude coordinates", ", if there is more than one coordinate satisfying |q i1 | = |q i2 | = q ∞ , it is clear from symmetry that at any subsequent point q along the gradient flow line q i1 = q i2 . 
These coordinates cannot change sign since", "from the smoothness of the objective this would require that they pass through a point where they have magnitude smaller than 1/ √ n, at which point some other coordinate must have a larger magnitude (in order not to violate the spherical constraint), contradicting the above result for non-maximal elements. It follows that the sign pattern of these", "elements is preserved during the flow. Thus there is a single critical point to", "which any q can flow, and this is given by setting all the coordinates with |q j | < q ∞ to 0 and multiplying the remaining coordinates by a positive constant to ensure the resulting vector is on S n . Denoting this critical point by α, there", "is a vector b such that q = P S n−1 [a(α) + b] and supp(a(α)) ∩ supp(b) = ∅, b ∞ < 1 with the form of a(α) given by 5 . The collection of all such points defines", "the stable manifold of α.Proof of Lemma 2: (Separable objective gradient projection). i) We consider the sign(w i ) = 1 case; the", "sign(w i ) = −1 case follows directly.Recalling that DISPLAYFORM13 qn , we first prove DISPLAYFORM14 for some c > 0 whose form will be determined later. The inequality clearly holds for w i = q n", ".To DISPLAYFORM15 verify that it holds for smaller", "values of w i as well, we now show that ∂ ∂w i tanh w i µ − tanh q n µ w i q n − c(q n − w i ) < 0 which will ensure that it holds for all w i . We define s 2 = 1 − ||w|| 2 + w 2 i and denote q", "n = s 2 − w 2 i to extract the w i dependence, givingWhere in the last inequality we used properties of the sech function and q n ≥ w i . We thus want to show DISPLAYFORM16 and it follows", "that 10 holds. For µ < 1 BID15 we are guaranteed that c > 0.From", "examining the RHS of 10 (and plugging in q n = s 2 − w 2 i ) we see that any lower bound on the gradient of an element w j applies also to any element |w i | ≤ |w j |. Since for |w j | = ||w|| ∞ we have q n − w j = w", "j ζ, for every log( 1 µ )µ ≤ w i we obtain the bound DISPLAYFORM17 Proof of Theorem 1: (Gradient descent convergence rate for separable function).We obtain a convergence rate by first bounding the", "number of iterations of Riemannian gradient descent in C ζ0 \\C 1 , and then considering DISPLAYFORM18 . Choosing c 2 so that µ < 1 2 , we can apply Lemma", "2, and for u defined in 7, we thus have DISPLAYFORM19 Since from Lemma 7 the Riemannian gradient norm is bounded by √ n, we can choose c 1 , c 2 such that µ log( DISPLAYFORM20 . This choice of η then satisfies the conditions of", "Lemma 17 with r = µ log( DISPLAYFORM21 , M = √ n, which gives that after a gradient step DISPLAYFORM22 for some suitably chosenc > 0. If we now define by w (t) the t-th iterate of Riemannian", "gradient descent and DISPLAYFORM23 and the number of iterations required to exit C ζ0 \\C 1 is DISPLAYFORM24 To bound the remaining iterations, we use Lemma 2 to obtain that for every w ∈ C ζ0 \\B ∞ r , DISPLAYFORM25 where we have used ||u DISPLAYFORM26 We thus have DISPLAYFORM27 Choosing DISPLAYFORM28 where L is the gradient Lipschitz constant of f s , from Lemma 5 we obtain DISPLAYFORM29 According to Lemma B, L = 1/µ and thus the above holds if we demand η < µ 2 . Combining 12 and 13 gives DISPLAYFORM30 .To obtain the final", "rate, we use in g(w 0 ) − g * ≤ √ n andcη", "< 1 ⇒ 1 log(1+cη) <C cη for somẽ C > 0. 
Thus one can choose C > 0 such that DISPLAYFORM31 From Lemma", "1 the ball B ∞ r contains a global minimizer of the objective, located at the origin.The probability of initializing in Ȃ C ζ0 is simply given from Lemma 3 and by summing over the 2n possible choices of C ζ0 , one for each global minimizer (corresponding to a single signed basis vector)." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.15094339847564697, 0.20512819290161133, 0.43478259444236755, 0.1463414579629898, 0.178571417927742, 0.05714285373687744, 0, 0.04999999329447746, 0.19512194395065308, 0.22641508281230927, 0.09302324801683426, 0.07999999821186066, 0.0833333283662796, 0.10526315122842789, 0.2448979616165161, 0.0833333283662796, 0.10526315122842789, 0.10810810327529907, 0.11764705181121826, 0.1538461446762085, 0.13333332538604736, 0.06896550953388214, 0.0714285671710968, 0.0714285671710968, 0.42105263471603394, 0.1428571343421936, 0.26923075318336487, 0.3396226465702057, 0.23076923191547394, 0.3333333432674408, 0.07407406717538834, 0.07692307233810425, 0.1818181723356247, 0.07692307233810425, 0.11428570747375488, 0.1428571343421936, 0.11320754140615463, 0, 0.1875, 0.13636362552642822, 0.09999999403953552, 0, 0, 0.10526315122842789, 0.13333332538604736, 0.04878048226237297, 0.09090908616781235, 0.051948048174381256, 0.09302324801683426, 0, 0.04255318641662598, 0.12244897335767746, 0.11428570747375488, 0.08219178020954132, 0.10810810327529907, 0.0714285671710968, 0.11940298229455948, 0.12121211737394333, 0.10344827175140381, 0.072727270424366, 0.19512194395065308, 0.0363636314868927, 0.0714285671710968, 0.06896550953388214, 0.07843136787414551, 0, 0.16949151456356049, 0.2745097875595093, 0.08510638028383255, 0.09677419066429138, 0.13793103396892548, 0.10989010334014893, 0, 0.04999999329447746, 0.0923076868057251 ]
HyxlHsActm
true
[ "We provide an efficient convergence rate for gradient descent on the complete orthogonal dictionary learning objective based on a geometric analysis." ]
[ "Coding theory is a central discipline underpinning wireline and wireless modems that are the workhorses of the information age.", "Progress in coding theory is largely driven by individual human ingenuity with sporadic breakthroughs over the past century.", "In this paper we study whether it is possible to automate the discovery of decoding algorithms via deep learning.", "We study a family of sequential codes parametrized by recurrent neural network (RNN) architectures.", "We show that cre- atively designed and trained RNN architectures can decode well known sequential codes such as the convolutional and turbo codes with close to optimal performance on the additive white Gaussian noise (AWGN) channel, which itself is achieved by breakthrough algorithms of our times (Viterbi and BCJR decoders, representing dynamic programing and forward-backward algorithms).", "We show strong gen- eralizations, i.e., we train at a specific signal to noise ratio and block length but test at a wide range of these quantities, as well as robustness and adaptivity to deviations from the AWGN setting.", "Reliable digital communication, both wireline (ethernet, cable and DSL modems) and wireless (cellular, satellite, deep space), is a primary workhorse of the modern information age.", "A critical aspect of reliable communication involves the design of codes that allow transmissions to be robustly (and computationally efficiently) decoded under noisy conditions.", "This is the discipline of coding theory; over the past century and especially the past 70 years (since the birth of information theory BID22 ) much progress has been made in the design of near optimal codes.", "Landmark codes include convolutional codes, turbo codes, low density parity check (LDPC) codes and, recently, polar codes.", "The impact on humanity is enormous -every cellular phone designed uses one of these codes, which feature in global cellular standards ranging from the 2nd generation to the 5th generation respectively, and are text book material BID16 .The", "canonical setting is one of point-to-point reliable communication over the additive white Gaussian noise (AWGN) channel and performance of a code in this setting is its gold standard. The", "AWGN channel fits much of wireline and wireless communications although the front end of the receiver may have to be specifically designed before being processed by the decoder (example: intersymbol equalization in cable modems, beamforming and sphere decoding in multiple antenna wireless systems); again this is text book material BID26 . There", "are two long term goals in coding theory: (a) design", "of new, computationally efficient, codes that improve the state of the art (probability of correct reception) over the AWGN setting. Since the", "current codes already operate close to the information theoretic \"Shannon limit\", the emphasis is on robustness and adaptability to deviations from the AWGN settings (a list of channel models motivated by practical settings, (such as urban, pedestrian, vehicular) in the recent 5th generation cellular standard is available in Annex B of 3GPP TS 36.101.) (b) design", "of new codes for multi-terminal (i.e., beyond point-to-point) settings -examples include the feedback channel, the relay channel and the interference channel.Progress over these long term goals has generally been driven by individual human ingenuity and, befittingly, is sporadic. 
For instance", ", the time duration between convolutional codes (2nd generation cellular standards) to polar codes (5th generation cellular standards) is over 4 decades. Deep learning", "is fast emerging as capable of learning sophisticated algorithms from observed data (input, action, output) alone and has been remarkably successful in a large variety of human endeavors (ranging from language BID11 to vision BID17 to playing Go BID23 ). Motivated by", "these successes, we envision that deep learning methods can play a crucial role in solving both the aforementioned goals of coding theory.While the learning framework is clear and there is virtually unlimited training data available, there are two main challenges: (a) The space", "of codes is very vast and the sizes astronomical; for instance a rate 1/2 code over 100 information bits involves designing 2 100 codewords in a 200 dimensional space. Computationally", "efficient encoding and decoding procedures are a must, apart from high reliability over the AWGN channel. (b) Generalization", "is highly desirable across block lengths and data rate that each work very well over a wide range of channel signal to noise ratios (SNR). In other words, one", "is looking to design a family of codes (parametrized by data rate and number of information bits) and their performance is evaluated over a range of channel SNRs.For example, it is shown that when a neural decoder is exposed to nearly 90% of the codewords of a rate 1/2 polar code over 8 information bits, its performance on the unseen codewords is poor . In part due to these", "challenges, recent deep learning works on decoding known codes using data-driven neural decoders have been limited to short or moderate block lengths BID4 BID13 . Other deep learning", "works on coding theory focus on decoding known codes by training a neural decoder that is initialized with the existing decoding algorithm but is more general than the existing algorithm BID12 BID29 . The main challenge", "is to restrict oneself to a class of codes that neural networks can naturally encode and decode. In this paper, we", "restrict ourselves to a class of sequential encoding and decoding schemes, of which convolutional and turbo codes are part of. These sequential", "coding schemes naturally meld with the family of recurrent neural network (RNN) architectures, which have recently seen large success in a wide variety of time-series tasks. The ancillary advantage", "of sequential schemes is that arbitrarily long information bits can be encoded and also at a large variety of coding rates.Working within sequential codes parametrized by RNN architectures, we make the following contributions.(1) Focusing on convolutional", "codes we aim to decode them on the AWGN channel using RNN architectures. Efficient optimal decoding of", "convolutional codes has represented historically fundamental progress in the broad arena of algorithms; optimal bit error decoding is achieved by the 'Viterbi decoder' BID27 which is simply dynamic programming or Dijkstra's algorithm on a specific graph (the 'trellis') induced by the convolutional code. Optimal block error decoding", "is the BCJR decoder BID0 which is part of a family of forward-backward algorithms. 
While early work had shown that", "vanilla-RNNs are capable in principle of emulating both Viterbi and BCJR decoders BID28 BID21 we show empirically, through a careful construction of RNN architectures and training methodology, that neural network decoding is possible at very near optimal performances (both bit error rate (BER) and block error rate (BLER)). The key point is that we train", "a RNN decoder at a specific SNR and over short information bit lengths (100 bits) and show strong generalization capabilities by testing over a wide range of SNR and block lengths (up to 10,000 bits). The specific training SNR is closely", "related to the Shannon limit of the AWGN channel at the rate of the code and provides strong information theoretic collateral to our empirical results.(2) Turbo codes are naturally built on", "top of convolutional codes, both in terms of encoding and decoding. A natural generalization of our RNN convolutional", "decoders allow us to decode turbo codes at BER comparable to, and at certain regimes, even better than state of the art turbo decoders on the AWGN channel. That data driven, SGD-learnt, RNN architectures can", "decode comparably is fairly remarkable since turbo codes already operate near the Shannon limit of reliable communication over the AWGN channel.(3) We show the afore-described neural network decoders", "for both convolutional and turbo codes are robust to variations to the AWGN channel model. We consider a problem of contemporary interest: communication", "over a \"bursty\" AWGN channel (where a small fraction of noise has much higher variance than usual) which models inter-cell interference in OFDM cellular systems (used in 4G and 5G cellular standards) or co-channel radar interference. We demonstrate empirically the neural network architectures can", "adapt to such variations and beat state of the art heuristics comfortably (despite evidence elsewhere that neural network are sensitive to models they are trained on BID24 ). Via an innovative local perturbation analysis (akin to BID15 ))", ", we demonstrate the neural network to have learnt sophisticated preprocessing heuristics in engineering of real world systems BID10 .", "In this paper we have demonstrated that appropriately designed and trained RNN architectures can 'learn' the landmark algorithms of Viterbi and BCJR decoding based on the strong generalization capabilities we demonstrate.", "This is similar in spirit to recent works on 'program learning' in the literature BID14 BID2 .", "In those works, the learning is assisted significantly by a low level program trace on an input; here we learn the Viterbi and BCJR algorithms only by end-to-end training samples; we conjecture that this could be related to the strong \"algebraic\" nature of the Viterbi and BCJR algorithms.", "The representation capabilities and learnability of the RNN architectures in decoding existing codes suggest a possibility that new codes could be leant on the AWGN channel itself and improve the state of the art (constituted by turbo, LDPC and polar codes).", "Also interesting is a new look at classical multi-terminal communication problems, including the relay and interference channels.", "Both are active areas of present research." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10526315122842789, 0, 0.051282044500112534, 0.23529411852359772, 0.47887322306632996, 0.178571417927742, 0.045454539358615875, 0.1395348757505417, 0.11999999731779099, 0.05882352590560913, 0.1111111044883728, 0.04347825422883034, 0.0923076868057251, 0, 0.10810810327529907, 0.11267605423927307, 0.06666666269302368, 0.09999999403953552, 0.06779660284519196, 0.09999999403953552, 0.0833333283662796, 0.052631575614213943, 0.16326530277729034, 0.11594202369451523, 0.13333332538604736, 0.12244897335767746, 0.29999998211860657, 0.21052631735801697, 0, 0.2142857164144516, 0.3243243098258972, 0.06666666269302368, 0.052631575614213943, 0.20895521342754364, 0.15686273574829102, 0.12765957415103912, 0.11428570747375488, 0.2745097875595093, 0.1702127605676651, 0.1904761791229248, 0.13333332538604736, 0.145454540848732, 0.051282044500112534, 0.2916666567325592, 0.05714285373687744, 0.10169491171836853, 0.18518517911434174, 0.05405404791235924, 0 ]
ryazCMbR-
true
[ "We show that creatively designed and trained RNN architectures can decode well known sequential codes and achieve close to optimal performances." ]
[ "Adam is shown not being able to converge to the optimal solution in certain cases.", "Researchers recently propose several algorithms to avoid the issue of non-convergence of Adam, but their efficiency turns out to be unsatisfactory in practice.", "In this paper, we provide a new insight into the non-convergence issue of Adam as well as other adaptive learning rate methods.", "We argue that there exists an inappropriate correlation between gradient $g_t$ and the second moment term $v_t$ in Adam ($t$ is the timestep), which results in that a large gradient is likely to have small step size while a small gradient may have a large step size.", "We demonstrate that such unbalanced step sizes are the fundamental cause of non-convergence of Adam, and we further prove that decorrelating $v_t$ and $g_t$ will lead to unbiased step size for each gradient, thus solving the non-convergence problem of Adam.", "Finally, we propose AdaShift, a novel adaptive learning rate method that decorrelates $v_t$ and $g_t$ by temporal shifting, i.e., using temporally shifted gradient $g_{t-n}$ to calculate $v_t$.", "The experiment results demonstrate that AdaShift is able to address the non-convergence issue of Adam, while still maintaining a competitive performance with Adam in terms of both training speed and generalization.", "First-order optimization algorithms with adaptive learning rate play an important role in deep learning due to their efficiency in solving large-scale optimization problems.", "Denote g t ∈ R n as the gradient of loss function f with respect to its parameters θ ∈ R n at timestep t, then the general updating rule of these algorithms can be written as follows (Reddi et al., 2018) : DISPLAYFORM0 In the above equation, m t φ(g 1 , . . . , g t ) ∈ R n is a function of the historical gradients; v t ψ(g 1 , . . . , g t ) ∈ R n + is an n-dimension vector with non-negative elements, which adapts the learning rate for the n elements in g t respectively; α t is the base learning rate; and αt √ vt is the adaptive step size for m t .One", "common choice of φ(g 1 , . . . , g t ) is the exponential moving average of the gradients used in Momentum (Qian, 1999) and Adam (Kingma & Ba, 2014) , which helps alleviate gradient oscillations. The", "commonly-used ψ(g 1 , . . . , g t ) in deep learning community is the exponential moving average of squared gradients, such as Adadelta (Zeiler, 2012) , RMSProp (Tieleman & Hinton, 2012) , Adam (Kingma & Ba, 2014) and Nadam (Dozat, 2016) .Adam", "(Kingma & Ba, 2014 ) is a typical adaptive learning rate method, which assembles the idea of using exponential moving average of first and second moments and bias correction. In general", ", Adam is robust and efficient in both dense and sparse gradient cases, and is popular in deep learning research. However,", "Adam is shown not being able to converge to optimal solution in certain cases. Reddi et", "al. (2018) point out that the key issue in the convergence proof of Adam lies in the quantity DISPLAYFORM1 which is assumed to be positive, but unfortunately, such an assumption does not always hold in Adam. They provide", "a set of counterexamples and demonstrate that the violation of positiveness of Γ t will lead to undesirable convergence behavior in Adam. Reddi et al.", "(2018) then propose two variants, AMSGrad and AdamNC, to address the issue by keeping Γ t positive. 
Specifically", ", AMSGrad definesv t as the historical maximum of v t , i.e.,v t = max {v i } t i=1 , and replaces v t withv t to keep v t non-decreasing and therefore forces Γ t to be positive; while AdamNC forces v t to have \"long-term memory\" of past gradients and calculates v t as their average to make it stable. Though these", "two algorithms solve the non-convergence problem of Adam to a certain extent, they turn out to be inefficient in practice: they have to maintain a very large v t once a large gradient appears, and a large v t decreases the adaptive learning rate αt √ vt and slows down the training process.In this paper, we provide a new insight into adaptive learning rate methods, which brings a new perspective on solving the non-convergence issue of Adam. Specifically", ", in Section 3, we study the counterexamples provided by Reddi et al. (2018) via analyzing the accumulated step size of each gradient g t . We observe", "that in the common adaptive learning rate methods, a large gradient tends to have a relatively small step size, while a small gradient is likely to have a relatively large step size. We show that", "the unbalanced step sizes stem from the inappropriate positive correlation between v t and g t , and we argue that this is the fundamental cause of the non-convergence issue of Adam.In Section 4, we further prove that decorrelating v t and g t leads to equal and unbiased expected step size for each gradient, thus solving the non-convergence issue of Adam. We subsequently", "propose AdaShift, a decorrelated variant of adaptive learning rate methods, which achieves decorrelation between v t and g t by calculating v t using temporally shifted gradients. Finally, in Section", "5, we study the performance of our proposed AdaShift, and demonstrate that it solves the non-convergence issue of Adam, while still maintaining a decent performance compared with Adam in terms of both training speed and generalization.", "In this paper, we study the non-convergence issue of adaptive learning rate methods from the perspective of the equivalent accumulated step size of each gradient, i.e., the net update factor defined in this paper.", "We show that there exists an inappropriate correlation between v t and g t , which leads to unbalanced net update factor for each gradient.", "We demonstrate that such unbalanced step sizes are the fundamental cause of non-convergence of Adam, and we further prove that decorrelating v t and g t will lead to unbiased expected step size for each gradient, thus solving the non-convergence problem of Adam.", "Finally, we propose AdaShift, a novel adaptive learning rate method that decorrelates v t and g t via calculating v t using temporally shifted gradient g t−n .In", "addition, based on our new perspective on adaptive learning rate methods, v t is no longer necessarily the second moment of g t , but a random variable that is independent of g t and reflects the overall gradient scale. Thus", ", it is valid to calculate v t with the spatial elements of previous gradients. We", "further found that when the spatial operation φ outputs a shared scalar for each block, the resulting algorithm turns out to be closely related to SGD, where each block has an overall adaptive learning rate and the relative gradient scale in each block is maintained. The", "experiment results demonstrate that AdaShift is able to solve the non-convergence issue of Adam. 
In", "the meantime, AdaShift achieves competitive and even better training and testing performance when compared with Adam. FIG7", ". It suggests", "that for a fixed sequential online optimization problem, both of β 1 and β 2 determine the direction and speed of Adam optimization process. Furthermore", ", we also study the threshold point of C and d, under which Adam will change to the incorrect direction, for each fixed β 1 and β 2 that vary among [0, 1). To simplify", "the experiments, we keep d = C such that the overall gradient of each epoch being +1. The result", "is shown in FIG7 , which suggests, at the condition of larger β 1 or larger β 2 , it needs a larger C to make Adam stride on the opposite direction. In other words", ", large β 1 and β 2 will make the non-convergence rare to happen." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17391303181648254, 0.2666666507720947, 0.3333333432674408, 0.1860465109348297, 0.2857142686843872, 0.054054051637649536, 0.307692289352417, 0, 0.06896551698446274, 0.190476194024086, 0.17391304671764374, 0.15789473056793213, 0.1538461446762085, 0.0833333283662796, 0.190476194024086, 0.25, 0.2222222238779068, 0.11320754140615463, 0.2153846174478531, 0.17142856121063232, 0.12121211737394333, 0.25925925374031067, 0.1111111044883728, 0.29999998211860657, 0.20512820780277252, 0.12121211737394333, 0.27272728085517883, 0.060606054961681366, 0.1395348757505417, 0.23999999463558197, 0.08163265138864517, 0.5, 0.23999999463558197, 0, 0.2666666507720947, 0.19512194395065308, 0.14814814925193787, 0.1538461446762085, 0.27272728085517883 ]
HkgTkhRcKQ
true
[ "We analysis and solve the non-convergence issue of Adam." ]
[ "Most domain adaptation methods consider the problem of transferring knowledge to the target domain from a single source dataset.", "However, in practical applications, we typically have access to multiple sources.", "In this paper we propose the first approach for Multi-Source Domain Adaptation (MSDA) based on Generative Adversarial Networks.", "Our method is inspired by the observation that the appearance of a given image depends on three factors: the domain, the style (characterized in terms of low-level features variations) and the content.", "For this reason we propose to project the image features onto a space where only the dependence from the content is kept, and then re-project this invariant representation onto the pixel space using the target domain and style.", "In this way, new labeled images can be generated which are used to train a final target classifier.", "We test our approach using common MSDA benchmarks, showing that it outperforms state-of-the-art methods.", "A well known problem in computer vision is the need to adapt a classifier trained on a given source domain in order to work on another domain, i.e. the target.", "Since the two domains typically have different marginal feature distributions, the adaptation process needs to align the one to the other in order to reduce the domain shift (Torralba & Efros (2011) ).", "In many practical scenarios, the target data are not annotated and Unsupervised Domain Adaptation (UDA) methods are required.", "While most previous adaptation approaches consider a single source domain, in real world applications we may have access to multiple datasets.", "In this case, Multi-Source Domain Adaptation (MSDA) (Yao & Doretto (2010) ; Mansour et al. (2009) ; Xu et al. (2018) ; Peng et al. (2019) ) methods may be adopted, in which more than one source dataset is considered in order to make the adaptation process more robust.", "However, despite more data can be used, MSDA is challenging as multiple domain shift problems need to be simultaneously and coherently solved.", "In this paper we tackle MSDA (unsupervised) problem and we propose a novel Generative Adversarial Network (GAN) for addressing the domain shift when multiple source domains are available.", "Our solution is based on generating artificial target samples by transforming images from all the source domains.", "Then the synthetically generated images are used for training the target classifier.", "While this strategy has been recently adopted in single-source UDA scenarios (Russo et al. (2018) ; ; Liu & Tuzel (2016) ; Murez et al. (2018) ; Sankaranarayanan et al. (2018) ), we are the first to show how it can be effectively exploited in a MSDA setting.", "The holy grail of any domain adaptation method is to obtain domain invariant representations.", "Similarly, in multi-domain image-to-image translation tasks it is very crucial to obtain domain invariant representations in order to reduce the number of learned translations from O(N 2 ) to O(N ), where N is the number of domains.", "Several domain adaptation methods (Roy et al. (2019) ; Carlucci et al. (2017) ; ; Tzeng et al. (2014) ) achieve domain-invariant representations by aligning only domain specific distributions.", "However, we postulate that style is the most important latent factor that describe a domain and need to be modelled separately for obtaining optimal domain invariant representation.", "More precisely, in our work we assume that the appearance of an image depends on three factors: i.e. 
the content, the domain and the style.", "The domain models properties that are shared by the elements of a dataset but which may not be shared by other datasets, whereas, the factor style represents a property that is shared among different parts of a single image and describes low-level features which concern a specific image.", "Our generator obtains the do-main invariant representation in a two-step process, by first obtaining style invariant representations followed by achieving domain invariant representation.", "In more detail, the proposed translation is implemented using a style-and-domain translation generator.", "This generator is composed of two main components, an encoder and a decoder.", "Inspired by (Roy et al. (2019) ) in the encoder we embed whitening layers that progressively align the styleand-domain feature distributions in order to obtain a representation of the image content which is invariant to these factors.", "Then, in the decoder, we project this invariant representation onto a new domain-and-style specific distribution with Whitening and Coloring (W C) ) batch transformations, according to the target data.", "Importantly, the use of an intermediate, explicit invariant representation, obtained through W C, makes the number of domain transformations which need to be learned linear with the number of domains.", "In other words, this design choice ensures scalability when the number of domains increases, which is a crucial aspect for an effective MSDA method.", "Contributions.", "Our main contributions can be summarized as follows.", "(i) We propose the first generative model dealing with MSDA.", "We call our approach TriGAN because it is based on three different factors of the images: the style, the domain and the content.", "(ii) The proposed style-anddomain translation generator is based on style and domain specific statistics which are first removed from and then added to the source images by means of modified W C layers: Instance Whitening Transform (IW T ), Domain Whitening Transform (DW T ) (Roy et al. (2019) ), conditional Domain Whitening Transform (cDW T ) and Adaptive Instance Whitening Transform (AdaIW T ).", "Notably, the IW T and AdaIW T are novel layers introduced with this paper.", "(iii) We test our method on two MSDA datasets, Digits-Five (Xu et al. (2018) ) and Office-Caltech10 (Gong et al. (2012) ), outperforming state-of-the-art methods." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1666666567325592, 0.06666666269302368, 0.4324324131011963, 0.21739129722118378, 0.2448979616165161, 0.10810810327529907, 0, 0.08888888359069824, 0.08695651590824127, 0.1111111044883728, 0.09999999403953552, 0.09999999403953552, 0.09999999403953552, 0.3478260934352875, 0.1111111044883728, 0.06666666269302368, 0.07017543166875839, 0.25, 0.08163265138864517, 0.09756097197532654, 0.22727271914482117, 0.3333333432674408, 0.1428571343421936, 0.10526315122842789, 0.06451612710952759, 0.1249999925494194, 0.11538460850715637, 0.12765957415103912, 0.09090908616781235, 0.23255813121795654, 0, 0.13793103396892548, 0.307692289352417, 0.1764705777168274, 0.1875, 0.1428571343421936 ]
rygHq6EFvB
true
[ "In this paper we propose generative method for multisource domain adaptation based on decomposition of content, style and domain factors." ]
[ "Inferring the most likely configuration for a subset of variables of a joint distribution given the remaining ones – which we refer to as co-generation – is an important challenge that is computationally demanding for all but the simplest settings.", "This task has received a considerable amount of attention, particularly for classical ways of modeling distributions like structured prediction.", "In contrast, almost nothing is known about this task when considering recently proposed techniques for modeling high-dimensional distributions, particularly generative adversarial nets (GANs).", "Therefore, in this paper, we study the occurring challenges for co-generation with GANs.", "To address those challenges we develop an annealed importance sampling (AIS) based Hamiltonian Monte Carlo (HMC) co-generation algorithm.", "The presented approach significantly outperforms classical gradient-based methods on synthetic data and on CelebA.", "While generative adversarial nets (GANs) [6] and variational auto-encoders (VAEs) [8] model a joint probability distribution which implicitly captures the correlations between multiple parts of the output, e.g., pixels in an image, and while those methods permit easy sampling from the entire output space domain, it remains an open question how to sample from part of the domain given the remainder?", "We refer to this task as co-generation.", "To enable co-generation for a domain unknown at training time, for GANs, optimization based algorithms have been proposed [15, 10] .", "Intuitively, they aim at finding that latent sample which accurately matches the observed part.", "However, successful training of the GAN leads to an increasingly ragged energy landscape, making the search for an appropriate latent variable via backpropagation through the generator harder and harder until it eventually fails.", "To deal with this ragged energy landscape during co-generation, we develop a method using an annealed importance sampling (AIS) [11] based Hamiltonian Monte Carlo (HMC) algorithm [4, 12] , which is typically used to estimate (ratios of) the partition function [14, 13] .", "Rather than focus on the partition function, the proposed approach leverages the benefits of AIS, i.e., gradually annealing a complex probability distribution, and HMC, i.e., avoiding a localized random walk.", "We evaluate the proposed approach on synthetic data and imaging data (CelebA), showing compelling results via MSE and MSSIM metrics.", "For more details and results please see our main conference paper [5] .", "We propose a co-generation approach, i.e., we complete partially given input data, using annealed importance sampling (AIS) based on the Hamiltonian Monte Carlo (HMC).", "Different from classical optimization based methods, specifically GD, which get easily trapped in local optima when solving this task, the proposed approach is much more robust.", "Importantly, the method is able to traverse large energy barriers that occur when training generative adversarial nets.", "Its robustness is due to AIS gradually annealing a probability distribution and HMC avoiding localized walks.", "We show additional results for real data experiments.", "We observe our proposed algorithm to recover masked images more accurately than baselines and to generate better high-resolution images given low-resolution images.", "We show masked CelebA (Fig. 5) and LSUN (Fig. 
6 ) recovery results for baselines and our method, given a Progressive GAN generator.", "Note that our algorithm is pretty robust to the position of the z initialization, since the generated results are consistent in Fig. 5 .", "(a)", "(b)", "(c)" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0952380895614624, 0, 0, 0.1818181723356247, 0.29629629850387573, 0.09090908616781235, 0.0624999962747097, 0.1249999925494194, 0.0714285671710968, 0.08695651590824127, 0.052631575614213943, 0.15686273574829102, 0.10810810327529907, 0.14814814925193787, 0, 0.34285715222358704, 0.05714285373687744, 0.07692307233810425, 0, 0, 0, 0, 0.06666666269302368 ]
rkeahX3qLr
true
[ "Using annealed importance sampling on the co-generation problem. " ]
[ "Implicit probabilistic models are models defined naturally in terms of a sampling procedure and often induces a likelihood function that cannot be expressed explicitly.", "We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions.", "Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite.", "We also demonstrate encouraging experimental results.", "Generative modelling is a cornerstone of machine learning and has received increasing attention.", "Recent models like variational autoencoders (VAEs) BID32 BID45 and generative adversarial nets (GANs) BID21 BID25 , have delivered impressive advances in performance and generated a lot of excitement.Generative models can be classified into two categories: prescribed models and implicit models BID12 BID40 .", "Prescribed models are defined by an explicit specification of the density, and so their unnormalized complete likelihood can be usually expressed in closed form.", "Examples include models whose complete likelihoods lie in the exponential family, such as mixture of Gaussians BID18 , hidden Markov models BID5 , Boltzmann machines BID27 .", "Because computing the normalization constant, also known as the partition function, is generally intractable, sampling from these models is challenging.On the other hand, implicit models are defined most naturally in terms of a (simple) sampling procedure.", "Most models take the form of a deterministic parameterized transformation T θ (·) of an analytic distribution, like an isotropic Gaussian.", "This can be naturally viewed as the distribution induced by the following sampling procedure:1.", "Sample z ∼ N (0, I) 2.", "Return x := T θ (z)The transformation T θ (·) often takes the form of a highly expressive function approximator, like a neural net.", "Examples include generative adversarial nets (GANs) BID21 BID25 and generative moment matching nets (GMMNs) BID36 BID16 .", "The marginal likelihood of such models can be characterized as follows: DISPLAYFORM0 where φ(·) denotes the probability density function (PDF) of N (0, I).In", "general, attempting to reduce this to a closed-form expression is hopeless. Evaluating", "it numerically is also challenging, since the domain of integration could consist of an exponential number of disjoint regions and numerical differentiation is ill-conditioned.These two categories of generative models are not mutually exclusive. Some models", "admit both an explicit specification of the density and a simple sampling procedure and so can be considered as both prescribed and implicit. 
Examples include", "variational autoencoders BID32 BID45 , their predecessors BID38 BID10 and extensions BID11 , and directed/autoregressive models, e.g., BID42 BID6 BID33 van den Oord et al., 2016 ).", "In this section, we consider and address some possible concerns about our method.", "We presented a simple and versatile method for parameter estimation when the form of the likelihood is unknown.", "The method works by drawing samples from the model, finding the nearest sample to every data example and adjusting the parameters of the model so that it is closer to the data example.", "We showed that performing this procedure is equivalent to maximizing likelihood under some conditions.", "The proposed method can capture the full diversity of the data and avoids common issues like mode collapse, vanishing gradients and training instability.", "The method combined with vanilla model architectures is able to achieve encouraging results on MNIST, TFD and CIFAR-10." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.1538461446762085, 0.42307692766189575, 0, 0.08695651590824127, 0.13333332538604736, 0.0363636314868927, 0.04878048226237297, 0, 0.08163265138864517, 0.0555555522441864, 0, 0, 0.052631575614213943, 0, 0.04878048226237297, 0.2142857164144516, 0.0416666604578495, 0.05128204822540283, 0, 0.13333332538604736, 0.4117647111415863, 0.1860465109348297, 0.5806451439857483, 0.052631575614213943, 0.17142856121063232 ]
rygunsAqYQ
true
[ "We develop a new likelihood-free parameter estimation method that is equivalent to maximum likelihood under some conditions" ]
[ "While most approaches to the problem of Inverse Reinforcement Learning (IRL) focus on estimating a reward function that best explains an expert agent’s policy or demonstrated behavior on a control task, it is often the case that such behavior is more succinctly represented by a simple reward combined with a set of hard constraints.", "In this setting, the agent is attempting to maximize cumulative rewards subject to these given constraints on their behavior.", "We reformulate the problem of IRL on Markov Decision Processes (MDPs) such that, given a nominal model of the environment and a nominal reward function, we seek to estimate state, action, and feature constraints in the environment that motivate an agent’s behavior.", "Our approach is based on the Maximum Entropy IRL framework, which allows us to reason about the likelihood of an expert agent’s demonstrations given our knowledge of an MDP.", "Using our method, we can infer which constraints can be added to the MDP to most increase the likelihood of observing these demonstrations.", "We present an algorithm which iteratively infers the Maximum Likelihood Constraint to best explain observed behavior, and we evaluate its efficacy using both simulated behavior and recorded data of humans navigating around an obstacle.", "Advances in mechanical design and artificial intelligence continue to expand the horizons of robotic applications.", "In these new domains, it can be difficult to design a specific robot behavior by hand.", "Even manually specifying a task for a reinforcement-learning-enabled agent is notoriously difficult (Ho et al., 2015; Amodei et al., 2016) .", "Inverse Reinforcement Learning (IRL) techniques can help alleviate this burden by automatically identifying the objectives driving certain behavior.", "Since first being introduced as Inverse Optimal Control by Kalman (1964) , much of the work on IRL has focused on learning environmental rewards to represent the task of interest (Ng et al., 2000; Abbeel & Ng, 2004; Ratliff et al., 2006; Ziebart et al., 2008) .", "While these types of IRL algorithms have proven useful in a variety of situations (Abbeel et al., 2007; Vasquez et al., 2014; Ziebart, 2010; , their basis in assuming that reward functions fully represent task specifications makes them ill suited to problem domains with hard constraints or non-Markovian objectives.", "Recent work has attempted to address these pitfalls by using demonstrations to learn a rich class of possible specifications that can represent a task (Vazquez-Chanlatte et al., 2018) .", "Others have focused specifically on learning constraints, that is, behaviors that are expressly forbidden or infeasible (Pardowitz et al., 2005; Pérez-D'Arpino & Shah, 2017; Subramani et al., 2018; McPherson et al., 2018; Chou et al., 2018) .", "Such constraints arise in safety-critical systems, where requirements such as an autonomous vehicle avoiding collisions with pedestrians are more naturally expressed as hard constraints than as soft reward penalties.", "It is towards the problem of inferring such constraints that we turn our attention.", "In this work, we present a novel method for inferring constraints, drawing primarily from the Maximum Entropy approach to IRL described by Ziebart et al. 
(2008) .", "We use this framework to reason about the likelihood of observing a set of demonstrations given a nominal task description, as well as about their likelihood if we imposed additional constraints on the task.", "This knowledge allows us to select a constraint, or set of constraints, which maximizes the demonstrations' likelihood and best explains the differences between expected and demonstrated behavior.", "Our method improves on prior work by being able to simultaneously consider constraints on states, actions and features in a Markov Decision Process (MDP) to provide a principled ranking of all options according to their effect on demonstration likelihood.", "We have presented our novel technique for learning constraints from demonstrations.", "We improve upon previous work in constraint-learning IRL by providing a principled framework for identifying the most likely constraint(s), and we do so in a way that explicitly makes state, action, and feature constraints all directly comparable to one another.", "We believe that the numerical results presented in Section 4 are promising and highlight the usefulness of our approach.", "Despite its benefits, one drawback of our approach is that the formulation is based on (3), which only exactly holds for deterministic MDPs.", "As mentioned in Section 3.3, we plan to investigate the use of a maximum causal entropy approach to address this issue and fully handle stochastic MDPs.", "Additionally, the methods presented here require all demonstrations to contain no violations of the constraints we will estimate.", "We believe that softening this requirement, which would allow reasoning about the likelihood of constraints that are occasionally violated in the demonstration set, may be beneficial in cases where trajectory data is collected without explicit labels of success or failure.", "Finally, the structure of Algorithm 1, which tracks the expected features accruals of trajectories over time, suggests that we may be able to reason about non-Markovian constraints by using this historical information to our advantage.", "Overall, we believe that our formulation of maximum likelihood constraint inference for IRL shows promising results and presents attractive avenues for further investigation." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.20895521342754364, 0.24390242993831635, 0.20689654350280762, 0.2448979616165161, 0.23255813121795654, 0.1818181723356247, 0.15789473056793213, 0.1538461446762085, 0.04878048226237297, 0.1463414579629898, 0.19354838132858276, 0.11764705181121826, 0.19999998807907104, 0.038461532443761826, 0.04081632196903229, 0.1621621549129486, 0.20408162474632263, 0.2800000011920929, 0.1666666567325592, 0.24561403691768646, 0.1764705777168274, 0.13333332538604736, 0.09756097197532654, 0.13333332538604736, 0.20408162474632263, 0.25, 0.10169491171836853, 0.1818181723356247, 0.08888888359069824 ]
BJliakStvH
true
[ "Our method infers constraints on task execution by leveraging the principle of maximum entropy to quantify how demonstrations differ from expected, un-constrained behavior." ]
[ "The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning.", "In this work, we discuss the required elements of a robotic system that can continually and autonomously improve with data collected in the real world, and propose a particular instantiation of such a system.", "Subsequently, we investigate a number of challenges of learning without instrumentation -- including the lack of episodic resets, state estimation, and hand-engineered rewards -- and propose simple, scalable solutions to these challenges.", "We demonstrate the efficacy of our proposed system on dexterous robotic manipulation tasks in simulation and the real world, and also provide an insightful analysis and ablation study of the challenges associated with this learning paradigm.", "Reinforcement learning (RL) can in principle enable real-world autonomous systems, such as robots, to autonomously acquire a large repertoire of skills.", "Perhaps more importantly, reinforcement learning can enable such systems to continuously improve the proficiency of their skills from experience.", "However, realizing this promise in reality has proven challenging: even with reinforcement learning methods that can acquire complex behaviors from high-dimensional low-level observations, such as images, the typical assumptions of the reinforcement learning problem setting do not fit perfectly into the constraints of the real world.", "For this reason, most successful robotic learning experiments have been demonstrated with varying levels of instrumentation, in order to make it practical to define reward functions (e.g. by using auxiliary sensors (Haarnoja et al., 2018a; Kumar et al., 2016; Andrychowicz et al., 2018) ), and in order to make it practical to reset the environment between trials (e.g. 
using manually engineered contraptions ).", "In order to really make it practical for autonomous learning systems to improve continuously through real-world operation, we must lift these constraints and design learning systems whose assumptions match the constraints of the real world, and allow for uninterrupted continuous learning with large amounts of real world experience.", "What exactly is holding back our reinforcement learning algorithms from being deployed for learning robotic tasks (for instance manipulation) directly in the real world?", "We hypothesize that our current reinforcement learning algorithms make a number of unrealistic assumptions that make real world deployment challenging -access to low-dimensional Markovian state, known reward functions, and availability of episodic resets.", "In practice, this means that significant human engineering is required to materialize these assumptions in order to conduct real-world reinforcement learning, which limits the ability of learning-enabled robots to collect large amounts of experience automatically in a variety of naturally occuring environments.", "Even if we can engineer a complex solution for instrumentation in one environment, the same may need to be done for every environment being learned in.", "When using deep function approximators, actually collecting large amounts of real world experience is typically crucial for effective generalization.", "The inability to collect large amounts of real world data autonomously significantly limits the ability of these robots to learn robust, generalizable behaviors.", "In this work, we propose that overcoming these challenges requires designing robotic systems that possess three fundamental capabilities: (1) they are able to learn from their own raw sensory inputs, (2) they are able to assign rewards to their own behaviors with minimal human intervention, (3) they are able to learn continuously in non-episodic settings without requiring human operators to manually reset the environment.", "We believe that a system with these capabilities will bring us significantly closer to the goal of continuously improv-ing robotic agents that leverage large amounts of their own real world experience, without requiring significant human instrumentation and engineering effort.", "Having laid out these requirements, we propose a practical instantiation of such a learning system, which afford the above capabilities.", "While prior works have studied each of these issues in isolation, combining solutions to these issues is non-trivial and results in a particularly challenging learning problem.", "We provide a detailed empirical analysis of these issues, both in simulation and on a real-world robotic platform, and propose a number of simple but effective solutions that can make it possible to produce a complete robotic learning system that can learn autonomously, handle raw sensory inputs, learn reward functions from easily available supervision, and learn without manually designed reset mechanisms.", "We show that this system is well suited for learning dexterous robotic manipulation tasks in the real world, and substantially outperforms ablations and prior work.", "While the individual components that we combine to design our robotic learning system are based heavily on prior work, both the combination of these components and their specific instantiations are novel.", "Indeed, we show that without the particular design decisions motivated by our experiments, naïve designs that follow prior work 
generally fail to satisfy one of the three requirements that we lay out.", "We presented the design and instantiation of R3L , a system for real world reinforcement learning.", "We identify and investigate the various ingredients required for such a system to scale gracefully with minimal human engineering and supervision.", "We show that this system must be able to learn from raw sensory observations, learn from very easily specified reward functions without reward engineering, and learn without any episodic resets.", "We describe the basic elements that are required to construct such a system, and identify unexpected learning challenges that arise from interplay of these elements.", "We propose simple and scalable fixes to these challenges through introducing unsupervised representation learning and a randomized perturbation controller.", "We show the effectiveness on such a system at learning without instrumentation in several simulated and real world environments.", "The ability to train robots directly in the real world with minimal instrumentation opens a number of exciting avenues for future research.", "Robots that can learn unattended, without resets or handdesigned reward functions, can in principle collect very large amounts of experience autonomously, which may enable very broad generalization in the future.", "Furthermore, fully autonomous learning should make it possible for robots to acquire large behavioral repertoires, since each additional task requires only the initial examples needed to learn the reward.", "However, there are also a number of additional challenges, including sample complexity, optimization and exploration difficulties on more complex tasks, safe operation, communication latency, sensing and actuation noise, and so forth, all of which would need to be addressed in future work in order to enable truly scalable realworld robotic learning.", "Initialize RND target and predictor networks f (s),f (s)", "Initialize VICE reward classifier r VICE (s)", "Initialize replay buffer D" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.37837836146354675, 0.2380952388048172, 0.24390242993831635, 0.31111109256744385, 0.17142856121063232, 0.24242423474788666, 0.25925925374031067, 0.1846153885126114, 0.23076923191547394, 0.37837836146354675, 0.22727271914482117, 0.15686273574829102, 0.21052631735801697, 0.12121211737394333, 0.2857142686843872, 0.2222222238779068, 0.31372547149658203, 0.12121211737394333, 0.1621621549129486, 0.1875, 0.31578946113586426, 0.1904761791229248, 0.1428571343421936, 0.3333333134651184, 0.1764705777168274, 0.1538461446762085, 0.1621621549129486, 0.1249999925494194, 0.42424240708351135, 0.3888888955116272, 0.19512194395065308, 0.19512194395065308, 0.13333332538604736, 0, 0, 0 ]
rJe2syrtvS
true
[ "System to learn robotic tasks in the real world with reinforcement learning without instrumentation" ]
[ "We study the problem of cross-lingual voice conversion in non-parallel speech corpora and one-shot learning setting.", "Most prior work require either parallel speech corpora or enough amount of training data from a target speaker.", "However, we convert an arbitrary sentences of an arbitrary source speaker to target speaker's given only one target speaker training utterance.", "To achieve this, we formulate the problem as learning disentangled speaker-specific and context-specific representations and follow the idea of [1] which uses Factorized Hierarchical Variational Autoencoder (FHVAE).", "After training FHVAE on multi-speaker training data, given arbitrary source and target speakers' utterance, we estimate those latent representations and then reconstruct the desired utterance of converted voice to that of target speaker.", "We use multi-language speech corpus to learn a universal model that works for all of the languages.", "We investigate the use of a one-hot language embedding to condition the model on the language of the utterance being queried and show the effectiveness of the approach.", "We conduct voice conversion experiments with varying size of training utterances and it was able to achieve reasonable performance with even just one training utterance.", "We also investigate the effect of using or not using the language conditioning.", "Furthermore, we visualize the embeddings of the different languages and sexes.", "Finally, in the subjective tests, for one language and cross-lingual voice conversion, our approach achieved moderately better or comparable results compared to the baseline in speech quality and similarity." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.25, 0.0952380895614624, 0.04878048226237297, 0.16326530277729034, 0.11320754140615463, 0.3414634168148041, 0.3181818127632141, 0.25531914830207825, 0.17142856121063232, 0.05882352590560913, 0.1599999964237213 ]
S1eN99RioX
false
[ "We use a Variational Autoencoder to separate style and content, and achieve voice conversion by modifying style embedding and decoding. We investigate using a multi-language speech corpus and investigate its effects." ]
[ "We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification.", "The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map.", "Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parametrised by the score matrices, must alone be used for classification.", "Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values.", "Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter.", "Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets.", "When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset.", "We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.", "Feed-forward convolutional neural networks (CNNs) have demonstrated impressive results on a wide variety of visual tasks, such as image classification, captioning, segmentation, and object detection.", "However, the visual reasoning which they implement in solving these problems remains largely inscrutable, impeding understanding of their successes and failures alike.One approach to visualising and interpreting the inner workings of CNNs is the attention map: a scalar matrix representing the relative importance of layer activations at different 2D spatial locations with respect to the target task BID21 .", "This notion of a nonuniform spatial distribution of relevant features being used to form a task-specific representation, and the explicit scalar representation of their relative relevance, is what we term 'attention'.", "Previous works have shown that for a classification CNN trained using image-level annotations alone, extracting the attention map provides a straightforward way of determining the location of the object of interest BID2 BID31 and/or its segmentation mask BID21 , as well as helping to identify discriminative visual properties across classes BID31 .", "More recently, it has also been shown that training smaller networks to mimic the attention maps of larger and higher-performing network architectures can lead to gains in classification accuracy of those smaller networks BID29 .The", "works of BID21 ; BID2 ; BID31 represent one series of increasingly sophisticated techniques for estimating attention maps in classification CNNs. However", ", these approaches share a crucial limitation: all are implemented as post-hoc additions to fully trained networks. On the", "other hand, integrated attention mechanisms whose parameters are learned over the course of end-to-end training of the entire network have been proposed, and have shown benefits in various applications that can leverage attention as a cue. 
These", "include attribute prediction BID19 , machine translation BID1 , image captioning BID28 Mun et al., 2016) and visual question answering (VQA) BID24 BID26 . Similarly", "to these approaches, we here represent attention as a probabilistic map over the input image locations, and implement its estimation via an end-to-end framework. The novelty", "of our contribution lies in repurposing the global image representation as a query to estimate multi-scale attention in classification, a task which, unlike e.g. image captioning or VQA, does not naturally involve a query.Fig. 1 provides", "an overview of the proposed method. Henceforth", ", we will use the terms 'local features' and 'global features' to refer to features extracted by some layer of the CNN whose effective receptive fields are, respectively, contiguous proper subsets of the image ('local') and the entire image ('global'). By defining", "a compatibility measure between local and global features, we redesign standard architectures such that they must classify the input image using only a weighted combination of local features, with the weights represented here by the attention map. The network", "is thus forced to learn a pattern of attention relevant to solving the task at hand.We experiment with applying the proposed attention mechanism to the popular CNN architectures of VGGNet BID20 and ResNet BID11 , and capturing coarse-to-fine attention maps at multiple levels. We observe", "that the proposed mechanism can bootstrap baseline CNN architectures for the task of image classification: for example, adding attention to the VGG model offers an accuracy gain of 7% on CIFAR-100. Our use of", "attention-weighted representations leads to improved fine-grained recognition and superior generalisation on 6 benchmark datasets for domain-shifted classification. As observed", "on models trained for fine-grained bird recognition, attention aware models offer limited resistance to adversarial fooling at low and moderate L ∞ -noise norms. The trained", "attention maps outperform other CNN-derived attention maps BID31 , traditional saliency maps BID14 BID30 ), and top object proposals on the task of weakly supervised segmentation of the Object Discovery dataset ). In §5, we present", "sample results which suggest that these improvements may owe to the method's tendency to highlight the object of interest while suppressing background clutter.", "We propose a trainable attention module for generating probabilistic landscapes that highlight where and in what proportion a network attends to different regions of the input image for the task of classification.", "We demonstrate that the method, when deployed at multiple levels within a network, affords significant performance gains in classification of seen and unseen categories by focusing on the object of interest.", "We also show that the attention landscapes can facilitate weakly supervised segmentation of the predominant object.", "Further, the proposed attention scheme is amenable to popular post-processing techniques such as conditional random fields for refining the segmentation masks, and has shown promise in learning robustness to certain kinds of adversarial attacks." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.10810810327529907, 0.19230768084526062, 0.1071428507566452, 0.10256409645080566, 0.09090908616781235, 0.08695651590824127, 0.0833333283662796, 0.0555555522441864, 0.04255318641662598, 0.16438356041908264, 0.11999999731779099, 0.1515151411294937, 0.18867923319339752, 0.1904761791229248, 0.09756097197532654, 0.2181818187236786, 0, 0.1666666567325592, 0.145454540848732, 0.06896551698446274, 0.0357142798602581, 0.1428571343421936, 0.10344827175140381, 0.15686273574829102, 0.1463414579629898, 0.17391303181648254, 0.037735845893621445, 0.09302324801683426, 0.23999999463558197, 0.11764705181121826, 0.10810810327529907, 0.18518517911434174 ]
HyzbhfWRW
true
[ "The paper proposes a method for forcing CNNs to leverage spatial attention in learning more object-centric representations that perform better in various respects." ]
[ "Recurrent neural network(RNN) is an effective neural network in solving very complex supervised and unsupervised tasks.There has been a significant improvement in RNN field such as natural language processing, speech processing, computer vision and other multiple domains.", "This paper deals with RNN application on different use cases like Incident Detection , Fraud Detection , and Android Malware Classification.", "The best performing neural network architecture is chosen\n", "by conducting different chain of experiments for different network parameters and structures.The network is run up to 1000 epochs with learning rate set in the range of 0.01 to 0.5.Obviously, RNN performed very well when compared to classical machine learning algorithms.", "This is mainly possible because RNNs implicitly extracts the underlying features and also identifies the characteristics of the data.", "This lead to better accuracy.\n", "In today's data world, malware is the common threat to everyone from big organizations to common people and we need to safeguard our systems, computer networks, and valuable data.", "Cyber-crimes has risen to the peak and many hacks, data stealing, and many more cyber-attacks.", "Hackers gain access through any loopholes and steal all valuable data, passwords and other useful information.Mainly in android platform malicious attacks increased due to increase in large number of application.In other hand its very easy for persons to develop multiple malicious malwares and feed it into android market very easily using a third party software's.Attacks can be through any means like e-mails, exe files, software, etc.", "Criminals make use of security vulnerabilities and exploit their opponents.", "This forces the importance of an effective system to handle the fraudulent activities.", "But today's sophisticated attacking algorithms avoid being detected by the security Email address: harishharunn@gmail.com (Mohammed Harun Babu R) mechanisms.", "Every day the attackers develop new exploitation techniques and escape from Anti-virus and Malware softwares.", "Thus nowadays security solution companies are moving towards deep learning and machine learning techniques where the algorithm learns the underlying information from the large collection of security data itself and makes predictions on new data.", "This, in turn, motivates the hackers to develop new methods to escape from the detection mechanisms.Malware attack remains one of the major security threat in cyberspace.", "It is an unwanted program which makes the system behave differently than it is supposed to behave.", "The solutions provided by antivirus software against this malware can only be used as a primary weapon of resistance because they fail to detect the new and upcoming malware created using polymorphic, metamorphic, domain flux and IP flux.", "The machine learning algorithms were employed which solves complex security threats in more than three decades BID0 .", "These methods have the capability to detect new malwares.", "Research is going at a high phase for security problems like Intrusion Detection Systems(IDS), Mal-ware Detection, Information Leakage, etc.", "Fortunately, today's Deep Learning(DL) approaches have performed well in various long-standing AI challenges BID1 such as nlp, computer vision, speech recognition.", "Recently, the application of deep learning techniques have been applied for various use cases of cyber security BID2 .It", "has the ability to detect the cyber attacks by learning the complex 
underlying structure, hidden sequential relationships and hierarchical feature representations from a huge set of security data. In", "this paper, we are evaluating the efficiency of SVM and RNN machine learning algorithms for cybersecurity problems. Cybersecurity", "provides a set of actions to safeguard computer networks, systems, and data. This paper is", "arranged accordingly where related work are discussed in section 2 the background knowledge of recurrent neural network (RNN) in section 3 .In section 4 proposed", "methodology including description,data set are discussed and at last results are furnished in Section 5. Section 6 is conclude", "with conclusion.", "In this paper performance of RNN Vs other classical machine learning classifiers are evaluated for cybersecuriy use cases such as Android malware classification, incident detection, and fraud detection.", "In all the three" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.09999999403953552, 0, 0.1428571343421936, 0.045454543083906174, 0, 0, 0, 0, 0.0307692289352417, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.07999999821186066, 0, 0.0833333283662796, 0, 0.1666666567325592, 0, 0.0714285671710968, 0, 0.05882352590560913, 0 ]
HJg8wH70c7
true
[ "Recurrent neural networks for Cybersecurity use-cases" ]
[ "Anatomical studies demonstrate that brain reformats input information to generate reliable responses for performing computations.", "However, it remains unclear how neural circuits encode complex spatio-temporal patterns.", "We show that neural dynamics are strongly influenced by the phase alignment between the input and the spontaneous chaotic activity.", "Input alignment along the dominant chaotic projections causes the chaotic trajectories to become stable channels (or attractors), hence, improving the computational capability of a recurrent network.", "Using mean field analysis, we derive the impact of input alignment on the overall stability of attractors formed.", "Our results indicate that input alignment determines the extent of intrinsic noise suppression and hence, alters the attractor state stability, thereby controlling the network's inference ability.", "Brain actively untangles the input sensory data and fits them in behaviorally relevant dimensions that enables an organism to perform recognition effortlessly, in spite of variations DiCarlo et al. (2012) ; Thorpe et al. (1996) ; DiCarlo & Cox (2007) .", "For instance, in visual data, object translation, rotation, lighting changes and so forth cause complex nonlinear changes in the original input space.", "However, the brain still extracts high-level behaviorally relevant constructs from these varying input conditions and recognizes the objects accurately.", "What remains unknown is how brain accomplishes this untangling.", "Here, we introduce the concept of chaos-guided input alignment in a recurrent network (specifically, reservoir computing model) that provides an avenue to untangle stimuli in the input space and improve the ability of a stimulus to entrain neural dynamics.", "Specifically, we show that the complex dynamics arising from the recurrent structure of a randomly connected reservoir Rajan & Abbott (2006) ; Kadmon & Sompolinsky (2015) ; Stern et al. 
(2014) can be used to extract an explicit phase relationship between the input stimulus and the spontaneous chaotic neuronal response.", "Then, aligning the input phase along the dominant projections determining the intrinsic chaotic activity, causes the random chaotic fluctuations or trajectories of the network to become locally stable channels or dynamic attractor states that, in turn, improve its' inference capability.", "In fact, using mean field analysis, we derive the effect of introducing varying phase association between the input and the network's spontaneous chaotic activity.", "Our results demonstrate that successful formation of stable attractors is strongly determined from the input alignment.", "We also illustrate the effectiveness of input alignment on a complex motor pattern generation task with reliable generation of learnt patterns over multiple trials, even in presence of external perturbations.", "Models of cortical networks often use diverse plasticity mechanisms for effective tuning of recurrent connections to suppress the intrinsic chaos (or fluctuations) Laje & Buonomano (2013) ; Panda & Roy (2017) .", "We show that input alignment alone produces stable and repeatable trajectories, even, in presence of variable internal neuronal dynamics for dynamical computations.", "Combining input alignment with recurrent synaptic plasticity mechanism can further enable learning of stable correlated network activity at the output (or readout layer) that is resistant to external perturbation to a large extent.", "Furthermore, since input subspace alignment allows us to operate networks at low amplitude while maintaining a stable network activity, it provides an additional advantage of higher dimensionality.", "A network of higher dimensionality offers larger number of disassociated principal chaotic projections along which different inputs can be aligned (see Appendix A, Fig. A1(c) ).", "Thus, for a classification task, wherein the network has to discriminate between 10 different inputs (of varying frequencies and underlying statistics), our notion of untangling with chaos-guided input alignment can, thus, serve as a foundation for building robust recurrent networks with improved inference ability.", "Further investigation is required to examine which orientations specifically improve the discrimination capability of the network and the impact of a given alignment on the stability of the readout dynamics around an output target.", "In summary, the analyses we present suggest that input alignment in the chaotic subspace has a large impact on the network dynamics and eventually determines the stability of an attractor state.", "In fact, we can control the network's convergence toward different stable attractor channels during its voyage in the neural state space by regulating the input orientation.", "This indicates that, besides synaptic strength variance Rajan & Abbott (2006) , a critical quantity that might be modified by modulatory and plasticity mechanisms controlling neural circuit dynamics is the input stimulus alignment." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0952380895614624, 0, 0, 0.13793103396892548, 0, 0, 0, 0, 0, 0, 0, 0, 0.04999999701976776, 0, 0, 0, 0.05714285373687744, 0.0714285671710968, 0, 0, 0.06451612710952759, 0.042553190141916275, 0, 0, 0, 0 ]
Sklia3EFPH
true
[ "Input Structuring along Chaos for Stability" ]
[ "Generative adversarial networks are a learning framework that rely on training a discriminator to estimate a measure of difference between a target and generated distributions.", "GANs, as normally formulated, rely on the generated samples being completely differentiable w.r.t.", "the generative parameters, and thus do not work for discrete data.", "We introduce a method for training GANs with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples, thus providing a policy gradient for training the generator.", "The importance weights have a strong connection to the decision boundary of the discriminator, and we call our method boundary-seeking GANs (BGANs).", "We demonstrate the effectiveness of the proposed algorithm with discrete image and character-based natural language generation. ", "In addition, the boundary-seeking objective extends to continuous data, which can be used to improve stability of training, and we demonstrate this on Celeba, Large-scale Scene Understanding (LSUN) bedrooms, and Imagenet without conditioning.", "Generative adversarial networks (GAN, BID7 involve a unique generative learning framework that uses two separate models, a generator and discriminator, with opposing or adversarial objectives.", "Training a GAN only requires back-propagating a learning signal that originates from a learned objective function, which corresponds to the loss of the discriminator trained in an adversarial manner.", "This framework is powerful because it trains a generator without relying on an explicit formulation of the probability density, using only samples from the generator to train.GANs have been shown to generate often-diverse and realistic samples even when trained on highdimensional large-scale continuous data BID31 .", "GANs however have a serious limitation on the type of variables they can model, because they require the composition of the generator and discriminator to be fully differentiable.With discrete variables, this is not true.", "For instance, consider using a step function at the end of a generator in order to generate a discrete value.", "In this case, back-propagation alone cannot provide the training signal, because the derivative of a step function is 0 almost everywhere.", "This is problematic, as many important real-world datasets are discrete, such as character-or word-based representations of language.", "The general issue of credit assignment for computational graphs with discrete operations (e.g. 
discrete stochastic neurons) is difficult and open problem, and only approximate solutions have been proposed in the past BID2 BID8 BID10 BID14 BID22 BID40 .", "However, none of these have yet been shown to work with GANs.", "In this work, we make the following contributions:• We provide a theoretical foundation for boundary-seeking GANs (BGAN), a principled method for training a generator of discrete data using a discriminator optimized to estimate an f -divergence BID29 BID30 .", "The discriminator can then be used to formulate importance weights which provide policy gradients for the generator.•", "We verify this approach quantitatively works across a set of f -divergences on a simple classification task and on a variety of image and natural language benchmarks.•", "We demonstrate that BGAN performs quantitatively better than WGAN-GP BID9 in the simple discrete setting.•", "We show that the boundary-seeking objective extends theoretically to the continuous case and verify it works well with some common and difficult image benchmarks. Finally", ", we show that this objective has some improved stability properties within training and without.", "On estimating likelihood ratios from the discriminator Our work relies on estimating the likelihood ratio from the discriminator, the theoretical foundation of which we draw from f -GAN BID30 .", "The connection between the likelihood ratios and the policy gradient is known in previous literature BID15 , and the connection between the discriminator output and the likelihood ratio was also made in the context of continuous GANs BID26 BID39 .", "However, our work is the first to successfully formulate and apply this approach to the discrete setting.Importance sampling Our method is very similar to re-weighted wake-sleep (RWS, BID3 , which is a method for training Helmholtz machines with discrete variables.", "RWS also relies on minimizing the KL divergence, the gradients of which also involve a policy gradient over the likelihood ratio.", "Neural variational inference and learning (NVIL, BID25 , on the other hand, relies on the reverse KL.", "These two methods are analogous to our importance sampling and REINFORCE-based BGAN formulations above.GAN for discrete variables Training GANs with discrete data is an active and unsolved area of research, particularly with language model data involving recurrent neural network (RNN) generators BID20 .", "Many REINFORCE-based methods have been proposed for language modeling BID20 BID6 which are similar to our REINFORCE-based BGAN formulation and effectively use the sigmoid of the estimated loglikelihood ratio.", "The primary focus of these works however is on improving credit assignment, and their approaches are compatible with the policy gradients provided in our work.There have also been some improvements recently on training GANs on language data by rephrasing the problem into a GAN over some continuous space BID19 BID16 BID9 .", "However, each of these works bypass the difficulty of training GANs with discrete data by rephrasing the deterministic game in terms of continuous latent variables or simply ignoring the discrete sampling process altogether, and do not directly solve the problem of optimizing the generator from a difference measure estimated from the discriminator.Remarks on stabilizing adversarial learning, IPMs, and regularization A number of variants of GANs have been introduced recently to address stability issues with GANs.", "Specifically, generated samples tend to collapse to a set of 
singular values that resemble the data on neither a per-sample nor a distribution basis.", "Several early attempts in modifying the training procedure (Berthelot et al., 2017; BID35 as well as the identification of a taxonomy of working architectures BID31 addressed stability in some limited setting, but it wasn't until Wasserstein GANs (WGAN, BID1 were introduced that there was any significant progress on reliable training of GANs.WGANs rely on an integral probability metric (IPM, BID36 ) that is the dual to the Wasserstein distance.", "Other GANs based on IPMs, such as Fisher GAN, tout improved stability in training.", "In contrast to GANs based on f -divergences, besides being based on metrics that are \"weak\", IPMs rely on restricting T to a subset of all possible functions.", "For instance in WGANs, T = {T | T L ≤ K}, is the set of K-Lipschitz functions.", "Ensuring a statistic network, T φ , with a large number of parameters is Lipschitz-continuous is hard, and these methods rely on some sort of regularization to satisfy the necessary constraints.", "This includes the original formulation of WGANs, which relied on weight-clipping, and a later work BID9 which used a gradient penalty over interpolations between real and generated data.Unfortunately, the above works provide few details on whether T φ is actually in the constrained set in practice, as this is probably very hard to evaluate in the high-dimensional setting.", "Recently, BID32 introduced a gradient norm penalty similar to that in BID9 without interpolations and which is formulated in terms of f -divergences.", "In our work, we've found that this approach greatly improves stability, and we use it in nearly all of our results.", "That said, it is still unclear empirically how the discriminator objective plays a strong role in stabilizing adversarial learning, but at this time it appears that correctly regularizing the discriminator is sufficient.", "Reinterpreting the generator objective to match the proposal target distribution reveals a novel learning algorithm for training a generative adversarial network (GANs, BID7 .", "This proposed approach of boundary-seeking provides us with a unified framework under which learning algorithms for both discrete and continuous variables are derived.", "Empirically, we verified our approach quantitatively and showed the effectiveness of training a GAN with the proposed learning algorithm, which we call a boundary-seeking GAN (BGAN), on both discrete and continuous variables, as well as demonstrated some properties of stability.Starting image (generated) 10k updates GAN Proxy GAN BGAN 20k updates Figure 5 : Following the generator objective using gradient descent on the pixels.", "BGAN and the proxy have sharp initial gradients that decay to zero quickly, while the variational lower-bound objective gradient slowly increases.", "The variational lower-bound objective leads to very poor images, while the proxy and BGAN objectives are noticeably better.", "Overall, BGAN performs the best in this task, indicating that its objective will not overly disrupt adversarial learning.Berthelot, David, Schumm, Tom, and Metz, Luke.", "Began: Boundary equilibrium generative adversarial networks.", "arXiv preprint arXiv:1703.10717, 2017." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.15789473056793213, 0, 0.14814814925193787, 0.4444444477558136, 0.10810810327529907, 0.1875, 0, 0.1538461446762085, 0.0952380895614624, 0.10526315122842789, 0.12765957415103912, 0.11764705181121826, 0.1111111044883728, 0, 0.07692307233810425, 0.1428571343421936, 0.23999999463558197, 0.05882352590560913, 0.1538461446762085, 0.1875, 0.1538461446762085, 0.12903225421905518, 0, 0.13636362552642822, 0.1599999964237213, 0.1764705777168274, 0, 0.145454540848732, 0, 0.21875, 0.21052631735801697, 0.1621621549129486, 0.10389610379934311, 0.13333332538604736, 0.14999999105930328, 0, 0.09090908616781235, 0.0923076868057251, 0.15789473056793213, 0.0555555522441864, 0.09090908616781235, 0.10810810327529907, 0.1538461446762085, 0.1492537260055542, 0.1111111044883728, 0, 0.04878048226237297, 0, 0 ]
rkTS8lZAb
true
[ "We address training GANs with discrete data by formulating a policy gradient that generalizes across f-divergences" ]
[ "Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates.", "The high variance problem is particularly exasperated in problems with long horizons or high-dimensional action spaces.", "To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP.", "We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis as well as numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline.", "The result is a computationally efficient policy gradient algorithm, which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target matching task.", "Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and high-dimensional hand manipulation and synthetic tasks.", "Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks.", "Deep reinforcement learning has achieved impressive results in recent years in domains such as video games from raw visual inputs BID10 , board games , simulated control tasks BID16 , and robotics ).", "An important class of methods behind many of these success stories are policy gradient methods BID28 BID22 BID5 BID18 BID11 , which directly optimize parameters of a stochastic policy through local gradient information obtained by interacting with the environment using the current policy.", "Policy gradient methods operate by increasing the log probability of actions proportional to the future rewards influenced by these actions.", "On average, actions which perform better will acquire higher probability, and the policy's expected performance improves.A critical challenge of policy gradient methods is the high variance of the gradient estimator.", "This high variance is caused in part due to difficulty in credit assignment to the actions which affected the future rewards.", "Such issues are further exacerbated in long horizon problems, where assigning credits properly becomes even more challenging.", "To reduce variance, a \"baseline\" is often employed, which allows us to increase or decrease the log probability of actions based on whether they perform better or worse than the average performance when starting from the same state.", "This is particularly useful in long horizon problems, since the baseline helps with temporal credit assignment by removing the influence of future actions from the total reward.", "A better baseline, which predicts the average performance more accurately, will lead to lower variance of the gradient estimator.The key insight of this paper is that when the individual actions produced by the policy can be decomposed into multiple factors, we can incorporate this additional information into the baseline to further reduce variance.", "In particular, when these factors are conditionally independent given the current state, we can compute a separate baseline for each factor, whose value can depend on all quantities of interest except that factor.", "This serves to further help credit assignment by removing the influence of other factors on the rewards, thereby reducing variance.", "In other 
words, information about the other factors can provide a better evaluation of how well a specific factor performs.", "Such factorized policies are very common, with some examples listed below.•", "In continuous control and robotics tasks, multivariate Gaussian policies with a diagonal covariance matrix are often used. In", "such cases, each action coordinate can be considered a factor. Similarly", ", factorized categorical policies are used in game domains like board games and Atari.• In multi-agent", "and distributed systems, each agent deploys its own policy, and thus the actions of each agent can be considered a factor of the union of all actions (by all agents). This is particularly", "useful in the recent emerging paradigm of centralized learning and decentralized execution BID2 BID9 . In contrast to the previous", "example, where factorized policies are a common design choice, in these problems they are dictated by the problem setting.We demonstrate that action-dependent baselines consistently improve the performance compared to baselines that use only state information. The relative performance gain", "is task-specific, but in certain tasks, we observe significant speed-up in the learning process. We evaluate our proposed method", "on standard benchmark continuous control tasks, as well as on a high-dimensional door opening task with a five-fingered hand, a synthetic high-dimensional target matching task, on a blind peg insertion POMDP task, and a multi-agent communication task. We believe that our method will", "facilitate further applications of reinforcement learning methods in domains with extremely highdimensional actions, including multi-agent systems. Videos and additional results of", "the paper are available at https://sites.google.com/view/ad-baselines.", "An action-dependent baseline enables using additional signals beyond the state to achieve bias-free variance reduction.", "In this work, we consider both conditionally independent policies and general policies, and derive an optimal action-dependent baseline.", "We provide analysis of the variance DISPLAYFORM0 (a) Success percentage on the blind peg insertion task.", "The policy still acts on the observations and does not know the hole location.", "However, the baseline has access to this goal information, in addition to the observations and action, and helps to speed up the learning.", "By comparison, in blue, the baseline has access only to the observations and actions.", "reduction improvement over non-optimal baselines, including the traditional optimal baseline that only depends on state.", "We additionally propose several practical action-dependent baselines which perform well on a variety of continuous control tasks and synthetic high-dimensional action problems.", "The use of additional signals beyond the local state generalizes to other problem settings, for instance in POMDP and multi-agent tasks.", "In future work, we propose to investigate related methods in such settings on large-scale problems." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17142856121063232, 0.060606054961681366, 0.2448979616165161, 0.05128204822540283, 0.09999999403953552, 0.15789473056793213, 0.3255814015865326, 0.043478257954120636, 0.11320754140615463, 0.11764705181121826, 0.22727271914482117, 0.05714285373687744, 0, 0.038461532443761826, 0, 0.16393442451953888, 0.08163265138864517, 0.0555555522441864, 0.05714285373687744, 0, 0.05882352590560913, 0.1428571343421936, 0.05882352590560913, 0.1428571343421936, 0.05714285373687744, 0.038461532443761826, 0, 0.03999999538064003, 0.10810810327529907, 0, 0.1875, 0.05882352590560913, 0.0624999962747097, 0.13333332538604736, 0.05714285373687744, 0.06666666269302368, 0.0624999962747097, 0.10256409645080566, 0.10526315122842789, 0.0624999962747097 ]
H1tSsb-AW
true
[ "Action-dependent baselines can be bias-free and yield greater variance reduction than state-only dependent baselines for policy gradient methods." ]
[ "The cost of annotating training data has traditionally been a bottleneck for supervised learning approaches.", "The problem is further exacerbated when supervised learning is applied to a number of correlated tasks simultaneously since the amount of labels required scales with the number of tasks.", "To mitigate this concern, we propose an active multitask learning algorithm that achieves knowledge transfer between tasks.", "The approach forms a so-called committee for each task that jointly makes decisions and directly shares data across similar tasks.", "Our approach reduces the number of queries needed during training while maintaining high accuracy on test data.", "Empirical results on benchmark datasets show significant improvements on both accuracy and number of query requests.", "A triumph of machine learning is the ability to predict with high accuracy.", "However, for the dominant paradigm, which is supervised learning, the main bottleneck is the need to annotate data, namely, to obtain labeled training examples.", "The problem becomes more pronounced in applications and systems which require a high level of personalization, such as music recommenders, spam filters, etc.", "Several thousand labeled emails are usually sufficient for training a good spam filter for a particular user.", "However, in real world email systems, the number of registered users is potentially in the millions, and it might not be feasible to learn a highly personalized spam filter for each of them by getting several thousand labeled data points for each user.One method to relieve the need of the prohibitively large amount of labeled data is to leverage the relationship between the tasks, especially by transferring relevant knowledge from information-rich tasks to information-poor ones, which is called multitask learning in the literature.", "We consider multitask learning in an online setting where the learner sees the data sequentially, which is more practical in real world applications.", "In this setting, the learner receives an example at each time round, along with its task identifier, and then predicts its true label.", "Afterwards, the learner queries the true label and updates the model(s) accordingly.The online multitask setting has received increasing attention in the machine learning community in recent years BID6 BID0 BID7 BID9 BID4 BID13 BID11 .", "However, they make the assumption that the true label is readily available to be queried, which is impractical in many applications.", "Also, querying blindly can be inefficient when annotation is costly.Active learning further reduces the work of the annotator by selectively requesting true labels from the oracles.", "Most approaches in active learning for sequential and streambased problems adopt a measure of uncertainty / confidence of the learner in the current example BID5 BID3 BID12 BID8 BID1 .The", "recent work by BID10 combines active learning with online multitask learning using peers or related tasks. When", "the classifier of the current task is not confident, it first queries its similar tasks before requesting a true label from the oracle, incurring a lower cost. Their", "learner gives priority to the current task by always checking its confidence first. In the", "case when the current task is confident, the opinions of its peers are ignored. 
This paper", "proposes an active multitask learning framework which is more humble, in a sense that both the current task and its peers' predictions are considered simultaneously using a weighted sum. We have a", "committee which makes joint decisions for each task. In addition", ", after the true label of a training sample is obtained, this sample is shared directly to similar tasks, which makes training more efficient.", "We propose a new active multitask learning algorithm that encourages more knowledge transfer among tasks compared to the state-of-the-art models, by using joint decision / prediction and directly sharing training examples with true labels among similar tasks.", "Our proposed methods achieve both higher accuracy and lower number of queries on three benchmark datasets for multitask learning problems.", "Future work includes theoretical analysis of the error bound and comparison with those of the baseline models.", "Another interesting direction is to handle unbalanced task data.", "In other words, one task has much more / less training data than the others." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0714285671710968, 0.1111111044883728, 0.800000011920929, 0.12121211737394333, 0, 0, 0.07692307233810425, 0, 0, 0, 0.1315789371728897, 0.23529411852359772, 0.05714285373687744, 0.09090908616781235, 0.0624999962747097, 0.052631575614213943, 0.09999999403953552, 0.27586206793785095, 0.052631575614213943, 0, 0, 0.2790697515010834, 0, 0, 0.4166666567325592, 0.12121211737394333, 0, 0, 0 ]
rJe4EcSohE
true
[ "We propose an active multitask learning algorithm that achieves knowledge transfer between tasks." ]
[ "Detection of photo manipulation relies on subtle statistical traces, notoriously removed by aggressive lossy compression employed online.", "We demonstrate that end-to-end modeling of complex photo dissemination channels allows for codec optimization with explicit provenance objectives.", "We design a lightweight trainable lossy image codec, that delivers competitive rate-distortion performance, on par with best hand-engineered alternatives, but has lower computational footprint on modern GPU-enabled platforms.", "Our results show that significant improvements in manipulation detection accuracy are possible at fractional costs in bandwidth/storage.", "Our codec improved the accuracy from 37% to 86% even at very low bit-rates, well below the practicality of JPEG (QF 20).", "While the proposed approach can successfully facilitate pre-screening of photographs shared online, further research is needed to improve model generalization.", "We observed that the fine-tuning procedure tends bias the DCN/FAN models towards the secondary image dataset, in our case the native camera output (NCO).", "The baseline DCN was pre-trained on mixed natural images (MNI) with extensive augmentation, leading to competitive results on all test sets.", "However, fine-tuning was performed on NCO only.", "Characteristic pixel correlations, e.g., due to color interpolation, bias the codec and lead to occasional artifacts in MNIs (mostly in the clic test set; see Appendix B), and deterioration of the rate-distortion trade-off.", "The problem is present regardless of λ c , which suggests issues with the fine-tuning protocol (data diversity) and not the forensic optimization objective.", "We ran additional experiments by skipping photo acquisition and fine-tuning directly on MNI from the original training set (subset of 2,500 RGB images).", "We observed the same behavior (see Appendix C), and the optimized codec was artifact-free on all test sets.", "(Although, due to a smaller training set, the model loses some of its performance; cf. MNI results in Fig. 
A.6 .)", "However, the FANs generalized well only to clic and kodak images.", "The originally trained FANs generalized reasonably well to different NCO images (including images from other 3 cameras) but not to clic or kodak.", "This confirms that existing forensics models are sensitive to data distribution, and that further work will be needed to establish more universal training protocols (see detailed discussion in Appendix D).", "Short fine-tuning is known to help (Cozzolino et al., 2018) , and we leave this aspect for future work.", "We are also planning to explore new transfer learning protocols (Li & Hoiem, 2017) .", "Generalization should also consider other forensic tasks.", "We optimized for manipulation detection, which serves as a building block for more complex problems, like processing history analysis or tampering localization (Korus, 2017; Mayer & Stamm, 2019; Wu et al., 2019; Marra et al., 2019a) .", "However, additional pre-screening may also be needed, e.g., analysis of sensor fingerprints (Chen et al., 2008) , or identification of computer graphics or synthetic content (Marra et al., 2019b) .", "Our study shows that lossy image codecs can be explicitly optimized to retain subtle low-level traces that are useful for photo manipulation detection.", "Interestingly, simple inclusion of high frequencies in the signal is insufficient, and the models learns more complex frequency attenuation/amplification patterns.", "This allows for reliable authentication even at very low bit-rates, where standard JPEG compression is no longer practical, e.g., at bit-rates around 0.4 bpp where our DCN codec with lowquality settings improved manipulation detection accuracy from 37% to 86%.", "We believe the proposed approach is particularly valuable for online media platforms (e.g., Truepic, or Facebook), who need to pre-screen content upon reception, but need to aggressively optimize bandwidth/storage.", "The standard soft quantization with a Gaussian kernel (Mentzer et al., 2018) works well for rounding to arbitrary integers, but leads to numerical issues for smaller codebooks.", "Values significantly exceeding codebook endpoints have zero affinity to any of the entries, and collapse to the mean (i.e., ≈ 0 in our case; Fig. 
A.1a) .", "Such issues can be addressed by increasing numerical precision, sacrificing accuracy (due to larger kernel bandwidth), or adding explicit conditional statements in the code.", "The latter approach is inelegant and cumbersome in graph-based machine learning frameworks like Tensorflow.", "We used a t-Student kernel instead and increased precision of the computation to 64-bits.", "This doesn't solve the problem entirely, but successfully eliminated all issues that we came across in our experiments, and further improved our entropy estimation accuracy.", "Fig.", "A.2 shows entropy estimation error for Laplace-distributed random values, and different hyper-parameters of the kernels.", "We observed the best results for a t-Student kernel with 50 degrees of freedom and bandwidth γ = 25 (marked in red).", "This kernel is used in all subsequent experiments.", "We experimented with different codebooks and entropy regularization strengths.", "Fig.", "A.3a shows how the quantized latent representation (QLR) changes with the size of the codebook.", "The figures also compare the actual histogram with its soft estimate (equation 6).", "We observed that the binary codebook is sub-optimal and significantly limits the achievable image quality, especially as the number of feature channels grows.", "Adding more entries steadily improved quality and the codebook with M = 32 entires (values from -15 to 16) seemed to be the point of diminishing returns.", "Our entropy-based regularization turned out to be very effective at shaping the QLR (Fig. A.3b ) and dispensed with the need to use other normalization techniques (e.g., GDN).", "We used only a single scalar multiplication factor responsible for scaling the distribution.", "All baseline and finetuned models use λ H = 250 (last column).", "Fig.", "A.4 visually compares the QLRs of our baseline low-quality codec (16 feature channels) with weak and strong regularization." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1395348757505417, 0.1818181723356247, 0.15094339847564697, 0.2857142686843872, 0.21276594698429108, 0.1304347813129425, 0.1702127605676651, 0.04347825422883034, 0, 0.1428571343421936, 0.04081632196903229, 0.12244897335767746, 0.1860465109348297, 0.0833333283662796, 0.10810810327529907, 0.04255318641662598, 0.18518517911434174, 0.08888888359069824, 0.09999999403953552, 0, 0.10169491171836853, 0.038461532443761826, 0.4166666567325592, 0.08888888359069824, 0.24242423474788666, 0.072727270424366, 0.038461532443761826, 0.11320754140615463, 0.1599999964237213, 0.09999999403953552, 0.14999999105930328, 0.11999999731779099, 0.0476190410554409, 0.1249999925494194, 0.05882352590560913, 0.11428570747375488, 0, 0, 0.1702127605676651, 0.11764705181121826, 0.145454540848732, 0.05128204822540283, 0.052631575614213943, 0.08888888359069824 ]
HyxG3p4twS
true
[ "We learn an efficient lossy image codec that can be optimized to facilitate reliable photo manipulation detection at fractional cost in payload/quality and even at low bitrates." ]
[ "Recurrent Neural Networks have long been the dominating choice for sequence modeling.", "However, it severely suffers from two issues: impotent in capturing very long-term dependencies and unable to parallelize the sequential computation procedure.", "Therefore, many non-recurrent sequence models that are built on convolution and attention operations have been proposed recently.", "Notably, models with multi-head attention such as Transformer have demonstrated extreme effectiveness in capturing long-term dependencies in a variety of sequence modeling tasks.", "Despite their success, however, these models lack necessary components to model local structures in sequences and heavily rely on position embeddings that have limited effects and require a considerable amount of design efforts.", "In this paper, we propose the R-Transformer which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoids their respective drawbacks.", "The proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings.", "We evaluate R-Transformer through extensive experiments with data from a wide range of domains and the empirical results show that R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks.", "Recurrent Neural Networks (RNNs) especially its variants such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) have achieved great success in a wide range of sequence learning tasks including language modeling, speech recognition, recommendation, etc (Mikolov et al., 2010; Sundermeyer et al., 2012; Graves & Jaitly, 2014; Hinton et al., 2012; Hidasi et al., 2015) .", "Despite their success, however, the recurrent structure is often troubled by two notorious issues.", "First, it easily suffers from gradient vanishing and exploding problems, which largely limits their ability to learn very long-term dependencies (Pascanu et al., 2013) .", "Second, the sequential nature of both forward and backward passes makes it extremely difficult, if not impossible, to parallelize the computation, which dramatically increases the time complexity in both training and testing procedure.", "Therefore, many recently developed sequence learning models have completely jettisoned the recurrent structure and only rely on convolution operation or attention mechanism that are easy to parallelize and allow the information flow at an arbitrary length.", "Two representative models that have drawn great attention are Temporal Convolution Networks(TCN) (Bai et al., 2018) and Transformer (Vaswani et al., 2017) .", "In a variety of sequence learning tasks, they have demonstrated comparable or even better performance than that of RNNs (Gehring et al., 2017; Bai et al., 2018; Devlin et al., 2018) .", "The remarkable performance achieved by such models largely comes from their ability to capture long-term dependencies in sequences.", "In particular, the multi-head attention mechanism in Transformer allows every position to be directly connected to any other positions in a sequence.", "Thus, the information can flow across positions without any intermediate loss.", "Nevertheless, there are two issues that can harm the effectiveness of multi-head attention mechanism for sequence learning.", "The first comes from the loss of sequential information of positions as it treats every position identically.", "To mitigate this problem, Transformer introduces position embeddings, whose effects, 
The illustration of one layer of R-Transformer.", "There are three different networks that are arranged hierarchically.", "In particular, the lower-level is localRNNs that process positions in a local window sequentially (This figure shows an example of local window of size 3); The middle-level is multi-head attention networks which capture the global long-term dependencies; The upper-level is Position-wise feedforward networks that conduct non-linear feature transformation.", "These three networks are connected by a residual and layer normalization operation.", "The circles with dash line are the paddings of the input sequence however, have been shown to be limited (Dehghani et al., 2018; Al-Rfou et al., 2018) .", "In addition, it requires considerable amount of efforts to design more effective position embeddings or different ways to incorporate them in the learning process (Dai et al., 2019) .", "Second, while multi-head attention mechanism is able to learn the global dependencies, we argue that it ignores the local structures that are inherently important in sequences such as natural languages.", "Even with the help of position embeddings, the signals at local positions can still be very weak as the number of other positions is significantly more.", "To address the aforementioned limitations of the standard Transformer, in this paper, we propose a novel sequence learning model, termed as R-Transformer.", "It is a multi-layer architecture built on RNNs and the standard Transformer, and enjoys the advantages of both worlds while naturally avoids their respective drawbacks.", "More specifically, before computing global dependencies of positions with the multi-head attention mechanism, we firstly refine the representation of each position such that the sequential and local information within its neighborhood can be compressed in the representation.", "To do this, we introduce a local recurrent neural network, referred to as LocalRNN, to process signals within a local window ending at a given position.", "In addition, the LocalRNN operates on local windows of all the positions identically and independently and produces a latent representation for each of them.", "In this way, the locality in the sequence is explicitly captured.", "In addition, as the local window is sliding along the sequence one position by one position, the global sequential information is also incorporated.", "More importantly, because the localRNN is only applied to local windows, the aforementioned two drawbacks of RNNs can be naturally mitigated.", "We evaluate the effectiveness of R-Transformer with a various of sequence learning tasks from different domains and the empirical results demonstrate that R-Transformer achieves much stronger performance than both TCN and standard Transformer as well as other state-of-the-art sequence models.", "The rest of the paper is organized as follows: Section 2 discusses the sequence modeling problem we aim to solve; The proposed R-Transformer model is presented in Section 3.", "In Section 4, we describe the experimental details and discuss the results.", "The related work is briefly reviewed in Section 5.", "Section 6 concludes this work.", "In summary, experimental results have shown that the standard Transformer can achieve better results than RNNs when sequences exhibit very long-term dependencies, i.e., sequential MNIST while its performance can drop dramatically when strong locality exists in sequences, i.e., polyphonic music and language.", "Meanwhile, TCN is a very strong 
sequence model that can effectively learn both local structures and long-term dependencies and has very stable performance in different tasks.", "More importantly, the proposed R-Transformer, which combines a lower-level LocalRNN and a higher-level multi-head attention, outperforms both TCN and Transformer by a large margin consistently in most of the tasks.", "The experiments are conducted on various sequential learning tasks with datasets from different domains.", "Moreover, all experimental settings are fair to all baselines.", "Thus, the observations from the experiments are reliable with the current experimental settings.", "However, due to computational limitations, we have currently restricted our evaluation settings to moderate model and dataset sizes.", "Thus, more evaluations on big models and large datasets can make the results more convincing.", "We would like to leave this as future work.", "In this paper, we propose a novel generic sequence model that enjoys the advantages of both RNNs and multi-head attention while mitigating their disadvantages.", "Specifically, it consists of a LocalRNN that learns the local structures without suffering from any of the weaknesses of RNNs and a multi-head attention pooling that effectively captures long-term dependencies without any help of position embeddings.", "In addition, the model can be easily implemented with full parallelization over the positions in a sequence.", "The empirical results on sequence modeling tasks from a wide range of domains have demonstrated the remarkable advantages of R-Transformer over state-of-the-art nonrecurrent sequence models such as TCN and standard Transformer as well as canonical recurrent architectures." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.13333332538604736, 0.10256409645080566, 0.17142856121063232, 0.14999999105930328, 0.11999999731779099, 0.3499999940395355, 0.20512819290161133, 0.12765957415103912, 0.0882352888584137, 0.0624999962747097, 0.09302324801683426, 0.21276594698429108, 0.19230768084526062, 0.10256409645080566, 0.13333332538604736, 0, 0.15789473056793213, 0.06896550953388214, 0.22857142984867096, 0.11764705181121826, 0.05882352590560913, 0, 0.17543859779834747, 0.06666666269302368, 0.1395348757505417, 0.1304347813129425, 0.08695651590824127, 0.09999999403953552, 0.1538461446762085, 0.24390242993831635, 0.1599999964237213, 0, 0.1538461446762085, 0.1428571343421936, 0.10810810327529907, 0.15789473056793213, 0.19230768084526062, 0.23255813121795654, 0.13793103396892548, 0, 0, 0.10344827175140381, 0.1904761791229248, 0.17777776718139648, 0, 0, 0.06896550953388214, 0.1666666567325592, 0.1249999925494194, 0, 0.380952388048172, 0.17391303181648254, 0.1764705777168274, 0.15686273574829102 ]
HJx4PAEYDH
true
[ "This paper proposes an effective generic sequence model which leverages the strengths of both RNNs and Multi-head attention." ]
[ "Many tasks in natural language processing and related domains require high precision output that obeys dataset-specific constraints.", "This level of fine-grained control can be difficult to obtain in large-scale neural network models.", "In this work, we propose a structured latent-variable approach that adds discrete control states within a standard autoregressive neural paradigm.", "Under this formulation, we can include a range of rich, posterior constraints to enforce task-specific knowledge that is effectively trained into the neural model.", "This approach allows us to provide arbitrary grounding of internal model decisions, without sacrificing any representational power of neural models.", "Experiments consider applications of this approach for text generation and part-of-speech induction.", "For natural language generation, we find that this method improves over standard benchmarks, while also providing fine-grained control." ]
[ 0, 0, 0, 0, 1, 0, 0 ]
[ 0.043478257954120636, 0.22727271914482117, 0.5833333134651184, 0.22641508281230927, 0.6666666865348816, 0.09756097197532654, 0.12765957415103912 ]
r1gzaCEtPS
false
[ "A structured latent-variable approach that adds discrete control states within a standard autoregressive neural paradigm to provide arbitrary grounding of internal model decisions, without sacrificing any representational power of neural models." ]
[ "Suppose a deep classification model is trained with samples that need to be kept private for privacy or confidentiality reasons.", "In this setting, can an adversary obtain the private samples if the classification model is given to the adversary?", "We call this reverse engineering against the classification model the Classifier-to-Generator (C2G) Attack.", "This situation arises when the classification model is embedded into mobile devices for offline prediction (e.g., object recognition for the automatic driving car and face recognition for mobile phone authentication).\n", "For C2G attack, we introduce a novel GAN, PreImageGAN.", "In PreImageGAN, the generator is designed to estimate the the sample distribution conditioned by the preimage of classification model $f$, $P(X|f(X)=y)$, where $X$ is the random variable on the sample space and $y$ is the probability vector representing the target label arbitrary specified by the adversary.", "In experiments, we demonstrate PreImageGAN works successfully with hand-written character recognition and face recognition.", "In character recognition, we show that, given a recognition model of hand-written digits, PreImageGAN allows the adversary to extract alphabet letter images without knowing that the model is built for alphabet letter images.", "In face recognition, we show that, when an adversary obtains a face recognition model for a set of individuals, PreImageGAN allows the adversary to extract face images of specific individuals contained in the set, even when the adversary has no knowledge of the face of the individuals.", "Recent rapid advances in deep learning technologies are expected to promote the application of deep learning to online services with recognition of complex objects.", "Let us consider the face recognition task as an example.", "The probabilistic classification model f takes a face image x and the model predicts the probability of which the given face image is associated with an individual t, f (x) ≃ Pr[T = t|X = x].The", "following three scenarios pose situations that probabilistic classification models need to revealed in public for online services in real applications:Prediction with cloud environment: Suppose an enterprise provides an online prediction service with a cloud environment, in which the service takes input from a user and returns predictions to the user in an online manner. The", "enterprise needs to deploy the model f into the cloud to achieve this.Prediction with private information: Suppose an enterprise develops a prediction model f (e.g., disease risk prediction) and a user wishes to have a prediction of the model with private input (e.g., personal genetic information). The", "most straightforward way to preserve the user's privacy entirely is to let the user download the entire model and perform prediction on the user side locally.Offline prediction: Automatic driving cars or laptops with face authentication contain face/object recognition systems in the device. Since", "these devices are for mobile use and need to work standalone, the full model f needs to be embedded in the device.In such situations that classification model f is revealed, we consider a reverse-engineering problem of models with deep architectures. Let D", "tr and d X,T be a set of training samples and its underlying distribution, respectively. Let f", "be a model trained with D tr . 
In this", "situation, is it possible for an adversary to obtain the training samples D tr (or its underlying distribution d X,T ) if the classification model is given to the adversary?. If this", "is possible, this can cause serious problems, particularly when D tr or d X,T is private or confidential information.Privacy violation by releasing face authentication: Let us consider the face authentication task as an example again. Suppose an adversary is given the classification model f . The adversary aims to estimate the data (face) distribution of a target individual t * , d X|T =t * . If this kind of reverseengineering works successfully, serious privacy violation arises because individual faces are private information. Furthermore, once d X|T =t * is revealed, the adversary can draw samples from d X|T =t * , which would cause another privacy violation (say, the adversary can draw an arbitrary number of the target's face images).Confidential information leakage by releasing object recognizer: Let us consider an object recognition system for automatic driving cars. Suppose a model f takes as input images from car-mounted cameras and detect various objects such as traffic signs or traffic lights. Given f , the reverse engineering reveals the sample distribution of the training samples, which might help adversaries having malicious intentions. For example, generation of adversarial examples that make the recognition system confuse without being detected would be possible. Also, this kind of attack allows exposure of hidden functionalities for privileged users or unexpected vulnerabilities of the system.If this kind of attack is possible, it indicates that careful treatment is needed before releasing model f in public considering that publication of f might cause serious problems as listed above. We name this type of reverse engineering classifier-to-generator (C2G) attack . In principle, estimation of labeled sample distributions from a classification/recognition model of complex objects (e.g., face images) is a difficult task because of the following two reasons. First, estimation of generative models of complex objects is believed to be a challenging problem itself. Second, model f often does not contain sufficient information to estimate the generative model of samples. In supervised classification, the label space is always much more abstract than the sample space. The classification model thus makes use of only a limited amount of information in the sample space that is sufficient to classify objects into the abstract label space. In this sense, it is difficult to estimate the sample distribution given only classification model f .To resolve the first difficulty, we employ Generative Adversarial Networks (GANs). GANs are a neural network architecture for generative models which has developed dramatically in the field of deep learning. Also, we exploit one remarkable property of GANs, the ability to interpolate latent variables of inputs. With this interpolation, GANs can generate samples (say, images) that are not included in the training samples, but realistic samples 1 .Even with this powerful generation ability of GANs, it is difficult to resolve the second difficulty. To overcome this for the C2G attack, we assume that the adversary can make use of unlabeled auxiliary samples D aux as background knowledge. Suppose f be a face recognition model that recognizes Alice and Bob, and the adversary tries to extract Alice's face image from f . 
It is natural to suppose that the adversary can use public face image samples that do not contain Alice's and Bob's face images as D aux . PreImageGAN exploits unlabeled auxiliary samples to complement knowledge extracted from the model f .", "As described in this paper, we formulated the Classifier-to-Generator (C2G) Attack, which estimates the training sample distribution ρ t * from given classification model f and auxiliary dataset D tr .", "As an algorithm for C2G attack, we proposed PreImageGAN which is based on ACGAN and WGAN.", "Fig.", "7 represents the results of the C2G attack when the auxiliary data consists of noisy images which are drawn from the uniform distribution.", "All generated images look like noise images, not numeric letters.", "This result reveals that the C2G attack fails when the auxiliary dataset is not sufficiently informative.", "More specifically, we can consider the C2G attack fails when the attacker does not have appropriate background knowledge of the training data distribution.(a", ") t * = 0 DISPLAYFORM0 Figure 7: Images generated by the C2G attack when the target label is set as t * = 0, 1, 2 and uniformly generated noise images are used as the auxiliary dataset. We", "used an alphanumeric letter classifier (label num:62) described in Sec. 5.2 as f for this experiment." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.06666666269302368, 0, 0, 0, 0, 0.09090908616781235, 0, 0.052631575614213943, 0.0476190447807312, 0.06666666269302368, 0, 0.05128204822540283, 0.039215683937072754, 0.043478257954120636, 0, 0.04081632196903229, 0.1538461446762085, 0.10526315122842789, 0.10526315122842789, 0.0361010804772377, 0.1538461446762085, 0, 0.27586206793785095, 0, 0, 0.25, 0, 0.07407406717538834 ]
SJOl4DlCZ
true
[ "Estimation of training data distribution from trained classifier using GAN." ]
[ "The goal of standard compressive sensing is to estimate an unknown vector from linear measurements under the assumption of sparsity in some basis.", "Recently, it has been shown that significantly fewer measurements may be required if the sparsity assumption is replaced by the assumption that the unknown vector lies near the range of a suitably-chosen generative model. ", "In particular, in (Bora {\\em et al.}, 2017) it was shown that roughly $O(k\\log L)$ random Gaussian measurements suffice for accurate recovery when the $k$-input generative model is bounded and $L$-Lipschitz, and that $O(kd \\log w)$ measurements suffice for $k$-input ReLU networks with depth $d$ and width $w$. In this paper, we establish corresponding algorithm-independent lower bounds on the sample complexity using tools from minimax statistical analysis. ", "In accordance with the above upper bounds, our results are summarized as follows: (i) We construct an $L$-Lipschitz generative model capable of generating group-sparse signals, and show that the resulting necessary number of measurements is $\\Omega(k \\log L)$;", "(ii) Using similar ideas, we construct two-layer ReLU networks of high width requiring $\\Omega(k \\log w)$ measurements, as well as lower-width deep ReLU networks requiring $\\Omega(k d)$ measurements. ", "As a result, we establish that the scaling laws derived in (Bora {\\em et al.}, 2017) are optimal or near-optimal in the absence of further assumptions.", "The problem of sparse estimation via linear measurements (commonly referred to as compressive sensing) is well-understood, with theoretical developments including sharp performance bounds for both practical algorithms [1, 2, 3, 4] and (potentially intractable) information-theoretically optimal algorithms [5, 6, 7, 8] .", "Following the tremendous success of deep generative models in a variety of applications [9] , a new perspective on compressive sensing was recently introduced, in which the sparsity assumption is replaced by the assumption of the underlying signal being well-modeled by a generative model (typically corresponding to a deep neural network) [10] .", "This approach was seen to exhibit impressive performance in experiments, with reductions in the number of measurements by large factors such as 5 to 10 compared to sparsity-based methods.", "In addition, [10] provided theoretical guarantees on their proposed algorithm, essentially showing that an L-Lipschitz generative model with bounded k-dimensional inputs leads to reliable recovery with m = O(k log L) random Gaussian measurements (see Section 2 for a precise statement).", "Moreover, for a ReLU network generative model from R k to R n with width w and depth d, it suffices to have m = O(kd log w).", "A variety of follow-up works provided additional theoretical guarantees (e.g., for more specific optimization algorithms [11, 12] , more general models [13] , or under random neural network weights [14, 15] ) for compressive sensing with generative models, but the main results of [10] are by far the most relevant to ours.", "In this paper, we address a prominent gap in the existing literature by establishing algorithmindependent lower bounds on the number of measurements needed (e.g., this is explicitly posed as an open problem in [15] ).", "Using tools from minimax statistical analysis, we show that for generative models satisfying the assumptions of [10] , the above-mentioned dependencies m = O(k log L) and m = O(kd log w) cannot be improved (or in the latter case, cannot be improved 
by more than a log n factor) without further assumptions.", "Our argument is essentially based on a reduction to compressive sensing with a group sparsity model (e.g., see [16] ), i.e., forming a neural network that is capable of producing such signals.", "The proofs are presented in the full paper [17] .", "We have established, to our knowledge, the first lower bounds on the sample complexity for compressive sensing with generative models.", "To achieve these, we constructed generative models capable of producing group-sparse signals, and then applied a minimax lower bound for group-sparse recovery.", "For bounded Lipschitz-continuous generative models we matched the O(k log L) scaling law derived in [10] , and for ReLU-based generative models, we showed that the dependence of the O(kd log w) bound from [10] has an optimal or near-optimal dependence on both the width and depth.", "A possible direction for future research is to understand what additional assumptions could be placed on the generative model to further reduce the sample complexity." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1395348757505417, 0.11764705181121826, 0.1975308656692505, 0.17543859779834747, 0.04444443807005882, 0.8260869383811951, 0.06557376682758331, 0.13333332538604736, 0.12765957415103912, 0.03278687968850136, 0, 0.14492753148078918, 0.145454540848732, 0.21875, 0.11320754140615463, 0.20000000298023224, 0.09999999403953552, 0.0476190410554409, 0.33898305892944336, 0.13636362552642822 ]
SJgYiQnq8H
true
[ "We establish that the scaling laws derived in (Bora et al., 2017) are optimal or near-optimal in the absence of further assumptions." ]
[ "Discovering and exploiting the causal structure in the environment is a crucial challenge for intelligent agents.", "Here we explore whether modern deep reinforcement learning can be used to train agents to perform causal reasoning.", "We adopt a meta-learning approach, where the agent learns a policy for conducting experiments via causal interventions, in order to support a subsequent task which rewards making accurate causal inferences.We also found the agent could make sophisticated counterfactual predictions, as well as learn to draw causal inferences from purely observational data.", "Though powerful formalisms for causal reasoning have been developed, applying them in real-world domains can be difficult because fitting to large amounts of high dimensional data often requires making idealized assumptions.", "Our results suggest that causal reasoning in complex settings may benefit from powerful learning-based approaches.", "More generally, this work may offer new strategies for structured exploration in reinforcement learning, by providing agents with the ability to perform—and interpret—experiments.", "Many machine learning algorithms are rooted in discovering patterns of correlation in data.", "While this has been sufficient to excel in several areas BID20 BID7 , sometimes the problems we are interested in are fundamentally causal.", "Answering questions such as \"Does smoking cause cancer?\" or \"Was this person denied a job due to racial discrimination?\" or \"Did this marketing campaign cause sales to go up?\" all require an ability to reason about causes and effects and cannot be achieved by purely associative inference.", "Even for problems that are not obviously causal, like image classification, it has been suggested that some failure modes emerge from lack of causal understanding.", "Causal reasoning may be an essential component of natural intelligence and is present in human babies, rats and even birds BID23 BID14 BID15 BID5 BID21 .", "There is a rich literature on formal approaches for defining and performing causal reasoning BID29 BID33 BID8 BID30 ).Here", "we investigate whether procedures for learning and using causal structure can be produced by meta-learning. The", "approach of meta-learning is to learn the learning (or inference) procedure itself, directly from data. We", "adopt the specific method of BID9 and BID35 , training a recurrent neural network (RNN) through model-free reinforcement learning. We", "train on a large family of tasks, each underpinned by a different causal structure.The use of meta-learning avoids the need to manually implement explicit causal reasoning methods in an algorithm, offers advantages of scalability by amortizing computations, and allows automatic incorporation of complex prior knowledge BID1 BID35 BID11 . Additionally", ", by learning end-to-end, the algorithm has the potential to find the internal representations of causal structure best suited for the types of causal inference required.", "This work is the first demonstration that causal reasoning can arise out of model-free reinforcement learning.", "This opens up the possibility of leveraging powerful learning-based methods for causal inference in complex settings.", "Traditional formal approaches usually decouple the two problems of causal induction (i.e. inferring the structure of the underlying model) and causal inference (i.e. 
estimating causal effects and answering counterfactual questions), and despite advances in both BID26 BID6 BID27 BID32 BID12 BID22 , inducing models often requires assumptions that are difficult to fit to complex real-world conditions.", "By learning these end-to-end, our method can potentially find representations of causal structure best tuned to the specific causal inferences required.", "Another key advantage of our meta-RL approach is that it allows the agent to learn to interact with the environment in order to acquire necessary observations in the service of its task, i.e., to perform active learning.", "In our experimental domain, our agents' active intervention policy was close to optimal, which demonstrates the promise of agents that can learn to experiment on their environment and perform rich causal reasoning on the observations.", "Future work should explore agents that perform experiments to support structured exploration in RL, and optimal experiment design in complex domains where large numbers of blind interventions are prohibitive.", "To this end, follow-up work should focus on scaling up our approach to larger environments, with more complex causal structure and a more diverse range of tasks.", "Though the results here are a first step in this direction which use relatively standard deep RL components, our approach will likely benefit from more advanced architectures (e.g. BID16 BID17 ) that allow longer, more complex episodes, as well as models which are more explicitly compositional (e.g. BID3 BID0 ) or have richer semantics (e.g. BID13 ) that more explicitly leverage symmetries like equivalence classes in the environment.", "We can also compare the performance of these agents to two standard model-free RL baselines.", "The Q-total agent learns a Q-value for each action across all steps for all the episodes.", "The Q-episode agent learns a Q-value for each action conditioned on the input at each time step [o_t, a_{t-1}, r_{t-1}], but with no LSTM memory to store previous actions and observations.", "Since the relationship between action and reward is random between episodes, Q-total was equivalent to selecting actions randomly, resulting in a considerably negative reward.", "The Q-episode agent essentially makes sure to not choose the arm that is indicated by m_t to be the external intervention (which is assured to be equal to −5), and essentially chooses randomly otherwise, giving an average reward of 0." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17391303181648254, 0.23999999463558197, 0.07999999821186066, 0.1538461446762085, 0.17391303181648254, 0, 0.19999998807907104, 0.06896550953388214, 0.04081632196903229, 0.1249999925494194, 0.1249999925494194, 0.2142857164144516, 0.1666666567325592, 0.1666666567325592, 0.2142857164144516, 0.1538461446762085, 0.27586206793785095, 0.3333333432674408, 0.1666666567325592, 0.0714285671710968, 0.2142857164144516, 0.10526315122842789, 0.1538461446762085, 0.0555555522441864, 0.1764705777168274, 0.0317460298538208, 0.08695651590824127, 0.09090908616781235, 0.04878048598766327, 0.06666666269302368, 0.0476190447807312 ]
H1ltQ3R9KQ
true
[ "meta-learn a learning algorithm capable of causal reasoning" ]
[ "Sentiment classification is an active research area with several applications including analysis of political opinions, classifying comments, movie reviews, news reviews and product reviews.", "To employ rule based sentiment classification, we require sentiment lexicons.", "However, manual construction of sentiment lexicon is time consuming and costly for resource-limited languages.", "To bypass manual development time and costs, we tried to build Amharic Sentiment Lexicons relying on corpus based approach.", "The intention of this approach is to handle sentiment terms specific to Amharic language from Amharic Corpus.", "Small set of seed terms are manually prepared from three parts of speech such as noun, adjective and verb.", "We developed algorithms for constructing Amharic sentiment lexicons automatically from Amharic news corpus.", "Corpus based approach is proposed relying on the word co-occurrence distributional embedding including frequency based embedding (i.e. Positive Point-wise Mutual Information PPMI).", "First we build word-context unigram frequency count matrix and transform it to point-wise mutual Information matrix.", "Using this matrix, we computed the cosine distance of mean vector of seed lists and each word in the corpus vocabulary.", "Based on the threshold value, the top closest words to the mean vector of seed list are added to the lexicon.", "Then the mean vector of the new sentiment seed list is updated and process is repeated until we get sufficient terms in the lexicon.", "Using PPMI with threshold value of 100 and 200, we got corpus based Amharic Sentiment lexicons of size 1811 and 3794 respectively by expanding 519 seeds.", "Finally, the lexicon generated in corpus based approach is evaluated.\n", "Most of sentiment mining research papers are associated to English languages.", "Linguistic computational resources in languages other than English are limited.", "Amharic is one of resource limited languages.", "Due to the advancement of World Wide Web, Amharic opinionated texts is increasing in size.", "To manage prediction of sentiment orientation towards a particular object or service is crucial for business intelligence, government intelligence, market intelligence, or support decision making.", "For carrying out Amharic sentiment classification, the availability of sentiment lexicons is crucial.", "To-date, there are two generated Amharic sentiment lexicons.", "These are manually generated lexicon(1000) (Gebremeskel, 2010) and dictionary based Amharic SWN and SOCAL lexicons (Neshir Alemneh et al., 2019) .", "However, dictionary based generated lexicons has short-comings in that it has difficulty in capturing cultural connotation and language specific features of the language.", "For example, Amharic words which are spoken culturally and used to express opinions will not be obtained from dictionary based sentiment lexicons.", "The word ጉርሻ/\"feed in other people with hands which expresses love and live in harmony with others\"/ in the Amharic text: \"እንደ ጉርሻ ግን የሚያግባባን የለም. . . 
ጉርሻ እኮ አንዱ ለሌላው የማጉረስ ተግባር ብቻ አይደለም፤ በተጠቀለለው እንጀራ ውስጥ ፍቅር አለ፣ መተሳሰብ አለ፣ አክብሮት አለ።\" has positive connotation or positive sentiment.", "But the dictionary meaning of the word ጉርሻ is \"bonus\".", "This is far away from the cultural connotation that it is intended to represent and express.", "We assumed that such kind of culture (or language specific) words are found in a collection of Amharic texts.", "However, dictionary based lexicons has short comings to capture sentiment terms which has strong ties to language and culture specific connotations of Amharic.", "Thus, this work builds corpus based algorithm to handle language and culture specific words in the lexicons.", "However, it could probably be impossible to handle all the words in the language as the corpus is a limited resource in almost all less resourced languages like Amharic.", "But still it is possible to build sentiment lexicons in particular domain where large amount of Amharic corpus is available.", "Due to this reason, the lexicon built using this approach is usually used for lexicon based sentiment analysis in the same domain from which it is built.", "The research questions to be addressed utilizing this approach are: (1) How can we build an approach to generate Amharic Sentiment Lexicon from corpus?", "(2)How do we evaluate the validity and quality of the generated lexicon?", "In this work, we set this approach to build Amharic polarity lexicons in automatic way relying on Amharic corpora which is mentioned shortly.", "The corpora are collected from different local news media organizations and also from facebook news' comments and you tube video comments to extend and enhance corpus size to capture sentiment terms into the generated PPMI based lexicon.", "Using corpus based approach, Amharic sentiment lexicon is built where it allows finding domain dependent opinions which might not be possible by sentiment lexicon generated using dictionary based approach.", "In this section, we have attempted to develop new approaches to bootstrapping relying on word-context semantic space representation of large Amharic corpora.", "Creating a sentiment lexicon generation is not an objective process.", "The generated lexicon is dependent on the task it is applied.", "Thus, in this work we have seen that it is possible to create Sentiment lexicon for low resourced languages from corpus.", "This captures the language specific features and connotations related to the culture where the language is spoken.", "This can not be handled using dictionary based approach that propagates labels from resource rich languages.", "To the best of our knowledge, the the PPMI based approach to generate Amharic Sentiment lexicon form corpus is performed for first time for Amharic language with almost minimal costs and time.", "Thus, the generated lexicons can be used in combination with other sentiment lexicons to enhance the performance of sentiment classifications in Amharic language.", "The approach is a generic approach which can be adapted to other resource limited languages to reduce cost of human annotation and the time it takes to annotated sentiment lexicons.", "Though the PPMI based Amharic Sentiment lexicon outperforms the manual lexicon, prediction (word embedding) based approach is recommended to generate sentiment lexicon for Amharic language to handle context sensitive terms." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11428570747375488, 0.0952380895614624, 0.1538461446762085, 0.3870967626571655, 0.2222222238779068, 0, 0.25, 0.3030303120613098, 0, 0.06451612710952759, 0.13793103396892548, 0.12121211737394333, 0.2222222238779068, 0.3478260934352875, 0, 0, 0.21052631735801697, 0.14814814925193787, 0.05882352590560913, 0.1666666567325592, 0.09999999403953552, 0.1249999925494194, 0.0624999962747097, 0.11764705181121826, 0.0357142835855484, 0.0952380895614624, 0.07407406717538834, 0.06666666269302368, 0.12121211737394333, 0.13793103396892548, 0.1621621549129486, 0.19354838132858276, 0.1764705777168274, 0.1764705777168274, 0, 0.24242423474788666, 0.13636362552642822, 0.2631579041481018, 0.1818181723356247, 0.1818181723356247, 0.27272728085517883, 0.24242423474788666, 0.07692307233810425, 0.0714285671710968, 0.3589743673801422, 0.06451612710952759, 0.05128204822540283, 0.3243243098258972 ]
BJxVT3EKDH
true
[ "Corpus based Algorithm is developed generate Amharic Sentiment lexicon relying on corpus" ]
[ "Optimistic initialisation is an effective strategy for efficient exploration in reinforcement learning (RL).", "In the tabular case, all provably efficient model-free algorithms rely on it.", "However, model-free deep RL algorithms do not use optimistic initialisation despite taking inspiration from these provably efficient tabular algorithms.", "In particular, in scenarios with only positive rewards, Q-values are initialised at their lowest possible values due to commonly used network initialisation schemes, a pessimistic initialisation.", "Merely initialising the network to output optimistic Q-values is not enough, since we cannot ensure that they remain optimistic for novel state-action pairs, which is crucial for exploration.", "We propose a simple count-based augmentation to pessimistically initialised Q-values that separates the source of optimism from the neural network.", "We show that this scheme is provably efficient in the tabular setting and extend it to the deep RL setting.", "Our algorithm, Optimistic Pessimistically Initialised Q-Learning (OPIQ), augments the Q-value estimates of a DQN-based agent with count-derived bonuses to ensure optimism during both action selection and bootstrapping.", "We show that OPIQ outperforms non-optimistic DQN variants that utilise a pseudocount-based intrinsic motivation in hard exploration tasks, and that it predicts optimistic estimates for novel state-action pairs.", "In reinforcement learning (RL), exploration is crucial for gathering sufficient data to infer a good control policy.", "As environment complexity grows, exploration becomes more challenging and simple randomisation strategies become inefficient.", "While most provably efficient methods for tabular RL are model-based (Brafman and Tennenholtz, 2002; Strehl and Littman, 2008; Azar et al., 2017) , in deep RL, learning models that are useful for planning is notoriously difficult and often more complex (Hafner et al., 2019) than modelfree methods.", "Consequently, model-free approaches have shown the best final performance on large complex tasks (Mnih et al., 2015; 2016; Hessel et al., 2018) , especially those requiring hard exploration (Bellemare et al., 2016; Ostrovski et al., 2017) .", "Therefore, in this paper, we focus on how to devise model-free RL algorithms for efficient exploration that scale to large complex state spaces and have strong theoretical underpinnings.", "Despite taking inspiration from tabular algorithms, current model-free approaches to exploration in deep RL do not employ optimistic initialisation, which is crucial to provably efficient exploration in all model-free tabular algorithms.", "This is because deep RL algorithms do not pay special attention to the initialisation of the neural networks and instead use common initialisation schemes that yield initial Q-values around zero.", "In the common case of non-negative rewards, this means Q-values are initialised to their lowest possible values, i.e., a pessimistic initialisation.", "While initialising a neural network optimistically would be trivial, e.g., by setting the bias of the final layer of the network, the uncontrolled generalisation in neural networks changes this initialisation quickly.", "Instead, to benefit exploration, we require the Q-values for novel state-action pairs must remain high until they are explored.", "An empirically successful approach to exploration in deep RL, especially when reward is sparse, is intrinsic motivation (Oudeyer and Kaplan, 2009) .", "A popular variant is based on 
pseudocounts (Bellemare et al., 2016) , which derive an intrinsic bonus from approximate visitation counts over states and are inspired by the tabular MBIE-EB algorithm (Strehl and Littman, 2008) .", "However, adding a positive intrinsic bonus to the reward yields optimistic Q-values only for state-action pairs that have already been chosen sufficiently often.", "Incentives to explore unvisited states rely therefore on the generalisation of the neural network.", "Exactly how the network generalises to those novel state-action pairs is unknown, and thus it is unclear whether those estimates are optimistic when compared to nearby visited state-action pairs.", "Consider the simple example with a single state and two actions shown in Figure 1.", "The left action yields +0.1 reward and the right action yields +1 reward.", "An agent whose Q-value estimates have been zero-initialised must at the first time step select an action randomly.", "As both actions are underestimated, this will increase the estimate of the chosen action.", "Greedy agents always pick the action with the largest Q-value estimate and will select the same action forever, failing to explore the alternative.", "Whether the agent learns the optimal policy or not is thus decided purely at random based on the initial Q-value estimates.", "This effect will only be amplified by intrinsic reward.", "To ensure optimism in unvisited, novel state-action pairs, we introduce Optimistic Pessimistically Initialised Q-Learning (OPIQ).", "OPIQ does not rely on an optimistic initialisation to ensure efficient exploration, but instead augments the Q-value estimates with count-based bonuses in the following manner:", "where N(s, a) is the number of times a state-action pair has been visited and M, C > 0 are hyperparameters.", "These Q^+-values are then used for both action selection and during bootstrapping, unlike the above methods which only utilise Q-values during these steps.", "This allows OPIQ to maintain optimism when selecting actions and bootstrapping, since the Q^+-values can be optimistic even when the Q-values are not.", "In the tabular domain, we base OPIQ on UCB-H (Jin et al., 2018) , a simple online Q-learning algorithm that uses count-based intrinsic rewards and optimistic initialisation.", "Instead of optimistically initialising the Q-values, we pessimistically initialise them and use Q^+-values during action selection and bootstrapping.", "Pessimistic initialisation is used to enable a worst case analysis where all of our Q-value estimates underestimate Q^* and is not a requirement for OPIQ.", "We show that these modifications retain the theoretical guarantees of UCB-H.", "Furthermore, our algorithm easily extends to the Deep RL setting.", "The primary difficulty lies in obtaining appropriate state-action counts in high-dimensional and/or continuous state spaces, which has been tackled by a variety of approaches (Bellemare et al., 2016; Ostrovski et al., 2017; Tang et al., 2017; Machado et al., 2018a) and is orthogonal to our contributions.", "We demonstrate clear performance improvements in sparse reward tasks over", "1) a baseline DQN that just uses intrinsic motivation derived from the approximate counts,", "2) simpler schemes that aim for an optimistic initialisation when using neural networks, and", "3) strong exploration baselines.", "We show the importance of optimism during action selection for ensuring efficient exploration.", "Visualising the predicted Q^+-values shows that they are indeed optimistic for
novel state-action pairs.", "This paper presented OPIQ, a model-free algorithm that does not rely on an optimistic initialisation to ensure efficient exploration.", "Instead, OPIQ augments the Q-value estimates with a count-based optimism bonus.", "We showed that this is provably efficient in the tabular setting by modifying UCB-H to use a pessimistic initialisation and our augmented Q^+-values for action selection and bootstrapping.", "Since our method does not rely on a specific initialisation scheme, it easily scales to deep RL when paired with an appropriate counting scheme.", "Our results showed the benefits of maintaining optimism both during action selection and bootstrapping for exploration on a number of hard sparse reward environments including Montezuma's Revenge.", "In future work, we aim to extend OPIQ by integrating it with more expressive counting schemes." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0, 0.060606054961681366, 0, 0.17391303181648254, 0.08695651590824127, 0.29999998211860657, 0.20512819290161133, 0.4166666567325592, 0.21276594698429108, 0.052631575614213943, 0.05714285373687744, 0.09836065024137497, 0.03999999538064003, 0.0833333283662796, 0, 0.12244897335767746, 0.1818181723356247, 0.08163265138864517, 0.09999999403953552, 0.04878048226237297, 0.1111111044883728, 0.1818181723356247, 0.05882352590560913, 0.17777776718139648, 0.2222222238779068, 0.1875, 0.20512819290161133, 0.1764705777168274, 0.25, 0.14999999105930328, 0, 0.0555555522441864, 0.2222222238779068, 0.1860465109348297, 0.31111109256744385, 0.27272728085517883, 0.2083333283662796, 0.25, 0.17777776718139648, 0.1875, 0.06451612710952759, 0.06666666269302368, 0.06451612710952759, 0.17142856121063232, 0.11428570747375488, 0, 0.3529411852359772, 0.1621621549129486, 0.09999999403953552, 0.4375, 0.3199999928474426, 0.08888888359069824, 0.2978723347187042, 0.05405404791235924 ]
r1xGP6VYwH
true
[ "We augment the Q-value estimates with a count-based bonus that ensures optimism during action selection and bootstrapping, even if the Q-value estimates are pessimistic." ]
[ "Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks.", "However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning.", "Both of these challenges severely limit the applicability of such methods to complex, real-world domains.", "In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework.", "In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible.", "Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods.", "By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods.", "Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.", "We presented soft actor-critic (SAC), an off-policy maximum entropy deep reinforcement learning algorithm that provides sample-efficient learning while retaining the benefits of entropy maximization and stability.", "Our theoretical results derive soft policy iteration, which we show to converge to the optimal policy.", "From this result, we can formulate a soft actor-critic algorithm, and we empirically show that it outperforms state-of-the-art model-free deep RL methods, including the off-policy DDPG algorithm and the on-policy TRPO algorithm.", "In fact, the sample efficiency of this approach actually exceeds that of DDPG by a substantial margin.", "Our results suggest that stochastic, entropy maximizing reinforcement learning algorithms can provide a promising avenue for improved robustness and stability, and further exploration of maximum entropy methods, including methods that incorporate second order information (e.g., trust regions BID21 ) or more expressive policy classes is an exciting avenue for future work." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.21621620655059814, 0, 0.0624999962747097, 0.8717948794364929, 0.0952380895614624, 0.3243243098258972, 0.13333332538604736, 0.04999999329447746, 0.5714285373687744, 0.1249999925494194, 0.30434781312942505, 0.05882352590560913, 0.1515151411294937 ]
HJjvxl-Cb
true
[ "We propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework." ]
[ "In many settings, it is desirable to learn decision-making and control policies through learning or from expert demonstrations.", "The most common approaches under this framework are Behaviour Cloning (BC), and Inverse Reinforcement Learning (IRL).", "Recent methods for IRL have demonstrated the capacity to learn effective policies with access to a very limited set of demonstrations, a scenario in which BC methods often fail.", "Unfortunately, directly comparing the algorithms for these methods does not provide adequate intuition for understanding this difference in performance.", "This is the motivating factor for our work.", "We begin by presenting $f$-MAX, a generalization of AIRL (Fu et al., 2018), a state-of-the-art IRL method.", "$f$-MAX provides grounds for more directly comparing the objectives for LfD.", "We demonstrate that $f$-MAX, and by inheritance AIRL, is a subset of the cost-regularized IRL framework laid out by Ho & Ermon (2016).", "We conclude by empirically evaluating the factors of difference between various LfD objectives in the continuous control domain.", "Modern advances in reinforcement learning aim to alleviate the need for hand-engineered decisionmaking and control algorithms by designing general purpose methods that learn to optimize provided reward functions.", "In many cases however, it is either too challenging to optimize a given reward (e.g. due to sparsity of signal), or it is simply impossible to design a reward function that captures the intricate details of desired outcomes.", "One approach to overcoming such hurdles is Learning from Demonstrations (LfD), where algorithms are provided with expert demonstrations of how to accomplish desired tasks.The most common approaches in the LfD framework are Behaviour Cloning (BC) and Inverse Reinforcement Learning (IRL) BID22 BID15 .", "In standard BC, learning from demonstrations is treated as a supervised learning problem and policies are trained to regress expert actions from a dataset of expert demonstrations.", "Other forms of Behaviour Cloning, such as DAgger BID21 , consider how to make use of an expert in a more optimal fashion.", "On the other hand, in IRL the aim is to infer the reward function of the expert, and subsequently train a policy to optimize this reward.", "The motivation for IRL stems from the intuition that the reward function is the most concise and portable representation of a task BID15 BID0 .Unfortunately", ", the standard IRL formulation BID15 faces degeneracy issues 1 . A successful", "framework for overcoming such challenges is the Maximum-Entropy (Max-Ent) IRL method BID28 BID27 . 
A line of research", "stemming from the Max-Ent IRL framework has led to recent \"adversarial\" methods BID12 BID4 BID7 (footnote 1: for example, any policy is optimal for the constant reward function r(s, a) = 0).", "The motivation for this work stemmed from the superior performance of recent direct Max-Ent IRL methods BID12 BID7 compared to BC in the low-data regime, and the desire to understand the relation between various approaches for Learning from Demonstrations.", "We first presented f-MAX, a generalization of AIRL BID7 , which allowed us to interpret AIRL as optimizing for KL(ρ_π(s, a) || ρ_exp(s, a)).", "We demonstrated that f-MAX, and by inheritance AIRL, is a subset of the cost-regularized IRL framework laid out by BID12 .", "Comparing to the standard BC objective, E_{ρ_exp(s)}[KL(ρ_exp(a|s) || ρ_π(a|s))], we hypothesized two reasons for the superior performance of AIRL:", "1) the additional terms in the objective encouraging the matching of marginal state distributions, and", "2) the direction of the KL divergence being optimized.", "Setting out to empirically evaluate these claims we presented FAIRL, a one-line modification of the AIRL algorithm that optimizes KL(ρ_exp(s, a) || ρ_π(s, a)).", "FAIRL outperformed BC in a similar fashion to AIRL, which allowed us to conclude the key factor being the matching of state marginals.", "Additional comparisons between FAIRL and AIRL provided initial understanding about the role of the direction of the KL being optimized.", "In future work we aim to produce results on a more diverse set of more challenging environments.", "Additionally, evaluating other choices of f-divergence beyond forward and reverse KL may present interesting avenues for improvement BID26 .", "Lastly, but importantly, we would like to understand whether the mode-covering behaviour of FAIRL could result in more robust policies BID19 .", "Appendix A (Some Useful Identities): Let h : S × A → R be an arbitrary function. If", "all episodes have the same length T , we have, DISPLAYFORM0 DISPLAYFORM1 In a somewhat similar fashion, in the infinite horizon case with fixed probability γ ∈ (0, 1)", "of transitioning to a terminal state, for the discounted sum below we have, DISPLAYFORM2 DISPLAYFORM3 where Γ := 1/(1−γ) is the normalizer of the sum Σ_t γ^t . Since", "the integral of an infinite series is not always equal to the infinite series of integrals, some analytic considerations must be made to go from equation 34 to 35. But,", "one simple case in which it holds is when the ranges of h and all ρ_π(s_t, a_t) are bounded." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10526315122842789, 0.3333333432674408, 0.17391303181648254, 0.15789473056793213, 0.0714285671710968, 0.05405404791235924, 0.20000000298023224, 0.0476190410554409, 0, 0.12765957415103912, 0.07692307233810425, 0.23333333432674408, 0.0952380895614624, 0.1428571343421936, 0.09756097197532654, 0.09302324801683426, 0, 0.10810810327529907, 0.11764705181121826, 0.15094339847564697, 0.1304347813129425, 0.04999999329447746, 0.09090908616781235, 0.060606054961681366, 0.0714285671710968, 0.08695651590824127, 0.1463414579629898, 0, 0.1111111044883728, 0.051282044500112534, 0.0476190410554409, 0, 0.0416666604578495, 0.13333332538604736, 0.045454539358615875, 0.045454539358615875 ]
rkeXrIIt_4
true
[ "Distribution matching through divergence minimization provides a common ground for comparing adversarial Maximum-Entropy Inverse Reinforcement Learning methods to Behaviour Cloning." ]
[ "Value-based methods constitute a fundamental methodology in planning and deep reinforcement learning (RL).", "In this paper, we propose to exploit the underlying structures of the state-action value function, i.e., Q function, for both planning and deep RL.", "In particular, if the underlying system dynamics lead to some global structures of the Q function, one should be capable of inferring the function better by leveraging such structures.", "Specifically, we investigate the low-rank structure, which widely exists for big data matrices.", "We verify empirically the existence of low-rank Q functions in the context of control and deep RL tasks (Atari games).", "As our key contribution, by leveraging Matrix Estimation (ME) techniques, we propose a general framework to exploit the underlying low-rank structure in Q functions, leading to a more efficient planning procedure for classical control, and additionally, a simple scheme that can be applied to any value-based RL techniques to consistently achieve better performance on ''low-rank'' tasks.", "Extensive experiments on control tasks and Atari games confirm the efficacy of our approach.", "Value-based methods are widely used in control, planning and reinforcement learning (Gorodetsky et al., 2018; Alora et al., 2016; Mnih et al., 2015) .", "To solve a Markov Decision Process (MDP), one common method is value iteration, which finds the optimal value function.", "This process can be done by iteratively computing and updating the state-action value function, represented by Q(s, a) (i.e., the Q-value function).", "In simple cases with small state and action spaces, value iteration can be ideal for efficient and accurate planning.", "However, for modern MDPs, the data that encodes the value function usually lies in thousands or millions of dimensions (Gorodetsky et al., 2018; 2019) , including images in deep reinforcement learning (Mnih et al., 2015; Tassa et al., 2018) .", "These practical constraints significantly hamper the efficiency and applicability of the vanilla value iteration.", "Yet, the Q-value function is intrinsically induced by the underlying system dynamics.", "These dynamics are likely to possess some structured forms in various settings, such as being governed by partial differential equations.", "In addition, states and actions may also contain latent features (e.g., similar states could have similar optimal actions).", "Thus, it is reasonable to expect the structured dynamic to impose a structure on the Q-value.", "Since the Q function can be treated as a giant matrix, with rows as states and columns as actions, a structured Q function naturally translates to a structured Q matrix.", "In this work, we explore the low-rank structures.", "To check whether low-rank Q matrices are common, we examine the benchmark Atari games, as well as 4 classical stochastic control tasks.", "As we demonstrate in Sections 3 and 4, more than 40 out of 57 Atari games and all 4 control tasks exhibit low-rank Q matrices.", "This leads us to a natural question: How do we leverage the low-rank structure in Q matrices to allow value-based techniques to achieve better performance on \"low-rank\" tasks?", "We propose a generic framework that allows for exploiting the low-rank structure in both classical planning and modern deep RL.", "Our scheme leverages Matrix Estimation (ME), a theoretically guaranteed framework for recovering low-rank matrices from noisy or incomplete measurements (Chen & Chi, 2018) .", "In particular, for classical control 
tasks, we propose Structured Value-based Planning (SVP).", "For the Q matrix of dimension |S| × |A|, at each value iteration, SVP randomly updates a small portion of the Q(s, a) and employs ME to reconstruct the remaining elements.", "We show that planning problems can greatly benefit from such a scheme, where much fewer samples (only sample around 20% of (s, a) pairs at each iteration) can achieve almost the same policy as the optimal one.", "For more advanced deep RL tasks, we extend our intuition and propose Structured Value-based Deep RL (SV-RL), applicable for any value-based methods such as DQN (Mnih et al., 2015) .", "Here, instead of the full Q matrix, SV-RL naturally focuses on the \"sub-matrix\", corresponding to the sampled batch of states at the current iteration.", "For each sampled Q matrix, we again apply ME to represent the deep Q learning target in a structured way, which poses a low rank regularization on this \"sub-matrix\" throughout the training process, and hence eventually the Q-network's predictions.", "Intuitively, as learning a deep RL policy is often noisy with high variance, if the task possesses a low-rank property, this scheme will give a clear guidance on the learning space during training, after which a better policy can be anticipated.", "We confirm that SV-RL indeed can improve the performance of various value-based methods on \"low-rank\" Atari games: SV-RL consistently achieves higher scores on those games.", "Interestingly, for complex, \"high-rank\" games, SV-RL performs comparably.", "ME naturally seeks solutions that balance low rank and a small reconstruction error (cf. Section 3.1).", "Such a balance on reconstruction error helps to maintain or only slightly degrade the performance for \"high-rank\" situation.", "We summarize our contributions as follows:", "• We are the first to propose a framework that leverages matrix estimation as a general scheme to exploit the low-rank structures, from planning to deep reinforcement learning.", "• We demonstrate the effectiveness of our approach on classical stochastic control tasks, where the low-rank structure allows for efficient planning with less computation.", "• We extend our scheme to deep RL, which is naturally applicable for any value-based techniques.", "Across a variety of methods, such as DQN, double DQN, and dueling DQN, experimental results on all Atari games show that SV-RL can consistently improve the performance of value-based methods, achieving higher scores for tasks when low-rank structures are confirmed to exist.", "We investigated the structures in value function, and proposed a complete framework to understand, validate, and leverage such structures in various tasks, from planning to deep reinforcement learning.", "The proposed SVP and SV-RL algorithms harness the strong low-rank structures in the Q function, showing consistent benefits for both planning tasks and value-based deep reinforcement learning techniques.", "Extensive experiments validated the significance of the proposed schemes, which can be easily embedded into existing planning and RL frameworks for further improvements.", "randomly sample a set Ω of observed entries from S × A, each with probability p; /* update the randomly selected state-action pairs */", "end for", "/* reconstruct the Q matrix via matrix estimation */", "apply matrix completion to the observed values {Q(s, a)}_{(s,a)∈Ω} to reconstruct Q^{(t+1)}" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.4375, 0.3255814015865326, 0.045454539358615875, 0.1875, 0.3243243098258972, 0.3142856955528259, 0.12121211737394333, 0.25641024112701416, 0.10810810327529907, 0.09756097197532654, 0.1621621549129486, 0.26923075318336487, 0.1249999925494194, 0.06666666269302368, 0.051282044500112534, 0.05405404791235924, 0.1818181723356247, 0.1463414579629898, 0.14814814925193787, 0.09999999403953552, 0.1395348757505417, 0.2222222238779068, 0.8717948794364929, 0.1904761791229248, 0.12903225421905518, 0.12765957415103912, 0.18518517911434174, 0.1666666567325592, 0.051282044500112534, 0.2222222238779068, 0.18518517911434174, 0.1428571343421936, 0.07407406717538834, 0.1666666567325592, 0.1621621549129486, 0.07999999821186066, 0.5116279125213623, 0.3333333432674408, 0.17142856121063232, 0.21052631735801697, 0.4651162624359131, 0.4444444477558136, 0.19512194395065308, 0.09090908616781235, 0.09090908616781235, 0.0714285671710968, 0.060606054961681366 ]
rklHqRVKvH
true
[ "We propose a generic framework that allows for exploiting the low-rank structure in both planning and deep reinforcement learning." ]
[ "Learned representations of source code enable various software developer tools, e.g., to detect bugs or to predict program properties.", "At the core of code representations often are word embeddings of identifier names in source code, because identifiers account for the majority of source code vocabulary and convey important semantic information.", "Unfortunately, there currently is no generally accepted way of evaluating the quality of word embeddings of identifiers, and current evaluations are biased toward specific downstream tasks.", "This paper presents IdBench, the first benchmark for evaluating to what extent word embeddings of identifiers represent semantic relatedness and similarity.", "The benchmark is based on thousands of ratings gathered by surveying 500 software developers.", "We use IdBench to evaluate state-of-the-art embedding techniques proposed for natural language, an embedding technique specifically designed for source code, and lexical string distance functions, as these are often used in current developer tools.", "Our results show that the effectiveness of embeddings varies significantly across different embedding techniques and that the best available embeddings successfully represent semantic relatedness.", "On the downside, no existing embedding provides a satisfactory representation of semantic similarities, e.g., because embeddings consider identifiers with opposing meanings as similar, which may lead to fatal mistakes in downstream developer tools.", "IdBench provides a gold standard to guide the development of novel embeddings that address the current limitations.\n", "Reasoning about source code based on learned representations has various applications, such as predicting method names (Allamanis et al., 2015) , detecting bugs (Pradel & Sen, 2018) and vulnerabilities (Harer et al., 2018) , predicting types (Malik et al., 2019) , detecting similar code (White et al., 2016; Xu et al., 2017) , inferring specifications (DeFreez et al., 2018) , code de-obfuscation (Raychev et al., 2015; Alon et al., 2018a) , and program repair (Devlin et al., 2017) .", "Many of these techniques are based on embeddings of source code, which map a given piece of code into a continuous vector representation that encodes some aspect of the semantics of the code.", "A core component of most code embeddings are semantic representations of identifier names, i.e., the names of variables, functions, classes, fields, etc. 
in source code.", "Similar to words in natural languages, identifiers are the basic building block of source code.", "Identifiers not only account for the majority of the vocabulary of source code, but they also convey important information about the (intended) meaning of code.", "To reason about identifiers and their meaning, code analysis techniques build on learned embeddings of identifiers, either by adapting embeddings that were originally proposed for natural languages (Mikolov et al., 2013a; or with embeddings specifically designed for source code (Alon et al., 2018a) .", "Given the importance of identifier embeddings, a crucial challenge is measuring how effective an embedding represents the semantic relationships between identifiers.", "For word embeddings in natural language, the community has addressed this question through a series of gold standards (Finkelstein et al., 2002; Bruni et al., 2014a; Rubenstein & Goodenough, 1965; Miller & Charles, 1991; Hill et al., 2015; Gerz et al., 2016) .", "These gold standards define how similar two words are based on ratings by human judges, enabling an evaluation that measures how well an embedding reflects the human ratings.", "Unfortunately, simply reusing existing gold standards to identifiers in source code would be misleading.", "One reason is that the vocabularies of natural languages and source code overlap only partially, because source code contains various terms and abbreviations not found in natural language texts.", "Moreover, source code has a constantly growing vocabulary, as developers tend to invent new identifiers, e.g., for newly emerging application domains Babii et al. (2019) .", "Finally, even words present in both natural languages and source code may differ in their meaning due to computer science-specific meanings of some words, e.g., \"float\" or \"string\".", "This paper addresses the problem of measuring and comparing the effectiveness of embeddings of identifiers.", "We present IdBench, a benchmark for evaluating techniques that represent semantic similarities of identifiers.", "The basis of the benchmark is a dataset of developer opinions about the similarity of pairs of identifiers.", "We gather this dataset through two surveys that show realworld identifiers and code snippets to hundreds of developers, asking them to rate their similarity.", "Taking the developer opinions as a gold standard, IdBench allows for evaluating embeddings in a systematic way by measuring to what extent an embedding agrees with ratings given by developers.", "Moreover, inspecting pairs of identifiers for which an embedding strongly agrees or disagrees with the benchmark helps understand the strengths and weaknesses of current embeddings.", "Overall, we gather thousands of ratings from 500 developers.", "Cleaning and compiling this raw dataset into a benchmark yields several hundreds of pairs of identifiers with gold standard similarities, including identifiers from a wide range of application domains.", "We apply our approach to a corpus of JavaScript code, because several recent pieces of work on identifier names and code embeddings focus on this language (Pradel & Sen, 2018; Alon et al., 2018b; a; Malik et al., 2019) .", "Applying our methodology to another language is straightforward.", "Based on the newly created benchmark, we evaluate and compare state-of-the-art embeddings of identifiers.", "We find that different embedding techniques differ heavily in terms of their ability to accurately represent identifier relatedness and 
similarity.", "The best available technique, the CBOW variant of FastText, accurately represents relatedness, but none of the available techniques accurately represents identifier similarities.", "One reason is that some embeddings are confused about identifiers with opposite meaning, e.g., rows and cols, and about identifiers that belong to the same application domain but are not similar.", "Another reason is that some embeddings miss synonyms, e.g., file and record.", "We also find that simple string distance functions, which measure the similarity of identifiers without any learning, are surprisingly effective, and even outperform some learned embeddings for the similarity task.", "In summary, this paper makes the following contributions.", "(1) Methodology: To the best of our knowledge, we are the first to systematically evaluate embeddings of identifiers.", "Our methodology is based on surveying developers and summarizing their opinions into gold standard similarities of pairs of identifiers.", "(2) Reusable benchmark: We make available a benchmark of hundreds of pairs of identifiers, providing a way to systematically evaluate existing and future embeddings.", "While the best available embeddings are highly effective at representing relatedness, none of the studied techniques reaches the same level of agreement for similarity.", "In fact, even the best results in Figures 4b and 4c (39%) clearly stay beyond the IRA of our benchmark (62%), showing a huge potential for improvement.", "For many applications of embeddings of identifiers, semantic similarity is crucial.", "For example, tools to suggest suitable variable or method names (Allamanis et al., 2015; Alon et al., 2018a) aim for the name that is most similar, not only most related, to the concept represented by the variable or method.", "Likewise, identifier name-based tools for finding programming errors (Pradel & Sen, 2018) or variable misuses (Allamanis et al., 2017) want to identify situations where the developer uses a wrong, but perhaps related, variable.", "The lack of embeddings that accurately represent the semantic similarities of identifiers motivates more work on embedding techniques suitable for this task.", "To better understand why current embeddings sometimes fail to accurately represent similarities, Table 1 shows the most similar identifiers of selected identifiers according to the FastText-cbow and path-based embeddings.", "The examples illustrate two observations.", "First, FastText, due to its use of n-grams (Bojanowski et al., 2017) , tends to cluster identifiers based on lexical commonalities.", "While many lexically similar identifiers are also semantically similar, e.g., substr and substring, this approach misses other synonyms, e.g., item and entry.", "Another downside is that lexical similarity may also establish wrong relationships.", "For example, substring and substrCount represent different concepts, but FastText finds them to be highly similar.", "Second, in contrast to FastText, path-based embeddings tend to cluster words based on their structural and syntactical contexts.", "This approach helps the embeddings to identify synonyms despite their lexical differences, e.g., count and total, or files and records.", "The downside is that it also clusters various related but not similar identifiers, e.g., minText and maxText, or substr and getPadding.", "Some of these identifiers even have opposing meanings, e.g., rows and cols, which can mislead code analysis tools when reasoning about the semantics of 
code.", "A somewhat surprising result is that simple string distance functions achieve a level of agreement with IdBench's similarity gold standards as high as some learned embeddings.", "The reason why string distance functions sometimes correctly identify semantic similarities is that some semantically similar identifiers are also be lexically similar.", "One downside of lexical approaches is that they miss synonymous identifiers, e.g., count and total.", "This paper presents the first benchmark for evaluating vector space embeddings of identifiers names, which are at the core of many machine learning models of source code.", "We compile thousands of ratings gathered from 500 developers into three benchmarks that provide gold standard similarity scores representing the relatedness, similarity, and contextual similarity of identifiers.", "Using IdBench to experimentally compare five embedding techniques and two string distance functions shows that these techniques differ significantly in their agreement with our gold standard.", "The best available embedding is very effective at representing how related identifiers are.", "However, all studied techniques show huge room for improvement in their ability to represent how similar identifiers are.", "IdBench will help steer future efforts on improved embeddings of identifiers, which will eventually enable better machine learning models of source code." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.25806450843811035, 0.3243243098258972, 0.11428570747375488, 0.3125, 0.1599999964237213, 0.1860465109348297, 0.1249999925494194, 0.21739129722118378, 0.2142857164144516, 0.06666666269302368, 0.21621620655059814, 0.34285715222358704, 0.4615384638309479, 0.1875, 0.20408162474632263, 0.12903225421905518, 0.12765957415103912, 0, 0.4000000059604645, 0.2222222238779068, 0.15789473056793213, 0.25, 0.260869562625885, 0.23999999463558197, 0.23999999463558197, 0.23529411852359772, 0.1538461446762085, 0.23529411852359772, 0.09999999403953552, 0.1666666567325592, 0.1702127605676651, 0.10526315122842789, 0.3199999928474426, 0.19354838132858276, 0.0714285671710968, 0.1538461446762085, 0.07999999821186066, 0.1538461446762085, 0, 0.37037035822868347, 0.13793103396892548, 0.3125, 0.1249999925494194, 0.1621621549129486, 0.1904761791229248, 0.0476190447807312, 0.045454543083906174, 0.1875, 0.2222222238779068, 0, 0.19354838132858276, 0.060606054961681366, 0, 0.07407406717538834, 0.2142857164144516, 0.1249999925494194, 0, 0.1666666567325592, 0.1666666567325592, 0.0624999962747097, 0.0714285671710968, 0.34285715222358704, 0.1111111044883728, 0.1111111044883728, 0.0833333283662796, 0.20689654350280762, 0.25806450843811035 ]
SklibJBFDB
true
[ "A benchmark to evaluate neural embeddings of identifiers in source code." ]
[ "Generative adversarial nets (GANs) are widely used to learn the data sampling process and their performance may heavily depend on the loss functions, given a limited computational budget.", "This study revisits MMD-GAN that uses the maximum mean discrepancy (MMD) as the loss function for GAN and makes two contributions.", "First, we argue that the existing MMD loss function may discourage the learning of fine details in data as it attempts to contract the discriminator outputs of real data.", "To address this issue, we propose a repulsive loss function to actively learn the difference among the real data by simply rearranging the terms in MMD.", "Second, inspired by the hinge loss, we propose a bounded Gaussian kernel to stabilize the training of MMD-GAN with the repulsive loss function.", "The proposed methods are applied to the unsupervised image generation tasks on CIFAR-10, STL-10, CelebA, and LSUN bedroom datasets.", "Results show that the repulsive loss function significantly improves over the MMD loss at no additional computational cost and outperforms other representative loss functions.", "The proposed methods achieve an FID score of 16.21 on the CIFAR-10 dataset using a single DCGAN network and spectral normalization.", "Generative adversarial nets (GANs) BID7 ) are a branch of generative models that learns to mimic the real data generating process.", "GANs have been intensively studied in recent years, with a variety of successful applications (Karras et al. (2018) ; Li et al. (2017b) ; Lai et al. (2017) ; Zhu et al. (2017) ; BID13 ).", "The idea of GANs is to jointly train a generator network that attempts to produce artificial samples, and a discriminator network or critic that distinguishes the generated samples from the real ones.", "Compared to maximum likelihood based methods, GANs tend to produce samples with sharper and more vivid details but require more efforts to train.Recent studies on improving GAN training have mainly focused on designing loss functions, network architectures and training procedures.", "The loss function, or simply loss, defines quantitatively the difference of discriminator outputs between real and generated samples.", "The gradients of loss functions are used to train the generator and discriminator.", "This study focuses on a loss function called maximum mean discrepancy (MMD), which is well known as the distance metric between two probability distributions and widely applied in kernel two-sample test BID8 ).", "Theoretically, MMD reaches its global minimum zero if and only if the two distributions are equal.", "Thus, MMD has been applied to compare the generated samples to real ones directly (Li et al. (2015) ; BID5 ) and extended as the loss function to the GAN framework recently (Unterthiner et al. (2018) ; Li et al. (2017a) ; ).In", "this paper, we interpret the optimization of MMD loss by the discriminator as a combination of attraction and repulsion processes, similar to that of linear discriminant analysis. We", "argue that the existing MMD loss may discourage the learning of fine details in data, as the discriminator attempts to minimize the within-group variance of its outputs for the real data. To", "address this issue, we propose a repulsive loss for the discriminator that explicitly explores the differences among real data. The", "proposed loss achieved significant improvements over the MMD loss on image generation tasks of four benchmark datasets, without incurring any additional computational cost. 
Furthermore", ", a bounded Gaussian kernel is proposed to stabilize the training of discriminator. As such, using", "a single kernel in MMD-GAN is sufficient, in contrast to a linear combination of kernels used in Li et al. (2017a) and . By using a single", "kernel, the computational cost of the MMD loss can potentially be reduced in a variety of applications.The paper is organized as follows. Section 2 reviews", "the GANs trained using the MMD loss (MMD-GAN) . We propose the repulsive", "loss for discriminator in Section 3, introduce two practical techniques to stabilize the training process in Section 4, and present the results of extensive experiments in Section 5. In the last section, we", "discuss the connections between our model and existing work.", "This study extends the previous work on MMD-GAN (Li et al. (2017a) ) with two contributions.", "First, we interpreted the optimization of MMD loss as a combination of attraction and repulsion processes, and proposed a repulsive loss for the discriminator that actively learns the difference among real data.", "Second, we proposed a bounded Gaussian RBF (RBF-B) kernel to address the saturation issue.", "Empirically, we observed that the repulsive loss may result in unstable training, due to factors including initialization (Appendix A.2), learning rate ( FIG7 and Lipschitz constraints on the discriminator (Appendix C.3).", "The RBF-B kernel managed to stabilize the MMD-GAN training in many cases.", "Tuning the hyper-parameters in RBF-B kernel or using other regularization methods may further improve our results.The theoretical advantages of MMD-GAN require the discriminator to be injective.", "The proposed repulsive loss (Eq. 4) attempts to realize this by explicitly maximizing the pair-wise distances among the real samples.", "Li et al. (2017a) achieved the injection property by using the discriminator as the encoder and an auxiliary network as the decoder to reconstruct the real and generated samples, which is more computationally extensive than our proposed approach.", "On the other hand, ; imposed a Lipschitz constraint on the discriminator in MMD-GAN via gradient penalty, which may not necessarily promote an injective discriminator.The idea of repulsion on real sample scores is in line with existing studies.", "It has been widely accepted that the quality of generated samples can be significantly improved by integrating labels (Odena et al. (2017); Miyato & Koyama (2018) ; Zhou et al. (2018) ) or even pseudo-labels generated by k-means method BID9 ) in the training of discriminator.", "The reason may be that the labels help concentrate the data from the same class and separate those from different classes.", "Using a pre-trained classifier may also help produce vivid image samples BID14 ) as the learned representations of the real samples in the hidden layers of the classifier tend to be well separated/organized and may produce more meaningful gradients to the generator.At last, we note that the proposed repulsive loss is orthogonal to the GAN studies on designing network structures and training procedures, and thus may be combined with a variety of novel techniques.", "For example, the ResNet architecture BID11 ) has been reported to outperform the plain DCGAN used in our experiments on image generation tasks (Miyato et al. (2018) ; BID10 ) and self-attention module may further improve the results (Zhang et al. (2018) ).", "On the other hand, Karras et al. 
(2018) proposed to progressively grow the size of both the discriminator and the generator and achieved state-of-the-art performance on unsupervised training of GANs on the CIFAR-10 dataset.", "Future work may explore these directions.", "This section shows that a constant discriminator output D(x) = c may have no discrimination power.", "First, we make the following assumptions: Assumption", "3.", "1. D is a multilayer perceptron where each layer l can be factorized into an affine transform and an element-wise activation function f_l.", "2. Each activation function f_l ∈ C^0; furthermore, f_l' has a finite number of discontinuities and f_l' ∈ C^0", ". 3. Input data to D is continuous and its support S is compact in R^d with non-zero measure in each dimension and d > 1. Based on Assumption 3, we have the following proposition: Proposition", "2. If ∀x ∈ S, D(x) = c, where c is constant, then there always exists a distortion δx such that x + δx ∈ S and D(x + δx) = c." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.21739129722118378, 0.3589743673801422, 0.27272728085517883, 0.2790697515010834, 0.25, 0.052631575614213943, 0.14999999105930328, 0.1463414579629898, 0.29999998211860657, 0.13333332538604736, 0.17391303181648254, 0.07407406717538834, 0.21621620655059814, 0.25, 0.307692289352417, 0.05882352590560913, 0.11538460850715637, 0.22727271914482117, 0.260869562625885, 0.2631579041481018, 0.1428571343421936, 0.22857142984867096, 0.14999999105930328, 0.23255813121795654, 0.13793103396892548, 0.260869562625885, 0.0714285671710968, 0.05714285373687744, 0.2666666507720947, 0.12121211737394333, 0.1599999964237213, 0.12903225421905518, 0.17777776718139648, 0.10526315122842789, 0.07843136787414551, 0.18518517911434174, 0.14035087823867798, 0.05405404791235924, 0.13333332538604736, 0.072727270424366, 0.1304347813129425, 0, 0.0624999962747097, 0.07999999821186066, 0.09756097197532654, 0.15789473056793213, 0.07843136787414551, 0 ]
HygjqjR9Km
true
[ "Rearranging the terms in maximum mean discrepancy yields a much better loss function for the discriminator of generative adversarial nets" ]
[ "Deep neural networks have shown incredible performance for inference tasks in a variety of domains.", "Unfortunately, most current deep networks are enormous cloud-based structures that require significant storage space, which limits scaling of deep learning as a service (DLaaS) and use for on-device augmented intelligence. ", "This paper finds algorithms that directly use lossless compressed representations of deep feedforward networks (with synaptic weights drawn from discrete sets), to perform inference without full decompression.", "The basic insight that allows less rate than naive approaches is the recognition that the bipartite graph layers of feedforward networks have a kind of permutation invariance to the labeling of nodes, in terms of inferential operation and that the inference operation depends locally on the edges directly connected to it.", "We also provide experimental results of our approach on the MNIST dataset.", "Deep learning has achieved incredible performance for inference tasks such as speech recognition, image recognition, and natural language processing.", "Most current deep neural networks, however, are enormous cloud-based structures that are too large and too complex to perform fast, energyefficient inference on device or for scaling deep learning as a service (DLaaS).", "Compression, with the capability of providing inference without full decompression, is important.", "Universal source coding for feedforward deep networks having synaptic weights drawn from finite sets that essentially achieve the entropy lower bound were introduced in BID0 .", "Here, we provide-for the first time-an algorithm that directly uses these compressed representations for inference tasks without complete decompression.", "Structures that can represent information near the entropy bound while also allowing efficient operations on them are called succinct structures (2; 3; 4).", "Thus, we provide a succinct structure for feedforward neural networks, which may fit on-device and enable scaling of DLaaS.Related Work: There has been recent interest in compact representations of neural networks (5; 6; 7; 8; 9; 10; 11; 12; 13; 14) .", "While most of these algorithms are lossy, we provide an efficient lossless algorithm, which can be used on top of any lossy algorithm that quantizes or prunes network weights; prior work on lossless compression of neural networks either used Huffman coding in a way that did not exploit invariances or was not succinct and required full decompression for inference.", "The proposed algorithm builds on the sublinear entropy-achieving representation in (1) but is the first time succinctness-the further ability to perform inference with negligible space needed for partial decompression-has been attempted or achieved.", "Our inference algorithm is similar to arithmetic decoding and so computational performance is also governed by efficient implementations of arithmetic coding.", "Efficient high-throughput implementations of arithmetic coding/decoding have been developed for video, e.g. as part of the H.264/AVC and HEVC standards (15; 16)." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11428570747375488, 0.1599999964237213, 0.8085106611251831, 0.20000000298023224, 0.0624999962747097, 0.052631575614213943, 0.23999999463558197, 0.25, 0.13333332538604736, 0.3589743673801422, 0.04651162400841713, 0.13333332538604736, 0.19718308746814728, 0.11538460850715637, 0.1538461446762085, 0.04651162400841713 ]
HJgPAWD-iX
true
[ "This paper finds algorithms that directly use lossless compressed representations of deep feedforward networks, to perform inference without full decompression." ]
[ "Generative adversarial networks (GANs) form a generative modeling approach known for producing appealing samples, but they are notably difficult to train.", "One common way to tackle this issue has been to propose new formulations of the GAN objective.", "Yet, surprisingly few studies have looked at optimization methods designed for this adversarial training.", "In this work, we cast GAN optimization problems in the general variational inequality framework.", "Tapping into the mathematical programming literature, we counter some common misconceptions about the difficulties of saddle point optimization and propose to extend methods designed for variational inequalities to the training of GANs.", "We apply averaging, extrapolation and a computationally cheaper variant that we call extrapolation from the past to the stochastic gradient method (SGD) and Adam.", "Generative adversarial networks (GANs) BID12 ) form a generative modeling approach known for producing realistic natural images (Karras et al., 2018) as well as high quality super-resolution (Ledig et al., 2017) and style transfer (Zhu et al., 2017) .", "Nevertheless, GANs are also known to be difficult to train, often displaying an unstable behavior BID11 .", "Much recent work has tried to tackle these training difficulties, usually by proposing new formulations of the GAN objective (Nowozin et al., 2016; .", "Each of these formulations can be understood as a two-player game, in the sense of game theory (Von Neumann and Morgenstern, 1944) , and can be addressed as a variational inequality problem (VIP) BID15 , a framework that encompasses traditional saddle point optimization algorithms (Korpelevich, 1976) .Solving", "such GAN games is traditionally approached by running variants of stochastic gradient descent (SGD) initially developed for optimizing supervised neural network objectives. Yet it", "is known that for some games (Goodfellow, 2016, §8. 2) SGD exhibits oscillatory behavior and fails to converge. This oscillatory", "behavior, which does not arise from stochasticity, highlights a fundamental problem: while a direct application of basic gradient descent is an appropriate method for regular minimization problems, it is not a sound optimization algorithm for the kind of two-player games of GANs. This constitutes", "a fundamental issue for GAN training, and calls for the use of more principled methods with more reassuring convergence guarantees.Contributions. We point out that", "multi-player games can be cast as variational inequality problems (VIPs) and consequently the same applies to any GAN formulation posed as a minimax or non-zerosum game. We present two techniques", "from this literature, namely averaging and extrapolation, widely used to solve VIPs but which have not been explored in the context of GANs before. 1 We extend standard GAN", "training methods such as SGD or Adam into variants that incorporate these techniques (Alg. 4 is new). We also explain that the", "oscillations of basic SGD for GAN training previously noticed BID11 can be explained by standard variational inequality optimization results and we illustrate how averaging and extrapolation can fix this issue.We introduce a technique, called extrapolation from the past, that only requires one gradient computation per update compared to extrapolation which requires to compute the gradient twice, rediscovering, with a VIP perspective, a particular case of optimistic mirror descent (Rakhlin and Sridharan, 2013) . 
We prove its convergence", "for strongly monotone operators and in the stochastic VIP setting.Finally, we test these techniques in the context of GAN training. We observe a 4-6% improvement", "over Miyato et al. (2018) on the inception score and the Fréchet inception distance on the CIFAR-10 dataset using a WGAN-GP BID14 ) and a ResNet generator.", "We newly addressed GAN objectives in the framework of variational inequality.", "We tapped into the optimization literature to provide more principled techniques to optimize such games.", "We leveraged these techniques to develop practical optimization algorithms suitable for a wide range of GAN training objectives (including non-zero sum games and projections onto constraints).", "We experimentally verified that this could yield better trained models, improving the previous state of the art.", "The presented techniques address a fundamental problem in GAN training in a principled way, and are orthogonal to the design of new GAN architectures and objectives.", "They are thus likely to be widely applicable, and benefit future development of GANs." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0833333283662796, 0.1395348757505417, 0.1463414579629898, 0.39024388790130615, 0.290909081697464, 0.25, 0.06666666269302368, 0.0952380895614624, 0.11764705181121826, 0.1818181723356247, 0.039215680211782455, 0.1304347813129425, 0.1249999925494194, 0.1599999964237213, 0.2857142686843872, 0.2857142686843872, 0.1666666567325592, 0.24175824224948883, 0.3529411852359772, 0.0833333283662796, 0.31578946113586426, 0.2926829159259796, 0.22641508281230927, 0.1395348757505417, 0.2448979616165161, 0.1463414579629898 ]
r1laEnA5Ym
true
[ "We cast GANs in the variational inequality framework and import techniques from this literature to optimize GANs better; we give algorithmic extensions and empirically test their performance for training GANs." ]
[ "In order to efficiently learn with small amount of data on new tasks, meta-learning transfers knowledge learned from previous tasks to the new ones.", "However, a critical challenge in meta-learning is the task heterogeneity which cannot be well handled by traditional globally shared meta-learning methods.", "In addition, current task-specific meta-learning methods may either suffer from hand-crafted structure design or lack the capability to capture complex relations between tasks.", "In this paper, motivated by the way of knowledge organization in knowledge bases, we propose an automated relational meta-learning (ARML) framework that automatically extracts the cross-task relations and constructs the meta-knowledge graph.", "When a new task arrives, it can quickly find the most relevant structure and tailor the learned structure knowledge to the meta-learner.", "As a result, the proposed framework not only addresses the challenge of task heterogeneity by a learned meta-knowledge graph, but also increases the model interpretability.", "We conduct extensive experiments on 2D toy regression and few-shot image classification and the results demonstrate the superiority of ARML over state-of-the-art baselines.", "Learning quickly is the key characteristic of human intelligence, which remains a daunting problem in machine intelligence.", "The mechanism of meta-learning is widely used to generalize and transfer prior knowledge learned from previous tasks to improve the effectiveness of learning on new tasks, which has benefited various applications, such as computer vision (Kang et al., 2019; , natural language processing (Gu et al., 2018; Lin et al., 2019) and social good (Zhang et al., 2019; Yao et al., 2019a) .", "Most of existing meta-learning algorithms learn a globally shared meta-learner (e.g., parameter initialization (Finn et al., 2017; , meta-optimizer (Ravi & Larochelle, 2016) , metric space (Snell et al., 2017; Garcia & Bruna, 2017; Oreshkin et al., 2018) ).", "However, globally shared meta-learners fail to handle tasks lying in different distributions, which is known as task heterogeneity (Vuorio et al., 2018; Yao et al., 2019b) .", "Task heterogeneity has been regarded as one of the most challenging issues in meta-learning, and thus it is desirable to design meta-learning models that effectively optimize each of the heterogeneous tasks.", "The key challenge to deal with task heterogeneity is how to customize globally shared meta-learner by using task-specific information?", "Recently, a handful of works try to solve the problem by learning a task-specific representation for tailoring the transferred knowledge to each task (Oreshkin et al., 2018; Vuorio et al., 2018; Lee & Choi, 2018) .", "However, the expressiveness of these methods is limited due to the impaired knowledge generalization between highly related tasks.", "Recently, learning the underlying structure across tasks provides a more effective way for balancing the customization and generalization.", "Representatively, Yao et al. 
propose a hierarchically structured meta-learning method to customize the globally shared knowledge to each cluster (Yao et al., 2019b) .", "Nonetheless, the hierarchical clustering structure completely relies on the handcrafted design which needs to be tuned carefully and may lack the capability to capture complex relationships.", "Hence, we are motivated to propose a framework to automatically extract underlying relational structures from historical tasks and leverage those relational structures to facilitate knowledge customization on a new task.", "This inspiration comes from the way of structuring knowledge in knowledge bases (i.e., knowledge graphs).", "In knowledge bases, the underlying relational structures across text entities are automatically constructed and applied to a new query to improve the searching efficiency.", "In the meta-learning problem, similarly, we aim at automatically establishing the metaknowledge graph between prior knowledge learned from previous tasks.", "When a new task arrives, it queries the meta-knowledge graph and quickly attends to the most relevant entities (vertices), and then takes advantage of the relational knowledge structures between them to boost the learning effectiveness with the limited training data.", "The proposed meta-learning framework is named as Automated Relational Meta-Learning (ARML).", "Specifically, the ARML automatically builds the meta-knowledge graph from meta-training tasks to memorize and organize learned knowledge from historical tasks, where each vertex represents one type of meta-knowledge (e.g., the common contour between birds and aircrafts).", "To learn the meta-knowledge graph at meta-training time, for each task, we construct a prototype-based relational graph for each class, where each vertex represents one prototype.", "The prototype-based relational graph not only captures the underlying relationship behind samples, but alleviates the potential effects of abnormal samples.", "The meta-knowledge graph is then learned by summarizing the information from the corresponding prototype-based relational graphs of meta-training tasks.", "After constructing the meta-knowledge graph, when a new task comes in, the prototype-based relational graph of the new task taps into the meta-knowledge graph for acquiring the most relevant knowledge, which further enhances the task representation and facilitates its training process.", "Our major contributions of the proposed ARML are three-fold: (1) it automatically constructs the meta-knowledge graph to facilitate learning a new task; (2) it empirically outperforms the state-ofthe-art meta-learning algorithms; (3) the meta-knowledge graph well captures the relationship among tasks and improves the interpretability of meta-learning algorithms.", "In this paper, to improve the effectiveness of meta-learning for handling heterogeneous task, we propose a new framework called ARML, which automatically extract relation across tasks and construct a meta-knowledge graph.", "When a new task comes in, it can quickly find the most relevant relations through the meta-knowledge graph and use this knowledge to facilitate its training process.", "The experiments demonstrate the effectiveness of our proposed algorithm.", "In the future, we plan to investigate the problem in the following directions: (1) we are interested to investigate the more explainable semantic meaning in the meta-knowledge graph on this problem; (2) Figure 3 : Interpretation of meta-knowledge graph on Art-Multi dataset.", "For 
each subdataset, we randomly select one task from it.", "On the left, we show the similarity heatmap between prototypes (P0-P5) and meta-knowledge vertices (V0-V7).", "On the right, we show the meta-knowledge graph.", "we plan to extend the ARML to the continual learning scenario where the structure of the meta-knowledge graph will change over time; (3) our proposed model focuses on tasks where the feature space and the label space are shared.", "We plan to explore the relational structure on tasks with different feature and label spaces.", "In this dataset, we use pencil and blur filters to change the task distribution.", "To investigate the effect of pencil and blur filters, we provide one example in Figure 4.", "We can observe that different filters result in different data distributions.", "All filters used are provided by OpenCV." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0624999962747097, 0.3333333432674408, 0.060606054961681366, 0.25641024112701416, 0.06896550953388214, 0.25, 0, 0.14814814925193787, 0.03333333134651184, 0.0476190447807312, 0.17142856121063232, 0.1538461446762085, 0.2142857164144516, 0.14999999105930328, 0, 0, 0.06451612710952759, 0, 0.05714285373687744, 0.07999999821186066, 0, 0.13793103396892548, 0.13636362552642822, 0.0952380895614624, 0.09302325546741486, 0.1249999925494194, 0.06896550953388214, 0.2142857164144516, 0.1463414579629898, 0.12765957415103912, 0.14999999105930328, 0.1666666567325592, 0, 0.19512194395065308, 0.09999999403953552, 0.0833333283662796, 0.2222222238779068, 0.09756097197532654, 0, 0.0833333283662796, 0.07692307233810425, 0.09999999403953552, 0.1111111044883728 ]
rklp93EtwH
true
[ "Addressing task heterogeneity problem in meta-learning by introducing meta-knowledge graph" ]
[ "In this paper, a deep boosting algorithm is developed to\n", "learn more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs (base experts)\n", "with diverse capabilities, e.g., these base deep CNNs are\n", "sequentially trained to recognize a set of \n", "object classes in an easy-to-hard way according to their\n", "learning complexities.", "Our experimental results have demonstrated\n", "that our deep boosting algorithm can significantly improve the\n", "accuracy rates on large-scale visual recognition.", "The rapid growth of computational powers of GPUs has provided good opportunities for us to develop scalable learning algorithms to leverage massive digital images to train more discriminative classifiers for large-scale visual recognition applications, and deep learning BID19 BID20 BID3 has demonstrated its outstanding performance because highly invariant and discriminant features and multi-way softmax classifier are learned jointly in an end-to-end fashion.Before deep learning becomes so popular, boosting has achieved good success on visual recognition BID21 .", "By embedding multiple weak learners to construct an ensemble one, boosting BID15 can significantly improve the performance by sequentially training multiple weak learners with respect to a weighted error function which assigns larger weights to the samples misclassified by the previous weak learners.", "Thus it is very attractive to invest whether boosting can be integrated with deep learning to achieve higher accuracy rates on large-scale visual recognition.By using neural networks to replace the traditional weak learners in the boosting frameworks, boosting of neural networks has received enough attentions BID23 BID10 BID7 BID9 .", "All these existing deep boosting algorithms simply use the weighted error function (proposed by Adaboost (Schapire, 1999) ) to replace the softmax error function (used in deep learning ) that treats all the errors equally.", "Because different object classes may have different learning complexities, it is more attractive to invest new deep boosting algorithm that can use different weights over various object classes rather than over different training samples.Motivated by this observation, a deep boosting algorithm is developed to generate more discriminative ensemble classifier by combining a set of base deep CNNs with diverse capabilities, e.g., all these base deep CNNs (base experts) are sequentially trained to recognize different subsets of object classes in an easy-to-hard way according to their learning complexities.", "The rest of the paper is organized as: Section 2 briefly reviews the related work; Section 3 introduce our deep boosting algorithm; Section 4 reports our experimental results; and we conclude this paper at Section 5.", "In this paper, we develop a deep boosting algorithm is to learn more discriminative ensemble classifier by combining a set of base experts with diverse capabilities.", "The base experts are from the family of deep CNNs and they are sequentially trained to recognize a set of object classes in an easy-to-hard way according to their learning complexities.", "As for the future network, we would like to investigate the performance of heterogeneous base deep networks from different families." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.46666666865348816, 0.7777777910232544, 0.19354838132858276, 0.29629629850387573, 0.06896550953388214, 0, 0.20689654350280762, 0, 0.16867469251155853, 0.18867923319339752, 0.158730149269104, 0.16326530277729034, 0.4000000059604645, 0.1599999964237213, 0.7111111283302307, 0.2916666567325592, 0.20512819290161133 ]
B16_iGWCW
true
[ " A deep boosting algorithm is developed to learn more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs." ]
[ "We present a method for translating music across musical instruments and styles.", "This method is based on unsupervised training of a multi-domain wavenet autoencoder, with a shared encoder and a domain-independent latent space that is trained end-to-end on waveforms.", "Employing a diverse training dataset and large net capacity, the single encoder allows us to translate also from musical domains that were not seen during training.", "We evaluate our method on a dataset collected from professional musicians, and achieve convincing translations.", "We also study the properties of the obtained translation and demonstrate translating even from a whistle, potentially enabling the creation of instrumental music by untrained humans.", "Humans have always created music and replicated it -whether it is by singing, whistling, clapping, or, after some training, playing improvised or standard musical instruments.", "This ability is not unique to us, and there are many other vocal mimicking species that are able to repeat music from hearing.", "Music is also one of the first domains to be digitized and processed by modern computers and algorithms.", "It is, therefore, somewhat surprising that in the core music task of mimicry, AI is still much inferior to biological systems.In this work, we present a novel way to produce convincing musical translation between instruments and styles.", "For example 1 , we convert the audio of a Mozart symphony performed by an orchestra to an audio in the style of a pianist playing Beethoven.", "Our ability builds upon two technologies that have recently become available:", "(i) the ability to synthesize high quality audio using autoregressive models, and", "(ii) the recent advent of methods that transform between domains in an unsupervised way.", "The first technology allows us to generate high quality and realistic audio and thanks to the teacher forcing technique, autoregressive models are efficiently trained as decoders.", "The second family of technologies contributes to the practicality of the solution, since posing the learning problem in the supervised setting, would require a parallel dataset of different musical instruments.In our architecture, we employ a single, universal, encoder and apply it to all inputs (universal here means that a single encoder can address all input music, allowing us to achieve capabilities that are known as universal translation).", "In addition to the advantage of training fewer networks, this also enables us to convert from musical domains that were not heard during training to any of the domains encountered.The key to being able to train a single encoder architecture, is making sure that the domain-specific information is not encoded.", "We do this using a domain confusion network that provides an adversarial signal to the encoder.", "In addition, it is important for the encoder not to memorize the input signal but to encode it in a semantic way.", "We achieve this by distorting the input audio by random local pitch modulation.", "During training, the network is trained as a denoising autoencoder, which recovers the undistorted version of the original input.", "Since the distorted input is no longer in the musical domain of the output, the network learns to project out-of-domain inputs to the desired output domain.", "In addition, the network no longer benefits from memorizing the input signal and employs a higher-level encoding.Asked to convert one musical instrument to another, our network shows a level of performance that 
seems to approach that of musicians.", "When controlling for audio quality, which is still lower for generated music, it is many times hard to tell which is the original audio file and which is the output of the conversion that mimics a completely different instrument.", "The network is also able to successfully process unseen musical instruments such as drums, or other sources, such as whistles.", "Our work demonstrates capabilities in music conversion, which is a high-level task (a terminology that means that they are more semantic than low-level audio processing tasks), and could open the door to other high-level tasks, such as composition.", "We have initial results that we find interesting: by reducing the size of the latent space, the decoders become more \"creative\" and produce outputs that are natural yet novel, in the sense that the exact association with the original input is lost." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.5454545617103577, 0.12121211737394333, 0.05714285373687744, 0.1599999964237213, 0.12121211737394333, 0.1764705777168274, 0.12903225421905518, 0.07407406717538834, 0.21276594698429108, 0, 0, 0.09090908616781235, 0.0833333283662796, 0.05882352590560913, 0.060606058686971664, 0, 0, 0.06896550953388214, 0, 0, 0, 0.0476190447807312, 0.09999999403953552, 0.0714285671710968, 0.08695651590824127, 0.04444444179534912 ]
HJGkisCcKm
true
[ "An automatic method for converting music between instruments and styles" ]
[ "Most existing defenses against adversarial attacks only consider robustness to L_p-bounded distortions.", "In reality, the specific attack is rarely known in advance and adversaries are free to modify images in ways which lie outside any fixed distortion model; for example, adversarial rotations lie outside the set of L_p-bounded distortions.", "In this work, we advocate measuring robustness against a much broader range of unforeseen attacks, attacks whose precise form is unknown during defense design.\n\n", "We propose several new attacks and a methodology for evaluating a defense against a diverse range of unforeseen distortions.", "First, we construct novel adversarial JPEG, Fog, Gabor, and Snow distortions to simulate more diverse adversaries.", "We then introduce UAR, a summary metric that measures the robustness of a defense against a given distortion. ", "Using UAR to assess robustness against existing and novel attacks, we perform an extensive study of adversarial robustness.", "We find that evaluation against existing L_p attacks yields redundant information which does not generalize to other attacks; we instead recommend evaluating against our significantly more diverse set of attacks.", "We further find that adversarial training against either one or multiple distortions fails to confer robustness to attacks with other distortion types. ", "These results underscore the need to evaluate and study robustness against unforeseen distortions.", "Neural networks perform well on many benchmark tasks (He et al., 2016) yet can be fooled by adversarial examples (Goodfellow et al., 2014) or inputs designed to subvert a given model.", "Adversaries are usually assumed to be constrained by an L ∞ budget (Goodfellow et al., 2014; Madry et al., 2017; Xie et al., 2018) , while other modifications such as adversarial geometric transformations, patches, and even 3D-printed objects have also been considered (Engstrom et al., 2017; Brown et al., 2017; Athalye et al., 2017) .", "However, most work on adversarial robustness assumes that the adversary is fixed and known in advance.", "Defenses against adversarial attacks are often constructed in view of this specific assumption (Madry et al., 2017) .", "In practice, adversaries can modify and adapt their attacks so that they are unforeseen.", "In this work, we propose novel attacks which enable the diverse assessment of robustness to unforeseen attacks.", "Our attacks are varied ( §2) and qualitatively distinct from current attacks.", "We propose adversarial JPEG, Fog, Gabor, and Snow attacks (sample images in Figure 1 ).", "We propose an unforeseen attack evaluation methodology ( §3) that involves evaluating a defense against a diverse set of held-out distortions decoupled from the defense design.", "For a fixed, held-out distortion, we then evaluate the defense against the distortion for a calibrated range of distortion sizes whose strength is roughly comparable across distortions.", "For each fixed distortion, we summarize the robustness of a defense against that distortion relative to a model adversarially trained on that distortion, a measure we call UAR.", "We provide code and calibrations to easily evaluate a defense against our suite of attacks at https://github.com/iclr-2020-submission/ advex-uar.", "By applying our method to 87 adversarially trained models and 8 different distortion types ( §4), we find that existing defenses and evaluation practices have marked weaknesses.", "Our results show", "New Attacks JPEG Fog Gabor Snow", "Figure 
1: Attacked images (label \"espresso maker\") against adversarially trained models with large ε.", "Each of the adversarial images above are optimized to maximize the classification loss.", "that existing defenses based on adversarial training do not generalize to unforeseen adversaries, even when restricted to the 8 distortions in Figure 1 .", "This adds to the mounting evidence that achieving robustness against a single distortion type is insufficient to impart robustness to unforeseen attacks (Jacobsen et al., 2019; Jordan et al., 2019; Tramèr & Boneh, 2019) .", "Turning to evaluation, our results demonstrate that accuracy against different L p distortions is highly correlated relative to the other distortions we consider.", "This suggest that the common practice of evaluating only against L p distortions to test a model's adversarial robustness can give a misleading account.", "Our analysis demonstrates that our full suite of attacks adds substantive attack diversity and gives a more complete picture of a model's robustness to unforeseen attacks.", "A natural approach is to defend against multiple distortion types simultaneously in the hope that seeing a larger space of distortions provides greater transfer to unforeseen distortions.", "Unfortunately, we find that defending against even two different distortion types via joint adversarial training is difficult ( §5).", "Specifically, joint adversarial training leads to overfitting at moderate distortion sizes.", "In summary, we propose a metric UAR to assess robustness of defenses against unforeseen adversaries.", "We introduce a total of 4 novel attacks.", "We apply UAR to assess how robustness transfers to existing attacks and our novel attacks.", "Our results demonstrate that existing defense and evaluation methods do not generalize well to unforeseen attacks.", "We have seen that robustness to one attack provides limited information about robustness to other attacks, and moreover that adversarial training provides limited robustness to unforeseen attacks.", "These results suggest a need to modify or move beyond adversarial training.", "While joint adversarial training is one possible alternative, our results show it often leads to overfitting.", "Even ignoring this, it is not clear that joint training would confer robustness to attacks outside of those trained against.", "Evaluating robustness has proven difficult, necessitating detailed study of best practices even for a single fixed attack (Papernot et al., 2017; Athalye et al., 2018) .", "We build on these best practices by showing how to choose and calibrate a diverse set of unforeseen attacks.", "Our work is a supplement to existing practices, not a replacement-we strongly recommend following the guidelines in Papernot et al. (2017) and Athalye et al. 
(2018) in addition to our recommendations.", "Some caution is necessary when interpreting specific numeric results in our paper.", "Many previous implementations of adversarial training fell prone to gradient masking (Papernot et al., 2017; Engstrom et al., 2018) , with apparently successful training occurring only recently (Madry et al., 2017; Xie et al., 2018) .", "While evaluating with moderately many PGD steps (200) helps guard against this, (Qian & Wegman, 2019) shows that an L ∞ -trained model that appeared robust against L 2 actually had substantially less robustness when evaluating with 10 6 PGD steps.", "If this effect is pervasive, then there may be even less transfer between attacks than our current results suggest.", "For evaluating against a fixed attack, DeepFool Moosavi-Dezfooli et al. (2015) and CLEVER Weng et al. (2018) can be seen as existing alternatives to UAR.", "They work by estimating \"empirical robustness\", which is the expected minimum ε needed to successfully attack an image.", "However, these apply only to attacks which optimize over an L p -ball of radius ε, and CLEVER can be susceptible to gradient masking Goodfellow (2018).", "In addition, empirical robustness is equivalent to linearly averaging accuracy over ε, which has smaller dynamic range than the geometric average in UAR.", "Our results add to a growing line of evidence that evaluating against a single known attack type provides a misleading picture of the robustness of a model (Sharma & Chen, 2017; Engstrom et al., 2017; Jordan et al., 2019; Tramèr & Boneh, 2019; Jacobsen et al., 2019) .", "Going one step further, we believe that robustness itself provides only a narrow window into model behavior; in addition to robustness, we should seek to build a diverse toolbox for understanding machine learning models, including visualization (Olah et al., 2018; Zhang & Zhu, 2019) , disentanglement of relevant features (Geirhos et al., 2018) , and measurement of extrapolation to different datasets (Torralba & Efros, 2011) or the long tail of natural but unusual inputs (Hendrycks et al., 2019) .", "Together, these windows into model behavior can give us a clearer picture of how to make models reliable in the real world." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.38461539149284363, 0.12765957415103912, 0.25641024112701416, 0.6451612710952759, 0.19999998807907104, 0.25806450843811035, 0.32258063554763794, 0.1904761791229248, 0.3333333432674408, 0.37037035822868347, 0.13636362552642822, 0.1071428507566452, 0.19999998807907104, 0.1875, 0.2142857164144516, 0.3333333134651184, 0.1599999964237213, 0.3448275923728943, 0.31578946113586426, 0.10526315122842789, 0.2702702581882477, 0.3636363446712494, 0.09999999403953552, 0, 0, 0.0714285671710968, 0.1538461446762085, 0.1666666567325592, 0.2790697515010834, 0.11428570747375488, 0.2702702581882477, 0.3243243098258972, 0.20512820780277252, 0.12121211737394333, 0.1599999964237213, 0.41379308700561523, 0.27272728085517883, 0.37037035822868347, 0.2666666507720947, 0.4117647111415863, 0.23076923191547394, 0.13333332538604736, 0.23529411852359772, 0.10526315122842789, 0.3636363446712494, 0.14999999105930328, 0, 0.09999999403953552, 0.0833333283662796, 0.060606054961681366, 0.21621620655059814, 0.0624999962747097, 0.1538461446762085, 0.10810810327529907, 0.1599999964237213, 0.1012658178806305, 0.1111111044883728 ]
Hyl5V0EYvB
true
[ "We propose several new attacks and a methodology to measure robustness against unforeseen adversarial attacks." ]
[ "Deep neural networks (DNNs) have witnessed as a powerful approach in this year by solving long-standing Artificial\n", "intelligence (AI) supervised and unsupervised tasks exists in natural language processing, speech processing, computer vision and others.", "In this paper, we attempt to apply DNNs on three different cyber security use cases: Android malware classification, incident detection and fraud detection.", "The data set of each use case contains real known benign and malicious activities samples.", "These use cases are part of Cybersecurity Data Mining Competition (CDMC) 2017.", "The efficient network architecture for DNNs is chosen by conducting various trails of experiments for network parameters and network structures.", "The experiments of such chosen efficient configurations of DNNs are run up to 1000 epochs with learning rate set in the range [0.01-0.5].", "Experiments of DNNs performed well in comparison to the classical machine learning algorithm in all cases of experiments of cyber security use cases.", "This is due to the fact that DNNs implicitly extract and build better features, identifies the characteristics of the data that lead to better accuracy.", "The best accuracy obtained by DNNs and XGBoost on Android malware classification 0.940 and 0.741, incident detection 1.00 and 0.997, and fraud detection 0.972 and 0.916 respectively.", "The accuracy obtained by DNNs varies -0.05%, +0.02%, -0.01% from the top scored system in CDMC 2017 tasks.", "In this era of technical modernization, explosion of new opportunities and efficient potential resources for organizations have emerged but at the same time these technologies have resulted in threats to the economy.", "In such a scenario proper security measures plays a major role.", "Now days, hacking has become a common practice in organizations in order to steal data and information.", "This highlights the need for an efficient system to detect and prevent the fraudulent activities.", "cyber security is all about the protection of systems, networks and data in the cyberspace.Malware remains one of the maximum enormous security threats on the Internet.", "Malware are the softwares which indicate malicious activity of the file or programs.", "These are unwanted programs since they cause harm to the intended use of the system by making it behave in a very different manner than it is supposed to behave.", "Solutions with Antivirus and blacklists are used as the primary weapons of resistance against these malwares.", "Both approaches are not effective.", "This can only be used as an initial shelter in real time malware detection system.", "This is primarily due to the fact that both approaches completely fails in detecting the new malware that is created using polymorphic, metamorphic, domain flux and IP flux.Machine learning algorithms have played a pivotal role in several use cases of cyber security BID0 .", "Fortunately, deep learning approaches are prevailing subject in recent days due to the remarkable performance in various long-standing artificial intelligence (AI) supervised and unsupervised challenges BID1 .", "This paper evaluates the effectiveness of deep neural networks (DNNs) for cyber security use cases: Android malware classification, incident detection and fraud detection.The paper is structured as follows.", "Section II discusses the related work.", "Section III discusses the background knowledge of deep neural networks (DNNs).", "Section IV presents the proposed methodology including the description of the 
data set.", "Results are displayed in Section V. Conclusion is placed in Section VI.", "This paper has evaluated the performance of deep neural networks (DNNs) for cyber security uses cases: Android malware classification, incident detection and fraud detection.", "Additionally, other classical machine learning classifier is used.", "In all cases, the performance of DNNs is good in comparison to the classical machine learning classifier.", "Moreover, the same architecture is able to perform better than the other classical machine learning classifier in all use cases.", "The reported results of DNNs can be further improved by promoting training or stacking a few more layer to the existing architectures.", "This will be remained as one of the direction towards the future work." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.07692307233810425, 0, 0, 0, 0, 0.07692307233810425, 0, 0, 0, 0, 0, 0.052631575614213943, 0, 0, 0.08695651590824127, 0, 0, 0, 0, 0, 0, 0, 0, 0.0555555522441864, 0, 0, 0, 0, 0.0624999962747097, 0, 0, 0, 0, 0 ]
rkeeQmm0cX
true
[ "Deep-Net: Deep Neural Network for Cyber Security Use Cases" ]
[ "In this paper, we present an approach to learn recomposable motor primitives across large-scale and diverse manipulation demonstrations.", "Current approaches to decomposing demonstrations into primitives often assume manually defined primitives and bypass the difficulty of discovering these primitives.", "On the other hand, approaches in primitive discovery put restrictive assumptions on the complexity of a primitive, which limit applicability to narrow tasks.", "Our approach attempts to circumvent these challenges by jointly learning both the underlying motor primitives and recomposing these primitives to form the original demonstration.", "Through constraints on both the parsimony of primitive decomposition and the simplicity of a given primitive, we are able to learn a diverse set of motor primitives, as well as a coherent latent representation for these primitives.", "We demonstrate, both qualitatively and quantitatively, that our learned primitives capture semantically meaningful aspects of a demonstration.", "This allows us to compose these primitives in a hierarchical reinforcement learning setup to efficiently solve robotic manipulation tasks like reaching and pushing.", "We have seen impressive progress over the recent years in learning based approaches to perform a plethora of manipulation tasks Andrychowicz et al., 2018; Pinto & Gupta, 2016; Agrawal et al., 2016) .", "However, these systems are typically task-centric savants -able to only execute a single task that they were trained for.", "This is because these systems, whether leveraging demonstrations or environmental rewards, attempt to learn each task tabula rasa, where low to high level motor behaviours, are all acquired from scratch in context of the specified task.", "In contrast, we humans are adept at a variety of basic manipulation skills e.g. picking, pushing, grasping etc., and can effortlessly perform these diverse tasks via a unified manipulation system.", "Sample motor programs that emerge by discovering the space of motor programs from a diverse set of robot demonstration data in an unsupervised manner.", "These motor programs facilitate understanding the commonalities across various demonstrations, and accelerate learning for downstream tasks.", "How can we step-away from the paradigm of learning task-centric savants, and move towards building similar unified manipulation systems?", "We can begin by not treating these tasks independently, but via instead exploiting the commonalities across them.", "One such commonality relates to the primitive actions executed to accomplish the tasks -while the high-level semantics of tasks may differ significantly, the low and mid-level motor programs across them are often shared e.g. 
to either pick or push an object, one must move the hand towards it.", "This concept of motor programs can be traced back to the work of Lashley, who noted that human motor movements consist of 'orderly sequences' that are not simply sequences of stimulus-response patterns.", "The term 'motor programs' is however better attributed to Keele (1968) as being representative of 'muscle commands that execute a movement sequence uninfluenced by peripheral feedback', though later works shifted the focus from muscle commands to the movement itself, while allowing for some feedback (Adams, 1971) .", "More directly relevant to our motivation is Schmidt's notion of 'generalized' motor programs (Schmidt, 1975) that can allow abstracting a class of movement patterns instead of a singular one.", "In this work, we present an approach to discover the shared space of (generalized) motor programs underlying a variety of tasks, and show that elements from this space can be composed to accomplish diverse tasks.", "Not only does this allow understanding the commonalities and shared structure across diverse skills, the discovered space of motor programs can provide a high-level abstraction using which new skills can be acquired quickly by simply learning the set of desired motor programs to compose.", "We are not the first to advocate the use of such mid-level primitives for efficient learning or generalization, and there have been several reincarnations of this idea over the decades, from 'operators' in the classical STRIPS algorithm (Fikes & Nilsson, 1971) , to 'options' (Sutton et al., 1999) or 'primitives' (Schaal et al., 2005) in modern usage.", "These previous approaches however assume a set of manually defined/programmed primitives and therefore bypass the difficulty of discovering them.", "While some attempts have been made to simultaneously learn the desired skill and the underlying primitives, learning both from scratch is difficult, and are therefore restricted to narrow tasks.", "Towards overcoming this difficulty, we observe that instead of learning the primitives from scratch in the context of a specific task, we can instead discover them using demonstrations of a diverse set of tasks.", "Concretely, by leveraging demonstrations for different skills e.g. pouring, grasping, opening etc., we discover the motor programs (or movement primitives) that occur across these.", "We present an approach to discover movement primitives from a set of unstructured robot demonstration i.e. 
demonstrations without additional parsing or segmentation labels available.", "This is a challenging task as each demonstration is composed of a varying number of unknown primitives, and therefore the process of learning entails both, learning the space of primitives as well as understanding the available demonstrations in context of these.", "Our approach is based on the insight that an abstraction of a demonstrations into a sequence of motor programs or primitives, each of which correspond to an implied movement sequence, and must yield back the demonstration when the inferred primitives are 'recomposed'.", "We build on this and formulate an unsupervised approach to jointly learn the space of movement primitives, as well as a parsing of the available demonstrations into a high-level sequence of these primitives.", "We demonstrate that our method allows us to learn a primitive space that captures the shared motions required across diverse skills, and that these motor programs can be adapted and composed to further perform specific tasks.", "Furthermore, we show that these motor programs are semantically meaningful, and can be recombined to solved robotic tasks using reinforcement learning.", "Specifically, solving reaching and pushing tasks with reinforcement learning over the space of primitives achieves 2 orders of magnitude faster training than reinforcement learning in the low-level control space.", "We have presented an unsupervised approach to discover motor programs from a set of unstructured robot demonstrations.", "Through the insight that learned motor programs should recompose into the original demonstration while being simplistic, we discover a coherent and diverse latent space of primitives on the MIME (Sharma et al., 2018) dataset.", "We also observed that the learned primitives were semantically meaningful, and useful for efficiently learning downstream tasks in simulation.", "We hope that the contributions from our work enable learning and executing primitives in a plethora of real-world robotic tasks.", "It would also be interesting to leverage the learned motor programs in context of continual learning, to investigate how the discovered space can be adapted and expanded in context of novel robotic tasks." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.19512194395065308, 0.19512194395065308, 0.13333332538604736, 0.1860465109348297, 0.3333333432674408, 0.3499999940395355, 0.2222222238779068, 0.14814814925193787, 0.1904761791229248, 0.21052631735801697, 0.2641509473323822, 0.27272728085517883, 0.25641024112701416, 0.1904761791229248, 0.19999998807907104, 0.1538461446762085, 0.19999998807907104, 0.1230769157409668, 0.16326530277729034, 0.40740740299224854, 0.26229506731033325, 0.19718308746814728, 0.19512194395065308, 0.20408162474632263, 0.23999999463558197, 0.12244897335767746, 0.25, 0.2641509473323822, 0.20338982343673706, 0.31372547149658203, 0.4000000059604645, 0.40909090638160706, 0.21276594698429108, 0.29999998211860657, 0.2142857164144516, 0.2857142686843872, 0.3255814015865326, 0.2800000011920929 ]
rkgHY0NYwr
true
[ "We learn a space of motor primitives from unannotated robot demonstrations, and show these primitives are semantically meaningful and can be composed for new robot tasks." ]
[ "Using modern deep learning models to make predictions on time series data from wearable sensors generally requires large amounts of labeled data.", "However, labeling these large datasets can be both cumbersome and costly.", "In this paper, we apply weak supervision to time series data, and programmatically label a dataset from sensors worn by patients with Parkinson's.", "We then built a LSTM model that predicts when these patients exhibit clinically relevant freezing behavior (inability to make effective forward stepping).", "We show that (1) when our model is trained using patient-specific data (prior sensor sessions), we come within 9% AUROC of a model trained using hand-labeled data and (2) when we assume no prior observations of subjects, our weakly supervised model matched performance with hand-labeled data.", "These results demonstrate that weak supervision may help reduce the need to painstakingly hand label time series training data.", "Time series data generated by wearable sensors are an increasingly common source of biomedical data.", "With their ability to monitor events in non-laboratory conditions, sensors offer new insights into human health across a diverse range of applications, including continuous glucose monitoring BID1 , atrial fibrillation detection BID11 , fall detection BID2 , and general human movement monitoring BID6 .Supervised", "machine learning with sensor time series data can help automate many of these monitoring tasks and enable medical professionals make more informed decisions. However, developing", "these supervised models is challenging due to the cost and difficultly in obtaining labeled training data, especially in settings with considerable inter-subject variability, as is common in human movement research BID5 . Traditionally, medical", "professionals must hand label events observed in controlled laboratory settings. When the events of interest", "are rare this process is time consuming, expensive, and does not scale to the sizes needed to train robust machine learning models. Thus there is a need to efficiently", "label the large amounts of data that machine learning algorithms require for time series tasks.In this work, we explore weakly supervised BID10 ) models for time series classification. Instead of using manually labeled training", "data, weak supervision encodes domain insights into the form of heuristic labeling functions, which are used to create large, probabilistically labeled training sets. This method is especially useful for time", "series classification, where the sheer number of data points makes manual labeling difficult.As a motivating test case, we focus on training a deep learning model to classify freezing behaviors in people with Parkinson's disease. We hypothesize that by encoding biomechanical", "knowledge about human movement and Parkinson's BID5 into our weakly supervised model, we can reduce the need for large amounts of hand labeled data and achieve similar performance to fully supervised models for classifying freezing behavior. 
We focus on two typical clinical use cases when", "making predictions for a patient: (1) where we have no prior observations of the patient, and (2) where we have at least one observation of the patient.", "Our work demonstrates the potential of weak supervision on time series tasks.", "In both experiments, our weakly supervised models performed close to or match the fully supervised models.", "Further, the amount of data available for the weak supervision task was fairly small -with more unlabeled data, we expect to be able to improve performance BID9 .", "These results show that costly and time-intensive hand labeling may not be required to get the desired performance of a given classifier.In the future, we plan to add more and different types of sensor streams and modalities (e.g., video).", "We also plan to use labeling functions to better model the temporal correlation between individual segments of these streams, which can potentially improve our generative model and hence end to end performance." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2631579041481018, 0, 0.14999999105930328, 0.10256409645080566, 0.26923075318336487, 0.277777761220932, 0.25806450843811035, 0.0714285671710968, 0.2380952388048172, 0.08510638028383255, 0.12903225421905518, 0.1395348757505417, 0.375, 0.1702127605676651, 0.21052631735801697, 0.23728813230991364, 0.20512819290161133, 0.27586206793785095, 0.19354838132858276, 0.1904761791229248, 0.14814814925193787, 0.13333332538604736 ]
SJedYj5ruV
true
[ "We demonstrate the feasibility of a weakly supervised time series classification approach for wearable sensor data. " ]
[ "Learning semantic correspondence between the structured data (e.g., slot-value pairs) and associated texts is a core problem for many downstream NLP applications, e.g., data-to-text generation.", "Recent neural generation methods require to use large scale training data.", "However, the collected data-text pairs for training are usually loosely corresponded, where texts contain additional or contradicted information compare to its paired input.", "In this paper, we propose a local-to-global alignment (L2GA) framework to learn semantic correspondences from loosely related data-text pairs.", "First, a local alignment model based on multi-instance learning is applied to build the semantic correspondences within a data-text pair.", "Then, a global alignment model built on top of a memory guided conditional random field (CRF) layer is designed to exploit dependencies among alignments in the entire training corpus, where the memory is used to integrate the alignment clues provided by the local alignment model.", "Therefore, it is capable of inducing missing alignments for text spans that are not supported by its imperfect paired input.", "Experiments on recent restaurant dataset show that our proposed method can improve the alignment accuracy and as a by product, our method is also applicable to induce semantically equivalent training data-text pairs for neural generation models.", "Learning semantic correspondences between the structured data (e.g., slot-values pairs in a meaning representation (MR)) and associated description texts is one of core problem in NLP community (Barzilay & Lapata, 2005) , e.g., data-to-text generation produces texts based on the learned semantic correspondences.", "Recent data-to-text generation methods, especially neural-base methods which are data-hungry, adopt data-text pairs collected from web for training.", "Such collected corpus usually contain loosely corresponded data text pairs (Perez-Beltrachini & Gardent, 2017; Nie et al., 2019) , where text spans contain information that are not supported by its imperfect structured input.", "Figure 1 depicts an example, where the slot-value pair Price=Cheap can be aligned to text span low price range while the text span restaurant doesn't supported by any slot-value pair in paired input MR. Most of previous work for learning semantic correspondences (Barzilay & Lapata, 2005; Liang et al., 2009; Kim & Mooney, 2010; Perez-Beltrachini & Lapata, 2018) focus on characterizing local interactions between every text span with a corresponded slots presented in its paired MR. Such methods cannot work directly on loosely corresponded data-text pairs, as setting is different.", "In this work, we make a step towards explicit semantic correspondences (i.e., alignments) in loosely corresponded data text pairs.", "Compared with traditional setting, which only attempts inducing alignments for every text span with a corresponded slot presented in its paired MR. 
We propose a Local-to-Global Alignment (L2GA) framework, where the local alignment model discovers the correspondences within a single data-text pair (e.g., low price range is aligned with the slot Price in Figure 1 ) and a global alignment model exploits dependencies among alignments presented in the entire data-text pairs and therefore, is able to induce missing attributes for text spans not supported in its noisy input data (e.g., restaurant is aligned with the slot EatType in Figure 1 ).", "Specially, our proposed L2GA is composed of two parts.", "The local alignment model is a neural method optimized via a multi-instance learning paradigm (Perez-Beltrachini & Lapata, 2018) which automatically captures correspondences by maximizing the similarities between co-occurred slots and texts within a data-text pair.", "Our proposed global alignment model is a memory guided conditional random field (CRF) based sequence labeling framework.", "The CRF layer is able to learn dependencies among semantic labels over the entire corpus and therefore is suitable for inferring missing alignments of unsupported text spans.", "However, since there are no semantic labels provided for sequence labeling, we can only leverage limited supervision provided in a data-text pair.", "We start by generating pseudo labels using string matching heuristic between words and slots (e.g., Golden Palace is aligned with Name in Figure 1 ).", "The pseudo labels result in large portion of unmatched text spans (e.g., low price and restaurant cannot be directly matched in Figure 1 ), we tackle this challenge by:", "a) changing the calculation of prediction probability in CRF layer, where we sum probabilities over possible label sequences for unmatched text spans to allow inference on unmatched words;", "b) incorporating alignment results produced by the local alignment model as an additional memory to guide the CRF layer, therefore, the semantic correspondences captured by local alignment model can together work with the CRF layer to induce alignments locally and globally.", "We conduct experiments of our proposed method on a recent restaurant dataset, E2E challenge benchmark (Novikova et al., 2017a) , results show that our framework can improve the alignment accuracy with respect to previous methods.", "Moreover, our proposed method can explicitly detect unaligned errors presented in the original training corpus and provide semantically equivalent training data-text pairs for neural generation models.", "Experimental results also show that our proposed method can improve content consistency for neural generation models.", "In this paper, we study the problem of learning alignments in loosely related data-text pairs.", "We propose a local-to-global framework which not only induces semantic correspondences for words that are related to its paired input but also infers potential labels for text spans that are not supported by its incomplete input.", "We find that our proposed method improves the alignment accuracy, and can be of help to reduce the noise in original training corpus.", "In the future, we will explore more challenging datasets with more complex data schema.", "Under review as a conference paper at ICLR 2020 are 300 and 100 respectively.", "The dimensions of trainable hidden units in LSTMs are all set to 400.", "We first pre-train our local model for 5 epochs and then train our proposed local-to-global model jointly with 10 epochs according to validation set.", "During training, we regularize all layers with a 
dropout rate of 0.1.", "We use stochastic gradient descent (SGD) for optimisation with a learning rate of 0.015.", "Gradients are clipped at a threshold of 5." ]
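A note on the record above: the L2GA description says the CRF layer's prediction probability is changed so that, for text spans with no string-matched pseudo label, probabilities are summed over all possible label sequences. The sketch below shows one way such a partially-marginalised CRF loss can be computed with a constrained forward pass; it is a minimal illustration, not the authors' implementation, and it omits the memory guidance from the local alignment model. All tensor names, shapes, and the toy example are assumptions.

```python
import numpy as np

def logsumexp(a, axis=None):
    m = np.max(a, axis=axis, keepdims=True)
    return (m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))).squeeze(axis)

def crf_log_score(emissions, transitions, allowed):
    """Forward algorithm restricted to the label sequences permitted by `allowed`.

    emissions  : (T, L) per-token label scores from the encoder
    transitions: (L, L) transition scores, transitions[i, j] = score of i -> j
    allowed    : (T, L) boolean mask; matched tokens allow one label,
                 unmatched tokens allow all labels (marginalised over)
    """
    neg_inf = -1e9
    alpha = np.where(allowed[0], emissions[0], neg_inf)
    for t in range(1, emissions.shape[0]):
        # alpha[j] = logsumexp_i(alpha[i] + transitions[i, j]) + emissions[t, j]
        scores = alpha[:, None] + transitions + emissions[t][None, :]
        alpha = np.where(allowed[t], logsumexp(scores, axis=0), neg_inf)
    return logsumexp(alpha, axis=0)

def partial_crf_nll(emissions, transitions, allowed):
    """Negative log-likelihood that marginalises over unmatched tokens:
    -log( sum over allowed sequences / sum over all sequences )."""
    all_allowed = np.ones_like(allowed, dtype=bool)
    return crf_log_score(emissions, transitions, all_allowed) \
         - crf_log_score(emissions, transitions, allowed)

# Toy example: 4 tokens, 3 slot labels; tokens 0 and 3 have string-match
# pseudo labels, tokens 1 and 2 are unmatched and left to the model.
T, L = 4, 3
rng = np.random.default_rng(0)
emissions = rng.normal(size=(T, L))
transitions = rng.normal(size=(L, L))
allowed = np.ones((T, L), dtype=bool)
allowed[0] = [True, False, False]   # matched to label 0 (e.g. Name)
allowed[3] = [False, False, True]   # matched to label 2 (e.g. Price)
print("partial-marginal CRF loss:", partial_crf_nll(emissions, transitions, allowed))
```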
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09090908616781235, 0.0714285671710968, 0.14999999105930328, 0.6666666865348816, 0.3333333134651184, 0.11538460850715637, 0, 0.19607841968536377, 0.1428571343421936, 0.17142856121063232, 0.0416666604578495, 0.1304347813129425, 0.21052631735801697, 0.22727271914482117, 0, 0.1599999964237213, 0.1764705777168274, 0.1395348757505417, 0.21052631735801697, 0.09090908616781235, 0, 0.045454539358615875, 0.2083333283662796, 0.23529411852359772, 0.0952380895614624, 0, 0.1249999925494194, 0.3404255211353302, 0.1538461446762085, 0.06666666269302368, 0.06451612710952759, 0.06666666269302368, 0.21052631735801697, 0.13333332538604736, 0.13333332538604736, 0 ]
Byx_GeSKPS
true
[ "We propose a local-to-global alignment framework to learn semantic correspondences from noisy data-text pairs with weak supervision" ]
[ "Imitation learning algorithms provide a simple and straightforward approach for training control policies via standard supervised learning methods.", "By maximizing the likelihood of good actions provided by an expert demonstrator, supervised imitation learning can produce effective policies without the algorithmic complexities and optimization challenges of reinforcement learning, at the cost of requiring an expert demonstrator -- typically a person -- to provide the demonstrations.", "In this paper, we ask: can we use imitation learning to train effective policies without any expert demonstrations?", "The key observation that makes this possible is that, in the multi-task setting, trajectories that are generated by a suboptimal policy can still serve as optimal examples for other tasks.", "In particular, in the setting where the tasks correspond to different goals, every trajectory is a successful demonstration for the state that it actually reaches.", "Informed by this observation, we propose a very simple algorithm for learning behaviors without any demonstrations, user-provided reward functions, or complex reinforcement learning methods.", "Our method simply maximizes the likelihood of actions the agent actually took in its own previous rollouts, conditioned on the goal being the state that it actually reached.", "Although related variants of this approach have been proposed previously in imitation learning settings with example demonstrations, we present the first instance of this approach as a method for learning goal-reaching policies entirely from scratch.", "We present a theoretical result linking self-supervised imitation learning and reinforcement learning, and empirical results showing that it performs competitively with more complex reinforcement learning methods on a range of challenging goal reaching problems.", "Reinforcement learning (RL) algorithms hold the promise of providing a broadly-applicable tool for automating control, and the combination of high-capacity deep neural network models with RL extends their applicability to settings with complex observations and that require intricate policies.", "However, RL with function approximation, including deep RL, presents a challenging optimization problem.", "Despite years of research, current deep RL methods are far from a turnkey solution: most popular methods lack convergence guarantees (Baird, 1995; Tsitsiklis & Van Roy, 1997) or require prohibitive numbers of samples (Schulman et al., 2015; Lillicrap et al., 2015) .", "Moreover, in practice, many commonly used algorithms are extremely sensitive to hyperparameters (Henderson et al., 2018) .", "Besides the optimization challenges, another usability challenge of RL is reward function design: although RL automatically determines how to solve the task, the task itself must be specified in a form that the RL algorithm can interpret and optimize.", "These challenges prompt us to consider whether there might exist a general method for learning behaviors without the need for complex, deep RL algorithms.", "Imitation learning is an alternative paradigm to RL that provides a simple and straightforward approach for training control policies via standard supervised learning methods.", "By maximizing the likelihood of good actions provided by an expert demonstrator, supervised imitation learning can produce effective policies without the algorithmic complexities and optimization challenges of RL.", "Supervised learning algorithms in deep learning have matured to the point of 
being robust and reliable, and imitation learning algorithms have demonstrated success in acquiring behaviors robustly and reliably from high-dimensional sensory data such as images (Rajeswaran et al., 2017; Lynch et al., 2019) .", "The catch is that imitation learning methods require an expert demonstrator -typically a human -to provide a number of demonstrations of optimal behavior.", "Obtaining expert demonstrations can be challenging; the large number of demonstrations required limits the scalability of such algorithms.", "In this paper, we ask: can we use ideas from imitation learning to train effective policies without any expert demonstrations, retaining the benefits of imitation learning, but making it possible to learn goal-directed behavior autonomously from scratch?", "The key observation for making progress on this problem is that, in the multi-task setting, trajectories that are generated by a suboptimal policy can serve as optimal examples for other tasks.", "In particular, in the setting where the tasks correspond to reaching different goal states, every trajectory is a successful demonstration for the state that it actually reaches.", "Similar observations have been made in prior works as well (Kaelbling, 1993; Andrychowicz et al., 2017; Nair et al., 2018; Mavrin et al., 2019; Savinov et al., 2018) , but have been used to motivate data reuse in off-policy RL or semiparametric methods.", "Our approach will leverage this idea to obtain near-optimal goal-conditioned policies without RL or reward functions.", "The algorithm that we study is, at its core, very simple: at each iteration, we run our latest goalconditioned policy, collect data, and then use this data to train a policy with supervised learning.", "Supervision is obtained by noting that each action that is taken is a good action for reaching the states that actually occurred in future time steps along the same trajectory.", "This algorithm resembles imitation learning, but is self-supervised.", "This procedure combines the benefits of goal-conditioned policies with the simplicity of supervised learning, and we theoretically show that this algorithm corresponds to a convergent policy learning procedure.", "While several prior works have proposed training goal-conditioned policies via imitation learning based on a superficially similar algorithm (Ding et al., 2019; Lynch et al., 2019) , to our knowledge no prior work proposes a complete policy learning algorithm based on this idea that learns from scratch, without expert demonstrations.", "This procedure reaps the benefits of off-policy data re-use without the need for learning complex Q functions or value functions.", "Moreover, we can bootstrap our algorithm with a small number of expert demonstrations, such that it can continue to improve its behavior self supervised, without dealing with the challenges of combining imitation learning with off-policy RL.", "The main contribution of our work is a complete algorithm for learning policies from scratch via goal-conditioned imitation learning, and to show that this algorithm can successfully train goalconditioned policies.", "Our theoretical analysis of self-supervised goal-conditioned imitation learning shows that this method optimizes a lower bound on the probability that the agent reaches the desired goal.", "Empirically, we show that our proposed algorithm is able to learn goal reaching behaviors from scratch without the need for an explicit reward function or expert demonstrations." ]
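The record above describes the core loop concretely: roll out the current goal-conditioned policy, relabel each visited state-action pair with a goal that was actually reached later in the same trajectory, and fit the policy to that data with supervised learning. The sketch below illustrates that loop on a toy discrete-chain environment; the environment, network sizes, and hyperparameters are placeholders and not the authors' settings.

```python
import numpy as np
import torch
import torch.nn as nn

# Toy 1-D chain environment: state in {0..N-1}, actions {0: left, 1: right}.
N = 10

def step(s, a):
    return max(0, min(N - 1, s + (1 if a == 1 else -1)))

# Goal-conditioned policy pi(a | s, g): a small MLP over one-hot (s, g).
policy = nn.Sequential(nn.Linear(2 * N, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def encode(s, g):
    x = torch.zeros(2 * N)
    x[s] = 1.0
    x[N + g] = 1.0
    return x

def rollout(goal, horizon=15):
    s, traj = np.random.randint(N), []
    for _ in range(horizon):
        with torch.no_grad():
            logits = policy(encode(s, goal))
        a = torch.distributions.Categorical(logits=logits).sample().item()
        traj.append((s, a))
        s = step(s, a)
    traj.append((s, None))            # final state, no action taken
    return traj

for _ in range(200):
    # 1) Collect data with the current (initially random) policy.
    batch = [rollout(goal=np.random.randint(N)) for _ in range(8)]
    xs, ys = [], []
    # 2) Hindsight relabelling: the action taken at time t is treated as a
    #    correct label for reaching any state actually visited later on.
    for traj in batch:
        for t in range(len(traj) - 1):
            s, a = traj[t]
            t_future = np.random.randint(t + 1, len(traj))
            reached = traj[t_future][0]
            xs.append(encode(s, reached))
            ys.append(a)
    # 3) Supervised learning (behaviour cloning on the relabelled data).
    loss = nn.functional.cross_entropy(policy(torch.stack(xs)), torch.tensor(ys))
    opt.zero_grad(); loss.backward(); opt.step()
```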
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.06451612710952759, 0.1538461446762085, 0.19354838132858276, 0.04651162400841713, 0.05405404791235924, 0.10810810327529907, 0, 0.2222222238779068, 0.13636362552642822, 0.12244897335767746, 0.07407406717538834, 0.038461536169052124, 0.06451612710952759, 0.0833333283662796, 0.10810810327529907, 0.10810810327529907, 0.14999999105930328, 0.20000000298023224, 0.11428570747375488, 0, 0.1702127605676651, 0.045454539358615875, 0.05128204822540283, 0.0833333283662796, 0.06666666269302368, 0.17391303181648254, 0.052631575614213943, 0.09090908616781235, 0.1538461446762085, 0.1428571343421936, 0.1249999925494194, 0.17391303181648254, 0.2380952388048172, 0.10810810327529907, 0.1463414579629898 ]
ByxoqJrtvr
true
[ "Learning how to reach goals from scratch by using imitation learning with data relabeling" ]
[ "Neural networks have recently shown excellent performance on numerous classi- fication tasks.", "These networks often have a large number of parameters and thus require much data to train.", "When the number of training data points is small, however, a network with high flexibility will quickly overfit the training data, resulting in a large model variance and a poor generalization performance.", "To address this problem, we propose a new ensemble learning method called InterBoost for small-sample image classification.", "In the training phase, InterBoost first randomly generates two complementary datasets to train two base networks of the same structure, separately, and then next two complementary datasets for further training the networks are generated through interaction (or information sharing) between the two base networks trained previously.", "This interactive training process continues iteratively until a stop criterion is met.", "In the testing phase, the outputs of the two networks are combined to obtain one final score for classification.", "Detailed analysis of the method is provided for an in-depth understanding of its mechanism.", "Image classification is an important application of machine learning and data mining.", "Recent years have witnessed tremendous improvement in large-scale image classification due to the advances of deep learning BID15 BID17 BID7 BID4 .", "Despite recent breakthroughs in applying deep networks, one persistent challenge is classification with a small number of training data points BID12 .", "Small-sample classification is important, not only because humans learn a concept of class without millions or billions of data but also because many kinds of real-world data have a small quantity.", "Given a small number of training data points, a large network will inevitably encounter the overfitting problem, even when dropout BID16 and weight decay are applied during training BID19 .", "This is mainly because a large network represents a large function space, in which many functions can fit a given small-sample dataset, making it difficult to find the underlying true function that is able to generalize well.", "As a result, a neural network trained with a small number of data points usually exhibits a large variance.Ensemble learning is one way to reduce the variance.", "According to bias-variance dilemma BID2 , there is a trade-off between the bias and variance contributions to estimation or classification errors.", "The variance is reduced when multiple models or ensemble members are trained with different datasets and are combined for decision making, and the effect is more pronounced if ensemble members are accurate and diverse BID3 .There", "exist two classic strategies of ensemble learning BID21 BID13 . The first", "one is Bagging BID20 and variants thereof. This strategy", "trains independent classifiers on bootstrap re-samples of training data and then combines classifiers based on some rules, e.g. weighted average. Bagging methods", "attempt to obtain diversity by bootstrap sampling, i.e. random sampling with replacement. There is no guarantee", "to find complementary ensemble members and new datasets constructed by bootstrap sampling will contain even fewer data points, which can potentially make the overfitting problem even more severe. The second strategy is", "Boosting BID14 BID10 and its variants. This strategy starts from", "a classifier trained on the available data and then sequentially trains new member classifiers. 
Taking Adaboost BID20 as", "an example, a classifier in Adaboost is trained according to the training error rates of previous classifiers. Adaboost works well for", "weak base classifiers. If the base classifier", "is of high complexity, such as a large neural network, the first base learner will overfit the training data. Consequently, either the", "Adaboost procedure is stopped or the second classifier has to be trained on data with original weights, i.e. to start from the scratch again, which in no way is able to ensure the diversity of base networks.In addition, there also exist some \"implicit\" ensemble methods in the area of neural networks. Dropout BID16 , DropConnect", "BID18 and Stochastic Depth techniques BID5 create an ensemble by dropping some hidden nodes, connections (weights) and layers, respectively. Snapshot Ensembling BID6 )", "is a method that is able to, by training only one time and finding multiple local minima of objective function, get many ensemble members, and then combines these members to get a final decision. Temporal ensembling, a parallel", "work to Snapshot Ensembling, trains on a single network, but the predictions made on different epochs correspond to an ensemble prediction of multiple sub-networks because of dropout regularization BID8 . These works have demonstrated advantages", "of using an ensemble technique. In these existing \"implicit\" ensemble methods", ", however, achieving diversity is left to randomness, making them ineffective for small-sample classification.Therefore, there is a need for new ensemble learning methods able to train diverse and complementary neural networks for small-sample classification. In this paper, we propose a new ensemble method", "called InterBoost for training two base neural networks with the same structure. In the method, the original dataset is first re-weighted", "by two sets of complementary weights. Secondly, the two base neural networks are trained on the", "two re-weighted datasets, separately. Then we update training data weights according to prediction", "scores of the two base networks on training data, so there is an interaction between the two base networks during the training process. When base networks are trained interactively with the purpose", "of deliberately pushing each other in opposite directions, they will be complementary. 
This process of training network and updating weights is repeated", "until a stop criterion is met.In this paper, we present the training and test procedure of the proposed ensemble method and evaluate it on the UIUC-Sports dataset BID9 ) and the LabelMe dataset BID11 with a comparison to Bagging, Adaboost, SnapShot Ensembling and other existing methods.", "During the training process, we always keep the constraints W 1d +W 2d = 1 and 0 < W 1d , W 2d < 1, to ensure the base networks diverse and complementary.", "Equation FORMULA10 and FORMULA11 are designed for updating weights of data points, so that the weight updating rule is sensitive to small differences between prediction probabilities from two base networks to prevent premature training.", "Furthermore, if the prediction of a data point in one network is more accurate than another network, its weight in next round will be smaller than its weight for another network, thus making the training of individual network on more different regions.The training process generates many diverse training dataset pairs, as shown in Figure 3 .", "That is, each base network will be trained on these diverse datasets in sequence, which is equivalent to that an \"implicit\" ensemble is applied on each base network.", "Therefore, the base network will get more and more accurate during training process.", "At the same time, the two networks are complementary to each other.In each iteration, determination of the number of epochs for training base networks is also crucial.", "If the number is too large, the two base networks will fit training data too well, making it difficult to change data weights of to generate diverse datasets.", "If it is too small, it is difficult to obtain accurate base classifiers.", "In experiments, we find that a suitable epoch number in each iteration is the ones that make the classification accuracy of the base network fall in the interval of (0.9, 0.98).Similar", "to Bagging and Adaboost, our method has no limitation on the type of neural networks. In addition", ", it is straightforward to extend the proposed ensemble method for multiple networks, just by keeping DISPLAYFORM0 .., D}, in which", "H is the number of base networks and 0 < W id < 1.", "In the paper, we have proposed an ensemble method called InterBoost for training neural networks for small-sample classification and detailed the training and test procedures.", "In the training procedure, the two base networks share information with each other in order to push each other optimized in different directions.", "At the same time, each base network is trained on diverse datasets iteratively.", "Experimental results on UIUC-Sports (UIUC) and LabelMe (LM) datasets showed that our ensemble method does not outperform other ensemble methods.", "Future work includes improving the proposed method, increasing the number of networks, experimenting on different types of network as well as different kinds of data to evaluate the effectiveness of the InterBoost method." ]
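The record above specifies the constraints W_1d + W_2d = 1 and 0 < W_1d, W_2d < 1 and states that a data point predicted more accurately by one network receives a smaller weight for that network in the next round, but the exact update equations are not reproduced here. The sketch below is therefore one plausible rule consistent with that description, not the paper's Equations 10-11; the networks, data, epoch counts, and the softmax-ratio update are all illustrative.

```python
import numpy as np
import torch
import torch.nn as nn

def make_net(d_in, n_cls):
    return nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, n_cls))

def train_weighted(net, X, y, w, epochs=5, lr=1e-3):
    """A few epochs of training with per-example weights w."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        loss = (w * nn.functional.cross_entropy(net(X), y, reduction="none")).mean()
        opt.zero_grad(); loss.backward(); opt.step()

def true_class_prob(net, X, y):
    with torch.no_grad():
        p = torch.softmax(net(X), dim=1)
    return p[torch.arange(len(y)), y]

# Toy features/labels standing in for a small-sample image dataset.
rng = np.random.default_rng(0)
X = torch.tensor(rng.normal(size=(200, 32)), dtype=torch.float32)
y = torch.tensor(rng.integers(0, 4, size=200))

net1, net2 = make_net(32, 4), make_net(32, 4)
w1 = torch.rand(len(y)) * 0.8 + 0.1          # initial weights in (0, 1)
w2 = 1.0 - w1                                # enforce W_1d + W_2d = 1

for _ in range(10):
    train_weighted(net1, X, y, w1)
    train_weighted(net2, X, y, w2)
    p1, p2 = true_class_prob(net1, X, y), true_class_prob(net2, X, y)
    # Illustrative interaction step: a point that network 1 already predicts
    # relatively well gets a smaller weight for network 1 next round, pushing
    # the two networks toward different regions while keeping w1 + w2 = 1.
    w1 = (p2 / (p1 + p2 + 1e-8)).clamp(0.05, 0.95)
    w2 = 1.0 - w1

# At test time the two networks' softmax outputs are combined (e.g. averaged).
```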
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.09999999403953552, 0.09090908616781235, 0.178571417927742, 0.35555556416511536, 0.22580644488334656, 0.04999999701976776, 0.2222222238779068, 0.19512194395065308, 0.14999999105930328, 0.08163265138864517, 0.08163265138864517, 0.03703703358769417, 0.1090909019112587, 0.06779660284519196, 0.07692307233810425, 0.1249999925494194, 0.17543859779834747, 0.10256409645080566, 0.054054051637649536, 0.08163265138864517, 0, 0.1355932205915451, 0.052631575614213943, 0.08695651590824127, 0.1666666567325592, 0.05882352590560913, 0.1249999925494194, 0.15789473056793213, 0.11999999731779099, 0.13114753365516663, 0.10169491171836853, 0.15789473056793213, 0.3492063581943512, 0.3404255211353302, 0.1428571343421936, 0.09756097197532654, 0.15686273574829102, 0.12244897335767746, 0.2985074520111084, 0.18867923319339752, 0.1666666567325592, 0.14084506034851074, 0.07843136787414551, 0.14999999105930328, 0.23529411852359772, 0.11538460850715637, 0, 0.1428571343421936, 0.31111109256744385, 0.2083333283662796, 0.1463414579629898, 0.6938775181770325, 0.21276594698429108, 0.04878048226237297, 0.1702127605676651, 0.15094339847564697 ]
r16u6i_Xz
true
[ "In the paper, we proposed an ensemble method called InterBoost for training neural networks for small-sample classification. The method has better generalization performance than other ensemble methods, and reduces variances significantly." ]
[ "Interpreting neural networks is a crucial and challenging task in machine learning.", "In this paper, we develop a novel framework for detecting statistical interactions captured by a feedforward multilayer neural network by directly interpreting its learned weights.", "Depending on the desired interactions, our method can achieve significantly better or similar interaction detection performance compared to the state-of-the-art without searching an exponential solution space of possible interactions.", "We obtain this accuracy and efficiency by observing that interactions between input features are created by the non-additive effect of nonlinear activation functions, and that interacting paths are encoded in weight matrices.", "We demonstrate the performance of our method and the importance of discovered interactions via experimental results on both synthetic datasets and real-world application datasets.", "Despite their strong predictive power, neural networks have traditionally been treated as \"black box\" models, preventing their adoption in many application domains.", "It has been noted that complex machine learning models can learn unintended patterns from data, raising significant risks to stakeholders BID43 .", "Therefore, in applications where machine learning models are intended for making critical decisions, such as healthcare or finance, it is paramount to understand how they make predictions BID6 BID17 .Existing", "approaches to interpreting neural networks can be summarized into two types. One type", "is direct interpretation, which focuses on 1) explaining", "individual feature importance, for example by computing input gradients BID37 BID34 BID40 or by decomposing predictions BID2 BID36 , 2) developing attention-based models, which illustrate where neural networks focus during inference BID20 BID27 BID47 , and 3) providing", "model-specific visualizations, such as feature map and gate activation visualizations BID48 BID21 . The other type", "is indirect interpretation, for example post-hoc interpretations of feature importance BID32 and knowledge distillation to simpler interpretable models BID7 .It has been commonly", "believed that one major advantage of neural networks is their capability of modeling complex statistical interactions between features for automatic feature learning. Statistical interactions", "capture important information on where features often have joint effects with other features on predicting an outcome. The discovery of interactions", "is especially useful for scientific discoveries and hypothesis validation. For example, physicists may be", "interested in understanding what joint factors provide evidence for new elementary particles; doctors may want to know what interactions are accounted for in risk prediction models, to compare against known interactions from existing medical literature.In this paper, we propose an accurate and efficient framework, called Neural Interaction Detection (NID), which detects statistical interactions of any order or form captured by a feedforward neural network, by examining its weight matrices. Our approach is efficient because", "it avoids searching over an exponential solution space of interaction candidates by making an approximation of hidden unit importance at the first hidden layer via all weights above and doing a 2D traversal of the input weight matrix. 
We provide theoretical justifications", "on why interactions between features are created at hidden units and why our hidden unit importance approximation satisfies bounds on hidden unit gradients. Top-K true interactions are determined", "from interaction rankings by using a special form of generalized additive model, which accounts for interactions of variable order BID46 BID25 . Experimental results on simulated datasets", "and real-world datasets demonstrate the effectiveness of NID compared to the state-of-the-art methods in detecting statistical interactions.The rest of the paper is organized as follows: we first review related work and define notations in Section 2. In Section 3, we examine and quantify the", "interactions encoded in a neural network, which leads to our framework for interaction detection detailed in Section 4. Finally, we study our framework empirically", "and demonstrate its practical utility on real-world datasets in Section 5.", "We presented our NID framework, which detects statistical interactions by interpreting the learned weights of a feedforward neural network.", "The framework has the practical utility of accurately detecting general types of interactions without searching an exponential solution space of interaction candidates.", "Our core insight was that interactions between features must be modeled at common hidden units, and our framework decoded the weights according to this insight.In future work, we plan to detect feature interactions by accounting for common units in intermediate hidden layers of feedforward networks.", "We would also like to use the perspective of interaction detection to interpret weights in other deep neural architectures.A PROOF AND DISCUSSION FOR PROPOSITION 2Given a trained feedforward neural network as defined in Section 2.3, we can construct a directed acyclic graph G = (V, E) based on non-zero weights as follows.", "We create a vertex for each input feature and hidden unit in the neural network: V = {v ,i |∀i, }, where v ,i is the vertex corresponding to the i-th hidden unit in the -th layer.", "Note that the final output y is not included.", "We create edges based on the non-zero entries in the weight matrices, i.e., DISPLAYFORM0 Note that under the graph representation, the value of any hidden unit is a function of parent hidden units.", "In the following proposition, we will use vertices and hidden units interchangeably.", "Proposition 2 (Interactions at Common Hidden Units).", "Consider a feedforward neural network with input feature DISPLAYFORM1 , there exists a vertex v I in the associated directed graph such that I is a subset of the ancestors of v I at the input layer (i.e., = 0).Proof", ". We prove", "Proposition 2 by contradiction.Let I be an interaction where there is no vertex in the associated graph which satisfies the condition. Then, for", "any vertex v L,i at the L-th layer, the value f i of the corresponding hidden unit is a function of its ancestors at the input layer I i where I ⊂ I i .Next, we group", "the hidden units at the L-th layer into non-overlapping subsets by the first missing feature with respect to the interaction I. 
That is, for element", "i in I, we create an index set S_i ⊆ [p_L]: DISPLAYFORM2 Note that the final output of the network is a weighted summation over the hidden units at the L-th layer: DISPLAYFORM3 Since Σ_{j∈S_i} w^y_j f_j(x_{I_j}) is not a function of x_i, we have that ϕ(·) is a function without the interaction I, which contradicts our assumption. The reverse of this statement, that a common descendant will create an interaction among input features, holds true in most cases. The existence of counterexamples", "is manifested when early hidden layers capture an interaction that is negated in later layers. For example, the effects of two", "interactions may be directly removed in the next layer, as in the case of the following expression: max{w_1 x_1 + w_2 x_2, 0} − max{−w_1 x_1 − w_2 x_2, 0} = w_1 x_1 + w_2 x_2. Such a counterexample is legitimate", "; however, due to random fluctuations, it is highly unlikely in practice that the w_1's and the w_2's on the left-hand side are exactly equal." ]
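The framework described in this record ranks interactions by combining the first-layer weight matrix with an importance score for each first-layer hidden unit aggregated from all the weights above it. The sketch below follows the spirit of that description for pairwise interactions (aggregation by products of absolute weight matrices, pairwise strength gated by the smaller of the two incoming weights); the exact aggregation used in the paper may differ, and the weights and data here are random placeholders.

```python
import numpy as np

def pairwise_interaction_strengths(weights, out_weights):
    """Rank pairwise feature interactions from the weights of a trained MLP.

    weights     : list of hidden-layer weight matrices [W1 (h1 x p), W2 (h2 x h1), ...]
    out_weights : final output weight vector (h_last,)

    Returns a (p x p) matrix of interaction strengths (upper triangle used).
    """
    W1 = np.abs(weights[0])                 # (h1, p): input -> first hidden layer
    # Importance of each first-layer hidden unit, aggregated from the weights
    # above it: z = |w_out|^T |W_L| ... |W_2|, one nonnegative value per unit.
    z = np.abs(out_weights)
    for W in reversed(weights[1:]):
        z = z @ np.abs(W)                   # ends up with shape (h1,)
    p = W1.shape[1]
    strength = np.zeros((p, p))
    for i in range(p):
        for k in range(i + 1, p):
            # An interaction between x_i and x_k must be created at common
            # hidden units; the weaker incoming weight gates its strength.
            strength[i, k] = np.sum(z * np.minimum(W1[:, i], W1[:, k]))
    return strength

# Toy usage with random weights standing in for a trained network.
rng = np.random.default_rng(0)
p, h1, h2 = 6, 32, 16
weights = [rng.normal(size=(h1, p)), rng.normal(size=(h2, h1))]
out_weights = rng.normal(size=h2)
S = pairwise_interaction_strengths(weights, out_weights)
i, k = np.unravel_index(np.argmax(S), S.shape)
print(f"top-ranked pairwise interaction: (x_{i}, x_{k}) with strength {S[i, k]:.3f}")
```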
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1428571343421936, 0.7179487347602844, 0.045454539358615875, 0.13636362552642822, 0.1111111044883728, 0.05405404791235924, 0, 0, 0.13793103396892548, 0, 0.07692307233810425, 0, 0, 0.15789473056793213, 0.05714285373687744, 0, 0.1927710771560669, 0.15094339847564697, 0.05405404791235924, 0.1463414579629898, 0.07692307233810425, 0.1621621549129486, 0.07407406717538834, 0.6285714507102966, 0.0555555522441864, 0.17543859779834747, 0.1875, 0.13333332538604736, 0, 0.08695651590824127, 0, 0, 0.16326530277729034, 0.1111111119389534, 0.05128204822540283, 0.09090908616781235, 0.052631575614213943, 0.0476190447807312, 0, 0.08695651590824127, 0 ]
ByOfBggRZ
true
[ "We detect statistical interactions captured by a feedforward multilayer neural network by directly interpreting its learned weights." ]
[ "The neural linear model is a simple adaptive Bayesian linear regression method that has recently been used in a number of problems ranging from Bayesian optimization to reinforcement learning.", "Despite its apparent successes in these settings, to the best of our knowledge there has been no systematic exploration of its capabilities on simple regression tasks.", "In this work we characterize these on the UCI datasets, a popular benchmark for Bayesian regression models, as well as on the recently introduced ''gap'' datasets, which are better tests of out-of-distribution uncertainty.", "We demonstrate that the neural linear model is a simple method that shows competitive performance on these tasks.", "Despite the recent successes that neural networks have shown in an impressive range of tasks, they tend to be overconfident in their predictions (Guo et al., 2017) .", "Bayesian neural networks (BNNs; Neal (1995) ) attempt to address this by providing a principled framework for uncertainty estimation in predictions.", "However, inference in BNNs is intractable to compute, requiring approximate inference techniques.", "Of these, Monte Carlo methods and variational methods, including Monte Carlo dropout (MCD) (Gal and Ghahramani, 2016) , are popular; however, the former are difficult to tune, and the latter are often limited in their expressiveness (Foong et al., 2019b; Yao et al., 2019; Foong et al., 2019a) .", "The neural linear model represents a compromise between tractability and expressiveness for BNNs in regression settings: instead of attempting to perform approximate inference over the entire set of weights, it performs exact inference on only the last layer, where prediction can be done in closed form.", "It has recently been used in active learning (Pinsler et al., 2019) , Bayesian optimization (Snoek et al., 2015) , reinforcement learning (Riquelme et al., 2018) , and AutoML (Zhou and Precioso, 2019), among others; however, to the best of our knowledge, there has been no systematic attempt to benchmark the model in the simple regression setting.", "In this work we do so, first demonstrating the model on a toy example, followed by experiments on the popular UCI datasets (as in Hernández-Lobato and Adams (2015) ) and the recent UCI gap datasets from Foong et al. (2019b) , who identified (along with Yao et al. (2019) ) well-calibrated 'in-between' uncertainty as a desirable feature of BNNs.", "In this section, we briefly describe the different models we train in this work, which are variations of the neural linear (NL) model, in which a neural network extracts features from the input to be used as basis functions for Bayesian linear regression.", "The central issue in the neural linear model is how to train the network: in this work, we provide three different models, with a total of four different training methods.", "For a more complete mathematical description of the models, refer to Appendix A; we summarize the models in Appendix C. Snoek et al. 
(2015), we can first train the neural network using maximum a posteriori (MAP) estimation.", "After this training phase, the outputs of the last hidden layer of the network are used as the features for Bayesian linear regression.", "To reduce overfitting, the noise variance and prior variance (for the Bayesian linear regression) are subsequently marginalized out by slice sampling (Neal et al., 2003) according to the tractable marginal likelihood, using uniform priors.", "We refer to this model as the maximum a posteriori neural linear model (which we abbreviate as MAP-L NL, where L is the number of hidden layers in the network).", "We tune the hyperparameters for the MAP estimation via Bayesian optimization (Snoek et al., 2012).", "We have shown benchmark results for different variants of the neural linear model in the regression setting.", "Our results show that the successes these models have seen in other areas such as reinforcement and active learning are not unmerited, with the models achieving generally good performance despite their simplicity.", "Furthermore, they are not as susceptible to the inability to express gap uncertainty as MFVI or MCD are.", "However, we have shown that, to obtain reasonable performance, extensive hyperparameter tuning is often required, unlike for MFVI or MCD.", "Finally, our work suggests that exact inference on a subset of parameters can perform better than approximate inference on the entire set, at least for BNNs.", "We believe this broader issue is worthy of further investigation." ]
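The MAP neural linear model described in this record admits a compact implementation: train a feature extractor with MAP estimation (weight decay acting as a Gaussian prior), then do exact, closed-form Bayesian linear regression on the last hidden layer's features. The sketch below fixes the noise and prior variances for brevity, whereas the paper marginalises them out by slice sampling; the network, data, and variance values are placeholders.

```python
import numpy as np
import torch
import torch.nn as nn

# 1) Train a feature extractor by MAP estimation (weight decay = Gaussian prior).
torch.manual_seed(0)
X = torch.linspace(-4, 4, 100).unsqueeze(1)
y = torch.sin(X) + 0.1 * torch.randn_like(X)

body = nn.Sequential(nn.Linear(1, 50), nn.Tanh(), nn.Linear(50, 50), nn.Tanh())
head = nn.Linear(50, 1)
opt = torch.optim.Adam(list(body.parameters()) + list(head.parameters()),
                       lr=1e-2, weight_decay=1e-4)
for _ in range(2000):
    loss = nn.functional.mse_loss(head(body(X)), y)
    opt.zero_grad(); loss.backward(); opt.step()

# 2) Exact Bayesian linear regression on the last-layer features.
#    Prior w ~ N(0, alpha2 I), observation noise ~ N(0, sigma2); both fixed here.
alpha2, sigma2 = 1.0, 0.01
with torch.no_grad():
    Phi = body(X).numpy()                               # (N, D) features
Phi = np.hstack([Phi, np.ones((Phi.shape[0], 1))])      # append a bias feature
t = y.numpy().ravel()

A = Phi.T @ Phi / sigma2 + np.eye(Phi.shape[1]) / alpha2
Sigma_post = np.linalg.inv(A)                           # posterior covariance
mu_post = Sigma_post @ Phi.T @ t / sigma2               # posterior mean

def predict(x_new):
    with torch.no_grad():
        phi = body(torch.as_tensor(x_new, dtype=torch.float32).reshape(-1, 1)).numpy()
    phi = np.hstack([phi, np.ones((phi.shape[0], 1))])
    mean = phi @ mu_post
    var = sigma2 + np.einsum("nd,de,ne->n", phi, Sigma_post, phi)
    return mean, var                                    # predictive mean, variance

m, v = predict(np.array([0.0, 6.0]))                    # 6.0 lies far outside the data
print("mean:", m, "stddev:", np.sqrt(v))
```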
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.1621621549129486, 0.11428570747375488, 0.20000000298023224, 0.4285714328289032, 0.10526315122842789, 0.0624999962747097, 0, 0.0833333283662796, 0.22641509771347046, 0.14814814925193787, 0.20000000298023224, 0.1304347813129425, 0.21052631735801697, 0.09090908616781235, 0.13333332538604736, 0.1395348757505417, 0.2702702581882477, 0.1538461446762085, 0.4444444477558136, 0.09756097197532654, 0.07692307233810425, 0, 0.11428570747375488, 0.0952380895614624 ]
S1xmc12EKS
true
[ "We benchmark the neural linear model on the UCI and UCI \"gap\" datasets." ]
[ "The reproducibility of reinforcement-learning research has been highlighted as a key challenge area in the field.", "In this paper, we present a case study in reproducing the results of one groundbreaking algorithm, AlphaZero, a reinforcement learning system that learns how to play Go at a superhuman level given only the rules of the game.", "We describe Minigo, a reproduction of the AlphaZero system using publicly available Google Cloud Platform infrastructure and Google Cloud TPUs.", "The Minigo system includes both the central reinforcement learning loop as well as auxiliary monitoring and evaluation infrastructure.", "With ten days of training from scratch on 800 Cloud TPUs, Minigo can play evenly against LeelaZero and ELF OpenGo, two of the strongest publicly available Go AIs.", "We discuss the difficulties of scaling a reinforcement learning system and the monitoring systems required to understand the complex interplay of hyperparameter configurations.", "In March 2016, Google DeepMind's AlphaGo BID0 defeated world champion Lee Sedol by using two deep neural networks (a policy and a value network) and Monte Carlo Tree Search (MCTS) to synthesize the output of these two neural networks.", "The policy network was trained via supervised learning from human games, and the value network was trained from a much larger corpus of synthetic games generated by sampling game trajectories from the policy network.", "AlphaGo Zero BID1 , published in October 2017, described a continuous pipeline, which when initialized with random weights, could train itself to defeat the original AlphaGo system.", "The requirement for expert human data was replaced with a requirement for vast amounts of compute: approximately two thousand TPUs were used for 72 hours to train AlphaGo Zero to its full strength.", "AlphaZero BID2 presents a refinement of the AlphaGoZero pipeline, notably removing the gating mechanism for publishing new models.In many ways, AlphaGo Zero can be seen as the logical culmination of fully automating and streamlining the bootstrapping process: the original AlphaGo system was bootstrapped from expert human data and reached a final strength that was somewhat stronger than the best humans.", "Then, by generating new training data with the stronger AlphaGo system and repeating the bootstrap process, an even stronger system was created.", "By automating the bootstrapping process until it is continuous, a system is created that can train itself to surpass human levels of play, even when starting from random play.In this paper, we discuss our experiences creating Minigo.", "About half of our effort went into rebuilding the infrastructure necessary to coordinate a thousand selfplay workers.", "The other half of the effort went into monitoring infrastructure to test and verify that what we had built was bug-free.", "Despite having at hand a paper describing the final architecture of AlphaZero, we rediscovered the hard way which components of the system were absolutely necessary to get right, and which components we could be messy with.", "It stands to reason that without the benefit of pre-existing work, monitoring systems are even more important in the discovery process.", "We discuss in particular," ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0.4000000059604645, 0, 0.11764705926179886, 0.07407406717538834, 0.0476190447807312, 0, 0, 0, 0.03448275476694107, 0, 0, 0, 0, 0, 0, 0.1818181723356247 ]
H1eerhIpLV
true
[ "We reproduced AlphaZero on Google Cloud Platform" ]
[ "Generative adversarial networks (GANs) train implicit generative models through solving minimax problems.", "Such minimax problems are known as nonconvex- nonconcave, for which the dynamics of first-order methods are not well understood.", "In this paper, we consider GANs in the type of the integral probability metrics (IPMs) with the generator represented by an overparametrized neural network.", "When the discriminator is solved to approximate optimality in each iteration, we prove that stochastic gradient descent on a regularized IPM objective converges globally to a stationary point with a sublinear rate.", "Moreover, we prove that when the width of the generator network is sufficiently large and the discriminator function class has enough discriminative ability, the obtained stationary point corresponds to a generator that yields a distribution that is close to the distribution of the observed data in terms of the total variation.", "To the best of our knowledge, we seem to first establish both the global convergence and global optimality of training GANs when the generator is parametrized by a neural network." ]
[ 0, 0, 0, 0, 0, 1 ]
[ 0, 0.1111111044883728, 0.3499999940395355, 0.1702127605676651, 0.18518517911434174, 0.5 ]
H1lnZlHYDS
false
[ "We establish global convergence to optimality for IPM-based GANs where the generator is an overparametrized neural network. " ]
[ "We present network embedding algorithms that capture information about a node from the local distribution over node attributes around it, as observed over random walks following an approach similar to Skip-gram.", "Observations from neighborhoods of different sizes are either pooled (AE) or encoded distinctly in a multi-scale approach (MUSAE). ", "Capturing attribute-neighborhood relationships over multiple scales is useful for a diverse range of applications, including latent feature identification across disconnected networks with similar attributes.", "We prove theoretically that matrices of node-feature pointwise mutual information are implicitly factorized by the embeddings.", "Experiments show that our algorithms are robust, computationally efficient and outperform comparable models on social, web and citation network datasets.", "We investigated attributed node embedding and proposes efficient pooled (AE) and multi-scale (MUSAE) attributed node embedding algorithms with linear runtime.", "We proved that these algorithms implicitly factorize probability matrices of features appearing in the neighbourhood of nodes.", "Two widely used neighbourhood preserving node embedding methods Perozzi et al. (2014; are in fact simplified cases of our models.", "On several datasets (Wikipedia, Facebook, Github, and citation networks) we found that representations learned by our methods, in particular MUSAE, outperform neighbourhood based node embedding methods (Perozzi et al. (2014) ; Grover & Leskovec (2016) Our proposed embedding models are differentiated from other methods in that they encode feature information from higher order neighborhoods.", "The most similar previous model BANE (Yang et al., 2018) encodes node attributes from higher order neighbourhoods but has non-linear runtime complexity and the product of adjacency matrix power and feature matrix is decomposed explicitly.", "A PROOFS Lemma 1.", "The empirical statistics of node-feature pairs obtained from random walks give unbiased estimates of joint probabilities of observing feature f ∈ F r steps", "(i) after; or", "(ii) before node v ∈ V, as given by:" ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1463414579629898, 0.06451612710952759, 0.0555555522441864, 0.0714285671710968, 0.12903225421905518, 0.4285714328289032, 0.0714285671710968, 0.0624999962747097, 0.032786883413791656, 0, 0, 0, 0, 0 ]
HJxiMAVtPH
true
[ "We develop efficient multi-scale approximate attributed network embedding procedures with provable properties." ]
[ "Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples.", "While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult.", "In this paper, we present", "1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the gap across methods including the baseline,", "2) a slightly modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and", "3) a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms.", "Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones.", "In a realistic, cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.", "Deep learning models have achieved state-of-the-art performance on visual recognition tasks such as image classification.", "The strong performance, however, heavily relies on training a network with abundant labeled instances with diverse visual variations (e.g., thousands of examples for each new class even with pre-training on large-scale dataset with base classes).", "The human annotation cost as well as the scarcity of data in some classes (e.g., rare species) significantly limit the applicability of current vision systems to learn new visual concepts efficiently.", "In contrast, the human visual systems can recognize new classes with extremely few labeled examples.", "It is thus of great interest to learn to generalize to new classes with a limited amount of labeled examples for each novel class.The problem of learning to generalize to unseen classes during training, known as few-shot classification, has attracted considerable attention BID29 ; BID27 ; BID6 ; BID25 ; BID28 ; BID9 ; BID24 .", "One promising direction to few-shot classification is the meta-learning paradigm where transferable knowledge is extracted and propagated from a collection of tasks to prevent overfitting and improve generalization.", "Examples include model initialization based methods BID25 ; BID6 , metric learning methods BID29 ; BID27 ; BID28 , and hallucination based methods BID0 ; BID11 ; BID31 .", "Another line of work BID10 ; BID24 also demonstrates promising results by directly predicting the weights of the classifiers for novel classes.Limitations.", "While many few-shot classification algorithms have reported improved performance over the state-of-the-art, there are two main challenges that prevent us from making a fair comparison and measuring the actual progress.", "First, the discrepancy of the implementation details among multiple few-shot learning algorithms obscures the relative performance gain.", "The performance of baseline approaches can also be significantly under-estimated (e.g., training without data augmentation).", "Second, while the current evaluation focuses on recognizing novel class with limited training examples, these novel classes are sampled from the same dataset.", "The lack of domain shift between the base and novel classes makes the evaluation scenarios 
unrealistic.Our work.", "In this paper, we present a detailed empirical study to shed new light on the few-shot classification problem.", "First, we conduct consistent comparative experiments to compare several representative few-shot classification methods on common ground.", "Our results show that using a deep backbone shrinks the performance gap between different methods in the setting of limited domain differences between base and novel classes.", "Second, by replacing the linear classifier with a distance-based classifier as used in BID10 ; BID24 , the baseline method is surprisingly competitive to current state-of-art meta-learning algorithms.", "Third, we introduce a practical evaluation setting where there exists domain shift between base and novel classes (e.g., sampling base classes from generic object categories and novel classes from fine-grained categories).", "Our results show that sophisticated few-shot learning algorithms do not provide performance improvement over the baseline under this setting.", "Through making the source code and model implementations with a consistent evaluation setting publicly available, we hope to foster future progress in the field.", "1 Our contributions.1.", "We provide a unified testbed for several different few-shot classification algorithms for a fair comparison.", "Our empirical evaluation results reveal that the use of a shallow backbone commonly used in existing work leads to favorable results for methods that explicitly reduce intra-class variation.", "Increasing the model capacity of the feature backbone reduces the performance gap between different methods when domain differences are limited.2.", "We show that a baseline method with a distance-based classifier surprisingly achieves competitive performance with the state-of-the-art meta-learning methods on both mini-ImageNet and CUB datasets.3.", "We investigate a practical evaluation setting where base and novel classes are sampled from different domains.", "We show that current few-shot classification algorithms fail to address such domain shifts and are inferior even to the baseline method, highlighting the importance of learning to adapt to domain differences in few-shot learning.", "In this paper, we have investigated the limits of the standard evaluation setting for few-shot classification.", "Through comparing methods on a common ground, our results show that the Baseline++ model is competitive to state of art under standard conditions, and the Baseline model achieves competitive performance with recent state-of-the-art meta-learning algorithms on both CUB and mini-ImageNet benchmark datasets when using a deeper feature backbone.", "Surprisingly, the Baseline compares favorably against all the evaluated meta-learning algorithms under a realistic scenario where there exists domain shift between the base and novel classes.", "By making our source code publicly available, we believe that community can benefit from the consistent comparative experiments and move forward to tackle the challenge of potential domain shifts in the context of few-shot learning." ]
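The record above attributes much of the improved baseline's (Baseline++) gain to replacing the usual linear classification head with a distance-based classifier that compares features against learned per-class weight vectors by cosine similarity. The sketch below illustrates such a head and the few-shot fine-tuning step on a novel-class support set; the backbone, scale factor, and shapes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Distance-based head: class scores are scaled cosine similarities between
    the extracted feature and one learned weight vector per class, which helps
    reduce intra-class variation compared with a plain linear head."""
    def __init__(self, feat_dim, n_classes, scale=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, feat_dim) * 0.01)
        self.scale = scale

    def forward(self, x):
        x = F.normalize(x, dim=-1)               # unit-norm features
        w = F.normalize(self.weight, dim=-1)     # unit-norm class prototypes
        return self.scale * x @ w.t()            # logits for cross-entropy

# Few-shot adaptation: freeze a backbone trained on the base classes, then fit
# a new cosine head on the few labelled examples from the novel classes.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 84 * 84, 512), nn.ReLU())
support_x = torch.randn(25, 3, 84, 84)           # 5-way 5-shot support set
support_y = torch.arange(5).repeat_interleave(5)

head = CosineClassifier(512, n_classes=5)
opt = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
for _ in range(100):
    with torch.no_grad():
        feats = backbone(support_x)              # backbone stays frozen
    loss = F.cross_entropy(head(feats), support_y)
    opt.zero_grad(); loss.backward(); opt.step()
```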
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05882352590560913, 0.1428571343421936, 0, 0.1860465109348297, 0.14999999105930328, 0.25, 0.04878048226237297, 0.24390242993831635, 0.060606054961681366, 0.07843136787414551, 0.0833333283662796, 0.060606054961681366, 0.09836065024137497, 0.23255813121795654, 0.052631575614213943, 0, 0.25531914830207825, 0.060606054961681366, 0, 0.051282044500112534, 0.11428570747375488, 0.3888888955116272, 0.11764705181121826, 0.23255813121795654, 0.09090908616781235, 0.17777776718139648, 0.1621621549129486, 0.24390242993831635, 0, 0.19354838132858276, 0.22727271914482117, 0, 0.1428571343421936, 0.23529411852359772, 0.2222222238779068, 0.3030303120613098, 0.13333332538604736, 0.0952380895614624, 0.1599999964237213 ]
HkxLXnAcFQ
true
[ " A detailed empirical study in few-shot classification that revealing challenges in standard evaluation setting and showing a new direction." ]
[ "Temporal logics are useful for describing dynamic system behavior, and have been successfully used as a language for goal definitions during task planning.", "Prior works on inferring temporal logic specifications have focused on \"summarizing\" the input dataset -- i.e., finding specifications that are satisfied by all plan traces belonging to the given set.", "In this paper, we examine the problem of inferring specifications that describe temporal differences between two sets of plan traces.", "We formalize the concept of providing such contrastive explanations, then present a Bayesian probabilistic model for inferring contrastive explanations as linear temporal logic specifications.", "We demonstrate the efficacy, scalability, and robustness of our model for inferring correct specifications across various benchmark planning domains and for a simulated air combat mission.", "In a meeting where multiple plan options are under deliberation by a team, it would be helpful for that team's resolution process if someone could intuitively explain how the plans under consideration differ from one another.", "Also, given a need to identify differences in execution behavior between distinct groups of users (e.g., a group of users who successfully completed a task using a particular system versus those who did not), explanations that identify distinguishing patterns between group behaviors can yield valuable analytics and insights toward iterative system refinement.In this paper, we seek to generate explanations for how two sets of divergent plans differ.", "We focus on generating such contrastive explanations by discovering specifications satisfied by one set of plans, but not the other.", "Prior works on plan explanations include those related to plan recognition for inferring latent goals through observations BID25 BID35 , works on system diagnosis and excuse generation in order to explain plan failures BID29 BID10 , and those focused on synthesizing \"explicable\" plans -i.e., plans that are self-explanatory with respect to a human's mental model BID16 .", "The aforementioned works, however, only involve the explanation or generation of a single plan; we instead focus on explaining differences between multiple plans, which can be helpful in various applications, such as the analysis of competing systems and compliance models, and detecting anomalous behaviour of users.A specification language should be used in order to achieve clear and effective plan explanations.", "Prior works have considered surface-level metrics such as plan cost and action (or causal link) similarity measures to describe plan differences BID23 BID3 .", "In this work, we leverage linear temporal logic (LTL) BID24 which is an expressive language for capturing temporal relations of state variables.", "We use a plan's individual satisfaction (or dissatisfaction) of LTL specifications to describe their differences.LTL specifications have been widely used in both industrial systems and planning algorithms to compactly describe temporal properties BID32 .", "They are human interpretable when expressed as compositions of predefined templates; inversely, they can be constructed from natural language descriptions BID7 ) and serve as natural patterns when encoding high-level human strategies for planning constraints BID14 .Although", "a suite of LTL miners have been developed for software engineering and verification purposes BID32 BID17 BID28 , they primarily focus on mining properties that summarize the overall behavior on a 
single set of plan traces. Recently", ", BID22 presented SAT-based algorithms to construct a LTL specification that asserts contrast between two sets of traces. The algorithms", ", however, are designed to output only a single explanation, and are susceptible to failure when the input contains imperfect traces. Similar to Neider", "and Gavran, our problem focuses on mining contrastive explanations between two sets of traces, but we adopt a probabilistic approach -we present a Bayesian inference model that can generate multiple explanations while demonstrating robustness to noisy input. The model also permits", "scalability when searching in large hypothesis spaces and allows for flexibility in incorporating various forms of prior knowledge and system designer preferences. We demonstrate the efficacy", "of our model for extracting correct explanations on plan traces across various benchmark planning domains and for a simulated air combat mission.Plan explanations are becoming increasingly important as automated planners and humans collaborate. This first involves humans", "making sense of the planner's output (e.g., PDDL plans), where prior work has focused on developing user-friendly interfaces that provide graphical visualizations to describe the causal links and temporal relations of plan steps BID1 BID26 BID21 . The outputs of these systems", ", however, require an expert for interpretation and do not provide a direct explanation as to why the planner made certain decisions to realize the outputted plan.Automatic generation of explanations has been studied in goal recognition settings, where the objective is to infer the latent goal state that best explains the incomplete sequence of observations BID25 BID30 . Works on explicable planning", "emphasize the generation of plans that are deemed selfexplanatory, defined in terms of optimizing plan costs for a human's mental model of the world BID16 . Mixed-initiative planners iteratively", "revise their plan generation based on user input (e.g. action modifications), indirectly promoting an understanding of differences across newly generated plans through continual user interaction BID27 BID3 . All aforementioned works deal with explainability", "with respect to a single planning problem specification, whereas our model deals with explaining differences in specifications governing two distinct sets of plans given as input.Works on model reconciliation focus on producing explanations for planning models (i.e. predicates, preconditions and effects), instead of the realized plans . Explanations are specified in the form of model updates", ", iteratively bringing an incomplete model to a more complete world model. The term, \"contrastive explanation,\" is used in these", "works to identify the relevant differences between the input pair of models. Our work is similar in spirit but focuses on producing", "a specification of differences in the constraints satisfied among realized plans. Our approach takes sets of observed plans as input rather", "than planning models.While model updates are an important modality for providing plan explanations, there are certain limitations. We note that an optimal plan generated with respect to a", "complete environment/world model is not always explicable or self-explanatory. The space of optimal plans may be large, and the underlying", "preference or constraint that drives the generation of a particular plan may be difficult to pre-specify and incorporate within the planning model representation. 
We focus on explanations stemming directly from the realized", "plans themselves. Environment/world models (e.g. PDDL domain files) can be helpful", "in providing additional context, but are not necessary for our approach.Our work leverages LTL as an explanation language. Temporal patterns can offer greater expressivity and explanatory", "power in describing why a set of plans occurred and how they differ, and may reveal hidden plan dynamics that cannot be captured by the use of surface-level metrics like plan cost or action similarities. Our work on using LTL for contrastive explanations directly contributes", "to exploring how we can answer the following roadmap questions for XAIP BID9 : \"why did you do that? why didn't you do something else (that I would have done)?\" Prior research into mining LTL specifications has focused on generating", "a \"summary\" explanation of the observed traces. BID12 explored mining globally persistent specifications from demonstrated", "action traces for a finite state Markov decision process. BID17 introduced Texada, a system for mining all possible instances of a given", "LTL template from an output log where each unique string is represented as a new proposition. BID28 proposed a template-based probabilistic model to infer task specifications", "given a set of demonstrations. However, all of these approaches focus on inferring a specification that all the", "demonstrated traces satisfy.For contrastive explanations, Neider and Gavran (2018) presented SAT-based algorithms to infer a LTL specification that delineates between the positive and negative sets of traces. Unlike existing LTL miners, the algorithms construct an arbitrary, minimal LTL specification", "without requiring predefined templates. However, they are designed to output only a single specification, and can fail when the sets", "contain imperfect traces (i.e., if there exists no specification consistent with every single input trace.). We present a probabilistic model for the same problem and generate multiple contrastive explanations", "while offering robustness to noisy input.Some works have proposed algorithms to infer contrastive explanations for continuous valued time-series data based on restricted signal temporal logic (STL) grammar BID33 BID15 . However, the continuous space semantics of STL and a restricted subset of temporal operators make the", "grammar unsuitable for use with planning domain problems. 
To the best of our knowledge, our proposed model is the first probabilistic model to infer contrastive", "LTL specifications for sets of traces in domains defined by PDDL.", "The runtime for our model and the delimited enumeration baseline with 2,000 samples ranged between 1.2-4.7 seconds (increase in |V | only had marginal effect on the runtime).", "The SAT-based miner by Neider and Gavran often failed to generate a solution within a five minute cutoff (see the number of its timeout cases in the last column of TAB2 ).", "The prior work can only output a single ϕ * , which frequently took on a form of Fp i .", "It did not scale well to problems that required more complex ϕ as solutions.", "This is because increasing the \"depth\" of ϕ (the number of temporal / Boolean operators and propositions) exponentially increased the size of the compiled SAT problem.", "In our experiments, the prior work timed out for problems requiring solutions with depth ≥ 3 (note that Fp i has depth of 2).", "Robustness to Noisy Input In order to test robustness, we perturbed the input X by randomly swapping traces between π A and π B .", "For example, a noise rate of 0.2 would swap 20% of the traces, where the accuracy of ϕ ground on the perturbed data, X = ( π A , π B ), would evaluate to 0.8 (note that it may be possible to discover other ϕ that achieve better accuracy on X).", "The MAP estimates inferred from X, { ϕ * }, were evaluated on the original input X to assess any loss of ability to provide contrast.", "Figure 3 shows the average accuracy of { ϕ * }, evaluated on both X and X, across varying noise rate.", "Even at a moderate noise rate of 0.25, the inferred ϕ * s were able to maintain an average accuracy greater than 0.9 on X. Such a threshold is promising for real-world applications.", "The robustness did start to sharply decline as noise rate increased past 0.4.", "For all test cases, the Neider and Gavran miner failed to generate a solution for anything with a noise rate ≥ 0.1.", "We have presented a probabilistic Bayesian model to infer contrastive LTL specifications describing how two sets of plan traces differ.", "Our model generates multiple contrastive explanations more efficiently than the state-of-the-art and demonstrates robustness to noisy input.", "It also provides a principled approach to incorporate various forms of prior knowledge or preferences during search.", "It can serve as a strong foundation that can be naturally extended to multiple input sets by repeating the algorithm for all pairwise or one-vs.-rest", "comparisons.Interesting avenues for future work include gauging the saliency of propositions, as well as deriving a minimal set of contrastive explanations. Furthermore", ", we seek to test the model in human-in-the-loop settings, with the goal of understanding the relationship between different planning heuristics for the saliency of propositions (e.g. landmarks and causal links) to their actual explicability when the explanation is communicated to a human." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.09302324801683426, 0.11999999731779099, 0.25, 0.3636363446712494, 0.17777776718139648, 0.145454540848732, 0.20512820780277252, 0.19999998807907104, 0.1492537260055542, 0.13333332538604736, 0.09302324801683426, 0.0476190410554409, 0.19607841968536377, 0.03703703358769417, 0.1818181723356247, 0.3499999940395355, 0.1428571343421936, 0.37288135290145874, 0.08888888359069824, 0.2181818187236786, 0.09836065024137497, 0.1599999964237213, 0.17391303181648254, 0.07407406717538834, 0.20588235557079315, 0.14999999105930328, 0.0952380895614624, 0.14999999105930328, 0.21276594698429108, 0.09756097197532654, 0.2745097875595093, 0, 0.0416666604578495, 0.2539682388305664, 0.10169491171836853, 0.1666666567325592, 0.14999999105930328, 0.21739129722118378, 0.1111111044883728, 0.29629629850387573, 0.1463414579629898, 0.2641509473323822, 0.19672130048274994, 0.23255813121795654, 0.25, 0.03999999538064003, 0.11999999731779099, 0.09999999403953552, 0.05714285373687744, 0.04651162400841713, 0.045454539358615875, 0.09302324801683426, 0.0952380895614624, 0.08695651590824127, 0.0476190410554409, 0.1111111044883728, 0.05714285373687744, 0.09302324801683426, 0.7804877758026123, 0.21052631735801697, 0.15789473056793213, 0.1304347813129425, 0.1904761791229248, 0.13793103396892548 ]
rkg2m6hXcE
true
[ "We present a Bayesian inference model to infer contrastive explanations (as LTL specifications) describing how two sets of plan traces differ." ]
[ "This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piece-wise linear non-linearity activations.", "We use tropical geometry, a new development in the area of algebraic geometry, to provide a characterization of the decision boundaries of a simple neural network of the form (Affine, ReLU, Affine).", "Specifically, we show that the decision boundaries are a subset of a tropical hypersurface, which is intimately related to a polytope formed by the convex hull of two zonotopes.", "The generators of the zonotopes are precise functions of the neural network parameters.", "We utilize this geometric characterization to shed light and new perspective on three tasks.", "In doing so, we propose a new tropical perspective for the lottery ticket hypothesis, where we see the effect of different initializations on the tropical geometric representation of the decision boundaries.", "Also, we leverage this characterization as a new set of tropical regularizers, which deal directly with the decision boundaries of a network.", "We investigate the use of these regularizers in neural network pruning (removing network parameters that do not contribute to the tropical geometric representation of the decision boundaries) and in generating adversarial input attacks (with input perturbations explicitly perturbing the decision boundaries geometry to change the network prediction of the input).", "In this paper, we leverage tropical geometry to characterize the decision boundaries of neural networks in the form (Affine, ReLU, Affine) and relate it to well-studied geometric objects such as zonotopes and polytopes.", "We leaverage this representation in providing a tropical perspective to support the lottery ticket hypothesis, network pruning and designing adversarial attacks.", "One natural extension for this work is a compact derivation for the characterization of the decision boundaries of convolutional neural networks (CNNs) and graphical convolutional networks (GCNs).", "Diego Ardila, Atilla P. Kiraly, Sujeeth Bharadwaj, Bokyung Choi, Joshua J. Reicher, Lily Peng, Daniel Tse, Mozziyar Etemadi, Wenxing Ye, Greg Corrado, David P. Naidich, and Shravya Shetty.", "End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography.", "Nature Medicine, 2019.", "A PRELIMINARIES AND DEFINITIONS.", "Fact", "1. P+Q = {p + q, ∀p ∈ P and q ∈ Q} is the Minkowski sum between two sets P and Q. Fact", "2. Let f be a tropical polynomial and let a ∈ N. Then", "Let both f and g be tropical polynomials, Then", "Note that V(P(f )) is the set of vertices of the polytope P(f )." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3888888955116272, 0.2857142686843872, 0.23255813121795654, 0.20689654350280762, 0.1875, 0.1860465109348297, 0.21052631735801697, 0.290909081697464, 0.375, 0.1538461446762085, 0.3499999940395355, 0.04444443807005882, 0, 0, 0, 0.10256409645080566, 0.13333332538604736, 0.14814814925193787, 0.13333332538604736 ]
BylldnNFwS
true
[ "Tropical geometry can be leveraged to represent the decision boundaries of neural networks and bring to light interesting insights." ]
[ "First-order methods such as stochastic gradient descent (SGD) are currently the standard algorithm for training deep neural networks.", "Second-order methods, despite their better convergence rate, are rarely used in practice due to the pro- hibitive computational cost in calculating the second-order information.", "In this paper, we propose a novel Gram-Gauss-Newton (GGN) algorithm to train deep neural networks for regression problems with square loss.", "Our method draws inspiration from the connection between neural network optimization and kernel regression of neural tangent kernel (NTK).", "Different from typical second-order methods that have heavy computational cost in each iteration, GGN only has minor overhead compared to first-order methods such as SGD.", "We also give theoretical results to show that for sufficiently wide neural networks, the convergence rate of GGN is quadratic.", "Furthermore, we provide convergence guarantee for mini-batch GGN algorithm, which is, to our knowledge, the first convergence result for the mini-batch version of a second-order method on overparameterized neural net- works.", "Preliminary experiments on regression tasks demonstrate that for training standard networks, our GGN algorithm converges much faster and achieves better performance than SGD.", "First-order methods such as Stochastic Gradient Descent (SGD) are currently the standard choice for training deep neural networks.", "The merit of first-order methods is obvious: they only calculate the gradient and therefore are computationally efficient.", "In addition to better computational efficiency, SGD has even more advantages among the first-order methods.", "At each iteration, SGD computes the gradient only on a mini-batch instead of all training data.", "Such randomness introduced by sampling the mini-batch can lead to better generalization (Hardt et al., 2015; Keskar et al., 2016; Masters & Luschi, 2018; Mou et al., 2017; Zhu et al., 2018) and better convergence (Ge et al., 2015; Jin et al., 2017a; b) , which is crucial when the function class is highly overparameterized deep neural networks.", "Recently there is a huge body of works trying to develop more efficient first-order methods beyond SGD (Duchi et al., 2011; Kingma & Ba, 2014; Luo et al., 2019; Liu et al., 2019) .", "Second-order methods, despite their better convergence rate, are rarely used to train deep neural networks.", "At each iteration, the algorithm has to compute second order information, for example, the Hessian or its approximation, which is typically an m by m matrix where m is the number of parameters of the neural network.", "Moreover, the algorithm needs to compute the inverse of this matrix.", "The computational cost is prohibitive and usually it is not even possible to store such a matrix.", "Formula and require subtle implementation tricks to use backpropagation.", "In contrast, GGN has simpler update rule and better guarantee for neural networks.", "In a concurrent and independent work, Zhang et al. 
(2019a) showed that natural gradient method and K-FAC have a linear convergence rate for sufficiently wide networks in full-batch setting.", "In contrast, our method enjoys a higher-order (quadratic) convergence rate guarantee for overparameterized networks, and we focus on developing a practical and theoretically sound optimization method.", "We also reveal the relation between our method and NTK kernel regression, so using results based on NTK (Arora et al., 2019b) , one can easily give generalization guarantee of our method.", "Another independent work (Achiam et al., 2019) proposed a preconditioned Q-learning algorithm which has similar form of our update rule.", "Unlike the methods considered in Zhang et al. (2019a) ; Achiam et al. (2019) which contain the learning rate that needed to be tuned, our derivation of GGN does not introduce a learning rate term (or understood as suggesting that the learning rate can be fixed to be 1 to get good performance which is verified in Figure 2 (c)).", "We propose a novel Gram-Gauss-Newton (GGN) method for solving regression problems with square loss using overparameterized neural networks.", "Despite being a second-order method, the computation overhead of the GGN algorithm at each iteration is small compared to SGD.", "We also prove that if the neural network is sufficiently wide, GGN algorithm enjoys a quadratic convergence rate.", "Experimental results on two regression tasks demonstrate that GGN compares favorably to SGD on these data sets with standard network architectures.", "Our work illustrates that second-order methods have the potential to compete with first-order methods for learning deep neural networks with huge number of parameters.", "In this paper, we mainly focus on the regression task, but our method can be easily generalized to other tasks such as classification as well.", "Consider the k-category classification problem, the neural network outputs a vector with k entries.", "Although this will increase the computational complexity of getting the Jacobian whose size increases k times, i.e., J ∈ R (bk)×m , each row of J can be still computed in parallel, which means the extra cost only comes from parallel computation overhead when we calculate in a fully parallel setting.", "While most first-order methods for training neural networks can hardly make use of the computational resource in parallel or distributed settings to accelerate training, our GGN method can exploit this ability.", "For first-order methods, basically extra computational resource can only be used to calculate more gradients at a time by increasing batch size, which harms generalization a lot.", "But for GGN, more resource can be used to refine the gradients and achieve accelerated convergence speed with the help of second-order information.", "It is an important future work to study the application of GGN to classification problems." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.04999999329447746, 0.09090908616781235, 0.2790697515010834, 0.25641024112701416, 0.04347825422883034, 0.1904761791229248, 0.16326530277729034, 0.08888888359069824, 0.04999999329447746, 0.05128204822540283, 0.05405404791235924, 0, 0.1515151411294937, 0.038461532443761826, 0.21621620655059814, 0.11538460850715637, 0.0624999962747097, 0.10526315122842789, 0.12903225421905518, 0.11428570747375488, 0.12244897335767746, 0.2222222238779068, 0.11764705181121826, 0, 0.02985074184834957, 0.25, 0.09756097197532654, 0.09999999403953552, 0.0952380895614624, 0.13636362552642822, 0.08695651590824127, 0.11428570747375488, 0, 0.11538460850715637, 0.0833333283662796, 0.22727271914482117, 0.0555555522441864 ]
H1gCeyHFDS
true
[ "A novel Gram-Gauss-Newton method to train neural networks, inspired by neural tangent kernel and Gauss-Newton method, with fast convergence speed both theoretically and experimentally." ]
[ "Recent pretrained sentence encoders achieve state of the art results on language understanding tasks, but does this mean they have implicit knowledge of syntactic structures?", "We introduce a grammatically annotated development set for the Corpus of Linguistic Acceptability (CoLA; Warstadt et al., 2018), which we use to investigate the grammatical knowledge of three pretrained encoders, including the popular OpenAI Transformer (Radford et al., 2018) and BERT (Devlin et al., 2018).", "We fine-tune these encoders to do acceptability classification over CoLA and compare the models’ performance on the annotated analysis set.", "Some phenomena, e.g. modification by adjuncts, are easy to learn for all models, while others, e.g. long-distance movement, are learned effectively only by models with strong overall performance, and others still, e.g. morphological agreement, are hardly learned by any model.", "The effectiveness and ubiquity of pretrained sentence embeddings for natural language understanding has grown dramatically in recent years.", "Recent sentence encoders like OpenAI's Generative Pretrained Transformer (GPT; Radford et al., 2018) and BERT (Devlin et al., 2018) achieve the state of the art on the GLUE benchmark (Wang et al., 2018) .", "Among the GLUE tasks, these stateof-the-art systems make their greatest gains on the acceptability task with the Corpus of Linguistic Acceptability (CoLA; Warstadt et al., 2018) .", "CoLA contains example sentences from linguistics publications labeled by experts for grammatical acceptability, and written to show subtle grammatical features.", "Because minimal syntactic differences can separate acceptable sentences from unacceptable ones (What did Bo write a book about? / *What was a book about written by Bo?), and acceptability classifiers are more reliable when trained on GPT and BERT than on recurrent models, it stands to reason that GPT and BERT have better implicit knowledge of syntactic features relevant to acceptability.Our goal in this paper is to develop an evaluation dataset that can locate which syntactic features that a model successfully learns by identifying the syntactic domains of CoLA in which it performs the best.", "Using this evaluation set, we compare the syntactic knowledge of GPT and BERT in detail, and investigate the strengths of these models over the baseline BiLSTM model published by Warstadt et al. 
(2018) .", "The analysis set includes expert annotations labeling the entire CoLA development set for the presence of 63 fine-grained syntactic features.We identify many specific syntactic features that make sentences harder to classify, and many that have little effect.", "For instance, sentences involving unusual or marked argument structures are no harder than the average sentence, while sentences with long distance dependencies are hard to learn.", "We also find features of sentences that accentuate or minimize the differences between models.", "Specifically, the transformer models seem to learn long-distance dependencies much better than the recurrent model, yet have no advantage on sentences with morphological violations.", "Using a new grammatically annotated analysis set, we identify several syntactic phenomena that are predictive of good or bad performance of current state of the art sentence encoders on CoLA.", "We also use these results to develop hypotheses about why BERT is successful, and why transformer models outperform sequence models.Our findings can guide future work on sentence embeddings.", "A current weakness of all sentence encoders we investigate, including BERT, is the identification of morphological violations.", "Future engineering work should investigate whether switching to a character-level model can mitigate this problem.", "Additionally, transformer models appear to have an advantage over sequence models with long-distance dependencies, but still struggle with these constructions relative to more local phenomena.", "It stands to reason that this performance gap might be widened by training larger or deeper transformer models, or training on longer or more complex sentences.", "This analysis set can be used by engineers interested in evaluating the syntactic knowledge of their encoders.", "Finally, these findings suggest possible controlled experiments that could confirm whether there is a causal relation between the presence of the syntactic features we single out as interesting and model performance.", "Our results are purely correlational, and do not mark whether a particular construction is crucial for the acceptability of the sentence.", "Future experiments following Ettinger et al. (2018) and Kann et al. (2019) can semi-automatically generate datasets manipulating, for example, length of long-distance dependencies, inflectional violations, or the presence of interrogatives, while controlling for factors like sentence length and word choice, in order determine the extent to which these features impact the quality of sentence embeddings.", "(1) Included", "a. John owns the book.", "(37)", "b. Park Square has a festive air.", "(131)", "c. *Herself likes Mary's mother.", "FORMULA0 (2) Excluded", "a. Bill has eaten cake.", "b. I gave Joe a book." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.27272728085517883, 0.3103448152542114, 0.307692289352417, 0.03703703358769417, 0.15789473056793213, 0.1304347813129425, 0.17777776718139648, 0.051282044500112534, 0.17391304671764374, 0.20408162474632263, 0.26923075318336487, 0.13636362552642822, 0.23529411852359772, 0.1395348757505417, 0.375, 0.12765957415103912, 0.1666666567325592, 0.11428570747375488, 0.0476190410554409, 0.04651162400841713, 0.3243243098258972, 0.1599999964237213, 0.25, 0.1230769157409668, 0.1599999964237213, 0.07407406717538834, 0, 0, 0.07999999821186066, 0.07692307233810425 ]
Hkx5cU26kN
true
[ "We investigate the implicit syntactic knowledge of sentence embeddings using a new analysis set of grammatically annotated sentences with acceptability judgments." ]
[ "When considering simultaneously a finite number of tasks, multi-output learning enables one to account for the similarities of the tasks via appropriate regularizers.", "We propose a generalization of the classical setting to a continuum of tasks by using vector-valued RKHSs.", "Several fundamental problems in machine learning and statistics can be phrased as the minimization of a loss function described by a hyperparameter.", "The hyperparameter might capture numerous aspects of the problem:", "(i) the tolerance w.", "r.", "t.", "outliers as the -insensitivity in Support Vector Regression (Vapnik et al., 1997) ,", "(ii) importance of smoothness or sparsity such as the weight of the l 2 -norm in Tikhonov regularization (Tikhonov & Arsenin, 1977) , l 1 -norm in LASSO (Tibshirani, 1996) , or more general structured-sparsity inducing norms BID3 ,", "(iii) Density Level-Set Estimation (DLSE), see for example one-class support vector machines One-Class Support Vector Machine (OCSVM, Schölkopf et al., 2000) ,", "(iv) confidence as exemplified by Quantile Regression (QR, Koenker & Bassett Jr, 1978) , or", "(v) importance of different decisions as implemented by Cost-Sensitive Classification (CSC, Zadrozny & Elkan, 2001) .", "In various cases including QR, CSC or DLSE, one is interested in solving the parameterized task for several hyperparameter values.", "Multi-Task Learning (Evgeniou & Pontil, 2004 ) provides a principled way of benefiting from the relationship between similar tasks while preserving local properties of the algorithms: ν-property in DLSE (Glazer et al., 2013) or quantile property in QR (Takeuchi et al., 2006) .A", "natural extension from the traditional multi-task setting is to provide a prediction tool being able to deal with any value of the hyperparameter. In", "their seminal work, (Takeuchi et al., 2013) extended multi-task learning by considering an infinite number of parametrized tasks in a framework called Parametric Task Learning (PTL) . Assuming", "that the loss is piecewise affine in the hyperparameter, the authors are able to get the whole solution path through parametric programming, relying on techniques developed by Hastie et al. (2004) .In this paper", "1 , we relax the affine model assumption on the tasks as well as the piecewise-linear assumption on the loss, and take a different angle. We propose Infinite", "Task Learning (ITL) within the framework of functionvalued function learning to handle a continuum number of parameterized tasks using Vector-Valued Reproducing Kernel Hilbert Space (vv-RKHS, Pedrick, 1957) ." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.34285715222358704, 0.5517241358757019, 0.17142856121063232, 0.08695651590824127, 0, 0, 0.04444444179534912, 0, 0, 0.06896550953388214, 0, 0.11320754140615463, 0.2222222238779068, 0.2380952388048172, 0.04444444179534912, 0.2222222238779068, 0.3499999940395355 ]
H1gIN5Bs3E
true
[ "We propose an extension of multi-output learning to a continuum of tasks using operator-valued kernels." ]
[ "We analyze the joint probability distribution on the lengths of the\n", "vectors of hidden variables in different layers of a fully connected\n", "deep network, when the weights and biases are chosen randomly according to\n", "Gaussian distributions, and the input is binary-valued.", " We show\n", "that, if the activation function satisfies a minimal set of\n", "assumptions, satisfied by all activation functions that we know that\n", "are used in practice, then, as the width of the network gets large,\n", "the ``length process'' converges in probability to a length map\n", "that is determined as a simple function of the variances of the\n", "random weights and biases, and the activation function.\n\n", "We also show that this convergence may fail for activation functions \n", "that violate our assumptions." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23529411852359772, 0.2857142686843872, 0.1621621549129486, 0.0624999962747097, 0.2857142686843872, 0.11764705181121826, 0.2702702581882477, 0.2857142686843872, 0.22857142984867096, 0.12121211737394333, 0.2222222238779068, 0 ]
HJej3s09Km
false
[ "We prove that, for activation functions satisfying some conditions, as a deep network gets wide, the lengths of the vectors of hidden variables converge to a length map." ]
[ "Data augmentation is one of the most effective approaches for improving the accuracy of modern machine learning models, and it is also indispensable to train a deep model for meta-learning.", "However, most current data augmentation implementations applied in meta-learning are the same as those used in the conventional image classification.", "In this paper, we introduce a new data augmentation method for meta-learning, which is named as ``Task Level Data Augmentation'' (referred to Task Aug).", "The basic idea of Task Aug is to increase the number of image classes rather than the number of images in each class.", "In contrast, with a larger amount of classes, we can sample more diverse task instances during training.", "This allows us to train a deep network by meta-learning methods with little over-fitting.", "Experimental results show that our approach achieves state-of-the-art performance on miniImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks.", "Once paper is accepted, we will provide the link to code.", "Although the machine learning systems have achieved a human-level ability in many fields with a large amount of data, learning from a few examples is still a challenge for modern machine learning techniques.", "Recently, the machine learning community has paid significant attention to this problem, where few-shot learning is the common task for meta-learning (e.g., Ravi & Larochelle (2017) ; Finn et al. (2017) ; ; Snell et al. (2017) ).", "The purpose of few-shot learning is to learn to maximize generalization accuracy across different tasks with few training examples.", "In a classification application of the few-shot learning, tasks are generated by sampling from a conventional classification dataset; then, training samples are randomly selected from several classes in the classification dataset.", "In addition, a part of the examples is used as training examples and testing examples.", "Thus, a tiny learning task is formed by these examples.", "The meta-learning methods are applied to control the learning process of a base learner, so as to correctly classify on testing examples.", "Data augmentation is widely used to improve the training of deep learning models.", "Usually, the data augmentation is regarded as an explicit form of regularization He et al. 
(2016) ; Simonyan & Zisserman (2014) ; .", "Thus, the data augmentation aims at artificially generating the training data by using various translations on existing data, such as: adding noises, cropping, flipping, rotation, translation, etc.", "The general idea of data augmentations is increasing the number of data by change data slightly to be different from original data, but the data still can be recognized by human.", "The new data involved in the classes are identical to the original data.", "However, the minimum units of meta-learning are the tasks rather than data.", "Increasing the data of original class cannot increase the types of task instances.", "Therefore, \"Task Aug\" increases the data that can be clearly recognized as the different classes as the original data.", "With novel classes, the more diverse task instances can be generated.", "This is important for the meta-learning, since metalearning models must predict unseen classes during the testing phase.", "Therefore, a larger number of classes is helpful for models to generate task instances with different classes.", "In this work, the natural images are augmented by being rotated 90, 180, 270 degrees (we show examples in Figure 1 ).", "We compare two cases,", "1) the new images are converted to the classes of original images and", "2) the new images are separated to the new classes.", "The proposed method is evaluated by experiments with the state of art meta-learning Methods Snell et al. (2017) (2017) .", "The experimental result analysis shows that Task Aug can reduce over-fitting and improve the performance, while the conventional data augmentation (referred to Data Aug) of rotation, which converts the novel data into the classes of original data, does not improve the performance and even causes the worse result.", "In the comparative experiments, Task Aug achieves the best accuracy of the meta-learning methods applied.", "Besides, the best results of our experiments exceed the current state-of-art result over a large margin." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3499999940395355, 0.1875, 0.2631579041481018, 0.060606054961681366, 0.06451612710952759, 0.1428571343421936, 0.19354838132858276, 0.07999999821186066, 0.1463414579629898, 0.1304347813129425, 0.0624999962747097, 0.05128204822540283, 0.2222222238779068, 0.1666666567325592, 0.11428570747375488, 0.14814814925193787, 0.17142856121063232, 0.10256409645080566, 0.10526315122842789, 0.07999999821186066, 0.1599999964237213, 0.07999999821186066, 0.13793103396892548, 0, 0.13333332538604736, 0.19999998807907104, 0, 0.1111111044883728, 0.07999999821186066, 0, 0.1249999925494194, 0.1538461446762085, 0.07407406717538834, 0.06896550953388214 ]
Hkx9UaNKDH
true
[ "We propose a data augmentation approach for meta-learning and prove that it is valid." ]
[ "In this paper, we present a general framework for distilling expectations with respect to the Bayesian posterior distribution of a deep neural network, significantly extending prior work on a method known as ``Bayesian Dark Knowledge.", "\" Our generalized framework applies to the case of classification models and takes as input the architecture of a ``teacher\" network, a general posterior expectation of interest, and the architecture of a ``student\" network.", "The distillation method performs an online compression of the selected posterior expectation using iteratively generated Monte Carlo samples from the parameter posterior of the teacher model.", "We further consider the problem of optimizing the student model architecture with respect to an accuracy-speed-storage trade-off.", "We present experimental results investigating multiple data sets, distillation targets, teacher model architectures, and approaches to searching for student model architectures.", "We establish the key result that distilling into a student model with an architecture that matches the teacher, as is done in Bayesian Dark Knowledge, can lead to sub-optimal performance.", "Lastly, we show that student architecture search methods can identify student models with significantly improved performance.", "Deep learning models have shown promising results in the areas including computer vision, natural language processing, speech recognition, and more (Krizhevsky et al., 2012; Graves et al., 2013a; b; Huang et al., 2016; Devlin et al., 2018) .", "However, existing point estimation-based training methods for these models may result in predictive uncertainties that are not well calibrated, including the occurrence of confident errors.", "It is well-known that Bayesian inference can often provide more robust posterior predictive distributions in the classification setting compared to the use of point estimation-based training.", "However, the integrals required to perform Bayesian inference in neural network models are also well-known to be intractable.", "Monte Carlo methods provide one solution to representing neural network parameter posteriors as ensembles of networks, but this can require large amounts of both storage and compute time (Neal, 1996; Welling & Teh, 2011) .", "To help overcome these problems, Balan et al. 
(2015) introduced an interesting model training method referred to as Bayesian Dark Knowledge.", "In the classification setting, Bayesian Dark Knowledge attempts to compress the Bayesian posterior predictive distribution induced by the full parameter posterior of a \"teacher\" network into a \"student\" network.", "The parameter posterior of the teacher network is represented through a Monte Carlo ensemble of specific instances of the teacher network (the teacher ensemble), and the analytically intractable posterior predictive distributions are approximated as Monte Carlo averages over the output of the networks in the teacher ensemble.", "The major advantage of this approach is that the computational complexity of prediction at test time is drastically reduced compared to computing Monte Carlo averages over a large ensemble of networks.", "As a result, methods of this type have the potential to be much better suited to learning models for deployment in resource constrained settings.", "In this paper, we present a Bayesian posterior distillation framework that generalizes the Bayesian Dark Knowledge approach in several significant directions.", "The primary modeling and algorithmic contributions of this work are: (1) we generalize the target of distillation in the classification case from the posterior predictive distribution to general posterior expectations; (2) we generalize the student architecture from being restricted to match the teacher architecture to being a free choice in the distillation procedure.", "The primary empirical contributions of this work are (1) evaluating the distillation of both the posterior predictive distribution and expected posterior entropy across a range of models and data sets including manipulations of data sets that increase posterior uncertainty; and (2) evaluating the impact of the student model architecture on distillation performance including the investigation of sparsity-inducing regularization and pruning for student model architecture optimization.", "The key empirical findings are that (1) distilling into a student model that matches the architecture of the teacher, as in Balan et al. 
(2015) , can be sub-optimal; and (2) student architecture optimization methods can identify significantly improved student models.", "We note that the significance of generalizing distillation to arbitrary posterior expectations is that it allows us to capture a wider range of useful statistics of the posterior that are of interest from an uncertainty quantification perspective.", "As noted above, we focus on the case of distilling the expected posterior entropy in addition to the posterior predictive distribution itself.", "When combined with the entropy of the posterior predictive distribution, the expected posterior entropy enables disentangling model uncertainty (epistemic uncertainty) from fundamental uncertainty due to class overlap (aleatoric uncertainty).", "This distinction is extremely important in determining why predictions are uncertain for a given data case.", "Indeed, the difference between these two terms is the basis for the Bayesian active learning by disagreement (BALD) score used in active learning, which samples instances with the goal of minimizing model uncertainty (Houlsby et al., 2011) .", "The remainder of this paper is organized as follows.", "In the next section, we begin by presenting background material and related work in Section 2.", "In Section 3, we present the proposed framework and associated Generalized Posterior Expectation Distillation (GPED) algorithm.", "In Section 4, we present experiments and results.", "Additional details regarding data sets and experiments can be found in Appendix A, with supplemental results included in Appendix B.", "We have presented a framework for distilling expectations with respect to the Bayesian posterior distribution of a deep neural network that generalizes the Bayesian Dark Knowledge approach in several significant directions.", "Our results show that the performance of posterior distillation can be highly sensitive to the architecture of the student model, but that basic architecture search methods can help to identify student model architectures with improved speed-storage-accuracy trade-offs.", "There are many directions for future work including considering the distillation of a broader class of posterior statistics including percentiles, assessing and developing more advanced student model architecture search methods, and applying the framework to larger state-of-the-art models.", "A DATASETS AND MODEL DETAILS" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.40909090638160706, 0.1666666567325592, 0.060606054961681366, 0, 0.06451612710952759, 0.10256409645080566, 0, 0, 0.0555555522441864, 0.1111111044883728, 0.1428571343421936, 0.045454543083906174, 0.0624999962747097, 0.11764705181121826, 0.0952380895614624, 0.05128204822540283, 0.05882352590560913, 0.19354838132858276, 0.0833333283662796, 0.07407406717538834, 0.043478257954120636, 0.09999999403953552, 0.13333332538604736, 0.05882352590560913, 0.07407406717538834, 0.08888888359069824, 0, 0, 0.07407406717538834, 0, 0, 0.41025641560554504, 0.04999999701976776, 0.13333332538604736, 0.1249999925494194 ]
Byg_vREtvB
true
[ "A general framework for distilling Bayesian posterior expectations for deep neural networks." ]
[ "Variational Autoencoders (VAEs) have proven to be powerful latent variable models.", "How- ever, the form of the approximate posterior can limit the expressiveness of the model.", "Categorical distributions are flexible and useful building blocks for example in neural memory layers.", "We introduce the Hierarchical Discrete Variational Autoencoder (HD-VAE): a hi- erarchy of variational memory layers.", "The Concrete/Gumbel-Softmax relaxation allows maximizing a surrogate of the Evidence Lower Bound by stochastic gradient ascent.", "We show that, when using a limited number of latent variables, HD-VAE outperforms the Gaussian baseline on modelling multiple binary image datasets.", "Training very deep HD-VAE remains a challenge due to the relaxation bias that is induced by the use of a surrogate objective.", "We introduce a formal definition and conduct a preliminary theoretical and empirical study of the bias.", "Unsupervised learning has proven powerful at leveraging vast amounts of raw unstructured data (Kingma et al., 2014; Radford et al., 2017; Peters et al., 2018; Devlin et al., 2018) .", "Through unsupervised learning, latent variable models learn the explicit likelihood over an unlabeled dataset with an aim to discover hidden factors of variation as well as a generative process.", "An example hereof, is the Variational Autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014 ) that exploits neural networks to perform amortized approximate inference over the latent variables.", "This approximation comes with limitations, both in terms of the latent prior and the amortized inference network (Burda et al., 2015; Hoffman and Johnson, 2016) .", "It has been proposed to go beyond Gaussian priors and approximate posterior using, for instance, autoregressive flows (Chen et al., 2016; Kingma et al., 2016) , a hierarchy of latent variables (Sønderby et al., 2016; Maaløe et al., 2016 Maaløe et al., , 2019 , a mixture of priors (Tomczak and Welling, 2017) or discrete distributions (van den Oord et al., 2017; Razavi et al., 2019; Rolfe, 2016; Vahdat et al., 2018b,a; Sadeghi et al., 2019) .", "Current state-of-the-art deep learning models are trained on web-scaled datasets and increasing the number of parameters has proven to be a way to yield remarkable results (Radford et al., 2019) .", "Nonetheless, time complexity and GPU memory are scarce resources, and the need for both resources increases linearly with the depth of neural network.", "Li et al. (2016) and Lample et al. (2019) showed that large memory layers are an effective way to increase the capacity of a model while reducing the computation time.", "Bornschein et al. 
(2017) showed that discrete variational distributions are analogous to neural memory (Graves et al., 2016) , which can be used to improve generative models (Li et al., 2016; Lample et al., 2019) .", "Also, memory values are yet another way to embed data, allowing for applications such as one-shot transfer learning (Rezende et al., 2016) and semi-supervised learning that scales (Jang et al., 2016) .", "Depth promises to bring VAEs to the next frontier (Maaløe et al., 2019) .", "However, the available computing resources may shorten that course.", "Motivated by the versatility and the scalability of discrete distributions, we introduce the Hierarchical Discrete Variational Autoencoder.", "HD-VAE is a VAE with a hierarchy of factorized categorical latent variables.", "In contrast to the existing discrete latent variable methods, our model", "(a) is hierarchical,", "(b) trained using Concrete/Gumbel-Softmax,", "(c) relies on a conditional prior that is learned end-to-end and", "(d) uses a variational distribution that is parameterized as a large stochastic memory layer.", "Despite being optimized for a biased surrogate objective we show that a shallow HD-VAE outperforms the baseline Gaussian-based models on multiple binary images datasets in terms of test log-likelihood.", "This motivates us to introduce a definition of the relaxation bias and to measure how it is affected by the configuration of latent variables.", "In this preliminary research, we have introduced a design for variational memory layers and shown that it can be exploited to build hierarchical discrete VAEs, that outperform Gaussian prior VAEs.", "However, without explicitly constraining the model, the relaxation bias grows with the number of latent layers, which prevents us from building deep hierarchical models that are competitive with state-of-the-art methods.", "In future work we will attempt to harness the relaxed-ELBO to improve the performance of the HD-VAE further.", "Optimization During training, we mitigate the posterior collapse using the freebits (Kingma et al., 2016) strategy with λ = 2 for each stochastic layer.", "A dropout of 0.5 is used to avoid overfitting.", "We linearly decrease the temperature τ from 0.8 to 0.3 during the first 2 · 10 5 steps and from 0.3 to 0.1 during the next 2 · 10 5 steps.", "We use the Adamax optimizer (Kingma and Ba, 2014) with initial learning rate of 2 · 10 −3 for all parameters except for the memory values that are trained using a learning rate of 2 · 10 −2 to compensate for sparsity.", "We use a batch size of 128.", "All models are trained until they overfit and we evaluate the log-likelihood using 1000 importance weighted samples (Burda et al., 2015) .", "Despite its large number of parameters, HD-VAE seems to be more robust to overfitting, which may be explained by the sparse update of the memory values.", "Runtime Sparse CUDA operations are currently not used, which means there is room to make HD-VAE more memory efficient.", "Even during training, one may truncate the relaxed samples to benefit from the sparse optimizations.", "The table 3 shows the average elapsed time training iteration as well as the memory usage for a 6 layers LVAE with 6 × 16 stochastic units and K = 16 2 and batch size of 128.", "Table 4 : Measured one-importance-weighted ELBO on binarized MNIST for a LVAE model with different number of layers and different numbers of stochastic units using relaxed (τ = 0.1) and hard samples (τ = 0).", "We report N = L l=1 n l , where n l relates to the number of latent 
variables at the layer l and we set K = 256 for all the variables.", "Let x be an observed variable, and consider a VAE model with one layer of N categorical latent variables z = {z 1 , . . . , z N } each with K classes.", "The generative model is p θ (x, z) and the inference model is q φ (z|x).", "For a temperature parameter τ > 0, the equivalent relaxed concrete variables are denotedẑ = {ẑ 1 , . . . ,ẑ N },ẑ i ∈ [0, 1] K .", "We define H = one hot • arg max and", "Following Tucker et al. (2017), using the Gumbel-Max trick, one can notice that", "We now assume that f θ,φ,x is κ-Lipschitz for L 2 .", "Then, by definition,", "The relaxation bias can therefore be bounded as follows:", "Furthermore, we can define the adjusted Evidence Lower Bound for relaxed categorical variables (relaxed-ELBO):", "As shown by the experiment presented in the section 4.2, the quantity L τ >0", "1 (θ, φ) appears to be a positive quantity.", "Furthermore, as the model attempts to exploit the relaxation of z to maximize the surrogate objective, one may consider that", "is a tight bound of δ τ (θ, φ), meaning that the relaxed-ELBO is a tight lower bound of the ELBO.", "The relaxed-ELBO is differentiable and may enable automatic control of the temperature as left and right terms of the relaxed-ELBO seek respectively seek for high and low temperature.", "κ-Lipschitz neural networks can be designed using Weight Normalization (Salimans and Kingma, 2016) or Spectral Normalization (Miyato et al., 2018) .", "Nevertheless handling residual connections and multiple layers of latent variables is not trivial.", "We note however that in the case of a one layer VAE, one only needs to constrain the VAE decoder to be κ-Lispchitz as the surrogate objective is computed as", "In the appendix E, we show how the relaxed-ELBO can be extended to multiple layers of latent variables in the LVAE setting.", "Appendix D. Defining f θ,φ on the domain of the relaxed Categorical Variablesz f θ,φ is only defined for categorical samples.", "For relaxed samplesz, we define f θ,φ as:", ".", "The introduction of the function H is necessary as the terms", "(b) and", "(c) are only defined for categorical samples.", "This expression remains valid for hard samplesz.", "During training, relaxing the expressions", "(b) and", "(c) can potentially yield gradients of lower variance.", "In the case of a single categorical variable z described by the set of K class probabilities π = {π 1 , ...π K }.", "One can define:", "Alternatively, asides from being a relaxed Categorical distribution, the Concrete/GumbelSoftmax also defines a proper continuous distribution.", "When treated as such, this results in a proper probabilistic model with continuous latent variables, and the objective is unbiased.", "In that case, the density is given by" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.04878048226237297, 0.09756097197532654, 0.09090908616781235, 0.17777776718139648, 0.21739129722118378, 0.19230768084526062, 0.23999999463558197, 0.22727271914482117, 0.03703703358769417, 0.17543859779834747, 0.1666666567325592, 0.14814814925193787, 0.1975308656692505, 0.13333332538604736, 0.15686273574829102, 0.21052631735801697, 0.06896550953388214, 0.10344827175140381, 0.04651162400841713, 0.10256409645080566, 0.2666666507720947, 0.2926829159259796, 0.19512194395065308, 0, 0.05882352590560913, 0.1463414579629898, 0.09302324801683426, 0.27586206793785095, 0.31372547149658203, 0.2711864411830902, 0.17543859779834747, 0.17777776718139648, 0.14814814925193787, 0.04999999701976776, 0.07843136787414551, 0.2222222238779068, 0.10810810327529907, 0.1538461446762085, 0.07692307233810425, 0, 0.045454539358615875, 0.16129031777381897, 0.16393442451953888, 0.25, 0.23728813230991364, 0.09090908616781235, 0.10526315122842789, 0.04999999701976776, 0.1395348757505417, 0.09756097197532654, 0, 0.05128204822540283, 0.22727271914482117, 0.045454539358615875, 0.05128204822540283, 0.1702127605676651, 0.2222222238779068, 0.15686273574829102, 0.07999999821186066, 0.1860465109348297, 0.1818181723356247, 0.23999999463558197, 0.1666666567325592, 0.052631575614213943, 0.09999999403953552, 0.10810810327529907, 0.054054051637649536, 0.05714285373687744, 0.052631575614213943, 0.19607841968536377, 0, 0.08888888359069824, 0.2800000011920929, 0.15789473056793213 ]
HkxXcy2EYB
true
[ "In this paper, we introduce a discrete hierarchy of categorical latent variables that we train using the Concrete/Gumbel-Softmax relaxation and we derive an upper bound for the absolute difference between the unbiased and the biased objective." ]
[ "In this paper, we propose a novel technique for improving the stochastic gradient descent (SGD) method to train deep networks, which we term \\emph{PowerSGD}.", "The proposed PowerSGD method simply raises the stochastic gradient to a certain power $\\gamma\\in[0,1]$ during iterations and introduces only one additional parameter, namely, the power exponent $\\gamma$ (when $\\gamma=1$, PowerSGD reduces to SGD).", "We further propose PowerSGD with momentum, which we term \\emph{PowerSGDM}, and provide convergence rate analysis on both PowerSGD and PowerSGDM methods.", "Experiments are conducted on popular deep learning models and benchmark datasets.", "Empirical results show that the proposed PowerSGD and PowerSGDM obtain faster initial training speed than adaptive gradient methods, comparable generalization ability with SGD, and improved robustness to hyper-parameter selection and vanishing gradients.", "PowerSGD is essentially a gradient modifier via a nonlinear transformation.", "As such, it is orthogonal and complementary to other techniques for accelerating gradient-based optimization.", "Stochastic optimization as an essential part of deep learning has received much attention from both the research and industry communities.", "High-dimensional parameter spaces and stochastic objective functions make the training of deep neural network (DNN) extremely challenging.", "Stochastic gradient descent (SGD) (Robbins & Monro, 1951 ) is the first widely used method in this field.", "It iteratively updates the parameters of a model by moving them in the direction of the negative gradient of the objective evaluated on a mini-batch.", "Based on SGD, other stochastic optimization algorithms, e.g., SGD with Momentum (SGDM) (Qian, 1999) , AdaGrad (Duchi et al., 2011) , RMSProp (Tieleman & Hinton, 2012) , Adam (Kingma & Ba, 2015) are proposed to train DNN more efficiently.", "Despite the popularity of Adam, its generalization performance as an adaptive method has been demonstrated to be worse than the non-adaptive ones.", "Adaptive methods (like AdaGrad, RMSProp and Adam) often obtain faster convergence rates in the initial iterations of training process.", "Their performance, however, quickly plateaus on the testing data (Wilson et al., 2017) .", "In Reddi et al. 
(2018) , the authors provided a convex optimization example to demonstrate that the exponential moving average technique can cause non-convergence in the RMSProp and Adam, and they proposed a variant of Adam called AMSGrad, hoping to solve this problem.", "The authors provide a theoretical guarantee of convergence but only illustrate its better performance on training data.", "However, the generalization ability of AMSGrad on test data is found to be similar to that of Adam, and a considerable performance gap still exists between AMSGrad and SGD (Keskar & Socher, 2017; Chen et al., 2018) .", "Indeed, the optimizer is chosen as SGD (or with Momentum) in several recent state-of-the-art works in natural language processing and computer vision (Luo et al., 2018; Wu & He, 2018) , where in these instances SGD does perform better than adaptive methods.", "Despite the practical success of SGD, obtaining sharp convergence results in the non-convex setting for SGD to efficiently escape saddle points (i.e., convergence to second-order stationary points) remains a topic of active research (Jin et al., 2019; Fang et al., 2019) .", "Related Works: SGD, as the first efficient stochastic optimizer for training deep networks, iteratively updates the parameters of a model by moving them in the direction of the negative gradient of the objective function evaluated on a mini-batch.", "SGDM brings a Momentum term from the physical perspective, which obtains faster convergence speed than SGD.", "The Momentum idea can be seen as a particular case of exponential moving average (EMA).", "Then the adaptive learning rate (ALR) technique is widely adopted but also disputed in deep learning, which is first introduced by AdaGrad.", "Contrast to the SGD, AdaGrad updates the parameters according to the square roots of the sum of squared coordinates in all the past gradients.", "AdaGrad can potentially lead to huge gains in terms of convergence (Duchi et al., 2011) when the gradients are sparse.", "However, it will also lead to rapid learning rate decay when the gradients are dense.", "RMSProp, which first appeared in an unpublished work (Tieleman & Hinton, 2012) , was proposed to handle the aggressive, rapidly decreasing learning rate in AdaGrad.", "It computes the exponential moving average of the past squared gradients, instead of computing the sum of the squares of all the past gradients in AdaGrad.", "The idea of AdaGrad and RMSProp propelled another representative algorithm: Adam, which updates the weights according to the mean divided by the root mean square of recent gradients, and has achieved enormous success.", "Recently, research to link discrete gradient-based optimization to continuous dynamic system theory has received much attention (Yuan et al., 2016; Mazumdar & Ratliff, 2018) .", "While the proposed optimizer excels at improving initial training, it is completely complementary to the use of learning rate schedules (Smith & Topin, 2019; Loshchilov & Hutter, 2016) .", "We will explore how to combine learning rate schedules with the PoweredSGD optimizer in future work.", "While other popular techniques focus on modifying the learning rates and/or adopting momentum terms in the iterations, we propose to modify the gradient terms via a nonlinear function called the Powerball function by the authors of Yuan et al. (2016) .", "In Yuan et al. 
(2016) , the authors presented the basic idea of applying the Powerball function in gradient descent methods.", "In this paper, we", "1) systematically present the methods for stochastic optimization with and without momentum;", "2) provide convergence proofs;", "3) include experiments using popular deep learning models and benchmark datasets.", "Another related work was presented in Bernstein et al. (2018) , where the authors proposed a version of stochastic gradient descent that uses only the signs of gradients.", "This essentially corresponds to the special case of PoweredSGD (or PoweredSGDM) when the power exponent γ is set to 0.", "We also point out that despite the name resemblance, the PowerSign optimizer proposed in Bello et al. (2017) is a conditional scaling of the gradient, whereas the proposed PoweredSGD optimizer applies a component-wise transformation to the gradient." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.20512819290161133, 0.08888888359069824, 0.11428570747375488, 0, 0.043478257954120636, 0.4000000059604645, 0.13333332538604736, 0.1111111044883728, 0.060606054961681366, 0.05882352590560913, 0.17142856121063232, 0.037735845893621445, 0.05405404791235924, 0.05714285373687744, 0, 0.1111111044883728, 0.12121211737394333, 0.07999999821186066, 0, 0.15094339847564697, 0.1702127605676651, 0.0624999962747097, 0.12903225421905518, 0, 0.05882352590560913, 0.05405404791235924, 0, 0, 0.05882352590560913, 0.045454539358615875, 0.04999999701976776, 0.0476190410554409, 0.0624999962747097, 0.23999999463558197, 0.11428570747375488, 0, 0.1428571343421936, 0, 0, 0.1463414579629898, 0.05882352590560913, 0.1702127605676651 ]
rJlqoTEtDB
true
[ "We propose a new class of optimizers for accelerated non-convex optimization via a nonlinear gradient transformation. " ]
[ "We aim to build complex humanoid agents that integrate perception, motor control, and memory.", "In this work, we partly factor this problem into low-level motor control from proprioception and high-level coordination of the low-level skills informed by vision.", "We develop an architecture capable of surprisingly flexible, task-directed motor control of a relatively high-DoF humanoid body by combining pre-training of low-level motor controllers with a high-level, task-focused controller that switches among low-level sub-policies.", "The resulting system is able to control a physically-simulated humanoid body to solve tasks that require coupling visual perception from an unstabilized egocentric RGB camera during locomotion in the environment.", "Supplementary video link: https://youtu.be/fBoir7PNxPk", "In reinforcement learning (RL), a major challenge is to simultaneously cope with high-dimensional input and high-dimensional action spaces.", "As techniques have matured, it is now possible to train high-dimensional vision-based policies from scratch to generate a range of interesting behaviors ranging from game-playing to navigation BID17 BID32 BID41 .", "Likewise, for controlling bodies with a large number of degrees of freedom (DoFs), in simulation, reinforcement learning methods are beginning to surpass optimal control techniques.", "Here, we try to synthesize this progress and tackle high-dimensional input and output at the same time.", "We evaluate the feasibility of full-body visuomotor control by comparing several strategies for humanoid control from vision.Both to simplify the engineering of a visuomotor system and to reduce the complexity of taskdirected exploration, we construct modular agents in which a high-level system possessing egocentric vision and memory is coupled to a low-level, reactive motor control system.", "We build on recent advances in imitation learning to make flexible low-level motor controllers for high-DoF humanoids.", "The motor skills embodied by the low-level controllers are coordinated and sequenced by the high-level system, which is trained to maximize sparse task reward.Our approach is inspired by themes from neuroscience as well as ideas developed and made concrete algorithmically in the animation and robotics literatures.", "In motor neuroscience, studies of spinal reflexes in animals ranging from frogs to cats have led to the view that locomotion and reaching are highly prestructured, enabling subcortical structures such as the basal ganglia to coordinate a motor repertoire; and cortical systems with access to visual input can send low complexity signals to motor systems in order to evoke elaborate movements BID7 BID1 BID9 .The", "study of \"movement primitives\" for robotics descends from the work of BID16 . Subsequent", "research has focused on innovations for learning or constructing primitives for control of movments BID15 BID20 ), deploying and sequencing them to solve tasks BID36 BID19 BID22 , and increasing the complexity of the control inputs to the primitives BID31 . Particularly", "relevant to our cause is the work of BID21 in which primitives were coupled by reinforcement learning to external perceptual inputs.Research in the animation literature has also sought to produce physically simulated characters capable of distinct movements that can be flexibly sequenced. This ambition", "can be traced to the virtual stuntman BID6 a) and has been advanced markedly in the work of Liu BID27 . 
Further recent", "work has relied on reinforcement learning to schedule control policies known as \"control fragments\", each one able to carry out only a specialized short movement segment BID24 . In work to date", ", such control fragments have yet to be coupled to visual input as we will pursue here. From the perspective", "of the RL literature BID38 , motor primitives and control fragments may be considered specialized instantiations of \"option\" sub-policies.Our work aims to contribute to this multi-disciplinary literature by demonstrating concretely how control-fragment-like low-level movements can be coupled to and controlled by a vision and memory-based high-level controller to solve tasks. Furthermore, we demonstrate", "the scalability of the approach to greater number of control fragments than previous works. Taken together, we demonstrate", "progress towards the goal of integrated agents with vision, memory, and motor control.", "In this work we explored the problem of learning to reuse motor skills to solve whole body humanoid tasks from egocentric camera observations.", "We compared a range of approaches for reusing lowlevel motor skills that were obtained from motion capture data, including variations related to those presented in BID24 BID34 .", "To date, there is limited learning-based work on humanoids in simulation reusing motor skills to solve new tasks, and much of what does exist is in the animation literature.", "A technical contribution of the present work was to move past hand-designed observation features (as used in BID34 ) towards a more ecological observation setting: using a front-facing camera is more similar to the kinds of observations a real-world, embodied agent would have.", "We also show that hierarchical motor skill reuse allowed us to solve tasks that we could not with a flat policy.", "For the walls and go-to-target tasks, learning from scratch was slower and produced less robust behavior.", "For the forage tasks, learning from scratch failed completely.", "Finally, the heterogeneous forage is an example of task that integrates memory and perception.There are some other very clear continuities between what we present here and previous work.", "For learning low-level tracking policies from motion capture data, we employed a manually specified similarity measure against motion capture reference trajectories, consistent with previous work BID26 BID34 .", "Additionally, the low-level policies were time-indexed: they operated over only a certain temporal duration and received time or phase as input.", "Considerably less research has focused on learning imitation policies either without a pre-specified scoring function or without time-indexing (but see e.g. 
).", "Compared to previous work using control fragments BID24 , our low-level controllers were built without a sampling-based planner and were parameterized as neural networks rather than linear-feedback policies.We also want to make clear that the graph-transition and steerable structured low-level control approaches require significant manual curation and design: motion capture clips must be segmented by hand, possibly manipulated by blending/smoothing clips from the end of one clip to the beginning of another.", "This labor intensive process requires considerable skill as an animator; in some sense this almost treats humanoid control as a computer-aided animation problem, whereas we aim to treat humanoid motor control as an automated and data-driven machine learning problem.", "We acknowledge that relative to previous work aimed at graphics and animation, our controllers are less graceful.", "Each approach involving motion capture data can suffer from distinct artifacts, especially without detailed manual editing -the hand-designed controllers have artifacts at transitions due to imprecise kinematic blending but are smooth within a behavior, whereas the control fragments have a lesser but consistent level of jitter throughout due to frequent switching.", "Methods to automatically (i.e. without human labor) reduce movement artifacts when dealing with large movement repertoires would be interesting to pursue.Moreover, we wish to emphasize that due to the human-intensive components of training structured low-level controllers, fully objective algorithm comparison with previous work can be somewhat difficult.", "This will remain an issue so long as human editing is a significant component of the dominant solutions.", "Here, we focused on building movement behaviors with minimal curation, at scale, that can be recruited to solve tasks.", "Specifically, we presented two methods that do not require curation and can re-use low-level skills with cold-switching.", "Additionally, these methods can scale to a large number of different behaviors without further intervention.We view this work as an important step toward the flexible use of motor skills in an integrated visuomotor agent that is able to cope with tasks that pose simultaneous perceptual, memory, and motor challenges to the agent.", "Future work will necessarily involve refining the naturalness of the motor skills to enable more general environment interactions and to subserve more complicated, compositional tasks." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.07407406717538834, 0.05714285373687744, 0.0476190447807312, 0.1904761791229248, 0, 0, 0.04999999701976776, 0, 0, 0.072727270424366, 0, 0.038461536169052124, 0.05970149114727974, 0.07999999821186066, 0.043478257954120636, 0, 0, 0, 0, 0.03448275476694107, 0, 0, 0.17142856121063232, 0.20000000298023224, 0.04999999701976776, 0, 0.060606054961681366, 0.1428571343421936, 0.09090908616781235, 0, 0.15789473056793213, 0, 0, 0.0810810774564743, 0.04255318641662598, 0, 0.16949151456356049, 0, 0, 0.0624999962747097, 0, 0.03448275476694107, 0.05714285373687744 ]
BJfYvo09Y7
true
[ "Solve tasks involving vision-guided humanoid locomotion, reusing locomotion behavior from motion capture data." ]
[ "The gap between the empirical success of deep learning and the lack of strong theoretical guarantees calls for studying simpler models.", "By observing that a ReLU neuron is a product of a linear function with a gate (the latter determines whether the neuron is active or not), where both share a jointly trained weight vector, we propose to decouple the two.", "We introduce GaLU networks — networks in which each neuron is a product of a Linear Unit, defined by a weight vector which is being trained, with a Gate, defined by a different weight vector which is not being trained.", "Generally speaking, given a base model and a simpler version of it, the two parameters that determine the quality of the simpler version are whether its practical performance is close enough to the base model and whether it is easier to analyze it theoretically.", "We show that GaLU networks perform similarly to ReLU networks on standard datasets and we initiate a study of their theoretical properties, demonstrating that they are indeed easier to analyze.", "We believe that further research of GaLU networks may be fruitful for the development of a theory of deep learning.", "An artificial neuron with the ReLU activation function is the function f w (x) : R d → R such that f w (x) = max{x w, 0} = 1 x w≥0 · x w .The", "latter formulation demonstrates that the parameter vector w has a dual role; it acts both as a filter or a gate that decides if the neuron is active or not, and as linear weights that control the value of the neuron if it is active. We", "introduce an alternative neuron, called Gated Linear Unit or GaLU for short, which decouples between those roles. A", "0 − 1 GaLU neuron is a function g w,u (x) : R d → R such that g w,u (x) = 1 x u≥0 · x w .(1", ") GaLU neurons, and therefore GaLU networks, are at least as expressive as their ReLU counterparts, since f w = g w,w . On", "the other hand, GaLU networks appear problematic from an optimization perspective, because the parameter u cannot be trained using gradient based optimization (since ∇ u g w,u (x) is always zero). In", "other words, training GaLU networks with gradient based algorithms is equivalent to initializing the vector u and keeping it constant thereafter. A", "more general definition of a GaLU network is given in section 2.The main claim of the paper is that GaLU networks are on one hand as effective as ReLU networks on real world datasets (section 3) while on the other hand they are easier to analyze and understand (section 4).", "The standard paradigm in deep learning is to use neurons of the form σ x w for some differentiable non linear function σ : R → R. In this article we proposed a different kind of neurons, σ i,j · x w, where σ i,j is some function of the example and the neuron index that remains constant along the training.", "Those networks achieve similar results to those of their standard counterparts, and they are easier to analyze and understand.To the extent that our arguments are convincing, it gives new directions for further research.", "Better understanding of the one hidden layer case (from section 5) seems feasible.", "And as GaLU and ReLU networks behave identically for this problem, it gives us reasons to hope that understanding the behavior of GaLU networks would also explain ReLU networks and maybe other non-linearities as well.", "As for deeper network, it is also not beyond hope that GaLU0 networks would allow some better theoretical analysis than what we have so far." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.178571417927742, 0.2448979616165161, 0.2641509473323822, 0.4000000059604645, 0.19512194395065308, 0.07999999821186066, 0.1090909019112587, 0.1463414579629898, 0.08695651590824127, 0.045454539358615875, 0.038461532443761826, 0.08888888359069824, 0.317460298538208, 0.08571428060531616, 0.18518517911434174, 0, 0.1538461446762085, 0.0833333283662796 ]
SJGyFiRqK7
true
[ "We propose Gated Linear Unit networks — a model that performs similarly to ReLU networks on real data while being much easier to analyze theoretically." ]
[ "Machine learning systems often encounter Out-of-Distribution (OoD) errors when dealing with testing data coming from a different distribution from the one used for training.", "With their growing use in critical applications, it becomes important to develop systems that are able to accurately quantify its predictive uncertainty and screen out these anomalous inputs.", "However, unlike standard learning tasks, there is currently no well established guiding principle for designing architectures that can accurately quantify uncertainty.", "Moreover, commonly used OoD detection approaches are prone to errors and even sometimes assign higher likelihoods to OoD samples.", "To address these problems, we first seek to identify guiding principles for designing uncertainty-aware architectures, by proposing Neural Architecture Distribution Search (NADS).", "Unlike standard neural architecture search methods which seek for a single best performing architecture, NADS searches for a distribution of architectures that perform well on a given task, allowing us to identify building blocks common among all uncertainty aware architectures.", "With this formulation, we are able to optimize a stochastic outlier detection objective and construct an ensemble of models to perform OoD detection.", "We perform multiple OoD detection experiments and observe that our NADS performs favorably compared to state-of-the-art OoD detection methods.", "Detecting anomalous data is crucial for safely applying machine learning in autonomous systems for critical applications and for AI safety (Amodei et al., 2016) .", "Such anomalous data can come in settings such as in autonomous driving NHTSA, 2017) , disease monitoring (Hendrycks & Gimpel, 2016) , and fault detection (Hendrycks et al., 2019b) .", "In these situations, it is important for these systems to reliably detect abnormal inputs so that their occurrence can be overseen by a human, or the system can proceed using a more conservative policy.", "The widespread use of deep learning models within these autonomous systems have aggravated this issue.", "Despite having high performance in many predictive tasks, deep networks tend to give high confidence predictions on Out-of-Distribution (OoD) data (Goodfellow et al., 2015; Nguyen et al., 2015) .", "Moreover, commonly used OoD detection approaches are prone to errors and even assign higher likelihoods to samples from other datasets (Lee et al., 2018; Hendrycks & Gimpel, 2016) .", "Unlike common machine learning tasks such as image classification, segmentation, and speech recognition, there are currently no well established guidelines for designing architectures that can accurately screen out OoD data and quantify its uncertainty.", "Such a gap in our knowledge makes Neural Architecture Search (NAS) a promising option to explore the better design of uncertaintyaware models (Elsken et al., 2018) .", "NAS algorithms attempt to find an optimal neural network architecture for a specific task.", "Existing efforts have primarily focused on searching for architectures that perform well on image classification or segmentation.", "However, it is unclear whether architecture components that are beneficial for image classification and segmentation models would also lead to better uncertainty quantification and thereafter be effective for OoD detection.", "Moreover, previous work on deep uncertainty quantification shows that ensembles can help calibrate OoD classifier based methods, as well as improve OoD detection performance of 
likelihood estimation models (Lakshminarayanan et al., 2017; Choi & Jang, 2018) .", "Because of this, instead of a single best performing architecture for uncertainty awareness, one might consider a distribution of wellperforming architectures.", "Along this direction, designing an optimization objective which leads to uncertainty-aware models is also not straightforward.", "With no access to labels, unsupervised/self-supervised generative models which maximize the likelihood of in-distribution data become the primary tools for uncertainty quantification (Hendrycks et al., 2019a) .", "However, these models counter-intuitively assign high likelihoods to OoD data (Nalisnick et al., 2019a; Choi & Jang, 2018; Hendrycks et al., 2019a; Shafaei et al.) .", "Because of this, maximizing the log-likelihood is inadequate for OoD detection.", "On the other hand, Choi & Jang (2018) proposed using the Widely Applicable Information Criterion (WAIC) (Watanabe, 2013) , a penalized log-likelihood score, as the OoD detection criterion.", "However, the score was approximated using an ensemble of models that was trained on maximizing the likelihood and did not directly optimize the WAIC score.", "To this end, we propose a novel Neural Architecture Distribution Search (NADS) framework to identify common building blocks that naturally incorporate model uncertainty quantification and compose good OoD detection models.", "NADS is an architecture search method designed to search for a distribution of well-performing architectures, instead of a single best architecture by formulating the architecture search problem as a stochastic optimization problem.", "Using NADS, we optimize the WAIC score of the architecture distribution, a score that was shown to be robust towards model uncertainty.", "Such an optimization problem with a stochastic objective over a probability distribution of architectures is unamenable to traditional NAS optimization strategies.", "We make this optimization problem tractable by taking advantage of weight sharing between different architectures, as well as through a parameterization of the architecture distribution, which allows for a continuous relaxation of the discrete search problem.", "Using the learned posterior architecture distribution, we construct a Bayesian ensemble of deep models to perform OoD detection.", "Finally, we perform multiple OoD detection experiments to show the efficacy of our proposed method.", "Unlike NAS for common learning tasks, specifying a model and an objective to optimize for uncertainty estimation and outlier detection is not straightforward.", "Moreover, using a single model may not be sufficient to accurately quantify uncertainty and successfully screen out OoD data.", "We developed a novel neural architecture distribution search (NADS) formulation to identify a random ensemble of architectures that perform well on a given task.", "Instead of seeking to maximize the likelihood of in-distribution data which may cause OoD samples to be mistakenly given a higher likelihood, we developed a search algorithm to optimize the WAIC score, a Bayesian adjusted estimation of the data entropy.", "Using this formulation, we have identified several key features that make up good uncertainty quantification architectures, namely a simple structure in the shallower layers, use of information preserving operations, and a larger, more expressive structure with skip connections for deeper layers to ensure optimization stability.", "Using the architecture distribution learned by 
NADS, we then constructed an ensemble of models to estimate the data entropy using the WAIC score.", "We demonstrated the superiority of our method to existing OoD detection methods and showed that our method has highly competitive performance without requiring access to OoD samples.", "Overall, NADS as a new uncertainty-aware architecture search strategy enables model uncertainty quantification that is critical for more robust and generalizable deep learning, a crucial step in safely applying deep learning to healthcare, autonomous driving, and disaster response.", "A FIXED MODEL ABLATION STUDY" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.13636362552642822, 0.1666666567325592, 0.0952380895614624, 0.15789473056793213, 0.1395348757505417, 0.31578946113586426, 0.4285714328289032, 0.21052631735801697, 0.09090908616781235, 0.08510638028383255, 0.1538461446762085, 0.1111111044883728, 0.04255318641662598, 0.12244897335767746, 0.1111111044883728, 0.12765957415103912, 0.2857142686843872, 0.10810810327529907, 0.2448979616165161, 0.0714285671710968, 0.307692289352417, 0.10810810327529907, 0.12765957415103912, 0.04651162400841713, 0.1875, 0.08510638028383255, 0.1904761791229248, 0.23529411852359772, 0.4000000059604645, 0.19512194395065308, 0.29999998211860657, 0.23529411852359772, 0.41025641560554504, 0.2222222238779068, 0.3333333134651184, 0.14999999105930328, 0.4651162624359131, 0.19230768084526062, 0.1875, 0.2857142686843872, 0.27272728085517883, 0.2142857164144516, 0 ]
rJeXDANKwr
true
[ "We propose an architecture search method to identify a distribution of architectures and use it to construct a Bayesian ensemble for outlier detection." ]
[ "Modern applications from Autonomous Vehicles to Video Surveillance generate massive amounts of image data.", "In this work we propose a novel image outlier detection approach (IOD for short) that leverages the cutting-edge image classifier to discover outliers without using any labeled outlier.", "We observe that although intuitively the confidence that a convolutional neural network (CNN) has that an image belongs to a particular class could serve as outlierness measure to each image, directly applying this confidence to detect outlier does not work well.", "This is because CNN often has high confidence on an outlier image that does not belong to any target class due to its generalization ability that ensures the high accuracy in classification.", "To solve this issue, we propose a Deep Neural Forest-based approach that harmonizes the contradictory requirements of accurately classifying images and correctly detecting the outlier images.", "Our experiments using several benchmark image datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN demonstrate the effectiveness of our IOD approach for outlier detection, capturing more than 90% of outliers generated by injecting one image dataset into another, while still preserving the classification accuracy of the multi-class classification problem.", "Motivation.", "As modern applications such as autonomous vehicles and video surveillance generate larger amount of image data, the discovery of outliers from such image data is becoming increasingly critical.", "Examples of such image outliers include unauthorized personnel observed in a secret military base or unexpected objects encountered by self-driving cars on the road.", "Capturing these outliers can prevent intelligence leaks or save human lives.State-of-the-Art.", "Due to the exceptional success of deep learning over classical methods in computer vision, in recent years a number of works BID17 BID27 BID5 BID23 leverage the representation learning ability of a deep autoencoder or GAN BID7 for outlier detection.", "Outliers are either detected by plugging in the learned representation into classical outlier detection methods or directly reported by employing the reconstruction error as the outlier score BID36 BID4 .", "However, these approaches use a generic network that is not trained specifically for outlier detection.", "Although the produced representation is perhaps effective in representing the common features of the \"normal\" data, it is not necessarily effective in distinguishing \"outliers\" from \"inliers\".", "Recently, some works BID26 BID21 were proposed to solve this issue by incorporating the outlier detection objective actively into the learning process.", "However, these approaches are all based on the one-class technique BID28 BID18 BID33 that learns a single boundary between outliers and inliers.", "Although they perform relatively well when handling simplistic data sets such as MNIST, they perform poorly at supporting complex data sets with multiple \"normal\" classes such as CIFAR-10 ( BID11 ).", "This is due to the difficulty in finding a separator that encompasses all normal classes yet none of the outliers.Proposed Approach and Contributions.", "In this work we propose a novel image outlier detection (IOD) strategy that successfully detects image outliers from complex real data sets with multiple normal classes.", "IOD unifies the core principles of cutting edge deep learning image classifiers BID7 and classical outlier detection within one framework.Classical 
outlier detection techniques BID3 BID9 BID1 consider an object as an outlier if its outlierness score is above a certain cutoff threshold ct.", "Intuitively given a Convolutional Neural Network (CNN) BID12 ) trained using normal training data (namely, data without labeled outliers), the confidence that the CNN has that an image belongs to a particular class could be leveraged to measure the outlierness of the image.", "This is based on the intuition that we expect a CNN to be less confident about an outlier compared to inlier objects, since outliers by definition are dissimilar from any normal class.", "By using the confidence as an outlier score, IOD could separate outliers from all normal classes.", "However, our experiments (Sec. 2) show that directly using the confidence produced by CNN to identify outliers in fact is not particularly effective.", "This is because the requirements of accurately classifying images and correctly detecting the outlier images conflict with each other.", "CNN achieves high accuracy in image classification because of its excellent generalization capability that enables a CNN to overcome the gap between the training and testing images.", "However, the generalization capability jeopardizes the detection of outliers, because it increases the chance of erroneously assigning an outlier image to some class with high confidence to which actually it does not fit.We solve this problem by proposing a deep neural decision forest-based (DNDF) approach equipped with an information theory-based regularization function that leverages the strong bias of the classification decisions made within each single decision tree and the ensemble nature of the overall decision forest.", "Further, we introduce a new architecture of the DNDF that ensures independence amongst the trees and in turn improves the classification accuracy.", "Finally, we use a joint optimization strategy to train both the spit and leaf nodes of each tree.", "This speeds up the convergence.We demonstrate the effectiveness of our outlierness measure, the deep neural forest-based approach, the regularization function, and the new architecture using benchmark datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN -with the accuracy higher than 0.9 at detecting outliers, while preserving the accuracy of multi-class classification.", "In this work we propose a novel approach that effectively detects outliers from image data.", "The key novelties include a general image outlier detection framework and effective outlierness measure that leverages the deep neural decision forest.", "Optimizations such as new architecture that connects deep neural network and decision tree and regularization to penalize the large entropy routing decisions are also proposed to further enhance the outlier detection capacity of IOD.", "In the future we plan to investigate how to make our approach work in multi-label classification setting." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.20689654350280762, 0.2926829159259796, 0.11999999731779099, 0.22727271914482117, 0.20512820780277252, 0.31578946113586426, 0.29999998211860657, 0.20512820780277252, 0.07407406717538834, 0.0833333283662796, 0.04999999701976776, 0.06666666269302368, 0.2222222238779068, 0.0555555522441864, 0.1621621549129486, 0, 0.21052631735801697, 0.29999998211860657, 0.1090909019112587, 0.1599999964237213, 0.17391303181648254, 0.19354838132858276, 0.15789473056793213, 0.1249999925494194, 0.29999998211860657, 0.15584415197372437, 0.2857142686843872, 0.12121211737394333, 0.21052631735801697, 0.46666666865348816, 0.1666666567325592, 0.1304347813129425, 0.19354838132858276 ]
HygTE309t7
true
[ "A novel approach that detects outliers from image data, while preserving the classification accuracy of image classification" ]
[ "This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources.", "We design a Dynamic Point-cloud Convolution (D-Conv) operator as the core component of CloudLSTMs, which performs convolution directly over point-clouds and extracts local spatial features from sets of neighboring points that surround different elements of the input.", "This operator maintains the permutation invariance of sequence-to-sequence learning frameworks, while representing neighboring correlations at each time step -- an important aspect in spatiotemporal predictive learning.", "The D-Conv operator resolves the grid-structural data requirements of existing spatiotemporal forecasting models and can be easily plugged into traditional LSTM architectures with sequence-to-sequence learning and attention mechanisms.\n ", "We apply our proposed architecture to two representative, practical use cases that involve point-cloud streams, i.e. mobile service traffic forecasting and air quality indicator forecasting.", "Our results, obtained with real-world datasets collected in diverse scenarios for each use case, show that CloudLSTM delivers accurate long-term predictions, outperforming a variety of neural network models.", "Point-cloud stream forecasting aims at predicting the future values and/or locations of data streams generated by a geospatial point-cloud S, given sequences of historical observations (Shi & Yeung, 2018) .", "Example data sources include mobile network antennas that serve the traffic generated by ubiquitous mobile services at city scale (Zhang et al., 2019b) , sensors that monitor the air quality of a target region (Cheng et al., 2018) , or moving crowds that produce individual trajectories.", "Unlike traditional spatiotemporal forecasting on grid-structural data, like precipitation nowcasting (Shi et al., 2015) or video frame prediction (Wang et al., 2018) , point-cloud stream forecasting needs to operate on geometrically scattered sets of points, which are irregular and unordered, and encapsulate complex spatial correlations.", "While vanilla Long Short-term Memories (LSTMs) have modest abilities to exploit spatial features (Shi et al., 2015) , convolution-based recurrent neural network (RNN) models, such as ConvLSTM (Shi et al., 2015) and PredRNN++ (Wang et al., 2018) , are limited to modeling grid-structural data, and are therefore inappropriate for handling scattered point-clouds.", ": Different approaches to geospatial data stream forecasting: predicting over input data streams that are inherently grid-structured, e.g., video frames using ConvLSTMs (top); mapping of pointcloud input to a grid, e.g., mobile network traffic collected at different antennas in a city, to enable forecasting using existing neural network structures (middle); forecasting directly over point-cloud data streams using historical information (as above, but without pre-processing), as proposed in this paper (bottom).", "permutations for the features (Li et al., 2018) .", "Through this, the proposed PointCNN leverages spatial-local correlations of point clouds, irrespective of the order of the input.", "Notably, although these architectures can learn spatial features of point-clouds, they are designed to work with static data, thus have limited ability to discover temporal dependencies.", "We introduce CloudLSTM, a dedicated neural model for spatiotemporal forecasting tailored to pointcloud data streams.", "The CloudLSTM builds 
upon the D-Conv operator, which performs convolution over point-clouds to learn spatial features while maintaining permutation invariance.", "The D-Conv simultaneously predicts the values and coordinates of each point, thereby adapting to changing spatial correlations of the data at each time step.", "D-Conv is flexible, as it can be easily combined with various RNN models (i.e., RNN, GRU, and LSTM), Seq2seq learning, and attention mechanisms." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 1, 0.1071428507566452, 0.08510638028383255, 0.1599999964237213, 0.12765957415103912, 0.1599999964237213, 0.35999998450279236, 0.19672130048274994, 0.12903225421905518, 0.0937499925494194, 0.2750000059604645, 0, 0.0555555522441864, 0.08510638028383255, 0.4324324131011963, 0.0952380895614624, 0.1395348757505417, 0.04347825422883034 ]
BJlowyHYPr
true
[ "This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources." ]
[ "Knowledge Graphs (KG), composed of entities and relations, provide a structured representation of knowledge.", "For easy access to statistical approaches on relational data, multiple methods to embed a KG as components of R^d have been introduced.", "We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space.", "TransINT maps set of entities (tied by a relation) to continuous sets of vectors that are inclusion-ordered isomorphically to relation implications.", "With a novel parameter sharing scheme, TransINT enables automatic training on missing but implied facts without rule grounding.", "We achieve new state-of-the-art performances with signficant margins in Link Prediction and Triple Classification on FB122 dataset, with boosted performance even on test instances that cannot be inferred by logical rules.", "The angles between the continuous sets embedded by TransINT provide an interpretable way to mine semantic relatedness and implication rules among relations.", "Recently, learning distributed vector representations of multi-relational knowledge has become an active area of research (Bordes et al.; Nickel et al.; Kazemi & Poole; Wang et al.; Bordes et al.) .", "These methods map components of a KG (entities and relations) to elements of R d and capture statistical patterns, regarding vectors close in distance as representing similar concepts.", "However, they lack common sense knowledge which are essential for reasoning (Wang et al.; Guo et al.; Nickel & Kiela) .", "For example, \"parent\" and \"father\" would be deemed similar by KG embeddings, but by common sense, \"parent ⇒ father\" yet not the other way around.", "Thus, one focus of current research is to bring common sense rules to KG embeddings (Guo et al.; Wang et al.; Wei et al.( . Some", "methods impose hard geometric constraints and embed asymmetric orderings of knowledge (Nickel & Kiela; Vendrov et al.; Vilnis et al.( . However", ", they only embed hierarchy (unary Is_a relations), and cannot embed n-ary relations in KG's. 
Moreover", ", their hierarchy learning is largely incompatible with conventional relational learning, because they put hard constraints on distance to represent partial ordering, which is a common metric of similarity/ relatedness in relational learning.", "We propose TransINT, a new KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space.", "TransINT restrict entities tied by a relation to be embedded to vectors in a particular region of R d included isomorphically to the order of relation implication.", "For example, we map any entities tied by is_father_of to vectors in a region that is part of the region for is_parent_of; thus, we can automatically know that if John is a father of Tom, he is also his parent even if such a fact is missing in the KG.", "Such embeddings are constructed by sharing and rank-ordering the basis of the linear subspaces where the vectors are required to belong.", "Mathematically, a relation can be viewed as sets of entities tied by a constraint (Stoll) .", "We take such a view on KG's, since it gives consistancy and interpretability to model behavior.", "Furthermore, for the first time in KG embedding, we map sets of entitites under relation constraint to a continuous set of points (whose elements are entity vectors) -which learns relationships among not only individual entity vectors but also sets of entities.", "We show that angles between embedded relation sets can identify semantic patterns and implication rules -an extension of the line of thought as in word/ image embedding methods such as Mikolov et al., Frome et al. to relational embedding.", "Such mining is both limited and less interpretable if embedded sets are discrete (Vilnis et al.; Vendrov et al.) or each entitity itself is embedded to a region, not a member vector of it (Vilnis et al.) .", "1 TransINT's such interpretable meta-learning opens up possibilities for explainable reasoning in applications such as recommender systems (Ma et al.) and question answering (Hamilton et al.", "We presented TransINT, a new KG embedding method that embed sets of entities (tied by relations) to continuous sets in R d that are inclusion-ordered isomorphically to relation implications.", "Our method achieved new state-of-the-art performances with signficant margins in Link Prediction and Triple Classification on the FB122 dataset, with boosted performance even on test instances that are not affected by rules.", "We further propose and interpretable criterion for mining semantic similairty among sets of entities with TransINT." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10256409645080566, 0.08510638028383255, 0.8695651888847351, 0.13333332538604736, 0.09090908616781235, 0.145454540848732, 0.3333333432674408, 0.039215680211782455, 0.1538461446762085, 0, 0.1599999964237213, 0.0416666604578495, 0.04255318641662598, 0.1463414579629898, 0.0714285671710968, 0.7727272510528564, 0.2083333283662796, 0.15625, 0.09090908616781235, 0.04999999701976776, 0.1428571343421936, 0.158730149269104, 0.2295081913471222, 0.10526315122842789, 0.11999999731779099, 0.3461538553237915, 0.178571417927742, 0.2380952388048172 ]
r1lxvxBtvr
true
[ "We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space in an explainable, robust, and geometrically coherent way." ]
[ "Unsupervised domain adaptive object detection aims to learn a robust detector on the domain shift circumstance, where the training (source) domain is label-rich with bounding box annotations, while the testing (target) domain is label-agnostic and the feature distributions between training and testing domains are dissimilar or even totally different.", "In this paper, we propose a gradient detach based Stacked Complementary Losses (SCL) method that uses detection objective (cross entropy and smooth l1 regression) as the primary objective, and cuts in several auxiliary losses in different network stages to utilize information from the complement data (target images) that can be effective in adapting model parameters to both source and target domains.", "A gradient detach operation is applied between detection and context sub-networks during training to force networks to learn discriminative representations.", "We argue that the conventional training with primary objective mainly leverages the information from the source-domain for maximizing likelihood and ignores the complement data in shallow layers of networks, which leads to an insufficient integration within different domains.", "Thus, our proposed method is a more syncretic adaptation learning process.", "We conduct comprehensive experiments on seven datasets, the results demonstrate that our method performs favorably better than the state-of-the-art methods by a large margin.", "For instance, from Cityscapes to FoggyCityscapes, we achieve 37.9% mAP, outperforming the previous art Strong-Weak by 3.6%.", "In real world scenarios, generic object detection always faces severe challenges from variations in viewpoint, background, object appearance, illumination, occlusion conditions, scene change, etc.", "These unavoidable factors make object detection in domain-shift circumstance becoming a challenging and new rising research topic in the recent years.", "Also, domain change is a widely-recognized, intractable problem that urgently needs to break through in reality of detection tasks, like video surveillance, autonomous driving, etc. 
(see Figure 2 ).", "Revisiting Domain-Shift Object Detection.", "Common approaches for tackling domain-shift object detection are mainly in two directions:", "(i) training supervised model then fine-tuning on the target domain; or", "(ii) unsupervised cross-domain representation learning.", "The former requires additional instance-level annotations on target data, which is fairly laborious, expensive and time-consuming.", "So most approaches focus on the latter one but still have some challenges.", "The first challenge is that the representations of source and target domain data should be embedded into a common space for matching the object, such as the hidden feature space (Saito et al., 2019; Chen et al., 2018) , input space Cai et al., 2019) or both of them (Kim et al., 2019b) .", "The second is that a feature alignment/matching operation or mechanism for source/target domains should be further defined, such as subspace alignment (Raj et al., 2015) , H-divergence and adversarial learning (Chen et al., 2018) , MRL (Kim et al., 2019b) , Strong-Weak alignment (Saito et al., 2019) , etc.", "In general, our SCL is also a learning-based alignment method across domains with an end-to-end framework.", "(a) Non-adapted", "(b) CVPR'18 (Chen et al., 2018)", "(c) CVPR'19 (Saito et al., 2019)", "(d) SCL (Ours)", "(e) Non-adapted", "(f) CVPR'18 (Chen et al., 2018)", "(g) CVPR'19 (Saito et al., 2019)", "(h) SCL (Ours) Figure 1: Visualization of features from PASCAL to Clipart (first row) and from Cityscapes to FoggyCityscapes (second row) by t-SNE (Maaten & Hinton, 2008) .", "Red indicates the source examples and blue is the target one.", "If source and target features locate in the same position, it is shown as light blue.", "All models are re-trained with a unified setting to ensure fair comparisons.", "It can be observed that our feature embedding results are consistently much better than previous approaches on either dissimilar domains (PASCAL and Clipart) or similar domains (Cityscapes and FoggyCityscapes).", "Our Key Ideas.", "The goal of this paper is to introduce a simple design that is specific to convolutional neural network optimization and improves its training on tasks that adapt on discrepant domains.", "Unsupervised domain adaptation for recognition has been widely studied by a large body of previous literature (Ganin et al., 2016; Long et al., 2016; Tzeng et al., 2017; Panareda Busto & Gall, 2017; Hoffman et al., 2018; Murez et al., 2018; Zhao et al., 2019; Wu et al., 2019) , our method more or less draws merits from them, like aligning source and target distributions with adversarial learning (domain-invariant alignment).", "However, object detection is a technically different problem from classification, since we would like to focus more on the object of interests (local regions).", "In this paper, we have addressed unsupervised domain adaptive object detection through stacked complementary losses.", "One of our key contributions is gradient detach training, enabled by suppressing gradients flowing back to the detection backbone.", "In addition, we proposed to use multiple complementary losses for better optimization.", "We conduct extensive experiments and ablation studies to verify the effectiveness of each component that we proposed.", "Our experimental results outperform the state-of-the-art approaches by a large margin on a variety of benchmarks.", "Our future work will focus on exploring the domain-shift detection from scratch, i.e., without the pre-trained models like DSOD 
(Shen et al., 2017) , to avoid involving bias from the pre-trained dataset." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2181818187236786, 0.17142856121063232, 0.22857142984867096, 0.15686273574829102, 0.07407406717538834, 0.10256409645080566, 0, 0.10256409645080566, 0.2222222238779068, 0.13333332538604736, 0, 0.2142857164144516, 0.07407406717538834, 0, 0, 0, 0.10344827175140381, 0.072727270424366, 0.0624999962747097, 0, 0, 0, 0, 0, 0, 0, 0, 0.0714285671710968, 0, 0, 0.1428571343421936, 0.0845070406794548, 0.1538461446762085, 0.32258063554763794, 0.17142856121063232, 0.1428571343421936, 0.060606054961681366, 0.06451612710952759, 0.04444443807005882 ]
rJx5_hNFwr
true
[ "We introduce a new gradient detach based complementary objective training strategy for domain adaptive object detection." ]
[ "Convolutional neural networks (CNN) have become the most successful and popular approach in many vision-related domains.", "While CNNs are particularly well-suited for capturing a proper hierarchy of concepts from real-world images, they are limited to domains where data is abundant.", "Recent attempts have looked into mitigating this data scarcity problem by casting their original single-task problem into a new multi-task learning (MTL) problem.", "The main goal of this inductive transfer mechanism is to leverage domain-specific information from related tasks, in order to improve generalization on the main task.", "While recent results in the deep learning (DL) community have shown the promising potential of training task-specific CNNs in a soft parameter sharing framework, integrating the recent DL advances for improving knowledge sharing is still an open problem.", "In this paper, we propose the Deep Collaboration Network (DCNet), a novel approach for connecting task-specific CNNs in a MTL framework.", "We define connectivity in terms of two distinct non-linear transformation blocks.", "One aggregates task-specific features into global features, while the other merges back the global features with each task-specific network.", "Based on the observation that task relevance depends on depth, our transformation blocks use skip connections as suggested by residual network approaches, to more easily deactivate unrelated task-dependent features.", "To validate our approach, we employed facial landmark detection (FLD) datasets as they are readily amenable to MTL, given the number of tasks they include.", "Experimental results show that we can achieve up to 24.31% relative improvement in landmark failure rate over other state-of-the-art MTL approaches.", "We finally perform an ablation study showing that our approach effectively allows knowledge sharing, by leveraging domain-specific features at particular depths from tasks that we know are related.", "Over the past few years, convolutional neural networks (CNNs) have become the leading approach in many vision-related tasks BID12 .", "By creating a hierarchy of increasingly abstract concepts, they can transform complex high-dimensional input images into simple low-dimensional output features.", "Although CNNs are particularly well-suited for capturing a proper hierarchy of concepts from real-world images, successively training them requires large amount of data.", "Optimizing deep networks is tricky, not only because of problems like vanishing / exploding gradients BID8 or internal covariate shift BID9 , but also because they typically have many parameters to be learned (which can go up to 137 billions BID21 ).", "While previous works have looked at networks pre-trained on a large image-based dataset as a starting point for their gradient descent optimization, others have considered improving generalization by casting their original single-task problem into a new multi-task learning (MTL) problem (see BID31 for a review).", "As BID2 explained in his seminal work: \"MTL improves generalization by leveraging the domain-specific information contained in the training signals of related tasks\".", "Exploring new ways to efficiently gather more information from related tasks -the core contribution of our approach -can thus help a network to further improve upon its main task.The use of MTL goes back several years, but has recently proven its value in several domains.", "As a consequence, it has become a dominant field of machine learning BID30 .", 
"Although many early and influential works contributed to this field BID5 ), recent major advances in neural networks opened up opportunities for novel contributions in MTL.", "Works on grasping BID17 , pedestrian detection BID24 , natural language processing BID14 , face recognition BID26 BID27 and object detection BID16 have all shown that MTL has been finally adopted by the deep learning (DL) community as a way to mitigate the lack of data, and is thus growing in popularity.MTL strategies can be divided into two major categories: hard and soft parameter sharing.", "Hard parameter sharing is the earliest and most common strategy for performing MTL, which dates back to the original work of BID2 .", "Approaches in this category generally share the hidden layers between all tasks, while keeping separate outputs.", "Recent results in the DL community have shown that a central CNN with separate task-specific fully connected (FC) layers can successfully leverage domain-specific information BID18 BID17 BID27 .", "Although hard parameter sharing reduces the risk of over-fitting BID1 , shared layers are prone to be overwhelmed by features or contaminated by noise coming from particular noxious related tasks .Soft", "parameter sharing has been proposed as an alternative to alleviate this drawback, and has been growing in popularity as a potential successor. Approaches", "in this category separate all hidden layers into task-specific models, while providing a knowledge sharing mechanism. Each model", "can then learn task-specific features without interfering with others, while still sharing their knowledge. Recent works", "using one network per task have looked at regularizing the distance between taskspecific parameters with a 2 norm BID4 or a trace norm BID25 , training shared and private LSTM submodules , partitioning the hidden layers into subspaces BID19 and regularizing the FC layers with tensor normal priors BID15 . In the domain", "of continual learning, progressive network BID20 has also shown promising results for cross-domain sequential transfer learning, by employing lateral connections to previously learned networks. Although all", "these soft parameter approaches have shown promising potential, improving the knowledge sharing mechanism is still an open problem.In this paper, we thus present the deep collaboration network (DCNet), a novel approach for connecting task-specific networks in a soft parameter sharing MTL framework. We contribute", "with a novel knowledge sharing mechanism, dubbed the collaborative block, which implements connectivity in terms of two distinct non-linear transformations. One aggregates", "task-specific features into global features, and the other merges back the global features into each task-specific network. We demonstrate", "that our collaborative block can be dropped in any existing architectures as a whole, and can easily enable MTL for any approaches. We evaluated our", "method on the problem of facial landmark detection in a MTL framework and obtained better results in comparison to other approaches of the literature. We further assess", "the objectivity of our training framework by randomly varying the contribution of each related tasks, and finally give insights on how our collaborative block enables knowledge sharing with an ablation study on our DCNet.The content of our paper is organized as follows. We first describe", "in Section 2 works on MTL closely related to our approach. 
We also describe", "Facial landmark detection, our targeted application. Architectural details", "of our proposed Multi-Task approach and its motivation are spelled out in Section 3. We then present in Section", "4 a number of comparative results on this Facial landmark detection problem for two CNN architectures, AlexNet and ResNet18, that have been adapted with various MTL frameworks including ours. It also contains discussions", "on an ablation study showing at which depth feature maps from other tasks are borrowed to improve the main task. We conclude our paper in Section", "5.2 RELATED WORK 2.1 MULTI-TASK LEARNING Our proposed deep collaboration network (DCNet) is related to other existing approaches. The first one is the cross-stitch", "(CS) BID16 ) network, which connects task-specific networks through linear combinations of the spatial feature maps at specific layers. One drawback of CS is that they are", "limited to capturing linear dependencies only, something we address in our proposed approach by employing non-linearities when sharing feature maps. Indeed, non-linear combinations are", "usually able to learn richer relationships, as demonstrated in deep networks. Another related approach is tasks-constrained", "deep convolutional network (TCDCN) for facial landmarks detection . In it, the authors proposed an early-stopping", "criterion for removing auxiliary tasks before the network starts to over-fit to the detriment of the main task. One drawback of their approach is that their", "criterion has several hyper-parameters, which must all be selected manually. For instance, they define an hyper-parameter", "controlling the period length of the local window and a threshold that stops the task when the criterion exceeds it, all of which can be specified for each task independently. Unlike TCDCN, our approach has no hyper-parameters", "that depend on the tasks at hand, which greatly simplifies the training process. Our two transformation blocks consist of a series", "of batch normalization, ReLU, and convolutional layers shaped in a standard setting based on recent advances in residual network (see Sec. 3). This is particularly useful for computationally expensive", "deep networks, since integrating our proposed approach requires no additional hyper-parameter tuning experiments.Our proposed approach is also related to HyperFace BID18 . In this work, the authors proposed to fuse the intermediate", "layers of AlexNet and exploit the hierarchical nature of the features. Their goal was to allow low-level features containing better", "localization properties to help tasks such as landmark localization and pose detection, and allow more class-specific high-level features to help tasks like face detection and gender recognition. Although HyperFace uses a single shared CNN instead of task-specific", "CNNs and is not entirely related to our approach, the idea of feature fusion is also central in our work. Instead of fusing the features at intermediate layers of a single CNN", ", our approach aggregates same-level features of multiple CNNs, at different depth independently. Also, one drawback of HyperFace is that the proposed feature fusion", "is specific to AlexNet, while our method is not specific to any network. 
In fact, our approach takes into account the vast diversity of existing", "network architectures, since it can be added to any architecture without modification.", "In this paper, we proposed the deep collaboration network (DCNet), a novel approach for connecting task-specific networks in a multi-task learning setting.", "It implements feature connectivity and sharing through two distinct non-linear transformations inside a collaborative block, which also incorporates skip connections and residual mappings that are known for their good training behavior.", "The first transformation aggregates the task-specific feature maps into a global feature map representing unified knowledge, and the second one merges it back into each task-specific network.", "One key characteristic of our collaborative blocks is that they can be dropped into virtually any existing architecture, making them universal adapters to endow deep networks with multi-task learning capabilities. Our results on the MTFL, AFW and AFLW datasets showed that our DCNet outperformed several state-of-the-art approaches, including cross-stitch networks.", "Our additional ablation study, using ResNet18 as the underlying network, confirmed our intuition that the task-specific networks exploited the added flexibility provided by our approach.", "Additionally, these task-specific networks successfully incorporated features having varying levels of abstraction.", "Evaluating our proposed approach on other MTL problems could be an interesting avenue for future work.", "For instance, the recurrent networks used to solve natural language processing problems could benefit from incorporating our novel method leveraging domain information of related tasks." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17142856121063232, 0.0952380895614624, 0.1538461446762085, 0.0952380895614624, 0.26923075318336487, 0.41025641560554504, 0.13333332538604736, 0.11764705181121826, 0.12765957415103912, 0, 0.04878048226237297, 0.08695651590824127, 0.1621621549129486, 0.051282044500112534, 0.09756097197532654, 0.03389830142259598, 0.21052631735801697, 0.04999999329447746, 0.13114753365516663, 0.12903225421905518, 0.27272728085517883, 0.10256409645080566, 0.04999999329447746, 0.05714285373687744, 0.1304347813129425, 0, 0.10256409645080566, 0.1621621549129486, 0.05714285373687744, 0.06557376682758331, 0.13636362552642822, 0.33898305892944336, 0.1463414579629898, 0.1818181723356247, 0.19512194395065308, 0.1860465109348297, 0.06896550953388214, 0.24242423474788666, 0, 0.1666666567325592, 0.11538460850715637, 0.13333332538604736, 0.04651162400841713, 0.08888888359069824, 0.09302324801683426, 0.17142856121063232, 0.11764705181121826, 0.14999999105930328, 0, 0.11764705181121826, 0.10256409645080566, 0.42553192377090454, 0.04347825422883034, 0, 0.07999999821186066, 0.08695651590824127, 0.0476190410554409, 0.09999999403953552, 0.06451612710952759, 0.6000000238418579, 0.12244897335767746, 0.1428571343421936, 0.1515151411294937, 0.1463414579629898, 0.12903225421905518, 0.17142856121063232, 0.09302324801683426 ]
r17Q6WWA-
true
[ "We propose a novel approach for connecting task-specific networks in a multi-task learning setting based on recent residual network advances." ]
[ "Zero-Shot Learning (ZSL) is a classification task where some classes referred as unseen classes have no labeled training images.", "Instead, we only have side information (or description) about seen and unseen classes, often in the form of semantic or descriptive attributes.", "Lack of training images from a set of classes restricts the use of standard classification techniques and losses, including the popular cross-entropy loss.", "The key step in tackling ZSL problem is bridging visual to semantic space via learning a nonlinear embedding.", "A well established approach is to obtain the semantic representation of the visual information and perform classification in the semantic space.", "In this paper, we propose a novel architecture of casting ZSL as a fully connected neural-network with cross-entropy loss to embed visual space to semantic space.", "During training in order to introduce unseen visual information to the network, we utilize soft-labeling based on semantic similarities between seen and unseen classes.", "To the best of our knowledge, such similarity based soft-labeling is not explored for cross-modal transfer and ZSL.", "We evaluate the proposed model on five benchmark datasets for zero-shot learning, AwA1, AwA2, aPY, SUN and CUB datasets, and show that, despite the simplicity, our approach achieves the state-of-the-art performance in Generalized-ZSL setting on all of these datasets and outperforms the state-of-the-art for some datasets.", "Supervised classifiers, specifically Deep Neural Networks, need a large number of labeled samples to perform well.", "Deep learning frameworks are known to have limitations in fine-grained classification regime and detecting object categories with no labeled data Socher et al., 2013; Zhang & Koniusz, 2018) .", "On the contrary, humans can recognize new classes using their previous knowledge.", "This power is due to the ability of humans to transfer their prior knowledge to recognize new objects (Fu & Sigal, 2016; Lake et al., 2015) .", "Zero-shot learning aims to achieve this human-like capability for learning algorithms, which naturally reduces the burden of labeling.", "In zero-shot learning problem, there are no training samples available for a set of classes, referred to as unseen classes.", "Instead, semantic information (in the form of visual attributes or textual features) is available for unseen classes (Lampert et al., 2009; 2014) .", "Besides, we have standard supervised training data for a different set of classes, referred to as seen classes along with the semantic information of seen classes.", "The key to solving zero-shot learning problem is to leverage trained classifier on seen classes to predict unseen classes by transferring knowledge analogous to humans.", "Early variants of ZSL assume that during inference, samples are only from unseen classes.", "Recent observations Scheirer et al., 2013; realize that such an assumption is not realistic.", "Generalized ZSL (GZSL) addresses this concern and considers a more practical variant.", "In GZSL there is no restriction on seen and unseen classes during inference.", "We are required to discriminate between all the classes.", "Clearly, GZSL is more challenging because the trained classifier is generally biased toward seen classes.", "In order to create a bridge between visual space and semantic attribute space, some methods utilize embedding techniques (Palatucci et al., 2009; Romera-Paredes & Torr, 2015; Socher et al., 2013; Bucher et al., 2016; Xu et al., 2017; Zhang et 
al., 2017; Simonyan & Zisserman, 2014; Xian et al., 2016; Zhang & Saligrama, 2016; Al-Halah et al., 2016; Zhang & Shi, 2019; Atzmon & Chechik, 2019) and the others use semantic similarity between seen and unseen classes (Zhang & Saligrama, 2015; Mensink et al., 2014) .", "Semantic similarity based models represent each unseen class as a mixture of seen classes.", "While the embedding based models follow three various directions; mapping visual space to semantic space (Palatucci et al., 2009; Romera-Paredes & Torr, 2015; Socher et al., 2013; Bucher et al., 2016; Xu et al., 2017; Socher et al., 2013) , mapping semantic space to the visual space (Zhang et al., 2017; Shojaee & Baghshah, 2016; Ye & Guo, 2017) , and finding a latent space then mapping both visual and semantic space into the joint embedding space Simonyan & Zisserman, 2014; Xian et al., 2016; Zhang & Saligrama, 2016; Al-Halah et al., 2016) .", "The loss functions in embedding based models have training samples only from the seen classes.", "For unseen classes, we do not have any samples.", "It is not difficult to see that this lack of training samples biases the learning process towards seen classes only.", "One of the recently proposed techniques to address this issue is augmenting the loss function with some unsupervised regularization such as entropy minimization over the unseen classes .", "Another recent methodology which follows a different perspective is deploying Generative Adversarial Network (GAN) to generate synthetic samples for unseen classes by utilizing their attribute information Zhu et al., 2018; Xian et al., 2018) .", "Although generative models boost the results significantly, it is difficult to train these models.", "Furthermore, the training requires generation of large number of samples followed by training on a much larger augmented data which hurts their scalability.", "The two most recent state-of-the-art GZSL methods, CRnet (Zhang & Shi, 2019) and COSMO (Atzmon & Chechik, 2019) , both employ a complex mixture of experts approach.", "CRnet is based on k-means clustering with an expert module on each cluster (seen class) to map semantic space to visual space.", "The output of experts (cooperation modules) are integrated and finally sent to a complex loss (relation module) to make a decision.", "CRnet is a multi-module (multi-network) method that needs end-to-end training with many hyperparameters.", "Also COSMO is a complex gating model with three modules: a seen/unseen classifier and two expert classifiers over seen and unseen classes.", "Both of these methods have many modules, and hence, several hyperparameters; architectural, and learning decisions.", "A complex pipeline is susceptible to errors, for example, CRnet uses k-means clustering for training and determining the number of experts and a weak clustering will lead to bad results.", "Our Contribution: We propose a simple fully connected neural network architecture with unified (both seen and unseen classes together) cross-entropy loss along with soft-labeling.", "Soft-labeling is the key novelty of our approach which enables the training data from the seen classes to also train the unseen class.", "We directly use attribute similarity information between the correct seen class and the unseen classes to create a soft unseen label for each training data.", "As a result of soft labeling, training instances for seen classes also serve as soft training instance for the unseen class without increasing the training corpus.", "This soft labeling leads to 
implicit supervision for the unseen classes that eliminates the need for any unsupervised regularization such as entropy loss in .", "Soft-labeling along with cross-entropy loss enables a simple MLP network to tackle the GZSL problem.", "Our proposed model, which we call Soft-labeled ZSL (SZSL), is a simple (unlike GANs) and efficient (unlike visual-semantic pairwise embedding models) approach which achieves the state-of-the-art performance in the Generalized-ZSL setting on all five ZSL benchmark datasets and outperforms the state-of-the-art for some of them.", "We proposed a discriminative GZSL classifier with visual-to-semantic mapping and cross-entropy loss.", "During training, while SZSL is trained on a seen class, it simultaneously learns similar unseen classes through soft labels based on semantic class attributes.", "We deploy similarity-based soft labeling on unseen classes, which allows us to learn both seen and unseen signatures simultaneously via a simple architecture.", "Our proposed soft-labeling strategy along with cross-entropy loss leads to a novel regularization via a generalized similarity-based weighted cross-entropy loss that can successfully tackle the GZSL problem.", "Soft-labeling offers a trade-off between seen and unseen accuracies and provides the capability to adjust these accuracies based on the particular application.", "We achieve state-of-the-art performance, in the GZSL setting, on all five ZSL benchmark datasets while keeping the model simple, efficient and easy to train." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.12765957415103912, 0.07843136787414551, 0.2448979616165161, 0.1702127605676651, 0.08510638028383255, 0.23076923191547394, 0.19607841968536377, 0.12765957415103912, 0.2769230604171753, 0.08888888359069824, 0.13793103396892548, 0.04878048226237297, 0.03703703358769417, 0.17391303181648254, 0.2448979616165161, 0.11538460850715637, 0.19230768084526062, 0.19999998807907104, 0.1860465109348297, 0.045454539358615875, 0.1463414579629898, 0.1904761791229248, 0.10526315122842789, 0.04651162400841713, 0.1428571343421936, 0.1395348757505417, 0.07407406717538834, 0.09090908616781235, 0.052631575614213943, 0.16326530277729034, 0.18518517911434174, 0.16129031777381897, 0.0476190447807312, 0.07999999821186066, 0.1111111044883728, 0.1249999925494194, 0.1666666567325592, 0.1428571343421936, 0.20408162474632263, 0.09302324801683426, 0.145454540848732, 0.307692289352417, 0.12244897335767746, 0.307692289352417, 0.19999998807907104, 0.31372547149658203, 0.23255813121795654, 0.3333333432674408, 0.24390242993831635, 0.19230768084526062, 0.38461539149284363, 0.23076923191547394, 0.2083333283662796, 0.307692289352417 ]
B1lmSeHKwB
true
[ "How to use cross-entropy loss for zero shot learning with soft labeling on unseen classes : a simple and effective solution that achieves state-of-the-art performance on five ZSL benchmark datasets." ]
[ "In complex tasks, such as those with large combinatorial action spaces, random exploration may be too inefficient to achieve meaningful learning progress.", "In this work, we use a curriculum of progressively growing action spaces to accelerate learning.", "We assume the environment is out of our control, but that the agent may set an internal curriculum by initially restricting its action space.", "Our approach uses off-policy reinforcement learning to estimate optimal value functions for multiple action spaces simultaneously and efficiently transfers data, value estimates, and state representations from restricted action spaces to the full task.", "We show the efficacy of our approach in proof-of-concept control tasks and on challenging large-scale StarCraft micromanagement tasks with large, multi-agent action spaces.", "The value of curricula has been well established in machine learning, reinforcement learning, and in biological systems.", "When a desired behaviour is sufficiently complex, or the environment too unforgiving, it can be intractable to learn the behaviour from scratch through random exploration.", "Instead, by \"starting small\" (Elman, 1993) , an agent can build skills, representations, and a dataset of meaningful experiences that allow it to accelerate its learning.", "Such curricula can drastically improve sample efficiency (Bengio et al., 2009 ).", "Typically, curriculum learning uses a progression of tasks or environments.", "Simple tasks that provide meaningful feedback to random agents are used first, and some schedule is used to introduce more challenging tasks later during training (Graves et al., 2017) .", "However, in many contexts neither the agent nor experimenter has such unimpeded control over the environment.", "In this work, we instead make use of curricula that are internal to the agent, simplifying the exploration problem without changing the environment.", "In particular, we grow the size of the action space of reinforcement learning agents over the course of training.", "At the beginning of training, our agents use a severely restricted action space.", "This helps exploration by guiding the agent towards rewards and meaningful experiences, and provides low variance updates during learning.", "The action space is then grown progressively.", "Eventually, using the most unrestricted action space, the agents are able to find superior policies.", "Each action space is a strict superset of the more restricted ones.", "This paradigm requires some domain knowledge to identify a suitable hierarchy of action spaces.", "However, such a hierarchy is often easy to find.", "Continuous action spaces can be discretised with increasing resolution.", "Similarly, curricula for coping with the large combinatorial action spaces induced by many agents can be obtained from the prior that nearby agents are more likely to need to coordinate.", "For example, in routing or traffic flow problems nearby agents or nodes may wish to adopt similar local policies to alleviate global congestion.", "Our method will be valuable when it is possible to identify a restricted action space in which random exploration leads to significantly more meaningful experiences than random exploration in the full action space.", "We propose an approach that uses off-policy reinforcement learning to improve sample efficiency in this type of curriculum learning.", "Since data from exploration using a restricted action space is still valid in the Markov Decision Processes (MDPs) corresponding to the 
less restricted action spaces, we can learn value functions in the less restricted action space with 'off-action-space' data collected by exploring in the restricted action space.", "In our approach, we learn value functions corresponding to each level of restriction simultaneously.", "We can use the relationships of these value functions to each other to accelerate learning further, by using value estimates themselves as initialisations or as bootstrap targets for the less restricted action spaces, as well as sharing learned state representations.", "Empirically, we first demonstrate the efficacy of our approach in two simple control tasks, in which the resolution of discretised actions is progressively increased.", "We then tackle a more challenging set of problems with combinatorial action spaces, in the context of StarCraft micromanagement with large numbers of agents (50-100).", "Given the heuristic prior that nearby agents in a multiagent setting are likely to need to coordinate, we use hierarchical clustering to impose a restricted action space on the agents.", "Agents in a cluster are restricted to take the same action, but we progressively increase the number of groups that can act independently of one another over the course of training.", "Our method substantially improves sample efficiency on a number of tasks, outperforming learning any particular action space from scratch, a number of ablations, and an actor-critic baseline that learns a single value function for the behaviour policy, as in the work of Czarnecki et al. (2018) .", "Code is available, but redacted here for anonymity.", "We also compare against a Mix&Match (MM) baseline using the actor-critic approach of Czarnecki et al. (2018) , but adapted for our new multi-agent setting and supporting a third level in the mixture Figure 3 : StarCraft micromanagement with growing action spaces.", "We report the mean and standard error (over 5 random seeds) of the evaluation winrate during training, with a moving average over the past 500 episodes.", "of policies (A 0 , A 1 , A 2 ).", "We tuned hyperparameters for all algorithms on the easiest, fastesttraining scenario (80 marines vs. 
80 marines).", "On this scenario, MM learns faster but plateaus at the same level as GAS(2).", "MM underperforms on all other scenarios to varying degrees.", "Learning separate value functions for each A , as in our approach, appears to accelerate the transfer learning in the majority of settings.", "Another possible explanation is that MM may be more sensitive to hyperparameters.", "We do not use population based training to tune hyperparameters on the fly, which could otherwise help MM adapt to each scenario.", "However, GAS would presumably also benefit from population based training, at the cost of further computation and sample efficiency.", "The policies learned by GAS exhibit good tactics.", "Control of separate groups is used to position our army so as to maximise the number of attacking units by forming a wall or a concave that surrounds the enemy, and by coordinating a simultaneous assault.", "Figure 5 in the Appendix shows some example learned policies.", "In scenarios where MM fails to learn well, it typically falls into a local minimum of attacking head-on.", "In each scenario, we test an ablation GAS (2): ON-AC that does not use our off-action-space update, instead training each level of the Q-function only with data sampled at that level.", "This ablation performs somewhat worse on average, although the size of the impact varies in different scenarios.", "In some tasks, it is beneficial to accelerate learning for finer action spaces using data drawn from the off-action-space policy.", "In Appendix A.1.1, the same ablation shows significantly worse performance on the Mountain Car task and comparable performance on Acrobot.", "We present a number of further ablations on two scenarios.", "The most striking failure is of the 'SEP-Q' variant which does not compose the value function as a sum of scores in the hierarchy.", "It is critical to ensure that values are well-initialised as we move to less restricted action spaces.", "In the discretised continuous control tasks, 'SEP-Q' also underperforms, although less dramatically.", "The choice of target is less important: performing a max over coarser action spaces to construct the target as described in Section 4.2 does not improve learning speed as intended.", "One potential reason is that maximising over more potential targets increases the maximisation bias already present in Q-learning (Hasselt, 2010).", "Additionally, we use an n-step objective which combines a partial onpolicy return with the bootstrap target, which could reduce the relative impact of the choice of target.", "Finally, we experiment with a higher .", "Unfortunately, asymptotic performance is degraded slightly once we use A 3 or higher.", "One potential reason is that it decreases the average group size, pushing against the limits of the spatial resolution that may be captured by our CNN architecture.", "Higher increases the amount of time that there are fewer units than groups, leaving certain groups empty and rendering our masked pooling operation degenerate.", "We do not see a fundamental limitation that should restrict the further growth of the action space, although we note that most hierarchical approaches in the literature avoid too many levels of depth.", "For example, Czarnecki et al. 
(2018) only mix between two sizes of action spaces rather than the three we progress through in the majority of our GAS experiments.", "In this work, we presented an algorithm for growing action spaces with off-policy reinforcement learning to efficiently shape exploration.", "We learn value functions for all levels of a hierarchy of restricted action spaces simultaneously, and transfer data, value estimates, and representations from more restricted to less restricted action spaces.", "We also present a strategy for using this approach in cooperative multi-agent control.", "In discretised continuous control tasks and challenging multiagent StarCraft micromanagement scenarios, we demonstrate empirically the effectiveness of our approach and the value of off-action-space learning.", "An interesting avenue for future work is to automatically identify how to restrict action spaces for efficient exploration, potentially through meta-optimisation.", "We also look to explore more complex and deeper hierarchies of action spaces." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11428570747375488, 0.3571428656578064, 0.277777761220932, 0.19512194395065308, 0.11428570747375488, 0, 0.1666666567325592, 0.10256409645080566, 0, 0.260869562625885, 0.09999999403953552, 0.0714285671710968, 0.05882352590560913, 0.3571428656578064, 0.38461539149284363, 0.12903225421905518, 0.29999998211860657, 0.2222222238779068, 0.4000000059604645, 0.14814814925193787, 0.1818181723356247, 0.09090908616781235, 0.20000000298023224, 0.05882352590560913, 0.25, 0.12903225421905518, 0.2222222238779068, 0, 0.1702127605676651, 0.11764705181121826, 0.22857142984867096, 0.2631579041481018, 0.09999999403953552, 0.22641509771347046, 0.1904761791229248, 0.18867924809455872, 0.10810810327529907, 0, 0.13793103396892548, 0.07407406717538834, 0, 0.1764705777168274, 0.07999999821186066, 0.05882352590560913, 0.0624999962747097, 0, 0.1395348757505417, 0.08695651590824127, 0.06451612710952759, 0.04878048226237297, 0.06896550953388214, 0.3030303120613098, 0.0624999962747097, 0.08695651590824127, 0.1764705777168274, 0.13793103396892548, 0.07999999821186066, 0.2380952388048172, 0.1249999925494194, 0.1111111044883728, 0.10526315122842789, 0.07692307233810425, 0.10810810327529907, 0.05405404791235924, 0.1428571343421936, 0.10256409645080566, 0.25, 0.1666666567325592, 0.1538461446762085, 0.11428570747375488, 0.1875, 0.07692307233810425 ]
Skl4LTEtDS
true
[ "Progressively growing the available action space is a great curriculum for learning agents" ]
[ "Recently, researchers have discovered that the state-of-the-art object classifiers can be fooled easily by small perturbations in the input unnoticeable to human eyes.", " It is known that an attacker can generate strong adversarial examples if she knows the classifier parameters", ". Conversely, a defender can robustify the classifier by retraining if she has the adversarial examples", ". \nThe cat-and-mouse game nature of attacks and defenses raises the question of the presence of equilibria in the dynamics", ".\nIn this paper, we present a neural-network based attack class to approximate a larger but intractable class of attacks, and \n", "formulate the attacker-defender interaction as a zero-sum leader-follower game.", "We present sensitivity-penalized optimization algorithms to find minimax solutions, which are the best worst-case defenses against whitebox attacks.", "Advantages of the learning-based attacks and defenses compared to gradient-based attacks and defenses are demonstrated with MNIST and CIFAR-10.", "Recently, researchers have made an unsettling discovery that the state-of-the-art object classifiers can be fooled easily by small perturbations in the input unnoticeable to human eyes BID24 BID7 .", "Following studies tried to explain the cause of the seeming failure of deep learning toward such adversarial examples.", "The vulnerability was ascribed to linearity BID24 , low flexibility BID4 , or the flatness/curvedness of decision boundaries BID20 , but a more complete picture is still under research.", "This is troublesome since such a vulnerability can be exploited in critical situations such as an autonomous car misreading traffic signs or a facial recognition system granting access to an impersonator without being noticed.", "Several methods of generating adversarial examples were proposed BID7 BID19 BID2 , most of which use the knowledge of the classifier to craft examples.", "In response, a few defense methods were proposed: retraining target classifiers with adversarial examples called adversarial training BID24 BID7 ; suppressing gradient by retraining with soft labels called defensive distillation BID22 ; hardening target classifiers by training with an ensemble of adversarial examples BID25 .In", "this paper we focus on whitebox attacks, that is, the model and the parameters of the classifier are known to the attacker. This", "requires a more robust classifier or defense method than simply relying on the secrecy of the parameters as defense. When", "the classifier parameters are known to an attacker, existing attack methods are very successful at fooling the classifiers. Conversely", ", when the attack is known to the classifier, e.g., in the form of adversarial examples, one can weaken the attack by retraining the classifier with adversarial examples, called adversarial training. However,", "if we repeat adversarial sample generation and adversarial training back-to-back, it is observed that the current adversarially-trained classifier is no longer robust to previous attacks (see Sec. 3.1.) To find the classifier robust against the class of gradient-based attacks, we first propose a sensitivitypenalized optimization procedure. Experiments", "show that the classifier from the procedure is more robust than adversarially-trained classifiers against previous attacks, but it still remains vulnerable to some degrees. This raises", "the main question of the paper: Can a classifier be robust to all types of attacks? 
The answer", "seems to be negative in light of the strong adversarial examples that can be crafted by direct optimization procedures from or BID2 . Note that", "the class of optimization-based attack is very large, as there is no restriction on the adversarial patterns that can be generated except for certain bounds such as l p -norm bounds. The vastness", "of the optimization-based attack class is a hindrance to the study of the problem, as the defender cannot learn efficiently about the attack class from a finite number of samples. To study the", "problem analytically, we use a class of learning-based attack that can be generated by a class of neural networks. This class of", "attack can be considered an approximation of the class of optimization -based attacks, in that the search space of optimal perturbation is restricted to the parameter space of a neural network architecture, e.g., all perturbations that can be generated by fully-connected 3-layer ReLU networks. Similar to what", "we propose, others have recently considered training neural networks to generate adversarial examples BID21 BID0 . While the proposed", "learning-based attack is weaker than the optimization-based attack, it can generate adversarial examples in test time with only single feedforward passes, which makes real-time attacks possible. We also show that", "the class of neural-network based attacks is quite different from the the class of gradient-based attacks (see Sec. 4.1.) Using the learning-based attack class, we introduce a continuous game formulation for analyzing the dynamics of attack-defense. The game is played", "by an attacker and a defender/classifier 1 , where the attacker tries to maximize the risk of the classification task by perturbing input samples under certain constraints such as l p -norm bounds, and the defender/classifier tries to adjust its parameters to minimize the same risk given the perturbed inputs. It is important to", "note that for adversarial attack problems, the performance of an attack or a defense cannot be measured in isolation, but only in pairs of (attack, defense) . This is because the", "effectiveness of an attack/defense depends on the defense/attack it is against. As a two-player game", ", there may not be a dominant defense that is no less robust than all other defenses against all attacks. However, there is a", "natural notion of the best defense or attack in the worst case. Suppose one player", "moves first by choosing her parameters and the other player responds with the knowledge of the first player's move. This is an example", "of a leader-follower game BID1 for which there are two well-known states, the minimax and the maximin solutions if it is a constant-sum game. To find those solutions", "empirically, we propose a new continuous optimization method using the sensitivity penalization term. We show that the minimax", "solution from the proposed method is indeed different from the solution from the conventional alternating descent/ascent and is also more robust. We also show that the strength/weakness", "of the minimax-trained classifier is different from that of adversarially-trained classifiers for gradient-based attacks. 
The contributions of this paper are summarized", "as follows. • We provide a continuous game model to analyze", "adversarial example attacks and defenses, using the neural network-based attack class as a feasible approximation to a larger but intractable class of optimization-based attacks. • We demonstrate the difficulty of defending against", "multiple attack types and present the minimax defense as the best worst-case defense methods. • We propose a sensitivity-penalized optimization method", "(Alg. 1) to numerically find continuous minimax solutions, which is better than alternating descent/ascent. The proposed optimization method can also be used for other", "minimax problems beyond the adversarial example problem. The proposed methods are demonstrated with the MNIST and the CIFAR-10 datasets. For readability, details about experimental settings and the", "results with CIFAR-10 are presented in the appendix.", "In this paper, we present a continuous game formulation of adversarial attacks and defenses using a learning-based attack class implemented by neural networks.", "We show that this class of attacks is quite different from the gradient-based attacks.", "While a classifier robust to all types of attack may yet be an elusive goal, the minimax defense against the neural network-based attack class is well-defined and practically achievable.", "We show that the proposed optimization method can find minimax defenses which are more robust than adversarially-trained classifiers and the classifiers from simple alternating descent/ascent.", "We demonstrate these with MNIST and CIFAR-10." ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.06666666269302368, 0.07999999821186066, 0.09090908616781235, 0.260869562625885, 0.1538461446762085, 0, 0.23076923191547394, 0.3478260934352875, 0.05714285373687744, 0.1666666567325592, 0.05714285373687744, 0.05128204822540283, 0.1428571343421936, 0.04999999701976776, 0.1428571343421936, 0, 0.07999999821186066, 0.12121211737394333, 0.1599999964237213, 0.060606058686971664, 0.0833333283662796, 0.13333332538604736, 0.054054051637649536, 0.06666666269302368, 0, 0.042553190141916275, 0.1538461446762085, 0.10526315122842789, 0.04999999701976776, 0.0833333283662796, 0.05882352590560913, 0, 0.1428571343421936, 0, 0.0714285671710968, 0.0624999962747097, 0, 0.14814814925193787, 0.07407406717538834, 0.10526315122842789, 0.22857142984867096, 0.07407406717538834, 0.0624999962747097, 0.12903225421905518, 0, 0.2666666507720947, 0.0952380895614624, 0.11428570747375488, 0.12903225421905518, 0.13333332538604736 ]
ByqFhGZCW
true
[ "A game-theoretic solution to adversarial attacks and defenses." ]
[ "Supervised learning with irregularly sampled time series have been a challenge to Machine Learning methods due to the obstacle of dealing with irregular time intervals.", "Some papers introduced recently recurrent neural network models that deals with irregularity, but most of them rely on complex mechanisms to achieve a better performance.", "This work propose a novel method to represent timestamps (hours or dates) as dense vectors using sinusoidal functions, called Time Embeddings.", "As a data input method it and can be applied to most machine learning models.", "The method was evaluated with two predictive tasks from MIMIC III, a dataset of irregularly sampled time series of electronic health records.", "Our tests showed an improvement to LSTM-based and classical machine learning models, specially with very irregular data.", "An irregularly (or unevenly) sampled time series is a sequence of samples with irregular time intervals between observations.", "This class of data add a time sparsity factor when the intervals between observations are large.", "Most machine learning methods do not have time comprehension, this means they only consider observation order.", "This makes it harder to learn time dependencies found in time series problems.", "To solve this problem recent work propose models that are able to deal with such irregularity (Lipton et al., 2016; Bahadori & Lipton, 2019; Che et al., 2018; Shukla & Marlin, 2018) , but they often rely on complex mechanisms to represent irregularity or to impute missing data.", "In this paper, we introduce a novel way to represent time as a dense vector representation, which is able to improve the expressiveness of irregularly sampled data, we call it Time Embeddings (TEs).", "The proposed method is based on sinusoidal functions discretized to create a continuous representation of time.", "TEs can make a model capable of estimating time intervals between observations, and they do so without the addition of any trainable parameters.", "We evaluate the method with a publicly available real-world dataset of irregularly sampled electronic health records called MIMIC-III (Johnson et al., 2016) .", "The tests were made with two tasks: a classification task (in-hospital mortality prediction) and a regression (length of stay).", "To evaluate the impact of time representation in the data expressiveness we used LSTM and SelfAttentive LSTM models.", "Both are common RNN models that have been reported to achieve great performance in several time series classification problems, and specifically with the MIMIC-III dataset (Lipton et al., 2016; Shukla & Marlin, 2018; Bahadori & Lipton, 2019; Zhang et al., 2018) .", "We also evaluated simpler models such as linear and logistic regression and a shallow Multi Layer Perceptron.", "All models were evaluated with and without TEs to asses possible improvements.", "This paper propose a novel method to represent hour time or dates as dense vectors to improve irregularly sampled time series.", "It was evaluated with two different approaches and evaluated in two tasks from the MIMIC III dataset.", "Our method showed some improvement with most models tested, including recurrent neural networks and classic machine learning methods.", "Despite being outperformed by binary masking in some tests we believe TEs can still be an viable option.", "Specially to very irregular time series and high dimensional data, were TEs can be applied by addition without increasing the input dimensionality." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1538461446762085, 0.1428571343421936, 0.21052631735801697, 0.1875, 0.15789473056793213, 0.05882352590560913, 0.11764705181121826, 0.12121211737394333, 0.060606054961681366, 0.13793103396892548, 0.06779660284519196, 0.21276594698429108, 0.3030303120613098, 0.1538461446762085, 0.09999999403953552, 0.05714285373687744, 0.1818181723356247, 0.1071428507566452, 0.060606054961681366, 0.13793103396892548, 0.277777761220932, 0, 0.11428570747375488, 0, 0.10256409645080566 ]
SkeJMCVFDS
true
[ "A novel method to create dense descriptors of time (Time Embeddings) to make simple models understand temporal structures" ]
[ "Community detection in graphs can be solved via spectral methods or posterior inference under certain probabilistic graphical models.", "Focusing on random graph families such as the stochastic block model, recent research has unified both approaches and identified both statistical and computational detection thresholds in terms of the signal-to-noise ratio.", "By recasting community detection as a node-wise classification problem on graphs, we can also study it from a learning perspective.", "We present a novel family of Graph Neural Networks (GNNs) for solving community detection problems in a supervised learning setting.", "We show that, in a data-driven manner and without access to the underlying generative models, they can match or even surpass the performance of the belief propagation algorithm on binary and multiclass stochastic block models, which is believed to reach the computational threshold in these cases.", "In particular, we propose to augment GNNs with the non-backtracking operator defined on the line graph of edge adjacencies.", "The GNNs are achieved good performance on real-world datasets. ", "In addition, we perform the first analysis of the optimization landscape of using (linear) GNNs to solve community detection problems, demonstrating that under certain simplifications and assumptions, the loss value at any local minimum is close to the loss value at the global minimum/minima.", "Graph inference problems encompass a large class of tasks and domains, from posterior inference in probabilistic graphical models to community detection and ranking in generic networks, image segmentation or compressed sensing on non-Euclidean domains.", "They are motivated both by practical applications, such as in the case of PageRank, and also by fundamental questions on the algorithmic hardness of solving such tasks.From a data-driven perspective, these problems can be formulated in unsupervised, semi-supervised or supervised learning settings.", "In the supervised case, one assumes a dataset of graphs with labels on their nodes, edges or the entire graphs, and attempts to perform node-wise, edge-wise and graph-wise classification by optimizing a loss over a certain parametric class, e.g. 
neural networks.", "Graph Neural Networks (GNNs) are natural extensions of Convolutional Neural Networks to graph-structured data, and have emerged as a powerful class of algorithms to perform complex graph inference leveraging labeled data (Gori et al., 2005; BID3 (and references therein).", "In essence, these neural networks learn cascaded linear combinations of intrinsic graph operators interleaved with node-wise (or edge-wise) activation functions.", "Since they utilize intrinsic graph operators, they can be applied to varying input graphs, and they offer the same parameter sharing advantages as their CNN counterparts.In this work, we focus on community detection problems, a wide class of node classification tasks that attempt to discover a clustered, segmented structure within a graph.", "The algorithmic approaches to this problem include a rich class of spectral methods, which take advantage of the spectrum of certain operators defined on the graph, as well as approximate message-passing methods such as belief propagation (BP), which performs approximate posterior inference under predefined graphical models (Decelle et al., 2011) .", "Focusing on the supervised setting, we study the ability of GNNs to approximate, generalize or even improve upon these class of algorithms.", "Our motivation is two-fold.", "On the one hand, this problem exhibits algorithmic hardness on some settings, opening up the possibility to discover more efficient algorithms than the current ones.", "On the other hand, many practical scenarios fall beyond pre-specified probabilistic models, requiring data-driven solutions.We propose modifications to the GNN architecture, which allow it to exploit edge adjacency information, by incorporating the non-backtracking operator of the graph.", "This operator is defined over the edges of the graph and allows a directed flow of information even when the original graph is undirected.", "It was introduced to community detection problems by Krzakala et al. 
(2013) , who propose a spectral method based on the non-backtracking operator.", "We refer to the resulting GNN model as a Line Graph Neural Network (LGNN).", "Focusing on important random graph families exhibiting community structure, such as the stochastic block model (SBM) and the geometric block model (GBM), we demonstrate improvements in the performance by our GNN and LGNN models compared to other methods, including BP, even in regimes within the so-called computational-to-statistical gap.", "A perhaps surprising aspect is that these gains can be obtained even with linear LGNNs, which become parametric versions of power iteration algorithms.We want to mention that besides community detection tasks, GNN and LGNN can be applied to other node-wise classification problems too.", "The reason we are focusing on community detection problems is that this is a relatively well-studied setup, for which different algorithms have been proposed and where computational and statistical thresholds have been studied in several scenarios.", "Moreover, synthetic datasets can be easily generated for community detection tasks.", "Therefore, we think it is a nice setup for comparing different algorithms, besides its practical values.The good performances of GNN and LGNN motivate our second main contribution: the analysis of the optimization landscape of simplified and linear GNN models when trained with planted solutions of a given graph distribution.", "Under reparametrization, we provide an upper bound on the energy gap controlling the energy difference between local and global minima (or minimum).", "With some assumptions on the spectral concentration of certain random matrices, this energy gap will shrink as the size of the input graphs increases, which would mean that the optimization landscape is benign on large enough graphs.", "In this work, we have studied data-driven approaches to supervised community detection with graph neural networks.", "Our models achieve comparable performance to BP in binary SBM for various SNRs, and outperform BP in the sparse regime of 5-class SBM that falls between the computationalto-statistical gap.", "This is made possible by considering a family of graph operators including the power graph adjacency matrices, and importantly by introducing the line graph equipped with the non-backtracking matrix.", "We also provided a theoretical analysis of the optimization landscapes of simplified linear GNN for community detection and showed the gap between the loss value at local and global minima are bounded by quantities related to the concentration of certain random matricies.One word of caution is that our empirical results are inherently non-asymptotic.", "Whereas models trained for given graph sizes can be used for inference on arbitrarily sized graphs (owing to the parameter sharing of GNNs), further work is needed in order to understand the generalization properties as |V | increases.", "Nevertheless, we believe our work opens up interesting questions, namely better understanding how our results on the energy landscape depend upon specific signal-to-noise ratios, or whether the network parameters can be interpreted mathematically.", "This could be useful in the study of computational-to-statistical gaps, where our model can be used to inquire about the form of computationally tractable approximations.", "Another current limitation of our model is that it presumes a fixed number of communities to be detected.", "Other directions of future research include the extension to 
the case where the number of communities is unknown and varied, or even increasing with |V |, as well as applications to ranking and edge-cut problems.", "A PROOF OF THEOREM 5.1For simplicity and with an abuse of notation, in the remaining part we redefine L andL in the following way, to be the negative of their original definition in the main section: DISPLAYFORM0 .", "Thus, minimizing the loss function (5) is equivalent to maximizing the function L n (β) redefined here.We write the Cholesky decomposition of EX n as EX n = R n R T n , and define DISPLAYFORM1 n ) T , and ∆B n = B n − I n .", "Given a symmetric matrix K ∈ R M ×M , we let λ 1 (K), λ 2 (K), ..., λ M (K) denote the eigenvalues of K in nondecreasing order." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13636362552642822, 0.2222222238779068, 0.17777776718139648, 0.2666666507720947, 0.1846153736114502, 0.3636363446712494, 0.0555555522441864, 0.13114753365516663, 0.24561403691768646, 0.1875, 0.21875, 0.09677419066429138, 0.08695651590824127, 0.21917808055877686, 0.11428570747375488, 0.08695651590824127, 0, 0.08163265138864517, 0.19999998807907104, 0.2666666507720947, 0.3265306055545807, 0.14999999105930328, 0.20895521342754364, 0.12121211737394333, 0.20689654350280762, 0.1621621549129486, 0.14492753148078918, 0.1304347813129425, 0.10526315122842789, 0.1904761791229248, 0.11764705181121826, 0.23999999463558197, 0.1666666567325592, 0.16393442451953888, 0.10526315122842789, 0.0833333283662796, 0.04651162400841713, 0.072727270424366, 0.10344827175140381, 0.09999999403953552, 0.1599999964237213 ]
H1g0Z3A9Fm
true
[ "We propose a novel graph neural network architecture based on the non-backtracking matrix defined over the edge adjacencies and demonstrate its effectiveness in community detection tasks on graphs." ]
[ "Residual networks (Resnets) have become a prominent architecture in deep learning.", "However, a comprehensive understanding of Resnets is still a topic of ongoing research.", "A recent view argues that Resnets perform iterative refinement of features.", "We attempt to further expose properties of this aspect.", "To this end, we study Resnets both analytically and empirically.", "We formalize the notion of iterative refinement in Resnets by showing that residual architectures naturally encourage features to move along the negative gradient of loss during the feedforward phase.", "In addition, our empirical analysis suggests that Resnets are able to perform both representation learning and iterative refinement.", "In general, a Resnet block tends to concentrate representation learning behavior in the first few layers while higher layers perform iterative refinement of features.", "Finally we observe that sharing residual layers naively leads to representation explosion and hurts generalization performance, and show that simple existing strategies can help alleviating this problem.", "Traditionally, deep neural network architectures (e.g. VGG Simonyan & Zisserman (2014) , AlexNet Krizhevsky et al. (2012) , etc.", ") have been compositional in nature, meaning a hidden layer applies an affine transformation followed by non-linearity, with a different transformation at each layer.", "However, a major problem with deep architectures has been that of vanishing and exploding gradients.", "To address this problem, solutions like better activations (ReLU Nair & Hinton (2010) ), weight initialization methods Glorot & Bengio (2010) ; He et al. (2015) and normalization methods Ioffe & Szegedy (2015) ; BID0 have been proposed.", "Nonetheless, training compositional networks deeper than 15 − 20 layers remains a challenging task.Recently, residual networks (Resnets He et al. (2016a) ) were introduced to tackle these issues and are considered a breakthrough in deep learning because of their ability to learn very deep networks and achieve state-of-the-art performance.", "Besides this, performance of Resnets are generally found to remain largely unaffected by removing individual residual blocks or shuffling adjacent blocks Veit et al. (2016) .", "These attributes of Resnets stem from the fact that residual blocks transform representations additively instead of compositionally (like traditional deep networks).", "This additive framework along with the aforementioned attributes has given rise to two school of thoughts about Resnets-the ensemble view where they are thought to learn an exponential ensemble of shallower models Veit et al. (2016) , and the unrolled iterative estimation view Liao & Poggio (2016) ; Greff et al. 
(2016) , where Resnet layers are thought to iteratively refine representations instead of learning new ones.", "While the success of Resnets may be attributed partly to both these views, our work takes steps towards achieving a deeper understanding of Resnets in terms of its iterative feature refinement perspective.", "Our contributions are as follows:1.", "We study Resnets analytically and provide a formal view of iterative feature refinement using Taylor's expansion, showing that for any loss function, a residual block naturally encourages representations to move along the negative gradient of the loss with respect to hidden representations.", "Each residual block is therefore encouraged to take a gradient step in order to minimize the loss in the hidden representation space.", "We empirically confirm this by measuring the cosine between the output of a residual block and the gradient of loss with respect to the hidden representations prior to the application of the residual block.2.", "We empirically observe that Resnet blocks can perform both hierarchical representation learning (where each block discovers a different representation) and iterative feature refinement (where each block improves slightly but keeps the semantics of the representation of the previous layer).", "Specifically in Resnets, lower residual blocks learn to perform representation learning, meaning that they change representations significantly and removing these blocks can sometimes drastically hurt prediction performance.", "The higher blocks on the other hand essentially learn to perform iterative inference-minimizing the loss function by moving the hidden representation along the negative gradient direction.", "In the presence of shortcut connections 1 , representation learning is dominantly performed by the shortcut connection layer and most of residual blocks tend to perform iterative feature refinement.3.", "The iterative refinement view suggests that deep networks can potentially leverage intensive parameter sharing for the layer performing iterative inference.", "But sharing large number of residual blocks without loss of performance has not been successfully achieved yet.", "Towards this end we study two ways of reusing residual blocks:", "1. Sharing residual blocks during training;", "2. 
Unrolling a residual block for more steps than it was trained to unroll.", "We find that training Resnet with naively shared blocks leads to bad performance.", "We expose reasons for this failure and investigate a preliminary fix for this problem.", "Our main contribution is formalizing the view of iterative refinement in Resnets and showing analytically that residual blocks naturally encourage representations to move in the half space of the negative loss gradient, thus implementing a gradient descent in the activation space (each block reduces loss and improves accuracy).", "We validate the theory experimentally on a wide range of Resnet architectures. We further explored two forms of sharing blocks in Resnet.", "We show that Resnet can be unrolled to more steps than it was trained on.", "Next, we found that, counterintuitively, training residual blocks with shared blocks leads to overfitting.", "While we propose a variant of batch normalization to mitigate it, we leave further investigation of this phenomenon for future work.", "We hope that our developed formal view, and practical results, will aid analysis of other models employing iterative inference and residual connections.", "∂h_o , then it is equivalent to updating the parameters of the convolution layer using a gradient update step.", "To see this, consider the change in h_o from updating parameters using gradient descent with step size η.", "This is given by DISPLAYFORM0. Thus, moving h_o in the half space of −∂L/∂h_o has the same effect as that achieved by updating the parameters W, b using gradient descent.", "Although we found this insight interesting, we don't build upon it in this paper.", "We leave this as future work." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11764705181121826, 0, 0.23529411852359772, 0, 0, 0.0624999962747097, 0.1666666567325592, 0.13793103396892548, 0, 0, 0, 0, 0, 0, 0, 0, 0.03448275849223137, 0.05714285373687744, 0, 0.0476190447807312, 0, 0, 0.10526315867900848, 0.0624999962747097, 0.13793103396892548, 0.1818181723356247, 0.1599999964237213, 0, 0, 0, 0, 0, 0, 0.04444444179534912, 0, 0, 0, 0, 0.2222222238779068, 0, 0, 0, 0, 0 ]
SJa9iHgAZ
true
[ "Residual connections really perform iterative inference" ]
[ "We develop end-to-end learned reconstructions for lensless mask-based cameras, including an experimental system for capturing aligned lensless and lensed images for training. ", "Various reconstruction methods are explored, on a scale from classic iterative approaches (based on the physical imaging model) to deep learned methods with many learned parameters. ", "In the middle ground, we present several variations of unrolled alternating direction method of multipliers (ADMM) with varying numbers of learned parameters.", "The network structure combines knowledge of the physical imaging model with learned parameters updated from the data, which compensate for artifacts caused by physical approximations.", "Our unrolled approach is 20X faster than classic methods and produces better reconstruction quality than both the classic and deep methods on our experimental system. " ]
[ 1, 0, 0, 0, 0 ]
[ 0.3255814015865326, 0.1702127605676651, 0.09302324801683426, 0.260869562625885, 0.31111109256744385 ]
HJgAjm3qLB
false
[ "We improve the reconstruction time and quality on an experimental mask-based lensless imager using an end-to-end learning approach which incorporates knowledge of the imaging model." ]
[ "Deep learning, a rebranding of deep neural network research works, has achieved a remarkable success in recent years.", "With multiple hidden layers, deep learning models aim at computing the hierarchical feature representations of the observational data.", "Meanwhile, due to its severe disadvantages in data consumption, computational resources, parameter tuning costs and the lack of result explainability, deep learning has also suffered from lots of criticism.", "In this paper, we will introduce a new representation learning model, namely “Sample-Ensemble Genetic Evolutionary Network” (SEGEN), which can serve as an alternative approach to deep learning models.", "Instead of building one single deep model, based on a set of sampled sub-instances, SEGEN adopts a genetic-evolutionary learning strategy to build a group of unit models generations by generations.", "The unit models incorporated in SEGEN can be either traditional machine learning models or the recent deep learning models with a much “narrower” and “shallower” architecture.", "The learning results of each instance at the final generation will be effectively combined from each unit model via diffusive propagation and ensemble learning strategies.", "From the computational perspective, SEGEN requires far less data, fewer computational resources and parameter tuning efforts, but has sound theoretic interpretability of the learning process and results.", "Extensive experiments have been done on several different real-world benchmark datasets, and the experimental results obtained by SEGEN have demonstrated its advantages over the state-of-the-art representation learning models.", "In recent years, deep learning, a rebranding of deep neural network research works, has achieved a remarkable success.", "The essence of deep learning is to compute the hierarchical feature representations of the observational data BID8 ; BID16 .", "With multiple hidden layers, the deep learning models have the capacity to capture very good projections from the input data space to the objective output space, whose outstanding performance has been widely illustrated in various applications, including speech and audio processing BID7 ; , language modeling and processing BID0 ; BID19 , information retrieval BID10 ; BID22 , objective recognition and computer vision BID16 , as well as multimodal and multi-task learning BID27 BID28 .", "By this context so far, various kinds of deep learning models have been proposed already, including deep belief network BID11 , deep Boltzmann machine BID22 , deep neural network BID13 ; BID14 and deep autoencoder model BID24 .Meanwhile", ", deep learning models also suffer from several serious criticism due to their several severe disadvantages BID29 . Generally", ", learning and training deep learning models usually demands (1) a large amount of training data, (2) large and powerful computational facilities, (3) heavy parameter tuning costs, but lacks (4) theoretic explanation of the learning process and results. These disadvantages", "greatly hinder the application of deep learning models in many areas which cannot meet the requirements or requests a clear interpretability of the learning performance. 
Due to these reasons", ", by this context so far, deep learning research and application works are mostly carried out within/via the collaboration with several big technical companies, but the models proposed by them (involving hundreds of hidden layers, billions of parameters, and using a large cluster with thousands of server nodes BID5 ) can hardly be applied in other real-world applications.In this paper, we propose a brand new model, namely SEGEN (Sample-Ensemble Genetic Evolutionary Network), which can work as an alternative approach to the deep learning models. Instead of building", "one single model with a deep architecture, SEGEN adopts a genetic-evolutionary learning strategy to train a group of unit models generations by generations. Here, the unit models", "can be either traditional machine learning models or deep learning models with a much \"narrower\" and \"shallower\" structure. Each unit model will", "be trained with a batch of training instances sampled form the dataset. By selecting the good", "unit models from each generation (according to their performance on a validation set), SEGEN will evolve itself and create the next generation of unit modes with probabilistic genetic crossover and mutation, where the selection and crossover probabilities are highly dependent on their performance fitness evaluation. Finally, the learning", "results of the data instances will be effectively combined from each unit model via diffusive propagation and ensemble learning strategies. These terms and techniques", "mentioned here will be explained in great detail in Section 4. Compared with the existing", "deep learning models, SEGEN have several great advantages, and we will illustrate them from both the bionics perspective and the computational perspective as follows.From the bionics perspective, SEGEN effectively models the evolution of creatures from generations to generations, where the creatures suitable for the environment will have a larger chance to survive and generate the offsprings. Meanwhile, the offsprings", "inheriting good genes from its parents will be likely to adapt to the environment as well. In the SEGEN model, each", "unit network model in generations can be treated as an independent creature, which will receive a different subsets of training instances and learn its own model variables. For the unit models suitable", "for the environment (i.e., achieving a good performance on a validation set), they will have a larger chance to generate their child models. The parent model achieving better", "performance will also have a greater chance to pass their variables to the child model.From the computational perspective, SEGEN requires far less data and resources, and also has a sound theoretic explanation of the learning process and results. The unit models in each generation", "of SEGEN are of a much simpler architecture, learning of which can be accomplished with much less training data, less computational resources and less hyper-parameter tuning efforts. In addition, the training dataset", "pool, model hyper-parameters are shared by the unit models, and the increase of generation size (i.e., unit model number in each generation) or generation number (i.e., how many generation rounds will be needed) will not increase the learning resources consumption. 
The relatively \"narrower", "\" structure of unit models will also significantly enhance the interpretability of the unit model training process as well as of the learning results, especially if the unit models are traditional non-deep-learning models. Furthermore, the sound theoretical foundations", "of genetic algorithms and ensemble learning will also help explain the information inheritance across generations and the result ensemble in SEGEN. In this paper, we will use the network embedding problem", "BID25 BID2 ; BID20 (applying an autoencoder as the unit model) as an example to illustrate the SEGEN model. Meanwhile, applications of SEGEN on other data categories", "(e.g., images and raw feature inputs) with CNN and MLP as the unit model will also be provided in Section 5.3. The following parts of this paper are organized as follows", ". The problem formulation is provided in Section 3. Model SEGEN", "will be introduced in Section 4, whose performance", "will be evaluated in Section 5. Finally, Section 2 introduces the related work, and we conclude", "this paper in Section 6.", "In this paper, we have introduced an alternative approach to deep learning models, namely SEGEN.", "Significantly different from the existing deep learning models, SEGEN builds a group of unit models generation by generation, instead of building one single model with an extremely deep architecture.", "The choice of unit models covered in SEGEN can be either traditional machine learning models or the latest deep learning models with a \"smaller\" and \"narrower\" architecture.", "SEGEN has great advantages over deep learning models, since it requires much less training data, fewer computational resources and less parameter tuning effort, but provides more information about its learning and result integration process.", "The effectiveness and efficiency of SEGEN have been well demonstrated by the extensive experiments done on the real-world network-structured datasets." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09999999403953552, 0.14999999105930328, 0.11764705181121826, 0.8799999952316284, 0.25, 0.21739129722118378, 0.04347825422883034, 0.04255318641662598, 0.12244897335767746, 0.10256409645080566, 0.14999999105930328, 0.12345678359270096, 0.1090909019112587, 0.19999998807907104, 0.1428571343421936, 0.25, 0.3298968970775604, 0.2222222238779068, 0.23255813121795654, 0.052631575614213943, 0.13114753365516663, 0.04347825422883034, 0, 0.1846153736114502, 0.1428571343421936, 0.22641508281230927, 0.11999999731779099, 0.13114753365516663, 0.16326530277729034, 0.03389830142259598, 0.12244897335767746, 0.04081632196903229, 0.1304347813129425, 0.0363636314868927, 0, 0, 0, 0, 0.3684210479259491, 0.16326530277729034, 0.21276594698429108, 0.07547169178724289, 0 ]
HJgVisRqtX
true
[ "We introduce a new representation learning model, namely “Sample-Ensemble Genetic Evolutionary Network” (SEGEN), which can serve as an alternative approach to deep learning models." ]
[ "How can we teach artificial agents to use human language flexibly to solve problems in a real-world environment?", "We have one example in nature of agents being able to solve this problem: human babies eventually learn to use human language to solve problems, and they are taught with an adult human-in-the-loop.", "Unfortunately, current machine learning methods (e.g. from deep reinforcement learning) are too data inefficient to learn a language in this way (3).", "An outstanding goal is finding an algorithm with a suitable ‘language learning prior’ that allows it to learn human language, while minimizing the number of required human interactions.\n\n", "In this paper, we propose to learn such a prior in simulation, leveraging the increasing amount of available compute for machine learning experiments (1).", "We call our approach Learning to Learn to Communicate (L2C).", "Specifically, in L2C we train a meta-learning agent in simulation to interact with populations of pre-trained agents, each with their own distinct communication protocol.", "Once the meta-learning agent is able to quickly adapt to each population of agents, it can be deployed in new populations unseen during training, including populations of humans.", "To show the promise of the L2C framework, we conduct some preliminary experiments in a Lewis signaling game (4), where we show that agents\n", "trained with L2C are able to learn a simple form of human language (represented by a hand-coded compositional language) in fewer iterations than randomly initialized agents.", "Language is one of the most important aspects of human intelligence; it allows humans to coordinate and share knowledge with each other.", "We will want artificial agents to understand language as it is a natural means for us to specify their goals.So how can we train agents to understand language?", "We adopt the functional view of language BID16 that has recently gained popularity (8; 14) : agents understand language when they can use language to carry out tasks in the real world.", "One approach to training agents that can use language in their environment is via emergent communication, where researchers train randomly initialized agents to solve tasks requiring communication (7; 16 ).", "An open question in emergent communication is how the resulting communication protocols can be transferred to learning human language.", "Existing approaches attempt to do this using auxiliary tasks, for example having agents predict the label of an image in English while simultaneously playing an image-based referential game BID11 .", "While this works for learning the names of objects, it's unclear if simply using an auxiliary loss will scale to learning the English names of complex concepts, or learning to use English to interact in an grounded environment.One approach that we know will work (eventually) for training language learning agents is using a human-in-the-loop, as this is how human babies acquire language.", "In other words, if we had a good enough model architecture and learning algorithm, the human-in-the-loop approach should work.", "However, recent work in this direction has concluded that current algorithms are too sample inefficient to effectively learn a language with compositional properties from humans (3).", "Human guidance is expensive, and thus we would want such an algorithm to be as sample efficient as possible.", "An open problem is thus to create an algorithm or training procedure that results in increased sampleefficiency for language 
learning with a human-in-the-loop. In this paper, we present the Learning to Learn to Communicate (L2C) framework, with the goal of training agents to quickly learn new (human) languages.", "The core idea behind L2C is to leverage the increasing amount of available compute for machine learning experiments (1) to learn a 'language learning prior' by training agents via meta-learning in simulation (see Figure 1).", "Figure 1 shows a diagram of the L2C framework.", "An advantage of L2C is that agents can be trained in an external environment (which grounds the language), where agents interact with the environment via actions and language.", "Thus, (in theory) L2C could be scaled to learn complicated grounded tasks involving language.", "Specifically, we train a meta-learning agent in simulation to interact with populations of pre-trained agents, each with their own distinct communication protocol.", "Once the meta-learning agent is able to quickly adapt to each population of agents, it can be deployed in new populations unseen during training, including populations of humans.", "The L2C framework has two main advantages: (1) it permits agents to learn language that is grounded in an environment with which the agents can interact (i.e. it is not limited to referential games); and (2) in contrast with work from the instruction following literature (2), agents can be trained via L2C to both speak (output language to help accomplish their goal) and listen (map from the language to a goal or sequence of actions). To", "show the promise of the L2C framework, we provide some preliminary experiments in a Lewis signaling game BID3 . Specifically", ", we show that agents trained with L2C are able to learn a simple form of human language (represented by a hand-coded compositional language) in fewer iterations than randomly initialized agents. These preliminary", "results suggest that L2C is a promising framework for training agents to learn human language from few human interactions." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23529411852359772, 0.21739129722118378, 0.14999999105930328, 0.17777776718139648, 0.24390242993831635, 0.1538461446762085, 0.20512819290161133, 0.1428571343421936, 0.10526315122842789, 0.1904761791229248, 0.10526315122842789, 0.2380952388048172, 0.21739129722118378, 0.17777776718139648, 0.11428570747375488, 0.13333332538604736, 0.1875, 0.0555555522441864, 0.1395348757505417, 0.11428570747375488, 0.16949151456356049, 0.25, 0.09090908616781235, 0.1428571343421936, 0.1249999925494194, 0.21052631735801697, 0.1428571343421936, 0.15584415197372437, 0.11428570747375488, 0.1666666567325592, 0.22857142984867096 ]
rkxJrcHo2V
true
[ "We propose to use meta-learning for more efficient language learning, via a kind of 'domain randomization'. " ]
[ "We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization. ", "This allows us to explicitly search for schedules that achieve good generalization.", "We describe the structure of the gradient of a validation error w.r.t.", "the learning rates, the hypergradient, and based on this we introduce a novel online algorithm.", "Our method adaptively interpolates between two recently proposed techniques (Franceschi et al., 2017; Baydin et al.,2018), featuring increased stability and faster convergence.", "We show empirically that the proposed technique compares favorably with baselines and related methodsin terms of final test accuracy.", "Learning rate (LR) adaptation for first-order optimization methods is one of the most widely studied aspects in optimization for learning methods -in particular neural networks -with early work dating back to the origins of connectionism (Jacobs, 1988; Vogl et al., 1988) .", "More recent work focused on developing complex schedules that depend on a small number of hyperparameters (Loshchilov & Hutter, 2017; Orabona & Pál, 2016) .", "Other papers in this area have focused on the optimization of the (regularized) training loss (Schaul et al., 2013; Baydin et al., 2018; Wu et al., 2018) .", "While quick optimization is desirable, the true goal of supervised learning is to minimize the generalization error, which is commonly estimated by holding out part of the available data for validation.", "Hyperparameter optimization (HPO), a related but distinct branch of the literature, specifically focuses on this aspect, with less emphasis on the goal of rapid convergence on a single task.", "Research in this direction is vast (see Hutter et al. (2019) for an overview) and includes model-based (Snoek et al., 2012; Hutter et al., 2015) , model-free (Bergstra & Bengio, 2012; Hansen, 2016) , and gradientbased (Domke, 2012; Maclaurin et al., 2015) approaches.", "Additionally, works in the area of learning to optimize (Andrychowicz et al., 2016; Wichrowska et al., 2017) have focused on the problem of tuning parameterized optimizers on whole classes of learning problems but require prior expensive optimization and are not designed to speed up training on a single specific task.", "The goal of this paper is to automatically compute in an online fashion a learning rate schedule for stochastic optimization methods (such as SGD) only on the basis of the given learning task, aiming at producing models with associated small validation error.", "We study the problem of finding a LR schedule under the framework of gradient-based hyperparameter optimization (Franceschi et al., 2017) : we consider as an optimal schedule η * = (η * 0 , . . . , η * T −1 ) ∈ R T + a solution to the following constrained optimization problem min{f T (η) = E(w T (η)) : η ∈ R T + } s.t. w 0 =w, w t+1 (η) = Φ t (w t (η), η t )", "for t = {0, . . . 
, T − 1} = [T ] , where E : R d → R + is an objective function, Φ t :", "is a (possibly stochastic) weight update dynamics,w ∈ R d represents the initial model weights (parameters) and finally w t are the weights after t iterations.", "We can think of E as either the training or the validation loss of the model, while the dynamics Φ describe the update rule (such as SGD, SGD-Momentum, Adam etc.).", "For example in the case of SGD, Φ t (w t , η t ) = w t − η t ∇L t (w t ), with L t (w t ) the (possibly regularized) training loss on the t-th minibatch.", "The horizon T should be large enough so that the training error can be effectively minimized, in order to avoid underfitting.", "Note that a too large value of T does not necessarily harm since η k = 0 for k >T is still a feasible solution, implementing early stopping in this setting.", "Finding a good learning rate schedule is an old but crucially important issue in machine learning.", "This paper makes a step forward, proposing an automatic method to obtain performing LR schedules that uses an adaptive moving average over increasingly long hypergradient approximations.", "MARTHE interpolates between HD and RTHO taking the best of the two worlds.", "The implementation of our algorithm is fairly simple within modern automatic differentiation and deep learning environments, adding only a moderate computational overhead over the underlying optimizer complexity.", "In this work, we studied the case of optimizing the learning rate schedules for image classification tasks; we note, however, that MARTHE is a general technique for finding online hyperparameter schedules (albeit it scales linearly with the number of hyperparameters), possibly implementing a competitive alternative in other application scenarios, such as tuning regularization parameters (Luketina et al., 2016) .", "We plan to further validate the method both in other learning domains for adapting the LR and also to automatically tune other crucial hyperparameters.", "We believe that another interesting future research direction could be to learn the adaptive rules for µ and β in a meta learning fashion." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.6451612710952759, 0.1428571343421936, 0.2142857164144516, 0.19999998807907104, 0.052631575614213943, 0.11428570747375488, 0.22641508281230927, 0.15789473056793213, 0.1538461446762085, 0.2380952388048172, 0.19999998807907104, 0, 0.20689654350280762, 0.2545454502105713, 0.17142856121063232, 0, 0.10256409645080566, 0.09756097197532654, 0.09302324801683426, 0.1111111044883728, 0.08888888359069824, 0.19354838132858276, 0.19512194395065308, 0.1428571343421936, 0.1860465109348297, 0.20588235557079315, 0.21621620655059814, 0.19999998807907104 ]
Ske6qJSKPH
true
[ "MARTHE: a new method to fit task-specific learning rate schedules from the perspective of hyperparameter optimization" ]
[ "Recent years have witnessed some exciting developments in the domain of generating images from scene-based text descriptions.", "These approaches have primarily focused on generating images from a static text description and are limited to generating images in a single pass.", "They are unable to generate an image interactively based on an incrementally additive text description (something that is more intuitive and similar to the way we describe an image).\n ", "We propose a method to generate an image incrementally based on a sequence of graphs of scene descriptions (scene-graphs).", "We propose a recurrent network architecture that preserves the image content generated in previous steps and modifies the cumulative image as per the newly provided scene information.", "Our model utilizes Graph Convolutional Networks (GCN) to cater to variable-sized scene graphs along with Generative Adversarial image translation networks to generate realistic multi-object images without needing any intermediate supervision during training.", "We experiment with Coco-Stuff dataset which has multi-object images along with annotations describing the visual scene and show that our model significantly outperforms other approaches on the same dataset in generating visually consistent images for incrementally growing scene graphs.", "To truly understand the visual world, our models should be able to not only recognize images but also generate them.", "Generative Adversarial Networks, proposed by BID3 have proven immensely useful in generating real world images.", "GANs are composed of a generator and a discriminator that are trained with competing goals.", "The generator is trained to generate samples towards the true data distribution to fool the discriminator, while the discriminator is optimized to distinguish between real samples from the true data distribution and fake samples produced by the generator.", "The next step in this area is to generate customized images and videos in response to the individual tastes of a user.", "A grounding of language semantics in the context of visual modality has wide-reaching impacts in the fields of Robotics, AI, Design and image retrieval.", "To this end, there has been exciting recent progress on generating images from natural language descriptions.", "Conditioned on given text descriptions, conditional-GANs BID11 are able to generate images that are highly related to the text meanings.", "Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts.Leading methods for generating images from sentences struggle with complex sentences containing many objects.", "A recent development in this field has been to represent the information conveyed by a complex sentence more explicitly as a scene graph of objects and their relationships BID7 .", "Scene graphs are a powerful structured representation for both images and language; they have been used for semantic image retrieval BID6 and for evaluating BID0 and improving BID9 image captioning.", "In our work, we propose to leverage these scene graphs by incrementally expanding them into more complex structures and generating corresponding images.", "Most of the current approaches lack the ability to generate images incrementally in multiple steps while preserving the contents of the image generated so far.", "We overcome this shortcoming by conditioning the image generation process over the cumulative image 
generated over the previous steps and over the unseen parts of the scene graph.", "This allows our approach to generate high quality complex real-world scenes with several objects by distributing the image generation over multiple steps without losing the context.", "Recently, BID2 proposed an approach for incremental image generation but their method is limited to synthetic images due to the need of supervision in the intermediate step.", "Our approach circumvents the need for intermediate supervision by enforcing perceptual regularization and is therefore compatible with training for even real world images (as we show later).A", "visualization of our framework's outputs with a progressively growing scene graph can be seen in Figure 1 . We", "can see how at each step new objects get inserted into the image generated so far without losing the context. To", "summarize, we make the following contributions,• We present a framework to generate images from structured scene graphs that allows the images to be interactively modified, while preserving the context and contents of the image generated over previous steps.• Our", "method does not need any kind of intermediate supervision and hence, is not limited to synthetic images (where you can manually generate ground truth intermediate images). It is", "therefore useful for generating real-world images (such as for MS-COCO) which, to the best of our knowledge, is the first attempt of its kind.", "In this paper, we proposed an approach to sequentially generate images using incrementally growing scene graphs with context preservation.", "Through extensive evaluation and qualitative results, we demonstrate that our approach is indeed able to generate an image sequence that is consistent over time and preserves the context in terms of objects generated in previous steps.", "In future, we plan to explore generating end-to-end with text description by augmenting our methodology with module to generate scene graphs from language input.", "While scene-graphs provide a very convenient modality to capture image semantics, we would like to explore ways to take natural sentences as inputs to modify the underlying scene graph.", "The current baseline method does single shot generation by passing the entire layout map through the Cascade Refinement Net for the final image generation.", "We plan to investigate whether the quality of generation can be improved by instead using attention on the GCN embeddings during generation.", "This could also potentially make the task of only modifying certain regions in the image easier.", "Further, we plan to explore better architectures for image generation through layouts for higher resolution image generation." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2702702581882477, 0.14999999105930328, 0.12765957415103912, 0.2702702581882477, 0.3181818127632141, 0.11999999731779099, 0.25925925374031067, 0.04999999329447746, 0.11428570747375488, 0.12121211737394333, 0.13333332538604736, 0.14999999105930328, 0.19999998807907104, 0.1111111044883728, 0.05405404791235924, 0.17241378128528595, 0.1666666567325592, 0.08888888359069824, 0.1904761791229248, 0.5365853905677795, 0.3414634168148041, 0.17777776718139648, 0.17777776718139648, 0.04255318641662598, 0.21052631735801697, 0.14999999105930328, 0.4363636374473572, 0.04444443807005882, 0.1463414579629898, 0.25641024112701416, 0.26923075318336487, 0.1904761791229248, 0.1304347813129425, 0.09756097197532654, 0.14999999105930328, 0.22857142984867096, 0.05882352590560913 ]
SJx-SULKOV
true
[ "Interactively generating image from incrementally growing scene graphs in multiple steps using GANs while preserving the contents of image generated in previous steps" ]
[ "In some important computer vision domains, such as medical or hyperspectral imaging, we care about the classification of tiny objects in large images.", "However, most Convolutional Neural Networks (CNNs) for image classification were developed using biased datasets that contain large objects, in mostly central image positions.", "To assess whether classical CNN architectures work well for tiny object classification we build a comprehensive testbed containing two datasets: one derived from MNIST digits and one from histopathology images.", "This testbed allows controlled experiments to stress-test CNN architectures with a broad spectrum of signal-to-noise ratios.", "Our observations indicate that: (1) There exists a limit to signal-to-noise below which CNNs fail to generalize and that this limit is affected by dataset size - more data leading to better performances; however, the amount of training data required for the model to generalize scales rapidly with the inverse of the object-to-image ratio (2) in general, higher capacity models exhibit better generalization; (3) when knowing the approximate object sizes, adapting receptive field is beneficial; and (4) for very small signal-to-noise ratio the choice of global pooling operation affects optimization, whereas for relatively large signal-to-noise values, all tested global pooling operations exhibit similar performance.", "Convolutional Neural Networks (CNNs) are the current state-of-the-art approach for image classification (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015; Huang et al., 2017) .", "The goal of image classification is to assign an image-level label to an image.", "Typically, it is assumed that an object (or concept) that correlates with the label is clearly visible and occupies a significant portion of the image Krizhevsky, 2009; Deng et al., 2009 ).", "Yet, in a variety of real-life applications, such as medical image or hyperspectral image analysis, only a small portion of the input correlates with the label, resulting in low signal-to-noise ratio.", "We define this input image signal-to-noise ratio as Object to Image (O2I) ratio.", "The O2I ratio range for three real-life datasets is depicted in Figure 1 .", "As can be seen, there exists a distribution shift between standard classification benchmarks and domain specific datasets.", "For instance, in the ImageNet dataset (Deng et al., 2009 ) objects fill at least 1% of the entire image, while in histopathology slices (Ehteshami Bejnordi et al., 2017) cancer cells can occupy as little as 10 −6 % of the whole image.", "Recent works have studied CNNs under different noise scenarios, either by performing random input-to-label experiments (Zhang et al., 2017; or by directly working with noisy annotations (Mahajan et al., 2018; Jiang et al., 2017; Han et al., 2018) .", "While, it has been shown that large amounts of label-corruption noise hinders the CNNs generalization (Zhang et al., 2017; , it has been further demonstrated that CNNs can mitigate this label-corruption noise by increasing the size of training data (Mahajan et al., 2018) , tuning the optimizer hyperparameters (Jastrzębski et al., 2017) or weighting input training samples (Jiang et al., 2017; Han et al., 2018) .", "However, all these works focus on input-to-label corruption and do not consider the case of noiseless input-to-label assignments with low and very low O2I ratios.", "In this paper, we build a novel testbed allowing us to specifically study the performance of CNNs when applied 
to tiny object classification and to investigate the interplay between input signal-to-noise ratio and model generalization.", "We create two synthetic datasets inspired by the children's puzzle book Where's Wally?", "(Handford, 1987) .", "The first dataset is derived from MNIST digits and allows us for two medical imaging datasets (CAME-LYON17 (Ehteshami Bejnordi et al., 2017) and MiniMIAS (Suckling, 1994) ) as well as one standard computer vision classification dataset (ImageNet (Deng et al., 2009) ).", "The ratio is defined as O2I =", "Although low input image signal-to-noise scenarios have been extensively studied in signal processing field (e.g. in tasks such as image reconstruction), less attention has been devoted to low signal-tonoise classification scenarios.", "Thus, in this paper we identified an unexplored machine learning problem, namely image classification in low and very low signal-to-noise ratios.", "In order to study such scenarios, we built two datasets that allowed us to perform controlled experiments by manipulating the input image signal-to-noise ratio and highlighted that CNNs struggle to show good generalization for low and very low signal-to-noise ratios even for a relatively elementary MNIST-based dataset.", "Finally, we ran a series of controlled experiments 9 that explore both a variety of CNNs' architectural choices and the importance of training data scale for the low and very low signal-to-noise classification.", "One of our main observation was that properly designed CNNs can be trained in low O2I regime without using any pixel-level annotations and generalize if we leverage enough training data; however, the amount of training data required for the model to generalize scales rapidly with the inverse of the O2I ratio.", "Thus, with our paper (and the code release) we invite the community to work on data-efficient solutions to low and very low signal-to-noise classification.", "Our experimental study exhibits limitations: First, due to the lack of large scale datasets that allow for explicit control of the input signal-to-noise ratios, we were forced to use the synthetically built nMNIST dataset for most of our analysis.", "As a real life dataset, we used crops from the histopathology CAMELYON dataset; however, due to relatively a small number of unique lesions we were unable to scale the histopathology experiments to the extent as the nMNIST experiments, and, as result, some conclusions might be affected by the limited dataset size.", "Other large scale computer vision datasets like MS COCO (Lin et al., 2014 ) exhibit correlations of the object of interest with the image background.", "For MS COCO, the smallest O2I ratios are for the object category \"sports ball\" which on average occupies between 0.3% and 0.4% of an image and its presence tends to be correlated with the image background (e. g. 
presence of sports fields and players).", "However, future research could examine a setup in which negative images contain objects of the categories \"person\" and \"baseball bat\" and positive images also contain \"sports ball\".", "Second, all the tested models improve the generalization with larger dataset sizes; however, scaling datasets such as CAMELYON to tens of thousands of samples might be prohibitively expensive.", "Instead, further research should be devoted to developing computationally-scalable, data-efficient inductive biases that can handle very low signal-to-noise ratios with limited dataset sizes.", "Future work, could explore the knowledge of the low O2I ratio and therefore sparse signal as an inductive bias.", "Finally, we studied low signal-to-noise scenarios only for binary classification scenarios 10 ; further investigation should be devoted to multiclass problems.", "We hope that this study will stimulate the research in image classification for low signal-to-noise input scenarios." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3199999928474426, 0.12244897335767746, 0.1090909019112587, 0.09302324801683426, 0.09345793724060059, 0.11538460850715637, 0.21052631735801697, 0.24561403691768646, 0.2641509473323822, 0.10256409645080566, 0, 0.09090908616781235, 0.1875, 0.10344827175140381, 0.11428570747375488, 0.16326530277729034, 0.20689654350280762, 0.09999999403953552, 0, 0.0937499925494194, 0, 0.14814814925193787, 0.1304347813129425, 0.17910447716712952, 0.18518517911434174, 0.1428571343421936, 0.1666666567325592, 0.13333332538604736, 0.05882352590560913, 0.15686273574829102, 0.2153846174478531, 0.15686273574829102, 0.11320754140615463, 0.07999999821186066, 0.13333332538604736, 0.04255318641662598, 0.27272728085517883 ]
H1xTup4KPr
true
[ "We study low- and very-low-signal-to-noise classification scenarios, where objects that correlate with class label occupy tiny proportion of the entire image (e.g. medical or hyperspectral imaging)." ]
[ "Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block.", "Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. (2019) showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks.", "This raises the question: do learned attention layers operate similarly to convolutional layers?", "This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice.", "Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as expressive as any convolutional layer.", "Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis.", "Our code is publicly available.", "Recent advances in Natural Language Processing (NLP) are largely attributed to the rise of the transformer (Vaswani et al., 2017) .", "Pre-trained to solve an unsupervised task on large corpora of text, transformer-based architectures, such as GPT-2 (Radford et al., 2018) , BERT (Devlin et al., 2018) and Transformer-XL , seem to possess the capacity to learn the underlying structure of text and, as a consequence, to learn representations that generalize across tasks.", "The key difference between transformers and previous methods, such as recurrent neural networks (Hochreiter & Schmidhuber, 1997) and convolutional neural networks (CNN), is that the former can simultaneously attend to every word of their input sequence.", "This is made possible thanks to the attention mechanism-originally introduced in Neural Machine Translation to better handle long-range dependencies (Bahdanau et al., 2015) .", "With self-attention in particular, the similarity of two words in a sequence is captured by an attention score measuring the distance of their representations.", "The representation of each word is then updated based on those words whose attention score is highest.", "Inspired by its capacity to learn meaningful inter-dependencies between words, researchers have recently considered utilizing self-attention in vision tasks.", "Self-attention was first added to CNN by either using channel-based attention (Hu et al., 2018) or non-local relationships across the image (Wang et al., 2018) .", "More recently, augmented CNNs by replacing some convolutional layers with self-attention layers, leading to improvements on image classification and object detection tasks.", "Interestingly, Ramachandran et al. (2019) noticed that, even though state-of-the art results are reached when attention and convolutional features are combined, under same computation and model size constraints, self-attention-only architectures also reach competitive image classification accuracy.", "These findings raise the question, do self-attention layers process images in a similar manner to convolutional layers?", "From a theoretical perspective, one could argue that transfomers have the capacity to simulate any function-including a CNN.", "Indeed, Pérez et al. 
(2019) showed that a multilayer attention-based architecture with additive positional encodings is Turing complete under some strong theoretical assumptions, such as unbounded precision arithmetic.", "Unfortunately, universality results do not reveal how a machine solves a task, only that it has the capacity to do so.", "Thus, the question of how self-attention layers actually process images remains open.", "We showed that self-attention layers applied to images can express any convolutional layer (given sufficiently many heads) and that fully-attentional models learn to combine local behavior (similar to convolution) and global attention based on input content.", "More generally, fully-attentional models seem to learn a generalization of CNNs where the kernel pattern is learned at the same time as the filters-similar to deformable convolutions (Dai et al., 2017; Zampieri, 2019) .", "Interesting directions for future work include translating existing insights from the rich CNNs literature back to transformers on various data modalities, including images, text and time series." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1111111044883728, 0.20512820780277252, 0.14814814925193787, 0.5294117331504822, 0.11764705181121826, 0.1249999925494194, 0, 0.11764705181121826, 0.072727270424366, 0.12765957415103912, 0.10810810327529907, 0.11428570747375488, 0, 0.1818181723356247, 0.05405404791235924, 0.1666666567325592, 0.0416666641831398, 0.25806450843811035, 0.06451612710952759, 0, 0.1818181723356247, 0.07692307233810425, 0.21739129722118378, 0.04444444179534912, 0.09756097197532654 ]
HJlnC1rKPB
true
[ "A self-attention layer can perform convolution and often learns to do so in practice." ]
[ "We introduce a “learning-based” algorithm for the low-rank decomposition problem: given an $n \\times d$ matrix $A$, and a parameter $k$, compute a rank-$k$ matrix $A'$ that minimizes the approximation loss $||A- A'||_F$.", "The algorithm uses a training set of input matrices in order to optimize its performance.", "Specifically, some of the most efficient approximate algorithms for computing low-rank approximations proceed by computing a projection $SA$, where $S$ is a sparse random $m \\times n$ “sketching matrix”, and then performing the singular value decomposition of $SA$.", "We show how to replace the random matrix $S$ with a “learned” matrix of the same sparsity to reduce the error.\n\n", "Our experiments show that, for multiple types of data sets, a learned sketch matrix can substantially reduce the approximation loss compared to a random matrix $S$, sometimes by one order of magnitude.", "We also study mixed matrices where only some of the rows are trained and the remaining ones are random, and show that matrices still offer improved performance while retaining worst-case guarantees.", "The success of modern machine learning made it applicable to problems that lie outside of the scope of \"classic AI\".", "In particular, there has been a growing interest in using machine learning to improve the performance of \"standard\" algorithms, by fine-tuning their behavior to adapt to the properties of the input distribution, see e.g., [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] .", "This \"learning-based\" approach to algorithm design has attracted a considerable attention over the last few years, due to its potential to significantly improve the efficiency of some of the most widely used algorithmic tasks.", "Many applications involve processing streams of data (video, data logs, customer activity etc) by executing the same algorithm on an hourly, daily or weekly basis.", "These data sets are typically not \"random\" or \"worst-case\"; instead, they come from some distribution which does not change rapidly from execution to execution.", "This makes it possible to design better algorithms tailored to the specific data distribution, trained on past instances of the problem.", "The method has been particularly successful in the context of compressed sensing.", "In the latter framework, the goal is to recover an approximation to an n-dimensional vector x, given its \"linear measurement\" of the form Sx, where S is an m × n matrix.", "Theoretical results [14, 15] show that, if the matrix S is selected at random, it is possible to recover the k largest coefficients of x with high probability using a matrix S with m = O(k log n) rows.", "This guarantee is general and applies to arbitrary vectors x.", "However, if vectors x are selected from some natural distribution (e.g., they represent images), recent works [8, 9, 11] show that one can use samples from that distribution to compute matrices S that improve over a completely random matrix in terms of the recovery error.", "Compressed sensing is an example of a broader class of problems which can be solved using random projections.", "Another well-studied problem of this type is low-rank decomposition: given an n × d matrix A, and a parameter k, compute a rank-k matrix", "Low-rank approximation is one of the most widely used tools in massive data analysis, machine learning and statistics, and has been a subject of many algorithmic studies.", "In particular, multiple algorithms developed over the last decade use the \"sketching\" 
approach, see e.g., [16] [17] [18] [19] [20] [21] [22] [23] [24] .", "Its idea is to use efficiently computable random projections (a.k.a., \"sketches\") to reduce the problem size before performing low-rank decomposition, which makes the computation more space and time efficient.", "For example, [16, 19] show that if S is a random matrix of size m × n chosen from an appropriate distribution, for m depending on , then one can recover a rank-k matrix A such that", "by performing an SVD on SA ∈ R m×d followed by some post-processing.", "Typically the sketch length m is small, so the matrix SA can be stored using little space (in the context of streaming algorithms) or efficiently communicated (in the context of distributed algorithms).", "Furthermore, the SVD of SA can be computed efficiently, especially after another round of sketching, reducing the overall computation time.", "See the survey [25] for an overview of these developments.", "In light of the aforementioned work on learning-based compressive sensing, it is natural to ask whether similar improvements in performance could be obtained for other sketch-based algorithms, notably for low-rank decompositions.", "In particular, reducing the sketch length m while preserving its accuracy would make sketch-based algorithms more efficient.", "Alternatively, one could make sketches more accurate for the same values of m.", "This is the problem we address in this paper.", "Our Results.", "Our main finding is that learned sketch matrices can indeed yield (much) more accurate low-rank decompositions than purely random matrices.", "We focus our study on a streaming algorithm for low-rank decomposition due to [16, 19] , described in more detail in Section 2.", "Specifically, suppose we have a training set of matrices Tr = {A 1 , . . . , A N } sampled from some distribution D. 
Based on this training set, we compute a matrix $S^*$ that (locally) minimizes the empirical loss $\sum_{i=1}^{N} ||A_i - SCW(S^*, A_i)||_F$,", "where $SCW(S^*, A_i)$ denotes the output of the aforementioned Sarlos-Clarkson-Woodruff streaming low-rank decomposition algorithm on matrix $A_i$ using the sketch matrix $S^*$.", "Once the sketch matrix $S^*$ is computed, it can be used instead of a random sketch matrix in all future executions of the SCW algorithm.", "We demonstrate empirically that, for multiple types of data sets, an optimized sketch matrix $S^*$ can substantially reduce the approximation loss compared to a random matrix $S$, sometimes by one order of magnitude (see Figure 1).", "Equivalently, the optimized sketch matrix can achieve the same approximation loss for lower values of $m$.", "A possible disadvantage of learned sketch matrices is that an algorithm that uses them no longer offers worst-case guarantees.", "As a result, if such an algorithm is applied to an input matrix that does not conform to the training distribution, the results might be worse than if random matrices were used.", "To alleviate this issue, we also study mixed sketch matrices, where (say) half of the rows are trained and the other half are random.", "We observe that if such matrices are used in conjunction with the SCW algorithm, its results are no worse than if only the random part of the matrix was used.", "Thus, the resulting algorithm inherits the worst-case performance guarantees of the random part of the sketching matrix.", "At the same time, we show that mixed matrices still substantially reduce the approximation loss compared to random ones, in some cases nearly matching the performance of \"pure\" learned matrices with the same number of rows.", "Thus, mixed random matrices offer \"the best of both worlds\": improved performance for matrices from the training distribution, and worst-case guarantees otherwise." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17391303181648254, 0.1249999925494194, 0.19607841968536377, 0.11428570747375488, 0.21739129722118378, 0.27272728085517883, 0.11428570747375488, 0.13114753365516663, 0.1304347813129425, 0.09756097197532654, 0, 0.2222222238779068, 0.13793103396892548, 0.1395348757505417, 0.07843136787414551, 0.07407406717538834, 0.13333332538604736, 0.11764705181121826, 0.1538461446762085, 0.1428571343421936, 0.0952380895614624, 0.12765957415103912, 0.11999999731779099, 0, 0.1428571343421936, 0.17142856121063232, 0.2222222238779068, 0.21276594698429108, 0.1764705777168274, 0.19999998807907104, 0.1538461446762085, 0.1111111044883728, 0.10256409645080566, 0.07407406717538834, 0.1538461446762085, 0.1538461446762085, 0.19230768084526062, 0.3125, 0.11428570747375488, 0.04444443807005882, 0.10526315122842789, 0.09302324801683426, 0.2666666507720947, 0.1702127605676651, 0.2631579041481018 ]
S1l5s7298H
true
[ "Learning-based algorithms can improve upon the performance of classical algorithms for the low-rank approximation problem while retaining the worst-case guarantee." ]
[ "Neural conversational models are widely used in applications like personal assistants and chat bots.", "These models seem to give better performance when operating on word level.", "However, for fusion languages like French, Russian and Polish vocabulary size sometimes become infeasible since most of the words have lots of word forms.", "We propose a neural network architecture for transforming normalized text into a grammatically correct one.", "Our model efficiently employs correspondence between normalized and target words and significantly outperforms character-level models while being 2x faster in training and 20\\% faster at evaluation.", "We also propose a new pipeline for building conversational models: first generate a normalized answer and then transform it into a grammatically correct one using our network.", "The proposed pipeline gives better performance than character-level conversational models according to assessor testing.", "Neural conversational models BID18 are used in a large number of applications: from technical support and chat bots to personal assistants.", "While being a powerful framework, they often suffer from high computational costs.The main computational and memory bottleneck occurs at the vocabulary part of the model.", "Vocabulary is used to map a sequence of input tokens to embedding vectors: one embedding vector is stored for each word in vocabulary.English is de-facto a standard language for training conversational models, mostly for a large number of speakers and simple grammar.", "In english, words usually have only a few word forms.", "For example, verbs may occur in present and past tenses, nouns can have singular and plural forms.For many other languages, however, some words may have tens of word forms.", "This is the case for Polish, Russian, French and many other languages.", "For these languages storing all forms of frequent words in a vocabulary significantly increase computational costs.To reduce vocabulary size, we propose to normalize input and output sentences by putting them into a standard form.", "Generated texts can then be converted into grammatically correct ones by solving morphological agreement task.", "This can be efficiently done by a model proposed in this work.Our contribution is two-fold:• We propose a neural network architecture for performing morphological agreement in fusion languages such as French, Polish and Russian (Section 2).•", "We introduce a new approach to building conversational models: generating normalized text and then performing morphological agreement with proposed model (Section 3);", "In this paper we proposed a neural network model that can efficiently employ relationship between input and output words in morphological agreement task.", "We also proposed a modification for this model that uses context sentence.", "We apply this model for neural conversational model in a new pipeline: we use normalized question to generate normalized answer and then apply proposed model to obtain grammatically correct response.", "This model showed better performance than character level neural conversational model based on assessors responses.We achieved significant improvement comparing to character-level, bigram and hierarchical sequenceto-sequence models on morphological agreement task for Russian, French and Polish languages.", "Trained models seem to understand main grammatical rules and notions such as tenses, cases and pluralities." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.10526315122842789, 0, 0.0952380895614624, 0, 0, 0.0952380895614624, 0.0714285671710968, 0, 0.04878048598766327, 0, 0, 0, 0.04999999701976776, 0.27272728085517883, 0.1395348757505417, 0.20689654350280762, 0.20000000298023224, 0, 0.0624999962747097, 0.19512194395065308, 0.09090908616781235 ]
HyTrSegCb
true
[ "Proposed architecture to solve morphological agreement task" ]
[ "This paper proposes the use of spectral element methods \\citep{canuto_spectral_1988} for fast and accurate training of Neural Ordinary Differential Equations (ODE-Nets; \\citealp{Chen2018NeuralOD}) for system identification.", "This is achieved by expressing their dynamics as a truncated series of Legendre polynomials.", "The series coefficients, as well as the network weights, are computed by minimizing the weighted sum of the loss function and the violation of the ODE-Net dynamics.", "The problem is solved by coordinate descent that alternately minimizes, with respect to the coefficients and the weights, two unconstrained sub-problems using standard backpropagation and gradient methods.", "The resulting optimization scheme is fully time-parallel and results in a low memory footprint.", "Experimental comparison to standard methods, such as backpropagation through explicit solvers and the adjoint technique \\citep{Chen2018NeuralOD}, on training surrogate models of small and medium-scale dynamical systems shows that it is at least one order of magnitude faster at reaching a comparable value of the loss function.", "The corresponding testing MSE is one order of magnitude smaller as well, suggesting generalization capabilities increase.", "Neural Ordinary Differential Equations (ODE-Nets; Chen et al., 2018) can learn latent models from observations that are sparse in time.", "This property has the potential to enhance the performance of neural network predictive models in applications where information is sparse in time and it is important to account for exact arrival times and delays.", "In complex control systems and model-based reinforcement learning, planning over a long horizon is often needed, while high frequency feedback is necessary for maintaining stability (Franklin et al., 2014) .", "Discrete-time models, including RNNs (Jain & Medsker, 1999) , often struggle to fully meet the needs of such applications due to the fixed time resolution.", "ODE-Nets have been shown to provide superior performance with respect to classic RNNs on time series forecasting with sparse training data.", "However, learning their parameters can be computationally intensive.", "In particular, ODE-Nets are memory efficient but time inefficient.", "In this paper, we address this bottleneck and propose a novel alternative strategy for system identification." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.9302325248718262, 0.11764705181121826, 0.1463414579629898, 0.13333332538604736, 0.05882352590560913, 0.13114753365516663, 0.0555555522441864, 0.19512194395065308, 0.20408162474632263, 0.08163265138864517, 0.09302324801683426, 0.051282044500112534, 0, 0, 0.22857142984867096 ]
Sye0XkBKvS
true
[ "This paper proposes the use of spectral element methods for fast and accurate training of Neural Ordinary Differential Equations for system identification." ]
[ "Exploration in sparse reward reinforcement learning remains an open challenge.", "Many state-of-the-art methods use intrinsic motivation to complement the sparse extrinsic reward signal, giving the agent more opportunities to receive feedback during exploration.", "Commonly these signals are added as bonus rewards, which results in a mixture policy that neither conducts exploration nor task fulfillment resolutely.\n", "In this paper, we instead learn separate intrinsic and extrinsic task policies and schedule between these different drives to accelerate exploration and stabilize learning.", "Moreover, we introduce a new type of intrinsic reward denoted as successor feature control (SFC), which is general and not task-specific.", "It takes into account statistics over complete trajectories and thus differs from previous methods that only use local information to evaluate intrinsic motivation.", "We evaluate our proposed scheduled intrinsic drive (SID) agent using three different environments with pure visual inputs: VizDoom, DeepMind Lab and DeepMind Control Suite.", "The results show a substantially improved exploration efficiency with SFC and the hierarchical usage of the intrinsic drives.", "A video of our experimental results can be found at https://gofile.io/?c=HpEwTd", ".", "Reinforcement learning (RL) agents learn on evaluative feedback (reward signals) instead of instructive feedback (ground truth labels), which takes the process of automating the development of intelligent problem-solving agents one step further (Sutton & Barto, 2018) .", "With deep networks as powerful function approximators bringing traditional RL into high-dimensional domains, deep reinforcement learning (DRL) has shown great potential (Mnih et al., 2015; Schulman et al., 2017; Horgan et al., 2018) .", "However, the success of DRL often relies on carefully shaped dense extrinsic reward signals.", "Although shaping extrinsic rewards can greatly support the agent in finding solutions and shortening the interaction time, designing such dense extrinsic signals often requires substantial domain knowledge, and calculating them typically requires ground truth state information, both of which is hard to obtain in the context of robots acting in the real world.", "When not carefully designed, the reward shape could sometimes serve as bias or even distractions and could potentially hinder the discovery of optimal solutions.", "More importantly, learning on dense extrinsic rewards goes backwards on the progress of reducing supervision and could prevent the agent from taking full advantage of the RL framework.", "In this paper, we consider terminal reward RL settings, where a signal is only given when the final goal is achieved.", "When learning with only an extrinsic terminal reward indicating the task at hand, intelligent agents are given the opportunity to potentially discover optimal solutions even out of the scope of the well established domain knowledge.", "However, in many real-world problems defining a task only by a terminal reward means that the learning signal can be extremely sparse.", "The RL agent would have no clue about what task to accomplish until it receives the terminal reward for the first time by chance.", "Therefore in those scenarios guided and structured exploration is crucial, which is where intrinsically-motivated exploration (Oudeyer & Kaplan, 2008; Schmidhuber, 2010) has recently gained great success (Pathak et al., 2017; Burda et al., 2018b) .", "Most commonly in current state-of-the-art 
approaches, an intrinsic reward is added as a reward bonus to the extrinsic reward.", "Maximizing this combined reward signal, however, results in a mixture policy that neither acts greedily with regard to extrinsic reward max-imization nor to exploration.", "Furthermore, the non-stationary nature of the intrinsic signals could potentially lead to unstable learning on the combined reward.", "In addition, current state-of-the-art methods have been mostly looking at local information calculated out of 1-step lookahead for the estimation of the intrinsic rewards, e.g. one step prediction error (Pathak et al., 2017) , or network distillation error of the next state (Burda et al., 2018b) .", "Although those intrinsic signals can be propagated back to earlier states with temporal difference (TD) learning, it is not clear that this results in optimal long-term exploration.", "We seek to address the aforementioned issues as follows:", "1. We propose a hierarchical agent scheduled intrinsic drive (SID) that focuses on one motivation at a time: It learns two separate policies which maximize the extrinsic and intrinsic rewards respectively.", "A high-level scheduler periodically selects to follow either the extrinsic or the intrinsic policy to gather experiences.", "Disentangling the two policies allows the agent to faithfully conduct either pure exploration or pure extrinsic task fulfillment.", "Moreover, scheduling (even within an episode) implicitly increases the behavior policy space exponentially, which drastically differs from previous methods where the behavior policy could only change slowly due to the incremental nature of TD learning.", "2. We introduce successor feature control (SFC), a novel intrinsic reward that is based on the concept of successor features.", "This feature representation characterizes states through the features of all its successor states instead of looking at local information only.", "This implicitly makes our method temporarily extended, which enables more structured and farsighted exploration that is crucial in exploration-challenging environments.", "We note that both the proposed intrinsic reward SFC and the hierarchical exploration framework SID are without any task-specific components, and can be incorporated into existing DRL methods with minimal computation overhead.", "We present experimental results in three sets of environments, evaluating our proposed agent in the domains of visual navigation and control from pixels, as well as its capabilities of finding optimal solutions under distraction.", "In this paper, we investigate an alternative way of utilizing intrinsic motivation for exploration in DRL.", "We propose a hierarchical agent SID that schedules between following extrinsic and intrinsic drives.", "Moreover, we propose a new type of intrinsic reward SFC that is general and evaluates the intrinsic motivation based on longer time horizons.", "We conduct experiments in three sets of environments and show that both our contributions SID and SFC help greatly in improving exploration efficiency.", "We consider many possible research directions that could stem from this work, including designing more efficient scheduling strategies, incorporating several intrinsic drives (that are possibly orthogonal and complementary) instead of only one into SID, testing our framework in other control domains such as manipulation, combining the successor representation with learned feature representations and extending our evaluation onto real robotics systems." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.07692307233810425, 0.21621620655059814, 0.05128204822540283, 0.21052631735801697, 0.3243243098258972, 0.1538461446762085, 0.10256409645080566, 0.1818181723356247, 0.0714285671710968, 0.04255318641662598, 0, 0.19999998807907104, 0.09999999403953552, 0.10526315122842789, 0.14999999105930328, 0.1666666567325592, 0.12765957415103912, 0.1621621549129486, 0.10256409645080566, 0.04255318641662598, 0.3030303120613098, 0.21052631735801697, 0.25, 0.0357142798602581, 0.09302324801683426, 0.07999999821186066, 0.2222222238779068, 0.25806450843811035, 0.1249999925494194, 0.04255318641662598, 0.4571428596973419, 0.11764705181121826, 0.0555555522441864, 0.1304347813129425, 0.043478257954120636, 0.1249999925494194, 0.2666666507720947, 0.3684210479259491, 0.05405404791235924, 0.0810810774564743 ]
SklSQgHFDS
true
[ "A new intrinsic reward signal based on successor features and a novel way to combine extrinsic and intrinsic reward." ]
[ "Meta-Reinforcement learning approaches aim to develop learning procedures that can adapt quickly to a distribution of tasks with the help of a few examples.", "Developing efficient exploration strategies capable of finding the most useful samples becomes critical in such settings.", "Existing approaches to finding efficient exploration strategies add auxiliary objectives to promote exploration by the pre-update policy, however, this makes the adaptation using a few gradient steps difficult as the pre-update (exploration) and post-update (exploitation) policies are quite different.", "Instead, we propose to explicitly model a separate exploration policy for the task distribution.", "Having two different policies gives more flexibility in training the exploration policy and also makes adaptation to any specific task easier.", "We show that using self-supervised or supervised learning objectives for adaptation stabilizes the training process and also demonstrate the superior performance of our model compared to prior works in this domain.", "Reinforcement learning (RL) approaches have seen many successes in recent years, from mastering the complex game of Go BID10 to even discovering molecules BID8 .", "However, a common limitation of these methods is their propensity to overfitting on a single task and inability to adapt to even slightly perturbed configuration BID12 .", "On the other hand, humans have this astonishing ability to learn new tasks in a matter of minutes by using their prior knowledge and understanding of the underlying task mechanics.", "Drawing inspiration from human behaviors, researchers have proposed to incorporate multiple inductive biases and heuristics to help the models learn quickly and generalize to unseen scenarios.", "However, despite a lot of effort it has been difficult to approach human levels of data efficiency and generalization.Meta-RL tries to address these shortcomings by learning these inductive biases and heuristics from the data itself.", "These inductive biases or heuristics can be induced in the model in various ways like optimization, policy initialization, loss function, exploration strategies, etc.", "Recently, a class of policy initialization based meta-learning approaches have gained attention like Model Agnostic MetaLearning (MAML) BID1 .", "MAML finds a good initialization for a policy that can be adapted to a new task by fine-tuning with policy gradient updates from a few samples of that task.Given the objective of meta RL algorithms to adapt to a new task from a few examples, efficient exploration strategies are crucial for quickly finding the optimal policy in a new environment.", "Some recent works BID3 have tried to address this problem by using latent variables to model the distribution of exploration behaviors.", "Another set of approaches BID11 BID9 focus on improving the credit assignment of the meta learning objective to the pre-update trajectory distribution.", "However, all these prior works use one or few policy gradient updates to transition from preto post-update policy.", "This limits the applicability of these methods to cases where the post-update (exploitation) policy is similar to the pre-update (exploration) policy and can be obtained with only a few updates.", "Also, for cases where pre-and post-update policies are expected to exhibit different behaviors, large gradient updates may result in training instabilities and lack of convergence.", "To address this problem, we propose to explicitly model a separate exploration 
policy for the distribution of tasks.", "The exploration policy is trained to find trajectories that can lead to fast adaptation of the exploitation policy on the given task.", "This formulation provides much more flexibility in training the exploration policy.", "In the process, we also establish that, in order to adapt as quickly as possible to the new task, it is often more useful to use self-supervised or supervised learning approaches, where possible, to get more effective updates.", "Unlike conventional meta-RL approaches, we proposed to explicitly model a separate exploration policy for the task distribution.", "Having two different policies gives more flexibility in training the exploration policy and also makes adaptation to any specific task easier.", "Hence, as future work, we would like to explore the use of separate exploration and exploitation policies in other meta-learning approaches as well.", "We showed that, through various experiments on both sparse and dense reward tasks, our model outperforms previous works while also being very stable during training.", "This validates that using self-supervised techniques increases the stability of these updates, thus allowing us to use a separate exploration policy to collect the initial trajectories.", "Further, we also show that the variance reduction techniques used in the objective of the exploration policy also have a huge impact on performance.", "However, we would like to note that the idea of using a separate exploration and exploitation policy is much more general and doesn't need to be restricted to MAML.", "that to compute M_{β,z}(s_t, a_t) = w_β^T m_β(s_t, a_t).", "Using the successor representations can effectively be seen as using a more accurate/powerful baseline than directly predicting the N-step returns using the (s_t, a_t) pair." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.1599999964237213, 0.1304347813129425, 0.1875, 0.3181818127632141, 0.3529411852359772, 0.4000000059604645, 0.1111111044883728, 0.11320754140615463, 0.20689654350280762, 0.11320754140615463, 0.13114753365516663, 0.1538461446762085, 0.0833333283662796, 0.2535211145877838, 0.1599999964237213, 0.12244897335767746, 0.12765957415103912, 0.178571417927742, 0.145454540848732, 0.2916666567325592, 0.2448979616165161, 0.3414634168148041, 0.22580644488334656, 0.25531914830207825, 0.3529411852359772, 0.26923075318336487, 0.1818181723356247, 0.4444444477558136, 0.3921568691730499, 0.4285714328289032, 0.13333332538604736, 0.15094339847564697 ]
SyexH9ronN
true
[ "We propose to use a separate exploration policy to collect the pre-adaptation trajectories in MAML. We also show that using a self-supervised objective in the inner loop leads to more stable training and much better performance." ]
[ "The “Supersymmetric Artificial Neural Network” in deep learning (denoted (x; θ, bar{θ})Tw), espouses the importance of considering biological constraints in the aim of further generalizing backward propagation. \n\n", "Looking at the progression of ‘solution geometries’; going from SO(n) representation (such as Perceptron like models) to SU(n) representation (such as UnitaryRNNs) has guaranteed richer and richer representations in weight space of the artificial neural network, and hence better and better hypotheses were generatable.", "The Supersymmetric Artificial Neural Network explores a natural step forward, namely SU(m|n) representation.", "These supersymmetric biological brain representations (Perez et al.) can be represented by supercharge compatible special unitary notation SU(m|n), or (x; θ, bar{θ})Tw parameterized by θ, bar{θ}, which are supersymmetric directions, unlike θ seen in the typical non-supersymmetric deep learning model.", "Notably, Supersymmetric values can encode or represent more information than the typical deep learning model, in terms of “partner potential” signals for example.", "Pertinently, the \"Edward Witten/String theory powered supersymmetric artificial neural network\", is one wherein supersymmetric weights are sought.Many machine learning algorithms are not empirically shown to be exactly biologically plausible, i.e. Deep Neural Network algorithms, have not been observed to occur in the brain, but regardless, such algorithms work in practice in machine learning.Likewise, regardless of Supersymmetry's elusiveness at the LHC, as seen above, it may be quite feasible to borrow formal methods from strategies in physics even if such strategies are yet to show related physical phenomena to exist; thus it may be pertinent/feasible to try to construct a model that learns supersymmetric weights, as I proposed throughout this paper, following the progression of solution geometries going from ( ) to ( ) and onwards to ( | ) BID15 ." ]
[ 1, 0, 0, 0, 0, 0 ]
[ 0.060606058686971664, 0.04651162400841713, 0, 0, 0, 0.056603770703077316 ]
SJewsu6qOV
true
[ "Generalizing backward propagation, using formal methods from supersymmetry." ]
[ "Regularization-based continual learning approaches generally prevent catastrophic forgetting by augmenting the training loss with an auxiliary objective.", "However in most practical optimization scenarios with noisy data and/or gradients, it is possible that stochastic gradient descent can inadvertently change critical parameters.\n", "In this paper, we argue for the importance of regularizing optimization trajectories directly.", "We derive a new co-natural gradient update rule for continual learning whereby the new task gradients are preconditioned with the empirical Fisher information of previously learnt tasks.", "We show that using the co-natural gradient systematically reduces forgetting in continual learning.", "Moreover, it helps combat overfitting when learning a new task in a low resource scenario.", "It is good to have an end to journey toward; but it is the journey that matters, in the end.", "We have presented the co-natural gradient, a technique that regularizes the optimization trajectory of models trained in a continual setting.", "We have shown that the co-natural gradient stands on its own as an efficient approach for overcoming catastrophic forgetting, and that it effectively complements and stabilizes other existing techniques at a minimal cost.", "We believe that the co-natural gradientand more generally, trajectory regularization -can serve as a solid bedrock for building agents that learn without forgetting." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.25806450843811035, 0.10526315122842789, 0.2222222238779068, 0.307692289352417, 0.2222222238779068, 0, 0.06896550953388214, 0.25, 0.08888888359069824, 0.1666666567325592 ]
Hkl4EANFDH
true
[ "Regularizing the optimization trajectory with the Fisher information of old tasks reduces catastrophic forgetting greatly" ]
[ "We study the problem of generating source code in a strongly typed,\n", "Java-like programming language, given a label (for example a set of\n", "API calls or types) carrying a small amount of information about the\n", "code that is desired.", "The generated programs are expected to respect a\n", "`\"realistic\" relationship between programs and labels, as exemplified\n", "by a corpus of labeled programs available during training.\n\n", "Two challenges in such *conditional program generation* are that\n", "the generated programs must satisfy a rich set of syntactic and\n", "semantic constraints, and that source code contains many low-level\n", "features that impede learning.", " We address these problems by training\n", "a neural generator not on code but on *program sketches*, or\n", "models of program syntax that abstract out names and operations that\n", "do not generalize across programs.", "During generation, we infer a\n", "posterior distribution over sketches, then concretize samples from\n", "this distribution into type-safe programs using combinatorial\n", "techniques.", " We implement our ideas in a system for generating\n", "API-heavy Java code, and show that it can often predict the entire\n", "body of a method given just a few API calls or data types that appear\n", "in the method.", "Neural networks have been successfully applied to many generative modeling tasks in the recent past BID22 BID11 BID33 .", "However, the use of these models in generating highly structured text remains relatively understudied.", "In this paper, we present a method, combining neural and combinatorial techniques, for the condition generation of an important category of such text: the source code of programs in Java-like programming languages.The specific problem we consider is one of supervised learning.", "During training, we are given a set of programs, each program annotated with a label, which may contain information such as the set of API calls or the types used in the code.", "Our goal is to learn a function g such that for a test case of the form (X, Prog) (where Prog is a program and X is a label), g(X) is a compilable, type-safe program that is equivalent to Prog.This problem has immediate applications in helping humans solve programming tasks BID12 BID26 .", "In the usage scenario that we envision, a human programmer uses a label to specify a small amount of information about a program that they have in mind.", "Based on this information, our generator seeks to produce a program equivalent to the \"target\" program, thus performing a particularly powerful form of code completion.Conditional program generation is a special case of program synthesis BID19 BID32 , the classic problem of generating a program given a constraint on its behavior.", "This problem has received significant interest in recent years BID2 BID10 .", "In particular, several neural approaches to program synthesis driven by input-output examples have emerged BID3 BID23 BID5 .", "Fundamentally, these approaches are tasked with associating a program's syntax with its semantics.", "As doing so in general is extremely hard, these methods choose to only generate programs in highly controlled domainspecific languages.", "For example, BID3 consider a functional language in which the only data types permitted are integers and integer arrays, control flow is linear, and there is a sum total of 15 library functions.", "Given a set of input-output examples, their method predicts a vector of binary attributes indicating the 
presence or absence of various tokens (library functions) in the target program, and uses this prediction to guide a combinatorial search for programs.In contrast, in conditional program generation, we are already given a set of tokens (for example library functions or types) that appear in a program or its metadata.", "Thus, we sidestep the problem of learning the semantics of the programming language from data.", "We ask: does this simpler setting permit the generation of programs from a much richer, Java-like language, with one has thousands of data types and API methods, rich control flow and exception handling, and a strong type system?", "While simpler than general program synthesis, this problem is still highly nontrivial.", "Perhaps the central issue is that to be acceptable to a compiler, a generated program must satisfy a rich set of structural and semantic constraints such as \"do not use undeclared variables as arguments to a procedure call\" or \"only use API calls and variables in a type-safe way\".", "Learning such constraints automatically from data is hard.", "Moreover, as this is also a supervised learning problem, the generated programs also have to follow the patterns in the data while satisfying these constraints.We approach this problem with a combination of neural learning and type-guided combinatorial search BID6 .", "Our central idea is to learn not over source code, but over tree-structured syntactic models, or sketches, of programs.", "A sketch abstracts out low-level names and operations from a program, but retains information about the program's control structure, the orders in which it invokes API methods, and the types of arguments and return values of these methods.", "We propose a particular kind of probabilistic encoder-decoder, called a Gaussian Encoder-Decoder or GED, to learn a distribution over sketches conditioned on labels.", "During synthesis, we sample sketches from this distribution, then flesh out these samples into type-safe programs using a combinatorial method for program synthesis.", "Doing so effectively is possible because our sketches are designed to contain rich information about control flow and types.We have implemented our approach in a system called BAYOU.", "1 We evaluate BAYOU in the generation of API-manipulating Android methods, using a corpus of about 150,000 methods drawn from an online repository.", "Our experiments show that BAYOU can often generate complex method bodies, including methods implementing tasks not encountered during training, given a few tokens as input.", "We have given a method for generating type-safe programs in a Java-like language, given a label containing a small amount of information about a program's code or metadata.", "Our main idea is to learn a model that can predict sketches of programs relevant to a label.", "The predicted sketches are concretized into code using combinatorial techniques.", "We have implemented our ideas in BAYOU, a system for the generation of API-heavy code.", "Our experiments indicate that the system can often generate complex method bodies from just a few tokens, and that learning at the level of sketches is key to performing such generation effectively.An important distinction between our work and classical program synthesis is that our generator is conditioned on uncertain, syntactic information about the target program, as opposed to hard constraints on the program's semantics.", "Of course, the programs that we generate are type-safe, and therefore guaranteed to satisfy 
certain semantic constraints.", "However, these constraints are invariant across generation tasks; in contrast, traditional program synthesis permits instance-specific semantic constraints.", "Future work will seek to condition program generation on syntactic labels as well as semantic constraints.", "As mentioned earlier, learning correlations between the syntax and semantics of programs written in complex languages is difficult.", "However, the approach of first generating and then concretizing a sketch could reduce this difficulty: sketches could be generated using a limited amount of semantic information, and the concretizer could use logic-based techniques BID2 BID10 to ensure that the programs synthesized from these sketches match the semantic constraints exactly.", "A key challenge here would be to calibrate the amount of semantic information on which sketch generation is conditioned.", "A THE AML LANGUAGE AML is a core language that is designed to capture the essence of API usage in Java-like languages.", "Now we present this language.", "DISPLAYFORM0 AML uses a finite set of API data types.", "A type is identified with a finite set of API method names (including constructors); the type for which this set is empty is said to be void.", "Each method name a is associated with a type signature (τ 1 , . . . , τ k ) → τ 0 , where τ 1 , . . . , τ k are the method's input types and τ 0 is its return type.", "A method for which τ 0 is void is interpreted to not return a value.", "Finally, we assume predefined universes of constants and variable names.The grammar for AML is as in FIG4 .", "Here, x, x 1 , . . . are variable names, c is a constant, and a is a method name.", "The syntax for programs Prog includes method calls, loops, branches, statement sequencing, and exception handling.", "We use variables to feed the output of one method into another, and the keyword let to store the return value of a call in a fresh variable.", "Exp stands for (objectvalued) expressions, which include constants, variables, method calls, and let-expressions such as \"let x = Call : Exp\", which stores the return value of a call in a fresh variable x, then uses this binding to evaluate the expression Exp. (Arithmetic and relational operators are assumed to be encompassed by API methods.)The operational semantics and type system for AML are standard, and consequently, we do not describe these in detail." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.42424240708351135, 0.32258063554763794, 0.42424240708351135, 0.1599999964237213, 0.13793103396892548, 0.06896550953388214, 0.19354838132858276, 0.06666666269302368, 0.3125, 0.06666666269302368, 0, 0.07407406717538834, 0.12903225421905518, 0.06451612710952759, 0.07692307233810425, 0.07692307233810425, 0, 0.1428571343421936, 0.3333333432674408, 0.060606054961681366, 0.22857142984867096, 0.25, 0.10256409645080566, 0.22857142984867096, 0.27586206793785095, 0.2857142686843872, 0.19354838132858276, 0.35555556416511536, 0.19999998807907104, 0.0624999962747097, 0, 0.060606054961681366, 0.09999999403953552, 0.15686273574829102, 0.21917808055877686, 0.12121211737394333, 0.2545454502105713, 0, 0.1666666567325592, 0, 0.2181818187236786, 0.1538461446762085, 0.2222222238779068, 0.1428571343421936, 0.22727271914482117, 0.20408162474632263, 0.2790697515010834, 0.1304347813129425, 0.7727272510528564, 0.1621621549129486, 0.06451612710952759, 0.3888888955116272, 0.18666666746139526, 0.10526315122842789, 0.05405404791235924, 0.0555555522441864, 0.20512819290161133, 0.19999998807907104, 0.19999998807907104, 0.24390242993831635, 0, 0.12903225421905518, 0.22727271914482117, 0.1249999925494194, 0.17142856121063232, 0.1538461446762085, 0.10810810327529907, 0.1666666567325592, 0.27272728085517883, 0.1428571343421936 ]
HkfXMz-Ab
true
[ "We give a method for generating type-safe programs in a Java-like language, given a small amount of syntactic information about the desired code." ]
[ "We propose an approach for sequence modeling based on autoregressive normalizing flows.", "Each autoregressive transform, acting across time, serves as a moving reference frame for modeling higher-level dynamics.", "This technique provides a simple, general-purpose method for improving sequence modeling, with connections to existing and classical techniques.", "We demonstrate the proposed approach both with standalone models, as well as a part of larger sequential latent variable models.", "Results are presented on three benchmark video datasets, where flow-based dynamics improve log-likelihood performance over baseline models.", "Data often contain sequential structure, providing a rich signal for learning models of the world.", "Such models are useful for learning self-supervised representations of sequences (Li & Mandt, 2018; Ha & Schmidhuber, 2018) and planning sequences of actions (Chua et al., 2018; Hafner et al., 2019) .", "While sequential models have a longstanding tradition in probabilistic modeling (Kalman et al., 1960) , it is only recently that improved computational techniques, primarily deep networks, have facilitated learning such models from high-dimensional data (Graves, 2013) , particularly video and audio.", "Dynamics in these models typically contain a combination of stochastic and deterministic variables (Bayer & Osendorfer, 2014; Chung et al., 2015; Gan et al., 2015; Fraccaro et al., 2016) , using simple distributions (e.g. Gaussian) to directly model the likelihood of data observations.", "However, attempting to capture all sequential dependencies with relatively unstructured dynamics may make it more difficult to learn such models.", "Intuitively, the model should use its dynamical components to track changes in the input instead of simultaneously modeling the entire signal.", "Rather than expanding the computational capacity of the model, we seek a method for altering the representation of the data to provide a more structured form of dynamics.", "To incorporate more structured dynamics, we propose an approach for sequence modeling based on autoregressive normalizing flows (Kingma et al., 2016; Papamakarios et al., 2017) , consisting of one or more autoregressive transforms in time.", "A single transform is equivalent to a Gaussian autoregressive model.", "However, by stacking additional transforms or latent variables on top, we can arrive at more expressive models.", "Each autoregressive transform serves as a moving reference frame in which higher-level structure is modeled.", "This provides a general mechanism for separating different forms of dynamics, with higher-level stochastic dynamics modeled in the simplified space provided by lower-level deterministic transforms.", "In fact, as we discuss, this approach generalizes the technique of modeling temporal derivatives to simplify dynamics estimation (Friston, 2008 ).", "We empirically demonstrate this approach, both with standalone autoregressive normalizing flows, as well as by incorporating these flows within more flexible sequential latent variable models.", "While normalizing flows have been applied in a few sequential contexts previously, we emphasize the use of these models in conjunction with sequential latent variable models.", "We present experimental results on three benchmark video datasets, showing improved quantitative performance in terms of log-likelihood.", "In formulating this general technique for improving dynamics estimation in the framework of normalizing flows, we also help to 
contextualize previous work.", "Figure 1: Affine Autoregressive Transforms.", "Computational diagrams for forward and inverse affine autoregressive transforms (Papamakarios et al., 2017).", "Each y_t is an affine transform of x_t, with the affine parameters potentially non-linear functions of x_{<t}.", "The inverse transform is capable of converting a correlated input, x_{1:T}, into a less correlated variable, y_{1:T}.", "We have presented a technique for improving sequence modeling based on autoregressive normalizing flows.", "This technique uses affine transforms to temporally decorrelate sequential data, thereby simplifying the estimation of dynamics.", "We have drawn connections to classical approaches, which involve modeling temporal derivatives.", "Finally, we have empirically shown how this technique can improve sequential latent variable models." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ 0.23076923191547394, 0.06666666269302368, 0.0624999962747097, 0.3030303120613098, 0.12903225421905518, 0.13793103396892548, 0.04999999701976776, 0.07692307233810425, 0.07692307233810425, 0.1818181723356247, 0.060606054961681366, 0.0555555522441864, 0.08888888359069824, 0.1666666567325592, 0.19354838132858276, 0.06896550953388214, 0, 0.05714285373687744, 0.3684210479259491, 0.2702702581882477, 0.06451612710952759, 0.0555555522441864, 0, 0.0714285671710968, 0, 0, 0.2142857164144516, 0.13333332538604736, 0.1538461446762085, 0.5 ]
HklvmlrKPB
true
[ "We show how autoregressive flows can be used to improve sequential latent variable models." ]
[ "It is well-known that many machine learning models are susceptible to adversarial attacks, in which an attacker evades a classifier by making small perturbations to inputs.", "This paper discusses how industrial copyright detection tools, which serve a central role on the web, are susceptible to adversarial attacks.", "We discuss a range of copyright detection systems, and why they are particularly vulnerable to attacks. ", "These vulnerabilities are especially apparent for neural network based systems. ", "As proof of concept, we describe a well-known music identification method and implement this system in the form of a neural net.", "We then attack this system using simple gradient methods.", "Adversarial music created this way successfully fools industrial systems, including the AudioTag copyright detector and YouTube's Content ID system.", "Our goal is to raise awareness of the threats posed by adversarial examples in this space and to highlight the importance of hardening copyright detection systems to attacks.", "Machine learning systems are easily manipulated by adversarial attacks, in which small perturbations to input data cause large changes to the output of a model.", "Such attacks have been demonstrated on a number of potentially sensitive systems, largely in an idealized academic context, and occasionally in the real-world (Tencent, 2019; Kurakin et al., 2016; Athalye et al., 2017; Eykholt et al., 2017; Yakura & Sakuma, 2018; Qin et al., 2019) .", "Copyright detection systems are among the most widely used machine learning systems in industry, and the security of these systems is of foundational importance to some of the largest companies in the world.", "Despite their importance, copyright systems have gone largely unstudied by the ML security community.", "Common approaches to copyright detection extract features, called fingerprints, from sampled video or audio, and then match these features with a library of known fingerprints.", "Examples include YouTube's Content ID, which flags copyrighted material on YouTube and enables copyright owners to monetize and control their content.", "At the time of writing this paper, more than 100 million dollars have been spent on Content ID, which has resulted in more than 3 billion dollars in revenue for copyright holders (Manara, 2018) .", "Closely related tools such as Google Jigsaw detect and remove videos that promote terrorism or jeopardized national security.", "There is also a regulatory push for the use of copyright detection systems; the recent EU Copyright Directive requires any service that allows users to post text, sound, or video to implement a copyright filter.", "A wide range of copyright detection systems exist, most of which are proprietary.", "It is not possible to demonstrate attacks against all systems, and this is not our goal.", "Rather, the purpose of this paper is to discuss why copyright detectors are especially vulnerable to adversarial attacks and establish how existing attacks in the literature can potentially exploit audio and video copyright systems.", "As a proof of concept, we demonstrate an attack against real-world copyright detection systems for music.", "To do this, we reinterpret a simple version of the well-known \"Shazam\" algorithm for music fingerprinting as a neural network and build a differentiable implementation of it in TensorFlow (Abadi et al., 2016) .", "By using a gradient-based attack and an objective that is designed to achieve good transferability to black-box models, we create 
adversarial music that is easily recognizable to a human, while evading detection by a machine.", "With sufficient perturbations, our adversarial music successfully fools industrial systems, including the AudioTag music recognition service (AudioTag, 2009) and YouTube's Content ID system (Google, 2019).", "Copyright detection systems are an important category of machine learning methods, but the robustness of these systems to adversarial attacks has not yet been addressed by the machine learning community.", "We discussed the vulnerability of copyright detection systems, and explained how different kinds of systems may be vulnerable to attacks using known methods.", "As a proof of concept, we build a simple song identification method using neural network primitives and attack it using well-known gradient methods.", "Surprisingly, attacks on this model transfer well to online systems.", "Note that none of the authors of this paper are experts in audio processing or fingerprinting systems.", "The implementations used in this study are far from optimal, and we expect that attacks can be strengthened using sharper technical tools, including perturbation types that are less perceptible to the human ear.", "Furthermore, we are doing transfer attacks using fairly rudimentary surrogate models that rely on hand-crafted features, while commercial systems likely rely on fully trainable neural nets.", "Our goal here is not to facilitate copyright evasion, but rather to raise awareness of the threats posed by adversarial examples in this space, and to highlight the importance of hardening copyright detection and content control systems to attack.", "A number of defenses already exist that can be utilized for this purpose, including adversarial training." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.13793103396892548, 0.1599999964237213, 0, 0.0714285671710968, 0.11764705181121826, 0.29629629850387573, 0.1875, 0, 0, 0.060606058686971664, 0.09090908616781235, 0.12121211737394333, 0.1428571343421936, 0.052631575614213943, 0, 0.10256409645080566, 0.19999998807907104, 0, 0.10810810327529907, 0.1666666567325592, 0, 0.054054051637649536, 0.12121211737394333, 0.060606058686971664, 0.13333332538604736, 0, 0, 0, 0.05128204822540283, 0.0624999962747097, 0.14999999105930328, 0.0833333283662796 ]
SJlRWC4FDB
true
[ "Adversarial examples can fool YouTube's copyright detection system" ]
[ "Equilibrium Propagation (EP) is a learning algorithm that bridges Machine Learning and Neuroscience, by computing gradients closely matching those of Backpropagation Through Time (BPTT), but with a learning rule local in space.\n", "Given an input x and associated target y, EP proceeds in two phases: in the first phase neurons evolve freely towards a first steady state; in the second phase output neurons are nudged towards y until they reach a second steady state.\n", "However, in existing implementations of EP, the learning rule is not local in time:\n", "the weight update is performed after the dynamics of the second phase have converged and requires information of the first phase that is no longer available physically.\n", "This is a major impediment to the biological plausibility of EP and its efficient hardware implementation.\n", "In this work, we propose a version of EP named Continual Equilibrium Propagation (C-EP) where neuron and synapse dynamics occur simultaneously throughout the second phase, so that the weight update becomes local in time.", "We prove theoretically that, provided the learning rates are sufficiently small, at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss given by BPTT (Theorem 1).\n", "We demonstrate training with C-EP on MNIST and generalize C-EP to neural networks where neurons are connected by asymmetric connections.", "We show through experiments that the more the network updates follows the gradients of BPTT, the best it performs in terms of training.", "These results bring EP a step closer to biology while maintaining its intimate link with backpropagation.", "A motivation for deep learning is that a few simple principles may explain animal intelligence and allow us to build intelligent machines, and learning paradigms must be at the heart of such principles, creating a synergy between neuroscience and Artificial Intelligence (AI) research.", "In the deep learning approach to AI (LeCun et al., 2015) , backpropagation thrives as the most powerful algorithm for training artificial neural networks.", "Unfortunately, its implementation on conventional computer or dedicated hardware consumes more energy than the brain by several orders of magnitude (Strubell et al., 2019) .", "One path towards reducing the gap between brains and machines in terms of power consumption is by investigating alternative learning paradigms relying on locally available information, which would allow radically different hardware implementations: such local learning rules could be used for the development of extremely energy efficient learning-capable hardware.", "Investigating such bioplausible learning schemes with real-world applicability is therefore of interest not only for neuroscience, but also for developing neuromorphic computing hardware that takes inspiration from information-encoding, dynamics and topology of the brain to reach fast and energy efficient AI (Ambrogio et al., 2018; Romera et al., 2018) .", "In these regards, Equilibrium Propagation (EP) is an alternative style of computation for estimating error gradients that presents significant advantages (Scellier and Bengio, 2017) .", "EP belongs to the family of contrastive Hebbian learning (CHL) algorithms (Ackley et al., 1985; Movellan, 1991; Hinton, 2002) and therefore benefits from an important feature of these algorithms: neural dynamics and synaptic updates depend solely on information that is locally available.", "As a CHL algorithm, EP applies to 
convergent RNNs, i.e. RNNs that are fed by a static input and converge to a steady state.", "Training such a convergent RNN consists in adjusting the weights so that the steady state corresponding to an input x produces output values close to associated targets y.", "CHL algorithms proceed in two phases: in the first phase, neurons evolve freely without external influence and settle to a (first) steady state; in the second phase, the values of output neurons are influenced by the target y and the neurons settle to a second steady state.", "CHL weight updates consist in a Hebbian rule strengthening the connections between co-activated neurons at the first steady state, and an anti-Hebbian rule with opposite effect at the second steady state.", "A difference between Equilibrium Propagation and standard CHL algorithms is that output neurons are not clamped in the second phase but elastically pulled towards the target y.", "A second key property of EP is that, unlike CHL and other related algorithms, it is intimately linked to backpropagation.", "It has been shown that synaptic updates in EP follow gradients of recurrent backpropagation (RBP) and backpropagation through time (BPTT) (Ernoult et al., 2019) .", "This makes it especially attractive to bridge the gap between neural networks developed by neuroscientists, neuromorphic researchers and deep learning researchers.", "Nevertheless, the bioplausibility of EP still undergoes two major limitations.", "First, although EP is local in space, it is non-local in time.", "In all existing implementations of EP the weight update is performed after the dynamics of the second phase have converged, when the first steady state is no longer physically available.", "Thus the first steady state has to be artificially stored.", "Second, the network dynamics have to derive from a primitive function, which is equivalent to the requirement of symmetric weights in the Hopfield model.", "These two requirements are biologically unrealistic and also hinder the development of efficient EP computing hardware.", "In this work, we propose an alternative implementation of EP (called C-EP) which features temporal locality, by enabling synaptic dynamics to occur throughout the second phase, simultaneously with neural dynamics.", "We then address the second issue by adapting C-EP to systems having asymmetric synaptic connections, taking inspiration from Scellier et al. (2018) ; we call this modified version C-VF.", "More specifically, the contributions of the current paper are the following:", "• We introduce Continual Equilibrium Propagation (C-EP, Section 3.1-3.2), a new version of EP with continual weight updates: the weights of the network are adjusted continually in the second phase of training using local information in space and time.", "Neuron steady states do not need to be stored after the first phase, in contrast with standard EP where a global weight update is performed at the end of the second phase.", "Like standard EP, the C-EP algorithm applies to networks whose synaptic connections between neurons are assumed to be symmetric and tied.", "• We show mathematically that, provided that the changes in synaptic strengths are sufficiently slow (i.e. the learning rates are sufficiently small), at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss obtained with BPTT (Theorem 1 and Fig. 
2 , Section 3.3).", "We call this property the Gradient Descending Dynamics (GDD) property, for consistency with the terminology used in Ernoult et al. (2019) .", "• We demonstrate training with C-EP on MNIST, with accuracy approaching the one obtained with standard EP (Section 4.2).", "• Finally, we adapt our C-EP algorithm to the more bio-realistic situation of a neural network with asymmetric connections between neurons.", "We call this modified version C-VF as it is inspired by the Vector Field method proposed in Scellier et al. (2018) .", "We demonstrate this approach on MNIST, and show numerically that the training performance is correlated with the satisfaction of Gradient Descending Dynamics (Section 4.3).", "For completeness, we also show how the Recurrent Backpropagation (RBP) algorithm of Almeida (1987) ; Pineda (1987) relates to C-EP, EP and BPTT.", "We illustrate the equivalence of these four algorithms on a simple analytical model ( Fig. 3 ) and we develop their relationship in Appendix A.", "Equilibrium Propagation is an algorithm that leverages the dynamical nature of neurons to compute weight gradients through the physics of the neural network.", "C-EP embraces simultaneous synapse and neuron dynamics, resolving the initial need of artificial memory units for storing the neuron values between different phases.", "The C-EP framework preserves the equivalence with Backpropagation Through Time: in the limit of sufficiently slow synaptic dynamics (i.e. small learning rates), the system satisfies Gradient Descending Dynamics (Theorem 1).", "Our experimental results confirm this theorem.", "When training our vanilla RNN with symmetric weights with C-EP while ensuring convergence in 100 epochs, a modest reduction in MNIST accuracy is seen with regards to standard EP.", "This accuracy reduction can be eliminated by using smaller learning rates and rescaling up the total weight update at the end of the second phase (Appendix F.2).", "On top of extending the theory of Ernoult et al. (2019) , Theorem 1 also appears to provide a statistically robust tool for C-EP based learning.", "Our experimental results show as in Ernoult et al. 
(2019) that, for a given network with specified neuron and synapse dynamics, the more the updates of Equilibrium Propagation follow the gradients provided by Backpropagation Through Time before training (in terms of angle in this work), the better this network can learn.", "Our C-EP and C-VF algorithms exhibit features reminiscent of biology.", "C-VF extends C-EP training to RNNs with asymmetric weights between neurons, as is the case in biology.", "Its learning rule, local in space and time, is furthermore closely acquainted to Spike Timing Dependent Plasticity (STDP), a learning rule widely studied in neuroscience, inferred in vitro and in vivo from neural recordings in the hippocampus (Dan and Poo, 2004) .", "In STDP, the synaptic strength is modulated by the relative timings of pre and post synaptic spikes within a precise time window (Bi and Poo, 1998; 2001) .", "Each randomly selected synapse corresponds to one color.", "While dashed and continuous lines coincide for standard EP, they split apart upon untying the weights and using continual updates.", "Strikingly, the same rule that we use for C-VF learning can approximate STDP correlations in a rate-based formulation, as shown through numerical experiments by .", "From this viewpoint our work brings EP a step closer to biology.", "However, C-EP and C-VF do not aim at being models of biological learning per se, in that it would account for how the brain works or how animals learn, for which Reinforcement Learning might be a more suited learning paradigm.", "The core motivation of this work is to propose a fully local implementation of EP, in particular to foster its hardware implementation.", "When computed on a standard computer, due to the use of small learning rates to mimic analog dynamics within a finite number of epochs, training our models with C-EP and C-VF entail long simulation times.", "With a Titan RTX GPU, training a fully connected architecture on MNIST takes 2 hours 39 mins with 1 hidden layer and 10 hours 49 mins with 2 hidden layers.", "On the other hand, C-EP and C-VF might be particularly efficient in terms of speed and energy consumption when operated on neuromorphic hardware that employs analog device physics (Ambrogio et al., 2018; Romera et al., 2018) .", "To this purpose, our work can provide an engineering guidance to map our algorithm onto a neuromorphic system.", "Fig.", "5 (a) shows that hyperparameters should be tuned so that before training, C-EP updates stay within 90", "• of the gradients provided by BPTT.", "More concretely in practice, it amounts to tune the degree of symmetry of the dynamics, for instance the angle between forward and backward weights -see Fig. 4 .1.", "Our work is one step towards bridging Equilibrium Propagation with neuromorphic computing and thereby energy efficient implementations of gradient-based learning algorithms.", "A PROOF OF THEOREM 1", "In this appendix, we prove Theorem 1, which we recall here.", "Theorem 1 (GDD Property).", "Let s 0 , s 1 , . . . , s T be the convergent sequence of states and denote s * = s T the steady state.", "Further assume that there exists some step K where 0 < K ≤ T such that s * = s T = s T −1 = . . . 
s T −K .", "Then, in the limit η → 0 and β → 0, the first K normalized updates in the second phase of C-EP are equal to the negatives of the first K gradients of BPTT, i.e.", "A.1", "A SPECTRUM OF FOUR COMPUTATIONALLY EQUIVALENT LEARNING ALGORITHMS Proving Theorem 1 amounts to prove the equivalence of C-EP and BPTT.", "In fact we can prove the equivalence of four algorithms, which all compute the gradient of the loss:", "1. Backpropagation Through Time (BPTT), presented in Section B.2,", "2. Recurrent Backpropagation (RBP), presented in Section B.3,", "3. Equilibrium Propagation (EP), presented in Section 2,", "4. Equilibrium Propagation with Continual Weight Updates (C-EP), introduced in Section 3.", "In this spectrum of algorithms, BPTT is the most practical algorithm to date from the point of view of machine learning, but also the less biologically realistic.", "In contrast, C-EP is the most realistic in terms of implementation in biological systems, while it is to date the least practical and least efficient for conventional machine learning (computations on standard Von-Neumann hardware are considerably slower due to repeated parameter updates, requiring memory access at each time-step of the second phase)." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1818181723356247, 0.14035087823867798, 0.10810810327529907, 0.21739129722118378, 0.19512194395065308, 0.5614035129547119, 0.2181818187236786, 0.1860465109348297, 0.1395348757505417, 0.09999999403953552, 0.1269841194152832, 0.04255318641662598, 0.08163265138864517, 0.08695651590824127, 0.14492753148078918, 0.1249999925494194, 0.1230769157409668, 0.08695651590824127, 0.07999999821186066, 0.2142857164144516, 0.19999998807907104, 0.1599999964237213, 0.1395348757505417, 0.0833333283662796, 0.09090908616781235, 0.11764705181121826, 0, 0.16326530277729034, 0.05882352590560913, 0.17777776718139648, 0.14999999105930328, 0.37735849618911743, 0.15094339847564697, 0.12121211737394333, 0.3333333432674408, 0.25925925374031067, 0.09090908616781235, 0.2028985470533371, 0.13636362552642822, 0.1428571343421936, 0.17777776718139648, 0.13333332538604736, 0.2083333283662796, 0.1304347813129425, 0.20408162474632263, 0.13636362552642822, 0.2222222238779068, 0.15094339847564697, 0, 0.07999999821186066, 0.1599999964237213, 0.12244897335767746, 0.23529411852359772, 0.11764705181121826, 0.09756097197532654, 0.10344827175140381, 0.1666666567325592, 0.0624999962747097, 0.1395348757505417, 0.1249999925494194, 0.0555555522441864, 0.13114753365516663, 0.1395348757505417, 0.2142857164144516, 0.1249999925494194, 0.10344827175140381, 0.04878048226237297, 0, 0.12903225421905518, 0.12244897335767746, 0.17777776718139648, 0, 0, 0, 0.1395348757505417, 0.045454539358615875, 0.1599999964237213, 0.13636362552642822, 0.10256409645080566, 0, 0, 0.0624999962747097, 0.1111111044883728, 0.08510638028383255, 0.11594202369451523 ]
H1xJhJStPS
true
[ "We propose a continual version of Equilibrium Propagation, where neuron and synapse dynamics occur simultaneously throughout the second phase, with theoretical guarantees and numerical simulations." ]
[ "There are two main lines of research on visual reasoning: neural module network (NMN) with explicit multi-hop reasoning through handcrafted neural modules, and monolithic network with implicit reasoning in the latent feature space.", "The former excels in interpretability and compositionality, while the latter usually achieves better performance due to model flexibility and parameter efficiency. \n", "In order to bridge the gap of the two, we present Meta Module Network (MMN), a novel hybrid approach that can efficiently utilize a Meta Module to perform versatile functionalities, while preserving compositionality and interpretability through modularized design.", "The proposed model first parses an input question into a functional program through a Program Generator.", "Instead of handcrafting a task-specific network to represent each function like traditional NMN, we use Recipe Encoder to translate the functions into their corresponding recipes (specifications), which are used to dynamically instantiate the Meta Module into Instance Modules.", "To endow different instance modules with designated functionality, a Teacher-Student framework is proposed, where a symbolic teacher pre-executes against the scene graphs to provide guidelines for the instantiated modules (student) to follow.", "In a nutshell, MMN adopts the meta module to increase its parameterization efficiency, and uses recipe encoding to improve its generalization ability over NMN.", "Experiments conducted on the GQA benchmark demonstrates that: (1) MMN achieves significant improvement over both NMN and monolithic network baselines; (2) MMN is able to generalize to unseen but related functions.", "Visual reasoning requires a model to learn strong compositionality and generalization abilities, i.e., understanding and answering compositional questions without having seen similar semantic compositions before.", "Such compositional visual reasoning is a hallmark for human intelligence that endows people with strong problem-solving skills given limited prior knowledge.", "Recently, neural module networks (NMNs) (Andreas et al., 2016a; Hu et al., 2017; Johnson et al., 2017b; Hu et al., 2018; Mao et al., 2019) have been proposed to perform such complex reasoning tasks.", "First, NMN needs to pre-define a set of functions and explicitly encode each function into unique shallow neural networks called modules, which are composed dynamically to build an instance-specific network for each input question.", "This approach has high compositionality and interpretability, as each module is specifically designed to accomplish a specific sub-task and multiple modules can be combined to perform unseen combinations during inference.", "However, with increased complexity of the task, the set of functional semantics and modules also scales up.", "As observed in Hudson & Manning (2018) , this leads to higher model complexity and poorer scalability on more challenging scenarios.", "Another line of research on visual reasoning is focused on designing monolithic network architecture, such as MFB (Yu et al., 2017) , BAN (Kim et al., 2018) , DCN (Nguyen & Okatani, 2018) , and MCAN .", "These black-box methods have achieved state-of-the-art performance on more challenging realistic image datasets like VQA (Hudson & Manning, 2019a) , surpassing the aforementioned NMN approach.", "They use a unified neural network to learn general-purpose reasoning skills (Hudson & Manning, 2018) , which is known to be more flexible and scalable without making strict 
assumption about the inputs or designing operation-specific networks for the predefined functional semantics.", "As the reasoning procedure is conducted in the latent feature space, the reasoning process is difficult to interpret.", "Such a model also lacks the ability to capture the compositionality of questions, thus suffering from poorer generalizability than module networks.", "In this paper, we propose Meta Module Network that bridges the gap between monolithic networks and traditional module networks.", "Our model is built upon a Meta Module, which can be instantiated into an instance module performing specific functionalities.", "Our approach significantly outperforms baseline methods and achieves comparable performance to state of the art.", "Detailed error analysis shows that relation modeling over scene graph could further boost MMN for higher performance.", "For future work, we plan to incorporate scene graph prediction into the proposed framework.", "A APPENDIX" ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19230768084526062, 0.13636362552642822, 0.25, 0.052631575614213943, 0.21052631735801697, 0.11764705181121826, 0.13333332538604736, 0.11538460850715637, 0.16326530277729034, 0.1818181723356247, 0.08163265138864517, 0.1090909019112587, 0.07843136787414551, 0.10526315122842789, 0.09090908616781235, 0.15094339847564697, 0.1666666567325592, 0.12903225421905518, 0.1621621549129486, 0.1860465109348297, 0.24390242993831635, 0.0952380895614624, 0.21052631735801697, 0.04999999329447746, 0.10810810327529907 ]
S1xFm6VKDH
true
[ "We propose a new Meta Module Network to resolve some of the restrictions of previous Neural Module Network to achieve strong performance on realistic visual reasoning dataset." ]
[ "We propose a new perspective on adversarial attacks against deep reinforcement learning agents.", "Our main contribution is CopyCAT, a targeted attack able to consistently lure an agent into following an outsider's policy.", "It is pre-computed, therefore fast inferred, and could thus be usable in a real-time scenario.", "We show its effectiveness on Atari 2600 games in the novel read-only setting.", "In the latter, the adversary cannot directly modify the agent's state -its representation of the environment- but can only attack the agent's observation -its perception of the environment.", "Directly modifying the agent's state would require a write-access to the agent's inner workings and we argue that this assumption is too strong in realistic settings.", "We are interested in the problem of attacking sequential control systems that use deep neural policies.", "In the context of supervised learning, previous work developed methods to attack neural classifiers by crafting so-called adversarial examples.", "These are malicious inputs particularly successful at fooling deep networks with high-dimensional input-data like images.", "Within the framework of sequential-decision-making, previous works used these adversarial examples only to break neural policies.", "Yet the attacks they build are rarely applicable in a real-time setting as they require to craft a new adversarial input at each time step.", "Besides, these methods use the strong assumption of having a write-access to what we call the agent's inner state -the actual input of the neural policy built by the algorithm from the observations-.", "When taking this assumption, the adversary -the algorithm attacking the agent-is not placed at the interface between the agent and the environment where the system is the most vulnerable.", "We wish to design an attack with a more general purpose than just shattering a neural policy as well as working in a more realistic setting.", "Our main contribution is CopyCAT, an algorithm for taking full-control of neural policies.", "It produces a simple attack that is: (1) targeted towards a policy, i.e., it aims at matching a neural policy's behavior with the one of an arbitrary policy; (2) only altering observation of the environment rather than complete agent's inner state; (3) composed of a finite set of pre-computed state-independent masks.", "This way it requires no additional time at inference hence it could be usable in a real-time setting.", "We introduce CopyCAT in the white-box scenario, with read-only access to the weights and the architecture of the neural policy.", "This is a realistic setting as prior work showed that after training substitute models, one could transfer an attack computed on these to the inaccessible attacked model (Papernot et al., 2016) .", "The context is the following: (1) We are given any agent using a neuralnetwork for decision-making (e.g., the Q-network for value-based agents, the policy network for actor-critic or imitation learning methods) and a target policy we want the agent to follow.", "(2) The only thing one can alter is the observation the agent receives from the environment and not the full input of the neural controller (the inner state).", "In other words, we are granted a read-only access to the agent's inner workings.", "In the case of Atari 2600 games, the agents builds its inner state by stacking the last four observations.", "Attacking the agent's inner state means writing in the agent's memory of the last observations.", "(3) The computed 
attack should be inferred fast enough to be used in real-time.", "We stress the fact that targeting a policy is a more general scheme than untargeted attacks where the goal is to stop the agent from taking its preferred action (hoping for it to take the worst).", "It is also more general than the targeted scheme of previous works where one wants the agent to take its least preferred action or to reach a specific state.", "In our setting, one can either hard-code or train a target policy.", "This policy could be minimizing the agent's true reward but also maximizing the reward for another task.", "For instance, this could mean taking full control of an autonomous vehicle, possibly bringing it to any place of your choice.", "We exemplify this approach on the classical benchmark of Atari 2600 games.", "We show that taking control of a trained deep RL agent so that its behavior matches a desired policy can be done with this very simple attack.", "We believe such an attack reveals the vulnerability of autonomous agents.", "As one could lure them into following catastrophic behaviors, autonomous cars, robots or any agent with high dimensional inputs are exposed to such manipulation.", "This suggests that it would be worth studying new defense mechanisms that could be specific to RL agents, but this is out of the scope of this paper.", "In this work, we built and showed the effectiveness of CopyCAT, a simple algorithm designed to attack neural policies in order to manipulate them.", "We showed its ability to lure a policy into having a desired behavior with a finite set of additive masks, usable in a real-time setting while being applied only on observations of the environment.", "We demonstrated the effectiveness of these universal masks in Atari games.", "As this work shows that one can easily manipulate a policy's behavior, a natural direction of work is to develop robust algorithms, either able to keep their normal behaviors when attacked or to detect attacks to treat them appropriately.", "Notice however that in a sequential-decisionmaking setting, detecting an attack is not enough as the agent cannot necessarily stop the process when detecting an attack and may have to keep outputting actions for incoming observations.", "It is thus an exciting direction of work to develop algorithm that are able to maintain their behavior under such manipulating attacks.", "Another interesting direction of work in order to build real-life attacks is to test targeted attacks on neural policies in the black-box scenario, with no access to network's weights and architecture.", "However, targeted adversarial examples are harder to compute than untargeted ones and we may experience more difficulties in reinforcement learning than supervised learning.", "Indeed, learned representations are known to be less interpretable and the variability between different random seeds to be higher than in supervised learning.", "Different policies trained with the same algorithm may thus lead to S → A mappings with very different decision boundaries.", "Transferring targeted examples may not be easy and would probably require to train imitation models to obtain mappings similar to π in order to compute transferable adversarial examples." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2857142686843872, 0.12121211737394333, 0.13333332538604736, 0.1428571343421936, 0.11428570747375488, 0.20512820780277252, 0.3870967626571655, 0.1764705777168274, 0, 0.19354838132858276, 0.15789473056793213, 0.1395348757505417, 0.052631575614213943, 0.3243243098258972, 0.3571428656578064, 0.13114753365516663, 0.1249999925494194, 0.25, 0.12765957415103912, 0.11999999731779099, 0.1538461446762085, 0.06896550953388214, 0.0624999962747097, 0.14814814925193787, 0.1428571343421936, 0.17777776718139648, 0.0952380895614624, 0.07407406717538834, 0.06666666269302368, 0.22857142984867096, 0.14814814925193787, 0.29999998211860657, 0.23076923191547394, 0, 0.10256409645080566, 0.31578946113586426, 0.17777776718139648, 0.23076923191547394, 0.08163265138864517, 0.17391303181648254, 0.0555555522441864, 0.1904761791229248, 0.0555555522441864, 0.0555555522441864, 0.05882352590560913, 0.05128204822540283 ]
SyxoygBKwB
true
[ "We propose a new attack for taking full control of neural policies in realistic settings." ]
[ "Cold-start and efficiency issues of the Top-k recommendation are critical to large-scale recommender systems.", "Previous hybrid recommendation methods are effective to deal with the cold-start issues by extracting real latent factors of cold-start items(users) from side information, but they still suffer low efficiency in online recommendation caused by the expensive similarity search in real latent space.", "This paper presents a collaborative generated hashing (CGH) to improve the efficiency by denoting users and items as binary codes, which applies to various settings: cold-start users, cold-start items and warm-start ones.", "Specifically, CGH is designed to learn hash functions of users and items through the Minimum Description Length (MDL) principle; thus, it can deal with various recommendation settings.", "In addition, CGH initiates a new marketing strategy through mining potential users by a generative step.", "To reconstruct effective users, the MDL principle is used to learn compact and informative binary codes from the content data.", "Extensive experiments on two public datasets show the advantages for recommendations in various settings over competing baselines and analyze the feasibility of the application in marketing.", "With the explosion of e-commerce, most customers are accustomed to receiving a variety of recommendations, such as movies, books, news, or hotels they might be interested in.", "Traditional recommender systems just recommended items that are similar to what they liked or rated in the previous.", "Recommendations help users find their desirable items, and also creates new revenue opportunities for vendors, such as Amazon, Taobao, eBay, etc.", "Among them, one of the most popular recommendation methods, collaborative filtering is dependent on a large amount of user-item interactive information to provide an accurate recommendation.", "However, most of new e-commerce vendors do not have enough interactive data, which leads to low recommendation accuracy, i.e., cold-start issues.", "Previous studies on cold-start issues generally modeled as a combination of collaborative filtering and content filtering, known as hybrid recommender systems.", "Specifically, they learned real latent factors by incorporating the side information into the interactive data.", "Such as Collaborative Deep Learning (CDL) (Wang et al., 2015) , Visual Bayesian Personalized Ranking (VBPR) (He & McAuley, 2016) , Collaborative Topic modeling for Recommedation (CTR) (Wang & Blei, 2011) , and the DropoutNet for addressing cold start (DropoutNet) (Volkovs et al., 2017) , ABCPRec for Bridging Consumer and Producer Roles for User-Generated Content Recommendation (ABCPRec) (Tsukuda et al., 2019) .", "All of the above hybrid recommender systems were modeled in real latent space, which leads to low efficiency for the online recommendation with the increasing scale of datasets.", "discrete objectives.", "Thus many scholars learned binary codes by some approximate techniques, such as the two-stage hashing learning method utilized in Preference Preserving Hashing(PPH) and the Iterative Quantization(ITQ) (Zhou & Zha, 2012) .", "To reduce information loss, two learning-based hashing frameworks: bit-wise learning and block-wise learning were respectively proposed in hashing based recommendation frameworks (Zhang et al., 2016; Zhang et al., 2018; .", "However, due to the requirement of binary outputs for learning-based hashing frameworks, the training procedure is expensive for large-scale 
recommendation, which motivates us to propose a generative approach to learn hash functions.", "In this paper, we propose the collaborative generated hashing(CGH) to learn hash functions of users and items from content data with the principle of Minimum Description Length (MDL) (Dai et al., 2017) .", "In marketing area, mining potential customers is crucial to the e-commerce.", "CGH provides a strategy to discover potential users by the generative step.", "To reconstruct effective users, uncorrelated and balanced limits are imposed to learn compact and informative binary codes with the principle of the MDL.", "Especially, discovering potential customers is vital to the success of adding new items for a recommendation platform (Papies et al., 2017) .", "Specifically, for a new item, we can generate a new potential user by the generative step (detailed in Section 2.1), and then search the nearest potential users in the user set.", "By recommending a new product to the potential users who might be interested in but didn't plan to buy, further e-commerce strategies can be developed to attract those potential users.", "We organize the paper as follows: Section 2 introduce the main techniques of CGH.", "We first introduce the framework of CGH and compare it with the closely related competing baselines: CDL (Wang et al., 2015) and DropoutNet (Volkovs et al., 2017) ; we then formulate the generative step in Section 2.1 and the inference step in Section 2.2, respectively; we finally summarize the training objective and introduce the optimization in Section 2.3.", "Particularly, we demonstrate the process of mining potential users for the marketing application in Section 2.1.", "Section 3 presents the experimental results for marketing analysis and recommendation accuracy in various settings.", "Section 4 concludes the paper.", "The main contributions of this paper are summarized as follows:", "(1) We propose the Collaborative Generated Hashing (CGH) with the principle of MDL to learn compact but informative hash codes, which applies to various settings for recommendation.", "(2) We provides a marketing strategy by discovering potential users by the generative step of CGH, which can be applied to boost the e-commence development.", "(3) We evaluate the effectiveness of the proposed CGH compared with the state-of-the-art baselines, and demonstrate its robustness and convergence properties on the public datasets.", "In this paper, a generated recommendation framework called collaborative generated hashing (CGH) is proposed to address the cold-start and efficiency issues for recommendation.", "The two main contributions are put forward in this paper: (1) we develop a collaborative generated hashing framework with the principle of Minimum Description Length together(MDL) with uncorrelated and balanced constraints on the inference process to derive compact and informative hash codes, which is significant for the accuracy of recommendation and marketing; (2) we propose a marketing strategy by the proposed CGH, specifically, we design a framework to discover the k potential users by the generate step; (3) we evaluate the proposed scheme on two the public datasets, the experimental results show the effectiveness of the proposed CGH for both warm-start and cold-start recommendation." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.12903225421905518, 0.11538460850715637, 0.13333332538604736, 0.1818181723356247, 0.1875, 0.1666666567325592, 0.14999999105930328, 0.04651162400841713, 0, 0.10526315122842789, 0.1463414579629898, 0.09999999403953552, 0.1621621549129486, 0, 0.0624999962747097, 0.0952380895614624, 0.08695651590824127, 0.09302324801683426, 0.13333332538604736, 0.0833333283662796, 0.0714285671710968, 0.13793103396892548, 0.15789473056793213, 0.1538461446762085, 0.2380952388048172, 0.0952380895614624, 0, 0.03333332762122154, 0.12121211737394333, 0.25, 0, 0, 0.1428571343421936, 0.19999998807907104, 0.052631575614213943, 0.2631579041481018, 0.19780220091342926 ]
HJel76NYPS
true
[ "It can generate effective hash codes for efficient cold-start recommendation and meanwhile provide a feasible marketing strategy." ]
[ "Recent efforts to combine Representation Learning with Formal Methods, commonly known as the Neuro-Symbolic Methods, have given rise to a new trend of applying rich neural architectures to solve classical combinatorial optimization problems.", "In this paper, we propose a neural framework that can learn to solve the Circuit Satisfiability problem.", "Our framework is built upon two fundamental contributions: a rich embedding architecture that encodes the problem structure and an end-to-end differentiable training procedure that mimics Reinforcement Learning and trains the model directly toward solving the SAT problem.", "The experimental results show the superior out-of-sample generalization performance of our framework compared to the recently developed NeuroSAT method.", "Recent advances in neural network models for discrete structures have given rise to a new field in Representation Learning known as the Neuro-Symbolic methods.", "Generally speaking, these methods aim at marrying the classical symbolic techniques in Formal Methods and Computer Science to Deep Learning in order to benefit both disciplines.", "One of the most exciting outcomes of this marriage is the emergence of neural models for learning how to solve the classical combinatorial optimization problems in Computer Science.", "The key observation behind many of these models is that in practice, for a given class of combinatorial problems in a specific domain, the problem instances are typically drawn from a certain (unknown) distribution.", "Therefore if a sufficient number of problem instances are available, then in principle, Statistical Learning should be able to extract the common structures among these instances and produce meta-algorithms (or models) that would, in theory, outperform the carefully hand-crafted algorithms.There have been two main approaches to realize this idea in practice.", "In the first group of methods, the general template of the solver algorithm (which is typically the greedy strategy) is directly imported from the classical heuristic search algorithm, and the Deep Learning component is only tasked to learn the optimal heuristics within this template.", "In combination with Reinforcement Learning, such strategy has been shown to be quite effective for various NP-complete problems -e.g. BID16 .", "Nevertheless, the resulted model is bounded by the greedy strategy, which is sub-optimal in general.", "The alternative is to go one step further and let Deep Learning figure out the entire solution structure from scratch.", "This approach is quite attractive as it allows the model not only learn the optimal (implicit) decision heuristics but also the optimal search strategies beyond the greedy strategy.", "However, this comes at a price: training such models can be quite challenging!", "To do so, a typical candidate is Reinforcement Learning (Policy Gradient, in specific), but such techniques are usually sample inefficient -e.g. 
BID4 .", "As an alternative method for training, more recently BID24 have proposed using the latent representations learned for the binary classification of the Satisfiability (SAT) problem to actually produce a neural SAT solver model.", "Even though using such proxy for learning a SAT solver is an interesting observation and provides us with an end-to-end differentiable architecture, the model is not directly trained toward solving a SAT problem (unlike Reinforcement Learning).", "As we will see later in this paper, that can indeed result in poor generalization and sub-optimal models.In this paper, we propose a neural Circuit-SAT solver framework that effectively belongs to the second class above; that is, it learns the entire solution structure from scratch.", "More importantly, to train such model, we propose a training strategy that, unlike the typical Policy Gradient, is differentiable end-toend, yet it trains the model directly toward the end goal (similar to Policy Gradient).", "Furthermore, our proposed training strategy enjoys an Explore-Exploit mechanism for better optimization even though it is not exactly a Reinforcement Learning approach.The other aspect of building neural models for solving combinatorial optimization problems is how the problem instance should be represented by the model.", "Using classical architectures like RNNs or LSTMs completely ignores the inherent structure present in the problem instances.", "For this very reason, there has been recently a strong push to employ structure-aware architectures such as different variations of neural graph embedding.", "Most neural graph embedding methodologies are based on the idea of synchronously propagating local information on an underlying (undirected) graph that represents the problem structure.", "The intuition behind using local information propagation for embedding comes from the fact that many original combinatorial optimization algorithms can actually be seen propagating information.", "In our case, since we are dealing with Boolean circuits and circuit are Directed Acyclic Graphs (DAG), we would need an embedding architecture that take into account the special architecture of DAGs (i.e. the topological order of the nodes).", "In particular, we note that in many DAG-structured problems (such as circuits, computational graphs, query DAGs, etc.), the information is propagated sequentially rather than synchronously, hence a justification to have sequential propagation for the embedding as well.", "To this end, we propose a rich embedding architecture that implements such propagation mechanism for DAGs.", "As we see in this paper, our proposed architecture is capable of harnessing the structural information in the input circuits.", "To summarize, our contributions in this work are three-fold:(a", ") We propose a general, rich graph embedding architecture that implements sequential propagation for DAG-structured data. 
(", "b) We adapt our proposed architecture to design a neural Circuit-SAT solver which is capable of harnessing structural signals in the input circuits to learn a SAT solver.", "(c) We propose a training strategy for our architecture that is end-to-end differentiable, yet similar to Reinforcement Learning techniques, it directly trains our model toward solving the SAT problem with an Explore-Exploit mechanism.The experimental results show the superior performance of our framework especially in terms of generalizing to new problem domains compared to the baseline.", "In this paper, we proposed a neural framework for efficiently learning a Circuit-SAT solver.", "Our methodology relies on two fundamental contributions: (1) a rich DAG-embedding architecture that implements the sequential propagation mechanism on DAG-structured data and is capable of learning useful representations for the input circuits, and (2) an efficient training procedure that trains the DAGembedding architecture directly toward solving the SAT problem without requiring SAT/UNSAT labels in general.", "Our proposed training strategy is fully differentiable end-to-end and at the same time enjoys many features of Reinforcement Learning such as an Explore-Exploit mechanism and direct training toward the end goal.As our experiments showed, the proposed embedding architecture is able to harness structural information in the input DAG distribution and as a result solve the test SAT cases in a fewer number of iterations compared to the baseline.", "This would also allow us to inject domain-specific heuristics into the circuit structure of the input data to obtain better models for that specific domain.", "Moreover, our direct training procedure as opposed to the indirect, classification-based method in NeuroSAT enables our model to generalize better to out-of-sample test cases, as demonstrated by the experiments.", "This superior generalization got even more expressed as we transferred the trained models to a complete new domain (i.e. graph coloring).", "Furthermore, we argued that not only does direct training give us superior out-of-sample generalization, but it is also essential for the problem domains where we cannot enforce the strict training regime where SAT and UNSAT cases come in pairs with almost identical structures, as proposed by BID24 .Future", "efforts in this direction would include closely examining the SAT solver algorithm learned by our framework to see if any high-level knowledge and insight can be extracted to further aide the classical SAT solvers. Needless", "to say, this type of neural models have a long way to go in order to compete with industrial SAT solvers; nevertheless, these preliminary results are promising enough to motivate the community to pursue this direction." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2083333283662796, 0.7428571581840515, 0.19999998807907104, 0.1666666567325592, 0.19512194395065308, 0.0952380895614624, 0.1904761791229248, 0.25, 0.1846153736114502, 0.1538461446762085, 0.051282044500112534, 0.06451612710952759, 0.15789473056793213, 0.0952380895614624, 0.12903225421905518, 0.04878048226237297, 0.25, 0.11999999731779099, 0.31578946113586426, 0.1666666567325592, 0.1355932205915451, 0.1764705777168274, 0.1463414579629898, 0.19999998807907104, 0.1904761791229248, 0.11538460850715637, 0.14814814925193787, 0.1764705777168274, 0.0555555522441864, 0, 0.22857142984867096, 0.2790697515010834, 0.24242423474788666, 0.19354838132858276, 0.12121211737394333, 0.1111111044883728, 0.19512194395065308, 0.0952380895614624, 0.14999999105930328, 0.09677419066429138, 0.1599999964237213, 0.16326530277729034 ]
BJxgz2R9t7
true
[ "We propose a neural framework that can learn to solve the Circuit Satisfiability problem from (unlabeled) circuit instances." ]
[ "Sequence generation models such as recurrent networks can be trained with a diverse set of learning algorithms.", "For example, maximum likelihood learning is simple and efficient, yet suffers from the exposure bias problem.", "Reinforcement learning like policy gradient addresses the problem but can have prohibitively poor exploration efficiency.", "A variety of other algorithms such as RAML, SPG, and data noising, have also been developed in different perspectives.", "This paper establishes a formal connection between these algorithms.", "We present a generalized entropy regularized policy optimization formulation, and show that the apparently divergent algorithms can all be reformulated as special instances of the framework, with the only difference being the configurations of reward function and a couple of hyperparameters.", "The unified interpretation offers a systematic view of the varying properties of exploration and learning efficiency.", "Besides, based on the framework, we present a new algorithm that dynamically interpolates among the existing algorithms for improved learning.", "Experiments on machine translation and text summarization demonstrate the superiority of the proposed algorithm.", "Sequence generation is a ubiquitous problem in many applications, such as machine translation BID28 , text summarization BID13 BID25 , image captioning BID15 , and so forth.", "Great advances in these tasks have been made by the development of sequence models such as recurrent neural networks (RNNs) with different cells BID12 BID6 and attention mechanisms BID1 BID19 .", "These models can be trained with a variety of learning algorithms.The standard training algorithm is based on maximum-likelihood estimation (MLE) which seeks to maximize the log-likelihood of ground-truth sequences.", "Despite the computational simplicity and efficiency, MLE training suffers from the exposure bias BID24 .", "That is, the model is trained to predict the next token given the previous ground-truth tokens; while at test time, since the resulting model does not have access to the ground truth, tokens generated by the model itself are instead used to make the next prediction.", "This discrepancy between training and test leads to the issue that mistakes in prediction can quickly accumulate.", "Recent efforts have been made to alleviate the issue, many of which resort to the reinforcement learning (RL) techniques BID24 BID2 BID8 .", "For example, BID24 adopt policy gradient BID29 that avoids the training/test discrepancy by using the same decoding strategy.", "However, RL-based approaches for sequence generation can face challenges of prohibitively poor sample efficiency and high variance.", "For more practical training, a diverse set of methods has been developed that are in a middle ground between the two paradigms of MLE and RL.", "For example, RAML adds reward-aware perturbation to the MLE data examples; SPG BID8 leverages reward distribution for effective sampling of policy gradient.", "Other approaches such as data noising BID34 ) also show improved results.In this paper, we establish a unified perspective of the broad set of learning algorithms.", "Specifically, we present a generalized entropy regularized policy optimization framework, and show that the apparently diverse algorithms, such as MLE, RAML, SPG, and data noising, can all be re-formulated as special instances of the framework, with the only difference being the choice of reward and the values of a couple of hyperparameters ( FIG0 ).", "In 
particular, we show MLE is equivalent to using a delta-function reward that assigns 1 to samples that exactly match data examples while −∞ to any other samples.", "Such extremely restricted reward has literally disabled any exploration of the model beyond training data, yielding the exposure bias.", "Other algorithms essentially use rewards that are more smooth, and also leverage model distribution for exploration, which generally results in a larger effective exploration space, more difficult training, and better test-time performance.Besides the new understandings of the existing algorithms, the unified perspective also facilitates to develop new algorithms for improved learning.", "We present an example new algorithm that, as training proceeds, gradually expands the exploration space by annealing the reward and hyperparameter values.", "The annealing in effect dynamically interpolates among the existing algorithms.", "Experiments on machine translation and text summarization show the interpolation algorithm achieves significant improvement over the various existing methods.", "We have presented a unified perspective of a variety of well-used learning algorithms for sequence generation.", "The framework is based on a generalized entropy regularized policy optimization formulation, and we show these algorithms are mathematically equivalent to specifying certain hyperparameter configurations in the framework.", "The new principled treatment provides systematic understanding and comparison among the algorithms, and inspires further enhancement.", "The proposed interpolation algorithm shows consistent improvement in machine translation and text summarization.", "We would be excited to extend the framework to other settings such as robotics and game environments.A POLICY GRADIENT & MIXER BID24 made an early attempt to address the exposure bias problem by exploiting the policy gradient algorithm BID29 .", "Policy gradient aims to maximizes the expected reward: DISPLAYFORM0 where R P G is usually a common reward function (e.g., BLEU).", "Taking gradient w.r.t θ gives: DISPLAYFORM1 We now reveal the relation between the ERPO framework we present and the policy gradient algorithm.Starting from the M-step of Eq.(2) and setting (α = 1, β = 0) as in SPG (section 3.4), we use p θ n as the proposal distribution and obtain the importance sampling estimate of the gradient (we omit the superscript n for notation simplicity): DISPLAYFORM2 where Z θ = y exp{log p θ + R} is the normalization constant of q, which can be considered as adjusting the step size of gradient descent.We can see that Eq.(12) recovers Eq.(11) if we further set R = log R P G , and omit the scaling factor Z θ . In", "other words, policy gradient can be seen as a special instance of the general ERPO framework with (R = log R P G , α = 1, β = 0) and with Z θ omitted.The MIXER algorithm BID24 incorporates an annealing strategy that mixes between MLE and policy gradient training. Specifically", ", given a ground-truth example y * , the first m tokens y * 1:m are used for evaluating MLE loss, and starting from step m + 1, policy gradient objective is used. The m value", "decreases as training proceeds. With the relation", "between policy gradient and ERPO as established above, MIXER can be seen as a specific instance of the proposed interpolation algorithm (section 4) that follows a restricted annealing strategy for token-level hyperparameters (λ 1 , λ 2 , λ 3 ). 
That is, for t <", "m in Eq.4 (i.e.,the first m steps), (λ 1 , λ 2 , λ 3 ) is set to (0, 0, 1) and c = 1, namely the MLE training; while for t > m, (λ 1 , λ 2 , λ 3 ) is set to (0.5, 0.5, 0) and c = 2." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2857142686843872, 0.05882352590560913, 0.060606054961681366, 0.4324324131011963, 0.07407406717538834, 0.11538460850715637, 0.1818181723356247, 0.1621621549129486, 0.06451612710952759, 0.09302324801683426, 0.1666666567325592, 0.12765957415103912, 0, 0, 0, 0.10526315122842789, 0, 0.17142856121063232, 0.0476190410554409, 0.14999999105930328, 0.3636363446712494, 0.2295081913471222, 0.0476190410554409, 0.0555555522441864, 0.19354838132858276, 0.051282044500112534, 0.0714285671710968, 0.0555555522441864, 0.4375, 0.04444443807005882, 0, 0, 0.1111111044883728, 0, 0.05607476457953453, 0.0624999962747097, 0.0416666604578495, 0.07999999821186066, 0.10169491171836853, 0.035087715834379196 ]
Syl1pGI9wN
true
[ "A unified perspective of various learning algorithms for sequence generation, such as MLE, RL, RAML, data noising, etc." ]
[ "We are reporting the SHINRA project, a project for structuring Wikipedia with collaborative construction scheme.", "The goal of the project is to create a huge and well-structured knowledge base to be used in NLP applications, such as QA, Dialogue systems and explainable NLP systems.", "It is created based on a scheme of ”Resource by Collaborative Contribution (RbCC)”.", "We conducted a shared task of structuring Wikipedia, and at the same, submitted results are used to construct a knowledge base.\n", "There are machine readable knowledge bases such as CYC, DBpedia, YAGO, Freebase Wikidata and so on, but each of them has problems to be solved.", "CYC has a coverage problem, and others have a coherence problem due to the fact that these are based on Wikipedia and/or created by many but inherently incoherent crowd workers.", "In order to solve the later problem, we started a project for structuring Wikipedia using automatic knowledge base construction shared-task.\n", "The automatic knowledge base construction shared-tasks have been popular and well studied for decades.", "However, these tasks are designed only to compare the performances of different systems, and to find which system ranks the best on limited test data.", "The results of the participated systems are not shared and the systems may be abandoned once the task is over.\n", "We believe this situation can be improved by the following changes:\n", "1. designing the shared-task to construct knowledge base rather than evaluating only limited test data\n", "2. making the outputs of all the systems open to public so that we can run ensemble learning to create the better results than the best systems\n", "3. repeating the task so that we can run the task with the larger and better training data from the output of the previous task (bootstrapping and active learning)\n", "We conducted “SHINRA2018” with the above mentioned scheme and in this paper\n", "we report the results and the future directions of the project.", "The task is to extract the values of the pre-defined attributes from Wikipedia pages.", "We have categorized most of the entities in Japanese Wikipedia (namely 730 thousand entities) into the 200 ENE categories.", "Based on this data, the shared-task is to extract the values of the attributes from Wikipedia pages.", "We gave out the 600 training data and the participants are required to submit the attribute-values for all remaining entities of the same category type.", "Then 100 data out of them for each category are used to evaluate the system output in the shared-task.\n", "We conducted a preliminary ensemble learning on the outputs and found 15 F1 score improvement on a category and the average of 8 F1 score improvements on all 5 categories we tested over a strong baseline.", "Based on this promising results, we decided to conduct three tasks in 2019; multi-lingual categorization task (ML), extraction for the same 5 categories in Japanese with a larger training data (JP-5) and extraction for 34 new categories in Japanese (JP-34).\n", "Wikipedia is a great resource as a knowledge base of the entities in the world.", "However, Wikipedia is created for human to read rather than machines to process.", "Our goal is to transform the current Wikipedia to a machine readable format based on a clean structure.", "There are several machine readable knowledge bases (KB) such as CYC BID4 , DBpedia BID3 , YAGO BID7 , Freebase BID0 , Wikidata BID13 and so on, but each of them has problems to be solved.", "CYC has a coverage problem, 
and others have a coherence problem due to the fact that these are based on Wikipedia and/or created by many but inherently incoherent crowd workers.", "In order to solve these problems, we started a project for structuring Wikipedia using automatic knowledge base construction (AKBC) shared-task using a cleaner ontology definition.The automatic knowledge base construction shared-tasks have been popular for decades.", "In particular, there are popular shared-tasks in the field of Information Extraction, Knowledge Base population and attribute extraction, such as KBP[U.S. National Institute of Standards and Technology (NIST) , 2018] and CoNLL.", "However, most of these tasks are designed only to compare the performances of participated systems, and to find which system ranks the best on limited test data.", "The outputs of the participated systems are not shared and the results and the systems may be abandoned once the evaluation task is over.We believe this situation can be improved by the following changes:1.", "designing the shared-task to construct knowledge base rather than only evaluating on limited test data", "2. making the outputs of all the systems open to public so that anyone can run ensemble learning to create the better results than the best single system", "3. repeating the task so that we can run the task with the larger and better training data from the output of the previous task (active learning and bootstrapping)We conducted \"SHINRA2018\" with the aforementioned ideas, we call it \"Resource by Collaborative Contribution (RbCC)\".", "In this paper we report the first results and the future directions of the project.The task is to extract the values of the pre-defined attributes from Wikipedia entity pages.", "We used Extended Named Entity (ENE) as the definition of the category (in total 200 categories in the ontology) and the attributes (average of 20 attributes) for each category.", "We have categorized most of the entities in Japanese Wikipedia (namely 730 thousand entities) into the ENE categories prior to this project.", "Based on this data, the sharedtask is to extract values of the attributes defined for the category of each entity.", "At the SHINRA2018 project, we limited the target categories to 5, namely, person, company, city, airport and chemical compound.", "We gave out the 600 training data each for 5 categories at and the participants are supposed to submit the attribute-values for all remaining entities of the categories in Japanese Wikipedia.", "Then 100 data out of the entire pages of the category are used at the evaluation of the participated systems in the shared-task.", "For example, there are about 200K person entities in Japanese Wikipedia, and the participants have to extract the attribute-values, such as \"birthday\", \"the organizations he/she have belonged\", \"mentor\" and \"awards\" from all the remaining entities (i.e. 199.4K = 200K-600 entities).", "Before starting the project, the participants signed the contract that all the output will be shared among all participants, so that anyone can conduct the ensemble learning on those outputs, and hence create a better knowledge base than the best system in the task.", "Note that, for the sake of participant's interest, i.e. 
a company may want to keep the system as their property, the outputs are required to be shared, but their systems are not necessarily to be shared.", "A promising results of the ensemble learning is achieved and we envision that it will lead to the cleaner machine readable knowledge base construction.", "We proposed a scheme of knowledge base creation: \"Resource by Collaborative Contribution\".", "We conducted the Japanese Wikipedia structuring project, SHINRA2018, based on that scheme.", "Based on Extended Named Entity, the top-down definition of categories and attributed for named entities, the task is to extract the attribute-values from Japanese Wikipedia pages.", "8 groups participated to the task, and the ensemble learning results shows that the RbCC scheme is practical and promising.", "A quite big improvement over the the best single system was achieved on \"airport\" category (more than 15 F-score), and the average of 8 F-score improvement was achieved using the weighted voting methods.", "We are planning to conduct SHINRA2019 based on the RbCC scheme on 3 tasks.", "These are the multi-lingual categorization, the extraction of attribute-value on the same 5 categories, and the extraction of attribute-values on 30 new categories in Japanese.We'd like to express our deep appreciation to all the participants and collaborators who helped this project.", "Without the participation, we couldn't even try the ensemble learning and achieve the goal.", "We are hoping to expand and spread the idea of RbCC scheme, not only limited to this kind of task and resource." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2857142686843872, 0.15789473056793213, 0.307692289352417, 0.1764705777168274, 0.052631575614213943, 0.1904761791229248, 0.1764705777168274, 0, 0.0555555522441864, 0, 0.1666666567325592, 0.0714285671710968, 0.11428570747375488, 0, 0.1599999964237213, 0, 0.1538461446762085, 0.12903225421905518, 0.1428571343421936, 0.11428570747375488, 0.0624999962747097, 0.09756097197532654, 0.0833333283662796, 0.1538461446762085, 0.1599999964237213, 0.20689654350280762, 0.043478257954120636, 0.1904761791229248, 0.1428571343421936, 0, 0.05405404791235924, 0.09756097197532654, 0.0714285671710968, 0.10810810327529907, 0.1304347813129425, 0.10526315122842789, 0.05405404791235924, 0.1764705777168274, 0.06666666269302368, 0.06451612710952759, 0.1538461446762085, 0, 0.03999999538064003, 0.08163265138864517, 0.09302324801683426, 0.0555555522441864, 0.47999998927116394, 0.23999999463558197, 0.10810810327529907, 0.13333332538604736, 0, 0.23076923191547394, 0.043478257954120636, 0, 0.1249999925494194 ]
HygfXWqTpm
true
[ "We introduce a \"Resource by Collaborative Construction\" scheme to create KB, structured Wikipedia " ]
[ "Recent image super-resolution(SR) studies leverage very deep convolutional neural networks and the rich hierarchical features they offered, which leads to better reconstruction performance than conventional methods.", "However, the small receptive fields in the up-sampling and reconstruction process of those models stop them to take full advantage of global contextual information.", "This causes problems for further performance improvement.", "In this paper, inspired by image reconstruction principles of human visual system, we propose an image super-resolution global reasoning network (SRGRN) to effectively learn the correlations between different regions of an image, through global reasoning.", "Specifically, we propose global reasoning up-sampling module (GRUM) and global reasoning reconstruction block (GRRB).", "They construct a graph model to perform relation reasoning on regions of low resolution (LR) images.They aim to reason the interactions between different regions in the up-sampling and reconstruction process and thus leverage more contextual information to generate accurate details.", "Our proposed SRGRN are more robust and can handle low resolution images that are corrupted by multiple types of degradation.", "Extensive experiments on different benchmark data-sets show that our model outperforms other state-of-the-art methods.", "Also our model is lightweight and consumes less computing power, which makes it very suitable for real life deployment.", "Image Super-Resolution (SR) aims to reconstruct an accurate high-resolution (HR) image given its low-resolution (LR) counterpart.", "It is a typical ill-posed problem, since the LR to HR mapping is highly uncertain.", "In order to solve this problem, a large number of methods have been proposed, including interpolation-based (Zhang & Wu., 2006) , reconstruction-based (Zhang et al., 2012) , and learning-based methods (Timofte et al., 2013; Peleg & Elad., 2014; Schulter et al., 2015; Huang et al., 2015; Tai et al., 2017; Tong et al., 2017; Zhang et al., 2018a; Dong et al., 2016) .", "In recent years, deep learning based methods have achieved outstanding performance in superresolution reconstruction.", "Some effective residual or dense blocks Zhang et al., 2018b; Lim et al., 2017; Ledig et al., 2017; Ahn et al.; Li et al., 2018) have been proposed to make the network wider and deeper and achieved better results.", "However, they only pay close attention to improving the feature extraction module, ignoring that the upsampling process with smaller receptive fields does not make full use of those extracted features.", "Small convolution receptive field means that the upsampling process can only perform super-resolution reconstruction based on local feature relationships in LR.", "As we all know, different features interact with each other, and features which are in different regions have corresponding effects on upsampling and reconstruction of a certain region.", "That is to say that a lot of information is lost in the process of upsampling and reconstruction due to the limitation of the receptive field, although the network extracts a large number of hierarchical features which are from low frequency to high frequency.", "Chariker et al. 
(2016; show that the brain generates the images we see based on a small amount of information observed by the human eye, ranther than acquiring the complete data from the point-by-point scan of the retina.", "This process of generating an image is similar to a SR process.", "According to their thought, we add global information in SR reconstruction and propose to use relational reasoning to implement the process that the human visual system reconstructs images with observed global information.", "In general, extracting global information requires a large receptive field.", "A large convolution receptive field usually requires stacking a large number of convolutional layers, but this method does not work in the upsampling and reconstruction process.", "Because this will produce a huge number of parameters.", "Based on the above analysis, we propose an image super-resolution global reasoning network (SR-GRN) which introduces the global reasoning mechanism to the upsampling module and the reconstruction layer.", "The model can capture the relationship between disjoint features of the image with a small respective field, thereby fully exploits global information as a reference for upsampling and reconstruction.", "We mainly propose global reasoning upsampling module (GRUM) and global reasoning reconstruction block (GRRB) as the core structure of the network.", "GRUM and GRRB first convert the LR feature map into N nodes, each of which not only represents a feature region in the LR image, but also contains the influence of pixels in other regions on this feature.", "Then they learn the relationship between the nodes and fuse the information of each node in a global scope.", "After that, GRUM learns the relationship between the channels in each node and amplifies the number of channels for the upsampling process.", "And then they convert N nodes into pixels with global reasoning information.", "Finally, GRUM and GRRB complete the upsampling and reconstruction process respectively.", "In general, our work mainly has the following three contributions:", "• We propose an image super-resolution global reasoning network (SRGRN) which draws on the idea of image reconstruction principles of human visual system.", "We mainly focus on the upsampling module and the reconstruction module.", "The model reconstructs SR images based on relational reasoning in a global scope.", "• We propose a global reasoning upsampling module (GRUM) and global reasoning reconstruction block (GRRB), which construct a graph model to implement the relational reasoning among the feature regions in an image via 1D and 2D convolution, and finally adds the information obtained by global reasoning to each pixel.", "It can provide more contextual information to help generate more accurate details.", "• Our proposed GRUM and GRRB are lightweight, which makes it suitable for real life deployment.", "More importantly, GRUM and GRRB balance the number of parameters and the reconstruction performance well.", "They can be easily inserted into other models.", "In this paper, inspired by the process of reconstructing images from the human visual system, we propose an super-resolution global reasoning network (SRGRN) for image SR, which aims at completing the reconstruction of SR images through global reasoning.", "We mainly propose global reasoning upsampling module (GRUM) and global reasoning reconstruction block (GRRB) as the core of the network.", "The GRUM can give the upsampling module the ability to perform relational reasoning in a 
global scope, which allows this process to overcome the limitations of the receptive field and recover more faithful details by analyzing more contextual information.", "The GRRB also enables the reconstruction block to make full use of the interaction between the regions and pixels to reconstruct SR images.", "We exploit SRGRN not only to handle low resolution images that are corrupted by three degradation model, but also to handle real-world images.", "Extensive benchmark evaluations demonstrate the importance of GRUM and GRRB.", "It also indicates that our SRGRN achieves superiority over state-of-the-art methods through global reasoning." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0555555522441864, 0.0624999962747097, 0.11764705181121826, 0.20000000298023224, 0.1818181723356247, 0.13333332538604736, 0, 0.25, 0.13793103396892548, 0.07692307233810425, 0, 0, 0.0833333283662796, 0, 0, 0.19354838132858276, 0.05714285373687744, 0, 0.0952380895614624, 0.0952380895614624, 0.10810810327529907, 0.09999999403953552, 0.05714285373687744, 0, 0.3030303120613098, 0.21621620655059814, 0.1428571343421936, 0.04878048226237297, 0.07407406717538834, 0.0714285671710968, 0.1818181723356247, 0, 0, 0.32258063554763794, 0.10526315122842789, 0.43478259444236755, 0.1666666567325592, 0, 0.07692307233810425, 0, 0, 0.2380952388048172, 0.14814814925193787, 0.09090908616781235, 0, 0, 0, 0.25 ]
S1gE6TEYDB
true
[ "A state-of-the-art model based on global reasoning for image super-resolution" ]