Dataset schema: source (sequence), source_labels (sequence), rouge_scores (sequence), paper_id (string, lengths 9-11), ic (unknown), target (sequence)
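For orientation, each record below pairs a list of source sentences with per-sentence labels and ROUGE scores, plus a paper ID, a boolean-looking ic value, and a short target summary. The following is a minimal Python sketch of that record layout; the SummarizationRecord class name, the reading of source_labels as a per-sentence selection flag, and the consistency checks are illustrative assumptions, not part of any official loader for this data.

```python
# Minimal sketch (assumption): each record pairs N source sentences with
# N binary labels and N per-sentence ROUGE scores, plus a paper_id and a
# short target summary. Field names mirror the schema above; the checks
# below are illustrative, not prescribed by the dataset itself.
from dataclasses import dataclass
from typing import List


@dataclass
class SummarizationRecord:
    source: List[str]          # sentences drawn from the paper
    source_labels: List[int]   # 0/1 flag per sentence (assumed: 1 = selected/oracle sentence)
    rouge_scores: List[float]  # per-sentence ROUGE overlap with the target
    paper_id: str              # e.g. an OpenReview-style ID such as "SysEexbRb"
    ic: bool                   # flag of unclear meaning ("true" in the rows below)
    target: List[str]          # reference summary (here a single TLDR-style sentence)

    def validate(self) -> None:
        # The three per-sentence columns must be aligned one-to-one.
        n = len(self.source)
        if not (len(self.source_labels) == len(self.rouge_scores) == n):
            raise ValueError("source, source_labels and rouge_scores must have equal length")
        if not 9 <= len(self.paper_id) <= 11:
            raise ValueError("paper_id length outside the 9-11 range reported by the schema")


# Usage: build a toy record and check its shape.
record = SummarizationRecord(
    source=["Sentence one.", "Sentence two."],
    source_labels=[0, 1],
    rouge_scores=[0.1, 0.6],
    paper_id="SysEexbRb",
    ic=True,
    target=["A one-sentence summary."],
)
record.validate()
```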
[ "Due to the success of deep learning to solving a variety of challenging machine learning tasks, there is a rising interest in understanding loss functions for training neural networks from a theoretical aspect.", "Particularly, the properties of critical points and the landscape around them are of importance to determine the convergence performance of optimization algorithms.", "In this paper, we provide a necessary and sufficient characterization of the analytical forms for the critical points (as well as global minimizers) of the square loss functions for linear neural networks.", "We show that the analytical forms of the critical points characterize the values of the corresponding loss functions as well as the necessary and sufficient conditions to achieve global minimum.", "Furthermore, we exploit the analytical forms of the critical points to characterize the landscape properties for the loss functions of linear neural networks and shallow ReLU networks.", "One particular conclusion is that: While the loss function of linear networks has no spurious local minimum, the loss function of one-hidden-layer nonlinear networks with ReLU activation function does have local minimum that is not global minimum.", "In the past decade, deep neural networks BID8 have become a popular tool that has successfully solved many challenging tasks in a variety of areas such as machine learning, artificial intelligence, computer vision, and natural language processing, etc.", "As the understandings of deep neural networks from different aspects are mostly based on empirical studies, there is a rising need and interest to develop understandings of neural networks from theoretical aspects such as generalization error, representation power, and landscape (also referred to as geometry) properties, etc.", "In particular, the landscape properties of loss functions (that are typically nonconex for neural networks) play a central role to determine the iteration path and convergence performance of optimization algorithms.One major landscape property is the nature of critical points, which can possibly be global minima, local minima, saddle points.", "There have been intensive efforts in the past into understanding such an issue for various neural networks.", "For example, it has been shown that every local minimum of the loss function is also a global minimum for shallow linear networks under the autoencoder setting and invertibility assumptions BID1 and for deep linear networks BID11 ; BID14 ; Yun et al. 
(2017) respectively under different assumptions.", "The conditions on the equivalence between local minimum or critical point and global minimum has also been established for various nonlinear neural networks Yu & Chen (1995) ; BID9 ; BID15 ; BID17 ; BID6 under respective assumptions.However, most previous studies did not provide characterization of analytical forms for critical points of loss functions for neural networks with only very few exceptions.", "In BID1 , the authors provided an analytical form for the critical points of the square loss function of shallow linear networks under certain conditions.", "Such an analytical form further helps to establish the landscape properties around the critical points.", "Further in BID13 , the authors characterized certain sufficient form of critical points for the square loss function of matrix factorization problems and deep linear networks.The focus of this paper is on characterizing the sufficient and necessary forms of critical points for broader scenarios, i.e., shallow and deep linear networks with no assumptions on data matrices and network dimensions, and shallow ReLU networks over certain parameter space.", "In particular, such analytical forms of critical points capture the corresponding loss function values and the necessary and sufficient conditions to achieve global minimum.", "This further enables us to establish new landscape properties around these critical points for the loss function of these networks under general settings, and provides alternative (yet simpler and more intuitive) proofs for existing understanding of the landscape properties.OUR CONTRIBUTION", "1) For the square loss function of linear networks with one hidden layer, we provide a full (necessary and sufficient) characterization of the analytical forms for its critical points and global minimizers.", "These results generalize the characterization in BID1 to arbitrary network parameter dimensions and any data matrices.", "Such a generalization further enables us to establish the landscape property, i.e., every local minimum is also a global minimum and all other critical points are saddle points, under no assumptions on parameter dimensions and data matrices.", "From a technical standpoint, we exploit the analytical forms of critical points to provide a new proof for characterizing the landscape around the critical points under full relaxation of assumptions, where the corresponding approaches in BID1 are not applicable.", "As a special case of linear networks, the matrix factorization problem satisfies all these landscape properties.2) For the square loss function of deep linear networks, we establish a full (necessary and sufficient) characterization of the analytical forms for its critical points and global minimizers.", "Such characterizations are new and have not been established in the existing art.", "Furthermore, such analytical form divides the set of non-global-minimum critical points into different categories.", "We identify the directions along which the loss function value decreases for two categories of the critical points, for which our result directly implies the equivalence between the local minimum and the global minimum.", "For these cases, our proof generalizes the result in BID11 under no assumptions on the network parameter dimensions and data matrices.3) For the square loss function of one-hidden-layer nonlinear neural networks with ReLU activation function, we provide a full characterization of both the existence and the analytical 
forms of the critical points in certain types of regions in the parameter space.", "Particularly, in the case where there is one hidden unit, our results fully characterize the existence and the analytical forms of the critical points in the entire parameter space.", "Such characterization were not provided in previous work on nonlinear neural networks.", "Moreover, we apply our results to a concrete example to demonstrate that both local minimum that is not a global minimum and local maximum do exist in such a case.", "In this paper, we provide full characterization of the analytical forms of the critical points for the square loss function of three types of neural networks, namely, shallow linear networks, deep linear networks, and shallow ReLU nonlinear networks.", "We show that such analytical forms of the critical points have direct implications on the values of the corresponding loss functions, achievement of global minimum, and various landscape properties around these critical points.", "As a consequence, the loss function for linear networks has no spurious local minimum, while such point does exist for nonlinear networks with ReLU activation.", "In the future, it is interesting to further explore nonlinear neural networks.", "In particular, we wish to characterize the analytical form of critical points for deep nonlinear networks and over the full parameter space.", "Such results will further facilitate the understanding of the landscape properties around these critical points." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.30188679695129395, 0.3720930218696594, 0.6037735939025879, 0.5714285373687744, 0.7234042286872864, 0.15094339847564697, 0.16129031777381897, 0.2222222238779068, 0.3478260934352875, 0.2380952388048172, 0.1875, 0.3589743673801422, 0.3829787075519562, 0.3589743673801422, 0.3243243098258972, 0.4680851101875305, 0.4067796468734741, 0.4444444477558136, 0.1463414579629898, 0.19672130048274994, 0.38596490025520325, 0.4516128897666931, 0.10526315122842789, 0.25641024112701416, 0.2745097875595093, 0.3561643660068512, 0.3265306055545807, 0.10810810327529907, 0.08163265138864517, 0.5185185074806213, 0.5, 0.1666666567325592, 0.21621620655059814, 0.43478259444236755, 0.3589743673801422 ]
SysEexbRb
true
[ "We provide necessary and sufficient analytical forms for the critical points of the square loss functions for various neural networks, and exploit the analytical forms to characterize the landscape properties for the loss functions of these neural networks." ]
[ "The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain.", "One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways.", "To address this “weight transport problem” (Grossberg, 1987), two biologically-plausible algorithms, proposed by Liao et al. (2016) and Lillicrap et al. (2016), relax BP’s weight symmetry requirements and demonstrate comparable learning capabilities to that of BP on small datasets.", "However, a recent study by Bartunov et al. (2018) finds that although feedback alignment (FA) and some variants of target-propagation (TP) perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet.", "Here, we additionally evaluate the sign-symmetry (SS) algorithm (Liao et al., 2016), which differs from both BP and FA in that the feedback and feedforward weights do not share magnitudes but share signs.", "We examined the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet; RetinaNet for MS COCO).", "Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks.", "These results complement the study by Bartunov et al. (2018) and establish a new benchmark for future biologically-plausible learning algorithms on more difficult datasets and more complex architectures.", "Deep learning models today are highly successful in task performance, learning useful representations, and even matching representations in the brain BID26 BID24 .", "However, it remains a contentious issue whether these models reflect how the brain learns.", "Core to the problem is the fact that backpropagation, the learning algorithm underlying most of today's deep networks, is difficult to implement in the brain given what we know about the brain's hardware BID2 however, see Hinton 2007) .", "One main reason why backpropagation seems implausible in the brain is that it requires sharing of feedforward and feedback weights.", "Since synapses are unidirectional in the brain, feedforward and feedback connections are physically distinct.", "Requiring them to shared their weights, even as weights are adjusted during learning, seems highly implausible.One approach to addressing this issue is to relax the requirement for weight-symmetry in error backpropagation.", "Surprisingly, when the feedback weights share only the sign but not the magnitude of the feedforward weights BID16 or even when the feedback weights are random (but fixed) BID17 , they can still guide useful learning in the network, with performance comparable to and sometimes even better than performance of backpropagation, on datasets such as MNIST and CIFAR.", "Here, we refer to these two algorithms, respectively, as \"sign-symmetry\" and \"feedback alignment.\"", "Since weight symmetry in backpropagation is required for accurately propagating the derivative of the loss function through layers, the success of asymmetric feedback algorithms indicates that learning can be supported even by inaccurate estimation of the error derivative.", "In feedback alignment, the authors propose that the feedforward weights learn to align with the random feedback weights, thereby allowing feedback to provide approximate yet useful learning signals BID17 .However", ", a recent paper by BID0 finds that feedback alignment and a few other biologically-plausible algorithms, including variants of 
target propagation, do not generalize to larger and more difficult problems such as ImageNet BID4 ) and perform much worse than backpropagation. Nevertheless", ", the specific conditions Bartunov et al. tested are somewhat restrictive. They only tested", "locally-connected networks (i.e., weight sharing is not allowed among convolution filters at different spatial locations), a choice that is motivated by biological plausibility but in practice limits the size of the network (without weight sharing, each convolutional layer needs much more memory to store its weights), making it unclear whether poor performance was attributable solely to the algorithm, or to the algorithm on those architectures.1 Second, Bartunov", "et al. did not test sign-symmetry, which may be more powerful than feedback alignment since signsymmetric feedback weights may carry more information about the feedforward weights than the random feedback weights used in feedback alignment.In this work, we re-examine the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using standard ConvNet architectures (i.e., ResNet-18, AlexNet, and RetinaNet). We find that sign-symmetry", "can in fact train networks on both tasks, achieving similar performance to backpropagation on ImageNet and reasonable performance on MS COCO. In addition, we test the use", "of backpropagation exclusively in the last layer while otherwise using feedback alignment, hypothesizing that in the brain, the classifier layer may not be a fully-connected layer and may deliver the error signal through some other unspecified mechanism. Such partial feedback alignment", "can achieve better performance (relative to backpropagation) than in BID0 . Taken together, these results extend", "previous findings and indicate that existing biologicallyplausible learning algorithms remain viable options both for training artificial neural networks and for modeling how learning can occur in the brain.", "Recent work shows that biologically-plausible learning algorithms do not scale to challenging problems such as ImageNet.", "We evaluated sign-symmetry and re-evaluated feedback alignment on their effectiveness training ResNet and AlexNet on ImageNet and RetinaNet on MS COCO.", "We find that", "1) sign-symmetry performed nearly as well as backpropagation on ImageNet,", "2) slightly modified feedback alignment performed better than previously reported, and", "3) both algorithms had reasonable performance on MS COCO with minimal hyperparameter tuning.", "Taken together, these results indicate that biologically-plausible learning algorithms, in particular sign-symmetry, remain promising options for training artificial neural networks and modeling learning in the brain." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0.1304347813129425, 0.1428571343421936, 0, 0.11764705181121826, 0, 0.1111111044883728, 0.06666666269302368, 0, 0.0476190447807312, 0, 0, 0, 0.072727270424366, 0.0833333283662796, 0.0476190447807312, 0.05714285373687744, 0.08163265138864517, 0, 0.027397258207201958, 0.09677419066429138, 0.11764705181121826, 0, 0, 0.05714285373687744, 0.23076923191547394, 0.14814814925193787, 0, 0.21052631735801697, 0, 0.08695651590824127, 0.1764705777168274 ]
SygvZ209F7
true
[ "Biologically plausible learning algorithms, particularly sign-symmetry, work well on ImageNet" ]
[ "We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors.", "We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning.\n", "Deep learning contains many differentiable algorithms for computing with learned representations.", "These representations form vector spaces, sometimes equipped with additional structure.", "A recent example is the Transformer (Vaswani et al., 2017) in which there is a vector space V of value vectors and an inner product space H of query and key vectors.", "This structure supports a kind of messagepassing, where a value vector v j ∈ V derived from entity j is propagated to update an entity i with weight q i · k j , where q i ∈ H is a query vector derived from entity i, k j ∈ H is a key vector derived from entity j, and the inner product on H is written as a dot product.", "The Transformer therefore represents a relational inductive bias, where a relation from entity j to entity i is perceived to the extent that q i · k j is large and positive.", "However, the real world has structure beyond entities and their direct relationships: for example, the three blocks in Figure 1 are arranged in such a way that if either of the supporting blocks is removed, the top block will fall.", "This is a simple 3-way relationship between entities i, j, k that is complex to represent as a system of 2-way relationships.", "It is natural to make the hypothesis that such higher-order relationships are essential to extracting the full predictive power of data, across many domains.", "In accordance with this hypothesis, we introduce a generalisation of the Transformer architecture, the 2-simplicial Transformer, which incorporates both 2-and 3-way interactions.", "Mathematically, the key observation is that higher-order interactions between entities can be understood using algebras.", "This is nothing but Boole's insight (Boole, 1847) which set in motion the development of modern logic.", "In our situation, an appropriate algebra is the Clifford algebra Cl(H) of the space H of queries and keys, which contains that space H ⊆ Cl(H) and in which queries and keys can be multiplied.", "To represent a 3-way interaction we map each entity i to a triple (p i , l k ) using a natural continuous function η : Cl(H) −→ R associated to the Z-grading of Cl(H).", "This scalar measures how strongly the network perceives a 3-way interaction involving i, j, k.", "In summary, the 2-simplicial Transformer learns how to represent entities in its environment as vectors v ∈ V , and how to transform those entities to queries and (pairs of) keys in H, so that the signals provided by the scalars q i · k j and η(p i l 1 j l 2 k ) are informative about higher-order structure in the environment.", "As a toy example of higher-order structure, we consider the reinforcement learning problem in a variant of the BoxWorld environment from (Zambaldi et al., 2019) .", "The original BoxWorld is played on a rectangular grid populated by keys and locked boxes of varying colours, with the goal being to open the box containing the \"Gem\".", "In our variant of the BoxWorld environment, bridge BoxWorld, the agent must use two keys simultaneously to obtain the Gem; this structure in the environment creates many 3-way relationships between entities, including for 
example the relationship between the locked boxes j, k providing the two keys and the Gem entity i.", "This structure in the environment is fundamentally logical in nature, and encodes a particular kind of conjunction; see Appendix I.", "The architecture of our deep reinforcement learning agent largely follows (Zambaldi et al., 2019) and the details are given in Section 4.", "The key difference between our simplicial agent and the relational agent of (Zambaldi et al., 2019) is that in place of a standard Transformer block we use a 2-simplicial Transformer block.", "Our experiments show that the simplicial agent confers an advantage over the relational agent as an inductive bias in our reasoning task.", "Motivation from neuroscience for a simplicial inductive bias for abstract reasoning is contained in Appendix J.", "Our use of tensor products of value vectors is inspired by the semantics of linear logic in vector spaces (Girard, 1987; Mellis, 2009; Clift & Murfet, 2017; Wallbridge, 2018) in which an algorithm with multiple inputs computes on the tensor product of those inputs, but this is an old idea in natural language processing, used in models including the second-order RNN (Giles et al., 1989; Pollack, 1991; Goudreau et al., 1994; Giles et al., 1991) , multiplicative RNN (Sutskever et al., 2011; Irsoy & Cardie, 2015) , Neural Tensor Network (Socher et al., 2013 ) and the factored 3-way Restricted Boltzmann Machine (Ranzato et al., 2010) , see Appendix A. Tensors have been used to model predicates in a number of neural network architectures aimed at logical reasoning (Serafini & Garcez, 2016; Dong et al., 2019) .", "The main novelty in our model lies in the introduction of the 2-simplicial attention, which allows these ideas to be incorporated into the Transformer architecture.", "On general grounds one might expect that in the limit of infinite experience, any reinforcement learning agent with a sufficiently deep neural network will be able to solve any environment, in-cluding those like bridge BoxWorld that involve higher-order relations between entities.", "In practice, however, we do not care about the infinite computation limit.", "In the regime of bounded computation it is reasonable to introduce biases towards learning representations of structures that are found in a wide range of environments that we consider important.", "We argue that higher-order relations between entities are an important example of such structures, and that the 2-simplicial Transformer is a natural inductive bias for 3-way interactions between entities.", "We have given preliminary evidence for the utility of this bias by showing that in the bridge BoxWorld environment the simplicial agent has better performance than a purely relational agent, and that this performance involves in a meaningful way the prediction of 3-way interactions (or 2-simplices).", "We believe that simplicial Transformers may be useful for any problem in which higher-order relations between entities are important.", "The long history of interactions between logic and algebra is a natural source of inspiration for the design of inductive biases in deep learning.", "In this paper we have exhibited one example: Boole's idea, that relationships between entities can be modeled by multiplication in an algebra, may be realised in the context of deep learning as an augmentation to the Transformer architecture using Clifford algebras of spaces of representations." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3333333432674408, 0.8888888955116272, 0.11428570747375488, 0, 0.26923075318336487, 0.1515151411294937, 0.2800000011920929, 0.2711864411830902, 0.1818181723356247, 0.17391303181648254, 0.31111109256744385, 0.1538461446762085, 0.19512194395065308, 0.2448979616165161, 0.1111111044883728, 0.10256409645080566, 0.1666666567325592, 0.25531914830207825, 0.19607841968536377, 0.1846153736114502, 0.3255814015865326, 0.3404255211353302, 0.3529411852359772, 0.3255814015865326, 0.3589743673801422, 0.140625, 0.260869562625885, 0.2539682388305664, 0.0555555522441864, 0.31372547149658203, 0.47999998927116394, 0.32786884903907776, 0.23255813121795654, 0.43478259444236755, 0.317460298538208 ]
rkecJ6VFvr
true
[ "We introduce the 2-simplicial Transformer and show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning." ]
[ "We present Tensor-Train RNN (TT-RNN), a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics.", "Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations and sensitivity to error propagation.", "Our proposed tensor recurrent architecture addresses these issues by learning the nonlinear dynamics directly using higher order moments and high-order state transition functions.", "Furthermore, we decompose the higher-order structure using the tensor-train (TT) decomposition to reduce the number of parameters while preserving the model performance.", "We theoretically establish the approximation properties of Tensor-Train RNNs for general sequence inputs, and such guarantees are not available for usual RNNs.", "We also demonstrate significant long-term prediction improvements over general RNN and LSTM architectures on a range of simulated environments with nonlinear dynamics, as well on real-world climate and traffic data.", "One of the central questions in science is forecasting: given the past history, how well can we predict the future?", "In many domains with complex multivariate correlation structures and nonlinear dynamics, forecasting is highly challenging since the system has long-term temporal dependencies and higher-order dynamics.", "Examples of such systems abound in science and engineering, from biological neural network activity, fluid turbulence, to climate and traffic systems (see FIG0 ).", "Since current forecasting systems are unable to faithfully represent the higher-order dynamics, they have limited ability for accurate long-term forecasting.", "Therefore, a key challenge is accurately modeling nonlinear dynamics and obtaining stable long-term predictions, given a dataset of realizations of the dynamics.", "Here, the forecasting problem can be stated as follows: how can we efficiently learn a model that, given only few initial states, can reliably predict a sequence of future states over a long horizon of T time-steps?", "Common approaches to forecasting involve linear time series models such as auto-regressive moving average (ARMA), state space models such as hidden Markov model (HMM), and deep neural networks.", "We refer readers to a survey on time series forecasting by BID2 and the references therein.", "A recurrent neural network (RNN), as well as its memory-based extensions such as the LSTM, is a class of models that have achieved good performance on sequence prediction tasks from demand forecasting BID5 to speech recognition BID15 and video analysis BID9 .", "Although these methods can be effective for short-term, smooth dynamics, neither analytic nor data-driven learning methods tend to generalize well to capturing long-term nonlinear dynamics and predicting them over longer time horizons.To address this issue, we propose a novel family of tensor-train recurrent neural networks that can learn stable long-term forecasting.", "These models have two key features: they", "1) explicitly model the higher-order dynamics, by using a longer history of previous hidden states and high-order state interactions with multiplicative memory units; and", "2) they are scalable by using tensor trains, a structured low-rank tensor decomposition that greatly reduces the number of model parameters, while mostly preserving the correlation structure of the full-rank model.In this work, we analyze Tensor-Train RNNs theoretically, and also 
experimentally validate them over a wide range of forecasting domains.", "Our contributions can be summarized as follows:• We describe how TT-RNNs encode higher-order non-Markovian dynamics and high-order state interactions.", "To address the memory issue, we propose a tensor-train (TT) decomposition that makes learning tractable and fast.•", "We provide theoretical guarantees for the representation power of TT-RNNs for nonlinear dynamics, and obtain the connection between the target dynamics and TT-RNN approximation. In", "contrast, no such theoretical results are known for standard recurrent networks.• We", "validate TT-RNNs on simulated data and two real-world environments with nonlinear dynamics (climate and traffic). Here", ", we show that TT-RNNs can forecast more accurately for significantly longer time horizons compared to standard RNNs and LSTMs.", "In this work, we considered forecasting under nonlinear dynamics.We propose a novel class of RNNs -TT-RNN.", "We provide approximation guarantees for TT-RNN and characterize its representation power.", "We demonstrate the benefits of TT-RNN to forecast accurately for significantly longer time horizon in both synthetic and real-world multivariate time series data.As we observed, chaotic dynamics still present a significant challenge to any sequential prediction model.", "Hence, it would be interesting to study how to learn robust models for chaotic dynamics.", "In other sequential prediction settings, such as natural language processing, there does not (or is not known to) exist a succinct analytical description of the data-generating process.", "It would be interesting to further investigate the effectiveness of TT-RNNs in such domains as well." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.06666666269302368, 0.06451612710952759, 0.060606054961681366, 0.13793103396892548, 0.06666666269302368, 0.052631575614213943, 0, 0.05882352590560913, 0, 0.06896550953388214, 0, 0.1428571343421936, 0.11428570747375488, 0.1538461446762085, 0.04081632196903229, 0.17241379618644714, 0, 0.060606054961681366, 0.14814814925193787, 0, 0.0714285671710968, 0, 0, 0, 0.20000000298023224, 0.14814814925193787, 0, 0.043478257954120636, 0, 0, 0 ]
HJJ0w--0W
true
[ "Accurate forecasting over very long time horizons using tensor-train RNNs" ]
[ "Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret.", "We propose a variational message-passing algorithm for variational inference in such models.", "We make three contributions.", "First, we propose structured inference networks that incorporate the structure of the graphical model in the inference network of variational auto-encoders (VAE).", "Second, we establish conditions under which such inference networks enable fast amortized inference similar to VAE.", "Finally, we derive a variational message passing algorithm to perform efficient natural-gradient inference while retaining the efficiency of the amortized inference.", "By simultaneously enabling structured, amortized, and natural-gradient inference for deep structured models, our method simplifies and generalizes existing methods.", "To analyze real-world data, machine learning relies on models that can extract useful patterns.", "Deep Neural Networks (DNNs) are a popular choice for this purpose because they can learn flexible representations.", "Another popular choice are probabilistic graphical models (PGMs) which can find interpretable structures in the data.", "Recent work on combining these two types of models hopes to exploit their complimentary strengths and provide powerful models that are also easy to interpret BID10 BID14 BID0 BID3 .To", "apply such hybrid models to real-world problems, we need efficient algorithms that can extract useful structure from the data. However", ", the two fields of deep learning and PGMs traditionally use different types of algorithms. For deep", "learning, stochastic-gradient methods are the most popular choice, e.g., those based on back-propagation. These algorithms", "are not only widely applicable, but can also employ amortized inference to enable fast inference at test time BID17 BID12 . On the other hand", ", most popular algorithms for PGMs exploit the model's graphical conjugacy structure to gain computational efficiency, e.g., variational message passing (VMP) BID18 , expectation propagation BID16 , Kalman filtering BID4 BID5 , and more recently natural-gradient variational inference BID9 and stochastic variational inference BID8 . In short, the two", "fields of deep learning and probabilistic modelling employ fundamentally different inferential strategies and a natural question is, whether we can design algorithms that combine their respective strengths.There have been several attempts to design such methods in the recent years, e.g., BID14 ; BID3 ; BID0 ; BID10 ; BID2 . Our work in this", "paper is inspired by the previous work of BID10 that aims to combine message-passing, natural-gradient, and amortized inference. Our proposed method", "in this paper simplifies and generalizes the method of BID10 .To do so, we propose", "Structured Inference Networks (SIN) that incorporate the PGM structure in the standard inference networks used in variational auto-encoders (VAE) BID12 BID17 . We derive conditions", "under which such inference networks can enable fast amortized inference similar to VAE. By using a recent VMP", "method of BID11 , we The generative models are just like the decoder in VAE but they employ a structured prior, e.g., Fig. (a) has a mixture-model prior while Fig. (b) has a dynamical system prior. SINs, just like the encoder", "in VAE, mimic the structure of the generative model by using parameters φ. 
One main difference is that", "in SIN the arrows between y n and x n are reversed compared to the model, while rest of the arrows have the same direction.derive a variational message-passing algorithm whose messages automatically reduce to stochasticgradients for the deep components of the model, while perform natural-gradient updates for the PGM part. Overall, our algorithm enables", "Structured, Amortized, and Natural-gradient (SAN) updates and therefore we call our algorithm the SAN algorithm. We show that our algorithm give", "comparable performance to the method of BID10 while simplifying and generalizing it. The code to reproduce our results", "is available at https://github.com/emtiyaz/vmp-for-svae/.", "We propose an algorithm to simplify and generalize the algorithm of BID10 for models that contain both deep networks and graphical models.", "Our proposed VMP algorithm enables structured, amortized, and natural-gradient updates given that the structured inference networks satisfy two conditions.", "The two conditions derived in this paper generally hold for PGMs that do not force dense correlations in the latent variables x.", "However, it is not clear how to extend our method to models where this is the case, e.g., Gaussian process models.", "It is possible to use ideas from sparse Gaussian process models and we will investigate this in the future.", "An additional issue is that our results are limited to small scale data.", "We found that it is non-trivial to implement a message-passing framework that goes well with the deep learning framework.", "We are going to pursue this direction in the future and investigate good platforms to integrate the capabilities of these two different flavors of algorithms." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.277777761220932, 0.5714285373687744, 0.0952380895614624, 0.34285715222358704, 0, 0.2222222238779068, 0.17142856121063232, 0.12903225421905518, 0.11764705181121826, 0.24242423474788666, 0.13333332538604736, 0.1621621549129486, 0.1875, 0.05882352590560913, 0.04999999329447746, 0.17241379618644714, 0.17910447716712952, 0.15789473056793213, 0.1875, 0.20512819290161133, 0.05882352590560913, 0.11764705181121826, 0.1764705777168274, 0.27586206793785095, 0.29411762952804565, 0.11764705181121826, 0, 0.6666666865348816, 0.2222222238779068, 0.15789473056793213, 0.10810810327529907, 0.1666666567325592, 0.06666666269302368, 0.3529411852359772, 0.1538461446762085 ]
HyH9lbZAW
true
[ "We propose a variational message-passing algorithm for models that contain both the deep model and probabilistic graphical model." ]
[ "Modern deep neural networks have a large amount of weights, which make them difficult to deploy on computation constrained devices such as mobile phones.", "One common approach to reduce the model size and computational cost is to use low-rank factorization to approximate a weight matrix.", "However, performing standard low-rank factorization with a small rank can hurt the model expressiveness and significantly decrease the performance.", "In this work, we propose to use a mixture of multiple low-rank factorizations to model a large weight matrix, and the mixture coefficients are computed dynamically depending on its input.", "We demonstrate the effectiveness of the proposed approach on both language modeling and image classification tasks.", "Experiments show that our method not only improves the computation efficiency but also maintains (sometimes outperforms) its accuracy compared with the full-rank counterparts.", "Modern neural networks usually contain millions of parameters BID4 BID8 , and they are difficult to be deployed on mobile devices with limited computation resources.", "To solve this problem, model compression techniques are proposed in recent years.", "Low-rank factorization is a popular way of reducing the matrix size.", "It has been extensively explored in the literature BID5 BID6 BID3 BID10 .", "Mathematically, a large weight matrix W ∈ R m×n is factorized to two small rank-d matrices U ∈ R m×d , V ∈ R n×d with W = U V T .", "Since both U and V are dense, no sparsity support is required from specialized hardware.", "It naturally fits the general-purpose, off-the-shelf CPUs and GPUs.To significantly reduce the model size and computation, the rank d in the low-rank factorization needs to be small.", "However, a small rank can limit the expressiveness of the model BID9 and lead to worse performance.", "To understand the limitations, given a n-dim feature vector h, we observe that DISPLAYFORM0 , is a linear projection from a high-dimensional space (n dims) to a low-dimensional space (d dims).", "This can lead to a significant loss of information.", "The conflict between the rank d and the model expressiveness prevents us from obtaining a both compact and accurate model.To address the dilemma, we propose to increase the expressiveness by learning an adaptive, inputdependent factorization, rather than performing a fixed factorization of a weight matrix.", "To do so, we use a mixture of multiple low-rank factorizations.", "The mixing weights are computed based on the input.", "This creates an adaptive linear projection from a high-dimensional space to a low-dimensional space.", "Compared to the conventional low-rank factorization, the proposed approach can significantly improve its performance while only introducing a small additional cost.", "DISPLAYFORM1 where z can be treated as the middle layer.", "Techniques like pooling can be applied to compute π to make it efficient." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.04651162400841713, 0.21052631735801697, 0.1621621549129486, 0.1304347813129425, 0.23529411852359772, 0.09756097197532654, 0.09090908616781235, 0, 0.06666666269302368, 0, 0.04651162400841713, 0.11764705181121826, 0.1860465109348297, 0.11428570747375488, 0.08888888359069824, 0.0714285671710968, 0.17543859779834747, 0.06666666269302368, 0, 0.06451612710952759, 0.1538461446762085, 0, 0.06451612710952759 ]
B1eHgu-Fim
true
[ "A simple modification to low-rank factorization that improves performances (in both image and language tasks) while still being compact." ]
[ "Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices.", "We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs).", "PCRs deviate from previous formats by leveraging progressive compression to split each training example into multiple examples of increasingly higher fidelity, without adding to the total data size.", "Training examples of similar fidelity are grouped together, which reduces both the system overhead and data bandwidth needed to train a model.", "We show that models can be trained on aggressively compressed representations of the training data and still retain high accuracy, and that PCRs can enable a 2x speedup on average over baseline formats using JPEG compression.", "Our results hold across deep learning architectures for a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ.", "Distributed deep learning exploits parallelism to reduce training time, and consists of three key components: the data pipeline (storage), the forward/backward computation (compute), and the variable synchronization (network).", "A plethora of work has investigated scaling deep learning from a compute-or network-bound perspective (e.g., Dean et al., 2012; Cui et al., 2016; Abadi et al., 2015; Cui et al., 2014; Jouppi et al., 2017; Lim et al., 2019; Alistarh et al., 2017; Wen et al., 2017; Wangni et al., 2018; .", "However, little attention has been paid toward scaling the storage layer, where training starts and training data is sourced.", "Unfortunately, hardware trends point to an increasing divide between compute and networking or storage bandwidth (Li et al., 2016; Lim et al., 2019; Kurth et al., 2018) .", "For example, the transportation of data for machine learning is a key factor in the design of modern data centers (Hazelwood et al., 2018) , which are expected to be serviced by slow, yet high capacity, storage media for the foreseeable future (David Reinsel, 2018; Cheng et al., 2015; Rosenthal et al., 2012) .", "This, combined with the memory wall-a lack of bandwidth between compute and memory-suggests that, while computation may be sufficient moving forward, the mechanisms for moving data to the compute may not (Wulf & McKee, 1995; Kwon & Rhu, 2018; Hsieh et al., 2017; Zinkevich et al., 2010) .", "The storage pipeline is therefore a natural area to seek improvements in overall training times, which manifest from the storage medium, through the network, and into the compute nodes.", "In this work, we propose a novel on-disk format called Progressive Compressed Records (PCRs) as a way to reduce the bandwidth cost associated with training over massive datasets.", "Our approach leverages a compression technique that decomposes each data item into deltas, each of which increases data fidelity.", "PCRs utilize deltas to dynamically compress entire datasets at a fidelity suitable for each application's needs, avoiding duplicating the dataset (potentially many times) at various fidelity levels.", "Applications control the trade-off between dataset size (and, thus, bandwidth) and fidelity, and a careful layout of deltas ensures that data access is efficient at a storage medium level.", "As a result, we find that for a variety of popular deep learning models and datasets, bandwidth (and therefore training time) can be easily reduced by 2× on average relative to JPEG 
compression without affecting model accuracy.", "Overall, we make the following contributions:", "1. In experiments with multiple architectures and several large-scale image datasets, we show that neural network training is robust to data compression in terms of test accuracy and training loss; however, the amount of compression that can be tolerated varies across learning tasks.", "2. We introduce Progressive Compressed Records (PCRs), a novel on-disk format for training data.", "PCRs combine progressive compression and careful data placement to enable applications to dynamically choose the fidelity of the dataset they consume, reducing data bandwidth.", "3. We demonstrate that by using PCRs, training speed can be improved by 2× on average over standard formats using JPEG compression.", "This is achieved by selecting a lower data fidelity, which, in turn, reduces the amount of data read without significantly impairing model performance.", "To continue making advances in machine learning, researchers will need access to larger and larger datasets, which will eventually spill into (potentially distributed) storage systems.", "Storage and networking bandwidth, which are precious resources, can be better utilized with efficient compression formats.", "We introduce a novel record format, Progressive Compressed Records (PCRs), that trades off data fidelity with storage and network demands, allowing the same model to be trained with 2× less storage bandwidth while retaining model accuracy.", "PCRs use progressive compression to split training examples into multiple examples of increasingly higher fidelity without the overheads of naive approaches.", "PCRs avoid duplicating space, are easy to implement, and can be applied to a broad range of tasks dynamically.", "While we apply our format in this work specifically to images with JPEG compression, PCRs are general enough to handle various data modalities or additional compression techniques; future work will include exploring these directions in fields outside of visual classification, such as audio generation or video segmentation." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2222222238779068, 0.30434781312942505, 0.1599999964237213, 0.2222222238779068, 0.2181818187236786, 0.1904761791229248, 0.25, 0.10526315122842789, 0.1463414579629898, 0.08510638028383255, 0.17910447716712952, 0.1269841194152832, 0.16326530277729034, 0.19999998807907104, 0.14999999105930328, 0.2083333283662796, 0.1599999964237213, 0.2711864411830902, 0, 0.19672130048274994, 0.2702702581882477, 0.22727271914482117, 0.1860465109348297, 0.13333332538604736, 0.08695651590824127, 0.10256409645080566, 0.2857142686843872, 0.1428571343421936, 0.24390242993831635, 0.09090908616781235 ]
S1e0ZlHYDB
true
[ "We propose a simple, general, and space-efficient data format to accelerate deep learning training by allowing sample fidelity to be dynamically selected at training time" ]
[ "It is fundamental and challenging to train robust and accurate Deep Neural Networks (DNNs) when semantically abnormal examples exist.", "Although great progress has been made, there is still one crucial research question which is not thoroughly explored yet: What training examples should be focused and how much more should they be emphasised to achieve robust learning?", "In this work, we study this question and propose gradient rescaling (GR) to solve it.", "GR modifies the magnitude of logit vector’s gradient to emphasise on relatively easier training data points when noise becomes more severe, which functions as explicit emphasis regularisation to improve the generalisation performance of DNNs.", "Apart from regularisation, we connect GR to examples weighting and designing robust loss functions.", "We empirically demonstrate that GR is highly anomaly-robust and outperforms the state-of-the-art by a large margin, e.g., increasing 7% on CIFAR100 with 40% noisy labels.", "It is also significantly superior to standard regularisers in both clean and abnormal settings.", "Furthermore, we present comprehensive ablation studies to explore the behaviours of GR under different cases, which is informative for applying GR in real-world scenarios.", "DNNs have been successfully applied in diverse applications (Socher et al., 2011; Krizhevsky et al., 2012; LeCun et al., 2015) .", "However, their success is heavily reliant on the quality of training data, especially accurate semantic labels for learning supervision.", "Unfortunately, on the one hand, maintaining the quality of semantic labels as the scale of training data increases is expensive and almost impossible when the scale becomes excessively large.", "On the other hand, it has been demonstrated that DNNs are capable of memorising the whole training data even when all training labels are random (Zhang et al., 2017) .", "Therefore, DNNs struggle to discern meaningful data patterns and ignore semantically abnormal examples 1 simultaneously (Krueger et al., 2017; Arpit et al., 2017) .", "Consequently, it becomes an inevitable demand for DNNs to hold robustness when training data contains anomalies (Larsen et al., 1998; Natarajan et al., 2013; Sukhbaatar & Fergus, 2014; Xiao et al., 2015; Patrini et al., 2017; Vahdat, 2017; Veit et al., 2017; Li et al., 2017) .", "Recently, great progress has been made towards robustness against anomalies when training DNNs (Krueger et al., 2017) .", "There are three appealing perspectives in terms of their simplicity and effectiveness:", "1) Examples weighting.", "For example, knowledge distilling from auxiliary models is popular for heuristically designing weighting schemes.", "However, it is challenging to select and train reliable auxiliary models in practice (Li et al., 2017; Malach & Shalev-Shwartz, 2017; Jiang et al., 2018; Ren et al., 2018; Han et al., 2018b) .", "2) Robust loss functions (Van Rooyen et al., 2015; Ghosh et al., 2017; Zhang & Sabuncu, 2018; Wang et al., 2019b) ; 3) Explicit regularisation techniques (Arpit et al., 2017; .", "Although designing robust losses or explicit regularisation is easier and more flexible in practice, the performance is not the optimal yet.", "1 One training example is composed of an input and its corresponding label.", "A semantically abnormal example means the input is semantically unrelated to its label, which may come from corrupted input or label.", "For example, in Figure 3 in the supplementary material:", "1) Out-of-distribution anomalies: An image may 
contain only background or an object which does not belong to any training class;", "2) In-distribution anomalies: An image of class a may be annotated to class b or an image may contain more than one semantic object.", "Regarding examples weighting, there is a core research question which is not well answered yet:", "What training examples should be focused on and how large the emphasis spread should be?", "In this work, we present a thorough study of this practical question under different settings.", "For better analysis, we propose two basic and necessary concepts: emphasis focus and spread with explicit definition in Sec. 3.2.", "They are conceptually introduced as follows:", "Emphasis focus.", "It is a common practice to focus on harder instances when training DNNs (Shrivastava et al., 2016; Lin et al., 2017) .", "When a dataset is clean, it achieves faster convergence and better performance to emphasise on harder examples because they own larger gradient magnitude, which means more information and a larger update step for model's parameters.", "However, when severe noise exists, as demonstrated in (Krueger et al., 2017; Arpit et al., 2017) , DNNs learn simple meaningful patterns first before memorising abnormal ones.", "In other words, anomalies are harder to fit and own larger gradient magnitude in the later stage.", "Consequently, if we use the default sample weighting in categorical cross entropy (CCE) where harder samples obtain higher weights, anomalies tend to be fitted well especially when a network has large enough capacity.", "That is why we need to move the emphasis focus towards relatively easier ones, which serves as emphasis regularisation.", "Emphasis spread.", "We term the weighting variance of training examples emphasis spread.", "The key concept is that we should not treat all examples equally, neither should we let only a few be emphasised and contribute to the training.", "Therefore, when emphasis focus changes, the emphasis spread should be adjusted accordingly.", "We integrate emphasis focus and spread into a unified example weighting framework.", "Emphasis focus defines what training examples own higher weights while emphasis spread indicates how large variance over their weights.", "Specifically, we propose gradient rescaling (GR), which modifies the magnitude of logit vector's gradient.", "The logit vector is the output of the last fully connected (FC) layer of a network.", "We remark that we do not design the weighting scheme heuristically from scratch.", "Instead, it is naturally motivated by the gradient analysis of several loss functions.", "Interestingly, GR can be naturally connected to examples weighting, robust losses, explicit regularisation:", "1) The gradient magnitude of logit vector can be regarded as weight assignment that is built-in in loss functions (Gopal, 2016; Alain et al., 2016; Zhang et al., 2018b) .", "Therefore, rescaling the gradient magnitude equals to adjusting the weights of examples;", "2) A specific loss function owns a fixed gradient derivation.", "Adjusting the gradient can be treated as a more direct and flexible way of modifying optimisation objectives;", "3) Instead of focusing on harder examples 2 by default, we can adjust emphasis focus to relative easier ones when noise is severe.", "GR serves as emphasis regularisation and is different from standard regularisers, e.g., L2 weight decay constraints on weight parameters and Dropout samples neural units randomly (Srivastava et al., 2014) ; GR is simple yet effective.", "We 
demonstrate its effectiveness on diverse computer vision tasks using different net architectures:", "1) Image classification with clean training data;", "2) Image classification with synthetic symmetric label noise, which is more challenging than asymmetric noise evaluated by (Vahdat, 2017; ; 3) Image classification with real-world unknown anomalies, which may contain open-set noise , e.g., images with only background, or outliers, etc.", ";", "4) Video person re-identification, a video retrieval task containing diverse anomalies.", "Beyond, we show that GR is notably better than other standard regularisers, e.g., L2 weight decay and dropout.", "Besides, to comprehensively understand GR's behaviours, we present extensive ablation studies.", "Main contribution.", "Intuitively and principally, we claim that two basic factors, emphasis focus and spread, should be babysat simultaneously when it comes to examples weighting.", "To the best of our knowledge, we are the first to thoroughly study and analyse them together in a unified framework.", "In this work, we present three main contributions:", "1) We analyse and answer a core research question: What training examples should be focused on and how large the emphasis spread should be?", "2) We uncover and analyse that two basic factors, emphasis focus and spread, should be babysat simultaneously when it comes to examples weighting.", "Consequently, we propose a simple yet effective gradient rescaling framework serving as emphasis regularisation.", "3) Extensive experiments on different tasks using different network architectures are reported for better understanding and demonstration of GR's effectiveness, which are also valuable for applying GR in practice.", "(Zheng et al., 2016) .", "Out-of-distribution anomalies:", "1) The first image in the 3rd row contains only background and no semantic information at all." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rylUOn4Yvr
true
[ "ROBUST DISCRIMINATIVE REPRESENTATION LEARNING VIA GRADIENT RESCALING: AN EMPHASIS REGULARISATION PERSPECTIVE" ]
[ "Generative Adversarial Networks (GANs) have achieved remarkable results in the task of generating realistic natural images.", "In most applications, GAN models share two aspects in common.", "On the one hand, GANs training involves solving a challenging saddle point optimization problem, interpreted as an adversarial game between a generator and a discriminator functions.", "On the other hand, the generator and the discriminator are parametrized in terms of deep convolutional neural networks.", "The goal of this paper is to disentangle the contribution of these two factors to the success of GANs.", "In particular, we introduce Generative Latent Optimization (GLO), a framework to train deep convolutional generators without using discriminators, thus avoiding the instability of adversarial optimization problems.", "Throughout a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: learning from large data, synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors.", "Generative Adversarial Networks (GANs) BID15 are a powerful framework to learn generative models of natural images.", "GANs learn these generative models by setting up an adversarial game between two learning machines.", "On the one hand, a generator plays to transform noise vectors into fake samples, which resemble real samples drawn from a distribution of natural images.", "On the other hand, a discriminator plays to distinguish between real and fake samples.", "During training, the generator and the discriminator learn in turns.", "First, the discriminator learns to assign high scores to real samples, and low scores to fake samples.", "Then, the generator learns to increase the scores of fake samples, as to fool the discriminator.", "After proper training, the generator is able to produce realistic natural images from noise vectors.Recently, GANs have been used to produce high-quality images resembling handwritten digits, human faces, and house interiors BID36 .", "Furthermore, GANs exhibit three strong signs of generalization.", "First, the generator translates linear interpolations in the noise space into semantic interpolations in the image space.", "In other words, a linear interpolation in the noise space will generate a smooth interpolation of visually-appealing images.", "Second, the generator allows linear arithmetic in the noise space.", "Similarly to word embeddings BID31 , linear arithmetic indicates that the generator organizes the noise space to disentangle the nonlinear factors of variation of natural images into linear statistics.", "Third, the generator is able to to synthesize new images that resemble those of the data distribution.", "This allows for applications such as image in-painting BID18 and super-resolution BID26 .Despite", "their success, training and evaluating GANs is notoriously difficult. The adversarial", "optimization problem implemented by GANs is sensitive to random initialization, architectural choices, and hyper-parameter settings. In many cases,", "a fair amount of human care is necessary to find the correct configuration to train a GAN in a particular dataset. It is common to", "observe generators with similar architectures and hyper-parameters to exhibit dramatically different behaviors. Even when properly", "trained, the resulting generator may synthesize samples that resemble only a few localized regions (or modes) of the data distribution BID14 . 
While several advances", "have been made to stabilize the training of GANs BID37 , this task remains more art than science.The difficulty of training GANs is aggravated by the challenges in their evaluation: since evaluating the likelihood of a GAN with respect to the data is an intractable problem, the current gold standard to evaluate the quality of GANs is to eyeball the samples produced by the generator. The evaluation of discriminators", "is also difficult, since their visual features do not always transfer well to supervised tasks BID12 BID13 . Finally, the application of GANs", "to non-image data has been relatively limited.Research question To model natural images with GANs, the generator and discriminator are commonly parametrized as deep Convolutional Networks (convnets) BID24 . Therefore, it is reasonable to hypothesize", "that the reasons for the success of GANs in modeling natural images come from two complementary sources: (A1) Leveraging the powerful inductive bias of deep convnets. (A2) The adversarial training protocol.This", "work", "attempts to disentangle the factors of success (A1) and (A2) in GAN models. Specifically, we propose and study one algorithm", "that relies on (A1) and avoids (A2), but still obtains competitive results when compared to a GAN.", "The experimental results presented in this work suggest that, in the image domain, we can recover many of the properties of GAN models by using convnets trained with simple reconstruction losses.", "While this does not invalidate the promise of GANs as generic models of uncertainty or as methods for building generative models, our results suggest that, in order to more fully test the adversarial construction, research needs to move beyond images and convnets.", "On the other hand, practitioners who care only about generating images for a particular application, and find that the parameterized discriminator does improve their results can use reconstruction losses in their model searches, alleviating some of the instability of GAN training.While the visual quality of the results are promising, especially on the CelebA dataset, they are not yet to the level of the results obtained by GANs on the LSUN bedrooms.", "This suggest several research directions: one possibility, suggested by 3, is that being able to cover the entire dataset is too onerous a task if all that is required is to generate a few nice samples.", "In that figure we see that GANs have trouble reconstructing randomly chosen images at the same level of fidelity as their generations.", "However, GANs can produce good images after a single pass through the data with SGD.", "In future work we hope to better understand the tension between these two observations.", "There are many possibilities for improving the quality of GLO samples beyond understanding the effects of coverage.", "For example other loss functions (e.g. a VGG metric, as in BID32 ), model architectures (here we stayed close to DCGAN for ease of comparison), and more sophisticated sampling methods after training the model all may improve the visual quality of the samples.There is also much work to be done in adding structure to the Z space.", "Because the methods here keep track of the correspondence between samples and their representatives, and because the Z space is free, we hope to be able to organize the Z in interesting ways as we train." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.08695651590824127, 0.04999999701976776, 0.25925925374031067, 0.17391303181648254, 0.13333332538604736, 0.1428571343421936, 0.35483869910240173, 0.08695651590824127, 0.08888888359069824, 0.2222222238779068, 0.13636362552642822, 0.1538461446762085, 0.09090908616781235, 0.1395348757505417, 0.19999998807907104, 0.10526315122842789, 0.1428571343421936, 0.17391303181648254, 0.1538461446762085, 0.14814814925193787, 0.13333332538604736, 0.04651162400841713, 0.19512194395065308, 0.1249999925494194, 0.1599999964237213, 0.08888888359069824, 0.15094339847564697, 0.202531635761261, 0.11538460850715637, 0.1269841194152832, 0.16949151456356049, 0.16326530277729034, 0.12765957415103912, 0.3103448152542114, 0.1764705777168274, 0.20930232107639313, 0.06666666269302368, 0.11764705181121826, 0.17777776718139648, 0.045454539358615875, 0.13333332538604736, 0.14814814925193787, 0.1355932205915451 ]
ryj38zWRb
true
[ "Are GANs successful because of adversarial training or the use of ConvNets? We show a ConvNet generator trained with a simple reconstruction loss and learnable noise vectors leads many of the desirable properties of a GAN." ]
[ "In this paper, we propose a novel kind of kernel, random forest kernel, to enhance the empirical performance of MMD GAN.", "Different from common forests with deterministic routings, a probabilistic routing variant is used in our innovated random-forest kernel, which is possible to merge with the CNN frameworks.", "Our proposed random-forest kernel has the following advantages: From the perspective of random forest, the output of GAN discriminator can be viewed as feature inputs to the forest, where each tree gets access to merely a fraction of the features, and thus the entire forest benefits from ensemble learning.", "In the aspect of kernel method, random-forest kernel is proved to be characteristic, and therefore suitable for the MMD structure.", "Besides, being an asymmetric kernel, our random-forest kernel is much more flexible, in terms of capturing the differences between distributions.", "Sharing the advantages of CNN, kernel method, and ensemble learning, our random-forest kernel based MMD GAN obtains desirable empirical performances on CIFAR-10, CelebA and LSUN bedroom data sets.", "Furthermore, for the sake of completeness, we also put forward comprehensive theoretical analysis to support our experimental results.", "Generative adversarial nets (GANs; Goodfellow et al., 2014) are well-known generative models, which largely attribute to the sophisticated design of a generator and a discriminator which are trained jointly in an adversarial fashion.", "Nowadays GANs are intensely used in a variety of practical tasks, such as image-to-image translation (Tang et al., 2019; Mo et al., 2019) ; 3D reconstruction (Gecer et al., 2019) ; video prediction (Kwon & Park, 2019) ; text-to-image generation (Zhu et al., 2019) ; just to name a few.", "However, it's well-known that the training of GANs is a little tricky, see e.g. (Salimans et al., 2016) .", "One reason of instability of GAN training lies in the distance used in discriminator to measure the divergence between the generated distribution and the target distribution.", "For instance, concerning with the Jensen-Shannon divergence based GANs proposed in Goodfellow et al. (2014) , points out that if the generated distribution and the target distribution are supported on manifolds where the measure of intersection is zero, Jensen-Shannon divergence will be constant and the KL divergences be infinite.", "Consequently, the generator fails to obtain enough useful gradient to update, which undermines GAN training.", "Moreover, two non-overlapping distributions may be judged to be quite different by the Jensen-Shannon divergence, even if they are nearby with high probability.", "As a result, to better measure the difference between two distributions, Integral Probability Metrics (IPM) based GANs have been proposed.", "For instance, utilizes Wasserstein distance in GAN discriminator, while Li et al. (2017) adopts maximum mean discrepancy (MMD), managing to project and discriminate data in reproducing kernel Hilbert space (RKHS).", "To mention, the RKHS with characteristic kernels including Gaussian RBF kernel (Li et al., 2017) and rational quadratic kernel (Bińkowski et al., 2018) has strong power in the discrimination of two distributions, see e.g. 
(Sriperumbudur et al., 2010) .", "In this paper, inspired by non-linear discriminating power of decision forests, we propose a new type of kernel named random-forest kernel to improve the performance of MMD GAN discriminator.", "In order to fit with back-propagation training procedure, we borrow the decision forest model with stochastic and differentiable decision trees from Kontschieder et al. (2015) in our random-forest kernel.", "To be specific, each dimension of the GAN discriminator outputs is randomly connected to one internal node of a soft decision forest, serving as the candidate to-be-split dimension.", "Then, the tree is split with a soft decision function through a probabilistic routing.", "Other than the typical decision forest used in classification tasks where the value of each leaf node is a label, the leaf value of our random forest is the probability of a sample x i falling into a certain leaf node of the forest.", "If the output of the discriminator is denoted as h θ N (x i ) and the probability output of the t-th tree is denoted as µ t (h θ N (x i ); θ F ), the random forest kernel k RF can be formulated as", "where T is the total number of trees in the forest, θ N and θ F denote the parameters of the GAN discriminator and the random forest respectively.", "Recall that random forest and deep neural networks are first combined in Kontschieder et al. (2015) , where differentiable decision tree model and deep convolutional networks are trained together in an end-to-end manner to solve classification tasks.", "Then, Shen et al. (2017) extends the idea to label distribution learning, and Shen et al. (2018) makes further extensions in regression regime.", "Moreover, Zuo & Drummond (2017) , Zuo et al. (2018) and Avraham et al. (2019) also introduce deep decision forests.", "Apart from the typical ensemble method that averages the results across trees, they aggregate the results by multiplication.", "As for the combination of random forest and GAN, Zuo et al. (2018) introduce forests structure in GAN discriminator, combining CNN network and forest as a composited classifier, while Avraham et al. 
(2019) uses forest structure as one of non-linear mapping functions in regularization part.", "On the other hand, in the aspect of relationship between random forest and kernel method, Breiman (2000) initiates the literature concerning the link.", "He shows the fact that a purely random tree partition is equivalent to a kernel acting on the true margin, of which form can be viewed as the probability of two samples falling into the same terminal node.", "Shen & Vogelstein (2018) proves that random forest kernel is characteristic.", "Some more theoretical analysis can be found in Davies & Ghahramani (2014) , Arlot & Genuer (2014) , Scornet (2016) .", "However, despite their theoretical breakthroughs, forest decision functions used in these forest kernels are non-differentiable hard margins rather than differentiable soft ones, and thus cannot be directly used in back propagation regime.", "To the best of our knowledge, MMD GAN with our proposed random-forest kernel is the first to combine random forest with deep neural network in the form of kernel MMD GAN.", "Through theoretical analysis and numerical experiments, we evaluate the effectiveness of MMD GAN with our random-forest kernel.", "From the theoretical point of view, our random-forest kernel enjoys the property of being characteristic, and the gradient estimators used in the training process of random-forest kernel GAN are unbiased.", "In numerical experiments, we evaluate our random-forest kernel under the setting of both the original MMD GAN (Li et al., 2017) and the one with repulsive loss (Wang et al., 2019) .", "Besides, we also compare our random-forest kernel with Gaussian RBF kernel (Li et al., 2017) , rational quadratic kernel (Bińkowski et al., 2018) , and bounded RBF kernel (Wang et al., 2019) .", "As a result, MMD GAN with our random-forest kernel outperforms its counterparts with respect to both accuracy and training stability.", "This paper is organized as follows.", "First of all, we introduce some preliminaries of MMD GAN in Section 2.", "Then we review the concept of deep random forest and show how it is embedded within a CNN in 3.1.", "After that, random-forest kernels and MMD GAN with random-forest kernels are proposed in 3.2 and 3.3 respectively.", "Besides, the training techniques of MMD GAN with random-forest kernel are demonstrated in Section 3.4 and the theoretical results are shown in Section 3.5.", "Eventually, Section 4 presents the experimental setups and results, including the comparison between our proposed random-forest kernel and other kernels.", "In addition, all detailed theoretical proofs are included in the Appendices.", "The generative model captures the data distribution P X , by building a mapping function G : Z → X from a prior noise distribution P Z to data space.", "While the discriminative model D : X → R is used to distinguish generated distribution P Y from real data distribution P X .", "Taking X, X ∼ P X and Y, Y ∼ P Y := P G (Z) where Y := G(Z) and Y := G(Z ), the squared MMD is expressed as", "The loss of generator and discriminator in MMD GAN proposed in Li et al. (2017) is:", "Wang et al. 
(2019) proposed MMD GAN with repulsive loss, where the objective functions for G and D are:", "we can write an unbiased estimator of the squared MMD in terms of k as", "When k is a characteristic kernel, we have MMD 2 [P X , P Y ] ≥ 0 with equality applies if and only if P X = P Y .", "The best-known characteristic kernels are gaussian RBF kernel and rational quadratic kernel (Bińkowski et al., 2018) ." ]
(2019) proposed MMD GAN with repulsive loss, where the objective functions for G and D are:", "we can write an unbiased estimator of the squared MMD in terms of k as", "When k is a characteristic kernel, we have MMD^2[P_X, P_Y] ≥ 0, with equality if and only if P_X = P_Y.", "The best-known characteristic kernels are the Gaussian RBF kernel and the rational quadratic kernel (Bińkowski et al., 2018)." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.14814814925193787, 0.1818181723356247, 0.125, 0.23076923191547394, 0.1428571343421936, 0.1764705777168274, 0, 0.052631575614213943, 0.08695651590824127, 0.1428571343421936, 0, 0.0833333283662796, 0, 0.06666666269302368, 0.1428571343421936, 0.054054051637649536, 0.0952380895614624, 0.29411762952804565, 0.17142856121063232, 0.060606058686971664, 0.1904761791229248, 0.0555555522441864, 0.04999999701976776, 0, 0, 0, 0, 0, 0.045454543083906174, 0.0714285671710968, 0.09756097197532654, 0.10526315122842789, 0, 0, 0.25806450843811035, 0.3199999928474426, 0.12903225421905518, 0.2222222238779068, 0.19354838132858276, 0.37037035822868347, 0, 0.09999999403953552, 0.06896550953388214, 0.27272728085517883, 0.27586206793785095, 0.1538461446762085, 0, 0.0624999962747097, 0, 0.06896550953388214, 0.08695651590824127, 0.14814814925193787, 0.09090908616781235, 0.1818181723356247, 0.0833333283662796 ]
HJxhWa4KDr
true
[ "Equip MMD GANs with a new random-forest kernel." ]
[ "Reinforcement learning in an actor-critic setting relies on accurate value estimates of the critic.", "However, the combination of function approximation, temporal difference (TD) learning and off-policy training can lead to an overestimating value function.", "A solution is to use Clipped Double Q-learning (CDQ), which is used in the TD3 algorithm and computes the minimum of two critics in the TD-target. \n", "We show that CDQ induces an underestimation bias and propose a new algorithm that accounts for this by using a weighted average of the target from CDQ and the target coming from a single critic.\n", "The weighting parameter is adjusted during training such that the value estimates match the actual discounted return on the most recent episodes and by that it balances over- and underestimation.\n", "Empirically, we obtain more accurate value estimates and demonstrate state of the art results on several OpenAI gym tasks.", "In recent years it was shown that reinforcement learning algorithms are capable of solving very complex tasks, surpassing human expert performance in games like Go , Starcraft (DeepMind) or Dota (OpenAI).", "However, usually a large amount of training time is needed to achieve these results (e.g. 45,000 years of gameplay for Dota).", "For many important problems (e.g. in robotics) it is prohibitively expensive for the reinforcement learning agent to interact with its environment that much.", "This makes it difficult to apply such algorithms in the real world.", "Off-policy reinforcement learning holds the promise of being more data-efficient than on-policy methods as old experience can be reused several times for training.", "Unfortunately, the combination of temporal-difference (TD) learning, function approximation and off-policy training can be unstable, which is why it has been called the deadly triad (Sutton & Barto, 2018; van Hasselt et al., 2018) .", "If the action space is discrete, solutions like Double DQN (Van Hasselt et al., 2016) are very effective at preventing divergence of the value estimates by eliminating an otherwise prevailing overestimation bias.", "For continuous action spaces, which characterize many tasks, it was shown that Double DQN can not solve the overestimation problem Fujimoto et al. (2018) .", "In an actor-critic setting it is important that the value estimates of the critic are accurate in order for the actor to learn a policy from the critic.", "The TD3 Fujimoto et al. 
(2018) algorithm uses Clipped Double Q-learning (CDQ) to produce a critic without an overestimation bias, which greatly improved the performance of the algorithm.", "In CDQ two critics are trained at the same time and the TD target for both of them is the minimum over the two single TD targets.", "While the authors note that the CDQ critic update tends to underestimate the true values, this is not further examined.", "We show that this underestimation bias occurs in practice and propose a method that accounts for over-and underestimation of the critic at the same time.", "Similarly to CDQ we train two function approximators for the Q-values, but we regress them not on the same quantity.", "The TD target for each of the two critics is a weighted average of the single TD target for that critic and the TD target from CDQ.", "The weighting parameter is learned by comparing the value estimates for the most recent state-action pairs with the observed discounted returns for these pairs.", "As the one term of the average has an underestimation bias while the other one has an overestimation bias, the weighted average balances these biases and we show empirically that this method obtains much more accurate estimates of the Q-values.", "We verify that the more accurate critics improve the performance of the reinforcement learning agent as our method achieves state of the art results on a range of continuous control tasks from OpenAi gym Brockman et al. (2016) .", "To guarantee reproducibility we open source our code which is easy to execute and evaluate our algorithm on a large number of different random seeds.", "We showed that Clipped Double Q-learning (CDQ) induces an underestimation bias in the critic, while an overestimation bias occurs if just one Q-network is used.", "From that we derived the Balanced Clipped Double Q-learning algorithm (BCDQ) that updates the critic through a weighted average of the two mentioned update mechanisms.", "The weighting parameter is adjusted over the course of training by comparing the Q-values of recently visited state-action pairs with the actual discounted return observed from that pair onwards.", "It was shown that BCDQ achieves much more accurate value estimates by adjusting the weighting parameter.", "Replacing CDQ with BCDQ leads to the Balanced Twin Delayed Deep Deterministic policy gradient algorithm (BTD3).", "Our method achieves state of the art performance on a range of continuous control tasks.", "Furthermore, BCDQ can be added to any other actor-critic algorithm while it only minimally increases the computational complexity compared to CDQ.", "It is also be possible to use BCDQ for discrete action spaces.", "Evaluating that approach is an interesting area for future research." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.4166666567325592, 0.06896550953388214, 0.12121211737394333, 0.10526315122842789, 0.054054051637649536, 0.20689654350280762, 0.1463414579629898, 0.06451612710952759, 0.23529411852359772, 0.09090908616781235, 0.24242423474788666, 0, 0.0476190447807312, 0, 0.29411762952804565, 0.0555555522441864, 0.0624999962747097, 0.0714285671710968, 0.25, 0.0714285671710968, 0.13793103396892548, 0.13333332538604736, 0.19512194395065308, 0.23255813121795654, 0, 0.060606054961681366, 0.0624999962747097, 0, 0.23076923191547394, 0, 0.0833333283662796, 0, 0.09090908616781235, 0.09999999403953552 ]
r1xyayrtDS
true
[ "A method for more accurate critic estimates in reinforcement learning." ]
[ "We introduce a systematic framework for quantifying the robustness of classifiers to naturally occurring perturbations of images found in videos.", "As part of this framework, we construct ImageNet-Vid-Robust, a human-expert--reviewed dataset of 22,668 images grouped into 1,145 sets of perceptually similar images derived from frames in the ImageNet Video Object Detection dataset.", "We evaluate a diverse array of classifiers trained on ImageNet, including models trained for robustness, and show a median classification accuracy drop of 16\\%.", "Additionally, we evaluate the Faster R-CNN and R-FCN models for detection, and show that natural perturbations induce both classification as well as localization errors, leading to a median drop in detection mAP of 14 points.", "Our analysis shows that natural perturbations in the real world are heavily problematic for current CNNs, posing a significant challenge to their deployment in safety-critical environments that require reliable, low-latency predictions.", "Despite their strong performance on various computer vision benchmarks, convolutional neural networks (CNNs) still have many troubling failure modes.", "At one extreme,`padversarial examples can cause large drops in accuracy for state of the art models with visually imperceptible changes to the input image BID4 .", "But since carefully crafted`pperturbations are unlikely to occur naturally in the real world, they usually do not pose a problem outside a fully adversarial context.To study more realistic failure modes, researchers have investigated benign image perturbations such as rotations & translations, colorspace changes, and various image corruptions [7, 8, 4] .", "However, it is still unclear whether these perturbations reflect the robustness challenges commonly arising in real data since the perturbations also rely on synthetic image modifications.Recent work has therefore turned to videos as a source of naturally occurring perturbations of images [6, BID0 . In contrast to other failure modes, the perturbed images are taken from existing image data without further modifications that make the task more difficult. As a result, robustness to such perturbations directly corresponds to performance improvements on real data. However, it is currently unclear to what extent such video perturbations pose a significant robustness challenge. Azulay and Weiss BID0 only provide anecdotal evidence from a small number of videos. 
While [6] work with a larger video dataset to obtain accuracy estimates, they only observe a small drop in accuracy of around 2.7% on videoperturbed images, suggesting that small perturbations in videos may not actually reduce the accuracy of current CNNs significantly.We address this question by conducting a thorough evaluation of robustness to natural perturbations arising in videos.", "As a cornerstone of our investigation, we introduce ImageNet-Vid-Robust, a carefully curated subset of ImageNet-Vid [12] .", "In contrast to earlier work, all images in ImageNet-Vid-Robust were screened by a set of expert labelers to ensure a high annotation quality and to minimize selection biases that arise when filtering with CNNs.", "Overall, ImageNet-Vid-Robust contains 22,668 images grouped into 1,145 sets of temporally adjacent and visually similar images of a total of 30 classes.We then utilize ImageNet-Vid-Robust to measure the accuracy of current CNNs to small, naturally occurring perturbations.", "Our testbed contains over 40 different model types, varying both architecture and training methodology (adversarial training, data augmentation, etc).", "We find that natural perturbations from ImageNet-Vid-Robust induce a median 16% accuracy drop for classification tasks and a median 14% drop in mAP for detection tasks.", "Even for the best-performing model, we observe an accuracy drop of 14% -significantly larger than the 2.7% drop in [6] over the same time horizon in the video.Our results show that robustness to natural perturbations in videos is indeed a significant challenge for current CNNs.", "As these models are increasingly deployed in safety-critical environments that require both high accuracy and low latency (e.g., autonomous vehicles), ensuring reliable predictions on every frame of a video is an important direction for future work." ]
While [6] work with a larger video dataset to obtain accuracy estimates, they only observe a small drop in accuracy of around 2.7% on video-perturbed images, suggesting that small perturbations in videos may not actually reduce the accuracy of current CNNs significantly. We address this question by conducting a thorough evaluation of robustness to natural perturbations arising in videos.", "As a cornerstone of our investigation, we introduce ImageNet-Vid-Robust, a carefully curated subset of ImageNet-Vid [12].", "In contrast to earlier work, all images in ImageNet-Vid-Robust were screened by a set of expert labelers to ensure a high annotation quality and to minimize selection biases that arise when filtering with CNNs.", "Overall, ImageNet-Vid-Robust contains 22,668 images grouped into 1,145 sets of temporally adjacent and visually similar images of a total of 30 classes. We then utilize ImageNet-Vid-Robust to measure the accuracy of current CNNs under small, naturally occurring perturbations.", "Our testbed contains over 40 different model types, varying both architecture and training methodology (adversarial training, data augmentation, etc.).", "We find that natural perturbations from ImageNet-Vid-Robust induce a median 16% accuracy drop for classification tasks and a median 14% drop in mAP for detection tasks.", "Even for the best-performing model, we observe an accuracy drop of 14%, significantly larger than the 2.7% drop in [6] over the same time horizon in the video. Our results show that robustness to natural perturbations in videos is indeed a significant challenge for current CNNs.", "As these models are increasingly deployed in safety-critical environments that require both high accuracy and low latency (e.g., autonomous vehicles), ensuring reliable predictions on every frame of a video is an important direction for future work." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 1, 0.21276594698429108, 0.25, 0.26923075318336487, 0.25, 0, 0.23255813121795654, 0.1764705777168274, 0.1818181723356247, 0.1818181723356247, 0.19999998807907104, 0.3529411852359772, 0, 0.25, 0.3050847351551056, 0.14035087823867798 ]
SklRoy3qaN
true
[ "We introduce a systematic framework for quantifying the robustness of classifiers to naturally occurring perturbations of images found in videos." ]
[ "Structured tabular data is the most commonly used form of data in industry according to a Kaggle ML and DS Survey.", "Gradient Boosting Trees, Support Vector Machine, Random Forest, and Logistic Regression are typically used for classification tasks on tabular data.", "The recent work of Super Characters method using two-dimensional word embedding achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach.", "In this paper, we propose the SuperTML method, which borrows the idea of Super Characters method and two-dimensional embedding to address the problem of classification on tabular data.", "For each input of tabular data, the features are first projected into two-dimensional embedding like an image, and then this image is fed into fine-tuned ImageNet CNN models for classification.", "Experimental results have shown that the proposed SuperTML method have achieved state-of-the-art results on both large and small datasets.", "In data science, data is categorized into structured data and unstructured data.", "Structured data is also known as tabular data, and the terms will be used interchangeably.", "Anthony Goldbloom, the founder and CEO of Kaggle observed that winning techniques have been divided by whether the data was structured or unstructured BID12 .", "Currently, DNN models are widely applied for usage on unstructured data such as image, speech, and text.", "According to Anthony, \"When the data is unstructured, its definitely CNNs and RNNs that are carrying the day\" BID12 .", "The successful CNN model in the ImageNet competition BID8 has outperformed human Preliminary work.", "Under review by the International Conference on Machine Learning (ICML).", "Do not distribute.for image classification task by ResNet BID6 since 2015.On the other side of the spectrum, machine learning models such as Support Vector Machine (SVM), Gradient Boosting Trees (GBT), Random Forest, and Logistic Regression, have been used to process structured data.", "According to a recent survey of 14,000 data scientists by Kaggle (2017) , a subdivision of structured data known as relational data is reported as the most popular type of data in industry, with at least 65% working daily with relational data.", "Regarding structured data competitions, Anthony says that currently XGBoost is winning practically every competition in the structured data category BID4 .", "XGBoost BID2 is one popular package implementing the Gradient Boosting method.Recent research has tried using one-dimensional embedding and implementing RNNs or one-dimensional CNNs to address the TML (Tabular Machine Learning) tasks, or tasks that deal with structured data processing BID7 BID11 , and also categorical embedding for tabular data with categorical features BID5 .", "However, this reliance upon onedimensional embeddings may soon come to change.", "Recent NLP research has shown that the two-dimensional embedding of the Super Characters method BID9 is capable of achieving state-of-the-art results on large dataset benchmarks.", "The Super Characters method is a two-step method that was initially designed for text classification problems.", "In the first step, the characters of the input text are drawn onto a blank image.", "In the second step, the image is fed into two-dimensional CNN models for classification.", "The two-dimensional CNN models are trained by fine-tuning from pretrained models on large image dataset, e.g. 
ImageNet.In this paper, we propose the SuperTML method, which borrows the concept of the Super Characters method to address TML problems.", "For each input, tabular features are first projected onto a two-dimensional embedding and fed into fine-tuned two-dimensional CNN models for classification.", "The proposed SuperTML method handles the categorical type and missing values in tabular data automatically, without need for explicit conversion into numerical type values.", "The proposed SuperTML method borrows the idea of twodimensional embedding from Super Characters and transfers the knowledge learned from computer vision to the structured tabular data.", "Experimental results shows that the proposed SuperTML method has achieved state-of-the-art results on both large and small tabular dataset TAB2" ]
ImageNet. In this paper, we propose the SuperTML method, which borrows the concept of the Super Characters method to address TML problems.", "For each input, tabular features are first projected onto a two-dimensional embedding and fed into fine-tuned two-dimensional CNN models for classification.", "The proposed SuperTML method handles categorical features and missing values in tabular data automatically, without the need for explicit conversion into numerical values.", "The proposed SuperTML method borrows the idea of two-dimensional embedding from Super Characters and transfers the knowledge learned from computer vision to structured tabular data.", "Experimental results show that the proposed SuperTML method has achieved state-of-the-art results on both large and small tabular datasets TAB2" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.12121211737394333, 0.1818181723356247, 0.05405404791235924, 0.10526315122842789, 0.1904761791229248, 0, 0.1818181723356247, 0.1428571343421936, 0.1111111044883728, 0.13333332538604736, 0.06451612710952759, 0.2222222238779068, 0, 0.178571417927742, 0.08888888359069824, 0.12903225421905518, 0.17241379618644714, 0, 0, 0.0714285671710968, 0, 0.1538461446762085, 0.12244897335767746, 0.1818181723356247, 0.17142856121063232, 0.2222222238779068, 0.0624999962747097 ]
r1MCjkn5pV
true
[ "Deep learning for structured tabular data machine learning using pre-trained CNN model from ImageNet." ]
[ "Learning rich representations from predictive learning without labels has been a longstanding challenge in the field of machine learning.", "Generative pre-training has so far not been as successful as contrastive methods in modeling representations of raw images.", "In this paper, we propose a neural architecture for self-supervised representation learning on raw images called the PatchFormer which learns to model spatial dependencies across patches in a raw image.", "Our method learns to model the conditional probability distribution of missing patches given the context of surrounding patches.", "We evaluate the utility of the learned representations by fine-tuning the pre-trained model on low data-regime classification tasks.", "Specifically, we benchmark our model on semi-supervised ImageNet classification which has become a popular benchmark recently for semi-supervised and self-supervised learning methods.", "Our model is able to achieve 30.3% and 65.5% top-1 accuracies when trained only using 1% and 10% of the labels on ImageNet showing the promise for generative pre-training methods.", "Deep neural networks are capable of learning rich abstract representations from raw high dimensional data in an end-to-end fashion (LeCun et al., 2015) .", "A big weakness of these neural networks is the reliance on abundant labeled datasets.", "Self-supervised and unsupervised representation learning approaches have been proposed to address this problem (Bengio et al., 2007) .", "It is still an open problem in the field to figure out how to take advantage of large unlabeled datasets, use them for learning rich representations and improving the data-efficiency of supervised learning systems.", "A classic example of successful unsupervised learning of rich representations is word2vec (Mikolov et al., 2013) where the authors showed that distributed vector representations of words could be learned by contrastively predicting the neighboring words given surrounding words.", "The shift from word embeddings to sequence embeddings in recent times began when (Dai & Le, 2015) showed that pre-trained sequence to sequence autoencoders on text corpora could be useful for a number of downstream tasks such as text classification and sentiment analysis.", "Followed by this, it was shown in (Peters et al., 2018 ) that language modeling is useful in providing deep contextual sentence embeddings that could be fine-tuned on a number of natural language understanding tasks.", "(Howard & Ruder, 2018 ) is another example of such a success.", "In more recent times, the transformer (Vaswani et al., 2017) has emerged as a powerful architecture to model complex dependencies across a long sequence using global self-attention.", "OpenAI Generative Pre-Training (GPT) (Radford et al., 2018) showed that training large Transformer models on BooksCorpus could lead to rich and useful representations that could be fine-tuned on a variety of downstream tasks covering language understanding, commonsense reasoning and question-answering.", "The biggest success in unsupervised pre-training was achieved by BERT (Devlin et al., 2018) where the assumption for using causal language modeling was pointed out as unnecessary and it was shown that training deep transformers in a bi-directional fashion to perform the objective of masked language modeling and next sentence prediction could lead to rich and useful representations covering a wide span of natural language understanding downstream tasks.", "Therefore, it is useful to address the 
following question: How do we translate the successes of masked language modeling and deep transformers to images?", "Unlike language which is a layer of abstraction to be able to understand the world and communicate thoughts, images are raw sensory observations.", "It is therefore much harder to model the relationship across pixels both spatially and temporally simply because the dimensionality is much higher.", "Let's first look at the question of whether generative pre-training is well suited for images or not.", "There is a belief that generative approaches are more suited to abstract inputs such as language wordpieces but not for less abstract entities like pixels or audio waveform bits (van den Oord et al., 2018; Hjelm et al., 2018; Bachman et al., 2019; Trinh et al., 2019) .", "While it may as well turn out to be true, it is useful to investigate how far we could push generative approaches for pre-training even on domains they are not well suited for, such as images.", "A successful example of such an approach is the adversarial method BiGAN (Donahue et al., 2016; Donahue & Simonyan, 2019) .", "While BiGAN (and BigBiGAN) are meant for learning useful highlevel representations of raw images, they still retain the generative modeling aspect of unsupervised learning by learning to jointly model an encoder and a generator using the generative adversarial loss.", "On the other hand, there has been incredible progress in recent years in generative modeling of raw pixels and audio waveforms using maximum likelihood.", "Beginning with (Oord et al., 2016b), we have seen successes in generating diverse images by modeling the conditional distribution of pixels given context of neighboring pixels.", "WaveNet (Oord et al., 2016a ) is an example of successful deployment of such techniques for modeling the distribution of raw audio waveforms when conditioned on text.", "(Kalchbrenner et al., 2017 ) adopt a similar technique for generating future frames of a video conditioned on the past.", "More recently, (Child et al., 2019 ) have pushed on using strided self-attention to achieve high-quality unconditional samples of ImageNet building upon successes of (Parmar et al., 2018) and (Menick & Kalchbrenner, 2018) .", "Therefore, it is very reasonable to ask ourselves the following question: If generative models can work on such high dimensional data, is it necessarily the case that they would be ill-suited from a representation learning perspective?", "If no, how do we leverage these successes for representation learning?", "Further, how do we take inspiration from the big representation learning successes in natural language processing (Devlin et al., 2018) and the generative modeling successes for images and audio and design a representation learning approach for images?", "As far as representation learning on images goes, the state-of-the-art systems at the moment are contrastive methods.", "Specifically, Contrastive Predictive Coding (CPC) (van den Oord et al., 2018) which learns to contrastively predict the future given the past by sampling negatives across and between sequences has been shown to be a universally powerful representation learning approach for multiple modalities (audio, images, text, control) .", "(Hénaff et al., 2019) and (Bachman et al., 2019) achieve impressive linear classifier probe metrics for their representations that were trained contrastively to maximize mutual information across views and space.", "(Hénaff et al., 2019) also show that these representations could be used 
for downstream tasks such as semi-supervised image classification in the low-data regime going on to record impressive results in the 1% and 10% ImageNet classification.", "While such impressive results have been shown using the contrastive methods, methods of such quality for generative approaches are ye to be shown on images.", "Secondly, CPC and related methods adopt convolutional architectures for learning the representations.", "We believe it is worth the research effort to investigate architectures that incorporate self-attention so that we could translate language domain's success to other domains.", "Stand-Alone Self-Attention (Ramachandran et al., 2019) has shown that self-attentive architectures could be designed to match convolutional architectures on image classification and object detection.", "Such a result is promising in the sense that we now know that self-attentive architectures are not a limiting factor for downstream classification performance.", "In this paper, we attempt to inspire from a few key engineering deicisons that have benefitted the various successful approaches discussed above to motivate our design of a generative pre-training method for images.", "1. Predicting subscales and low-bit depth for pixels: (Menick & Kalchbrenner, 2018) showed that modeling pixels by sequentially modeling the subscales and low-bit depth versions of the raw image is extremely useful.", "(Oord et al., 2016a ) also attempted to initially model 8-bit audio rather than 16-bit.", "Therefore, it makes sense to model the only the most significant few bits while attempting to decode pixels for representation learning.", "Higher order bits are more relevant for texture and finer-details and may not be crucial for representation learning performance.", "2. Use of self-attention for aggregating global context: Self-Attention (Vaswani et al., 2017 ) is an extremely powerful approach for aggregating global contextual representations across large sequences.", "The adoption of self-attention for images began with (Wang et al., 2018) who used non-local layers for activity recognition.", "(Zhang et al., 2018) and (Brock et al., 2018 ) exploit non-local layers for high-fidelity image generation.", "has also shown that self-attention can be used to good effect for modeling distribution of latents for likelihood-based image generation while (Parmar et al., 2018; Menick & Kalchbrenner, 2018; Child et al., 2019) are examples for self-attentive density models.", "3. Learning spatial dependencies across patches: CPC learns to spatially predict neighboring patches given context of surrounding patches.", "Image Transformers (Parmar et al., 2018) adopts self-attention that takes into account local as well as global dependencies behaving like a patch-based generative model.", "(Menick & Kalchbrenner, 2018) explot modeling spatial PixelCNNs over subscales for global image dependencies.", "(Trinh et al., 2019) attempt to modify CPC for image representation learning by using the patch-based data extraction and modeling dependencies in a BERT-like fashion using self-attention.", "Our key contributions are as follows:", "1. 
We propose a new architecture, PatchFormer, for modeling bi-directional dependencies across patches.", "Our architecture learning to decode missing patches in an image by extracting represenstations of the given patches, using attention-pooling to aggregate the context, and decode the low-bit grayscale sub-sampled versions of the missing patches.", "Specifically, we decode only the 2-bit grayscale version of the missing patch.", "2. We show that our model could be pre-trained on the unsupervised objective of decoding missing patches and fine-tuned on downstream low-data regime classification tasks.", "3. We achieve somewhat competitive downstream ImageNet classification results with CPC (Hénaff et al., 2019) and are surprisingly even better than the other contrastive approach for semi-supervised downstream classification, Selfie (Trinh et al., 2019) in spite of adopting a generative approach.", "We have proposed a new architecture for generative pre-training on images called the PatchFormer.", "We highlighted the key tricks to making our model learn useful representations for downstream classification tasks in spite of decoding pixels.", "We have shown that we are competitive with state-ofthe-art contrastive pre-training methods such as CPC on the low data-regime ImageNet classification benchmark." ]
We propose a new architecture, PatchFormer, for modeling bi-directional dependencies across patches.", "Our architecture learns to decode missing patches in an image by extracting representations of the given patches, using attention-pooling to aggregate the context, and decoding the low-bit grayscale sub-sampled versions of the missing patches.", "Specifically, we decode only the 2-bit grayscale version of the missing patch.", "2. We show that our model could be pre-trained on the unsupervised objective of decoding missing patches and fine-tuned on downstream low-data-regime classification tasks.", "3. We achieve results competitive with CPC (Hénaff et al., 2019) on downstream ImageNet classification and, surprisingly, outperform the other contrastive approach for semi-supervised downstream classification, Selfie (Trinh et al., 2019), despite adopting a generative approach.", "We have proposed a new architecture for generative pre-training on images called the PatchFormer.", "We highlighted the key tricks that make our model learn useful representations for downstream classification tasks in spite of decoding pixels.", "We have shown that we are competitive with state-of-the-art contrastive pre-training methods such as CPC on the low-data-regime ImageNet classification benchmark." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0714285671710968, 0.07407406717538834, 0.2631579041481018, 0, 0.07692307233810425, 0.20000000298023224, 0.09999999403953552, 0.05882352590560913, 0.0833333283662796, 0.1428571343421936, 0.14999999105930328, 0.04651162400841713, 0.0833333283662796, 0.04651162400841713, 0, 0, 0.04255318641662598, 0.02985074371099472, 0, 0.0624999962747097, 0.06896550953388214, 0.14814814925193787, 0.07999999821186066, 0.1428571343421936, 0, 0.13636362552642822, 0.060606054961681366, 0.11428570747375488, 0.1111111044883728, 0.13333332538604736, 0.04999999701976776, 0.23255813121795654, 0.1904761791229248, 0.19512194395065308, 0.307692289352417, 0.1090909093618393, 0.054054051637649536, 0.08888888359069824, 0.1818181723356247, 0.1818181723356247, 0, 0.05882352590560913, 0.0624999962747097, 0.09756097197532654, 0.1111111044883728, 0, 0.27586206793785095, 0.2222222238779068, 0.05714285373687744, 0.13793103396892548, 0.07692307233810425, 0.08888888359069824, 0, 0, 0.0833333283662796, 0.1621621549129486, 0, 0.08695651590824127, 0.0555555522441864, 0, 0.05882352590560913, 0.04255318641662598, 0.25, 0.12903225421905518, 0.0624999962747097 ]
SJg1lxrYwS
true
[ "Decoding pixels can still work for representation learning on images" ]
[ "Adaptive regularization methods pre-multiply a descent direction by a preconditioning matrix.", "Due to the large number of parameters of machine learning problems, full-matrix preconditioning methods are prohibitively expensive.", "We show how to modify full-matrix adaptive regularization in order to make it practical and effective.", "We also provide novel theoretical analysis\n", "for adaptive regularization in non-convex optimization settings.", "The core of our algorithm, termed GGT, consists of efficient inverse computation of square roots of low-rank matrices.", "Our preliminary experiments underscore improved convergence rate of GGT across a variety of synthetic tasks and standard deep learning benchmarks.", "Stochastic gradient descent is the workhorse behind the recent deep learning revolution.", "This simple and age-old algorithm has been supplemented with a variety of enhancements to improve its practical performance, and sometimes its theoretical guarantees.Amongst the acceleration methods there are three main categories: momentum, adaptive regularization, and variance reduction.", "Momentum (in its various incarnations, like heavy-ball or Nesterov acceleration) is the oldest enhancement.", "It has a well-developed theory, and is known to improve practical convergence in a variety of tasks, small and large.", "It is also easy to implement.", "Variance reduction is the most recent advancement; in theory and practice, it is mostly applicable to convex optimization, and is thus less influential in deep learning.This brings us to adaptive regularization: the most sophisticated, hard to implement, and debated acceleration method.", "While state-of-the-art optimizers such as Adam and AdaGrad (Kingma & Ba, 2014; BID13 do use adaptive regularization, they do so in a very limited form: with diagonal matrices, often marketed as per-coordinate adaptive learning-rate methods.", "Despite solid theoretical guarantees, the practical value of diagonal adaptive regularization as compared to \"vanilla\" SGD has been the subject of much debate BID48 .", "However, the efficacy of full-matrix adaptive regularization has been relatively unexplored.", "This is due to the prohibitive computational cost associated with full-matrix operations: full AdaGrad requires taking the inverse square root of a large matrix.In this paper, we present GGT, a practical solution to the computational problems plaguing fullmatrix adaptive regularization, making this technique scalable for modern deep models.", "At the heart of our method is a simple, GPU-friendly way to apply the inverse square root of the low-rank second-moment matrix of recent gradients; see FIG0 .", "GGT's running time is comparable to state-of-the-art optimizers.We proceed to show that full-matrix preconditioning allows for much better exploitation of anisotropic curvature in loss landscapes.", "First, we show synthetic experiments which demonstate clear benefits of GGT over baselines, especially when the problem is ill-conditioned.", "Then, we implement GGT at scale, and show that the benefits translate to faster training on standard deep learning benchmarks.", "Our improvement is most salient in complicated landscapes like RNN training.Our algorithm comes with theoretical guarantees.", "We give the first proof of convergence to firstorder critical points for an algorithm with adaptive regularization in a stochastic non-convex setting, featuring a rate which is dependent on an adaptive ratio.", "We show examples where our bound is stronger than 
that for SGD, providing some theoretical basis for our empirical findings.", "This work investigates full-matrix adaptive regularization: our main contribution is to make this technique viable for large-scale optimization, by a method for efficient multiplication by the inverse square root of a full second-moment matrix over a short window of gradients.", "This leads to a new algorithm, GGT, a truly scalable optimization algorithm with full-matrix adaptive preconditioning.Through synthetic experiments, we have shown that GGT accelerates optimization in ill-conditioned loss landscapes; this is supported by accompanying adaptive convergence guarantees.", "Preliminary experiments show accelerated convergence on standard deep learning benchmarks, with very different training dynamics from existing diagonal adaptive methods.", "We accompany our algorithm and experiments with the first theoretical characterization of the benefits of adaptive regularization in a non-convex setting.", "We hope that GGT will be the first of a new class of algorithms for the modern large-scale optimization toolbox, and to foster new discussion towards an ever-elusive understanding of loss landscapes in deep learning." ]
that for SGD, providing some theoretical basis for our empirical findings.", "This work investigates full-matrix adaptive regularization: our main contribution is to make this technique viable for large-scale optimization, via a method for efficient multiplication by the inverse square root of a full second-moment matrix over a short window of gradients.", "This leads to a new algorithm, GGT, a truly scalable optimization algorithm with full-matrix adaptive preconditioning. Through synthetic experiments, we have shown that GGT accelerates optimization in ill-conditioned loss landscapes; this is supported by accompanying adaptive convergence guarantees.", "Preliminary experiments show accelerated convergence on standard deep learning benchmarks, with very different training dynamics from existing diagonal adaptive methods.", "We accompany our algorithm and experiments with the first theoretical characterization of the benefits of adaptive regularization in a non-convex setting.", "We hope that GGT will be the first of a new class of algorithms for the modern large-scale optimization toolbox, and that it will foster new discussion towards an ever-elusive understanding of loss landscapes in deep learning." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.0714285671710968, 0.14814814925193787, 0, 0.42105263471603394, 0, 0, 0, 0.08510638028383255, 0, 0, 0, 0.08888888359069824, 0.09090908616781235, 0.05882352590560913, 0.17391303181648254, 0.1818181723356247, 0, 0.10810810327529907, 0, 0, 0.0714285671710968, 0.24390242993831635, 0.06666666269302368, 0.12765957415103912, 0.25531914830207825, 0.1249999925494194, 0.19354838132858276, 0.09302325546741486 ]
rkxd2oR9Y7
true
[ "fast, truly scalable full-matrix AdaGrad/Adam, with theory for adaptive stochastic non-convex optimization" ]
[ "Dialogue systems require a great deal of different but complementary expertise to assist, inform, and entertain humans.", "For example, different domains (e.g., restaurant reservation, train ticket booking) of goal-oriented dialogue systems can be viewed as different skills, and so does ordinary chatting abilities of chit-chat dialogue systems.", "In this paper, we propose to learn a dialogue system that independently parameterizes different dialogue skills, and learns to select and combine each of them through Attention over Parameters (AoP).", "The experimental results show that this approach achieves competitive performance on a combined dataset of MultiWOZ (Budzianowski et al., 2018), In-Car Assistant (Eric et al.,2017), and Persona-Chat (Zhang et al., 2018).", "Finally, we demonstrate that each dialogue skill is effectively learned and can be combined with other skills to produce selective responses.", "Unlike humans who can do both, goal-oriented dialogues (Williams & Young, 2007; Young et al., 2013) and chit-chat conversations (Serban et al., 2016a; Vinyals & Le, 2015) are often learned with separate models.", "A more desirable approach for the users would be to have a single chat interface that can handle both casual talk and tasks such as reservation or scheduling.", "This can be formulated as a problem of learning different conversational skills across multiple domains.", "A skill can be either querying a database, generating daily conversational utterances, or interacting with users in a particular task-domain (e.g. booking a restaurant).", "One challenge of having multiple skills is that existing datasets either focus only on chit-chat or on goal-oriented dialogues.", "This is due to the fact that traditional goal-oriented systems are modularized (Williams & Young, 2007; Hori et al., 2009; Lee et al., 2009; Levin et al., 2000; Young et al., 2013) ; thus, they cannot be jointly trained with end-to-end architecture as in chit-chat.", "However, recently proposed end-to-end trainable models Wu et al., 2019; Reddy et al., 2018; Yavuz et al., 2018) and datasets (Bordes & Weston, 2017; allow us to combine goal-oriented (Budzianowski et al., 2018; and chit-chat (Zhang et al., 2018) into a single benchmark dataset with multiple conversational skills as shown in Table 1.", "A straight forward solution would be to have a single model for all the conversational skills, which has shown to be effective to a certain extent by (Zhao et al., 2017) and (McCann et al., 2018) .", "Putting aside the performance in the tasks, such fixed shared-parameter framework, without any task-specific designs, would lose controllability and interpretability in the response generation.", "In this paper, instead, we propose to model multiple conversational skills using the Mixture of Experts (MoE) (Jacobs et al., 1991) paradigm, i.e., a model that learns and combine independent specialized experts using a gating function.", "For instance, each expert could specialize in different dialogues domains (e.g., Hotel, Train, ChitChat etc.) 
and skills (e.g., generate SQL query).", "A popular implementation of MoE ) uses a set of linear transformation (i.e., experts) in between two LSTM (Schmidhuber, 1987) layers.", "However, several problems arise with this implementation:", "1) the model is computationally expensive as it has to decode multiple times each expert and make the combination at the representation-level;", "2) no prior knowledge is injected in the expert selection (e.g., domains);", "3) Seq2Seq model has limited ability in extracting information from a Knowledge Base (KB) (i.e., generated by the SQL query) , as required in end-to-end task-oriented dialogues Table 1 : An example from the dataset which includes both chit-chat and task-oriented conversations.", "The model has to predict all the Sys turn, which includes SQL query and generating response from a the Memory content, which is dynamically updated with the queries results.", "The skills are the prior knowledge needed for the response, where Persona refers to chit-chat.", "Spk.", "Conversation Skills Usr: Can you help me find a cheap 2 star hotel?", "In this paper, we propose a novel way to train a single end-to-end dialogue model with multiple composable and interpretable skills.", "Unlike previous work, that mostly focused on the representationlevel mixing , our proposed approach, Attention over Parameters, learns how to softly combine independent sets of specialized parameters (i.e., making SQL-Query, conversing with consistent persona, etc.) into a single set of parameters.", "By doing so, we not only achieve compositionality and interpretability but also gain algorithmically faster inference speed.", "To train and evaluate our model, we organize a multi-domain task-oriented datasets into end-to-end trainable formats and combine it with a conversational dataset (i.e. Persona-Chat).", "Our model learns to consider each task and domain as a separate skill that can be composed with each other, or used independently, and we verify the effectiveness of the interpretability and compositionality with competitive experimental results and thorough analysis." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2222222238779068, 0.178571417927742, 0.9818181991577148, 0.17543859779834747, 0.2448979616165161, 0.03389830142259598, 0.1428571343421936, 0.1395348757505417, 0.039215680211782455, 0.08695651590824127, 0.05970148742198944, 0.11267605423927307, 0.1355932205915451, 0.04081632196903229, 0.380952388048172, 0.11764705181121826, 0.07999999821186066, 0.05714285373687744, 0.1249999925494194, 0, 0.05970148742198944, 0.1111111044883728, 0.0476190447807312, 0.04878048226237297, 0.375, 0.23188404738903046, 0.08888888359069824, 0.1538461446762085, 0.25806450843811035 ]
BJepraEFPr
true
[ "In this paper, we propose to learn a dialogue system that independently parameterizes different dialogue skills, and learns to select and combine each of them through Attention over Parameters (AoP). " ]
[ "Model distillation aims to distill the knowledge of a complex model into a simpler one.", "In this paper, we consider an alternative formulation called dataset distillation: we keep the model fixed and instead attempt to distill the knowledge from a large training dataset into a small one.", "The idea is to synthesize a small number of data points that do not need to come from the correct data distribution, but will, when given to the learning algorithm as training data, approximate the model trained on the original data.", "For example, we show that it is possible to compress 60,000 MNIST training images into just 10 synthetic distilled images (one per class) and achieve close to the original performance, given a fixed network initialization.", "We evaluate our method in various initialization settings. ", "Experiments on multiple datasets, MNIST, CIFAR10, PASCAL-VOC, and CUB-200, demonstrate the ad-vantage of our approach compared to alternative methods. ", "Finally, we include a real-world application of dataset distillation to the continual learning setting: we show that storing distilled images as episodic memory of previous tasks can alleviate forgetting more effectively than real images.", "proposed network distillation as a way to transfer the knowledge from an ensemble of many separately-trained networks into a single, typically compact network, performing a type of model compression.", "In this paper, we are considering a related but orthogonal task: rather than distilling the model, we propose to distill the dataset.", "Unlike network distillation, we keep the model fixed but encapsulate the knowledge of the entire training dataset, which typically contains thousands to millions of images, into a small number of synthetic training images.", "We show that we can go as low as one synthetic image per category, training the same model to reach surprisingly good performance on these synthetic images.", "For example, in Figure 1a , we compress 60, 000 training images of MNIST digit dataset into only 10 synthetic images (one per category), given a fixed network initialization.", "Training the standard LENET on these 10 images yields test-time MNIST recognition performance of 94%, compared to 99% for the original dataset.", "For networks with unknown random weights, 100 synthetic images train to 89%.", "We name our method Dataset Distillation and these images distilled images.", "But why is dataset distillation interesting?", "First, there is the purely scientific question of how much data is encoded in a given training set and how compressible it is?", "Second, we wish to know whether it is possible to \"load up\" a given network with an entire dataset-worth of knowledge by a handful of images.", "This is in contrast to traditional training that often requires tens of thousands of data samples.", "Finally, on the practical side, dataset distillation enables applications that require compressing data with its task.", "We demonstrate that under the continual learning setting, storing distilled images as memory of past task and data can alleviate catastrophic forgetting (McCloskey and Cohen, 1989) .", "A key question is whether it is even possible to compress a dataset into a small set of synthetic data samples.", "For example, is it possible to train an image classification model on synthetic images that are not on the manifold of natural images?", "Conventional wisdom would suggest that the answer is no, as the synthetic training data may not follow the same distribution of the real 
test data.", "Yet, in this work, we show that this is indeed possible.", "We present an optimization algorithm for synthesizing a small number of synthetic data samples not only capturing much of the original training data but also tailored explicitly for fast model training with only a few data point.", "To achieve our goal, we first derive the network weights as a We distill the knowledge of tens of thousands of images into a few synthetic training images called distilled images.", "On MNIST, 100 distilled images can train a standard LENET with a random initialization to 89% test accuracy, compared to 99% when fully trained.", "On CIFAR10, 100 distilled images can train a network with a random initialization to 41% test accuracy, compared to 80% when fully trained.", "In Section 3.6, we show that these distilled images can efficiently store knowledge of previous tasks for continual learning.", "differentiable function of our synthetic training data.", "Given this connection, instead of optimizing the network weights for a particular training objective, we optimize the pixel values of our distilled images.", "However, this formulation requires access to the initial weights of the network.", "To relax this assumption, we develop a method for generating distilled images for randomly initialized networks.", "To further boost performance, we propose an iterative version, where the same distilled images are reused over multiple gradient descent steps so that the knowledge can be fully transferred into the model.", "Finally, we study a simple linear model, deriving a lower bound on the size of distilled data required to achieve the same performance as training on the full dataset.", "We demonstrate that a handful of distilled images can be used to train a model with a fixed initialization to achieve surprisingly high performance.", "For networks pre-trained on other tasks, our method can find distilled images for fast model fine-tuning.", "We test our method on several initialization settings: fixed initialization, random initialization, fixed pre-trained weights, and random pre-trained weights.", "Extensive experiments on four publicly available datasets, MNIST, CIFAR10, PASCAL-VOC, and CUB-200, show that our approach often outperforms existing methods.", "Finally, we demonstrate that for continual learning methods that store limited-size past data samples as episodic memory (Lopez-Paz and Ranzato, 2017; Kirkpatrick et al., 2017) , storing our distilled data instead is much more effective.", "Our distilled images contain richer information about the past data and tasks, and we show experimental evidence on standard continual learning benchmarks.", "Our code, data, and models will be available upon publication.", "In this paper, we have presented dataset distillation for compressing the knowledge of entire training data into a few synthetic training images.", "We demonstrate how to train a network to reach surprisingly good performance with only a small number of distilled images.", "Finally, the distilled images can efficiently store the memory of previous tasks in the continual learning setting.", "Many challenges remain for knowledge distillation of data.", "Although our method generalizes well to random initializations, it is still limited to a particular network architecture.", "Since loss surfaces for different architectures might be drastically different, a more flexible method of applying the distilled data may overcome this difficulty.", "Another limitation is the increasing 
computation and memory requirements for finding the distilled data as the number of images and steps increases.", "To compress large-scale datasets such as ImageNet, we may need first-order gradient approximations to make the optimization computationally feasible.", "Nonetheless, we are encouraged by the findings in this paper on the possibilities of training large models with a few distilled data, leading to potential applications such as accelerating network evaluation in neural architecture search (Zoph and Le, 2017) .", "We believe that the ideas developed in this work might give new insights into the quantity and type of data that deep networks are able to process, and hopefully inspire others to think along this direction." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2857142686843872, 0.2857142686843872, 0.2545454502105713, 0.25925925374031067, 0.13333332538604736, 0.1463414579629898, 0.23076923191547394, 0.21276594698429108, 0.24390242993831635, 0.2448979616165161, 0.260869562625885, 0.20408162474632263, 0.2380952388048172, 0.24242423474788666, 0.06451612710952759, 0.07407406717538834, 0.1904761791229248, 0.13636362552642822, 0.2222222238779068, 0.1621621549129486, 0.21739129722118378, 0.44999998807907104, 0.23255813121795654, 0.1904761791229248, 0.06451612710952759, 0.2745097875595093, 0.260869562625885, 0.1860465109348297, 0.1904761791229248, 0.1463414579629898, 0.2142857164144516, 0.0952380895614624, 0.1249999925494194, 0.1111111044883728, 0.15686273574829102, 0.260869562625885, 0.380952388048172, 0.10810810327529907, 0.0555555522441864, 0.04878048226237297, 0.07407406717538834, 0.0476190410554409, 0, 0.2857142686843872, 0.3589743673801422, 0.1111111044883728, 0.13793103396892548, 0.10810810327529907, 0.13636362552642822, 0.09999999403953552, 0.04999999329447746, 0.13793103396892548, 0.26923075318336487 ]
ryxO3gBtPB
true
[ "We propose to distill a large dataset into a small set of synthetic data that can train networks close to original performance. " ]
[ "We relate the minimax game of generative adversarial networks (GANs) to finding the saddle points of the Lagrangian function for a convex optimization problem, where the discriminator outputs and the distribution of generator outputs play the roles of primal variables and dual variables, respectively.", "This formulation shows the connection between the standard GAN training process and the primal-dual subgradient methods for convex optimization.", "The inherent connection does not only provide a theoretical convergence proof for training GANs in the function space, but also inspires a novel objective function for training.", "The modified objective function forces the distribution of generator outputs to be updated along the direction according to the primal-dual subgradient methods.", "A toy example shows that the proposed method is able to resolve mode collapse, which in this case cannot be avoided by the standard GAN or Wasserstein GAN.", "Experiments on both Gaussian mixture synthetic data and real-world image datasets demonstrate the performance of the proposed method on generating diverse samples.", "Generative adversarial networks (GANs) are a class of game theoretical methods for learning data distributions.", "It trains the generative model by maintaining two deep neural networks, namely the discriminator network D and the generator network G. The generator aims to produce samples resembling real data samples, while the discriminator aims to distinguish the generated samples and real data samples.The standard GAN training procedure is formulated as the following minimax game: DISPLAYFORM0 where p d (x) is the data distribution and p z (z) is the noise distribution.", "The generated samples G(z) induces a generated distribution p g (x).", "Theoretically, the optimal solution to (1) is p * g = p d and D * (x) = 1/2 for all x in the support of data distribution.In practice, the discriminator network and the generator network are parameterized by θ θ θ d and θ θ θ g , respectively.", "The neural network parameters are updated iteratively according to gradient descent.", "In particular, the discriminator is first updated either with multiple gradient descent steps until convergence or with a single gradient descent step, then the generator is updated with a single descent step.", "However, the analysis of the convergence properties on the training approaches is challenging, as noted by Ian Goodfellow in BID10 , \"For GANs, there is no theoretical prediction as to whether simultaneous gradient descent should converge or not. Settling this theoretical question, and developing algorithms guaranteed to converge, remain important open research problems.\".", "There have been some recent studies on the convergence behaviours of GAN training (Nowozin et al., 2016; BID18 BID14 BID24 BID22 .The", "simultaneous gradient descent method is proved to converge assuming the objective function is convex-concave in the network parameters (Nowozin et al., 2016) . The", "local stability property is established in BID14 BID24 .One", "notable inconvergence issue with GAN training is referred to as mode collapse, where the generator characterizes only a few modes of the true data distribution BID11 BID18 . Various", "methods have been proposed to alleviate the mode collapse problem. Feature", "matching for intermediate layers of the discriminator has been proposed in (Salimans et al., 2016) . In BID23", ", the generator is updated based on a sequence of previous unrolled discriminators. 
A mixture", "of neural networks are used to generate diverse samples (Tolstikhin et al., 2017; BID15 BID2 . In , it was", "proposed that adding noise perturbation on the inputs to the discriminator can alleviate the mode collapse problem. It is shown", "that this training-with-noise technique is equivalent to adding a regularizer on the gradient norm of the discriminator (Roth et al., 2017) . The Wasserstein", "divergence is proposed to resolve the problem of incontinuous divergence when the generated distribution and the data distribution have disjoint supports BID12 . Mode regularization", "is used in the loss function to penalize the missing modes BID6 Srivastava et al., 2017) . The regularization", "is usually based on heuristics, which tries to minimize the distance between the data samples and the generated samples, but lacks theoretical convergence guarantee.In this paper, we formulate the minimax optimization for GAN training (1) as finding the saddle points of the Lagrangian function for a convex optimization problem. In the convex optimization", "problem, the discriminator function D(·) and the probabilities of generator outputs p g (·) play the roles of the primal variables and dual variables, respectively. This connection not only provides", "important insights in understanding the convergence of GAN training, but also enables us to leverage the primal-dual subgradient methods to design a novel objective function that helps to alleviate mode collapse. A toy example reveals that for some", "cases when standard GAN or WGAN inevitably leads to mode collapse, our proposed method can effectively avoid mode collapse and converge to the optimal point.In this paper, we do not aim at achieving superior performance over other GANs, but rather provide a new perspective of understanding GANs, and propose an improved training technique that can be applied on top of existing GANs. The contributions of the paper are", "as follows:• The standard training of GANs in the function space is formulated as primal-dual subgradient methods for solving convex optimizations.• This formulation enables us to show", "that with a proper gradient descent step size, updating the discriminator and generator probabilities according to the primal-dual algorithms will provably converge to the optimal point.• This formulation results in a novel", "training objective for the generator. With the proposed objective function,", "the generator is updated such that the probabilities of generator outputs are pushed to the optimal update direction derived by the primal-dual algorithms. Experiments have shown that this simple", "objective function can effectively alleviate mode collapse in GAN training.• The convex optimization framework incorporates", "different variants of GANs including the family of f -GAN (Nowozin et al., 2016) and an approximate variant of WGAN. 
For all these variants, the training objective can", "be improved by including the optimal update direction of the generated probabilities.", "In this paper, we propose a primal-dual formulation for generative adversarial learning.", "This formulation interprets GANs from the perspective of convex optimization, and gives the optimal update of the discriminator and the generated distribution with convergence guarantee.", "By framing different variants of GANs under the convex optimization framework, the corresponding training algorithms can all be improved by pushing the generated distribution along the optimal direction.", "Experiments on two synthetic datasets demonstrate that the proposed formulation can effectively avoid mode collapse.", "It also achieves competitive quantitative evaluation scores on two benchmark real-world image datasets.", "The proof of convergence for dual-driven algorithms can be found in (BID4, Chapter 3). The", "primal-dual-driven algorithm for continuous time update has been studied in BID8 . Here", ", we show the convergence for the discrete-time case. We choose a step size α(t) that satisfies DISPLAYFORM0 Let z(t) = [x(t), λ(t)]^T be a vector consisting of the primal and dual variables at the t-th iteration. The", "primal-dual-driven update can be expressed as: DISPLAYFORM1 where DISPLAYFORM2 and DISPLAYFORM3 Since the subgradient is bounded by assumption, there exists M > 0 such that ||T(·)||_2^2 < M, where ||·||_2", "stands for the L_2 norm." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.16326530277729034, 0.3125, 0.21052631735801697, 0.11764705181121826, 0.1463414579629898, 0.11428570747375488, 0.13333332538604736, 0.0624999962747097, 0.07999999821186066, 0.07999999821186066, 0, 0.05405404791235924, 0.0952380895614624, 0.052631575614213943, 0.05405404791235924, 0, 0.1428571343421936, 0.1538461446762085, 0.060606054961681366, 0.06666666269302368, 0, 0.12121211737394333, 0.10526315122842789, 0.05714285373687744, 0, 0.17241379618644714, 0.04999999701976776, 0.2448979616165161, 0.2631579041481018, 0.2380952388048172, 0.13636362552642822, 0.17391303181648254, 0.10256409645080566, 0.25806450843811035, 0.1463414579629898, 0, 0.37037035822868347, 0.11428570747375488, 0.09999999403953552, 0.19999998807907104, 0, 0.06666666269302368, 0.07407406717538834, 0.15686273574829102, 0.08695651590824127, 0.0952380895614624 ]
BJNRFNlRW
true
[ "We propose a primal-dual subgradient method for training GANs and this method effectively alleviates mode collapse." ]
[ "Specifying reward functions is difficult, which motivates the area of reward inference: learning rewards from human behavior.", "The starting assumption in the area is that human behavior is optimal given the desired reward function, but in reality people have many different forms of irrationality, from noise to myopia to risk aversion and beyond.", "This fact seems like it will be strictly harmful to reward inference: it is already hard to infer the reward from rational behavior, and noise and systematic biases make actions have less direct of a relationship to the reward.", "Our insight in this work is that, contrary to expectations, irrationality can actually help rather than hinder reward inference.", "For some types and amounts of irrationality, the expert now produces more varied policies compared to rational behavior, which help disambiguate among different reward parameters -- those that otherwise correspond to the same rational behavior.", "We put this to the test in a systematic analysis of the effect of irrationality on reward inference.", "We start by covering the space of irrationalities as deviations from the Bellman update, simulate expert behavior, and measure the accuracy of inference to contrast the different types and study the gains and losses.", "We provide a mutual information-based analysis of our findings, and wrap up by discussing the need to accurately model irrationality, as well as to what extent we might expect (or be able to train) real people to exhibit helpful irrationalities when teaching rewards to learners.", "The application of reinforcement learning (RL) in increasingly complex environments has been most successful for problems that are already represented by a specified reward function (Lillicrap et al., 2015; Mnih et al., 2015; .", "Unfortunately, not only do real-world tasks usually lack an explicit exogenously-specified reward function, but attempting to specify one tends to lead to unexpected side-effects as the agent is faced with new situations (Lehman et al., 2018) .", "This has motivated the area of reward inference: the process of estimating a reward function from human inputs.", "The inputs are traditionally demonstrations, leading to inverse reinforcement learning (IRL) (Ng et al., 2000; Abbeel & Ng, 2004) or inverse optimal control (IOC) (Kalman, 1964; Jameson & Kreindler, 1973; Mombaur et al., 2010; Finn et al., 2016) .", "Recent work has expanded the range of inputs significantly,to comparisons (Wirth et al., 2017; Sadigh et al., 2017; Christiano et al., 2017) , natural language instructions (MacGlashan et al., 2015; Fu et al., 2019) , physical corrections (Jain et al., 2015; Bajcsy et al., 2017) , proxy rewards Ratner et al., 2018) , or scalar reward values (Griffith et al., 2013; Loftin et al., 2014) .", "The central assumption behind these methods is that human behavior is rational, i.e. optimal with respect to the desired reward (cumulative, in expectation).", "Unfortunately, decades of research in behavioral economics and cognitive science Chipman (2014) has unearthed a deluge of irrationalities, i.e. 
of ways in which people deviate from optimal decision making: hyperbolic discounting, scope insensitivity, optimism bias, decision noise, certainty effects, loss aversion, status quo bias, etc.", "Work on reward inference has predominantly used one model of irrationality: decision-making noise, where the probability of an action relates to the value that action has.", "The most widely used model by far is a Boltzmann distribution stemming from the Luce-Shepard rule (Luce, 1959; Shepard, 1957; Lucas et al., 2009 ) and the principle of maximum (causal) entropy in (Ziebart et al., 2008) , which we will refer to as Boltzmann-rationality (Fisac et al., 2017) .", "Recent work has started to incorporate systematic biases though, like risk-aversion (Singh et al., 2017) , having the wrong dynamics belief (Reddy et al., 2018) , and myopia and hyperbolic discounting (Evans & Goodman, 2015; Evans et al., 2016) .", "Learning from irrational experts feels like a daunting task: reward inference is already hard with rational behavior, but now a learner needs to make sense of behavior that is noisy or systematically biased.", "Our goal in this work is to characterize just how muddied the waters are - how (and how much) do different irrationalities affect reward inference?", "Our insight is that, contrary to expectations, irrationality can actually help, rather than hinder, reward inference.", "Our explanation is that how good reward inference is depends on the mutual information between the policies produced by the expert and the reward parameters to be inferred.", "While it is often possible for two reward parameters to produce the same rational behavior, irrationalities can sometimes produce different behaviors that disambiguate between those same two reward parameters.", "For instance, noise can help when it is related to the value function, as Boltzmann noise is, because it distinguishes the difference in values even when the optimal action stays the same.", "Optimism can be helpful because the expert takes fewer risk-avoiding actions and acts more directly on their goal.", "Overall, we contribute", "1) an analysis and comparison of the effects of different biases on reward inference, testing our insight,", "2) a way to systematically formalize and cover the space of irrationalities in order to conduct such an analysis, and", "3) evidence for the importance of assuming the right type of irrationality during inference.", "Our good news is that irrationalities can indeed be an ally for inference.", "Of course, this is not always true - the details of which irrationality type and how much of it also matter.", "We see these results as opening the door to a better understanding of reward inference, as well as to practical ways of making inference easier by asking for the right kind of expert demonstrations - after all, in some cases it might be easier for people to act optimistically or myopically than to act rationally.", "Our results reinforce that optimal teaching is different from optimal doing, but point out that some forms of teaching might actually be easier than doing." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.0624999962747097, 0.0833333283662796, 0.1249999925494194, 0.17142856121063232, 0.1249999925494194, 0.1875, 0.1395348757505417, 0.0714285671710968, 0.08510638028383255, 0.039215680211782455, 0.12903225421905518, 0, 0, 0.05128204822540283, 0.07017543166875839, 0.10526315122842789, 0.06666666269302368, 0, 0.1702127605676651, 0, 0.1249999925494194, 0.10256409645080566, 0.09999999403953552, 0.0952380895614624, 0.1764705777168274, 0, 0.0624999962747097, 0.11764705181121826, 0.0714285671710968, 0.20689654350280762, 0.05714285373687744, 0.10169491171836853, 0.10526315122842789 ]
BJlo91BYPr
true
[ "We find that irrationality from an expert demonstrator can help a learner infer their preferences. " ]
[ "Natural Language Processing models lack a unified approach to robustness testing.", "In this paper we introduce WildNLP - a framework for testing model stability in a natural setting where text corruptions such as keyboard errors or misspelling occur.", "We compare robustness of models from 4 popular NLP tasks: Q&A, NLI, NER and Sentiment Analysis by testing their performance on aspects introduced in the framework.", "In particular, we focus on a comparison between recent state-of-the- art text representations and non-contextualized word embeddings.", "In order to improve robust- ness, we perform adversarial training on se- lected aspects and check its transferability to the improvement of models with various cor- ruption types.", "We find that the high perfor- mance of models does not ensure sufficient robustness, although modern embedding tech- niques help to improve it.", "We release cor- rupted datasets and code for WildNLP frame- work for the community.", "Adversarial examples have been shown to severely degrade performance of deep learning models BID10 BID14 .", "Natural Language Processing systems are no different in this respect.", "Multiple areas of NLP, such as machine translation BID1 , question answering BID12 , or text classification have been studied to assess the impact of adversaries generated with various methods.", "However, these works tend to focus on one area only, often with attacks designed just for the selected problem.", "It makes comparisons between models, datasets, and NLP areas impossible.", "In particular, the robustness of modern word embedding systems -such as ELMo BID17 , Flair BID0 and language model based BERT BID5 remains unstudied.In this article, we evaluate the behavior of natural language models in the wild.", "We propose WildNLP -a systematic and comprehensive robustness testing framework which can be used for any NLP model.", "Instead of focusing on elaborate attacks, which are unlikely to originate by accident, we measure the quality of models in a natural setting, where input data is poisoned with errors involuntarily generated by actual users.", "We put these notions into a set of tests called aspects.", "Moreover, we introduce the concept of corruption severity and prove that it is critical to model improvement via adversarial training.", "The framework is aimed at any NLP problem irrespective of its form of input and output.In summary, our contributions are the following:1.", "We offer a systematic framework for testing corruption robustness -the WildNLP.In total, we introduce 11 aspects of robustness testing, with multiple severity levels.", "We release the code and a collection of popular datasets that are corrupted with WildNLP for the community 1 .", "The framework is easy to extend.", "New aspects can be defined by the community.2.", "We test corruption robustness of a number of NLP tasks: question answering (Q&A), natural language inference (NLI), named entity recognition (NER), and sentiment analysis (SA).", "We verify stability of models trained on contextualized embeddings like ELMo and Flair in contrast to noncontextualized FastText BID2 and GloVe BID16 .We", "also analyze BERT in the task of Q&A. We", "find that new forms of text representation, despite greater contextual awareness, do not offer a sufficient increase in robustness.3. We", "find that model training on one aspect does improve performance on another aspect, contrary to previous studies BID1 . 
For", "this to be true, two corruption types must be similar to some extent.In section 2 we present related literature in the domain of NLP robustness. In", "section 3 we present WildNLP framework, describing in detail each introduced aspect. In", "section 4 we compare robustness of NER, Q&A, NLI and Sentiment Analysis. In", "section 5 we perform adversarial training on Qwerty aspect with different severities and test these models on other aspects. We", "conclude in section 6.", "In this work, we have presented the WildNLP framework for corruption robustness testing.", "We have introduced 11 text corruption types (at various severity levels) which can occur naturally in model deployment setting: misspellings, keyboard errors, attempts at masking emotional language, and others.", "We test on four NLP areas and 12 models in total, verifying corruption robustness of state-of-the-art BERT system and new LM-based embeddings: ELMo and Flair, contrasted with GloVe and Fasttext.", "We find that the problem of lacking corruption robustness is not solved by these recent systems.", "However, we find that the issue can be partially alleviated by adversarial training, even across aspects.", "We believe that problem of adversarial examples in NLP is still vague and hard to quantify.", "Without doubt, more work is needed to make models robust to natural noise, whether by robust word embeddings, model architectures, or better datasets." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1764705777168274, 0.04081632196903229, 0.8571428656578064, 0.09999999403953552, 0.1599999964237213, 0.1304347813129425, 0.1111111044883728, 0.15789473056793213, 0, 0.039215680211782455, 0.0476190410554409, 0.12121211737394333, 0.1428571343421936, 0.24390242993831635, 0.1428571343421936, 0.11764705181121826, 0.09302324801683426, 0.13333332538604736, 0.17391303181648254, 0.19512194395065308, 0, 0.0624999962747097, 0.25531914830207825, 0.22727271914482117, 0.1249999925494194, 0.13636362552642822, 0.09756097197532654, 0.12765957415103912, 0, 0.4444444477558136, 0.1904761791229248, 0, 0.1111111044883728, 0.07692307233810425, 0.2800000011920929, 0.20512819290161133, 0.05128204822540283, 0.20512819290161133, 0.09090908616781235 ]
SkxgBPr3iN
true
[ "We compare robustness of models from 4 popular NLP tasks: Q&A, NLI, NER and Sentiment Analysis by testing their performance on perturbed inputs." ]
[ "Training generative models like Generative Adversarial Network (GAN) is challenging for noisy data.", "A novel curriculum learning algorithm pertaining to clustering is proposed to address this issue in this paper.", "The curriculum construction is based on the centrality of underlying clusters in data points. ", "The data points of high centrality takes priority of being fed into generative models during training.", "To make our algorithm scalable to large-scale data, the active set is devised, in the sense that every round of training proceeds only on an active subset containing a small fraction of already trained data and the incremental data of lower centrality.", "Moreover, the geometric analysis is presented to interpret the necessity of cluster curriculum for generative models.", "The experiments on cat and human-face data validate that our algorithm is able to learn the optimal generative models (e.g. ProGAN) with respect to specified quality metrics for noisy data.", "An interesting finding is that the optimal cluster curriculum is closely related to the critical point of the geometric percolation process formulated in the paper.", "Deep generative models have piqued researchers' interest in the past decade.", "The fruitful progress has been achieved on this topic, such as auto-encoder (Hinton & Salakhutdinov, 2006) and variational auto-encoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) , generative adversarial network (GAN) (Goodfellow et al., 2014; , normalizing flow (Rezende & Mohamed, 2015; Dinh et al., 2015; Kingma & Dhariwal, 2018) , and autoregressive models (van den Oord et al., 2016b; a; .", "However, it is non-trivial to train a deep generative model that can converge to a proper minimum of the associated optimization.", "For example, GAN suffers non-stability, mode collapse, and generative distortion during training.", "Many insightful algorithms have been proposed to circumvent those issues, including feature engineering (Salimans et al., 2016) , various discrimination metrics (Mao et al., 2016; Berthelot et al., 2017) , distinctive gradient penalties (Gulrajani et al., 2017; Mescheder et al., 2018) , spectral normalization to discriminator (Miyato et al., 2018) , and orthogonal regularization to generator (Brock et al., 2019) .", "What is particularly of interest is that the breakthrough for GANs has been made with a simple technique of progressively growing neural networks of generators and discriminators from low-resolution images to high-resolution counterparts (Karras et al., 2018a) .", "This kind of progressive growing also helps push the state of the arts to a new level by enabling StyleGAN to produce photo-realistic and detail-sharp results (Karras et al., 2018b) , shedding new light on wide applications of GANs in solving real problems.", "This idea of progressive learning is actually a general manner of cognition process (Elman, 1993; Oudeyer et al., 2007) , which has been formally named curriculum learning in machine learning (Bengio et al., 2009) .", "The central topic of this paper is to explore a new curriculum for training deep generative models.", "To facilitate robust training of deep generative models with noisy data, we propose curriculum learning with clustering.", "The key contributions are listed as follows:", "• We first summarize four representative curricula for generative models, i.e. 
architecture (generation capacity), semantics (data content), dimension (data space), and cluster (data structure).", "Among these curricula, cluster curriculum is newly proposed in this paper.", "• Cluster curriculum is to treat data according to the centrality of each data point, which is pictorially illustrated and explained in detail.", "To foster large-scale learning, we devise the active set algorithm that only needs an active data subset of small fixed size for training.", "• The geometric principle is formulated to analyze hardness of noisy data and advantage of cluster curriculum.", "The geometry pertains to counting a small sphere packed in an ellipsoid, on which is based the percolation theory we use.", "The research on curriculum learning is diverse.", "Our work focuses on curricula that are closely related to data attributes, beyond which is not the scope we concern in this paper.", "Cluster curriculum is proposed for robust training of generative models.", "The active set of cluster curriculum is devised to facilitate scalable learning.", "The geometric principle behind cluster curriculum is analyzed in detail as well.", "The experimental results on the LSUN cat dataset and CelebA face dataset demonstrate that the generative models trained with cluster curriculum is capable of learning the optimal parameters with respect to the specified quality metric such as Fréchet inception distance and sliced Wasserstein distance.", "Geometric analysis indicates that the optimal curricula obtained from generative models are closely related to the critical points of the associated percolation processes established in this paper.", "This intriguing geometric phenomenon is worth being explored deeply in terms of the theoretical connection between generative models and high-dimensional geometry.", "It is worth emphasizing that the meaning of model optimality refers to the global minimum of the centrality-FID curve.", "As we already noted, the optimality is metric-dependent.", "We are able to obtain the optimal model with cluster curriculum, which does not mean that the algorithm only serves to this purpose.", "We know that more informative data can help learn a more powerful model covering the large data diversity.", "Here a trade-off arises, i.e. the robustness against noise and the capacity of fitting more data.", "The centrality-FID curve provides a visual tool to monitor the state of model training, thus aiding us in understanding the learning process and selecting suitable models according to noisy degree of given data.", "For instance, we can pick the trained model close to the optimal curriculum for heavily noisy data or the one near the end of the centrality-FID curve for datasets of little noise.", "In fact, this may be the most common way of using cluster curriculum.", "In this paper, we do not investigate the cluster-curriculum learning for the multi-class case, e.g. the ImageNet dataset with BigGAN (Brock et al., 2019) .", "The cluster-curriculum learning of multiple classes is more complex than that we have already analyzed on the face and cat data.", "We leave this study for future work." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.20689654350280762, 0.5161290168762207, 0.25806450843811035, 0.25806450843811035, 0.23076923191547394, 0.4516128897666931, 0.2666666507720947, 0.2702702581882477, 0.2222222238779068, 0.0615384578704834, 0.2857142686843872, 0.1428571343421936, 0.06896550953388214, 0.15686273574829102, 0.1111111044883728, 0.17777776718139648, 0.42424240708351135, 0.4375, 0, 0.05128204822540283, 0.2222222238779068, 0.277777761220932, 0.21052631735801697, 0.25, 0.1621621549129486, 0.260869562625885, 0.1538461446762085, 0.6153846383094788, 0.3571428656578064, 0.1428571343421936, 0.30188679695129395, 0.24390242993831635, 0.2702702581882477, 0.25, 0.1666666567325592, 0.1621621549129486, 0.0624999962747097, 0.1249999925494194, 0.21739129722118378, 0.1904761791229248, 0.20689654350280762, 0.09999999403953552, 0.21621620655059814, 0 ]
BklTQCEtwH
true
[ "A novel cluster-based algorithm of curriculum learning is proposed to solve the robust training of generative models." ]
[ "Backdoor attacks aim to manipulate a subset of training data by injecting adversarial triggers such that machine learning models trained on the tampered dataset will make arbitrarily (targeted) incorrect prediction on the testset with the same trigger embedded.", "While federated learning (FL) is capable of aggregating information provided by different parties for training a better model, its distributed learning methodology and inherently heterogeneous data distribution across parties may bring new vulnerabilities.", "In addition to recent centralized backdoor attacks on FL where each party embeds the same global trigger during training, we propose the distributed backdoor attack (DBA) --- a novel threat assessment framework developed by fully exploiting the distributed nature of FL.", "DBA decomposes a global trigger pattern into separate local patterns and embed them into the training set of different adversarial parties respectively.", "Compared to standard centralized backdoors, we show that DBA is substantially more persistent and stealthy against FL on diverse datasets such as finance and image data.", "We conduct extensive experiments to show that the attack success rate of DBA is significantly higher than centralized backdoors under different settings.", "Moreover, we find that distributed attacks are indeed more insidious, as DBA can evade two state-of-the-art robust FL algorithms against centralized backdoors.", "We also provide explanations for the effectiveness of DBA via feature visual interpretation and feature importance ranking.\n", "To further explore the properties of DBA, we test the attack performance by varying different trigger factors, including local trigger variations (size, gap, and location), scaling factor in FL, data distribution, and poison ratio and interval.", "Our proposed DBA and thorough evaluation results shed lights on characterizing the robustness of FL.", "Federated learning (FL) has been recently proposed to address the problems for training machine learning models without direct access to diverse training data, especially for privacy-sensitive tasks (Smith et al., 2017; McMahan et al., 2017; Zhao et al., 2018) .", "Utilizing local training data of participants (i.e., parties), FL helps train a shared global model with improved performance.", "There have been prominent applications and ever-growing trends in deploying FL in practice, such as loan status prediction, health situation assessment (e.g. potential cancer risk assessment), and next-word prediction while typing (Hard et al., 2018; Yang et al., 2018; 2019) .", "Although FL is capable of aggregating dispersed (and often restricted) information provided by different parties to train a better model, its distributed learning methodology as well as inherently heterogeneous (i.e., non-i.i.d.) 
data distribution across different parties may unintentionally provide a venue to new attacks.", "In particular, the fact of limiting access to individual party's data due to privacy concerns or regulation constraints may facilitate backdoor attacks on the shared model trained with FL.", "Backdoor attack is a type of data poisoning attacks that aim to manipulate a subset of training data such that machine learning models trained on the tampered dataset will be vulnerable to the test set with similar trigger embedded (Gu et al., 2019) .", "Backdoor attacks on FL have been recently studied in (Bagdasaryan et al., 2018; Bhagoji et al., 2019) .", "However, current attacks do not fully exploit the distributed learning methodology of FL, as they embed the same global trigger pattern to all adversarial parties.", "We call such attacking scheme Figure 1: Overview of centralized and distributed backdoor attacks (DBA) on FL.", "The aggregator at round t + 1 combines information from local parties (benign and adversarial) in the previous round t, and update the shared model G t+1 .", "When implementing backdoor attacks, centralized attacker uses a global trigger while distributed attacker uses a local trigger which is part of the global one.", "centralized backdoor attack.", "Leveraging the power of FL in aggregating dispersed information from local parties to train a shared model, in this paper we propose distributed backdoor attack (DBA) against FL.", "Given the same global trigger pattern as the centralized attack, DBA decomposes it into local patterns and embed them to different adversarial parties respectively.", "A schematic comparison between the centralized and distributed backdoor attacks is illustrated in Fig.1 .", "Through extensive experiments on several financial and image datasets and in-depth analysis, we summarize our main contributions and findings as follows.", "• We propose a novel distributed backdoor attack strategy DBA on FL and show that DBA is more persistent and effective than centralized backdoor attack.", "Based on extensive experiments, we report a prominent phenomenon that although each adversarial party is only implanted with a local trigger pattern via DBA, their assembled pattern (i.e., global trigger) attains significantly better attack performance on the global model compared with the centralized attack.", "The results are consistent across datasets and under different attacking scenarios such as one-time (single-shot) and continuous (multiple-shot) poisoning settings.", "To the best of our knowledge, this paper is the first work studying distributed backdoor attacks.", "• When evaluating the robustness of two recent robust FL methods against centralized backdoor attack (Fung et al., 2018; Pillutla et al., 2019) , we find that DBA is more effective and stealthy, as its local trigger pattern is more insidious and hence easier to bypass the robust aggregation rules.", "• We provide in-depth explanations for the effectiveness of DBA from different perspectives, including feature visual interpretation and feature importance ranking.", "• We perform comprehensive analysis and ablation studies on several trigger factors in DBA, including the size, gap, and location of local triggers, scaling effect in FL, poisoning interval, data poisoning ratio, and data distribution.", "Specifically, at round t, the central server sends the current shared model G t to n ∈ [N ] selected parties, where [N ] denotes the integer set {1, 2, . . . 
, N}.", "The selected party i locally computes the function f_i by running an optimization algorithm such as stochastic gradient descent (SGD) for E local epochs with its own dataset D_i and learning rate l_r to obtain a new local model L_i^{t+1}", ". The local party then sends the model update L_i^{t+1} - G^t back to the central server, who averages over all updates with its own learning rate η to generate a new global model G^{t+1}:", "This aggregation process will be iterated until FL finds the final global model.", "Unless specified otherwise, we use G^t (L_i^t) to denote the model parameters of the global (local) model at round t.", "Attacker ability.", "Based on Kerckhoffs's theory (Shannon, 1949) , we consider the strong attacker here who has full control of their local training process, such as backdoor data injection and updating local training hyperparameters including E and l_r .", "This scenario is quite practical since each local dataset is usually owned by one of the local parties.", "However, attackers do not have the ability to influence the privilege of the central server such as changing aggregation rules, nor tampering the training process and model updates of other parties.", "Objective of backdoor attack.", "Backdoor attack is designed to mislead the trained model to predict a target label τ on any input data that has an attacker-chosen pattern (i.e., a trigger) embedded.", "Instead of preventing the convergence in accuracy as Byzantine attacks (Blanchard et al., 2017) , the purpose of backdoor attacks in FL is to manipulate local models and simultaneously fit the main task and backdoor task, so that the global model would behave normally on untampered data samples while achieving high attack success rate on backdoored data samples.", "The adversarial objective for attacker i in round t with local dataset D_i and target label τ is:", "Here, the poisoned dataset", "The function R transforms clean data in any class into backdoored data that have an attacker-chosen trigger pattern using a set of parameters φ.", "For example, for image data, φ is factored into trigger location TL, trigger size TS and trigger gap TG (φ = {TS, TG, TL}), which are shown in Fig. 2 .", "The attacker can design his own trigger pattern and choose an optimal poison ratio r to result in a better model parameter w_i^* , with which G^{t+1} can both assign the highest probability to target label τ for backdoored data R(x_j^i, φ) and the ground truth label y_j^i for benign data x_j^i .", "Through extensive experiments on diverse datasets including LOAN and three image datasets in different settings, we show that in standard FL our proposed DBA is more persistent and effective than centralized backdoor attack: DBA achieves higher attack success rate, faster convergence and better resiliency in single-shot and multiple-shot attack scenarios.", "We also demonstrate that DBA is more stealthy and can successfully evade two robust FL approaches.", "The effectiveness of DBA is explained using feature visual interpretation for inspecting its role in aggregation.", "We also perform an in-depth analysis on the important factors that are unique to DBA to explore its properties and limitations.", "Our results suggest DBA is a new and more powerful attack on FL than current backdoor attacks.", "Our analysis and findings can provide new threat assessment tools and novel insights for evaluating the adversarial robustness of FL.", "A APPENDIX" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19999998807907104, 0.21212120354175568, 0.28169015049934387, 0.0714285671710968, 0.3333333432674408, 0.24561403691768646, 0.21052631735801697, 0.11538460850715637, 0.08955223113298416, 0.1599999964237213, 0.09090908616781235, 0.1090909019112587, 0.0555555522441864, 0.17721518874168396, 0.16129031777381897, 0.24657534062862396, 0.07843136787414551, 0.1355932205915451, 0.26923075318336487, 0.03389830142259598, 0.2222222238779068, 0.15789473056793213, 0.19672130048274994, 0.13793103396892548, 0.20000000298023224, 0.07407406717538834, 0.5357142686843872, 0.2432432323694229, 0.03703703358769417, 0.11999999731779099, 0.307692289352417, 0.072727270424366, 0.0923076868057251, 0.0307692252099514, 0.15789473056793213, 0.11428570747375488, 0.0833333283662796, 0.0363636314868927, 0.08695651590824127, 0.07843136787414551, 0.09677419066429138, 0.10256410390138626, 0.1904761791229248, 0.19512194395065308, 0.07547169178724289, 0, 0.06896550953388214, 0.0634920597076416, 0.0952380895614624, 0.33766233921051025, 0.31372547149658203, 0.039215680211782455, 0.2181818187236786, 0.307692289352417, 0.1111111044883728 ]
rkgyS0VFvr
true
[ "We proposed a novel distributed backdoor attack on federated learning and show that it is not only more effective compared with standard centralized attacks, but also harder to be defended by existing robust FL methods" ]
[ "Graph networks have recently attracted considerable interest, and in particular in the context of semi-supervised learning.", "These methods typically work by generating node representations that are propagated throughout a given weighted graph.\n\n", "Here we argue that for semi-supervised learning, it is more natural to consider propagating labels in the graph instead.", "Towards this end, we propose a differentiable neural version of the classic Label Propagation (LP) algorithm.", "This formulation can be used for learning edge weights, unlike other methods where weights are set heuristically.", "Starting from a layer implementing a single iteration of LP, we proceed by adding several important non-linear steps that significantly enhance the label-propagating mechanism.\n\n", "Experiments in two distinct settings demonstrate the utility of our approach.\n", "We study the problem of graph-based semi-supervised learning (SSL), where the goal is to correctly label all nodes of a graph, of which only a few are labeled.", "Methods for this problem are often based on assumptions regarding the relation between the graph and the predicted labels.", "One such assumption is smoothness, which states that adjacent nodes are likely to have similar labels.", "Smoothness can be encouraged by optimizing an objective where a loss term L over the labeled nodes is augmented with a quadratic penalty over edges: (1) Here, y are the true labels, f are \"soft\" label predictions, S is the set of labeled nodes, and w are non-negative edge weights.", "The quadratic term in Eq. (1) is often referred to as Laplacian Regularization since (for directed graphs) it can equivalently be expressed using the graph Laplacian BID5 .", "Many early methods for SSL have adopted the general form of Eq. (1) BID51 BID50 BID4 BID6 BID0 BID42 BID47 .", "Algorithms such as the seminal Label Propagation BID51 are simple, efficient, and theoretically grounded but are limited in two important ways.", "First, predictions are parameterized either naïvely or not at all.", "Second, edge weights are assumed to be given as input, and in practice are often set heuristically.Recent deep learning methods address the first point by offering intricate predictive models that are trained discriminatively BID47 BID38 BID48 BID28 BID20 BID21 BID34 .", "Nonetheless, many of them still require w as input, which may be surprising given the large body of work highlighting the importance of good weights BID51 BID24 BID46 BID4 BID25 .", "While some methods consider some form of weight learning BID45 BID35 , to some extent they have drifted away from the original quadratic criterion.Other works address the second point by proposing disciplined ways for learning w.", "However, these either assume specific simple parameterizations BID49 BID25 , or altogether consider weights disjointly from predictions BID46 BID32 .Our", "goal in this paper is to simultaneously addresses both issues. We", "propose a framework that, given a graph, jointly learns both a parametric predictive model and the edge weights. To", "do this, we begin by revisiting the Label Propagation (LP), and casting it as a differentiable neural network. Each", "layer in the network corresponds to a single iterative update, making a forward pass equivalent to a full run of the algorithm. Since", "the network is differentiable, we can then optimize the weights of the LP solution using gradient descent. 
As we", "show, this can be done efficiently with a suitable loss function.The key modeling point in our work is that labeled information is used as input to both the loss and the network. In contrast", "to most current methods, our network's hidden layers directly propagate labeling information, rather than node or feature representations. Each layer", "is therefore a self-map over the probability simplex; special care is therefore needed when introducing non-linearities. To this end", ", we introduce two novel architectural components that are explicitly designed to operate on distributions. The first", "is an information-gated attention mechanism, where attention is directed based on the informativeness and similarity of neighboring nodes' states. The second", "is a novel \"bifurcation\" operator that dynamically controls label convergence, and acts as a balancing factor to the model's depth.Our main guideline in designing our model was to tailor it to the semi-supervised setting. The result", "is a slim model having relatively few parameters and only one model-specific hyper-parameter (depth), making it suitable for tasks where only few labeled nodes are available. The final", "network provides a powerful generalization of the original propagation algorithm that can be trained efficiently. Experiments", "on benchmark datasets in two distinct learning settings show that our model compares favorably against strong baselines.", "In this work we presented a deep network for graph-based SSL.", "Our design process revolved around two main ideas: that edge weights should be learned, and that labeled data should be propagated.", "We began by revisiting the classic LP algorithm, whose simple structure allowed us to encode it as a differentiable neural network.", "We then proposed two novel ad-hoc components: information-gated attention and bifurcation, and kept our design slim and lightly parameterized.", "The resulting model is a powerful generalization of the original algorithm, that can be trained efficiently using the leave-one-out loss using few labeled nodes.We point out two avenues for future work.", "First, despite its non-linearities, the current network still employs the same simple averaging updates that LP does.", "An interesting challenge is to design general parametric update schemes, that can perhaps be learned.", "Second, since the Laplacian's eigenvalues play an important role in both theory and in practice, an interesting question is whether these can be used as the basis for an explicit form of regularization.", "We leave this for future work." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19354838132858276, 0.060606054961681366, 0.17142856121063232, 0.0624999962747097, 0.060606054961681366, 0.04999999701976776, 0.0714285671710968, 0.14999999105930328, 0.1818181723356247, 0, 0.06896550953388214, 0.0476190410554409, 0.1111111044883728, 0.1111111044883728, 0, 0.072727270424366, 0.04651162400841713, 0.08163265138864517, 0, 0, 0.12121211737394333, 0.11428570747375488, 0.05714285373687744, 0.0624999962747097, 0.0833333283662796, 0.2222222238779068, 0.060606054961681366, 0, 0.11428570747375488, 0.12244897335767746, 0.0952380895614624, 0.0624999962747097, 0, 0.14814814925193787, 0.05882352590560913, 0.05405404791235924, 0.060606054961681366, 0.08695651590824127, 0.0624999962747097, 0, 0.13333332538604736, 0.09090908616781235 ]
r1g7y2RqYX
true
[ "Neural net for graph-based semi-supervised learning; revisits the classics and propagates *labels* rather than feature representations" ]
[ "Neural architecture search (NAS) has made rapid progress incomputervision,wherebynewstate-of-the-artresultshave beenachievedinaseriesoftaskswithautomaticallysearched neural network (NN) architectures.", "In contrast, NAS has not made comparable advances in natural language understanding (NLU).", "Corresponding to encoder-aggregator meta architecture of typical neural networks models for NLU tasks (Gong et al. 2018), we re-define the search space, by splittingitinto twoparts:encodersearchspace,andaggregator search space.", "Encoder search space contains basic operations such as convolutions, RNNs, multi-head attention and its sparse variants, star-transformers.", "Dynamic routing is included in the aggregator search space, along with max (avg) pooling and self-attention pooling.", "Our search algorithm is then fulfilled via DARTS, a differentiable neural architecture search framework.", "We progressively reduce the search space every few epochs, which further reduces the search time and resource costs.", "Experiments on five benchmark data-sets show that, the new neural networks we generate can achieve performances comparable to the state-of-the-art models that does not involve language model pre-training.\n", "Neural architecture search (NAS) has recently attracted intensive attention.", "On one hand, promising methodological innovation for NAS have been developed, e.g. the seminal gradient-based NAS approach DARTS (Liu, Simonyan, and Yang 2018) , followed by improvements such as SNAS (Xie et al. 2018 ), P-DARTS , PC-DARTS (Xu et al. 2019) , etc.", "On the other hand, NAS has helped to discover better models to for a variety of vision tasks, e.g., image classification (Zoph and Le 2017; Zoph et al. 2017; Cai, Zhu, and Han 2018) , semantic segmentation , object detection (Ghiasi, Lin, and Le 2019) , superresolution (Ahn, Kang, and Sohn 2018) , etc.", "For natural language processing tasks, NAS is relatively less studied.", "Except for the general methodology-wise innovations NASNet (Zoph and Le 2016) , ENAS (Pham et al. 2018) and DARTS (Liu, Simonyan, and Yang 2018) which pay slight extra effort on searching for new RNN cells on Copyright c 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org).", "All rights reserved.", "language modeling (LM) tasks, there is little studies tailored to the NLU task.", "One such an example is the evolved transformer (So, Liang, and Le 2019) , which uses the evolutionbased NAS algorithm to search for better transformer architecture for machine translation.", "Although state-of-the-art performance has been achieved on 4 machine translation tasks, the computation cost is exceedingly high since they have to evaluate a large number of models.", "In fact, NAS has not been fully investigated for a wide variety of fundamental natural language understanding (NLU) tasks, such as classification (e.g. or sentiment analysis), natural language inference (NLI), sequence tagging tasks such as named entity recognition (NER).", "Especially, there is no existing work on the effectiveness of one-shot architecture search (Bender et al. 2018 ) methods on NLU tasks, which could also otherwise significantly reduce the search cost as done in vision tasks.", "A typical neural network architecture for NLU includes an encoder which contextualizes the embedded text inputs and extracts higher-level features, and an aggregator that aggregates the encoded inputs to a fix-length vector to make a prediction (Gong et al. 
2018) .", "In terms of encoders, many previous NAS literature restrict the search space to nonlinear maps such as tanh and sigmoid, and the objective to be the discovery of a new recurrent cell to form a new type of recurrent neural network (RNN).", "However, other than RNNs, there are many other available encoders, for example, convolutional networks (CNN) (Kim 2014) , and attentionbased model such as transformer (Vaswani et al. 2017) , etc.", "In addition, recent works e.g. star-transformer (Guo et al. 2019) have proposed more sparse versions of transformer to reduce the computational complexity and improve the generalization when there is no pre-trained language model.", "In addition, as far as we know, there is no existing work on searching for an aggregator.", "A collection of aggregators are available (Gong et al. 2018) .", "However, one have to choose manually in a trial-and-error fashion.", "In this work, we design an encoder search space that contains a rich collection of encoders.", "The involved operations include:", "i) the zero map and identity map;", "ii) the two most commonly used RNNs, LSTM (Hochreiter and Schmidhuber 1997) and GRU (Cho et al. 2014) ;", "iii) highway network (Srivastava, Greff, and Schmidhuber 2015) ;", "iv) a series of convolutional networks with different kernel sizes;", "v) multi-head attention from (Vaswani et al. 2017) ;", "vi) startransformer (Guo et al. 2019) and its variants, which will be explained later in the next section.", "The combination of encoder operations is searched in a encoder search cell, which is a directed acyclic graph (DAG) of intermediate nodes collected by the encoder operations from the encoder search space.", "To further reduce the human designs, we propose to search for a suitable aggregator along with the search of encoder cell via an aggregator search cell which includes max (average) pooling, self-attention pooling and dynamic routing (Gong et al. 
2018) .", "The aggregator search cell is a DAG with only one step in which the only node is connected to the inputs by a mixture of aggregators.", "Our search strategy is mainly based on DARTS (Liu, Simonyan, and Yang 2018) .", "To reduce computation cost, we employ a progressive search space reduction strategy similar to P-DARTS .", "Experiments are performed on three different kinds of NLU tasks, i.e., text classification, NLI and NER, with 5 benchmark datasets.", "For fair comparison, we only compare our results with former state-of-the-art (SOTA) models without large-scale LM pre-training, or any other outside resources like knowledge bases, or any human designed features.", "Results have shown that with the help of NAS on our search space, we achieve results that are comparable to the SOTA on these 5 tasks, indicating the effectiveness of NAS in the field of NLU research.", "Our work contributes the field by the following aspects:", "• We re-define the search space for neural architecture search in NLU tasks, by extending and modifying the encoder search space from the evolved transformer, and define the aggregator search space.", "• To the best of our knowledge, we are the first to conduct NAS experiments on NLU tasks such as classification, NLI, NER tasks, with one-shot NAS.", "• Our approach achieves the results that are comparable to the state-of-the-art models designed by human experts, on various NLU tasks (classification, NLI, NER), by using neural architecture search over the search space defined above.", "In addition, we demonstrate the effectiveness of one-shot architecture search for NLU tasks.", "• We propose a modularized version of star-transformer and its variant, thus including a sparse version of transformer into the search space, which is also novel in the literature.", "The resulting advantage is that the search cost can be reduced notably and the network's generalization capability can also be improved.", "Related Work Recently, a new research field named neural architecture search (NAS) has been drawing more and more attention.", "The goal is to find automatic mechanisms for generating new neural architectures to replace conventional handcrafted ones.", "Recently, it is widely applied to computer vision tasks, such as image classification (Zoph and Le 2017; Zoph et al. 2017; Cai, Zhu, and Han 2018) , semantic segmentation , object detection (Ghiasi, Lin, and Le 2019) , super-resolution (Ahn, Kang, and Sohn 2018) , etc.", "However, NAS is less well studied in the field of natural language understanding (NLU).", "Recent works (Zoph and Le 2016; Pham et al. 
2018; Liu, Simonyan, and Yang 2018) search new recurrent cells for the language modeling (LM) task on the Penn Treebank dataset.", "The recurrent cell discovered by (Liu, Simonyan, and Yang 2018) achieves a test perplexity of 56.1, which is competitive with the state-of-the-art model enhanced by a mixture of softmaxes .", "The evolved transformer (So, Liang, and Le 2019) applies NAS to discover better versions of the transformer architecture.", "Employing an evolution-based search algorithm, with the vanilla transformer as the initial population, it generates a better transformer architecture that consistently outperforms the vanilla transformer on 4 benchmark machine translation tasks.", "Our work contributes by going beyond the RNN structure and re-defining the search space to include a richer collection of operations.", "Our work is implemented on DARTS (Liu, Simonyan, and Yang 2018) and P-DARTS .", "DARTS relaxes the search space to be continuous, so that the architecture can be optimized with respect to its validation set performance by gradient descent.", "Due to its simplicity, DARTS has inspired a series of follow-up works to improve the search stability and efficiency.", "Based on DARTS, P-DARTS divides the search process into multiple stages and progressively increases the network depth at the end of each stage.", "Our work contributes to the gradient-based NAS (and more generally, one-shot NAS) research by investigating its effectiveness in discovering new NN architectures for a series of NLU tasks.", "Our search space design takes advantage of the recent advances in the NLU field.", "One of the most important advances in sentence encoding is the application of various self-attention mechanisms, among which the transformer (Vaswani et al. 2017 ) is the most prominent one and has become ubiquitous in NLU research.", "Specifically, QANet modifies the transformer architecture to obtain the first place on the SQuAD leaderboard.", "The transformer is powerful due to its multi-head self-attention mechanism, which can well capture the contextual information.", "However, the transformer may be difficult to train and generalize well on small or medium-sized datasets (Guo et al. 2019 ).", "Thus, many other self-attention operations have been proposed, e.g., dynamic self-attention (Yoon, Lee, and Lee 2018) and DiSAN (Shen et al. 2018) .", "Recently, (Guo et al. 2019) proposed the star-transformer, a sparser version of the multi-head attention model, and achieved competitive results on a series of benchmark datasets such as SST-1, SNLI and CoNLL-2003.", "On the aggregation side, an important advancement is the application of capsule networks and the dynamic routing policy in text classification (Gong et al. 2018) .", "Capsule networks can dynamically decide what and how much information needs to be transferred from each word to the final encoding of the text sequence, thus achieving better results even with simple encoders (Gong et al.
2018 ).", "Our work is built upon these work and contributes by:", "i) include some of the most prominent attention based encoders and aggregators into the search space, and experiment on whether NAS can generate new architectures that have competitive results;", "ii) we are the first to propose the aggregator search space;", "iii) we include a modularized version of the star-transformer and its variant into the search space, thus we are the first to combine the dense and sparse multi-head self-attention operations into the same search space.", "Results on SST Results on SST-1 and SST-2 datasets are listed in Table 2 .", "On the SST-1, DARTS generate a network architecture (DARTS-SST-1-V0) that performs better than most of the traditional NN models.", "Not that the encoder cell of DARTS-SST-1-V0 contains only RNN and CNN operations, but the exact details of combination of different level of features are impossible to design manually.", "The best ar- (Le and Mikolov 2014) 48.7 87.8 MT-LSTM (F2S) 49.1 87.2 Tree-LSTM (Tai, Socher, and Manning 2015) 51.0 88.0 CNN-Tensor (Lei, Barzilay, and Jaakkola 2015) 51.2 -BiLSTM + max pooling (Gong et al. 2018) 48.0 87.0 BiLSTM + average pooling (Gong et al. 2018) 46.2 85.2 BiLSTM + self-att (Gong et al. 2018) 48.2 86.4 BiLSTM + dynamic routing (Gong et al. 2018) 50.5 87.6 Emb + self-att (Shen et al. 2018) 48.9 -DiSAN (Shen et al. 2018) 51.7 -BiLSTM + self-att (Yoon, Lee, and Lee 2018) 50.4 88.2 CNN + self-att (Yoon, Lee, and Lee 2018) 50.6 88.3 Dynamic self-att (Yoon, Lee, and Lee 2018) 50.6 88.5 Transformer (Guo et al. 2019) 50 chitecture (DARTS-SST-2-V0) we obtained on the SST-2 dataset involves a star-transformer operation and an identity map.", "Note that since (Guo et al. 2019 ) did not provide results on SST-2, we use the code from fastNLP 4 to run the transformer and the original star-transformer on SST-2.", "The results given by us are all the average of 10 different runs.", "We can see that DARTS-SST-2-V0 can obtain results comparable to the SOTA on SST-2.", "We also experiment on the transferability of the learned architectures.", "From Table 2 , we can see that DARTS-SST-2-V0 performs worse than DARTS-SST-1-V0 on SST-1 with a significant margin, but DARTS-SST-1-V0 also performs competitively on SST-2.", "Results on NLI tasks Among the architecture candidates derived from the search on SciTail, we find that the one obtained by accepting the null operation when it gets the highest score (DARTS-SciTail-V0) performs best.", "In addition, this search run gives the average pooling as the aggregator instead of dynamic-routing.", "The results are presented in Table 3 : Test accuracy (%) on the SciTail dataset.", "Model ACC 600D ESIM 70.6 Decomposable Attention 72.3 DGEM 72.3 AdvEntuRe 79.0 HCRN (Tay, Luu, and Hui 2018) 80.0 DeIsTe (Yin, Schütze, and Roth 2018) 82.1 CAFE (Yin, Schütze, and Roth 2018) 83.3 MIMN 84.0 ConSeqNet 85.2 HBMP (Mihaylov et al. 2018) 86.0 star-transformer (Guo et al. 
2019) 79 Table 3 .", "DARTS-SciTail-V0 achieves a competitive performance on the test set, outperforming the baseline models such as ESIM and decomposable attention by a large margin.", "It also outperforms the results of the star-transformer and transformer even after extensively parameters tuning.", "Our model is actually the best one that has no inter-sentence attentions other than the final interaction before the prediction layer, and uses no outside resources, no manually designed features and no extra training mechanism like adversarial training.", "As we can see from Figure 5 that, on the MedNLI dataset, the search gives out a architecture (DARTS-MedNLI-V0) that quite resembles the original implementation of the multi-head attention inside the transformer block, except the residual connection is replaced by a sep conv with kernel size 3.", "DARTS-MedNLI-V0 performs worse than the original star-transformer, but it is better than the original transformer, and the baseline ESIM and InferSent.", "We also look into the transferability between the two task.", "We find that although the datasets are from different domains, the architecture searched on one performs comparable on the other.", "This paper addresses NAS for a series of NLU tasks.", "Corresponding to the encoder-aggregator architecture of typical NN models for NLU (Gong et al. 2018) , we redefine the search space, by splitting it into encoder search space and aggregator search space.", "Our search strategy is based on DARTS (Liu, Simonyan, and Yang 2018) and P-DARTS .", "Experiments shows that architectures discovered by NAS achieves results that are comparable to the previous SOTA models.", "In the further, we would like to investigate one-shot architecture search on more large-scale NLU tasks." ]
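The encoder and aggregator cells described in this row are DARTS-style mixed operations over a set of candidate encoders. Below is a minimal sketch of such a mixed operation, assuming PyTorch; the three candidate operations and the class name MixedEncoderOp are illustrative stand-ins for the richer operation set described above (zero/identity maps, convolutions, RNNs, highway, multi-head attention, star-transformer), not the authors' implementation.

```python
# A minimal sketch of a DARTS-style mixed operation for an encoder cell.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedEncoderOp(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                  # identity map
            nn.Conv1d(dim, dim, kernel_size=3, padding=1),  # convolutional encoder
            nn.GRU(dim, dim, batch_first=True),             # recurrent encoder
        ])
        # One architecture weight per candidate operation (the "alphas" in DARTS).
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):  # x: (batch, length, dim)
        weights = F.softmax(self.alpha, dim=0)
        outputs = []
        for op in self.ops:
            if isinstance(op, nn.GRU):
                out, _ = op(x)
            elif isinstance(op, nn.Conv1d):
                out = op(x.transpose(1, 2)).transpose(1, 2)
            else:
                out = op(x)
            outputs.append(out)
        # Continuous relaxation: weighted sum of all candidate encoders.
        return sum(w * o for w, o in zip(weights, outputs))
```

During search, the architecture weights alpha are optimized on validation data; after search, only the operation with the largest weight would be kept, mirroring the usual DARTS discretization step.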
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.15789473056793213, 0, 0.4000000059604645, 0.09756097197532654, 0.09999999403953552, 0.21621620655059814, 0.14999999105930328, 0.1538461446762085, 0.1818181723356247, 0.0624999962747097, 0.2028985470533371, 0, 0.12121211737394333, 0, 0.1621621549129486, 0.19999998807907104, 0.19607841968536377, 0.13333332538604736, 0.21052631735801697, 0.20689654350280762, 0.25, 0.038461532443761826, 0.10526315122842789, 0.04999999701976776, 0.05882352590560913, 0.11764705181121826, 0.19999998807907104, 0, 0.06451612710952759, 0.0476190410554409, 0, 0.1764705777168274, 0, 0.0476190410554409, 0.21276594698429108, 0.20338982343673706, 0.21739129722118378, 0.05405404791235924, 0.20512820780277252, 0.08695651590824127, 0.038461532443761826, 0.18867923319339752, 0.0624999962747097, 0.260869562625885, 0.20408162474632263, 0.290909081697464, 0.37837836146354675, 0.16326530277729034, 0.0952380895614624, 0.1904761791229248, 0.14999999105930328, 0.03278687968850136, 0.10526315122842789, 0.1538461446762085, 0.11764705181121826, 0.24390242993831635, 0.19999998807907104, 0.27272728085517883, 0, 0.21739129722118378, 0.24390242993831635, 0.1304347813129425, 0.3461538553237915, 0.2702702581882477, 0.11320754140615463, 0.1538461446762085, 0.09756097197532654, 0.12765957415103912, 0, 0.15686273574829102, 0.08510638028383255, 0.09999999403953552, 0, 0.15686273574829102, 0.1764705777168274, 0.23529411852359772, 0, 0.2380952388048172, 0.12244897335767746, 0.03999999538064003, 0.07692307233810425, 0.10810810327529907, 0.10810810327529907, 0.12121211737394333, 0.04255318641662598, 0.15094339847564697, 0.15789473056793213, 0.05128204822540283, 0, 0.13333332538604736, 0.10526315122842789, 0.0363636314868927, 0.1538461446762085, 0.04999999701976776, 0.060606054961681366, 0.09756097197532654, 0.3529411852359772, 0.3461538553237915, 0.05405404791235924, 0.14999999105930328, 0.29999998211860657 ]
rkgARFTUjB
true
[ "Neural Architecture Search for a series of Natural Language Understanding tasks. Design the search space for NLU tasks. And Apply differentiable architecture search to discover new models" ]
[ "Network embedding (NE) methods aim to learn low-dimensional representations of network nodes as vectors, typically in Euclidean space.", "These representations are then used for a variety of downstream prediction tasks.", "Link prediction is one of the most popular choices for assessing the performance of NE methods.", "However, the complexity of link prediction requires a carefully designed evaluation pipeline to provide consistent, reproducible and comparable results.", "We argue this has not been considered sufficiently in recent works.", "The main goal of this paper is to overcome difficulties associated with evaluation pipelines and reproducibility of results.", "We introduce EvalNE, an evaluation framework to transparently assess and compare the performance of NE methods on link prediction.", "EvalNE provides automation and abstraction for tasks such as hyper-parameter tuning, model validation, edge sampling, computation of edge embeddings and model validation.", "The framework integrates efficient procedures for edge and non-edge sampling and can be used to easily evaluate any off-the-shelf embedding method.", "The framework is freely available as a Python toolbox.", "Finally, demonstrating the usefulness of EvalNE in practice, we conduct an empirical study in which we try to replicate and analyse experimental sections of several influential papers.", "Link prediction is an important task with applications in a wide range of fields such as computer science, social sciences, biology, and medicine BID6 BID14 BID15 BID22 .", "It amounts to estimating the likelihood for the existence of edges, between pairs of nodes that do not form an edge in the input graph.", "Many Network Embedding (NE) methods (e.g., BID0 BID2 BID5 BID8 BID10 BID12 BID17 BID18 BID19 have recently been applied to solving link prediction problems, showing promising results. These methods map nodes in the network to vectors in IR d . This embedding is then used for a variety of tasks such as visualization, multi-label classification, clustering or link prediction.The challenges of evaluating NE methods for link prediction We argue that the practical performance of most NE methods is poorly understood and that experiments in many papers are difficult to compare due to variation in experimental setup and evaluation procedures. In this paper, we focus on a number of difficulties specific to the evaluation of NE methods for link prediction. 
Link prediction is a particularly challenging task to evaluate as it involve a number design choices, which can confound the results and are prone to errors.1) Train-test splitting of graphs For example, a typical implicit assumption is that the input graph is not complete, and the purpose is to accurately predict the missing edges.", "To evaluate the performance of an NE method for link prediction, one thus needs an (incomplete) training graph along with a (more) complete version of that graph for testing.", "Much research has been devoted to determining the best approach to generate these training graphs BID6 BID14 BID22 .", "Strong theoretical and empirical evidence suggest that in order to fairly evaluate link prediction methods, snapshots of the network at different points in time should be used for training and testing.", "In this way, the link prediction methods are tested on the natural evolutions of the networks.", "However, the availability of such snapshots is uncommon and raises additional questions, such as how to choose the time intervals for splitting the network.For these reasons, authors typically resort to sampling sets of edges from the input graphs and using the resulting sub-graphs for training BID5 BID8 BID10 BID12 .", "The remaining edges are used as positive test examples.", "The process of sampling edges is not standardized and varies between scientific works.", "The relative sizes of the train and test sets, for example, is a user-defined parameter which varies significantly.", "In BID8 ; BID10 the authors use a 50-50 train-test split, in BID5 ) a 60-40, in Lai et al. (2017 an 80-20 and in BID20 values ranging from 30-70 up to 80-20.A related problem is that, in addition to the 'positive' train and test edges, often also 'negative' edges (or non-edges) are required.", "Sometimes these are used to derive the embedding, while in other cases they are used only to train the classifier that predicts links.", "These sets of non-edges can be selected according to different strategies (Kotnis & Nastase) and can be of various sizes.2) From node embeddings to edge predictions Furthermore, most NE methods simply provide node embeddings.", "From these, edge embeddings need to be derived prior to performing predictions.", "There are several approaches for deriving edge embeddings which also seem to have a strong impact on the performance of different methods BID8 .3) Evaluation measures Also the metrics used to evaluate the accuracy varies, e.g., from AUC-ROC BID10 , to precision-recall BID21 , to precision@k BID20 .", "The recent surge of research in the area of network embeddings has resulted in a wide variety of data sets, metrics, and setups for evaluating and comparing the utility of embedding methods.", "Comparability across studies is lacking and not all evaluations are equally sound.", "This highlights the need for specific tools and pipelines to ensure the correct evaluation of these methods.", "Particularly, the use of representation learning for link prediction tasks requires train and test sampling, non-edge sampling, and in many cases selection of edge embedding methods and binary classifiers.", "The evaluation procedure, thus, becomes an ensemble of tasks which allow for many errors or inconsistencies.In this work we have proposed EvalNE, a novel framework that can be used to evaluate any network embedding method for link prediction.", "Our pipeline automates the selection of train and test edge sets, simplifies the process of tuning model parameters and reports 
the accuracy of the methods according to many criteria.", "Our experiments highlight the importance of the edge sampling strategy and parameter tuning for evaluating NE methods.", "We have also introduced a scalable procedure to select edge sets from given networks and showed empirically that it is orders of magnitude faster than the naive approaches used in recent literature." ]
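The evaluation steps discussed in this row (train/test edge splitting, non-edge sampling, and turning node embeddings into edge embeddings) can be illustrated with a short sketch. This is a simplified illustration, not the EvalNE implementation: it ignores keeping the training graph connected, and node_emb is assumed to be a dict from node to numpy vector produced by any NE method.

```python
# A simplified sketch of a link-prediction evaluation pipeline.
import random
import numpy as np
import networkx as nx

def split_edges(graph, train_frac=0.8, seed=0):
    rng = random.Random(seed)
    edges = list(graph.edges())
    rng.shuffle(edges)
    cut = int(train_frac * len(edges))
    train_edges, test_edges = edges[:cut], edges[cut:]
    # Sample as many non-edges as positive test edges (negative examples).
    nodes = list(graph.nodes())
    non_edges = set()
    while len(non_edges) < len(test_edges):
        u, v = rng.sample(nodes, 2)
        if not graph.has_edge(u, v):
            non_edges.add(tuple(sorted((u, v))))
    return train_edges, test_edges, list(non_edges)

def hadamard_edge_embeddings(node_emb, edge_list):
    # Element-wise product of the two endpoint embeddings for each edge.
    return np.array([node_emb[u] * node_emb[v] for u, v in edge_list])

# Example usage on a small graph with random stand-in embeddings.
g = nx.karate_club_graph()
emb = {n: np.random.randn(16) for n in g.nodes()}
train_e, test_e, neg_e = split_edges(g)
X_pos = hadamard_edge_embeddings(emb, test_e)
X_neg = hadamard_edge_embeddings(emb, neg_e)
```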
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1904761791229248, 0.2222222238779068, 0.2631579041481018, 0.3720930218696594, 0.05714285373687744, 0.3414634168148041, 0.4651162624359131, 0.1395348757505417, 0.13636362552642822, 0.1818181723356247, 0.1666666567325592, 0.15686273574829102, 0.1304347813129425, 0.22068965435028076, 0.20408162474632263, 0.04878048226237297, 0.2641509473323822, 0.42105263471603394, 0.1538461446762085, 0, 0.10810810327529907, 0.2380952388048172, 0.11267605423927307, 0.04651162400841713, 0.11320754140615463, 0, 0.1818181723356247, 0.3199999928474426, 0.0555555522441864, 0.29999998211860657, 0.3265306055545807, 0.3870967626571655, 0.1702127605676651, 0.25, 0.1090909019112587 ]
H1eJH3IaLN
true
[ "In this paper we introduce EvalNE, a Python toolbox for automating the evaluation of network embedding methods on link prediction and ensuring the reproducibility of results." ]
[ "Deep learning models can be efficiently optimized via stochastic gradient descent, but there is little theoretical evidence to support this.", "A key question in optimization is to understand when the optimization landscape of a neural network is amenable to gradient-based optimization.", "We focus on a simple neural network two-layer ReLU network with two hidden units, and show that all local minimizers are global.", "This combined with recent work of Lee et al. (2017); Lee et al. (2016) show that gradient descent converges to the global minimizer.", "In this paper, we provided recovery guarantee of stochastic gradient descent with random initialization for learning a two-layer neural network with two hidden nodes, unit-norm weights, ReLU activation functions and Gaussian inputs.", "Experiments are also done to verify our results.", "For future work, here we list some possible directions.", "In conclusion, based on the assumption that θ 1 ≤ θ 2 there are four critical points in the 2D case: Assume the manifold is R = {(w 1 , w 2 ) : w 1 2 = w 2 2 = 1}, then the Hessian on the manifold is DISPLAYFORM0 DISPLAYFORM1 where z = (z 1 , z 2 ) satisfies w DISPLAYFORM2 and DISPLAYFORM3 Then we can get when w 1 = w 2 and w 1 = −w 2 , DISPLAYFORM4 So this point is a saddle point.", "In conclusion, we have four critical points: one is global maximal, the other three are saddle points.", "DISPLAYFORM0 ) is a critical point, then there exists a set of standard orthogonal basis (e 1 , e 2 , e 3 ) such that e 1 = w * 1 , e 2 = w * 2 and w 1 , w 2 lies in span{e 1 , e 2 , e 3 }.Proof", ". If (", "w 1 , w 2 ) is a critical point, then DISPLAYFORM1 where matrix (I − w 1 w T 1 ) projects a vector onto the tangent space of w 1 . Since", "DISPLAYFORM2 we get DISPLAYFORM3 DISPLAYFORM4 )w * 2 lies in the direction of w 1 . If θ", "w1,w2 = π, i.e., w 1 = −w 2 , then of course the four vectors have rank at most 3, so we can find the proper basis. If θ", "w1,w2 < π, then we know that there exists a real number r such that DISPLAYFORM5 Since θ w1,w2 < π, we know that the four vectors w 1 , w 2 , w * 1 and w * 2 are linear dependent. Thus", ", they have rank at most 3 and we can find the proper basis." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1304347813129425, 0.1860465109348297, 0.3829787075519562, 0.17391303181648254, 0.8771929740905762, 0, 0, 0.05063290521502495, 0, 0.10526315122842789, 0, 0.07999999821186066, 0.04651162400841713, 0.0363636314868927, 0.0714285671710968, 0.04999999701976776 ]
B14uJzW0b
true
[ "Recovery guarantee of stochastic gradient descent with random initialization for learning a two-layer neural network with two hidden nodes, unit-norm weights, ReLU activation functions and Gaussian inputs." ]
[ "Dropout is a simple yet effective technique to improve generalization performance and prevent overfitting in deep neural networks (DNNs).", "In this paper, we discuss three novel observations about dropout to better understand the generalization of DNNs with rectified linear unit (ReLU) activations: 1) dropout is a smoothing technique that encourages each local linear model of a DNN to be trained on data points from nearby regions; 2) a constant dropout rate can result in effective neural-deactivation rates that are significantly different for layers with different fractions of activated neurons; and 3) the rescaling factor of dropout causes an inconsistency to occur between the normalization during training and testing conditions when batch normalization is also used. ", "The above leads to three simple but nontrivial improvements to dropout resulting in our proposed method \"Jumpout.", "\"", "Jumpout samples the dropout rate using a monotone decreasing distribution (such as the right part of a truncated Gaussian), so the local linear model at each data point is trained, with high probability, to work better for data points from nearby than from more distant regions.", "Instead of tuning a dropout rate for each layer and applying it to all samples, jumpout moreover adaptively normalizes the dropout rate at each layer and every training sample/batch, so the effective dropout rate applied to the activated neurons are kept the same.", "Moreover, we rescale the outputs of jumpout for a better trade-off that keeps both the variance and mean of neurons more consistent between training and test phases, which mitigates the incompatibility between dropout and batch normalization.", "Compared to the original dropout, jumpout shows significantly improved performance on CIFAR10, CIFAR100, Fashion- MNIST, STL10, SVHN, ImageNet-1k, etc., while introducing negligible additional memory and computation costs.", "Deep learning has achieved remarkable success on a variety of machine learning tasks BID15 BID14 .", "Deep neural networks (DNN), however, are often able to fit the training data perfectly -this can result in the overfitting problem, thereby weakening the generalization performance on unseen data.", "Dropout BID17 BID7 is a simple yet effective technique to mitigate such problems by randomly setting the activations of hidden neurons to 0, a strategy that reduces co-adaptation amongst neurons.", "Dropout applies to any layer in a DNN without causing significant additional computational overhead.Dropout, however, has several drawbacks.", "Firstly, dropout rates, constituting extra hyper-parameters at each layer, need to be tuned to get optimal performance.", "Too high a dropout rate can slow the convergence rate of the model, and often hurt final performance.", "Too low a rate yields few or no improvements on generalization performance.", "Ideally, dropout rates should be tuned separately for each layer and also during various training stages.", "In practice, to reduce computation, we often tune a single dropout rate and keep it constant for all dropout layers and throughout the training process.If we treat dropout as a type of perturbation on each training sample, it acts to generalize the DNN to noisy samples having that specific expected amount of perturbation (due to the fixed dropout rate) with high probability.", "The fixed rate rules out samples typical having less perturbation, i.e., those potentially more likely to be closer to the original samples and thus that are potentially more 
helpful to improve generalization.", "Also, when a constant dropout rate is applied to layers and samples having different fractions of activated neurons, the effective dropout rate (i.e., the proportion of the activated neurons that are deactivated by dropout) varies, which might result in too much perturbation for some layers and samples and too little perturbation for others.Another deficiency of dropout lies in its incompatibility with batch normalization (BN) BID8 (more empirical evidence of this is shown in Section 3.3).", "As dropout randomly shuts down activated neurons, it needs to rescale the undropped neurons to match the original overall activation gain of the layer.", "Unfortunately, such rescaling breaks the consistency of the normalization parameters required between training and test phases 1 and may cause poor behavior when used with BN.", "Since BN, and its variants BID0 BID18 BID20 , has become an almost indispensable component of modern DNN architectures to keep the training stable and to accelerate convergence, dropout itself often gets dropped out in the choice between these two non-complementary options and has recently become less popular." ]
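The three jumpout modifications lend themselves to a short sketch. The following is one plausible reading, assuming PyTorch; sigma, p_max and the exact normalization by the active fraction are assumptions of this illustration rather than the authors' implementation.

```python
# A minimal sketch of the three jumpout modifications (illustrative only).
import torch

def jumpout(x, sigma=0.1, p_max=0.5, training=True):
    # x: post-ReLU activations of one layer, shape (batch, features).
    if not training:
        return x
    # 1) Sample the dropout rate from a monotone-decreasing distribution
    #    (right half of a truncated Gaussian), so small rates are more likely.
    p = (torch.randn(1, device=x.device).abs() * sigma).clamp(max=p_max).item()
    # 2) Normalize the sampled rate by the fraction of activated neurons so
    #    that the amount of actual perturbation stays comparable across layers
    #    and samples (one reading of the "effective rate" normalization above).
    active_frac = (x > 0).float().mean().clamp(min=1e-6).item()
    p_eff = min(p / active_frac, p_max)
    mask = (torch.rand_like(x) > p_eff).float()
    out = x * mask
    # 3) Rescale so the mean activation during training matches the test-time
    #    forward pass, easing the interaction with batch normalization.
    return out / (1.0 - p_eff)
```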
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3255814015865326, 0.2549019455909729, 0.19999998807907104, 0.2153846174478531, 0.1428571343421936, 0.07407406717538834, 0.19230768084526062, 0.10526315122842789, 0.23999999463558197, 0.23529411852359772, 0.1860465109348297, 0.09999999403953552, 0.14999999105930328, 0.1666666567325592, 0, 0.1666666567325592, 0.11320754140615463, 0.1428571343421936, 0.13333332538604736, 0.1249999925494194, 0.1515151411294937 ]
r1gRCiA5Ym
true
[ "Jumpout applies three simple yet effective modifications to dropout, based on novel understandings about the generalization performance of DNN with ReLU in local regions." ]
[ "Concerns about interpretability, computational resources, and principled inductive priors have motivated efforts to engineer sparse neural models for NLP tasks.", "If sparsity is important for NLP, might well-trained neural models naturally become roughly sparse?", "Using the Taxi-Euclidean norm to measure sparsity, we find that frequent input words are associated with concentrated or sparse activations, while frequent target words are associated with dispersed activations but concentrated gradients.", "We find that gradients associated with function words are more concentrated than the gradients of content words, even controlling for word frequency.", "Researchers in NLP have long relied on engineering features to reflect the sparse structures underlying language.", "Modern deep learning methods promised to relegate this practice to history, but have not eliminated the interest in sparse modeling for NLP.", "Along with concerns about computational resources BID0 BID12 and interpretability BID10 BID21 , human intuitions continue to motivate sparse representations of language.", "For example, some work applies assumptions of sparsity to model latent hard categories such as syntactic dependencies BID14 or phonemes BID1 .", "BID13 found that a sparse attention mechanism outperformed dense methods on some NLP tasks; BID11 found sparsified versions of LMs that outperform dense originals.", "Attempts to engineer sparsity rest on an unstated assumption that it doesn't arise naturally when neural models are learned.", "Is this true?Using", "a simple measure of sparsity, we analyze how it arises in different layers of a neural language model in relation to word frequency. We show", "that the sparsity of a word representation increases with exposure to that word during training. We also", "find evidence of syntactic learning: gradient updates in backpropagation depend on whether a word's part of speech is open or closed class, even controlling for word frequency." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.0952380895614624, 0.1111111044883728, 0.1249999925494194, 0.23255813121795654, 0.15789473056793213, 0.1395348757505417, 0.13636362552642822, 0.1395348757505417, 0.1860465109348297, 0.04878048226237297, 0, 0.27272728085517883, 0.3243243098258972, 0.1666666567325592 ]
H1ets1h56E
true
[ "We study the natural emergence of sparsity in the activations and gradients for some layers of a dense LSTM language model, over the course of training." ]
[ "The integration of a Knowledge Base (KB) into a neural dialogue agent is one of the key challenges in Conversational AI.", "Memory networks has proven to be effective to encode KB information into an external memory to thus generate more fluent and informed responses.", "Unfortunately, such memory becomes full of latent representations during training, so the most common strategy is to overwrite old memory entries randomly. \n\n", "In this paper, we question this approach and provide experimental evidence showing that conventional memory networks generate many redundant latent vectors resulting in overfitting and the need for larger memories.", "We introduce memory dropout as an automatic technique that encourages diversity in the latent space by", "1) Aging redundant memories to increase their probability of being overwritten during training", "2) Sampling new memories that summarize the knowledge acquired by redundant memories.", "This technique allows us to incorporate Knowledge Bases to achieve state-of-the-art dialogue generation in the Stanford Multi-Turn Dialogue dataset.", "Considering the same architecture, its use provides an improvement of +2.2 BLEU points for the automatic generation of responses and an increase of +8.1% in the recognition of named entities.", "Given the large amount of dialogue data recorded in human-human or human-chatbot interactions, there is a great need for dialogue systems that infer automatic responses grounded to personal knowledge bases.", "This approach has the advantage of integrating semantic information that is fundamental to achieve dialogue understanding.", "We want to leverage the contextual information present in a KB (e.g., a calendar of events) to answer queries like What time is my dentist appointment?", ". This task is challenging because existing neural dialogue agents often assume that the dialogue history carries the information needed to provide an answer but struggle to interface with the structured data stored in a KB. This assumption prevents to have an end-to-end differentiable model to maintain the kind of contextual conversations that people desire.", "Memory networks Miller et al. (2016) has proven to be effective to encode KB information into an external memory to generate more fluent and informed responses.", "However, there is no much work in regularizing the latent representations stored in the external memory.", "Unlike the conventional dropout technique used to regularize deep neural networks Srivastava et al. (2014) , we propose memory dropout to attain the same goal (i.e., reduction of overfitting) but with different functionality and designed for memory networks Weston et al. (2015) .", "Given the long-term nature of memory networks, we do not immediately remove redundant memories with some probability as in the original dropout algorithm.", "Instead, we assign them the current maximum age increasing their probability of being overwritten by more recent latent representations in future training steps.", "Thus, in contrast to Srivastava et al. 
(2014) , our memory dropout is a delayed regularization mechanism.", "The main contributions of our work are the following:", "• We introduce a new regularization method called memory dropout designed for dealing with overfitting in Memory Augmented Neural Networks.", "To the best of our knowledge, ours is the first work on regularizing memory networks.", "• We build a neural dialogue agent that uses memory dropout to incorporate a KB into an external memory for automatic response generation.", "Our results show that this technique can generate more fluent and accurate responses: an improvement of +2.2 BLEU points and +8.1% Entity F1 score versus not using it in the Stanford Multi-Turn Dialogue dataset.", "Figure 1: Learning the (h, y) pair transitions the neighborhood of h (represented as an ellipse) to a new state in which a memory h is drawn as the distribution of positive memories.", "Small circles represent the uncertainty of using a particular memory to model h .", "In the new memory configuration, we age positive keys (now faded in grey), making them more likely to be overwritten by other training examples.", "Memory Dropout is a technique for improving memory augmented neural networks by breaking co-adapting memories built during backpropagation.", "While conventional dropout works at the level of individual activations, our memory dropout deals with latent representations of the input.", "These arrays of activations are stored in an external memory module which resembles areas of the human brain that are content-addressable and sensitive to semantic information Wixted et al. (2018) .", "Central to this technique is the idea that age and uncertainty play important roles in regularizing the addressable keys of an external memory module that is persistent across training examples.", "By doing this, we obtain higher BLEU and Entity F1 scores when training a task-oriented dialogue agent that decodes an answer considering the entries of the KB stored in the memory module." ]
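A rough sketch of the write path implied by the description above: redundant keys near the incoming latent vector are aged, and a new key summarizing them is sampled. The similarity threshold, the Gaussian summary, and the overwrite-the-oldest policy are assumptions of this sketch, not the paper's exact procedure.

```python
# Illustrative memory-dropout write step for a key-value memory.
import numpy as np

def memory_dropout_write(keys, ages, h, sim_threshold=0.9, rng=np.random):
    # keys: (num_slots, dim) unit-norm memory keys; ages: (num_slots,) ints.
    h = h / (np.linalg.norm(h) + 1e-12)
    sims = keys @ h
    redundant = np.where(sims > sim_threshold)[0]
    if len(redundant) > 0:
        # Age redundant memories so they become likely overwrite targets later.
        ages[redundant] = ages.max() + 1
        # Sample one new key that summarizes the redundant neighborhood.
        mean = keys[redundant].mean(axis=0)
        summary = rng.normal(loc=mean, scale=0.01)
        slot = int(np.argmax(ages))            # overwrite the oldest slot
        keys[slot] = summary / (np.linalg.norm(summary) + 1e-12)
        ages[slot] = 0
    else:
        slot = int(np.argmax(ages))            # no redundancy: plain write
        keys[slot] = h
        ages[slot] = 0
    return keys, ages
```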
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.08510638028383255, 0.20408162474632263, 0.11999999731779099, 0.6071428656578064, 0.6818181872367859, 0.09756097197532654, 0.20512820780277252, 0.1304347813129425, 0.2222222238779068, 0.21052631735801697, 0.09090908616781235, 0.1111111044883728, 0.1111111044883728, 0.19230768084526062, 0.1904761791229248, 0.21875, 0.2800000011920929, 0.11764705181121826, 0.13333332538604736, 0.054054051637649536, 0.2916666567325592, 0.1463414579629898, 0.2857142686843872, 0.2222222238779068, 0.2181818187236786, 0.09756097197532654, 0.11538460850715637, 0.21739129722118378, 0.17777776718139648, 0.178571417927742, 0.2222222238779068, 0.20689654350280762 ]
SJl7tREFvr
true
[ "Conventional memory networks generate many redundant latent vectors resulting in overfitting and the need for larger memories. We introduce memory dropout as an automatic technique that encourages diversity in the latent space." ]
[ "Su-Boyd-Candes (2014) made a connection between Nesterov's method and an ordinary differential equation (ODE). ", "We show if a Hessian damping term is added to the ODE from Su-Boyd-Candes (2014), then Nesterov's method arises as a straightforward discretization of the modified ODE.", "Analogously, in the strongly convex case, a Hessian damping term is added to Polyak's ODE, which is then discretized to yield Nesterov's method for strongly convex functions. ", "Despite the Hessian term, both second order ODEs can be represented as first order systems.\n\n", "Established Liapunov analysis is used to recover the accelerated rates of convergence in both continuous and discrete time. ", "Moreover, the Liapunov analysis can be extended to the case of stochastic gradients which allows the full gradient case to be considered as a special case of the stochastic case. ", "The result is a unified approach to convex acceleration in both continuous and discrete time and in both the stochastic and full gradient cases. \n", " Su et al. (2014) made a connection between Nesterov's method for a convex, L-smooth function, f , and the second order, ordinary differential equation (ODE) x + 3 tẋ + ∇f (x) = 0 (A-ODE)However Su et al. (2014) did not show that Nesterov's method arises as a discretization of (A-ODE).", "In order to obtain such a discretization, we consider the following ODE, which has an additional Hessian damping term with coefficient 1/ √ L. DISPLAYFORM0 Notice that (H-ODE) is a perturbation of (A-ODE), and the perturbation goes to zero as L → ∞.", "Similar ODEs have been studied by BID1 , they have been shown to accelerate gradient descent in continuous time in .Next", ", we consider the case where f is also µ-strongly convex, and write C f := L/µ for the condition number of f . Then", "Nesterov's method in the strongly convex case arises as discretization of the following second order ODË DISPLAYFORM1 (H-ODE-SC) is a perturbation of Polyak's ODE (Polyak, 1964) x + 2 √ µẋ + ∇f (x) = 0 which is accelerates gradient when f is quadratic see (Scieur et al., 2017) .In each", "case, both continuous and discrete, as well and convex and strongly convex, it is possible to provide a proof of the rate using a Liapunov function. These proofs", "are already established in the literature: we give citations below, and also provide proof in the Appendix.Moreover, the analysis for Nesterov's method in the full gradient can be extended to prove acceleration in the case of stochastic gradients. Acceleration", "of stochastic gradient descent has been established by Lin et al. (2015) and BID7 , see also BID8 . A direct acceleration", "method with a connection to Nestero'v method was done by BID0 . Our analysis unifies", "the continuous time ODE with the algorithm, and includes full gradient acceleration as a special case. The analysis proceeds", "by first rewriting (H-ODE) (and (H-ODE-SC)) as first order systems involving ∇f , and then replacing the ∇f with g = ∇f + e. Both the continuous and", "discrete time methods achieve the accelerated rate of convergence, provided |e| goes to zero quickly enough. The condition on |e|, is", "given below in (12) and (13) -it is faster than the corresponding rate for stochastic gradient descent. When e = 0 we recover the", "full gradient case.The renewed interested in the continuous time approach began with the work of Su et al. (2014) and was followed Wibisono et al. (2016); Wilson et al. (2016) . Continuous time analysis", "also appears in BID6 , BID11 , and BID10 . 
However, continuous time", "approaches to optimization have been around for a long time. Polyak's method Polyak (", "1964) is related to successive over relaxation for linear equations (Varga, 1957) which were initially used to accelerate solutions of linear partial differential equations (Young, 1954) . A continuous time interpretation", "of Newton's method can be found in (Polyak, 1987) or BID1 . The mirror descent algorithm of", "Nemirovskii et al. (1983) has a continuous time interpretation BID5 . The Liapunov approach for acceleration", "had already appeared in BID4 for FISTA.The question of when discretizations of dynamical systems also satisfy a Liapunov function has been studied in the context of stabilization in optimal control BID12 . More generally, Stuart & Humphries (1996", ") studies when a discretization of a dynamical system preserves a property such as energy dissipation." ]
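For reference, the discrete iterations that the continuous-time analysis in this row targets are the standard Nesterov updates; the textbook forms below use step size 1/L and the condition number C_f = L/µ defined above, and their indexing conventions may differ from the paper's.

```latex
% Textbook Nesterov iterations (stated as background, not copied from the paper).
\[
  x_{k} = y_{k-1} - \tfrac{1}{L}\nabla f(y_{k-1}), \qquad
  y_{k} = x_{k} + \tfrac{k-1}{k+2}\,(x_{k} - x_{k-1})
  \qquad \text{(convex case)}
\]
\[
  x_{k+1} = y_{k} - \tfrac{1}{L}\nabla f(y_{k}), \qquad
  y_{k+1} = x_{k+1} + \tfrac{\sqrt{C_f}-1}{\sqrt{C_f}+1}\,(x_{k+1} - x_{k})
  \qquad \text{(strongly convex case, } C_f = L/\mu)
\]
```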
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.31578946113586426, 0.5531914830207825, 0.21276594698429108, 0.10526315122842789, 0.1904761791229248, 0.27272728085517883, 0.27272728085517883, 0.2769230604171753, 0.19672130048274994, 0.04878048226237297, 0.1818181723356247, 0.3142856955528259, 0.2083333283662796, 0.3571428656578064, 0.1818181723356247, 0.1111111044883728, 0.3414634168148041, 0.1304347813129425, 0.09090908616781235, 0.17391303181648254, 0.19607841968536377, 0.11764705181121826, 0.10810810327529907, 0.03999999538064003, 0.15789473056793213, 0.10526315122842789, 0.14035087823867798, 0.21621620655059814 ]
HJMINj05tQ
true
[ "We derive Nesterov's method arises as a straightforward discretization of an ODE different from the one in Su-Boyd-Candes and prove acceleration the stochastic case" ]
[ "We propose learning to transfer learn (L2TL) to improve transfer learning on a target dataset by judicious extraction of information from a source dataset.", "L2TL considers joint optimization of vastly-shared weights between models for source and target tasks, and employs adaptive weights for scaling of constituent losses.", "The adaptation of the weights is based on reinforcement learning, guided with a performance metric on the target validation set.", "We demonstrate state-of-the-art performance of L2TL given fixed models, consistently outperforming fine-tuning baselines on various datasets.", "In the regimes of small-scale target datasets and significant label mismatch between source and target datasets, L2TL outperforms previous work by an even larger margin." ]
[ 1, 0, 0, 0, 0 ]
[ 1, 0.15789473056793213, 0.21621620655059814, 0.17142856121063232, 0.1904761791229248 ]
H1l0e6VKDS
false
[ "We propose learning to transfer learn (L2TL) to improve transfer learning on a target dataset by judicious extraction of information from a source dataset." ]
[ "In many partially observable scenarios, Reinforcement Learning (RL) agents must rely on long-term memory in order to learn an optimal policy.", "We demonstrate that using techniques from NLP and supervised learning fails at RL tasks due to stochasticity from the environment and from exploration.", "Utilizing our insights on the limitations of traditional memory methods in RL, we propose AMRL, a class of models that can learn better policies with greater sample efficiency and are resilient to noisy inputs.", "Specifically, our models use a standard memory module to summarize short-term context, and then aggregate all prior states from the standard model without respect to order.", "We show that this provides advantages both in terms of gradient decay and signal-to-noise ratio over time.", "Evaluating in Minecraft and maze environments that test long-term memory, we find that our model improves average return by 19% over a baseline that has the same number of parameters and by 9% over a stronger baseline that has far more parameters.", "We address the problem of reinforcement learning (RL) in tasks that require long-term memory.", "While many successes of Deep RL were achieved in settings that are (near) fully observable, such as Atari games (Mnih et al., 2015) , partial observability requires memory to recall prior observations that indicate the current state.", "Relying on full observability severely limits the applicability of such approaches.", "For example, many tasks in virtual and physical environments are naturally observed from a first-person perspective (Oh et al., 2016) , which means that an agent may need to seek out and remember task-relevant information that is not immediately observable without directly observing the entire environment.", "Recent research has started to address this issue, but effective learning in RL settings with long sequential dependencies remain a key challenge in Deep RL (Oh et al., 2016; Stepleton et al., 2018; Parisotto & Salakhutdinov, 2018) .", "The currently most common approach to RL in partially observable settings relies on models that use memory components that were originally developed for tasks like those that occur in natural language processing (NLP), e.g., LSTMs (Hochreiter & Schmidhuber, 1997) and GRUs (Cho et al., 2014) .", "Hausknecht & Stone (2015) first demonstrated benefits of LSTMs in RL tasks designed to test memory, and these and similar approaches have become common in Deep RL (Wang et al., 2016) , including multi-agent RL (Rashid et al., 2018; Foerster et al., 2017) .", "In this work, we demonstrate that the characteristics of RL can severely impede learning in memory models that are not specifically designed for RL, and propose new models designed to tackle these challenges.", "For example, LSTMs excel in NLP tasks where the order of observations (characters or words) is crucial, and where influence between observations decays quickly with distance.", "Contrast this with a hypothetical RL example where an agent must discover a hidden passcode to escape a locked dungeon.", "The order of observations is highly dependent on the agent's path through the dungeon, yet when it reaches the door, only its ability to recall the passcode is relevant to escaping the dungeon, irrespective of when the agent observed it and how many observations it has seen since.", "Figure 1 illustrates the problem.", "Even in the simplified case where stochasticity is introduced by observation noise, the sample efficiency of LSTMs decreases drastically.", 
"We show that this problem occurs not just for LSTMs, but also for stacked LSTMs and DNCs (Graves et al., 2016; Wayne et al., 2018) , which have been widely applied in RL, and propose solutions that address this problem.", "(Hochreiter & Schmidhuber, 1997 ) trained on a noise-free (T-L, left) and noisy (T-LN, right) TMaze tasks.", "In both cases, the agent must recall a signal from memory.", "LSTM completely fails in the noisy setting while AMRL-Max learns rapidly.", "(68% confidence interval over 5 runs, as for all plots.)", "We make the following three contributions.", "First, in Section 3, we introduce our approach, AMRL.", "AMRL augments memory models like LSTMs with aggregators that are substantially more robust to noise than previous approaches.", "Our models combine several innovations which jointly allow the model to ignore noise while maintaining order-variant information as needed.", "Further, AMRL models maintain informative gradients over very long horizons, which is crucial for sample-efficient learning in long-term memory tasks (Pascanu et al., 2012; Bakker, 2001; Wierstra et al., 2009 ).", "Second, in Section 5, we systematically evaluate how the sources of noise that affect RL agents affect the sample efficiency of AMRL and baseline approaches.", "We devise a series of experiments in two domains, (1) a symbolic maze domain and (2) 3D mazes in the game Minecraft.", "Our results show that AMRL can solve long-term memory tasks significantly faster than existing methods.", "Across tasks our best model achieves an increase in final average return of 9% over baselines with far more parameters and 19% over LSTMs with the same number of parameters.", "Third, in Section 6 we analytically and empirically analyze the characteristics of our proposed and baseline models with the aim to identify factors that affect performance.", "We empirically confirm that AMRL models are substantially less susceptible to vanishing gradients than previous models.", "We propose to additionally analyze memory models in terms of the signal-to-noise ratio achieved at increasing distances from a given signal, and show that AMRL models can maintain signals over many timesteps.", "Jointly, the results of our detailed analysis validate our modeling choices and show why AMRL models are able to effectively solve long-term memory tasks.", "The results in the previous section indicate that models that perform well on long-term memory tasks in noisy settings, such as those studied in Section 5, tend to have informative gradients and high SNR over long time horizons.", "In this section we further examine this relationship.", "Figure 8 shows the aggregate performance achieved by each model across the experiments presented in Section 5 and in the appendix A.2.", "We argue that these tasks capture key aspects of long-term memory tasks in noisy settings.", "We observe that our proposed AMRL-Avg and AMRL-Max approaches outperform all other methods.", "Ablations Max and Avg are competitive with baselines, but our results demonstrate the value of the ST connection.", "AMRL-Max improves over the LSTM average return by 19% with no additional parameters and outperforms DNC the average return by 9% with far fewer parameters.", "We have shown that AMRL models are not susceptible to the drastic performance decreases in noisy environments that LSTMs and DNCs are susceptible to, and we have shown that this generalizes to an ability to ignore irrelevant features in other tasks.", "Figure 8(b) relates overall model performance to 
the quantities analyzed above, SNR and gradient strength.", "We find SNR and gradient strength are both integral and complementary aspects needed for a successful model: DNC has a relatively large SNR, but does not match the empirical performance of AMRL -likely due to its decaying gradients.", "AMRL models achieve high SNR and maintain strong gradients, achieving the highest empirical performance.", "The reverse holds for LSTM models.", "An outlier is the SUM model -we hypothesize that the growing sum creates issues when interpreting memories independent of the time step at which they occur.", "The max aggregator may be less susceptible to growing activations given a bounded number of distinct observations, a bounded input activation, or an analogously compact internal representation.", "That is, the max value may be low and reached quickly.", "Moreover, the ST connection will still prevent gradient decay in such a case.", "Overall, our analytical and empirical analysis in terms of SNR and gradient decay both validates our modeling choices in developing AMRL, and provides a useful tool for understanding learning performance of memory models.", "By considering both empirical measurements of SNR and gradients we are able to rank models closely in-line with empirical performance.", "We consider this a particularly valuable insight for future research seeking to improve long-term memory.", "We have demonstrated that the performance of previous approaches to memory in RL can severely deteriorate under noise, including observation noise and noise introduced by an agents policy and environment dynamics.", "We proposed AMRL, a novel approach designed specifically to be robust to RL settings, by maintaining strong signal and gradients over time.", "Our empirical results confirmed that the proposed models outperform existing approaches, often dramatically.", "Finally, by analyzing gradient strength and signal-to-noise ratio of the considered models, we validated our model choices and showed that both aspects help explain the high empirical performance achieved by our models.", "In future research, we believe our models and analysis will form the basis of further understanding, and improving performance of memory models in RL.", "An aspect that goes beyond the scope of the present paper is the question of how to prevent long-term memory tasks from interfering with shorter-term tasks -an issue highlighted in Appendix A.2.3.", "Additionally, integration of AMRL into models other than the standard LSTM could be explored.", "Overall, our work highlights the need and potential for approaches that specifically tackle long-term memory tasks from an RL perspective." ]
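The order-invariant aggregator with a straight-through (ST) connection described in this row can be sketched as a custom autograd function: the forward pass takes a max over all past hidden states, while the backward pass sends the output gradient to every time step unchanged, so the signal does not decay with sequence length. This is one plausible reading of the ST connection, assuming PyTorch; how the aggregate is combined with the LSTM output is omitted here.

```python
# An illustrative straight-through max aggregator over time.
import torch

class STMaxAggregate(torch.autograd.Function):
    @staticmethod
    def forward(ctx, h_seq):
        # h_seq: (batch, time, dim) hidden states from the standard memory module.
        ctx.time_steps = h_seq.shape[1]
        return h_seq.max(dim=1).values

    @staticmethod
    def backward(ctx, grad_out):
        # Straight-through: broadcast the gradient of the aggregate back to
        # every time step without scaling or masking.
        return grad_out.unsqueeze(1).expand(-1, ctx.time_steps, -1)

# Usage: the summary can be concatenated with the current recurrent output
# before the policy/value heads.
h_seq = torch.randn(4, 50, 32, requires_grad=True)
summary = STMaxAggregate.apply(h_seq)
summary.sum().backward()   # h_seq.grad is all ones: no decay over time
```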
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1904761791229248, 0.09756097197532654, 0.25925925374031067, 0.17777776718139648, 0.21052631735801697, 0.07547169178724289, 0.11428570747375488, 0.14035087823867798, 0, 0.0923076868057251, 0.145454540848732, 0.1230769157409668, 0.1428571343421936, 0.2745097875595093, 0.13333332538604736, 0.10256409645080566, 0.0714285671710968, 0, 0.051282044500112534, 0.1111111044883728, 0.052631575614213943, 0.1249999925494194, 0.0624999962747097, 0, 0, 0.06666666269302368, 0.20512819290161133, 0.09999999403953552, 0.07843136787414551, 0.1395348757505417, 0.09756097197532654, 0.1111111044883728, 0.12765957415103912, 0.17777776718139648, 0.0555555522441864, 0.19230768084526062, 0.13636362552642822, 0.1428571343421936, 0.0714285671710968, 0.09756097197532654, 0.11428570747375488, 0.05882352590560913, 0.10526315122842789, 0.09999999403953552, 0.11538460850715637, 0.1666666567325592, 0.10526315122842789, 0.05714285373687744, 0, 0, 0.08695651590824127, 0.1249999925494194, 0.1764705777168274, 0.20408162474632263, 0.14999999105930328, 0.1666666567325592, 0.23999999463558197, 0.1428571343421936, 0, 0.08163265138864517, 0.1904761791229248, 0.15686273574829102, 0.11428570747375488, 0.09756097197532654 ]
Bkl7bREtDr
true
[ "In Deep RL, order-invariant functions can be used in conjunction with standard memory modules to improve gradient decay and resilience to noise." ]
[ "Optimization on manifold has been widely used in machine learning, to handle optimization problems with constraint.", "Most previous works focus on the case with a single manifold.", "However, in practice it is quite common that the optimization problem involves more than one constraints, (each constraint corresponding to one manifold).", "It is not clear in general how to optimize on multiple manifolds effectively and provably especially when the intersection of multiple manifolds is not a manifold or cannot be easily calculated.", "We propose a unified algorithm framework to handle the optimization on multiple manifolds.", "Specifically, we integrate information from multiple manifolds and move along an ensemble direction by viewing the information from each manifold as a drift and adding them together.", "We prove the convergence properties of the proposed algorithms.", "We also apply the algorithms into training neural network with batch normalization layers and achieve preferable empirical results.", "Machine learning problem is often formulated as optimization problem.", "It is common that the optimization problem comes with multiple constraints due to practical scenarios or human prior knowledge that adding some of them help model achieve a better result.", "One way to handle these constraints is adding regularization terms to the objective, such as the 1 and 2 regularization.", "However, it is hard to adjust the hyper-parameters of the regularization terms to guarantee that the original constraints get satisfied.Another way to deal with the constraints is to optimize on manifolds determined by the constraints.", "Then the optimization problem becomes unconstrained on the manifold, which could be easy to solve technically.", "Furthermore, optimization on manifold indicates optimizing on a more compact space, and may bring performance gain when training neural networks, e.g., BID10 BID3 .Most", "previous works on manifold optimization focus on a single manifold BID13 . However", ", in practice, we often face more than one constraints, each of them corresponding to one manifold. If we", "still solve the optimization problem with multiple constraints by method on manifold, we need to handle it on the intersection of multiple manifolds, which may no longer be a manifold BID11 . Due to", "this, traditional optimization methods on manifold does not work in this case.In this paper, we consider the problem of optimization on multiple manifolds. Specifically", ", the problem is written as arg min DISPLAYFORM0 where each M i is a manifold. We propose", "a method solving this problem by choosing the moving direction as −∇f (x)(on manifold is −gradf (x)) with several drifts which are derived from the descent information on other manifolds. 
By this method", ", we get sequence that has information from all manifolds.", "In this paper, we derive an intuitively method to approach optimization problem with multiple constraints which corresponds to optimizing on the intersection of multiple manifolds.", "Specifically, the method is integrating information among all manifolds to determine minimum points on each manifold.", "We don't add extra conditions to constraints of optimization problem, as long as each constraint can be converted to a manifold.", "In the future, we may add some conditions to manifolds which derive a conclusion that minimum points on each manifold achieved by our algorithm are close with other.", "If this conclusion is established, the problem of optimization on intersection of multiple manifolds is solved.According to the updating rule (equation 3), we can derive many other algorithms, because the drift h k in (equation 3) is flexible.", "On the other hand, Retr x on our algorithm does not limit to a specific one.", "Since there are some results for Retr x = Exp x , for example Corollary 8 in , we may get more elegant results by using Exp x as retraction function in our algorithm.The manifolds we encounter in optimization are mainly embedded sub-manifold and quotient manifold BID1 .", "Embedded sub-manifold is F −1", "(y) for a smooth function F : M 1 → M 2 , where M 1 , M 2 are two manifolds and y ∈ M 2 .", "Quotient manifold is a quotient topology space generalized by a specific equivalence relationship ∼.", "In this paper, we use Oblique manifold and Grassmann manifold which are embedded sub-manifold and quotient manifold respectively.The difficulty we faced in optimization on manifold is calculating tangent space T x M and Riemannian gradient gradf", "(x).", "Giving a exact formula of a tangent space T x M is not a easy problem.", "On the other hand, since Riemannian gradient is ∇f", "(x) projected to a tangent space T x M, finding projection matrix to a specific space T x M is nontrivial." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3125, 0.14814814925193787, 0.1621621549129486, 0.1860465109348297, 0.3448275923728943, 0.14999999105930328, 0.0833333283662796, 0.05882352590560913, 0.1666666567325592, 0.31111109256744385, 0.1818181723356247, 0.1904761791229248, 0.19354838132858276, 0.09756097197532654, 0.1538461446762085, 0.1818181723356247, 0.4000000059604645, 0.2631579041481018, 0.12121211737394333, 0.1304347813129425, 0, 0.41025641560554504, 0.1249999925494194, 0.2857142686843872, 0.1818181723356247, 0.20408162474632263, 0.1249999925494194, 0.1111111044883728, 0, 0, 0.06896550953388214, 0.08510638028383255, 0.13333332538604736, 0, 0.0624999962747097 ]
HJerDj05tQ
true
[ "This paper introduces an algorithm to handle optimization problem with multiple constraints under vision of manifold." ]
[ "It has long been assumed that high dimensional continuous control problems cannot be solved effectively by discretizing individual dimensions of the action space due to the exponentially large number of bins over which policies would have to be learned.", "In this paper, we draw inspiration from the recent success of sequence-to-sequence models for structured prediction problems to develop policies over discretized spaces.", "Central to this method is the realization that complex functions over high dimensional spaces can be modeled by neural networks that predict one dimension at a time.", "Specifically, we show how Q-values and policies over continuous spaces can be modeled using a next step prediction model over discretized dimensions.", "With this parameterization, it is possible to both leverage the compositional structure of action spaces during learning, as well as compute maxima over action spaces (approximately).", "On a simple example task we demonstrate empirically that our method can perform global search, which effectively gets around the local optimization issues that plague DDPG.", "We apply the technique to off-policy (Q-learning) methods and show that our method can achieve the state-of-the-art for off-policy methods on several continuous control tasks.", "Reinforcement learning has long been considered as a general framework applicable to a broad range of problems.", "However, the approaches used to tackle discrete and continuous action spaces have been fundamentally different.", "In discrete domains, algorithms such as Q-learning leverage backups through Bellman equations and dynamic programming to solve problems effectively.", "These strategies have led to the use of deep neural networks to learn policies and value functions that can achieve superhuman accuracy in several games (Mnih et al., 2013; where actions lie in discrete domains.", "This success spurred the development of RL techniques that use deep neural networks for continuous control problems BID12 BID20 .", "The gains in these domains, however, have not been as outsized as they have been for discrete action domains.This disparity is, in part, a result of the inherent difficulty in maximizing an arbitrary function on a continuous domain, even in low-dimensional settings.", "Furthermore, it becomes harder to apply dynamic programming methods to back up value function estimates from successor states to parent states in continuous control problems.", "Several of the recent continuous control reinforcement learning approaches attempt to borrow characteristics from discrete problems by proposing models that allow maximization and backups more easily BID12 .One", "way in which continuous control can avail itself of the above advantages is to discretize each of the dimensions of continuous control action spaces. As", "noted in , doing this naively, however, would create an exponentially large discrete space of actions. For", "example with M dimensions being discretized into N bins, the problem would balloon to a discrete space with M N possible actions.We leverage the recent success of sequence-to-sequence type models BID32 to train such discretized models, without falling into the trap of requiring an exponentially large number of actions. Our", "method relies on a technique that was first introduced in BID3 , which allows us to escape the curse of dimensionality in high dimensional spaces by modeling complicated probability distributions using the chain rule decomposition. 
In", "this paper, we similarly parameterize functions of interest -Q-values -using a decomposition of the joint function into a sequence of conditional values tied together with the bellman operator. With", "this formulation, we are able to achieve fine-grained discretization of individual domains, without an explosion in the number of parameters; at the same time we can model arbitrarily complex distributions while maintaining the ability to perform (approximate) global maximization. These", "benefits come at the cost of shifting the exponentially complex action space into an exponentially complex MDP BID5 BID10 . In many", "settings, however, there are relationships between transitions that can be leveraged and large regions of good solutions, which means that this exponential space need not be fully explored. Existing", "work using neural networks to perform approximate exponential search is evidence of this BID37 ; BID2 .While this", "strategy can be applied to most function approximation settings in RL, we focus on off-policy settings with an algorithm akin to DQN. Empirical", "results on an illustrative multimodal problem demonstrates how our model is able to perform global maximization, avoiding the exploration problems faced by algorithms like NAF BID13 and DDPG . We also show", "the effectiveness of our method on a range of benchmark continuous control problems from hopper to humanoid.", "Conceptually, our approach centers on the idea that action selection at each stage can be factored and sequentially selected.", "In this work we use 1-D action spaces that are discretized as our base component.", "Existing work in the image modeling domain suggests that using a mixture of logistic units BID25 greatly speeds up training and would also satisfy our need for a closed form max.", "Additionally, this work imposes a prespecified ordering of actions which may negatively impact training for certain classes of problems (with much larger number of action dimensions).", "To address this, we could learn to factor the action space into the sequential order for continuous action spaces or learn to group action sets for discrete action spaces.", "Another promising direction is to combine this approximate max action with gradient based optimization procedure.", "This would relieve some of the complexity of the modeling task of the maxing network, at the cost of increased compute when sampling from the policy.", "Finally, the work presented here is exclusively on off-policy methods.", "We chose to focus on these methods due to their sample efficiency.", "Use of an sequential policies with discretized actions could also be used as the policy for any stochastic policy optimization algorithm such as TRPO BID27 or A3C (Mnih et al., 2016) .", "In this work we present a continuous control algorithm that utilize discretized action spaces and sequential models.", "The technique we propose is an off-policy RL algorithm that utilizes sequential prediction and discretization.", "We decompose our model into a hierarchy of Q function.", "The effectiveness of our method is demonstrated on illustrative and benchmark tasks, as well as on more complex continuous control tasks.", "Sampling an Action" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19230768084526062, 0.19999998807907104, 0.23255813121795654, 0.21052631735801697, 0.19999998807907104, 0.0952380895614624, 0.20512819290161133, 0.1818181723356247, 0.25, 0.1111111044883728, 0.11764705181121826, 0.1111111044883728, 0.18867924809455872, 0.10256409645080566, 0.17777776718139648, 0.2702702581882477, 0.11764705181121826, 0.17543859779834747, 0.2745097875595093, 0.1428571343421936, 0.07692307233810425, 0.11428570747375488, 0.045454539358615875, 0.11764705181121826, 0.10526315122842789, 0.1249999925494194, 0.3636363446712494, 0.1111111044883728, 0.25, 0.08510638028383255, 0.19512194395065308, 0.21052631735801697, 0.1249999925494194, 0.0555555522441864, 0.07407406717538834, 0.1428571343421936, 0.12765957415103912, 0.29411762952804565, 0, 0.14814814925193787, 0.2222222238779068, 0 ]
r1SuFjkRW
true
[ "A method to do Q-learning on continuous action spaces by predicting a sequence of discretized 1-D actions." ]
[ "Model-based reinforcement learning (MBRL) aims to learn a dynamic model to reduce the number of interactions with real-world environments.", "However, due to estimation error, rollouts in the learned model, especially those of long horizon, fail to match the ones in real-world environments.", "This mismatching has seriously impacted the sample complexity of MBRL.", "The phenomenon can be attributed to the fact that previous works employ supervised learning to learn the one-step transition models, which has inherent difficulty ensuring the matching of distributions from multi-step rollouts.", "Based on the claim, we propose to learn the synthesized model by matching the distributions of multi-step rollouts sampled from the synthesized model and the real ones via WGAN.", "We theoretically show that matching the two can minimize the difference of cumulative rewards between the real transition and the learned one.", "Our experiments also show that the proposed model imitation method outperforms the state-of-the-art in terms of sample complexity and average return.", "Reinforcement learning (RL) has become of great interest because plenty of real-world problems can be modeled as a sequential decision-making problem.", "Model-free reinforcement learning (MFRL) is favored by its capability of learning complex tasks when interactions with environments are cheap.", "However, in the majority of real-world problems, such as autonomous driving, interactions are extremely costly, thus MFRL becomes infeasible.", "One critique about MFRL is that it does not fully exploit past queries over the environment, and this motivates us to consider the model-based reinforcement learning (MBRL).", "In addition to learning an agent policy, MBRL also uses the queries to learn the dynamics of the environment that our agent is interacting with.", "If the learned dynamic is accurate enough, the agent can acquire the desired skill by simply interacting with the simulated environment, so that the number of samples to collect in the real world can be greatly reduced.", "As a result, MBRL has become one of the possible solutions to reduce the number of samples required to learn an optimal policy.", "Most previous works of MBRL adopt supervised learning with 2 -based errors (Luo et al., 2019; Kurutach et al., 2018; or maximum likelihood (Janner et al., 2019) , to obtain an environment model that synthesizes real transitions.", "These non-trivial developments imply that optimizing a policy on a synthesized environment is a challenging task.", "Because the estimation error of the model accumulates as the trajectory grows, it is hard to train a policy on a long synthesized trajectory.", "On the other hand, training on short trajectories makes the policy short-sighted.", "This issue is known as the planning horizon dilemma (Wang et al., 2019) .", "As a result, despite having a strong intuition at first sight, MBRL has to be designed meticulously.", "Intuitively, we would like to learn a transition model in a way that it can reproduce the trajectories that have been generated in the real world.", "Since the attained trajectories are sampled according to a certain policy, directly employing supervised learning may not necessarily lead to the mentioned result especially when the policy is stochastic.", "The resemblance in trajectories matters because we estimate policy gradient by generating rollouts; however, the one-step model learning adopted by many MBRL methods do not guarantee this.", "Some previous works propose 
multi-step training (Luo et al., 2019; Asadi et al., 2019; Talvitie, 2017) ; however, experiments show that model learning fails to benefit much from the multi-step loss.", "We attribute this outcome to the essence of super-", "We have pointed out that the state-of-the-art methods concentrate on learning synthesized models in a supervised fashion, which does not guarantee that the policy is able to reproduce a similar trajectory in the learned model and therefore the model may not be accurate enough to estimate long rollouts.", "We have proposed to incorporate WGAN to achieve occupancy measure matching between the real transition and the synthesized model and theoretically shown that matching indicates the closeness in cumulative rewards between the synthesized model and the real environment.", "To enable stable training across WGANs, we have suggested using a truncated version of WGAN to prevent training from getting stuck at local optimums.", "The empirical property of WGAN application such as imitation learning indicates its potential to learn the transition with fewer samples than supervised learning.", "We have confirmed it experimentally by further showing that MI converges much faster and obtains better policy than state-of-the-art model-based and model-free algorithms." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.13333332538604736, 0.0624999962747097, 0, 0.19512194395065308, 0.17142856121063232, 0.12903225421905518, 0.1249999925494194, 0.0624999962747097, 0.06666666269302368, 0, 0.10526315122842789, 0.12121211737394333, 0.04651162400841713, 0.0624999962747097, 0.08888888359069824, 0, 0.0624999962747097, 0, 0, 0.0714285671710968, 0.11764705181121826, 0.10526315122842789, 0.052631575614213943, 0.09999999403953552, 0.0952380895614624, 0.07843136787414551, 0.3684210479259491, 0.11428570747375488, 0.23529411852359772, 0 ]
S1lJv0VYDr
true
[ "Our method incorporates WGAN to achieve occupancy measure matching for transition learning." ]
[ "Batch Normalization (BN) and its variants have seen widespread adoption in the deep learning community because they improve the training of deep neural networks.", "Discussions of why this normalization works so well remain unsettled. ", "We make explicit the relationship between ordinary least squares and partial derivatives computed when back-propagating through BN.", "We recast the back-propagation of BN as a least squares fit, which zero-centers and decorrelates partial derivatives from normalized activations.", "This view, which we term {\\em gradient-least-squares}, is an extensible and arithmetically accurate description of BN.", "To further explore this perspective, we motivate, interpret, and evaluate two adjustments to BN.", "Training deep neural networks has become central to many machine learning tasks in computer vision, speech, and many other application areas.", "BID10 showed empirically that Batch Normalization (BN) enables deep networks to attain faster convergence and lower loss.", "Reasons for the effectiveness of BN remain an open question BID12 .", "Existing work towards explaining this have focused on covariate shift; Santurkar et al. (2018) described how BN makes the loss function smoother.", "This work examines the details of the back-propagation of BN, and recasts it as a least squares fit.", "This gradient regression zero-centers and decorrelates partial derivatives from the normalized activations; it passes on a scaled residual during back-propagation.", "Our view provides novel insight into the effectiveness of BN and several existing alternative normalization approaches in the literature.", "This work makes explicit how BN back-propagation regresses partial derivatives against the normalized activations and keeps the residual.", "This view, in conjunction with the empirical success of BN, suggests an interpretation of BN as a gradient regression calculation.", "BN and its variants decorrelate and zero-center the gradients with respect to the normalized activations.", "Subjectively, this can be viewed as removing systematic errors from the gradients.", "Our view also support empirical results in literature preferring early BN placement within neural network branches.Leveraging gradient-least-squares considerations, we ran two sets of normalization experiments, applicable to large batch and small batch settings.", "Placing a LN layer either before or after BN can be viewed as two-step regression that better explains the residual.", "We show empirically on CIFAR-10 that BN and LN together are better than either individually.", "In a second set of experiments, we address BN's performance degradation with small batch size.", "We regularize the gradient regression with streaming gradient statistics, which empirically recovers some performance on CIFAR-10 relative to basic BN, on batch size two.Why do empirical improvements in neural networks with BN keep the gradient-least-squares residuals and drop the explained portion?", "We propose two open approaches for investigating this in future work.", "A first approach focuses on how changes to the gradient regression result in different formulations; the two empirical experiments in our work contribute to this.", "A second approach examines the empirical relationships between gradients of activations evaluated on the same parameter values; we can search for a shared noisy component arising from gradients in the same normalization partition.", "Suppose that the gradient noise correlates with the activations -this is 
plausible because the population of internal activations arises from using shared weights - then normalizations could be viewed as a layer that removes systematic noise during back-propagation. In the setting of DISPLAYFORM0 , the partial derivatives satisfy ∂z_j/∂x_i = (1/σ)(δ_ij − 1/B − z_i z_j /B) (DISPLAYFORM1). Proof.", "In deriving ∂z_j/∂x_i , we will treat the cases of when j = i and when j ≠ i separately.", "We start by examining intermediate quantities of interest as a matter of convenience for later use.", "We define helper quantities u_i = x_i − µ.", "Note that each u_j depends on all of the x_i via µ.", "Next, we write out useful identities DISPLAYFORM2 . We prepare to differentiate with the rule of total derivatives: DISPLAYFORM3 . Making use of equations 21, 22, 23 and 25, we simplify ∂σ/∂x_i for any i as follows.", "DISPLAYFORM4 We apply the quotient rule on ∂z_j/∂x_i when j = i, then substitute equation 33 DISPLAYFORM5 Similarly, when i ≠ j, inputs in batch b.", "In our work, we keep track of an exponential running estimate across batches, DISPLAYFORM6 DISPLAYFORM7 DISPLAYFORM8 that marginalizes the (B, H, W) dimensions into accumulators of shape C. The b subscript of the outer expectation is slightly abusive notation indicating that α* and β* are running averages across recent batches, with momentum as a hyperparameter that determines the weighting.", "We regularize the gradient regression with virtual activations and virtual gradients, defined as follows.", "We append two virtual batch items, broadcast to an appropriate shape, x_+ = µ_b + σ_b and x_− = µ_b − σ_b .", "Here, µ_b and σ_b are batch statistics of the real activations.", "The concatenated tensor undergoes standard BN, which outputs the usual {z_i} for the real activations, but z_+ = 1 and z_− = −1 for the virtual items.", "The z_+ and z_− do not affect the feed-forward calculations, but they receive virtual gradients during back-propagation: DISPLAYFORM9 . The virtual data (z_+ , ∂L/∂z_+) and (z_− , ∂L/∂z_−) regularize the gradient-least-squares regression.", "∂L/∂z_+ and ∂L/∂z_− eventually modify the gradients received by the real activations x_i .", "The virtual data can be weighted with hyperparameters.", "In our experiments, we see improvements, robust to a hyperparameter cross-product search over the weightings and the momentum for α* and β* .", "The momentum for α* and β* was in {.997, .5} and the virtual item weights were in {2^{i−1}} for i ∈ {0,1,2,3} .", "The performance of larger batches is not recovered; regularized regression could not be reasonably expected to recover the performance of regressing with more data.", "See Table 2 for final validation performance with a reference TensorFlow ResNet-34-v2 implementation at a batch size of two.", "The baseline evaluation with identity (no normalization) experienced noticeable overfitting in terms of cross entropy but not accuracy.", "The base learning rate was multiplied by 1/64 relative to the baseline rate used in runs with batch size 128." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05128204822540283, 0.0714285671710968, 0.1764705777168274, 0.5405405163764954, 0.12121211737394333, 0.06451612710952759, 0.05405404791235924, 0.05882352590560913, 0, 0, 0.1818181723356247, 0.4864864945411682, 0.11428570747375488, 0.29411762952804565, 0.0555555522441864, 0.19999998807907104, 0.06896550953388214, 0.07999999821186066, 0.05405404791235924, 0.0624999962747097, 0.0624999962747097, 0.07407406717538834, 0, 0, 0.17391303181648254, 0.21052631735801697, 0.05882352590560913, 0.0624999962747097, 0, 0, 0.03999999538064003, 0, 0.028985504060983658, 0.13333332538604736, 0.0555555522441864, 0.13793103396892548, 0.09302324801683426, 0.08888888359069824, 0.1249999925494194, 0, 0.10810810327529907, 0.0555555522441864, 0, 0.05714285373687744, 0, 0 ]
BkMq0oRqFQ
true
[ "Gaussian normalization performs a least-squares fit during back-propagation, which zero-centers and decorrelates partial derivatives from normalized activations." ]
[ "Batch Normalization (BN) has become a cornerstone of deep learning across diverse architectures, appearing to help optimization as well as generalization.", "While the idea makes intuitive sense, theoretical analysis of its effectiveness has been lacking.", "Here theoretical support is provided for one of its conjectured properties, namely, the ability to allow gradient descent to succeed with less tuning of learning rates.", "It is shown that even if we fix the learning rate of scale-invariant parameters (e.g., weights of each layer with BN) to a constant (say, 0.3), gradient descent still approaches a stationary point (i.e., a solution where gradient is zero) in the rate of T^{−1/2} in T iterations, asymptotically matching the best bound for gradient descent with well-tuned learning rates.", "A similar result with convergence rate T^{−1/4} is also shown for stochastic gradient descent.", "Batch Normalization (abbreviated as BatchNorm or BN) (Ioffe & Szegedy, 2015) is one of the most important innovation in deep learning, widely used in modern neural network architectures such as ResNet BID8 , Inception (Szegedy et al., 2017) , and DenseNet (Huang et al., 2017) .", "It also inspired a series of other normalization methods (Ulyanov et al., 2016; BID0 Ioffe, 2017; Wu & He, 2018) .BatchNorm", "consists of standardizing the output of each layer to have zero mean and unit variance. For a single", "neuron, if x 1 , . . . , x B is the original outputs in a mini-batch, then it adds a BatchNorm layer which modifies the outputs to DISPLAYFORM0 where µ = i=1 (x i − µ) 2 are the mean and variance within the minibatch, and γ, β are two learnable parameters. BN appears to", "stabilize and speed up training, and improve generalization. The inventors", "suggested (Ioffe & Szegedy, 2015) that these benefits derive from the following:1. By stabilizing", "layer outputs it reduces a phenomenon called Internal Covariate Shift, whereby the training of a higher layer is continuously undermined or undone by changes in the distribution of its inputs due to parameter changes in previous layers., 2. Making the", "weights", "invariant to scaling, appears to reduce the dependence of training on the scale of parameters and enables us to use a higher learning rate;3. By implictly regularizing", "the model it improves generalization.But these three benefits are not fully understood in theory. Understanding generalization", "for deep models remains an open problem (with or without BN). Furthermore, in demonstration", "that intuition can sometimes mislead, recent experimental results suggest that BN does not reduce internal covariate shift either (Santurkar et al., 2018) , and the authors of that study suggest that the true explanation for BN's effectiveness may lie in a smoothening effect (i.e., lowering of the Hessian norm) on the objective. Another recent paper (Kohler", "et al., 2018) tries to quantify the benefits of BN for simple machine learning problems such as regression but does not analyze deep models.Provable quantification of Effect 2 (learning rates). Our study consists of quantifying", "the effect of BN on learning rates. Ioffe & Szegedy (2015) observed that", "without BatchNorm, a large learning rate leads to a rapid growth of the parameter scale. Introducing BatchNorm usually stabilizes", "the growth of weights and appears to implicitly tune the learning rate so that the effective learning rate adapts during the course of the algorithm. They explained this intuitively as follows", ". 
After BN the output of a neuron z = BN(w", "x) is unaffected when the weight w is scaled, i.e., for any scalar c > 0, BN(w x) = BN((cw) x).Taking derivatives one finds that the gradient", "at cw equals to the gradient at w multiplied by a factor 1/c. Thus, even though the scale of weight parameters", "of a linear layer proceeding a BatchNorm no longer means anything to the function represented by the neural network, their growth has an effect of reducing the learning rate.Our paper considers the following question: Can we rigorously capture the above intuitive behavior? Theoretical analyses of speed of gradient descent", "algorithms in nonconvex settings study the number of iterations required for convergence to a stationary point (i.e., where gradient vanishes). But they need to assume that the learning rate has", "been set (magically) to a small enough number determined by the smoothness constant of the loss function -which in practice are of course unknown. With this tuned learning rate, the norm of the gradient", "reduces asymptotically as T −1/2 in T iterations. In case of stochastic gradient descent, the reduction is", "like T −1/4 . Thus a potential way to quantify the rate-tuning behavior", "of BN would be to show that even when the learning rate is fixed to a suitable constant, say 0.1, from the start, after introducing BN the convergence to stationary point is asymptotically just as fast (essentially) as it would be with a hand-tuned learning rate required by earlier analyses. The current paper rigorously establishes such auto-tuning", "behavior of BN (See below for an important clarification about scale-invariance).We note that a recent paper (Wu et al., 2018) introduced a", "new algorithm WNgrad that is motivated by BN and provably has the above auto-tuning behavior as well. That paper did not establish such behavior for BN itself,", "but it was a clear inspiration for our analysis of BN.Scale-invariant and scale-variant parameters. The intuition of Ioffe & Szegedy (2015) applies for all scale-invariant", "parameters, but the actual algorithm also involves other parameters such as γ and β whose scale does matter. Our analysis partitions the parameters in the neural networks into two", "groups W (scale-invariant) and g (scale-variant). The first group, W = {w (1) , . . . , w (m) }, consists of all the parameters", "whose scales does not affect the loss, i.e., scaling w (i) to cw (i) for any c > 0 does not change the loss (see Definition 2.1 for a formal definition); the second group, g, consists of all other parameters that are not scale-invariant. In a feedforward neural network with BN added at each layer, the layer weights", "are all scale-invariant. 
This is also true for BN with ℓp normalization strategies (Santurkar et al., 2018", "; Hoffer et al., 2018) and other normalization layers, such as Weight Normalization (Salimans & Kingma, 2016) , Layer Normalization BID0 , Group Normalization (Wu & He, 2018 ) (see Table 1 in BID0 for a summary).", "In this paper, we studied how scale-invariance in neural networks with BN helps optimization, and showed that (stochastic) gradient descent can achieve the best asymptotic convergence rate without tuning learning rates for scale-invariant parameters.", "Our analysis suggests that the scale-invariance introduced in neural networks by BN reduces the effort of tuning the learning rate to fit the training data. However, our analysis only applies to smooth loss functions.", "In modern neural networks, ReLU or Leaky ReLU are often used, which makes the loss non-smooth.", "Showing similar results in non-smooth settings would have broader implications.", "Also, we only considered gradient descent in this paper.", "It can be shown that if we perform (stochastic) gradient descent with momentum, the norm of scale-invariant parameters will also be monotonically increasing.", "It would be interesting to use this to show similar convergence results for more gradient methods." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23255813121795654, 0.21621620655059814, 0.2978723347187042, 0.21917808055877686, 0.05405404791235924, 0.12903225421905518, 0.13333332538604736, 0.19999998807907104, 0.11764705181121826, 0, 0.05405404791235924, 0.17543859779834747, 0.21739129722118378, 0.10256409645080566, 0.1621621549129486, 0.1690140813589096, 0.20689654350280762, 0.1666666567325592, 0.24390242993831635, 0.21276594698429108, 0.1818181723356247, 0.08163265138864517, 0.1904761791229248, 0.1538461446762085, 0.30188679695129395, 0.23076923191547394, 0.1538461446762085, 0.17142856121063232, 0.17391303181648254, 0.17391303181648254, 0.0833333283662796, 0.1702127605676651, 0.12244897335767746, 0.08888888359069824, 0.13333332538604736, 0.09756097197532654, 0.1428571343421936, 0.14035087823867798, 0.23076923191547394, 0.052631575614213943, 0.05714285373687744, 0.0624999962747097, 0.08888888359069824, 0.10526315122842789 ]
rkxQ-nA9FX
true
[ "We give a theoretical analysis of the ability of batch normalization to automatically tune learning rates, in the context of finding stationary points for a deep learning objective." ]
[ "Generative models of natural images have progressed towards high fidelity samples by the strong leveraging of scale.", "We attempt to carry this success to the field of video modeling by showing that large Generative Adversarial Networks trained on the complex Kinetics-600 dataset are able to produce video samples of substantially higher complexity and fidelity than previous work. ", "Our proposed model, Dual Video Discriminator GAN (DVD-GAN), scales to longer and higher resolution videos by leveraging a computationally efficient decomposition of its discriminator.", "We evaluate on the related tasks of video synthesis and video prediction, and achieve new state-of-the-art Fréchet Inception Distance for prediction for Kinetics-600, as well as state-of-the-art Inception Score for synthesis on the UCF-101 dataset, alongside establishing a strong baseline for synthesis on Kinetics-600.", "We approached the challenging problem of modeling natural video by introducing a GAN capable of capturing the complexity of a large video dataset.", "We showed that on UCF-101 and frame-conditional Kinetics-600 it quantitatively achieves the new state of the art, alongside qualitatively producing video synthesis samples with high complexity and diversity.", "We further wish to emphasize the benefit of training generative models on large and complex video datasets, such as Kinetics-600, and envisage the strong baselines we established on this dataset with DVD-GAN will be used as a reference point by the generative modeling community moving forward.", "While much remains to be done before realistic videos can be consistently generated in an unconstrained setting, we believe DVD-GAN is a step in that direction.", "A EXPERIMENT METHODOLOGY" ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09302324801683426, 0.317460298538208, 0.15686273574829102, 0.27586206793785095, 0.2666666507720947, 0.30188679695129395, 0.2985074520111084, 0.15686273574829102, 0 ]
Byx91R4twB
true
[ "We propose DVD-GAN, a large video generative model that is state of the art on several tasks and produces highly complex videos when trained on large real world datasets." ]
[ "Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated.", "In this work, we introduce Neural Process Networks to understand procedural text through (neural) simulation of action dynamics. ", "Our model complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers.", "The model updates the states of the entities by executing learned action operators.", "Empirical results demonstrate that our proposed model can reason about the unstated causal effects of actions, allowing it to provide more accurate contextual information for understanding and generating procedural text, all while offering more interpretable internal representations than existing alternatives.", "Understanding procedural text such as instructions or stories requires anticipating the implicit causal effects of actions on entities.", "For example, given instructions such as \"add blueberries to the muffin mix, then bake for one half hour,\" an intelligent agent must be able to anticipate a number of entailed facts (e.g., the blueberries are now in the oven; their \"temperature\" will increase).", "While this common sense reasoning is trivial for humans, most natural language understanding algorithms do not have the capacity to reason about causal effects not mentioned directly in the surface strings BID12 BID7 BID14 .", "The process is a narrative of entity state changes induced by actions.", "In each sentence, these state changes are induced by simulated actions and must be remembered.In this paper, we introduce Neural Process Networks, a procedural language understanding system that tracks common sense attributes through neural simulation of action dynamics.", "Our network models interpretation of natural language instructions as a process of actions and their cumulative effects on entities.", "More concretely, reading one sentence at a time, our model attentively selects what actions to execute on which entities, and remembers the state changes induced with a recurrent memory structure.", "In FIG0 , for example, our model indexes the \"tomato\" embedding, selects the \"wash\" and \"cut\" functions and performs a computation that changes the \"tomato\" embedding so that it can reason about attributes such as its \"SHAPE\" and \"CLEANLINESS\".Our", "model contributes to a recent line of research that aims to model aspects of world state changes, such as language models and machine readers with explicit entity representations BID4 BID6 , as well as other more general purpose memory network variants BID30 BID26 BID5 BID23 . This", "worldcentric modeling of procedural language (i.e., understanding by simulation) abstracts away from the surface strings, complementing text-centric modeling of language, which focuses on syntactic and semantic labeling of surface words (i.e., understanding by labeling).Unlike", "previous approaches, however, our model also learns explicit action representations as functional operators (See FIG0 . While", "representations of action semantics could be acquired through an embodied agent that can see and interact with the world BID22 , we propose to learn these representations from text. In particular", ", we require the model to be able to explain the causal effects of actions by predicting natural language attributes about entities such as \"LOCATION\" and \"TEMPERATURE\". 
The model adjusts", "its representations of actions based on errors it makes in predicting the resultant state changes to attributes. This textual simulation", "allows us to model aspects of action causality that are not readily available in existing simulation environments. Indeed, most virtual environments", "offer limited aspects of the world -with a primary focus on spatial relations BID22 BID1 BID29 . They leave out various other dimensions", "of the world states that are implied by diverse everyday actions such as \"dissolve\" (change of \"COMPOSITION\") and \"wash\" (change of \"CLEANLINESS\").Empirical results demonstrate that parametrizing", "explicit action embeddings provides an inductive bias that allows the neural process network to learn more informative context representations for understanding and generating natural language procedural text. In addition, our model offers more interpretable", "internal representations and can reason about the unstated causal effects of actions explained through natural language descriptors. Finally, we include a new dataset with fine-grained", "annotations on state changes, to be shared publicly, to encourage future research in this direction.", "We introduced the Neural Process Network for modeling a process of actions and their causal effects on entities by learning action transformations that change entity state representations.", "The model maintains a recurrent memory structure to track entity states and is trained to predict the state changes that entities undergo.", "Empirical results demonstrate that our model can learn the causal effects of action semantics in the cooking domain and track the dynamic state changes of entities, showing advantages over competitive baselines.A TRAINING DETAILS OF OUR FULL MODEL AND ABLATIONS" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.20512819290161133, 0.04878048226237297, 0.20512819290161133, 0.23529411852359772, 0.19672130048274994, 0.29999998211860657, 0.0952380895614624, 0.18518517911434174, 0.3529411852359772, 0.29999998211860657, 0.25, 0.2745097875595093, 0.178571417927742, 0.158730149269104, 0.11538460850715637, 0, 0.19230768084526062, 0.2857142686843872, 0.2380952388048172, 0.0952380895614624, 0.13636362552642822, 0.2222222238779068, 0.072727270424366, 0.3404255211353302, 0.0555555522441864, 0.44897958636283875, 0.41860464215278625, 0.3050847351551056 ]
rJYFzMZC-
true
[ "We propose a new recurrent memory architecture that can track common sense state changes of entities by simulating the causal effects of actions." ]
[ "There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with an aim for faster execution, better accuracy, or greater compression. ", "In this setting, a neural data structure is instantiated by training a network over many epochs of its inputs until convergence.", "In many applications this expensive initialization is not practical, for example streaming algorithms --- where inputs are ephemeral and can only be inspected a small number of times. ", "In this paper we explore the learning of approximate set membership over a stream of data in one-shot via meta-learning.", "We propose a novel memory architecture, the Neural Bloom Filter, which we show to be more compressive than Bloom Filters and several existing memory-augmented neural networks in scenarios of skewed data or structured sets.", "One of the simplest questions one can ask of a set of data is whether or not a given query is contained within it.", "Is q, our query, a member of S, our chosen set of observations?", "This set membership query arises across many computing domains; from databases, network routing, and firewalls.", "One could query set membership by storing S in its entirety and comparing q against each element.", "However, more space-efficient solutions exist.The original and most widely implemented approximate set membership data-structure is the Bloom Filter BID2 .", "It works by storing sparse distributed codes, produced from randomized hash functions, within a binary vector.", "The Bloom-filter trades off space for an allowed false positive rate, which arises due to hash collisions.", "However its error is one-sided; if an element q is contained in S then it will always be recognized.", "It never emits false negatives.One can find Bloom Filters embedded within a wide range of production systems; from network security BID16 , to block malicious IP addresses; databases, such as Google's Bigtable BID7 , to avoid unnecessary disk lookups; cryptocurrency BID19 , to allow clients to filter irrelevant transactions; search, such as Facebook's typeahead search BID0 , to filter pages which do not contain query prefixes; and program verification BID13 , to avoid recomputation over previously observed states.While the main appeal of Bloom Filters is favourable compression, another important quality is the support for dynamic updates.", "New elements can be inserted in O(1) time.", "This is not the case for all approximate set membership data structures.", "For example, perfect hashing saves ≈ 40% space over Bloom Filters but requires a pre-processing stage that is polynomial-time in the number of elements to store BID12 .", "Whilst the static set membership problem is interesting, it limits the applicability of the algorithm.", "For example, in a database application that is serving a high throughput of write operations, it may be intractable to regenerate the full data-structure upon each batch of writes.We thus focus on the data stream computation model BID27 , where input observations are assumed to be ephemeral and can only be inspected a constant number of timesusually once.", "This captures many real-world applications: network traffic analysis, database query serving, and reinforcement learning in complex domains.", "Devising an approximate set membership data structure that is not only more compressive than Bloom Filters, but can be applied to either dynamic or static sets, could have a significant performance impact on modern computing applications.", 
"In this paper we investigate this problem using memory-augmented neural networks and meta-learning.We build upon the recently growing literature on using neural networks to replace algorithms that are configured by heuristics, or do not take advantage of the data distribution.", "For example, Bloom Filters are indifferent to the data distribution.", "They have near-optimal space efficiency when data is drawn uniformly from a universe set BID5 (maximal-entropy case) but (as we shall show) are sub-optimal when there is more structure.", "Prior studies on this theme have investigated compiler optimization BID11 , computation graph placement , and data index structures such as b-trees BID22 .In", "the latter work, BID22 explicitly consider the problem of static set membership. By", "training a neural network over a fixed S (URLs from Google's Transparency Report) with negative examples in the form of held-out URLs, they observe 36% space reduction over a conventional Bloom Filter 1 . Crucially", "this requires iterating over the storage set S a large number of times to embed its salient information into the weights of a neural network classifier. For a new", "S this process would have to be repeated from scratch.Instead of learning from scratch, we draw inspiration from the few-shot learning advances obtained by meta-learning memory-augmented neural networks BID30 BID34 . In this setup", ", tasks are sampled from a common distribution and a network learns to specialize to (learn) a given task with few examples. This matches", "very well to applications where many Bloom Filters are instantiated over different subsets of a common data distribution. For example,", "a Bigtable database usually contains one Bloom Filter per SSTable file. For a large", "table that contains Petabytes of data, say, there can be over 100, 000 separate instantiated data-structures which share a common row key format and query distribution. Meta-learning", "allows us to exploit this common redundancy.The main contributions of this paper are (1) A new sparse memory-augmented neural network architecture, the Neural Bloom Filter, which learns to write to memory using a distributed write scheme, and (2) An empirical evaluation of the Neural Bloom Filter meta-learned on one-shot approximate set membership problems of varying structure.We compare with the classical Bloom Filter alongside other memory-augmented neural networks such as the Differentiable Neural Computer and Memory Networks BID33 . 
We find when", "there is no structure, that differentiates the query set elements and queries, the Neural Bloom Filter learns a solution similar to a Bloom Filter derivative (a Bloom-g filter BID28 ), but when there is a lot of structure the solution can be considerably more space-efficient.", "In many situations neural networks are not a suitable replacement to Bloom Filters and their variants.", "The Bloom Filter is robust to changes in data distribution, and adversarial attacks, because it delivers a bounded false positive rate for any sampled subset, unlike a neural network.", "However in this paper we consider the questions, \"When might a neural network provide better compression than a Bloom Filter?\" and \"What kind of neural architecture is practical?\".", "We see that a model which uses an external memory with an adaptable capacity, avoids BPTT with a feed-forward write scheme, and learns to address its memory, is the most promising option in contrast to popular memory models such as DNCs and LSTMs.", "We term this model the Neural Bloom Filter due to the analogous incorporation of a hashing scheme, commutative write scheme, and multiplicative read mechanism.The Neural Bloom Filter relies on settings where cost of learning to query is possible and will be a net benefit to a population of existing bloom filters.", "That is, because we rely on meta-learning, we need situations where we have an off-line dataset (both of stored elements and queries) that is similar enough to future data that we wish to store.", "In the case of a large database we think this is warranted, a database with 100, 000 separate set membership data structures will benefit from a single (or periodic) meta-learning training routine that can run on a single machine and sample from the currently stored data, generating a large number of efficient data-structures.", "We envisage the space cost of the network to be amortized by sharing it across many neural bloom filters, and the time-cost of executing the network to be offset by the continuous acceleration of dense linear algebra on modern hardware, and the ability to batch writes and queries efficiently.", "A promising future direction would be to investigate the feasibility of this approach in a production system." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.045454543083906174, 0.12121211737394333, 0.0476190447807312, 0.3125, 0.21739129722118378, 0.1818181723356247, 0.1666666567325592, 0.1428571343421936, 0.13333332538604736, 0.1818181723356247, 0, 0.06666666269302368, 0, 0.04301074892282486, 0, 0.23999999463558197, 0.14999999105930328, 0.307692289352417, 0.09375, 0.06666666269302368, 0.08163265138864517, 0.2448979616165161, 0.08695651590824127, 0.20000000298023224, 0, 0.3199999928474426, 0.1818181723356247, 0.21052631735801697, 0.23255813121795654, 0, 0.060606054961681366, 0, 0.04999999701976776, 0.20779220759868622, 0.16326530277729034, 0.06896550953388214, 0.04878048226237297, 0.1538461446762085, 0.07999999821186066, 0.15094339847564697, 0.0476190447807312, 0.1428571343421936, 0.2083333283662796, 0.19999998807907104 ]
HkekMnR5Ym
true
[ "We investigate the space efficiency of memory-augmented neural nets when learning set membership." ]
[ "We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network.", "Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them.", "We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network.", "We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks.", "Our approach only requires calculating two square curvature factor matrices for each layer.", "Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage.", "We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture.", "Neural networks are most commonly trained in a maximum a posteriori (MAP) setting, which only yields point estimates of the parameters, ignoring any uncertainty about them.", "This often leads to overconfident predictions, especially in regimes that are weakly covered by training data or far away from the data manifold.", "While the confidence of wrong predictions is usually irrelevant in a research context, it is essential that a Machine Learning algorithm knows when it does not know in the real world, as the consequences of mistakes can be fatal, be it when driving a car or diagnosing a disease.The Bayesian framework of statistics provides a principled way for avoiding overconfidence in the parameters by treating them as unknown quantities and integrating over all possible values.", "Specifically, for the prediction of new data under a model, it fits a posterior distribution over the parameters given the training data and weighs the contribution of each setting of the parameters to the prediction by the probability of the data under those parameters times their prior probability.", "However, the posterior of neural networks is usually intractable due to their size and nonlinearity.There has been previous interest in integrating neural networks into the Bayesian framework BID26 BID15 BID28 BID1 , however these approaches were designed for small networks by current standards.", "Recent adaptations to architectures of modern scale rely on crude approximations of the posterior to become tractable.", "All of BID9 BID14 BID2 assume independence between the individual weights.", "While they achieve good results on small datasets, this strong restriction of the posterior is susceptible to underestimating the uncertainty, in particular when optimising the variational bound.", "The approach in BID6 requires the use of certain stochastic regularisers which are not commonly present in most recent architectures.", "Furthermore, it is not clear if the approximate posterior defined by these regularisers is a good fit to the true posterior.Recent work on second-order optimisation of neural networks BID27 BID3 has demonstrated that the diagonal blocks of the curvature can be well approximated by a Kronecker product.", "We combine this insight with the idea of modelling the posterior over the weights as a Gaussian, using a Laplace approximation BID26 with Kronecker factored covariance matrices.", "This leads to a computationally efficient matrix normal 
posterior distribution BID11 over the weights of every layer.", "Since the Laplace approximation is applied after training, our approach can be used to obtain uncertainty estimates from existing networks.", "We presented a scalable approximation to the Laplace approximation for the posterior of a neural network and provided experimental results suggesting that the uncertainty estimates are on par with current alternatives like Dropout, if not better.", "It enables practitioners to obtain principled uncertainty estimates from their models, even if they were trained in a maximum likelihood/MAP setting.There are many possible extensions to this work.", "One would be to automatically determine the scale and regularisation hyperparameters of the Kronecker factored Laplace approximation using the model evidence similar to how BID26 interpolates between the data log likelihood and the width of the prior.", "The model evidence could further be used to perform Bayesian model averaging on ensembles of neural networks, potentially improving their generalisation ability and uncertainty estimates.", "A challenging application would be active learning, where only little data is available relative to the number of curvature directions that need to be estimated." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.6222222447395325, 0.13333332538604736, 0.3499999940395355, 0.2790697515010834, 0.05882352590560913, 0.1395348757505417, 0.1764705777168274, 0.1304347813129425, 0.1860465109348297, 0.1249999925494194, 0.23529411852359772, 0.16393442451953888, 0.1111111044883728, 0.1249999925494194, 0.08695651590824127, 0.04999999329447746, 0.2295081913471222, 0.40909090638160706, 0.5263158082962036, 0.24390242993831635, 0.3396226465702057, 0.08163265138864517, 0.23999999463558197, 0.08888888359069824, 0.13636362552642822 ]
Skdvd2xAZ
true
[ "We construct a Kronecker factored Laplace approximation for neural networks that leads to an efficient matrix normal distribution over the weights." ]
[ "Spectral embedding is a popular technique for the representation of graph data.", "Several regularization techniques have been proposed to improve the quality of the embedding with respect to downstream tasks like clustering.", "In this paper, we explain on a simple block model the impact of the complete graph regularization, whereby a constant is added to all entries of the adjacency matrix.", "Specifically, we show that the regularization forces the spectral embedding to focus on the largest blocks, making the representation less sensitive to noise or outliers.", "We illustrate these results on both on both synthetic and real data, showing how regularization improves standard clustering scores.", "Spectral embedding is a standard technique for the representation of graph data (Ng et al., 2002; Belkin & Niyogi, 2002) .", "Given the adjacency matrix A ∈ R n×n + of the graph, it is obtained by solving either the eigenvalue problem:", "or the generalized eigenvalue problem:", "where D = diag(A1 n ) is the degree matrix, with 1 n the all-ones vector of dimension n, L = D − A is the Laplacian matrix of the graph, Λ ∈ R k×k is the diagonal matrix of the k smallest (generalized) eigenvalues of L and X ∈ R n×k is the corresponding matrix of (generalized) eigenvectors.", "In this paper, we only consider the generalized eigenvalue problem, whose solution is given by the spectral decomposition of the normalized Laplacian matrix L norm = I − D −1/2 AD −1/2 (Luxburg, 2007) .", "The spectral embedding can be interpreted as equilibrium states of some physical systems (Snell & Doyle, 2000; Spielman, 2007; Bonald et al., 2018) , a desirable property in modern machine learning.", "However, it tends to produce poor results on real datasets if applied directly on the graph (Amini et al., 2013) .", "One reason is that real graphs are most often disconnected due to noise or outliers in the dataset.", "In order to improve the quality of the embedding, two main types of regularization have been proposed.", "The first artificially increases the degree of each node by a constant factor (Chaudhuri et al., 2012; Qin & Rohe, 2013) , while the second adds a constant to all entries of the original adjacency matrix (Amini et al., 2013; Joseph et al., 2016; Zhang & Rohe, 2018) .", "In the practically interesting case where the original adjacency matrix A is sparse, the regularized adjacency matrix is dense but has a so-called sparse + low rank structure, enabling the computation of the spectral embedding on very large graphs (Lara, 2019) .", "While (Zhang & Rohe, 2018) explains the effects of regularization through graph conductance and (Joseph et al., 2016) through eigenvector perturbation on the Stochastic Block Model, there is no simple interpretation of the benefits of graph regularization.", "In this paper, we show on a simple block model that the complete graph regularization forces the spectral embedding to separate the blocks in decreasing order of size, making the embedding less sensitive to noise or outliers in the data.", "Indeed, (Zhang & Rohe, 2018) identified that, without regularization, the cuts corresponding to the first dimensions of the spectral embedding tend to separate small sets of nodes, so-called dangling sets, loosely connected to the rest of the graph.", "Our work shows more explicitly that regularization forces the spectral embedding to focus on the largest clusters.", "Moreover, our analysis involves some explicit characterization of the eigenvalues, allowing us to quantify the impact of 
the regularization parameter.", "The rest of this paper is organized as follows.", "Section 2 presents block models and an important preliminary result about their aggregation.", "Section 3 presents the main result of the paper, about the regularization of block models, while Section 4 extends this result to bipartite graphs.", "Section 5 presents the experiments and Section 6 concludes the paper." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.20689654350280762, 0.22857142984867096, 0.1428571343421936, 0.7368420958518982, 0.11764705181121826, 0.15789473056793213, 0.0555555522441864, 0.09090908616781235, 0.03703703358769417, 0.0833333283662796, 0.0833333283662796, 0.1621621549129486, 0.17142856121063232, 0.1875, 0.07407406717538834, 0.15686273574829102, 0.1249999925494194, 0.4399999976158142, 0.1702127605676651, 0.5454545617103577, 0.1764705777168274, 0, 0, 0.1666666567325592, 0.07692307233810425 ]
H1l_0JBYwS
true
[ "Graph regularization forces spectral embedding to focus on the largest clusters, making the representation less sensitive to noise. " ]
[ "The exposure bias problem refers to the training-inference discrepancy caused by teacher forcing in maximum likelihood estimation (MLE) training for auto-regressive neural network language models (LM).", "It has been regarded as a central problem for natural language generation (NLG) model training.", "Although a lot of algorithms have been proposed to avoid teacher forcing and therefore to alleviate exposure bias, there is little work showing how serious the exposure bias problem is.", "In this work, we first identify the auto-recovery ability of MLE-trained LM, which casts doubt on the seriousness of exposure bias.", "We then develop a precise, quantifiable definition for exposure bias.", "However, according to our measurements in controlled experiments, there's only around 3% performance gain when the training-inference discrepancy is completely removed.", "Our results suggest the exposure bias problem could be much less serious than it is currently assumed to be.", "Language model (LM) is a central module for natural language generation (NLG) tasks (Young et al., 2017) such as machine translation (Wu et al., 2017) , dialogue response generation , image captioning (Lin et al., 2014) , etc.", "For decades, maximum likelihood estimation (MLE) has been the the most widely used objective for LM training.", "However, there is a popular belief in the natural language processing (NLP) community that standard MLE training will cause \"exposure bias\" and lead to a performance degradation during the test-time language generation.", "The exposure bias problem (Bengio et al., 2015; Ranzato et al., 2016) refers to the following discrepancy between MLE training and test-time generation for language models: During training, the language model predicts the next word conditioned on history words sampled from the groundtruth data distribution.", "And during generation, the model generates words conditioned on history sequences generated by the model itself.", "However, due to the exposure to real data during training, the language model is biased to only perform well on the ground-truth history distribution.", "As a result, during generation the errors will accumulate along the generated sequence, and the distribution generated by the model will be distorted.", "The forced exposure to ground-truth data during training is also referred to as \"teacher forcing\".", "Given its defintion, the exposure bias problem could rise in the general cases when the model needs to make a sequence of decisions or generations (e.g. 
music/pixel/speech generation (Lamb et al., 2016) ).", "In this work, we focus on the task of language generation, because the exposure bias problem was originally proposed in this field (Bengio et al., 2015) , and has since attracted huge research attention.", "In order to avoid teacher forcing, many training algorithms (Bengio et al., 2015; Lamb et al., 2016; Ranzato et al., 2016; Yu et al., 2016; Zhu et al., 2018; Lu et al., 2018; Lin et al., 2017; Guo et al., 2017; Rajeswar et al., 2017; Wiseman & Rush, 2016; Nie et al., 2019; Shi et al., 2018) have been proposed as alternatives to MLE training.", "Most of these works utilize techniques from generative adversarial networks (GAN) (Goodfellow et al., 2014) or reinforcement learning (RL) (Sutton & Barto, 1998) .", "In this paper, we refer to these algorithms as non-MLE methods or text GANs.", "Despite the huge research effort devoted to alleviating exposure bias, surprisingly, its existence or significance is much less studied.", "In particular, to the best of our knowledge, no existing work attempts to directly show the seriousness of exposure bias in an empirical or theoretical way.", "(Table 1 caption: Samples of an MLE-trained SOTA transformer LM when fed with different types of length-10 history prefix; to save space, the first 7 words of the random history are omitted.)", "This work is motivated by the belief that a good solution should be built upon a testable and quantifiable problem definition.", "In the rest of this paper, we first identify the \"self-recovery\" ability of popular LM models, which casts doubt on the original claim of exposure bias.", "We then develop a precise and quantifiable definition of exposure bias, and validate its seriousness in controlled experiments.", "In this section, we focus on answering the following question: \"Does the EB-M measurement correctly reflect the significance of exposure bias?\"", "In short, our answer is: not really.", "The problem is that the distortion of the marginal P^{l+1}_{M|M} is not only affected by the presumably existing exposure bias problem, but also by the mismatch of the history distribution P_M from P_D for W_{1:l}, which grows with the length of the history.", "Therefore, even if the measured EB-M is significantly larger than one, we cannot conclude that exposure bias causes serious deterioration.", "We provide an example to illustrate this argument: Example 1.", "Suppose L = 2, and V = {A, B}.", "P_D and P_M are crafted as follows:", "However, the only problem P_M has is the mismatch between the history distributions (P_M and P_D) for W_1.", "The next set of experiments also suggests that EB-M does not precisely reflect exposure bias.", "On the EMNLP-news data-set (specified in Appendix B), we compare EB-M measurements for several non-MLE training methods with the baseline MLE model.", "We include results for Scheduled Sampling (SS) (Bengio et al., 2015) , Cooperative Training (CoT) (Lu et al., 2018) , and Adversarial Ranking (RankGAN) (Lin et al., 2017) .", "We provide implementation details for non-MLE methods in Appendix C. 
Intuitively, these methods will cause the model to be biased to behave well with model samples as history, instead of data samples.", "Therefore, we expect EB-M measurements for non-MLE trained models to be smaller than for MLE trained models.", "However, Figure 1 shows that the measurements for different training frameworks are almost the same.", "We believe the reason is that the EB-M measurements are only reflecting the trivial mismatch between the history distributions.", "Is it possible that the original definition of exposure bias (Bengio et al., 2015; Ranzato et al., 2016) exactly refers to this mismatch between the model and data history distributions?", "However, note that this mismatch is inevitable for any imperfect model, and non-MLE training algorithms cannot solve it.", "We believe a better, more precise definition is needed to discriminate exposure bias from this trivial mismatch.", "Motivated by this view, we propose a second approach in the section below.", "Following the discussion in the last section, we wish our measurement to be independent of the quality of the history distribution.", "In light of that, we design a quantity to measure the model's conditional generation quality.", "Let P_H ∈ {P_M, P_D} denote the history distribution as in the MGD definition (5).", "With history length l fixed, we define the conditional generation deviation (CGD) with history distribution P_H for P_M using metric d as CGD(P_{M|H}, l, d) = E_{W_{1:l} ~ P_H}[ d(P_D(· | W_{1:l}), P_M(· | W_{1:l})) ],", "where we assume that P_D(· | W_{1:l}) is computable, and use it to measure the quality of the model's conditional distribution.", "For the choice of the distribution distance d, in addition to d_TV and d_JS, we introduce the greedy decoding divergence d_GD(P, Q) = 1{argmax_w P(w) ≠ argmax_w Q(w)},", "where 1 is the indicator function, and P, Q ∈ P. The distance d_GD reflects the model's accuracy during greedy decoding.", "Similar to MGD, exposure bias should imply a significant gap between CGD(P_{M|M}, l, d) and CGD(P_{M|D}, l, d).", "We again define the rate of exposure bias at history length l with metric d to be the ratio EB-C(l, d) = CGD(P_{M|M}, l, d) / CGD(P_{M|D}, l, d).", "For our definition of EB-C, a natural question is why we only focus on the generation distribution of the very next word.", "The reason is that we want to precisely measure how the error caused by the history part affects the generation part, by keeping them separate.", "If we measure the deviation of, for example, two sampled tokens, the definition will be confusing: the second sampled token will be affected not only by the accumulated error induced by the history (sampled from the model), but also by the first generated token as history.", "To get a better understanding of the intuition behind the definition of EB-C, we recommend that readers read Appendix A about our NMT experiment.", "Since CGD requires inference for the ground-truth data distribution P_D, we first consider experiments in a synthetic setting.", "In the text-GAN literature (Yu et al., 2016; Lin et al., 2017) , a randomly-initialized one-layer LSTM model with hidden dimension 32 is usually used as P_D in synthetic experiments (we denote this setting as M_32^random).", "However, the model is small-scale and does not reflect any structure existing in real-world text.", "To improve upon this approach, we take the MLE baseline model trained on EMNLP-news data (described in Appendix B) as P_D in this synthetic setting.", "We denote the data model (P_D) as M_512^news.", "We then train two LSTM LMs (P_M) with different capacities using samples from the data model, with the standard MLE objective.", "One is a one-layer 
LSTM with hidden width of 512 (denoted as LSTM-512), and the other has hidden width of 32 (denoted as LSTM-32).", "We train P_M for 100 epochs using the Adam optimizer with learning rate 0.001.", "In each epoch, 250k sentences (the same size as the original EMNLP-news data) of length L = 50 are sampled from M_512^news as training data to avoid over-fitting.", "We show perplexity (PPL) results of the trained models in Appendix F. Finally, EB-C is calculated using 100k samples from P_M and P_D.", "In Figure 2 , we show EB-C measurements with different metrics d_m, and the two models give similar results.", "It is shown that EB-C has a steady but slow increasing trend as the history length increases.", "This is expected as a consequence of exposure bias, because P_M deviates farther from P_D as the history length increases.", "However, the average value of EB-C is less than 1.03 (the largest average value is from d_JS for the LSTM-512 experiment), meaning that the gap between CGD(P_{M|M}, l, d) and CGD(P_{M|D}, l, d) is not large.", "Also, note that in most NLG applications (such as machine translation or image captioning), the generated sequence typically has a short length (less than 20).", "In that range of history lengths, the EB-C measurements indicate that exposure bias only has minimal influence.", "In Appendix E, we repeat the experiment for a transformer LM (Dai et al., 2019) , and get very similar EB-C measurements.", "These measurements imply a striking conclusion: (Informal) Even if all the bad effects from exposure bias for MLE LM training are removed, the relative performance gain is at most 3%.", "If the sequence length is not very long, the gain is less than 1%.", "To dive deeper into the cause of the gap in CGD, we experiment with corrupted versions of P_M as the history distribution.", "We first specify a corruption rate c ∈ [0, 1], and randomly substitute each word in a history sample from P_M with a \"noise\" word drawn uniformly from the vocabulary with probability c.", "Consequently, a larger c will cause the history distribution to deviate farther from the ground-truth P_D.", "In Figure 3 , we show CGD measurements versus the corrupted history distribution P_M^corrupt.", "Large gaps are observed between CGD(P_{M|M^corrupt}) and CGD(P_{M|D}).", "Therefore, the small gap between CGD(P_{M|M}) and CGD(P_{M|D}) in Figure 2 results from the small deviation between the history distributions P_M and P_D.", "In other words, P_M has learned a \"good enough\" distribution that is able to keep it in the well-behaving region during sampling.", "With these observations, we conclude that, in the synthetic setting considered, exposure bias does exist, but is much less serious than it is presumed to be.", "Although there exists a mismatch between the history distributions P_M and P_D, the mismatch is still in the model's \"comfortable zone\".", "In other words, the LSTM LM is more robust than exposure bias claims it to be.", "To concretize this argument, we provide an example LM and show that MLE training is unlikely to generate models with a large EB-C value.", "Example 2.", "Again suppose L = 2 and V = {A, B}, and the ground-truth data distribution is uniform on {AA, AB, BB, BA}.", "P_M is crafted as follows:", "
Note that the model behaves badly when W_1 = A, which is of high probability during sampling.", "In this work, we first identify the self-recovery ability of MLE-trained LMs, which casts doubt on the seriousness of exposure bias, a problem that has been regarded as central to MLE training by the LM community.", "We then explore two intuitive approaches to quantify the significance of exposure bias for LM training.", "The first quantification, EB-M, relies on the marginal generation distribution and reveals some vagueness in the original definition of exposure bias.", "We argue that we should focus on the model's generation performance in terms of its conditional distribution and propose a second quantification, EB-C, which we regard as the precise definition of exposure bias.", "We design an evaluation of EB-C at different history lengths with real humans (turkers from AMT) as the data model, for a SOTA transformer LM.", "It is shown that removing the training-testing discrepancy only gives around a 2% performance gain.", "Our synthetic experiments also give very similar measurements.", "By analyzing EB-C measurements with perturbed history samples, we hypothesise that although the mismatch between the data and model distributions for the history prefix exists, it is still in the model's \"comfortable zone\".", "With these results, we claim that, contrary to the popular belief, exposure bias is only a minor problem in MLE-based LM training.", "To wrap up, we discuss the fundamental question \"Is MLE training really biased?\" from the perspective of objective functions.", "Note that the MLE objective (1) can be re-written, up to a constant independent of θ, as minimizing D_KL(P_D || P_M) over θ,", "where D_KL denotes the Kullback-Leibler divergence, and θ denotes the trainable parameters in P_M.", "Therefore, MLE training is minimizing the divergence of P_M, which is exactly the model's sampling distribution, from P_D.", "While it is true that the training is \"exposed\" to data samples, we cannot simply deduce that the objective is \"biased\".", "We want to end our discussion with two remarks.", "First, the proposed quantification approaches should not be used as the only metrics for NLG.", "For example, a position-aware uni-gram LM, which generates words independently of the previous context, has no exposure bias problem and can pass our test easily.", "Second, the intention of this work is not to discourage researchers from exploring non-MLE training algorithms for LM.", "It is completely possible that a training objective different from the MLE objective (1) can lead to better generation performance (Lu et al., 2018; Huszár, 2015) .", "However, though non-MLE algorithms avoid teacher forcing, these algorithms (using GAN or RL, for example) are usually less stable and more difficult to tune.", "Given that the quantified measurement of exposure bias is insignificant, we think it should be questioned whether adopting these techniques to avoid exposure bias is a wise trade-off." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.21739129722118378, 0.11428570747375488, 0.21276594698429108, 0.10256409645080566, 0.2666666507720947, 0.09756097197532654, 0.6842105388641357, 0.07999999821186066, 0.1666666567325592, 0.20408162474632263, 0.20000000298023224, 0, 0.14999999105930328, 0.052631575614213943, 0.23529411852359772, 0.15094339847564697, 0.11538460850715637, 0.10344827175140381, 0, 0.05882352590560913, 0.25641024112701416, 0.0833333283662796, 0, 0.22857142984867096, 0.14999999105930328, 0.1428571343421936, 0.10810810327529907, 0.051282044500112534, 0.07407406717538834, 0.17543859779834747, 0.2926829159259796, 0.13333332538604736, 0, 0, 0.10256409645080566, 0.17142856121063232, 0.1463414579629898, 0.09302324801683426, 0.1666666567325592, 0.29411762952804565, 0.1764705777168274, 0.1666666567325592, 0.2083333283662796, 0.25641024112701416, 0.2702702581882477, 0, 0.10810810327529907, 0.05714285373687744, 0, 0.0476190410554409, 0.1818181723356247, 0.04347825422883034, 0.0476190410554409, 0.1538461446762085, 0.2222222238779068, 0.04999999329447746, 0.09756097197532654, 0.07407406717538834, 0.0476190410554409, 0.051282044500112534, 0.0363636314868927, 0.05714285373687744, 0.045454539358615875, 0.0624999962747097, 0.1463414579629898, 0.052631575614213943, 0.1111111044883728, 0.08510638028383255, 0.13333332538604736, 0.04999999329447746, 0.1111111044883728, 0.10256409645080566, 0.18867923319339752, 0.09090908616781235, 0.17142856121063232, 0.0952380895614624, 0.2800000011920929, 0.1875, 0, 0.08163265138864517, 0.05714285373687744, 0.05714285373687744, 0, 0, 0.1860465109348297, 0.4444444477558136, 0.051282044500112534, 0.4444444477558136, 0.31111109256744385, 0.04999999329447746, 0.07692307233810425, 0.10526315122842789, 0.19230768084526062, 0.3888888955116272, 0.09999999403953552, 0.19607841968536377, 0.13636362552642822, 0.11428570747375488, 0, 0.16326530277729034, 0.3255814015865326, 0.10526315122842789, 0.20000000298023224, 0, 0.1621621549129486, 0.21052631735801697, 0.13793103396892548, 0.11764705181121826, 0.09090908616781235, 0.2631579041481018, 0.20000000298023224, 0.060606054961681366, 0.1395348757505417, 0.31111109256744385 ]
rJg2fTNtwr
true
[ "We show that exposure bias could be much less serious than it is currently assumed to be for MLE LM training." ]
[ "The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks.", "Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents on referential communication games.", "We extend previous work, in which agents were trained in symbolic environments, by developing agents which are able to learn from raw pixel data, a more challenging and realistic input representation.", "We find that the degree of structure found in the input data affects the nature of the emerged protocols, and thereby corroborate the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured. ", "The study of emergent communication is important for two related problems in language development, both human and artificial: language evolution, the development of communication protocols from scratch BID27 ; and language acquisition, the ability of an embodied agent to learn an existing language.", "In this paper we focus on the problem of how environmental or pre-linguistic conditions affect the nature of the communication protocol that an agent learns.", "The increasing realism and complexity of environments being used for grounded language learning BID8 BID18 present an opportunity to analyse these effects in detail.In line with previous work on emergent communication, we are strongly motivated by the view that language derives meaning from its use BID39 BID37 .", "This perspective especially motivates the study of language emergence in cases where co-operative agents try to achieve shared goals in game scenarios BID34 BID6 BID26 , and is related to the study of multi-agent and self-play methods that have found great success in other areas of machine learning BID1 BID30 .", "Here we focus on simple referential games, in which one agent must communicate to another a target object in the agent's environment.One of the most important properties of natural language is compositionality.", "Smaller building blocks (e.g. words, morphemes) are used to generate unbounded numbers of more complex forms (e.g. sentences, multi-word expressions), with the meaning of the larger form being determined by the meanings of its parts and how they are put together BID14 .", "Compositionality is an advantage in any communication protocol as it allows in principle infinite expression through a finite dictionary and a finite set of combination rules.", "In emergent communication research, previous work has shown that agents can produce (somewhat) compositional protocols when engaging in language games BID34 .", "However, the computational agents were typically situated in artificial worlds containing just a handful of objects, represented as disentangled, structured, and sometimes even atomic symbols, e.g. attribute-based or one-hot vectors BID2 BID5 BID13 BID0 BID26 .", "However, humans receive raw sensorimotor hank you! h a n k y o u ! 
rather than symbolic input, and little work to date has tested whether these findings carry over when agents are situated in less idealized worlds that bear more similarity to the kind of entangled and noisy environments to which humans are typically exposed.", "We presented a series of studies investigating the properties of protocols emerging when reinforcement learning agents are trained end-to-end on referential communication games.", "We found that when agents are presented with disentangled input data in the form of attribute vectors, this inherent compositional structure is successfully retained in the output.", "Moreover, we showed that communication can also be achieved in cases where agents are presented with raw pixel data, a type of input that aligns better with the raw sensorimotor data that humans are exposed to.", "At the same time, we found that their ability to form compositional protocols in these cases is limited by how well they can pull apart the objects' factors of variation.", "Altogether, we were able to successfully scale up traditional research from the language evolution literature on emergent communication tasks to the contemporary deep learning framework, thus opening avenues to more realistic, and large-scale, computational simulations of language emergence with complex image stimuli." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3589743673801422, 0.05405404791235924, 0.09302324801683426, 0.16326530277729034, 0.3265306055545807, 0.1621621549129486, 0.22580644488334656, 0.17543859779834747, 0.2222222238779068, 0.1538461446762085, 0.15789473056793213, 0.2222222238779068, 0.11764705181121826, 0.1492537260055542, 0.2702702581882477, 0.19999998807907104, 0.260869562625885, 0.25, 0.2222222238779068 ]
HJGv1Z-AW
true
[ "A controlled study of the role of environments with respect to properties in emergent communication protocols." ]
[ "For understanding generic documents, information like font sizes, column layout, and generally the positioning of words may carry semantic information that is crucial for solving a downstream document intelligence task.", "Our novel BERTgrid, which is based on Chargrid by Katti et al. (2018), represents a document as a grid of contextualized word piece embedding vectors, thereby making its spatial structure and semantics accessible to the processing neural network.", "The contextualized embedding vectors are retrieved from a BERT language model.", "We use BERTgrid in combination with a fully convolutional network on a semantic instance segmentation task for extracting fields from invoices.", "We demonstrate its performance on tabulated line item and document header field extraction.", "Documents often come in a variety of layouts and formats.", "For instance, a single document may contain isolated text boxes, tabular arrangements, multiple columns, and different font sizes.", "This layout can carry crucial semantic information.", "In classical natural language processing (NLP), however, the layout information is completely discarded as the document text is simply a sequence of words.", "Without access to the layout, a downstream task such as extraction of tabulated data can become much harder -and in some cases impossible to solve -since the necessary serialization may lead to severe information loss.", "Instead of working on the textual level, it is possible to directly apply methods from computer vision (CV) (e.g. Ren et al. (2015) ) to work on the raw document pixel level which naturally retains the two-dimensional (2D) document structure.", "However, this is impractical, as a machine learning model would first need to learn textual information from the raw pixel data followed by the semantics.", "Recent approaches have designed a hybrid between NLP and CV methods for document intelligence: Chargrid (Katti et al. (2018) ), followed more recently by CUTIE (Zhao et al. (2019) ), construct a 2D grid of characters or words from a document and feed it into a neural model, thereby preserving the spatial arrangement of the document.", "The symbols in the original document are embedded in some vector space, yielding a rank-3 tensor (width, height, embedding).", "Both papers report significant benefits of using such a grid approach over purely sequential 1D input representations, especially for semantically understanding tabulated or otherwise spatially arranged text like line items.", "With our contribution BERTgrid, we incorporate contextualized embedding into the grid document representation.", "More specifically, we use a BERT language model (Devlin et al. (2019) ) pre-trained on a large pool of unlabeled documents from the target domain to compute contextualized feature vectors for every word piece in a document.", "We demonstrate the effectiveness of BERTgrid on an invoice information extraction task from document tables and headers.", "We compare our results to Chargrid and find significant improvements from 61.76% ± 0.72 to 65.48% ± 0.58 on an invoice dataset previously described in Katti et al. (2018) .", "Tab.", "1 shows the results in terms of the evaluation measure for different input representations.", "All results are averaged over four randomly initialized training runs.", "Katti et al. (2018); Zhao et al. 
(2019) have shown that grid-based approaches like [Chargrid] or [Wordgrid] outperform conventional sequential models as well as purely image-based methods, so we use [Chargrid] as our baseline, with 61.76% ± 0.72.", "We assume the performance of BERTgrid stems from", "(i) embedding on the word-piece level and", "(ii) contextualization.", "Rather than learning to represent words first, the network directly gets access to semantically meaningful word(-piece)-level information.", "For instance, words such as avenue, street, and drive are very different when embedded on the character level, but will be mapped to approximately the same embedding vector.", "We observe that both [C+Wordgrid] and [C+BERTgrid] converge faster than [Chargrid] which supports this statement.", "During language model pre-training on the large, unlabeled dataset, knowledge about the language of invoices is distilled into the BERT model parameters.", "Compared to simpler, non-contextualized embedding methods such as word2vec, it has sufficient capacity to capture complex dependencies.", "This distilled knowledge is made accessible via the BERTgrid representation and eases the downstream task significantly.", "We acknowledge the BERT model has only access to S, not D. Future work could use 2D positional encodings to preserve the layout structure also during language model pre-training and inference." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09999999403953552, 0.125, 0.27272728085517883, 0.12903225421905518, 0.0833333283662796, 0.0952380895614624, 0.06896550953388214, 0, 0.0624999962747097, 0, 0.04255318641662598, 0, 0.1071428507566452, 0.06896550953388214, 0.04878048226237297, 0.3333333432674408, 0.21739129722118378, 0.0714285671710968, 0, 0.0833333283662796, 0, 0.043478257954120636, 0, 0.1111111044883728, 0, 0.052631575614213943, 0, 0, 0.07407406717538834, 0.07692307233810425, 0.05128204822540283 ]
H1gsGaq9US
true
[ "Grid-based document representation with contextualized embedding vectors for documents with 2D layouts" ]
[ "Deep reinforcement learning (RL) policies are known to be vulnerable to adversarial perturbations to their observations, similar to adversarial examples for classifiers.", "However, an attacker is not usually able to directly modify another agent's observations.", "This might lead one to wonder: is it possible to attack an RL agent simply by choosing an adversarial policy acting in a multi-agent environment so as to create natural observations that are adversarial?", "We demonstrate the existence of adversarial policies in zero-sum games between simulated humanoid robots with proprioceptive observations, against state-of-the-art victims trained via self-play to be robust to opponents.", "The adversarial policies reliably win against the victims but generate seemingly random and uncoordinated behavior.", "We find that these policies are more successful in high-dimensional environments, and induce substantially different activations in the victim policy network than when the victim plays against a normal opponent.", "Videos are available at https://attackingrl.github.io.", "The discovery of adversarial examples for image classifiers prompted a new field of research into adversarial attacks and defenses (Szegedy et al., 2014) .", "Recent work has shown that deep RL policies are also vulnerable to adversarial perturbations of image observations Kos and Song, 2017) .", "However, real-world RL agents inhabit natural environments populated by other agents, including humans, who can only modify observations through their actions.", "We explore whether it's possible to attack a victim policy by building an adversarial policy that takes actions in a shared environment, inducing natural observations which have adversarial effects on the victim.", "RL has been applied in settings as varied as autonomous driving (Dosovitskiy et al., 2017) , negotiation (Lewis et al., 2017) and automated trading (Noonan, 2017) .", "In domains such as these, an attacker cannot usually directly modify the victim policy's input.", "For example, in autonomous driving pedestrians and other drivers can take actions in the world that affect the camera image, but only in a physically realistic fashion.", "They cannot add noise to arbitrary pixels, or make a building disappear.", "Similarly, in financial trading an attacker can send orders to an exchange which will appear in the victim's market data feed, but the attacker cannot modify observations of a third party's orders.", "Contributions.", "Our paper makes three key contributions.", "First, we have proposed a novel threat model of natural adversarial observations produced by an adversarial policy taking actions in a shared environment.", "Second, we demonstrate that adversarial policies exist in a range of zero-sum simulated robotics games against state-of-the-art victims trained via self-play to be robust to adversaries.", "Third, we verify the adversarial policies win by confusing the victim, not by learning a generally strong policy.", "Specifically, we find the adversary induces highly off-distribution activations in the victim, and that victim performance increases when it is blind to the adversary's position.", "We repeated the hyperparameter sweep for fine-tuning victim policies for the defence experiments, but obtained similar results.", "For simplicity, we therefore chose to use the same hyperparameters throughout.", "We used a mixture of in-house and cloud infrastructure to perform these experiments.", "It takes around 8 hours to train an adversary for a single 
victim using 4 cores of an Intel Xeon Platinum 8000 (Skylake) processor." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.31578946113586426, 0.12121211737394333, 0.4313725531101227, 0.1702127605676651, 0.11428570747375488, 0.12765957415103912, 0.07407406717538834, 0.0476190410554409, 0.3414634168148041, 0.39024388790130615, 0.2916666567325592, 0.09756097197532654, 0.05714285373687744, 0.1818181723356247, 0.0624999962747097, 0.12765957415103912, 0, 0.2926829159259796, 0.2222222238779068, 0.1666666567325592, 0.09302324801683426, 0.05714285373687744, 0.06451612710952759, 0.060606054961681366, 0.04651162400841713 ]
HJgEMpVFwB
true
[ "Deep RL policies can be attacked by other agents taking actions so as to create natural observations that are adversarial." ]
[ "GloVe and Skip-gram word embedding methods learn word vectors by decomposing a denoised matrix of word co-occurrences into a product of low-rank matrices.", "In this work, we propose an iterative algorithm for computing word vectors based on modeling word co-occurrence matrices with Generalized Low Rank Models.", "Our algorithm generalizes both Skip-gram and GloVe as well as giving rise to other embedding methods based on the specified co-occurrence matrix, distribution of co-occurences, and the number of iterations in the iterative algorithm.", "For example, using a Tweedie distribution with one iteration results in GloVe and using a Multinomial distribution with full-convergence mode results in Skip-gram.", "Experimental results demonstrate that multiple iterations of our algorithm improves results over the GloVe method on the Google word analogy similarity task.", "Word embeddings are low dimensional vector representations of words or phrases.", "They are applied to word analogy tasks and used as feature vectors in numerous tasks within natural language processing, computational linguistics, and machine learning.", "They are constructed by various methods which rely on the distributional hypothesis popularized by Firth: \"words are characterized by the company they keep\" BID9 .", "Two seminal methodological approaches to finding word embeddings are Skip-gram [Mikolov et al., 2013a] and GloVe [Pennington et al., 2014] .", "Both methods input a corpus D, process it into a word co-occurence matrix X, then output word vectors with some dimension d.Skip-gram processes a corpus with w words into a count co-occurence matrix X ∈ R w×w , where x ij is the number of times word w i appears in the same context as the word w j .", "Here, two words being in the same context means that they're within l c tokens of each other.", "Define this co-occurence matrix to be the count co-occurence matrix.", "Next, Skip-gram [Pennington et al., 2014 where u u u T i is the i th row of U , then defines the word vectors to be the rows ofÛ .GloVe", "processes a corpus with w words into a harmonic co-occurence matrix X ∈ R w×w where x ij is the harmonic sum of the number of tokens between words w i and w j over each co-occurrence. That", "is, x ij = p1<p2,|p1−p2|≤lc,D(p1)=wi,D(p2)=wj h(x ij ) u u u DISPLAYFORM0 where a i and b j are bias terms, h(x ij ) = (min{x ij , x max }) .75 is the weight, and x max is some prespecified cutoff. GloVe", "then defines the estimated word vectors to be the rows of 1 2Û + 1 2V . In both", "Skip-gram and GloVe, a matrix of co-occurences X is introduced by processing the corpus, and an objective function is introduced to find a low rank factorization related to the co-occurences X. In this paper, we derive the objective functions from a model-based perspective. We introduce", "an iterative algorithm, and show that problem (1) results from running the iterative algorithm on full-convergence mode for a Multinomial model and problem (2) is one step of the iterative algorithm for a Tweedie model. 
This algorithm", "additionally allows us to introduce methods to \"fill in the gaps\" between Skip-gram and GloVe and to introduce altogether new methods for finding word vectors.", "We present a general model-based methodology for finding word vectors from a corpus.", "This methodology involves choosing the distribution of a chosen co-occurrence matrix to be an exponential dispersion family and choosing the number of iterations to run our algorithm.In Table 1 , we see that our methodology unifies the dominant word embedding methods available in the literature and provides new and improved methods.", "We introduce an extension of Skip-gram that is stopped before full-convergence analagously to GloVe and an extension to GloVe beyond one iteration.", "Experimental results on a small corpus demonstrate our method improves upon GloVe and Skip-gram on the Google word analogy similarity task.", "It is our hope that this methodology can lead to the development of better, more statistically sound, word embeddings and consequently improve results on many other downstream tasks." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.21621620655059814, 0.3499999940395355, 0.260869562625885, 0.11428570747375488, 0.15789473056793213, 0.06896550953388214, 0.09999999403953552, 0.052631575614213943, 0.10810810327529907, 0.0634920597076416, 0, 0, 0.045454539358615875, 0.08163265138864517, 0.0833333283662796, 0.05882352590560913, 0.19230768084526062, 0.2666666507720947, 0.15789473056793213, 0.3333333432674408, 0.16949151456356049, 0.1111111044883728, 0.21052631735801697, 0.1304347813129425 ]
rJgjYyaio7
true
[ "We present a novel iterative algorithm based on generalized low rank models for computing and interpreting word embedding models." ]
[ "Deterministic models are approximations of reality that are often easier to build and interpret than stochastic alternatives. \n", "Unfortunately, as nature is capricious, observational data can never be fully explained by deterministic models in practice. \n", "Observation and process noise need to be added to adapt deterministic models to behave stochastically, such that they are capable of explaining and extrapolating from noisy data.\n", "Adding process noise to deterministic simulators can induce a failure in the simulator resulting in no return value for certain inputs -- a property we describe as ``brittle.''\n", "We investigate and address the wasted computation that arises from these failures, and the effect of such failures on downstream inference tasks.\n", "We show that performing inference in this space can be viewed as rejection sampling, and train a conditional normalizing flow as a proposal over noise values such that there is a low probability that the simulator crashes, increasing computational efficiency and inference fidelity for a fixed sample budget when used as the proposal in an approximate inference algorithm.", "In order to compensate for epistemic uncertainty due to modelling approximations and unmodeled aleatoric uncertainty, deterministic simulators are often \"converted\" to \"stochastic\" simulators by randomly perturbing the state at each time step.", "In practice, models adapted in this way often provide better inferences (Møller et al., 2011; Saarinen et al., 2008; Lv et al., 2008; Pimblott and LaVerne, 1990; Renard et al., 2013) .", "State-independent white noise with heuristically tuned variance is often used to perturb the state (Adhikari and Agrawal, 2013; Brockwell and Davis, 2016; Fox, 1997; Reddy and Clinton, 2016; Du and Sam, 2006; Allen, 2017; Mbalawata et al., 2013) .", "However, naively adding noise to the state will, in many applications, render the perturbed input state \"invalid,\" inducing failure (Razavi et al., 2019; Lucas et al., 2013; Sheikholeslami et al., 2019) .", "These failures waste computational resources and reduce sample diversity, worsening inference performance.", "Examples of failure modes include ordinary differential equation (ODE) solvers not converging to the required tolerance in the allocated time, or, the state crossing into an unhandled configuration, such as solid bodies overlapping.", "Establishing the cause of failure is non-trivial and hence, the simulation artifact can be sensitive to seemingly inconsequential alterations to the state -a property we describe as \"brittle.\"", "The principal contribution of this paper is a technique for minimizing this failure rate.", "We proceed by first framing sampling from brittle simulators as rejection sampling.", "We then eliminate rejections by learning the state-dependent density over perturbations that do not induce failure, using conditional autoregressive flows (Papamakarios et al., 2017) .", "Doing so renders the joint distribution unchanged and retains the interpretability afforded by the simulator, but improves sample efficiency.", "We show that using the learned proposal increases the fidelity of the inference results attainable on a range of examples.", "In this paper we have tackled reducing simulator failures caused by naively perturbing the input state.", "We achieve this by defining these simulators as rejection samplers and learning a conditional autoregressive flow to estimate the state-dependent proposal distribution conditioned on acceptance.", "We show 
that using this learned proposal reduces the variance of inference results when used as the proposal in a subsequent approximate inference scheme.", "This work has readily transferable practical contributions in the scientific community where naively modified simulation platforms are widely deployed." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.11764705181121826, 0, 0.0952380895614624, 0.1818181723356247, 0.15789473056793213, 0.22580644488334656, 0.043478257954120636, 0, 0.038461532443761826, 0.04651162400841713, 0.13793103396892548, 0.0416666604578495, 0.04651162400841713, 0.06666666269302368, 0.0714285671710968, 0.3333333432674408, 0, 0.23529411852359772, 0.060606054961681366, 0.2857142686843872, 0.21052631735801697, 0 ]
SJecKyhEKr
true
[ "We learn a conditional autoregressive flow to propose perturbations that don't induce simulator failure, improving inference performance." ]
[ "Multi-hop question answering requires models to gather information from different parts of a text to answer a question.", "Most current approaches learn to address this task in an end-to-end way with neural networks, without maintaining an explicit representation of the reasoning process.", "We propose a method to extract a discrete reasoning chain over the text, which consists of a series of sentences leading to the answer.", "We then feed the extracted chains to a BERT-based QA model to do final answer prediction.", "Critically, we do not rely on gold annotated chains or ``supporting facts:'' at training time, we derive pseudogold reasoning chains using heuristics based on named entity recognition and coreference resolution.", "Nor do we rely on these annotations at test time, as our model learns to extract chains from raw text alone. ", "We test our approach on two recently proposed large multi-hop question answering datasets: WikiHop and HotpotQA, and achieve state-of-art performance on WikiHop and strong performance on HotpotQA.", "Our analysis shows the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is dealing with each candidate sentence in a context-aware way.", "Furthermore, human evaluation shows that our extracted chains allow humans to give answers with high confidence, indicating that these are a strong intermediate abstraction for this task." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.13793103396892548, 0.1621621549129486, 0.3030303120613098, 0.06896550953388214, 0.04878048226237297, 0, 0.17142856121063232, 0.09302324801683426, 0.09999999403953552 ]
ByxDJyHYPS
false
[ "We improve answering of questions that require multi-hop reasoning extracting an intermediate chain of sentences." ]
[ "Normalizing constant (also called partition function, Bayesian evidence, or marginal likelihood) is one of the central goals of Bayesian inference, yet most of the existing methods are both expensive and inaccurate.", "Here we develop a new approach, starting from posterior samples obtained with a standard Markov Chain Monte Carlo (MCMC).", "We apply a novel Normalizing Flow (NF) approach to obtain an analytic density estimator from these samples, followed by Optimal Bridge Sampling (OBS) to obtain the normalizing constant.", "We compare our method which we call Gaussianized Bridge Sampling (GBS) to existing methods such as Nested Sampling (NS) and Annealed Importance Sampling (AIS) on several examples, showing our method is both significantly faster and substantially more accurate than these methods, and comes with a reliable error estimation.", "Normalizing constant, also called partition function, Bayesian evidence, or marginal likelihood, is the central object of Bayesian methodology.", "Despite its importance, existing methods are both inaccurate and slow, and may require specialized tuning.", "One such method is Annealed Importance Sampling (AIS), and its alternative, Reverse AIS (RAIS), which can give stochastic lower and upper bounds to the normalizing constant, bracketing the true value (Neal, 2001; Grosse et al., 2015) .", "However, as the tempered distribution may vary substantially with temperature, it can be expensive to obtain good samples at each temperature, which can lead to poor estimates (Murray et al., 2006) .", "Nested sampling (NS) is another popular alternative (Skilling, 2004; Handley et al., 2015) , which can be significantly more expensive than standard sampling methods in higher dimensions but, as we show, can also lead to very inaccurate estimates.", "Moreover, there is no simple way to know how accurate the estimate is.", "Here we develop a new approach to the problem, combining Normalizing Flow (NF) density estimators with Optimal Bridge Sampling (OBS).", "In a typical Bayesian inference application, we first obtain posterior samples using one of the standard Markov Chain Monte Carlo (MCMC) methods.", "In our approach we use these samples to derive the normalizing constant with relatively few additional likelihood evaluations required, making the additional cost of normalizing constant estimation small compared to posterior sampling.", "All of our calculations are run on standard CPU platforms, and will be available in the BayesFast Python package.", "We present a new method to estimate the normalizing constant (Bayesian evidence) in the context of Bayesian analysis.", "Our starting point are the samples from the posterior using standard MCMC based methods, and we assume that these have converged to the correct probability distribution.", "In our approach we combine OBS with INT, a novel NF based density estimator, showing on several high dimensional examples that our method outperforms other approaches in terms of accuracy and computational cost, and provides a reliable error estimate." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.23728813230991364, 0.11999999731779099, 0.3103448152542114, 0.3243243098258972, 0.12244897335767746, 0.1304347813129425, 0.17910447716712952, 0.06557376682758331, 0.14705881476402283, 0.09090908616781235, 0.307692289352417, 0.14814814925193787, 0.1355932205915451, 0.11764705181121826, 0.40816324949264526, 0.1071428507566452, 0.2647058665752411 ]
SkxKFJ2NtS
true
[ "We develop a new method for normalization constant (Bayesian evidence) estimation using Optimal Bridge Sampling and a novel Normalizing Flow, which is shown to outperform existing methods in terms of accuracy and computational time." ]
[ "We present a large-scale empirical study of catastrophic forgetting (CF) in modern Deep Neural Network (DNN) models that perform sequential (or: incremental) learning.\n", "A new experimental protocol is proposed that takes into account typical constraints encountered in application scenarios.\n", "As the investigation is empirical, we evaluate CF behavior on the hitherto largest number of visual classification datasets, from each of which we construct a representative number of Sequential Learning Tasks (SLTs) in close alignment to previous works on CF.\n", "Our results clearly indicate that there is no model that avoids CF for all investigated datasets and SLTs under application conditions.", "We conclude with a discussion of potential solutions and workarounds to CF, notably for the EWC and IMM models.", "This article is in the context of sequential or incremental learning in Deep Neural Networks (DNNs).", "Essentially, this means that a DNN is not trained once, on a single task D, but successively on two or more sub-tasks D 1 , . . . , D n , one after another.", "Learning tasks of this type, which we term Sequential Learning Tasks (SLTs) (see FIG0 ), are potentially very common in real-world applications.", "They occur wherever DNNs need to update their capabilities on-site and over time: gesture recognition, network traffic analysis, or face and object recognition in mobile robots.", "In such scenarios, neural networks have long been known to suffer from a problem termed \"catastrophic forgetting\"(CF) (e.g., BID7 ) which denotes the abrupt and near-complete loss of knowledge from previous subtasks D 1 , . . . , D k−1 after only a few training iterations on the current sub-task D k (see FIG0 compared to FIG0 ).", "We focus on SLTs from the visual domain with two sub-tasks each, as DNNs show pronounced CF behavior even when only two sub-tasks are involved.", "The sequential learning tasks used in this study only have two sub-tasks: D1 and D2.", "During training (white background) and re-training (gray background), test accuracy is measured on D1 (blue, ), D2 (green, ) and D1 ∪ D2 (red, ).", "The blue curve allows to determine the presence of CF by simple visual inspection: if there is significant degradation w.r.t. 
the red curve, then CF has occurred.", "DISPLAYFORM0", "The field of incremental learning is large, e.g., BID20 and BID8 .", "Recent systematic comparisons between different DNN approaches to avoid CF are performed in, e.g., BID23 or .", "Principal recent approaches to avoid CF include ensemble methods BID22 BID6 , dual-memory systems BID24 BID11 BID21 BID9 and regularization approaches.", "Whereas BID10 suggest Dropout for alleviating CF, the EWC method BID14 proposes to add a term to the energy function that protects weights that are important for the previous sub-task (s) .", "Importance is determined by approximating the Fisher information matrix of the DNN.", "A related approach is pursued by the Incremental Moment Matching technique (IMM) (see ), where weights from DNNs trained on a current and a past sub-tasks are \"merged\" using the Fisher information matrix.", "Other regularization-oriented approaches are proposed in BID2 ; BID25 and BID13 which focus on enforcing sparsity of neural activities by lateral interactions within a layer.Number of tested datasets In general, most methods referenced here are evaluated only on a few datasets, usually on MNIST BID16 and various derivations thereof (permutation, rotation, class separation).", "Some studies make limited use of CIFAR10, SVHN, the Amazon sentiment analysis problem, and non-visual problems such as data from Q-learning of Atari games.", "A largescale evaluation on a huge number of qualitatively different datasets is still missing 1 .", "Model selection and prescience Model selection (i.e., selecting DNN topology and hyperparameters) is addressed in some approaches BID10 but on the basis of a \"prescient\" evaluation where the best model is selected after all tasks have been processed, an approach which is replicated in BID14 .", "This amounts to a knowledge of future sub-tasks which is problematic in applications.", "Most approaches ignore model selection BID25 BID2 BID13 , and thus implicitly violate causality.", "Storage of data from previous sub-tasks From a technical point of view, DNNs can be retrained without storing training data from previous sub-tasks, which is done in BID10 and BID25 .", "For regularization approaches, however, there are regularization parameters that control the retention of previous knowledge, and thus must be chosen with care.", "In BID14 , this is λ, whereas two such quantities occur in : the \"balancing\" parameter α and the regularization parameter λ for L2-transfer.", "The only study where regularization parameters are obtained through cross-validation (which is avoided in other studies) is BID2 (for λ SN I and λ Ω ) but this requires to store all previous training data.This review shows that enormous progress has been made, but that there are shortcomings tied to applied scenarios which need to be addressed.", "We will formalize this in Sec. 1.2 and propose an evaluation strategy that takes these formal constraints into account when testing CF in DNNs.", "The original contributions of our work can be summarized as follows:• We propose a training and evaluation paradigm for incremental learning in DNNs that enforces typical application constraints, see Sec. 1.2.", "The importance of such an applicationoriented paradigm is underlined by the fact that taking application constraints into account leads to radically different conclusions about CF than those obtained by other recent studies on CF (see Sec. 
1.1).•", "We investigate the incremental learning capacity of various DNN approaches (Dropout, LWTA, EWC and IMM) using the largest number of qualitatively different classification datasets so far described. We", "find that all investigated models are afflicted by catastrophic forgetting, or else in violation of application constraints and discuss potential workarounds.• We", "establish that the \"permuted\" type of SLTs (e.g., \"permuted MNIST\") should be used with caution when testing for CF.• We", "do not propose a method for avoiding CF in this article. This", "is because avoiding CF requires a consensus on how to actually measure this effect: our novel contribution is a proposal how to do just that.", "The primary conclusion from the results in Sec. 4 is that CF still represents a major problem when training DNNs.", "This is particularly true if DNN training happens under application constraints as outlined in Sec. 1.2.", "Some of these constraints may be relaxed depending on the concrete application: if some prior knowledge about future sub-task exists, it can be used to simplify model selection and improve results.", "If sufficient resources are available, a subset of previously seen data may be kept in memory and thus allow a \"best\" type evaluation/stopping criterion for re-training, see Alg.", "1.Our evaluation approach is similar to , and we adopt some measures for CF proposed there.", "A difference is the setting of up to 10 sub-tasks, whereas we consider only two of them since we focus less on the degree but mainly on presence or absence of CF.", "Although comparable both in the number of tested models and benchmarks, BID23 uses a different evaluation methodology imposing softer constraints than ours, which is strongly focused on application scenarios.", "This is, to our mind, the reason why those results differ significantly from ours and underscores the need for a consensus of how to measure CF.In general application scenarios without prior knowledge or extra resources, however, an essential conclusion we draw from Sec. 
4 is that model selection must form an integral part of training a DNN on SLTs.", "Thus, a wrong choice of hyper-parameters based on D 1 can be disastrous for the remaining sub-tasks, which is why application scenarios require DNN variants that do not have extreme dependencies on hyper-parameters such as layer number and layer sizes.Lastly, our findings indicate workarounds that would make EWC or IMM practicable in at least some application scenarios.", "If model selection is addressed, a small subset of D 1 may be kept in memory for both methods: to determine optimal values of α for IMM and to determine when to stop re-training for EWC.", "FIG7 shows that small changes to α do not dramatically impact final accuracy for IMM, and FIG4 indicates that accuracy loss as a function of re-training time is gradual in most cases for EWC.", "The inaccuracies introduced by using only a subset of D 1 would therefore not be very large for both algorithms.To conclude, this study shows that the consideration of applied scenarios significantly changes the procedures to determine CF behavior, as well as the conclusions as to its presence in latestgeneration DNN models.", "We propose and implement such a procedure, and as a consequence claim that CF is still very much of a problem for DNNs.", "More research, either on generic solutions, or on workarounds for specific situations, needs to be conducted before the CF problem can be said to be solved.", "A minor but important conclusion is that results obtained on permutation-type SLTs should be treated with caution in future studies on CF." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.27272728085517883, 0.21621620655059814, 0.03703703358769417, 0.19999998807907104, 0.2631579041481018, 0, 0.1249999925494194, 0, 0, 0.02898550219833851, 0.09302324801683426, 0, 0, 0, 0, 0.052631575614213943, 0, 0.1304347813129425, 0.06451612710952759, 0.07843136787414551, 0.029411761090159416, 0, 0.11428570747375488, 0.09999999403953552, 0.060606054961681366, 0, 0.04347825422883034, 0.09756097197532654, 0.0476190410554409, 0.02816900983452797, 0.13636362552642822, 0.26923075318336487, 0.07017543166875839, 0.13333332538604736, 0.23255813121795654, 0.1860465109348297, 0.1249999925494194, 0.09756097197532654, 0.14999999105930328, 0.10810810327529907, 0.03999999538064003, 0.08510638028383255, 0.10810810327529907, 0, 0.16326530277729034, 0.1621621549129486, 0.138888880610466, 0.07999999821186066, 0.11764705181121826, 0.1818181723356247, 0.19999998807907104, 0.0476190410554409, 0.1463414579629898 ]
BkloRs0qK7
true
[ "We check DNN models for catastrophic forgetting using a new evaluation scheme that reflects typical application conditions, with surprising results." ]
[ "Federated Learning (FL) refers to learning a high quality global model based on decentralized data storage, without ever copying the raw data.", "A natural scenario arises with data created on mobile phones by the activity of their users.", "Given the typical data heterogeneity in such situations, it is natural to ask how can the global model be personalized for every such device, individually.", "In this work, we point out that the setting of Model Agnostic Meta Learning (MAML), where one optimizes for a fast, gradient-based, few-shot adaptation to a heterogeneous distribution of tasks, has a number of similarities with the objective of personalization for FL.", "We present FL as a natural source of practical applications for MAML algorithms, and make the following observations.", "1) The popular FL algorithm, Federated Averaging, can be interpreted as a meta learning algorithm.", "2) Careful fine-tuning can yield a global model with higher accuracy, which is at the same time easier to personalize.", "However, solely optimizing for the global model accuracy yields a weaker personalization result.", "3) A model trained using a standard datacenter optimization method is much harder to personalize, compared to one trained using Federated Averaging, supporting the first claim.", "These results raise new questions for FL, MAML, and broader ML research.", "In recent years, the growth of machine learning applications was driven by aggregation of large amounts of data in a datacenter, where a model can be trained using large scale distributed system (Dean et al., 2012; LeCun et al., 2015) .", "Both the research community and general public are becoming increasingly aware that there is a variety of scenarios where this kind of data collection comes with significant risks, mainly related to notions of privacy and trust.", "In the presence of user generated data, such as activity on mobile phones, Federated Learning (FL) proposes an alternative approach for training a high quality global model without ever sending raw data to the cloud.", "The FL system proposed by Google (Bonawitz et al., 2019 ) selects a sample of available devices and sends them a model to be trained.", "The devices compute an update to the model based on an optimization procedure with locally available data, and the central system aggregates the updates from different devices.", "Such iteration is repeated many times until the model has converged.", "The users' training data does not leave their devices.", "The basic FL algorithm, Federated Averaging (FedAvg) , has been used in production applications, for instance for next word prediction in mobile keyboard (Hard et al., 2018) , which shows that Federated Learning can outperform the best model trained in a datacenter.", "Successful algorithmic extensions to the central idea include training a differential private model (McMahan et al., 2018) , compression (Konečný et al., 2016b; Caldas et al., 2018a) , secure aggregation (Bonawitz et al., 2017) , and a smaller number of always-participating nodes (Yang et al., 2019) .", "FL applications generally face non-i.i.d and unbalanced data available to devices, which makes it challenging to ensure good performance across different devices with a FL-trained global model.", "Theoretical guarantees are only available under restrictive assumptions and for convex objectives, cf.", "Li et al. 
(2019b) .", "In this work, we are interested in personalization methods that adapt the model for data available on each device, individually.", "We refer to a trained global model as the initial model, and the locally adapted model as the personalized model.", "Existing FL personalization work directly takes a converged initial model and conducts personalization evaluation via gradient descent .", "However, in this approach, the training and personalization procedures are completely disconnected, which results in potentially suboptimal personalized models.", "Meta Learning optimizes the performance after adaptation given few-shot adaptation examples on heterogeneous tasks, and has increasing applications in the context of Supervised Learning and Reinforcement Learning.", "Model Agnostic Meta Learning (MAML) introduced by Finn et al. (2017) is a solely gradient-based Meta Learning algorithm, which runs in two connected stages; metatraining and meta-testing.", "Meta-training learns a sensitive initial model which can conduct fast adaptation on a range of tasks, and meta-testing adapts the initial model for a particular task.", "Both tasks for MAML, and clients for FL, are heterogeneous.", "For each task in MAML and client in FL, existing algorithms use a variant of gradient descent locally, and send an overall update to a coordinator to update the global model.", "If we present the FL training process as meta-training in the MAML language, and the FL personalization via gradient descent as meta-testing, we show in Section 2 that FedAvg (McMahan et al., 2017) and Reptile (Nichol et al., 2018) , two popular FL and MAML algorithms, are very similar to each other; see also Khodak et al. (2019) .", "In order to make FL personalization useful in practice, we propose that the following objectives must all be addressed, simultaneously.", "(1) Improved Personalized Model -for a large majority of the clients.", "(2) Solid Initial Model -some clients have limited or even no data for personalization.", "(3) Fast Convergence -reach a high quality model in small number of training rounds.", "Typically, the MAML algorithms only focus on objective (1); that was the original motivation in Finn et al. 
(2017) .", "Existing FL works usually focus on objectives (2) and (3), and take the personalized performance as secondary.", "This is largely due to the fact that it was not obvious that getting a solid initial model is feasible or practical if devices are available occasionally and with limited resources.", "In this work, we study these three objectives jointly, and our main contributions are:", "• We point out the connection between two widely used FL and MAML algorithms, and interpret existing FL algorithm in the light of existing MAML algorithms.", "• We propose a novel modification of FedAvg, with two stages of training and fine-tuning, for optimizing the three above objectives.", "• We empirically demonstrate that FedAvg is already a meta learning algorithm, optimizing for personalized performance, as opposed to quality of the global model.", "Furthermore, we show that the fine tuning stage enables better and more stable personalized performance.", "• We observe that different global models with the same accuracy, can exhibit very different capacity for personalization.", "• We highlight that these results challenge the existing objectives in the FL literature, and motivate new problems for the broader Machine Learning research community.", "It this work, we argue that in the context of Federated Learning, the accuracy of the global model after personalization should be of much greater interest than it has been.", "Investigation of the topic reveals close similarities between the fields of Federated Learning and Model Agnostic Meta Learning, and raises new questions for these areas, as well as for the broader Machine Learning community.", "Challenges for Federated Learning.", "Framing papers in the area of Federated Learning Konečný et al., 2016a; Li et al., 2019a) , formulate the objective as training of a shared global model, based on a decentralized data storage where each node / client has access to a non-i.i.d sample from the overall distribution.", "The objective is identical to one the broader ML community would optimize for, had all the data been available in a centralized location.", "We argue that in this setting, the primary objective should be the adaptation to the statistical heterogeneity present at different data nodes, and demonstrate that the popular FL algorithm, Federated Averaging, does in fact optimize the personalized performance, and while doing so, also improves the performance of the global model.", "Experiments we perform demonstrate that the algorithm used to train the model has major influence on its capacity to personalize.", "Moreover, solely optimizing the accuracy of the global model tends to have negative impact on its capacity to personalize, which further questions the correctness of the commonly presented objectives of Federated Learning.", "Challenges for Model Agnostic Meta Learning.", "The objectives in the Model Agnostic Meta Learning literature are usually only the model performance after adaptation to given task (Finn et al., 2017) .", "In this work, we present the setting of Federated Learning as a good source of practical applications for MAML algorithms.", "However, to have impact in FL, these methods need to also consider the performance of the initial model, 4 as in practice there will be many clients without data available for personalization.", "In addition, the connectivity constraints in a production deployment emphasize the importance of fast convergence in terms of number of communication rounds.", "We suggest these objectives 
become the subject of MAML works, in addition to the performance after adaptation, and to consider the datasets with a natural user/client structure being established for Federated Learning (Caldas et al., 2018b) as the source of experiments for supervised learning.", "Challenges for broader Machine Learning.", "The empirical evaluation in this work raises a number of questions of relevance to Machine Learning research in general.", "In particular, Figure 2 clearly shows that models with similar initial accuracy can have very different capacity to personalize to a task of the same type as it was trained on.", "This observation raises obvious questions for which we currently cannot provide an answer.", "How does the training algorithm impact personalization ability of the trained model?", "Is there something we can measure that will predict the adaptability of the model?", "Is it something we can directly optimize for, potentially leading to novel optimization methods?", "These questions can relate to a gap highlighted in Table 2 .", "While the common measures could suggest the global model is overfitting the training data, this is not true of the personalized model.", "Transfer Learning is another technique for which our result could inspire a novel solution.", "It is very common for machine learning practitioners to take a trained model from the research community, replace the final layer with a different output class of interest, and retrain for the new task (Oquab et al., 2014) .", "We conjecture that the algorithms proposed in the FL and MAML communities, could yield base models for which this kind of domain adaptation would yield better results.", "Finally, we believe that a systematic analysis of optimization algorithms of the inner-outer structure presented in Algorithm 1 could provide novel insights into the connections between optimization and generalization.", "Apart from the FL and MAML algorithms, Zhang et al. (2019) recently proposed a method that can be interpreted as outer optimizer in the general algorithm, which improves the stability of a variety of existing optimization methods used as the inner optimizer.", "A APPENDIX This Appendix contains further details referenced from the main body of the paper.", "Table 3 summarizes the attempts at fine tuning the model user in main body with different server optimizers.", "We see that comparing the same client optimizers, Adam consistently provides better and more stable results in terms of initial accuracy.", "A.2", "PER-CLIENT PERSONALIZATION RESULTS Figure 4 visualizes the distribution of initial and personalized accuracies on a per-client basis.", "Each dot represents a random sample of the test clients used for personalization experiments.", "Studying this distribution is of great importance, as in practical deployment, degrading a user's experience might incur disproportionate cost, compared to the benefit of comparable improvement.", "Designing methods that robustly identify the clients below the diagonal line and at least revert to the initial model is worth of future investigation." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.21621620655059814, 0, 0.10256409645080566, 0.15686273574829102, 0.05882352590560913, 0.19354838132858276, 0.2222222238779068, 0.06896550953388214, 0.25641024112701416, 0, 0.039215680211782455, 0.16326530277729034, 0.1599999964237213, 0.09756097197532654, 0.05128204822540283, 0.07407406717538834, 0, 0.18867924809455872, 0.07843136787414551, 0.09090908616781235, 0.06896550953388214, 0, 0.1111111044883728, 0.12903225421905518, 0.0624999962747097, 0.05882352590560913, 0.10526315122842789, 0.24390242993831635, 0.052631575614213943, 0.07999999821186066, 0.0952380895614624, 0.06666666269302368, 0.0555555522441864, 0.07407406717538834, 0, 0.06666666269302368, 0, 0, 0.17777776718139648, 0, 0, 0.0555555522441864, 0.25, 0, 0, 0.05128204822540283, 0.0476190410554409, 0.1395348757505417, 0.20000000298023224, 0.1355932205915451, 0.15789473056793213, 0.14035087823867798, 0.11764705181121826, 0.1428571343421936, 0.1818181723356247, 0.19999998807907104, 0.17142856121063232, 0.08888888359069824, 0.05882352590560913, 0.145454540848732, 0.0952380895614624, 0.1818181723356247, 0.1304347813129425, 0, 0, 0, 0.06666666269302368, 0.14814814925193787, 0.060606054961681366, 0.19999998807907104, 0.11764705181121826, 0, 0.0476190410554409, 0.11764705181121826, 0, 0, 0, 0.060606054961681366, 0.06666666269302368, 0.1463414579629898, 0.15789473056793213 ]
BkeaEyBYDB
true
[ "Federated Averaging already is a Meta Learning algorithm, while datacenter-trained methods are significantly harder to personalize." ]
[ "Memorization of data in deep neural networks has become a subject of significant research interest. \n", "In this paper, we link memorization of images in deep convolutional autoencoders to downsampling through strided convolution. ", "To analyze this mechanism in a simpler setting, we train linear convolutional autoencoders and show that linear combinations of training data are stored as eigenvectors in the linear operator corresponding to the network when downsampling is used. ", "On the other hand, networks without downsampling do not memorize training data. ", "We provide further evidence that the same effect happens in nonlinear networks. ", "Moreover, downsampling in nonlinear networks causes the model to not only memorize just linear combinations of images, but individual training images. ", "Since convolutional autoencoder components are building blocks of deep convolutional networks, we envision that our findings will shed light on the important phenomenon of memorization in over-parameterized deep networks. \n", "As deep convolutional neural networks (CNNs) become ubiquitous in computer vision due to their applicability and strong performance on a range of tasks BID6 , recent work has begun analyzing the memorization properties of such networks in classification.", "For example, BID19 show that popular CNNs can achieve almost zero training error on randomly labeled datasets, indicating that CNNs have the capacity to \"memorize\" large training data sets.", "BID0 and BID15 build on the experiments from BID19 to better understand and evaluate the extent to which CNNs memorize training data.", "BID0 show that CNNs, when trained on large datasets, are able to learn patterns from realistic data before memorizing training images.", "BID15 present experiments on \"membership inference\" (i.e. determining whether an image was used during training) and conclude that modern architectures are capable of \"remember[ing] a large number of images and distinguish [ing] them from unseen images\".Although", "the above methods analyze memorization in the classification setting, they do not provide a mechanism through which memorization of training data occurs. We here", "present downsampling as one mechanism by which deep CNNs memorize specific training images. We will", "focus our study on the memorization properties of linear and nonlinear fully convolutional autoencoders. The architectures", "we use (such as U-Net, BID14 ) are commonly employed in imageto-image tasks, see e.g. BID17 . However, we will", "use these architectures only in the autoencoding framework. We primarily focus", "on autoencoders BID1 for the following reasons: (1) components of convolutional autoencoders are building blocks of many CNNs; and (2) layerwise pre-training using autoencoders is a technique to initialize individual layers of CNNs to improve training BID3 , BID4 ). It is important to", "note that there are many potential solutions to the autoencoding problem when using over-parameterized autoencoders. In particular, in", "the linear case, these models may range from learning the (full rank) identity function (which has 0 error in the autoencoding task) to low rank solutions where each training example corresponds to an eigenvector with eigenvalue 1. Thus, understanding", "how autoencoders learn is of interest in order to gain insights into how deep CNNs memorize training data.Figures 1a and 1b provide two examples of memorization: A typical U-Net architecture (the same as e.g. 
used in BID17 for large hole impainting) when trained on a single image \"memorizes\" the training image in the sense that for any input, the output always contains the training image (even if the input is random noise or an arbitrary white square). This paper provides", "a mechanism for this phenomenon.The outline is as follows: After introducing some notation in Section 2, we will show in Section 3 that memorization is tightly coupled with downsampling and also occurs in the simpler setting of linear autoencoding CNNs. In the linear setting", ", the neural network corresponds to matrix multiplication. In Section 4, we show", "how to extract this matrix representation and we provide our main conjecture, namely that linear combinations of the training images are stored as eigenvectors of this matrix, whose rank is given by the dimension of the span of the training set. We also provide strong", "evidence for this conjecture on 2 × 2 images.In Section 5, we analyze the eigenvalue decay and show in various examples that using downsampling linear CNNs, linear combinations of the training examples are stored as eigenvectors with eigenvalues close to 1. Finally, we return to", "the nonlinear setting in Section 6, providing evidence that memorization is an even stronger phenomenon in nonlinear networks, since the actual training images (in contrast to linear combinations of training images) are memorized. We end with a short discussion", "in Section 7.", "This paper identified downsampling as a mechanism through which linear CNNs memorize training images.", "We demonstrated that downsampling convolutional autoencoders memorize training images in both the linear and nonlinear setting.", "In particular, we showed that it is not just the dimensionality reduction of downsampling that causes these models to learn point maps by demonstrating that a downsampling CNN architecture with the capacity to learn the identity function still prefers the point map.", "In the linear case, this preference for low-rank over the equally valid high-rank solutions is highly suggestive of similar phenomena observed in problems such as matrix completion (e.g., Gunasekar et al.) .In", "the non-linear case, memorization in downsampling networks is manifested even more strikingly with nearly arbitrary input images being mapped to output images that are visually identifiable as one of the training images. While", "the exact mechanism still needs to be explored, this is reminiscent of FastICA in Independent Component Analysis BID10 or more general non-linear eigen-problems BID2 , where every \"eigenvector\" for certain iterative maps has its own basin of attraction. On the", "other hand, non-downsampling auto-encoders do not memorize the training data and consistently learn a \"high rank\" map, similar to the identity map, at least visually.We conjecture that our findings will help to shed light on the strong generalization properties of downsampling networks for image classification and recognition tasks. Indeed", ", if downsampling networks memorize images or linear combinations of images, when trained on large datasets, they may be capable of learning representations within the space of all realisitic images instead of learning the standard full rank basis.We conclude with a mention of further areas of exploration spurred on by our work. We still", "need to understand why downsampling forces the network to learn low rank solutions even when the network has the capacity to learn the identity. 
This requires", "developing a better grasp of optimization and initialization, starting with linear autoencoders and proceeding to the non-linear settings. Finally, we need", "to explore connections between our conjecture and the manifold hypothesis to better understand the space of realistic images." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1538461446762085, 0.3448275923728943, 0.2666666507720947, 0.0833333283662796, 0.1666666567325592, 0.12121211737394333, 0.15789473056793213, 0.17391304671764374, 0, 0, 0, 0.04255318641662598, 0.24242423474788666, 0.23076923191547394, 0.2222222238779068, 0.13333332538604736, 0.1818181723356247, 0.1666666567325592, 0.13793103396892548, 0.04255318641662598, 0.1265822798013687, 0.2448979616165161, 0, 0.08510638028383255, 0.15686273574829102, 0.17391304671764374, 0.1428571343421936, 0.23999999463558197, 0.37037035822868347, 0.09090908616781235, 0.13636362552642822, 0.19512194395065308, 0.08163265138864517, 0.1428571343421936, 0.1071428507566452, 0.06666666269302368, 0.12903225421905518, 0 ]
ByGUFsAqYm
true
[ "We identify downsampling as a mechansim for memorization in convolutional autoencoders." ]
[ "Reinforcement learning provides a powerful and general framework for decision\n", "making and control, but its application in practice is often hindered by the need\n", "for extensive feature and reward engineering.", "Deep reinforcement learning methods\n", "can remove the need for explicit engineering of policy or value features, but\n", "still require a manually specified reward function.", "Inverse reinforcement learning\n", "holds the promise of automatic reward acquisition, but has proven exceptionally\n", "difficult to apply to large, high-dimensional problems with unknown dynamics.", "In\n", "this work, we propose AIRL, a practical and scalable inverse reinforcement learning\n", "algorithm based on an adversarial reward learning formulation that is competitive\n", "with direct imitation learning algorithms.", "Additionally, we show that AIRL is\n", "able to recover portable reward functions that are robust to changes in dynamics,\n", "enabling us to learn policies even under significant variation in the environment\n", "seen during training.", "While reinforcement learning (RL) provides a powerful framework for automating decision making and control, significant engineering of elements such as features and reward functions has typically been required for good practical performance.", "In recent years, deep reinforcement learning has alleviated the need for feature engineering for policies and value functions, and has shown promising results on a range of complex tasks, from vision-based robotic control BID12 to video games such as Atari BID13 and Minecraft BID16 .", "However, reward engineering remains a significant barrier to applying reinforcement learning in practice.", "In some domains, this may be difficult to specify (for example, encouraging \"socially acceptable\" behavior), and in others, a naïvely specified reward function can produce unintended behavior BID2 .", "Moreover, deep RL algorithms are often sensitive to factors such as reward sparsity and magnitude, making well performing reward functions particularly difficult to engineer.Inverse reinforcement learning (IRL) BID19 BID14 refers to the problem of inferring an expert's reward function from demonstrations, which is a potential method for solving the problem of reward engineering.", "However, inverse reinforcement learning methods have generally been less efficient than direct methods for learning from demonstration such as imitation learning BID10 , and methods using powerful function approximators such as neural networks have required tricks such as domain-specific regularization and operate inefficiently over whole trajectories BID6 .", "There are many scenarios where IRL may be preferred over direct imitation learning, such as re-optimizing a reward in novel environments BID7 or to infer an agent's intentions, but IRL methods have not been shown to scale to the same complexity of tasks as direct imitation learning.", "However, adversarial IRL methods BID6 a) hold promise for tackling difficult tasks due to the ability to adapt training samples to improve learning efficiency.Part of the challenge is that IRL is an ill-defined problem, since there are many optimal policies that can explain a set of demonstrations, and many rewards that can explain an optimal policy BID15 .", "The maximum entropy (MaxEnt) IRL framework introduced by BID24 handles the former ambiguity, but the latter ambiguity means that IRL algorithms have difficulty distinguishing the true reward functions from those shaped by the environment 
dynamics.", "While shaped rewards can increase learning speed in the original training environment, when the reward is deployed at test-time on environments with varying dynamics, it may no longer produce optimal behavior, as we discuss in Sec. 5.", "To address this issue, we discuss how to modify IRL algorithms to learn rewards that are invariant to changing dynamics, which we refer to as disentangled rewards.In this paper, we propose adversarial inverse reinforcement learning (AIRL), an inverse reinforcement learning algorithm based on adversarial learning.", "Our algorithm provides for simultaneous learning of the reward function and value function, which enables us to both make use of the efficient adversarial formulation and recover a generalizable and portable reward function, in contrast to prior works that either do not recover a reward functions BID10 , or operates at the level of entire trajectories, making it difficult to apply to more complex problem settings BID6 a) .", "Our experimental evaluation demonstrates that AIRL outperforms prior IRL methods BID6 on continuous, high-dimensional tasks with unknown dynamics by a wide margin.", "When compared to GAIL BID10 , which does not attempt to directly recover rewards, our method achieves comparable results on tasks that do not require transfer.", "However, on tasks where there is considerable variability in the environment from the demonstration setting, GAIL and other IRL methods fail to generalize.", "In these settings, our approach, which can effectively disentangle the goals of the expert from the dynamics of the environment, achieves superior results." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.06896550953388214, 0, 0.07999999821186066, 0.17391304671764374, 0.1249999925494194, 0.07692307233810425, 0.1818181723356247, 0.13333332538604736, 0.0714285671710968, 0.25806450843811035, 0.3333333432674408, 0.0833333283662796, 0, 0.19354838132858276, 0.06451612710952759, 0, 0.20408162474632263, 0.1355932205915451, 0.25, 0.12765957415103912, 0.2461538463830948, 0.1071428507566452, 0.20000000298023224, 0.1875, 0.08163265138864517, 0.14814814925193787, 0.3396226465702057, 0.21917808055877686, 0, 0.1395348757505417, 0.04878048226237297, 0.15789473056793213 ]
rkHywl-A-
true
[ "We propose an adversarial inverse reinforcement learning algorithm capable of learning reward functions which can transfer to new, unseen environments." ]
[ "We consider two questions at the heart of machine learning; how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well?", "Our work responds to \\citet{zhang2016understanding}, who showed deep neural networks can easily memorize randomly labeled training data, despite generalizing well on real labels of the same inputs.", "We show that the same phenomenon occurs in small linear models.", "These observations are explained by the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization.", "We also demonstrate that, when one holds the learning rate fixed, there is an optimum batch size which maximizes the test set accuracy.", "We propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large.", "Interpreting stochastic gradient descent as a stochastic differential equation, we identify the ``noise scale\" $g = \\epsilon (\\frac{N}{B} - 1) \\approx \\epsilon N/B$, where $\\epsilon$ is the learning rate, $N$ the training set size and $B$ the batch size.", "Consequently the optimum batch size is proportional to both the learning rate and the size of the training set, $B_{opt} \\propto \\epsilon N$.", "We verify these predictions empirically.", "This paper shows Bayesian principles can explain many recent observations in the deep learning literature, while also discovering practical new insights.", "BID27 trained deep convolutional networks on ImageNet and CIFAR10, achieving excellent accuracy on both training and test sets.", "They then took the same input images, but randomized the labels, and found that while their networks were now unable to generalize to the test set, they still memorized the training labels.", "They claimed these results contradict learning theory, although this claim is disputed BID18 BID7 .", "Nonetheless, their results beg the question; if our models can assign arbitrary labels to the training set, why do they work so well in practice?", "Meanwhile BID19 observed that if we hold the learning rate fixed and increase the batch size, the test accuracy usually falls.", "This striking result shows improving our estimate of the full-batch gradient can harm performance.", "BID11 observed a linear scaling rule between batch size and learning rate in a deep ResNet, while BID15 proposed a square root rule on theoretical grounds.Many authors have suggested \"broad minima\" whose curvature is small may generalize better than \"sharp minima\" whose curvature is large BID4 BID14 .", "Indeed, BID7 argued the results of BID27 can be understood using \"nonvacuous\" PAC-Bayes generalization bounds which penalize sharp minima, while BID19 showed stochastic gradient descent (SGD) finds wider minima as the batch size is reduced.", "However BID6 challenged this interpretation, by arguing that the curvature of a minimum can be arbitrarily increased by changing the model parameterization.", "In this work we show:• The results of BID27 are not unique to deep learning; we observe the same phenomenon in a small \"over-parameterized\" linear model.", "We demonstrate that this phenomenon is straightforwardly understood by evaluating the Bayesian evidence in favor of each model, which penalizes sharp minima but is invariant to the model parameterization.•", "SGD integrates a stochastic differential equation whose \"noise scale\" g ≈ N/B, where is the learning rate, N training set size and B batch size. 
Noise", "drives SGD away from sharp minima, and therefore there is an optimal batch size which maximizes the test set accuracy. This", "optimal batch size is proportional to the learning rate and training set size 1 .We describe", "Bayesian model comparison in section 2. In section", "3 we replicate the observations of BID27 in a linear model, and show they are explained by the Bayesian evidence. In section", "4 we show there is an optimum batch size which maximizes the test set accuracy, and in section 5 we derive scaling rules between the optimum batch size, learning rate, training set size and momentum coefficient. Throughout", "this work, \"generalization gap\" refers to the gap in test accuracy between small and large batch SGD training, not the gap in accuracy between training and test sets.", "Just like deep neural networks, linear models which generalize well on informative labels can memorize random labels of the same inputs.", "These observations are explained by the Bayesian evidence, which is composed of the cost function and an \"Occam factor\".", "The Occam factor penalizes sharp minima but it is invariant to changes in model parameterization.", "Mini-batch noise drives SGD away from sharp minima, and therefore there is an optimum batch size which maximizes the test accuracy.", "Interpreting SGD as the discretization of a stochastic differential equation, we predict this optimum batch size should scale linearly with both the learning rate and the training set size, B opt ∝ N .", "We derive an additional scaling rule, B opt ∝ 1/(1 − m), between the optimal batch size and the momentum coefficient.", "We verify these scaling rules empirically and discuss their implications." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1599999964237213, 0.04444443807005882, 0.06896550953388214, 0.277777761220932, 0.09999999403953552, 0.514285683631897, 0.15686273574829102, 0.1621621549129486, 0, 0.10256409645080566, 0.05882352590560913, 0.08695651590824127, 0.0624999962747097, 0.0476190410554409, 0.10810810327529907, 0.1249999925494194, 0.1355932205915451, 0.1538461446762085, 0.052631575614213943, 0.04651162400841713, 0.21739129722118378, 0.23255813121795654, 0.25641024112701416, 0.1818181723356247, 0.07999999821186066, 0.20512819290161133, 0.1249999925494194, 0.20512819290161133, 0.052631575614213943, 0.277777761220932, 0.12121211737394333, 0.307692289352417, 0.16326530277729034, 0.10526315122842789, 0.0714285671710968 ]
BJij4yg0Z
true
[ "Generalization is strongly correlated with the Bayesian evidence, and gradient noise drives SGD towards minima whose evidence is large." ]
[ "In the industrial field, the positron annihilation is not affected by complex environment, and the gamma-ray photon penetration is strong, so the nondestructive detection of industrial parts can be realized.", "Due to the poor image quality caused by gamma-ray photon scattering, attenuation and short sampling time in positron process, we propose the idea of combining deep learning to generate positron images with good quality and clear details by adversarial nets.", "The structure of the paper is as follows: firstly, we encode to get the hidden vectors of medical CT images based on transfer Learning, and use PCA to extract positron image features.", "Secondly, we construct a positron image memory based on attention mechanism as a whole input to the adversarial nets which uses medical hidden variables as a query.", "Finally, we train the whole model jointly and update the input parameters until convergence.", "Experiments have proved the possibility of generating rare positron images for industrial non-destructive testing using countermeasure networks, and good imaging results have been achieved.", "In recent years, with the advancement of science and technology, especially the rapid development of high-end manufacturing, in the field of industrial non-destructive testing, in many cases, it is necessary to perform defect detection without damaging or affecting the performance and internal structure of the device under test.", "Therefore, there is an increasing demand for corresponding detection devices.", "In complex industrial environments (such as aviation, internal combustion engines, chemical engineering, etc.), it is of great research significance to detect faults and defects in closed chambers.", "In this paper, the use of positron annihilation gamma photon imaging positron emission imaging technology for industrial nondestructive testing is studied.", "The occurrence of positron annihilation is not affected by factors such as high temperature, high pressure, corrosion, etc., so it can penetrate the dense metal material cavity, realize the undisturbed and non-destructive trace imaging of the detected object, and obtain the detected object after processing.", "Describe the image and perform a state analysis.", "Therefore, the quality of imaging technology directly affects the analysis of fault detection results.", "Positron Emission Tomography (PET) was first used in medical imaging.", "The principle is that when a radioactive positron nucleus decays, a proton in the nucleus is converted into a neutron, and a positron and a neutron are released.", "The positron will quickly combine with the electrons in the material in a very short time, causing a positron annihilation phenomenon, producing a pair of gamma photon pairs with opposite directions and energy of 511KeV.", "Photon pairs are collected, identified, processed, and finally reconstructed to obtain medical images.", "Commonly used PET reconstruction algorithms are analytic method (K, 2000) and statistical method (Shepp & Vardi, 2007) .", "The currently widely used algorithms are MLEM and OSEM.", "At present, PET technology has been widely used in the clinical diagnosis of human diseases.", "The advantages are quite obvious, the imaging quality is higher, and it shows great advantages in medical research.", "The principle of positron emission in industrial non-destructive fields is similar to medical imaging, but it has its own unique difficulties: the detection environment is more harsh, the sampling time is short, and at the 
same time, due to the phenomenon of scattering and attenuation of photons, industrial positron imaging is obtained.", "The image quality is even worse.", "Therefore, the reconstructed image needs to be further processed to obtain a higher quality image.", "In this paper, we propose adversarial networks of positron image memory module based on attention mechanism.", "Using medical images as basic data sets, introducing knowledge of migration learning, building memory module according to the contribution of detail features to images, a positron image generation network in the field of industrial non-destructive testing is obtained through joint training, thus achieving higher quality generation of industrial positron images.", "In summary, our main contributions in this paper are as follows:", "We are the first to advocate an idea of using Generative Adversarial Networks to enhance the detail of the positron image in the industrial non-destructive field, and realize the generation and processing of the scarce image data in the professional field.", "We use the medical CT image dataset as the basic training sample of the network framework, which is based on the idea of migration learning, and then extract the features of a small number of industrial non-destructively detected positron images, which can improve the details of the generated images, and make the network model have better applicability in the field of industrial non-destructive testing.", "We combine the attention-based mechanism in the professional domain image feature extraction.", "By constructing a memory module containing industrial positron image features, we can generate image generation in a specific domain, and finally conduct an industrial lossless positron image generation model.", "We train the whole network jointly, through the discriminant network of the antagonistic generation network, the front-end network was back-propagated, the input parameters were updated, and the model was optimized.", "Finally, the convergence was achieved and The Turing test was passed successfully.", "In this paper, we introduce an application of GAN in the field of nondestructive testing for specific industries.", "We combine the knowledge of transfer learning to make up the problem of insufficient data.", "The key point is to introduce attention mechanism to construct a positron image feature memory module, which can reuse image features under the condition of scarce data.", "At the same time, an attention loss function is added to the discriminative net to further improve the generator performance.", "Experiments show that compared with the start-of-the-art generation methods in deep learning, the model in our paper has an obvious improvement in the quality of industrial positron image generation.", "In the future, our focus is to further study the application of generative adversarial networks in industrial positron image processing, and to further improve the quality of domain images." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.060606058686971664, 0.0952380895614624, 0.054054051637649536, 0.1875, 0, 0.06451612710952759, 0, 0, 0, 0.07407406717538834, 0.042553190141916275, 0, 0, 0, 0.0714285671710968, 0.0555555522441864, 0, 0, 0, 0, 0, 0.0416666641831398, 0, 0, 0.25, 0.12244897335767746, 0, 0.10810810327529907, 0.07407407462596893, 0, 0.06451612710952759, 0, 0, 0, 0.0952380895614624, 0.1818181723356247, 0.07999999821186066, 0.0624999962747097, 0.1249999925494194 ]
SkxcSpEKPS
true
[ "adversarial nets, attention mechanism, positron images, data scarcity" ]
[ "We revisit the Recurrent Attention Model (RAM, Mnih et al. (2014)), a recurrent neural network for visual attention, from an active information sampling perspective. \n\n", "We borrow ideas from neuroscience research on the role of active information sampling in the context of visual attention and gaze (Gottlieb, 2018), where the author suggested three types of motives for active information sampling strategies.", "We find the original RAM model only implements one of them.\n\n", "We identify three key weakness of the original RAM and provide a simple solution by adding two extra terms on the objective function.", "The modified RAM", "1) achieves faster convergence,", "2) allows dynamic decision making per sample without loss of accuracy, and", "3) generalizes much better on longer sequence of glimpses which is not trained for, compared with the original RAM. \n", "We revisit the Recurrent Attention Model (RAM, ), a recurrent neural network for visual attention, from an active information sampling perspective.", "The RAM, instead of processing the input image for classification in full, only takes a glimpse at a small patch of the image at a time.", "The recurrent attention mechanism learns where to look at to obtain new information based on the internal state of the network.", "After a pre-defined number of glimpses, RAM finally makes a prediction as output.", "Compared with the attention mechanism which now dominates the AI/NLP research such as Transformer (Vaswani et al., 2017) and BERT (Devlin et al., 2018) , this recurrent attention mechanism is fundamentally different, as it is used to obtain new information (active sampling of information), rather than processing information that is already fully observed.", "In this paper, we identify three weaknesses of this widely-cited approach.", "First, the convergence of RAM training is slow.", "Second, RAM does not support dynamic number of glimpses per sample, but uses a fixed number of glimpses for every sample.", "Third and perhaps most importantly, the performance of the original RAM does not improve but rather decrease dramatically if it takes more glimpses, which is weird and against intuition.", "We provide a simple solution of adding two extra terms in the objective function of RAM, insipred from neuroscience research (Gottlieb, 2018) which discusses the logic and neural substrates of information sampling policies in the context of visual attention and gaze.", "Base on the evidence available so far, Gottlieb (2018) suggested three kinds of motives for the active sampling strategies of decision-making while the original RAM only implements one of them.", "We incorporate the other two motives in the objective function, and by doing so we", "1) achieve much faster convergence and", "2) instantly enbale decision making with a dynamic number of glimpse for different samples with no loss of accuracy.", "3) More importantly, we find that the modified RAM generalizes much better to longer sequence of glimpses which is not trained for.", "We evaluate on MNIST dataset as in the orignal RAM paper.", "We set the train-time number of glimpses N = 6 for it achieves the best test-time accuracy in .", "Implementation details see the source code 1 .", "We first show in Figure 1 that the two new terms in the objective both contribute to a faster convergence.", "We test four cases", "1) the orignal objective,", "2) add the J intrinsic , 3) add J uncertainty , 4) add both new terms.", "We see in Figure 1 that both of our new objective in isolation 
help faster learning and together give the fastest convergence.", "As in Figure 2, we test the trained models with varying numbers of glimpses.", "(We want to emphasize that the focus is not the absolute performance, but rather the generalization to more glimpses than at train time.)", "We first evaluate the non-dynamic case (a fixed number for all samples).", "The performance of the original RAM decreases dramatically when N > 10.", "Adding both terms, the modified RAM no longer suffers this decrease, even when N is large.", "Also, it is interesting that when adding only the uncertainty term the improvement is very slight, while the intrinsic term effectively stabilizes the prediction accuracy given more glimpses.", "We also test the dynamic case by varying the exploration rate.", "We see that a dynamic number of glimpses does not hurt the performance very much, which confirms the hypothesis that some samples are easier to discriminate and thus need fewer glimpses.", "One may argue that, given longer training time or other hyperparameter tuning, RAM will eventually reach a point where it can give stable prediction accuracy on more glimpses, and the new objective only makes it converge faster to that point.", "But during our experiments, we find that with λ2 = 0.1 the J intrinsic term effectively stabilizes the prediction given more glimpses, even when trained for only 1 epoch.", "We observe that the l2-norm of the internal states of the original RAM becomes very large given a longer sequence of glimpses, while the modified RAM with J intrinsic remains stable." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.08695651590824127, 0.23999999463558197, 0.1818181723356247, 0.5581395030021667, 0, 0, 0.060606054961681366, 0.1463414579629898, 0.0952380895614624, 0.09756097197532654, 0.25, 0.060606054961681366, 0.1230769157409668, 0.19354838132858276, 0.13793103396892548, 0.051282044500112534, 0.0833333283662796, 0.3272727131843567, 0.1702127605676651, 0.22857142984867096, 0, 0.052631575614213943, 0.09302324801683426, 0.1249999925494194, 0.10526315122842789, 0.0714285671710968, 0.20512819290161133, 0, 0.07999999821186066, 0.12121211737394333, 0.1395348757505417, 0.1111111044883728, 0.09302324801683426, 0.0624999962747097, 0.12121211737394333, 0.05405404791235924, 0.08695651590824127, 0.12903225421905518, 0.08163265138864517, 0.10344827175140381, 0.03999999538064003, 0.08695651590824127 ]
HJlVEQt8Lr
true
[ " Inspired by neuroscience research, solve three key weakness of the widely-cited recurrent attention model by simply adding two terms on the objective function." ]
[ "Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning from graphically structured data.", "However, a large quantity of labeled graphs is difficult to obtain, which significantly limit the true success of GNNs.", "Although active learning has been widely studied for addressing label-sparse issues with other data types like text, images, etc., how to make it effective over graphs is an open question for research. ", "In this paper, we present the investigation on active learning with GNNs for node classification tasks. ", "Specifically, we propose a new method, which uses node feature propagation followed by K-Medoids clustering of the nodes for instance selection in active learning.", "With a theoretical bound analysis we justify the design choice of our approach.", "In our experiments on four benchmark dataset, the proposed method outperforms other representative baseline methods consistently and significantly.", "Graph Neural Networks (GNN) (Kipf & Welling, 2016; Veličković et al., 2017; Hamilton et al., 2017; Wu et al., 2019) have been widely applied in many supervised and semi-supervised learning scenarios such as node classifications, edge predictions and graph classifications over the past few years.", "Though GNN frameworks are effective at fusing both the feature representations of nodes and the connectivity information, people are longing for enhancing the learning efficiency of such frameworks using limited annotated nodes.", "This property is in constant need as the budget for labeling is usually far less than the total number of nodes.", "For example, in biological problems where a graph represents the chemical structure (Gilmer et al., 2017; Jin et al., 2018 ) of a certain drug assembled through atoms, it is not easy to obtain a detailed analysis of the function for each atom since getting expert labeling advice is very expensive.", "On the other hand, people can carefully design a small \"seeding pool\" so that by selecting \"representative\" nodes or atoms as the training set, a GNN can be trained to get an automatic estimation of the functions for all the remaining unlabeled ones.", "Active Learning (AL) (Settles, 2009; Bodó et al., 2011) , following this lead, provides solutions that select \"informative\" examples as the initial training set.", "While people have proposed various methods for active learning on graphs (Bilgic et al., 2010; Kuwadekar & Neville, 2011; Moore et al., 2011; Rattigan et al., 2007) , active learning for GNN has received relatively few attention in this area.", "Cai et al. (2017) and Gao et al. (2018) are two major works that study active learning for GNN.", "The two papers both use three kinds of metrics to evaluate the training samples, namely uncertainty, information density, and graph centrality.", "The first two metrics make use of the GNN representations learnt using both node features and the graph; while they might be reasonable with a good (well-trained) GNN model, the metrics are not informative when the label budget is limited and/or the network weights are under-trained so that the learned representation is not good.", "On the other hand, graph centrality ignores the node features and might not get the real informative nodes.", "Further, methods proposed in Cai et al. (2017) ; Gao et al. 
(2018) only combine the scores using simple linear weighted-sum, which do not solve these problems principally.", "We propose a method specifically designed for GNN that naturally avoids the problems of methods above 1 .", "Our method select the nodes based on node features propagated through the graph structure, 1 Our code will be released upon acceptance.", "making it less sensitive to inaccuracies of representation learnt by under-trained models.", "Then we cluster the nodes using K-Medoids clustering; K-Medoids is similar to the conventional K-Means, but constrains the centers to be real nodes in the graph.", "Theoretical results and practical experiments prove the strength of our algorithm.", "• We perform a theoretical analysis for our method and study the relation between its classification loss and the geometry of the propagated node features.", "• We show the advantage of our method over Coreset (Sener & Savarese, 2017) by comparing the bounds.", "We also conjecture that similar bounds are not achievable if we use raw unpropagated node features.", "• We compare our method with several AL methods and obtain the best performance over all benchmark datasets.", "We study the active learning problem in the node classification task for Graph Convolution Networks (GCNs).", "We propose a propagated node feature selection approach (FeatProp) to comply with the specific structure of GCNs and give a theoretical result characterizing the relation between its classification loss and the geometry of the propagated node features.", "Our empirical experiments also show that FeatProp outperforms the state-of-the-art AL methods consistently on most benchmark datasets.", "Note that FeatProp only focuses on sampling representative points in a meaningful (graph) representation, while uncertainty-based methods select the active nodes from a different criterion guided by labels, how to combine that category of methods with FeatProp in a principled way remains an open and yet interesting problem for us to explore." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05882352590560913, 0.1428571343421936, 0.1395348757505417, 0.2222222238779068, 0.1764705777168274, 0.08695651590824127, 0.0714285671710968, 0.03999999538064003, 0.0555555522441864, 0.06896550953388214, 0.036363635212183, 0.0416666641831398, 0, 0.190476194024086, 0.14814814925193787, 0, 0.037735845893621445, 0, 0, 0.07407406717538834, 0.06666666269302368, 0, 0, 0.0952380895614624, 0.0624999962747097, 0, 0, 0, 0.1599999964237213, 0.05128204822540283, 0.07407406717538834, 0.1090909093618393 ]
HylwpREtDr
true
[ "This paper introduces a clustering-based active learning algorithm on graphs." ]
[ "Continuous Normalizing Flows (CNFs) have emerged as promising deep generative models for a wide range of tasks thanks to their invertibility and exact likelihood estimation.", "However, conditioning CNFs on signals of interest for conditional image generation and downstream predictive tasks is inefficient due to the high-dimensional latent code generated by the model, which needs to be of the same size as the input data.", "In this paper, we propose InfoCNF, an efficient conditional CNF that partitions the latent space into a class-specific supervised code and an unsupervised code that shared among all classes for efficient use of labeled information.", "Since the partitioning strategy (slightly) increases the number of function evaluations (NFEs), InfoCNF also employs gating networks to learn the error tolerances of its ordinary differential equation (ODE) solvers for better speed and performance.", "We show empirically that InfoCNF improves the test accuracy over the baseline while yielding comparable likelihood scores and reducing the NFEs on CIFAR10.", "Furthermore, applying the same partitioning strategy in InfoCNF on time-series data helps improve extrapolation performance.", "Invertible models are attractive modelling choice in a range of downstream tasks that require accurate densities including anomaly detection (Bishop, 1994; Chandola et al., 2009 ) and model-based reinforcement learning (Polydoros & Nalpantidis, 2017) .", "These models enable exact latent-variable inference and likelihood estimation.", "A popular class of invertible models is the flow-based generative models (Dinh et al., 2017; Rezende & Mohamed, 2015; Kingma & Dhariwal, 2018; Grathwohl et al., 2018) that employ a change of variables to transform a simple distribution into more complicated ones while preserving the invertibility and exact likelihood estimation.", "However, computing the likelihood in flow-based models is expensive and usually requires restrictive constraints on the architecture in order to reduce the cost of computation.", "Recently, introduced a new type of invertible model, named the Continuous Normalizing Flow (CNF), which employs ordinary differential equations (ODEs) to transform between the latent variables and the data.", "The use of continuous-time transformations in CNF, instead of the discrete ones, together with efficient numerical methods such as the Hutchinson's trace estimator (Hutchinson, 1989) , helps reduce the cost of determinant computation from O(d 3 ) to O(d), where d is the latent dimension.", "This improvement opens up opportunities to scale up invertible models to complex tasks on larger datasets where invertibility and exact inference have advantages.", "Until recently, CNF has mostly been trained using unlabeled data.", "In order to take full advantage of the available labeled data, a conditioning method for CNF -which models the conditional likelihood, as well as the posterior, of the data and the labels -is needed.", "Existing approaches for conditioning flow-based models can be utilized, but we find that these methods often do not work well on CNF.", "This drawback is because popular conditioning methods for flow-based models, such as in (Kingma & Dhariwal, 2018) , make use of the latent code for conditioning and introduce independent parameters for different class categories.", "However, in CNF, for invertibility, the dimension of the latent code needs to be the same as the dimension of the input data and therefore is substantial, which results in many 
unnecessary parameters.", "These additional but redundant parameters increase the complexity of the model and hinder learning efficiency.", "Such overparametrization also has a negative impact on other flow-based generative models, as was pointed out by (Liu et al., 2019) , but is especially bad in the case of CNF.", "This is because the ODE solvers in CNF are sensitive to the complexity of the model, and the number of function evaluations that the ODE solvers request in a single forward pass (NFEs) increases significantly as the complexity of the model increases, thereby slowing down the training.", "This growing NFEs issue has been observed in unconditional CNF but to a much lesser extent (Grathwohl et al., 2018) .", "It poses a unique challenge to scale up CNF and its conditioned variants for real-world tasks and data.", "Our contributions in this paper are as follows:", "Contribution 1: We propose a simple and efficient conditioning approach for CNF, namely InfoCNF.", "Our method shares its high-level intuition with InfoGAN (Chen et al., 2016) , hence the name.", "In InfoCNF, we partition the latent code into two separate parts: class-specific supervised code and unsupervised code which is shared between different classes (see Figure 1) .", "We use the supervised code to condition the model on the given supervised signal while the unsupervised code captures other latent variations in the data since it is trained using all label categories.", "The supervised code is also used for classification, thereby reducing the size of the classifier and facilitating the learning.", "Splitting the latent code into unsupervised and supervised parts allows the model to separate the learning of the task-relevant features and the learning of other features that help fit the data.", "We later show that the cross-entropy loss used to train InfoCNF corresponds to the mutual information between the generated image and codes in InfoGAN, which encourages the model to learn disentangled representations.", "We have developed an efficient framework, namely InfoCNF, for conditioning CNF via partitioning the latent code into supervised and unsupervised parts.", "We investigated the possibility of tuning the error tolerances of the ODE solvers to speed up and improve the performance of InfoCNF.", "We further equipped InfoCNF with gating networks that learn the error tolerances from the data.", "We empirically show the advantages of InfoCNF and InfoCNF with learned tolerances over the baseline CCNF.", "Finally, we study the possibility of improving large-batch training of our models using large learning rates and learned error tolerances of the ODE solvers." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.09090908616781235, 0.15094339847564697, 0.35999998450279236, 0.4000000059604645, 0.14999999105930328, 0.05882352590560913, 0.07407406717538834, 0, 0.1269841194152832, 0.1463414579629898, 0.17391303181648254, 0.1355932205915451, 0.04999999329447746, 0.06896550953388214, 0.21276594698429108, 0.09756097197532654, 0.07999999821186066, 0.13333332538604736, 0.12121211737394333, 0.11999999731779099, 0.2641509473323822, 0.09999999403953552, 0.1111111044883728, 0, 0.1818181723356247, 0.05882352590560913, 0.09302324801683426, 0.1304347813129425, 0.1111111044883728, 0.19512194395065308, 0.21739129722118378, 0.29999998211860657, 0.4444444477558136, 0.4375, 0.24242423474788666, 0.29999998211860657 ]
SJgvl6EFwH
true
[ "We propose the InfoCNF, an efficient conditional CNF that employs gating networks to learn the error tolerances of the ODE solvers " ]
[ "A central goal of unsupervised learning is to acquire representations from unlabeled data or experience that can be used for more effective learning of downstream tasks from modest amounts of labeled data.", "Many prior unsupervised learning works aim to do so by developing proxy objectives based on reconstruction, disentanglement, prediction, and other metrics.", "Instead, we develop an unsupervised meta-learning method that explicitly optimizes for the ability to learn a variety of tasks from small amounts of data.", "To do so, we construct tasks from unlabeled data in an automatic way and run meta-learning over the constructed tasks.", "Surprisingly, we find that, when integrated with meta-learning, relatively simple task construction mechanisms, such as clustering embeddings, lead to good performance on a variety of downstream, human-specified tasks.", "Our experiments across four image datasets indicate that our unsupervised meta-learning approach acquires a learning algorithm without any labeled data that is applicable to a wide range of downstream classification tasks, improving upon the embedding learned by four prior unsupervised learning methods.", "Unsupervised learning is a fundamental, unsolved problem (Hastie et al., 2009 ) and has seen promising results in domains such as image recognition (Le et al., 2013) and natural language understanding BID19 .", "A central use case of unsupervised learning methods is enabling better or more efficient learning of downstream tasks by training on top of unsupervised representations BID23 BID7 or fine-tuning a learned model BID13 .", "However, since the downstream objective requires access to supervision, the objectives used for unsupervised learning are only a rough proxy for downstream performance.", "If a central goal of unsupervised learning is to learn useful representations, can we derive an unsupervised learning objective that explicitly takes into account how the representation will be used?The", "use of unsupervised representations for downstream tasks is closely related to the objective of meta-learning techniques: finding a learning procedure that is more efficient and effective than learning from scratch. However", ", unlike unsupervised learning methods, meta-learning methods require large, labeled datasets and hand-specified task distributions. These", "dependencies are major obstacles to widespread use of these methods for few-shot classification.To begin addressing these problems, we propose an unsupervised meta-learning method: one which aims to learn a learning procedure, without supervision, that is useful for solving a wide range of new, human-specified tasks. With", "only raw, unlabeled observations, our model's goal is to learn a useful prior such that, after meta-training, when presented with a modestly-sized dataset for a human-specified task, the model can transfer its prior experience to efficiently learn to perform the new task. If we", "can build such an algorithm, we can enable few-shot learning of new tasks without needing any labeled data nor any pre-defined tasks.To perform unsupervised meta-learning, we need to automatically construct tasks from unlabeled data. We study", "several options for how this can be done. We find", "that a good task distribution should be diverse, but also not too difficult: naïve random approaches for task generation produce tasks that contain insufficient regularity to enable useful meta-learning. 
To that", "end, our method proposes tasks by first leveraging prior unsupervised learning algorithms to learn an embedding of the input data, and then performing an overcomplete partitioning of the dataset to construct numerous categorizations of the data. We show", "how we can derive classification tasks from these categorizations for use with meta-learning algorithms. Surprisingly", ", even with simple mechanisms for partitioning the embedding space, such as k-means clustering, we find that meta-learning acquires priors that, when used to learn new, human-designed tasks, learn those tasks more effectively than methods that directly learn on the embedding. That is, the", "learning algorithm acquired through unsupervised meta-learning achieves better downstream performance than the original representation used to derive meta-training tasks, without introducing any additional assumptions or supervision. See Figure 1", "for an illustration of the complete approach.The core idea in this paper is that we can leverage unsupervised embeddings to propose tasks for a meta-learning algorithm, leading to an unsupervised meta-learning algorithm that is particularly effective as pre-training for human-specified downstream tasks. In the following", "sections, we formalize our problem assumptions and goal, which match those of unsupervised learning, and discuss several options for automatically deriving tasks from embeddings. We instantiate our", "method with two meta-learning algorithms and compare to prior state-of-the-art unsupervised learning methods. Across four image", "datasets (MNIST, Omniglot, miniImageNet, and CelebA), we find that our method consistently leads to effective downstream learning of a variety of human-specified tasks, including character recognition tasks, object classification tasks, and facial attribute discrimination tasks, without requiring any labels or hand-designed tasks during meta-learning and where key hyperparameters of our method are held constant across all domains. We show that, even", "though our unsupervised meta-learning algorithm trains for one-shot generalization, one instantiation of our approach performs well not only on few-shot learning, but also when learning downstream tasks with up to 50 training examples per class. In fact, some of our", "results begin to approach the performance of fully-supervised meta-learning techniques trained with fully-specified task distributions.... , , . Figure 1 : Illustration", "of the", "proposed unsupervised meta-learning procedure. Embeddings of raw observations", "are clustered with k-means to construct partitions, which give rise to classification tasks. Each task involves distinguishing", "between examples from N = 2 clusters, with Km-tr = 1 example from each cluster being a training input. 
The meta-learner's aim is to produce", "a learning procedure that successfully solves these tasks.", "We demonstrate that meta-learning on tasks produced using simple mechanisms based on embeddings improves upon the utility of these representations in learning downstream, human-specified tasks.", "We empirically show that this holds across benchmark datasets and tasks in the few-shot classification literature BID26 BID22 , task difficulties, and embedding learning methods while fixing key hyperparameters across all experiments.In a sense, CACTUs can be seen as a facilitating interface between an embedding learning method and a meta-learning algorithm.", "As shown in the results, the meta-learner's performance significantly depends on the nature and quality of the task-generating embeddings.", "We can expect our method to yield better performance as the methods that produce these embedding functions improve, becoming better suited for generating diverse yet distinctive clusterings of the data.", "However, the gap between unsupervised and supervised meta-learning will likely persist because, with the latter, the meta-training task distribution is human-designed to mimic the expected evaluation task distribution as much as possible.", "Indeed, to some extent, supervised meta-learning algorithms offload the effort of designing and tuning algorithms onto the effort of designing and tuning task distributions.", "With its evaluation-agnostic task generation, CACTUs-based meta-learning trades off performance in specific use-cases for broad applicability and the ability to train on unlabeled data.", "In principle, CACTUs-based meta-learning may outperform supervised meta-learning when the latter is trained on a misaligned task distribution.", "We leave this investigation to future work.While we have demonstrated that k-means is a broadly useful mechanism for constructing tasks from embeddings, it is unlikely that combinations of k-means clusters in learned embedding spaces are universal approximations of arbitrary class definitions.", "An important direction for future work is to find examples of datasets and human-designed tasks for which CACTUs-based meta-learning results in ineffective downstream learning.", "This will result in better understanding of the practical scope of applicability for our method, and spur further development in automatic task construction mechanisms for unsupervised meta-learning.A potential concern of our experimental evaluation is that MNIST, Omniglot, and miniImageNet exhibit particular structure in the underlying class distribution (i.e., perfectly balanced classes), since they were designed to be supervised learning benchmarks.", "In more practical applications of machine learning, such structure would likely not exist.", "Our CelebA results indicate that CACTUs is effective even in the case of a dataset without neatly balanced classes or attributes.", "An interesting direction for future work is to better characterize the performance of CACTUs and other unsupervised pretraining methods with highly-unstructured, unlabeled datasets.Since MAML and ProtoNets produce nothing more than a learned representation, our method can be viewed as deriving, from a previous unsupervised representation, a new representation particularly suited for learning downstream tasks.", "Beyond visual classification tasks, the notion of using unsupervised pre-training is generally applicable to a wide range of domains, including regression, speech (Oord et al., 2018) , language (Howard & 
Ruder, 2018) , and reinforcement learning BID28 .", "Hence, our unsupervised meta-learning approach has the potential to improve unsupervised representations for a variety of such domains, an exciting avenue for future work." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2666666507720947, 0.1538461446762085, 0.2926829159259796, 0.05405404791235924, 0.08695651590824127, 0.4000000059604645, 0.0833333283662796, 0.260869562625885, 0.21052631735801697, 0.21739129722118378, 0.3478260934352875, 0.23529411852359772, 0.2666666507720947, 0.0363636314868927, 0.20408162474632263, 0, 0.17391303181648254, 0.19999998807907104, 0.12121211737394333, 0.178571417927742, 0.25531914830207825, 0.2222222238779068, 0.09302324801683426, 0.47058823704719543, 0.2571428418159485, 0.2181818187236786, 0.15789473056793213, 0.23076923191547394, 0.11764705181121826, 0.04878048226237297, 0.1538461446762085, 0.19512194395065308, 0.1904761791229248, 0.05882352590560913, 0.21739129722118378, 0.13636362552642822, 0.17142856121063232, 0.0952380895614624, 0.05714285373687744, 0.1071428507566452, 0.2926829159259796, 0.16438356041908264, 0.06451612710952759, 0.10256409645080566, 0.23880596458911896, 0.23076923191547394, 0.19999998807907104 ]
r1My6sR9tX
true
[ "An unsupervised learning method that uses meta-learning to enable efficient learning of downstream image classification tasks, outperforming state-of-the-art methods." ]
[ "Domain transfer is a exciting and challenging branch of machine learning because models must learn to smoothly transfer between domains, preserving local variations and capturing many aspects of variation without labels. \n", "However, most successful applications to date require the two domains to be closely related (ex. image-to-image, video-video), \n", "utilizing similar or shared networks to transform domain specific properties like texture, coloring, and line shapes. \n", "Here, we demonstrate that it is possible to transfer across modalities (ex. image-to-audio) by first abstracting the data with latent generative models and then learning transformations between latent spaces. \n", "We find that a simple variational autoencoder is able to learn a shared latent space to bridge between two generative models in an unsupervised fashion, and even between different types of models (ex. variational autoencoder and a generative adversarial network). \n", "We can further impose desired semantic alignment of attributes with a linear classifier in the shared latent space. \n", "The proposed variation autoencoder enables preserving both locality and semantic alignment through the transfer process, as shown in the qualitative and quantitative evaluations.\n", "Finally, the hierarchical structure decouples the cost of training the base generative models and semantic alignments, enabling computationally efficient and data efficient retraining of personalized mapping functions.", "Domain transfer has long captured the imagination of inventors and artists alike.", "The early precursor of the phonograph, the phonautograph, was actually inspired by the idea of \"words which write themselves\", where the shape of audio waveforms would transform into the shape of writing, capturing the content and character of the speaker's voice in shape and stroke of the written characters BID9 .", "While perhaps fanciful at the time, modern deep learning techniques have shown similar complex transformations are indeed possible.Deep learning enables domain transfer by learning a smooth mapping between two domains such that the variations in one domain are reflected in the other.", "This has been demonstrated to great effect within a data modality, for example transferring between two different styles of image BID12 BID18 , video BID26 and music BID23 .", "The works have been the basis of interesting creative tools, as small intuitive changes in the source domain are reflected by small intuitive changes in the target domain.", "Furthermore, the strong conditioning signal of the source domain makes learning transformations easier than learning a full generative model in each domain.Despite these successes, this line of work in domain transfer has several limitations.", "The first limitation is that it requires that two domains should be closely related (e.g. 
image-to-image or video-to-video) .", "This allows the model to focus on transferring local properties like texture and coloring instead of high-level semantics.", "For example, directly applying these image-to-image transfer methods such as CycleGAN or its variants to images from distant domains leads to distorted and unrealistic results .", "This agrees with the findings of BID3 who show that CycleGAN transformations are more akin to adversarial examples than style transfer, as the model Our method aims at transfer from one domain to another domain such that the correct semantics (e.g., label) is maintained across domains and local changes in the source domain should be reflected in the target domain.", "To achieve this, we train a model to transfer between the latent spaces of pre-trained generative models on source and target domains.", "(a) The training is done with three types of loss functions: (1) The VAE ELBO losses to encourage modeling of z_1 and z_2, which are denoted as L2 and KL in the figure.", "(2) The Sliced Wasserstein Distance loss to encourage cross-domain overlapping in the shared latent space, which is denoted as SWD.", "(3) The classification loss to encourage intra-class overlap in the shared latent space, which is denoted as Classifier.", "The training is semi-supervised, since (1) and (2) require no supervision (classes) while only (3) needs such information.", "(b) To transfer data from one domain x_1 (an image of the digit \"0\") to another domain x_2 (an audio clip of a human saying \"zero\", shown in the form of a spectrum in the example), we first encode x_1 to z_1 ∼ q(z_1|x_1), which we then further encode to a shared latent vector z′ using our conditional encoder, z′ ∼ q(z′|z_1, D = 1), where D denotes the operating domain.", "We then decode to the latent space of the target domain, z_2 = g(z_2|z′, D = 2), using our conditional decoder, which finally is used to generate the transferred audio x_2 = g(x_2|z_2).learns", "to hide information about the source domain in near-imperceptible high-frequency variations of the target domain. The second limitation is data efficiency. Most conditional", "GAN techniques, such as Pix2Pix BID12 and vid2vid BID26 , require very dense supervision from large volumes of paired data. This is usually", "accomplished by extracting features, such as edges or a segmentation map, and then training the conditional GAN to learn the inverse mapping back to pixels. For many more interesting", "transformations, no such easy alignment procedure exists, and paired data is scarce. We demonstrate the limitation", "of existing approaches in Appendix C. For multi-modal domain transfer, we seek to train a model capable of transferring instances from a source domain (x_1) to a target domain (x_2), such that local variations in the source domain are transferred to local variations in the target domain. We refer to this property as", "locality. Thus, local interpolation in", "the source domain would ideally be similar to local interpolation in the target domain when transferred. There are many possible ways that two domains could align such that they maintain locality, with many different alignments of semantic attributes. For instance, for a limited", "dataset, there is no a priori reason that images of the digit \"0\" and spoken utterances of the digit \"0\" would align with each other. 
Or more abstractly, there may", "be no agreed common semantics for images of landscapes and passages of music, and it is at the liberty of the user to define such connections based on their own intent. Our goal in modeling is to respect", "the user's intent and make sure that the correct semantics (e.g., labels) are shared between the two domains after transfer. We refer to this property as semantic", "alignment. A user can thus sort a set of data points", "from each domain into common bins, which we can use to constrain the cross-domain alignment. We can quantitatively measure the degree", "of semantic alignment by using a classifier to label transformed data and measuring the percentage of data points that fall into the same bin for the source and target domain. Our goal can thus be stated as learning", "transformations that preserve locality and semantic alignment, while requiring as few labels from a user as possible. To achieve this goal and tackle prior limitations, we propose to abstract the domains with independent latent variable models, and then learn to transfer between the latent spaces of those models. Our main contributions include:• We propose", "a shared \"bridging\" VAE to transfer between latent generative models. Locality and semantic alignment of transformations", "are encouraged by applying a sliced-Wasserstein distance and a classification loss, respectively, to the shared latent space.• We demonstrate with qualitative and quantitative", "results that our proposed method enables transfer both within a modality (image-to-image) and between modalities (image-to-audio).• Since we train a smaller secondary model in latent", "space, we find improvements in training efficiency, measured both in terms of the amount of required labeled data and in training time.", "We have demonstrated an approach to learn mappings between disparate domains by bridging the latent codes of each domain with a shared autoencoder.", "We find bridging VAEs are able to achieve high transfer accuracies, smoothly map interpolations between domains, and even connect different model types (VAEs and GANs).", "Here, we have restricted ourselves to datasets with intuitive class-level mappings for the purpose of quantitative comparisons, however, there are many interesting creative possibilities to apply these techniques between domains without a clear semantic alignment.", "As a semi-supervised technique, we have shown bridging autoencoders to require fewer supervised labels, making it more feasible to learn personalized cross-modal domain transfer based on the creative guidance of individual users." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23076923191547394, 0.05128204822540283, 0.04999999329447746, 0.307692289352417, 0.290909081697464, 0.1904761791229248, 0.31111109256744385, 0.2222222238779068, 0.17142856121063232, 0.06896550953388214, 0.16949151456356049, 0.15686273574829102, 0.045454539358615875, 0.11538460850715637, 0.09756097197532654, 0.1463414579629898, 0.1304347813129425, 0.13333332538604736, 0.4888888895511627, 0.1090909019112587, 0.04651162400841713, 0.04878048226237297, 0.09756097197532654, 0.07792207598686218, 0.0714285671710968, 0.045454539358615875, 0.08695651590824127, 0.03999999538064003, 0.10256409645080566, 0.06451612710952759, 0.0714285671710968, 0.16129031777381897, 0.1249999925494194, 0.1090909019112587, 0.23999999463558197, 0.11764705181121826, 0.04651162400841713, 0.178571417927742, 0.3380281627178192, 0.5128204822540283, 0.08888888359069824, 0.25, 0.09302324801683426, 0.17391303181648254, 0.1702127605676651, 0.17543859779834747, 0.1111111044883728 ]
r1xrb3CqtQ
true
[ "Conditional VAE on top of latent spaces of pre-trained generative models that enables transfer between drastically different domains while preserving locality and semantic alignment." ]
[ "We propose Adversarial Inductive Transfer Learning (AITL), a method for addressing discrepancies in input and output spaces between source and target domains.", "AITL utilizes adversarial domain adaptation and multi-task learning to address these discrepancies.", "Our motivating application is pharmacogenomics where the goal is to predict drug response in patients using their genomic information.", "The challenge is that clinical data (i.e. patients) with drug response outcome is very limited, creating a need for transfer learning to bridge the gap between large pre-clinical pharmacogenomics datasets (e.g. cancer cell lines) and clinical datasets.", "Discrepancies exist between", "1) the genomic data of pre-clinical and clinical datasets (the input space), and", "2) the different measures of the drug response (the output space).", "To the best of our knowledge, AITL is the first adversarial inductive transfer learning method to address both input and output discrepancies.", "Experimental results indicate that AITL outperforms state-of-the-art pharmacogenomics and transfer learning baselines and may guide precision oncology more accurately.", "Deep neural networks (Goodfellow et al., 2016) have demonstrated the state-of-the-art performance in different problems, ranging from computer vision and natural language processing to genomics (Eraslan et al., 2019) and medicine (Topol, 2019) .", "However, these networks often require a large number of samples for training, which is challenging and sometimes impossible to obtain in the real world applications.", "Transfer learning (Pan & Yang, 2009) attempts to solve this challenge by leveraging the knowledge in a source domain, a large data-rich dataset, to improve the generalization performance on a small target domain.", "Training a model on the source domain and testing it on the target domain violates the i.i.d assumption that the train and test data are from the same distribution.", "The discrepancy in the input space decreases the prediction accuracy on the test data, which leads to poor generalization (Zhang et al., 2019) .", "Many methods have been proposed to minimize the discrepancy between the source and the target domains using different metrics such as Jensen Shannon Divergence (Ganin & Lempitsky, 2014) , Maximum Mean Discrepancy (Gretton et al., 2012) , and Margin Disparity Discrepancy (Zhang et al., 2019) .", "While transductive transfer learning (e.g. domain adaptation) uses a labeled source domain to improve generalization on an unlabeled target domain, inductive transfer learning (e.g. few-shot learning) uses a labeled source domain to improve the generalization on a labeled target domain where label spaces are different in the source and the target domains (Pan & Yang, 2009 ).", "Adversarial domain adaptation has shown great performance in addressing the discrepancy in the input space for different applications (Schoenauer-Sebag et al., 2019; Hosseini-Asl et al., 2018; Pinheiro, 2018; Zou et al., 2018; Tsai et al., 2018; Long et al., 2018; , however, adversarial adaptation to address the discrepancies in both the input and output spaces has not yet been explored.", "Our motivating application is pharmacogenomics (Smirnov et al., 2017) where the goal is to predict response to a cancer drug given the genomic data (e.g. 
gene expression).", "Since clinical datasets in pharmacogenomics (patients) are small and hard to obtain, many studies have focused on large pre-clinical pharmacogenomics datasets such as cancer cell lines as a proxy to patients (Barretina et al., 2012; Iorio et al., 2016) .", "A majority of the current methods are trained on cell line datasets and then tested on other cell line or patient datasets Geeleher et al., 2014) .", "However, cell lines and patients data, even with the same set of genes, do not have identical distributions due to the lack of an immune system and the tumor microenvironment in cell lines (Mourragui et al., 2019) .", "Moreover, in cell lines, the response is often measured by the drug concentration that reduces viability by 50% (IC50), whereas in patients, it is often based on changes in the size of the tumor and measured by metrics such as response evaluation criteria in solid tumors (RECIST) (Schwartz et al., 2016) .", "This means that drug response prediction is a regression problem in cell lines but a classification problem in patients.", "Therefore, discrepancies exist in both the input and output spaces in pharmacogenomics datasets.", "Table A1 provides the definition of these biological terms.", "In this paper, we propose Adversarial Inductive Transfer Learning (AITL), the first adversarial method of inductive transfer learning.", "Different from existing methods for transfer learning, AITL adapts not only the input space but also the output space.", "Our motivating application is transfer learning for pharmacogenomics datasets.", "In our driving application, the source domain is the gene expression data obtained from the cell lines and the target domain is the gene expression data obtained from patients.", "Both domains have the same set of genes (i.e., raw feature representation).", "Discrepancies exist between the gene expression data in the input space, and the measure of the drug response in the output space.", "AITL learns features for the source and target samples and uses these features as input for a multi-task subnetwork to predict drug response for both the source and the target samples.", "The output space discrepancy is addressed by the multi-task subnetwork, which has one shared layer and separate classification and regression towers, and assigns binary labels (called cross-domain labels) to the source samples.", "The multi-task subnetwork also alleviates the problem of small sample size in the target domain by sharing the first layer with the source domain.", "To address the discrepancy in the input space, AITL performs adversarial domain adaptation.", "The goal is that features learned for the source samples should be domain-invariant and similar enough to the features learned for the target samples to fool a global discriminator that receives samples from both domains.", "Moreover, with the cross-domain binary labels available for the source samples, AITL further regularizes the learned features by class-wise discriminators.", "A class-wise discriminator receives source and target samples from the same class label and should not be able to predict the domain accurately.", "We evaluated the performance of AITL and state-of-the-art inductive and adversarial transductive transfer learning baselines on pharmacogenimcs datasets in terms of the Area Under the Receiver Operating Characteristic curve (AUROC) and the Area Under the Precision-Recall curve (AUPR).", "In our experiments, AITL achieved a substantial improvement compared to the baselines, 
demonstrating the potential of transfer learning for drug response prediction, a crucial task of precision oncology.", "To our surprise, ProtoNet and ADDA could not outperform the method of (Geeleher et al., 2014) and MOLI baselines.", "For ProtoNet, this may be due to the depth of the backbone network.", "A recent study has shown that a deeper backbone improves ProtoNet performance drastically in image classification Chen et al. (2019) .", "However, in pharmacogenomics, employing a deep backbone is not realistic because of the much smaller sample size compared to an image classification application.", "Another limitation for ProtoNet is the imbalanced number of training examples in different classes in pharmacogenomics datasets.", "Specifically, the number of examples per class in the training episodes is limited to the number of samples of the minority class as ProtoNet requires the same number of examples from each class.", "For ADDA, this lower performance may be due to the lack of end-to-end training of the classifier along with the global discriminator of this method.", "The reason is that end-to-end training of the classifier along with the discriminators improved the performance of the second adversarial baseline in AUROC and AUPR compared to ADDA.", "Moreover, the method of ) also showed a relatively better performance in AUPR compared to the method of (Geeleher et al., 2014) and MOLI.", "In pharmacogenomics, patient datasets are small or not publicly available due to privacy and/or data sharing issues.", "We believe including more patient samples and more drugs will increase generalization capability.", "In addition, recent studies in pharmacogenomics have shown that using multiple genomic data types (known as multi-omics in genomics) works better than using only gene expression .", "In this work, we did not consider such data due to the lack of patient samples with multi-omics and drug response data publicly available; however, in principle, AITL also works with such data.", "Last but not least, we used pharmacogenomics as our motivating application for this new problem of transfer learning, but we believe that AITL can also be employed in other applications.", "For example, in slow progressing cancers such as prostate cancer, large patient datasets with gene expression and short-term clinical data (source domain) are available, however, patient datasets with long-term clinical data (target domain) are small.", "AITL may be beneficial to learn a model to predict these long-term clinical labels using the source domain and its short-term clinical labels (Sharifi-Noghabi et al., 2019a) .", "Moreover, AITL can also be applied to the diagnosis of rare cancers with a small sample size.", "Gene expression data of prevalent cancers with a large sample size, such as breast cancer, may be beneficial to learn a model to diagnose these rare cancers.", "In this paper, we introduced a new problem in transfer learning motivated by applications in pharmacogenomics.", "Unlike domain adaptation that only requires adaptation in the input space, this new problem requires adaptation in both the input and output spaces.", "To address this problem, we proposed AITL, an Adversarial Inductive Transfer Learning method which, to the best of our knowledge, is the first method that addresses the discrepancies in both the input and output spaces.", "AITL uses a feature extractor to learn features for target and source samples.", "Then, to address the discrepancy in the output space, AITL utilizes these 
features as input of a multi-task subnetwork that makes predictions for the target samples and assign cross-domain labels to the source samples.", "Finally, to address the input space discrepancy, AITL employs global and class-wise discriminators for learning domain-invariant features.", "In our motivating application, pharmacogenomics, AITL adapts the gene expression data obtained from cell lines and patients in the input space, and also adapts different measures of the drug response between cell lines and patients in the output space.", "In addition, AITL can also be applied to other applications such as rare cancer diagnosis or predicting long-term clinical labels for slow progressing cancers.", "We evaluated AITL on four different drugs and compared it against state-of-the-art baselines from three categories in terms of AUROC and AUPR.", "The empirical results indicated that AITL achieved a significantly better performance compared to the baselines showing the benefits of addressing the discrepancies in both the input and output spaces.", "We conclude that AITL may be beneficial in pharmacogenomics, a crucial task in precision oncology.", "For future research directions, we believe that the TCGA dataset consisting of gene expression data of more than 12,000 patients (without drug response outcome) can be incorporated in an unsupervised transfer learning setting to learn better domain-invariant features between cell lines and cancer patients.", "In addition, we did not explore the impact of the chemical structures of the studied drugs in the prediction performance.", "We believe incorporating this input with transfer learning in the genomic level can lead to a better performance.", "Currently, AITL borrows information between the input domains indirectly via its multi-task subnetwork and assignment of cross-domain labels.", "An interesting future direction can be to exchange this information between domains in a more explicit way.", "Moreover, we also did not perform theoretical analysis on this new problem of transfer learning and we leave it for future work.", "Finally, we did not distinguish between different losses in the multi-task subnetwork, however, in reality patients are more important than cell lines, and considering a higher weight for the corresponding loss in the cost function can improve the prediction performance." ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.24390242993831635, 0.375, 0.15789473056793213, 0.2142857164144516, 0, 0.25, 0.20000000298023224, 0.5853658318519592, 0.21052631735801697, 0.1599999964237213, 0.2222222238779068, 0.16326530277729034, 0.1395348757505417, 0.2857142686843872, 0.13793103396892548, 0.24561403691768646, 0.3333333432674408, 0.08695651590824127, 0.1111111044883728, 0.1860465109348297, 0.19230768084526062, 0.1666666567325592, 0.1111111044883728, 0.3125, 0.13793103396892548, 0.3684210479259491, 0.2702702581882477, 0.13793103396892548, 0.10526315122842789, 0.11764705181121826, 0.37837836146354675, 0.24390242993831635, 0.2857142686843872, 0.19999998807907104, 0.375, 0.17391303181648254, 0.052631575614213943, 0.19512194395065308, 0.3333333432674408, 0.2222222238779068, 0.20512819290161133, 0.1875, 0.14999999105930328, 0.1860465109348297, 0.1666666567325592, 0.19512194395065308, 0.19999998807907104, 0.3181818127632141, 0.2857142686843872, 0.05405404791235924, 0.0624999962747097, 0.09090908616781235, 0.20408162474632263, 0.1666666567325592, 0.0833333283662796, 0.13333332538604736, 0.1621621549129486, 0.09090908616781235, 0.17142856121063232, 0.3243243098258972, 0.3921568691730499, 0.12121211737394333, 0.44897958636283875, 0.4324324131011963, 0.2857142686843872, 0.045454539358615875, 0.1463414579629898, 0.3478260934352875, 0.11764705181121826, 0.25806450843811035, 0.1666666567325592, 0.31578946113586426, 0.2631579041481018, 0.10810810327529907, 0.19512194395065308, 0.145454540848732 ]
ryeRn3NtPH
true
[ "A novel method of inductive transfer learning that employs adversarial learning and multi-task learning to address the discrepancy in input and output space" ]
[ "Named entity recognition (NER) and relation extraction (RE) are two important tasks in information extraction and retrieval (IE & IR).", "Recent work has demonstrated that it is beneficial to learn these tasks jointly, which avoids the propagation of error inherent in pipeline-based systems and improves performance.", "However, state-of-the-art joint models typically rely on external natural language processing (NLP) tools, such as dependency parsers, limiting their usefulness to domains (e.g. news) where those tools perform well.", "The few neural, end-to-end models that have been proposed are trained almost completely from scratch.", "In this paper, we propose a neural, end-to-end model for jointly extracting entities and their relations which does not rely on external NLP tools and which integrates a large, pre-trained language model.", "Because the bulk of our model's parameters are pre-trained and we eschew recurrence for self-attention, our model is fast to train.", "On 5 datasets across 3 domains, our model matches or exceeds state-of-the-art performance, sometimes by a large margin.", "The extraction of named entities (named entity recognition, NER) and their semantic relations (relation extraction, RE) are key tasks in information extraction and retrieval (IE & IR) .", "Given a sequence of text (usually a sentence), the objective is to identify both the named entities and the relations between them.", "This information is useful in a variety of NLP tasks such as question answering, knowledge base population, and semantic search (Jiang, 2012) .", "In the biomedical domain, NER and RE facilitate large-scale biomedical data analysis, such as network biology (Zhou et al., 2014) , gene prioritization (Aerts et al., 2006) , drug repositioning (Wang & Zhang, 2013) and the creation of curated databases (Li et al., 2015) .", "In the clinical domain, NER and RE can aid in disease and treatment prediction, readmission prediction, de-identification, and patient cohort identification (Miotto et al., 2017) .", "Most commonly, the tasks of NER and RE are approached as a pipeline, with NER preceding RE.", "There are two main drawbacks to this approach: (1) Pipeline systems are prone to error propagation between the NER and RE systems.", "(2) One task is not able to exploit useful information from the other (e.g. the type of relation identified by the RE system may be useful to the NER system for determining the type of entities involved in the relation, and vice versa).", "More recently, joint models that simultaneously learn to extract entities and relations have been proposed, alleviating the aforementioned issues and achieving state-of-the-art performance (Miwa & Sasaki, 2014; Miwa & Bansal, 2016; Gupta et al., 2016; Li et al., 2016; Adel & Schütze, 2017; Bekoulis et al., 2018a; b; Nguyen & Verspoor, 2019; Li et al., 2019) .", "Many of the proposed joint models for entity and relation extraction rely heavily on external natural language processing (NLP) tools such as dependency parsers.", "For instance, Miwa & Bansal (2016) propose a recurrent neural network (RNN)-based joint model that uses a bidirectional long-short term memory network (BiLSTM) to model the entities and a tree-LSTM to model the relations between entities; Li et al. 
(2017) propose a similar model for biomedical text.", "The tree-LSTM uses dependency tree information extracted using an external dependency parser to model relations between entities.", "The use of these external NLP tools limits the effectiveness of a model to domains (e.g. news) where those NLP tools perform well.", "As a remedy to this problem, Bekoulis et al. (2018a) proposes a neural, end-to-end system that jointly learns to extract entities and relations without relying on external NLP tools.", "In Bekoulis et al. (2018b) , they augment this model with adversarial training.", "Nguyen & Verspoor (2019) propose a different, albeit similar end-to-end neural model which makes use of deep biaffine attention (Dozat & Manning, 2016) .", "Li et al. (2019) approach the problem with multi-turn question answering, posing templated queries to a BERT-based QA model (Devlin et al., 2018) whose answers constitute extracted entities and their relations and achieve state-of-the-art results on three popular benchmark datasets.", "While demonstrating strong performance, end-to-end systems like Bekoulis et al. (2018a; b) and Nguyen & Verspoor (2019) suffer from two main drawbacks.", "The first is that most of the models parameters are trained from scratch.", "For large datasets, this can lead to long training times.", "For small datasets, which are common in the biomedical and clinical domains where it is particularly challenging to acquire labelled data, this can lead to poor performance and/or overfitting.", "The second is that these systems typically contain RNNs, which are sequential in nature and cannot be parallelized within training examples.", "The multi-pass QA model proposed in Li et al. (2019) alleviates these issues by incorporating a pre-trained language model, BERT (Devlin et al., 2018) , which eschews recurrence for self-attention.", "The main limitation of their approach is that it relies on handcrafted question templates to achieve maximum performance.", "This may become a limiting factor where domain expertise is required to craft such questions (e.g., for biomedical or clinical corpora).", "Additionally, one has to create a question template for each entity and relation type of interest.", "In this study, we propose an end-to-end model for joint NER and RE which addresses all of these issues.", "Similar to past work, our model can be viewed as a mixture of a NER module and a RE module (Figure 1 ).", "Unlike most previous works, we include a pre-trained, transformer-based language model, specifically BERT (Devlin et al., 2018) , which achieved state-of-the-art performance across many NLP tasks.", "The weights of the BERT model are fine-tuned during training, and the entire model is trained in an end-to-end fashion.", "Our main contributions are as follows: (1) Our solution is truly end-to-end, relying on no handcrafted features (e.g. templated questions) or external NLP tools (e.g. dependency parsers).", "(2) Our model is fast to train (e.g. under 10 minutes on a single GPU for the CoNLL04 corpus), as most of its parameters are pre-trained and we avoid recurrence.", "(3) We match or exceed state-of-the-art performance for joint NER and RE on 5 datasets across 3 domains.", "Figure 1 illustrates the architecture of our approach.", "Our model is composed of an NER module and an RE module.", "The NER module is identical to the one proposed by Devlin et al. (2018) .", "For a given input sequence s of N word tokens w 1 , w 2 , .", ". 
., w N , the pre-trained BERT BASE model first produces a sequence of vectors, x", "In this paper, we introduced an end-to-end model for entity and relation extraction.", "Our key contributions are: (1) No reliance on any hand-crafted features (e.g. templated questions) or external NLP tools (e.g. dependency parsers).", "(2) Integration of a pre-trained, transformer-based language model.", "(3) State-of-the-art performance on 5 datasets across 3 domains.", "Furthermore, our model is inherently modular.", "One can easily initialize the language model with pre-trained weights better suited for a domain of interest (e.g. BioBERT for biomedical corpora) or swap BERT for a comparable language model (e.g. XLNet (Yang et al., 2019) ).", "Finally, because of (2), our model is fast to train, converging in approximately 1 hour or less on a single GPU for all datasets used in this study.", "Our model out-performed previous state-of-the-art performance on ADE by the largest margin (6.53%).", "While exciting, we believe this corpus was particularly easy to learn.", "The majority of sentences (∼68%) are annotated for two entities (drug and adverse effect, and one relation (adverse drug event).", "Ostensibly, a model should be able to exploit this pattern to get near-perfect performance on the majority of sentences in the corpus.", "As a test, we ran our model again, this time using ground-truth entities in the RE module (as opposed to predicted entities) and found that the model very quickly reached almost perfect performance for RE on the test set (∼98%).", "As such, high performance on the ADE corpus is not likely to transfer to real-world scenarios involving the large-scale annotation of diverse biomedical articles.", "In our experiments, we consider only intra-sentence relations.", "However, the multiple entities within a document generally exhibit complex, inter-sentence relations.", "Our model is not currently capable of extracting such inter-sentence relations and therefore our restriction to intra-sentence relations will limit its usefulness for certain downstream tasks, such as knowledge base creation.", "We also ignore the problem of nested entities, which are common in biomedical corpora.", "In the future, we would like to extend our model to handle both nested entities and inter-sentence relations.", "Finally, given that multilingual, pre-trained weights for BERT exist, we would also expect our model's performance to hold across multiple languages.", "We leave this question to future work." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2857142686843872, 0.1860465109348297, 0.04255318641662598, 0.1249999925494194, 0.13333332538604736, 0.3243243098258972, 0, 0.1904761791229248, 0.2222222238779068, 0.10256409645080566, 0.037735845893621445, 0.04999999329447746, 0.0624999962747097, 0.1111111044883728, 0.19607841968536377, 0.09836065024137497, 0.24390242993831635, 0.14814814925193787, 0.060606054961681366, 0.052631575614213943, 0.1818181723356247, 0, 0.05128204822540283, 0.072727270424366, 0.10256409645080566, 0.13333332538604736, 0.07407406717538834, 0.13333332538604736, 0.15789473056793213, 0.04444443807005882, 0.17142856121063232, 0.14999999105930328, 0.3030303120613098, 0.1666666567325592, 0.10810810327529907, 0, 0.17142856121063232, 0.04651162400841713, 0.25, 0.11428570747375488, 0.07999999821186066, 0.14814814925193787, 0.12903225421905518, 0, 0, 0.4000000059604645, 0, 0, 0, 0.08695651590824127, 0.03999999538064003, 0.1818181723356247, 0, 0.0714285671710968, 0.1666666567325592, 0.05405404791235924, 0.15094339847564697, 0.10256409645080566, 0, 0, 0.17391303181648254, 0, 0.11764705181121826, 0.15789473056793213, 0.0833333283662796 ]
rkgqm0VKwB
true
[ "A novel, high-performing architecture for end-to-end named entity recognition and relation extraction that is fast to train." ]
[ "In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks.\n", "Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, also reducing the amount of parameters by 80\\%.\n", "Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs.", "We incorporate local gradient information into the approximate posterior to sharpen it around the current batch statistics.", "We show how this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks.\n", "We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improve our model over a variety of other schemes for training them.", "We also introduce a new benchmark for studying uncertainty for language models so future methods can be easily compared.", "Recurrent Neural Networks (RNNs) achieve state-of-the-art performance on a wide range of sequence prediction tasks BID0 BID22 BID50 BID32 .", "In this work we examine how to add uncertainty and regularisation to RNNs by means of applying Bayesian methods to training.", "This approach allows the network to express uncertainty via its parameters.", "At the same time, by using a prior to integrate out the parameters to average across many models during training, it gives a regularisation effect to the network.", "Recent approaches either justify dropout BID43 and weight decay as a variational inference scheme BID12 , or apply Stochastic Gradient Langevin dynamics (Welling & Teh, 2011, SGLD) to truncated backpropagation in time directly BID13 .", "Interestingly, recent work has not explored further directly applying a variational Bayes inference scheme BID3 for RNNs as was done in BID14 .", "We derive a straightforward approach based upon Bayes by Backprop that we show works well on large scale problems.", "Our strategy is a simple alteration to truncated backpropagation through time that results in an estimate of the posterior distribution on the weights of the RNN.", "This formulation explicitly leads to a cost function with an information theoretic justification by means of a bits-back argument BID18 where a KL divergence acts as a regulariser.The form of the posterior in variational inference shapes the quality of the uncertainty estimates and hence the overall performance of the model.", "We shall show how performance of the RNN can be improved by means of adapting (\"sharpening\") the posterior locally to a batch.", "This sharpening adapts the variational posterior to a batch of data using gradients based upon the batch.", "This can be viewed as a hierarchical distribution, where a local batch gradient is used to adapt a global posterior, forming a local approximation for each batch.", "This gives a more flexible form to the typical assumption of Gaussian posterior when variational inference is applied to neural networks, which reduces variance.", "This technique can be applied more widely across other Bayesian models.The contributions of our work are as follows:• We show how Bayes by Backprop (BBB) can be efficiently applied to RNNs.•", "We develop a novel technique which reduces the variance of BBB, and which can be widely adopted in 
other maximum likelihood frameworks.•", "We improve performance on two widely studied benchmarks, outperforming established regularisation techniques such as dropout by a wide margin.•", "We introduce a new benchmark for studying uncertainty of language models.", "We have shown how to apply the Bayes by Backprop (BBB) technique to RNNs.", "We enhanced it further by introducing the idea of posterior sharpening: a hierarchical posterior on the weights of neural networks that allows a network to adapt locally to batches of data by a gradient of the model. We showed improvements over two open source, widely available models in the language modelling and image captioning domains.", "We demonstrated that not only do BBB RNNs often have superior performance to their corresponding baseline model, but that they are also better regularised and have superior uncertainty properties in terms of uncertainty on out-of-distribution data.", "Furthermore, BBB RNNs, through their uncertainty estimates, show signs of knowing what they know, and when they do not, a critical property for many real-world applications such as self-driving cars, healthcare, game playing, and robotics.", "Everything from our work can be applied on top of other enhancements to RNN/LSTM models (and other non-recurrent architectures), and the empirical evidence combined with improvements such as posterior sharpening makes variational Bayes methods look very promising.", "We are exploring further research directions and wider adoption of the techniques presented in our work." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.5454545617103577, 0, 0, 0, 0, 0.04444444179534912, 0.07999999821186066, 0.23076923191547394, 0, 0, 0, 0.04878048598766327, 0.20689654350280762, 0.07692307233810425, 0, 0, 0, 0, 0.06896551698446274, 0, 0.054054051637649536, 0, 0, 0.1111111044883728, 0.09999999403953552, 0, 0, 0.04878048598766327, 0.04651162400841713, 0 ]
Hkp3uhxCW
true
[ " Variational Bayes scheme for Recurrent Neural Networks" ]
[ "Over the passage of time Unmanned Autonomous Vehicles (UAVs), especially\n", "Autonomous flying drones grabbed a lot of attention in Artificial Intelligence.\n", "Since electronic technology is getting smaller, cheaper and more efficient, huge\n", "advancement in the study of UAVs has been observed recently.", "From monitoring\n", "floods, discerning the spread of algae in water bodies to detecting forest trail, their\n", "application is far and wide.", "Our work is mainly focused on autonomous flying\n", "drones where we establish a case study towards efficiency, robustness and accuracy\n", "of UAVs where we showed our results well supported through experiments.\n", "We provide details of the software and hardware architecture used in the study.", "We\n", "further discuss about our implementation algorithms and present experiments that\n", "provide a comparison between three different state-of-the-art algorithms namely\n", "TrailNet, InceptionResnet and MobileNet in terms of accuracy, robustness, power\n", "consumption and inference time.", "In our study, we have shown that MobileNet has\n", "produced better results with very less computational requirement and power consumption.\n", "We have also reported the challenges we have faced during our work\n", "as well as a brief discussion on our future work to improve safety features and\n", "performance.", "In modern era, UAVs have become very popular and have basic intelligence of being driven autonomously.", "Talking about ground traffic, these vehicles have limitations of physical paths and barriers.", "However, such is not the case with flying objects like drones as they do not suffer from such physical limitations.", "Autonomous Flying objects are in much of discussion these days and are striding all across the realm -traffic monitoring BID9 , agriculture BID11 , inventory management BID4 , surveillance BID15 , data mining, disaster response BID10 , etc.", "As their areas of application increase, it becomes more important to find algorithms well suited for these kind of vehicles.", "Some applications may not require the drone to be extremely accurate, but may require it to work for longer durations e.g. in surveillance applications while others may require it to be very precise but may not require it to work for long duration e.g. 
in delivery of items.In the last decade, significant changes have been observed in the field of autonomous motion planning of vehicles, UAVs in particular.", "The motion planning of UAVs are distinctly difficult because of several complexities that comes with aerial vehicles.", "The salience of differential constraints, uncertainty in the vehicle state and limited knowledge about the environment makes it impossible to have a precise pre-computed plan to follow through.", "These differences considerably gave rise to various approaches and techniques for planning the motion of these unmanned autonomous vehicles.", "The ambiguity of different algorithms and its inherent adversity pose an intriguing scope for a bench-marking study to enhance accuracy, robustness, power consumption, safety and inference time and fine tuning it further.Throughout this paper, we bear in mind some of the generic characteristics and prerequisites relating to UAVs.", "The basic design of a UAV is modelled to have acceleration and velocity constraints.", "Furthermore, the higher-order differential constraints also associate themselves with the equation of motion of a drone.", "However, the coherent objective involved in all UAVs is to guide the vehicle towards a goal.", "In this paper, we introduce to the best of our knowledge, a very first comparative study of three algorithms in order to find a better motion control of a drone for detecting a trail.In order to be able to compare a set of algorithms in a meticulous way, it is necessary to establish their precision and robustness and to evaluate its power consumption as well as inference time.", "Along with these metrics established for a particular algorithm, it is also necessary to consider the distinct areas of its application.", "Only then, based on the requirements called for by a particular application, a reasonable opinion about an algorithm is to be formed.", "Our study covers recent developments and algorithms used in the area of trail detection by UAVs and runs down as comprehensively as possible what has been already upheld regarding these algorithms.", "While producing this work, we have encountered several challenges.", "Few of these challenges are listed below:1.", "The major challenge encountered was to run our DNN models on the physical drone in real time due to a hardware bug we were facing with the FCU.2.", "We could not train our models from scratch due to lack of significant amount of dataset.", "Additionally, we handled a lot of issues to make our models more stable and robust.", "Since in each trial, the number of images in each class (left, straight and right) were different, there was a lot of data imbalance which we solved by upsampling and downsampling the dataset.3.", "Due to the large number of training parameters at the beginning, our models were overfitted.", "We eliminated over-fitting by introducing several data augmentation techniques (random flipping, random rotation, random contrast and transition etc. 
).", "We further included regularization (especially dropout layers) in order to reduce network complexity.4.", "Power is one of the important factors especially in mobile embedded devices with small size and computational power.", "Typically, deep learning algorithms consume more power specifically for the real time inference.", "We have made an estimate of the power consumption of each of our model by calculating the GPU power drawn by them but we could not test how long our drone would run implementing each of these models due to the hardware bug mentioned before.", "In this paper, we have presented a comparison between 3 algorithms -TrailNet, InceptionResnet and MobileNet in terms of accuracy, computational cost, power consumption, inference time and robustness.", "The choice of algorithm for UAVs varies on the basis of several factors.", "In our work, we have worked with some of the factors which we thought would be pivotal in algorithm selection considering reasonable comparisons.", "We observed in our study that MobileNet outperformed others with very less computational requirement and power consumption.", "Hence in our opinion, MobileNet is more befitting for drones and other embedded devices compared to TrailNet and InceptionResnet.Safety is another major concern in terms of drones.", "There can be many situations such as collision with the objects, external disturbances like winds, chances of drone moving out of manual controller zone, battery issues, chances of getting stolen and other safety hazards.", "We will be implementing these drone related safety features in our future work." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0, 0.21052631735801697, 0, 0, 0.11764705181121826, 0.1904761791229248, 0.0952380895614624, 0.0952380895614624, 0, 0, 0, 0, 0, 0, 0, 0.08695651590824127, 0.0833333283662796, 0, 0.07407406717538834, 0, 0.0714285671710968, 0.07407407462596893, 0.07999999821186066, 0, 0.0714285671710968, 0.11320754140615463, 0, 0, 0.0833333283662796, 0.06779661029577255, 0.06666666269302368, 0.13333332538604736, 0.10810810327529907, 0, 0, 0.0555555522441864, 0, 0, 0, 0, 0, 0, 0, 0.27272728085517883, 0.04444444179534912, 0, 0.2857142686843872, 0, 0.07692307233810425, 0.060606054961681366, 0, 0 ]
Syx9rnRcYm
true
[ "case study on optimal deep learning model for UAVs" ]
[ "Music relies heavily on repetition to build structure and meaning. ", "Self-reference occurs on multiple timescales, from motifs to phrases to reusing of entire sections of music, such as in pieces with ABA structure. ", "The Transformer (Vaswani et al., 2017), a sequence model based on self-attention, has achieved compelling results in many generation tasks that require maintaining long-range coherence.", "This suggests that self-attention might also be well-suited to modeling music.", "In musical composition and performance, however, relative timing is critically important. ", "Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance (Shaw et al., 2018). ", "This is impractical for long sequences such as musical compositions since their memory complexity is quadratic in the sequence length. ", "We propose an algorithm that reduces the intermediate memory requirements to linear in the sequence length.", "This enables us to demonstrate that a Transformer with our modified relative attention mechanism can generate minute-long (thousands of steps) compositions with compelling structure, generate continuations that coherently elaborate on a given motif, and in a seq2seq setup generate accompaniments conditioned on melodies. ", "We evaluate the Transformer with our relative attention mechanism on two datasets, JSB Chorales and Piano-e-competition, and obtain state-of-the-art results on the latter.", "A musical piece often consists of recurring elements at various levels, from motifs to phrases to sections such as verse-chorus.", "To generate a coherent piece, a model needs to reference elements that came before, sometimes in the distant past, and then repeat, vary, and further develop them to create contrast and surprise.", "Intuitively, self-attention (Parikh et al., 2016) could be a good match for this task.", "Self-attention over its own previous outputs allows an autoregressive model to access any part of the previously generated output at every step of generation.", "By contrast, recurrent neural networks have to learn to proactively store elements to be referenced in a fixed size state or memory, making training potentially much more difficult.", "We believe that repeating self-attention in multiple, successive layers of a Transformer decoder BID17 can help capture the multiple levels at which self-referential phenomena exist in music.In its original formulation, the Transformer relies on absolute position representations, using either positional sinusoids or learned position embeddings that are added to the per-position input representations.", "Recurrent and convolutional neural networks instead model position in relative terms: RNNs through their recurrence over the positions in their input, and CNNs by applying kernels that effectively choose which parameters to apply based on the relative position of the covered input representations.Music has multiple dimensions along which relative differences arguably matter more than their absolute values; the two most prominent are timing and pitch.", "To capture such pairwise relations between representations, BID13 introduce a relation-aware version of self-attention which they use successfully to modulate self-attention by the distance between two positions.", "We extend this approach to capture relative timing and optionally also pitch, which yields improvement in both sample quality and perplexity for the JSB Chorales dataset.", "As opposed to the original 
Transformer, samples from a Transformer with our relative attention mechanism maintain the regular timing grid present in this dataset.", "The model furthermore captures global timing, giving rise to regular phrases.The original formulation of relative attention BID13 requires O(L 2 D) memory where L is the sequence length and D is the dimension of the model's hidden state.", "This is prohibitive for long sequences such as those found in the Maestro dataset of human-performed virtuosic, classical piano music BID7 .", "In Section 3.4, we show how to reduce the memory requirements to O(LD), making it practical to apply relative attention to long sequences.The Maestro dataset consists of MIDI recorded from performances of competition participants, bearing expressive dynamics and timing on a less than 10-millisecond granularity.", "Discretizing time in a fixed grid on such a resolution would yield unnecessarily long sequences as not all events change on the same timescale.", "We hence adopt a sparse, MIDI-like, event-based representation from (Oore et al., 2018) , allowing a minute of music with a 10-millisecond resolution to be represented at lengths around 2K.", "This is in contrast to a 6K to 18K length that would be needed on a serialized multi-attribute fixed-grid representation.", "As position in sequence no longer corresponds to time, a priori it is not obvious that relative attention should work as well with such a representation.", "However, we will show in Section 4.2 that it does improve perplexity and sample quality over strong baselines.We speculate that idiomatic piano gestures such as scales, arpeggios and other motifs all exhibit a certain grammar and recur periodically, hence knowing their relative positional distances makes it easier to model this regularity.", "This inductive bias towards learning relational information, as opposed to patterns based on absolute position, suggests that the Transformer with relative attention could generalize beyond the lengths it was trained on, which our experiments in Section 4.2.1 confirm.", "In this work we demonstrated that the Transformer equipped with relative attention is very well-suited for generative modeling of symbolic music.", "The compelling long-term structure in the samples from our model leaves us enthusiastic about this direction of research.", "Moreover, the ability to expand upon a prime, in particular, suggests potential applications as creative tool.The significant improvement from relative attention highlights a shortcoming of the original Transformer that might also limit its performance in other domains.", "Improving the Transformer's ability to capture periodicity at various time scales, for instance, or relations between scalar features akin to pitch could improve time-series models.", "Our memory-efficient implementation enables the application of relative attention to much longer sequences such as long texts or even audio waveforms, which significantly broadens the range of problems to which it could be applied." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.14814814925193787, 0.21052631735801697, 0.1428571343421936, 0.14814814925193787, 0.0714285671710968, 0.21052631735801697, 0.1666666567325592, 0.25806450843811035, 0.18867924809455872, 0.1666666567325592, 0.05714285373687744, 0.13636362552642822, 0, 0.10256409645080566, 0.0476190410554409, 0.21875, 0.11428570747375488, 0.1463414579629898, 0.1463414579629898, 0.1538461446762085, 0.07999999821186066, 0.21621620655059814, 0.10169491171836853, 0.10526315122842789, 0.13636362552642822, 0.11764705181121826, 0.09756097197532654, 0.1230769157409668, 0.145454540848732, 0.2702702581882477, 0.29411762952804565, 0.19607841968536377, 0.04999999701976776, 0.08695651590824127 ]
rJe4ShAcF7
true
[ "We show the first successful use of Transformer in generating music that exhibits long-term structure. " ]
[ "Sequential decision problems for real-world applications often need to be solved in real-time, requiring algorithms to perform well with a restricted computational budget.", "Width-based lookaheads have shown state-of-the-art performance in classical planning problems as well as over the Atari games with tight budgets.", "In this work we investigate width-based lookaheads over Stochastic Shortest paths (SSP).", "We analyse why width-based algorithms perform poorly over SSP problems, and overcome these pitfalls proposing a method to estimate costs-to-go.", "We formalize width-based lookaheads as an instance of the rollout algorithm, give a definition of width for SSP problems and explain its sample complexity.", "Our experimental results over a variety of SSP benchmarks show the algorithm to outperform other state-of-the-art rollout algorithms such as UCT and RTDP.", "Model-based lookahead algorithms provide the ability to autonomously solve a large variety of sequential decision making problems.", "Lookaheads search for solutions by considering sequences of actions that can be made from the current state up to a certain time into the future.", "For realworld applications decisions often need to be computed in real-time, requiring algorithms to perform with a restricted computational budget.", "Limiting search in this way can result in considering states and trajectories which do not provide useful information.", "To address this, lookaheads can be augmented with heuristics that estimate costs-to-go to prioritise states and trajectories, and have been shown to perform well where computation budgets are restricted BID8 .This", "paper is concerned with Stochastic Shortest Path (SSP) problems which are often used to compare and evaluate search algorithms. We consider", "the width-based family of planning algorithms, first introduced by BID15 , which aim to prioritise the exploration of novel areas of the state space. Two width-based", "planners, Lipovetzky and Geffner's breadth-first search, IW(1), and the depth-first search, Rollout-IW(1) BID1 , are investigated on SSP problems. We first provide", "the necessary background for SSP problems and width-based algorithms, while also formalising width-based algorithms as instances of the rollout algorithm BID4 . We then show the", "motive to augment width-based lookaheads with cost estimates on SSP problems, define the width of SSP problems and propose a novel width-based algorithm that estimates costs-to-go by simulating a general base policy. 
Our experimental", "study shows that the algorithm compares favourably to the original Rollout-IW(1) algorithm and to other state-of-the-art instances of the rollout algorithm.", "MCTS approaches typically combine lookaheads and cost-to-go approximations, along with statistical tests to determine what are the most promising directions and focus their sampling effort.", "The width-based methods described in this paper do so too, but in ways which are, at first sight, orthogonal to existing strategies.", "It remains an area of active research to map out exactly how the width-based methods described in this paper, and those elsewhere by BID11 too, provide alternatives to the limitations of existing MCTS approaches.", "Having said this, there is no general theory guiding the design of MCTS algorithms BID4 , and to avoid generating ad-hoc, problem dependent solutions involuntarily it is important to follow strict protocols that alert of potential lack of statistical significance in results, while relying on a diverse set of benchmarks that can be both easily understood, and highlight limitations of existing state-of-the-art methods and overcome them." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.09756097197532654, 0, 0.06451612710952759, 0.20512819290161133, 0.1904761791229248, 0.1904761791229248, 0.1666666567325592, 0.1860465109348297, 0.10526315122842789, 0.0555555522441864, 0.0833333283662796, 0.14999999105930328, 0.09999999403953552, 0.10256409645080566, 0.19512194395065308, 0.2857142686843872, 0.22857142984867096, 0.04651162400841713, 0.09999999403953552, 0.07999999821186066, 0.10526315122842789 ]
HJgdVWPTv4
true
[ "We propose a new Monte Carlo Tree Search / rollout algorithm that relies on width-based search to construct a lookahead." ]
[ "Deep Neural Networks (DNNs) are known for excellent performance in supervised tasks such as classification.", "Convolutional Neural Networks (CNNs), in particular, can learn effective features and build high-level representations that can be used for\n", "classification, but also for querying and nearest neighbor search.", "However, CNNs have also been shown to suffer from a performance drop when the distribution of the data changes from training to test data.", "In this paper we analyze the internal\n", "representations of CNNs and observe that the representations of unseen data in each class, spread more (with higher variance) in the embedding space of the CNN compared to representations of the training data.", "More importantly, this difference is more extreme if the unseen data comes from a shifted distribution.", "Based on this observation, we objectively evaluate the degree of representation’s variance in each class by applying eigenvalue decomposition on the within-class covariance of the internal representations of CNNs and observe the same behaviour.", "This can be problematic as larger variances might lead to mis-classification if the sample crosses the decision boundary of its class.", "We apply nearest neighbor classification on the representations and empirically show that the embeddings with the high variance actually have significantly worse KNN classification performances, although this could not be foreseen from their end-to-end classification results.", "To tackle this problem, we propose Deep Within-Class Covariance Analysis (DWCCA), a deep neural network layer that significantly reduces the within-class covariance of a DNN’s representation, improving performance on unseen test data from a shifted distribution.", "We empirically evaluate DWCCA on two datasets for Acoustic Scene Classification (DCASE2016 and DCASE2017).", "We demonstrate that not only does DWCCA significantly improve the network’s internal representation, it\n", "also increases the end-to-end classification accuracy, especially when the test set exhibits a slight distribution shift.", "By adding DWCCA to a VGG neural network, we achieve around 6 percentage points improvement in the case of a distribution\n", "mismatch.", "Convolutional Neural Networks (CNNs) are the state of the art in many supervised learning tasks such as classification, and using the power of convolutional layers, CNNs can learn useful features that are often superior to engineered features, and build internal representations that can achieve high classification performance.It has been shown that CNNs have a surprising ability to fit data, so much so that they can even perfectly learn from data with random labels BID32 .", "But of course, memorising the training data is not sufficient: a model is expected to generalize to unseen data points.", "Additionally, a robust model has to be able to not only deal with unseen data points that are similar to the training set, but also cope with unseen data points that may come from a slightly different distribution than the training data (distribution mismatch).", "When there is a distribution shift between the training and test sets, robustness of the model's representation becomes more important as it has to classify or embed data points that are quite different from the ones it has observed in the training set.In this paper, we investigate this by using a well-known DNN architecture (VGG BID28 ) that is adapted for audio classification BID9 and is widely used among researchers.", "We 
evaluate VGG on data with as well as without distribution mismatch and observe that while VGG exhibits a reasonable performance on the data without distribution mismatch, its performance significantly drops when tested on data from a shifted distribution.We start by analyzing the internal representations of the network by using visualisations.", "As will be seen in the first (a-c) and the 3rd rows (g-i) of FIG2 , the network's internal representations in each class spread more in the embedding space for the unseen data (validation or test) compared to the training data.", "This is even more extreme when the unseen data comes from a shifted distribution (i).For", "an objective evaluation of the amount of the representation's variance in each class, we compute the within-class covariance of the representations of the network for each class, and we apply eigenvalue decomposition to compute the eigenvalues of each class's covariance matrix. We", "then report the sorted eigenvalues of the within-class covariance of the representations in Figure 3 . As", "the blue curves show, the eigenvalues in unseen data of validation (b and e) and test (c and d)", "have considerably higher ranges than train data (a and d)", "for all the datasets we used.To better understand the effect of such high variance in the quality of generalisation in the representations of our network, we carry out K-nearest neighbor (KNN) experiments on the dataset without, and the dataset with distribution shift. As", "the results in Figure 4 show, the performance degredation from validation (c", ") compared to test (", "d) in case of distribution mismatch is significantly higher compared to the performance drop from validation", "(a) to test", "(b) when the test data comes from a similar distribution.", "This observation is also aligned with what we observed in the visualisations from FIG2 that showed the data is more spread than validation data, when coming from a shifted distribution.To tackle this problem, we propose Deep Within-Class Covariance Analysis (DWCCA), a deep neural network layer that reformulates the conventional Within-Class Covariance Normalization (WCCN) BID12 as a DNN-compatible version.", "DWCCA is trained end-to-end using back-propagation, can be placed in any arbitrary position in a DNN, and is capable of significantly reducing the within-class covariance of the internal representation in a DNN.We empirically show that DWCCA significantly reduces the within-class covariance of the DNN's representations, in both cases.", "Further, we evaluate the generalization quality of the DNN's representations after applying DWCCA by performing nearest neighbor classification on its representations.", "Our results show that DWCCA significantly improves the nearest neighbor classification results in both cases, hence improving the generalization quality of the representations.", "And finally we report the end-to-end classification results of the trained models on an acoustic scene classification task, using data from the annual IEEE Challenges on Detection and Classification of Acoustic Scenes and Events (DCASE).", "It turns out that the classification results for the dataset with distribution shift are significantly improved by integrating the DWCCA layer, while the performance on the dataset without distribution mismatch stayed the same.", "In FIG2 , the network's internal representations in each class are projected into 2D via PCA and each class is represented by a different colour.", "Looking at first (a-c) and second 
(d-f) row, it can be seen that for the dataset without mismatched distribution the embeddings of unseen data (validation and test) are spread less after applying DWCCA.", "Also comparing the unseen embeddings to the training embeddings (with lower opacity and in grey) it can be seen that the unseen embeddings projected closer to the training embeddings after applying DWCCA.", "Comparing third (g-i) and fourth (j-l) row, it can be seen that for the case of a distribution shift DWCCA also reduces the variance of the embeddings in each class, resulting in them being embedded closer to the training embeddings (grey).", "This suggests that this property can improve the generalisation of the representations.", "We will empirically evaluate this hypothesis later in this section by applying KNN classification on the representations.", "Looking at Figure 3 , we can see that in all plots from dataset with, and dataset without distribution shift, DWCCA significantly reduces the within-class variability.", "This can be observed by looking at the eigenvalues of the covariances of the representations.", "An interesting observation is the range of eigenvalues in vanilla: In both datasets, eigenvalues have significantly larger range on unseen data (validation and test) compared to the training data.", "The maximum eigenvalue in DCASE2016 is around 0.7, while the maximum eigenvalue for unseen is around 7, about 10 times more.", "Also the maximum eigenvalue of the train set of DCASE2017 is around 2, while the max.", "eigenvalue on unseen data is around 20 (10 times larger).By", "looking at the KNN results in Fig. 4 it can be seen that in both cases (mismatch / no mismatch), the KNN classification accuracy increases by adding DWCCA. Also", ", while the KNN performance is in a reasonable range on the validation set of both datasets, the test accuraty in the mismatch case (DCASE2017) drops significantly compared to the validation set. Additionally", "it can be seen that applying DWCCA significantly improves the performance on the test set with shifted distribution, adding an improvement of about 6 percentage point, while the improvement on the test set without mismatch is around 2 percentage points. Looking at the", "results of end-to-end classifications in TAB2 , we see that the performance of vanilla on DCASE 2017 consistently and significantly improves when adding DWCCA, on all development folds as well as on the unseen test data. We observe around", "6 percentage points improvement by adding DWCCA to VGG.Looking at the results of the dataset without mismatch, we see that although the results on all folds were improved by adding DWCCA, the results on the unseen test set do not significantly change. This can be explained", "better by looking at FIG2 : the embeddings of validation (b) and test (c) indicate", "that the test", "data is projected closer to the training set than the validation set. This observation suggests", "that the unseen test in DCASE2016 might be similar (even more similar than the validation data) to the training set. This can also be confirmed", "by looking at the results of the best CNN baseline, as well as vanilla: the performances on the unseen test set are consistently higher than all the validation folds. 
Hence, DWCCA could not help", "as there was not a large generalisation gap between training and test.It is worth mentioning that both vanilla and DWCCA are single models, trained on mono single channel spectrograms and no ensemble or multi-channel features were used in these experiments. In other words, a single VGG", "model achieves comparable performances to an ensemble of multi-channel Resnets. We also provide class-wise f-measures", "on the unseen test set for both datasets in TAB3 . While on the dataset without distribution", "shift, the average f1 stays the same by adding DWCCA in both calibrated and non calibrated models, we can observe that there is a boost of 13 percentage points on the \"train\" class which was the class with the lowest f1 (both calibrated and non calibrated). It seems that DWCCA does not have a significant", "impact on classes with high f1: \"office\" and \"beach\" which stay the highest correctly predicted classes and do not face significant changes by DWCCA.On the dataset with distribution shift, we can see a significant improvement of 4 and 7 percentage points on average f1 for non-calibrated and calibrated models, respectively. The worst class in DCASE2017 was \"beach\" with 32", "%, which was boosted by 24 and 37 percentage points for noncalibrated and calibrated models, respectively. On the other hand, the best performing class, \"", "forest path\", drops by only 2 and 3 percentage points for non-calibrated and calibrated models, respectively.From the experimental results, we may thus conclude that overall, reducing the within-class covariance of representations using DWCCA results in more robust performance and, in case of a large gap between training and test, DWCCA can improve the generalisation. Additionally, the networks tend to reach a more", "uniform performance across various classes by improving the performance on the worst classes while not significantly degrading the best performing classes.", "In this paper, we presented the DWCCA layer, a DNN compatible version of the classic WCCN which is used to normalize the within-class covariance of the DNN's representation and improve the performance of CNNs on data-points with shifted distributions.", "Using DWCCA, we improved the performance of the VGG network by around 6 percentage point when the test datapoints were from a shifted distribution.", "We analysed the embedding's generalisation by reporting KNN classification accuracies and showed that DWCCA also improves the generalisation of DNN representations both for with and without distribution mismatch.", "We also showed that large within-class covariance of representations can be a sign for bad generalisation and showed that DWCCA can significantly reduce WCC and improve generalisation of the representations." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.09999999403953552, 0.1860465109348297, 0.05882352590560913, 0.13333332538604736, 0.125, 0.2083333283662796, 0.09756097197532654, 0.2641509473323822, 0.08888888359069824, 0.21052631735801697, 0.4406779706478119, 0.10256409645080566, 0.25641024112701416, 0.09999999403953552, 0.2222222238779068, 0.1627907007932663, 0.1428571343421936, 0.10344827175140381, 0.1666666567325592, 0.29032257199287415, 0.20689654350280762, 0.09756097197532654, 0.38461539149284363, 0.31578946113586426, 0.1463414579629898, 0, 0.20338982343673706, 0.1666666567325592, 0, 0.19512194395065308, 0, 0.11428570747375488, 0.2432432323694229, 0.36666667461395264, 0.13636362552642822, 0.35555556416511536, 0.14814814925193787, 0.19607841968536377, 0.2083333283662796, 0.1428571343421936, 0.1249999925494194, 0.19999998807907104, 0.277777761220932, 0.19512194395065308, 0.19999998807907104, 0.1621621549129486, 0.1599999964237213, 0.1428571343421936, 0.10526315122842789, 0, 0.15686273574829102, 0.19607841968536377, 0.16949151456356049, 0.23728813230991364, 0.158730149269104, 0.09999999403953552, 0.1428571343421936, 0.052631575614213943, 0.1304347813129425, 0.1111111044883728, 0.11940298229455948, 0.14999999105930328, 0.1538461446762085, 0.1492537260055542, 0.13333332538604736, 0.0833333283662796, 0.2820512652397156, 0.1463414579629898, 0.20689654350280762, 0.1702127605676651, 0.2800000011920929, 0.4583333134651184 ]
S1giWPGsjQ
true
[ "We propose a novel deep neural network layer for normalising within-class covariance of an internal representation in a neural network that results in significantly improving the generalisation of the learned representations." ]
[ "Generative models have proven to be an outstanding tool for representing high-dimensional probability distributions and generating realistic looking images.", "A fundamental characteristic of generative models is their ability to produce multi-modal outputs.", "However, while training, they are often susceptible to mode collapse, which means that the model is limited in mapping the input noise to only a few modes of the true data distribution.", "In this paper, we draw inspiration from Determinantal Point Process (DPP) to devise a generative model that alleviates mode collapse while producing higher quality samples.", "DPP is an elegant probabilistic measure used to model negative correlations within a subset and hence quantify its diversity.", "We use DPP kernel to model the diversity in real data as well as in synthetic data.", "Then, we devise a generation penalty term that encourages the generator to synthesize data with a similar diversity to real data.", "In contrast to previous state-of-the-art generative models that tend to use additional trainable parameters or complex training paradigms, our method does not change the original training scheme.", "Embedded in an adversarial training and variational autoencoder, our Generative DPP approach shows a consistent resistance to mode-collapse on a wide-variety of synthetic data and natural image datasets including MNIST, CIFAR10, and CelebA, while outperforming state-of-the-art methods for data-efficiency, convergence-time, and generation quality.", "Our code will be made publicly available.", "Deep generative models have gained enormous research interest in recent years as a powerful framework to learn high dimensional data in an unsupervised fashion.", "Generative Adversarial Networks (GANs) BID10 and Variational AutoEncoders (VAEs) are among the most dominant generative approaches.", "They consist of training two networks: a generator (decoder) and a discriminator (encoder), where the generator attempts to map random noise to fake data points that simulate the probability distribution of real data.", ". GANs are typically associated with higher quality images compared to VAEs. Nevertheless, in the process of learning multi-modal complex distributions, both models may converge to a trivial solution where the generator learns to produce few modes exclusively, as referred to by mode collapse problem.To address this, we propose utilizing Determinantal Point Processes (DPP) to model the diversity within data samples. DPP is a probabilistic model that has been mainly adopted for solving subset selection problems with diversity constraints BID21 , such as video and document summarization. However, Sampling from a DPP requires quantifying the diversity of 2 N subsets, where N is the size of the ground set. This renders DPP sampling from true data to be computationally inefficient in the generation domain. The key idea of our work is to model the diversity within real and fake data throughout the training process, which does adds an insignificant computational cost. Then, We encourage producing samples of similar diversity distribution to the true-data by back-propagating the DPP metric through the generator. This way, generator explicitly learns to cover more modes of real distribution, and accordingly alleviates mode collapse.Recent approaches tackled mode-collapse in one of two different ways: (1) improving the learning of the system to reach a better convergence point(e.g. 
BID28 ; BID0 ); or (2) explicitly enforcing the models to capture diverse modes or map back to the true-data distribution (e.g. BID37 ; BID2 ). Here we focus on a relaxed version of the former, where we use the same learning paradigm of the standard GANs and only change the objective function. The advantage of such an approach is to avoid adding any extra trainable parameters to the trained system while maintaining the same back-propagation steps as the standard GANs. Thus, our model converges faster to a fair equilibrium point where the generator captures the diversity of the true-data distribution while preserving the quality of generations.Contribution. We introduce a new loss function, that we denote Generative Determinantal Point Processes (GDPP) loss. Our loss only assumes an access to a generator G, a feature extraction function φ(·), and sampler from true data distribution p d . The loss encourages the generator to diversify generated samples that match the diversity of real data.This criterion can be considered as a complement to the original adversarial loss which attempts to learn an indistinguishable distribution from the true-data distribution without being specific to diverse modes. We assess the performance of GDPP on three different synthetic data environments, while also verifying the superiority on three real-world images datasets. We compared our approach with state-of-the-art approaches of more complex architectures and learning paradigms. Experiments show that our method outperforms all competing methods in terms of alleviating modecollapse and generations quality.", "In this work, we introduce a novel criterion to train generative networks on capturing a similar diversity to one of the true data by utilizing Determinantal Point Process(DPP).", "We apply our criterion to Generative Adversarial training and the Variational Autoencoder by learning a kernel via features extracted from the discriminator/encoder.", "We train the generator on optimizing a loss between the fake and real, eigenvalues and eigenvectors of this kernel to simulate the diversity of the real data.", "Our GDPP framework accumulates many desirable properties: it does not require any extra trainable parameters, it operates in an unsupervised setting, yet it consistently outperforms stateof-the-art methods on a battery of synthetic data and real image datasets as measure by generation quality and invariance to mode collapse.", "Furthermore, GDPP-GANs exhibit a stabilized adversarial training and has been shown to be time and data efficient as compared to state-of-the-art approaches.", "Moreover, the GDPP criterion is architecture and model invariant, allowing it to be embedded with any variants of generative models such as adversarial feature learning or conditional GANs." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.09756097197532654, 0.11428570747375488, 0.23529411852359772, 0.21276594698429108, 0.24390242993831635, 0.277777761220932, 0.19999998807907104, 0.08510638028383255, 0.19672130048274994, 0, 0.13333332538604736, 0.10526315122842789, 0.20408162474632263, 0.11552346497774124, 0.25, 0.2790697515010834, 0.27272728085517883, 0.21212120354175568, 0.1428571343421936, 0.19999998807907104 ]
S1x8WnA5Ym
true
[ "The addition of a diversity criterion inspired from DPP in the GAN objective avoids mode collapse and leads to better generations. " ]
[ "Despite existing work on ensuring generalization of neural networks in terms of scale sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization.", "In this work we suggest a novel complexity measure based on unit-wise capacities resulting in a tighter generalization bound for two layer ReLU networks.", "Our capacity bound correlates with the behavior of test error with increasing network sizes (within the range reported in the experiments), and could partly explain the improvement in generalization with over-parametrization.", "We further present a matching lower bound for the Rademacher complexity that improves over previous capacity lower bounds for neural networks.", "Deep neural networks have enjoyed great success in learning across a wide variety of tasks.", "They played a crucial role in the seminal work of Krizhevsky et al. (2012) , starting an arms race of training larger networks with more hidden units, in pursuit of better test performance (He et al., 2016) .", "In fact the networks used in practice are over-parametrized to the extent that they can easily fit random labels to the data (Zhang et al., 2017) .", "Even though they have such a high capacity, when trained with real labels they achieve smaller generalization error.Traditional wisdom in learning suggests that using models with increasing capacity will result in overfitting to the training data.", "Hence capacity of the models is generally controlled either by limiting the size of the model (number of parameters) or by adding an explicit regularization, to prevent from overfitting to the training data.", "Surprisingly, in the case of neural networks we notice that increasing the model size only helps in improving the generalization error, even when the networks are trained without any explicit regularization -weight decay or early stopping (Lawrence et al., 1998; Srivastava et al., 2014; Neyshabur et al., 2015c) .", "In particular, Neyshabur et al. (2015c) observed that training on models with increasing number of hidden units lead to decrease in the test error for image classification on MNIST and CIFAR-10.", "Similar empirical observations have been made over a wide range of architectural and hyper-parameter choices (Liang et al., 2017; Novak et al., 2018; Lee et al., 2018) .", "What explains this improvement in generalization with over-parametrization?", "What is the right measure of complexity of neural networks that captures this generalization phenomenon?Complexity", "measures that depend on the total number of parameters of the network, such as VC bounds, do not capture this behavior as they increase with the size of the network. Existing works", "suggested different norm, margin and sharpness based measures, to measure the capacity of neural networks, in an attempt to explain the generalization behavior observed in practice (Neyshabur et al., 2015b; Keskar et al., 2017; Dziugaite & Roy, 2017; Neyshabur et al., 2017; Bartlett et al., 2017; We observe that even when after network is large enough to completely fit the training data(reference line), the test error continues to decrease for larger networks. Middle panel:", "Training fully connected feedforward network with single hidden layer on CIFAR-10. We observe the", "same phenomena as the one observed in ResNet18 architecture. 
Right panel: Unit", "capacity captures the complexity of a hidden unit and unit impact captures the impact of a hidden unit on the output of the network, and are important factors in our capacity bound (Theorem 1). We observe empirically", "that both average unit capacity and average unit impact shrink with a rate faster than 1/ √ h where h is the number of hidden units. Please see Supplementary", "Section A for experiments settings. BID0 Golowich et al., 2018", "; BID0 . In particular, Bartlett et", "al. (2017) showed a margin based generalization bound that depends on the spectral norm and 1,2 norm of the layers of a network. However, as shown in Neyshabur", "et al. (2017) and in FIG6 , these complexity measures fail to explain why over-parametrization helps, and in fact increase with the size of the network. Dziugaite & Roy (2017) numerically", "evaluated a generalization bound based on PAC-Bayes. Their reported numerical generalization", "bounds also increase with the increasing network size. These existing complexity measures increase", "with the size of the network, even for two layer networks, as they depend on the number of hidden units either explicitly, or the norms in their measures implicitly depend on the number of hidden units for the networks used in practice (Neyshabur et al., 2017)", "In this paper we present a new capacity bound for neural networks that empirically decreases with the increasing number of hidden units, and could potentially explain the better generalization performance of larger networks.", "In particular, we focused on understanding the role of width in the generalization behavior of two layer networks.", "More generally, understanding the role of depth and the interplay between depth and width in controlling capacity of networks, remain interesting directions for future study.", "We also provided a matching lower bound for the capacity improving on the current lower bounds for neural networks.", "While these bounds are useful for relative comparison between networks of different size, their absolute values still remain larger than the number of training samples, and it is of interest to get bounds with numerically smaller values.In this paper we do not address the question of whether optimization algorithms converge to low complexity networks in the function class considered in this paper, or in general how does different hyper parameter choices affect the complexity of the recovered solutions.", "It is interesting to understand the implicit regularization effects of the optimization algorithms (Neyshabur et al., 2015a; Gunasekar et al., 2017; Soudry et al., 2018) for neural networks, which we leave for future work." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1702127605676651, 0.2702702581882477, 0.5128204822540283, 0.3030303120613098, 0.13793103396892548, 0.17391303181648254, 0.15789473056793213, 0.25, 0.04999999701976776, 0.14814814925193787, 0.1818181723356247, 0.052631575614213943, 0.3636363446712494, 0.2142857164144516, 0.14999999105930328, 0.16438356041908264, 0.2142857164144516, 0.1538461446762085, 0.2631579041481018, 0.19999998807907104, 0, 0, 0.31578946113586426, 0.24390242993831635, 0.25, 0.1538461446762085, 0.12765957415103912, 0.3636363446712494, 0.19999998807907104, 0.11428570747375488, 0.2666666507720947, 0.07894736528396606, 0.04651162400841713 ]
BygfghAcYX
true
[ "We suggest a generalization bound that could partly explain the improvement in generalization with over-parametrization." ]
[ "We introduce three generic point cloud processing blocks that improve both accuracy and memory consumption of multiple state-of-the-art networks, thus allowing to design deeper and more accurate networks.\n\n", "The novel processing blocks that facilitate efficient information flow are a convolution-type operation block for point sets that blends neighborhood information in a memory-efficient manner; a multi-resolution point cloud processing block; and a crosslink block that efficiently shares information across low- and high-resolution processing branches.", "Combining these blocks, we design significantly wider and deeper architectures.\n\n", "We extensively evaluate the proposed architectures on multiple point segmentation benchmarks (ShapeNetPart, ScanNet, PartNet) and report systematic improvements in terms of both accuracy and memory consumption by using our generic modules in conjunction with multiple recent architectures (PointNet++, DGCNN, SpiderCNN, PointCNN).", "We report a 9.7% increase in IoU on the PartNet dataset, which is the most complex, while decreasing memory footprint by 57%.", "Geometry processing has recently started profiting from applying deep learning to graphics and 3D shape analysis (Qi et al., 2017b; Wang et al., 2018b; with networks that guarantee desirable properties of point cloud processing, such as permutation-invariance and quantization-free representation Wang et al., 2017; .", "Despite these advances, several differences still impede the breakthroughs made in computer vision.", "The different nature of 3D data dictates re-inventing for geometry processing the functionality of basic image processing blocks, such as multi-resolution processing or convolution operations.", "When operating with unstructured point clouds, one has to resort to elementary local pooling operations that group information within a neighborhood based on Euclidean distance.", "Exemplary methods such as the PointNet/PointNet++ architectures (Qi et al., 2017a; make design choices that potentially compromise performance.", "In particular, the computation and memory demands of point network blocks can affect both training speed and, more crucially, inference time.", "One of the main bottlenecks for point networks is their memory-intensive nature: as detailed in Sec. 3.1, the PointNet++ architecture and its variants replicate point neighborhood information, letting every node carry in its feature vector information about all of its neighborhood.", "This results in significant memory overhead, and limits the number of layers, features and feature compositions one can compute.", "In this work, we enhance point processing networks by introducing a set of modules that improve memory footprint and accuracy, without compromising on inference speed.", "We call the result architectures Lean Point Networks, to highlight their lightweight memory budget.", "We build on the decreased memory budget to go deeper with point networks.", "As has been witnessed repeatedly in the image domain Huang et al., 2016; Zagoruyko & Komodakis, 2016) , we show that going deep also increases the prediction accuracy of point networks.", "We start in Sec. 
3.2 by replacing the grouping operation used in point cloud processing networks with a low-memory alternative that is the point cloud processing counterpart of efficient image processing implementations of convolution.", "The resulting 'point convolution block' is 67% more memory-efficient and 41% faster than its PointNet++ counterpart, while exhibiting favorable training properties due to more effective mixing of information across neighborhoods.", "We then turn in Sec. 3.3 to improving the information flow across layers and scales within point networks through three techniques: a multi-resolution variant for multi-scale network which still delivers the multi-scale context but at a reduced memory and computational cost, residual links, and a new cross-link block that broadcasts multi-scale information across the network branches.", "By combining these advances we are able to successfully train deeper point networks that allow us to leverage upon larger, recently introduced datasets.", "In Sec. 4 we thoroughly validate our contributions on the ShapeNet-Part, ScanNet and PartNet segmentation benchmarks, reporting systematic improvements over the PointNet++ baseline.", "As shown in Fig. 1 , when combined these contributions deliver multifold reductions in memory consumption while improving performance, allowing us in a second stage to train increasingly wide and deep networks.", "On PartNet, the most complex dataset, our deep architecture achieves a 9.7% relative increase in IoU while decreasing memory footprint by 57% and inference time by 47%.", "Having thoroughly ablated our design choices on the PartNet++ baseline, in Sec. 4.3 we turn to confirming the generic nature of our blocks.", "We extend the scope of our experiments to three additional networks,", "(i) DGCNN (Wang et al., 2018b) ,", "(ii) SpiderCNN (Xu et al., 2018) and", "(iii) PointCNN (Li et al., 2018b) and report systematic improvements in memory efficiency and performance.", "In this work we have introduced new generic building blocks for point processing networks, that exhibit favorable memory, computation, and optimization properties when compared to the current counterparts of state-of-the-art point processing networks.", "When based on PointNet++, our lean architecture convPN wins on all counts, memory efficiency (-67% wrt. PointNet++) and speed (-41% and -68% on inference time and length of backward pass).", "Its deep counterpart has a marginal cost in terms of efficiency and achieves the best IoU on PartNet (+9.7% over PointNet++).", "Those generic and modular blocks exhibit similar performance on all of the additional tested architectures with a significant decrease in memory (up to -69%) and increase in IoU (up to +8.0%).", "From the promising results on PartNet and the extremely low cost of depth in our architectures, we anticipate that adding these components to the armament of the deep geometry processing community will allow researchers to train the next generation of point processing networks by leveraging upon the advent of larger shape datasets (Mo et al., 2018; Koch et al., 2018" ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.9818181991577148, 0.19999998807907104, 0.15789473056793213, 0.3125, 0.08163265138864517, 0.24242423474788666, 0, 0.08163265138864517, 0.11764705181121826, 0.08695651590824127, 0.2916666567325592, 0.12903225421905518, 0.13333332538604736, 0.307692289352417, 0.1463414579629898, 0.29999998211860657, 0.17543859779834747, 0.2545454502105713, 0.1428571343421936, 0.2222222238779068, 0.20408162474632263, 0.04081632196903229, 0.21052631735801697, 0.07407406717538834, 0.20408162474632263, 0.2631579041481018, 0, 0.05714285373687744, 0.0952380895614624, 0.37931033968925476, 0.11320754140615463, 0.08163265138864517, 0.2181818187236786, 0.18421052396297455 ]
rJgsgCVYwS
true
[ "We introduce three generic point cloud processing blocks that improve both accuracy and memory consumption of multiple state-of-the-art networks, thus allowing to design deeper and more accurate networks." ]
[ "End-to-end acoustic-to-word speech recognition models have recently gained popularity because they are easy to train, scale well to large amounts of training data, and do not require a lexicon.", "In addition, word models may also be easier to integrate with downstream tasks such as spoken language understanding, because inference (search) is much simplified compared to phoneme, character or any other sort of sub-word units.", "In this paper, we describe methods to construct contextual acoustic word embeddings directly from a supervised sequence-to-sequence acoustic-to-word speech recognition model using the learned attention distribution.", "On a suite of 16 standard sentence evaluation tasks, our embeddings show competitive performance against a word2vec model trained on the speech transcriptions.", "In addition, we evaluate these embeddings on a spoken language understanding task and observe that our embeddings match the performance of text-based embeddings in a pipeline of first performing speech recognition and then constructing word embeddings from transcriptions.", "The task of learning fixed-size representations for variable length data like words or sentences, either text or speech-based, is an interesting problem and a focus of much current research.", "In the natural language processing community, methods like word2vec BID0 , GLoVE BID1 , CoVe BID2 and ELMo BID3 have become increasingly popular, due to their utility in several natural language processing tasks.", "Similar research has progressed in the speech recognition community, where however the input is a sequence of short-term audio features, rather than words or characters.", "Therefore, the variability in speakers, acoustics or microphones for different occurrences of the same word or sentence adds to the challenge.Prior work towards the problem of learning word representations from variable length acoustic frames involved either providing word boundaries to align speech and text BID4 , or chunking (\"chopping\" or \"padding\") input speech into fixed-length segments that usually span only one word BID5 BID6 BID7 BID8 .", "Since these techniques learn acoustic word embeddings from audio fragment and word pairs obtained via a given segmentation of the audio data, they ignore the specific audio context associated with a particular word.", "So the resulting word embeddings do not capture the contextual dependencies in speech.", "In contrast, our work constructs individual acoustic word embeddings grounded in utterance-level acoustics.In this paper, we present different methods of obtaining acoustic word embeddings from an attention-based sequence-to-sequence * Equal contribution model BID9 BID10 BID11 trained for direct Acoustic-to-Word (A2W) speech recognition BID12 .", "Using this model, we jointly learn to automatically segment and classify input speech into individual words, hence getting rid of the problem of chunking or requiring pre-defined word boundaries.", "As our A2W model is trained at the utterance level, we show that we can not only learn acoustic word embeddings, but also learn them in the proper context of their containing sentence.", "We also evaluate our contextual acoustic word embeddings on a spoken language understanding task, demonstrating that they can be useful in non-transcription downstream tasks.Our main contributions in this paper are the following:", "1. 
We demonstrate the usability of attention not only for aligning words to acoustic frames without any forced alignment but also for constructing Contextual Acoustic Word Embeddings (CAWE).", "2. We demonstrate that our methods to construct word representations (CAWE) directly from a speech recognition model are highly competitive with the text-based word2vec embeddings BID0 , as evaluated on 16 standard sentence evaluation benchmarks.", "3. We demonstrate the utility of CAWE on a speech-based downstream task of Spoken Language Understanding, showing that pretrained speech models could be used for transfer learning similar to VGG in vision BID13 or CoVe in natural language understanding BID2 .", "We present a method to learn contextual acoustic word embeddings from a sequence-to-sequence acoustic-to-word speech recognition model that learns to jointly segment and classify speech.", "We analyze the role of attention in constructing contextual acoustic word embeddings, and find our acoustic embeddings to be highly competitive with word2vec (CBOW) text embeddings.", "We discuss two variants of such contextual acoustic word embeddings which outperform the simple unweighted average method by up to 34% on semantic textual similarity tasks.", "The embeddings also matched the performance of text-based embeddings in spoken language understanding, showing the use of this model as a pre-trained model for other speech-based downstream tasks.", "We surmise that contextual audio embeddings will generalize and improve downstream tasks in a way that is similar to their text counterparts, despite the additional complexity presented by noisy audio input.", "In the future, we will explore ways to scale our model to larger corpora, larger vocabularies and compare with non-contextual acoustic word embedding methods." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.1304347813129425, 0.11538460850715637, 0.40909090638160706, 0.14999999105930328, 0.2800000011920929, 0.04444443807005882, 0.04255318641662598, 0.0952380895614624, 0.16438356041908264, 0.2666666507720947, 0.2666666507720947, 0.27586206793785095, 0.17391303181648254, 0.2083333283662796, 0.19999998807907104, 0.08888888359069824, 0.37735849618911743, 0.1071428507566452, 0.550000011920929, 0.2857142686843872, 0.1860465109348297, 0.1428571343421936, 0.1702127605676651, 0.25 ]
SJlmNI0ojQ
true
[ "Methods to learn contextual acoustic word embeddings from an end-to-end speech recognition model that perform competitively with text-based word embeddings." ]
[ "Unsupervised monocular depth estimation has made great progress after deep\n", "learning is involved.", "Training with binocular stereo images is considered as a\n", "good option as the data can be easily obtained.", "However, the depth or disparity\n", "prediction results show poor performance for the object boundaries.", "The main\n", "reason is related to the handling of occlusion areas during the training.", "In this paper,\n", "we propose a novel method to overcome this issue.", "Exploiting disparity maps\n", "property, we generate an occlusion mask to block the back-propagation of the occlusion\n", "areas during image warping.", "We also design new networks with flipped\n", "stereo images to induce the networks to learn occluded boundaries.", "It shows that\n", "our method achieves clearer boundaries and better evaluation results on KITTI\n", "driving dataset and Virtual KITTI dataset.", "Monocular depth estimation becomes an active research topic as deep learning is applied in various computer vision tasks.", "It has many applications, from navigation through to scene understanding.", "A single traditional camera can be a cheaper alternative to the expensive LIDAR sensor for automotive cars if accurate estimation can be achieved.", "Meanwhile, single camera simplifies the design of depth estimation solution which can be adopted quite widely at a low cost.", "One straight-forward way to train deep depth estimation models is to use ground truth depth images as the supervision signals BID1 .", "However, supervised deep learning method is eager for massive data with ground truth.", "Collecting large datasets with ground truth depth in varied real scenarios is challenge and expensive.", "Instead, training using stereo images without depth label is an alternative option.", "BID7 proposed a method to exploit the left-right consistency of stereo images to tackle the monocular depth estimation, which achieved quite promising results.", "However, the depth predicted by their method has blurred boundaries.", "The issue is mainly due to the occlusions during the image warping.", "Though it can be alleviated in some extent with proper post processing, the fundamental problem is not well addressed.In this paper, we propose a new method to overcome the blurred boundaries when using stereo pairs to train the monocular depth model.", "An example is illustrated in FIG0 .", "During the image warping, we generate an occlusion mask using the disparity map to block the inappropriate back-propagation gradients for occlusion areas.", "However, the mask only cannot guarantee clear boundaries as there is no constrain for the masked areas.", "Then we design new networks to fully exploit the information of stereo images.", "With flipped stereo pairs, the network is induced to learn clear boundaries for occlusion areas.", "Our method provides a solution to the fundamental learning difficulty of occluded areas introduced by image warping in depth estimation.", "Empirical evaluation on KITTI driving dataset BID6 ) and Virtual KITTI dataset BID4 ) demonstrates the effectiveness of our approach.", "Moreover, we find the depth label of KITTI 2015 is usually very sparse near the object boundaries, which is not very sensitive to evaluate the clearness of boundaries.", "In this work, we present an occlusion mask and filp-over training scheme to enable effective learning of object boundaries when using image warping.", "With our new network, our model achieves state of art result using only stereo images.", 
"Moreover, as warping based image reconstruction is commonly used in depth estimation problem, our method provides a solution to the fundamental difficulty of occluded areas introduced by image warping.In the future, our method can be incorporated with more accurate network trained on trinocular data (temporal Stereo sequence) such as BID25 , BID17 and BID8 , which would further boost the accuracy.6", "SUPPLEMENTARY MATERIALS" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.20000000298023224, 0, 0.06896550953388214, 0.06896550953388214, 0.1599999964237213, 0.13793103396892548, 0.19354838132858276, 0, 0.20689654350280762, 0, 0.25806450843811035, 0, 0, 0.06896550953388214, 0, 0.12903225421905518, 0, 0.10526315122842789, 0, 0.1463414579629898, 0.29999998211860657, 0.1538461446762085, 0.060606054961681366, 0.05714285373687744, 0.0624999962747097, 0.39024388790130615, 0.3333333432674408, 0.06451612710952759, 0.23728813230991364, 0, 0.1538461446762085, 0.1111111044883728, 0.12121211737394333, 0.11428570747375488, 0.3499999940395355, 0.10810810327529907, 0.1860465109348297, 0.1395348757505417, 0.05882352590560913, 0.21333332359790802 ]
H1fs4oRqKm
true
[ "This paper propose a mask method which solves the previous blurred results of unsupervised monocular depth estimation caused by occlusion" ]
[ "Graph classification is currently dominated by graph kernels, which, while powerful, suffer some significant limitations.", "Convolutional Neural Networks (CNNs) offer a very appealing alternative.", "However, processing graphs with CNNs is not trivial.", "To address this challenge, many sophisticated extensions of CNNs have recently been proposed.", "In this paper, we reverse the problem: rather than proposing yet another graph CNN model, we introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs.", "Despite its simplicity, our method proves very competitive to state-of-the-art graph kernels and graph CNNs, and outperforms them by a wide margin on some datasets.", "It is also preferable to graph kernels in terms of time complexity.", "Code and data are publicly available.", "Replacing the raw counts by the empirical joint probability density function, either by normalizing the histograms, or with a Kernel Density Estimate, significantly deteriorated performance.", "This suggests that keeping the absolute values of the counts is important, which makes sense, because some categories might be associated with larger or smaller graphs, on average.", "Therefore, preventing the model from using size information is likely to decrease accuracy.", "We also observed that increasing the number of channels to more than 5 does not yield better results (which makes sense, as channels contain less and less information), but that reducing this number improves performance in some cases, probably because it plays a regularization role.The main contribution of our study is a novel method for representing graphs as multi-channel image-like structures from their node embeddings, that allows them to be processed by 2D CNNs.", "How the embeddings are computed, and which 2D CNN architecture is used, does not matter.", "We hold this flexibility to be a major strength.", "First, the embedding-agnostic nature of our method means that it can be seamlessly extended to directed, weighted, or labeled graphs with continuous or categorical node/edge attributes, simply by using an embedding algorithm that accepts such graphs, e.g., BID21 .", "The independence of our approach with respect to the image classification model used is another advantage.", "Here, we employed a vanilla 2D CNN architecture as it was offering an excellent trade-off between accuracy and simplicity, but more recent models, such as the one of BID15 , may yield even better results.", "Above all, performance should improve as graph node embedding algorithms and CNN architectures for images improve in the future.Even though results are very good out-of-the-box in most cases, finding an embedding algorithm that works well, or the right combination of parameters for a given dataset, can require some efforts.", "For instance, on COLLAB, we hypothesize that our results are inferior to that observed on the other datasets because optimizing p and q for COLLAB may require more than a coarse grid search, or because node2vec may not be well-suited to very dense graphs such as the ones found in COLLAB.", "The main contribution of this paper is to show that CNN architectures designed for images can be used for graph processing in a completely off-the-shelf manner, simply by representing graphs as stacks of two-dimensional histograms of their node embeddings.", "Despite the simplicity of our approach, results indicate that it is very competitive to state-of-the-art graph kernels and graph 
CNN models, sometimes outperforming them by a wide margin.", "Furthermore, these good results were obtained with limited parameter tuning and by using a basic 2D CNN model.", "From a time complexity perspective, our approach is preferable to graph kernels too, allowing us to process larger datasets featuring bigger graphs." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0555555522441864, 0.06666666269302368, 0.13793103396892548, 0.05882352590560913, 0.7142857313156128, 0.1818181723356247, 0.060606054961681366, 0, 0.09302324801683426, 0.0833333283662796, 0.05882352590560913, 0.36781609058380127, 0.0555555522441864, 0.2666666507720947, 0.16949151456356049, 0.05405404791235924, 0.145454540848732, 0.09090908616781235, 0.1846153736114502, 0.24561403691768646, 0.2083333283662796, 0.1538461446762085, 0.1463414579629898 ]
HkOhuyA6-
true
[ "We introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs." ]
[ "The key attribute that drives the unprecedented success of modern Recurrent Neural Networks (RNNs) on learning tasks which involve sequential data, is their ever-improving ability to model intricate long-term temporal dependencies.", "However, a well established measure of RNNs' long-term memory capacity is lacking, and thus formal understanding of their ability to correlate data throughout time is limited.", "Though depth efficiency in convolutional networks is well established by now, it does not suffice in order to account for the success of deep RNNs on inputs of varying lengths, and the need to address their 'time-series expressive power' arises.", "In this paper, we analyze the effect of depth on the ability of recurrent networks to express correlations ranging over long time-scales.", "To meet the above need, we introduce a measure of the information flow across time that can be supported by the network, referred to as the Start-End separation rank.", "Essentially, this measure reflects the distance of the function realized by the recurrent network from a function that models no interaction whatsoever between the beginning and end of the input sequence.", "We prove that deep recurrent networks support Start-End separation ranks which are exponentially higher than those supported by their shallow counterparts.", "Moreover, we show that the ability of deep recurrent networks to correlate different parts of the input sequence increases exponentially as the input sequence extends, while that of vanilla shallow recurrent networks does not adapt to the sequence length at all.", "Thus, we establish that depth brings forth an overwhelming advantage in the ability of recurrent networks to model long-term dependencies, and provide an exemplar of quantifying this key attribute which may be readily extended to other RNN architectures of interest, e.g. variants of LSTM networks.", "We obtain our results by considering a class of recurrent networks referred to as Recurrent Arithmetic Circuits (RACs), which merge the hidden state with the input via the Multiplicative Integration operation.", "Over the past few years, Recurrent Neural Networks (RNNs) have become the prominent machine learning architecture for modeling sequential data, having been successfully employed for language modeling (Sutskever et al., 2011; Graves, 2013) , neural machine translation (Bahdanau et al., 2014) , speech recognition (Graves et al., 2013; BID1 , and more.", "The success of recurrent networks in learning complex functional dependencies for sequences of varying lengths, readily implies that long-term and elaborate correlations in the given inputs are somehow supported by these networks.", "However, formal understanding of the influence of a recurrent network's structure on its expressiveness, and specifically on its ever-improving ability to integrate data throughout time (e.g. translating long sentences, answering elaborate questions), is lacking.An ongoing empirical effort to successfully apply recurrent networks to tasks of increasing complexity and temporal extent, includes augmentations of the recurrent unit such as Long Short Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) and their variants (e.g. Cho et al. (2014) ).", "A parallel avenue, which we focus on in this paper, includes the stacking of layers to form deep recurrent networks (Schmidhuber, 1992) .", "Deep recurrent networks, which exhibit empirical superiority over shallow ones (see e.g. Graves et al. 
(2013) ), implement hierarchical processing of information at every time-step that accompanies their inherent time-advancing computation.", "Evidence for a time-scale related effect arises from experiments (Hermans and Schrauwen, 2013) -deep recurrent networks appear to model correlations which correspond to longer time-scales than shallow ones.", "These findings, which imply that depth brings forth a considerable advantage in complexity and in temporal capacity of recurrent networks, have no adequate theoretical explanation.In this paper, we address the above presented issues.", "Based on the relative maturity of depth efficiency results in neural networks, namely results that show that deep networks efficiently express functions that would require shallow ones to have a super-polynomial size (e.g. Cohen et al. (2016) ; Eldan and Shamir (2016) ), it is natural to assume that depth has a similar effect on the expressiveness of recurrent networks.", "Indeed, we show that depth efficiency holds for recurrent networks.However, the distinguishing attribute of recurrent networks, is their inherent ability to cope with varying input sequence length.", "Thus, once establishing the above depth efficiency in recurrent networks, a basic question arises, which relates to the apparent depth enhanced long-term memory in recurrent networks: Do the functions which are efficiently expressed by deep recurrent networks correspond to dependencies over longer time-scales?", "We answer this question, by showing that depth provides an exponential boost to the ability of recurrent networks to model long-term dependencies.In order to take-on the above question, we introduce in section 2 a recurrent network referred to as a recurrent arithmetic circuit (RAC) that shares the architectural features of RNNs, and differs from them in the type of non-linearity used in the calculation.", "This type of connection between state-of-the-art machine learning algorithms and arithmetic circuits (also known as Sum-Product Networks (Poon and Domingos, 2011)) has well-established precedence in the context of neural networks.", "Delalleau and Bengio (2011) prove a depth efficiency result on such networks, and Cohen et al. (2016) theoretically analyze the class of Convolutional Arithmetic Circuits which differ from common ConvNets in the exact same fashion in which RACs differ from more standard RNNs.", "Conclusions drawn from such analyses were empirically shown to extend to common ConvNets (e.g. Sharir and Shashua (2017) ; Levine et al. 
(2017) ).", "Beyond their connection to theoretical models, the modification which defines RACs resembles that of Multiplicative RNNs (Sutskever et al., 2011) and of Multiplicative Integration networks (Wu et al., 2016) , which provide a substantial performance boost over many of the existing RNN models.", "In order to obtain our results, we make a connection between RACs and the Tensor Train (TT) decomposition (Oseledets, 2011) , which suggests that Multiplicative RNNs may be related to a generalized TT-decomposition, similar to the way Cohen and Shashua (2016) connected ReLU ConvNets to generalized tensor decompositions.We move on to introduce in section 3 the notion of Start-End separation rank as a measure of the recurrent network's ability to model elaborate long-term dependencies.", "In order to analyze the longterm correlations of a function over a sequential input which extends T time-steps, we partition the inputs to those which arrive at the first T /2 time-steps (\"Start\") and the last T /2 time-steps (\"End\"), and ask how far the function realized by the recurrent network is from being separable w.r.t. this partition.", "Distance from separability is measured through the notion of separation rank (Beylkin and Mohlenkamp, 2002) , which can be viewed as a surrogate of the L 2 distance from the closest separable function.", "For a given function, high Start-End separation rank implies that the function induces strong correlation between the beginning and end of the input sequence, and vice versa.In section 4 we directly address the depth enhanced long-term memory question above, by examining depth L = 2 RACs and proving that functions realized by these deep networks enjoy Start-End separation ranks that are exponentially higher than those of shallow networks, implying that indeed these functions can model more elaborate input dependencies over longer periods of time.", "An additional reinforcing result is that the Start-End separation rank of the deep recurrent network grows exponentially with the sequence length, while that of the shallow recurrent network is independent of the sequence length.", "Informally, this implies that vanilla shallow recurrent networks are inadequate in modeling correlations of long input sequences, since in contrast to the case of deep recurrent networks, the modeled dependencies achievable by shallow ones do not adapt to the actual length of the input.", "Finally, we present and motivate a quantitative conjecture by which the Start-End separation rank of recurrent networks grows exponentially with the network depth.", "A proof of this conjecture, which will provide an even deeper insight regarding the advantages of depth in recurrent networks, is left as an open problem.", "The notion of depth efficiency, by which deep networks efficiently express functions that would require shallow networks to have a super-polynomial size, is well established in the context of convolutional networks.", "However, recurrent networks differ from convolutional networks, as they are suited by design to tackle inputs of varying lengths.", "Accordingly, depth efficiency alone does not account for the remarkable performance of recurrent networks on long input sequences.", "In this paper, we identified a fundamental need for a quantifier of 'time-series expressivity', quantifying the memory capacity of recurrent networks.", "In order to meet this need, we proposed a measure of the ability of recurrent networks to model long-term temporal dependencies, in the form of the 
Start-End separation rank.", "The separation rank was used to quantify correlations in convolutional networks, and has roots in the field of quantum physics.", "The proposed measure adjusts itself to the temporal extent of the input series, and quantifies the ability of the recurrent network to correlate the incoming sequential data as time progresses.We analyzed the class of Recurrent Arithmetic Circuits, which are closely related to successful RNN architectures, and proved that the Start-End separation rank of deep RACs increases exponentially as the input sequence extends, while that of shallow RACs is independent of the input length.", "These results, which demonstrate that depth brings forth an overwhelming advantage in the ability of recurrent networks to model long-term dependencies, were achieved by combining tools from the fields of measure theory, tensorial analysis, combinatorics, graph theory and quantum physics.Such analyses may be readily extended to other architectural features employed in modern recurrent networks.", "Indeed, the same time-series expressivity question may now be applied to the different variants of LSTM networks, and the proposed notion of Start-End separation rank may be employed for quantifying their memory capacity.", "We have demonstrated that such a treatment can go beyond unveiling the origins of the success of a certain architectural choice, and leads to new insights.", "The above established observation that correlations achievable by vanilla shallow recurrent network do not adapt at all to the sequence length, is an exemplar of this potential.Moreover, practical recipes may emerge by such theoretical analyses.", "The experiments preformed in Hermans and Schrauwen (2013) , suggest that shallow layers of recurrent networks are related to short time-scales, e.g. in speech: phonemes, syllables, words, while deeper layers appear to support correlations of longer time-scales, e.g. full sentences, elaborate questions.", "These findings open the door to further depth related investigations in recurrent networks, and specifically the role of each layer in modeling temporal correlations may be better understood.", "Levine et al. (2017) establish theoretical observations which translate into practical conclusions regarding the number of hidden channels to be chosen for each layer in a deep convolutional network.", "The conjecture presented in this paper, by which the Start-End separation rank of recurrent networks grows exponentially with depth, can similarly entail practical recipes for enhancing their memory capacity.", "Such analyses can be reinforced by experiments, and lead to a profound understanding of the contribution of deep layers to the recurrent network's memory.", "Indeed, we view this work as an important step towards novel methods of matching the recurrent network architecture to the temporal correlations in a given sequential data set.", "We begin in section A.1 by providing a brief introduction to TNs.", "Next, we present in section A.2 the TN which corresponds to the calculation of a shallow RAC, and tie it to a common TN architecture referred to as a Matrix Product State (MPS) (see overview in e.g. 
Orús (2014)), and equivalently to the tensor train (TT) decomposition (Oseledets, 2011) .", "Subsequently, we present in section A.3 a TN construction of a deep RAC, and emphasize the characteristics of this construction that are the origin of the enhanced ability of deep RACs to model elaborate temporal dependencies.", "Finally, in section A.4, we make use of the above TN constructions in order to formally motivate conjecture 1, according to which the Start-End separation rank of RACs grows exponentially with depth.", "A TN is a weighted graph, where each node corresponds to a tensor whose order is equal to the degree of the node in the graph.", "Accordingly, the edges emanating out of a node, also referred to as its legs, represent the different modes of the corresponding tensor.", "The weight of each edge in the graph, also referred to as its bond dimension, is equal to the dimension of the appropriate tensor mode.", "In accordance with the relation between mode, dimension and index of a tensor presented in section 3.2, each edge in a TN is represented by an index that runs between 1 and its bond dimension.", "FIG4 shows three examples: (1) A vector, which is a tensor of order 1, is represented by a node with one leg.", "(2) A matrix, which is a tensor of order 2, is represented by a node with two legs.", "(3) Accordingly, a tensor of order N is represented in the TN as a node with N legs. We move on to present the connectivity properties of a TN.", "Edges which connect two nodes in the TN represent an operation between the two corresponding tensors.", "An index which represents such an edge is called a contracted index, and the operation of contracting that index is in fact a summation over all of the values it can take.", "An index representing an edge with one loose end is called an open index.", "The tensor represented by the entire TN, whose order is equal to the number of open indices, can be calculated by summing over all of the contracted indices in the network.", "An example of the contraction of a simple TN is depicted in FIG4 .", "There, the TN corresponding to the multiplication of a vector v ∈ R^{r_1} by a matrix M ∈ R^{r_2 × r_1} is contracted by summing over the only contracted index, k.", "As there is only one open index, d, the result of contracting the network is an order-1 tensor (a vector) u ∈ R^{r_2}, which satisfies u = Mv. Though we use the contraction of indices in more elaborate TNs below, this operation can essentially be viewed as a generalization of matrix multiplication." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2545454502105713, 0.2916666567325592, 0.1666666567325592, 0.1818181723356247, 0.19999998807907104, 0.2448979616165161, 0.4000000059604645, 0.2641509473323822, 0.25, 0.22641508281230927, 0.030303025618195534, 0.30188679695129395, 0.15909090638160706, 0.21739129722118378, 0.178571417927742, 0.3529411852359772, 0.21052631735801697, 0.27397260069847107, 0.19607841968536377, 0.3050847351551056, 0.301369845867157, 0.11538460850715637, 0.13114753365516663, 0.08510638028383255, 0.19999998807907104, 0.25581395626068115, 0.14492753148078918, 0.11320754140615463, 0.28260868787765503, 0.21276594698429108, 0.3448275923728943, 0.21739129722118378, 0.0833333283662796, 0.26923075318336487, 0.23255813121795654, 0.1428571343421936, 0.23255813121795654, 0.375, 0.1395348757505417, 0.2857142686843872, 0.24657534062862396, 0.1538461446762085, 0.25531914830207825, 0.16949151456356049, 0.26229506731033325, 0.23999999463558197, 0.15094339847564697, 0.15094339847564697, 0.31111109256744385, 0.19607841968536377, 0.1621621549129486, 0.1538461446762085, 0.37735849618911743, 0.07547169178724289, 0.13636362552642822, 0.1395348757505417, 0.08888888359069824, 0.14814814925193787, 0.09090908616781235, 0.09999999403953552, 0.1702127605676651, 0, 0.15686273574829102, 0, 0.07999999821186066, 0.1111111044883728, 0.11538460850715637, 0.05405404791235924 ]
HJ3d2Ax0-
true
[ "We propose a measure of long-term memory and prove that deep recurrent networks are much better fit to model long-term temporal dependencies than shallow ones." ]
[ "Holistically exploring the perceptual and neural representations underlying animal communication has traditionally been very difficult because of the complexity of the underlying signal.", "We present here a novel set of techniques to project entire communicative repertoires into low dimensional spaces that can be systematically sampled from, exploring the relationship between perceptual representations, neural representations, and the latent representational spaces learned by machine learning algorithms.", "We showcase this method in one ongoing experiment studying sequential and temporal maintenance of context in songbird neural and perceptual representations of syllables.", "We further discuss how studying the neural mechanisms underlying the maintenance of the long-range information content present in birdsong can inform and be informed by machine sequence modeling." ]
[ 1, 0, 0, 0 ]
[ 0.2857142686843872, 0.14814814925193787, 0.2222222238779068, 0.1904761791229248 ]
r1gKmmKULB
false
[ "We compare perceptual, neural, and modeled representations of animal communication using machine learning, behavior, and physiology. " ]
[ "The information bottleneck principle (Shwartz-Ziv & Tishby, 2017) suggests that SGD-based training of deep neural networks results in optimally compressed hidden layers, from an information theoretic perspective.", "However, this claim was established on toy data.", "The goal of the work we present here is to test these claims in a realistic setting using a larger and deeper convolutional architecture, a ResNet model.", "We trained PixelCNN++ models as inverse representation decoders to measure the mutual information between hidden layers of a ResNet and input image data, when trained for (1) classification and (2) autoencoding.", "We find that two stages of learning happen for both training regimes, and that compression does occur, even for an autoencoder.", "Sampling images by conditioning on hidden layers’ activations offers an intuitive visualisation to understand what a ResNets learns to forget.", "The ResNet architecture enables very deep CNNs.", "We show that learning representations using a ResNet results in information compression in hidden layers.", "We set out in this research to test some of the claims by Shwartz-Ziv & Tishby (2017) regarding the information bottleneck principle applied to deep learning.", "By defining a lower bound on the MI and 'decoder' models to compute the MI during classifier and autoencoder training regimes, we explored the notion of compression for generalisation in the context of realistic images and a modern architecture choice.For both classification and autoencoding we observed two stages of learning, characterised by: (1) an initial and relatively short-lived increase and (2) a longer decrease in MI between hidden layers and input training data.", "Although we cannot confirm the mechanism responsible for compression (stochastic relaxation, for example), we gave an intuitive glimpse into what quality/type of information is kept and discarded as ResNets learn.", "PixelCNN++ models were used to estimate the MI between hidden layers (of the models under scrutiny) and input data; images were generated conditioned on hidden layers to illustrate the fitting and compression of data in a visual and intuitive fashion.The experimental procedure we developed for this research enables visualising class invariances throughout training.", "In particular, we see that when a ResNet is maximally (subject to model constraints) compressing information in its hidden layers, the class-irrelevant features of the input images are discarded: conditionally generated samples vary more while retaining information relevant to classification.", "This result has been shown in theory and for toy examples, but never illustrated to the degree that we do here." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.08888888359069824, 0, 0.1818181723356247, 0.2916666567325592, 0.10526315122842789, 0.10526315122842789, 0.07692307233810425, 0.12121211737394333, 0.1395348757505417, 0.13333332538604736, 0.12765957415103912, 0.21875, 0.1428571343421936, 0.14999999105930328 ]
HklbTjRcKX
true
[ "The Information Bottleneck Principle applied to ResNets, using PixelCNN++ models to decode mutual information and conditionally generate images for information illustration" ]
[ "We study the problem of safe adaptation: given a model trained on a variety of past experiences for some task, can this model learn to perform that task in a new situation while avoiding catastrophic failure?", "This problem setting occurs frequently in real-world reinforcement learning scenarios such as a vehicle adapting to drive in a new city, or a robotic drone adapting a policy trained only in simulation.", "While learning without catastrophic failures is exceptionally difficult, prior experience can allow us to learn models that make this much easier.", "These models might not directly transfer to new settings, but can enable cautious adaptation that is substantially safer than na\\\"{i}ve adaptation as well as learning from scratch.", "Building on this intuition, we propose risk-averse domain adaptation (RADA).", "RADA works in two steps: it first trains probabilistic model-based RL agents in a population of source domains to gain experience and capture epistemic uncertainty about the environment dynamics.", "Then, when dropped into a new environment, it employs a pessimistic exploration policy, selecting actions that have the best worst-case performance as forecasted by the probabilistic model.", "We show that this simple maximin policy accelerates domain adaptation in a safety-critical driving environment with varying vehicle sizes.", "We compare our approach against other approaches for adapting to new environments, including meta-reinforcement learning.", "An experienced human driving a rental car for the first time is initially very aware of her lack of familiarity with the car.", "How sensitive is it to acceleration and braking?", "How does it respond to steering?", "How wide is the vehicle and what is its turning radius?", "She drives mindfully, at low speeds, braking far ahead of desired stops, and making wide turns, all the while observing the car's responses and adapting to it.", "Within minutes, once she is familiar with the car, she begins to drive more fluently and efficiently.", "Humans draw upon their prior experiences to perform this kind of safe, quick adaptation to unfamiliar situations all the time, such as when playing with a new tennis racquet, or walking on a new slippery surface.", "Such problems are critical to address in autonomous systems: such as when a self-driving car must learn to drive in a new country, or when a planetary rover might have to learn to explore a harsh new environment.", "Missteps in real-world situations can cause real damage to robots and their environments.", "An important bottleneck in applying today's standard machine learning approaches to control in these real-world situations is that they are trained without any notion of safe behavior under uncertainty.", "Recent works have attempted to address this by proposing methods for safe exploration during reinforcement learning -in other words, how might an agent avoid risky actions during training time?", "This still requires that the robot acquire its notions of uncertainty and risks at the same time as it is learning to perform tasks in the new environment, which is difficult and precarious.", "Could we instead rely on transferring notions of uncertainty and risk acquired from prior experience in other related domains, such as in simulated environments, where safety may not be as much of a concern?", "In other words, could we make the safe learning problem easier through knowledge transfer, relaxing the problem to safe adaptation, like the human driver?", "How might the planetary 
rover draw on its experience in many varied terrains on Earth to perform meaningfully cautious actions during learning on the unknown terrain of a new planet?", "Motivated by these questions, we propose a model-based reinforcement learning approach called risk averse domain adaptation (RADA).", "RADA works by first pretraining a probabilistic dynamics model on a population of training domains with varied, unknown dynamics.", "Through this experience over many environments, the model learns to estimate the epistemic uncertainty (model uncertainty) of unknown environment dynamics, thus permitting estimation of a distribution of outcomes for any action executed by the agent.", "When introduced into a new target environment, RADA uses this estimated distribution of outcomes to select cautious actions that obey the following maximin notion of risk-aversion: among various candidate action sequences, it executes those that lead to the best worst-case performance, as predicted by the model.", "Much like the human driver in the example above, all the information collected during this cautious phase of exploration is fed back into the model to finetune it to the new domain, leading to increasingly confident predictions.", "Over time, RADA steadily estimates lower risks and approaches optimality in the target environment.", "As we demonstrate in experiments in a driving domain, the experience acquired during RADA's pretraining phase enables fast yet safe adaptation within only a handful of episodes.", "We have proposed RADA, a new approach to model-based reinforcement learning for safe, quick adaptation of RL agents in new environments with unknown dynamics.", "RADA relies on two key ideas: transferring knowledge from training in a variety of training environments, and using a maximin notion of risk-aversion during action selection in the target environment.", "We show in a physically accurate driving environment that RADA performs fast, safe adaptation to learn to drive cars around corners, even when they are up to two times larger than any cars it has driven at pretraining time." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.16393442451953888, 0.072727270424366, 0.11999999731779099, 0.1111111044883728, 0.1538461446762085, 0.28070175647735596, 0.1111111044883728, 0.2083333283662796, 0, 0.16326530277729034, 0.10810810327529907, 0, 0.10256409645080566, 0.07407406717538834, 0.13333332538604736, 0.19354838132858276, 0.14035087823867798, 0.1428571343421936, 0.14035087823867798, 0.17543859779834747, 0.13793103396892548, 0.23333333432674408, 0.08163265138864517, 0.25, 0.1304347813129425, 0.21739129722118378, 0.19999998807907104, 0.1428571343421936, 0.13333332538604736, 0.1860465109348297, 0.3333333134651184, 0.3461538553237915, 0.290909081697464, 0.1846153736114502 ]
BkxA5lBFvH
true
[ "Adaptation of an RL agent in a target environment with unknown dynamics is fast and safe when we transfer prior experience in a variety of environments and then select risk-averse actions during adaptation." ]
[ "We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns by simply looking at images and reading paired questions and answers.", "Our model builds an object-based scene representation and translates sentences into executable, symbolic programs.", "To bridge the learning of two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation.", "Analogical to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to.", "Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences.", "We use curriculum learning to guide the searching over the large compositional space of images and language.", "Extensive experiments demonstrate the accuracy and efficiency of our model on learning visual concepts, word representations, and semantic parsing of sentences.", "Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even new program domains.", "It also empowers applications including visual question answering and bidirectional image-text retrieval.", "Humans are capable of learning visual concepts by jointly understanding vision and language BID12 BID8 BID15 .", "Consider the example shown in Figure 1 -I.", "Imagine that someone with no prior knowledge of colors is presented with the images of the red and green cubes, paired with the questions and answers.", "They can easily identify the difference in objects' visual appearance (in this case, color), and align it to the corresponding words in the questions and answers (Red and Green).", "Other object attributes (e.g., shape) can be learned in a similar fashion.", "Starting from there, humans are able to inductively learn the correspondence between visual concepts and word semantics (e.g., spatial relations and referential expressions, Figure 1 -II), and unravel compositional logic from complex questions assisted by the learned visual concepts (Figure 1 -III, also see BID0 ).Motivated", "by this, we propose the neuro-symbolic concept learner (NS-CL), which jointly learns visual perception, words, and semantic language parsing from images and question-answer pairs. NS-CL has", "three modules: a neural-based perception module that extracts object-level representations from the scene, a visually-grounded semantic parser for translating questions into executable programs, and a symbolic program executor that reads out the perceptual representation of objects, classifies their attributes/relations, and executes the program to obtain an answer.Figure 1: Humans learn visual concepts, words, and semantic parsing jointly and incrementally. I. Learning", "visual concepts (red vs. green) starts from looking at simple scenes, reading simple questions, and reasoning over contrastive examples BID12 . II. Afterwards", ", we", "can interpret referential expressions based on the learned object-based concepts, and learn relational concepts (e.g., on the right of, the same material as). III Finally, we", "can interpret complex questions from visual cues by exploiting the compositional structure.NS-CL learns from natural supervision (i.e., images and QA pairs), requiring no annotations on images or semantic programs for sentences. 
Instead, analogical", "to human concept learning, it learns via curriculum learning. NS-CL starts by learning", "representations/concepts of individual objects from short questions (e.g., What's the color of the cylinder?) on simple scenes (≤3 objects). By doing so, it learns object-based", "concepts such as colors and shapes. NS-CL then learns relational concepts", "by leveraging these object-based concepts to interpret object referrals (e.g., Is there a box right of a cylinder?). The model iteratively adapts to more", "complex scenes and highly compositional questions.NS-CL's modularized design enables interpretable, robust, and accurate visual reasoning: it achieves state-of-the-art performance on the CLEVR dataset (Johnson et al., 2017a) . More importantly, it naturally learns", "disentangled visual and language concepts, enabling combinatorial generalization w.r.t. both visual scenes and semantic programs. In particular, we demonstrate four forms", "of generalization. First, NS-CL generalizes to scenes with", "more objects and longer semantic programs than those in the training set. Second, it generalizes to new visual attribute", "compositions, as demonstrated on the CLEVR-CoGenT (Johnson et al., 2017a) dataset. Third, it enables fast adaptation to novel visual", "concepts, such as learning a new color. Finally, the learned visual concepts transfer to", "new tasks, such as image-caption retrieval, without any extra fine-tuning.", "We presented a method that jointly learns visual concepts, words, and semantic parsing of sentences from natural supervision.", "The proposed framework, NS-CL, learns by looking at images and reading paired questions and answers, without any explicit supervision such as class labels for objects.", "Our model learns visual concepts with remarkable accuracy.", "Based upon the learned concepts, our model achieves good results on question answering, and more importantly, generalizes well to new visual compositions, new visual concepts, and new domain specific languages.The design of NS-CL suggests multiple research directions.", "First, constructing 3D object-based representations for realistic scenes needs further exploration BID1 BID5 .", "Second, our model assumes a domain-specific language for describing formal semantics.", "The integration of formal semantics into the processing of complex natural language would be meaningful future work BID4 Oh et al., 2017) .", "We hope our paper could motivate future research in visual concept learning, language learning, and compositionality.Our framework can also be extended to other domains such as video understanding and robotic manipulation.", "Here, we would need to discover semantic representations for actions and interactions (e.g., push) beyond static spatial relations.", "Along this direction, researchers have studied building symbolic representations for skills (Konidaris et al., 2018) and learning instruction semantics from interaction (Oh et al., 2017) in constrained setups.", "Applying neuro-symbolic learning frameworks for concepts and skills would be meaningful future work toward robotic learning in complex interactive environments." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.7868852615356445, 0.1538461446762085, 0.21739129722118378, 0.22727271914482117, 0.2702702581882477, 0.19512194395065308, 0.45454543828964233, 0.09090908616781235, 0.10810810327529907, 0.1463414579629898, 0.060606058686971664, 0.17777776718139648, 0.12244897335767746, 0.05128204822540283, 0.08955223113298416, 0.35999998450279236, 0.25974026322364807, 0.08695651590824127, 0.1599999964237213, 0.2711864411830902, 0.05405404791235924, 0.1599999964237213, 0.11428570747375488, 0.12244897335767746, 0.17543859779834747, 0.17391303181648254, 0.060606058686971664, 0.1818181723356247, 0.13333332538604736, 0.20512820780277252, 0.11428570747375488, 0.604651153087616, 0.2448979616165161, 0.1818181723356247, 0.24137930572032928, 0, 0.1111111044883728, 0.08510638028383255, 0.1090909019112587, 0.08888888359069824, 0.038461532443761826, 0.045454539358615875 ]
rJgMlhRctm
true
[ "We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them." ]
[ "Bayesian inference offers a theoretically grounded and general way to train neural networks and can potentially give calibrated uncertainty.", "However, it is challenging to specify a meaningful and tractable prior over the network parameters, and deal with the weight correlations in the posterior.", "To this end, this paper introduces two innovations:", "(i) a Gaussian process-based hierarchical model for the network parameters based on recently introduced unit embeddings that can flexibly encode weight structures, and", "(ii) input-dependent contextual variables for the weight prior that can provide convenient ways to regularize the function space being modeled by the network through the use of kernels. \n", "We show these models provide desirable test-time uncertainty estimates, demonstrate cases of modeling inductive biases for neural networks with kernels and demonstrate competitive predictive performance on an active learning benchmark.", "The question of which priors one should use for Bayesian neural networks is largely unanswered, as two considerations need to be balanced: First, we want to keep inference in the high dimensional weight posterior tractable; Second, we desire to express our beliefs about the properties of the modeled functions compactly by modeling the collection of weights.", "Especially the latter is typically hard, as functional regularization for weight-based models is non-trivial.", "In order to cope with richer posterior inference than mean-field typically achieves, a variety of structured posterior models have been proposed recently, for instance utilizing radial posteriors (Oh et al., 2019) , or rich weight posteriors based on Gaussian processes (Louizos and Welling, 2016) .", "When it comes to modeling priors on weights with correlations, recent work has attempted to capture feature-level correlations using for instance a horseshoe prior (Ghosh et al., 2018) .", "One interesting direction of inquiry has focused on utilizing hyper-networks in order to model distributions over weights for an entire network (Ha et al., 2016; Pradier et al., 2018) , or alternatively to utilize unit-level level variables combined with compact hyper-networks to regress to single weights and capture weight correlations through the auxiliary variables (Karaletsos et al., 2018) .", "We propose to tackle some of the challenges in modeling weight priors by extending the latter work and combining it with ideas from the Gaussian process literature to replace the hyper-network with a Gaussian process prior over weights.", "We explore the use of compositional kernels to add input-dependence to the prior for our model and obtain rich models with beneficial properties in tasks such as active learning, and generalization, while maintaining tractable inference properties." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.21276594698429108, 0.3199999928474426, 0, 0.23076923191547394, 0.145454540848732, 0.17241378128528595, 0.12987013161182404, 0.0476190447807312, 0.14084506034851074, 0.14035087823867798, 0.2368420958518982, 0.3333333134651184, 0.26229506731033325 ]
Bylhq134Fr
true
[ "We introduce a Gaussian Process Prior over weights in a neural network and explore its ability to model input-dependent weights with benefits to various tasks, including uncertainty estimation and generalization in the low-sample setting." ]
[ "We perform an in-depth investigation of the suitability of self-attention models for character-level neural machine translation.", "We test the standard transformer model, as well as a novel variant in which the encoder block combines information from nearby characters using convolution.", "We perform extensive experiments on WMT and UN datasets, testing both bilingual and multilingual translation to English using up to three input languages (French, Spanish, and Chinese).", "Our transformer variant consistently outperforms the standard transformer at the character-level and converges faster while learning more robust character-level alignments.", "Most existing Neural Machine Translation (NMT) models operate on the word or subword-level.", "Often, these models are memory inefficient because of large vocabulary size.", "Character-level models (Lee et al., 2017; Cherry et al., 2018) instead work directly on raw characters, resulting in a more compact language representation, while mitigating out-of-vocabulary (OOV) problems (Luong and Manning, 2016) .", "They are especially suitable for multilingual translation, where multiple languages can be modelled using the same character vocabulary.", "Multilingual training can lead to improvements in the overall performance without any increase in model complexity (Lee et al., 2017) .", "It also circumvents the need to train separate models for each language pair.", "Models based on self-attention have achieved excellent performance on a number of tasks including machine translation (Vaswani et al., 2017) and representation learning (Devlin et al., 2019; Yang et al., 2019) .", "Despite the success of these models, no previous work has considered their suitability for character-level translation, with the In this work, we perform an in-depth investigation of the suitability of self-attention models for character-level translation.", "We consider two models: the standard transformer from (Vaswani et al., 2017) ; as well as a novel variant, which we call the convtransformer (Figure 1 , Section 3).", "The latter uses convolution to facilitate interactions among nearby character representations.", "We evaluate these models on both bilingual and multilingual translation to English, using up to three input languages: French (FR), Spanish (ES), and Chinese (ZH).", "We compare their translation performance on close (e.g., FR and ES) and on distant (e.g., FR and ZH) input languages (Section 5.1), and we analyze their learned character alignments (Section 5.2).", "We find that self-attention models work surprisingly well for character-level translation, performing competitively with equivalent subword-level models while requiring up to 60% fewer parameters.", "At the character-level, the convtransformer performs better than the standard transformer, converging faster and producing more robust alignments.", "We performed a detailed investigation of the utility of self-attention models for character-level translation, testing the standard transformer architecture, as well as a novel variant augmented by convolution in the encoder to facilitate information propagation across characters.", "Our experiments show that self-attention performs very well on characterlevel translation, performing competitively with subword-level models, while requiring fewer parameters.", "Training on multiple input languages is also effective and leads to improvements across all languages when the source and target languages are similar.", "When the languages are different, we 
observe a drop in performance, in particular for the distant language.", "In future work, we will extend our analysis to include additional source and target languages from different language families, such as more Asian languages.", "We will also work towards improving the training efficiency of character-level models, which is one of their main bottlenecks.", "A Example model outputs Tables 3, 4 and 5 contain example translations produced by our different bilingual and multilingual models trained on the UN datasets." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 1, 0.10810810327529907, 0.1538461446762085, 0.1249999925494194, 0.1428571343421936, 0.1538461446762085, 0.043478257954120636, 0.12121211737394333, 0.05714285373687744, 0.2142857164144516, 0.1904761791229248, 0.5581395030021667, 0.0952380895614624, 0, 0.15789473056793213, 0.09756097197532654, 0.2631579041481018, 0.06451612710952759, 0.3404255211353302, 0.05714285373687744, 0.05714285373687744, 0.13333332538604736, 0, 0.24242423474788666, 0.10256409645080566 ]
BWlCpme3TS
true
[ "We perform an in-depth investigation of the suitability of self-attention models for character-level neural machine translation." ]
[ "The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures.", "Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies.", "Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset.", "This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples -- ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks.", "We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training.", "Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice.", "Medical diagnostics have increasingly become a more interesting and viable endpoint for machine learning.", "A general scarcity of publicly available medical data, however, inhibits its rapid development.", "Pre-training on tangentially related datasets such as ImageNet BID4 ) has been shown to help in circumstances where training data is limited, but may introduce unintended biases which are undesirable in a clinical setting.", "Furthermore, most clinical settings will drive a need for models which can accurately predict a large number of diagnostic outcomes.", "This essentially turns many medical problems into multi-label classification with a large number of targets, many of which may be subtle or poorly defined and are likely to be inconsistently labeled.", "In addition, unlike the traditional multi-label setting, predicting the absence of each label is as important as predicting its presence in order to minimize the possibility of misdiagnosis.", "Each of these challenges drive a need for architectures which consider clinical context to make the most of the data available.Chest x-rays are the most common type of radiology exam in the world and a particularly challenging example of multi-label classification in medical diagnostics.", "Making up nearly 45% of all radiological studies, the chest x-ray has achieved global ubiquity as a low-cost screening tool for a wealth of pathologies including lung cancer, tuberculosis, and pneumonia.", "Each scan can contain dozens of patterns corresponding to hundreds of potential pathologies and can thus be difficult to interpret, suffering from high disagreement rates between radiologists and often resulting in unnecessary follow-up procedures.", "Complex interactions between abnormal patterns frequently have significant clinical meaning that provides radiologists with additional context.", "For example, a study labeled to indicate the presence of cardiomegaly (enlargement of the cardiac silhouette) is more likely to additionally have pulmonary edema (abnormal fluid in the extravascular tissue of the lung) as the former may suggest left ventricular failure which often causes the latter.", "The presence of edema further predicates the possible 
presence of both consolidation (air space opacification) and a pleural effusion (abnormal fluid in the pleural space).", "Training a model to recognize the potential for these interdependencies could enable better prediction of pathologic outcomes across all categories while maximizing the data utilization and its statistical efficiency. Among the aforementioned challenges, this work first addresses the problem of predicting multiple labels simultaneously while taking into account their conditional dependencies during both training and inference.", "Similar problems have been raised and analyzed in the work of BID30 BID1 with the application of image tagging, both outside the medical context.", "The work of BID26 on chest x-ray annotations is closest to ours.", "All of them utilize out-of-the-box decoders based on recurrent neural networks (RNNs) to sequentially predict the labels.", "Such a naive adoption of RNNs is problematic and often fails to attend to peculiarities of the medical problem in their design, which we elaborate on in Section 2.3 and Section 3.3.1. In addition, we hypothesize that the need for pre-training may be safely removed when there are sufficient medical data available.", "To verify this, all our models are trained from scratch, without using any extra data from other domains.", "We directly compare our results with those of models that are pre-trained on ImageNet.", "Furthermore, to address the issue of clinical interpretability, we juxtapose a collection of alternative metrics along with those traditionally used in machine learning, all of which are reported in our benchmark.", "To improve the quality of computer-assisted diagnosis of chest x-rays, we proposed a two-stage end-to-end neural network model that combines a densely connected image encoder with a recurrent neural network decoder.", "The first stage was chosen to address the challenges to learning presented by high-resolution medical images and limited training set sizes.", "The second stage was designed to allow the model to exploit statistical dependencies between labels in order to improve the accuracy of its predictions.", "Finally, the model was trained from scratch to ensure that the best application-specific features were captured.", "Our experiments have demonstrated both the feasibility and effectiveness of this approach.", "Indeed, our baseline model significantly outperformed the current state-of-the-art.", "The proposed set of metrics provides a meaningful quantification of this performance and will facilitate comparisons with future work. While a limited exploration into the value of learning interdependencies among labels yields promising results, additional experimentation will be required to further explore the potential of this methodology both as it applies specifically to chest x-rays and to medical diagnostics as a whole.", "One potential concern with this approach is the risk of learning biased interdependencies from a limited training set which does not accurately represent a realistic distribution of pathologies -- if every example of cardiomegaly is also one of cardiac failure, the model may learn to depend too much on the presence of other patterns such as edemas which do not always accompany enlargement of the cardiac silhouette.", "This risk is heightened when dealing with data labeled with a scheme which mixes pathologies, such as pneumonia, with patterns symptomatic of those pathologies, such as consolidation.", "The best approach to maximizing feature extraction and 
leveraging interdependencies among target labels likely entails training from data labeled with an ontology that inherently poses some consistent known relational structure.", "This will be the endpoint of a future study." ]
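The encoder-decoder idea described in this record can be illustrated with a minimal sketch: a small convolutional encoder (a stand-in for the densely connected encoder the paper uses) feeds an LSTM that emits the 14 label decisions one at a time, so each prediction can condition on the labels already emitted. The layer sizes, greedy feedback, and toy input below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiLabelDecoder(nn.Module):
    """Toy CNN encoder + LSTM decoder that predicts 14 labels sequentially,
    so each label decision can condition on the labels already emitted."""

    def __init__(self, num_labels=14, feat_dim=128, hidden_dim=128):
        super().__init__()
        # Stand-in encoder; the paper uses a densely connected (DenseNet-style) encoder.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.num_labels = num_labels
        # One decoding step per label: input is the image feature plus the previous decision.
        self.cell = nn.LSTMCell(feat_dim + 1, hidden_dim)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        feat = self.encoder(x)                        # (B, feat_dim)
        h = feat.new_zeros(x.size(0), self.cell.hidden_size)
        c = torch.zeros_like(h)
        prev = feat.new_zeros(x.size(0), 1)           # no label emitted yet
        logits = []
        for _ in range(self.num_labels):
            h, c = self.cell(torch.cat([feat, prev], dim=1), (h, c))
            step = self.out(h)                        # (B, 1) logit for this label
            logits.append(step)
            # Greedy feedback; during training one would teacher-force the true labels.
            prev = (torch.sigmoid(step) > 0.5).float()
        return torch.cat(logits, dim=1)               # (B, num_labels) logits

model = MultiLabelDecoder()
xray = torch.randn(2, 1, 224, 224)                    # toy grayscale chest x-rays
assert model(xray).shape == (2, 14)
```

Training such a sketch would pair the per-label logits with a binary cross-entropy loss (e.g. `nn.BCEWithLogitsLoss`), since the absence of each label matters as much as its presence.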
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09999999403953552, 0.1666666567325592, 0.10810810327529907, 0.11764705181121826, 0.30188679695129395, 0.07692307233810425, 0, 0.07692307233810425, 0.043478257954120636, 0.0624999962747097, 0.09756097197532654, 0.1666666567325592, 0.16326530277729034, 0.1428571343421936, 0.09302324801683426, 0, 0.11764705181121826, 0.11764705181121826, 0.0952380895614624, 0.11764705181121826, 0.23999999463558197, 0.3333333432674408, 0.13793103396892548, 0.06666666269302368, 0.1538461446762085, 0.19512194395065308, 0.25641024112701416, 0.12121211737394333, 0.1764705777168274, 0.1428571343421936, 0.1599999964237213, 0.1818181723356247, 0.158730149269104, 0.09090908616781235, 0.05714285373687744, 0.04651162400841713, 0.1818181723356247 ]
H1uP7ebAW
true
[ "we present the state-of-the-art results of using neural networks to diagnose chest x-rays" ]
[ "Semmelhack et al. (2014) have achieved high classification accuracy in distinguishing swim bouts of zebrafish using a Support Vector Machine (SVM).", "Convolutional Neural Networks (CNNs) have reached superior performance in various image recognition tasks over SVMs, but these powerful networks remain a black box.", "Reaching better transparency helps to build trust in their classifications and makes learned features interpretable to experts.", "Using a recently developed technique called Deep Taylor Decomposition, we generated heatmaps to highlight input regions of high relevance for predictions.", "We find that our CNN makes predictions by analyzing the steadiness of the tail's trunk, which markedly differs from the manually extracted features used by Semmelhack et al. (2014).", "We further uncovered that the network paid attention to experimental artifacts.", "Removing these artifacts ensured the validity of predictions.", "After correction, our best CNN beats the SVM by 6.12%, achieving a classification accuracy of 96.32%.", "Our work thus demonstrates the utility of AI explainability for CNNs.", "In the study by Semmelhack et al. (2014) , a well-performing classifier allowed to correlate neural interventions with behavioral changes.", "Support Vector Machines (SVMs) were commonly applied to such classification tasks, relying on feature engineering by domain experts.", "In recent years, Convolutional Neural Networks (CNNs) have proven to reach high accuracies in classification tasks on images and videos reducing the need for manual feature engineering.", "After Lecun & Bengio (1995) introduced them in the 90s, CNNs had their break-through in the competition ILSVRC2012 with the architecture of .", "Since then, more and more sophisticated architectures have been designed enabling them to identify increasingly abstract features.", "This development has become possible due to the availability of larger training sets, computing resources, GPU training implementations, and better regularization techniques, such as Dropout ; Zeiler & Fergus (2014) ).", "While these more complex deep neural network architectures achieved better results, they also kept their learnt features hidden if not further analyzed.", "This caused CNNs to come with significant drawbacks: a lack of trust in their classifications, missing interpretability of learned features in the application domain, and the absence of hints as to what data could enhance performance (Molnar (2019) ).", "Explaining the decisions made by CNNs might even become a legal requirement in certain applications (Alber et al. (2018) ).", "In order to overcome these drawbacks, subsequent research has developed approaches to shed light on the inner workings of CNNs.", "These approaches have been successfully used for uncovering how CNNs might learn unintended spurious correlations, termed \"Clever Hans\" predictions (Lapuschkin et al. 
(2019) ).", "Such predictions could even become harmful if the predictions entailed decisions with severe consequences (Leslie (2019) ).", "Also, since deep neural networks have become a popular machine learning technique in applied domains, spurious correlations would undermine scientific discoveries.", "This paper focuses on zebrafish research as an applied domain of AI explainability, considering that the research community around this organism has grown immensely.", "The zebrafish is an excellent model organism for vertebrates, including humans, due to the following four reasons: The genetic codes of humans and zebrafish are about 70% orthologue (Howe et al. (2013) ).", "The fish are translucent which allows non-invasive observation of changes in the organism (Bianco et al. (2011) ).", "Furthermore, zebrafish are relatively cheap to maintain, produce plenty of offspring, and develop rapidly.", "Finally, they are capable of recovering their brain structures within days after brain injury (Kishimoto et al. (2011) ; Kizil et al. (2012) ).", "In this paper, we adapt CNNs to work on highly controlled zebrafish video recordings and show the utility of a recently developed AI explainability technique on this task.", "We train the network on optical flow for binary classifying swim bouts and achieve superior performance when compared to the current state-of-the-art in bout classification (Semmelhack et al. (2014) ).", "We then create heatmaps over the videos with the \"iNNvestigate\" toolbox (Alber et al. (2018) ) which highlight the areas that our CNN pays attention to when making a prediction.", "The resulting heatmaps show that our CNN learns reasonable features which are very different from those manually composed by Semmelhack et al. (2014) .", "We trained a two-stream Convolutional Neural Network (CNN) on recordings of larval zebrafish to classify prey and spontaneous swim bouts.", "We then visualized the learned weights by generating relevance heatmaps showing which regions of the input the network focuses on while performing its classifications.", "We find that our CNN is capable of learning highly discriminating tail features.", "These features seem to be quite different from the ones used in the SVM classification by Semmelhack et al. (2014) -the previous state-of-the-art in bout classification.", "The heatmaps further uncovered a \"Clever Hans\" type of correlation.", "After removing this spurious correlation and retraining the network, the network reached a test accuracy of 96.32%, which is 6.12% points better than the accuracy achieved by Semmelhack et al. (2014) .", "Judging from the test accuracy, our CNN has learned better discriminating features than those used for the SVM by Semmelhack et al. 
(2014), and has thus beaten manual feature engineering in this application domain.", "Steadiness of the fish's trunk as differentiating feature.", "The relevance heatmaps and high accuracy show that the network achieves correct classifications by looking for salient features in the trunk of the tail while largely disregarding the tip.", "A sharp and clear relevance profile confined to the edges of the trunk gives a clear sign of a prey bout.", "The opposite speaks for a spontaneous bout.", "Here, attention spreads out to capture the strong vertical oscillation of the trunk.", "For this reason we conclude that the CNN makes its predictions based on the steadiness of the trunk.", "We believe our interpretation of learned features to be in line with existing research on the kinematics of prey bouts.", "As shown by Borla et al. (2002) and McElligott & O'Malley (2005) , prey bouts require fine control of the tail's axial kinematics to perform precise swim movements.", "Zebrafish noticeably reduce their yaw rotation and stabilize the positioning of their head to make a targeted move at their prey.", "Such precise movements are not required in spontaneous swim bouts.", "The heatmaps indicate that the network has found clear evidence for these kinds of motion in the trunk of the tail.", "Furthermore, we argue that the CNN has learned features which are very different from the ones identified by Semmelhack et al. (2014) .", "All of their features -as outlined in Section 2 -, except the second one, rely on information from the tip of the tail and a complete sequence of frames.", "However, many optical flow frames do not depict the tip of the tail because of its small size and high speed.", "This might have happened due to suboptimal parameter settings which could not handle the sometimes long distances which the tip traveled between frames.", "Also, subsamples include only 85 of the original 150 frames for each video.", "Due to its higher performance, we conclude not only that the CNN has learned a different set of features, but also that these features must bear higher discriminative power.", "Origin of the \"Clever Hans\" correlation.", "The telltale motion in the top left corner stems from a substance called agarose, which the fish's head was embedded in to keep it steady.", "It is quite curious that, while not visible to human eyes, the agarose seems to be moving each time the fish performed a spontaneous swim bout, but not so for a prey bout.", "We speculate that this correlation was unintentionally introduced by the experimenters who might have tapped the petri dish to induce the fish to perform a spontaneous swim bout.", "Future work.", "Calculating and storing optical flow is expensive.", "If we attained similar performance on original frames, training would be considerably cheaper.", "While we can confirm the findings by that the spatial stream by itself reaches a fairly competitive accuracy, it provides only very minor improvement to the overall network.", "Yet, this stream is probably looking for very similar features as the temporal stream, because it focuses largely on the upper half of the tail, just like the temporal stream.", "If that is the case, we should see improved performance when giving the spatial stream a sequence of frames.", "It should be interesting to probe whether the spatial stream could then match or even surpass the performance of the temporal stream.", "Furthermore, CNNs such as the one used in this paper could be used to investigate brain recovery in larval 
zebrafish.", "It has been shown on a cellular level that zebrafish can heal their brain within days after a lesion.", "However, this needs to be proven on a behavioral level (Krakauer et al. (2017) ).", "Future work could perform a lesion study on the optic tectum in zebrafish (McDowell et al. (2004) ; Roeser & Baier (2003) ), a brain region responsible for translating visual input into motor output.", "CNNs could then assess swim bouts of recovered fish and give a measure for potential behavioral changes.", "Insights from relevance heatmaps would be required if the CNN were not able not distinguish recovered fish from healthy ones." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1904761791229248, 0.045454539358615875, 0.10810810327529907, 0.1428571343421936, 0.25531914830207825, 0.1249999925494194, 0.13793103396892548, 0.307692289352417, 0.3125, 0.1463414579629898, 0.1538461446762085, 0.1666666567325592, 0.09999999403953552, 0.05405404791235924, 0.07843136787414551, 0.04651162400841713, 0.1818181723356247, 0.1463414579629898, 0.14999999105930328, 0, 0.05405404791235924, 0.0952380895614624, 0.22727271914482117, 0.11538460850715637, 0.10256409645080566, 0.11428570747375488, 0.0476190410554409, 0.3829787075519562, 0.19999998807907104, 0.16326530277729034, 0.13636362552642822, 0.2926829159259796, 0.2790697515010834, 0.23529411852359772, 0.1818181723356247, 0.12903225421905518, 0.15686273574829102, 0.18867923319339752, 0.13793103396892548, 0.1702127605676651, 0.15789473056793213, 0.0714285671710968, 0.12121211737394333, 0.21621620655059814, 0.29999998211860657, 0.16326530277729034, 0.14999999105930328, 0.06451612710952759, 0.10256409645080566, 0.2380952388048172, 0.21739129722118378, 0.09999999403953552, 0.0476190410554409, 0.11764705181121826, 0.25, 0.14814814925193787, 0.09090908616781235, 0.07999999821186066, 0.17391303181648254, 0, 0.05882352590560913, 0.1304347813129425, 0.17391303181648254, 0.1538461446762085, 0.09999999403953552, 0.10256409645080566, 0.1538461446762085, 0.1111111044883728, 0.14814814925193787, 0.10526315122842789, 0.10256409645080566 ]
rJgQkT4twH
true
[ "We demonstrate the utility of a recent AI explainability technique by visualizing the learned features of a CNN trained on binary classification of zebrafish movements." ]
[ "When communicating, humans rely on internally-consistent language representations.", "That is, as speakers, we expect listeners to behave the same way we do when we listen.", "This work proposes several methods for encouraging such internal consistency in dialog agents in an emergent communication setting.", "We consider two hypotheses about the effect of internal-consistency constraints:", "1) that they improve agents’ ability to refer to unseen referents, and", "2) that they improve agents’ ability to generalize across communicative roles (e.g. performing as a speaker de- spite only being trained as a listener).", "While we do not find evidence in favor of the former, our results show significant support for the latter.", "Emergent communication is the study of how linguistic protocols evolve when agents are tasked to cooperate.", "For example, agents engaged in a simple object retrieval task learn to communicate with one another in order to get the items they want .", "To date, work of this type has each agent assume a conversational role.", "Thus, agents are often trained only to speak or only to listen , or similarily trained to speak using a vocabulary disjoint from the vocabulary it is understands as a listener-e.g. speaking only to ask questions (\"what color?\") and listening only to comprehend the answer (\"blue\") Das et al., 2017) .", "These assumptions are misaligned with how we think about human communication, and with the way we'd like computational models to work in practice.", "As humans, not only can we easily shift between roles, we also know that there is inherent symmetry between these roles: we expect others to speak (or listen) similarly to the way we do, and we know that others expect the same of us.", "We test if dialog agents that incorporate the symmetry between themselves and their communicative partners learn more generalizable representations than those which do not.", "We introduce three modifications to the agents to encourage that they abide by the \"golden rule\": speak/listen as you would want to be spoken/listened to.", "Specifically, these modifications include self-play training objectives, shared embedding spaces, and symmetric decoding and encoding mechanisms that share parameters.", "We test two hypotheses about the effect of the proposed modifications on emergent communication:", "1. Internal-consistency constraints improve agents' ability to generalize to unseen items-e.g. training on \"red square\" and \"blue circle\" and then testing on \"blue square\".", "2. Internal-consistency constraints improve agents' ability to generalize across communicative roles-e.g. training on \"blue\" as a listener, and using \"blue\" as a speaker when testing.", "We evaluate the effect of each of the proposed modifications with two reference game datasets and two model architectures, an RNN model used by and a Transformer model.", "We find no evidence to support that internal-consistency improves generalization to unseen items (Hypothesis 1), but significant evidence that these proposed constraints enable models to generalize learned representations across communicative roles (Hypothesis 2), even in the case of where the agent receives no direct training in the target (test) role.", "All of our code and data are available at bit.ly/internal-consistency-emergent-communication.", "Notation.", "The space of possible references is parameterized by the number of attributes n f that describe each item (e.g. 
color) and the number of values n_v each attribute can take (e.g. {red, blue}).", "Each item o is a bag-of-features vector o ∈ {0, 1}^N, where N = n_f · n_v. Each index o_i is 1 if o expresses the ith feature value. The speaker produces a message with symbols from a vocabulary V with length L. For comparison, we use the best-performing setting |V| = 100 and L = 10 from previous work. Symbols in V are represented as 1-hot vectors. In each round of the reference game, we construct ⟨C, r, r̂⟩, where C is the context (set of item column vectors stacked into a matrix), r is a vector representing the referent, and r̂ is the index of the referent in C. We uniformly sample k − 1 items as distractors to form C = {o_1, . . . , o_{k−1}} ∪ {r}.", "The distractors are sampled randomly each round (in every epoch).", "We propose three methods for encouraging dialog agents to follow \"the golden rule\": speak/listen to others as you would expect to be spoken/listened to.", "In the emergent communication setting, we find that the internal-consistency constraints do not systematically improve models' generalization to novel items, but both the self-play objective and shared embeddings significantly improve performance when agents are tested on roles they were not directly trained for.", "In fact, when trained in one role and tested on another, these internal-consistency constraints allow the agents to perform about as well as if they had been trained in the target role." ]
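The notation in this record can be made concrete with a small sketch that builds bag-of-features items and samples one round of the reference game (context, referent, referent index). The attribute counts and context size below are toy assumptions; message generation (vocabulary V, length L) is not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
N_F, N_V = 3, 4                 # toy sizes: attributes per item, values per attribute
N = N_F * N_V                   # items live in {0, 1}^N

def sample_item():
    """Bag-of-features vector with exactly one active value per attribute."""
    o = np.zeros(N, dtype=np.int64)
    for f in range(N_F):
        o[f * N_V + rng.integers(N_V)] = 1
    return o

def sample_round(k=5):
    """One round <C, r, r_hat>: the referent plus k - 1 random distractors."""
    referent = sample_item()
    items = [sample_item() for _ in range(k - 1)] + [referent]
    perm = rng.permutation(k)
    context = np.stack(items, axis=1)[:, perm]      # items as columns of C
    r_hat = int(np.where(perm == k - 1)[0][0])      # column index of the referent
    return context, referent, r_hat

C, r, r_hat = sample_round()
print(C.shape, r_hat)            # (12, 5) and the referent's index in [0, 5)
```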
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.06896550953388214, 0.12903225421905518, 0, 0.3199999928474426, 0.4324324131011963, 0, 0.19999998807907104, 0.1111111044883728, 0, 0.07692307233810425, 0.0555555522441864, 0.08510638028383255, 0.15789473056793213, 0.17142856121063232, 0.0624999962747097, 0.07407406717538834, 0.3529411852359772, 0.4324324131011963, 0, 0.2545454502105713, 0, 0.0476190447807312, 0.019999997690320015, 0, 0.11428570747375488, 0.2641509473323822, 0.1463414579629898 ]
SkgJOAEtvr
true
[ "Internal-consistency constraints improve agents ability to develop emergent protocols that generalize across communicative roles." ]
[ "Neural networks (NNs) are able to perform tasks that rely on compositional structure even though they lack obvious mechanisms for representing this structure.", "To analyze the internal representations that enable such success, we propose ROLE, a technique that detects whether these representations implicitly encode symbolic structure.", "ROLE learns to approximate the representations of a target encoder E by learning a symbolic constituent structure and an embedding of that structure into E’s representational vector space.", "The constituents of the approximating symbol structure are defined by structural positions — roles — that can be filled by symbols.", "We show that when E is constructed to explicitly embed a particular type of structure (e.g., string or tree), ROLE successfully extracts the ground-truth roles defining that structure.", "We then analyze a seq2seq network trained to perform a more complex compositional task (SCAN), where there is no ground truth role scheme available.", "For this model, ROLE successfully discovers an interpretable symbolic structure that the model implicitly uses to perform the SCAN task, providing a comprehensive account of the link between the representations and the behavior of a notoriously hard-to-interpret type of model.", "We verify the causal importance of the discovered symbolic structure by showing that, when we systematically manipulate hidden embeddings based on this symbolic structure, the model’s output is also changed in the way predicted by our analysis.", "Finally, we use ROLE to explore whether popular sentence embedding models are capturing compositional structure and find evidence that they are not; we conclude by discussing how insights from ROLE can be used to impart new inductive biases that will improve the compositional abilities of such models.", "We have introduced ROLE, a neural network that learns to approximate the representations of an existing target neural network E using an explicit symbolic structure.", "ROLE successfully discovers symbolic structure both in models that explicitly define this structure and in an RNN without explicit structure trained on the fully-compositional SCAN task.", "When applied to sentence embedding models trained on partially-compositional tasks, ROLE performs better than hand-specified role schemes but still provides little evidence that the sentence encodings represent compositional structure.", "Uncovering the latent symbolic structure of NN representations on fully-compositional tasks is a significant step towards explaining how they can achieve the level of compositional generalization that they do, and suggests types of inductive bias to improve such generalization for partially-compositional tasks.", "We offer several observations about this algorithm.", "1. 
This algorithm may seem convoluted, but a few observations can illuminate how the roles assigned by such an algorithm support success on the SCAN task.", "First, a sequence will contain role 30 if and only if it contains and, and it will contain role 17 if and only if it contains after.", "Thus, by implicitly checking for the presence of these two roles (regardless of the fillers bound to them), the decoder can tell whether the output involves one or two basic commands, where the presence of and or after leads to two basic commands and the absence of both leads to one basic command.", "Moreover, if there are two basic commands, whether it is role 17 or role 30 that is present can tell the decoder whether the input order of these commands also corresponds to their output order (when it is and in play, i.e., role 30), or if the input order is reversed (when it is after in play, i.e., role 17).", "With these basic structural facts established, the decoder can begin to decode the specific commands.", "For example, if the input is a sequence with after, it can begin with the command after after, which it can decode by checking which fillers are bound to the relevant roles for that type of command.", "It may seem odd that so many of the roles are based on position (e.g., \"first word\" and \"second-to-last word\"), rather than more functionally-relevant categories such as \"direction word.\"", "However, this approach may actually be more efficient: Each command consists of a single mandatory element (namely, an action word such as walk or jump) followed by several optional modifiers (namely, rotation words, direction words, and cardinalities).", "Because most of the word categories are optional, it might be inefficient to check for the presence of, e.g., a cardinality, since many sequences will not have one.", "By contrast, every sequence will have a last word, and checking the identity of the last word provides much functionally-relevant information: if that word is not a cardinality, then the decoder knows that there is no cardinality present in the command (because if there were, it would be the last word); and if it is a cardinality, then that is important to know, because the presence of twice or thrice can dramatically affect the shape of the output sequence.", "In this light, it is unsurprising that the SCAN encoder has implicitly learned several different roles that essentially mean the last element of a particular subcommand.", "2. 
The algorithm does not constitute a simple, transparent role scheme.", "But its job is to describe the representations that the original network produces, and we have no a priori expectation about how complex that process may be.", "The role-assignment algorithm implicitly learned by ROLE is interpretable locally (each line is readily expressible in simple English), but not intuitively transparent globally.", "We see this as a positive result, in two respects.", "First, it shows why ROLE is crucial: no human-generated role scheme would provide a good approximation to this algorithm.", "Such an algorithm can only be identified because ROLE is able to use gradient descent to find role schemes far more complex than any we would hypothesize intuitively.", "This enables us to analyze networks far more complex than we could analyze previously, being necessarily limited to hand-designed role schemes based on human intuitions about how to perform the task.", "Second, when future work illuminates the computation in the original SCAN GRU seq2seq decoder, the baroqueness of the role-assignment algorithm that ROLE has shown to be implicit in the seq2seq encoder can potentially explain certain limitations in the original model, which is known to suffer from severe failures of systematic generalization outside the training distribution (Lake and Baroni, 2018).", "It is reasonable to hypothesize that systematic generalization requires that the encoder learn an implicit role scheme that is relatively simple and highly compositional.", "Future proposals for improving the systematic generalization of models on SCAN can be examined using ROLE to test the hypothesis that greater systematicity requires greater compositional simplicity in the role scheme implicitly learned by the encoder.", "3. While the role-assignment algorithm of A.8.1 may not be simple, from a certain perspective, it is quite surprising that it is not far more complex.", "Although ROLE is provided 50 roles to learn to deploy as it likes, it only chooses to use 16 of them (only 16 are ever selected as the arg max(a t ); see Sec. 6.1).", "Furthermore, the SCAN grammar generates 20,910 input sequences, containing a total of 151,688 words (an average of 7.25 words per input).", "This means that, if one were to generate a series of conditional statements to determine which role is assigned to each word in every context, this could in theory require up to 151,688 conditionals (e.g., \"if the filler is 'jump' in the context 'walk thrice after opposite left', then assign role 17\").", "However, our algorithm involves just 47 conditionals.", "This reduction helps explain how the model performs so well on the test set: If it used many more of the 151,688 possible conditional rules, it would completely overfit the training examples in a way that would be unlikely to generalize.", "The 47-conditional algorithm we found is more likely to generalize by abstracting over many details of the context.", "4. 
Were it not for ROLE's ability to characterize the representations generated by the original encoder in terms of implicit roles, providing an equally complete and accurate interpretation of those representations would necessarily require identifying the conditions determining the activation level of each of the 100 neurons hosting those representations.", "It seems to us grossly overly optimistic to estimate that each neuron's activation level in the representation of a given input could be characterized by a property of the input statable in, say, two lines of roughly 20 words/symbols; yet even then, the algorithm would require 200 lines, whereas the algorithm in A.8.1 requires 47 lines of that scale.", "Thus, by even such a crude estimate of the degree of complexity expected for an algorithm describing the representations in terms of neuron activities, the algorithm we find, stated over roles, is 4 times simpler." ]
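To give a flavor of the kind of keyword- and position-based conditionals discussed in this record, here is a hand-written toy role-assignment function. It is purely illustrative: the scheme ROLE actually recovers is learned, uses 16 of 50 available roles, and comprises roughly 47 such conditionals; the role names below are invented for readability.

```python
def toy_role_assignment(words):
    """A few hand-written conditionals in the spirit of the learned scheme:
    keyword-triggered roles plus positional roles such as 'last word'."""
    bindings = []
    if "and" in words:
        bindings.append(("role_30", "and"))      # present iff the command contains 'and'
    if "after" in words:
        bindings.append(("role_17", "after"))    # present iff the command contains 'after'
    # Positional roles carry most of the information about optional modifiers.
    bindings.append(("first_word", words[0]))
    bindings.append(("last_word", words[-1]))
    return bindings

print(toy_role_assignment("walk thrice after jump opposite left".split()))
```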
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.20512819290161133, 0.21052631735801697, 0.1428571343421936, 0.1111111044883728, 0.17777776718139648, 0.14999999105930328, 0.2857142686843872, 0.16326530277729034, 0.14035087823867798, 0.25641024112701416, 0.19999998807907104, 0.13333332538604736, 0.15094339847564697, 0.0833333283662796, 0.04878048226237297, 0.0624999962747097, 0, 0.06896550953388214, 0, 0.08695651590824127, 0.04081632196903229, 0.038461532443761826, 0.043478257954120636, 0.08695651590824127, 0.09756097197532654, 0.0714285671710968, 0.0952380895614624, 0.10256409645080566, 0.2222222238779068, 0.0555555522441864, 0, 0.04444443807005882, 0.0624999962747097, 0.10526315122842789, 0.12244897335767746, 0.0952380895614624, 0, 0.05405404791235924, 0.0634920597076416, 0, 0.11320754140615463, 0, 0.035087715834379196, 0.0923076868057251, 0.08510638028383255 ]
BklMDCVtvr
true
[ "We introduce a new analysis technique that discovers interpretable compositional structure in notoriously hard-to-interpret recurrent neural networks." ]
[ "The vertebrate visual system is hierarchically organized to process visual information in successive stages.", "Neural representations vary drastically across the first stages of visual processing: at the output of the retina, ganglion cell receptive fields (RFs) exhibit a clear antagonistic center-surround structure, whereas in the primary visual cortex (V1), typical RFs are sharply tuned to a precise orientation.", "There is currently no unified theory explaining these differences in representations across layers.", "Here, using a deep convolutional neural network trained on image recognition as a model of the visual system, we show that such differences in representation can emerge as a direct consequence of different neural resource constraints on the retinal and cortical networks, and for the first time we find a single model from which both geometries spontaneously emerge at the appropriate stages of visual processing.", "The key constraint is a reduced number of neurons at the retinal output, consistent with the anatomy of the optic nerve as a stringent bottleneck.", "Second, we find that, for simple downstream cortical networks, visual representations at the retinal output emerge as nonlinear and lossy feature detectors, whereas they emerge as linear and faithful encoders of the visual scene for more complex cortical networks.", "This result predicts that the retinas of small vertebrates (e.g. salamander, frog) should perform sophisticated nonlinear computations, extracting features directly relevant to behavior, whereas retinas of large animals such as primates should mostly encode the visual scene linearly and respond to a much broader range of stimuli.", "These predictions could reconcile the two seemingly incompatible views of the retina as either performing feature extraction or efficient coding of natural scenes, by suggesting that all vertebrates lie on a spectrum between these two objectives, depending on the degree of neural resources allocated to their visual system.", "Why did natural selection shape our visual representations to be the way they are?", "Traditionally, the properties of the early visual system have been explained with theories of efficient coding, which are based on the premise that the neural representations are optimal at preserving information about the visual scene, under a set of metabolic constraints such as total firing rate or total number of synapses.", "These theories can successfully account for the antagonistic center-surround structure of receptive fields (RFs) found in the retina BID0 BID18 BID35 BID21 BID13 , as well as for the oriented structure of RFs found in the primary visual cortex V1 BID30 BID3 .However", ", a number of properties of the early visual system remain unexplained. First,", "it is unclear why RF geometries would be so different in the retina and V1. A study", "BID36 has proposed that both representations are optimal at preserving visual information under different metabolic constraints: a constraint on total number of synapses for the retina, and one on total firing rate in V1. However", ", it is unclear why the two systems would be optimized for these two different objectives. Second", ", there is a great diversity of ganglion cell types at the output the retina BID17 , with each cell type tiling the entire visual field and performing a specific computation. Interestingly", ", some of these types perform a highly nonlinear computation, extracting specific, behaviorally-relevant cues from the visual scene (e.g. 
direction-selective cells, objectmotion-selective cells), whereas other types are better approximated by a quasi-linear model, and respond to a broad range of stimuli (e.g. midget cells in the primate BID32 and quasi-linear pixel-encoders in the mouse BID20 ). Intriguingly", ", although quasi-linear and more nonlinear types exist in species of all sizes (e.g. primate parasol cells are nonlinear BID10 ), the proportion of cells performing a rather linear encoding versus a nonlinear feature detection seems to vary across species. For example", ", the most common ganglion cell type in the primate retina is fairly well approximated by a quasi-linear pixel-encoder (midget cells, 50% of all cells and >95% in the central retina BID32 BID11 ), whereas the most common cell type in mouse acts as a specific feature detector, thought to serve as an alarm system for overhead predators (W3 cells, 13% of all ganglion cells BID38 ). Again, theories", "of efficient coding have not been able to account for this diversity of computations found across cell types and across species.The limitations of current efficient coding theories might reside in the simplistic assumption that the objective is to simply relay indiscriminately all visual information to the next stages of processing. Indeed, the ultimate", "goal of the visual system is to extract meaningful features from the visual scene in order to produce an adequate behavioral response, not necessarily to faithfully encode it. A recent line of work", "has proposed using the information bottleneck framework as a way to move beyond the simplistic objective of information preservation towards more realistic objectives BID5 . Another study has shown", "that by changing the objective from efficiently encoding the present to efficiently encoding the future (predictive coding), one could better account for the spatio-temporal RFs of V1 cells BID34 . Although promising, these", "approaches were limited to the study of a single layer of neurons, and they did not answer the aforementioned questions about cross-layer or cross-species differences. On the other hand, deep convolutional", "networks have proven to be accurate models of the visual system, whether they are trained directly on reproducing neural activity BID28 BID4 , or on a behaviorally relevant task BID37 BID14 BID4 ), but they have not yet been used to study the visual system through the lens of efficient coding theories.In this study, we trained deep convolutional neural networks on image recognition (CIFAR-10, BID23 ) and varied their architectures to explore the sets of constraints that could have shaped vertebrates' early visual representations through natural selection. We modeled the visual system with a series", "of two convolutional networks, one corresponding to the retina and one downstream network corresponding to the ventral visual system in the brain. By varying the architecture of these networks", ", we first found that a reduction in the number of neurons at the retinal output -corresponding to a realistic physical constraint on the number of fibers in the optic nerve -accounted simultaneously for the emergence of center-surround RFs in our model of the retina, and for the emergence of oriented receptive fields in the primary visual relay of the brain. Second, we found that the degree of neural resources", "allocated to visual cortices in our model drastically reshaped retinal representations. 
Given a deep visual cortex, the retinal processing emerged", "as quasi-linear and retained substantial information about the visual scene. In contrast, for a shallow cortex, the retinal processing", "emerged as nonlinear and more information-lossy, but was better at extracting features relevant to the object classification task. These observations make testable predictions on the qualitative", "differences that should be found in retinal representations across species, and could reconcile the seemingly incompatible theories of retinal processing as either performing efficient encoding or feature detection.", "A unified theoretical account for the structural differences between the receptive field shapes of retinal neurons and V1 neurons has until now been beyond the reach of efficient coding theories.", "BID21 found that efficient encoding of images with added noise and a cost on firing rate produce center-surround RFs, whereas the same task without noise produces edge detectors.", "However, this observation (as they note) does not explain the discrepancy between retinal and cortical representations.", "BID36 propose a different set of constraints for the retina and V1, in which the retina optimizes for a metabolic constraint on total number of synapses, whereas V1 optimizes for a constraint on total firing rate.", "It is not clear why each of these constraints would predominate in each respective system.", "Here we show that these two representations can emerge from the requirement to perform a biologically relevant task (extracting object identity from an image) with a bottleneck constraint on the dimensionality of the retinal output.", "Interestingly, this constraint differs from the ones used previously to account for center-surround RFs (number of synapses or total firing rate).", "It is worth noting that we unsuccessfully tried to reproduce the result of BID21 in our network, by adding noise to the image and applying an L1 regularization to the retina-net activations.", "In our framework (different than the one of BID21 in many ways), the receptive fields of the retina-net without bottleneck remained oriented across the full range of orders of magnitude of noise and L1 regularization that permitted successful task performance.There is a long-standing debate on whether the role of the retina is to extract relevant features from the environment BID25 BID17 BID32 , or to efficiently encode all visual information indistinctly BID2 BID0 BID18 .", "In this work, we show that our model of the visual system, trained on the same task and with the same input statistics, can exhibit different retinal representations depending on the degree of neural resources allocated to downstream processing by the ventral visual stream.", "These results suggest the hypothesis that, despite its conserved structure across evolution, the retina could prioritize different computations in different species.", "In species with fewer brain resources devoted to visual processing, the retina should nonlinearly extract relevant features from the environment for object recognition, and in species with a more complex ventral visual stream, the retina should prioritize a linear and efficient transmission of visual information for further processing by the brain.", "Although all species contain a mix of quasi-linear and nonlinear cell types, the proportion of quasi-linear cells seems to vary across species.", "In the mouse, the most numerous cell type is a two-stage nonlinear feature detector, thought to detect overhead 
predators BID38 .", "In contrast, the most common ganglion cell type in the primate retina is fairly well approximated by a linear filter (midget cells, 50% of all cells and >95% in the central retina BID32 BID11 ).", "Note however that two-stage nonlinear models are also present in larger species, such as cat Y-type cells and primate parasol cells BID10 , making it difficult to make definitive statements about inter-species differences in retinal coding.", "To gain a better understanding of these differences, it would be useful to collect a dataset consisting of recordings of complete populations of ganglion cells of different species in response to a common bank of natural scenes.A related question is the role of the parcellation of visual information in many ganglion cell types at the retinal output.", "A recent theory of efficient coding has shown that properties of midget and parasol cells in the primate retina can emerge from the objective of faithfully encoding natural movies with a cost on the total firing rate traversing the optic nerve BID29 .", "On the other hand, many cell types seem exquisitely sensitive to behaviorally relevant features, such as potential prey or predators BID17 .", "For example, some cell types in the frog are tuned to detect moving flies or looming predators BID25 .", "It is an intriguing possibility that different cell types could subserve different functions within a single species, namely efficient coding of natural scenes for some types and extraction of behaviorally-relevant features for others.", "In this study we allowed only a limited number of cell types (i.e. convolutional channels) at the retinal output (1 to 4), in order to have a dimensionality expansion between the retinal representation and the representation in the ventral visual stream (32 channels), an important condition to see the retinal center-surround representation emerge.", "By using larger networks with more channels in the retina-net and the VVS-net, we could study the emergence of a greater diversity of neuron types in our retina-net and compare their properties to real retinal cell types.", "It would also be interesting to extend our model to natural movies.", "Indeed, most feature detectors identified to date seem to process some form of image motion: wide-field, local or differential BID32 .", "Adding a temporal dimension to the model would be necessary to study their emergence.In conclusion, by studying emergent representations learned by a deep network trained on a biologically relevant task, we found that striking differences in retinal and cortical representations of visual information could be a consequence of the anatomical constraint of transmitting visual information through a low-dimensional communication channel, the optic nerve.", "Moreover, our computational explorations suggest that the rich diversity of retinal representations found across species could have adaptively co-evolved with the varying sophistication of subsequent processing performed by the ventral visual stream.", "These insights illustrate how deep neural networks, whose creation was once inspired by the visual system, can now be used to shed light on the constraints and objectives that have driven the evolution of our visual system.", "The following analysis corroborates our qualitative observation that a dimensionality bottleneck in the retina-net yields center-surround retinal receptive fields and oriented, edge-detecting receptive fields in the first layer of the VVS-net (V1).", "For a given 
receptive field, we quantified its orientedness as follows: we displayed rectangular bar stimuli of all possible combinations of width, orientations and spatial translations that fit in the input image window.", "Among all these combinations, we selected the bar stimulus width, orientation, and translation that yielded the strongest response from the RF.", "Bars with the same width as the best stimuli were presented at all orientations and translations, and for each orientation, we select the strongest response it produced (across all translations).", "In this manner we obtained a measure of the strength of a receptive field's preference for all orientations.We measured the strength of each RF preference (maximum strength of response) for its preferred orientation and for the orthogonal orientation, and computed the ratio of these strengths.", "Completely isotropic filters would be expected to give a ratio of 1, while oriented filters should give higher ratios.", "Note however that some deviation from 1 may indicate noise in the filter rather than true orientedness.", "For each network layer, we averaged this ratio across filters (for convolutional layers with multiple layers) and trials (re-training of the same neural network architecture with different random initializations).", "We found that the average ratios were 1.56(±0.22) for the retinal output, 3.05(±0.30) for the first VVS-net layer, and 2.57(±0.27) for the second VVS-net layer, where error margins given are 95% confidence intervals.", "To help assess whether retinal RFs were more isotropic than expected by chance, we compared them to receptive fields composed of random Gaussian noise as a baseline.", "These give an average ratio (as computed above) of 1.97(±0.08), significantly higher than that for retinal RFs.", "Furthermore, the standard deviation of RF preference across orientations was significantly lower for the retinal RFs (0.118 ± 0.036) than for random RFs (0.177 ± 0.007), also indicating that retinal RFs were more isotropic than expected by chance.We also plot the average RF preference for different orientations at each layer to more comprehensively assess the isotropy of RFs at each network layer.", "To aggregate results across multiple trials and filters, we rotated the coordinates of each receptive field such that its preferred orientation was vertical, and averaged our results across filters and trials.", "(See FIG4 .The results confirm our qualitative observations that (1)", "RFs in the second layer of a vanilla network (N BN = 32) are highly oriented ( FIG4 ) (2) RFs in the second layer (retina output) of a bottleneck network (N BN = 1) are much more isotropic, consistent with center-surround RFs ( FIG4 top), and (3) RFs in the layer immediately following the retina-net in the bottleneck network are oriented ( FIG4 .We", "also quantitatively corroborate our observation that oriented receptive fields in the V1 layer pool input from oriented arrays of center-surround filters in the retina-net output layer. We", "apply our method of isotropy quantification described above to the weight matrix for each input-output filter combination in the V1 convolutional layer. We", "find that this weight matrix itself exhibits orientedness across filters and trials, confirming our observation ( FIG4 ). To", "investigate whether neurons in our model's early layers more closely resembled simple or complex cells, we performed the following analysis. 
As", "before, we obtained local linear approximations of receptive fields by computing the gradient in input space with respect to the response of a given neuron. Rather", "than beginning with a blank input, we ran multiple trials with different randomly initialized inputs. A purely", "linear cell would give the same result no matter the initialization; a somewhat nonlinear but still \"simple\" cell is expected to give similar results across initializations." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.12903225421905518, 0.1428571343421936, 0.12903225421905518, 0.2686567008495331, 0.051282044500112534, 0.07999999821186066, 0.06666666269302368, 0.1666666567325592, 0.1249999925494194, 0.16949151456356049, 0.11538460850715637, 0.13333332538604736, 0.05714285373687744, 0.15686273574829102, 0.05882352590560913, 0.08888888359069824, 0.12121211737394333, 0.072727270424366, 0.08571428060531616, 0.09836065024137497, 0.08695651590824127, 0.04651162400841713, 0.04444443807005882, 0.1304347813129425, 0.19780220091342926, 0.14999999105930328, 0.1904761791229248, 0.3333333134651184, 0.1111111044883728, 0, 0.13333332538604736, 0, 0.08888888359069824, 0.05882352590560913, 0.1395348757505417, 0.1249999925494194, 0.08163265138864517, 0, 0.08695651590824127, 0.07499999552965164, 0.18518517911434174, 0.05405404791235924, 0.1428571343421936, 0.05405404791235924, 0.05405404791235924, 0.12244897335767746, 0.038461532443761826, 0.09677419066429138, 0.072727270424366, 0, 0.0555555522441864, 0.04255318641662598, 0.13333332538604736, 0.1249999925494194, 0.06896550953388214, 0, 0.2647058665752411, 0.1702127605676651, 0.19230768084526062, 0.08888888359069824, 0.08163265138864517, 0, 0, 0.07999999821186066, 0.05714285373687744, 0.05714285373687744, 0.08888888359069824, 0.07999999821186066, 0.08888888359069824, 0, 0.0634920597076416, 0, 0, 0.1090909019112587, 0.09756097197532654, 0.14999999105930328, 0, 0.051282044500112534, 0.1428571343421936, 0.05882352590560913, 0.04878048226237297 ]
S1xq3oR5tQ
true
[ "We reproduced neural representations found in biological visual systems by simulating their neural resource constraints in a deep convolutional model." ]
[ "While it has not yet been proven, empirical evidence suggests that model generalization is related to local properties of the optima which can be described via the Hessian.", "We connect model generalization with the local property of a solution under the PAC-Bayes paradigm.", "In particular, we prove that model generalization ability is related to the Hessian, the higher-order \"smoothness\" terms characterized by the Lipschitz constant of the Hessian, and the scales of the parameters.", "Guided by the proof, we propose a metric to score the generalization capability of the model, as well as an algorithm that optimizes the perturbed model accordingly.", "Deep models have proven to work well in applications such as computer vision BID18 BID8 BID14 , speech recognition , and natural language processing BID35 BID6 BID25 .", "Many deep models have millions of parameters, which is more than the number of training samples, but the models still generalize well BID11 .On", "the other hand, classical learning theory suggests the model generalization capability is closely related to the \"complexity\" of the hypothesis space, usually measured in terms of number of parameters, Rademacher complexity or VC-dimension. This", "seems to be a contradiction to the empirical observations that over-parameterized models generalize well on the test data 1 . Indeed", ", even if the hypothesis space is complex, the final solution learned from a given training set may still be simple. This", "suggests the generalization capability of the model is also related to the property of the solution. BID15", "and BID1 empirically observe that the generalization ability of a model is related to the spectrum of the Hessian matrix ∇ 2 L(w * ) evaluated at the solution, and large eigenvalues of the ∇ 2 L(w * ) often leads to poor model generalization. Also,", "BID15 , BID1 and BID31 introduce several different metrics to measure the \"sharpness\" of the solution, and demonstrate the connection between the sharpness metric and the generalization empirically. BID2", "later points out that most of the Hessian-based sharpness measures are problematic and cannot be applied directly to explain generalization. In particular", ", they show that the geometry of the parameters in RELU-MLP can be modified drastically by re-parameterization.Another line of work originates from Bayesian analysis. Mackay (1995", ") first introduced Taylor expansion to approximate the (log) posterior, and considered the second-order term, characterized by the Hessian of the loss function, as a way of evaluating the model simplicity, or \"Occam factor\". Recently BID34", "use this factor to penalize sharp minima, and determine the optimal batch size. BID4 connect the", "PAC-Bayes bound and the Bayesian marginal likelihood when the loss is (bounded) negative log-likelihood, which leads to an alternative perspective on Occam's razor. BID19 , and more", "recently, BID7 BID28 BID29 use PAC-Bayes bound to analyze the generalization behavior of the deep models.Since the PAC-Bayes bound holds uniformly for all \"posteriors\", it also holds for some particular \"posterior\", for example, the solution parameter perturbed with noise. 
This provides a", "natural The sharp minimum, even though it approximates the true label better, has some complex structures in its predicted labels, while the flat minimum seems to produce a simpler classification boundary.", "We connect the smoothness of the solution with the model generalization in the PAC-Bayes framework.", "We prove that the generalization power of a model is related to the Hessian and the smoothness of the solution, the scales of the parameters, as well as the number of training samples.", "In particular, we prove that the best perturbation level scales roughly as the inverse of the square root of the Hessian, which mostly cancels out scaling effect in the re-parameterization suggested by BID2 .", "To the best of our knowledge, this is the first work that integrate Hessian in the model generalization bound rigorously.", "It also roughly explains the effect of re-parameterization over the generalization.", "Based on our generalization bound, we propose a new metric to test the model generalization and a new perturbation algorithm that adjusts the perturbation levels according to the Hessian.", "Finally, we empirically demonstrate the effect of our algorithm is similar to a regularizer in its ability to attain better performance on unseen data." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2631579041481018, 0.47999998927116394, 0.2857142686843872, 0.29411762952804565, 0.054054051637649536, 0.1249999925494194, 0.25, 0.13793103396892548, 0.1875, 0.4166666567325592, 0.3333333432674408, 0.23529411852359772, 0.24242423474788666, 0.1111111044883728, 0.2857142686843872, 0.1538461446762085, 0.1111111044883728, 0.21739129722118378, 0.09756097197532654, 0.43478259444236755, 0.47058823704719543, 0.10256409645080566, 0.3448275923728943, 0.2857142686843872, 0.3636363446712494, 0.1764705777168274 ]
BJxOHs0cKm
true
[ "a theory connecting Hessian of the solution and the generalization power of the model" ]
[ "Unsupervised learning is about capturing dependencies between variables and is driven by the contrast between the probable vs improbable configurations of these variables, often either via a generative model which only samples probable ones or with an energy function (unnormalized log-density) which is low for probable ones and high for improbable ones.", "Here we consider learning both an energy function and an efficient approximate sampling mechanism for the corresponding distribution.", "Whereas the critic (or discriminator) in generative adversarial networks (GANs) learns to separate data and generator samples, introducing an entropy maximization regularizer on the generator can turn the interpretation of the critic into an energy function, which separates the training distribution from everything else, and thus can be used for tasks like anomaly or novelty detection. \n\n", "This paper is motivated by the older idea of sampling in latent space rather than data space because running a Monte-Carlo Markov Chain (MCMC) in latent space has been found to be easier and more efficient, and because a GAN-like generator can convert latent space samples to data space samples.", "For this purpose, we show how a Markov chain can be run in latent space whose samples can be mapped to data space, producing better samples.", "These samples are also used for the negative phase gradient required to estimate the log-likelihood gradient of the data space energy function.", "To maximize entropy at the output of the generator, we take advantage of recently introduced neural estimators of mutual information.", "We find that in addition to producing a useful scoring function for anomaly detection, the resulting approach produces sharp samples (like GANs) while covering the modes well, leading to high Inception and Fréchet scores.\n", "The early work on deep learning relied on unsupervised learning BID13 BID2 BID17 ) to train energy-based models BID18 , in particular Restricted Boltzmann Machines, or RBMs.", "However, it turned out that training energy-based models without an analytic form for the normalization constant is very difficult, because of the challenge of estimating the gradient of the partition function, also known as the negative phase part of the log-likelihood gradient (described in more details below, Sec. 2).", "Several algorithms were proposed for this purpose, such as Contrastive Divergence BID12 and Stochastic Maximum Likelihood BID28 BID26 , relying on Monte-Carlo Markov Chains (MCMC) to iteratively sample from the energy-based model.", "However, because they appear to suffer from either high bias or high variance (due to long mixing times), training of RBMs and other Boltzmann machines has not remained competitive after the introduction of variational auto-encoders BID16 ) and generative adversarial networks or GANs .In", "this paper, we revisit the question of training energy-based models, taking advantage of recent advances in GAN-related research, and propose a novel approach to training energy functions and sampling from them, called EnGAN. The", "main inspiration for the proposed solution is the earlier observation BID4 made on stacks of auto-encoders that sampling in latent space (and then applying a decoder to map back to data space) led to faster mixing and more efficient sampling. The", "authors observed that whereas the data manifold is generally very complex and curved, the corresponding distribution in latent space tends to be much simpler and flatter. 
This", "was verified visually by interpolating in latent space and projecting back to data space through the decoder, observing that the resulting samples look like data samples (i.e., the latent space manifold is approximately convex, with most points interpolated between examples encoded in latent space also having high probability). We propose", "a related approach, EnGAN, which also provides two energy functions, one in data space and one in latent space. A key ingredient", "of the proposed approach is the need to regularize the generator (playing the role of the decoder in auto-encoders, but with no need for an encoder) so as to increase its entropy. This is needed to", "make sure to produce negative examples that can kill off spurious minima of the energy function. This need was first", "identified by BID15 , who showed that in order for an approximate sampler to match the density associated with an energy function, a compromise must be reached between sampling low energy configurations and obtaining a high-entropy distribution. However, estimating", "and maximizing the entropy of a complex high-dimensional distribution is not trivial, and we take advantage for this purpose of very recently proposed GAN-based approaches for maximizing mutual information BID1 BID24 , since the mutual information between the input and the output of the generator is equal to the entropy at the output of the generator.In this context, the main contributions of this paper are the following:• proposing EnGAN, a general architecture, sampling and training framework for energy functions, taking advantage of an estimator of mutual information between latent variables and generator output and approximating the negative phase samples with MCMC in latent space, • showing that the resulting energy function can be successfully used for anomaly detection, improving on recently published results with energy-based models, • showing that EnGAN produces sharp images -with competitive Inception and Frechet scores -and which also better cover modes than standard GANs and WGAN-GPs, while not suffering from the common blurriness issue of many maximum likelihood generative models.", "We proposed EnGAN, an energy-based generative model that produces energy estimates using an energy model and a generator that produces fast approximate samples.", "This takes advantage of novel methods to maximize the entropy at the output of the generator using a GAN-like technique.", "We have shown that our energy model learns good energy estimates using visualizations in toy 2D data and through performance in unsupervised anomaly detection.", "We have also shown that our generator produces samples of high perceptual quality by measuring Inception and Frchet scores and shown that EnGAN is robust to the respective weaknesses of GAN models (mode dropping) and maximumlikelihood energy-based models (spurious modes).", "We found that running an MCMC in latent space rather than in data space (by composing the generator and the data-space energy to obtain a latentspace energy) works substantially better than running the MCMC in data-space." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.2142857164144516, 0.24242423474788666, 0.25, 0.15094339847564697, 0.10256409645080566, 0.2857142686843872, 0.24242423474788666, 0.2448979616165161, 0.04878048226237297, 0.1428571343421936, 0.1249999925494194, 0.1090909019112587, 0.21276594698429108, 0.15094339847564697, 0.09756097197532654, 0.10344827175140381, 0.11428570747375488, 0.2790697515010834, 0.277777761220932, 0.19230768084526062, 0.125, 0.23529411852359772, 0.3030303120613098, 0.10526315122842789, 0.1599999964237213, 0.2790697515010834 ]
HJlmhs05tm
true
[ "We introduced entropy maximization to GANs, leading to a reinterpretation of the critic as an energy function." ]
[ "Neural Style Transfer has become a popular technique for\n", "generating images of distinct artistic styles using convolutional neural networks.", "This\n", "recent success in image style transfer has raised the question of\n", "whether similar methods can be leveraged to alter the “style” of musical\n", "audio.", "In this work, we attempt long time-scale high-quality audio transfer\n", "and texture synthesis in the time-domain that captures harmonic,\n", "rhythmic, and timbral elements related to musical style, using examples that\n", "may have different lengths and musical keys.", "We demonstrate the ability\n", "to use randomly initialized convolutional neural networks to transfer\n", "these aspects of musical style from one piece onto another using 3\n", "different representations of audio: the log-magnitude of the Short Time\n", "Fourier Transform (STFT), the Mel spectrogram, and the Constant-Q Transform\n", "spectrogram.", "We propose using these representations as a way of\n", "generating and modifying perceptually significant characteristics of\n", "musical audio content.", "We demonstrate each representation's\n", "shortcomings and advantages over others by carefully designing\n", "neural network structures that complement the nature of musical audio.", "Finally, we show that the most\n", "compelling “style” transfer examples make use of an ensemble of these\n", "representations to help capture the varying desired characteristics of\n", "audio signals.", "The problem we seek to explore in this paper is the transfer of artistic \"style\" from one musical audio example onto another.", "The definition and perception of an artistic style in visual art images (e.g., impressionist, pointilist, cubist) shown in Figure 1 is perhaps more straightforward than in the case musical audio.", "For images, a successful style transfer algorithm is capable of generating a novel image whose content information, or what is in the image, is matched as well as its stylistic information, or the artistic approach.", "In other words, it explores the question, \"What would a rendering of scene A by artist B look like?\"", "Figure 1 : Demonstration of image style transfer courtesy of BID7 .For", "our work, we similarly set out to develop an algorithm that explores the question, \"What would it sound like if a musical piece by ensemble/artist A was performed by ensemble/artist B?\" It", "should be noted that we do not approach the problem according to strict musicological definitions (e.g., melodic, harmonic, rhythmic, and structural elements), as one might proceed if given the musical notation of a composition. We", "do not presume access to the notation or any music theoretic analysis of a piece. We", "are instead interested in transferring the acoustic features related to harmonic, rhythmic, and timbral aspects of one musical piece onto another. Therefore", ", for the single instance \"style\" transfer algorithm we propose in this work, it is more accurate to pose the question as \"What would a rendering of musical piece A (by artist A) using the harmonic and rhythmic patterns of piece B (by artist B) sound like?\" In this", "paper, we define musical \"style\" transfer according to this type of audio content transformation, and will henceforth drop the use of quotation marks around \"style\". 
In texture", "generation, we instead ask \"What would it sound like for a source musical piece to contain the same musical patterns and higher-order statistics without any of the same local, event-based information?\" This can be", "achieved in the image or audio domain by only optimizing those terms of the loss function of a transfer algorithm associated with style, and not using any loss term associated with content.Currently, there are two types of approaches to image style transfer. The first method", "uses a learned generative model to manipulate the representation of the data such that it maintains its original content rendered into a new style. The second class", "of methods, which we investigate and apply in this paper, are concerned with synthesizing new data that matches the representations of data in a learned model in some specific way. Measuring the accuracy", "of such algorithms' abilities to transfer style is difficult, since most data is not able to be entirely disentangled into separate content and style components. This is especially true", "for musical style.There have been attempts for learning representations of musical style include the use of generative models which use a MIDI representation of audio BID14 . The advantages of using", "this representation are the ability to focus solely on a highly understandable representation of musical information in its harmonic and rhythmic components, but lacks the ability to capture other important sonic information like timbre.Our approach utilizes many interesting findings from recent research in image style transfer. We suggest that it is possible", "to use the same style transfer algorithm used for images for musical audio, but best performance requires a careful selection of how content and style is represented, given the task. FIG0 shows a spectral visualization", "of how a style transfer result contains both local, event based information from the content piece, while also having the characteristic nature of the style signal, as there is clearly more energy in the higher frequencies. However, it is important to note that", "despite this visualization in the log-magnitude STFT representation, the audio is ultimately synthesized in the time-domain.", "We introduce several improvements for performing musical style transfer on raw audio through the utilization of multiple audio representations.", "Our contributions can be summarized as follows: First, we have demonstrated that using additional representations of Mel and CQT spectrograms with accompanying neural structure improve in many cases the capture of musically meaningful style information.", "Secondly, we have proposed a novel, key-invariant content representation for musical audio.", "Finally we have shown that despite using log-magnitude spectrograms to capture the content and style information, we are still able to synthesize a target audio waveform in the time domain using the backpropogation of the STFT.While our proposed content representations work for audio in different keys, there still is no representation for tempo invariance.", "Other future work may include using learned generative models to perform musical style transfer and trying to perform style transfer entirely in the time-domain.", "This or the use of complex weights may be able to help improve representation of phase information in neural representations." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.06896550953388214, 0.06666666269302368, 0.32258063554763794, 0.1875, 0.2666666507720947, 0.13793103396892548, 0.06451612710952759, 0.07407406717538834, 0.1666666567325592, 0.0714285671710968, 0.1875, 0.2142857164144516, 0.0714285671710968, 0.27586206793785095, 0.07407406717538834, 0.17391304671764374, 0.0833333283662796, 0, 0.2666666507720947, 0.07692307233810425, 0.13333332538604736, 0.20689654350280762, 0.2857142686843872, 0.23999999463558197, 0.2916666567325592, 0.1538461446762085, 0.19354838132858276, 0.1599999964237213, 0.178571417927742, 0.2222222238779068, 0.1904761791229248, 0.22580644488334656, 0.2222222238779068, 0.15686273574829102, 0.27586206793785095, 0.2222222238779068, 0.25, 0.13333332538604736, 0.3636363446712494, 0.2686567008495331, 0.31372547149658203, 0.21052631735801697, 0.1818181723356247, 0.42105263471603394, 0.18518517911434174, 0.1875, 0.21875, 0.25, 0.20512819290161133 ]
BybQ7zWCb
true
[ "We present a long time-scale musical audio style transfer algorithm which synthesizes audio in the time-domain, but uses Time-Frequency representations of audio." ]
[ "To communicate with new partners in new contexts, humans rapidly form new linguistic conventions.", "Recent language models trained with deep neural networks are able to comprehend and produce the existing conventions present in their training data, but are not able to flexibly and interactively adapt those conventions on the fly as humans do.", "We introduce a repeated reference task as a benchmark for models of adaptation in communication and propose a regularized continual learning framework that allows an artificial agent initialized with a generic language model to more accurately and efficiently understand their partner over time.", "We evaluate this framework through simulations on COCO and in real-time reference game experiments with human partners.", "Linguistic communication depends critically on shared knowledge about the meanings of words BID9 .", "However, the real-world demands of communication often require speakers and listeners to go beyond dictionary meanings to understand one another BID0 BID15 .", "The social world continually presents new communicative challenges, and agents must continually coordinate on new meanings to meet them.For example, consider a nurse visiting a bed-ridden patient in a cluttered home.", "The first time they ask the nurse to retrieve a particular medication, the patient must painstakingly refer to unfamiliar pills, e.g. \"the vasoprex-tecnoblek meds for my blood pressure, in a small bluish bottle, on the bookcase in my bathroom.\"", "After a week of care, however, they may just ask for their \"Vasotec.\"This type of flexible language use poses a challenge for models of language in machine learning.", "Approaches based on deep neural networks typically learn a monolithic meaning function during training, with fixed weights during use.", "For an in-home robot to communicate as flexibly and efficiently with patients as a human nurse, it must be equipped with a continual learning mechanism.", "Such a mechanism would present two specific advantages for interaction and communication applications.", "First, to the extent that current models have difficulty communicating in a new setting, an adaptive approach can quickly improve performance on the relevant subset of language.", "Second, for human-robot contexts, an adaptive model enables speakers to communicate more efficiently as they build up common ground, remaining understandable while expending significantly fewer words as humans naturally do BID1 .In", "this paper, we introduce a benchmark communication task and general continual learning framework for transforming neural language models into adaptive models that can be deployed in real-time interactions with other agents.Our key insight is that through continual interactions with the same partner in a shared context, an adaptive listener can more effectively communicate with its partner FIG0 .We", "are motivated by hierarchical Bayesian approaches to task-specific adaptation. 
Our", "approach integrates two core components: (i)", "a loss function combining speaker and listener information, and (ii", ") a regularization scheme for fine-tuning model weights without overfitting.", "Human language use is flexible, continuously adapting to the needs of the current situation.", "In this paper, we introduced a challenging repeated reference game benchmark for artificial agents, which requires such adaptability to succeed.", "We proposed a continual learning approach that forms context-specific conventions by adapting general-purpose semantic knowledge.", "Even when models based on generalpurpose knowledge perform poorly, our approach allows human speakers working with adapted variants of such models to become more accurate and more efficient over time." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1875, 0.15094339847564697, 0.508474588394165, 0.2702702581882477, 0.060606054961681366, 0.09756097197532654, 0.1249999925494194, 0.145454540848732, 0.1818181723356247, 0.10526315122842789, 0.2380952388048172, 0.24242423474788666, 0.17391303181648254, 0.11764705181121826, 0.3529411852359772, 0, 0.07692307233810425, 0.13793103396892548, 0.13333332538604736, 0, 0.25, 0.2857142686843872, 0.1249999925494194 ]
BklzE9Bo3V
true
[ "We propose a repeated reference benchmark task and a regularized continual learning approach for adaptive communication with humans in unfamiliar domains" ]
[ "Traditional set prediction models can struggle with simple datasets due to an issue we call the responsibility problem.", "We introduce a pooling method for sets of feature vectors based on sorting features across elements of the set.", "This can be used to construct a permutation-equivariant auto-encoder that avoids this responsibility problem.", "On a toy dataset of polygons and a set version of MNIST, we show that such an auto-encoder produces considerably better reconstructions and representations.", "Replacing the pooling function in existing set encoders with FSPool improves accuracy and convergence speed on a variety of datasets.", "Consider the following task: you have a dataset wherein each datapoint is a set of 2-d points that form the vertices of a regular polygon, and the goal is to learn an auto-encoder on this dataset.", "The only variable is the rotation of this polygon around the origin, with the number of points, size, and centre of it fixed.", "Because the inputs and outputs are sets, this problem has some unique challenges.", "Encoder: This turns the set of points into a latent space.", "The order of the elements in the set is irrelevant, so the feature vector the encoder produces should be invariant to permutations of the elements in the set.", "While there has been recent progress on learning such functions (Zaheer et al., 2017; Qi et al., 2017) , they compress a set of any size down to a single feature vector in one step.", "This can be a significant bottleneck in what these functions can represent efficiently, particularly when relations between elements of the set need to be modeled (Murphy et al., 2019; Zhang et al., 2019b) .", "Decoder: This turns the latent space back into a set.", "The elements in the target set have an arbitrary order, so a standard reconstruction loss cannot be used naïvely -the decoder would have to somehow output the elements in the same arbitrary order.", "Methods like those in Achlioptas et al. (2018) therefore use an assignment mechanism to match up elements (section 2), after which a usual reconstruction loss can be computed.", "Surprisingly, their model is still unable to solve the polygon reconstruction task with close-to-zero reconstruction error, despite the apparent simplicity of the dataset.", "In this paper, we introduce a set pooling method for neural networks that addresses both the encoding bottleneck issue and the decoding failure issue.", "We make the following contributions:", "1. We identify the responsibility problem (section 3).", "This is a fundamental issue with existing set prediction models that has not been considered in the literature before, explaining why these models struggle to model even the simple polygon dataset.", "2. We introduce FSPOOL: a differentiable, sorting-based pooling method for variable-size sets (section 4).", "By using our pooling in the encoder of a set auto-encoder and inverting the sorting in the decoder, we can train it with the usual MSE loss for reconstruction without the need for an assignment-based loss.", "This avoids the responsibility problem.", "3. 
We show that our auto-encoder can learn polygon reconstructions with close-to-zero error, which is not possible with existing set auto-encoders (subsection 6.1).", "This benefit transfers over to a set version of MNIST, where the quality of reconstruction and learned representation is improved (subsection 6.2).", "In further classification experiments on CLEVR (subsection 6.3) and several graph classification datasets (subsection 6.4), using FSPool in a set encoder improves over many non-trivial baselines.", "Lastly, we show that combining FSPool with Relation Networks significantly improves over standard Relation Networks in a model that heavily relies on the quality of the representation (subsection 6.5).", "In this paper, we identified the responsibility problem with existing approaches for predicting sets and introduced FSPool, which provides a way around this issue in auto-encoders.", "In experiments on two datasets of point clouds, we showed that this results in much better reconstructions.", "We believe that this is an important step towards set prediction tasks with more complex set elements.", "However, because our decoder uses information from the encoder, it is not easily possible to turn it into a generative set model, which is the main limitation of our approach.", "Still, we find that using the auto-encoder to obtain better representations and pre-trained weights can be beneficial by itself.", "Our insights about the responsibility problem have already been successfully used to create a model without the limitations of our auto-encoder (Zhang et al., 2019a) .", "In classification experiments, we also showed that simply replacing the pooling function in an existing model with FSPool can give us better results and faster convergence.", "We showed that FSPool consistently learns better set representations at a relatively small computational cost, leading to improved results in the downstream task.", "Our model thus has immediate applications in various types of set models that have traditionally used sum or max pooling.", "It would be useful to theoretically characterise what types of relations are more easily expressed by FSPool through an analysis like in Murphy et al. (2019) .", "This may result in further insights into how to learn better set representations efficiently." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.25806450843811035, 0.12903225421905518, 0.2222222238779068, 0.11764705181121826, 0.1818181723356247, 0.1428571343421936, 0.0624999962747097, 0.1538461446762085, 0.0833333283662796, 0.25, 0.13333332538604736, 0.1395348757505417, 0.08695651590824127, 0.20000000298023224, 0.09756097197532654, 0.060606054961681366, 0.11428570747375488, 0, 0.1904761791229248, 0.1428571343421936, 0, 0.2380952388048172, 0.2222222238779068, 0.1111111044883728, 0.17142856121063232, 0.21052631735801697, 0.05128204822540283, 0.2631579041481018, 0.06666666269302368, 0.06896550953388214, 0.1538461446762085, 0.1249999925494194, 0.15789473056793213, 0.10256409645080566, 0.1666666567325592, 0.12121211737394333, 0.10256409645080566, 0.2222222238779068 ]
HJgBA2VYwH
true
[ "Sort in encoder and undo sorting in decoder to avoid responsibility problem in set auto-encoders" ]
[ "We present a method for policy learning to navigate indoor environments.", "We adopt a hierarchical policy approach, where two agents are trained to work in cohesion with one another to perform a complex navigation task.", "A Planner agent operates at a higher level and proposes sub-goals for an Executor agent.", "The Executor reports an embedding summary back to the Planner as additional side information at the end of its series of operations for the Planner's next sub-goal proposal.", "The end goal is generated by the environment and exposed to the Planner which then decides which set of sub-goals to propose to the Executor.", "We show that this Planner-Executor setup drastically increases the sample efficiency of our method over traditional single agent approaches, effectively mitigating the difficulty accompanying long series of actions with a sparse reward signal.", "On the challenging Habitat environment which requires navigating various realistic indoor environments, we demonstrate that our approach offers a significant improvement over prior work for navigation.", "The ability to model and understand the world at a high-level is crucial for performing complex tasks in real world environments.", "Part of this high-level understanding involves the ability to divide and plan out tasks that are complicated and have long time horizons into more manageable subtasks.", "For example, when navigating to a new location, we typically break the task down into a set of manageable directions (i.e. drive along a certain road until a familiar landmark before taking a turn).", "Imbuing machines with this ability of creating abstractions for long and complex tasks is an active area of research known as hierarchical learning (Sutton et al., 1998; 1999) .", "Research for navigation has recently seen a rejuvenation due to the advent of learning-based approaches Parisotto & Salakhutdinov, 2017; Henriques & Vedaldi, 2018) .", "Embodied learning-based approaches have shown some appealing properties over classical approaches such as being able to operate in complex environments with limited sensor data (Savva et al., 2019; Mishkin et al., 2019) .", "However, there is a need for the ability to plan across long time horizons with sparse reward signals.", "This in effect, causes limitations such as the inability to overcome small obstacles when navigating towards a given goal and the requirement of invoking the environment a large number of times for any meaningful learning to occur (Le et al., 2018) .", "Works which have combined hierarchical reinforcement learning with imitation learning have shown promising results (Das et al., 2018b; Le et al., 2018) , by leveraging expert trajectories with policy sketches (Andreas et al., 2017) , which are less expensive to obtain; however these sketches still require annotation of the environment.", "In this work, we study such hierarchical control for the task of indoor navigation, whereby an embodied agent is randomly spawned within a novel and complex environment and must learn to navigate this environment through interaction (Das et al., 2018a) .", "We address this challenging learning problem through a hierarchical policy approach, where two agents are cooperatively trained together.", "Each agent performs a different role, where one agent acts as a Planner, learning how to propose good sub-goals to an Executor agent, which acts at the low level to achieve these sub-goals (Fig. 
1) .", "In contrast to existing hierarchical policy learning approaches, communication between our two agents is two-way, where the Executor provides the Planner with a summary of its series of actions and recent observations.", "This aids the Planner in deciding the next sub-goal with additional side Figure 1 : Our PLEX framework adopts a hierarchical policy approach, where a Planner proposes sub-goals for an Executor to act upon within an environment.", "The Planner receives an egocentric, top-down view with the target location and an embedding summary provided by the Executor.", "The Executor receives visual sensory data (i.e. colour and depth) as its input and a sub-goal provided by the Planner.", "Our method reduces the need for long-term planning and addresses the known sample inefficiency problem accompanying memory models within deep reinforcement learning approaches.", "information provided by the Executor.", "To this end, we propose PLEX, a planning and executing learning framework which offers the following contributions:", "• A hierarchical reinforcement learning approach where two agents specialise on different tasks but are jointly trained by sharing information • We demonstrate both theoretically and empirically that our method benefits from significantly improved sample efficiency as the time horizon is distributed between the Planner and Executor • By extension, our approach mitigates problems prevalent in long-horizon planning, especially those adopting LSTM (Hochreiter & Schmidhuber, 1997) planning approaches", "In this work, we present a hierarchical reinforcement learning approach for solving PointGoal navigation tasks.", "Our proposed approach uses a cooperative learning strategy in which two agents, an Executor and a Planner are jointly learned to solve this task.", "This is enabled through a two-way communication channel established between the two agents through the use of an Executor Latent Information vector provided by the Executor and sub-goals generated by the Planner.", "We motivate the use of this hierarchical approach both theoretically, as well as through empirical experiments which demonstrate a significant improvement in sampling efficiency of our approach, allowing our structured approach to perform significantly better on increasingly harder tasks when compared to baseline approaches." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.43478259444236755, 0.23529411852359772, 0.23076923191547394, 0.10810810327529907, 0, 0.09302325546741486, 0.15789473056793213, 0.1249999925494194, 0, 0.04651162400841713, 0.20000000298023224, 0.1764705777168274, 0, 0.13333332538604736, 0.12244897335767746, 0.07692307233810425, 0.23999999463558197, 0.2666666507720947, 0.1463414579629898, 0.1428571343421936, 0.2666666507720947, 0.06896550953388214, 0.0624999962747097, 0.1764705777168274, 0, 0.20689654350280762, 0.0810810774564743, 0.4444444477558136, 0.17142856121063232, 0.10526315122842789, 0.11764705181121826 ]
r1g7xT4Kwr
true
[ "We present a hierarchical learning framework for navigation within an embodied learning setting" ]
[ "Saliency methods aim to explain the predictions of deep neural networks.", "These methods lack reliability when the explanation is sensitive to factors that do not contribute to the model prediction.", "We use a simple and common pre-processing step ---adding a mean shift to the input data--- to show that a transformation with no effect on the model can cause numerous methods to incorrectly attribute.", "We define input invariance as the requirement that a saliency method mirror the sensitivity of the model with respect to transformations of the input.", "We show, through several examples, that saliency methods that do not satisfy a input invariance property are unreliable and can lead to misleading and inaccurate attribution.", "While considerable research has focused on discerning the decision process of neural networks BID1 BID9 BID2 BID15 BID12 BID0 BID14 BID6 BID5 BID16 BID13 BID11 BID4 , there remains a trade-off between model complexity and interpretability.", "Research to address this tension is urgently needed; reliable explanations build trust with users, helps identify points of model failure, and removes barriers to entry for the deployment of deep neural networks in domains high stakes like health care and others.In deep models, data representation is delegated to the model.", "We cannot generally say in an informative way what led to a model prediction.", "Instead, saliency methods aim to infer insights about the f (x) learnt by the model by ranking the explanatory power of constituent inputs.", "While unified in purpose, these methods are surprisingly divergent and non-overlapping in outcome.", "Evaluating the reliability of these methods is complicated by a lack of ground truth, as ground truth would depend upon full transparency into how a model arrives at a decision -the very problem we are trying to solve for in the first place BID13 BID4 .Given", "the need for a quantitative method of comparison, several properties such as completeness BID0 BID13 , implementation invariance and sensitivity BID13 have been articulated as desirable to ensure that saliency methods are reliable. Implementation", "invariance, proposed as an axiom for attribution methods by BID13 , is the requirement that functionally equivalent networks (models with different architectures but equal outputs all inputs), always attribute in an identical way.This work posits that a second invariance axiom, which we term input invariance, needs to be satisfied to ensure reliable interpretation of input contribution to the model prediction. Input invariance", "requires that the saliency method mirror the sensitivity of the model with respect to transformations of the input. We demonstrate that", "numerous methods do not satisfy input invariance using a simple transformation -mean shifts of the input-that does not affect model prediction or weights. We limit our treatment", "of input invariance to showing that there exist cases where this property is not satisfied and welcome future research on a broader treatment of this topic.In this work we:• introduce the axiom input invariance and demonstrate that certain saliency methods do not satisfy this property when considering a simple mean shift in the input. (See FIG3 ).• show that", "when input invariance", "is missing, the saliency method becomes unreliable and misleading. Using two example reference points", "for each method we demonstrate that changing the reference causes the attribution to diverge. 
The attributions are visualized by", "multiplying them with the input image as is done in the IG paper 1 BID13 . Visualisations were made on ImageNet", "BID7 and the VGG16 architecture BID9 .• demonstrate that \"reference point\"", "methods-Integrated gradients and the Deep Taylor Decomposition-have diverging attribution and input invariance breaking points that depends upon the choice of reference FIG0 .In Section 2, we detail our experiment", "framework. In Section 3, we determine that while", "the model is invariant to the input transformation considered, several saliency methods attribute to the mean shift. In Section 4 we discuss \"reference point", "\" methods and illustrate the importance of choosing an appropriate reference before discussing some directions of future research in Section 5.", "Saliency methods are powerful tools to gain intuition about our model.", "We consider some examples that can cause a break in the reliability of these methods.", "We show that we are able to purposefully create a deceptive explanation of the network using a hand drawn kitten image.We introduce input invariance as a prerequisite for reliable attribution.", "Our treatment of input invariance is restricted to demonstrating there is at least one input transformation that causes attribution to fail.", "We hope this work drives further discussion on this subject.", "We also acknowledge that saliency methods may still provide intuition for image recognition tasks even if they are not input invariant.", "Our work is motivated in part because while we can visually inspect for catasthropic attribution failure in images, other modalities (like audio or word vectors) are more opaque and prone to unintentional misrepresentation." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0.05882352590560913, 0, 0.13793103396892548, 0, 0, 0, 0, 0, 0, 0, 0.03333333134651184, 0, 0, 0, 0, 0.10526315122842789, 0, 0, 0, 0, 0, 0, 0, 0, 0.09999999403953552, 0, 0, 0, 0, 0.054054051637649536 ]
r1Oen--RW
true
[ "Attribution can sometimes be misleading" ]
[ "Large Transformer models routinely achieve state-of-the-art results on\n", "a number of tasks but training these models can be prohibitively costly,\n", "especially on long sequences.", "We introduce two techniques to improve\n", "the efficiency of Transformers.", "For one, we replace dot-product attention\n", "by one that uses locality-sensitive hashing, changing its complexity\n", "from O(L^2) to O(L), where L is the length of the sequence.\n", "Furthermore, we use reversible residual layers instead of the standard\n", "residuals, which allows storing activations only once in the training\n", "process instead of N times, where N is the number of layers.\n", "The resulting model, the Reformer, performs on par with Transformer models\n", "while being much more memory-efficient and much faster on long sequences.", "The Transformer architecture (Vaswani et al., 2017 ) is widely used in natural language processing and yields state-of-the-art results on a number of tasks.", "To obtain these results, researchers have resorted to training ever larger Transformer models.", "The number of parameters exceeds 0.5B per layer in the largest configuration reported in while the number of layers goes up to 64 in (Al-Rfou et al., 2018) .", "Transformer models are also used on increasingly long sequences.", "Up to 11 thousand tokens of text in a single example were processed in (Liu et al., 2018) and when processing other modalities, like music (Huang et al., 2018) and images , even longer sequences are commonplace.", "These large-scale long-sequence models yield great results but strain resources to the point where some argue that this trend is breaking NLP research 1 .", "Many large Transformer models can only realistically be trained in large industrial research laboratories and such models trained with model parallelism cannot even be fine-tuned on a single GPU as their memory requirements demand a multi-accelerator hardware setup even for a single training step.", "Do large Transformer models fundamentally require such huge resources or are they simply inefficient?", "Consider the following calculation: the 0.5B parameters used in the largest reported Transformer layer account for 2GB of memory.", "Activations for 64K tokens with embedding size 1024 and batch size 8 account for 64K × 1K × 8 = 0.5B floats, requiring another 2GB of memory.", "If our memory use was only per-layer, then we should fairly easily fit a large Transformer even on sequences of length 64K on a single accelerator.", "Further, the whole corpus used to train BERT only requires 17GB to store.", "Why is it then that we cannot even fine-tune these models on single machines?", "The above estimate includes only per-layer memory and input activations cost and does not take into account the following major sources of memory use in the Transformer.", "• Memory in a model with N layers is N -times larger than in a single-layer model due to the fact that activations need to be stored for back-propagation.", "• Since the depth d f f of intermediate feed-forward layers is often much larger than the depth d model of attention activations, it accounts for a large fraction of memory use.", "• Attention on sequences of length L is O(L 2 ) in both computational and memory complexity, so even for a single sequence of 64K tokens can exhaust accelerator memory.", "We introduce the Reformer model which solves these problems using the following techniques:", "1 https://hackingsemantics.xyz/2019/leaderboards/ • Reversible layers, first introduced in 
Gomez et al. (2017) , enable storing only a single copy of activations in the whole model, so the N factor disappears.", "• Splitting activations inside feed-forward layers and processing them in chunks removes the d f f factor and saves memory inside feed-forward layers.", "• Approximate attention computation based on locality-sensitive hashing replaces the O(L 2 ) factor in attention layers with O(L) and so allows operating on long sequences.", "We study these techniques and show that they have negligible impact on the training process compared to the standard Transformer.", "Splitting activations in fact only affects the implementation; it is numerically identical to the layers used in the Transformer.", "Applying reversible residuals instead of the standard ones does change the model but has a negligible effect on training in all configurations we experimented with.", "Finally, locality-sensitive hashing in attention is a more major change that can influence the training dynamics, depending on the number of concurrent hashes used.", "We study this parameter and find a value which is both efficient to use and yields results very close to full attention.", "We experiment on a synthetic task, a text task (enwik8) with sequences of length 64K and an image generation task (imagenet-64 generation) with sequences of length 12K.", "In both cases we show that Reformer matches the results obtained with full Transformer but runs much faster, especially on the text task, and with orders of magnitude better memory efficiency.", "Reformer combines the modeling capacity of a Transformer with an architecture that can be executed efficiently on long sequences and with small memory use even for models with a large number of layers.", "We believe that this will help large, richly-parameterized Transformer models become more widespread and accessible.", "Also, the ability to handle long sequences opens the way for the use of the Reformer on many generative tasks.", "In addition to generating very long coherent text, the Reformer can bring the power of Transformer models to other domains like time-series forecasting, music, image and video generation." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1249999925494194, 0, 0, 0, 0, 0, 0.11764705181121826, 0, 0.2222222238779068, 0, 0.10526315122842789, 0.21052631735801697, 0.1111111044883728, 0.12121211737394333, 0.0952380895614624, 0.060606058686971664, 0.11764705181121826, 0.04999999701976776, 0, 0.13636364042758942, 0.09090908616781235, 0.07692307233810425, 0.12903225421905518, 0.0624999962747097, 0, 0, 0.1249999925494194, 0.1249999925494194, 0.05882352590560913, 0.0555555522441864, 0, 0, 0.1538461446762085, 0.3125, 0.14814814925193787, 0.1666666567325592, 0.1249999925494194, 0.12903225421905518, 0.0714285671710968, 0.13793103396892548, 0.1621621549129486, 0.21621620655059814, 0.17391303181648254, 0, 0.11764705181121826 ]
rkgNKkHtvB
true
[ "Efficient Transformer with locality-sensitive hashing and reversible layers" ]
[ "Obtaining policies that can generalise to new environments in reinforcement learning is challenging.", "In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments.", "We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations.", "We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information.", "In addition, we propose txt2π, a model that captures three-way interactions between the goal, document, and observations.", "On RTFM, txt2π generalises to new environments with dynamics not seen during training via reading.", "Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM.", "Through curriculum learning, txt2π produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps.", "Reinforcement learning (RL) has been successful in a variety of areas such as continuous control (Lillicrap et al., 2015) , dialogue systems (Li et al., 2016) , and game-playing (Mnih et al., 2013) .", "However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments even slightly different from those seen during training.", "We explore language-conditioned policy learning, where agents use machine reading to discover strategies required to solve a task, thereby leveraging language as a means to generalise to new environments.", "Prior work on language grounding and language-based RL (see Luketina et al. 
(2019) for a recent survey) are limited to scenarios in which language specifies the goal for some fixed environment dynamics (Branavan et al., 2011; Hermann et al., 2017; Bahdanau et al., 2019; Fried et al., 2018; Co-Reyes et al., 2019) , or the dynamics of the environment vary and are presented in language for some fixed goal .", "In practice, changes to goals and to environment dynamics tend to occur simultaneously-given some goal, we need to find and interpret relevant information to understand how to achieve the goal.", "That is, the agent should account for variations in both by selectively reading, thereby generalising to environments with dynamics not seen during training.", "Our contributions are two-fold.", "First, we propose a grounded policy learning problem that we call Read to Fight Monsters (RTFM).", "In RTFM, the agent must jointly reason over a language goal, a document that specifies environment dynamics, and environment observations.", "In particular, it must identify relevant information in the document to shape its policy and accomplish the goal.", "To necessitate reading comprehension, we expose the agent to ever changing environment dynamics and corresponding language descriptions such that it cannot avoid reading by memorising any particular environment dynamics.", "We procedurally generate environment dynamics and natural language templated descriptions of dynamics and goals to produced a combinatorially large number of environment dynamics to train and evaluate RTFM.", "Second, we propose txt2π to model the joint reasoning problem in RTFM.", "We show that txt2π generalises to goals and environment dynamics not seen during training, and outperforms previous language-conditioned models such as language-conditioned CNNs and FiLM (Perez et al., 2018; Bahdanau et al., 2019) both in terms of sample efficiency and final win-rate on RTFM.", "Through curriculum learning where we adapt txt2π trained on simpler tasks to more complex tasks, we obtain agents that generalise to tasks with natural language documents that require five hops of reasoning between the goal, document, and environment observations.", "Our qualitative analyses show that txt2π attends to parts of the document relevant to the goal and environment observations, and that the resulting agents exhibit complex behaviour such as retrieving correct items, engaging correct enemies after acquiring correct items, and avoiding incorrect enemies.", "Finally, we highlight the complexity of RTFM in scaling to longer documents, richer dynamics, and natural language variations.", "We show that significant improvement in language-grounded policy learning is needed to solve these problems in the future.", "We proposed RTFM, a grounded policy learning problem in which the agent must jointly reason over a language goal, relevant dynamics specified in a document, and environment observations.", "In order to study RTFM, we procedurally generated a combinatorially large number of environment dynamics such that the model cannot memorise a set of environment dynamics and must instead generalise via reading.", "We proposed txt2π, a model that captures three-way interactions between the goal, document, and observations, and that generalises to new environments with dynamics not seen during training.", "txt2π outperforms baselines such as FiLM and language-conditioned CNNs.", "Through curriculum learning, txt2π performs well on complex RTFM tasks that require several reasoning and coreference steps with natural language templated 
goals and descriptions of the dynamics.", "Our work suggests that language understanding via reading is a promising way to learn policies that generalise to new environments.", "Despite curriculum learning, our best models trail performance of human players, suggesting that there is ample room for improvement in grounded policy learning on complex RTFM problems.", "In addition to jointly learning policies based on external documentation and language goals, we are interested in exploring how to use supporting evidence in external documentation to reason about plans and induce hierarchical policies (Hu et al., 2019; Jiang et al., 2019) .", "A PLAYTHROUGH EXAMPLES These figures shows key snapshots from a trained policy on randomly sampled environments.", "Figure 6 : The initial world is shown in 1.", "In 4, the agent avoids the target \"lightning shaman\" because it does not yet have \"arcane spear\", which beats the target.", "In 7 and 8, the agent is cornered by monsters.", "In 9, the agent is forced to engage in combat and loses.", "Hyperparameters.", "The txt2π used in our experiments consists of 5 consecutive FiLM 2 layers, each with 3x3 convolutions and padding and stride sizes of 1.", "The txt2π layers have channels of 16, 32, 64, 64, and 64 , with residual connections from the 3rd layer to the 5th layer.", "The Goal-doc LSTM (see Figure 3) shares weight with the Goal LSTM.", "The Inventory and Goal LSTMs have a hidden dimension of size 10, whereas the Vis-doc LSTM has a dimension of 100.", "We use a word embedding dimension of 30.", "The input to the network is the concatenation of the observations V (0) and text representations.", "The text representations consist of self-attention over bidirectional LSTM-encoded goal, document, and inventory.", "These attention outputs are replicated over the dimensions of the grid and concatenated feature-wise with the observation embeddings in each cell.", "Figure 8 illustrates the CNN baseline." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.48275861144065857, 0.5405405163764954, 0.13333332538604736, 0.24390242993831635, 0.060606054961681366, 0.32258063554763794, 0, 0.11764705181121826, 0, 0.19512194395065308, 0.3414634168148041, 0.06557376682758331, 0.04999999701976776, 0.10256409645080566, 0, 0.12903225421905518, 0.11764705181121826, 0.060606054961681366, 0.1904761791229248, 0.1621621549129486, 0.0714285671710968, 0.145454540848732, 0.15686273574829102, 0.12244897335767746, 0.11764705181121826, 0.3030303120613098, 0.09756097197532654, 0.22727271914482117, 0.24390242993831635, 0, 0.0952380895614624, 0.8235294222831726, 0.09302324801683426, 0.11999999731779099, 0.0624999962747097, 0.07692307233810425, 0, 0.07692307233810425, 0.1428571343421936, 0, 0.05405404791235924, 0, 0, 0.0833333283662796, 0.13333332538604736, 0, 0, 0 ]
SJgob6NKvH
true
[ "We show language understanding via reading is promising way to learn policies that generalise to new environments." ]
[ "An open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data.", "We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent Gradients: Gradients from similar examples are similar and so the overall gradient is stronger in certain directions where these reinforce each other.", "Thus changes to the network parameters during training are biased towards those that (locally) simultaneously benefit many examples when such similarity exists.", "We support this hypothesis with heuristic arguments and perturbative experiments and outline how this can explain several common empirical observations about Deep Learning.", "Furthermore, our analysis is not just descriptive, but prescriptive.", "It suggests a natural modification to gradient descent that can greatly reduce overfitting.", "Neural networks used in practice often have sufficient effective capacity to learn arbitrary maps from their inputs to their outputs.", "This is typically demonstrated by training a classification network that achieves good test accuracy on a real dataset S, on a modified version of S (call it S ) where the labels are randomized and observing that the training accuracy on S is very high, though, of course, the test accuracy is no better than chance (Zhang et al., 2017) .", "This leads to an important open question in the Deep Learning community (Zhang et al. (2017) ; Arpit et al. (2017) ; Bartlett et al. (2017) ; Kawaguchi et al. (2017) ; ; Arora et al. (2018) ; Belkin et al. (2019) ; Rahaman et al. (2019) ; Nagarajan & Kolter (2019), etc.", ") : Among all maps that fit a real dataset, how does Gradient Descent (GD) find one that generalizes well?", "This is the question we address in this paper.", "We start by observing that this phenomenon is not limited to neural networks trained with GD but also applies to Random Forests and Decision Trees.", "However, there is no mystery with trees: A typical tree construction algorithm splits the training set recursively into similar subsets based on input features.", "If no similarity is found, eventually, each example is put into its own leaf to achieve good training accuracy (but, of course, at the cost of poor generalization).", "Thus, trees that achieve good accuracy on a randomized dataset are much larger than those on a real dataset.", "Is it possible that something similar happens with GD?", "We believe so.", "The type of randomized-label experiments described above show that if there are common patterns to be found, then GD finds them.", "If not, it fits each example on a case-by-case basis.", "The question then is, what is it about the dynamics of GD that makes it possible to extract common patterns from the data?", "And what does it mean for a pattern to be common?", "Since the only change to the network parameters in GD comes from the gradients, the mechanism to detect commonality amongst examples must be through the gradients.", "We propose that this commonality detection can be explained as follows:", "1. Gradients are coherent, i.e, similar examples (or parts of examples) have similar gradients (or similar components of gradients) and dissimilar examples have dissimilar gradients.", "2. 
Since the overall gradient is the sum of the per-example gradients, it is stronger in directions where the per-example gradients are similar and reinforce each other and weaker in other directions where they are different and do not add up.", "3. Since network parameters are updated proportionally to gradients, they change faster in the direction of stronger gradients.", "4. Thus the changes to the network during training are biased towards those that simultaneously benefit many examples instead of a few (or one example).", "For convenience, we refer to this as the Coherent Gradients hypothesis.", "It is instructive to work through the proposed mechanism in the context of a simple thought experiment.", "Consider a training set with two examples a and b.", "At some point in training, suppose the gradient of a, g a , can be decomposed into two orthogonal components g a1 and g a2 of roughly equal magnitude, i.e., there are two, equally good, independent ways in which the network can better fit a (by using say two disjoint parts of the network).", "Likewise, for b.", "Now, further suppose that one of the two ways is common to both a and b, i.e., say g a2 = g b2 = g ab , whereas, the other two are example specific, i.e., g a1 , g b1 = 0.", "Now, the overall gradient is", "Observe that the gradient is stronger in the direction that simultaneously helps both examples and thus the corresponding parameter changes are bigger than those those that only benefit only one example.", "It is important to emphasize that the notion of similarity used above (i.e., which examples are considered similar) is not a constant but changes in the course of training as network parameters change.", "It starts from a mostly task independent notion due to random initialization and is bootstrapped in the course of training to be task dependent.", "We say \"mostly\" because even with random initialization, examples that are syntactically close are treated similarly (e.g., two images differing in the intensities of some pixels as opposed to two images where one is a translated version of the other).", "The relationship between strong gradients and generalization can also be understood through the lens of algorithmic stability (Bousquet & Elisseeff, 2002) : strong gradient directions are more stable since the presence or absence of a single example does not impact them as much, as opposed to weak gradient directions which may altogether disappear if a specific example is missing from the training set.", "With this observation, we can reason inductively about the stability of GD: since the initial values of the parameters do not depend on the training data, the initial function mapping examples to their gradients is stable.", "Now, if all parameter updates are due to strong gradient directions, then stability is preserved.", "However, if some parameter updates are due to weak gradient directions, then stability is diminished.", "Since stability (suitably formalized) is equivalent to generalization (Shalev-Shwartz et al., 2010) , this allows us to see how generalization may degrade as training progresses.", "Based on this insight, we shall see later how a simple modification to GD to suppress the weak gradient directions can dramatically reduce overfitting.", "In addition to providing insight into why GD generalizes in practice, we believe that the Coherent Gradients hypothesis can help explain several other empirical observations about deep learning in the literature:", "(a) Learning is slower with random labels than with 
real labels (Zhang et al., 2017)", "(b) Robustness to large amounts of label noise (Rolnick et al., 2017)", "(c) Early stopping leads to better generalization (Caruana et al., 2000)", "(d) Increasing capacity improves generalization (Caruana et al., 2000;", "(e) The existence of adversarial initialization schemes (Liu et al., 2019)", "(f) GD detects common patterns even when trained with random labels (Chatterjee & Mishchenko, 2019) A direct experimental verification of the Coherent Gradients hypothesis is challenging since the notion of similarity between examples depends on the parameters of the network and thus changes during training.", "Our approach, therefore, is to design intervention experiments where we establish a baseline and compare it against variants designed to test some aspect or prediction of the theory.", "As part of these experiments, we replicate the observations", "(a)-(c", ") in the literature noted above, and analyze the corresponding explanations provided by Coherent Gradients ( §2), and outline for future work how (d)-(f", ") may", "be accounted for ( §5).", "In this paper, we limit our study to simple baselines: vanilla Stochastic Gradient Descent (SGD) on MNIST using fully connected networks.", "We believe that this is a good starting point, since even in this simple setting, with all frills eliminated (e.g., inductive bias from architecture or explicit regularization, or a more sophisticated optimization procedure), we are challenged to find a satisfactory explanation of why SGD generalizes well.", "Furthermore, our prior is that the difference between weak and strong directions is small at any one step of training, and therefore having a strong learning signal as in the case of MNIST makes a direct analysis of gradients easier.", "It also has the benefit of having a smaller carbon footprint and being easier to reproduce.", "Finally, based on preliminary experiments on other architectures and datasets we are optimistic that the insights we get from studying this simple setup apply more broadly.", "Although there has been a lot of work in recent years in trying to understand generalization in Deep Learning, no entirely satisfactory explanation has emerged so far.", "There is a rich literature on aspects of the stochastic optimization problem such as the loss landscape and minima (e.g., Choromanska et al. (2015) ; Zhu et al. (2018) ), the curvature around stationary points (e.g., Hochreiter & Schmidhuber (1997) ; Keskar et al. (2016) ; Dinh et al. (2017) ; Wu et al. (2018) ), and the implications of stochasticity due to sampling in SGD (e.g., Simsekli et al. (2019) ).", "However, we believe it should be possible to understand generalization without a detailed understanding of the optimization landscape.", "For example, since stopping early typically leads to small generalization gap, the nature of the solutions of GD (e.g., stationary points, the limit cycles of SGD at equilibrium) cannot be solely responsible for generalization.", "In fact, from this observation, it would appear that an inductive argument for generalization would be more natural.", "Likewise, there is reason to believe that stochasticity is not fundamental to generalization (though it may help).", "For example, modifying the experiment in §2.1 to use full batch leads to similar qualitative generalization results.", "This is consistent with other small scale studies (e.g., Figure 1 of Wu et al. 
(2018) ) though we are not aware of any large scale studies on full batch.", "Our view of optimization is a simple, almost combinatorial, one: gradient descent is a greedy search with some hill-climbing thrown in (due to sampling in SGD and finite step size).", "Therefore, we worry less about the quality of solutions reached, but more about staying \"feasible\" at all times during the search.", "In our context, feasibility means being able to generalize; and this naturally leads us to look at the transition dynamics to see if that preserves generalizability.", "Another approach to understanding generalization, is to argue that gradient-based optimization induces a form of implicit regularization leading to a bias towards models of low complexity.", "This is an extension of the classical approach where bounding a complexity measure leads to bounds on the generalization gap.", "As is well known, classical measures of complexity (also called capacity) do not work well.", "For example, sometimes adding more parameters to a net can help generalization (see for e.g. Lawrence et al. (1996); ) and, as we have seen, VC-Dimension and Rademacher Complexity-based bounds must be vacuous since networks can memorize random labels and yet generalize on real data.", "This has led to a lot of recent work in identifying better measures of complexity such as spectrally-normalized margin (Bartlett et al., 2017) , path-based group norm , a compression-based approach (Arora et al., 2018) , etc.", "However, to our knowledge, none of these measures is entirely satisfactory for accounting for generalization in practice.", "Please see Nagarajan & Kolter (2019) for an excellent discussion of the challenges.", "We rely on a different classical notion to argue generalization: algorithmic stability (see Bousquet & Elisseeff (2002) for a historical overview).", "We have provided only an informal argument in Section 1, but there has been prior work by Hardt et al. (2016) in looking at GD and SGD through the lens of stability, but their formal results do not explain generalization in practical settings (e.g., multiple epochs of training and non-convex objectives).", "In fact, such an attempt appears unlikely to work since our experimental results imply that any stability bounds for SGD that do not account for the actual training data must be vacuous!", "(This was also noted by Zhang et al. (2017) .", ") That said, we believe stability is the right way to think about generalization in GD for a few reasons.", "First, since by Shalev-Shwartz et al. (2010) stability, suitably formalized, is equivalent to generalization.", "Therefore, in principle, any explanation of generalizability for a learning problem must-to borrow a term from category theory-factor through stability.", "Second, a stability based analysis may be more amenable to taking the actual training data into account (perhaps by using a \"stability accountant\" similar to a privacy accountant) which appears necessary to get non-vacuous bounds for practical networks and datasets.", "Finally, as we have seen with the modification in §3, a stability based approach is not just descriptive but prescriptive 1 and can point the way to better learning algorithms.", "The work of Rahaman et al. 
(2019) is particularly relevant.", "They compute the Fourier spectrum of ReLU networks and argue based on heuristics and experiments that these networks learn low frequency functions first.", "In contrast, we focus not on the function learnt, but on the mechanism in GD to detect commonality.", "This leads to a perspective that is at once simpler and more general (for e.g., it applies equally to networks with other activation functions, with attention, LSTMs, and discrete (combinatorial) inputs).", "Furthermore, it opens up a path to analyzing generalization via stability.", "It is not clear if Rahaman et al. (2019) claim a causal mechanism, but their analysis does not suggest an obvious intervention experiment such as ours of §3 to test causality.", "There are other experimental results that show biases towards linear functions (Nakkiran et al., 2019) and functions with low descriptive complexity (Valle-Perez et al., 2019) but these papers do not posit a causal mechanism.", "It is interesting to consider if Coherent Gradients can provide a unified explanation for these observed biases.", "Finally, Fort et al. (2019) (concurrent submission) propose a descriptive statistic stiffness based on pairwise per-example gradients and show experimentally that it can be used to characterize generalization.", "Sankararaman et al. (2019) (also concurrent submission) independently propose a very similar statistic called gradient confusion but use it to study the speed of training.", "Unlike our work, these do not propose causal mechanisms for generalization, but these statistics (which are rather different from those in §2.4) could be useful for the further study of Coherent Gradients." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1249999925494194, 0.33898305892944336, 0, 0.20512819290161133, 0, 0.19354838132858276, 0, 0.0634920597076416, 0, 0.1621621549129486, 0, 0.0952380895614624, 0.1428571343421936, 0.045454539358615875, 0.11764705181121826, 0.07407406717538834, 0.0952380895614624, 0, 0.2142857164144516, 0, 0.13793103396892548, 0.051282044500112534, 0.13793103396892548, 0.0555555522441864, 0.21276594698429108, 0.0555555522441864, 0.0476190410554409, 0.06896550953388214, 0.05882352590560913, 0.14814814925193787, 0.0634920597076416, 0.0952380895614624, 0.07843136787414551, 0.08695651590824127, 0.04651162400841713, 0.03999999538064003, 0.04999999329447746, 0.1090909019112587, 0.0833333283662796, 0.0833333283662796, 0.060606054961681366, 0.060606054961681366, 0.04878048226237297, 0.19512194395065308, 0.1702127605676651, 0.0624999962747097, 0, 0, 0, 0, 0.10344827175140381, 0.04444443807005882, 0, 0.09999999403953552, 0.08695651590824127, 0.051282044500112534, 0.16129031777381897, 0.07843136787414551, 0.05882352590560913, 0.1428571343421936, 0.0476190410554409, 0.05714285373687744, 0.0555555522441864, 0.04081632196903229, 0.05714285373687744, 0, 0, 0.12765957415103912, 0.17777776718139648, 0, 0, 0.04999999329447746, 0.10810810327529907, 0, 0.09677419066429138, 0.04081632196903229, 0.05882352590560913, 0.06451612710952759, 0.21052631735801697, 0.030303025618195534, 0.0416666604578495, 0, 0.10526315122842789, 0, 0.10810810327529907, 0.1111111044883728, 0.12765957415103912, 0, 0.10256409645080566, 0.05882352590560913, 0.1249999925494194, 0.06896550953388214, 0.0416666604578495, 0.12244897335767746, 0.11428570747375488, 0.260869562625885, 0.1395348757505417, 0.08163265138864517 ]
ryeFY0EFwS
true
[ "We propose a hypothesis for why gradient descent generalizes based on how per-example gradients interact with each other." ]
[ " Recent advances in deep learning have shown promising results in many low-level vision tasks.", "However, solving the single-image-based view synthesis is still an open problem.", "In particular, the generation of new images at parallel camera views given a single input image is of great interest, as it enables 3D visualization of the 2D input scenery.", "We propose a novel network architecture to perform stereoscopic view synthesis at arbitrary camera positions along the X-axis, or Deep 3D Pan, with \"t-shaped\" adaptive kernels equipped with globally and locally adaptive dilations.", "Our proposed network architecture, the monster-net, is devised with a novel t-shaped adaptive kernel with globally and locally adaptive dilation, which can efficiently incorporate global camera shift into and handle local 3D geometries of the target image's pixels for the synthesis of naturally looking 3D panned views when a 2-D input image is given.", "Extensive experiments were performed on the KITTI, CityScapes and our VXXLXX_STEREO indoors dataset to prove the efficacy of our method.", "Our monster-net significantly outperforms the state-of-the-art method, SOTA, by a large margin in all metrics of RMSE, PSNR, and SSIM.", "Our proposed monster-net is capable of reconstructing more reliable image structures in synthesized images with coherent geometry.", "Moreover, the disparity information that can be extracted from the \"t-shaped\" kernel is much more reliable than that of the SOTA for the unsupervised monocular depth estimation task, confirming the effectiveness of our method.\n", "Recent advances in deep learning have pushed forward the state-of-the-art performance for novel view synthesis problems.", "Novel view synthesis is the task of generating a new view seen from a different camera position, given a single or multiple input images, and finds many applications in robotics, navigation, virtual and augmented reality (VR/AR), cinematography, etc.", "In particular, the challenging task of generating stereo images given a single input view is of great interest as it enables 3D visualization of the 2D input scene.", "In addition, the falling price and the increasing availability of the equipment required for VR/AR has fueled the demand for stereoscopic contents.", "The previous works, such as the Deep3D (Xie et al., 2016) , have addressed the right-view generation problem in a fully supervised fashion when the input is the left-view to which the output is the synthetic right-view at a fixed camera shift.", "In contrast, our proposed Deep 3D Pan pipeline enables the generation of new views at arbitrary camera positions along the horizontal X-axis of an input image with far better quality by utilizing adaptive \"t-shaped\" convolutions with globally and locally adaptive dilations, which takes into account the camera shift amount and the local 3D geometries of the target pixels.", "Panning at arbitrary camera positions allows our proposed model to adjust the baseline (distance between cameras) for different levels of 3D sensation.", "Additionally, arbitrary panning unlocks the possibility to adjust for different inter-pupillary distances of various persons.", "Figure 1 shows some generated left and right view images for a given single image input by our proposed Deep 3D Pan pipeline, which we call it the \"monster-net\" (monocular to stereo network).", "In this paper, we define \"pan\" in the context of 3D modeling, implying that camera movement is in parallel to the center view plane.", "In the 
following sections, we review the related works on stereoscopic view synthesis and discuss the differences with our proposed method, followed by the formulation of our Deep 3D Pan pipeline and finally, we present outstanding results on various challenging stereo datasets, showing superior performance against the previous state-of-the-art methods.", "We presented an adaptive \"t-shaped\" kernel equipped with globally and locally adaptive dilations for the Deep 3D Pan problem, defined as the task of arbitrarily shifting the camera position along the X-axis for stereoscopic view synthesis.", "Our proposed monster-net showed superior performance to the SOTA for right-view generation on the KITTI and the CityScapes datasets.", "Our monster-net also showed very good generalization capabilities with 3dB gain in PSNR against the Deep3D baseline.", "In addition, our method presents no discontinuities, consistent geometries, good contrast, and naturally looking left or right synthetic panned images.", "Our monster-net can be extended for image registration, monocular video to stereo video, and generation of novel views at any camera translation by just allowing pixel-wise rotation of our \"t-shaped\" kernel." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.14814814925193787, 0.0952380895614624, 0.4680851101875305, 0.19672130048274994, 0, 0, 0.060606054961681366, 0.04444443807005882, 0.1875, 0.1599999964237213, 0.04999999701976776, 0.11764705181121826, 0.07999999821186066, 0.190476194024086, 0.21052631735801697, 0.12903225421905518, 0.08163265138864517, 0.10526315122842789, 0.1355932205915451, 0.3404255211353302, 0.060606054961681366, 0.060606054961681366, 0, 0.1304347813129425 ]
B1gF56VYPH
true
[ "Novel architecture for stereoscopic view synthesis at arbitrary camera shifts utilizing adaptive t-shaped kernels with adaptive dilations." ]
[ "Deep Neutral Networks(DNNs) require huge GPU memory when training on modern image/video databases.", "Unfortunately, the GPU memory as a hardware resource is always finite, which limits the image resolution, batch size, and learning rate that could be used for better DNN performance.", "In this paper, we propose a novel training approach, called Re-forwarding, that substantially reduces memory usage in training.", "Our approach automatically finds a subset of vertices in a DNN computation graph, and stores tensors only at these vertices during the first forward.", "During backward, extra local forwards (called the Re-forwarding process) are conducted to compute the missing tensors between the subset of vertices.", "The total memory cost becomes the sum of (1) the memory cost at the subset of vertices and (2) the maximum memory cost among local re-forwards.", "Re-forwarding trades training time overheads for memory and does not compromise any performance in testing.", "We propose theories and algorithms that achieve the optimal memory solutions for DNNs with either linear or arbitrary computation graphs.", "Experiments show that Re-forwarding cuts down up-to 80% of training memory on popular DNNs such as Alexnet, VGG, ResNet, Densenet and Inception net.", "The standard DNN training process consists of two alternated stages: forward and backward.", "FIG0", "(a) illustrates an example of feed-forward neural networks.", "In the forward stage, the network takes an input tensor, [BatchSize × Channel × W idth × Height], and computes the tensors at each layer until producing the output.", "In the backward stage, difference between the output and ground truth is passed back along the network to compute the gradients at each layer.", "The regular training approach saves tensors at all layers during forward, because they are all needed to compute gradients during backward.", "The total memory cost is the sum of cost over all layers.In popular backbone DNNs for feature extraction of images, such as AlexNet BID13 ), VGG BID22 ) and ResNet BID10 ), the memory cost increases quadratically with the input image resolution and network depth.", "For example, given an median size input tensor of (32, 3, 224, 224) , ResNet101 requires around 5000 MB.", "In more challenging tasks, DNNs that detect small objects and large number of object categories require input image resolution of more than 600 × 600 BID18 ; BID23 ; BID17 ).", "The memory issue is worse for video-based DNNs, such as CDC BID21 ), C3D BID12 ) and 3D-ResNet BID9 ).", "To model complex activities in video, the input tensor may contain 64 frames.", "Moreover, DNN training takes much more memory than testing.", "In order to train DNNs with large databases and big learning rate, the batch size can be up to 64.", "In training DNN compositions, such as Generative adversarial networks (GANs), multiple generator and discriminator networks are simultaneously stored in GPU memory.Existing efforts to address memory issues presented three main approaches: (1) Better single GPUs.", "Recent GPUs provide larger memory at the expense of exponentially growing price and power consumption.", "For instance, from TitanXp, Quadro P6000 to Tesla V100, for 1-2.7 times increase in memory, the prices increase 2.8-8.5 times.", "(2) Parallelization among multiple GPUs BID8 ; BID20 ; ; BID15 BID16 ; BID27 ; BID2 ; BID1 ), which requires expensive The regular approach saves all tensors during forward, and uses these tensors to compute gradients during backward.", "(b) 
Reforwarding (our) saves a subset of tensors during the first forward, and conducts \"Re-forward\" to compute tensors for gradients during backward.clusters, introduces substantial I/O cost, and does not reduce the total memory cost.", "(3) Low-level heuristic techniques.", "Optimization of computation graphs BID3 ), which merges inplace operations into non-inplace operations to cut down memory.", "Liveness analysis BID3 ), which dynamically recycles garbage tensors in training epochs.", "These approaches are specific to certain DNN structures, data and tasks.To address above issues, we propose a fundamental approach that explores trade-off between memory and computation power of GPUs.", "Note that recent affordable GPUs, although limited in memory ( 12GB), provide exceptional improvement in GPU cores and FLOPS.", "Trading computational time for memory is a very attractive solution that make it possible to train very heavy DNNs with finite GPU memory.", "Our approach only saves tensors at a subset of layers during the first forward, and conduct only extra local forwards to compute the missing tensors needed during backward.", "We call the extra forward process as Re-forwarding.", "The total memory cost is the sum of (1) the cost at the subset of layers and (2) the maximum memory cost among local re-forwards.", "Training with Reforwarding, see FIG0", "(b), leads to substantial memory reduction.", "We propose sophisticate theories and efficient algorithms that achieve the optimal memory solution of arbitrary computation graphs.", "Re-forwarding is a fundamental approach that explores trade-off between memory and computation power of GPUs.", "By saving tensors at a subset of layers during forward, and conducting extra local forwards for backward, Re-forwarding makes it possible to train very heavy DNNs with finite GPU memory.", "To our knowledge, our theoretical and algorithmic results are the first top-down work that achieve an optimal memory solution for arbitrary computation graphs in DNNs.", "Re-forwarding can be further embedded and optimized with any low-level techniques such as distributed computing, GPU/CPU swapping, computation graph optimization and liveness analysis.", "Same on v q , v q must be v j or v t .", "As s ⊂ [s ij ), ∀v 1 ∈ s, v 1 has no edge with v 2 ∈ [s ij ). As s kj is close, ∀v 1 ∈ s, v 1 has no edge with v 2 ∈ s kj . ∀v 1 ∈ s, v 1 can only have edge with v 2 ∈ [s].", "Thus the independence of s is guaranteed.", "Therefore, s is closed set, v k is the splitting vertex of s ij .", "DISPLAYFORM0 Same on v j , v j is the splitting vertex of s kt Lemma 4.", "If s ij has n splitting vertices {v 1 , v 2 , ..., v n }, then s ij = s i1 ∪ s 12 ∪ ... ∪ s nj ∪ {v 1 , v 2 , ..., v n } Proof.", "If n = 2, the splitting vertices are DISPLAYFORM1 According to Lemma 3, v 1 is splitting vertex of s i2 and v 2 is splitting vertex of s 1j .", "Therefore, DISPLAYFORM2 For n > 2, the lemma can be proved by repetitively using the conclusion in n = 2.", "Lemma", "6. Any member of a maximal split can not be the subset of another closed set s s ij .Proof", ". Suppose", "the source vertex of s is v 1 and target vertex is v 2 , a member s xy of the maximal split is inside s. Suppose", "a member s ab of the maximal split has its source vertex v a inside s and target vertex v b outside s. Then the", "boundary vertex (the vertex that has edges to the non-overlapping parts of both sets) must be v 2 , otherwise the independence of s will be violated. 
Notice that", "v 2 is inside s ab and the independence of s ab needs to be guaranteed, for ∀v p ∈ s, v p / ∈ s ∩ s ab , v q ∈ s ∩ s ab , v p has no edge with v q . Therefore,", "v a is a splitting vertex of s.Similarly, if s ba has its target vertex v a inside s and source vertex v b outside s, the boundary vertex must be v 1 and v a is a splitting vertex of s.For the closed set s, from the discussion above, we know that there are at most 2 members of the maximal split that can overlap with s. Other members", "must be either completely inside s or completely outside s. Let's discuss", "the number of members that overlaps with s.If there are 0 member that overlaps with s, s is the union of a subset of members of the maximal split, which violates the definition of maximal split.If there is 1 member that overlaps with s, suppose the corresponding splitting vertex is v b , and the boundary vertex is actually v 2 . Then s 1b is", "a closed set containing s xy and corresponds to the situation of 0 member overlapping. s 1b is the", "union of a subset of members of the maximal split, and violates the definition of maximal split.If there are 2 members that overlaps with s, suppose they generate two different splitting vertex v a and v b . Then s ab is", "a closed set containing s xy and corresponds to the situation of 0 member overlapping. s ab is the", "union of a subset of members of the maximal split, and violates the definition of maximal split.If they generate the same splitting vertex v b , from lemma 5, v b is also the endpoint vertex of at least 1 other member s ab which has to be inside s. Suppose the", "two overlapping members are s cb that contains v 1 , and s bd that contains v 2 . As the source", "vertex of s, v 1 has path to v b and v 1 has path to v a , which implies v b has path to v a . As the target", "vertex of s, v 2 has path from v b and v 2 has path from v a , which implies v b has path from v a . This conflicts", "with the fact that s is acyclic. Therefore, this", "case is not possible.Therefore, this lemma is proved.Lemma 7. If non-branched", "s ij has at least 1 vertex but has 0 splitting vertex, then its maximal split has length > 2 Proof. As s ij is not branched", ", the members of its maximal split cannot have the starting vertex as v i and the ending vertex as v j at the same time. If s ij has at least 1", "vertex, and its maximal split has length 2, then its maximal split must be {[s ik ], [s kj ]}, and v k will be the splitting vertex of s ij , which violates that s ij has no splitting vertex.If s ij has at least 1 vertex without splitting vertex, it has at least 2 edges and cannot have a trivial length 1 maximal split. Therefore, its maximal", "split has length > 2" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11764705181121826, 0.20408162474632263, 0.10526315122842789, 0.1395348757505417, 0.09999999403953552, 0.1538461446762085, 0.2222222238779068, 0.2926829159259796, 0.3181818127632141, 0.23529411852359772, 0.06896550953388214, 0.04444443807005882, 0.0952380895614624, 0.09999999403953552, 0.20338982343673706, 0.04999999329447746, 0.1249999925494194, 0.14999999105930328, 0, 0.20000000298023224, 0.19999998807907104, 0.18518517911434174, 0.1666666567325592, 0.0952380895614624, 0.11320754140615463, 0.23076923191547394, 0, 0.21621620655059814, 0.12121211737394333, 0.23999999463558197, 0.10256409645080566, 0.1904761791229248, 0.13333332538604736, 0, 0.1538461446762085, 0, 0.14814814925193787, 0.2631579041481018, 0.2222222238779068, 0.23529411852359772, 0.2222222238779068, 0.04651162400841713, 0, 0, 0.0714285671710968, 0.060606054961681366, 0.0555555522441864, 0, 0.13636362552642822, 0, 0.051282044500112534, 0.09999999403953552, 0.09999999403953552, 0.08888888359069824, 0.16326530277729034, 0.060606054961681366, 0, 0.10169491171836853, 0.15789473056793213, 0.07407406717538834, 0.15789473056793213, 0.13114753365516663, 0.052631575614213943, 0.21052631735801697, 0.21621620655059814, 0, 0, 0, 0.08510638028383255, 0.0923076868057251, 0 ]
BJMvBjC5YQ
true
[ "This paper proposes fundamental theory and optimal algorithms for DNN training, which reduce up to 80% of training memory for popular DNNs." ]
[ "Compression is a key step to deploy large neural networks on resource-constrained platforms.", "As a popular compression technique, quantization constrains the number of distinct weight values and thus reducing the number of bits required to represent and store each weight.", "In this paper, we study the representation power of quantized neural networks.", "First, we prove the universal approximability of quantized ReLU networks on a wide class of functions.", "Then we provide upper bounds on the number of weights and the memory size for a given approximation error bound and the bit-width of weights for function-independent and function-dependent structures.", "Our results reveal that, to attain an approximation error bound of $\\epsilon$, the number of weights needed by a quantized network is no more than $\\mathcal{O}\\left(\\log^5(1/\\epsilon)\\right)$ times that of an unquantized network.", "This overhead is of much lower order than the lower bound of the number of weights needed for the error bound, supporting the empirical success of various quantization techniques.", "To the best of our knowledge, this is the first in-depth study on the complexity bounds of quantized neural networks.", "Various deep neural networks deliver state-of-the-art performance on many tasks such as object recognition and natural language processing using new learning strategies and architectures BID11 Kumar et al., 2016; Ioffe & Szegedy, 2015; Vaswani et al., 2017) .", "Their prevalence has extended to embedded or mobile devices for edge intelligence, where security, reliability or latency constraints refrain the networks from running on servers or in clouds.", "However, large network sizes with the associated expensive computation and memory consumption make edge intelligence even more challenging BID2 Sandler et al., 2018) .In", "response, as will be more detailed in Section 2, substantial effort has been made to reduce the memory consumption of neural networks while minimizing the accuracy loss. The", "memory consumption of neural networks can be reduced by either directly reducing the number of weights or decreasing the number of bits (bit-width) needed to represent and store each weight, which can be employed on top of each other BID3 . The", "number of weights can be reduced by pruning BID9 , weight sparsifying (Liu et al., 2015) , structured sparsity learning BID14 and low rank approximation BID5 . The", "bit-width is reduced by quantization that maps data to a smaller set of distinct levels (Sze et al., 2017) . Note", "that while quantization may stand for linear quantization only (Li et al., 2017; BID7 or nonlinear quantization only BID8 BID3 in different works, our discussion will cover both cases.However, as of today quantization is still only empirically shown to be robust and effective to compress various neural network architectures (Hubara et al., 2016; BID20 BID22 . Its", "theoretical foundation still remains mostly missing. Specifically", ", many important questions remain unanswered. For example:•", "Why even binarized networks, those most extremely quantized with bit-width down to one, still work well in some cases?• To what extent", "will quantization decrease the expressive power of a network? Alternatively, what", "is the overhead induced by weight quantization in order to maintain the same accuracy?In this paper, we provide", "some insights into these questions from a theoretical perspective. 
We focus on ReLU networks", ", which is among the most widely used in deep neural networks BID15 . We follow the idea from", "BID16 to prove the complexity bound by constructing a network, but with new and additional construction components essential for quantized networks. Specifically, given the", "number of distinct weight values λ and a target function f , we construct a network that can approximate f with an arbitrarily small error bound to prove the universal approximability. The memory size of this", "network then naturally serves as an upper bound for the minimal network size.The high-level idea of our approach is to replace basic units in an unquantized network with quantized sub-networks 1 that approximate these basic units. For example, we can approximate", "a connection with any weight in an unquantized network by a quantized sub-network that only uses a finite number of given weight values. Even though the approximation of", "a single unit can be made arbitrarily accurate in principle with unlimited resources (such as increased network depth), in practice, there exists some inevitable residual error at every approximation, all of which could propagate throughout the entire network. The challenge becomes, however,", "how to mathematically prove that we can still achieve the end-to-end arbitrary small error bound even if these unavoidable residual errors caused by quantization can be propagated throughout the entire network. This paper finds a solution to", "solve the above challenge. In doing so, we have to propose", "a number of new ideas to solve related challenges, including judiciously choosing the proper finite weight values, constructing the approximation sub-networks as efficient as possible (to have a tight upper bound), and striking a good balance among the complexities of different approximation steps.Based on the bounds derived, we compare them with the available results on unquantized neural networks and discuss its implications. In particular, the main contributions", "of this paper include:• We prove that even the most extremely quantized ReLU networks using two distinct weight values are capable of representing a wide class of functions with arbitrary accuracy.• Given the number of distinct weights", "and the desired approximation error bound, we provide upper bounds on the number of weights and the memory size. We further show that our upper bounds", "have good tightness by comparing them with the lower bound of unquantized ReLU networks established in the literature.• We show that, to attain the same approximation", "error bound , the number of weights needed by a quantized network is no more than O log 5 (1/ ) times that of an unquantized network. This overhead is of much lower order compared with", "even the lower bound of the number of weights needed for the error bound. This partially explains why many state-ofthe-art quantization", "schemes work well in practice.• We demonstrate how a theoretical complexity bound can be used", "to estimate an optimal bit-width, which in turn enables the best cost-effectiveness for a given task.The remainder of the paper is organized as follows. Section 2 reviews related works. Section 3 lays down the models", "and assumptions of our analysis.", "We prove the universal approximability and the upper bounds with", "function-independent structure in Section 4 and extend it to function-dependent structure in Section 5. We analyze the bound-based optimal bit-width in Section 6. 
Finally", ", Section 7 discusses the results and gets back to the questions", "raised above.", "In this section, we further discuss the bound of nonlinear quantization with a function-independent structure as the generality of nonlinear quantization.", "The availability of unquantized functionindependent structures in literature also makes it an excellent reference for comparison.Comparison with the Upper Bound: The quality of an upper bound lies on its tightness.", "Compared with the most recent work on unquantized ReLU networks BID16 , where the upper bound on the number of weights to attain an approximation error is given by O log(1/ ) (1/ ) d n , our result for a quantized ReLU network is given by O λ log DISPLAYFORM0 , which translates to an increase by a factor of λ log 1 λ−1 (1/ ) .", "Loosely speaking, this term reflects the loss of expressive power because of weight quantization, which decreases quickly as λ increases." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1249999925494194, 0.1463414579629898, 0.32258063554763794, 0.4117647111415863, 0.2857142686843872, 0.21276594698429108, 0.24390242993831635, 0.3333333134651184, 0.1111111044883728, 0.08888888359069824, 0.09090908616781235, 0.17391303181648254, 0.19230768084526062, 0.08888888359069824, 0.04999999329447746, 0.08695651590824127, 0, 0, 0.04878048226237297, 0.13333332538604736, 0.0555555522441864, 0.12121211737394333, 0.1666666567325592, 0.3333333432674408, 0.26923075318336487, 0.145454540848732, 0.1860465109348297, 0.10169491171836853, 0.22641508281230927, 0.06666666269302368, 0.13333332538604736, 0.26923075318336487, 0.19999998807907104, 0.22727271914482117, 0.23529411852359772, 0.277777761220932, 0.11428570747375488, 0.1538461446762085, 0.1666666567325592, 0.2857142686843872, 0.10526315122842789, 0.13333332538604736, 0.1666666567325592, 0.12765957415103912, 0.24242423474788666, 0.10526315122842789 ]
SJe9rh0cFX
true
[ "This paper proves the universal approximability of quantized ReLU neural networks and puts forward the complexity bound given arbitrary error." ]
[ "Reinforcement learning (RL) with value-based methods (e.g., Q-learning) has shown success in a variety of domains such as\n", "games and recommender systems (RSs).", "When the action space is finite, these algorithms implicitly finds a policy by learning the optimal value function, which are often very efficient. \n", "However, one major challenge of extending Q-learning to tackle continuous-action RL problems is that obtaining optimal Bellman backup requires solving a continuous action-maximization (max-Q) problem.", "While it is common to restrict the parameterization of the Q-function to be concave in actions to simplify the max-Q problem, such a restriction might lead to performance degradation.", "Alternatively, when the Q-function is parameterized with a generic feed-forward neural network (NN), the max-Q problem can be NP-hard.", "In this work, we propose the CAQL method which minimizes the Bellman residual using Q-learning with one of several plug-and-play action optimizers.", "In particular, leveraging the strides of optimization theories in deep NN, we show that max-Q problem can be solved optimally with mixed-integer programming (MIP)---when the Q-function has sufficient representation power, this MIP-based optimization induces better policies and is more robust than counterparts, e.g., CEM or GA, that approximate the max-Q solution.", "To speed up training of CAQL, we develop three techniques, namely", "(i) dynamic tolerance,", "(ii) dual filtering, and", "(iii) clustering.\n", "To speed up inference of CAQL, we introduce the action function that concurrently learns the optimal policy.\n", "To demonstrate the efficiency of CAQL we compare it with state-of-the-art RL algorithms on benchmark continuous control problems that have different degrees of action constraints and show that CAQL significantly outperforms policy-based methods in heavily constrained environments.", "Reinforcement learning (RL) has shown success in a variety of domains such as games (Mnih et al., 2013) and recommender systems (RSs) (Gauci et al., 2018) .", "When the action space is finite, valuebased algorithms such as Q-learning (Watkins & Dayan, 1992) , which implicitly finds a policy by learning the optimal value function, are often very efficient because action optimization can be done by exhaustive enumeration.", "By contrast, in problems with a continuous action spaces (e.g., robotics (Peters & Schaal, 2006) ), policy-based algorithms, such as policy gradient (PG) (Sutton et al., 2000; Silver et al., 2014) or cross-entropy policy search (CEPS) (Mannor et al., 2003; Kalashnikov et al., 2018) , which directly learn a return-maximizing policy, have proven more practical.", "Recently, methods such as ensemble critic (Fujimoto et al., 2018) and entropy regularization (Haarnoja et al., 2018) have been developed to improve the performance of policy-based RL algorithms.", "Policy-based approaches require a reasonable choice of policy parameterization.", "In some continuous control problems, Gaussian distributions over actions conditioned on some state representation is used.", "However, in applications such as RSs, where actions often take the form of high-dimensional item-feature vectors, policies cannot typically be modeled by common action distributions.", "Furthermore, the admissible action set in RL is constrained in practice, for example, when actions must lie within a specific range for safety (Chow et al., 2018) .", "In RSs, the admissible actions are often random functions of the state (Boutilier 
et al., 2018) .", "In such cases, it is non-trivial to define policy parameterizations that handle such factors.", "On the other hand, value-based algorithms are wellsuited to these settings, providing potential advantage over policy methods.", "Moreover, at least with linear function approximation (Melo & Ribeiro, 2007) , under reasonable assumptions, Q-learning converges to optimality, while such optimality guarantees for non-convex policy-based methods are generally limited (Fazel et al., 2018) .", "Empirical results also suggest that value-based methods are more data-efficient and less sensitive to hyper-parameters (Quillen et al., 2018) .", "Of course, with large action spaces, exhaustive action enumeration in value-based algorithms can be expensive--one solution is to represent actions with continuous features (Dulac-Arnold et al., 2015) .", "The main challenge in applying value-based algorithms to continuous-action domains is selecting optimal actions (both at training and inference time).", "Previous work in this direction falls into three broad categories.", "The first solves the inner maximization of the (optimal) Bellman residual loss using global nonlinear optimizers, such as the cross-entropy method (CEM) for QT-Opt (Kalashnikov et al., 2018) , gradient ascent (GA) for actor-expert (Lim et al., 2018) , and action discretization (Uther & Veloso, 1998; Smart & Kaelbling, 2000; Lazaric et al., 2008) .", "However, these approaches do not guarantee optimality.", "The second approach restricts the Q-function parameterization so that the optimization problem is tractable.", "For instance, wire-fitting (Gaskett et al., 1999; III & Klopf, 1993) approximates Q-values piecewise-linearly over a discrete set of points, chosen to ensure the maximum action is one of the extreme points.", "The normalized advantage function (NAF) (Gu et al., 2016) constructs the state-action advantage function to be quadratic, hence analytically solvable.", "Parameterizing the Q-function with an input-convex neural network (Amos et al., 2017) ensures it is concave.", "These restricted functional forms, however, may degrade performance if the domain does not conform to the imposed structure.", "The third category replaces optimal Q-values with a \"soft\" counterpart (Haarnoja et al., 2018) : an entropy regularizer ensures that both the optimal Q-function and policy have closed-form solutions.", "However, the sub-optimality gap of this soft policy scales with the interval and dimensionality of the action space (Neu et al., 2017) .", "Motivated by the shortcomings of prior approaches, we propose Continuous Action Q-learning (CAQL), a Q-learning framework for continuous actions in which the Q-function is modeled by a generic feed-forward neural network.", "1 Our contribution is three-fold.", "First, we develop the CAQL framework, which minimizes the Bellman residual in Q-learning using one of several \"plug-andplay\" action optimizers.", "We show that \"max-Q\" optimization, when the Q-function is approximated by a deep ReLU network, can be formulated as a mixed-integer program (MIP) that solves max-Q optimally.", "When the Q-function has sufficient representation power, MIP-based optimization induces better policies and is more robust than methods (e.g., CEM, GA) that approximate the max-Q solution.", "Second, to improve CAQL's practicality for larger-scale applications, we develop three speed-up techniques for computing max-Q values:", "(i) dynamic tolerance;", "(ii) dual filtering; and", "(iii) clustering.", 
"Third, we compare CAQL with several state-of-the-art RL algorithms on several benchmark problems with varying degrees of action constraints.", "Value-based CAQL is generally competitive, and outperforms policy-based methods in heavily constrained environments, sometimes significantly.", "We also study the effects of our speed-ups through ablation analysis.", "We proposed Continuous Action Q-learning (CAQL), a general framework for handling continuous actions in value-based RL, in which the Q-function is parameterized by a neural network.", "While generic nonlinear optimizers can be naturally integrated with CAQL, we illustrated how the inner maximization of Q-learning can be formulated as mixed-integer programming when the Qfunction is parameterized with a ReLU network.", "CAQL (with action function learning) is a general Q-learning framework that includes many existing value-based methods such as QT-Opt and actorexpert.", "Using several benchmarks with varying degrees of action constraint, we showed that the policy learned by CAQL-MIP generally outperforms those learned by CAQL-GA and CAQL-CEM; and CAQL is competitive with several state-of-the-art policy-based RL algorithms, and often outperforms them (and is more robust) in heavily-constrained environments.", "Future work includes: extending CAQL to the full batch learning setting, in which the optimal Q-function is trained using only offline data; speeding up the MIP computation of the max-Q problem to make CAQL more scalable; and applying CAQL to real-world RL problems." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.20000000298023224, 0, 0.060606054961681366, 0.11428570747375488, 0.05882352590560913, 0, 0.06451612710952759, 0.03448275476694107, 0.0952380895614624, 0, 0, 0.07407406717538834, 0.13636362552642822, 0.11428570747375488, 0.04255318641662598, 0.03448275476694107, 0.0555555522441864, 0.10526315122842789, 0.1599999964237213, 0.05714285373687744, 0.0555555522441864, 0.07692307233810425, 0, 0.07407406717538834, 0.045454543083906174, 0.06666666269302368, 0.1111111044883728, 0.06666666269302368, 0, 0.07407407462596893, 0, 0, 0.04878048226237297, 0, 0, 0, 0, 0.06666666269302368, 0.21621620655059814, 0, 0.06896550953388214, 0, 0, 0.07692307233810425, 0, 0, 0.07407406717538834, 0, 0.0952380895614624, 0.29411762952804565, 0.05128204822540283, 0.19354838132858276, 0.0416666641831398, 0.08695651590824127 ]
BkxXe0Etwr
true
[ "A general framework of value-based reinforcement learning for continuous control" ]