Dataset fields (each record below lists them in this order):

source: sequence
source_labels: sequence
rouge_scores: sequence
paper_id: string (lengths 9–11)
ic: unknown
target: sequence
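As a readability aid, here is a minimal sketch in Python of how one such record could be represented and sanity-checked. The `Record` class, the `validate` helper, and the storage assumption (plain JSON-like lists and scalars) are illustrative only; they mirror the schema above but are not part of any documented API for this data.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Record:
    # Field names and order mirror the schema listed above.
    source: List[str]          # source sentences drawn from the paper
    source_labels: List[int]   # one 0/1 flag per source sentence
    rouge_scores: List[float]  # one ROUGE overlap score per source sentence
    paper_id: str              # identifier, 9-11 characters per the schema
    ic: bool                   # typed "unknown" above; the values shown are true/false
    target: List[str]          # target summary sentence(s)

def validate(rec: Record) -> None:
    # The three per-sentence fields should stay aligned one-to-one.
    n = len(rec.source)
    assert len(rec.source_labels) == n and len(rec.rouge_scores) == n
    assert all(label in (0, 1) for label in rec.source_labels)
```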
[ "There are many differences between convolutional networks and the ventral visual streams of primates.", "For example, standard convolutional networks lack recurrent and lateral connections, cell dynamics, etc.", "However, their feedforward architectures are somewhat similar to the ventral stream, and warrant a more detailed comparison.", "A recent study found that the feedforward architecture of the visual cortex could be closely approximated as a convolutional network, but the resulting architecture differed from widely used deep networks in several ways.", "The same study also found, somewhat surprisingly, that training the ventral stream of this network for object recognition resulted in poor performance.", "This paper examines the performance of this network in more detail.", "In particular, I made a number of changes to the ventral-stream-based architecture, to make it more like a DenseNet, and tested performance at each step.", "I chose DenseNet because it has a high BrainScore, and because it has some cortex-like architectural features such as large in-degrees and long skip connections.", "Most of the changes (which made the cortex-like network more like DenseNet) improved performance.", "Further work is needed to better understand these results.", "One possibility is that details of the ventral-stream architecture may be ill-suited to feedforward computation, simple processing units, and/or backpropagation, which could suggest differences between the way high-performance deep networks and the brain approach core object recognition." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.21621620655059814, 0.1111111044883728, 0.19999998807907104, 0.15094339847564697, 0.2666666507720947, 0.1764705777168274, 0.17391303181648254, 0.22727271914482117, 0.1111111044883728, 0.0624999962747097, 0.13793103396892548 ]
SkegNmFUIS
false
[ "An approximation of primate ventral stream as a convolutional network performs poorly on object recognition, and multiple architectural features contribute to this. " ]
[ "In reinforcement learning, it is common to let an agent interact with its environment for a fixed amount of time before resetting the environment and repeating the process in a series of episodes.", "The task that the agent has to learn can either be to maximize its performance over", "(i) that fixed amount of time, or", "(ii) an indefinite period where the time limit is only used during training.", "In this paper, we investigate theoretically how time limits could effectively be handled in each of the two cases.", "In the first one, we argue that the terminations due to time limits are in fact part of the environment, and propose to include a notion of the remaining time as part of the agent's input.", "In the second case, the time limits are not part of the environment and are only used to facilitate learning.", "We argue that such terminations should not be treated as environmental ones and propose a method, specific to value-based algorithms, that incorporates this insight by continuing to bootstrap at the end of each partial episode.", "To illustrate the significance of our proposals, we perform several experiments on a range of environments from simple few-state transition graphs to complex control tasks, including novel and standard benchmark domains.", "Our results show that the proposed methods improve the performance and stability of existing reinforcement learning algorithms.", "The reinforcement learning framework BID19 BID2 BID21 BID10 involves a sequential interaction between an agent and its environment.", "At every time step t, the agent receives a representation S t of the environment's state, selects an action A t that is executed in the environment which in turn provides a representation S t+1 of the successor state and a reward signal R t+1 .", "An individual reward received by the agent does not directly indicate the quality of its latest action as some rewards may indeed be the consequence of a series of actions taken far in advance.", "Thus, the goal of the agent is to learn a good policy by maximizing the discounted sum of future rewards also known as return: DISPLAYFORM0 A discount factor 0 ≤ γ < 1 is necessary to exponentially decay the future rewards ensuring bounded returns.", "While the series is infinite, it is common to use this expression even in the case of possible terminations.", "Indeed, episode terminations can be considered to be the entering of an absorbing state that transitions only to itself and generates zero rewards thereafter.", "However, when the maximum length of an episode is predefined, it is easier to rewrite the expression above by explicitly including the time limit T : DISPLAYFORM1 Optimizing for the expectation of the return specified in Equation 2 is suitable for naturally timelimited tasks where the agent has to maximize its expected return G 0:T over a fixed episode length only.", "In this case, since the return is bounded, a discount factor of γ = 1 can be used.", "However, in practice it is still common to keep γ smaller than 1 in order to give more priority to short-term rewards.", "Under this optimality model, the objective of the agent does not go beyond the time limit.", "Therefore, an agent optimizing under this model should ideally learn to take more risky actions that reveal higher expected return than safer ones as approaching the end of the time limit.", "In Section 2, we study this case and illustrate that due to the presence of the time limit, the remaining time is present in the environment's state and is essential 
to its Markov property BID19 .", "Therefore, we propose to include a notion of the remaining time in the agent's input, an approach that we refer to as time-awareness.", "We describe various general scenarios where lacking a notion of the remaining time can lead to suboptimal policies and instability, and demonstrate significant performance improvements for time-aware agents.Optimizing for the expectation of the return specified by Equation 1 is relevant for time-unlimited tasks where the interaction is not limited in time by nature.", "In this case, the agent has to maximize its expected return over an indefinite (e.g. infinite) period.", "However, it is desirable to use time limits in order to diversify the agent's experience.", "For example, starting from highly diverse states can avoid converging to suboptimal policies that are limited to a fraction of the state space.", "In Section 3, we show that in order to learn good policies that continue beyond the time limit, it is important to differentiate between the terminations that are due to time limits and those from the environment.", "Specifically, for value-based algorithms, we propose to continue bootstrapping at states where termination is due to the time limit, or generally any other causes other than the environmental ones.", "We refer to this method as partial-episode bootstrapping.", "We describe various scenarios where having a time limit can facilitate learning, but where the aim is to learn optimal policies for indefinite periods, and demonstrate that our method can significantly improve performance.We evaluate the impact of the proposed methods on a range of novel and popular benchmark domains using a deep reinforcement learning BID0 BID9 algorithm called the Proximal Policy Optimization (PPO), one which has recently been used to achieve stateof-the-art performance in many domains BID17 BID8 .", "We use the OpenAI Baselines 1 implementation of the PPO algorithm with the hyperparameters reported by BID17 , unless explicitly specified.", "All novel environments are implemented using the OpenAI Gym framework BID3 and the standard benchmark domains are from the MuJoCo BID22 Gym collection.", "We modified the TimeLimit wrapper to include remaining time in the observations for the proposed time-aware agent and a flag to separate timeout terminations from environmental ones for the proposed partial-episode bootstrapping agent.", "For every task involving PPO, to have perfect reproducibility, we used the same 40 seeds from 0 to 39 to initialize the pseudo-random number generators for the agents and environments.", "Every 5 training cycles (i.e. 
10240 time steps), we perform an evaluation on a complete episode and store the sums of rewards, discounted returns, and estimated state-values.", "For generating the performance plots, we average the values across all runs and then apply smoothing with a sliding window of size 10.", "The performance graphs show these smoothed averages as well as their standard error.We empirically show that time-awareness significantly improves the performance and stability of PPO for the time-limited tasks and can sometimes result in quite interesting behaviors.", "For example, in the Hopper-v1 domain with T = 300, our agent learns to efficiently jump forward and fall towards the end of its time in order to maximize its travelled distance and achieve a \"photo finish\".", "For the time-unlimited tasks, we show that bootstrapping at the end of partial episodes allows to significantly outperform the standard PPO.", "In particular, on Hopper-v1, even if trained with episodes of only 200 steps, our agent manages to learn to hop for at least 10 6 time steps, resulting in more than two hours of rendered video.", "Detailed results for all variants of the tasks using PPO with and without the proposed methods are available in the Appendix.", "The source code will be made publicly available shortly.", "A visual depiction of highlights of the learned behaviors can be viewed at the address sites.google.com/view/time-limits-in-rl.", "We showed in Section 2 that time-awareness is required for correct credit assignment in domains where the agent has to optimize its performance over a time-limited horizon.", "However, even without the knowledge of the remaining time, reinforcement learning agents still often manage to perform relatively well.", "This could be due to several reasons including: (1) If the time limit is sufficiently long that terminations due to time limits are hardly ever experienced-for instance, in the Arcade Learning Environment (ALE) BID1 BID13 BID12 .", "In deep learning BID11 BID15 , it is highly common to use a stack of previous observations or recurrent neural networks (RNNs) BID6 to address scenarios with partial observations BID25 ).", "These solutions may to an extent help when the remaining time is not included as part of the agent's input.", "However, they are much more complex architectures and are only next-best solutions, while including a notion of the remaining time is quite simple and allows better diagnosis of the learned policies.", "The proposed approach is quite generic and can potentially be applied to domains with varying time limits where the agent has to learn to generalize as the remaining time approaches zero.", "In real-world applications such as robotics the proposed approach could easily be adapted by using the real time instead of simulation time steps.In order for the proposed partial-episode bootstrapping in Section 3 to work, as is the case for valuebased methods in general, the agent needs to bootstrap from reliable estimated predictions.", "This is in general resolved by enabling sufficient exploration.", "However, when the interactions are limited in time, exploration of the full state-space may not be feasible from some fixed starting states.", "Thus, a good way to allow appropriate exploration in such domains is to sufficiently randomize the initial states.", "It is worth noting that the proposed partial-episode bootstrapping is quite generic in that it is not restricted to partial episodes caused only due to time limits.", "In fact, this approach is valid for any 
early termination causes.", "For instance, it is common in the curriculum learning literature to start from near the goal states (easier tasks), and gradually expand to further states (more difficult tasks) BID5 .", "In this case, it can be helpful to stitch the learned values by terminating the episodes and bootstrapping as soon as the agent enters a state that is already well known.Since the proposed methods were shown to enable to better optimize for the time-limited and timeunlimited domains, we believe that they have the potential to improve the performance and stability of a large number of existing reinforcement learning algorithms.", "We also propose that, since reinforcement learning agents are in fact optimizing for the expected returns, and not the undiscounted sum of rewards, it is more appropriate to consider this measure for performance evaluation.", "We considered the problem of learning optimal policies in time-limited and time-unlimited domains using time-limited interactions.", "We showed that when learning policies for time-limited tasks, it is important to include a notion of the remaining time as part of the agent's input.", "Not doing so can cause state-aliasing which in turn can result in suboptimal policies, instability, and slower convergence.", "We then showed that, when learning policies that are optimal for time-unlimited tasks, it is more appropriate to continue bootstrapping at the end of the partial episodes when termination is due to time limits, or any other early termination causes other than environmental ones.", "In both cases, we illustrated that our proposed methods can significantly improve the performance of PPO and allow to optimize more directly, and accurately, for either of the optimality models.", "Reacher-v1, = 1.0, training limit = 50, evaluation limit = 50" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.1818181723356247, 0.06666666269302368, 0.09090908616781235, 0.0714285671710968, 0.1764705777168274, 0.1904761791229248, 0.25, 0.1666666567325592, 0.17777776718139648, 0.25806450843811035, 0.12121211737394333, 0.16326530277729034, 0.13333332538604736, 0.07843136787414551, 0.1875, 0.1621621549129486, 0.0952380895614624, 0.12121211737394333, 0.05882352590560913, 0.13793103396892548, 0.08888888359069824, 0.1860465109348297, 0.17142856121063232, 0.24137930572032928, 0.060606054961681366, 0.13793103396892548, 0.1621621549129486, 0.17777776718139648, 0.04878048226237297, 0.08695651590824127, 0.2469135820865631, 0.1764705777168274, 0.23529411852359772, 0.19512194395065308, 0.09756097197532654, 0.1428571343421936, 0.1621621549129486, 0.25, 0.1702127605676651, 0.1764705777168274, 0.0833333283662796, 0.29411762952804565, 0, 0.12903225421905518, 0.24390242993831635, 0.1818181723356247, 0.08510638028383255, 0.09090908616781235, 0.11764705181121826, 0.1904761791229248, 0.1428571343421936, 0.14035087823867798, 0.0833333283662796, 0.2222222238779068, 0.1875, 0.10526315122842789, 0, 0.19512194395065308, 0.1428571343421936, 0.2978723347187042, 0.9333333373069763, 0.307692289352417, 0.12903225421905518, 0.2641509473323822, 0.1428571343421936, 0 ]
HyDAQl-AW
true
[ "We consider the problem of learning optimal policies in time-limited and time-unlimited domains using time-limited interactions." ]
[ "Although stochastic gradient descent (SGD) is a driving force behind the recent success of deep learning, our understanding of its dynamics in a high-dimensional parameter space is limited.", "In recent years, some researchers have used the stochasticity of minibatch gradients, or the signal-to-noise ratio, to better characterize the learning dynamics of SGD.", "Inspired from these work, we here analyze SGD from a geometrical perspective by inspecting the stochasticity of the norms and directions of minibatch gradients.", "We propose a model of the directional concentration for minibatch gradients through von Mises-Fisher (VMF) distribution, and show that the directional uniformity of minibatch gradients increases over the course of SGD.", "We empirically verify our result using deep convolutional networks and observe a higher correlation between the gradient stochasticity and the proposed directional uniformity than that against the gradient norm stochasticity, suggesting that the directional statistics of minibatch gradients is a major factor behind SGD.", "Stochastic gradient descent (SGD) has been a driving force behind the recent success of deep learning.", "Despite a series of work on improving SGD by incorporating the second-order information of the objective function BID26 BID21 BID6 BID22 BID7 , SGD is still the most widely used optimization algorithm for training a deep neural network.", "The learning dynamics of SGD, however, has not been well characterized beyond that it converges to an extremal point BID1 due to the non-convexity and highdimensionality of a usual objective function used in deep learning.Gradient stochasticity, or the signal-to-noise ratio (SNR) of the stochastic gradient, has been proposed as a tool for analyzing the learning dynamics of SGD.", "BID28 identified two phases in SGD based on this.", "In the first phase, \"drift phase\", the gradient mean is much higher than its standard deviation, during which optimization progresses rapidly.", "This drift phase is followed by the \"diffusion phase\", where SGD behaves similarly to Gaussian noise with very small means.", "Similar observations were made by BID18 and BID4 who have also divided the learning dynamics of SGD into two phases.", "BID28 have proposed that such phase transition is related to information compression.", "Unlike them, we notice that there are two aspects to the gradient stochasticity.", "One is the L 2 norm of the minibatch gradient (the norm stochasticity), and the other is the directional balance of minibatch gradients (the directional stochasticity).", "SGD converges or terminates when either the norm of the minibatch gradient vanishes to zeros, or when the angles of the minibatch gradients are uniformly distributed and their non-zero norms are close to each other.", "That is, the gradient stochasticity, or the SNR of the stochastic gradient, is driven by both of these aspects, and it is necessary for us to investigate not only the holistic SNR but also the SNR of the minibatch gradient norm and that of the minibatch gradient angles.In this paper, we use a von Mises-Fisher (vMF hereafter) distribution, which is often used in directional statistics BID20 , and its concentration parameter κ to characterize the directional balance of minibatch gradients and understand the learning dynamics of SGD from the perspective of directional statistics of minibatch gradients.", "We prove that SGD increases the direc-tional balance of minibatch gradients.", "We empirically verify this with deep 
convolutional networks with various techniques, including batch normalization BID12 and residual connections BID9 , on MNIST and CIFAR-10 ( BID15 ).", "Our empirical investigation further reveals that the proposed directional stochasticity is a major drive behind the gradient stochasticity compared to the norm stochasticity, suggesting the importance of understanding the directional statistics of the stochastic gradient.Contribution We analyze directional stochasticity of the minibatch gradients via angles as well as the concentration parameter of the vMF distribution.", "Especially, we theoretically show that the directional uniformity of the minibatch gradients modeled by the vMF distribution increases as training progresses, and verify this by experiments.", "In doing so, we introduce gradient norm stochasticity as the ratio of the standard deviation of the minibatch gradients to their expectation and theoretically and empirically show that this gradient norm stochasticity decreases as the batch size increases.Related work Most studies about SGD dynamics have been based on two-phase behavior BID28 BID18 BID4 .", "BID18 investigated this behavior by considering a shallow neural network with residual connections and assuming the standard normal input distribution.", "They showed that SGD-based learning under these setups has two phases; search and convergence phases.", "BID28 on the other hand investigated a deep neural network with tanh activation functions, and showed that SGD-based learning has drift and diffusion phases.", "They have also proposed that such SNR transition (drift + diffusion) is related to the information transition divided into empirical error minimization and representation compression phases.", "However, Saxe et al. (2018) have reported that the information transition is not generally associated with the SNR transition with ReLU BID23 ) activation functions.", "BID4 instead looked at the inner product between successive minibatch gradients and presented transient and stationary phases.Unlike our work here, the experimental verification of the previous work conducted under limited settings -the shallow network BID18 , the specific activation function BID28 , and only MNIST dataset BID28 BID4 -that conform well with their theoretical assumptions.", "Moreover, their work does not offer empirical result about the effect of the latest techniques including both batch normalization BID12 layers and residual connections BID9 .", "Stochasticity of gradients is a key to understanding the learning dynamics of SGD BID28 and has been pointed out as a factor behind the success of SGD (see, e.g., BID17 BID14 .", "In this paper, we provide a theoretical framework using von Mises-Fisher distribution, under which the directional stochasticity of minibatch gradients can be estimated and analyzed, and show that the directional uniformity increases over the course of SGD.", "Through the extensive empirical evaluation, we have observed that the directional uniformity indeed improves over the course of training a deep neural network, and that its trend is monotonic when batch normalization and skip connections were used.", "Furthermore, we demonstrated that the stochasticity of minibatch gradients is largely determined by the directional stochasticity rather than the gradient norm stochasticity.Our work in this paper suggests two major research directions for the future.", "First, our analysis has focused on the aspect of optimization, and it is an open question how the directional 
uniformity relates to the generalization error although handling the stochasticity of gradients has improved SGD BID24 BID11 BID29 BID13 .", "Second, we have focused on passive analysis of SGD using the directional statistics of minibatch gradients, but it is not unreasonable to suspect that SGD could be improved by explicitly taking into account the directional statistics of minibatch gradients during optimization." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1875, 0.1428571343421936, 0.0714285671710968, 0.06451612710952759, 0.09302325546741486, 0.260869562625885, 0.09999999403953552, 0.15094339847564697, 0.1249999925494194, 0, 0, 0.14814814925193787, 0, 0, 0.1666666567325592, 0.060606058686971664, 0.0845070406794548, 0.1111111044883728, 0.0624999962747097, 0.043478257954120636, 0.06666666269302368, 0.038461536169052124, 0, 0.09090908616781235, 0.13333332538604736, 0, 0, 0.07407407462596893, 0.06451612710952759, 0.11428570747375488, 0.10256409645080566, 0.09999999403953552, 0.10810810327529907, 0.04999999701976776, 0.04878048598766327 ]
rkeT8iR9Y7
true
[ "One of theoretical issues in deep learning" ]
[ "Design of reliable systems must guarantee stability against input perturbations.", "In machine learning, such guarantee entails preventing overfitting and ensuring robustness of models against corruption of input data.", "In order to maximize stability, we analyze and develop a computationally efficient implementation of Jacobian regularization that increases classification margins of neural networks.", "The stabilizing effect of the Jacobian regularizer leads to significant improvements in robustness, as measured against both random and adversarial input perturbations, without severely degrading generalization properties on clean data.", "Stability analysis lies at the heart of many scientific and engineering disciplines.", "In an unstable system, infinitesimal perturbations amplify and have substantial impacts on the performance of the system.", "It is especially critical to perform a thorough stability analysis on complex engineered systems deployed in practice, or else what may seem like innocuous perturbations can lead to catastrophic consequences such as the Tacoma Narrows Bridge collapse (Amman et al., 1941) and the Space Shuttle Challenger disaster (Feynman and Leighton, 2001) .", "As a rule of thumb, well-engineered systems should be robust against any input shifts -expected or unexpected.", "Most models in machine learning are complex nonlinear systems and thus no exception to this rule.", "For instance, a reliable model must withstand shifts from training data to unseen test data, bridging the so-called generalization gap.", "This problem is severe especially when training data are strongly biased with respect to test data, as in domain-adaptation tasks, or when only sparse sampling of a true underlying distribution is available, as in few-shot learning.", "Any instability in the system can further be exploited by adversaries to render trained models utterly useless (Szegedy et al., 2013; Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016a; Kurakin et al., 2016; Madry et al., 2017; Carlini and Wagner, 2017; Gilmer et al., 2018) .", "It is thus of utmost importance to ensure that models be stable against perturbations in the input space.", "Various regularization schemes have been proposed to improve the stability of models.", "For linear classifiers and support vector machines (Cortes and Vapnik, 1995) , this goal is attained via an L 2 regularization which maximizes classification margins and reduces overfitting to the training data.", "This regularization technique has been widely used for neural networks as well and shown to promote generalization (Hinton, 1987; Krogh and Hertz, 1992; Zhang et al., 2018) .", "However, it remains unclear whether or not L 2 regularization increases classification margins and stability of a network, especially for deep architectures with intertwining nonlinearity.", "In this paper, we suggest ensuring robustness of nonlinear models via a Jacobian regularization scheme.", "We illustrate the intuition behind our regularization approach by visualizing the classification margins of a simple MNIST digit classifier in Figure 1 (see Appendix A for more).", "Decision cells of a neural network, trained without regularization, are very rugged and can be unpredictably unstable ( Figure 1a ).", "On average, L 2 regularization smooths out these rugged boundaries but does not necessarily increase the size of decision cells, i.e., does not increase classification margins (Figure 1b) .", "In contrast, Jacobian regularization pushes 
decision boundaries farther away from each training data point, enlarging decision cells and reducing instability (Figure 1c ).", "The goal of the paper is to promote Jacobian regularization as a generic scheme for increasing robustness while also being agnostic to the architecture, domain, or task to which it is applied.", "In support of this, after presenting the Jacobian regularizer, we evaluate its effect both in isolation as well as in combination with multiple existing approaches that are intended to promote robustness and generalization.", "Our intention is to showcase the ease of use and complimentary nature of our proposed regularization.", "Domain experts in each field should be able to quickly incorporate our regularizer into their learning pipeline as a simple way of improving the performance of their state-of-the-art system.", "The rest of the paper is structured as follows.", "In Section 2 we motivate the usage of Jacobian regularization and develop a computationally efficient algorithm for its implementation.", "Next, the effectiveness of this regularizer is empirically studied in Section 3.", "As regularlizers constrain the learning problem, we first verify that the introduction of our regularizer does not adversely affect learning in the case when input data remain unperturbed.", "Robustness against both random and adversarial perturbations is then evaluated and shown to receive significant improvements from the Jacobian regularizer.", "We contrast our work with the literature in Section 4 and conclude in Section 5.", "In this paper, we motivated Jacobian regularization as a task-agnostic method to improve stability of models against perturbations to input data.", "Our method is simply implementable in any open source automatic differentiation system, and additionally we have carefully shown that the approximate nature of the random projection is virtually negligible.", "Furthermore, we have shown that Jacobian regularization enlarges the size of decision cells and is practically effective in improving the generalization property and robustness of the models, which is especially useful for defense against input-data corruption.", "We hope practitioners will combine our Jacobian regularization scheme with the arsenal of other tricks in machine learning and prove it useful in pushing the (decision) boundary of the field and ensuring stable deployment of models in everyday life." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0714285671710968, 0.11428570747375488, 0.800000011920929, 0.1666666567325592, 0.19999998807907104, 0.1764705777168274, 0.08955223858356476, 0.11428570747375488, 0.05882352590560913, 0.10526315122842789, 0.07999999821186066, 0.07407406717538834, 0.1666666567325592, 0.19999998807907104, 0.2083333283662796, 0.17777776718139648, 0.3255814015865326, 0.24242423474788666, 0.3181818127632141, 0.20512819290161133, 0.2222222238779068, 0.14999999105930328, 0.21739129722118378, 0.20408162474632263, 0.24242423474788666, 0.13333332538604736, 0.14814814925193787, 0.5405405163764954, 0.13333332538604736, 0.1395348757505417, 0.1621621549129486, 0.19354838132858276, 0.21052631735801697, 0.17777776718139648, 0.2448979616165161, 0.23999999463558197 ]
ryl-RTEYvB
true
[ "We analyze and develop a computationally efficient implementation of Jacobian regularization that increases the classification margins of neural networks." ]
[ "With the increasing demand to deploy convolutional neural networks (CNNs) on mobile platforms, the sparse kernel approach was proposed, which could save more parameters than the standard convolution while maintaining accuracy.", "However, despite the great potential, no prior research has pointed out how to craft an sparse kernel design with such potential (i.e., effective design), and all prior works just adopt simple combinations of existing sparse kernels such as group convolution.", "Meanwhile due to the large design space it is also impossible to try all combinations of existing sparse kernels.", "In this paper, we are the first in the field to consider how to craft an effective sparse kernel design by eliminating the large design space.", "Specifically, we present a sparse kernel scheme to illustrate how to reduce the space from three aspects.", "First, in terms of composition we remove designs composed of repeated layers.", "Second, to remove designs with large accuracy degradation, we find an unified property named~\\emph{information field} behind various sparse kernel designs, which could directly indicate the final accuracy.", "Last, we remove designs in two cases where a better parameter efficiency could be achieved.", "Additionally, we provide detailed efficiency analysis on the final 4 designs in our scheme.", "Experimental results validate the idea of our scheme by showing that our scheme is able to find designs which are more efficient in using parameters and computation with similar or higher accuracy.", "CNNs have achieved unprecedented success in visual recognition tasks.", "The development of mobile devices drives the increasing demand to deploy these deep networks on mobile platforms such as cell phones and self-driving cars.", "However, CNNs are usually resource-intensive, making them difficult to deploy on these memory-constrained and energy-limited platforms.To enable the deployment, one intuitive idea is to reduce the model size.", "Model compression is the major research trend for it.", "Previously several techniques have been proposed, including pruning BID18 , quantization BID28 and low rank approximation BID6 .", "Though these approaches can can offer a reasonable parameter reduction with minor accuracy degradation, they suffer from the three drawbacks:", "1) the irregular network structure after compression, which limits performance and throughput on GPU;", "2) the increased training complexity due to the additional compression or re-training process; and", "3) the heuristic compression ratios depending on networks, which cannot be precisely controlled.Recently the sparse kernel approach was proposed to mitigate these problems by directly training networks using structural (large granularity) sparse convolutional kernels with fixed compression ratios.", "The idea of sparse kernel was originally proposed as different types of convolutional approach.", "Later researchers explore their usages in the context of CNNs by combining some of these sparse kernels to save parameters/computation against the standard convolution.", "For example, MobileNets BID12 realize 7x parameter savings with only 1% accuracy loss by adopting the combination of two sparse kernels, depthwise convolution BID26 and pointwise convoluiton BID20 , to replace the standard convolution in their networks.However, despite the great potential with sparse kernel approach to save parameters/computation while maintaining accuracy, it is still mysterious in the field regarding how to craft an sparse 
kernel design with such potential (i.e., effective sparse kernel design).", "Prior works like MobileNet BID12 and Xception BID1 just adopt simple combinations of existing sparse kernels, and no one really points out the reasons why they choose such kind of design.", "Meanwhile, it has been a long-existing question in the field whether there is any other sparse kernel design that is more efficient than all state-of-the-art ones while also maintaining a similar accuracy with the standard convolution.To answer this question, a native idea is to try all possible combinations and get the final accuracy for each of them.", "Unfortunately, the number of combination will grow exponentially with the number of kernels in a design, and thus it is infeasible to train each of them.", "Specifically, even if we limit the design space to four common types of sparse kernels -group convolution BID16 , depthwise convolution BID26 , pointwise convolution BID20 and pointwise group convolution ) -the total number of possible combinations would be 4 k , given that k is the number of sparse kernels we allow to use in a design (note that each sparse kernel can appear more than once in a design).In", "this paper, we craft the effective sparse kernel design by efficiently eliminating poor candidates from the large design space. Specifically", ", we reduce the design space from three aspects: composition, performance and efficiency. First, observing", "that in normal CNNs it is quite common to have multiple blocks which contain repeated patterns such as layers or structures, we eliminate the design space by ignoring the combinations including repeated patterns. Second, realizing", "that removing designs with large accuracy degradation would significantly reduce the design space, we identify a easily measurable quantity named information field behind various sparse kernel designs, which is closely related to the model accuracy. We get rid of designs", "that lead to a smaller information field compared to the standard convolution model. Last, in order to achieve", "a better parameter efficiency, we remove redundant sparse kernels in a design if the same size of information field is already retained by other sparse kernels in the design. With all aforementioned knowledge", ", we present a sparse kernel scheme that incorporates the final four different designs manually reduced from the original design space.Additionally, in practice, researchers would also like to select the most parameter/computation efficient sparse kernel designs based on their needs, which drives the demand to study the efficiency for different sparse kernel designs. Previously no research has investigated", "on the efficiency for any sparse kernel design. In this paper, three aspects of efficiency", "are addressed for each of the sparse kernel designs in our scheme: 1) what are the factors which could affect", "the efficiency for each design? 2) how does each factor affect the efficiency", "alone? 
3) when is the best efficiency achieved combining", "all these factors in different real situations?Besides, we show that the accuracy of models composed", "of new designs in our scheme are better than that of all state-of-the-art methods under the same constraint of parameters, which implies that more efficient designs are constructed by our scheme and again validates the effectiveness of our idea.The contributions of our paper can be summarized as follows:• We are the first in the field to point out that the information field is the key for the sparse kernel designs. Meanwhile we observe the model accuracy is positively", "correlated to the size of the information field.• We present a sparse kernel scheme to illustrate how", "to eliminate the original design space from three aspects and incorporate the final 4 types of designs along with rigorous mathematical foundation on the efficiency.• We provide some potential network designs which are", "in the scope of our scheme and have not been explored yet and show that they could have superior performances.", "In this paper, we present a scheme to craft the effective sparse kernel design by eliminating the large design space from three aspects: composition, performance and efficiency.", "During the process to reduce the design space, we find an unified property named information field behind various designs, which could directly indicate the final accuracy.", "Meanwhile we show the final 4 designs in our scheme along with detailed efficiency analysis.", "Experimental results also validate the idea of our scheme." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.15686273574829102, 0.32786884903907776, 0.19999998807907104, 0.5909090638160706, 0.3684210479259491, 0.060606054961681366, 0.2083333283662796, 0.10810810327529907, 0.1666666567325592, 0.19230768084526062, 0.06451612710952759, 0.13333332538604736, 0.16326530277729034, 0.06451612710952759, 0.05128204822540283, 0.1463414579629898, 0.1666666567325592, 0.17142856121063232, 0.14035087823867798, 0.11428570747375488, 0.1818181723356247, 0.28915661573410034, 0.15686273574829102, 0.2222222238779068, 0.1818181723356247, 0.18918918073177338, 0.3499999940395355, 0.4864864945411682, 0.14814814925193787, 0.23728813230991364, 0.21052631735801697, 0.20408162474632263, 0.23188404738903046, 0.3333333432674408, 0.25, 0.1875, 0.12903225421905518, 0.1621621549129486, 0.25974026322364807, 0.3684210479259491, 0.3396226465702057, 0.19999998807907104, 0.5957446694374084, 0.21739129722118378, 0.21621620655059814, 0.06451612710952759 ]
rJlg1n05YX
true
[ "We are the first in the field to show how to craft an effective sparse kernel design from three aspects: composition, performance and efficiency." ]
[ "In weakly-supervised temporal action localization, previous works have failed to locate dense and integral regions for each entire action due to the overestimation of the most salient regions.", "To alleviate this issue, we propose a marginalized average attentional network (MAAN) to suppress the dominant response of the most salient regions in a principled manner.", "The MAAN employs a novel marginalized average aggregation (MAA) module and learns a set of latent discriminative probabilities in an end-to-end fashion.", " MAA samples multiple subsets from the video snippet features according to a set of latent discriminative probabilities and takes the expectation over all the averaged subset features", ". Theoretically, we prove that the MAA module with learned latent discriminative probabilities successfully reduces the difference in responses between the most salient regions and the others", ". Therefore, MAAN is able to generate better class activation sequences and identify dense and integral action regions in the videos", ". Moreover, we propose a fast algorithm to reduce the complexity of constructing MAA from $O(2^T)$ to $O(T^2)$.", "Extensive experiments on two large-scale video datasets show that our MAAN achieves a superior performance on weakly-supervised temporal action localization.\n\n\n", "Weakly-supervised temporal action localization has been of interest to the community recently.", "The setting is to train a model with solely video-level class labels, and to predict both the class and the temporal boundary of each action instance at the test time.", "The major challenge in the weakly-supervised localization problem is to find the right way to express and infer the underlying location information with only the video-level class labels.", "Traditionally, this is achieved by explicitly sampling several possible instances with different locations and durations BID2 BID11 .", "The instance-level classifiers would then be trained through multiple instances learning BID4 BID40 or curriculum learning BID1 ).", "However, the length of actions and videos varies too much such that the number of instance proposals for each video varies a lot and it can also be huge.", "As a result, traditional methods based on instance proposals become infeasible in many cases.Recent research, however, has pivoted to acquire the location information by generating the class activation sequence (CAS) directly BID17 , which produces the classification score sequence of being each action for each snippet over time.", "The CAS along the 1D temporal dimension for a video is inspired by the class activation map (CAM) BID46 BID19 BID18 in weakly-supervised object detection.", "The CAM-based models have shown that despite being trained on image-level labels, convolutional neural networks (CNNs) have the remarkable ability to localize objects.", "Similar to object detection, the basic idea behind CAS-based methods for action localization in the training is to sample the non-overlapping snippets from a video, then to aggregate the snippet-level features into a video-level feature, and finally to yield a video-level class prediction.", "During testing, the model generates a CAS for each class that identifies the discriminative action regions, and then applies a threshold on the CAS to localize each action instance in terms of the start time and the end time.In CAS-based methods, the feature aggregator that aggregates multiple snippet-level features into a video-level feature is the critical building block of 
weakly-supervised neural networks.", "A model's ability to capture the location information of an action is primarily determined by the design of the aggregators.", "While using the global average pooling over a full image or across the video snippets has shown great promise in identifying the discriminative regions BID46 BID19 BID18 , treating each pixel or snippet equally loses the opportunity to benefit from several more essential parts.", "Some recent works BID17 BID49 have tried to learn attentional weights for different snippets to compute a weighted sum as the aggregated feature.", "However, they suffer from the weights being easily dominated by only a few most salient snippets.In general, models trained with only video-level class labels tend to be easily responsive to small and sparse discriminative regions from the snippets of interest.", "This deviates from the objective of the localization task that is to locate dense and integral regions for each entire action.", "To mitigate this gap and reduce the effect of the domination by the most salient regions, several heuristic tricks have been proposed to apply to existing models.", "For example, BID35 BID44 attempt to heuristically erase the most salient regions predicted by the model which are currently being mined, and force the network to attend other salient regions in the remaining regions by forwarding the model several times.", "However, the heuristic multiple-run model is not end-to-end trainable.", "It is the ensemble of multiple-run mined regions but not the single model's own ability that learns the entire action regions.", "\"Hide-and-seek\"", "BID28 randomly masks out some regions of the input during training, enforcing the model to localize other salient regions when the most salient regions happen to be masked out.", "However, all the input regions are masked out with the same probability due to the uniform prior, and it is very likely that most of the time it is the background that is being masked out.", "A detailed discussion about related works can be found in Appendix D.To this end, we propose the marginalized average attentional network (MAAN) to alleviate the issue raised by the domination of the most salient region in an end-to-end fashion for weakly-supervised action localization.", "Specifically, MAAN suppresses the action prediction response of the most salient regions by employing marginalized average aggregation (MAA) and learning the latent discriminative probability in a principled manner.", "Unlike the previous attentional pooling aggregator which calculates the weighted sum with attention weights, MAA first samples a subset of features according to their latent discriminative probabilities, and then calculates the average of these sampled features.", "Finally, MAA takes the expectation (marginalization) of the average aggregated subset features over all the possible subsets to achieve the final aggregation.", "As a result, MAA not only alleviates the domination by the most salient regions, but also maintains the scale of the aggregated feature within a reasonable range.", "We theoretically prove that, with the MAA, the learned latent discriminative probability indeed reduces the difference of response between the most salient regions and the others.", "Therefore, MAAN can identify more dense and integral regions for each action.", "Moreover, since enumerating all the possible subsets is exponentially expensive, we further propose a fast iterative algorithm to reduce the complexity of the expectation 
calculation procedure and provide a theoretical analysis.", "Furthermore, MAAN is easy to train in an end-to-end fashion since all the components of the network are differentiable.", "Extensive experiments on two large-scale video datasets show that MAAN consistently outperforms the baseline models and achieves superior performance on weakly-supervised temporal action localization.In summary, our main contributions include: (1) a novel end-to-end trainable marginalized average attentional network (MAAN) with a marginalized average aggregation (MAA) module in the weaklysupervised setting; (2) theoretical analysis of the properties of MAA and an explanation of the reasons MAAN alleviates the issue raised by the domination of the most salient regions; (3) a fast iterative algorithm that can effectively reduce the computational complexity of MAA; and (4) a superior performance on two benchmark video datasets, THUMOS14 and ActivityNet1.3, on the weakly-supervised temporal action localization.", "incorporates MAA, and introduce the corresponding inference process on weakly-supervised temporal action localization in Sec. 2.4.", "We have proposed the marginalized average attentional network (MAAN) for weakly-supervised temporal action localization.", "MAAN employs a novel marginalized average aggregation (MAA) operation to encourage the network to identify the dense and integral action segments and is trained in an end-to-end fashion.", "Theoretically, we have proved that MAA reduces the gap between the most discriminant regions in the video to the others, and thus MAAN generates better class activation sequences to infer the action locations.", "We have also proposed a fast algorithm to reduce the computation complexity of MAA.", "Our proposed MAAN achieves superior performance on both the THUMOS14 and the ActivityNet1.3 datasets on weakly-supervised temporal action localization tasks compared to current state-of-the-art methods." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.22857142984867096, 0.22857142984867096, 0.1875, 0, 0, 0.06666666269302368, 0, 0.25806450843811035, 0.260869562625885, 0.1111111044883728, 0.11428570747375488, 0, 0, 0.0555555522441864, 0.0714285671710968, 0.17142856121063232, 0, 0.13333332538604736, 0.10344827175140381, 0.1428571343421936, 0.039215683937072754, 0.12121211737394333, 0, 0.19354838132858276, 0, 0.04878048226237297, 0, 0.06896550953388214, 0, 0, 0.3529411852359772, 0.1621621549129486, 0.0952380895614624, 0.06666666269302368, 0, 0, 0.17391303181648254, 0, 0.06896550953388214, 0.20000000298023224, 0.2857142686843872, 0.7199999690055847, 0.277777761220932, 0.05128204822540283, 0, 0.22857142984867096 ]
HkljioCcFQ
true
[ "A novel marginalized average attentional network for weakly-supervised temporal action localization " ]
[ "Deep image prior (DIP), which utilizes a deep convolutional network (ConvNet) structure itself as an image prior, has attracted huge attentions in computer vision community. ", "It empirically shows the effectiveness of ConvNet structure for various image restoration applications. ", "However, why the DIP works so well is still unknown, and why convolution operation is essential for image reconstruction or enhancement is not very clear.", "In this study, we tackle these questions.", "The proposed approach is dividing the convolution into ``delay-embedding'' and ``transformation (\\ie encoder-decoder)'', and proposing a simple, but essential, image/tensor modeling method which is closely related to dynamical systems and self-similarity.", "The proposed method named as manifold modeling in embedded space (MMES) is implemented by using a novel denoising-auto-encoder in combination with multi-way delay-embedding transform.", "In spite of its simplicity, the image/tensor completion and super-resolution results of MMES are quite similar even competitive to DIP in our extensive experiments, and these results would help us for reinterpreting/characterizing the DIP from a perspective of ``low-dimensional patch-manifold prior''.", "The most important piece of information for image/tensor restoration would be the \"prior\" which usually converts the optimization problems from ill-posed to well-posed, and/or gives some robustness for specific noises and outliers.", "Many priors were studied in computer science problems such as low-rank representation (Pearson, 1901; Hotelling, 1933; Hitchcock, 1927; Tucker, 1966) , smoothness (Grimson, 1981; Poggio et al., 1985; Li, 1994) , sparseness (Tibshirani, 1996) , non-negativity (Lee & Seung, 1999; Cichocki et al., 2009) , statistical independence (Hyvarinen et al., 2004) , and so on.", "Particularly in today's computer vision problems, total variation (TV) (Guichard & Malgouyres, 1998; Vogel & Oman, 1998) , low-rank representation (Liu et al., 2013; Ji et al., 2010; Zhao et al., 2015; Wang et al., 2017) , and non-local similarity (Buades et al., 2005; Dabov et al., 2007) priors are often used for image modeling.", "These priors can be obtained by analyzing basic properties of natural images, and categorized as \"unsupervised image modeling\".", "By contrast, the deep image prior (DIP) (Ulyanov et al., 2018) has been come from a part of \"supervised\" or \"data-driven\" image modeling framework (i.e., deep learning) although the DIP itself is one of the state-of-the-art unsupervised image restoration methods.", "The method of DIP can be simply explained to only optimize an untrained (i.e., randomly initialized) fully convolutional generator network (ConvNet) for minimizing squares loss between its generated image and an observed image (e.g., noisy image), and stop the optimization before the overfitting.", "Ulyanov et al. 
(2018) explained the reason why a high-capacity ConvNet can be used as a prior by the following statement: Network resists \"bad\" solutions and descends much more quickly towards naturally-looking images, and its phenomenon of \"impedance of ConvNet\" was confirmed by toy experiments.", "However, most researchers could not be fully convinced from only above explanation because it is just a part of whole.", "One of the essential questions is why is it ConvNet?", "or in more practical perspective, to explain what is \"priors in DIP\" with simple and clear words (like smoothness, sparseness, low-rank etc) is very important.", "In this study, we tackle the question why ConvNet is essential as an image prior, and try to translate the \"deep image prior\" with words.", "For this purpose, we divide the convolution operation into \"embedding\" and \"transformation\" (see Fig. 9 in Appendix).", "Here, the \"embedding\" stands for delay/shift-embedding (i.e., Hankelization) which is a copy/duplication operation of image-patches by sliding window of patch size (τ, τ ).", "The embedding/Hankelization is a preprocessing to capture the delay/shift-invariant feature (e.g., non-local similarity) of signals/images.", "This \"transformation\" is basically linear transformation in a simple convolution operation, and it also indicates some nonlinear transformation from the ConvNet perspective.", "To simplify the complicated \"encoder-decoder\" structure of ConvNet used in DIP, we consider the following network structure: Embedding H (linear), encoding φ r (non-linear), decoding ψ r (non-linear), and backward embedding H † (linear) (see Fig. 1 ).", "Note that its encoder-decoder part (φ r , ψ r ) is just a simple multi-layer perceptron along the filter domain (i.e., manifold learning), and it is sandwitched between forward and backward embedding (H, H † ).", "Hence, the proposed network can be characterized by Manifold Modeling in Embedded Space (MMES).", "The proposed MMES is designed as simple as possible while keeping a essential ConvNet structure.", "Some parameters τ and r in MMES are corresponded with a kernel size and a filter size in ConvNet.", "When we set the horizontal dimension of hidden tensor L with r, each τ 2 -dimensional fiber in H, which is a vectorization of each (τ, τ )-patch of an input image, is encoded into r-dimensional space.", "Note that the volume of hidden tensor L looks to be larger than that of input/output image, but representation ability of L is much lower than input/output image space since the first/last tensor (H,H ) must have Hankel structure (i.e., its representation ability is equivalent to image) and the hidden tensor L is reduced to lower dimensions from H. Here, we assume r < τ 2 , and its lowdimensionality indicates the existence of similar (τ, τ )-patches (i.e., self-similarity) in the image, and it would provide some \"impedance\" which passes self-similar patches and resist/ignore others.", "Each fiber of Hidden tensor L represents a coordinate on the patch-manifold of image.", "It should be noted that the MMES network is a special case of deep neural networks.", "In fact, the proposed MMES can be considered as a new kind of auto-encoder (AE) in which convolution operations have been replaced by Hankelization in pre-processing and post-processing.", "Compared with ConvNet, the forward and backward embedding operations can be implemented by convolution and transposed convolution with one-hot-filters (see Fig. 
12 in Appendix for details).", "Note that the encoder-decoder part can be implemented by multiple convolution layers with kernel size (1,1) and non-linear activations.", "In our model, we do not use convolution explicitly but just do linear transform and non-linear activation for \"filter-domain\" (i.e., horizontal axis of tensors in Fig. 1 ).", "The contributions in this study can be summarized as follow: (1) A new and simple approach of image/tensor modeling is proposed which translates the ConvNet, (2) effectiveness of the proposed method and similarity to the DIP are demonstrated in experiments, and (3) most importantly, there is a prospect for interpreting/characterizing the DIP as \"low-dimensional patch-manifold prior\".", "A beautiful manifold representation of complicated signals in embedded space has been originally discovered in a study of dynamical system analysis (i.e., chaos analysis) for time-series signals (Packard et al., 1980) .", "After this, many signal processing and computer vision applications have been studied but most methods have considered only linear approximation because of the difficulty of non-linear modeling (Van Overschee & De Moor, 1991; Szummer & Picard, 1996; Li et al., 1997; Ding et al., 2007; Markovsky, 2008) .", "However nowadays, the study of non-linear/manifold modeling has been well progressed with deep learning, and it was successfully applied in this study.", "Interestingly, we could apply this non-linear system analysis not only for time-series signals but also natural color images and tensors (this is an extension from delay-embedding to multi-way shiftembedding).", "The best of our knowledge, this is the first study to apply Hankelization with AE into general tensor data reconstruction.", "MMES is a novel and simple image reconstruction model based on the low-dimensional patchmanifold prior which has many connections to ConvNet.", "We believe it helps us to understand how work ConvNet/DIP through MMES, and support to use DIP for various applications like tensor/image reconstruction or enhancement (Gong et al., 2018; Yokota et al., 2019; Van Veen et al., 2018; Gandelsman et al., 2019) .", "Finally, we established bridges between quite different research areas such as the dynamical system analysis, the deep learning, and the tensor modeling.", "The proposed method is just a prototype and can be further improved by incorporating other methods such as regularizations, multi-scale extensions, and adversarial training.", "We can see the anti-diagonal elements of above matrix are equivalent.", "Such matrix is called as \"Hankel matrix\".", "For a two-dimensional array", "we consider unfold of it and inverse folding by unfold", ", and", "The point here is that we scan matrix elements column-wise manner.", "Hankelization of this twodimensional array (matrix) with τ = [2, 2] is given by scanning a matrix with local (2,2)-window column-wise manner, and unfold and stack each local patch left-to-right.", "Thus, it is given as", "We can see that it is not a Hankel matrix.", "However, it is a \"block Hankel matrix\" in perspective of block matrix, a matrix that its elements are also matrices.", "We can see the block matrix itself is a Hankel matrix and all elements are Hankel matrices, too.", "Thus, Hankel matrix is a special case of block Hankel matrix in case of that all elements are scalar.", "In this paper, we say simply \"Hankel structure\" for block Hankel structure.", "Figure 9 shows an illustrative explanation of valid convolution which is decomposed 
into delay-embedding/Hankelization and linear transformation.",
 "1D valid convolution of f with a kernel h = [h_1, h_2, h_3] can be computed as the matrix-vector product of the Hankel matrix and h.",
 "In a similar way, 2D valid convolution can be computed as the matrix-vector product of the block Hankel matrix and the unfolded kernel." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19999998807907104, 0.06896550953388214, 0.05405404791235924, 0, 0.04651162400841713, 0.21052631735801697, 0.03999999538064003, 0, 0, 0.03448275476694107, 0.060606054961681366, 0.15686273574829102, 0.035087715834379196, 0.072727270424366, 0.05714285373687744, 0, 0.052631575614213943, 0.10526315122842789, 0, 0.04999999701976776, 0.0624999962747097, 0.0555555522441864, 0, 0.04081632196903229, 0, 0.06896550953388214, 0.13333332538604736, 0.08510638028383255, 0.024096382781863213, 0.1428571343421936, 0.12903225421905518, 0.1428571343421936, 0.052631575614213943, 0.05882352590560913, 0.045454539358615875, 0.06666666269302368, 0.04444444179534912, 0, 0.1111111044883728, 0.045454539358615875, 0.05714285373687744, 0.1666666567325592, 0.03999999538064003, 0.05714285373687744, 0.052631575614213943, 0.07692307233810425, 0, 0.10526315122842789, 0, 0, 0.0952380895614624, 0, 0.1599999964237213, 0.05882352590560913, 0.12903225421905518, 0.06666666269302368, 0, 0, 0.04999999701976776, 0 ]
SJgBra4YDS
true
[ "We propose a new auto-encoder incorporated with multiway delay-embedding transform toward interpreting deep image prior." ]
[ "Federated learning is a recent advance in privacy protection. \n", "In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients.", "The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. \n", "However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization.", "In such an attack, a client's contribution during training and information about their data set is revealed through analyzing the distributed model. \n", "We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization.", "The aim is to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance. \n", "Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.", "Lately, the topic of security in machine learning is enjoying increased interest.", "This can be largely attributed to the success of big data in conjunction with deep learning and the urge for creating and processing ever larger data sets for data mining.", "However, with the emergence of more and more machine learning services becoming part of our daily lives, making use of our data, special measures must be taken to protect privacy.", "Unfortunately, anonymization alone often is not sufficient BID12 ; BID1 and standard machine learning approaches largely disregard privacy aspects and are susceptible to a variety of adversarial attacks BID11 .", "In this regard, machine learning can be analyzed to recover private information about the participating user or employed data as well ; BID16 ; BID3 ; BID6 .", "BID2 propose a measure for to assess the memorization of privacy related data.", "All the aspects of privacy-preserving machine learning are aggravated when further restrictions apply such as a limited number of participating clients or restricted communication bandwidth such as mobile devices Google (2017) .In", "order to alleviate the need of explicitly sharing data for training machine learning models, decentralized approaches have been proposed, sometimes referred to as collaborative BID15 or federated learning BID9 . In", "federated learning BID9 a model is learned by multiple clients in decentralized fashion. Learning", "is shifted to the clients and only learned parameters are centralized by a trusted curator. This curator", "then distributes an aggregated model back to the clients. However, this", "alone is not sufficent to preserve privacy. In BID14 it is", "shown that clients be identified in a federated learning setting by the model updates alone, necessitating further steps.Clients not revealing their data is an advance in privacy protection. However, when", "a model is learned in conventional way, its parameters reveal information about the data that was used during training. In order to solve", "this issue, the concept of differential privacy (dp) BID4 for learning algorithms was proposed by BID0 . The aim is to ensure", "a learned model does not reveal whether a certain data point was used during training.We propose an algorithm that incorporates a dp-preserving mechanism into federated learning. However, opposed to", "BID0 we do not aim at protecting w.r.t. 
a single data point only. Rather, we want to", "ensure that a learned model does not reveal whether a client participated during decentralized training. This implies a client", "'s whole data set is protected against differential attacks from other clients.Our main contributions: First, we show that a client's participation can be hidden while model performance is kept high in federated learning. We demonstrate that", "our proposed algorithm can achieve client level differential privacy at a minor loss in model performance. An independent study", "BID10 , published at the same time, proposed a similar procedure for client level-dp. Experimental setups", "however differ and BID10 also includes elementlevel privacy measures. Second, we propose", "to dynamically adapt the dp-preserving mechanism during decentralized training. Empirical studies", "suggest that model performance is increased that way. This stands in contrast", "to latest advances in centralized training with differential privacy, were such adaptation was not beneficial. We can link this discrepancy", "to the fact that, compared to centralized learning, gradients in federated learning exhibit different sensibilities to noise and batch size throughout the course of training.", "As intuitively expected, the number of participating clients has a major impact on the achieved model performance.", "For 100 and 1000 clients, model accuracy does not converge and stays significantly below the non-differentially private performance.", "However, 78% and 92% accuracy for K ∈ {100, 1000} are still substantially better than anything clients would be able to achieve when only training on their own data.", "In domains where K lays in this order of magnitude and differential privacy is of utmost importance, such models would still substantially benefit any client participating.", "An example for such a domain are hospitals.", "Several hundred could jointly learn a model, while information about a specific hospital stays hidden.", "In addition, the jointly learned model could be used as an initialization for further client-side training.For K = 10000, the differentially private model almost reaches accuracies of the non-differential private one.", "This suggests that for scenarios where many parties are involved, differential privacy comes at almost no cost in model performance.", "These scenarios include mobile phones and other consumer devices.In the cross-validation grid search we also found that raising m t over the course of training improves model performance.", "When looking at a single early communication round, lowering both m t and σ t in a fashion such that σ 2 t /m t stays constant, has almost no impact on the accuracy gain during that round.", "however, privacy loss is reduced when both parameters are lowered.", "This means more communication rounds can be performed later on in training, before the privacy budget is drained.", "In subsequent communication rounds, a large m t is unavoidable to gain accuracy, and a higher privacy cost has to be embraced in order to improve the model.", "This observation can be linked to recent advances of information theory in learning algorithms.", "As observable in FIG3 , BID17 suggest, we can distinguish two different phases of training: label fitting and data fitting phase.During label fitting phase, updates by clients are similar and thus V c is low, as FIG3 shows.", "U c , however, is high during this initial phase, as big updates to the randomly initialized weights are performed.", "During data 
fitting phase V c rises.",
 "The individual updates w k look less alike, as each client optimizes on its own data set.",
 "U c , however, drastically shrinks as a local optimum of the global model is approached, accuracy converges, and the contributions cancel each other out to a certain extent.",
 "FIG3 shows these dependencies of V c and U c . We",
 "can conclude: i)",
 "At early communication rounds, small subsets of clients might still contribute an average update w t representative of the true data distribution; ii",
 ") At later stages, a balanced (and therefore bigger) fraction of clients is needed to reach a certain representativeness for an update; iii",
 ") High U c makes early updates less vulnerable to noise.",
 "In these first empirical studies, we were able to show that differential privacy on a client level is feasible and that high model accuracies can be reached when sufficiently many parties are involved.",
 "Furthermore, we showed that careful investigation of the data and update distribution can lead to optimized privacy budgeting.",
 "For future work, we plan to derive optimal bounds for the signal-to-noise ratio as a function of communication round, data representativeness, and between-client variance, as well as to further investigate the connection to information theory.",
 "Additionally, we plan to further investigate the dataset dependency of the bounds.",
 "To assess applicability in bandwidth-limited settings, we plan to investigate the proposed approach in the context of compressed gradients such as those proposed by BID8 ." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17391303181648254, 0.2142857164144516, 0.05714285373687744, 0.06451612710952759, 0.1111111044883728, 0.06896550953388214, 0, 0.14999999105930328, 0.07999999821186066, 0.052631575614213943, 0, 0.09756097197532654, 0, 0.07692307233810425, 0.0476190447807312, 0.04878048226237297, 0.37037035822868347, 0.13793103396892548, 0, 0.08695651590824127, 0.23255813121795654, 0.277777761220932, 0, 0.2926829159259796, 0.19354838132858276, 0.3448275923728943, 0.25, 0.1249999925494194, 0.06896550953388214, 0, 0, 0.1666666567325592, 0.12121211737394333, 0.11428570747375488, 0.06896550953388214, 0.06666666269302368, 0, 0.10526315122842789, 0.0952380895614624, 0.07407406717538834, 0.04878048226237297, 0.12121211737394333, 0.04878048226237297, 0.17777776718139648, 0, 0.06451612710952759, 0.10526315122842789, 0.07407406717538834, 0.04255318641662598, 0, 0, 0, 0.05128204822540283, 0, 0, 0, 0.05714285373687744, 0, 0.09090908616781235, 0.06451612710952759, 0.045454543083906174, 0, 0.05714285373687744 ]
SkVRTj0cYQ
true
[ "Ensuring that models learned in federated fashion do not reveal a client's participation." ]
[ "Employing deep neural networks as natural image priors to solve inverse problems either requires large amounts of data to sufficiently train expressive generative models or can succeed with no data via untrained neural networks.", "However, very few works have considered how to interpolate between these no- to high-data regimes.", "In particular, how can one use the availability of a small amount of data (even 5-25 examples) to one's advantage in solving these inverse problems and can a system's performance increase as the amount of data increases as well?", "In this work, we consider solving linear inverse problems when given a small number of examples of images that are drawn from the same distribution as the image of interest.", "Comparing to untrained neural networks that use no data, we show how one can pre-train a neural network with a few given examples to improve reconstruction results in compressed sensing and semantic image recovery problems such as colorization.", "Our approach leads to improved reconstruction as the amount of available data increases and is on par with fully trained generative models, while requiring less than 1% of the data needed to train a generative model.", "We study the problem of recovering an image x x x 0 ∈ R n from m linear measurements of the form y y y 0 = A A Ax x x 0 + η η η ∈ R m where A A A ∈ R m×n is a known measurement operator and η η η ∈ R m denotes the noise in our system.", "Problems of this form are ubiquitous in various domains ranging from image processing, machine learning, and computer vision.", "Typically, the problem's difficulty is a result of its ill-posedness due to the underdetermined nature of the system.", "To resolve this ambiguity, many approaches enforce that the image must obey a natural image model.", "While traditional approaches typically use hand-crafted priors such as sparsity in the wavelet basis [5] , recent approaches inspired by deep learning to create such natural image model surrogates have shown to outperform these methods.", "Deep Generative Priors: Advancements in generative modelling have allowed for deep neural networks to create highly realistic samples from a number of complex natural image classes.", "Popular generative models to use as natural image priors are latent variable models such as Generative Adversarial Networks (GANs) [6] and Variational Autoencoders (VAEs) [18] .", "This is in large part due to the fact that they provide a low-dimensional parameterization of the natural image manifold that can be directly exploited in inverse imaging tasks.", "When enforced as a natural image prior, these models have shown to outperform traditional methods and provide theoretical guarantees in problems such as compressed sensing [4, 24, 11, 14, 20, 15] , phase retrieval [10, 21, 16] , and blind deconvolution/demodulation [2, 9] .", "However, there are two main drawbacks of using deep generative models as natural image priors.", "The first is that they require a large amount of data to train, e.g., hundreds of thousands of images to generate novel celebrity faces.", "Additionally, they suffer from a non-trivial representation error due to the fact that they model the natural image manifold through a low-dimensional parameterization.", "Untrained Neural Network Priors: On the opposite end of the data spectrum, recent works have shown that randomly initialized neural networks can act as natural image priors without any learning.", "[22] first showed this to be the case by solving tasks such as denoising, 
inpainting, and super-resolution via optimizing over the parameters of a convolutional neural network to fit a single image.",
 "The results showed that the neural network exhibited a bias towards natural images but, due to the high overparameterization of the network, required early stopping to succeed.",
 "A simpler model, which was in fact underparameterized, was later introduced in [13] and was able both to compress images and to solve various linear inverse problems.",
 "Both methods require no training data and do not suffer from the same representation error as generative models do.",
 "Similar to generative models, they have been shown to be successful image priors in a variety of inverse problems [13, 12, 23, 17] .",
 "Based on these two approaches, we would like to investigate how one can interpolate between these data regimes in a way that improves upon work with untrained neural network priors and ultimately reaches or exceeds the success of generative priors.",
 "More specifically, we would like to develop an algorithm that",
 "1) performs just as well as untrained neural networks with no data and",
 "2) improves performance as the amount of provided data increases." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2142857164144516, 0.09999999403953552, 0.24561403691768646, 0.11320754140615463, 0.688524603843689, 0.14035087823867798, 0.1269841194152832, 0.09090908616781235, 0, 0, 0.06896550953388214, 0.07692307233810425, 0.08163265138864517, 0.07692307233810425, 0.1818181723356247, 0.04878048226237297, 0, 0, 0.1090909019112587, 0.145454540848732, 0.1599999964237213, 0.11999999731779099, 0.09090908616781235, 0.08510638028383255, 0.28125, 0.1111111044883728, 0.2631579041481018, 0.0555555522441864 ]
ryxOh7n9Ir
true
[ "We show how pre-training an untrained neural network with as few as 5-25 examples can improve reconstruction results in compressed sensing and semantic recovery problems like colorization." ]
[ "We propose Cooperative Training (CoT) for training generative models that measure a tractable density for discrete data.", "CoT coordinately trains a generator G and an auxiliary predictive mediator M. The training target of M is to estimate a mixture density of the learned distribution G and the target distribution P, and that of G is to minimize the Jensen-Shannon divergence estimated through M. CoT achieves independent success without the necessity of pre-training via Maximum Likelihood Estimation or involving high-variance algorithms like REINFORCE.", "This low-variance algorithm is theoretically proved to be superior for both sample generation and likelihood prediction.", "We also theoretically and empirically show the superiority of CoT over most previous algorithms in terms of generative quality and diversity, predictive generalization ability and computational cost.", "Generative modeling is essential in many scenarios, including continuous data modeling (e.g. image generation BID6 , stylization BID17 , semisupervised classification BID13 ) and sequential discrete data modeling (e.g. neural text generation BID2 ).For", "discrete data with tractable density like natural language, generative models are predominantly optimized through Maximum Likelihood Estimation (MLE), inevitably introducing exposure bias BID14 , which results in that given a finite set of observations, the optimal parameters of the model trained via MLE do not correspond to the ones maximizing the generative quality. Specifically", ", the model is trained on the data distribution of inputs and tested on a different distribution of inputs, namely, the learned distribution. This discrepancy", "implies that in the training stage, the model is never exposed to its own errors and thus in the test stage, the errors made along the way will quickly accumulate.On the other hand, for general generative modeling tasks, an effective framework, named Generative Adversarial Network (GAN) BID6 , was proposed to train an implicit density model for continuous data. GAN introduces a", "discriminator D φ parametrized by φ to distinguish the generated samples from the real ones. As is proved in", "BID6 , GAN essentially optimizes an approximately estimated Jensen-Shannon divergence (JSD) between the currently learned distribution and the target distribution. GAN shows promising", "results in many unsupervised and semi-supervised learning tasks. The success of GAN", "results in the naissance of a new paradigm of deep generative models, i.e. adversarial networks.However, since the gradient computation requires backpropagation through the generator's output, GAN can only model the distribution of continuous variables, making it non-applicable for generating discrete sequences like natural language. Researchers then proposed", "Sequence Generative Adversarial Network (SeqGAN) , which uses model-free policy gradient algorithm to optimize the original GAN objective. With SeqGAN, the expected", "JSD between current and target discrete data distribution is minimized if the training is perfect. SeqGAN shows observable improvements", "in many tasks. Since then, many variants of SeqGAN", "have been proposed to improve its performance. Nonetheless, SeqGAN is not an ideal", "algorithm for this problem, and current algorithms based on it cannot show stable, reliable and observable improvements that covers all scenarios, according to a previous survey . 
The detailed reason will be discussed", "in detail in Section 2.In this paper, we propose Cooperative Training (CoT), a novel, low-variance, bias-free algorithm for training likelihood-based generative models on discrete data by directly optimizing a wellestimated Jensen-Shannon divergence. CoT coordinately trains a generative", "module G, and an auxiliary predictive module M , called mediator, for guiding G in a cooperative fashion. For theoretical soundness, we derive", "the proposed algorithm directly from the definition of JSD. We further empirically and theoretically", "demonstrate the superiority of our algorithm over many strong baselines in terms of generative performance, generalization ability and computational performance in both synthetic and real-world scenarios.", "Computational Efficiency Although in terms of time cost per epoch, CoT does not achieve the state-of-the-art, we do observe that CoT is remarkably faster than previous RL-GAN approaches.", "Besides, consider the fact that CoT is a sample-based optimization algorithm, which involves time BID3 8.89 8.71/-(MLE) (The same as MLE) 32.54 ± 1.14s Professor Forcing BID10 9 To show the hyperparameter robustness of CoT, we compared it with the similar results as were evaluated in SeqGAN .", "DISPLAYFORM0 cost in sampling from the generator, this result is acceptable.", "The result also verifies our claim that CoT has the same order (i.e. the time cost only differs in a constant multiplier or extra lower order term) of computational complexity as MLE.Hyper-parameter Robustness.", "We perform a hyper-parameter robustness experiment on synthetic data experiment.", "When compared with the results of similar experiments as in SeqGAN , our approach shows less sensitivity to hyper-parameter choices, as shown in FIG1 .", "Note that since in all our attempts, the evaluated JSD of SeqGAN fails to converge, we evaluated NLL oracle for it as a replacement.Self-estimated Training Progress Indicator.", "Like the critic loss, i.e. 
estimated Earth Mover Distance, in WGANs, we find that the training loss of the mediator (9), namely balanced NLL, can be a real-time training progress indicator as shown in FIG2 .", "Specifically, in a wide range, balanced NLL is a good estimation of real JSD(G P ) with a steady translation, namely, balanced N LL = JSD(G P ) + H(G) + H(P ).", "2.900 (σ = 0.025) 3.118 (σ = 0.018) 3.122 RankGAN BID11", "We proposed Cooperative Training, a novel training algorithm for generative modeling of discrete data.", "CoT optimizes Jensen-Shannon Divergence, which does not have the exposure bias problem as the forward KLD.", "Models trained via CoT shows promising results in sequential discrete data modeling tasks, including sample quality and the generalization ability in likelihood prediction tasks.B SAMPLE COMPARISON AND DISCUSSION TAB6 shows samples from some of the most powerful baseline models and our model.", "The Optimal Balance for Cooperative Training We find that the same learning rate and iteration numbers for the generator and mediator seems to be the most competitive choice.", "As for the architecture choice, we find that the mediator needs to be slightly stronger than the generator.", "For the best result in the synthetic experiment, we adopt exactly the same generator as other compared models and a mediator whose hidden state size is twice larger (with 64 hidden units) than the generator.Theoretically speaking, we can and we should sample more batches from G θ and P respectively for training the mediator in each iteration.", "However, if no regularizations are used when training the mediator, it can easily over-fit, leading the generator's quick convergence in terms of KL(G θ P ) or NLL oracle , but divergence in terms of JSD(G θ P ).", "Empirically, this could be alleviated by applying dropout techniques BID15 with 50% keeping ratio before the output layer of RNN.", "After applying dropout, the empirical results show good consistency with our theory that, more training batches for the mediator in each iteration is always helpful.However, applying regularizations is not an ultimate solution and we look forward to further theoretical investigation on better solutions for this problem in the future.", "(5) \" I think it was alone because I can do that, when you're a lot of reasons, \" he said.(6", ") It's the only thing we do, we spent 26 and $35(see how you do is we lose it,\" said both sides in the summer. CoT(1) We focus the plans to put aside either now, and which doesn't mean it is to earn the impact to the government rejected.(2) The argument would be very doing work on the 2014 campaign to pursue the firm and immigration officials, the new review that's taken up for parking.(3) This method is true to available we make up drink with that all they were willing to pay down smoking.(4) The number of people who are on the streaming boat would study if the children had a bottle -but meant to be much easier, having serious ties to the outside of the nation.(5) However, they have to wait to get the plant in federal fees and the housing market's most valuable in tourism. MLE (1) after the possible cost of military regulatory scientists, chancellor angela merkel's business share together a conflict of major operators and interest as they said it is unknown for those probably 100 percent as a missile for britain.(2) but which have yet to involve the right climb that took in melbourne somewhere else with the rams even a second running mate and kansas. 
(3) \" la la la la 30 who appeared that themselves is in the room when they were shot her until the end \" that jose mourinho could risen from the individual . (4) when aaron you has died, it is thought if you took your room at the prison fines of radical controls by everybody, if it's a digital plan at an future of the next time.Possible Derivatives of CoT The form of equation 13 can be modified to optimize other objectives. One example is the backward KLD (a.k.a. Reverse KLD) i.e. KL(G P ). In this case, the objective of the so-called \"Mediator\" and \"Generator\" thus becomes:\"Mediator\", now it becomes a direct estimatorP φ of the target distribution P : DISPLAYFORM0 Generator: DISPLAYFORM1 Such a model suffers from so-called mode-collapse problem, as is analyzed in Ian's GAN Tutorial BID5 . Besides", ", as the distribution estimatorP φ inevitably introduces unpredictable behaviors when given unseen samples i.e. samples from the generator, the algorithm sometimes fails (numerical error) or diverges.In our successful attempts, the algorithm produces similar (not significantly better than) results as CoT. The quantitive", "results are shown as follows: Although under evaluation of weak metrics like BLEU, if successfully trained, the model trained via Reverse KL seems to be better than that trained via CoT, the disadvantage of Reverse KL under evaluation of more strict metric like eWMD indicates that Reverse KL does fail in learning some aspects of the data patterns e.g. completely covering the data mode." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.5333333015441895, 0.09836065024137497, 0.13333332538604736, 0.15789473056793213, 0.1395348757505417, 0.158730149269104, 0.1818181723356247, 0.2153846174478531, 0, 0, 0.07692307233810425, 0.20689654350280762, 0.05714285373687744, 0.1875, 0.09090908616781235, 0.07407406717538834, 0.13333332538604736, 0.3333333432674408, 0.1111111044883728, 0.29629629850387573, 0.1621621549129486, 0.04878048226237297, 0.06557376682758331, 0, 0.08510638028383255, 0.260869562625885, 0.0555555522441864, 0.1463414579629898, 0.1304347813129425, 0.10256409645080566, 0, 1, 0, 0.15094339847564697, 0.15789473056793213, 0.06666666269302368, 0.09999999403953552, 0.08695651590824127, 0.05882352590560913, 0.06896551698446274, 0.11764705181121826, 0.03100775182247162, 0.037735845893621445, 0.06451612710952759 ]
SkxxIs0qY7
true
[ "We proposed Cooperative Training, a novel training algorithm for generative modeling of discrete data." ]
[ "Intrinsic rewards in reinforcement learning provide a powerful algorithmic capability for agents to learn how to interact with their environment in a task-generic way.", "However, increased incentives for motivation can come at the cost of increased fragility to stochasticity.", "We introduce a method for computing an intrinsic reward for curiosity using metrics derived from sampling a latent variable model used to estimate dynamics.", "Ultimately, an estimate of the conditional probability of observed states is used as our intrinsic reward for curiosity.", "In our experiments, a video game agent uses our model to autonomously learn how to play Atari games using our curiosity reward in combination with extrinsic rewards from the game to achieve improved performance on games with sparse extrinsic rewards.", "When stochasticity is introduced in the environment, our method still demonstrates improved performance over the baseline.", "Methods encouraging agents to explore their environment by rewarding actions that yield unexpected results are commonly referred to as curiosity (Schmidhuber (1991; 1990a; b) ).", "Using curiosity as an exploration policy in reinforcement learning has many benefits.", "In scenarios in which extrinsic rewards are sparse, combining extrinsic and intrinsic curiosity rewards gives a framework for agents to discover how to gain extrinsic rewards .", "In addition, when agents explore, they can build more robust policies for their environment even if extrinsic rewards are readily available (Forestier & Oudeyer, 2015) .", "These policies learned through exploration can give an agent a more general understanding of the results of their actions so that the agent will have a greater ability to adapt using their existing policy if their environment changes.", "Despite these benefits, novelty-driven exploration methods can be distracted by randomness.", "(Schmidhuber, 1990b; Storck et al., 1995)", "When stochastic elements are introduced in the environment, agents may try to overfit to noise instead of learning a deterministic model of the effect of their own actions on their world.", "In particular, Burda et al. (2018a) showed that when a TV with white noise is added to an environment in which an agent is using the intrinsic curiosity module (ICM) developed by Pathak et al. (2017) , the agent stops exploring the environment and just moves back and forth in front of the TV.", "In this paper, we present a new method for agent curiosity which provides robust performance in sparse reward environments and under stochasticity.", "We use a conditional variational autoencoder (Sohn et al., 2015) to develop a model of our environment.", "We choose to develop a conditional variational autoencoder (CVAE) due to the success of this architecture in modeling dynamics shown in the video prediction literature (Denton & Fergus, 2018; Xue et al., 2018) .", "We incorporate additional modeling techniques to regularize for stochastic dynamics in our perception model.", "We compute our intrinsic reward for curiosity by sampling from the latent space of the CVAE and computing an associated conditional probability which is a more robust metric than the commonly used pixel-level reconstruction error.", "The primary contributions of our work are the following.", "1. 
Perception-driven approach to curiosity.", "We develop a perception model which integrates model characteristics proven to work well for deep reinforcement learning with recent architectures for estimating dynamics from pixels.", "This combination retains robust-ness guarantees from existing deep reinforcement learning models while improving the ability to capture complex visual dynamics.", "2. Bayesian metric for surprise.", "We use the entropy of the current state given the last state as a measurement for computing surprise.", "This Bayesian approach will down-weight stochastic elements of the environment when learning a model of dynamics.", "As a result, this formulation is robust to noise.", "For our experiments, autonomous agents use our model to learn how to play Atari games.", "We measure the effectiveness of our surprise metric as a meaningful intrinsic reward by tracking the total achieved extrinsic reward by agents using a combination of our intrinsic reward with extrinsic rewards to learn.", "We show that the policy learned by a reinforcement learning algorithm using our surprise metric outperforms the policies learned by alternate reward schemes.", "Furthermore, we introduce stochasticity into the realization of actions in the environment, and we show that our method still demonstrates successful performance beyond that of the baseline method.", "In summary, we presented a novel method to compute curiosity through the use of a meaningfully constructed model for perception.", "We used a conditional variational autoencoder (CVAE) to learn scene dynamics from image and action sequences and computed an intrinsic reward for curiosity via a conditional probability derived from importance sampling from the latent space of our CVAE.", "In our experiments, we demonstrated that our approach allows agents to learn to accomplish tasks more effectively in environments with sparse extrinsic rewards without compromising robustness to stochasticity.", "We show robustness to stochasticity in our action space which we support through the actionprediction network used in our perception model.", "However, robustness to stochasticity in scenes is a separate challenge which the method we use as our baseline, ICM, cannot handle well.", "(Burda et al., 2018a)", "Stochasticity in scenes occurs when there are significant changes between sequential image frames which are random with respect to agent actions.", "We hypothesize that this stochasticity requires a different approach to handle.", "A consideration in comparing models for curiosity and exploration in deep reinforcement learning is that typically both the dynamics model and intrinsic reward metric are constructed and compared as unit as we did in this paper.", "However, a conditional probability estimation could be derived the dynamics model given by ICM just as reconstruction error could be used as intrinsic reward from our CVAE.", "Alternately, other metrics measuring novelty and learning such as the KL divergence between sequential latent distributions in our model have been proposed in a general manner by Schmidhuber (2010) .", "An interesting direction for future work would be to explore the impact of intrinsic reward metrics for curiosity on robustness to stochasticity in scenes independent across different choices of dynamics model." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1395348757505417, 0.1111111044883728, 1, 0.3589743673801422, 0.2641509473323822, 0.05405404791235924, 0.08695651590824127, 0.11764705181121826, 0.23255813121795654, 0.04255318641662598, 0.14814814925193787, 0, 0, 0.1249999925494194, 0.1875, 0.22727271914482117, 0.20512819290161133, 0.15094339847564697, 0.277777761220932, 0.4363636374473572, 0, 0.14814814925193787, 0.31111109256744385, 0.1428571343421936, 0.07407407462596893, 0.21621620655059814, 0.1621621549129486, 0.12903225421905518, 0.11428570747375488, 0.25531914830207825, 0.1904761791229248, 0.09090908616781235, 0.2926829159259796, 0.5090909004211426, 0.04255318641662598, 0.19512194395065308, 0.13636362552642822, 0, 0.0476190410554409, 0.1818181723356247, 0.22641508281230927, 0.3478260934352875, 0.1599999964237213, 0.3199999928474426 ]
rJlBQkrFvr
true
[ "We introduce a method for computing an intrinsic reward for curiosity using metrics derived from sampling a latent variable model used to estimate dynamics." ]
[ "Word embedding is a powerful tool in natural language processing.", "In this paper we consider the problem of word embedding composition \\--- given vector representations of two words, compute a vector for the entire phrase.", "We give a generative model that can capture specific syntactic relations between words.", "Under our model, we prove that the correlations between three words (measured by their PMI) form a tensor that has an approximate low rank Tucker decomposition.", "The result of the Tucker decomposition gives the word embeddings as well as a core tensor, which can be used to produce better compositions of the word embeddings.", "We also complement our theoretical results with experiments that verify our assumptions, and demonstrate the effectiveness of the new composition method.", "Word embeddings have become one of the most popular techniques in natural language processing.", "A word embedding maps each word in the vocabulary to a low dimensional vector.", "Several algorithms (e.g., Mikolov et al. (2013) ; Pennington et al. (2014) ) can produce word embedding vectors whose distances or inner-products capture semantic relationships between words.", "The vector representations are useful for solving many NLP tasks, such as analogy tasks (Mikolov et al., 2013) or serving as features for supervised learning problems (Maas et al., 2011) .While", "word embeddings are good at capturing the semantic information of a single word, a key challenge is the problem of composition: how to combine the embeddings of two co-occurring, syntactically related words to an embedding of the entire phrase. In practice", "composition is often done by simply adding the embeddings of the two words, but this may not be appropriate when the combined meaning of the two words differ significantly from the meaning of individual words (e.g., \"complex number\" should not just be \"complex\"+\"number\").In this paper", ", we try to learn a model for word embeddings that incorporates syntactic information and naturally leads to better compositions for syntactically related word pairs. Our model is", "motivated by the principled approach for understanding word embeddings initiated by Arora et al. (2015) , and models for composition similar to Coecke et al. (2010) . Arora et al.", "(2015) gave a generative model (RAND-WALK) for word embeddings, and showed several previous algorithms can be interpreted as finding the hidden parameters of this model. However, the", "RAND-WALK model does not treat syntactically related word-pairs differently from other word pairs. We give a generative", "model called syntactic RAND-WALK (see Section 3) that is capable of capturing specific syntactic relations (e.g., adjective-noun or verb-object pairs). Taking adjective-noun", "pairs as an example, previous works (Socher et al., 2012; Baroni & Zamparelli, 2010; Maillard & Clark, 2015) have tried to model the adjective as a linear operator (a matrix) that can act on the embedding of the noun. However, this would require", "learning a d × d matrix for each adjective while the normal embedding only has dimension d. In our model, we use a core", "tensor T ∈ R d×d×d to capture the relations between a pair of words and its context. In particular, using the tensor", "T and the word embedding for the adjective, it is possible to define a matrix for the adjective that can be used as an operator on the embedding of the noun. 
Therefore, our model allows the",
 "same interpretations as many previous models while having far fewer parameters to train. One salient feature of our model is that it makes good use of high-order statistics. Standard word embeddings are based",
 "on the observation that the semantic information of a word can be captured by words that appear close to it. Hence most algorithms use pairwise",
 "co-occurrence between words to learn the embeddings. However, for the composition problem",
 ", the phrase of interest already has two words, so it would be natural to consider co-occurrences between at least three words (the two words in the phrase and their neighbors). Based on the model, we can prove an",
 "elegant relationship between high-order co-occurrences of words and the model parameters. In particular, we show that if we measure",
 "the Pointwise Mutual Information (PMI) between three words, and form an n × n × n tensor that is indexed by three words a, b, w, then the tensor has a Tucker decomposition that exactly matches our core tensor T and the word embeddings (see Section 2, Theorem 1, and Corollary 1). This suggests a natural way of learning our",
 "model using a tensor decomposition algorithm. Our model also allows us to approach the composition problem with more theoretical insights. Based on our model, if words a, b have the",
 "particular syntactic relationship we are modeling, their composition will be the vector v_a + v_b + T(v_a, v_b, ·). Here v_a, v_b are the embeddings for words",
 "a and b, and the tensor gives an additional correction term. By choosing different core tensors it is possible",
 "to recover many previous composition methods. We discuss this further in Section 3. Finally, we",
 "train our new model on a large corpus and give experimental evaluations. In the experiments, we show that the learned model",
 "satisfies the new assumptions that we need. We also give both qualitative and quantitative results",
 "for the new embeddings. Our embeddings and the novel composition method can capture",
 "the specific meaning of adjective-noun phrases in a way that is impossible by simply \"adding\" the meanings of the individual words. Quantitative experiments also show that our composition vectors",
 "are better correlated with human judgments on a phrase similarity task." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0714285671710968, 0.14999999105930328, 0.3870967626571655, 0.09302324801683426, 0.14999999105930328, 0.1621621549129486, 0.0624999962747097, 0.12903225421905518, 0.04444443807005882, 0.043478257954120636, 0.11999999731779099, 0.03703703358769417, 0.380952388048172, 0.19999998807907104, 0.2790697515010834, 0.2857142686843872, 0.14999999105930328, 0.10344827175140381, 0.10256409645080566, 0.10526315122842789, 0.25531914830207825, 0.15686273574829102, 0.1463414579629898, 0.13793103396892548, 0.038461532443761826, 0.1621621549129486, 0.1538461446762085, 0.08695651590824127, 0.2380952388048172, 0.1111111044883728, 0.060606054961681366, 0.21621620655059814, 0.1818181723356247, 0.20689654350280762, 0.0952380895614624, 0.0714285671710968 ]
H1eqjiCctX
true
[ "We present a generative model for compositional word embeddings that captures syntactic relations, and provide empirical verification and evaluation." ]
[ "Building deep reinforcement learning agents that can generalize and adapt to unseen environments remains a fundamental challenge for AI.", "This paper describes progresses on this challenge in the context of man-made environments, which are visually diverse but contain intrinsic semantic regularities.", "We propose a hybrid model-based and model-free approach, LEArning and Planning with Semantics (LEAPS), consisting of a multi-target sub-policy that acts on visual inputs, and a Bayesian model over semantic structures.", "When placed in an unseen environment, the agent plans with the semantic model to make high-level decisions, proposes the next sub-target for the sub-policy to execute, and updates the semantic model based on new observations.", "We perform experiments in visual navigation tasks using House3D, a 3D environment that contains diverse human-designed indoor scenes with real-world objects.", "LEAPS outperforms strong baselines that do not explicitly plan using the semantic content.", "Deep reinforcement learning (DRL) has undoubtedly witnessed strong achievements in recent years BID7 Mnih et al., 2015; BID9 .", "However, training an agent to solve tasks in a new unseen scenario, usually referred to as its generalization ability, remains a challenging problem (Geffner, 2018; Lake et al., 2017) .", "In model-free RL, the agent is trained to reactively make decisions from the observations, e.g., first-person view, via a black-box policy approximator.", "However the generalization ability of agents trained by model-free RL is limited, and is even more evident on tasks that require extensive planning BID9 Kansky et al., 2017) .", "On the other hand, model-based RL learns a dynamics model, predicting the next observation when taking an action.", "With the model, sequential decisions can be made via planning.", "However, learning a model for complex tasks and with high dimensional observations, such as images, is challenging.", "Current approaches for learning action-conditional models from video are only accurate for very short horizons BID3 Oh et al., 2015) .", "Moreover, it is not clear how to efficiently adapt such models to changes in the domain.In this work, we aim to improve the generalization of RL agents in domains that involve highdimensional observations.", "Our insight is that in many realistic settings, building a pixel-accurate model of the dynamics is not necessary for planning high-level decisions.", "There are semantic structures and properties that are shared in real-world man-made environments.", "For example, rooms in indoor scenes are often arranged by their mutual functionality (e.g. , bathroom next to bedroom, dining room next to kitchen).", "Similarly, objects in rooms are placed at locations of practical significance (e.g. 
, nightstand next to bed, chair next to table).", "Humans often make use of such structural priors when exploring a new scene, or when making a high-level plan of actions in the domain.", "However, pixel-level details are still necessary for carrying out the high-level plan.", "For example, we need high-fidelity observations to locate and interact with objects, open doors, etc.Based on this observation, we propose a hybrid framework, LEArning and Planning with Semantics (LEAPS), which consists of a model-based component that works on the semantic level to pursue a high-level target, and a model-free component that executes the target by acting on pixel-level inputs.", "Concretely, we (1) train model-free multi-target subpolicies in the form of neural networks that take the first-person views as input and sequentially execute sub-targets towards the final goal; (2) build a semantic model in the form of a latent variable model that only takes semantic signals, i.e., low-dimensional binary vectors, as input and is dynamically updated to plan the next sub-target.", "LEAPS has following advantages: (1) via model-based planning, generalization ability is improved; (2) by learning the prior distribution of the latent variable model, we capture the semantic consistency among the environments; (3) the semantic model can be efficiently updated by posterior inference when the agent is exploring the unseen environment, which is effective even with very few exploration experiences thanks to the Bayes rule; and (4) the semantic model is lightweight and fully interpretable.Our approach requires observations that are composed of both pixel-level data and a list of semantic properties of the scene.", "In general, automatically extracting high-level semantic structure from data is difficult.", "As a first step, in this work we focus on domains where obtaining semantics is easy.", "In particular, we consider environments which resemble the real-world and have strong object detectors available (He et al., 2017 ).", "An example of such environments is House3D which contains 45k human-designed 3D scenes BID12 .", "House3D provides a diverse set of scene layouts, object types, sizes and connectivity, which all conform to a consistent \"natural\" semantics.", "Within these complex scenes, we tackle navigation tasks within novel indoor scenes.", "Note that this problem is extremely challenging as the agent needs to reach far-away targets which can only be completed effectively if it can successfully reason about the overall structure of the new scenario.", "Lastly, we emphasize that although we consider navigation as a concrete example in this work, our approach is general and can be applied to other tasks for which semantic structures and signals are availableOur extensive experiments show that our LEAPS framework outperforms strong model-free RL approaches, even when the semantic signals are given as input to the policy.", "Furthermore, the relative improvements of LEAPS over baselines become more significant when the targets are further away from the agent's birthplace, indicating the effectiveness of planning on the learned semantic model.", "In this work, we proposed LEAPS to improve generalization of RL agents in unseen environments with diverse room layouts and object arrangements, while the underlying semantic information is opt plan-steps 1 2 3 4 5 overall Horizon H = 300 random 20.5 / 15.9 6.9 / 16.7 3.8 / 10.7 1.6 / 4.2 3.0 / 8.8 7.2 / 13.6 pure µ(θ) 49.4 / 47.6 11.8 / 27.6 2.0 / 4.8 2.6 / 10.8 4.2 / 
13.2 13.1 / 22.9 aug.µ S (θ) 47.8 / 45.3 11.4 / 23.1 3.0 / 7.8 3.4 / 8.1 4.4 / 11.2 13.0 / 20.5 RNN control.", "52.7 / 45.2 13.6 / 23.6 3.4 / 9.6 3.4 / 10.2 6.0 / 17.6 14.9 / 21.9 LEAPS 53.4 / 58.4 15.6 / 31.5 4.5 / 12.5 3.6 / 6.6 7.0 / 18.0 16.4 / 27.9 Horizon H = 500 random 21.9 / 16.9 9.3 / 18.3 5.2 / 12.1 3.6 / 6.1 4.2 / 9.9 9.1 / 15.1 pure µ(θ) 54.0 / 57.5 15.9 / 25.6 3.8 / 7.7 2.8 / 6.4 4.8 / 8.6 16.2 / 22.9 aug.µ S (θ) 54.1 / 51.8 15.5 / 26.5 4.6 / 8.", "Our LEAPS agents have the highest success rates for all the cases requiring planning computations, i.e., plan-steps larger than 1.", "For SPL metric, LEAPS agents have the highest overall SPL value over all baseline methods (rightmost column).", "More importantly, as the horizon increases, LEAPS agents outperforms best baselines more.", "LEAPS requires a relatively longer horizon for the best practical performances since the semantic model is updated every fixed N = 30 steps, which may potentially increase the episode length for short horizons.", "More discussions are in Sec. 6.4.shared with the environments in which the agent is trained on.", "We adopt a graphical model over semantic signals, which are low-dimensional binary vectors.", "During evaluation, starting from a prior obtained from the training set, the agent plans on model, explores the unknown environment, and keeps updating the semantic model after new information arrives.", "For exploration, sub-policies that focus on multiple targets are pre-trained to execute primitive actions from visual input.", "The semantic model in LEAPS is lightweight, interpretable and can be updated dynamically with little explorations.", "As illustrated in the House3D environment, LEAPS works well for environments with semantic consistencies -typical of realistic domains.", "On random environments, e.g., random mazes, LEAPS degenerates to exhaustive search.Our approach is general and can be applied to other tasks, such as robotics manipulations where semantic signals can be status of robot arms and object locations, or video games where we can plan on semantic signals such as the game status or current resources.", "In future work we will investigate models for more complex semantic structures." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1621621549129486, 0.14999999105930328, 0.31111109256744385, 0.1304347813129425, 0.20512819290161133, 0.12903225421905518, 0.05405404791235924, 0.17391303181648254, 0.1463414579629898, 0.08695651590824127, 0.11428570747375488, 0, 0.05714285373687744, 0, 0.1666666567325592, 0.10256409645080566, 0.2666666507720947, 0.09756097197532654, 0.10526315122842789, 0.10256409645080566, 0, 0.2153846174478531, 0.1492537260055542, 0.13333332538604736, 0.06896550953388214, 0.11764705181121826, 0.051282044500112534, 0.0624999962747097, 0.10526315122842789, 0, 0.04081632196903229, 0.1818181723356247, 0.045454539358615875, 0.16470587253570557, 0, 0, 0, 0, 0.0833333283662796, 0.11764705181121826, 0.19354838132858276, 0.13636362552642822, 0.05714285373687744, 0.11764705181121826, 0.1666666567325592, 0.0952380895614624, 0.06666666269302368 ]
SJgs1n05YQ
true
[ "We propose a hybrid model-based & model-free approach using semantic information to improve DRL generalization in man-made environments." ]
[ "In this paper, we focus on two challenges which offset the promise of sparse signal representation, sensing, and recovery.", "First, real-world signals can seldom be described as perfectly sparse vectors in a known basis, and traditionally used random measurement schemes are seldom optimal for sensing them.", "Second, existing signal recovery algorithms are usually not fast enough to make them applicable to real-time problems.", "In this paper, we address these two challenges by presenting a novel framework based on deep learning.", "For the first challenge, we cast the problem of finding informative measurements by using a maximum likelihood (ML) formulation and show how we can build a data-driven dimensionality reduction protocol for sensing signals using convolutional architectures.", "For the second challenge, we discuss and analyze a novel parallelization scheme and show it significantly speeds-up the signal recovery process.", "We demonstrate the significant improvement our method obtains over competing methods through a series of experiments.", "High-dimensional inverse problems and low-dimensional embeddings play a key role in a wide range of applications in machine learning and signal processing.", "In inverse problems, the goal is to recover a signal X ∈ R N from a set of measurements Y = Φ(X) ∈ R M , where Φ is a linear or non-linear sensing operator.", "A special case of this problem is compressive sensing (CS) which is a technique for efficiently acquiring and reconstructing a sparse signal BID12 BID6 BID1 .", "In CS Φ ∈ R M ×N (M N ) is typically chosen to be a random matrix resulting in a random low-dimensional embedding of signals.", "In addition, X is assumed to be sparse in some basis Γ, i.e., X = ΓS, where S 0 = K N .While", "sparse signal representation and recovery have made significant real-world impact in various fields over the past decade (Siemens, 2017) , arguably their promise has not been fully realized. The reasons", "for this can be boiled down to two major challenges: First, real-world signals are only approximately sparse and hence, random/universal sensing matrices are sub-optimal measurement operators. Second, many", "existing recovery algorithms, while provably statistically optimal, are slow to converge. In this paper", ", we propose a new framework that simultaneously takes on both these challenges.To tackle the first challenge, we formulate the learning of the dimensionality reduction (i.e., signal sensing operator) as a likelihood maximization problem; this problem is related to the Infomax principle BID24 asymptotically. We then show", "that the simultaneous learning of dimensionality reduction and reconstruction function using this formulation gives a lower-bound of the objective functions that needs to be optimized in learning the dimensionality reduction. This is similar", "in spirit to what Vincent et al. show for denoising autoencoders in the non-asymptotic setting BID38 . Furthermore, we", "show that our framework can learn dimensionality reductions that preserve specific geometric properties. As an example,", "we demonstrate how we can construct a data-driven near-isometric low-dimensional embedding that outperforms competing embedding algorithms like NuMax BID18 . Towards tackling", "the second challenge, we introduce a parallelization (i.e., rearrangement) scheme that significantly speeds up the signal sensing and recovery process. 
We show that our", "framework can outperform state-of-the-art signal recovery methods such as DAMP BID26 and LDAMP BID25 both in terms of inference performance and computational efficiency.We now present a brief overview of prior work on embedding and signal recovery. Beyond random matrices", ", there are other frameworks developed for deterministic construction of linear (or nonlinear) near-isometric embeddings BID18 BID16 BID0 BID35 BID39 BID5 BID37 BID32 . However, these approaches", "are either computationally expensive, not generalizable to outof-sample data points, or perform poorly in terms of isometry. Our framework for low-dimensional", "embedding shows outstanding performance on all these aspects with real datasets. Algorithms for recovering signals", "from undersampled measurements can be categorized based on how they exploit prior knowledge of a signal distribution. They could use hand-designed priors", "BID7 BID13 BID9 BID29 , combine hand-designed algorithms with data-driven priors BID25 BID3 BID20 BID8 BID17 , or take a purely data-driven approach BID28 BID22 BID41 . As one moves from hand-designed approaches", "to data-driven approaches, models lose simplicity and generalizability while becoming more complex and more specifically tailored for a particular class of signals of interest.Our framework for sensing and recovering sparse signals can be considered as a variant of a convolutional autoencoder where the encoder is linear and the decoder is nonlinear and specifically designed for CS application. In addition, both encoder and decoder contain", "rearrangement layers which significantly speed up the signal sensing and recovery process, as we discuss later. Convolutional autoencoder has been previously", "used for image compression ; however, our work is mainly focused on the CS application rather than image compression. In CS, measurements are abstract and linear whereas", "in the image compression application measurements are a compressed version of the original image and are nonlinear. Authors in have used bicubic interpolation for upscaling", "images; however, our framework uses a data-driven approach for upscaling measurements. Finally, unlike the image compression application, when", "we deploy our framework for CS and during the test phase, we do not have high-resolution images beforehand. In addition to image compression, there have been previous", "works BID34 BID22 to jointly learn the signal sensing and reconstruction algorithm in CS using convolutional networks. However, the problem with these works is that they divide", "images into small blocks and recover each block separately. This blocky reconstruction approach is unrealistic in applications", "such as medical imaging (e.g. MRI) where the measurement operator is a Fourier matrix and hence we cannot have blocky reconstruction. Since both papers are designed for block-based recovery whereas our", "method senses/recovers images without subdivision, we have not compared against them. Note that our method could be easily modified to learn near-optimal", "frequency bands for medical imaging applications. In addition, BID34 and BID22 use an extra denoiser (e.g. BM3D, DCN)", "for denoising the final reconstruction while our framework does not use any extra denoiser and yet outperforms state-of-the-art results as we show later.Beside using convolutional autoencoders, authors in BID40 have introduced the sparse recovery autoencoder (SRA). 
In SRA, the encoder is a fully-connected layer while in this work,", "the encoder has a convolutional structure and is basically a circulant matrix. For large-scale problems, learning a fully-connected layer (as in", "the SRA encoder) is significantly more challenging than learning convolutional layers (as in our encoder). In SRA, the decoder is a T-step projected subgradient. However,", "in this work, the decoder is several convolutional layers", "plus a rearranging layer. It should also be noted that the optimization in SRA is solely over", "the measurement matrix and T (which is the number of layers in the decoder) scalar values. However, here, the optimization is performed over convolution weights", "and biases that we have across different layers of our network.", "In this paper we introduced DeepSSRR, a framework that can learn both near-optimal sensing schemes and fast signal recovery procedures.", "Our findings set the stage for several directions for future exploration including the incorporation of adversarial training and its comparison with other methods (BID2 BID14 BID10).", "Furthermore, a major question arising from our work is quantifying the generalizability of a DeepSSRR-learned model based on the richness of training data.", "We leave the exploration of this for future research." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3030303120613098, 0.09999999403953552, 0.19999998807907104, 0.12903225421905518, 0.1304347813129425, 0.24242423474788666, 0.13333332538604736, 0.1818181723356247, 0.13636362552642822, 0.21621620655059814, 0.052631575614213943, 0.10810810327529907, 0.27272728085517883, 0.1463414579629898, 0.1428571343421936, 0.20689654350280762, 0.19999998807907104, 0.12903225421905518, 0, 0, 0.2631579041481018, 0.16326530277729034, 0, 0.05714285373687744, 0, 0.1111111044883728, 0, 0.13114753365516663, 0.22857142984867096, 0.10256409645080566, 0.11428570747375488, 0.0624999962747097, 0.1538461446762085, 0.25641024112701416, 0.06451612710952759, 0.1304347813129425, 0.05714285373687744, 0.12121211737394333, 0.16949151456356049, 0.1818181723356247, 0.10810810327529907, 0.08695651590824127, 0.06451612710952759, 0.11428570747375488, 0.07999999821186066, 0.1764705777168274, 0.10256409645080566, 0.05882352590560913, 0.17391303181648254 ]
B1xVTjCqKQ
true
[ "We use deep learning techniques to solve the sparse signal representation and recovery problem." ]
[ "To select effective actions in complex environments, intelligent agents need to generalize from past experience.", "World models can represent knowledge about the environment to facilitate such generalization.", "While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways for deriving behaviors from them.", "We present Dreamer, a reinforcement learning agent that solves long-horizon tasks purely by latent imagination.", "We efficiently learn behaviors by backpropagating analytic gradients of learned state values through trajectories imagined in the compact state space of a learned world model.", "On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance.", "Intelligent agents can achieve goals in complex environments even though they never encounter the exact same situation twice.", "This ability requires building representations of the world from past experience that enable generalization to novel situations.", "World models offer an explicit way to represent an agent's knowledge about the world in a parametric model learned from experience that can make predictions about the future.", "When the sensory inputs are high-dimensional images, latent dynamics models can abstract observations to predict forward in compact state spaces (Watter et al., 2015; Oh et al., 2017; Gregor et al., 2019) .", "Compared to predictions in image space, latent states have a small memory footprint and enable imagining thousands of trajectories in parallel.", "Learning effective latent dynamics models is becoming feasible through advances in deep learning and latent variable models (Krishnan et al., 2015; Karl et al., 2016; Doerr et al., 2018; Buesing et al., 2018) .", "Behaviors can be derived from learned dynamics models in many ways.", "Often, imagined rewards are maximized by learning a parametric policy (Sutton, 1991; Ha and Schmidhuber, 2018; Zhang et al., 2019) or by online planning (Chua et al., 2018; Hafner et al., 2019) .", "However, considering only rewards within a fixed imagination horizon results in shortsighted behaviors.", "Moreover, prior work commonly resorts to derivative-free optimization for robustness to model errors (Ebert et al., 2017; Chua et al., 2018; Parmas et al., 2019) , rather than leveraging the analytic gradients offered by neural network dynamics models (Henaff et al., 2018; Srinivas et al., 2018) .", "We present Dreamer, an agent that learns long-horizon behaviors from images purely by latent imagination.", "A novel actor critic algorithm accounts for rewards beyond the planning horizon while making efficient use of the neural network dynamics.", "For this, we predict state values and actions in the learned latent space as summarized in Figure 1 .", "The values optimize Bellman consistency for imagined rewards and the policy maximizes the values by propagating their analytic gradients back through the dynamics.", "In comparison to actor critic algorithms that learn online or by experience replay Schulman et al., 2017; Haarnoja et al., 2018; , world models enable interpolating between past experience and offer analytic gradients of multi-step returns for efficient policy optimization.", "Figure 2: Agent observations for 5 of the 20 control tasks used in our experiments.", "These pose a variety of challenges including contact dynamics, sparse rewards, many degrees of freedom, and 3D environments that exceed the difficult 
of tasks previously solved through world models.", "The agent observes the images as 64 × 64 × 3 pixel arrays.", "The key contributions of this paper are summarized as follows:", "• Learning long-horizon behaviors in imagination: Purely model-based agents can be shortsighted due to finite imagination horizons.", "We approach this limitation in latent imagination by predicting both actions and state values.", "Training purely by latent imagination lets us efficiently learn the policy by propagating analytic gradients of the value function back through latent state transitions.", "• Empirical performance for visual control: We pair Dreamer with three representation learning objectives to evaluate it on the DeepMind Control Suite with image inputs, shown in Figure 2 .", "Using the same hyperparameters for all tasks, Dreamer exceeds existing model-based and model-free agents in terms of data-efficiency, computation time, and final performance.", "We present Dreamer, an agent that learns long-horizon behaviors purely by latent imagination.", "For this, we propose a novel actor critic method that optimizes a parametric policy by propagating analytic gradients of multi-step values back through latent neural network dynamics.", "Dreamer outperforms previous approaches in data-efficiency, computation time, and final performance on a variety of challenging continuous control tasks from image inputs.", "While our approach compares favourably on these tasks, future research on learning representations is likely needed to scale latent imagination to visually more complex environments.", "A DETAILED ALGORITHM: Update θ to predict rewards using representation learning." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0, 0, 0.04999999329447746, 0.625, 0.25641024112701416, 0, 0, 0.05882352590560913, 0.0952380895614624, 0.043478257954120636, 0.05405404791235924, 0.04651162400841713, 0, 0.04651162400841713, 0.13333332538604736, 0.11320754140615463, 0.8125, 0, 0.05882352590560913, 0.1621621549129486, 0.14814814925193787, 0, 0.04444443807005882, 0.0714285671710968, 0, 0.1818181723356247, 0.06896550953388214, 0.3684210479259491, 0.04444443807005882, 0, 0.8666666746139526, 0.23255813121795654, 0, 0.09999999403953552, 0.0714285671710968 ]
S1lOTC4tDS
true
[ "We present Dreamer, an agent that learns long-horizon behaviors purely by latent imagination using analytic value gradients." ]
[ "Transfer reinforcement learning (RL) aims at improving learning efficiency of an agent by exploiting knowledge from other source agents trained on relevant tasks.", "However, it remains challenging to transfer knowledge between different environmental dynamics without having access to the source environments.", "In this work, we explore a new challenge in transfer RL, where only a set of source policies collected under unknown diverse dynamics is available for learning a target task efficiently.", "To address this problem, the proposed approach, MULTI-source POLicy AggRegation (MULTIPOLAR), comprises two key techniques.", "We learn to aggregate the actions provided by the source policies adaptively to maximize the target task performance.", "Meanwhile, we learn an auxiliary network that predicts residuals around the aggregated actions, which ensures the target policy's expressiveness even when some of the source policies perform poorly.", "We demonstrated the effectiveness of MULTIPOLAR through an extensive experimental evaluation across six simulated environments ranging from classic control problems to challenging robotics simulations, under both continuous and discrete action spaces.", "We envision a future scenario where a variety of robotic systems, which are each trained or manually engineered to solve a similar task, provide their policies for a new robot to learn a relevant task quickly.", "For example, imagine various pick-and-place robots working in factories all over the world.", "Depending on the manufacturer, these robots will differ in their kinematics (e.g., link length, joint orientations) and dynamics (e.g., link mass, joint damping, friction, inertia).", "They could provide their policies to a new robot (Devin et al., 2017) , even though their dynamics factors, on which the policies are implicitly conditioned, are not typically available (Chen et al., 2018) .", "Moreover, we cannot rely on a history of their individual experiences, as they may be unavailable due to a lack of communication between factories or prohibitively large dataset sizes.", "In such scenarios, we argue that a key technique to develop is the ability to transfer knowledge from a collection of robots to a new robot quickly only by exploiting their policies while being agnostic to their different kinematics and dynamics, rather than collecting a vast amount of samples to train the new robot from scratch.", "The scenario illustrated above poses a new challenge in the transfer learning for reinforcement learning (RL) domains.", "Formally, consider multiple instances of a single environment that differ in their state transition dynamics, e.g., independent ant robots with different leg designs in Figure 1 , which reach different locations by executing the same walking actions.", "These source agents interacting with one of the environment instances provide their deterministic policy to a new target agent in another environment instance.", "Then, our problem is: can we efficiently learn the policy of a target agent given only the collection of source policies?", "Note that information about source environmental dynamics, such as the exact state transition distribu- Figure 2 : Overview of MULTIPOLAR.", "We formulate a target policy π target with the sum of", "1) the adaptive aggregation F agg of deterministic actions from source policies L and", "2) the auxiliary network F aux for predicting residuals around F agg .", "tions and the history of environmental states, will not be visible to the target agent as 
mentioned above.", "Also, the source policies are neither trained nor hand-engineered for the target environment instance, and therefore not guaranteed to work optimally and may even fail (Chen et al., 2018) .", "These conditions prevent us from adopting existing work on transfer RL between different environmental dynamics, as they require access to source environment instances or their dynamics for training a target policy (e.g., Lazaric et al. (2008) ; Chen et al. (2018) ; Yu et al. (2019) ; Tirinzoni et al. (2018) ).", "Similarly, meta-learning approaches (Vanschoren, 2018; Saemundsson et al., 2018; Clavera et al., 2019 ) cannot be used here because they typically train an agent on a diverse set of tasks (i.e., environment instances).", "Also, existing techniques that utilize a collection of source policies, e.g., policy reuse frameworks (Fernández & Veloso, 2006; Rosman et al., 2016; Zheng et al., 2018) and option frameworks (Sutton et al., 1999; Bacon et al., 2017; Mankowitz et al., 2018) , are not a promising solution because, to our knowledge, they assume source policies have the same environmental dynamics but have different goals.", "As a solution to the problem, we propose a new transfer RL approach named MULTI-source POLicy AggRegation (MULTIPOLAR).", "As shown in Figure 2 , our key idea is twofold;", "1) In a target policy, we adaptively aggregate the deterministic actions produced by a collection of source policies.", "By learning aggregation parameters to maximize the expected return at a target environment instance, we can better adapt the aggregated actions to unseen environmental dynamics of the target instance without knowing source environmental dynamics nor source policy performances.", "2) We also train an auxiliary network that predicts a residual around the aggregated actions, which is crucial for ensuring the expressiveness of the target policy even when some source policies are not useful.", "As another notable advantage, the proposed MULTIPOLAR can be used for both continuous and discrete action spaces with few modifications while allowing a target policy to be trained in a principled fashion.", "Similar to Ammar et al. (2014) ; Song et al. (2016) ; Chen et al. (2018) ; Tirinzoni et al. (2018) ; Yu et al. 
(2019) , our method assumes that the environment structure (state/action space) is identical between the source and target environments, while dynamics/kinematics parameters are different.", "This assumption holds in many real-world applications such as in sim-to-real tasks (Tan et al., 2018) , industrial insertion tasks (Schoettler et al., 2019) (different dynamics comes from the differences in parts), and wearable robots (Zhang et al., 2017) (with users as dynamics).", "We evaluate MULTIPOLAR in a variety of environments ranging from classic control problems to challenging robotics simulations.", "Our experimental results demonstrate the significant improvement of sample efficiency with the proposed approach, compared to baselines that trained a target policy from scratch or from a single source policy.", "We also conducted a detailed analysis of our approach and found it works well even when some of the source policies performed poorly in their original environment instance.", "Main contributions: (1) a new transfer RL problem that leverages multiple source policies collected under diverse environmental dynamics to train a target policy in another dynamics, and (2) MULTIPOLAR, a simple yet principled and effective solution verified in our extensive experiments.", "Reinforcement Learning We formulate our problem under the standard RL framework (Sutton & Barto, 1998) , where an agent interacts with its environment modeled by a Markov decision process (MDP).", "An MDP is represented by the tuple M = (ρ 0 , γ, S, A, R, T ) where ρ 0 is the initial state distribution and γ is a discount factor.", "At each timestep t, given the current state s t ∈ S, the agent executes an action a t ∈ A based on its policy π(a t | s t ; θ) that is parameterized by θ.", "The environment returns a reward R(s t , a t ) ∈ R and transitions to the next state s t+1 based on the state transition distribution T (s t+1 | s t , a t ).", "In this framework, RL aims to maximize the expected return with respect to the policy parameters θ.", "Our work is broadly categorized as an instance of transfer RL (Taylor & Stone, 2009) , in which a policy for a target task is trained using information collected from source tasks.", "In this section, we highlight how our work is different from the existing approaches and also discuss the current limitations as well as future directions.", "Transfer between Different Dynamics There has been very limited work on transferring knowledge between agents in different environmental dynamics.", "As introduced briefly in Section 1, some methods require training samples collected from source tasks.", "These sampled experiences are then used for measuring the similarity between environment instances (Lazaric et al., 2008; Ammar et al., 2014; Tirinzoni et al., 2018) or for conditioning a target policy to predict actions (Chen et al., 2018) .", "Alternative means to quantify the similarity is to use a full specification of MDPs (Song et al., 2016; Wang et al., 2019) or environmental dynamics Yu et al. 
(2019) .", "In contrast, the proposed MULTI-POLAR allows the knowledge transfer only through the policies acquired from source environment instances, which is beneficial when source and target environments are not always connected to exchange information about their environmental dynamics and training samples.", "Leveraging Multiple Policies The idea of utilizing multiple source policies can be found in the literature of policy reuse frameworks (Fernández & Veloso, 2006; Rosman et al., 2016; Li & Zhang, 2018; Zheng et al., 2018; Li et al., 2019) .", "The basic motivation behind these works is to provide \"nearly-optimal solutions\" (Rosman et al., 2016) for short-duration tasks by reusing one of the source policies, where each source would perform well on environment instances with different rewards (e.g., different goals in maze tasks).", "In our problem setting, where environmental dynamics behind each source policy are different, reusing a single policy without an adaptation is not the right approach, as described in (Chen et al., 2018) and also demonstrated in our experiment.", "Another relevant idea is hierarchical RL (Barto & Mahadevan, 2003; Kulkarni et al., 2016; Osa et al., 2019) that involves a hierarchy of policies (or action-value functions) to enable temporal abstraction.", "In particular, option frameworks (Sutton et al., 1999; Bacon et al., 2017; Mankowitz et al., 2018 ) make use of a collection of policies as a part of \"options\".", "However, they assumed all the policies in the hierarchy to be learned in a single environment instance.", "Another relevant work along this line of research is (Frans et al., 2018) , which meta-learns a hierarchy of multiple sub-policies by training a master policy over the distribution of tasks.", "Nevertheless, hierarchical RL approaches are not useful for leveraging multiple source policies each acquired under diverse environmental dynamics.", "Learning Residuals in RL Finally, some recent works adopt residual learning to mitigate the limited performance of hand-engineered policies (Silver et al., 2018; Johannink et al., 2019; Rana et al., 2019) .", "We are interested in a more extended scenario where various source policies with unknown performances are provided instead of a single sub-optimal policy.", "Also, these approaches focus only on RL problems for robotic tasks in the continuous action space, while our approach could work on both of continuous and discrete action spaces in a broad range of environments.", "Limitations and Future Directions Currently, our work has several limitations.", "First, MULTI-POLAR may not be scalable to a large number of source policies, as its training and testing times will increase almost linearly with the number of source policies.", "One possible solution for this issue would be pre-screening source policies before starting to train a target agent, for example, by testing each source on the target task and taking them into account in the training phase only when they are found useful.", "Moreover, our work assumes source and target environment instances to be different only in their state transition distribution.", "An interesting direction for future work is to involve other types of environmental differences, such as dissimilar rewards and state/action spaces.", "We presented a new problem setting of transfer RL that aimed to train a policy efficiently using a collection of source policies acquired under diverse environmental dynamics.", "We demonstrated that the proposed MULTIPOLAR is, despite its 
simplicity, a principled approach with high training sample efficiency on a variety of environments.", "Our transfer RL approach is advantageous when one does not have access to a distribution of diverse environmental dynamics.", "Future work will seek to adapt our approach to more challenging domains such as a real-world robotics task." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.0833333283662796, 0.23255813121795654, 0.5090909004211426, 0, 0.2926829159259796, 0.23076923191547394, 0.14035087823867798, 0.21052631735801697, 0.05128204822540283, 0.07999999821186066, 0.145454540848732, 0.11320754140615463, 0.17391303181648254, 0.1428571343421936, 0.12903225421905518, 0.3333333432674408, 0.31111109256744385, 0.17391303181648254, 0.277777761220932, 0.14999999105930328, 0, 0.1860465109348297, 0.14814814925193787, 0.2535211145877838, 0.13793103396892548, 0.23076923191547394, 0.23255813121795654, 0.054054051637649536, 0.23255813121795654, 0.28070175647735596, 0.27586206793785095, 0.2142857164144516, 0.16129031777381897, 0.06557376682758331, 0.23255813121795654, 0.26923075318336487, 0.22641508281230927, 0.5714285373687744, 0.1428571343421936, 0.03703703358769417, 0.10526315122842789, 0.07692307233810425, 0.1463414579629898, 0.3214285671710968, 0, 0.13636362552642822, 0.1463414579629898, 0.14035087823867798, 0.19607841968536377, 0.22580644488334656, 0.16949151456356049, 0.11594202369451523, 0.19354838132858276, 0.2142857164144516, 0.12244897335767746, 0.19512194395065308, 0.1111111044883728, 0.3181818127632141, 0.18518517911434174, 0.3404255211353302, 0.1428571343421936, 0, 0.19230768084526062, 0.1846153736114502, 0.1818181723356247, 0.12765957415103912, 0.6000000238418579, 0.1666666567325592, 0.35555556416511536, 0.09302324801683426 ]
Byx9p2EtDH
true
[ "We propose MULTIPOLAR, a transfer RL method that leverages a set of source policies collected under unknown diverse environmental dynamics to efficiently learn a target policy in another dynamics." ]
[ "Reinforcement learning algorithms rely on carefully engineered rewards from the environment that are extrinsic to the agent.", "However, annotating each environment with hand-designed, dense rewards is difficult and not scalable, motivating the need for developing reward functions that are intrinsic to the agent. \n", "Curiosity is such intrinsic reward function which uses prediction error as a reward signal.", "In this paper:", "(a) We perform the first large-scale study of purely curiosity-driven learning, i.e. {\\em without any extrinsic rewards}, across $54$ standard benchmark environments, including the Atari game suite.", "Our results show surprisingly good performance as well as a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many games.", "(b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.).", "(c) We demonstrate limitations of the prediction-based rewards in stochastic setups.", "Game-play videos and code are at https://doubleblindsupplementary.github.io/large-curiosity/.", "Reinforcement learning (RL) has emerged as a popular method for training agents to perform complex tasks.", "In RL, the agent's policy is trained by maximizing a reward function that is designed to align with the task.", "The rewards are extrinsic to the agent and specific to the environment they are defined for.", "Most of the success in RL has been achieved when this reward function is dense and well-shaped, e.g., a running \"score\" in a video game BID19 .", "However, designing a wellshaped reward function is a notoriously challenging engineering problem.", "An alternative to \"shaping\" an extrinsic reward is to supplement it with dense intrinsic rewards BID24 , that is, rewards that are generated by the agent itself.", "Examples of intrinsic reward include \"curiosity\" BID20 BID33 BID37 BID9 BID25 which uses prediction error as reward signal, and \"visitation counts\" BID2 BID22 BID28 BID18 which discourages the agent from revisiting the same states.", "The idea is that these intrinsic rewards will bridge the gaps between sparse extrinsic rewards by guiding the agent to efficiently explore the environment to find the next extrinsic reward.But what about scenarios with no extrinsic reward at all?", "This is not as strange as it sounds.", "Developmental psychologists talk about intrinsic motivation (i.e., curiosity) as the primary driver in the early stages of development BID38 BID30 : babies appear to employ goal-less exploration to learn skills that will be useful later on in life.", "There are plenty of other examples, from playing Minecraft to visiting your local zoo, where no extrinsic rewards are required.", "Indeed, there is evidence that pre-training an agent on a given environment using only intrinsic rewards allows it to learn much faster when fine-tuned to a novel task in a novel environment BID25 BID23 .", "Yet, so far, there has been no systematic study of learning with only intrinsic rewards.In this paper, we perform a large-scale empirical study of agents driven purely by intrinsic rewards across a range of diverse simulated environments.", "In particular, we choose the dynamics-based curiosity model of intrinsic reward presented in BID25 because it is scalable and trivially parallelizable, making it ideal 
for large-scale experimentation.", "The central idea is to represent intrinsic reward as the error in predicting the consequence of the agent's action given its current state, Figure 1 : A snapshot of the 54 environments investigated in the paper.", "We show that agents are able to make progress using no extrinsic reward, or end-of-episode signal, and only using curiosity.", "Video results, code and models at https://doubleblindsupplementary.github.io/large-curiosity/.i.e", "., the prediction error of learned forward-dynamics of the agent. We", "thoroughly investigate the dynamics-based curiosity across 54 environments: video games, physics engine simulations, and virtual 3D navigation tasks, shown in Figure 1 .To", "develop a better understanding of curiosity-driven learning, we further study the crucial factors that determine its performance. In", "particular, predicting the future state in the high dimensional raw observation space (e.g., images) is a challenging problem and, as shown by recent works BID25 BID39 , learning dynamics in an auxiliary feature space leads to improved results. However", ", how one chooses such an embedding space is a critical, yet open research problem. To ensure", "stable online training of dynamics, we argue that the desired embedding space should: 1) be compact", "in terms of dimensionality, 2) preserve sufficient", "information about the observation, and 3) be a stationary function", "of the observations. Through systematic ablation", ", we examine the role of different ways to encode agent's observation such that an agent can perform well, driven purely by its own curiosity. Here \"performing well\" means", "acting purposefully and skillfully in the environment. This can be assessed quantitatively", ", in some cases, by measuring extrinsic rewards or environment-specific measures of exploration, or qualitatively, by observing videos of the agent interacting. We show that encoding observations", "via a random network turn out to be a simple, yet surprisingly effective technique for modeling curiosity across many popular RL benchmarks. This might suggest that many popular", "RL video game testbeds are not as visually sophisticated as commonly thought. Interestingly, we discover that although", "random features are sufficient for good performance in environments that were used for training, the learned features appear to generalize better (e.g., to novel game levels in Super Mario Bros.).The main contributions of this paper are:", "(a) Large-scale study of curiosity-driven", "exploration across a variety of environments including: the set of Atari games BID1 , Super Mario Bros., virtual 3D navigation in Unity BID13 , multi-player Pong, and Roboschool environments. (b) Extensive investigation of different", "feature spaces for learning the dynamics-based curiosity: random features, pixels, inverse-dynamics BID25 and variational auto-encoders BID14 and evaluate generalization to unseen environments. (c) Analysis of some limitations of a direct", "prediction-error based curiosity formulation. We observe that if the agent itself is the source", "of stochasticity in the environment, it can reward itself without making any actual progress. 
We empirically demonstrate this limitation in a 3D", "navigation task where the agent controls different parts of the environment.", "We have shown that our agents trained purely with a curiosity reward are able to learn useful behaviours:", "(a) Agent being able to play many Atari games without using any rewards.", "(b) Mario being able to cross over 11 levels without any extrinsic reward.", "(c) Walking-like behavior emerged in the Ant environment.", "(d) Juggling-like behavior in Robo-school environment", "(e) Rally-making behavior in Two-player Pong with curiosity-driven agent on both sides.", "But this is not always true as there are some Atari games where exploring the environment does not correspond to extrinsic reward.More generally, our results suggest that, in many game environments designed by humans, the extrinsic reward is often aligned with the objective of seeking novelty.Limitation of prediction error based curiosity: A more serious potential limitation is the handling of stochastic dynamics.", "If the transitions in the environment are random, then even with a perfect dynamics model, the expected reward will be the entropy of the transition, and the agent will seek out transitions with the highest entropy.", "Even if the environment is not truly random, unpredictability caused by a poor learning algorithm, an impoverished model class or partial observability can lead to exactly the same problem.", "We did not observe this effect in our experiments on games so we designed an environment to illustrate the point.Figure 6: We add a noisy TV to the unity environment in Section 3.3.", "We compare IDF and RF with and without the TV.We return to the maze of Section 3.3 to empirically validate a common thought experiment called the noisy-TV problem.", "The idea is that local sources of entropy in an environment like a TV that randomly changes channels when an action is taken should prove to be an irresistible attraction to our agent.", "We take this thought experiment literally and add a TV to the maze along with an action to change the channel.", "In Figure 6 we show how adding the noisy-TV affects the performance of IDF and RF.", "As expected the presence of the TV drastically slows down learning, but we note that if you run the experiment for long enough the agents do sometimes converge to getting the extrinsic reward consistently.", "We have shown empirically that stochasticity can be a problem, and so it is important for future work to address this issue in an efficient manner.Future Work: We have presented a simple and scalable approach that can learn nontrivial behaviors across a diverse range of environments without any reward function or end-of-episode signal.", "One surprising finding of this paper is that random features perform quite well, but learned features appear to generalize better.", "While we believe that learning features will become more important once the environment is complex enough, we leave that for future work to explore.Our wider goal, however, is to show that we can take advantage of many unlabeled (i.e., not having an engineered reward function) environments to improve performance on a task of interest.", "Given this goal, showing performance in environments with a generic reward function is just the first step, and future work will hopefully investigate transfer from unlabeled to labeled environments." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19512194395065308, 0.15686273574829102, 0, 0, 0.26923075318336487, 0.2448979616165161, 0.1515151411294937, 0.1111111044883728, 0.05882352590560913, 0.04878048226237297, 0.1395348757505417, 0.21052631735801697, 0.11764705181121826, 0, 0.20408162474632263, 0.1428571343421936, 0.17543859779834747, 0, 0.09677419066429138, 0.13636362552642822, 0.1111111044883728, 0.14035087823867798, 0.11764705181121826, 0.1090909019112587, 0.22727271914482117, 0.0555555522441864, 0.1764705777168274, 0.1666666567325592, 0.09302324801683426, 0.032258059829473495, 0, 0.09756097197532654, 0.0624999962747097, 0.11428570747375488, 0.12903225421905518, 0.1111111044883728, 0.10810810327529907, 0.16326530277729034, 0.07999999821186066, 0, 0.09836065024137497, 0.06666666269302368, 0.1818181723356247, 0.11320754140615463, 0.10526315122842789, 0.08695651590824127, 0.17142856121063232, 0.09302324801683426, 0.052631575614213943, 0.10526315122842789, 0.060606058686971664, 0, 0.1621621549129486, 0.15189872682094574, 0.19607841968536377, 0.037735845893621445, 0.07407406717538834, 0.16326530277729034, 0.07547169178724289, 0.13636362552642822, 0.14999999105930328, 0.1090909019112587, 0.0555555522441864, 0.045454539358615875, 0.0810810774564743, 0.11320754140615463 ]
rJNwDjAqYX
true
[ "An agent trained only with curiosity, and no extrinsic reward, does surprisingly well on 54 popular environments, including the suite of Atari games, Mario etc." ]
[ "This work provides theoretical and empirical evidence that invariance-inducing regularizers can increase predictive accuracy for worst-case spatial transformations (spatial robustness). ", "Evaluated on these adversarially transformed examples, we demonstrate that adding regularization on top of standard or adversarial training reduces the relative error by 20% for CIFAR10 without increasing the computational cost. ", "This outperforms handcrafted networks that were explicitly designed to be spatial-equivariant.", "Furthermore, we observe for SVHN, known to have inherent variance in orientation, that robust training also improves standard accuracy on the test set." ]
[ 0, 0, 0, 1 ]
[ 0.20512819290161133, 0.1249999925494194, 0.06896550953388214, 0.24390242993831635 ]
B1e6oy39aE
false
[ "for spatial transformations robust minimizer also minimizes standard accuracy; invariance-inducing regularization leads to better robustness than specialized architectures" ]
[ "We propose order learning to determine the order graph of classes, representing ranks or priorities, and classify an object instance into one of the classes.", "To this end, we design a pairwise comparator to categorize the relationship between two instances into one of three cases: one instance is `greater than,' `similar to,' or `smaller than' the other.", "Then, by comparing an input instance with reference instances and maximizing the consistency among the comparison results, the class of the input can be estimated reliably.", "We apply order learning to develop a facial age estimator, which provides the state-of-the-art performance.", "Moreover, the performance is further improved when the order graph is divided into disjoint chains using gender and ethnic group information or even in an unsupervised manner.", "To measure the quality of something, we often compare it with other things of a similar kind.", "Before assigning 4 stars to a film, a critic would have thought, \"It is better than 3-star films but worse than 5-stars.\"", "This ranking through pairwise comparisons is done in various decision processes (Saaty, 1977) .", "It is easier to tell the nearer one between two objects in a picture than to estimate the distance of each object directly (Chen et al., 2016; Lee & Kim, 2019a) .", "Also, it is easy to tell a higher pitch between two notes, but absolute pitch is a rare ability (Bachem, 1955) .", "Ranking through comparisons has been investigated for machine learning.", "In learning to rank (LTR), the pairwise approach learns, between two documents, which one is more relevant to a query (Liu, 2009) .", "Also, in ordinal regression (Frank & Hall, 2001; Li & Lin, 2007) , to predict the rank of an object, binary classifications are performed to tell whether the rank is higher than a series of thresholds or not.", "In this paper, we propose order learning to learn ordering relationship between objects.", "Thus, order learning is related to LTR and ordinal regression.", "However, whereas LTR and ordinal regression assume that ranks form a total order (Hrbacek & Jech, 1984) , order learning can be used for a partial order as well.", "Order learning is also related to metric learning (Xing et al., 2003) .", "While metric learning is about whether an object is 'similar to or dissimilar from' another object, order learning is about 'greater than or smaller than.'", "Section 2 reviews this related work.", "In order learning, a set of classes, Θ = {θ 1 , θ 2 , · · · , θ n }, is ordered, where each class θ i represents one or more object instances.", "Between two classes θ i and θ j , there are three possibilities: θ i > θ j or θ i < θ j or neither (i.e. 
incomparable).", "These relationships are represented by the order graph.", "The goal of order learning is to determine the order graph and then classify an instance into one of the classes in Θ.", "To achieve this, we develop a pairwise comparator that determines ordering relationship between two instances x and y into one of three categories: x is 'greater than,' 'similar to,' or 'smaller than' y.", "Then, we use the comparator to measure an input instance against multiple reference instances in known classes.", "Finally, we estimate the class of the input to maximize the consistency among the comparison results.", "It is noted that the parameter optimization of the pairwise comparator, the selection of the references, and the discovery of the order graph are jointly performed to minimize a common loss function.", "Section 3 proposes this order learning.", "We apply order learning to facial age estimation.", "Order learning matches age estimation well, since it is easier to tell a younger one between two people than to estimate each person's age directly (Chang et al., 2010; Zhang et al., 2017a) .", "Even when we assume that age classes are linearly ordered, the proposed age estimator performs well.", "The performance is further improved, when classes are divided into disjoint chains in a supervised manner using gender and ethnic group information or even in an unsupervised manner.", "Section 4 describes this age estimator and discusses its results.", "Finally, Section 5 concludes this work.", "Order learning was proposed in this work.", "In order learning, classes form an ordered set, and each class represents object instances of the same rank.", "Its goal is to determine the order graph of classes and classify a test instance into one of the classes.", "To this end, we designed the pairwise comparator to learn ordering relationships between instances.", "We then decided the class of an instance by comparing it with reference instances in the same chain and maximizing the consistency among the comparison results.", "For age estimation, it was shown that the proposed algorithm yields the stateof-the-art performance even in the case of the single-chain hypothesis.", "The performance is further improved when the order graph is divided into multiple disjoint chains.", "In this paper, we assumed that the order graph is composed of disjoint chains.", "However, there are more complicated graphs, e.g. Figure 1 (a), than disjoint chains.", "For example, it is hard to recognize an infant's sex from its facial image (Porter et al., 1984) .", "But, after puberty, male and female take divergent paths.", "This can be reflected by an order graph, which consists of two chains sharing common nodes up to a certain age.", "It is an open problem to generalize order learning to find an optimal order graph, which is not restricted to disjoint chains." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2631579041481018, 0.1304347813129425, 0.10526315122842789, 0.19354838132858276, 0.19512194395065308, 0.1249999925494194, 0.10810810327529907, 0.13793103396892548, 0.17391303181648254, 0.1764705777168274, 0.07999999821186066, 0.1621621549129486, 0.20408162474632263, 0.20689654350280762, 0.4615384638309479, 0.1904761791229248, 0.2142857164144516, 0.2222222238779068, 0, 0.13333332538604736, 0.05714285373687744, 0.0833333283662796, 0.4444444477558136, 0.12765957415103912, 0.12121211737394333, 0.13793103396892548, 0.24390242993831635, 0.1818181723356247, 0.25, 0.17391303181648254, 0.06451612710952759, 0.1904761791229248, 0.07692307233810425, 0, 0.260869562625885, 0.1764705777168274, 0.3030303120613098, 0.06666666269302368, 0.20512819290161133, 0.22857142984867096, 0.19999998807907104, 0.19999998807907104, 0, 0.17142856121063232, 0.07999999821186066, 0.1621621549129486, 0.24242423474788666 ]
HygsuaNFwr
true
[ "The notion of order learning is proposed and it is applied to regression problems in computer vision" ]
[ "We study how the topology of a data set comprising two components representing two classes of objects in a binary classification problem changes as it passes through the layers of a well-trained neural network, i.e., one with perfect accuracy on training set and a generalization error of less than 1%.", "The goal is to shed light on two well-known mysteries in deep neural networks:", "(i) a nonsmooth activation function like ReLU outperforms a smooth one like hyperbolic tangent;", "(ii) successful neural network architectures rely on having many layers, despite the fact that a shallow network is able to approximate any function arbitrary well.", "We performed extensive experiments on persistent homology of a range of point cloud data sets.", "The results consistently demonstrate the following: (1) Neural networks operate by changing topology, transforming a topologically complicated data set into a topologically simple one as it passes through the layers.", "No matter how complicated the topology of the data set we begin with, when passed through a well-trained neural network, the Betti numbers of both components invariably reduce to their lowest possible values: zeroth Betti number is one and all higher Betti numbers are zero.", "Furthermore, (2) the reduction in Betti numbers is significantly faster for ReLU activation compared to hyperbolic tangent activation --- consistent with the fact that the former define nonhomeomorphic maps (that change topology) whereas the latter define homeomorphic maps (that preserve topology). Lastly, (3) shallow and deep networks process the same data set differently --- a shallow network operates mainly through changing geometry and changes topology only in its final layers, a deep network spreads topological changes more evenly across all its layers." ]
[ 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.25, 0.05714285373687744, 0.060606054961681366, 0.13333332538604736, 0.22857142984867096, 0.2916666567325592, 0.23333333432674408, 0.1882352977991104 ]
SkgBfaNKPr
false
[ "We show that neural networks operate by changing topologly of a data set and explore how architectural choices effect this change." ]
[ "The convergence rate and final performance of common deep learning models have significantly benefited from recently proposed heuristics such as learning rate schedules, knowledge distillation, skip connections and normalization layers.", "In the absence of theoretical underpinnings, controlled experiments aimed at explaining the efficacy of these strategies can aid our understanding of deep learning landscapes and the training dynamics.", "Existing approaches for empirical analysis rely on tools of linear interpolation and visualizations with dimensionality reduction, each with their limitations.", "Instead, we revisit the empirical analysis of heuristics through the lens of recently proposed methods for loss surface and representation analysis, viz. mode connectivity and canonical correlation analysis (CCA), and hypothesize reasons why the heuristics succeed.", "In particular, we explore knowledge distillation and learning rate heuristics of (cosine) restarts and warmup using mode connectivity and CCA.", " Our empirical analysis suggests that", ": (a) the reasons often quoted for the success of cosine annealing are not evidenced in practice", "; (b) that the effect of learning rate warmup is to prevent the deeper layers from creating training instability; and", "(c) that the latent knowledge shared by the teacher is primarily disbursed in the deeper layers.", "The introduction of heuristics such as normalization layers BID19 BID0 , residual connections BID11 , and learning rate strategies BID26 BID9 Smith, 2017) have greatly accelerated progress in Deep Learning.", "Many of these ingredients are now commonplace in modern architectures, and some of them have also been buttressed with theoretical guarantees BID1 BID28 BID10 .", "However, despite their simplicity and efficacy, why some of these heuristics work is still relatively unknown.", "Existing attempts at explaining these strategies empirically have been limited to intuitive explanations and the use of tools such as spectrum analysis (Sagun et al., 2017) , linear interpolation between two models and low-dimensional visualizations of the loss surface.", "In our work, we instead use recent tools built specifically for analyzing deep networks, viz., mode connectivity and singular value canonical correlation analysis (SVCCA) (Raghu et al., 2017) .", "We investigate three strategies in detail:", "(a) cosine learning rate decay,", "(b) learning rate warmup, and", "(c) knowledge distillation, and list the summary of our contributions at the end of this section.Cosine annealing BID26 , also known as stochastic gradient descent with restarts (SGDR), and more generally cyclical learning rate strategies (Smith, 2017) , have been recently proposed to accelerate training of deep networks BID3 .", "The strategy involves reductions and restarts of learning rates over the course of training, and was motivated as means to escape spurious local minima.", "Experimental results have shown that SGDR often improves convergence both from the standpoint of iterations needed for convergence and the final objective.Learning rate warmup BID9 also constitutes an important ingredient in training deep networks, especially in the presence of large or dynamic batch sizes.", "It involves increasing the learning rate to a large value over a certain number of training iterations followed by decreasing the learning rate, which can be performed using step-decay, exponential decay or other such schemes.", "The strategy was proposed out of the need to induce stability 
in the initial phase of training with large learning rates (due to large batch sizes).", "It has been employed in training of several architectures at scale including ResNets and Transformer networks (Vaswani et al., 2017) .Further", ", we investigate knowledge distillation (KD) BID13 . This strategy", "involves first training a (teacher) model on a typical loss function on the available data. Next, a different", "(student) model (typically much smaller than the teacher model) is trained, but instead of optimizing the loss function defined using hard data labels, this student model is trained to mimic the teacher model. It has been empirically", "found that a student network trained in this fashion significantly outperforms an identical network trained with the hard data labels. We defer a detailed discussion", "of the three heuristics, and existing explanations for their efficacy to sections 3, 4 and 5 respectively.Finally, we briefly describe the tools we employ for analyzing the aforementioned heuristics. Mode connectivity (MC) is a recent", "observation that shows that, under circumstances, it is possible to connect any two local minima of deep networks via a piecewise-linear curve BID5 . This shows that local optima obtained", "through different means, and exhibiting different local and generalization properties, are connected. The authors propose an algorithm that", "locates such a curve. While not proposed as such, we employ", "this framework to better understand loss surfaces but begin our analysis in Section 2 by first establishing its robustness as a framework.Deep network analyses focusing on the weights of a network are inherently limited since there are several invariances in this, such as permutation and scaling. Recently, Raghu et al. (2017) propose", "using CCA along with some pre-processing steps to analyze the activations of networks, such that the resulting comparison is not dependent on permutations and scaling of neurons. 
They also prove the computational gains", "of using CCA over alternatives ( BID25 ) for representational analysis and employ it to better understand many phenomenon in deep learning.", "Heuristics have played an important role in accelerating progress of deep learning.", "Founded in empirical experience, intuition and observations, many of these strategies are now commonplace in architectures.", "In the absence of strong theoretical guarantees, controlled experiments aimed at explaining the the efficacy of these strategies can aid our understanding of deep learning and the training dynamics.", "The primary goal of our work was the investigation of three such heuristics using sophisticated tools for landscape analysis.", "Specifically, we investigate cosine annealing, learning rate warmup, and knowledge distillation.", "For this purpose, we employ recently proposed tools of mode connectivity and CCA.", "Our empirical analysis sheds light on these heuristics and suggests that:", "(a) the reasons often quoted for the success of cosine annealing are not evidenced in practice;", "(b) that the effect of learning rate warmup is to prevent the deeper layers from creating training instability; and", "(c) that the latent knowledge shared by the teacher is primarily disbursed in the deeper layers.Inadvertently, our investigation also leads to the design of new heuristics for practically improving the training process.", "Through our results on SGDR, we provide additional evidence for the success of averaging schemes in this context.", "Given the empirical results suggesting the localization of the knowledge transfer between teacher and student in the process of distillation, a heuristic can be designed that only trains portions of the (pre-trained) student networks instead of the whole network.", "For instance, recent results on self-distillation BID6 show improved performance via multiple generations of knowledge distillation for the same model.", "Given our results, computational costs of subsequent generations can be reduced if only subsets of the model are trained, instead of training the entire model.", "Finally, the freezing of weights instead of employing learning rate warmup allows for comparable training performance but with reduced computation during the warmup phase.", "We note in passing that our result also ties in with results of Hoffer et al. FORMULA2 The learning rate is initialized to 0.05 and scaled down by a factor of 5 at epochs {60, 120, 160} (step decay).", "We use a training batch size of 100, momentum of 0.9, and a weight decay of 0.0005.", "Elements of the weight vector corresponding to a neuron are initialized randomly from the normal distribution N (0, 2/n) where n is the number of inputs to the neuron.", "We also use data augmentation by random cropping of input images.", "Figures 7, 8 and 9 show the Validation Loss, Training Accuracy and Training Loss respectively for the curves joining the 6 pairs discussed in Section 2.1.1.", "These results too, confirm the overfitting or poor generalization tendency of models on the curve." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.25, 0.17777776718139648, 0.19999998807907104, 0.23999999463558197, 0.5128204822540283, 0.07692307233810425, 0.05405404791235924, 0.3499999940395355, 0.05714285373687744, 0.19999998807907104, 0.09090908616781235, 0.1621621549129486, 0.17543859779834747, 0.19999998807907104, 0.14814814925193787, 0.1538461446762085, 0.23076923191547394, 0.21212120354175568, 0.1860465109348297, 0.16393442451953888, 0.18867923319339752, 0.1860465109348297, 0.1395348757505417, 0.20000000298023224, 0.0555555522441864, 0.07692307233810425, 0.09302324801683426, 0.23076923191547394, 0.08510638028383255, 0.05405404791235924, 0, 0.11764705181121826, 0.11764705181121826, 0.1860465109348297, 0.12121211737394333, 0.1666666567325592, 0.17777776718139648, 0.1538461446762085, 0.375, 0.29411762952804565, 0.1875, 0.0555555522441864, 0.3589743673801422, 0.19999998807907104, 0.051282044500112534, 0.19607841968536377, 0.1463414579629898, 0.0952380895614624, 0.2380952388048172, 0.20338982343673706, 0.277777761220932, 0.09090908616781235, 0.1875, 0.045454539358615875, 0.05714285373687744 ]
r14EOsCqKX
true
[ "We use empirical tools of mode connectivity and SVCCA to investigate neural network training heuristics of learning rate restarts, warmup and knowledge distillation." ]
[ "The increasing demand for neural networks (NNs) being employed on embedded devices has led to plenty of research investigating methods for training low precision NNs.", "While most methods involve a quantization step, we propose a principled Bayesian approach where we first infer a distribution over a discrete weight space from which we subsequently derive hardware-friendly low precision NNs.", "To this end, we introduce a probabilistic forward pass to approximate the intractable variational objective that allows us to optimize over discrete-valued weight distributions for NNs with sign activation functions.", "In our experiments, we show that our model achieves state of the art performance on several real world data sets.", "In addition, the resulting models exhibit a substantial amount of sparsity that can be utilized to further reduce the computational costs for inference.", "With the advent of deep neural networks (NNs) impressive performances have been achieved in many applications such as computer vision BID13 , speech recognition , and machine translation , among others.", "However, the performance improvements are largely attributed to increasing hardware capabilities that enabled the training of ever-increasing network architectures.", "On the other side, there is also a growing interest in making NNs available for embedded devices with drastic memory and power limitations -a field with plenty of interesting applications that barely profit from the tendency towards larger and deeper network structures.Thus, there is an emerging trend in developing NN architectures that allow fast and energy-efficient inference and require little storage for the parameters.", "In this paper, we focus on reduced precision methods that restrict the number of bits per weight while keeping the network structures at a decent size.", "While this reduces the memory footprint for the parameters accordingly, it can also result in drastic improvements in computation speed if appropriate representations for the weight values are used.", "This direction of research has been pushed towards NNs that require in the extreme case only a single bit per weight.", "In this case, assuming weights w ∈ {−1, 1} and binary inputs x ∈ {−1, 1}, costly floating point multiplications can be replaced by cheap and hardware-friendly logical XNOR operations.", "However, training such NNs is inherently different as discrete valued NNs cannot be directly optimized using gradient based methods.", "Furthermore, NNs with binary weights exhibit their full computational benefits only in case the sign activation function is used whose derivative is zero almost everywhere, and, therefore, is not suitable for backpropagation.Most methods for training reduced precision NNs either quantize the weights of pre-trained full precision NNs BID3 or train reduced precision NNs by maintaining a set of full precision weights that are deterministically or stochastically quantized during forward or backward propagation.", "Gradient updates computed with the quantized weights are then applied to the full precision weights BID4 BID19 BID8 .", "This approach alone fails if the sign activation function is used.", "A promising approach is based on the straight through gradient estimator (STE) BID1 which replaces the zero gradient of hard threshold functions by a non-zero surrogate derivative.", "This allows information in computation graphs to flow backwards such that parameters can be updated using gradient based optimization methods.", "Encouraging 
results are presented in BID8 where the STE is applied to the weight binarization and to the sign activation function.", "These methods, although showing The aim is to obtain a single discrete-valued NN (top right) with a good performance.", "We achieve this by training a distribution over discrete-valued NNs (bottom right) and subsequently deriving a single discrete-valued NN from that distribution.", "(b) Probabilistic forward pass: The idea is to propagate distributions through the network by approximating a sum over random variables by a Gaussian and subsequently propagating that Gaussian through the sign activation function.convincing empirical performance, have in common that they appear rather heuristic and it is usually not clear whether they optimize any well defined objective.Therefore, it is desired to develop principled methods that support discrete weights in NNs.", "In this paper, we propose a Bayesian approach where we first infer a distribution q(W ) over a discrete weight space from which we subsequently derive discrete-valued NNs.", "Thus, we can optimize over real-valued distribution parameters using gradient-based optimization instead of optimizing directly over the intractable combinatorial space of discrete weights.", "The distribution q(W ) can be seen as an exponentially large ensemble of NNs where each NN is weighted by its probability q(W ).Rather", "than having a single value for each connection of the NN, we now maintain a whole distribution for each connection (see bottom right of FIG0 (a)).", "To obtain", "q(W ), we employ variational inference where we approximate the true posterior p(W |D) by minimizing the variational objective KL(q(W )||p(W |D)). Although", "the variational objective is intractable, this idea has recently received a lot of attention for real-valued NNs due to the reparameterization trick which expresses gradients of intractable expectations as expectations of tractable gradients BID20 BID12 BID2 . This allows", "us to efficiently compute unbiased gradient samples of the intractable variational objective that can subsequently be used for stochastic optimization. Unfortunately", ", the reparameterization trick is only suitable for real-valued distributions which renders it unusable for our case. The recently", "proposed Gumbel softmax distribution BID10 BID16 overcomes this issue by relaxing one-hot encoded discrete distributions with probability vectors. Subsequently", ", the reparameterization trick can again be applied. However, for", "the sign activation function one still has to rely on the STE or similar heuristics. The log-derivative", "trick offers an alternative for discrete distributions to express gradients of expectations with expectations of gradients BID18 . However, the resulting", "gradient samples are known to suffer from high variance. Therefore, the log-derivative", "trick is typically impractical unless suitable variance reduction techniques are used. This lack of practical methods", "has led to a limited amount of literature investigating Bayesian NNs with discrete weights BID23 .In this work, we approximate the", "intractable variational objective with a probabilistic forward pass (PFP) BID26 BID23 BID6 BID21 . The idea is to propagate probabilities", "through the network by successively approximating the distributions of activations with a Gaussian and propagating this Gaussian through the sign activation function FIG0 ). 
This results in a well-defined objective", "whose gradient with respect to the variational parameters can be computed analytically. This is true for discrete weight distributions", "as well as for the sign activation function with zero gradient almost everywhere. The method is very flexible in the sense that", "different weight distributions can be used in different layers. We utilize this flexibility to represent the", "weights in the first layer with 3 bits and we use ternary weights w ∈ {−1, 0, 1} in the remaining layers.In our experiments, we evaluate the performance of our model by reporting the error of (i) the most probable model of the approximate", "posterior q(W ) and (ii) approximated expected predictions using the", "PFP. We show that averaging over small ensembles of NNs", "sampled from W ∼ q(W ) can improve the performance while inference using the ensemble is still cheaper than inference using a single full precision NN. Furthermore, our method exhibits a substantial amount", "of sparsity that further reduces the computational overhead. Compared to BID8 , our method requires less precision", "for the first layer, and we do not introduce a computational overhead by using batch normalization which appears to be a crucial component of their method.The paper is outlined as follows. In Section 2, we introduce the notation and formally", "define the PFP. Section 3 shows details of our model. Section 4 shows", "experiments. In Section 5 we discuss", "important issues concerning", "our model and Section 6 concludes the paper.", "The presented model has many tunable parameters, especially the type of variational distributions for the individual layers, that heavily influence the behavior in terms of convergence at training time and performance at test time.", "The binomial distribution appears to be a natural choice for evenly spaced values with many desirable properties.", "It is fully specified by only a single parameter, and its mean, variance, and KL divergence with another binomial has nice analytic expressions.", "Furthermore, neighboring values have similar probabilities which rules out odd cases in which, for instance, there is a value with low probability in between of two values with high probability.Unfortunately, the binomial distribution is not suited for the first layer as here it is crucial to be able to set weights with high confidence to zero.", "However, when favoring zero weights by setting w p = 0.5, the variance of the binomial distribution takes on its largest possible value.", "This might not be a problem in case predictions are computed as the true expectations with respect to q(W ) as in the PFP, but it results in bad classification errors when deriving a single model from q(W ).", "We also observed that using the binomial distribution in deeper layers favor the weights −1 and 1 over 0 (cf. 
TAB2 ).", "This might indicate that binary weights w ∈ {−1, 1} using a Bernoulli distribution could be sufficient, but in our experiments we observed this to perform worse.", "We believe this to stem partly from the importance of the zero weight and partly from the larger variance of 4w p (1 − w p ) of the Bernoulli distribution compared to the variance of 2w p (1 − w p ) of the binomial distribution.Furthermore, there is a general issue with the sign activation functions if the activations are close to zero.", "In this case, a small change to the inputs can cause the corresponding neuron to take on a completely different value which might have a large impact on the following layers of the NN.", "We found dropout to be a very helpful tool to counteract this issue.", "FIG1 shows histograms of the activations of the second hidden layer for both a model trained with dropout and the same model trained without dropout.", "We can see that without dropout the activations are much closer to zero whereas dropout introduces a much larger spread of the activations and even causes the histogram to decrease slightly in the vicinity of zero.", "Thus, the activations are much more often in regions that are stable with respect to changes of their inputs which makes them more robust.", "We believe that such regularization techniques are crucial if the sign activation function is used.", "We introduced a method to infer NNs with low precision weights.", "As opposed to existing methods, our model neither quantizes the weights of existing full precision NNs nor does it rely on heuristics to compute \"approximated\" gradients of functions whose gradient is zero almost everywhere.", "We perform variational inference to obtain a distribution over a discrete weight space from which we subsequently derive a single discrete-valued NN or a small ensemble of discrete-valued NNs.", "Our method propagates probabilities through the network which results in a well defined function that allows us to optimize the discrete distribution even for the sign activation function.", "The weights in the first layer are modeled using fixed point values with 3 bits precision and the weights in the remaining layers have values w ∈ {−1, 0, 1}.", "This reduces costly floating point multiplications to cheaper multiplications with fixed point values of 3 bits precision in the first layer, and logical XNOR operations in the following layers.", "In general, our approach allows flexible bit-widths for each individual layer.", "We have shown that the performance of our model is on par with state of the art methods that use a higher precision for the weights.", "Furthermore, our model exhibits a large amount of sparsity that can be utilized to further reduce the computational overhead.A DATA SETS" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.10526315122842789, 0.2380952388048172, 0.09302324801683426, 0, 0.1111111044883728, 0.04651162400841713, 0.0624999962747097, 0.14705881476402283, 0.10256409645080566, 0.05128204822540283, 0.05714285373687744, 0, 0.1249999925494194, 0.08695651590824127, 0, 0.07999999821186066, 0.1538461446762085, 0, 0.0624999962747097, 0.1249999925494194, 0.1818181723356247, 0.11267605423927307, 0.2631579041481018, 0.11428570747375488, 0.10810810327529907, 0.17142856121063232, 0, 0.1702127605676651, 0.05714285373687744, 0.1875, 0.12121211737394333, 0.0833333283662796, 0, 0.12903225421905518, 0.07692307233810425, 0.06666666269302368, 0.11428570747375488, 0.12121211737394333, 0.10256409645080566, 0.1818181723356247, 0.11764705181121826, 0, 0, 0, 0, 0.1395348757505417, 0, 0.1599999964237213, 0, 0, 0, 0, 0.04651162400841713, 0.19354838132858276, 0.1111111044883728, 0.16949151456356049, 0.05405404791235924, 0.08695651590824127, 0.05714285373687744, 0.09756097197532654, 0.14814814925193787, 0.09756097197532654, 0.07692307233810425, 0.12121211737394333, 0.04878048226237297, 0.0555555522441864, 0.06896550953388214, 0.07999999821186066, 0.04444444179534912, 0.25641024112701416, 0.307692289352417, 0, 0, 0.07999999821186066, 0.1666666567325592, 0.0555555522441864 ]
r1h2DllAW
true
[ "Variational Inference for infering a discrete distribution from which a low-precision neural network is derived" ]
[ "Many irregular domains such as social networks, financial transactions, neuron connections, and natural language structures are represented as graphs.", "In recent years, a variety of graph neural networks (GNNs) have been successfully applied for representation learning and prediction on such graphs.", "However, in many of the applications, the underlying graph changes over time and existing GNNs are inadequate for handling such dynamic graphs.", "In this paper we propose a novel technique for learning embeddings of dynamic graphs based on a tensor algebra framework.", "Our method extends the popular graph convolutional network (GCN) for learning representations of dynamic graphs using the recently proposed tensor M-product technique.", "Theoretical results that establish the connection between the proposed tensor approach and spectral convolution of tensors are developed.", "Numerical experiments on real datasets demonstrate the usefulness of the proposed method for an edge classification task on dynamic graphs.", "Graphs are popular data structures used to effectively represent interactions and structural relationships between entities in structured data domains.", "Inspired by the success of deep neural networks for learning representations in the image and language domains, recently, application of neural networks for graph representation learning has attracted much interest.", "A number of graph neural network (GNN) architectures have been explored in the contemporary literature for a variety of graph related tasks and applications (Hamilton et al., 2017; Seo et al., 2018; Zhou et al., 2018; Wu et al., 2019) .", "Methods based on graph convolution filters which extend convolutional neural networks (CNNs) to irregular graph domains are popular (Bruna et al., 2013; Defferrard et al., 2016; Kipf and Welling, 2016) .", "Most of these GNN models operate on a given, static graph.", "In many real-world applications, the underlining graph changes over time, and learning representations of such dynamic graphs is essential.", "Examples include analyzing social networks (Berger-Wolf and Saia, 2006) , predicting collaboration in citation networks (Leskovec et al., 2005) , detecting fraud and crime in financial networks (Weber et al., 2018; Pareja et al., 2019) , traffic control (Zhao et al., 2019) , and understanding neuronal activities in the brain (De Vico Fallani et al., 2014) .", "In such dynamic settings, the temporal interdependence in the graph connections and features also play a substantial role.", "However, efficient GNN methods that handle time varying graphs and that capture the temporal correlations are lacking.", "By dynamic graph, we mean a sequence of graphs (V, A (t) , X (t) ), t ∈ {1, 2, . . . 
, T }, with a fixed set V of N nodes, adjacency matrices A (t) ∈ R N ×N , and graph feature matrices X (t) ∈ R N ×F where X (t) n: ∈ R F is the feature vector consisting of F features associated with node n at time t.", "The graphs can be weighted, and directed or undirected.", "They can also have additional properties like (time varying) node and edge classes, which would be stored in a separate structure.", "Suppose we only observe the first T < T graphs in the sequence.", "The goal of our method is to use these observations to predict some property of the remaining T − T graphs.", "In this paper, we use it for edge classification.", "Other potential applications are node classification and edge/link prediction.", "In recent years, tensor constructs have been explored to effectively process high-dimensional data, in order to better leverage the multidimensional structure of such data (Kolda and Bader, 2009) .", "Tensor based approaches have been shown to perform well in many image and video processing ap- plications Martin et al., 2013; Zhang et al., 2014; Zhang and Aeron, 2016; Lu et al., 2016; Newman et al., 2018) .", "A number of tensor based neural networks have also been investigated to extract and learn multi-dimensional representations, e.g. methods based on tensor decomposition (Phan and Cichocki, 2010), tensor-trains (Novikov et al., 2015; Stoudenmire and Schwab, 2016) , and tensor factorized neural network (Chien and Bao, 2017) .", "Recently, a new tensor framework called the tensor M-product framework (Braman, 2010; Kilmer and Martin, 2011; Kernfeld et al., 2015) was proposed that extends matrix based theory to high-dimensional architectures.", "In this paper, we propose a novel tensor variant of the popular graph convolutional network (GCN) architecture (Kipf and Welling, 2016), which we call TensorGCN.", "It captures correlation over time by leveraging the tensor M-product framework.", "The flexibility and matrix mimeticability of the framework, help us adapt the GCN architecture to tensor space.", "Figure 1 illustrates our method at a high level: First, the time varying adjacency matrices A (t) and feature matrices X (t) of the dynamic graph are aggregated into an adjacency tensor and a feature tensor, respectively.", "These tensors are then fed into our TensorGCN, which computes an embedding that can be used for a variety of tasks, such as link prediction, and edge and node classification.", "GCN architectures are motivated by graph convolution filtering, i.e., applying filters/functions to the graph Laplacian (in turn its eigenvalues) (Bruna et al., 2013) , and we establish a similar connection between TensorGCN and spectral filtering of tensors.", "Experimental results on real datasets illustrate the performance of our method for the edge classification task on dynamic graphs.", "Elements of our method can also be used as a preprocessing step for other dynamic graph methods.", "We have presented a novel approach for dynamic graph embedding which leverages the tensor Mproduct framework.", "We used it for edge classification in experiments on four real datasets, where it performed competitively compared to state-of-the-art methods.", "Future research directions include further developing the theoretical guarantees for the method, investigating optimal structure and learning of the transform matrix M, using the method for other prediction tasks, and investigating how to utilize deeper architectures for dynamic graph learning." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0624999962747097, 0.3333333432674408, 0.22857142984867096, 0.5454545617103577, 0.4000000059604645, 0.06451612710952759, 0.3125, 0, 0.15789473056793213, 0.1304347813129425, 0.2380952388048172, 0.23999999463558197, 0.1818181723356247, 0.038461536169052124, 0.19354838132858276, 0.06666666269302368, 0.1269841194152832, 0.08695651590824127, 0.05714285373687744, 0.07999999821186066, 0.1249999925494194, 0.08695651590824127, 0, 0.04878048226237297, 0.04651162400841713, 0.15094339847564697, 0.1395348757505417, 0.31578946113586426, 0.07999999821186066, 0.06666666269302368, 0.22727271914482117, 0.09302324801683426, 0.07999999821186066, 0.32258063554763794, 0.32258063554763794, 0.46666666865348816, 0.1818181723356247, 0.17391303181648254 ]
rylVTTVtvH
true
[ "We propose a novel tensor based method for graph convolutional networks on dynamic graphs" ]
[ "Our main motivation is to propose an efficient approach to generate novel multi-element stable chemical compounds that can be used in real world applications.", "This task can be formulated as a combinatorial problem, and it takes many hours of human experts to construct, and to evaluate new data.", "Unsupervised learning methods such as Generative Adversarial Networks (GANs) can be efficiently used to produce new data. ", "Cross-domain Generative Adversarial Networks were reported to achieve exciting results in image processing applications.", "However, in the domain of materials science, there is a need to synthesize data with higher order complexity compared to observed samples, and the state-of-the-art cross-domain GANs can not be adapted directly. \n\n", "In this contribution, we propose a novel GAN called CrystalGAN which generates new chemically stable crystallographic structures with increased domain complexity.", "We introduce an original architecture, we provide the corresponding loss functions, and we show that the CrystalGAN generates very reasonable data.", "We illustrate the efficiency of the proposed method on a real original problem of novel hydrides discovery that can be further used in development of hydrogen storage materials.", "In modern society, a big variety of inorganic compositions are used for hydrogen storage owing to its favorable cost BID4 .", "A vast number of organic molecules are applied in solar cells, organic light-emitting diodes, conductors, and sensors BID25 .", "Synthesis of new organic and inorganic compounds is a challenge in physics, chemistry and in materials science.", "Design of new structures aims to find the best solution in a big chemical space, and it is in fact a combinatorial optimization problem.In this work, we focus on applications of hydrogen storage, and in particular, we challenge the problem to investigate novel chemical compositions with stable crystals.", "Traditionally, density functional theory (DFT) plays a central role in prediction of chemically relevant compositions with stable crystals BID22 .", "However, the DFT calculations are computationally expensive, and it is not acceptable to apply it to test all possible randomly generated structures.A number of machine learning approaches were proposed to facilitate the search for novel stable compositions BID3 .", "There was an attempt to find new compositions using an inorganic crystal structure database, and to estimate the probabilities of new candidates based on compositional similarities.", "These methods to generate relevant chemical compositions are based on recommender systems BID10 .", "The output of the recommender systems applied in the crystallographic field is a rating or preference for a structure.", "A recent approach based on a combination of machine learning methods and the high-throughput DFT calculations allowed to explore ternary chemical compounds BID21 , and it was shown that statistical methods can be of a big help to identify stable structures, and that they do it much faster than standard methods.", "Recently, support vector machines were tested to predict crystal structures BID16 showing that the method can reliably predict the crystal structure given its composition.", "It is worth mentioning that data representation of observations to be passed to a learner, is critical, and data representations which are the most suitable for learning algorithms, are not necessarily scientifically intuitive BID23 .Deep", "learning methods were reported to learn rich 
hierarchical models over all kind of data, and the GANs BID8 ) is a state-of-the-art model to synthesize data. Moreover", ", deep networks were reported to learn transferable representations BID18 . The GANs", "were already exploited with success in cross-domain learning applications for image processing BID13 BID12 .Our goal", "is to develop a competitive approach to identify stable ternary chemical compounds, i.e., compounds containing three different elements, from observations of binary compounds. Nowadays", ", there does not exist any approach that can be applied directly to such an important task of materials science. The state-of-the-art", "GANs are limited in the sense that they do not generate samples in domains with increased complexity, e.g., the application where we aim to construct crystals with three elements from observations containing two chemical elements only. An attempt to learn", "many-to-many mappings was recently introduced by BID0 , however, this promising approach does not allow to generate data of a higher-order dimension.Our contribution is multi-fold:• To our knowledge, we are the first to introduce a GAN to solve the scientific problem of discovery of novel crystal structures, and we introduce an original methodology to generate new stable chemical compositions; • The proposed method is called CrystalGAN, and it consists of two cross-domain GAN blocks with constraints integrating prior knowledge including a feature transfer step; • The proposed model generates data with increased complexity with respect to observed samples; • We demonstrate by numerical experiments on a real challenge of chemistry and materials science that our approach is competitive compared to existing methods; • The proposed algorithm is efficiently implemented in Python, and it will be publicly available shortly, as soon as the contribution is de-anonymized.This paper is organized as follows. We discuss the related", "work in Section 2. In Section 3, we provide", "the formalisation of the problem, and introduce the CrystalGAN. The results of our numerical", "experiments are discussed in Section 4. 
Concluding remarks and perspectives", "close the paper.", "In our numerical experiments, we compare the proposed CrystalGAN with a classical GAN, the DiscoGAN BID13 , and the CrystalGAN but without the geometric constraints.", "All these GANs generate POSCAR files, and we evaluate the performance of the models by the number of generated ternary structures which satisfy the geometric crystallographic environment.", "Table 2 shows the number of successes for the considered methods.", "The classical GAN which takes Gaussian noise as an input, does not generate acceptable chemical structures.", "The DiscoGAN approach performs quite well if we use it to generate novel pseudo-binary structures, however, it is not adapted to synthesize ternary compositions.", "We observed that the CrystalGAN (with the geometric constraints) outperforms all tested methods.From multiple discussions with experts in materials science and chemistry, first, we know that the number of novel stable compounds can not be very high, and it is already considered as a success if we synthesize several stable structures which satisfy the constraints.", "Hence, we can not really reason in terms of accuracy or error rate which are widely used metrics in machine learning and data mining.Second, evaluation of a stable structure is not straightforward.", "Given a new composition, only the result of density functional theory (DFT) calculations can provide a conclusion whether this composition is stable enough, and whether it can be used in practice.", "However, the DFT calculations are computationally too expensive, and it is out of question to run them on all data we generated using the CrystalGAN.", "It is planned to run the DFT calculations on some pre-selected generated ternary compositions to take a final decision on practical utility of the chemical compounds.", "Our goal was to develop a principled approach to generate new ternary stable crystallographic structures from observed binary, i.e. containing two chemical elements only.", "We propose a learning method called CrystalGAN to discover cross-domain relations in real data, and to generate novel structures.", "The proposed approach can efficiently integrate, in form of constraints, prior knowledge provided by human experts.CrystalGAN is the first GAN developed to generate scientific data in the field of materials science.", "To our knowledge, it is also the first approach which generates data of a higher-order complexity, i.e., ternary structures where the domains are well-separated from observed binary compounds.", "The CrystalGAN was, in particular, successfully tested to tackle the challenge to discover new materials for hydrogen storage.Currently, we investigate different GANs architectures, also including elements of reinforcement learning, to produce data even of a higher complexity, e.g., compounds containing four or five chemical elements.", "Note that although the CrystalGAN was developed and tested for applications in materials science, it is a general method where the constraints can be easily adapted to any scientific problem." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1249999925494194, 0.06451612710952759, 0.07407406717538834, 0, 0.14999999105930328, 0.13333332538604736, 0, 0.11764705181121826, 0, 0, 0.1666666567325592, 0.125, 0, 0.045454543083906174, 0.1249999925494194, 0.09090908616781235, 0, 0.039215683937072754, 0, 0, 0.05714285373687744, 0.0952380895614624, 0.07999999821186066, 0.060606054961681366, 0.06451612710952759, 0.08695651590824127, 0.0833333283662796, 0, 0, 0, 0, 0, 0.0624999962747097, 0, 0.07999999821186066, 0.06451612710952759, 0.06896551698446274, 0, 0.054054051637649536, 0.060606054961681366, 0.0624999962747097, 0.12121211737394333, 0.14814814925193787, 0.052631575614213943, 0, 0.15094339847564697, 0.052631575614213943 ]
SyEGUi05Km
true
[ "\"Generating new chemical materials using novel cross-domain GANs.\"" ]
[ "Given samples from a group of related regression tasks, a data-enriched model describes observations by a common and per-group individual parameters.", "In high-dimensional regime, each parameter has its own structure such as sparsity or group sparsity.", "In this paper, we consider the general form of data enrichment where data comes in a fixed but arbitrary number of tasks $G$ and any convex function, e.g., norm, can characterize the structure of both common and individual parameters. \t", "We propose an estimator for the high-dimensional data enriched model and investigate its statistical properties. ", "We delineate the sample complexity of our estimator and provide high probability non-asymptotic bound for estimation error of all parameters under a condition weaker than the state-of-the-art.", "We propose an iterative estimation algorithm with a geometric convergence rate.", "Overall, we present a first through statistical and computational analysis of inference in the data enriched model. \n\t", "Over the past two decades, major advances have been made in estimating structured parameters, e.g., sparse, low-rank, etc., in high-dimensional small sample problems BID13 BID6 BID14 .", "Such estimators consider a suitable (semi) parametric model of the response: y = φ(x, β * )+ω based on n samples {(x i , y i )} n i=1and β * ∈ R p is the true parameter of interest.", "The unique aspect of such high-dimensional setup is that the number of samples n < p, and the structure in β * , e.g., sparsity, low-rank, makes the estimation possible (Tibshirani, 1996; BID7 BID5 ).", "In several real world problems, natural grouping among samples arises and learning a single common model β 0 for all samples or many per group individual models β g s are unrealistic.", "The middle ground model for such a scenario is the superposition of common and individual parameters β 0 + β g which has been of recent interest in the statistical machine learning community BID16 and is known by multiple names.", "It is a form of multi-task learning (Zhang & Yang, 2017; BID17 when we consider regression in each group as a task.", "It is also called data sharing BID15 since information contained in different group is shared through the common parameter β 0 .", "And finally, it has been called data enrichment BID10 BID0 because we enrich our data set with pooling multiple samples from different but related sources.In this paper, we consider the following data enrichment (DE) model where there is a common parameter β * 0 shared between all groups plus individual per-group parameters β * g which characterize the deviation of group g: y gi = φ(x gi , (β * 0 + β * g )) + ω gi , g ∈ {1, . . . 
, G}, (1) where g and i index the group and samples respectively.", "Note that the DE model is a system of coupled superposition models.", "We specifically focus on the high-dimensional small sample regime for (1) where the number of samples n g for each group is much smaller than the ambient dimensionality, i.e., ∀g : n g p.", "Similar to all other highdimensional models, we assume that the parameters β g are structured, i.e., for suitable convex functions f g 's, f g (β g ) is small.", "Further, for the technical analysis and proofs, we focus on the case of linear models, i.e., φ(x, β) = x T β.", "The results seamlessly extend to more general non-linear models, e.g., generalized linear models, broad families of semi-parametric and single-index models, non-convex models, etc., using existing results, i.e., how models like LASSO have been extended (e.g. employing ideas such as restricted strong convexity (Negahban & Wainwright, 2012) ).In", "the context of Multi-task learning (MTL), similar models have been proposed which has the general form of y gi = x T gi (β * 1g + β * 2g ) + ω gi where B 1 = [β 11 , . . . , β 1G ] and B 2 = [β 21 , . . . , β 2G ] are two parameter matrices (Zhang & Yang, 2017) . To", "capture relation of tasks, different types of constraints are assumed for parameter matrices. For", "example, BID11 assumes B 1 and B 2 are sparse and low rank respectively. In", "this parameter matrix decomposition framework for MLT, the most related work to ours is the one proposed by BID17 where authors regularize the regression with B 1 1,∞ and B 2 1,1 where norms are p, q-norms on rows of matrices. Parameters", "of B 1 are more general than DE's common parameter when we use f 0 (β 0 ) = β 0 1 . This is because", "B 1 1,∞ regularizer enforces shared support of β * 1g s, i.e., supp(β * 1i ) = supp(β * 1j ) but allows β * 1i = β * 1j . Further sparse", "variation between parameters of different tasks is induced by B 2 1,1 which has an equivalent effect to DE's individual parameters where f g (·)s are l 1 -norm. Our analysis of", "DE framework suggests that it is more data efficient than this setup of BID17 ) because they require every task i to have large enough samples to learn its own common parameters β i while DE shares the common parameter and only requires the total dataset over all tasks to be sufficiently large.The DE model where β g 's are sparse has recently gained attention because of its application in wide range of domains such as personalized medicine BID12 , sentiment analysis, banking strategy BID15 , single cell data analysis (Ollier & Viallon, 2015) , road safety (Ollier & Viallon, 2014) , and disease subtype analysis BID12 . In spite of the", "recent surge in applying data enrichment framework to different domains, limited advances have been made in understanding the statistical and computational properties of suitable estimators for the data enriched model. In fact, non-asymptotic", "statistical properties, including sample complexity and statistical rates of convergence, of regularized estimators for the data enriched model is still an open question BID15 Ollier & Viallon, 2014) . To the best of our knowledge", ", the only theoretical guarantee for data enrichment is provided in (Ollier & Viallon, 2015) where authors prove sparsistency of their proposed method under the stringent irrepresentability condition of the design matrix for recovering supports of common and individual parameters. 
Existing support recovery guarantees", "(Ollier & Viallon, 2015) , sample complexity and l 2 consistency results BID17 of related models are restricted to sparsity and l 1 -norm, while our estimator and norm consistency analysis work for any structure induced by arbitrary convex functions f g . Moreover, no computational results,", "such as rates of convergence of the optimization algorithms associated with proposed estimators, exist in the literature." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2222222238779068, 0, 0.1111111044883728, 0.3636363446712494, 0.380952388048172, 0.3571428656578064, 0.34285715222358704, 0, 0.07999999821186066, 0.11999999731779099, 0.12765957415103912, 0.19230768084526062, 0.21052631735801697, 0, 0.06593406200408936, 0.13793103396892548, 0.1249999925494194, 0.04444443807005882, 0.19999998807907104, 0.0624999962747097, 0.05970148742198944, 0.13333332538604736, 0.06666666269302368, 0.145454540848732, 0.05128204822540283, 0.04878048226237297, 0.12765957415103912, 0.057692304253578186, 0.21276594698429108, 0.21276594698429108, 0.1071428507566452, 0.20338982343673706, 0.0624999962747097 ]
Byx1S9Bj3N
true
[ "We provide an estimator and an estimation algorithm for a class of multi-task regression problem and provide statistical and computational analysis.." ]
[ "Autonomous vehicles are becoming more common in city transportation. ", "Companies will begin to find a need to teach these vehicles smart city fleet coordination. ", "Currently, simulation based modeling along with hand coded rules dictate the decision making of these autonomous vehicles.", "We believe that complex intelligent behavior can be learned by these agents through Reinforcement Learning.", "In this paper, we discuss our work for solving this system by adapting the Deep Q-Learning (DQN) model to the multi-agent setting. ", "Our approach applies deep reinforcement learning by combining convolutional neural networks with DQN to teach agents to fulfill customer demand in an environment that is partially observ-able to them.", "We also demonstrate how to utilize transfer learning to teach agents to balance multiple objectives such as navigating to a charging station when its en-ergy level is low.", "The two evaluations presented show that our solution has shown hat we are successfully able to teach agents cooperation policies while balancing multiple objectives.", "Many business problems that exist in todays environment consist of multiple decisions makers either collaborating or competing towards a particular goal.", "In this work, the challenge is applying multi-agent systems for autonomous fleet control.", "As Autonomous Vehicles (AVs) are becoming more prevalent, companies controlling these fleets such as Uber/Lyft will need to teach these agents to make optimal decisions.", "The goal of this work is to train these agents/cars optimal relocation strategies that will maximize the efficiency of the fleet while satisfying customer trip demand.", "Traditional solutions will use discrete event simulation modeling to optimize over a chosen objective function.", "This approach requires various hand coded rules as well as assumptions to help the model converge on a solution.", "This becomes an extremely difficult problem when there are many outside environment dynamics that can influence an agents/cars decision making (E.g. Charging, Parking).", "Furthermore, a solution to a particular environment may become outdated with new incoming information (E.g. New Demand Distribution).An", "algorithm that can adapt and learn decision making organically is needed for these types of problems and recent works in Reinforcement Learning and particularly Deep Reinforcement Learning has shown to be effective in this space. Deep", "Minds recent success with Deep Q Learning (DQN) was proven to be very successful in learning human level performance for many Atari 2600 games which was difficult before this because of its highly dimension unstructured data. In this", "work, we will pull from prior work in Multi-Agent Deep Reinforcement Learning (MA-DRL) and extend this to our multi-agent system of cars and fleet coordination. We will", "represent the city environment that holds the cars and customers as an image-like state representation where each layer holds specific information about the environment. We then", "will introduce our work with applying this to a partially observable environment where agents can only see a certain distance from them and show how this helps with scaling up. Along with", "that, we will show how we took advantage of Transfer Learning to teach agents multiple objects in particular charging an import aspect of AVs. Our results", "show that we are successfully able to teach coordination strategies with other cars so that they can optimize the utility of each car. 
Finally, we", "are also able to teach agents the second object of keeping itself alive while not losing the previous objective of picking up customers.", "Deep Reinforcement Learning provides a great approach to teach agents how to solve complex problems that us as humans may never be able to solve.", "For instance, Deep Mind has been successful in teach an agent to defeat the world champion in Go.", "More specifically, multi-Agent Reinforcement Learning problems provide an interesting avenue to investigate agent to agent communication and decision protocols.", "Since agents must rationalize about the intentions of other agents the dimensionality of the problem space becomes difficult to solve.", "In our use case, we wanted to see if we can scale a DRL solution up to an actual ride sharing environment that maintains the same dynamics as it would in real life.", "For this to be possible, we were tasked with the problem of teaching these agents effective cooperation strategies that would optimize the reward of the system along with the problem of teaching these same agents multiple objectives.", "This work, demonstrated how we successfully applied a partially observable multi-agent deep reinforcement solution to this ride sharing problem.", "Along with that, we showed how we can effectively take advantage of transfer learning to adapt decision policies to account for multiple objectives." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.307692289352417, 0, 0.23076923191547394, 0.1249999925494194, 0.15789473056793213, 0.1666666567325592, 0.17142856121063232, 0, 0.0833333283662796, 0.1764705777168274, 0.11428570747375488, 0.07692307233810425, 0.06896550953388214, 0, 0.06666666269302368, 0.19512194395065308, 0.12765957415103912, 0.3333333432674408, 0, 0.10256409645080566, 0.22857142984867096, 0.1764705777168274, 0.1875, 0.3636363446712494, 0.2142857164144516, 0.2142857164144516, 0.14814814925193787, 0.0476190447807312, 0.10526315122842789, 0.06666666269302368, 0.0624999962747097 ]
B1EGg7ZCb
true
[ "Utilized Deep Reinforcement Learning to teach agents ride-sharing fleet style coordination." ]
[ "Stability is a key aspect of data analysis.", "In many applications, the natural notion of stability is geometric, as illustrated for example in computer vision.", "Scattering transforms construct deep convolutional representations which are certified stable to input deformations.", "This stability to deformations can be interpreted as stability with respect to changes in the metric structure of the domain. \n\n", "In this work, we show that scattering transforms can be generalized to non-Euclidean domains using diffusion wavelets, while preserving a notion of stability with respect to metric changes in the domain, measured with diffusion maps.", "The resulting representation is stable to metric perturbations of the domain while being able to capture ''high-frequency'' information, akin to the Euclidean Scattering.", "Convolutional Neural Networks (CNN) are layered information processing architectures.", "Each of the layers in a CNN is itself the composition of a convolution operation with a pointwise nonlinearity where the filters used at different layers are the outcome of a data-driven optimization process BID22 .", "Scattering transforms have an analogous layered architecture but differ from CNNs in that the convolutional filters used at different layers are not trained but selected from a multi-resolution filter bank BID25 BID3 .", "The fact that they are not trained endows scattering transforms with intrinsic value in situations where training is impossible -and inherent limitations in the converse case.", "That said, an equally important value of scattering transforms is that by isolating the convolutional layered architecture from training effects it permits analysis of the fundamental properties of CNN information processing architectures.", "This analysis is undertaken in BID25 ; BID3 where the fundamental conclusion is about the stability of scattering transforms with respect to deformations in the underlying domain that are close to translations.In this paper we consider graphs and signals supported on graphs such as brain connectivity networks and functional activity levels BID17 , social networks and opinions BID19 , or user similarity networks and ratings in recommendation systems BID18 .", "Our specific goals are:", "(i) To define a family of graph-scattering transforms.", "(ii) To define a notion of deformation for graph signals.", "(iii) To study the stability of graph scattering transforms with respect to this notion of deformation.", "To accomplish goal", "(i) we consider the family of graph diffusion wavelets which provide an appropriate construction of a multi-resolution filter bank BID8 .", "Our diffusion scattering transforms are defined as the layered composition of diffusion wavelet filter banks and pointwise nonlinearities.", "To accomplish goal", "(ii) we adopt the graph diffusion distance as a measure of deformation of the underlying domain BID27 .", "Diffusion distances measure the similarity of two graphs through the time it takes for a signal to be diffused on the graph.", "The major accomplishment of this paper is to show that the diffusion graph scattering transforms are stable with respect to deformations as measured with respect to diffusion distances.", "Specifically, consider a signal x supported on graph G whose diffusion scattering transform is denoted by the operator Ψ G .", "Consider now a deformation of the signal's domain so that the signal's support is now described by the graph G whose diffusion scattering operator is Ψ G .", "We show that 
the operator norm distance Ψ G − Ψ G is bounded by a constant multiplied by the diffusion distance between the graphs G and G .", "The constant in this bound depends on the spectral gap of G but, very importantly, does not depend on the number of nodes in the graph.It is important to point out that finding stable representations is not difficult.", "E.g., taking signal averages is a representation that is stable to domain deformations -indeed, invariant.", "The challenge is finding a representation that is stable and rich in its description of the signal.", "In our numerical analyses we show that linear filters can provide representations that are either stable or rich but that cannot be stable and rich at the same time.", "The situation is analogous to (Euclidean) scattering transforms and is also associated with high frequency components.", "We can obtain a stable representation by eliminating high frequency components but the representation loses important signal features.", "Alternatively, we can retain high frequency components to have a rich representation but that representation is unstable to deformations.", "Diffusion scattering transforms are observed to be not only stable -as predicted by our theoretical analysis -but also sufficiently rich to achieve good performance in graph signal classification examples.", "In this work we addressed the problem of stability of graph representations.", "We designed a scattering transform of graph signals using diffusion wavelets and we proved that this transform is stable under deformations of the underlying graph support.", "More specifically, we showed that the scattering transform of a graph signal supported on two different graphs is proportional to the diffusion distance between those graphs.", "As a byproduct of our analysis, we obtain stability bounds for Graph Neural Networks generated by diffusion operators.", "Additionally, we showed that the resulting descriptions are also rich enough to be able to adequately classify plays by author in the context of authorship attribution, and identify the community origin of a signal in a source localization problem.That said, there are a number of directions to build upon from these results.", "First, our stability bounds depend on the spectral gap of the graph diffusion.", "Although lazy diffusion prevents this spectral gap to vanish, as the size of the graph increases we generally do not have a tight bound, as illustrated by regular graphs.", "An important direction of future research is thus to develop stability bounds which are robust to vanishing spectral gaps.", "Next, and related to this first point, we are working on extending the analysis to broader families of wavelet decompositions on graphs and their corresponding graph neural network versions, including stability with respect to the GromovHausdorff metric, which can be achieved by using graph wavelet filter banks that achieve bounds analogous to those in Lemmas 5.1 and 5.2.A PROOF OF PROPOSITION 4.1Since all operators ψ j are polynomials of the diffusion T , they all diagonalise in the same basis.", "Let T = V ΛV T , where V T V = I contains the eigenvectors of T and Λ = diag(λ 0 , . . . , λ n−1 ) its eigenvalues.", "The frame bounds C 1 , C 2 are obtained by evaluating Ψx 2 for x = v i , i = 1, . . . , n− 1, since v 0 corresponds to the square-root degree vector and x is by assumption orthogonal to v 0 .We", "verify that the spectrum of ψ j is given by (p j (λ 0 ) , . . . , p j (λ n−1 )), where DISPLAYFORM0 2 . 
It", "follows from the definition that DISPLAYFORM1 . .", ", n − 1 and therefore DISPLAYFORM2 We check that DISPLAYFORM3 2 . One easily verifies that Q(x) is continuous in [0, 1) since it is bounded by a geometric series. Also, observe that DISPLAYFORM4 since x ∈ [0, 1). By continuity it thus follows that DISPLAYFORM5 which results in g (", "t) ≤ rβ r−1 B − A , proving (23).By", "plugging FORMULA5 into (22) we thus obtain DISPLAYFORM6 (1−β 2 ) 3 . Finally", ", we observe that DISPLAYFORM7 Without loss of generality, assume that the node assignment that minimizes T G − ΠT G Π T is the identity. We need", "to bound the leading eigenvectors of two symmetric matrices T G and T G with a spectral gap. As before", ", let DISPLAYFORM8 Since we are free to swap the role of v and v , the result follows. DISPLAYFORM9", "First, note that ρ G = ρ G = ρ since it is a pointwise nonlinearity (an absolute value), and is independent of the graph topology. Now, let's start", "with k = 0. In this case, we get U G x − U G x which is immediately bounded by Lemma 5.2 satisfying equation 15.For k = 1 we have DISPLAYFORM10 where the triangular inequality of the norm was used, together with the fact that ρu − ρu ≤ ρ(u − u ) for any real vector u since ρ is the pointwise absolute value. Using the submultiplicativity", "of the operator norm, we get DISPLAYFORM11 From Lemmas 5.1 and 5.2 we have that Ψ G − Ψ G ≤ ε Ψ and U G − U G ≤ ε U , and from Proposition 4.1 that Ψ G ≤ 1. Note also that U G = U G = 1 and that ρ = 1. This yields DISPLAYFORM12 satisfying equation 15 for k = 1.For k = 2, we observe that DISPLAYFORM13 The first term is bounded in a straightforward fashion by DISPLAYFORM14 analogy to the development for k = 1. Since U G = 1, for the second term, we focus on DISPLAYFORM15 We note that, in the first term in equation 33, the first layer induces an error, but after that, the processing is through the same filter banks. So we are basically interested", "in bounding the propagation of the error induced in the first layer. Applying twice the fact that ρ(u", ") − ρ(u ) ≤ ρ(u − u ) we get DISPLAYFORM16", "And following with submultiplicativity of the operator norm, DISPLAYFORM17 For the second term in equation 33, we see that the first layer applied is the same in both, namely ρΨ G so there is no error induced. Therefore, we are interested in the error obtained", "after the first layer, which is precisely the same error obtained for k = 1. Therefore, DISPLAYFORM18 Plugging equation 35 and equation 36 back in equation 31 we get DISPLAYFORM19 satisfying equation 15 for k = 2.For general k we see that we will have a first term that is the error induced by the mismatch on the low pass filter that amounts to ε U , a second term that accounts for the propagation through (k − 1) equal layers of an initial error, yielding ε Ψ , and a final third term that is the error induced by the previous layer, (k − 1)ε Ψ . More formally, assume that equation 15 holds for k", "− 1, implying that DISPLAYFORM20 Then, for k, we can write DISPLAYFORM21 Again, the first term we bound it in a straightforward manner using submultiplicativity of the operator norm DISPLAYFORM22 For the second term, since U G = 1 we focus on DISPLAYFORM23 The first term in equation 42 computes the propagation in the initial error caused by the first layer. 
Then, repeatedly applying ρ(u) − ρ(u ) ≤ ρ(u − u )", "in analogy with k = 2 and using", "submultiplicativity, we get DISPLAYFORM24 The second term in equation 42 is the bounded by equation 38, since the first layer is exactly the same in this second term. Then, combining equation 43 with equation 38, yields DISPLAYFORM25", "Overall, we get DISPLAYFORM26 which satisfies equation 15 for k. Finally, since this holds for k = 2, the proof is completed by induction", ".E PROOF OF COROLLARY 5.4From Theorem 5.3, we have DISPLAYFORM27 and, by definition (Bruna & Mallat, 2013, Sec. 3 .1), DISPLAYFORM28 so that DISPLAYFORM29 Then, applying the inequality of Theorem 5.3, we get DISPLAYFORM30 Now, considering each term, such that DISPLAYFORM31 + m−1 k=0 2 3/2 k β 2 + (1 + β 2 + ) (1 − β − )(1 − β 2 + ) 3 d" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.29999998211860657, 0.13793103396892548, 0.23999999463558197, 0.2666666507720947, 0.1818181723356247, 0.1875, 0, 0.10526315122842789, 0.0476190447807312, 0.10810810327529907, 0.1463414579629898, 0.17391304671764374, 0, 0.09999999403953552, 0.1818181723356247, 0.37037035822868347, 0, 0.19354838132858276, 0.20689654350280762, 0, 0.29629629850387573, 0.25, 0.34285715222358704, 0.25806450843811035, 0.3030303120613098, 0.0624999962747097, 0.22727271914482117, 0.1428571343421936, 0.1428571343421936, 0.10810810327529907, 0.14814814925193787, 0.06896550953388214, 0.13793103396892548, 0.14999999105930328, 0.3478260934352875, 0.4571428596973419, 0.3333333432674408, 0.06666666269302368, 0.1090909093618393, 0.25, 0.20512820780277252, 0.13333332538604736, 0.09999999403953552, 0.11764705181121826, 0.08888888359069824, 0.11428570747375488, 0.10526315122842789, 0, 0, 0, 0.11428570747375488, 0.19999998807907104, 0.20689654350280762, 0.1666666567325592, 0.05970149114727974, 0.06451612710952759, 0.1538461446762085, 0, 0.0833333283662796, 0.07058823108673096, 0.05882352590560913, 0, 0.05128204822540283, 0.05882352590560913, 0.06451612710952759 ]
BygqBiRcFQ
true
[ "Stability of scattering transform representations of graph data to deformations of the underlying graph support." ]
[ "We propose a novel deep network architecture for lifelong learning which we refer to as Dynamically Expandable Network (DEN), that can dynamically decide its network capacity as it trains on a sequence of tasks, to learn a compact overlapping knowledge sharing structure among tasks.", "DEN is efficiently trained in an online manner by performing selective retraining, dynamically expands network capacity upon arrival of each task with only the necessary number of units, and effectively prevents semantic drift by splitting/duplicating units and timestamping them.", "We validate DEN on multiple public datasets in lifelong learning scenarios on multiple public datasets, on which it not only significantly outperforms existing lifelong learning methods for deep networks, but also achieves the same level of performance as the batch model with substantially fewer number of parameters.", "Lifelong learning BID13 , the problem of continual learning where tasks arrive in sequence, is an important topic in transfer learning.", "The primary goal of lifelong learning is to leverage knowledge from earlier tasks for obtaining better performance, or faster convergence/training speed on models for later tasks.", "While there exist many different approaches to tackle this problem, we consider lifelong learning under deep learning to exploit the power of deep neural networks.", "Fortunately, for deep learning, storing and transferring knowledge can be done in a straightforward manner through the learned network weights.", "The learned weights can serve as the knowledge for the existing tasks, and the new task can leverage this by simply sharing these weights.Therefore, we can consider lifelong learning simply as a special case of online or incremental learning, in case of deep neural networks.", "There are multiple ways to perform such incremental learning BID12 BID17 .", "The simplest way is to incrementally fine-tune the network to new tasks by continuing to train the network with new training data.", "However, such simple retraining of the network can degenerate the performance for both the new tasks and the old ones.", "If the new task is largely different from the older ones, such as in the case where previous tasks are classifying images of animals and the new task is to classify images of cars, then the features learned on the previous tasks may not be useful for the new one.", "At the same time, the retrained representations for the new task could adversely affect the old tasks, as they may have drifted from their original meanings and are no longer optimal for them.", "For example, the feature describing stripe pattern from zebra, may changes its meaning for the later classification task for classes such as striped t-shirt or fence, which can fit to the feature and drastically change its meaning.Then how can we ensure that the knowledge sharing through the network is beneficial for all tasks, in the online/incremental learning of a deep neural network?", "Recent work suggests to either use a regularizer that prevents the parameters from drastic changes in their values yet still enables to find a good solution for the new task BID4 , or block any changes to the old task BID4 retrains the entire network learned on previous tasks while regularizing it to prevent large deviation from the original model.", "Units and weights colored in red denote the ones that are retrained, and black ones are ones that remain fixed.", "(b) Non-retraining models such as Progressive Network BID12 expands the network for 
the new task t, while withholding modification of network weights for previous tasks.", "(c) Our DEN selectively retrains the old network, expanding its capacity when necessary, and thus dynamically deciding its optimal capacity as it trains on.", "parameters BID12 .", "Our strategy is different from both approaches, since we retrain the network at each task t such that each new task utilizes and changes only the relevant part of the previous trained network, while still allowing to expand the network capacity when necessary.", "In this way, each task t will use a different subnetwork from the previous tasks, while still sharing a considerable part of the subnetwork with them.", "FIG0 illustrates our model in comparison with existing deep lifelong learning methods.There are a number of challenges that need to be tackled for such incremental deep learning setting with selective parameter sharing and dynamic layer expansion.", "1) Achieving scalability and efficiency in training: If the network grows in capacity, training cost per task will increasingly grow as well, since the later tasks will establish connections to a much larger network.", "Thus, we need a way to keep the computational overhead of retraining to be low.2) Deciding when to expand the network, and how many neurons to add: The network might not need to expand its size, if the old network sufficiently explains the new task.", "On the other hand, it might need to add in many neurons if the task is very different from the existing ones.", "Hence, the model needs to dynamically add in only the necessary number of neurons.3) Preventing semantic drift, or catastrophic forgetting, where the network drifts away from the initial configuration as it trains on, and thus shows degenerate performance for earlier examples/tasks.", "As our method retrains the network, even partially, to fit to later learned tasks, and add in new neurons which might also negatively affect the prior tasks by establishing connections to old subnetwork, we need a mechanism to prevent potential semantic drift.To overcome such challenges, we propose a novel deep network model along with an efficient and effective incremental learning algorithm, which we name as Dynamically Expandable Networks (DEN).", "In a lifelong learning scenario, DEN maximally utilizes the network learned on all previous tasks to efficiently learn to predict for the new task, while dynamically increasing the network capacity by adding in or splitting/duplicating neurons when necessary.", "Our method is applicable to any generic deep networks, including convolutional networks.We validate our incremental deep neural network for lifelong learning on multiple public datasets, on which it achieves similar or better performance than the model that trains a separate network for each task, while using only 11.9%p − 60.3%p of its parameters.", "Further, fine-tuning of the learned network on all tasks obtains even better performance, outperforming the batch model by as much as 0.05%p − 4.8%p.", "Thus, our model can be also used for structure estimation to obtain optimal performance over network capacity even when batch training is possible, which is a more general setup.", "We proposed a novel deep neural network for lifelong learning, Dynamically Expandable Network (DEN).", "DEN performs partial retraining of the network trained on old tasks by exploiting task relatedness, while increasing its capacity when necessary to account for new knowledge required to account for new tasks, to find the optimal 
capacity for itself, while also effectively preventing semantic drift.", "We implement both feedforward and convolutional neural network version of our DEN, and validate them on multiple classification datasets under lifelong learning scenarios, on which they significantly outperform the existing lifelong learning methods, achieving almost the same performance as the network trained in batch while using as little as 11.9%p − 60.3%p of its capacity.", "Further fine-tuning of the models on all tasks results in obtaining models that outperform the batch models, which shows that DEN is useful for network structure estimation as well." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.6440677642822266, 0.1071428507566452, 0.23728813230991364, 0.052631575614213943, 0.13636362552642822, 0.1428571343421936, 0.19999998807907104, 0.21052631735801697, 0.06451612710952759, 0.05405404791235924, 0.10810810327529907, 0.0714285671710968, 0.04081632196903229, 0.2222222238779068, 0.14705881476402283, 0.05714285373687744, 0.0952380895614624, 0.3333333134651184, 0, 0.10526315122842789, 0.04651162400841713, 0.18518517911434174, 0.11999999731779099, 0.1071428507566452, 0.04999999329447746, 0.16949151456356049, 0.17283950746059418, 0.25925925374031067, 0.3055555522441864, 0.13636362552642822, 0.1666666567325592, 0.3529411852359772, 0.1428571343421936, 0.23880596458911896, 0.17391303181648254 ]
Sk7KsfW0-
true
[ "We propose a novel deep network architecture that can dynamically decide its network capacity as it trains on a lifelong learning scenario." ]
[ "This paper fosters the idea that deep learning methods can be sided to classical\n", "visual odometry pipelines to improve their accuracy and to produce uncertainty\n", "models to their estimations.", "We show that the biases inherent to the visual odom-\n", "etry process can be faithfully learnt and compensated for, and that a learning ar-\n", "chitecture associated to a probabilistic loss function can jointly estimate a full\n", "covariance matrix of the residual errors, defining a heteroscedastic error model.\n", "Experiments on autonomous driving image sequences and micro aerial vehicles\n", "camera acquisitions assess the possibility to concurrently improve visual odome-\n", "try and estimate an error associated to its outputs." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3125, 0.1428571343421936, 0, 0, 0.19354838132858276, 0.06896550953388214, 0.13333332538604736, 0.0714285671710968, 0, 0.07407406717538834 ]
SklqvxSFDB
false
[ "This paper discusses different methods of pairing VO with deep learning and proposes a simultaneous prediction of corrections and uncertainty." ]
[ "Building robust online content recommendation systems requires learning com- plex interactions between user preferences and content features.", "The field has evolved rapidly in recent years from traditional multi-arm bandit and collabora- tive filtering techniques, with new methods integrating Deep Learning models that enable to capture non-linear feature interactions.", "Despite progress, the dynamic nature of online recommendations still poses great challenges, such as finding the delicate balance between exploration and exploitation.", "In this paper we provide a novel method, Deep Density Networks (DDN) which deconvolves measurement and data uncertainty and predicts probability densities of CTR, enabling us to perform more efficient exploration of the feature space.", "We show the usefulness of using DDN online in a real world content recommendation system that serves billions of recommendations per day, and present online and offline results to eval- uate the benefit of using DDN.", "In order to navigate the vast amounts of content on the internet, users either rely on active search queries, or on passive content recommendations.", "As the amount of the content on the internet grows, content discovery becomes an increasingly crucial challenge, shaping the way content is consumed by users.", "Taboola's content discovery platform aims to perform \"reverse search\", using computational models to match content to users who are likely to engage with it.", "Taboola's content recommendations are shown in widgets that are usually placed at the bottom of articles (see FIG0 ) in various websites across the internet, and serve billions of recommendation per day, with a user base of hundreds of millions of active users.Traditionally recommender systems have been modeled in a multi-arm bandit setting, in which the goal is to a find a strategy that balances exploitation and exploration in order to maximize the long term reward.", "Exploitation regimes try to maximize the immediate reward given the available information, while exploration seeks to extract new information from the feature space, subsequently increasing the performance of the exploitation module.One of the simplest approaches to deal with multi-arm bandit problems is the -greedy algorithm, in which with probability a random recommendation is chosen, and with probability 1 − the recommendation with the highest predicted reward is chosen.", "Upper Confidence Bound -UCB- BID0 ) and Thompson sampling techniques (Thompson (1933) ) use prediction uncertainty estimations in order to perform more efficient exploration of the feature space, either by explicitly adding the uncertainty to the estimation (UCB) or by sampling from the posterior distribution (Thompson sampling).", "Estimating prediction uncertainty is crucial in order to utilize these methods.", "Online recommendations are noisy and probabilistic by nature, with measured values being only a proxy to the true underlying distribution, leading to additional interesting challenges when predicting uncertainty estimations.In this paper we present DDN, a unified deep neural network model which incorporates both measurement and data uncertainty, having the ability to be trained end-to-end while facilitating the exploitation/exploration selection strategy.", "We introduce a mathematical formulation to deconvolve measurement noise, and to provide data uncertainty predictions that can be utilized to improve exploration methods.", "Finally, we demonstrate the benefit of using DDN in a real 
world content recommendation system.", "We have introduced Deep Density Network (DDN), a unified DNN model that is able to predict probability distributions and to deconvolve measurement and data uncertainties.", "DDN is able to model non-linearities and capture complex target-context relations, incorporating higher level representations of data sources such as contextual and textual input.", "We have shown the added value of using DNN in a multi-arm bandit setting, yielding an adaptive selection strategy that balances exploitation and exploration and maximizes the long term reward.", "We presented results validating DDN's improved noise handling capabilities, leading to 5.3% improvement on a noisy dataset.Furthermore, we observed that DDN outperformed both REG and MDN models in online experiments, leading to RPM improvements of 2.9% and 1.7% respectively.", "Finally, by employing DDN's data uncertainty estimation and UCB strategy, we improved our exploration strategy, depicting 6.5% increase of targets throughput with only 0.05% RPM decrease." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.11764705181121826, 0.12244897335767746, 0, 0.19607841968536377, 0.21276594698429108, 0.052631575614213943, 0, 0.052631575614213943, 0.15584415197372437, 0.11764705181121826, 0.1071428507566452, 0.20689654350280762, 0.13698630034923553, 0.20512819290161133, 0.1818181723356247, 0.4878048598766327, 0.09756097197532654, 0.21739129722118378, 0.13793103396892548, 0.04444443807005882 ]
ryY4RhkCZ
true
[ "We have introduced Deep Density Network, a unified DNN model to estimate uncertainty for exploration/exploitation in recommendation systems." ]
[ "While it is well-documented that climate change accepters and deniers have become increasingly polarized in the United States over time, there has been no large-scale examination of whether these individuals are prone to changing their opinions as a result of natural external occurrences.", "On the sub-population of Twitter users, we examine whether climate change sentiment changes in response to five separate natural disasters occurring in the U.S. in 2018.", "We begin by showing that tweets can be classified with over 75% accuracy as either accepting or denying climate change when using our methodology to compensate for limited labelled data; results are robust across several machine learning models and yield geographic-level results in line with prior research.", "We then apply RNNs to conduct a cohort-level analysis showing that the 2018 hurricanes yielded a statistically significant increase in average tweet sentiment affirming climate change.", "However, this effect does not hold for the 2018 blizzard and wildfires studied, implying that Twitter users' opinions on climate change are fairly ingrained on this subset of natural disasters.", "In Figure 3 , we see that overall sentiment averages rarely show movement post-event: that is, only Hurricane Florence shows a significant difference in average tweet sentiment pre-and post-event at the 1% level, corresponding to a 0.12 point decrease in positive climate change sentiment.", "However, controlling for the same group of users tells a different story: both Hurricane Florence and Hurricane Michael have significant tweet sentiment average differences pre-and post-event at the 1% level.", "Within-cohort, Hurricane Florence sees an increase in positive climate change sentiment by 0.21 points, which is contrary to the overall average change (the latter being likely biased since an influx of climate change deniers are likely to tweet about hurricanes only after the event).", "Hurricane Michael sees an increase in average tweet sentiment of 0.11 points, which reverses the direction of tweets from mostly negative pre-event to mostly positive post-event.", "Likely due to similar bias reasons, the Mendocino wildfires in California see a 0.06 point decrease in overall sentiment post-event, but a 0.09 point increase in within-cohort sentiment.", "Methodologically, we assert that overall averages are not robust results to use in sentiment analyses.We now comment on the two events yielding similar results between overall and within-cohort comparisons.", "Most tweets regarding the Bomb Cyclone have negative sentiment, though sentiment increases by 0.02 and 0.04 points post-event for overall and within-cohort averages, respectively.", "Meanwhile, the California Camp Fires yield a 0.11 and 0.27 point sentiment decline in overall and within-cohort averages, respectively.", "This large difference in sentiment change can be attributed to two factors: first, the number of tweets made regarding wildfires prior to the (usually unexpected) event is quite low, so within-cohort users tend to have more polarized climate change beliefs.", "Second, the root cause of the Camp Fires was quickly linked to PG&E, bolstering claims that climate change had nothing to do with the rapid spread of fire; hence within-cohort users were less vocally positive regarding climate change post-event.There are several caveats in our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model).", "Second, our results are 
constrained to Twitter users, who are known to be more negative than the general U.S. population BID2 .", "Third, we do not take into account the aggregate effects of continued natural disasters over time.", "Going forward, there is clear demand in discovering whether social networks can indicate environmental metrics in a \"nowcasting\" fashion.", "As climate change becomes more extreme, it remains to be seen what degree of predictive power exists in our current model regarding climate change sentiments with regards to natural disasters." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.27272728085517883, 0.3333333134651184, 0.14492753148078918, 0.3265306055545807, 0.23076923191547394, 0.1875, 0.11538460850715637, 0.25806450843811035, 0.12244897335767746, 0.1666666567325592, 0.19230768084526062, 0.0416666604578495, 0.1395348757505417, 0.2666666507720947, 0.20000000298023224, 0.27272728085517883, 0.09999999403953552, 0.1904761791229248, 0.23529411852359772 ]
B1evmEQg_V
true
[ "We train RNNs on famous Twitter users to determine whether the general Twitter population is more likely to believe in climate change after a natural disaster." ]
[ "We study the control of symmetric linear dynamical systems with unknown dynamics and a hidden state.", "Using a recent spectral filtering technique for concisely representing such systems in a linear basis, we formulate optimal control in this setting as a convex program.", "This approach eliminates the need to solve the non-convex problem of explicit identification of the system and its latent state, and allows for provable optimality guarantees for the control signal.", "We give the first efficient algorithm for finding the optimal control signal with an arbitrary time horizon T, with sample complexity (number of training rollouts) polynomial only in log(T) and other relevant parameters.", "Recent empirical successes of reinforcement learning involve using deep nets to represent the underlying MDP and policy.", "However, we lack any supporting theory, and are far from developing algorithms with provable guarantees for such settings.", "We can make progress by addressing simpler setups, such as those provided by control theory.Control theory concerns the control of dynamical systems, a non-trivial task even if the system is fully specified and provable guarantees are not required.", "This is true even in the simplest setting of a linear dynamical system (LDS) with quadratic costs, since the resulting optimization problems are high-dimensional and sensitive to noise.The task of controlling an unknown linear system is significantly more complex, often giving rise to non-convex and high-dimensional optimization problems.", "The standard practice in the literature is to first solve the non-convex problem of system identification-that is, recover a model that accurately describes the system-and then apply standard robust control methods.", "The non-convex problem of system identification is the main reason that we have essentially no provable algorithms for controlling even the simplest linear dynamical systems with unknown latent states.In this paper, we take the first step towards a provably efficient control algorithm for linear dynamical systems.", "Despite the highly non-convex and high-dimensional formulation of the problem, we can efficiently find the optimal control signal in polynomial time with optimal sample complexity.", "Our method is based on wave-filtering, a recent spectral representation technique for symmetric LDSs BID7 ).", "We have presented an algorithm for finding the optimal control inputs for an unknown symmetric linear dynamical system, which requires querying the system only a polylogarithmic number of times in the number of such inputs , while running in polynomial time.", "Deviating significantly from previous approaches, we circumvent the non-convex optimization problem of system identification by a new learned representation of the system.", "We see this as a first step towards provable, efficient methods for the traditionally non-convex realm of control and reinforcement learning." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.36734694242477417, 0.3928571343421936, 0.17543859779834747, 0.40625, 0.07999999821186066, 0.07843136787414551, 0.1764705777168274, 0.2222222238779068, 0.2295081913471222, 0.3561643660068512, 0.3636363446712494, 0.12244897335767746, 0.39393937587738037, 0.19230768084526062, 0.2222222238779068 ]
BygpQlbA-
true
[ "Using a novel representation of symmetric linear dynamical systems with a latent state, we formulate optimal control as a convex program, giving the first polynomial-time algorithm that solves optimal control with sample complexity only polylogarithmic in the time horizon." ]
[ "Generative Adversarial Networks (GANs) have become the gold standard when it comes to learning generative models for high-dimensional distributions.", "Since their advent, numerous variations of GANs have been introduced in the literature, primarily focusing on utilization of novel loss functions, optimization/regularization strategies and network architectures.", "In this paper, we turn our attention to the generator and investigate the use of high-order polynomials as an alternative class of universal function approximators.", "Concretely, we propose PolyGAN, where we model the data generator by means of a high-order polynomial whose unknown parameters are naturally represented by high-order tensors.", "We introduce two tensor decompositions that significantly reduce the number of parameters and show how they can be efficiently implemented by hierarchical neural networks that only employ linear/convolutional blocks.", "We exhibit for the first time that by using our approach a GAN generator can approximate the data distribution without using any activation functions.", "Thorough experimental evaluation on both synthetic and real data (images and 3D point clouds) demonstrates the merits of PolyGAN against the state of the art.", "Generative Adversarial Networks (GANs) are currently one of the most popular lines of research in machine learning.", "Research on GANs mainly revolves around:", "(a) how to achieve faster and/or more accurate convergence (e.g., by studying different loss functions (Nowozin et al., 2016; Arjovsky & Bottou, 2017; Mao et al., 2017) or regularization schemes (Odena et al., 2018; Miyato et al., 2018; Gulrajani et al., 2017) ), and", "(b) how to design different hierarchical neural networks architectures composed of linear and non-linear operators that can effectively model high-dimensional distributions (e.g., by progressively training large networks (Karras et al., 2018) or by utilizing deep ResNet type of networks as generators (Brock et al., 2019) ).", "Even though hierarchical deep networks are efficient universal approximators for the class of continuous compositional functions (Mhaskar et al., 2016) , the non-linear activation functions pose difficulties in their theoretical analysis, understanding, and interpretation.", "For instance, as illustrated in Arora et al. (2019) , element-wise non-linearities pose a challenge on proving convergence, especially in an adversarial learning setting (Ji & Liang, 2018) .", "Consequently, several methods, e.g., Saxe et al. 
(2014) ; Hardt & Ma (2017) ; Laurent & Brecht (2018) ; Lampinen & Ganguli (2019) , focus only on linear models (with respect to the weights) in order to be able to rigorously analyze the neural network dynamics, the residual design principle, local extrema and generalization error, respectively.", "Moreover, as stated in the recent in-depth comparison of many different GAN training schemes (Lucic et al., 2018) , the improvements may mainly arise from a higher computational budget and tuning and not from fundamental architectural choices.", "In this paper, we depart from the choice of hierarchical neural networks that involve activation functions and investigate for the first time in the literature of GANs the use of high-order polynomials as an alternative class of universal function approximators for data generator functions.", "This choice is motivated by the strong evidence provided by the Stone-Weierstrass theorem (Stone, 1948) , which states that every continuous function defined on a closed interval can be uniformly approximated as closely as desired by a polynomial function.", "Hence, we propose to model the vector-valued generator function Gpzq : R d Ñ R o by a high-order multivariate polynomial of the latent vector z, whose unknown parameters are naturally represented by high-order tensors.", "However, the number of parameters required to accommodate all higher-order correlations of the latent vector explodes with the desired order of the polynomial and the dimension of the latent vector.", "To alleviate this issue and at the same time capture interactions of parameters across different orders of approximation in a hierarchical manner, we cast polynomial parameters estimation as a coupled tensor factorization (Papalexakis et al., 2016; Sidiropoulos et al., 2017) that jointly factorizes all the polynomial parameters tensors.", "To this end, we introduce two specifically tailored coupled canonical polyadic (CP)-type of decompositions with shared factors.", "The proposed coupled decompositions of the parameters tensors result into two different hierarchical structures (i.e., architectures of neural network decoders) that do not involve any activation function, providing an intuitive way of generating samples with an increasing level of detail.", "This is pictorially shown in Figure 1 .", "The result of the proposed PolyGAN using a fourth-order polynomial approximator is shown in Figure 1", "(a), while Figure 1", "(b) shows the corresponding generation when removing the fourth-order power from the generator.", "Our contributions are summarized as follows:", "• We model the data generator with a high-order polynomial.", "Core to our approach is to cast polynomial parameters estimation as a coupled tensor factorization with shared factors.", "To this end, we develop two coupled tensor decompositions and demonstrate how those two derivations result in different neural network architectures involving only linear (e.g., convolution) units.", "This approach reveals links between high-order polynomials, coupled tensor decompositions and network architectures.", "• We experimentally verify that the resulting networks can learn to approximate functions with analytic expressions.", "• We show how the proposed networks can be used with linear blocks, i.e., without utilizing activation functions, to synthesize high-order intricate signals, such as images.", "• We demonstrate that by incorporating activation functions to the derived polynomial-based architectures, PolyGAN improves upon three 
different GAN architectures, namely DC-GAN (Radford et al., 2015) , SNGAN (Miyato et al., 2018) and SAGAN (Zhang et al., 2019) .", "(a)", "(b) Figure 1: Generated samples by an instance of the proposed PolyGAN.", "(a) Generated samples using a fourth-order polynomial and", "(b) the corresponding generated samples when removing the terms that correspond to the fourth-order.", "As evidenced, by extending the polynomial terms, PolyGAN generates samples with an increasing level of detail.", "We express data generation as a polynomial expansion task.", "We model the high-order polynomials with tensorial factors.", "We introduce two tailored coupled decompositions and show how the polynomial parameters can be implemented by hierarchical neural networks, e.g. as generators in a GAN setting.", "We exhibit how such polynomial-based generators can be used to synthesize images by utilizing only linear blocks.", "In addition, we empirically demonstrate that our polynomial expansion can be used with non-linear activation functions to improve the performance of standard state-of-the-art architectures.", "Finally, it is worth mentioning that our approach reveals links between high-order polynomials, coupled tensor decompositions and network architectures.", "Algorithm 1: PolyGAN (model 1).", "% Perform the Hadamard product for the n th layer.", "Algorithm 2: PolyGAN (model 2).", "for n=2:N do % Multiply with the current layer weight S^[n] and perform the Hadamard product.", "κ = (S^[n] κ + (B^[n])^T b^[n]) * ((A^[n])^T v); end; x = β + C κ.", "The appendix is organized as:", "• Section B provides the Lemmas and their proofs required for our derivations.", "• Section C generalizes the Coupled CP decomposition for N th order expansion.", "• Section D extends the experiments to 3D manifolds.", "• In Section E, additional experiments on image generation with linear blocks are conducted.", "• Comparisons with popular GAN architectures are conducted in Section F. Specifically, we utilize three popular generator architectures and devise their polynomial equivalent and perform comparisons on image generation.", "We also conduct an ablation study indicating how standard engineering techniques affect the image generation of the polynomial generator.", "• In Section G, a comparison between the two proposed decompositions is conducted on data distributions from the previous Sections." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05882352590560913, 0.09999999403953552, 0.21052631735801697, 0.6486486196517944, 0.1860465109348297, 0.3243243098258972, 0.1666666567325592, 0.12903225421905518, 0, 0.039215683937072754, 0.10526315122842789, 0.08510638028383255, 0.0476190447807312, 0.0312499962747097, 0.12244897335767746, 0.19607843458652496, 0.1666666567325592, 0.43478259444236755, 0.17142856121063232, 0.178571417927742, 0.0624999962747097, 0.11320754140615463, 0, 0.25806450843811035, 0, 0.1538461446762085, 0, 0.6399999856948853, 0.1249999925494194, 0, 0.0714285671710968, 0.12903225421905518, 0.1395348757505417, 0.1249999925494194, 0.2222222238779068, 0.17391303181648254, 0.07407406717538834, 0.25806450843811035, 0.3333333432674408, 0.3478260934352875, 0.2380952388048172, 0.1249999925494194, 0.1538461446762085, 0.05882352590560913, 0, 0.0833333283662796, 0, 0.0624999962747097, 0, 0, 0.0714285671710968, 0.0714285671710968, 0.0833333283662796, 0, 0.09756097197532654, 0.3030303120613098, 0.1764705777168274 ]
Bye30kSYDH
true
[ "We model the data generator (in GAN) by means of a high-order polynomial represented by high-order tensors." ]
[ "Deep neural networks trained on large supervised datasets have led to impressive results in recent years.", "However, since well-annotated datasets can be prohibitively expensive and time-consuming to collect, recent work has explored the use of larger but noisy datasets that can be more easily obtained.", "In this paper, we investigate the behavior of deep neural networks on training sets with massively noisy labels.", "We show on multiple datasets such as MINST, CIFAR-10 and ImageNet that successful learning is possible even with an essentially arbitrary amount of noise.", "For example, on MNIST we find that accuracy of above 90 percent is still attainable even when the dataset has been diluted with 100 noisy examples for each clean example.", "Such behavior holds across multiple patterns of label noise, even when noisy labels are biased towards confusing classes.", "Further, we show how the required dataset size for successful training increases with higher label noise.", "Finally, we present simple actionable techniques for improving learning in the regime of high label noise.", "Deep learning has proven to be powerful for a wide range of problems, from image classification to machine translation.", "Typically, deep neural networks are trained using supervised learning on large, carefully annotated datasets.", "However, the need for such datasets restricts the space of problems that can be addressed.", "This has led to a proliferation of deep learning results on the same tasks using the same well-known datasets.", "Carefully annotated data is difficult to obtain, especially for classification tasks with large numbers of classes (requiring extensive annotation) or with fine-grained classes (requiring skilled annotation).", "Thus, annotation can be expensive and, for tasks requiring expert knowledge, may simply be unattainable at scale.To address this limitation, other training paradigms have been investigated to alleviate the need for expensive annotations, such as unsupervised learning BID11 , self-supervised learning BID16 BID23 and learning from noisy annotations (Joulin et al., 2016; BID15 BID22 .", "Very large datasets (e.g., BID7 ; BID19 ) can often be attained, for example from web sources, with partial or unreliable annotation.", "This can allow neural networks to be trained on a much wider variety of tasks or classes and with less manual effort.", "The good performance obtained from these large noisy datasets indicates that deep learning approaches can tolerate modest amounts of noise in the training set.In this work, we take this trend to an extreme, and consider the performance of deep neural networks under extremely low label reliability, only slightly above chance.", "We envision a future in which arbitrarily large amounts of data will easily be obtained, but in which labels come without any guarantee of validity and may merely be biased towards the correct distribution.The key takeaways from this paper may be summarized as follows:• Deep neural networks are able to learn from data that has been diluted by an arbitrary amount of noise.", "We demonstrate that standard deep neural networks still perform well even on training sets in which label accuracy is as low as 1 percent above chance.", "On MNIST, for example, performance still exceeds 90 percent even with this level of label noise (see Figure 1 ).", "This behavior holds, to varying extents, across datasets as well as patterns of label noise, including when noisy labels are biased towards confused classes.•", "A 
sufficiently large training set can accommodate a wide range of noise levels. We", "find that the minimum dataset size required for effective training increases with the noise level. A", "large enough training set can accommodate a wide range of noise levels. Increasing", "the dataset size further, however, does not appreciably increase accuracy.• Adjusting", "batch size and learning rate can allow conventional neural networks to operate in the regime of very high label noise. We find that", "label noise reduces the effective batch size, as noisy labels roughly cancel out and only a small learning signal remains. We show that", "dataset noise can be partly compensated for by larger batch sizes and by scaling the learning rate with the effective batch size.", "In this paper, we have considered the behavior of deep neural networks on training sets with very noisy labels.", "In a series of experiments, we have demonstrated that learning is robust to an essentially arbitrary amount of label noise, provided that the number of clean labels is sufficiently large.", "We have further shown that the threshold required for clean labels increases as the noise level does.", "Finally, we have observed that noisy labels reduce the effective batch size, an effect that can be mitigated by larger batch sizes and downscaling the learning rate.It is worthy of note that although deep networks appear robust to even high degrees of label noise, clean labels still always perform better than noisy labels, given the same quantity of training data.", "Further, one still requires expert-vetted test sets for evaluation.", "Lastly, it is important to reiterate that our studies focus on non-adversarial noise.Our work suggests numerous directions for future investigation.", "For example, we are interested in how label-cleaning and semi-supervised methods affect the performance of networks in a high-noise regime.", "Are such approaches able to lower the threshold for training set size?", "Finally, it remains to translate the results we present into an actionable trade-off between data annotation and acquisition costs, which can be utilized in real world training pipelines for deep networks on massive noisy data." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1621621549129486, 0.1702127605676651, 0.20512819290161133, 0.35555556416511536, 0.19607841968536377, 0.10256409645080566, 0.10810810327529907, 0.10810810327529907, 0.20512819290161133, 0.22857142984867096, 0.11428570747375488, 0.21052631735801697, 0.1395348757505417, 0.0845070406794548, 0.04444443807005882, 0.1860465109348297, 0.2686567008495331, 0.5, 0.21739129722118378, 0.09756097197532654, 0.13333332538604736, 0.17142856121063232, 0.1111111044883728, 0.11764705181121826, 0, 0.3181818127632141, 0.1818181723356247, 0.09999999403953552, 0.19999998807907104, 0.25531914830207825, 0.1621621549129486, 0.21917808055877686, 0, 0.1428571343421936, 0.14999999105930328, 0.12121211737394333, 0.1818181723356247 ]
B1p461b0W
true
[ "We show that deep neural networks are able to learn from data that has been diluted by an arbitrary amount of noise." ]
[ "In this paper, we propose to extend the recently introduced model-agnostic meta-learning algorithm (MAML, Finn et al., 2017) for low resource neural machine translation (NMT).", "We frame low-resource translation as a meta-learning problem, and we learn to adapt to low-resource languages based on multilingual high-resource language tasks.", "We use the universal lexical representation (Gu et al., 2018b) to overcome the input-output mismatch across different languages.", "We evaluate the proposed meta-learning strategy using eighteen European languages (Bg, Cs, Da, De, El, Es, Et, Fr, Hu, It, Lt, Nl, Pl, Pt, Sk, Sl, Sv and Ru) as source tasks and five diverse languages (Ro, Lv, Fi, Tr, and Ko) as target tasks.", "We show that the proposed approach significantly outperforms the multilingual, transfer learning based approach (Zoph et al., 2016) and enables us to train a competitive NMT system with only a fraction of training examples.", "For instance, the proposed approach can achieve as high as 22.04 BLEU on Romanian-English WMT’16 by seeing only 16,000 translated words (\u0018~600 parallel sentences).", "Despite the massive success brought by neural machine translation (NMT, BID36 BID4 BID37 , it has been noticed that the vanilla NMT often lags behind conventional machine translation systems, such as statistical phrase-based translation systems (PBMT, BID24 , for low-resource language pairs (see, e.g., BID23 .", "In the past few years, various approaches have been proposed to address this issue.", "The first attempts at tackling this problem exploited the availability of monolingual corpora BID17 BID32 BID40 .", "It was later followed by approaches based on multilingual translation, in which the goal was to exploit knowledge from high-resource language pairs by training a single NMT system on a mix of high-resource and low-resource language pairs (Firat et al., 2016a,b; BID27 BID21 BID19 .", "Its variant, transfer learning, was also proposed by BID42 , in which an NMT system is pretrained on a high-resource language pair before being finetuned on a target low-resource language pair.In this paper, we follow up on these latest approaches based on multilingual NMT and propose a meta-learning algorithm for low-resource neural machine translation.", "We start by arguing that the recently proposed model-agnostic meta-learning algorithm (MAML, Finn et al., 2017) could be applied to low-resource machine translation by viewing language pairs as separate tasks.", "This view enables us to use MAML to find the initialization of model parameters that facilitate fast adaptation for a new language pair with a minimal amount of training examples ( §3).", "Furthermore, the vanilla MAML however cannot handle tasks with mismatched input and output.", "We overcome this limitation by incorporating the universal lexical representation BID15 and adapting it for the meta-learning scenario ( §3.3).We", "extensively evaluate the effectiveness and generalizing ability of the proposed meta-learning algorithm on low-resource neural machine translation. We", "utilize 17 languages from Europarl and Russian from WMT as the source tasks and test the meta-learned parameter initialization against five target languages (Ro, Lv, Fi, Tr and Ko), in all cases translating to English. 
Our", "experiments using only up to 160k tokens in each of the target task reveal that the proposed meta-learning approach outperforms the multilingual translation approach across all the target language pairs, and the gap grows as the number of training examples 2 Background Neural Machine Translation (NMT) Given a source sentence X = {x 1 , ..., x T }, a neural machine translation model factors the distribution over possible output sentences Y = {y 1 , ..., y T } into a chain of conditional probabilities with a leftto-right causal structure: DISPLAYFORM0 where special tokens y 0 ( bos ) and y T +1 ( eos ) are used to represent the beginning and the end of a target sentence. These", "conditional probabilities are parameterized using a neural network. Typically", ", an encoder-decoder architecture BID36 BID9 BID4 with a RNN-based decoder is used. More recently", ", architectures without any recurrent structures BID13 BID37 have been proposed and shown to speedup training while achieving state-of-the-art performance.Low Resource Translation NMT is known to easily over-fit and result in an inferior performance when the training data is limited BID23 . In general,", "there are two ways for handling the problem of low resource translation:(1) utilizing the resource of unlabeled monolingual data, and (2) sharing the knowledge between low-and high-resource language pairs. Many research", "efforts have been spent on incorporating the monolingual corpora into machine translation, such as multi-task learning BID17 Zong, 2016), back-translation (Sennrich et al., 2015) , dual learning BID20 and unsupervised machine translation with monolingual corpora only for both sides BID3 BID26 . For the second", "approach, prior researches have worked on methods to exploit the knowledge of auxiliary translations, or even auxiliary tasks. For instance,", "BID8 ; BID28 investigate the use of a pivot to build a translation path between two languages even without any directed resource. The pivot can", "be a third language or even an image in multimodal domains. When pivots are", "not easy to obtain, BID11 ; BID27 ; BID21 have shown that the structure of NMT is suitable for multilingual machine translation. BID15 also showed", "that such a multilingual NMT system could improve the performance of low resource translation by using a universal lexical representation to share embedding information across languages. All the previous", "work for multilingual NMT assume the joint training of multiple high-resource languages naturally results in a universal space (for both the input representation and the model) which, however, is not necessarily true, especially for very low resource cases.Meta Learning In the machine learning community, meta-learning, or learning-to-learn, has recently received interests. Meta-learning tries", "to solve the problem of \"fast adaptation on new training data.\" One of the most successful", "applications of meta-learning has been on few-shot (or oneshot) learning BID25 , where a neural network is trained to readily learn to classify inputs based on only one or a few training examples. There are two categories of", "meta-learning:1. learning a meta-policy for", "updating model parameters (see, e.g., BID1 BID18 BID30 2. learning a good parameter", "initialization for fast adaptation (see, e.g., BID10 BID38 BID35 .In this paper, we propose", "to use a meta-learning algorithm for low-resource neural machine translation based on the second category. 
More specifically, we extend", "the idea of model-agnostic metalearning (MAML, Finn et al., 2017) in the multilingual scenario.", "In this paper, we proposed a meta-learning algorithm for low-resource neural machine translation that exploits the availability of high-resource languages pairs.", "We based the proposed algorithm on the recently proposed model-agnostic metalearning and adapted it to work with multiple languages that do not share a common vocabulary using the technique of universal lexcal representation, resulting in MetaNMT.", "Our extensive evaluation, using 18 high-resource source tasks and 5 low-resource target tasks, has shown that the proposed MetaNMT significantly outperforms the existing approach of multilingual, transfer learning in low-resource neural machine translation across all the language pairs considered.", "The proposed approach opens new opportunities for neural machine translation.", "First, it is a principled framework for incorporating various extra sources of data, such as source-and targetside monolingual corpora.", "Second, it is a generic framework that can easily accommodate existing and future neural machine translation systems." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.35555556416511536, 0.4615384638309479, 0.05405404791235924, 0.03389830142259598, 0.15686273574829102, 0.1395348757505417, 0.23333333432674408, 0.060606054961681366, 0, 0.17543859779834747, 0.3384615480899811, 0.2857142686843872, 0.25, 0, 0.10256409645080566, 0.3333333134651184, 0.03999999538064003, 0.17142856121063232, 0.1428571343421936, 0.05882352590560913, 0.03448275476694107, 0.08695651590824127, 0.13793103396892548, 0.10526315122842789, 0.1904761791229248, 0.12121211737394333, 0.23255813121795654, 0.17391303181648254, 0.08695651590824127, 0.1764705777168274, 0.22641508281230927, 0.1666666567325592, 0.05882352590560913, 0.17142856121063232, 0.5263158082962036, 0, 0.44999998807907104, 0.1538461446762085, 0.2545454502105713, 0.41379308700561523, 0.10526315122842789, 0.3333333134651184 ]
S1g5ylbm1Q
true
[ "we propose a meta-learning approach for low-resource neural machine translation that can rapidly learn to translate on a new language" ]
[ "This work presents a method for active anomaly detection which can be built upon existing deep learning solutions for unsupervised anomaly detection.", "We show that a prior needs to be assumed on what the anomalies are, in order to have performance guarantees in unsupervised anomaly detection.", "We argue that active anomaly detection has, in practice, the same cost of unsupervised anomaly detection but with the possibility of much better results.", "To solve this problem, we present a new layer that can be attached to any deep learning model designed for unsupervised anomaly detection to transform it into an active method, presenting results on both synthetic and real anomaly detection datasets.", "Anomaly detection (a.k.a. outlier detection) (Hodge & Austin, 2004; Chandola et al., 2009; Aggarwal, 2015) aims to discover rare instances that do not conform to the patterns of majority.", "From a business perspective, though, we are not only interested in finding rare instances, but \"usefull anomalies\".", "This problem has been amply studied recently (Liu et al., 2017; Li et al., 2017; Zong et al., 2018; Maurus & Plant, 2017; Zheng et al., 2017) , with solutions inspired by extreme value theory (Siffer et al., 2017) , robust statistics (Zhou & Paffenroth, 2017) and graph theory (Perozzi et al., 2014) .Unsupervised", "anomaly detection is a sub-area of outlier detection, being frequently applied since label acquisition is very expensive and time consuming. It is a specially", "hard task, where there is usually no information on what these rare instances are and most works use models with implicit priors or heuristics to discover these anomalies, providing an anomaly score s(x) for each instance in a dataset. Active anomaly detection", "is a powerful alternative approach to this problem, which has presented good results in recent works such as (Veeramachaneni et al., 2016; Das et al., 2016; 2017) .In this work, we first show", "that unsupervised anomaly detection requires priors to be assumed on the anomaly distribution; we then argue in favor of approaching it with active anomaly detection, an important, but under-explored approach (Section 2). We propose a new layer, called", "here Universal Anomaly Inference (UAI), which can be applied on top of any unsupervised anomaly detection model based on deep learning to transform it into an active model (Section 3). This layer uses the strongest", "assets of deep anomaly detection models, i.e. its learned latent representations (l) and anomaly score (s), to train a classifier on the few already labeled instances. An example of such an application", "can be seen in FIG0 , where an UAI layer is built upon a Deanoising AutoEncoder (DAE).We then present extensive experiments", ", analyzing the performance of our systems vs unsupervised, semi-supervised and active ones under similar budgets in both synthetic and real data, showing our algorithm improves state of the art results in several datasets, with no hyperparameter tuning (Section 4). Finally, we visualize our models learned", "latent representations, comparing them to unsupervised models' ones and analyze our model's performance for different numbers of labels (Appendix C). Grubbs (1969) defines an outlying observation", ", or outlier, as one that appears to deviate markedly from other members of the sample in which it occurs. 
Hawkins (1980) states that an outlier is an", "observation that deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism. While Chandola et al. (2009) says that normal", "data instances occur in high probability regions of a stochastic model, while anomalies occur in the low probability ones. Following these definitions, specially the one", "from (Hawkins, 1980) , we assume there is a probability density function from which our 'normal' data instances are generated: X normal ∼ p normal (x) = p (x|y = 0), where x is an instance's available information 1 and y is a label saying if the point is anomalous or not. There is also a different probability density", "function from which anomalous data instances are sampled: X anom ∼ p anom (x) = p (x|y = 1).", "We proposed here a new architecture, Universal Anomaly Inference (UAI), which can be applied on top of any deep learning based anomaly detection architecture.", "We show that, even on top of very simple architectures, like a DAE, UaiNets can produce similar/better results to state-of-the-art unsupervised/semi-supervised anomaly detection methods.", "We also give both theoretical and practical arguments motivating active anomaly detection, arguing that, in most practical settings, there would be no detriment to using this instead of a fully unsupervised approach.We further want to make clear that we are not stating our method is better than our semi-supervised baselines (DAGMM, DCN, DSEBM-e).", "Our contributions are orthogonal to theirs.", "We propose a new approach to this hard problem which can be built on top of them, this being our main contribution in this work.", "To the best of our knowledge, this is the first work which applies deep learning to active anomaly detection.", "We use the strongest points of these deep learning algorithms (their learned representations and anomaly scores) to build an active algorithm, presenting an end-to-end architecture which learns representations by leveraging both the full dataset and the already labeled instances.Important future directions for this work are using the UAI layers confidence in its output to dynamically choose between either directly using its scores, or using the underlying unsupervised model's anomaly score to choose which instances to audit next.", "Another future direction would be testing new architectures for UAI layers, in this work we restricted all our analysis to simple logistic regression.", "A third important future work would be analyzing the robustness of UaiNets to mistakes being made by the labeling experts.", "Finally, making this model more interpretable, so that auditors could focus on a few \"important\" features when labeling anomalous instances, could increase labeling speed and make their work easier." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.4888888895511627, 0.3333333432674408, 0.260869562625885, 0.7301587462425232, 0.14035087823867798, 0.04651162400841713, 0, 0.12765957415103912, 0.1818181723356247, 0.0714285671710968, 0.3870967626571655, 0.5423728823661804, 0.21052631735801697, 0.2916666567325592, 0.030303025618195534, 0.1538461446762085, 0.1538461446762085, 0.1538461446762085, 0.04255318641662598, 0.054794516414403915, 0, 0.4000000059604645, 0.23999999463558197, 0.2368420958518982, 0.0624999962747097, 0.2448979616165161, 0.27272728085517883, 0.20689654350280762, 0.16326530277729034, 0.13333332538604736, 0.11320754140615463 ]
HJex0o05F7
true
[ "A method for active anomaly detection. We present a new layer that can be attached to any deep learning model designed for unsupervised anomaly detection to transform it into an active method." ]
[ "In this paper, we ask for the main factors that determine a classifier's decision making and uncover such factors by studying latent codes produced by auto-encoding frameworks.", "To deliver an explanation of a classifier's behaviour, we propose a method that provides series of examples highlighting semantic differences between the classifier's decisions.", "We generate these examples through interpolations in latent space.", "We introduce and formalize the notion of a semantic stochastic path, as a suitable stochastic process defined in feature space via latent code interpolations.", "We then introduce the concept of semantic Lagrangians as a way to incorporate the desired classifier's behaviour and find that the solution of the associated variational problem allows for highlighting differences in the classifier decision.\n", "Very importantly, within our framework the classifier is used as a black-box, and only its evaluation is required.", "A considerable drawback of the deep classification paradigm is its inability to provide explanations as to why a particular model arrives at a decision.", "This black-box nature of deep systems is one of the main reasons why practitioners often hesitate to incorporate deep learning solutions in application areas, where legal or regulatory requirements demand decision-making processes to be transparent.", "A state-of-the-art approach to explain misclassification is saliency maps, which can reveal the sensitivity of a classifier to its inputs.", "Recent work (Adebayo et al., 2018) , however, indicates that such methods can be misleading since their results are at times independent of the model, and therefore do not provide explanations for its decisions.", "The failure to correctly provide explanations by some of these methods lies in their sensibility to feature space changes, i.e. 
saliency maps do not leverage higher semantic representations of the data.", "This motivates us to provide explanations that exploit the semantic content of the data and its relationship with the classifier.", "Thus we are concerned with the question: can one find semantic differences which characterize a classifier's decision?", "In this work we propose a formalism that differs from saliency maps.", "Instead of characterizing particular data points, we aim at generating a set of examples which highlight differences in the decision of a black-box model.", "Let us consider the task of image classification and assume a misclassification has taken place.", "Imagine, for example, that a female individual was mistakenly classified as male, or a smiling face was classified as not smiling.", "Our main idea is to articulate explanations for such misclassifications through sets of semantically-connected examples which link the misclassified image with a correctly classified one.", "In other words, starting with the misclassified point, we change its features in a suitable way until we arrive at the correctly classified image.", "Tracking the black-box output probability while changing these features can help articulate the reasons why the misclassification happened in the first place.", "Now, how does one generate such a set of semantically-connected examples?", "Here we propose a solution based on a variational auto-encoder framework.", "We use interpolations in latent space to generate a set of examples in feature space connecting the misclassified and the correctly classified points.", "We then condition the resulting feature-space paths on the black-box classifier's decisions via a user-defined functional.", "Optimizing the latter over the space of paths allows us to find paths which highlight classification differences, e.g. paths along which the classifier's decision changes only once and as fast as possible.", "A basic outline of our approach is given in Fig. 1 .", "In what follows we introduce and formalize the notion of stochastic semantic paths -stochastic processes on feature (data) space created by decoding latent code interpolations.", "We formulate the corresponding path integral formalism which allows for a Lagrangian formulation of the problem, viz. 
how to condition stochastic semantic paths on the output Figure 1: Auto-Encoding Examples Setup: Given a misclassified point x 0 and representatives x −T , x T , we construct suitable interpolations (stochastic processes) by means of an Auto-Encoder.", "Sampling points along the interpolations produces a set of examples highlighting the classifier's decision making.", "probabilities of black-box models, and introduce an example Lagrangian which tracks the classifier's decision along the paths.", "We show the explanatory power of our approach on the MNIST and CelebA datasets.", "In the present work we provide a novel framework to explain black-box classifiers through examples obtained from deep generative models.", "To summarize, our formalism extends the auto-encoder framework by focusing on the interpolation paths in feature space.", "We train the auto-encoder, not only by guaranteeing reconstruction quality, but by imposing conditions on its interpolations.", "These conditions are such that information about the classification decisions of the model B is encoded in the example paths.", "Beyond the specific problem of generating explanatory examples, our work formalizes the notion of a stochastic process induced in feature space by latent code interpolations, as well as quantitative characterization of the interpolation through the semantic Lagrangian's and actions.", "Our methodology is not constrained to a specific Auto-Encoder framework provided that mild regularity conditions are guaranteed for the auto-encoder.", "There was no preprocessing on the 28x28 MNIST images.", "The models were trained with up to 100 epochs with mini-batches of size 32 -we remark that in most cases, however, acceptable convergence occurs much faster, e.g. requiring up to 15 epochs of training.", "Our choice of optimizer is Adam with learning rate α = 10 −3 .", "The weight of the KL term of the VAE is λ kl = 1, the path loss weight is λ p = 10 3 and the edge loss weight is λ e = 10 −1 .", "We estimate the path and edge loss during training by sampling 5 paths, each of those has 20 steps.", "Encoder Architecture", "Both the encoder and decoder used fully convolutional architectures with 3x3 convolutional filters with stride 2.", "Conv k denotes the convolution with k filters, FSConv k the fractional strides convolution with k filters (the first two of them doubling the resolution, the third one keeping it constant), BN denotes batch normalization, and as above ReLU the rectified linear units, FC k the fully connected layer to R k .", "The pre-processing of the CelebA images was done by first taking a 140x140 center crop and then resizing the image to 64x64.", "The models are trained with up to 100 epochs and with mini-batches of size 128.", "Our choice of optimizer is Adam with learning rate α = 10 −3 .", "The weight of the KL term of the VAE is λ kl = 0.5, the path loss weight is λ p = 0.5 and the edge loss weight is λ e = 10 − 3.", "We estimate the path and edge loss during training by sampling 10 paths, each of those has 10 steps.", "Encoder Architecture", "Decoder Architecture", "Both the encoder and decoder used fully convolutional architectures with 3x3 convolutional filters with stride 2.", "Conv k denotes the convolution with k filters, FSConv k the fractional strides convolution with k filters (the first two of them doubling the resolution, the third one keeping it constant), BN denotes batch normalization, and as above ReLU the rectified linear units, FC k the fully connected layer to R k .", "C 
FURTHER RESULTS", "Interpolation between 2 and 7.", "It is seen that the Path-VAE interpolation optimizes both probabilities (P(2) and P(7)) according to the chosen Lagrangian, in this case the minimum hesitant L_1.", "Briefly put, the construction we utilize makes use of the well-known notion of consistent measures, which are finite-dimensional projections that enjoy certain restriction compatibility; afterwards, we show existence by employing the central extension result of Kolmogorov-Daniell." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1111111044883728, 0.1599999964237213, 0.3684210479259491, 0.3529411852359772, 0.2666666507720947, 0.17391303181648254, 0.19607841968536377, 0.16393442451953888, 0.2916666567325592, 0.0634920597076416, 0.23728813230991364, 0.25531914830207825, 0.1304347813129425, 0.04878048226237297, 0.23999999463558197, 0.13636362552642822, 0.04444443807005882, 0.25925925374031067, 0.15686273574829102, 0.0833333283662796, 0.14999999105930328, 0.10256409645080566, 0.44897958636283875, 0.22727271914482117, 0.178571417927742, 0.14999999105930328, 0.18518517911434174, 0.1794871687889099, 0.23255813121795654, 0.13333332538604736, 0.1428571343421936, 0.20408162474632263, 0.13333332538604736, 0.13333332538604736, 0.21276594698429108, 0.19354838132858276, 0.16326530277729034, 0.052631575614213943, 0.16949151456356049, 0.1428571343421936, 0.19999998807907104, 0.1666666567325592, 0.1395348757505417, 0.11764705181121826, 0.19999998807907104, 0.1860465109348297, 0.1428571343421936, 0.19230768084526062, 0.1702127605676651, 0.1395348757505417, 0.11764705181121826, 0, 0, 0.1111111044883728, 0.06666666269302368 ]
rJxs5p4twr
true
[ "We generate examples to explain a classifier desicion via interpolations in latent space. The variational auto encoder cost is extended with a functional of the classifier over the generated example path in data space." ]
[ "The soundness and optimality of a plan depends on the correctness of the domain model.", "In real-world applications, specifying complete domain models is difficult as the interactions between the agent and its environment can be quite complex.", "We propose a framework to learn a PPDDL representation of the model incrementally over multiple planning problems using only experiences from the current planning problem, which suits non-stationary environments.", "We introduce the novel concept of reliability as an intrinsic motivation for reinforcement learning, and as a means of learning from failure to prevent repeated instances of similar failures.", "Our motivation is to improve both learning efficiency and goal-directedness.", "We evaluate our work with experimental results for three planning domains.", "Planning requires as input a model which describes the dynamics of a domain.", "While domain models are normally hand-coded by human experts, complex dynamics typical of real-world applications can be difficult to capture in this way.", "This is known as the knowledge engineering problem BID3 .", "One solution is to learn the model from data which is then used to synthesize a plan or policy.", "In this work, we are interested in applications where the training data has to be acquired by acting or executing an action.", "However, training data acquired in a planning problem could be insufficient to infer a complete model.", "While this is mitigated by including past training data from previous planning problems, this would be ill-suited for nonstationary domains where distributions of stochastic dynamics shift over time.", "Furthermore, the computation time increases with the size of the training data.Following these observations, we present an incremental learning model (ILM) which learns action models incrementally over planning problems, under the framework of reinforcement learning.", "PPDDL, a planning language modelling probabilistic planning problems BID20 ) (see Figure", "1) , is used for planning, and a rules-based representation (see FIG0 ) is used for the learning process.", "A parser translates between these two representations.", "Action models that were learned previously are provided to subsequent planning problems and are improved upon acquiring new training data; past training data are not used.We denote the models provided as prior action models.", "These could also be hand-coded, incomplete models serving as prior knowledge.", "Using prior knowledge has two advantages: (1) it biases the learning towards the prior action models, and (2) it reduces the amount of exploration required.While the learning progress cannot be determined without the true action models, we can estimate it empirically based on the results of learning and acting.", "This empirical estimate, or reliability, is used to guide the search in the space of possible models during learning and as an intrinsic motivation in reinforcement learning.", "When every action is sufficiently reliable, we instead exploit with Gourmand, a planner that solves finite-horizon Markov Decision Processes (MDP) problems online BID9 .Another", "major contribution of our work is its ability to learn from failure. Actions", "fail to be executed if their preconditions are not satisfied in the current state. This is", "common when the model is incorrect. Failed", "executions can have dire consequences in the real-world or cause irreversible changes such that goal states cannot be reached. 
ILM records", "failed executions and prevents any further attempts that would lead to similar failure. This reduces", "the number of failed executions and increases the efficiency of exploration.The rest of the paper is organized as follows. First, we review", "related work and then present the necessary background. Next, we provide", "details of ILM. Lastly, we evaluate", "ILM in three planning domains and discuss the significance of various algorithmic features introduced in this paper.", "We presented a domain-independent framework, ILM, for incremental learning over multiple planning problems of a domain without the use of past training data.", "We introduced a new measure, reliability, which serves as an empirical estimate of the learning progress and influences the processes of learning and planning.", "The relational counts are weighted with reliability to reduce the amount of exploration required for reliable action models.", "We also extended an existing rules learner to consider prior knowledge in the form of incomplete action models.", "ILM learns from failure by checking if an action is in a list of state-action pairs which represents actions that have failed to execute.", "We evaluated ILM on three benchmark domains.", "Experimental results showed that variational distances of learned action models decreased over each subsequent round.", "Learning from failure greatly reduces the number of failed executions leading to improved correctness and goal-directedness.", "For complex domains, more training data is required to learn action models.", "Using past training data would not work well for non-stationary domains and also increases the computation time for learning.", "The first issue could be resolved by learning distributions from the current training data only.", "The second issue could be resolved by maintaining a fixed size of training data by replacing older experiences while maximizing the exposure, or variability, of the training data.", "These will be explored in the future." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11764705181121826, 0.0952380895614624, 0.4680851101875305, 0.25531914830207825, 0.12903225421905518, 0.0624999962747097, 0.12121211737394333, 0.13636362552642822, 0.06666666269302368, 0.15789473056793213, 0.1860465109348297, 0.1111111044883728, 0.1249999925494194, 0.4615384638309479, 0.1249999925494194, 0.10810810327529907, 0, 0.23999999463558197, 0.0624999962747097, 0.14035087823867798, 0.31111109256744385, 0.08888888359069824, 0.1764705777168274, 0.10810810327529907, 0.0714285671710968, 0.0476190410554409, 0.0555555522441864, 0.09999999403953552, 0.0624999962747097, 0.07407406717538834, 0.1621621549129486, 0.3333333134651184, 0.24390242993831635, 0.25641024112701416, 0.307692289352417, 0.17777776718139648, 0, 0.2222222238779068, 0.1621621549129486, 0.24242423474788666, 0.10256409645080566, 0.1111111044883728, 0.09090908616781235, 0.0714285671710968 ]
B1eZxbU9DE
true
[ "Introduce an approach to allow agents to learn PPDDL action models incrementally over multiple planning problems under the framework of reinforcement learning." ]
[ "The field of Deep Reinforcement Learning (DRL) has recently seen a surge in the popularity of maximum entropy reinforcement learning algorithms. ", "Their popularity stems from the intuitive interpretation of the maximum entropy objective and their superior sample efficiency on standard benchmarks.", "In this paper, we seek to understand the primary contribution of the entropy term to the performance of maximum entropy algorithms.", "For the Mujoco benchmark, we demonstrate that the entropy term in Soft Actor Critic (SAC) principally addresses the bounded nature of the action spaces.", "With this insight, we propose a simple normalization scheme which allows a streamlined algorithm without entropy maximization match the performance of SAC.", "Our experimental results demonstrate a need to revisit the benefits of entropy regularization in DRL.", "We also propose a simple non-uniform sampling method for selecting transitions from the replay buffer during training. ", "We further show that the streamlined algorithm with the simple non-uniform sampling scheme outperforms SAC and achieves state-of-the-art performance on challenging continuous control tasks.", "Off-policy Deep Reinforcement Learning (RL) algorithms aim to improve sample efficiency by reusing past experience.", "Recently a number of new off-policy Deep Reinforcement Learning algorithms have been proposed for control tasks with continuous state and action spaces, including Deep Deterministic Policy Gradient (DDPG) and Twin Delayed DDPG (TD3) (Lillicrap et al., 2015; Fujimoto et al., 2018) .", "TD3, which introduced clipped double-Q learning, delayed policy updates and target policy smoothing, has been shown to be significantly more sample efficient than popular on-policy methods for a wide range of Mujoco benchmarks.", "The field of Deep Reinforcement Learning (DRL) has also recently seen a surge in the popularity of maximum entropy RL algorithms.", "Their popularity stems from the intuitive interpretation of the maximum entropy objective and their superior sample efficiency on standard benchmarks.", "In particular, Soft Actor Critic (SAC), which combines off-policy learning with maximum-entropy RL, not only has many attractive theoretical properties, but can also give superior performance on a wide-range of Mujoco environments, including on the high-dimensional environment Humanoid for which both DDPG and TD3 perform poorly (Haarnoja et al., 2018a; b; Langlois et al., 2019) .", "SAC has a similar structure to TD3, but also employs maximum entropy reinforcement learning.", "In this paper, we first seek to understand the primary contribution of the entropy term to the performance of maximum entropy algorithms.", "For the Mujoco benchmark, we demonstrate that when using the standard objective without entropy along with standard additive noise exploration, there is often insufficient exploration due to the bounded nature of the action spaces.", "Specifically, the outputs of the policy network are often way outside the bounds of the action space, so that they need to be squashed to fit within the action space.", "The squashing results in actions persistently taking on their maximal values, so that there is insufficient exploration.", "In contrast, the entropy term in the SAC objective forces the outputs to have sensible values, so that even with squashing, exploration is maintained.", "We conclude that the entropy term in the objective for Soft Actor Critic principally addresses the bounded nature of the action spaces in the Mujoco 
environments.", "With this insight, we propose Streamlined Off Policy (SOP), a streamlined algorithm using the standard objective without the entropy term.", "SOP employs a simple normalization scheme to address the bounded nature of the action spaces, allowing satisfactory exploration throughout training.", "We also consider replacing the aforementioned normalization scheme with inverting gradients (IG). The contributions of this paper are thus threefold.", "First, we uncover the primary contribution of the entropy term of maximum entropy RL algorithms when the environments have bounded action spaces.", "Second, we propose a streamlined algorithm which does not employ entropy maximization but nevertheless matches the sampling efficiency and robustness performance of SAC for the Mujoco benchmarks.", "And third, we combine our streamlined algorithms with a simple non-uniform sampling scheme to achieve state-of-the-art performance for the Mujoco benchmarks.", "We provide anonymized code for reproducibility.", "In this paper we first showed that the primary role of maximum entropy RL for the Mujoco benchmark is to maintain satisfactory exploration in the presence of bounded action spaces.", "We then developed a new streamlined algorithm which does not employ entropy maximization but nevertheless matches the sampling efficiency and robustness performance of SAC for the Mujoco benchmarks.", "Our experimental results demonstrate a need to revisit the benefits of entropy regularization in DRL.", "Finally, we combined our streamlined algorithm with a simple non-uniform sampling scheme to achieve state-of-the-art performance for the Mujoco benchmark." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1249999925494194, 0, 0.07407406717538834, 0, 0.25, 0.1538461446762085, 0.27586206793785095, 0.23529411852359772, 0, 0.12244897335767746, 0.04651162400841713, 0.06451612710952759, 0, 0.0952380895614624, 0.07999999821186066, 0.0714285671710968, 0, 0, 0, 0, 0.0624999962747097, 0.19999998807907104, 0.06666666269302368, 0.06451612710952759, 0, 0.21621620655059814, 0.12121211737394333, 0.1111111044883728, 0, 0.2631579041481018, 0.1538461446762085, 0.1875 ]
SJl47yBYPS
true
[ "We propose a new DRL off-policy algorithm achieving state-of-the-art performance. " ]
[ "Very recently, it comes to be a popular approach for answering open-domain questions by first searching question-related passages, then applying reading comprehension models to extract answers.", "Existing works usually extract answers from single passages independently, thus not fully make use of the multiple searched passages, especially for the some questions requiring several evidences, which can appear in different passages, to be answered.", "The above observations raise the problem of evidence aggregation from multiple passages.", "In this paper, we deal with this problem as answer re-ranking.", "Specifically, based on the answer candidates generated from the existing state-of-the-art QA model, we propose two different re-ranking methods, strength-based and coverage-based re-rankers, which make use of the aggregated evidences from different passages to help entail the ground-truth answer for the question.", "Our model achieved state-of-the-arts on three public open-domain QA datasets, Quasar-T, SearchQA and the open-domain version of TriviaQA, with about 8\\% improvement on the former two datasets.", "Open-domain question answering (QA) aims to answer questions from a broad range of domains by effectively marshalling evidence from large open-domain knowledge sources.", "Such resources can be Wikipedia , the whole web BID12 , structured knowledge bases BID2 or combinations of the above (Baudiš &Šedivỳ, 2015) .Recent", "work on open-domain QA has focused on using unstructured text retrieved from the web to build machine comprehension models BID9 . These", "studies adopt a two-step process: an information retrieval (IR) model to coarsely select passages relevant to a question, followed by a reading comprehension (RC) model BID26 to infer an answer from the passages. These", "studies have made progress in bringing together evidence from large data sources, but they predict an answer to the question with only a single retrieved passage at a time. However", ", answer accuracy can often be improved by using multiple passages. In some", "cases, the answer can only be determined by combining multiple passages.In this paper, we propose a method to improve open-domain QA by explicitly aggregating evidence from across multiple passages. Our method", "is inspired by two notable observations from previous open-domain QA results analysis:• First, compared with incorrect answers, the correct answer is often suggested by more passages repeatedly. For example", ", in FIG0 (a), the correct answer \"danny boy\" has more passages providing evidence relevant to the question compared to the incorrect one. This observation", "can be seen as multiple passages collaboratively enhancing the evidence for the correct answer.• Second, sometimes", "the question covers multiple answer aspects, which spreads over multiple passages. In order to infer the", "correct answer, one has to find ways to aggregate those multiple passages in an effective yet sensible way to try to cover all aspects. In FIG0 the correct answer", "\"Galileo Galilei\" at the bottom has passages P1, \"Galileo was a physicist ...\" and P2, \"Galileo discovered the first 4 moons of Jupiter\", mentioning two pieces of evidence to match the question. In this case, the aggregation", "of these two pieces of evidence can help entail the ground-truth answer \"Galileo Galilei\". In comparison, the incorrect", "answer \"Isaac Newton\" has passages providing partial evidence on only \"physicist, mathematician and astronomer\". 
This observation illustrates", "the way in which multiple passages may provide complementary evidence to better infer the correct answer to a question.To provide more accurate answers for open-domain QA, we hope to make better use of multiple passages for the same question by aggregating both the strengthened and the complementary evidence from all the passages. We formulate the above evidence", "aggregation as an answer re-ranking problem. Re-ranking has been commonly used", "in NLP problems, such as in parsing and translation, in order to make use of high-order or global features that are too expensive for decoding algorithms BID6 BID27 BID16 BID11 . Here we apply the idea of re-ranking", "; for each answer candidate, we efficiently incorporate global information from multiple pieces of textual evidence without significantly increasing the complexity of the prediction of the RC model. Specifically, we first collect the", "top-K candidate answers based on their probabilities computed by a standard RC/QA system, and then we use two proposed re-rankers to re-score the answer candidates by aggregating each candidate's evidence in different ways. The re-rankers are:• A strength-based", "re-ranker, which ranks the answer candidates according to how often their evidence occurs in different passages. The re-ranker is based on the first observation", "if an answer candidate has multiple pieces of evidence, and each passage containing some evidence tends to predict the answer with a relatively high score (although it may not be the top score), then the candidate is more likely to be correct. The passage count of each candidate, and the aggregated", "probabilities for the candidate, reflect how strong its evidence is, and thus in turn suggest how likely the candidate is the corrected answer.• A coverage-based re-ranker, which aims to rank an answer", "candidate higher if the union of all its contexts in different passages could cover more aspects included in the question. To achieve this, for each answer we concatenate all the passages", "that contain the answer together. The result is a new context that aggregates all the evidence necessary", "to entail the answer for the question. We then treat the new context as one sequence to represent the answer,", "and build an attention-based match-LSTM model between the sequence and the question to measure how well the new aggregated context could entail the question. Overall, our contributions are as follows: 1) We propose a re-ranking-based", "framework to make use of the evidence from", "multiple passages in open-domain QA, and two re-rankers, namely, a strengthbased re-ranker and a coverage-based re-ranker, to perform evidence aggregation in existing opendomain QA datasets. We find the second re-ranker performs better than the first one on two of the", "three public datasets. 2) Our proposed approach leads to the state-of-the-art results on three different", "datasets (Quasar-T BID9 , SearchQA BID10 and TriviaQA BID17 ) and outperforms previous state of the art by large margins. 
In particular, we achieved up to 8% improvement on F1 on both Quasar-T and SearchQA", "compared to the previous best results.", "We have observed that open-domain QA can be improved by explicitly combining evidence from multiple retrieved passages.", "We experimented with two types of re-rankers, one for the case where evidence is consistent and another when evidence is complementary.", "Both re-rankers helped to significantly improve our results individually, and even more together.", "Our results considerably advance the state-of-the-art on three open-domain QA datasets. Although our proposed methods achieved some successes in modeling the union or co-occurrence of multiple passages, there are still much harder problems in open-domain QA that require reasoning and commonsense inference abilities.", "In future work, we will explore the above directions, and we believe that our proposed approach could be potentially generalized to these more difficult multipassage reasoning scenarios." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.1463414579629898, 0.3199999928474426, 0.2857142686843872, 0, 0.31372547149658203, 0.19999998807907104, 0.15789473056793213, 0.15789473056793213, 0.1666666567325592, 0.1860465109348297, 0.08888888359069824, 0.20689654350280762, 0.40909090638160706, 0.1860465109348297, 0.10526315122842789, 0.3125, 0.19999998807907104, 0.1463414579629898, 0.1702127605676651, 0.1875, 0.060606054961681366, 0.3571428656578064, 0, 0.23999999463558197, 0.23255813121795654, 0.11538460850715637, 0.10256409645080566, 0.1428571343421936, 0.08888888359069824, 0.1904761791229248, 0.19354838132858276, 0.1875, 0.17391303181648254, 0.3333333432674408, 0.3265306055545807, 0.06666666269302368, 0.08510638028383255, 0.09090908616781235, 0.42424240708351135, 0.22857142984867096, 0, 0.2181818187236786, 0.0952380895614624 ]
rJl3yM-Ab
true
[ "We propose a method that can make use of the multiple passages information for open-domain QA." ]
[ "Many large text collections exhibit graph structures, either inherent to the content itself or encoded in the metadata of the individual documents.\n", "Example graphs extracted from document collections are co-author networks, citation networks, or named-entity-cooccurrence networks.\n", "Furthermore, social networks can be extracted from email corpora, tweets, or social media. \n", "When it comes to visualising these large corpora, either the textual content or the network graph are used.\n\n", "In this paper, we propose to incorporate both, text and graph, to not only visualise the semantic information encoded in the documents' content but also the relationships expressed by the inherent network structure.\n", "To this end, we introduce a novel algorithm based on multi-objective optimisation to jointly position embedded documents and graph nodes in a two-dimensional landscape.\n", "We illustrate the effectiveness of our approach with real-world datasets and show that we can capture the semantics of large document collections better than other visualisations based on either the content or the network information.", "Substantial amounts of data is produced in our modern information society each day.", "A large portion of it comes from the communication on social media platforms, within chat applications, or via emails.", "This data exhibits dualtiy in the sense that they can be represented as text and graph.", "The metadata provides an inherent graph structure given by the social network between correspondents and the exchanged messages constitute the textual content.", "In addition, there are many other datasets that exhibit these two facets.", "Some of them are found in bibliometrics, for example in collections of research publications as co-author and citation networks.", "When it comes to analyse these types of datasets, usually either the content or the graph structure is neglected.", "In data exploration scenarios the goal of getting an overview of the datasets at hand is insurmountable with current tools.", "The sheer amount of data prohibits simple visualisations of networks or meaningful keyword-driven summaries of the textual content.", "Data-driven journalism (Coddington, 2015) often has to deal with leaked, unstructured, very heterogeneous data, e.g. 
in the context of the Panama Papers, where journalists needed to untangle and order huge amounts of information, search entities, and visualise found patterns (Chabin, 2017) .", "Similar datasets are of interest in the context of computational forensics (Franke & Srihari, 2007) .", "Auditing firms and law enforcement need to sift through huge amounts of data to gather evidence of criminal activity, often involving communication networks and documents (Karthik et al., 2008) .", "Users investigating such data want to be able to quickly gain an overview of its entirety, since the large amount of heterogeneous data renders experts' investigations by hand infeasible.", "Computer-aided exploration tools can support their work to identify irregularities, inappropriate content, or suspicious patterns.", "Current tools 1 lack sufficient semantic support, for example by incorporating document embeddings (Mikolov et al., 2013) and the ability to combine text and network information intuitively.", "We propose MODiR, a scalable multi-objective dimensionality reduction algorithm, and show how it can be used to generate an overview of entire text datasets with inherent network information in a single interactive visualisation.", "Special graph databases enable the efficient storage of large relationship networks and provide interfaces to query or analyse the data.", "However, without prior knowledge, it is practically impossible to gain an overview or quick insights into global network structures.", "Although traditional node-link visualisations of a graph can provide this overview, all semantic information from associated textual content is lost completely.", "Technically, our goal is to combine network layouts with dimensionality reduction of highdimensional semantic embedding spaces.", "Giving an overview over latent structures and topics in one visualisation may significantly improve the exploration of a corpus by users unfamiliar with the domain and terminology.", "This means, we have to integrate multiple aspects of the data, especially graph and text, into a single visualisation.", "The challenge is to provide an intuitive, two-dimensional representation of both the graph and the text, while balancing potentially contradicting objectives of these representations.", "In contrast to existing dimensionality reduction methods, such as tSNE (Maaten & Hinton, 2008) , MODiR uses a novel approach to transform high-dimensional data into two dimensions while optimising multiple constraints simultaneously to ensure an optimal layout of semantic information extracted from text and the associated network.", "To minimise the computational complexity that would come from a naive combination of network drawing and dimensionality reduction algorithms, we formally use the notion of a hypergraph.", "In this way, we are able to move repeated expensive computations from the iterative document-centred optimisation to a preprocessing step that constructs the hypergraph.", "We use real-world datasets from different domains to demonstrate the effectiveness and flexibility of our approach.", "MODiR-generated representations are compared to a series of baselines and state-of-the-art dimensionality reduction methods.", "We further show that our integrated view of these datasets exhibiting duality is superior to approaches focusing on text-only or network-only information when computing the visualisation.", "In this paper we discussed how to jointly visualise text and network data with all its aspects on a single canvas.", 
"Therefore we identified three principles that should be balanced by a visualisation algorithm.", "From those we derived formal objectives that are used by a gradient descend algorithm.", "We have shown how to use that to generate landscapes which consist of a base-layer, where the embedded unstructured texts are positioned such that their closeness in the document landscape reflects semantic similarity.", "Secondly, the landscape consists of a graph layer onto which the inherent network is drawn such that well connected nodes are close to one another.", "Lastly, both aspects can be balanced so that nodes are close to the documents they are associated with while preserving the graph-induced neighbourhood.", "We proposed MODiR, a novel multi-objective dimensionality reduction algorithm which iteratively optimises the document and network layout to generate insightful visualisations using the objectives mentioned above.", "In comparison with baseline approaches, this multi-objective approach provided best balanced overall results as measured by various metrics.", "In particular, we have shown that MODiR outperforms state-of-the-art algorithms, such as tSNE.", "We also implemented an initial prototype for an intuitive and interactive exploration of multiple datasets.", "(Ammar et al., 2018) with over 45 million articles.", "Both corpora cover a range of different scientific fields.", "Semantic Scholar for example integrates multiple data sources like DBLP and PubMed and mostly covers computer science, neuroscience, and biomedical research.", "Unlike DBLP however, S2 and AM not only contain bibliographic metadata, such as authors, date, venue, citations, but also abstracts to most articles, that we use to train document embeddings using the Doc2Vec model in Gensim 10 .", "Similar to Carvallari et al. (Cavallari et al., 2017) remove articles with missing information and limit to six communities that are aggregated by venues as listed in Table 3 .", "This way we reduce the size and also remove clearly unrelated computer science articles and biomedical studies.", "For in depth comparisons we reduce the S2 dataset to 24 hand-picked authors, their co-authors, and their papers (S2b).", "Note, that the characteristics of the networks differ greatly as the ratio between documents, nodes, and edges in Table 2 shows.", "In an email corpus, a larger number of documents is attributed to fewer nodes and the distribution has a high variance (some people write few emails, some a lot).", "In the academic corpora on the other hand, the number of documents per author is relatively low and similar throughout.", "Especially different is the news corpus, that contains one entity that is linked to all other entities and to all documents." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1621621549129486, 0.06666666269302368, 0.13793103396892548, 0.1764705777168274, 0.17391303181648254, 0.09999999403953552, 0.12765957415103912, 0, 0.05714285373687744, 0.0624999962747097, 0.1111111044883728, 0, 0.12121211737394333, 0.11764705181121826, 0.11764705181121826, 0.0624999962747097, 0.14814814925193787, 0, 0.04651162400841713, 0.0952380895614624, 0.12903225421905518, 0.23255813121795654, 0.25, 0.11428570747375488, 0.22857142984867096, 0, 0.25, 0.1463414579629898, 0.05714285373687744, 0.10526315122842789, 0.16393442451953888, 0.09999999403953552, 0.052631575614213943, 0.0624999962747097, 0.13333332538604736, 0.0952380895614624, 0.2702702581882477, 0.06896550953388214, 0.06666666269302368, 0.043478257954120636, 0.09999999403953552, 0.10810810327529907, 0.19512194395065308, 0.05882352590560913, 0, 0.13333332538604736, 0.07692307233810425, 0, 0.11428570747375488, 0.038461532443761826, 0.09302324801683426, 0, 0.05882352590560913, 0, 0.1395348757505417, 0, 0.060606054961681366 ]
HJlMkTNYvH
true
[ "Dimensionality reduction algorithm to visualise text with network information, for example an email corpus or co-authorships." ]
[ "Machine learned models exhibit bias, often because the datasets used to train them are biased.", "This presents a serious problem for the deployment of such technology, as the resulting models might perform poorly on populations that are minorities within the training set and ultimately present higher risks to them.", "We propose to use high-fidelity computer simulations to interrogate and diagnose biases within ML classifiers.", "We present a framework that leverages Bayesian parameter search to efficiently characterize the high dimensional feature space and more quickly identify weakness in performance.", "We apply our approach to an example domain, face detection, and show that it can be used to help identify demographic biases in commercial face application programming interfaces (APIs).", "Machine learned classifiers are becoming increasingly prevalent and important.", "Many systems contain components that leverage trained models for detecting or classifying patterns in data.", "Whether decisions are made entirely, or partially based on the output of these models, and regardless of the number of other components in the system, it is vital that their characteristics are well understood.", "However, the reality is that with many complex systems, such as deep neural networks, many of the \"unknowns\" are unknown and need to identified BID23 .", "Imagine a model being deployed in law enforcement for facial recognition, such a system could encounter almost infinite scenarios; which of these scenarios will the classifier have a blind-spot for?", "We propose an approach for helping diagnose biases within such a system more efficiently.Many learned models exhibit bias as training datasets are limited in size and diversity BID34 BID33 , or they reflect inherent human-biases BID7 .", "It is difficult for researchers to collect vast datasets that feature equal representations of every key property.", "Collecting large corpora of training examples requires time, is often costly and is logistically challenging.", "Let us take facial analysis as an exemplar problem for computer vision systems.", "There are numerous companies that provide services of face detection and tracking, face recognition, facial attribute detection, and facial expression/action unit recognition (e.g., Microsoft (msf, 2018) , Google (goo, 2018) , Affectiva (McDuff et al., 2016; aff, 2018) ).", "However, studies have revealed systematic biases in results of these systems BID6 BID5 , with the error rate up to seven times larger on women than men.", "Such biases in performance are very problematic when deploying these algorithms in the real-world.", "Other studies have found that face recognition systems misidentify [color, gender (women) , and age (younger)] at higher error rates BID22 .", "Reduced performance of a classifier on minority groups can lead to both greater numbers of false positives (in a law enforcement domain this would lead to more frequent targeting) or greater numbers of false negatives (in a medical domain this would lead to missed diagnoses).Taking", "face detection as a specific example of a task that all the services mentioned above rely upon, demographic and environmental factors (e.g., gender, skin type, ethnicity, illumination) all influence the appearance of the face. Say we", "collected a large dataset of positive and negative examples of faces within images. Regardless", "of how large the dataset is, these examples may not be evenly distributed across each demographic group. 
This might", "mean that the resulting classifier performs much less accurately on African-American people, because the training data featured few examples. A longitudinal", "study of police departments revealed that African-American individuals were more likely to be subject to face recognition searches than others BID15 . To further complicate", "matters, even if one were to collect a dataset that balances the number of people with different skin types, it is highly unlikely that these examples would have similar characteristics across all other dimensions, such as lighting, position, pose, etc. Therefore, even the best", "efforts to collect balanced datasets are still likely to be flawed. The challenge then is to", "find a way of successfully characterizing the performance of the resulting classifier across all these dimensions.The concept of fairness through awareness was presented by BID9 , the principle being that in order to combat bias we need to be aware of the biases and why they occur. This idea has partly inspired", "proposals of standards for characterizing training datasets that inform consumers of their properties BID20 . Such standards would be very", "valuable. However, while transparency", "is very important, it will not solve the fundamental problem of how to address the biases caused by poor representation. Nor will it help identify biases", "that might still occur even with models trained using carefully curated datasets.Attempts have been made to improve facial attribute detection by including gender and racial diversity. In one example, by BID29 , results", "were improved by scraping images from the web and learning facial representations from a held-out dataset with a uniform distribution across race and gender intersections. However, a drawback of this approach", "is that even images available from vast sources, such as Internet image search, may not be evenly balanced across all attributes and properties and the data collection and cleaning is still very time consuming.To address the problem of diagnosing bias in real world datasets we propose the use of high-fidelity simulations BID30 to interrogate models. Simulations allow for large volumes", "of diverse training examples to be generated and different parameter combinations to be systematically tested, something that is challenging with \"found\" data scrapped from the web or even curated datasets.Simulated data can be created in different ways. Generative adversarial networks (GANs", ") BID17 are becoming increasingly popular for synthesizing data BID31 . For example, GANs could be used to synthesize", "images of faces at different ages BID40 . However, GANs are inherently statistical models", "and are likely to contain some of the biases that the data used to train them contain. A GAN model trained with only a few examples of", "faces with darker skin tones will likely fail to produce a diverse set of high quality synthesized images with this attribute. Parameterized graphics models are an alternative", "for training and testing vision models BID36 BID15 BID35 . Specifically, it has been proposed that graphics", "models be used for performance evaluation BID19 . As an example, this approach has been used for models", "for pedestrian detection BID35 . To the best of our knowledge graphics models have not", "been employed for detecting demographic biases within vision models. 
We believe that demographic biases in machine learned", "systems is significant enough a problem to warrant further attention.The contributions of this paper are to: (1) present a simulated model for generating synthetic facial data, (2) show how simulated data can be used to identify the limitations of existing face detection algorithms, and (3) to present a sample efficient approach that reduces the number of simulations required. The simulated model used in this paper is made available", ".", "We present an approach that leverages highly-realistic computer simulations to interrogate and diagnose biases within ML classifiers.", "We propose the use of simulated data and Bayesian optimization to intelligently search the parameter space.", "We have shown that it is possible to identify limits in commercial face detection systems using synthetic data.", "We highlight bias in these existing classifiers which indicates they perform poorly on darker skin types and on older skin texture appearances.", "Our approach is easily extensible, given the amount of parameters (e.g., facial expressions and actions, lighting direction and intensity, number of faces, occlusions, head pose, age, gender, skin type) that can be systematically varied with simulations.We used one base facial model for our experimentation.", "This limits the generalization of our conclusions and the ability for us to determine whether the effects would be similar or different across genders and other demographic variables.", "Synthetic faces with alternate bone structures would need to be created to test these hypotheses.", "While the initial cost of creating the models is high, they can be used to generate large volumes of data, making synthetics cost effective in the long-run.", "Age modeling in face images should be improved using GAN or improved parametric synthetic models.", "A limitation of our work is that the aging was only represented via texture changes.", "We plan to investigate GAN-based approaches for synthesis and compare these to parametric synthesis.", "A hybrid of parametric and statistical models could be used to create a more controllable but diverse set of synthesized faces.", "Future work will consider retraining the models using synthetic data in order to examine whether this can be used to combat model bias." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.060606054961681366, 0.23999999463558197, 0.75, 0.380952388048172, 0.2222222238779068, 0.14814814925193787, 0.060606054961681366, 0.08510638028383255, 0.1463414579629898, 0.043478257954120636, 0.2181818187236786, 0.11428570747375488, 0.0624999962747097, 0.06451612710952759, 0.07692307233810425, 0.08888888359069824, 0.06451612710952759, 0.10256409645080566, 0.08163265138864517, 0.11999999731779099, 0.19354838132858276, 0, 0.052631575614213943, 0.09756097197532654, 0.10169491171836853, 0.0624999962747097, 0.15625, 0.05714285373687744, 0, 0.09999999403953552, 0.11764705181121826, 0.08888888359069824, 0.16438356041908264, 0.1090909019112587, 0.0555555522441864, 0, 0.24390242993831635, 0.09090908616781235, 0.11764705181121826, 0, 0, 0.24242423474788666, 0.1690140813589096, 0.800000011920929, 0.1818181723356247, 0.1666666567325592, 0.15789473056793213, 0.12903225421905518, 0.09302324801683426, 0.0624999962747097, 0.04878048226237297, 0, 0.060606054961681366, 0.19999998807907104, 0.15789473056793213, 0.04999999329447746 ]
BJf_YjCqYX
true
[ "We present a framework that leverages high-fidelity computer simulations to interrogate and diagnose biases within ML classifiers. " ]
[ "Point clouds are a flexible and ubiquitous way to represent 3D objects with arbitrary resolution and precision.", "Previous work has shown that adapting encoder networks to match the semantics of their input point clouds can significantly improve their effectiveness over naive feedforward alternatives.", "However, the vast majority of work on point-cloud decoders are still based on fully-connected networks that map shape representations to a fixed number of output points.", "In this work, we investigate decoder architectures that more closely match the semantics of variable sized point clouds.", "Specifically, we study sample-based point-cloud decoders that map a shape representation to a point feature distribution, allowing an arbitrary number of sampled features to be transformed into individual output points.", "We develop three sample-based decoder architectures and compare their performance to each other and show their improved effectiveness over feedforward architectures.", "In addition, we investigate the learned distributions to gain insight into the output transformation.", "Our work is available as an extensible software platform to reproduce these results and serve as a baseline for future work.", "Point clouds are an important data type for deep learning algorithms to support.", "They are commonly used to represent point samples of some underlying object.", "More generally, the points may be extended beyond 3D space to capture additional information about multi-sets of individual objects from some class.", "The key distinction between point clouds and the more typical tensor data types is that the information content is invariant to the ordering of points.", "This implies that the spatial relationships among points is not explicitly captured via the indexing structure of inputs and outputs.", "Thus, standard convolutional architectures, which leverage such indexing structure to support spatial generalization, are not directly applicable.", "A common approach to processing point clouds with deep networks is voxelization, where point clouds are represented by one or more occupancy-grid tensors (Zhou & Tuzel (2018) , Wu et al. 
(2018) ).", "The grids encode the spatial dimensions of the points in the tensor indexing structure, which allows for the direct application of convolutional architectures.", "This voxelization approach, however, is not appropriate in many use cases.", "In particular, the size of the voxelized representation depends on the spatial extent of the point cloud relative to the spatial resolution needed to make the necessary spatial distinctions (such as distinguishing between different objects in LIDAR data).", "In many cases, the required resolution will be unknown or result in enormous tensors, which can go beyond the practical space and time constraints of an application.", "This motivates the goal of developing architectures that support processing point cloud data directly, so that processing scales with the number of points rather than the required size of an occupancy grid.", "One naive approach, which scales linearly in the size of the point cloud, is to 'flatten' the point cloud into an arbitrarily ordered list.", "The list can then be directly processed by standard convolutional or fully-connected (MLP) architectures directly.", "This approach, however, has at least two problems.", "First, the indexing order in the list carries no meaningful information, while the networks do not encode this as a prior.", "Thus, the networks must learn to generalize in a way that is invariant to ordering, which can be data inefficient.", "Second, in some applications, it is useful for point clouds to consist of varying numbers of points, while still representing the same underlying objects.", "However, the number of points that can be consumed by the naive feedforward architecture is fixed.", "PointNet (Qi et al., 2017) and Deepsets Zaheer et al. (2017) exhibit better performance over the MLP baseline with a smaller network by independently transforming each point into a high-dimensional representation with a single shared MLP that is identically applied to each individual point.", "This set of derived point features is then mapped to a single, fixed-sized dense shape representation using a symmetric reduction function.", "As such the architectures naturally scale to any number of input points and order invariance is built in as an architectural bias.", "As a result, these architectures have been shown to yield significant advantages in applications in which point clouds are used as input, such as shape classification.", "The success of PointNet and DeepSet style architectures in this domain shows that designing a network architecture to match the semantics of a point cloud results in a more efficient, and better performing network.", "Since point clouds are such a useful object representation, it's natural to ask how we should design networks to decode point clouds from some provided shape representation.", "This would allow for the construction of point cloud auto-encoders, which could serve a number of applications, such as anomaly detection and noise smoothing.", "Surprisingly, the dominant approach to designing such a differentiable point cloud decoder is to feed the dense representation of the desired object through a single feedforward MLP whose result is then reshaped into the appropriate size for the desired point cloud.", "This approach has similar issues as the flat MLP approach to encoding point clouds; the decoder can only produce a fixed-sized point cloud while point clouds are capable of representing objects at low or high levels of detail; the decoder only learns a single deterministic 
mapping from a shape representation to a point cloud while we know that point clouds are inherently random samples of the underlying object.", "The primary goal and contribution of this paper is to study how to apply the same lessons learned from the PointNet encoder's semantic congruence with point clouds to a point cloud decoder design.", "As such, we build on PointNet's principles to present the 'NoiseLearn' algorithm: a novel, simple, and effective point cloud decoding approach.", "The simplicity of the decoding architectures and the increase in performance are strong indicators that sample-based decoders should be considered as a default in future studies and systems.", "In addition, we investigate the operation of the decoders to gain insight into how the output point clouds are generated from a latent shape representation.", "In this work, we evaluated and compared several realizations of a sample-based point cloud decoder architecture.", "We show that these sampling approaches are competitive with or outperform the MLP approach while using fewer parameters and providing better functionality.", "These advantages over the baseline suggest that sample-based point cloud decoders should be the default approach when a network needs to produce independent point samples of some underlying function or object.", "To further this area of research, we provide a complete open-source implementation of our tools used to train and evaluate these networks." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.1111111044883728, 0.2666666507720947, 0.1818181723356247, 0.31578946113586426, 0.1666666567325592, 0.10526315122842789, 0.060606054961681366, 0.10256409645080566, 0.060606054961681366, 0.1249999925494194, 0.0952380895614624, 0.2857142686843872, 0.20512819290161133, 0, 0.1599999964237213, 0.10256409645080566, 0, 0.16326530277729034, 0.1304347813129425, 0.21739129722118378, 0.19512194395065308, 0.05882352590560913, 0, 0.051282044500112534, 0.10256409645080566, 0.1860465109348297, 0.22857142984867096, 0.28070175647735596, 0.09999999403953552, 0.1428571343421936, 0.09090908616781235, 0.3333333432674408, 0.09090908616781235, 0.23255813121795654, 0.23529411852359772, 0.23529411852359772, 0.2448979616165161, 0.29999998211860657, 0.2222222238779068, 0.23255813121795654, 0.2222222238779068, 0.380952388048172, 0.3199999928474426, 0.1463414579629898 ]
SklVI1HKvH
true
[ "We present and evaluate sampling-based point cloud decoders that outperform the baseline MLP approach by better matching the semantics of point clouds." ]
[ "We present a deep reinforcement learning approach to minimizing the execution cost of neural network computation graphs in an optimizing compiler.", "Unlike earlier learning-based works that require training the optimizer on the same graph to be optimized, we propose a learning approach that trains an optimizer offline and then generalizes to previously unseen graphs without further training.", "This allows our approach to produce high-quality execution decisions on real-world TensorFlow graphs in seconds instead of hours.", "We consider two optimization tasks for computation graphs: minimizing running time and peak memory usage.", "In comparison to an extensive set of baselines, our approach achieves significant improvements over classical and other learning-based methods on these two tasks.", "Deep Learning frameworks such as MXNet (Chen et al., 2015) , PyTorch (Paszke et al., 2017) , and TensorFlow (TensorFlow Authors, 2016a) represent neural network models as computation graphs.", "Efficiently executing such graphs requires optimizing discrete decisions about how to map the computations in a graph onto hardware so as to minimize a relevant cost metric (e.g., running time, peak memory).", "Given that execution efficiency is critical for the success of neural networks, there is growing interest in the use of optimizing static compilers for neural network computation graphs, such as Glow (Rotem et al., 2018) , MLIR (MLIR Authors, 2018) , TVM (Chen et al., 2018a) , and XLA (XLA team, 2017 ).", "Here we consider the model parallelism setting where a computation graph can be executed using multiple devices in parallel.", "Nodes of the graph are computational tasks, and directed edges denote dependencies between them.", "We consider here jointly optimizing over placement, i.e., which nodes are executed on which devices, and schedule, i.e., the node execution order on each device.", "These decisions are typically made in either one or two passes in the compiler.", "We consider two different objectives:", "1) minimize running time, subject to not exceeding device memory limits, and", "2) minimize peak memory usage.", "In the optimization literature, such problems are studied under the class of task scheduling, which is known to be NP-hard in typical settings (Sinnen, 2007; Kwok & Ahmad, 1999) .", "As scheduling and placement are just a few of the many complex decisions made in a compiler, it is essential in a production setting that a solution", "1) produce solutions of acceptable quality fast, even on large graphs (e.g., thousands of nodes) and decision spaces, and", "2) handle diverse graphs from various types of applications, neural network architectures, and users.", "In this work we consider learning an optimizer that satisfies these requirements.", "Crucially, we aim to learn an optimizer that generalizes to a broad set of previously unseen computation graphs, without the need for training on such graphs, thus allowing it to be fast at test time.", "Previous works on learning to optimize model parallelism decisions (Mirhoseini et al., 2017; Addanki et al., 2019) have not considered generalization to a broad set of graphs nor joint optimization of placement and scheduling.", "In Mirhoseini et al. 
(2017; ), learning is done from scratch for each computation graph and for placement decisions only, requiring hours (e.g., 12 to 27 hours per graph).", "This is too slow to be broadly useful in a general-purpose production compiler.", "We propose an approach that takes only seconds to optimize similar graphs.", "In concurrent work to ours, Addanki et al. (2019) shows generalization to unseen graphs, but they are generated artificially by architecture search for a single learning task and dataset.", "In contrast, we collect real user-defined graphs spanning a broad set of tasks, architectures, and datasets.", "Fig. 1: Overview of our approach.", "The Biased Random Key Genetic Algorithm (BRKGA) is used to optimize execution decisions for a computation graph (e.g., placement and scheduling of nodes) with respect to a cost metric (e.g., running time, peak memory) computed using the performance model.", "BRKGA requires proposal distributions for each node in the graph to generate candidate solutions in its search loop.", "The default choice is agnostic to the input graph: uniform distribution over [0, 1] at all nodes.", "We use a graph neural network policy to predict node-specific non-uniform proposal distribution choices (parameterized as beta distributions over [0, 1]).", "BRKGA is then run with those choices and outputs the best solution found by its iteration limit.", "By controlling the non-uniformity of the distributions, the policy directs how BRKGA's search effort is allocated such that a better solution can be found with the same search budget.", "In addition, both Mirhoseini et al. (2017; ) and Addanki et al. (2019) consider only placement decisions and rely on TensorFlow's dynamic scheduler; they do not address the static compiler setting where it is natural to jointly optimize scheduling and placement.", "The key idea of our approach (Figure 1) is to learn a neural network that, conditioned on the input graph to be optimized, directs an existing optimization algorithm's search such that it finds a better solution in the same search budget.", "We choose the Biased Random-Key Genetic Algorithm (BRKGA (Gonçalves & Resende, 2011)) as the optimization algorithm after an extensive evaluation of several choices showed that it gives by far the best speed-vs-quality trade-off for our application.", "BRKGA produces good solutions in just a few seconds even for real-world TensorFlow graphs with thousands of nodes, and we use learning to improve the solution quality significantly at similar speed.", "We train a graph neural network (Battaglia et al., 2018) to take a computation graph as input and output node-specific proposal distributions to use in the mutant generation step of BRKGA's inner loop.", "BRKGA is then run to completion with those input-dependent distribution choices, instead of input-agnostic default choices, to compute execution decisions.", "The distributions are predicted at each node, resulting in a high-dimensional prediction problem.", "There is no explicit supervision available, so we use the objective value as a reward signal in a contextual bandit approach with REINFORCE (Williams, 1992).", "Our approach, \"Reinforced Genetic Algorithm Learning\" (REGAL), uses the network's ability to generalize to new graphs to significantly improve the solution quality of the genetic algorithm for the same objective evaluation budget.", "We follow the static compiler approach of constructing a coarse static cost model to evaluate execution decisions and optimizing them with respect 
to it, as done in (Addanki et al., 2018; Jia et al., 2018) .", "This is in contrast to evaluating the cost by executing the computation graph on hardware (Mirhoseini et al., 2017; .", "A computationally cheap cost model enables fast optimization.", "It is also better suited for distributed training of RL policies since a cost model is cheap to replicate in parallel actors, while hardware environments are not.", "Our cost model corresponds to classical NP-hard scheduling problems, so optimizing it is difficult.", "In this paper we focus fully on learning to optimize this cost model, leaving integration with a compiler for future work.", "We structure the neural network's task as predicting proposal distributions to use in the search over execution decisions, rather than the decisions themselves directly.", "Empirically we have found the direct prediction approach to be too slow at inference time for our application and generalizes poorly.", "Our approach potentially allows the network to learn a more abstract policy not directly tied to detailed decisions that are specific to particular graphs, which may generalize better to new graphs.", "It can also make the learning task easier as the search may succeed even with sub-optimal proposal distribution predictions, thus smoothening the reward function and allowing the network to incrementally learn better proposals.", "The node-specific proposal distribution choices provide a rich set of knobs for the network to flexibly direct the search.", "Combining learning with a search algorithm has been shown to be successful (e.g., (Silver et al., 2017; ), and our work can be seen as an instance of the same high-level idea.", "This paper makes several contributions:", "• We are the first to demonstrate learning a policy for jointly optimizing placement and scheduling that generalizes to a broad set of real-world TensorFlow graphs.", "REGAL significantly outperforms all baseline algorithms on two separate tasks of minimizing runtime and peak memory usage (section 5.3) on datasets constructed from 372 unique real-world TensorFlow graphs, the largest dataset of its kind in the literature and at least an order of magnitude larger than the ones in previous works (Mirhoseini et al., 2017; Chen et al., 2018b; Addanki et al., 2018; .", "• We use a graph neural network to predict mutant sampling distributions of a genetic algorithm, specifically BRKGA, for the input graph to be optimized.", "This directs BRKGA's search in an input-dependent way, improving solution quality for the same search budget.", "• We compare extensively to classical optimization algorithms, such as enumerative search, local search, genetic search, and other heuristics, and analyze room-for-improvement in the objective value available to be captured via learning.", "Both are missing in previous works.", "By training a graph neural network policy to predict graph-conditional node-level distributions for BRKGA, REGAL successfully generalizes to new graphs, significantly outperforms all baselines in solution quality, and computes solutions in about one second on average per TensorFlow test set graph.", "REGAL's speed and generalization make it a strong choice for use in a production compiler that needs to handle a diverse set of graphs under a limited time budget.", "We foresee several extensions.", "Integrating REGAL into a neural network compiler would allow us to evaluate the end-to-end gains due to better placement and scheduling decisions.", "To further improve REGAL's own performance, 
one could use a Mixture of Experts architecture.", "Given the diversity of graphs, a mixture model can train specialized sub-models on different types of graphs (e.g., convolutional networks, recurrent networks, etc.).", "Another is to replace BRKGA with alternatives, e.g., combining learned neural policies with local search.", "figure 6 give statistics for the number of nodes and edges in the datasets.", "The broad range of graph sizes indicates the diversity of the datasets." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.4000000059604645, 0.23333333432674408, 0.2978723347187042, 0.13636362552642822, 0.1538461446762085, 0.14814814925193787, 0.16393442451953888, 0.2222222238779068, 0.1249999925494194, 0.1395348757505417, 0.18867923319339752, 0.0476190447807312, 0.05882352590560913, 0.09756097197532654, 0, 0.10526315122842789, 0.19230768084526062, 0.1666666567325592, 0.1395348757505417, 0.04878048226237297, 0.2950819730758667, 0.23333333432674408, 0.10526315122842789, 0.0952380895614624, 0.24390242993831635, 0.17543859779834747, 0.17777776718139648, 0.09090908616781235, 0.2686567008495331, 0.1304347813129425, 0.08695651590824127, 0.19607841968536377, 0.08695651590824127, 0.29629629850387573, 0.16949151456356049, 0.2985074520111084, 0.15625, 0.29999998211860657, 0.2666666507720947, 0.12765957415103912, 0.0476190447807312, 0.11320754140615463, 0.2142857164144516, 0.26229506731033325, 0.2083333283662796, 0.054054051637649536, 0.2181818187236786, 0.09302324801683426, 0.20408162474632263, 0.23529411852359772, 0.11999999731779099, 0.31578946113586426, 0.20338982343673706, 0.21276594698429108, 0.22580644488334656, 0, 0.4150943458080292, 0.16867469251155853, 0.2745097875595093, 0.13636362552642822, 0.17543859779834747, 0, 0.20895521342754364, 0.2545454502105713, 0.060606058686971664, 0.19999998807907104, 0.1395348757505417, 0.22641508281230927, 0.08888888359069824, 0.1428571343421936, 0.10256409645080566 ]
rkxDoJBYPB
true
[ "We use deep RL to learn a policy that directs the search of a genetic algorithm to better optimize the execution cost of computation graphs, and show improved results on real-world TensorFlow graphs." ]
[ "Predictive coding theories suggest that the brain learns by predicting observations at various levels of abstraction.", "One of the most basic prediction tasks is view prediction: how would a given scene look from an alternative viewpoint?", "Humans excel at this task.", "Our ability to imagine and fill in missing visual information is tightly coupled with perception: we feel as if we see the world in 3 dimensions, while in fact, information from only the front surface of the world hits our (2D) retinas.", "This paper explores the connection between view-predictive representation learning and its role in the development of 3D visual recognition.", "We propose inverse graphics networks, which take as input 2.5D video streams captured by a moving camera, and map to stable 3D feature maps of the scene, by disentangling the scene content from the motion of the camera.", "The model can also project its 3D feature maps to novel viewpoints, to predict and match against target views.", "We propose contrastive prediction losses that can handle stochasticity of the visual input and can scale view-predictive learning to more photorealistic scenes than those considered in previous works.", "We show that the proposed model learns 3D visual representations useful for (1) semi-supervised learning of 3D object detectors, and (2) unsupervised learning of 3D moving object detectors, by estimating motion of the inferred 3D feature maps in videos of dynamic scenes.", "To the best of our knowledge, this is the first work that empirically shows view prediction to be a useful and scalable self-supervised task beneficial to 3D object detection. ", "Predictive coding theories (Rao & Ballard, 1999; Friston, 2003) suggest that the brain learns by predicting observations at various levels of abstraction.", "These theories currently have extensive empirical support: stimuli are processed more quickly if they are predictable (McClelland & Rumelhart, 1981; Pinto et al., 2015) , prediction error is reflected in increased neural activity (Rao & Ballard, 1999; Brodski et al., 2015) , and disproven expectations lead to learning (Schultz et al., 1997) .", "A basic prediction task is view prediction: from one viewpoint, predict what the scene would look like from another viewpoint.", "Learning this task does not require supervision from any annotations; supervision is freely available to a mobile agent in a 3D world who can estimate its egomotion (Patla, 1991) .", "Humans excel at this task: we can effortlessly imagine plausible hypotheses for the occluded side of objects in a photograph, or guess what we would see if we walked around our office desks.", "Our ability to imagine information missing from the current image view-and necessary for predicting alternative views-is tightly coupled with visual perception.", "We infer a mental representation of the world that is 3-dimensional, in which the objects are distinct, have 3D extent, occlude one another, and so on.", "Despite our 2-dimensional visual input, and despite never having been supplied a 3D bounding box or 3D segmentation mask as supervision, our ability for 3D perception emerges early in infancy (Spelke et al., 1982; Soska & Johnson, 2008) .", "In this paper, we explore the link between view predictive learning and the emergence of 3D perception in computational models of perception, on mobile agents in static and dynamic scenes.", "Our models are trained to predict views of static scenes given 2.5D video streams as input, and are evaluated on their ability to 
detect objects in 3D.", "Our models map 2.5D input streams into 3D feature volumes of the depicted scene.", "At every frame, the architecture estimates and accounts for the motion of the camera, so that the internal 3D representation remains stable.", "The model projects its inferred 3D feature maps to novel viewpoints, and matches them against visual representations", "We propose models that learn space-aware 3D feature abstractions of the world given 2.5D input, by minimizing 3D and 2D view contrastive prediction objectives.", "We show that view-contrastive prediction leads to features useful for 3D object detection, both in simulation and in the real world.", "We further show that the ability to visually imagine full 3D scenes allows us to estimate dense 3D motion fields, where clustering non-zero motion allows 3D objects to emerge without any human supervision.", "Our experiments suggest that the ability to imagine visual information in 3D can drive 3D object detection without any human annotations-instead, the model learns by moving and watching objects move (Gibson, 1979) ." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.12903225421905518, 0.05714285373687744, 0, 0.11999999731779099, 0.3030303120613098, 0.16326530277729034, 0.12121211737394333, 0.2857142686843872, 0.3404255211353302, 0.2790697515010834, 0.10810810327529907, 0.06779660284519196, 0.05882352590560913, 0.0476190447807312, 0.043478257954120636, 0.1111111044883728, 0.25, 0.07843136787414551, 0.19512194395065308, 0.09756097197532654, 0.13333332538604736, 0.23529411852359772, 0.1249999925494194, 0.25641024112701416, 0.4000000059604645, 0.2380952388048172, 0.2666666507720947 ]
BJxt60VtPr
true
[ "We show that with the right loss and architecture, view-predictive learning improves 3D object detection" ]
[ "The modeling of style when synthesizing natural human speech from text has been the focus of significant attention.", "Some state-of-the-art approaches train an encoder-decoder network on paired text and audio samples (x_txt, x_aud) by encouraging its output to reconstruct x_aud.", "The synthesized audio waveform is expected to contain the verbal content of x_txt and the auditory style of x_aud.", "Unfortunately, modeling style in TTS is somewhat under-determined and training models with a reconstruction loss alone is insufficient to disentangle content and style from other factors of variation.", "In this work, we introduce an end-to-end TTS model that offers enhanced content-style disentanglement ability and controllability.", "We achieve this by combining a pairwise training procedure, an adversarial game, and a collaborative game into one training scheme.", "The adversarial game concentrates the true data distribution, and the collaborative game minimizes the distance between real samples and generated samples in both the original space and the latent space.", "As a result, the proposed model delivers a highly controllable generator, and a disentangled representation.", "Benefiting from the separate modeling of style and content, our model can generate human fidelity speech that satisfies the desired style conditions.", "Our model achieves start-of-the-art results across multiple tasks, including style transfer (content and style swapping), emotion modeling, and identity transfer (fitting a new speaker's voice).", "In the past few years, we have seen exciting developments in Text-To-Speech (TTS) using deep neural networks that learn to synthesize human-like speech from text in an end-to-end fashion.", "Ideally, synthesized speech should convey the given text content in an appropriate auditory style which we refer to as style modeling.", "Modeling style is of particular importance for many practical applications such as intelligent conversational agents and assistants.", "Yet, this is an incredibly challenging task because the same text can map to different speaking styles, making the problem somewhat under-determined.", "To this end, the recently proposed Tacotron-based approaches BID22 ) use a piece of reference speech audio to specify the expected style.", "Given a pair of text and audio input, they assume two independent latent variables: c that encodes content from text, and s that encodes style from the reference audio, where c and s are produced by a text encoder and a style encoder, respectively.", "A new audio waveform can be consequently generated by a decoder conditioned on c and s, i.e. p(x|c, s).", "Thus, it is straightforward to train the model that minimizes the log-likelihood by a reconstruction loss.", "However, this method makes it challenging for s to exclusively encode style because no constraints are placed on the disentanglement of style from content within the reference audio.", "It makes the model easy to simply memorize all the information (i.e. both style and content components) from the paired audio sample.", "In this case, the style embedding tends to be neglected by the decoder, and the style encoder cannot be optimized easily.To help address some of the limitations of the prior work, we propose a model that provides enhanced controllability and disentanglement ability.", "Rather than only training on a single paired text-audio sample (the text and audio are aligned with each other), i.e. 
(x txt , x aud ) →x, we adopt a pairwise training procedure to enforce our model to correctly map input text to two different audio references (x txt , x aud is paired with x txt , and x − aud is unpaired (randomly sampled).", "Training the model involves solving an adversarial game and a collaborative game.", "The adversarial game concentrates the true joint data distribution p(x, c) by using a conditional GAN loss.", "The collaborative game is built to minimize the distance of generated samples from the real samples in both original space and latent space.", "Specifically, we introduce two additional losses, the reconstruction loss and the style loss.", "The style loss is produced by drawing inspiration from image style transfer BID4 , which can be used to give explicit style constraints.", "During training, the the generator and discriminator combat each other to match a joint distribution.", "While at the same time, they also collaborate with each other in order to minimize the distance of the expected sample and the synthesized sample in both original space and hidden space.", "As a result, our model delivers a highly controllable generator and disentangled representation.", "We propose an end-to-end conditional generative model for TTS style modeling.", "The proposed model is built upon Tacotron, with an enhanced content-style disentanglement ability and controllability.", "The proposed pairwise training approach that involves a adversarial game and a collaborative game together, result in a highly controllable generator with disentangled representations.", "Benefiting from the separate modeling of content c and style s, our model can synthesize high fidelity speech signals with the correct content and realistic style, resulting in natural human-like speech.", "We demonstrated our approach on two TTS datasets with different auditory styles (emotion and speaker identity), and show that our approach establishes state-of-the-art quantitative and qualitative performance on a variety of tasks.", "For future research, an important direction can be training on unpaired data under an unsupervised setting.", "In this way, the requirements for a lot of work on aligning text and audios can be much released." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.14814814925193787, 0.0624999962747097, 0.07407406717538834, 0.22857142984867096, 0, 0.1428571343421936, 0.12903225421905518, 0.08695651590824127, 0.13333332538604736, 0.1249999925494194, 0.052631575614213943, 0.20000000298023224, 0.14814814925193787, 0, 0.12903225421905518, 0.0952380895614624, 0.06666666269302368, 0.07999999821186066, 0.1111111044883728, 0.06451612710952759, 0.08888888359069824, 0.035087715834379196, 0.1904761791229248, 0.14814814925193787, 0.06666666269302368, 0.0952380895614624, 0.06451612710952759, 0.0833333283662796, 0.05714285373687744, 0.09090908616781235, 0.380952388048172, 0, 0.19354838132858276, 0.1621621549129486, 0.054054051637649536, 0, 0.13793103396892548 ]
ByzcS3AcYX
true
[ "a generative adversarial network for style modeling in a text-to-speech system" ]
[ "Empirical evidence suggests that neural networks with ReLU activations generalize better with over-parameterization.", "However, there is currently no theoretical analysis that explains this observation.", "In this work, we study a simplified learning task with over-parameterized convolutional networks that empirically exhibits the same qualitative phenomenon. ", "For this setting, we provide a theoretical analysis of the optimization and generalization performance of gradient descent.", "Specifically, we prove data-dependent sample complexity bounds which show that over-parameterization improves the generalization performance of gradient descent.", "Most successful deep learning models use a number of parameters that is larger than the number of parameters that are needed to get zero-training error.", "This is typically referred to as overparameterization.", "Indeed, it can be argued that over-parameterization is one of the key techniques that has led to the remarkable success of neural networks.", "However, there is still no theoretical account for its effectiveness.One very intriguing observation in this context is that over-parameterized networks with ReLU activations, which are trained with gradient based methods, often exhibit better generalization error than smaller networks BID11 Novak et al., 2018) .", "This somewhat counterintuitive observation suggests that first-order methods which are trained on over-parameterized networks have an inductive bias towards solutions with better generalization performance.", "Understanding this inductive bias is a necessary step towards a full understanding of neural networks in practice.Providing theoretical guarantees for this phenomenon is extremely challenging due to two main reasons.", "First, to show a generalization gap, one needs to prove that large networks have better sample complexity than smaller ones.", "However, current generalization bounds that are based on complexity measures do not offer such guarantees.", "Second, analyzing the dynamics of first-order methods on networks with ReLU activations is a major challenge.", "Indeed, there do not exist optimization guarantees even for simple learning tasks such as the classic XOR problem in two dimensions.", "1 To advance this issue, we focus on a particular learning setting that captures key properties of the over-parameterization phenomenon.", "We consider a high-dimensional extension of the XOR problem, which we refer to as the \"XOR Detection problem (XORD)\".", "The XORD is a pattern recognition task where the goal is to learn a function which classifies binary vectors according to whether they contain a two-dimensional binary XOR pattern (i.e., (1, 1) or (−1, −1)).", "This problem contains the classic XOR problem as a special case when the vectors are two dimensional.", "We consider learning this function with gradient descent trained on an over-parameterized convolutional neural network (i.e., with multiple channels) with ReLU activations and three layers: convolutional, max pooling and fully connected.", "As can be seen in FIG0 , over-parameterization improves generalization in this problem as well.", "Therefore it serves as a good test-bed for understanding the role of over-parameterization.", "1 We are referring to the problem of learning the XOR function given four two-dimensional points with binary entries, using a moderate size one-hidden layer neural network (e.g., with 50 hidden neurons).", "Note that there are no optimization guarantees for this setting.", "Variants of 
XOR have been studied in BID10 ; Sprinkhuizen-Kuyper & Boers (1998) but these works only analyzed the optimization landscape and did not provide guarantees for optimization methods.", "We provide guarantees for this problem in Sec. 9.", "3).", "The figure shows the test error obtained for different number of channels k.", "The blue curve shows test error when restricting to cases where training error was zero.", "It can be seen that increasing the number of channels improves the generalization performance.", "Experimental details are provided in Section 8.2.1..", "In this work we provide an analysis of optimization and generalization of gradient descent for XORD.", "We show that for various input distributions, ranges of accuracy and confidence parameters, sufficiently over-parameterized networks have better sample complexity than a small network which can realize the ground truth classifier.", "To the best of our knowledge, this is the first example which shows that over-paramaterization can provably improve generalization for a neural network with ReLU activations.Our analysis provides a clear distinction between the inductive bias of gradient descent for overparameterized and small networks.", "It reveals that over-parameterized networks are biased towards global minima that detect more patterns in the data than global minima found by small networks.", "2 Thus, even though both networks succeed in optimization, the larger one has better generalization performance.", "We provide experiments which show that the same phenomenon occurs in a more general setting with more patterns in the data and non-binary input.", "We further show that our analysis can predict the behavior of over-parameterized networks trained on MNIST and guide a compression scheme for over-parameterized networks with a mild loss in accuracy (Sec. 6).", "In this paper we consider a simplified learning task on binary vectors and show that overparameterization can provably improve generalization performance of a 3-layer convolutional network trained with gradient descent.", "Our analysis reveals that in the XORD problem overparameterized networks are biased towards global minima which detect more relevant patterns in the data.", "While we prove this only for the XORD problem and under the assumption that the training set contains diverse points, our experiments clearly show that a similar phenomenon occurs in other settings as well.", "We show that this is the case for XORD with non-diverse points FIG0 ) and in the more general OBD problem which contains 60 patterns in the data and is not restricted to binary inputs FIG1 .", "Furthermore, our experiments on MNIST hint that this is the case in MNIST as well FIG5 .By", "clustering the detected patterns of the large network we could achieve better accuracy with a small network. This", "suggests that the larger network detects more patterns with gradient descent even though its effective size is close to that of a small network.We believe that these insights and our detailed analysis can guide future work for showing similar results in more complex tasks and provide better understanding of this phenomenon. It would", "also be interesting to further study the implications of such results on model compression and on improving training algorithms. Behnam Neyshabur", ", Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. We tested the generalization", "performance in the setup of Section3. We considered networks with", "number of channels 4,6,8,20,50,100 and 200 . 
The distribution in this setting", "has p + = 0.5 and p − = 0.9 and the training sets are of size 12 (6 positive, 6 negative). Note that in this case the training", "set contains non-diverse points with high probability. The ground truth network can be realized", "by a network with 4 channels. For each number of channels we trained a", "convolutional network 100 times and averaged the results. In each run we sampled a new training set", "and new initialization of the weights according to a gaussian distribution with mean 0 and standard deviation 0.00001. For each number of channels c, we ran gradient", "descent with learning rate 0.04 c and stopped it if it did not improve the cost for 20 consecutive iterations or if it reached 30000 iterations. The last iteration was taken for the calculations", ". We plot both average test error over all 100 runs", "and average test error only over the runs that ended at 0% train error. In this case, for each number of channels 4, 6, 8", ", 20, 50, 100 ,200 the number of runs in which gradient descent converged to a 0% train error solution is 62, 79, 94, 100, 100, 100, 100, respectively. Figure 5 shows that setting γ = 5 gives better performance", "than setting γ = 1 in the XORD problem. The setting is similar to the setting of Section 8.2.1. Each", "point is an average test error of 100 runs. . Because the", "result is a lower bound, it is desirable to understand", "the behaviour of gradient descent for values outside these ranges. In Figure 6 we empirically show that for values outside these ranges, there is a generalization gap between gradient descent for k = 2 and gradient descent for larger k." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19999998807907104, 0.13793103396892548, 0.307692289352417, 0.29411762952804565, 0.4444444477558136, 0.25641024112701416, 0.07999999821186066, 0.21052631735801697, 0.23333333432674408, 0.1904761791229248, 0.17391303181648254, 0.21621620655059814, 0.12121211737394333, 0.23529411852359772, 0.10256409645080566, 0.2631579041481018, 0.1666666567325592, 0.12244897335767746, 0.060606054961681366, 0.25, 0.25, 0.19354838132858276, 0.19999998807907104, 0.0714285671710968, 0.08695651590824127, 0.14814814925193787, 0.06451612710952759, 0, 0.25806450843811035, 0.07407406717538834, 0.24242423474788666, 0.20408162474632263, 0.28070175647735596, 0.10526315122842789, 0.11764705181121826, 0.307692289352417, 0.3404255211353302, 0.5106382966041565, 0.10256409645080566, 0.16326530277729034, 0.2448979616165161, 0.1764705777168274, 0.1764705777168274, 0.27272728085517883, 0.052631575614213943, 0.1249999925494194, 0.2857142686843872, 0.13793103396892548, 0.1395348757505417, 0.0624999962747097, 0.2666666507720947, 0.05714285373687744, 0.1860465109348297, 0.12765957415103912, 0.0714285671710968, 0.0952380895614624, 0.25925925374031067, 0.1666666567325592, 0.13333332538604736, 0.14814814925193787, 0.3404255211353302 ]
HyGLy2RqtQ
true
[ "We show in a simplified learning task that over-parameterization improves generalization of a convnet that is trained with gradient descent." ]
[ "We introduce a new and rigorously-formulated PAC-Bayes few-shot meta-learning algorithm that implicitly learns a model prior distribution of interest.", "Our proposed method extends the PAC-Bayes framework from a single task setting to the few-shot meta-learning setting to upper-bound generalisation errors on unseen tasks.", "We also propose a generative-based approach to model the shared prior and task-specific posterior more expressively compared to the usual diagonal Gaussian assumption.", "We show that the models trained with our proposed meta-learning algorithm are well calibrated and accurate, with state-of-the-art calibration and classification results on mini-ImageNet benchmark, and competitive results in a multi-modal task-distribution regression.", "One unique ability of humans is to be able to quickly learn new tasks with only a few training examples.", "This is due to the fact that humans tend to exploit prior experience to facilitate the learning of new tasks.", "Such exploitation is markedly different from conventional machine learning approaches, where no prior knowledge (e.g. training from scratch with random initialisation) (Glorot & Bengio, 2010) , or weak prior knowledge (e.g., fine tuning from pre-trained models) (Rosenstein et al., 2005) are used when encountering an unseen task for training.", "This motivates the development of novel learning algorithms that can effectively encode the knowledge learnt from training tasks, and exploit that knowledge to quickly adapt to future tasks (Lake et al., 2015) .", "Prior knowledge can be helpful for future learning only if all tasks are assumed to be distributed according to a latent task distribution.", "Learning this latent distribution is, therefore, useful for solving an unseen task, even if the task contains a limited number of training samples.", "Many approaches have been proposed and developed to achieve this goal, namely: multi-task learning (Caruana, 1997) , domain adaptation (Bridle & Cox, 1991; Ben-David et al., 2010) and meta-learning (Schmidhuber, 1987; Thrun & Pratt, 1998) .", "Among these, meta-learning has flourished as one of the most effective methods due to its ability to leverage the knowledge learnt from many training tasks to quickly adapt to unseen tasks.", "Recent advances in meta-learning have produced state-of-the-art results in many benchmarks of few-shot learning data sets (Santoro et al., 2016; Ravi & Larochelle, 2017; Munkhdalai & Yu, 2017; Snell et al., 2017; Finn et al., 2017; Rusu et al., 2019) .", "Learning from a few examples is often difficult and easily leads to over-fitting, especially when no model uncertainty is taken into account.", "This issue has been addressed by several recent Bayesian meta-learning approaches that incorporate model uncertainty into prediction, notably LLAMA that is based on Laplace method (Grant et al., 2018) , or PLATIPUS (Finn et al., 2017) , Amortised Meta-learner (Ravi & Beatson, 2019) and VERSA (Gordon et al., 2019 ) that use variational inference (VI).", "However, these works have not thoroughly investigated the generalisation errors for unseen samples, resulting in limited theoretical generalisation guarantees.", "Moreover, most of these papers are based on variational functions that may not represent well the richness of the underlying distributions.", "For instance, a common choice for the variational function relies on the diagonal Gaussian distribution, which can potentially worsen the prediction accuracy given its limited representability.", "In this 
paper, we address the two problems listed above with the following technical novelties:", "(i) derivation of a rigorous upper-bound for the generalisation errors of few-shot meta-learning using PAC-Bayes framework, and", "(ii) proposal of a novel variational Bayesian learning based on implicit", "The few-shot meta-learning problem is modelled using a hierarchical model that learns a prior p(w i ; θ) using a few data points s", "ij )}.", "Shaded nodes denote observed variables, while white nodes denote hidden variables.", "generative models to facilitate the learning of unseen tasks.", "Our evaluation shows that the models trained with our proposed meta-learning algorithm is at the same time well calibrated and accurate, with competitive results in terms of Expected Calibration Error (ECE) and Maximum Calibration Error (MCE), while outperforming state-of-the-art methods in a few-shot classification benchmark (mini-ImageNet).", "We introduce and formulate a new Bayesian algorithm for few-shot meta-learning.", "The proposed algorithm, SImBa, is based on PAC-Bayes framework which theoretically guarantees prediction generalisation on unseen tasks.", "In addition, the proposed method employs a generative approach that implicitly models the shared prior p(w i ; θ) and task-specific posterior q(w i ; λ i ), resulting in more expressive variational approximation compared to the usual diagonal Gaussian methods, such as PLATIPUS or Amortised Meta-learner (Ravi & Beatson, 2019) .", "The uncertainty, in the form of the learnt implicit distributions, can introduce more variability into the decision made by the model, resulting in well-calibrated and highly-accurate prediction.", "The algorithm can be combined with different base models that are trainable with gradient-based optimisation, and is applicable in regression and classification.", "We demonstrate that the algorithm can make reasonable predictions about unseen data in a multi-modal 5-shot learning regression problem, and achieve state-of-the-art calibration and classification results with on few-shot 5-way tasks on mini-ImageNet data set." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.29629629850387573, 0.20000000298023224, 0.13333332538604736, 0.10526315122842789, 0, 0.07692307233810425, 0.037735845893621445, 0.052631575614213943, 0, 0, 0.0952380895614624, 0.05714285373687744, 0.05128204822540283, 0.06666666269302368, 0.10526315867900848, 0, 0.0714285671710968, 0, 0, 0.3199999928474426, 0.19999998807907104, 0.20000000298023224, 0, 0, 0.08163265138864517, 0.29999998211860657, 0.1599999964237213, 0.072727270424366, 0.1249999925494194, 0.06896550953388214, 0.04878048598766327 ]
SkgYJaEFwS
true
[ "Bayesian meta-learning using PAC-Bayes framework and implicit prior distributions" ]
[ "As the area of Explainable AI (XAI), and Explainable AI Planning (XAIP), matures, the ability for agents to generate and curate explanations will likewise grow.", "We propose a new challenge area in the form of rebellious and deceptive explanations.", "We discuss how these explanations might be generated and then briefly discuss evaluation criteria.", "Explanations as a research area in AI (XAI) has been around for several decades BID7 BID5 BID10 BID45 BID12 BID40 BID44 BID28 .", "It has additionally gained momentum recently as evidenced by the increasing number of workshops and special tracks covering it in various conferences (e.g., VIS-xAI, FEAPAI4Fin, XAIP, XAI, OXAI, MAKE-eXAI, ICCBR-19 Focus area).While", "still growing in use, there have been some approaches to formalizing XAI. BID11", "stated that anything calling itself XAI should address the following questions:• Why did the agent do that and not something else?• When", "does the agent succeed and when does it fail?• When", "can I trust the agent?However", ", less thought out is the idea of explanations that are deceptive or rebellious in nature. These", "forms of explanation can be an entirely new area of discussion and use for certain autonomous agents.The study of deception and rebellion are both rich fields, and many aspects of both that are studied in civilian and military capacities. For example", ", the area of deception detection works on finding ways to detect inconsistencies BID41 BID22 BID2 . BID17 discuss", "a number of ways why deception is an important topic for autonomous agents.Studies of rebellion and resistance have investigated how, why, when it does, and doesn't, happen (Martí and Copyright c 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved", ". BID24 BID33 . The", "use of both has", "also been studied BID34 BID1 BID21 BID18 BID30 .The idea of pairing", "deception and rebellion with explanations may not be intuitive initially. However, in addition", "to being areas of rich study, deception and rebellion offer key conditions that are of interest to agent reasoning. Predominately, it requires", "multiple actors (i.e., An actor deceives another actor, or an actor rebels against a coordinator). Additionally, there needs", "to be some sort of conflict or misalignment between the actors. Either something needs to", "be in contention for an actor to rebel, or something needs to be in conflict for the actor to use deception. b Rebellion in agents has", "been a growing area of interest BID9 BID3 BID8 . This area is focused on finding", "models in which agents can rebel from directives given in certain circumstances. This can include having more upto-date", "knowledge that would affect the plan, finding opportunities to exploit but may be off-mission, or solving problems or roadblocks before they become an issue even if it is off-mission. discuss three ways in which rebellion", "could manifest in agents. The expression of a rebellion can consist", "of either an explicit or implicit act. The focus is either inward or outward facing", ". Lastly, the interaction initiation can either", "be reactive or proactive.Deception in agents has been progressing over the last decade, with many discussions on formalizing deception. The majority of this formalism is on the topic", "of lying BID42 BID38 BID43 . There has also been inroads for more encompassing", "deception as described by BID39 and BID37 . 
Of interest here, BID37 defined Quantitative & Qualitative", "Maxims for Dishonesty as the following maxims:1. Lie, Bullshit (BS), or withhold information as little as possible", "to achieve your objective.2. Never lie if you can achieve your objective by BS.3. Never lie nor", "BS if you can achieve your objective by withholding", "Information.4. Never lie, BS, nor withhold information if you can achieve your objective", "with a half-truth. A particular topic that has received attention is deceptive, or dishonest", ", agents in negotiations BID31 BID47 .With these concepts in mind, we will pursue research to answer the following", ":What kind of reasoning models are required to generate explanations of a deceptive or rebellious nature?" ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2666666507720947, 0.3478260934352875, 0.1818181723356247, 0.06451612710952759, 0.045454543083906174, 0, 0.06896550953388214, 0.1111111044883728, 0, 0.23076923191547394, 0.1395348757505417, 0, 0.11999999731779099, 0, 0, 0, 0.1818181723356247, 0.06896550953388214, 0, 0, 0.1428571343421936, 0, 0.07999999821186066, 0, 0.09999999403953552, 0, 0, 0.0555555522441864, 0.09090908616781235, 0.08695651590824127, 0.0833333283662796, 0, 0, 0, 0, 0.07407406717538834, 0.25 ]
Bkxj7a2Q5E
true
[ "Position paper proposing rebellious and deceptive explanations for agents." ]
[ "We investigate a variant of variational autoencoders where there is a superstructure of discrete latent variables on top of the latent features.", "In general, our superstructure is a tree structure of multiple super latent variables and it is automatically learned from data.", "When there is only one latent variable in the superstructure, our model reduces to one that assumes the latent features to be generated from a Gaussian mixture model.", "We call our model the latent tree variational autoencoder (LTVAE).", "Whereas previous deep learning methods for clustering produce only one partition of data, LTVAE produces multiple partitions of data, each being given by one super latent variable.", "This is desirable because high dimensional data usually have many different natural facets and can be meaningfully partitioned in multiple ways.", "Clustering is a fundamental task in unsupervised machine learning, and it is central to many datadriven application domains.", "Cluster analysis partitions all the data into disjoint groups, and one can understand the structure of the data by examining examples in each group.", "Many clustering methods have been proposed in the literature BID0 , such as k-means BID18 , Gaussian mixture models BID5 and spectral clustering BID30 .", "Conventional clustering methods are generally applied directly on the original data space.", "However, it is challenging to perform cluster analysis on high dimensional and unstructured data BID26 , such as images.", "It is not only because the dimensionality is high, but also because the original data space is too complex to interpret, e.g. there are semantic gaps between pixel values and objects in images.Recently, deep learning based clustering methods have been proposed that simultanously learn nonlinear embeddings through deep neural networks and perform cluster analysis on the embedding space.", "The representation learning process learns effective high-level representations from high dimensional data and helps the cluster analysis.", "This is typically achieved by unsupervised deep learning methods, such as restricted Boltzmann machine (RBM) BID11 , autoencoders (AE) BID28 , variational autoencoders (VAE) BID16 , etc.", "Previous deep learning based clustering methods BID33 BID10 BID14 BID34 ) assume one single partition over the data and that all attributes define that partition.", "In real-world applications, however, the assumptions are usually not true.", "High-dimensional data are often multifaceted and can be meaningfully partitioned in multiple ways based on subsets of attributes BID4 .", "For example, a student population can be clustered in one way based on course grades and in another way based on extracurricular activities.", "Movie reviews can be clustered based on both sentiment (positive or negative) and genre (comedy, action, war, etc.) .", "It is challenging to discover the multi-facet structures of data, especially for high-dimensional data.To resolve the above issues, we propose an unsupervised learning method, latent tree variational autoencoder (LTVAE) to learn latent superstructures in variational autoencoders, and simultaneously perform representation learning and structure learning.", "LTVAE is a generative model, where the data is assumed to be generated from latent features through neural networks, while the latent features themselves are generated from tree-structured Bayesian networks with another level of latent variables as shown in Fig. 
1 .", "Each of those latent variables defines a facet of clustering.", "The proposed method automatically selects subsets of latent features for each facet, and learns the dependency structure among different facets.", "This is achieved through systematic structure learning.", "Consequently, LTVAE is able to discover complex structures of data rather than one partition.", "We also propose efficient learning algorithms for LTVAE with gradient descent and Stepwise EM through message passing.The rest of the paper is organized as follows.", "The related works are reviewed in Section", "2. We introduce the proposed method and learning algorithms in Section", "3. In Section 4, we present the empirical results.", "The conclusion is given in Section 5.", "LTVAE learns the dependencies among latent variables Y. In general, latent variables are often correlated.", "For example, the social skills and academic skills of a student are generally correlated.", "Therefore, its better to model this relationship to better fit the data.", "Experiments show that removing such dependencies in LTVAE models results in inferior data loglikelihood.In this paper, for the inference network, we simply use mean-field inference network with same structure as the generative network BID16 .", "However, the limited expressiveness of the mean-field inference network could restrict the learning in the generative network and the quality of the learned model BID31 BID6 .", "Using a faithful inference network structure as in BID31 to incorporate the dependencies among latent variables in the posterior, for example one parameterized with masked autoencoder distribution estimator (MADE) model BID8 , could have a significant improvement in learning.", "We leave it for future investigation.", "In this paper, we propose an unsupervised learning method, latent tree variational autoencoder (LT-VAE), which simultaneously performs representation learning and multidimensional clustering.", "Different from previous deep learning based clustering methods, LTVAE learns latent embeddings from data and discovers multi-facet clustering structure based on subsets of latent features rather than one partition over data.", "Experiments show that the proposed method achieves state-of-the-art clustering performance and reals reasonable multifacet structures of the data." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 1, 0.3243243098258972, 0.2926829159259796, 0.2857142686843872, 0.0952380895614624, 0.051282044500112534, 0.11428570747375488, 0.10256409645080566, 0.04999999329447746, 0.13333332538604736, 0.10810810327529907, 0.11428570747375488, 0.05714285373687744, 0.1428571343421936, 0.04878048226237297, 0.0714285671710968, 0.10810810327529907, 0.10810810327529907, 0.05405404791235924, 0.178571417927742, 0.307692289352417, 0.29629629850387573, 0.21052631735801697, 0.07999999821186066, 0.1249999925494194, 0.1818181723356247, 0, 0.13793103396892548, 0.07407406717538834, 0.07999999821186066, 0.19354838132858276, 0.19354838132858276, 0.0714285671710968, 0.04081632196903229, 0.10810810327529907, 0.15094339847564697, 0.0833333283662796, 0.10256409645080566, 0.1818181723356247, 0.11428570747375488 ]
SJgNwi09Km
true
[ "We investigate a variant of variational autoencoders where there is a superstructure of discrete latent variables on top of the latent features." ]
[ "Many practical robot locomotion tasks require agents to use control policies that can be parameterized by goals.", "Popular deep reinforcement learning approaches in this direction involve learning goal-conditioned policies or value functions, or Inverse Dynamics Models (IDMs).", "IDMs map an agent’s current state and desired goal to the required actions.", "We show that the key to achieving good performance with IDMs lies in learning the information shared between equivalent experiences, so that they can be generalized to unseen scenarios.", "We design a training process that guides the learning of latent representations to encode this shared information.", "Using a limited number of environment interactions, our agent is able to efficiently navigate to arbitrary points in the goal space.", "We demonstrate the effectiveness of our approach in high-dimensional locomotion environments such as the Mujoco Ant, PyBullet Humanoid, and PyBullet Minitaur.", "We provide quantitative and qualitative results to show that our method clearly outperforms competing baseline approaches.", "In reinforcement learning (RL), an agent optimizes its behaviour to maximize a specific reward function that encodes tasks such as moving forward or reaching a target.", "After training, the agent simply executes the learned policy from its initial state until termination.", "In practical settings in robotics, however, control policies are invoked at the lowest level of a larger system by higher-level components such as perception and planning units.", "In such systems, agents have to follow a dynamic sequence of intermediate waypoints, instead of following a single policy until the goal is achieved.", "A typical approach to achieving goal-directed motion using RL involves learning goal-conditioned policies or value functions (Schaul et al. (2015) ).", "The key idea is to learn a function conditioned on a combination of the state and goal by sampling goals during the training process.", "However, this approach requires a large number of training samples, and does not leverage waypoints provided by efficient planning algorithms.", "Thus, it is desirable to learn models that can compute actions to transition effectively between waypoints.", "A popular class of such models is called Inverse Dynamics Model (IDM) (Christiano et al. (2016) ; Pathak et al. (2017) ).", "IDMs typically map the current state (or a history of states and actions) and the goal state, to the action.", "In this paper, we address the need of an efficient control module by learning a generalized IDM that can achieve goal-direction motion by leveraging data collected while training a state-of-the-art RL algorithm.", "We do not require full information of the goal state, or a history of previous states to learn the IDM.", "We learn on a reduced goal space, such as 3-D positions to which the agent must learn to navigate.", "Thus, given just the intermediate 3-D positions, or waypoints, our agent can navigate to the goal, without requiring any additional information about the intermediate states.", "The basic framework of the IDM is shown in Fig. 
1 .", "The unique aspect of our algorithm is that we eliminate the need to randomly sample goals during training.", "Instead, we exploit the known symmetries/equivalences of the system (as is common in many robotics settings) to guide the collection of generalized experiences during training.", "We propose a class of algorithms that utilize the property of equivalence between transitions modulo the difference in a fixed set of attributes.", "In the locomotion setting, the agent's transitions are symmetric under translations and rotations.", "We capture this symmetry by defining equivalence modulo orientation among experiences.", "We use this notion of equivalence to guide the training of latent representations shared by these experiences and provide them as input to the IDM to produce the desired actions, as shown in Fig. 4 .", "A common challenge faced by agents trained using RL techniques is lack of generalization capability.", "The standard way of training produces policies that work very well on the states encountered by the agent during training, but often fail on unseen states.", "Achieving good performance using IDMs requires both these components: collecting generalized experiences, and learning these latent representations, as we demonstrate in Section 6.", "Our model exhibits high sample efficiency and superior performance, in comparison to other methods involving sampling goals during training.", "We demonstrate the effectiveness of our approach in the Mujoco Ant environment (Todorov et al. (2012) ) in OpenAI Gym (Brockman et al. (2016) ), and the Minitaur and Humanoid environments in PyBullet (Coumans & Bai (2016) ).", "From a limited number of experiences collected during training under a single reward function of going in one direction, our generalized IDM succeeds at navigating to arbitrary goal positions in the 3-D space.", "We measure performance by calculating the closest distance to the goal an agent achieves.", "We perform ablation experiments to show that (1) collecting generalized experience in the form of equivalent input pairs boosts performance over all baselines, (2) these equivalent input pairs can be condensed into a latent representation that encodes relevant information, and (3) learning this latent representation is in fact critical to success of our algorithm.", "Details of experiments and analysis of results can be found in Sections 5 and 6.", "We propose a new algorithm to achieve goal-directed motion for a variety of locomotion agents by learning the inverse dynamics model on shared latent representations for equivalent experiences.", "To this end, we take three important steps: (1) We utilize the experience collected by our agent while training a standard reinforcement learning algorithm, so that our IDM has \"good\" samples in which the agent walks reasonably well.", "(2) We generalize this experience by modifying the initial configuration for each observed trajectory in the collected data, and generate the equivalent trajectories.", "(3) We learn the important shared information between such symmetric pairs of experience samples through a latent representation that is used as an input to the IDM to produce the action required for the agent to reach the goal.", "We provide extensive qualitative and quantitative evidence to show that our methods surpass existing methods to achieve generalization over unseen parts of the state space.", "(Brockman et al., 2016) , and Humanoid and Minitaur environments in PyBullet (Coumans & Bai, 2016) .", "In each of these 
environments, for our methods, the agent is trained to perform well on the 3-D locomotion task in one direction.", "For the baseline methods, the agent is trained to reach goals generated in different 3-D positions.", "The reward function for collecting data for RL, GE and LR includes contact and control costs, and rewards for moving forward.", "The reward function for training VGCP includes contact and control costs, but instead of moving forward, the agent is encouraged to move in the direction of the goal.", "In the case of HER-Sparse, the agent only receives a reward 0 if it reaches the goal, and -1 otherwise.", "For HER-Dense, the agent receives a weighted sum of distance to the goal and contact and control costs as its reward.", "Humanoid Humanoid is a bipedal robot, with a 43-D state space and 17-D action space.", "The state contains information about the agent's height, orientation (yaw, pitch, roll), 3-D velocity, and the 3-D relative positions of each of its joints (knees, shoulders, etc.).", "The action consists of the torques at these joints.", "Minitaur Minitaur is a quadrupedal robot, with a 17-D state space and 8-D action space.", "The state contains information about motor angles, torques, velocities, and the orientation of the base.", "The action consists of the torque to be applied at each joint.", "Ant Ant is a quadrupedal robot, with a 111-D state space and 8-D action space.", "The state space contains the agent's height, 3-D linear and angular velocity, joint velocities, joint angles, and the external forces at each link.", "The action consists of the torques at all the 8 joints.", "Our choice of these locomotion environments is driven by the motivation provided earlier: we want an agent to navigate to a desired goal position in the 3-D space.", "In view of this, we do not use 2-D locomotion environments like Half-Cheetah, Walker, or Hopper.", "Also, though Reacher and Pusher are goal-based environments, we do not use them as they are manipulation environments, and we are aiming to achieve navigation by agents trained on 3-D locomotion tasks." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17391303181648254, 0.08510638028383255, 0.1428571343421936, 0.9454545378684998, 0.43478259444236755, 0.12244897335767746, 0.1249999925494194, 0.17777776718139648, 0.1111111044883728, 0.04651162400841713, 0.0714285671710968, 0.07843136787414551, 0.11999999731779099, 0.11764705181121826, 0, 0.1818181723356247, 0, 0.1304347813129425, 0.16949151456356049, 0.1702127605676651, 0.1304347813129425, 0.15686273574829102, 0.09999999403953552, 0.12765957415103912, 0.15686273574829102, 0.2083333283662796, 0.04878048226237297, 0.04999999701976776, 0.24137930572032928, 0, 0.11538460850715637, 0.31372547149658203, 0.0833333283662796, 0.10344827175140381, 0.1355932205915451, 0.1904761791229248, 0.3513513505458832, 0.1428571343421936, 0.290909081697464, 0.1875, 0.1599999964237213, 0.25806450843811035, 0.23076923191547394, 0.04651162400841713, 0.11764705181121826, 0.13636362552642822, 0, 0.1111111044883728, 0.04255318641662598, 0.0833333283662796, 0.04878048226237297, 0.07407406717538834, 0.052631575614213943, 0.04878048226237297, 0.09302324801683426, 0.1463414579629898, 0.04878048226237297, 0.04081632196903229, 0.05128204822540283, 0.1090909019112587, 0, 0.0714285671710968 ]
HylloR4YDr
true
[ "We show that the key to achieving good performance with IDMs lies in learning latent representations to encode the information shared between equivalent experiences, so that they can be generalized to unseen scenarios." ]
[ "In this paper, we first identify \\textit{angle bias}, a simple but remarkable phenomenon that causes the vanishing gradient problem in a multilayer perceptron (MLP) with sigmoid activation functions.", "We then propose \\textit{linearly constrained weights (LCW)} to reduce the angle bias in a neural network, so as to train the network under the constraints that the sum of the elements of each weight vector is zero.", "A reparameterization technique is presented to efficiently train a model with LCW by embedding the constraints on weight vectors into the structure of the network.", "Interestingly, batch normalization (Ioffe & Szegedy, 2015) can be viewed as a mechanism to correct angle bias.", "Preliminary experiments show that LCW helps train a 100-layered MLP more efficiently than does batch normalization.", "Neural networks with a single hidden layer have been shown to be universal approximators BID6 BID8 .", "However, an exponential number of neurons may be necessary to approximate complex functions.", "A solution to this problem is to use more hidden layers.", "The representation power of a network increases exponentially with the addition of layers BID17 BID2 .", "A major obstacle in training deep nets, that is, neural networks with many hidden layers, is the vanishing gradient problem.", "Various techniques have been proposed for training deep nets, such as layer-wise pretraining BID5 , rectified linear units BID13 BID9 , variance-preserving initialization BID3 , and normalization layers BID7 BID4 .In", "this paper, we first identify the angle bias that arises in the dot product of a nonzero vector and a random vector. The", "mean of the dot product depends on the angle between the nonzero vector and the mean vector of the random vector. We", "show that this simple phenomenon is a key cause of the vanishing gradient in a multilayer perceptron (MLP) with sigmoid activation functions. We", "then propose the use of so-called linearly constrained weights (LCW) to reduce the angle bias in a neural network. LCW", "is a weight vector subject to the constraint that the sum of its elements is zero. A reparameterization", "technique is presented to embed the constraints on weight vectors into the structure of a neural network. This enables us to train", "a neural network with LCW by using optimization solvers for unconstrained problems, such as stochastic gradient descent. Preliminary experiments", "show that we can train a 100-layered MLP with sigmoid activation functions by reducing the angle bias in the network. Interestingly, batch normalization", "BID7 can be viewed as a mechanism to correct angle bias in a neural network, although it was originally developed to overcome another problem, that is, the internal covariate shift problem. Preliminary experiments suggest that", "LCW helps train deep MLPs more efficiently than does batch normalization.In Section 2, we define angle bias and discuss its relation to the vanishing gradient problem. In Section 3, we propose LCW as an approach", "to reduce angle bias in a neural network. We also present a reparameterization technique", "to efficiently train a model with LCW and an initialization method for LCW. In Section 4, we review related work; mainly,", "we examine existing normalization techniques from the viewpoint of reducing the angle bias. In Section 5, we present empirical results that", "show that it is possible to efficiently train a 100-layered MLP by reducing the angle bias using LCW. 
Finally, we conclude with a discussion of future", "works.", "In this paper, we have first identified the angle bias that arises in the dot product of a nonzero vector and a random vector.", "The mean of the dot product depends on the angle between the nonzero vector and the mean vector of the random vector.", "In a neural network, the preactivation value of a neuron is biased depending on the angle between the weight vector of the neuron and the mean of the activation vector in the previous layer.", "We have shown that such biases cause a vanishing gradient in a neural network with sigmoid activation functions.", "To overcome this problem, we have proposed linearly constrained weights to reduce the angle bias in a neural network; these can be learned efficiently by the reparameterization technique.", "Preliminary experiments suggest that reducing the angle bias is essential to train deep MLPs with sigmoid activation functions." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3404255211353302, 0.3529411852359772, 0.09302324801683426, 0.1621621549129486, 0.0555555522441864, 0.0555555522441864, 0.12121211737394333, 0.13333332538604736, 0.05882352590560913, 0.3499999940395355, 0.08163265138864517, 0.3499999940395355, 0.23529411852359772, 0.2857142686843872, 0.3589743673801422, 0.1666666567325592, 0.09999999403953552, 0.051282044500112534, 0.2380952388048172, 0.26923075318336487, 0.42307692766189575, 0.3636363446712494, 0.19999998807907104, 0.20512819290161133, 0.2222222238779068, 0.2926829159259796, 0.1764705777168274, 0.1860465109348297, 0.2702702581882477, 0.25531914830207825, 0.31578946113586426 ]
HylgYB3pZ
true
[ "We identify angle bias that causes the vanishing gradient problem in deep nets and propose an efficient method to reduce the bias." ]
[ "Markov Logic Networks (MLNs), which elegantly combine logic rules and probabilistic graphical models, can be used to address many knowledge graph problems.", "However, inference in MLN is computationally intensive, making the industrial-scale application of MLN very difficult.", "In recent years, graph neural networks (GNNs) have emerged as efficient and effective tools for large-scale graph problems.", "Nevertheless, GNNs do not explicitly incorporate prior logic rules into the models, and may require many labeled examples for a target task.", "In this paper, we explore the combination of MLNs and GNNs, and use graph neural networks for variational inference in MLN.", "We propose a GNN variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.", "Our extensive experiments on several benchmark datasets demonstrate that ExpressGNN leads to effective and efficient probabilistic logic reasoning.", "Knowledge graphs collect and organize relations and attributes about entities, which are playing an increasingly important role in many applications, including question answering and information retrieval.", "Since knowledge graphs may contain incorrect, incomplete or duplicated records, additional processing such as link prediction, attribute classification, and record de-duplication is typically needed to improve the quality of knowledge graphs and derive new facts.", "Markov Logic Networks (MLNs) were proposed to combine hard logic rules and probabilistic graphical models, which can be applied to various tasks on knowledge graphs (Richardson & Domingos, 2006) .", "The logic rules incorporate prior knowledge and allow MLNs to generalize in tasks with small amount of labeled data, while the graphical model formalism provides a principled framework for dealing with uncertainty in data.", "However, inference in MLN is computationally intensive, typically exponential in the number of entities, limiting the real-world application of MLN.", "Graph neural networks (GNNs) have recently gained increasing popularity for addressing many graph related problems effectively (Dai et al., 2016; Li et al., 2016; Kipf & Welling, 2017; Schlichtkrull et al., 2018) .", "However, the design and training procedure of GNNs do not explicitly take into account the prior knowledge in the form of logic rules.", "To achieve good performance, these models typically require sufficient labeled instances on specific end tasks (Xiong et al., 2018) .", "In this paper, we explore the combination of the best of both worlds, aiming for a method which is data-driven yet can exploit the prior knowledge encoded in logic rules.", "To this end, we design a simple variant of graph neural networks, named ExpressGNN, which can be efficiently trained in the variational EM framework for MLN.", "An overview of our method is illustrated in Fig. 
1 .", "ExpressGNN and the corresponding reasoning framework lead to the following desiderata:", "• Efficient inference and learning: ExpressGNN can be viewed as the inference network for MLN, which scales up MLN inference to much larger knowledge graph problems.", "• Combining logic rules and data supervision: ExpressGNN can leverage the prior knowledge encoded in logic rules, as well as the supervision from labeled data.", "• Compact and expressive model: ExpressGNN may have small number of parameters, yet it is sufficient to represent mean-field distributions in MLN.", "This paper studies the probabilistic logic reasoning problem, and proposes ExpressGNN to combine the advantages of Markov Logic Networks in logic reasoning and graph neural networks in graph representation learning.", "ExpressGNN addresses the scalability issue of Markov Logic Networks with efficient stochastic training in the variational EM framework.", "ExpressGNN employs GNNs to capture the structure knowledge that is implicitly encoded in the knowledge graph, which serves as supplement to the knowledge from logic formulae.", "ExpressGNN is a general framework that can trade-off the model compactness and expressiveness by tuning the dimensionality of the GNN and the embedding part.", "Extensive experiments on multiple benchmark datasets demonstrates the effectiveness and efficiency of ExpressGNN." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.24390242993831635, 0.24242423474788666, 0.3333333134651184, 0.1463414579629898, 0.5128204822540283, 0.21052631735801697, 0.10810810327529907, 0.09302324801683426, 0.11764705181121826, 0.1702127605676651, 0.23529411852359772, 0.22857142984867096, 0.1702127605676651, 0.20512819290161133, 0, 0.17391303181648254, 0.4000000059604645, 0.13793103396892548, 0.20689654350280762, 0.23255813121795654, 0.14999999105930328, 0.1463414579629898, 0.5116279125213623, 0.555555522441864, 0.09999999403953552, 0.20512819290161133, 0.1875 ]
rJg76kStwH
true
[ "We employ graph neural networks in the variational EM framework for efficient inference and learning of Markov Logic Networks." ]
[ "Reinforcement learning (RL) methods achieved major advances in multiple tasks surpassing human performance.", "However, most of RL strategies show a certain degree of weakness and may become computationally intractable when dealing with high-dimensional and non-stationary environments.", "In this paper, we build a meta-reinforcement learning (MRL) method embedding an adaptive neural network (NN) controller for efficient policy iteration in changing task conditions.", "Our main goal is to extend RL application to the challenging task of urban autonomous driving in CARLA simulator.", "\"Every living organism interacts with its environment and uses those interactions to improve its own actions in order to survive and increase\"", "BID13 .", "Inspired from animal behaviorist psychology, reinforcement learning (RL) is widely used in artificial intelligence research and refers to goal-oriented optimization driven by an impact response or signal BID30 .", "Properly formalized and converted into practical approaches BID9 , RL algorithms have recently achieved major progress in many fields as games BID18 BID28 and advanced robotic manipulations BID12 BID17 beating human performance.", "However, and despite several years of research and evolution, most of RL strategies show a certain degree of weakness and may become computationally intractable when dealing with high-dimensional and non-stationary environments BID34 .", "More specifically, the industrial application of autonomous driving in which we are interested in this work, remains a highly challenging \"unsolved problem\" more than one decade after the promising 2007 DARPA Urban Challenge BID2 ).", "The origin of its complexity lies in the large variability inherent to driving task arising from the uncertainty of human behavior, diversity of driving styles and complexity of scene perception.An interpretation of the observed vulnerability due to learning environment changes has been provided in contextaware (dependence) research assuming that \"concepts in the real world are not eternally fixed entities or structures, but can have a different appearance or definition or meaning in different contexts\" BID36 .", "There are several tasks that require context-aware adaptation like weather forecast with season or geography, speech recognition with speaker origins and control processes of industrial installations with climate conditions.", "One solution to cope with this variability is to imitate the behavior of human who are more comfortable with learning from little experience and adapting to unexpected perturbations.", "These natural differences compared to machine learning and specifically RL methods are shaping the current research intending to eschew the problem of data inefficiency and improve artificial agents generalization capabilities BID10 .", "Tackling this issue as a multi-task learning problem BID3 , meta-learning has shown promising results and stands as one of the preferred frames to design fast adapting strategies BID25 BID23 .", "It refers to learn-to-learn approaches that aim at training a model on a set of different but linked tasks and subsequently generalize to new cases using few additional examples BID7 .In", "this paper we aim at extending RL application to the challenging task of urban autonomous driving in CARLA simulator. We", "build a meta-reinforcement learning (MRL) method where agent policies behave efficiently and flexibly in changing task conditions. 
We", "consolidate the approach robustness by integrating a neural network (NN) controller that performs a continuous iteration of policy evaluation and improvement. The", "latter allows reducing the variance of the policy-based RL and accelerating its convergence. Before", "embarking with a theoretical modeling of the proposed approach in section 3, we introduce in the next section metalearning background and related work in order to better understand the current issues accompanying its application to RL settings. In the", "last section, we evaluate our method using CARLA simulator and discuss experimental results.", "In this paper we addressed the limits of RL algorithms in solving high-dimensional and complex tasks.", "Built on gradient-based meta-learning, the proposed approach implements a continuous process of policy assessment and improvement using a NN controller.", "Evaluated on the challenging problem of autonomous driving using CARLA simulator, our approach showed higher performance and faster learning capabilities than conventionally pre-trained and randomly initialized RL algorithms.", "Considering this paper as a preliminary attempt to scale up RL approaches to high-dimensional real world applications like autonomous driving, we plan in future work to bring deeper focus on several sides of the approach such as the reward function, CNN architecture and including vehicle characteristics in the tasks complexity setup." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.06896550953388214, 0.10810810327529907, 0.3414634168148041, 0.23529411852359772, 0.11428570747375488, 0.09090908616781235, 0, 0.09302324801683426, 0.12244897335767746, 0.10526315122842789, 0.04651162400841713, 0.1463414579629898, 0.09090908616781235, 0.13333332538604736, 0.08888888359069824, 0.2222222238779068, 0.1764705777168274, 0.2702702581882477, 0, 0.1666666567325592, 0.06896550953388214, 0, 0.17142856121063232, 0.1860465109348297, 0.13114753365516663 ]
S1eoN9rsnN
true
[ "A meta-reinforcement learning approach embedding a neural network controller applied to autonomous driving with Carla simulator." ]
[ "The information bottleneck principle is an elegant and useful approach to representation learning.", "In this paper, we investigate the problem of representation learning in the context of reinforcement learning using the information bottleneck framework, aiming at improving the sample efficiency of the learning algorithms.We analytically derive the optimal conditional distribution of the representation, and provide a variational lower bound.", "Then, we maximize this lower bound with the Stein variational (SV) gradient method. \n", "We incorporate this framework in the advantageous actor critic algorithm (A2C) and the proximal policy optimization algorithm (PPO).", "Our experimental results show that our framework can improve the sample efficiency of vanilla A2C and PPO significantly.", "Finally, we study the information-bottleneck (IB) perspective in deep RL with the algorithm called mutual information neural estimation(MINE).\n", "We experimentally verify that the information extraction-compression process also exists in deep RL and our framework is capable of accelerating this process.", "We also analyze the relationship between MINE and our method, through this relationship, we theoretically derive an algorithm to optimize our IB framework without constructing the lower bound.", "In training a reinforcement learning algorithm, an agent interacts with the environment, explores the (possibly unknown) state space, and learns a policy from the exploration sample data.", "In many cases, such samples are quite expensive to obtain (e.g., requires interactions with the physical environment).", "Hence, improving the sample efficiency of the learning algorithm is a key problem in RL and has been studied extensively in the literature.", "Popular techniques include experience reuse/replay, which leads to powerful off-policy algorithms (e.g., (Mnih et al., 2013; Silver et al., 2014; Van Hasselt et al., 2015; Nachum et al., 2018a; Espeholt et al., 2018 )), and model-based algorithms (e.g., (Hafner et al., 2018; Kaiser et al., 2019) ).", "Moreover, it is known that effective representations can greatly reduce the sample complexity in RL.", "This can be seen from the following motivating example: In the environment of a classical Atari game: Seaquest, it may take dozens of millions samples to converge to an optimal policy when the input states are raw images (more than 28,000 dimensions), while it takes less samples when the inputs are 128-dimension pre-defined RAM data (Sygnowski & Michalewski, 2016) .", "Clearly, the RAM data contain much less redundant information irrelevant to the learning process than the raw images.", "Thus, we argue that an efficient representation is extremely crucial to the sample efficiency.", "In this paper, we try to improve the sample efficiency in RL from the perspective of representation learning using the celebrated information bottleneck framework (Tishby et al., 2000) .", "In standard deep learning, the experiments in (Shwartz-Ziv & Tishby, 2017) show that during the training process, the neural network first \"remembers\" the inputs by increasing the mutual information between the inputs and the representation variables, then compresses the inputs to efficient representation related to the learning task by discarding redundant information from inputs (decreasing the mutual information between inputs and representation variables).", "We call this phenomena \"information extraction-compression process\" \"information extraction-compression process\" \"information 
extraction-compression process\"(information E-C process).", "Our experiments shows that, similar to the results shown in (Shwartz-Ziv & Tishby, 2017) , we first (to the best of our knowledge) observe the information extraction-compression phenomena in the context of deep RL (we need to use MINE (Belghazi et al., 2018) for estimating the mutual information).", "This observation motivates us to adopt the information bottleneck (IB) framework in reinforcement learning, in order to accelerate the extraction-compression process.", "The IB framework is intended to explicitly enforce RL agents to learn an efficient representation, hence improving the sample efficiency, by discarding irrelevant information from raw input data.", "Our technical contributions can be summarized as follows:", "1. We observe that the \"information extraction-compression process\" also exists in the context of deep RL (using MINE (Belghazi et al., 2018) to estimate the mutual information).", "2. We derive the optimization problem of our information bottleneck framework in RL.", "In order to solve the optimization problem, we construct a lower bound and use the Stein variational gradient method developed in (Liu et al., 2017) to optimize the lower bound.", "3. We show that our framework can accelerate the information extraction-compression process.", "Our experimental results also show that combining actor-critic algorithms (such as A2C, PPO) with our framework is more sample-efficient than their original versions.", "4. We analyze the relationship between our framework and MINE, through this relationship, we theoretically derive an algorithm to optimize our IB framework without constructing the lower bound.", "Finally, we note that our IB method is orthogonal to other methods for improving the sample efficiency, and it is an interesting future work to incorporate it in other off-policy and model-based algorithms.", "We study the information bottleneck principle in RL: We propose an optimization problem for learning the representation in RL based on the information-bottleneck framework and derive the optimal form of the target distribution.", "We construct a lower bound and utilize Stein Variational gradient method to optimize it.", "Finally, we verify that the information extraction and compression process also exists in deep RL, and our framework can accelerate this process.", "We also theoretically derive an algorithm based on MINE that can directly optimize our framework and we plan to study it experimentally in the future work.", "According to the assumption, naturally we have:", "Notice that if we use our IB framework in value-based algorithm, then the objective function J π can be defined as:", "where", "and d π is the discounted future state distribution, readers can find detailed definition of d π in the appendix of (Chen et al., 2018) .", "We can get:" ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.37037035822868347, 0.23999999463558197, 0, 0.19999998807907104, 0.1249999925494194, 0.1249999925494194, 0.22857142984867096, 0.14999999105930328, 0.21052631735801697, 0, 0.1764705777168274, 0.04081632196903229, 0.06896550953388214, 0.0312499962747097, 0.13333332538604736, 0.0714285671710968, 0.24390242993831635, 0.145454540848732, 0, 0.072727270424366, 0.3125, 0.1463414579629898, 0, 0.04999999701976776, 0.29629629850387573, 0.09999999403953552, 0.1538461446762085, 0.05405404791235924, 0.1538461446762085, 0.1428571343421936, 0.3414634168148041, 0.0714285671710968, 0.23529411852359772, 0.19999998807907104, 0, 0.11428570747375488, 0.1111111044883728, 0 ]
Syl-xpNtwS
true
[ "Derive an information bottleneck framework in reinforcement learning and some simple relevant theories and tools." ]
[ " A core aspect of human intelligence is the ability to learn new tasks quickly and switch between them flexibly.", "Here, we describe a modular continual reinforcement learning paradigm inspired by these abilities.", "We first introduce a visual interaction environment that allows many types of tasks to be unified in a single framework.", "We then describe a reward map prediction scheme that learns new tasks robustly in the very large state and action spaces required by such an environment.", "We investigate how properties of module architecture influence efficiency of task learning, showing that a module motif incorporating specific design principles (e.g. early bottlenecks, low-order polynomial nonlinearities, and symmetry) significantly outperforms more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance.", "Finally, we present a meta-controller architecture for task switching based on a dynamic neural voting scheme, which allows new modules to use information learned from previously-seen tasks to substantially improve their own learning efficiency.", "In the course of everyday functioning, people are constantly faced with real-world environments in which they are required to shift unpredictably between multiple, sometimes unfamiliar, tasks BID2 .", "They are nonetheless able to flexibly adapt existing decision schemas or build new ones in response to these challenges BID1 .", "How humans support such flexible learning and task switching is largely unknown, both neuroscientifically and algorithmically BID28 BID5 .We", "investigate solving this problem with a neural module approach in which simple, task-specialized decision modules are dynamically allocated on top of a largely-fixed underlying sensory system BID0 BID14 . The", "sensory system computes a general-purpose visual representation from which the decision modules read. While", "this sensory backbone can be large, complex, and learned comparatively slowly with significant amounts of training data, the task modules that deploy information from the base representation must, in contrast, be lightweight, quick to be learned, and easy to switch between. In the", "case of visually-driven tasks, results from neuroscience and computer vision suggest the role of the fixed general purpose visual representation may be played by the ventral visual stream, modeled as a deep convolutional neural network (Yamins & DiCarlo, 2016; BID23 . However", ", the algorithmic basis for how to efficiently learn and dynamically deploy visual decision modules remains far from obvious. The TouchStream", "environment is a touchscreen-like GUI for continual learning agents, in which a spectrum of visual reasoning tasks can be posed in a large but unified action space. On each timestep", ", the environment (cyan box) emits a visual image (xt) and a reward (rt). The agent recieves", "xt and rt as input and emits an action at. The action represents", "a \"touch\" at some location on a two-dimensional screen e.g. at ∈ {0, . . . , H − 1} × {0, . . . , W − 1}, where H and W are the screen height and width. The environment's policy", "is a program computing xt and rt as a function of the agent's action history. The agent's goal is to learn", "how to choose optimal actions to maximize the amount of reward it recieves over time. 
The agent consists of several", "component neural networks including a fixed visual backbone (yellow inset), a set of learned neural modules (grey inset), and a meta-controller (red inset) which mediates the deployment of these learned modules for task solving. The modules use the ReMaP algorithm", "§ 2 to learn how to estimate reward as a function of action (heatmap), conditional on the agent's recent history. Using a sampling policy on this reward", "map, the agent chooses an optimal action to maximize its aggregate reward.In standard supervised learning, it is often assumed that the output space of a problem is prespecified in a manner that just happens to fit the task at hand -e.g. for a classification task, a discrete output with a fixed number of classes might be determined ahead of time, while for a continuous estimation problem, a one-dimensional real-valued target might be chosen instead. This is a very convenient simplification", "in supervised learning or single-task reinforcement learning contexts, but if one is interested in the learning and deployment of decision structures in a rich environment defining tasks with many different natural output types, this simplification becomes cumbersome.To go beyond this limitation, we build a unified environment in which many different tasks are naturally embodied. Specifically, we model an agent interacting", "with a two-dimensional touchscreenlike GUI that we call the TouchStream, in which all tasks (discrete categorization tasks, continuous estimation problems, and many other combinations and variants thereof) can be encoded using a single common and intuitive -albeit large -output space. This choice frees us from having to hand-design", "or programmatically choose between different output domain spaces, but forces us to confront the core challenge of how a naive agent can quickly and emergently learn the implicit \"interfaces\" required to solve different tasks.We then introduce Reward Map Prediction (ReMaP) networks, an algorithm for continual reinforcement learning that is able to discover implicit task-specific interfaces in large action spaces like those of the TouchStream environment. We address two major algorithmic challenges associated", "with learning ReMaP modules. First, what module architectural motifs allow for efficient", "task interface learning? We compare several candidate architectures and show that those", "incorporating certain intuitive design principles (e.g. early visual bottlenecks, low-order polynomial nonlinearities and symmetry-inducing concatenations) significantly outperform more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Second, what system architectures are effective for switching", "between tasks? We present a meta-controller architecture based on a dynamic", "neural voting scheme, allowing new modules to use information learned from previously-seen tasks to substantially improve their own learning efficiency.In § 1 we formalize the TouchStream environment. In § 2, we introduce the ReMaP algorithm. In § 3, we describe", "and evaluate comparative performance of multiple", "ReMaP module architectures on a variety of TouchStream tasks. 
In § 4, we describe the Dynamic Neural Voting meta-controller, and evaluate", "its ability to efficiently transfer knowledge between ReMaP modules on task switches.", "In this work, we introduce the TouchStream environment, a continual reinforcement learning framework that unifies a wide variety of spatial decision-making tasks within a single context.", "We describe a general algorithm (ReMaP) for learning light-weight neural modules that discover implicit task interfaces within this large-action/state-space environment.", "We show that a particular module architecture (EMS) is able to remain compact while retaining high task performance, and thus is especially suitable for flexible task learning and switching.", "We also describe a simple but general dynamic task-switching architecture that shows substantial ability to transfer knowledge when modules for new tasks are learned.A crucial future direction will be to expand insights from the current work into a more complete continual-learning agent.", "We will need to show that our approach scales to handle dozens or hundreds of task switches in sequence.", "We will also need to address issues of how the agent determines when to build a new module and how to consolidate modules when appropriate (e.g. when a series of tasks previously understood as separate can be solved by a single smaller structure).", "It will also be critical to extend our approach to handle visual tasks with longer horizons, such as navigation or game play with extended strategic planning, which will likely require the use of recurrent memory stores as part of the feature encoder.From an application point of view, we are particularly interested in using techniques like those described here to produce agents that can autonomously discover and operate the interfaces present in many important real-world two-dimensional problem domains, such as on smartphones or the internet BID12 .", "We also expect many of the same spatially-informed techniques that enable our ReMaP/EMS modules to perform well in the 2-D TouchStream environment will also transfer naturally to a three-dimensional context, where autonomous robotics applications BID7 Friedemann Zenke, Ben Poole, and Surya Ganguli.", "Improved multitask learning through synaptic intelligence.", "arXiv preprint arXiv:1703.04200, 2017." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0555555522441864, 0.19999998807907104, 0.3333333134651184, 0.23255813121795654, 0.16129031777381897, 0.16326530277729034, 0.09302324801683426, 0.0555555522441864, 0.11428570747375488, 0.2222222238779068, 0.12903225421905518, 0.07407406717538834, 0.1111111044883728, 0.10526315122842789, 0.40909090638160706, 0.1818181723356247, 0.0714285671710968, 0.043478257954120636, 0.17142856121063232, 0.05714285373687744, 0.1249999925494194, 0.14999999105930328, 0.125, 0.1515151411294937, 0.19354838132858276, 0.19512194395065308, 0.20689654350280762, 0.06896550953388214, 0.10169491171836853, 0.14814814925193787, 0.1599999964237213, 0, 0.10526315122842789, 0.06896550953388214, 0.1463414579629898, 0.2702702581882477, 0.23255813121795654, 0.10344827175140381, 0.17142856121063232, 0.15094339847564697, 0.11235954612493515, 0.1428571343421936, 0.08695651590824127, 0 ]
rkPLzgZAZ
true
[ "We propose a neural module approach to continual learning using a unified visual environment with a large action space." ]
[ "Interpretability and small labelled datasets are key issues in the practical application of deep learning, particularly in areas such as medicine.", "In this paper, we present a semi-supervised technique that addresses both these issues simultaneously.", "We learn dense representations from large unlabelled image datasets, then use those representations to both learn classifiers from small labeled sets and generate visual rationales explaining the predictions.", "Using chest radiography diagnosis as a motivating application, we show our method has good generalization ability by learning to represent our chest radiography dataset while training a classifier on an separate set from a different institution.", "Our method identifies heart failure and other thoracic diseases.", "For each prediction, we generate visual rationales for positive classifications by optimizing a latent representation to minimize the probability of disease while constrained by a similarity measure in image space.", "Decoding the resultant latent representation produces an image without apparent disease.", "The difference between the original and the altered image forms an interpretable visual rationale for the algorithm's prediction.", "Our method simultaneously produces visual rationales that compare favourably to previous techniques and a classifier that outperforms the current state-of-the-art.", "Deep learning as applied to medicine has attracted much interest in recent years as a potential solution to many difficult problems in medicine, such as the recognition of diseases on pathology slides or radiology images.", "However, adoption of machine learning algorithms in fields such as medicine relies on the end user being able to understand and trust the algorithm, as incorrect implementation and errors may have significant consequences.", "Hence, there has recently been much interest in interpretability in machine learning as this is a key aspect of implementing machine learning algorithms in practice.", "We propose a novel method of creating visual rationales to help explain individual predictions and explore a specific application to classifying chest radiographs.There are several well-known techniques in the literature for generating visual heatmaps.", "Gradient based methods were first proposed in 2013 described as a saliency map in BID11 , where the derivative of the final class predictions is computed with respect to the input pixels, generating a map of which pixels are considered important.", "However, these saliency maps are often unintelligible as convolutional neural networks tend to be sensitive to almost imperceptible changes in pixel intensities, as demonstrated by recent work in adversarial examples.", "In fact, obtaining the saliency map is often the first step in generating adversarial examples as in BID3 .", "Other recent developments in gradient based methods such as Integrated Gradients from BID12 have introduced fundamental axioms, including the idea of sensitivity which helps focus gradients on relevant features.Occlusion sensitivity proposed by Zeiler & Fergus (2013) is another method which covers parts of the image with a grey box, mapping the resultant change in prediction.", "This produces a heatmap where features important to the final prediction are highlighted as they are occluded.", "Another wellknown method of generating visual heatmaps is global average pooling.", "Using fully convolutional neural networks with a global average pooling layer as described in BID15 , we can examine 
the class activation map for the final convolutional output prior to pooling, providing a low resolution heatmap for activations pertinent to that class.A novel analysis method by BID10 known as locally interpretable model-agnostic explanations (LIME) attempts to explain individual predictions by simulating model predictions in the local neighbourhood around this example.", "Gradient based methods and occlusion sensitivity can also be viewed in this light -attempting to explain each classification by changing individual input pixels or occluding square areas.However, sampling the neighbourhood surrounding an example in raw feature space can often be tricky, especially for image data.", "Image data is extremely complex and high-dimensional -hence real examples are sparsely distributed in pixel space.", "Sampling randomly in all directions around pixel space is likely to produce non-realistic images.", "LIME's solution to this is to use superpixel based algorithms to oversegment images, and to perturb the image by replacing each superpixel by its average value, or a fixed pre-determined value.", "While this produces more plausible looking images as opposed to occlusion or changing individual pixels, it is still sensitive to the parameters and the type of oversegmentation used -as features larger than a superpixel and differences in global statistics may not be represented in the set of perturbed images.", "This difficulty in producing high resolution visual rationales using existing techniques motivates our current research.", "We show in this work that using the generator of a GAN as the decoder of an autoencoder is viable and produces high quality autoencoders.", "The constraints of adversarial training force the generator to produce realistic radiographs for a given latent space, in this case a 100-dimensional space normally distributed around 0 with a standard deviation of 1.This method bears resemblance to previous work done on inverting GANS done by BID2 , although we are not as concerned with recovering the exact latent representation but rather the ability to recreate images from our dataset.", "It is suggested in previous work in BID8 that directly training a encoder to reverse the mapping learnt by the generator in a decoupled fashion does not yield good results as the encoder never sees any real images during training.", "By training upon the loss between the real input and generated output images we overcome this.We further establish the utility of this encoder by using encoded latent representations to predict outcomes on unseen datasets, including one not from our institution.", "We achieve this without retraining our encoder on these unseen datasets, suggesting that the encoder has learnt useful features about chest radiographs in general.Our primary contribution in this paper however is not the inversion of the generator but rather the ability to generate useful visual rationales.", "For each prediction of the model we generate a corresponding visual rationale with a target class different to the original prediction.", "We display some examples of the rationales this method produces and inspect these manually to check if these are similar to our understanding of how to interpret these images.", "The ability to autoencode inputs is essential to our rationale generation although we have not explored in-depth in this paper the effect of different autoencoding algorithms (for instance variational autoencoders) upon the quality of the generated rationales, as our initial 
experiments with variational and vanilla autoencoders were not able to reconstruct the level of detail required.For chest radiographs, common signs of heart failure are an enlarged heart or congested lung fields, which appear as increased opacities in the parts of the image corresponding to the lungs.", "The rationales generated by the normally trained classifier in FIG0 to be consistent with features described in the medical literature while the contaminated classifier is unable to generate these rationales.We also demonstrate the generation of rationales with the MNIST dataset where the digit 9 is transformed into 4 while retaining the appearance of the original digit.", "We can see that the transformation generally removes the upper horizontal line of the 9 to convert this into a 4.", "Interestingly, some digits are not successfully converted.", "Even with different permutations of delta and gamma weights in Algorithm 2 some digits remain resistant to conversion.", "We hypothesize that this may be due to the relative difficulty of the chest radiograph dataset compared to MNIST -leading to the extreme confidence of the MNIST model that some digits are not the target class.", "This may cause vanishingly small gradients in the target class prediction, preventing gradient descent from achieving the target class.We compare the visual rationale generated by our method to various other methods including integrated gradients, saliency maps, occlusion sensitivity as well as LIME in Fig. 6 .All", "of these methods share similarities in that they attempt to perturb the original image to examine the impact of changes in the image on the final prediction, thereby identifying the most salient elements. In", "the saliency map approach, each individual pixel is perturbed, while in the occlusion sensitivity method, squares of the image are perturbed. LIME", "changes individual superpixels in an image by changing all the pixels in a given superpixel to the average value. This", "approach fails on images where the superpixel classification is too coarse, or where the classification is not dependent on high resolution details within the superpixel. To paraphrase", "BID12 , attribution or explanation for humans relies upon counterfactual intuition -or altering the image to remove the cause of the predicted outcome. Model agnostic", "methods such as gradient based methods, while fulfilling the sensitivity and implementation invariance axioms, do not acknowledge the natural structure of the inputs. For instance,", "this often leads to noisy pixel-wise attribution as seen in Fig. 6 . This does not", "fit well with our human intuition as for many images, large continuous objects dominate our perception and we often do not expect attributions to differ drastically between neighbouring pixels.Fundamentally these other approaches suffer from their inability to perturb the image in a realistic fashion, whereas our approach perturbs the image's latent representation, enabling each perturbed image to look realistic as enforced by the GAN's constraints.Under the manifold hypothesis, natural images lie on a low dimensional manifold embedded in pixel space. Our learned latent", "space serves as a approximate but useful coordinate system for the manifold of natural images. 
More specifically", "the image (pardon the pun) of the generator G [R d ] is approximately the set of 'natural images' (in this case radiographs) and small displacements in latent space around a point z closely map into the tangent space of natural images around G(z). Performing optimization", "in latent space is implicitly constraining the solutions to lie on the manifold of natural images, which is why our output images remain realistic while being modified under almost the same objective used for adversarial image generation.Hence, our method differs from these previously described methods as it generates high resolution rationales by switching the predicted class of an input image while observing the constraints of the input structure. This can be targeted at", "particular classes, enabling us answer the question posed to our trained model -'Why does this image represent Class A rather than Class B?'There are obvious limitations in this paper in that we do not have a rigorous definition of what interpretability entails, as pointed out by BID12 . An intuitive understanding", "of the meaning of interpretability can be obtained from its colloquial usage -as when a teacher attempts to teach by example, an interpretation or explanation for each image helps the student to learn faster and generalize broadly without needing specific examples.Future work could focus on the measurement of interpretability by judging how much data a second model requires when learning from the predictions and interpretations provided by another pretrained model. Maximizing the interpretability", "of a model may be related to the ability of models to transfer information between each other, facilitating learning without resorting to the use of large scale datasets. Such an approach could help evaluate", "non-image based visual explanations such as sentences, as described in BID5 .Other technical limitations include the", "difficulty of training a GAN capable of generating realistic images larger than 128 by 128 pixels. This limits the performance of subsequent", "classifiers in identifying small features. This can be seen in the poor performance", "of our model in detecting nodules, a relatively small feature, compared to the baseline implementation in the NIH dataset.In conclusion, we describe a method of semi-supervised learning and apply this to chest radiographs, using local data as well as recent datasets. We show that this method can be leveraged", "to generate visual rationales and demonstrate these qualitatively on chest radiographs as well as the well known MNIST set." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.052631575614213943, 0.0624999962747097, 0.2790697515010834, 0.12244897335767746, 0.07407406717538834, 0.260869562625885, 0, 0.05882352590560913, 0.2702702581882477, 0.12244897335767746, 0.0833333283662796, 0.10256409645080566, 0.4399999976158142, 0.15094339847564697, 0.04444443807005882, 0, 0.0882352888584137, 0.11764705181121826, 0.20689654350280762, 0.15584415197372437, 0.06557376682758331, 0, 0.0624999962747097, 0.09090908616781235, 0.09999999403953552, 0.24242423474788666, 0.2926829159259796, 0.10256409645080566, 0.07843136787414551, 0.1428571343421936, 0.20689654350280762, 0.3333333134651184, 0.2380952388048172, 0.06976743787527084, 0.17241379618644714, 0.21621620655059814, 0, 0.1111111044883728, 0.17777776718139648, 0.1355932205915451, 0.09090908616781235, 0.052631575614213943, 0.1111111044883728, 0.052631575614213943, 0.09756097197532654, 0.04878048226237297, 0.060606054961681366, 0.04494381695985794, 0.11428570747375488, 0.0714285671710968, 0.125, 0.1230769157409668, 0.1315789371728897, 0.21276594698429108, 0.060606054961681366, 0.10810810327529907, 0, 0.23333333432674408, 0.22857142984867096 ]
B13EC5u6W
true
[ "We propose a method of using GANs to generate high quality visual rationales to help explain model predictions. " ]
[ "Evolutionary Strategies (ES) are a popular family of black-box zeroth-order optimization algorithms which rely on search distributions to efficiently optimize a large variety of objective functions.", "This paper investigates the potential benefits of using highly flexible search distributions in ES algorithms, in contrast to standard ones (typically Gaussians).", "We model such distributions with Generative Neural Networks (GNNs) and introduce a new ES algorithm that leverages their expressiveness to accelerate the stochastic search.", "Because it acts as a plug-in, our approach allows to augment virtually any standard ES algorithm with flexible search distributions.", "We demonstrate the empirical advantages of this method on a diversity of objective functions.", "We are interested in the global minimization of a black-box objective function, only accessible through a zeroth-order oracle.", "In many instances of this problem the objective is expensive to evaluate, which excludes brute force methods as a reasonable mean of optimization.", "Also, as the objective is potentially non-convex and multi-modal, its global optimization cannot be done greedily but requires a careful balance between exploitation and exploration of the optimization landscape (the surface defined by the objective).", "The family of algorithms used to tackle such a problem is usually dictated by the cost of one evaluation of the objective function (or equivalently, by the maximum number of function evaluations that are reasonable to make) and by a precision requirement.", "For instance, Bayesian Optimization (Jones et al., 1998; Shahriari et al., 2016) targets problems of very high evaluation cost, where the global minimum must be approximately discovered after a few hundreds of function evaluations.", "When aiming for a higher precision and hence having a larger budget (e.g. thousands of function evaluations), a popular algorithm class is the one of Evolutionary Strategies (ES) (Rechenberg, 1978; Schwefel, 1977) , a family of heuristic search procedures.", "ES algorithms rely on a search distribution, which role is to propose queries of potentially small value of the objective function.", "This search distribution is almost always chosen to be a multivariate Gaussian.", "It is namely the case of the Covariance Matrix Adaptation Evolution Strategies (CMA-ES) (Hansen & Ostermeier, 2001 ), a state-of-the-art ES algorithm made popular in the machine learning community by its good results on hyper-parameter tuning (Friedrichs & Igel, 2005; Loshchilov & Hutter, 2016) .", "It is also the case for Natural Evolution Strategies (NES) (Wierstra et al., 2008) algorithms, which were recently used for direct policy search in Reinforcement Learning (RL) and shown to compete with state-of-the-art MDP-based RL techniques (Salimans et al., 2017) .", "Occasionally, other distributions have been used; e.g. fat-tails distributions like the Cauchy were shown to outperform the Gaussian for highly multi-modal objectives (Schaul et al., 2011) .", "We argue in this paper that in ES algorithms, the choice of a standard parametric search distribution (Gaussian, Cauchy, ..) 
constitutes a potentially harmful implicit constraint for the stochastic search of a global minimum.", "To overcome the limitations of classical parametric search distributions, we propose using flexible distributions generated by bijective Generative Neural Networks (GNNs), with computable and differentiable log-probabilities.", "We discuss why common existing optimization methods in ES algorithms cannot be directly used to train such models and design a tailored algorithm that efficiently train GNNs for an ES objective.", "We show how this new algorithm can readily incorporate existing ES algorithms that operates on simple search distributions, Algorithm 1: Generic ES procedure input: zeroth-order oracle on f , distribution π 0 , population size λ repeat (Sampling) Sample x 1 , . . . , x λ i.i.d", "∼ π t (Evaluation) Evaluate f (x 1 ), . . . , f (x n ).", "(Update)", "Update π t to produce x of potentially smaller objective values.", "until convergence; like the Gaussian.", "On a variety of objective functions, we show that this extension can significantly accelerate ES algorithms.", "We formally introduce the problem and provide background on Evolutionary Strategies in Section", "2. We discuss the role of GNNs in generating flexible search distributions in Section", "3. We explain why usual algorithms fail to train GNNs for an ES objective and introduce a new algorithm in Section", "4. Finally we report experimental results in Section 5.", "In this work, we motivate the use of GNNs for improving Evolutionary Strategies by pinpointing the limitations of classical search distributions, commonly used by standard ES algorithms.", "We propose a new algorithm that leverages the high flexibility of distributions generated by bijective GNNs with an ES objective.", "We highlight that this algorithm can be seen as a plug-in extension to existing ES algorithms, and therefore can virtually incorporate any of them.", "Finally, we show its empirical advantages across a diversity of synthetic objective functions, as well as from objectives coming from Reinforcement Learning.", "Beyond the proposal of this algorithm, we believe that our work highlights the role of expressiveness in exploration for optimization tasks.", "This idea could be leverage in other settings where exploration is crucial, such a MDP-based policy search methods.", "An interesting line of future work could focus on optimizing GNN-based conditional distribution for RL tasks -an idea already developed in Ward et al. (2019); Mazoure et al. (2019) .", "Other possible extensions to our work could focus on investigating first-order and mixed oracles, such as in Grathwohl et al. (2017) ; Faury et al. (2018" ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2926829159259796, 0.15789473056793213, 0.4878048598766327, 0.1621621549129486, 0.2666666507720947, 0.23529411852359772, 0.20512819290161133, 0.1249999925494194, 0.20408162474632263, 0.12244897335767746, 0.23076923191547394, 0.3243243098258972, 0.13793103396892548, 0.17543859779834747, 0.1090909019112587, 0.09302324801683426, 0.17777776718139648, 0.2790697515010834, 0.21739129722118378, 0.13793103396892548, 0, 0.1428571343421936, 0.09090908616781235, 0.1818181723356247, 0.2666666507720947, 0.19999998807907104, 0.31578946113586426, 0, 0.24390242993831635, 0.37837836146354675, 0.25, 0.10810810327529907, 0.1666666567325592, 0.05714285373687744, 0.045454539358615875, 0.04878048226237297 ]
SJlDDnVKwS
true
[ "We propose a new algorithm leveraging the expressiveness of Generative Neural Networks to improve Evolutionary Strategies algorithms." ]
[ "We propose and evaluate new techniques for compressing and speeding up dense matrix multiplications as found in the fully connected and recurrent layers of neural networks for embedded large vocabulary continuous speech recognition (LVCSR).", "For compression, we introduce and study a trace norm regularization technique for training low rank factored versions of matrix multiplications.", "Compared to standard low rank training, we show that our method leads to good accuracy versus number of parameter trade-offs and can be used to speed up training of large models.", "For speedup, we enable faster inference on ARM processors through new open sourced kernels optimized for small batch sizes, resulting in 3x to 7x speed ups over the widely used gemmlowp library.", "Beyond LVCSR, we expect our techniques and kernels to be more generally applicable to embedded neural networks with large fully connected or recurrent layers.", "For embedded applications of machine learning, we seek models that are as accurate as possible given constraints on size and on latency at inference time.", "For many neural networks, the parameters and computation are concentrated in two basic building blocks:1.", "Convolutions.", "These tend to dominate in, for example, image processing applications.2.", "Dense matrix multiplications (GEMMs) as found, for example, inside fully connected layers or recurrent layers such as GRU and LSTM.", "These are common in speech and natural language processing applications.These two building blocks are the natural targets for efforts to reduce parameters and speed up models for embedded applications.", "Much work on this topic already exists in the literature.", "For a brief overview, see Section 2.In this paper, we focus only on dense matrix multiplications and not on convolutions.", "Our two main contributions are:1.", "Trace norm regularization: We describe a trace norm regularization technique and an accompanying training methodology that enables the practical training of models with competitive accuracy versus number of parameter trade-offs.", "It automatically selects the rank and eliminates the need for any prior knowledge on suitable matrix rank.", "We worked on compressing and reducing the inference latency of LVCSR speech recognition models.", "To better compress models, we introduced a trace norm regularization technique and demonstrated its potential for faster training of low rank models on the WSJ speech corpus.", "To reduce latency at inference time, we demonstrated the importance of optimizing for low batch sizes and released optimized kernels for the ARM64 platform.", "Finally, by combining the various techniques in this paper, we demonstrated an effective path towards production-grade on-device speech recognition on a range of embedded devices.Figure 7: Contours of ||σ|| 1 and ||σ|| 2 .", "||σ|| 2 is kept constant at σ.", "For this case, ||σ|| 1 can vary from σ to √ 2σ." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.23999999463558197, 0.307692289352417, 0.1702127605676651, 0.19607841968536377, 0.1428571343421936, 0.1904761791229248, 0.05882352590560913, 0, 0.05405404791235924, 0.2790697515010834, 0.06896550953388214, 0.1538461446762085, 0, 0.3478260934352875, 0.11764705181121826, 0.3636363446712494, 0.43478259444236755, 0.1463414579629898, 0.2745097875595093, 0, 0 ]
B1tC-LT6W
true
[ "We compress and speed up speech recognition models on embedded devices through a trace norm regularization technique and optimized kernels." ]
[ "Training activation quantized neural networks involves minimizing a piecewise constant training loss whose gradient vanishes almost everywhere, which is undesirable for the standard back-propagation or chain rule.", "An empirical way around this issue is to use a straight-through estimator (STE) (Bengio et al., 2013) in the backward pass only, so that the \"gradient\" through the modified chain rule becomes non-trivial.", "Since this unusual \"gradient\" is certainly not the gradient of loss function, the following question arises: why searching in its negative direction minimizes the training loss?", "In this paper, we provide the theoretical justification of the concept of STE by answering this question.", "We consider the problem of learning a two-linear-layer network with binarized ReLU activation and Gaussian input data.", "We shall refer to the unusual \"gradient\" given by the STE-modifed chain rule as coarse gradient.", "The choice of STE is not unique.", "We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (not available for the training), and its negation is a descent direction for minimizing the population loss.", "We further show the associated coarse gradient descent algorithm converges to a critical point of the population loss minimization problem. ", "Moreover, we show that a poor choice of STE leads to instability of the training algorithm near certain local minima, which is verified with CIFAR-10 experiments.", "Deep neural networks (DNN) have achieved the remarkable success in many machine learning applications such as computer vision (Krizhevsky et al., 2012; Ren et al., 2015) , natural language processing (Collobert & Weston, 2008) and reinforcement learning (Mnih et al., 2015; Silver et al., 2016) .", "However, the deployment of DNN typically require hundreds of megabytes of memory storage for the trainable full-precision floating-point parameters, and billions of floating-point operations to make a single inference.", "To achieve substantial memory savings and energy efficiency at inference time, many recent efforts have been made to the training of coarsely quantized DNN, meanwhile maintaining the performance of their float counterparts (Courbariaux et al., 2015; Rastegari et al., 2016; Cai et al., 2017; Hubara et al., 2018; Yin et al., 2018b) .Training", "fully quantized DNN amounts to solving a very challenging optimization problem. It calls", "for minimizing a piecewise constant and highly nonconvex empirical risk function f (w) subject to a discrete set-constraint w ∈ Q that characterizes the quantized weights. In particular", ", weight quantization of DNN have been extensively studied in the literature; see for examples (Li et al., 2016; Zhu et al., 2016; Li et al., 2017; Yin et al., 2016; 2018a; Hou & Kwok, 2018; He et al., 2018; Li & Hao, 2018) . On the other", "hand, the gradient ∇f (w) in training activation quantized DNN is almost everywhere (a.e.) zero, which makes the standard back-propagation inapplicable. The arguably", "most effective way around this issue is nothing but to construct a non-trivial search direction by properly modifying the chain rule. Specifically", ", one can replace the a.e. zero derivative of quantized activation function composited in the chain rule with a related surrogate. This proxy", "derivative used in the backward pass only is referred as the straight-through estimator (STE) (Bengio et al., 2013) . In the same", "paper, Bengio et al. 
(2013) proposed an alternative approach based on stochastic neurons. In addition", ", Friesen & Domingos (2017) proposed the feasible target propagation algorithm for learning hard-threshold (or binary activated) networks (Lee et al., 2015) via convex combinatorial optimization." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10810810327529907, 0.1428571343421936, 0.11764705181121826, 0.4166666567325592, 0.2222222238779068, 0.1599999964237213, 0.11764705181121826, 0.15789473056793213, 0.20000000298023224, 0.11428570747375488, 0.0416666641831398, 0.23529411852359772, 0.07407407462596893, 0, 0.10810810327529907, 0.1395348757505417, 0.05882352590560913, 0.0624999962747097, 0.1249999925494194, 0.20000000298023224, 0, 0.1111111044883728 ]
Skh4jRcKQ
true
[ "We make theoretical justification for the concept of straight-through estimator." ]
[ "This paper presents GumbelClip, a set of modifications to the actor-critic algorithm, for off-policy reinforcement learning.", "GumbelClip uses the concepts of truncated importance sampling along with additive noise to produce a loss function enabling the use of off-policy samples.", "The modified algorithm achieves an increase in convergence speed and sample efficiency compared to on-policy algorithms and is competitive with existing off-policy policy gradient methods while being significantly simpler to implement.", "The effectiveness of GumbelClip is demonstrated against existing on-policy and off-policy actor-critic algorithms on a subset of the Atari domain.", "Recent advances in reinforcement learning (RL) have enabled the extension of long-standing methods to complex and large-scale tasks such as Atari (Mnih et al., 2015) , Go , and DOTA (OpenAI, 2018) .", "The key driver has been the use of deep neural networks, a non-linear function approximator, with the combination usually referred to as Deep Reinforcement Learning (DRL) (LeCun et al., 2015; Mnih et al., 2015) .", "However, deep learning-based methods are usually data-hungry, requiring millions of samples before the network converges to a stable solution.", "As such, DRL methods are usually trained in a simulated environment where an arbitrary amount of data can be generated.", "RL algorithms can be classified as either learning in an off-policy or on-policy setting.", "In the onpolicy setting, an agent learns directly from experience generated by its current policy.", "In contrast, the off-policy setting enables the agent to learn from experience generated by its current policy or/and other separate policies.", "An algorithm that learns in the off-policy setting has much greater sample efficiency as old experience from the current policy can be reused; it also enables off-policy algorithms to learn an optimal policy while executing an exploration-focused policy (Sutton et al., 1998) .", "The most famous off-policy method is Q-Learning (Watkins & Dayan, 1992) which learns an actionvalue function, Q(s, a), that maps the value to a state s and action a pair.", "Deep Q-Learning (DQN), the marriage of Q-Learning with deep neural networks, was popularised by Mnih et al. 
(2015) and used various modifications, such as experience replay, for stable convergence.", "Within DQN, experience replay (Lin, 1992) is often motivated as a technique for reducing sample correlation.", "Unfortunately, all action-value methods, including Q-Learning, have two significant disadvantages.", "First, they learn deterministic policies, which cannot handle problems that require stochastic policies.", "Second, finding the greedy action with respect to the Q function is costly for large action spaces.", "To overcome these limitations, one could use policy gradient algorithms (Sutton et al., 2000) , such as actor-critic methods, which learn in an on-policy setting at the cost of sample efficiency.", "The ideal solution would be to combine the sample efficiency of off-policy algorithms with the desirable attributes of on-policy algorithms.", "Work along this line has been done by using importance sampling (Degris et al., 2012) or by combining several techniques together, as in ACER (Wang et al., 2016) .", "However, the resulting methods are quite complex and require many modifications to existing algorithms.", "This paper, proposes a set of adjustments to A2C , a parallel on-policy actor-critic algorithm, enabling off-policy learning from stored trajectories.", "Therefore, our contributions are as follows:", "• GumbelClip, a fully off-policy actor-critic algorithm, the result of a small set of simple adjustments, in under 10 lines of code (LOC) 1 , to the A2C algorithm.", "• GumbelClip has increased sample efficiency and overall performance over on-policy actorcritic algorithms, such as A2C.", "• GumbelClip performs similarily to other off-policy actor-critic algorithms, such as ACER, while being significantly simpler to implement.", "The paper is organized as follows: Section 2 covers background information, Section 3 describes the GumbelClip algorithm, Section 4 details the experiments along with results, discussion, and ablations of our methodology.", "Section 5 discusses possible future work, and finally Section 6 provides concluding remarks.", "In this paper we have presented GumbelClip, a set of adjustments to the on-policy A2C algorithm, enabling full off-policy learning from stored trajectories in a replay memory.", "Our approach relies on aggressive clipping of the importance weight, large batchsize, and additive noise sampled from the Gumbel distribution.", "We have empirically validated the use of each component in GumbelClip through ablations and shown the stability of the algorithm.", "Furthermore, we have shown that GumbelClip achieves superior performance and higher sample efficiency than A2C.", "GumbelClip nears the performance and sample efficiency of ACER on many of the tested environments.", "Our methodology requires minimal changes to the A2C algorithm, which in contrast to ACER, makes the implementation of GumbelClip straightforward." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2857142686843872, 0.2222222238779068, 0.22580644488334656, 0.23076923191547394, 0.0952380895614624, 0.1538461446762085, 0.1538461446762085, 0.15094339847564697, 0.08510638028383255, 0.0833333283662796, 0.11320754140615463, 0.14084506034851074, 0.22580644488334656, 0.13114753365516663, 0.04081632196903229, 0, 0.043478257954120636, 0.1249999925494194, 0.1249999925494194, 0.19999998807907104, 0.03389830142259598, 0.1702127605676651, 0.2641509473323822, 0.05128204822540283, 0.3103448152542114, 0.08163265138864517, 0.1599999964237213, 0.13114753365516663, 0.04444444179534912, 0.20338982343673706, 0.19230768084526062, 0.07999999821186066, 0.1249999925494194, 0.1304347813129425, 0.11764705181121826 ]
SJxngREtDB
true
[ "With a set of modifications, under 10 LOC, to A2C you get an off-policy actor-critic that outperforms A2C and performs similarly to ACER. The modifications are large batchsizes, aggressive clamping, and policy \"forcing\" with gumbel noise." ]
[ "In the past few years, various advancements have been made in generative models owing to the formulation of Generative Adversarial Networks (GANs).", "GANs have been shown to perform exceedingly well on a wide variety of tasks pertaining to image generation and style transfer.", "In the field of Natural Language Processing, word embeddings such as word2vec and GLoVe are state-of-the-art methods for applying neural network models on textual data.", "Attempts have been made for utilizing GANs with word embeddings for text generation.", "This work presents an approach to text generation using Skip-Thought sentence embeddings in conjunction with GANs based on gradient penalty functions and f-measures.", "The results of using sentence embeddings with GANs for generating text conditioned on input information are comparable to the approaches where word embeddings are used.", "Numerous efforts have been made in the field of natural language text generation for tasks such as sentiment analysis BID35 and machine translation BID7 BID24 .", "Early techniques for generating text conditioned on some input information were template or rule-based engines, or probabilistic models such as n-gram.", "In recent times, state-of-the-art results on these tasks have been achieved by recurrent BID23 BID20 and convolutional neural network models trained for likelihood maximization.", "This work proposes an Code available at: https://github.com/enigmaeth/skip-thought-gan approach for text generation using Generative Adversarial Networks with Skip-Thought vectors.GANs BID9 are a class of neural networks that explicitly train a generator to produce high-quality samples by pitting against an adversarial discriminative model.", "GANs output differentiable values and hence the task of discrete text generation has to use vectors as differentiable inputs.", "This is achieved by training the GAN with sentence embedding vectors produced by Skip-Thought , a neural network model for learning fixed length representations of sentences." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.277777761220932, 0.05714285373687744, 0.14999999105930328, 0.2222222238779068, 0.31578946113586426, 0.3684210479259491, 0.14999999105930328, 0.05714285373687744, 0, 0.28070175647735596, 0.1818181723356247, 0.25 ]
rylLud_moQ
true
[ "Generating text using sentence embeddings from Skip-Thought Vectors with the help of Generative Adversarial Networks." ]
[ "Autoregressive recurrent neural decoders that generate sequences of tokens one-by-one and left-to-right are the workhorse of modern machine translation.", "In this work, we propose a new decoder architecture that can generate natural language sequences in an arbitrary order.", "Along with generating tokens from a given vocabulary, our model additionally learns to select the optimal position for each produced token.", "The proposed decoder architecture is fully compatible with the seq2seq framework and can be used as a drop-in replacement of any classical decoder.", "We demonstrate the performance of our new decoder on the IWSLT machine translation task as well as inspect and interpret the learned decoding patterns by analyzing how the model selects new positions for each subsequent token." ]
[ 0, 0, 0, 0, 1 ]
[ 0.23999999463558197, 0.1538461446762085, 0.0714285671710968, 0.06896551698446274, 0.2631579041481018 ]
B1ejpNkhim
false
[ "new out-of-order decoder for neural machine translation" ]
[ "In this paper, we study a new graph learning problem: learning to count subgraph isomorphisms.", "Although the learning based approach is inexact, we are able to generalize to count large patterns and data graphs in polynomial time compared to the exponential time of the original NP-complete problem.", "Different from other traditional graph learning problems such as node classification and link prediction, subgraph isomorphism counting requires more global inference to oversee the whole graph.", "To tackle this problem, we propose a dynamic intermedium attention memory network (DIAMNet) which augments different representation learning architectures and iteratively attends pattern and target data graphs to memorize different subgraph isomorphisms for the global counting.", "We develop both small graphs (<= 1,024 subgraph isomorphisms in each) and large graphs (<= 4,096 subgraph isomorphisms in each) sets to evaluate different models.", "Experimental results show that learning based subgraph isomorphism counting can help reduce the time complexity with acceptable accuracy.", "Our DIAMNet can further improve existing representation learning models for this more global problem.", "Graphs are general data structures widely used in many applications, including social network analysis, molecular structure analysis, natural language processing and knowledge graph modeling, etc.", "Learning with graphs has recently drawn much attention as neural network approaches to representation learning have been proven to be effective for complex data structures (Niepert et al., 2016; Kipf & Welling, 2017; Hamilton et al., 2017b; Schlichtkrull et al., 2018; Velickovic et al., 2018; Xu et al., 2019) .", "Most of existing graph representation learning algorithms focus on problems such as node classification, linking prediction, community detection, etc. 
(Hamilton et al., 2017a) .", "These applications are of more local decisions for which a learning algorithm can usually make inferences by inspecting the local structure of a graph.", "For example, for the node classification problem, after several levels of neighborhood aggregation, the node representation may be able to incorporate sufficient higher-order neighborhood information to discriminate different classes (Xu et al., 2019) .", "In this paper, we study a more global learning problem: learning to count subgraph isomorphisms (counting examples are shown as Figure 1 ).", "Although subgraph isomorphism is the key to solve graph representation learning based applications (Xu et al., 2019) , tasks of identifying or counting subgraph isomorphisms themselves are also significant and may support broad applications, such as bioinformatics (Milo et al., 2002; Alon et al., 2008) , chemoinformatics (Huan et al., 2003) , and online social network analysis (Kuramochi & Karypis, 2004) .", "For example, in a social network, we can solve search queries like \"groups of people who like X and visited Y-city/state.\"", "In a knowledge graph, we can answer questions like \"how many languages are there in Africa speaking by people living near the banks of the Nile River?\"", "Many pattern mining algorithms or graph database indexing based approaches have been proposed to tackle subgraph isomorphism problems (Ullmann, 1976; Cordella et al., 2004; He & Singh, 2008; Han et al., 2013; Carletti et al., 2018) .", "However, these approaches cannot be applied to large-scale graphs because of the exponential time complexity.", "Thanks to the powerful graph representation learning models which can effectively capture local structural information, we can use a learning algorithm to learn how to count subgraph isomorphisms from a lot of examples.", "Then the algorithm can scan a large graph and memorize all necessary local information based on a query pattern graph.", "In this case, although learning based approaches can be inexact, we can roughly estimate the range of the number of subgraph isomorphism.", "This can already help many applications that do not require exact match or need a more efficient pre- processing step.", "To this end, in addition to trying different representation learning architectures, we develop a dynamic intermedium attention memory network (DIAMNet) to iteratively attend the query pattern and the target data graph to memorize different local subgraph isomorphisms for global counting.", "To evaluate the learning effectiveness and efficiency, we develop a small (≤ 1,024 subgraph isomorphisms in each graph) and a large (≤ 4,096 subgraph isomorphisms in each graph) dataset and evaluate different neural network architectures.", "Our main contributions are as follows.", "• To our best knowledge, this is the first work to model the subgraph isomorphism counting problem as a learning problem, for which both the training and prediction time complexities are polynomial.", "• We exploit the representation power of different deep neural network architectures in an end-toend learning framework.", "In particular, we provide universal encoding methods for both sequence models and graph models, and upon them we introduce a dynamic intermedium attention memory network to address the more global inference problem for counting.", "• We conduct extensive experiments on developed datasets which demonstrate that our framework can achieve good results on both relatively large graphs and large patterns compared 
to existing studies.", "In this paper, we study the challenging subgraph isomorphism counting problem.", "With the help of deep graph representation learning, we are able to convert the NP-complete problem to a learning based problem.", "Then we can use the learned model to predict the subgraph isomorphism counts in polynomial time.", "Counting problem is more related to a global inference rather than only learning node or edge representations.", "Therefore, we have developed a dynamic intermedium attention memory network to memorize local information and summarize for the global output.", "We build two datasets to evaluate different representation learning models and global inference models.", "Results show that learning based method is a promising direction for subgraph isomorphism detection and counting and memory networks indeed help the global inference.", "We also performed detailed analysis of model behaviors for different pattern and graph sizes and labels.", "Results show that there is much space to improve when the vertex label size is large.", "Moreover, we have seen the potential real-world applications of subgraph isomorphism counting problems such as question answering and information retrieval.", "It would be very interesting to see the domain adaptation power of our developed pretrained models on more real-world applications.", "As shown in Figure 7 , different interaction modules perform differently in different views.", "We can find MaxPool always predicts higher counting values when the pattern is small and the graph is large, while AttnPool always predicts very small numbers except when the pattern vertex size is 8, and the graph vertex size is 64.", "The same result appears when we use edge sizes as the x-axis.", "This observation shows that AttnPool has difficulties predicting counting values when either of the pattern and the graph is small.", "It shows that attention focuses more on the zero vector we added rather than the pattern pooling result.", "Our DIAMNet, however, performs the best in all pattern/graph sizes.", "When the bins are ordered by vertex label sizes or edge label sizes, the performance of all the three interaction modules among the distribution are similar.", "When bins are ordered by vertex label sizes, we have the same discovery that AttnPool prefers to predict zeros when then patterns are small.", "MaxPool fails when facing complex patterns with more vertex labels.", "DIAMNet also performs not so good over these patterns.", "As for edge labels, results look good for MaxPool and DIAMNet but AttnPool is not satisfactory.", "As shown in Figure 8 , different representation modules perform differently in different views.", "CNN performs badly when the graph size is large (shown in Figure 8a and 8d) and patterns become complicated (show in Figure 8g and 8j), which further indicates that CNN can only extract the local information and suffers from issues when global information is need in larger graphs.", "RNN, on the other hand, performs worse when the graph are large, especially when patterns are small (show in Figure 8e ), which is consistent with its nature, intuitively.", "On the contrary, RGCN-SUM with DIAMNet is not affected by the edge sizes because it directly learns vertex representations rather than edge representations." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 1, 0.19512194395065308, 0.20512820780277252, 0.2916666567325592, 0.1818181723356247, 0.1249999925494194, 0.1428571343421936, 0.052631575614213943, 0.07407406717538834, 0.10526315122842789, 0.17142856121063232, 0.045454539358615875, 0.6666666865348816, 0.1538461446762085, 0.11428570747375488, 0.14999999105930328, 0.12765957415103912, 0.06896550953388214, 0.380952388048172, 0.1249999925494194, 0.3030303120613098, 0.05882352590560913, 0.3199999928474426, 0.25641024112701416, 0, 0.22727271914482117, 0.06451612710952759, 0.2222222238779068, 0.04878048226237297, 0.47999998927116394, 0.3125, 0.20689654350280762, 0.19354838132858276, 0.1764705777168274, 0.14814814925193787, 0.1621621549129486, 0.06896550953388214, 0.06896550953388214, 0.11764705181121826, 0.05882352590560913, 0, 0.04999999701976776, 0.07692307233810425, 0.060606054961681366, 0.06451612710952759, 0, 0, 0.10810810327529907, 0, 0, 0, 0, 0.039215683937072754, 0.04999999701976776, 0 ]
HJx-akSKPS
true
[ "In this paper, we study a new graph learning problem: learning to count subgraph isomorphisms." ]
[ "Domain adaptation is an open problem in deep reinforcement learning (RL).", "Often, agents are asked to perform in environments where data is difficult to obtain.", "In such settings, agents are trained in similar environments, such as simulators, and are then transferred to the original environment.", "The gap between visual observations of the source and target environments often causes the agent to fail in the target environment.", "We present a new RL agent, SADALA (Soft Attention DisentAngled representation Learning Agent).", "SADALA first learns a compressed state representation.", "It then jointly learns to ignore distracting features and solve the task presented.", "SADALA's separation of important and unimportant visual features leads to robust domain transfer.", "SADALA outperforms both prior disentangled-representation based RL and domain randomization approaches across RL environments (Visual Cartpole and DeepMind Lab).", "RL agents learn to maximize rewards within a task by taking actions based on observations.", "The advent of deep learning has enabled RL agents to learn in high dimensional feature spaces and complex action spaces (Krizhevsky et al., 2012; Mnih et al., 2016) .", "Deep RL methods have beat human performance in a variety of tasks, such as Go (Silver & Hassabis, 2016) .", "However, deep RL has two crippling drawbacks: high sample complexity and specificity to training task.", "There are many domains where data collection is expensive and time consuming, such as healthcare, autonomous vehicles, and robotics (Gottesman et al., 2019; Barrett et al., 2010) .", "Thus, agents are often trained in simulation and must transfer the resulting knowledge to reality.", "While this solves the issue of sample complexity, reality and simulated domains are sufficiently different that it is infeasible to naively train a deep RL agent in a simulation and transfer.", "This is known as the reality gap (Sadeghi & Levine, 2016) .", "Jumping the reality gap is difficult for two orthogonal reasons.", "The first is that dynamics of a simulation are an approximation to the dynamics of the real world.", "Prior work has shown success in transfer between domains with different dynamics (Killian et al., 2017; Yao et al., 2018; Doshi-Velez & Konidaris, 2016) .", "In this paper, we address the second difficulty: the difference in visual observations of states.", "Due to limitations in current photorealistic rendering, simulation and the real world are effectively two different visual domains (Sadeghi & Levine, 2016 ).", "We present a method of robust transfer between visual RL domains, using attention and a β variational autoencoder to automatically learn a state representation sufficient to solve both source and target domains.", "By learning disetangled and relevant state representation, our approach does not require target domain samples when training.", "The state representation enables the RL agent to attend to only the relevant state information and ignore all other, potentially distracting information.", "We test the SADALA framework on two transfer learning tasks, using A3C as the deep RL algorithm.", "The first task is Visual Cartpole.", "This domain is the same as Cartpole-v1 in OpenAI Gym with two key differences (Brockman et al., 2016) .", "The observed state is now the pixel rendering of the cartpole as well as the velocities of the cart and pole.", "Thus, the agent must learn to predict the position of the cart and pole from the rendering.", "Additionally, we modify this domain 
to include a transfer task.", "The agent must learn to transfer its knowledge of the game across different color configurations for the cart, pole, and track.", "Thus, Visual Cartpole is defined as a family of MDPs M where the true state S z is the positions and velocities of the cart and pole and the observed state S U = G U (S z ) for an MDP U ∈ M is the pixel observations and velocity values.", "Optimally, the agent should learn to ignore the factors in extracted latent state factorsŜ z that correspond to color, as they do not aid the agent in balancing the pole.", "This task tests the agent's ability to ignore irrelevant latent state factors.", "The second task is the \"Collect Good Objects\" task from Deepmind Lab (Beattie et al., 2016) .", "The agent must learn to navigate in first person and pick up \"good\" objects while avoiding \"bad\" objects.", "This task is defined as a family of MDPs M where the true state S z contains the position of the agent, the position of good and bad objects, the type of good and bad objects, and the color of the walls and floor.", "In a single MDP U ∈ M , all good objects are hats or all good objects are balloons.", "Similarly, all bad objects are either cans or cakes.", "The walls and floor can either take a green and orange colorscheme or a red and blue colorscheme.", "The agent is trained on hats/cans with the green/orange colorscheme and balloons/cakes with both colorschemes.", "It is then tested on hats/cans with the red/blue colorscheme.", "Optimally, the agent should learn to ignore the color of the floor and walls.", "Additionally, it should use the type of object to determine if it is good or bad.", "This task tests the agent's ability to ignore distracting latent state factors (the color of the walls and floor) while attending to relevant factors (the positions and types of objects and its own position).", "To test the results of the SADALA algorithm, we first test the reconstruction and disentanglement properties of the β-VAE used in the state representation stage.", "Note that this stage is identical to that of DARLA (Higgins et al., 2017) .", "As such, we expect the disentanglement properties to be similar.", "See figure 3 for reconstructions of the cartpole state.", "Based on the reconstructions, it is apparent that the β-VAE has learned to represent cart position and pole angle.", "Though the angle of the poles is slightly incorrect in the first set of images, the pole is tilted in the correct direction, yielding sufficiently correct extracted latent state factors.", "Additionally, the color of the cart, pole, and background is incorrect in the third pair of images.", "While this demonstrates that the identification and reconstructions of colors is not infallible, the position of the cart and pole remains correct, yielding a set of extracted latent state parameters that is sufficient to solve the MDP.", "See figure 5 for a visualization of reconstruction with attention.", "In the original image, the pole is standing straight and the cart is centered.", "In the reconstruction, the cart is centered, and the pole is almost upright.", "However, the reconstruction does not include the colors of the cart or pole.", "Instead it fills the cart and pole with the mean color of the dataset.", "This shows that the attention weights are properly learning to ignore color and instead pay attention to the position of the cart and pole.", "Figures 6 and 7 gives a comparison of the performance of the algorithms across environments.", "Note that all of 
the algorithms are sufficient at solving the source task, with the single-task learner performing slightly better.", "This is due to the fact that the single-task learner can optimize its con- The single-task learner achieves better rewards on all source tasks than any of the transfer-specific agents.", "Domain randomization performs less well because of the complexity of the domain randomization task.", "Rather than optimizing reward for a single domain, the agent must optimize reward across a large set of domains.", "DARLA, SADALA, and SADALA with reduced variance also perform less well on the source task than the baseline agent.", "This is due to imperfections in the β-VAE.", "Though the β-VAE attempts to reconstruct the input image, it does not do so perfectly, as shown in figure 3 .", "This shows that its extraction of latent state features is not perfect, leading to potentially confusing states given to the RL agent.", "Additionally, while the β-VAE's goal is to learn a disentangled representation, it does not do so perfectly.", "As such, (partially) entangled latent state factors may further confuse the RL agent.", "The single-task learner fails to transfer its policy to the target domain.", "DARLA transfers some of its knowledge to the target domain.", "Domain randomization also transfers some knowledge.", "Finally, SADALA transfers more of its knowledge.", "The single-task agent has no incentive to learn a factored state representation that enables transfer.", "Its convolutional filters will directly optimize to maximize reward in the source policy.", "Thus, if the filters are searching for a hat on a blue floor but tested on a hat on a red floor the convolutional filters will fail to transfer.", "DARLA learns a state representation that is mostly disentangled.", "This allows the RL agent to learn to ignore unimportant features such as cart color and utilize important features such as cart position.", "The factored state representation forces the RL agent to learn a more robust policy.", "However, the neural network parameterization of the RL policy must implicitly learn to ignore unimportant factors.", "Therefore, when presented with unseen information, the RL agent may not properly ignore unimportant factors.", "Domain randomization forces the neural network to implicitly learn a state representation sufficient to transfer between tasks.", "This requires large amounts of training data and is less robust than explicit modeling of latent state factors.", "SADALA builds on DARLA by adding an explicit attention mechanism, allowing it to more effectively ignore unimportant features.", "Due to the use of the sigmoid activation in the attention mechanism, the attention weights W are bounded between 0 and 1.", "In addition to providing a direct weight on the importance of a feature, this bound prevents high variance of attention weights across different inputs.", "SADALA with the variance reduction term performs worse than both DARLA and SADALA without variance reduction on the Deepmind lab task but better on the other two.", "In the scenario where the extracted latent state factors from the β-VAE are perfectly disentangled, static attention weights should be sufficient to solve the source task and should transfer better to the target task, as in the Visual Cartpole task.", "However, the β-VAE does not output perfectly disentangled factors, especially in more complex visual domains such as the Deepmind lab.", "Thus, the amount of attention paid to each feature from the β-VAE may differ 
across tasks, violating the assumption that attention weights should be zero variance.", "In this paper we propose SADALA, a three stage method for zero-shot domain transfer.", "First, SADALA learns a feature extractor that represents input states (images) as disentangled factors.", "It then filters these latent factors using an attention mechanism to select those most important to solving a source task.", "Jointly, it learns a policy for the source task that is robust to changes in states, and is able to transfer to related target tasks.", "We validate the performance of SADALA on both a high-dimensional continuous-control problem (Visual Cartpole) and a 3D naturalistic first-person simulated environments (Deepmind Lab).", "We show that the attention mechanism introduced is able to differentiate between important and unimportant latent features, enabling robust transfer.", "are constrained to be outside of a hypersphere of radius 0.1 the test environment.", "When evaluating source domain performance, the agents trained on multiple source domains are all evaluated on the same domain, randomly sampled from the set of source domains.", "The single task learner is evaluated on the source domain it is trained on." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05714285373687744, 0.05405404791235924, 0.0952380895614624, 0.2857142686843872, 0.1621621549129486, 0.06451612710952759, 0.21621620655059814, 0.3243243098258972, 0.04878048226237297, 0.1538461446762085, 0.07999999821186066, 0.04651162400841713, 0.10256409645080566, 0.08163265138864517, 0.1538461446762085, 0.2641509473323822, 0, 0, 0.20512820780277252, 0.12765957415103912, 0.10526315122842789, 0.1702127605676651, 0.42307692766189575, 0.04878048226237297, 0.1904761791229248, 0.09999999403953552, 0, 0, 0.04999999701976776, 0.21052631735801697, 0.1764705777168274, 0.1818181723356247, 0.13333332538604736, 0.1666666567325592, 0.1666666567325592, 0.04999999701976776, 0.1463414579629898, 0.07843136787414551, 0.05128204822540283, 0, 0.10526315122842789, 0.10526315122842789, 0, 0.2222222238779068, 0.05128204822540283, 0.11764705181121826, 0.04651162400841713, 0.10526315122842789, 0.05882352590560913, 0, 0.1428571343421936, 0, 0.052631575614213943, 0.15094339847564697, 0.11764705181121826, 0.05714285373687744, 0.05882352590560913, 0, 0.0555555522441864, 0.23255813121795654, 0.10810810327529907, 0.0476190410554409, 0.07999999821186066, 0, 0.1463414579629898, 0.0952380895614624, 0.0624999962747097, 0.04651162400841713, 0.17777776718139648, 0.09756097197532654, 0.05405404791235924, 0.11428570747375488, 0.05882352590560913, 0, 0, 0.25641024112701416, 0.05405404791235924, 0.13636362552642822, 0.12121211737394333, 0.2380952388048172, 0.21052631735801697, 0.10256409645080566, 0.10256409645080566, 0.19999998807907104, 0.09756097197532654, 0.2380952388048172, 0.1904761791229248, 0.1304347813129425, 0.04444443807005882, 0.178571417927742, 0.09302324801683426, 0.1702127605676651, 0.10526315122842789, 0.10526315122842789, 0.23255813121795654, 0.260869562625885, 0.1304347813129425, 0.40909090638160706, 0.10526315122842789, 0.08888888359069824, 0 ]
HklPzxHFwB
true
[ "We present an agent that uses a beta-vae to extract visual features and an attention mechanism to ignore irrelevant features from visual observations to enable robust transfer between visual domains." ]
[ "Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding the behavior of a given model and for obtaining safety guarantees.", "However, previous methods are usually limited to relatively simple neural networks.", "In this paper, we consider the robustness verification problem for Transformers.", "Transformers have complex self-attention layers that pose many challenges for verification, including cross-nonlinearity and cross-position dependency, which have not been discussed in previous work.", "We resolve these challenges and develop the first verification algorithm for Transformers.", "The certified robustness bounds computed by our method are significantly tighter than those by naive Interval Bound Propagation.", "These bounds also shed light on interpreting Transformers as they consistently reflect the importance of words in sentiment analysis.", "Deep neural networks have been successfully applied to many domains.", "However, a major criticism is that these black box models are difficult to analyze and their behavior is not guaranteed.", "Moreover, it has been shown that the predictions of deep networks become unreliable and unstable when tested in unseen situations, e.g., in the presence of small and adversarial perturbation to the input (Szegedy et al., 2013; Goodfellow et al., 2014; Lin et al., 2019) .", "Therefore, neural network verification has become an important tool for analyzing and understanding the behavior of neural networks, with applications in safety-critical applications (Katz et al., 2017; Julian et al., 2019; Lin et al., 2019) , model explanation (Shih et al., 2018) and robustness analysis (Tjeng et al., 2019; Wang et al., 2018c; Gehr et al., 2018; Wong & Kolter, 2018; Singh et al., 2018; Weng et al., 2018; Zhang et al., 2018) .", "Formally, a neural network verification algorithm aims to provably characterize the prediction of a network within some input space.", "For example, given a K-way classification model f : R d → R K , we can verify some linear specification (defined by a vector c) as below:", "where S is a predefined input space.", "For example, in the robustness verification problem that we are going to focus on in this paper, S = {x | x−x 0 p ≤ } is defined as some small p -ball around the original example x 0 , and setting up c = 1 y0 − 1 y can verify whether the logit output of class y 0 is always greater than another class y within S. This is a nonconvex optimization problem which makes computing the exact solution challenging, and thus algorithms are recently proposed to find lower bounds of Eq. 
(1) in order to efficiently obtain a safety guarantee (Gehr et al., 2018; Weng et al., 2018; Zhang et al., 2018; Singh et al., 2019) .", "Moreover, extension of these algorithms can be used for verifying some properties beyond robustness, such as rotation or shift invariant (Singh et al., 2019) , conservation of energy (Qin et al., 2019) and model correctness (Yang & Rinard, 2019) .", "However, most of existing verification methods focus on relatively simple neural network architectures, such as feed-forward and recurrent neural networks, and cannot handle complex structures.", "In this paper, we develop the first robustness verification algorithm for Transformers (Vaswani et al., 2017) with self-attention layers.", "Transformers have been widely used in natural language processing (Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019) and many other domains (Parmar et al., 2018; Kang & McAuley, 2018; Li et al., 2019b; Su et al., 2019; Li et al., 2019a) .", "For frames under perturbation in the input sequence, we aim to compute a lower bound such that when these frames are perturbed within p -balls centered at the original frames respectively and with a radius of , the model prediction is certified to be unchanged.", "To compute such a bound efficiently, we adopt the linear-relaxation framework (Weng et al., 2018; Zhang et al., 2018 ) -we recursively propagate and compute linear lower bound and upper bound for each neuron with respect to the input within perturbation set S.", "We resolve several particular challenges in verifying Transformers.", "First, Transformers with selfattention layers have a complicated architecture.", "Unlike simpler networks, they cannot be written as multiple layers of linear transformations or element-wise operations.", "Therefore, we need to propagate linear bounds differently for self-attention layers.", "Second, dot products, softmax, and weighted summation in self-attention layers involve multiplication or division of two variables under perturbation, namely cross-nonlinearity, which is not present in feed-forward networks.", "Ko et al. 
(2019) proposed a gradient descent based approach to find linear bounds, however it is inefficient and poses a computational challenge for transformer verification as self-attention is the core of transformers.", "In contrast, we derive closed-form linear bounds that can be computed in O(1) complexity.", "Third, neurons in each position after a self-attention layer depend on all neurons in different positions before the self-attention (namely cross-position dependency), unlike the case in recurrent neural networks where outputs depend on only the hidden features from the previous position and the current input.", "Previous works (Zhang et al., 2018; Weng et al., 2018; Ko et al., 2019) have to track all such dependency and thus is costly in time and memory.", "To tackle this, we introduce an efficient bound propagating process in a forward manner specially for self-attention layers, enabling the tighter backward bounding process for other layers to utilize bounds computed by the forward process.", "In this way, we avoid cross-position dependency in the backward process which is relatively slower but produces tighter bounds.", "Combined with the forward process, the complexity of the backward process is reduced by O(n) for input length n, while the computed bounds remain comparably tight.", "Our contributions are summarized below:", "• We propose an effective and efficient algorithm for verifying the robustness of Transformers with self-attention layers.", "To our best knowledge, this is the first method for verifying Transformers.", "• We resolve key challenges in verifying Transformers, including cross-nonlinearity and crossposition dependency.", "Our bounds are significantly tighter than those by adapting Interval Bound Propagation (IBP) (Mirman et al., 2018; .", "• We quantitatively and qualitatively show that the certified lower bounds consistently reflect the importance of input words in sentiment analysis, which justifies that the computed bounds are meaningful in practice.", "We propose the first robustness verification method for Transformers, and tackle key challenges in verifying Transformers, including cross-nonlinearity and cross-position dependency, for efficient and effective verification.", "Our method computes certified lower bounds that are significantly tighter than those by IBP.", "Quantitative and qualitative analyses further show that our bounds are meaningful and can reflect the importance of different words in sentiment analysis.", "A ILLUSTRATION OF DIFFERENT BOUNDING PROCESSES Figure 1 : Illustration of three different bounding processes: Fully-Forward", "(a), Fully-Backward", "(b), and Backward&Forward", "(c).", "We show an example of a 2-layer Transformer, where operations can be divided into two kinds of blocks, \"Feed-forward\" and \"Self-attention\".", "\"Self-attention\" contains operations in the self-attention mechanism starting from queries, keys, and values, and \"Feed-forward\" contains all the other operations including linear transformations and unary nonlinear functions.", "Arrows with solid lines indicate the propagation of linear bounds in a forward manner.", "Each backward arrow A k → B k with a dashed line for blocks A k , B k indicates that there is a backward bound propagation to block B k when computing bounds for block A k .", "Blocks with blue rectangles have forward processes inside the blocks, while those with green rounded rectangles have backward processes inside.", "Backward & Forward algorithm, we use backward processes for the 
feed-forward parts and forward processes for self-attention layers, and for layers after self-attention layers, they no longer need backward bound propagation to layers prior to self-attention layers.", "In this way, we resolve the cross-position dependency in verifying Transformers while still keeping bounds comparably tight as those by using fully backward processes.", "Empirical comparison of the three frameworks are presented in Sec. 4.3." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1621621549129486, 0, 0.380952388048172, 0.12121211737394333, 0.5454545617103577, 0.07407406717538834, 0.20689654350280762, 0, 0, 0.08510638028383255, 0.14035087823867798, 0.2222222238779068, 0, 0, 0.06122448667883873, 0.13636362552642822, 0.060606054961681366, 0.4000000059604645, 0.04999999701976776, 0.08163265138864517, 0.08510638028383255, 0.3333333432674408, 0.10526315122842789, 0.07692307233810425, 0.0952380895614624, 0.054054051637649536, 0.1463414579629898, 0, 0.045454543083906174, 0, 0.09999999403953552, 0.06896550953388214, 0.1818181723356247, 0, 0.6666666865348816, 0.45454543828964233, 0.17391303181648254, 0, 0.1666666567325592, 0.4516128897666931, 0, 0.12903225421905518, 0.07692307233810425, 0, 0.13333332538604736, 0.0624999962747097, 0.1666666567325592, 0.05714285373687744, 0.07999999821186066, 0.1111111044883728, 0.1764705777168274, 0.1818181723356247 ]
BJxwPJHFwS
true
[ "We propose the first algorithm for verifying the robustness of Transformers." ]
[ "In the last few years, deep learning has been tremendously successful in many applications.", "However, our theoretical understanding of deep learning, and thus the ability of providing principled improvements, seems to lag behind.", "A theoretical puzzle concerns the ability of deep networks to predict well despite their intriguing apparent lack of generalization: their classification accuracy on the training set is not a proxy for their performance on a test set.", "How is it possible that training performance is independent of testing performance?", "Do indeed deep networks require a drastically new theory of generalization?", "Or are there measurements based on the training data that are predictive of the network performance on future data?", "Here we show that when performance is measured appropriately, the training performance is in fact predictive of expected performance, consistently with classical machine learning theory.", "Is it possible to decide the prediction performance of a deep network from its performance in training -as it is typically the case for shallower classifiers such as kernel machines and linear classifiers?", "Is there any relationship at all between training and test performances?", "Figure 1a shows that when the network has more parameters than the size of the training set -which is the standard regime for deep nets -the training classification error can be zero and is very different from the testing error.", "This intriguing lack of generalization was recently highlighted by the surprising and influential observation (Zhang et al. (2016) ) that the same network that predicts well on normally labeled data (CIFAR10), can fit randomly labeled images with zero classification error in training while its test classification error is of course at chance level, see Figure 1b .", "The riddle of large capacity and good predictive performance led to many papers, with a variety of claims ranging from \"This situation poses a conceptual challenge to statistical learning theory as traditional measures of model complexity struggle to explain the generalization ability of large artificial neural networks... \" Zhang et al. (2016) , to various hypotheses about the role of flat minima Keskar et al. (2016) ; Dinh et al. (2017) ; Chaudhari et al. (2016) , about SGD Chaudhari & Soatto (2017) ; Zhang et al. (2017) and to a number of other explanations (e.g. Belkin et al. (2018) ; Martin & Mahoney (2019) ) for such unusual properties of deep networks.", "We start by defining some key concepts.", "We call \"loss\" the measure of performance of the network f on a training set S = x 1 , y 1 , · · · , x N , y N .", "The most common loss optimized during training for binary classification is the logistic loss L(f ) = 1 N N n=1 ln(1 + e −ynf (xn) ).", "We call classification \"error\" 1 N N n=1 H(−y n f (x n )), where y is binary and H is the Heaviside function with H(−yf (x)) = 1 if −yf > 0 which correspond to wrong classification.", "There is a close relation between the logistic loss and the classification error: the logistic loss is an upper bound for the classification error.", "Thus minimizing the logistic loss implies minimizing the classification error.", "The criticism in papers such as Zhang et al. 
(2016) refers to the classification error.", "However, training minimizes the logistic loss.", "As a first step it seems therefore natural to look at whether logistic loss in training can be used as a proxy for the logistic loss at testing.", "The second step follows from the following observation.", "The logistic loss can always be made arbitrarily small for separable data (when f (x n )y n > 0, ∀n) by scaling up the value of f and in fact it can be shown that the norm of the weights of f grows monotonically with time", "The linear relationship we found means that the generalization error of Equation 3 is small once the complexity of the space of deep networks is \"dialed-down\" by normalization.", "It also means that, as expected from the theory of uniform convergence, the generalization gap decreases to zero for increasing size of the training set (see Figure 1 ).", "Thus there is indeed asymptotic generalization -defined as training loss converging to test loss when the number of training examples grows to infinity -in deep neural networks, when appropriately measured.", "The title in Zhang et al. (2016) \"Understanding deep learning requires rethinking generalization\" seems to suggest that deep networks are so \"magical\" to be beyond the reach of existing machine learning theory.", "This paper shows that this is not the case.", "On the other hand, the generalization gap for the classification error and for the unnormalized cross-entropy is expected to be small only for much larger N (N must be significantly larger than the number of parameters).", "However, consistently with classical learning theory, the cross-entropy loss at training predicts well the cross-entropy loss at test when the complexity of the function space is reduced by appropriate normalization.", "For the normalized case with R = 1 this happens in our data sets for a relatively \"small\" number N of training examples as shown by the linear relationship of Figure 2 .", "The classical analysis of ERM algorithms studies their asymptotic behavior for the number of data N going to infinity.", "In this limiting regime, N > W where W is the fixed number of weights; consistency (informally the expected error of the empirical minimizer converges to the best in the class) and generalization (the empirical error of the minimizer converges to the expected error of the minimizer) are equivalent.", "This note implies that there is indeed asymptotic generalization and consistency in deep networks.", "However, it has been shown that in the case of linear regression, for instance with kernels, there are situations -depending on the kernel and the data -in which there is simultaneously interpolation of the training data and good expected error.", "This is typically when W > N and corresponds to the limit for λ = 0 of regularization, that is the pseudoinverse.", "It is likely that deep nets may have a similar regime, in which case the implicit regularization described here, with its asymptotic generalization effect, is just an important prerequisite for a full explanation for W > N -as it is the case for kernel machines under the square loss.", "The results of this paper strongly suggested that the complexity of the normalized network is controlled by the optimization process.", "In fact a satisfactory theory of the precise underlying implicit regularization mechanism has now been proposed Soudry et al. 
(2017) As expected, the linear relationship we found holds in a robust way for networks with different architectures, different data sets and different initializations.", "Our observations, which are mostly relevant for theory, yield a recommendation for practitioners: it is better to monitor during training the empirical \"normalized\" cross-entropy loss instead of the unnormalized cross-entropy loss actually minimized.", "The former matters in terms of stopping time and predicts test performance in terms of cross-entropy and ranking of classification error.", "More significantly for the theory of Deep Learning, this paper confirms that classical machine learning theory can describe how training performance is a proxy for testing performance of deep networks." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1621621549129486, 0.19512194395065308, 0.30188679695129395, 0.23529411852359772, 0.1764705777168274, 0.25641024112701416, 0.6521739363670349, 0.2641509473323822, 0.11764705181121826, 0.2142857164144516, 0.16438356041908264, 0.1818181723356247, 0, 0.1818181723356247, 0.12765957415103912, 0.1428571343421936, 0.09999999403953552, 0.06451612710952759, 0.10526315122842789, 0.13793103396892548, 0.12765957415103912, 0.06451612710952759, 0.09836065024137497, 0.17391303181648254, 0.20408162474632263, 0.40816324949264526, 0.26923075318336487, 0.1249999925494194, 0.15686273574829102, 0.3829787075519562, 0.15094339847564697, 0.19512194395065308, 0.14814814925193787, 0.10810810327529907, 0.178571417927742, 0.23255813121795654, 0.1249999925494194, 0.14999999105930328, 0.12903225421905518, 0.19230768084526062, 0.1538461446762085, 0.40816324949264526 ]
BkelnhNFwB
true
[ "Contrary to previous beliefs, the training performance of deep networks, when measured appropriately, is predictive of test performance, consistent with classical machine learning theory." ]
[ "We propose a \"plan online and learn offline\" framework for the setting where an agent, with an internal model, needs to continually act and learn in the world.", "Our work builds on the synergistic relationship between local model-based control, global value function learning, and exploration.", "We study how local trajectory optimization can cope with approximation errors in the value function, and can stabilize and accelerate value function learning.", "Conversely, we also study how approximate value functions can help reduce the planning horizon and allow for better policies beyond local solutions.", "Finally, we also demonstrate how trajectory optimization can be used to perform temporally coordinated exploration in conjunction with estimating uncertainty in value function approximation.", "This exploration is critical for fast and stable learning of the value function.", "Combining these components enable solutions to complex control tasks, like humanoid locomotion and dexterous in-hand manipulation, in the equivalent of a few minutes of experience in the real world.", "We consider a setting where an agent with limited memory and computational resources is dropped into a world.", "The agent has to simultaneously act in the world and learn to become proficient in the tasks it encounters.", "Let us further consider a setting where the agent has some prior knowledge about the world in the form of a nominal dynamics model.", "However, the state space of the world could be very large and complex, and the set of possible tasks very diverse.", "This complexity and diversity, combined with limited computational capability, rules out the possibility of an omniscient agent that has experienced all situations and knows how to act optimally in all states, even if the agent knows the dynamics.", "Thus, the agent has to act in the world while learning to become competent.Based on the knowledge of dynamics and its computational resources, the agent is imbued with a local search procedure in the form of trajectory optimization.", "While the agent would certainly benefit from the most powerful of trajectory optimization algorithms, it is plausible that very complex procedures are still insufficient or inadmissible due to the complexity or inherent unpredictability of the environment.", "Limited computational resources may also prevent these powerful methods from real-time operation.", "While the trajectory optimizer may be insufficient by itself, we show that it provides a powerful vehicle for the agent to explore and learn about the world.Due to the limited capabilities of the agent, a natural expectation is for the agent to be moderately competent for new tasks that occur infrequently and skillful in situations that it encounters repeatedly by learning from experience.", "Based on this intuition, we propose the plan online and learn offline (POLO) framework for continual acting and learning.", "POLO is based on the tight synergistic coupling between local trajectory optimization, global value function learning, and exploration.We will first provide intuitions for why there may be substantial performance degradation when acting greedily using an approximate value function.", "We also show that value function learning can be accelerated and stabilized by utilizing trajectory optimization integrally in the learning process, and that a trajectory optimization procedure in conjunction with an approximate value function can compute near optimal actions.", "In addition, exploration is critical to 
propagate global information in value function learning, and for trajectory optimization to escape local solutions and saddle points.", "FIG4: Examples of tasks solved with POLO.", "A 2D point agent navigating a maze without any directed reward signal, a complex 3D humanoid standing up from the floor, pushing a box, and in-hand re-positioning of a cube to various orientations with a five-fingered hand.", "Video demonstration of our results can be found at: https://sites.google.com/view/polo-mpc.", "In POLO, the agent forms hypotheses on potential reward regions, and executes temporally coordinated action sequences through trajectory optimization.", "This is in contrast to strategies like ε-greedy and Boltzmann exploration that explore at the granularity of individual timesteps.", "The use of trajectory optimization enables the agent to perform directed and efficient exploration, which in turn helps to find better global solutions. The setting studied in the paper models many problems of interest in robotics and artificial intelligence.", "Local trajectory optimization becomes readily feasible when a nominal model and computational resources are available to an agent, and can accelerate learning of novel task instances.", "In this work, we study the case where the internal nominal dynamics model used by the agent is accurate.", "Nominal dynamics models based on knowledge of physics, or through learning (Ljung, 1987), complement a growing body of work on successful simulation to reality transfer and system identification BID34 BID31 Lowrey et al., 2018; BID23 .", "Combining the benefits of local trajectory optimization for fast improvement with generalization enabled by learning is critical for robotic agents that live in our physical world to continually learn and acquire a large repertoire of skills.", "Through empirical evaluation, we wish to answer the following questions:", "1. Does trajectory optimization in conjunction with uncertainty estimation in value function approximation result in temporally coordinated exploration strategies?", "2. Can the use of an approximate value function help reduce the planning horizon for MPC?", "3. Does trajectory optimization enable faster and more stable value function learning?", "Before answering the questions in detail, we first point out that POLO can scale up to complex high-dimensional agents like 3D humanoid and dexterous anthropomorphic hand BID23 which are among the most complex control tasks studied in robot learning.", "Video demonstration can be found at: https://sites.google.com/view/polo-mpc", "In this work we presented POLO, which combines the strengths of trajectory optimization and value function learning.", "In addition, we studied the benefits of planning for exploration in settings where we track uncertainties in the value function.", "Together, these components enabled control of complex agents like 3D humanoid and five-fingered hand.", "In this work, we assumed access to an accurate internal dynamics model.", "A natural next step is to study the influence of approximation errors in the internal model and improving it over time using real-world interaction data." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3589743673801422, 0.1249999925494194, 0.22857142984867096, 0.1621621549129486, 0.10526315122842789, 0.2857142686843872, 0.19512194395065308, 0.1875, 0.12903225421905518, 0.1111111044883728, 0.06451612710952759, 0.12765957415103912, 0.17391303181648254, 0.08695651590824127, 0, 0.19354838132858276, 0.3030303120613098, 0.1538461446762085, 0.2666666507720947, 0.1818181723356247, 0.1249999925494194, 0, 0.05882352590560913, 0.23529411852359772, 0.12765957415103912, 0.14999999105930328, 0, 0.12244897335767746, 0.2448979616165161, 0, 0.12903225421905518, 0.13793103396892548, 0.07692307233810425, 0.19607843458652496, 0, 0.1249999925494194, 0.25, 0.13793103396892548, 0, 0.09999999403953552 ]
Byey7n05FQ
true
[ "We propose a framework that incorporates planning for efficient exploration and learning in complex environments." ]
[ "Modern neural network architectures take advantage of increasingly deeper layers, and various advances in their structure to achieve better performance.", "While traditional explicit regularization techniques like dropout, weight decay, and data augmentation are still being used in these new models, little about the regularization and generalization effects of these new structures have been studied. \n", "Besides being deeper than their predecessors, could newer architectures like ResNet and DenseNet also benefit from their structures' implicit regularization properties? \n", "In this work, we investigate the skip connection's effect on network's generalization features.", "Through experiments, we show that certain neural network architectures contribute to their generalization abilities.", "Specifically, we study the effect that low-level features have on generalization performance when they are introduced to deeper layers in DenseNet, ResNet as well as networks with 'skip connections'.", "We show that these low-level representations do help with generalization in multiple settings when both the quality and quantity of training data is decreased.", "Deep models have achieved significant success in many applications.", "However, deep models are hard to train and require longer times to converge.", "A solution by construction is copying the learned layers from the shallower model and setting additional layers to identity mapping.", "Skip connection proposed in the Residual Network BID0 , shows the new insight of innovation in network structure for computer vision.In the following years, more new and multi-layer-skipping structures have been proposed and proved to have better performance, among which one typical example is DenseNet BID1 .", "ResNet BID0 , HighwayNet (Rupesh Kumar BID3 and FractalNets BID2 have all succeeded by passing the deep information directly to the shallow layers via shortcut connection.", "Densenet further maximize the benefit of shortcut connections to the extreme.", "In DenseNet (more accurately in one dense block) every two layers has been linked, making each layer be able to use the information from all its previous layers.", "In doing this, DenseNet is able to effectively mitigate the problem of gradient vanishing or degradation, making the input features of each layer various and diverse and the calculation more efficient.Concatenation in Dense Block: the output of each layer will concatenate with its own input and then being passed forward to the next layer together.", "This makes the input characteristics of the next layer diversified and effectively improves the computation and helps the network to integrate shallow layer features to learn discriminative feature.", "Meanwhile, the neurons in the same Dense block are interconnected to achieve the effect of feature reused.", "This is why DenseNet does not need to be very wide and can achieve very good results.Therefore, shortcut connections form the multi-channel model, making the flow of information from input to output unimpeded.", "Gradient information can also be fed backward directly from the loss function to the the various nodes.In this paper we make the following contributions:• We design experiments to illustrate that on many occasions it is worth adding some skip connections while sacrificing some of the network width.", "Every single skip connection replacing some of width is able to benefit the whole network's learning ability.", "Our 'connection-by-connection' adding experiment results can 
indicate this well.", "We perform experiments to show that networks that reuse low-level features in subsequent layers perform better than a simple feed-forward model.", "We degrade both the quantity and the quality of the training data in different settings and compare the validation performances of these models.", "Our results suggest that while all models are able to achieve perfect training accuracy, both DenseNet and ResNet are able to exhibit better generalization performance given similar model complexities.", "We investigate solutions learned by the three types of networks in both a regression and a classification task in low dimensions and compare the effects of both the dense connections and the skip connections.", "We show that the contribution of the feature maps reintroduced to deeper layers via the connections allows for more representational power.", "By introducing skip connections, modern neural networks have achieved better performance in the computer vision area.", "This paper investigates how skip connections work in vision tasks and how they affect the learning power of networks.", "For this reason, we have designed some experiments and verified that networks with skip connections can do the regression best among the tested network architectures.", "It indicates that we can gain insights into this interesting architecture and its tremendous learning power." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.09302324801683426, 0.14814814925193787, 0, 0.1111111044883728, 0.05405404791235924, 0.23529411852359772, 0.1702127605676651, 0, 0, 0.04878048226237297, 0.158730149269104, 0.0416666604578495, 0.12121211737394333, 0.07999999821186066, 0.09090908616781235, 0.08888888359069824, 0.10526315122842789, 0.1111111044883728, 0.1230769157409668, 0.09999999403953552, 0.060606054961681366, 0.1428571343421936, 0.1463414579629898, 0.07999999821186066, 0.16326530277729034, 0.25, 0.052631575614213943, 0.24390242993831635, 0.12765957415103912, 0.19999998807907104 ]
rJbs5gbRW
true
[ "Our paper analyses the tremendous representational power of networks especially with 'skip connections', which may be used as a method for better generalization." ]
[ "One of the challenges in training generative models such as the variational auto encoder (VAE) is avoiding posterior collapse.", "When the generator has too much capacity, it is prone to ignoring latent code.", "This problem is exacerbated when the dataset is small, and the latent dimension is high.", "The root of the problem is the ELBO objective, specifically the Kullback–Leibler (KL) divergence term in objective function.", "This paper proposes a new objective function to replace the KL term with one that emulates the maximum mean discrepancy (MMD) objective.", "It also introduces a new technique, named latent clipping, that is used to control distance between samples in latent space.", "A probabilistic autoencoder model, named $\\mu$-VAE, is designed and trained on MNIST and MNIST Fashion datasets, using the new objective function and is shown to outperform models trained with ELBO and $\\beta$-VAE objective.", "The $\\mu$-VAE is less prone to posterior collapse, and can generate reconstructions and new samples in good quality.", "Latent representations learned by $\\mu$-VAE are shown to be good and can be used for downstream tasks such as classification. ", "Autoencoders(AEs) are used to learn low-dimensional representation of data.", "They can be turned into generative models by using adversarial, or variational training.", "In the adversarial approach, one can directly shape the posterior distribution over the latent variables by either using an additional network called a Discriminator (Makhzani et al., 2015) , or using the encoder itself as a discriminator (Huang et al., 2018) .", "AEs trained with variational methods are called Variational Autoencoders (VAEs) (Kingma & Ba, 2014; Rezende et al., 2014) .", "Their objective maximizes the variational lower bound (or evidence lower bound, ELBO) of p θ (x).", "Similar to AEs, VAEs contain two networks:", "Encoder -Approximate inference network: In the context of VAEs, the encoder is a recognition model q φ (z|x) 1 , which is an approximation to the true posterior distribution over the latent variables, p θ (z|x).", "The encoder tries to map high-level representations of the input x onto latent variables such that the salient features of x are encoded on z.", "Decoder -Generative network: The decoder learns a conditional distribution p θ (x|z) and has two tasks:", "i) For the task of reconstruction of input, it solves an inverse problem by taking mapped latent z computed using output of encoder and predicts what the original input is (i.e. 
reconstruction x ≈ x).", "ii) For generation of new data, it samples new data x , given the latent variables z.", "During training, encoder learns to map the data distribution p d (x) to a simple distribution such as Gaussian while the decoder learns to map it back to data distribution p(x) 2 .", "VAE's objective function has two terms: log-likelihood term (reconstruction term of AE objective function) and a prior regularization term 3 .", "Hence, VAEs add an extra term to AE objective function, and approximately maximizes the log-likelihood of the data, log p(x), by maximizing the evidence lower bound (ELBO):", "Maximizing ELBO does two things:", "• Increase the probability of generating each observed data x.", "• Decrease distance between estimated posterior q(z|x) and prior distribution p(z), pushing KL term to zero.", "Smaller KL term leads to less informative latent variable.", "Pushing KL terms to zero encourages the model to ignore latent variable.", "This is especially true when the decoder has a high capacity.", "This leads to a phenomenon called posterior collapse in literature (Razavi et al., 2019; Dieng et al., 2018; van den Oord et al., 2017; Bowman et al., 2015; Sønderby et al., 2016; Zhao et al., 2017) .", "This work proposes a new method to mitigate posterior collapse.", "The main idea is to modify the KL term of the ELBO such that it emulates the MMD objective (Gretton et al., 2007; Zhao et al., 2019) .", "In ELBO objective, minimizing KL divergence term pushes mean and variance parameters of each sample at the output of encoder towards zero and one respectively.", "This , in turn, brings samples closer, making them indistinguishable.", "The proposed method replaces the KL term in the ELBO in order to encourage samples from latent variable to spread out while keeping the aggregate mean of samples close to zero.", "This enables the model to learn a latent representation that is amenable to clustering samples which are similar.", "As shown in later sections, the proposed method enables learning good generative models as well as good representations of data.", "The details of the proposal are discussed in Section 4." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.05882352590560913, 0.0624999962747097, 0.1666666567325592, 0.949999988079071, 0.20512819290161133, 0.21739129722118378, 0.10810810327529907, 0.09999999403953552, 0.06896550953388214, 0, 0.07407406717538834, 0.051282044500112534, 0.05714285373687744, 0.07407406717538834, 0.07843136787414551, 0.0952380895614624, 0.0555555522441864, 0, 0.0555555522441864, 0.09302324801683426, 0.21621620655059814, 0.13333332538604736, 0, 0, 0.1666666567325592, 0.20689654350280762, 0.12903225421905518, 0.12903225421905518, 0.12765957415103912, 0.3333333432674408, 0.27272728085517883, 0.1860465109348297, 0.06666666269302368, 0.17777776718139648, 0.21621620655059814, 0, 0 ]
rJgWiaNtwH
true
[ "This paper proposes a new objective function to replace KL term with one that emulates maximum mean discrepancy (MMD) objective. " ]
[ "Likelihood-based generative models are a promising resource to detect out-of-distribution (OOD) inputs which could compromise the robustness or reliability of a machine learning system.", "However, likelihoods derived from such models have been shown to be problematic for detecting certain types of inputs that significantly differ from training data.", "In this paper, we pose that this problem is due to the excessive influence that input complexity has in generative models' likelihoods.", "We report a set of experiments supporting this hypothesis, and use an estimate of input complexity to derive an efficient and parameter-free OOD score, which can be seen as a likelihood-ratio, akin to Bayesian model comparison.", "We find such score to perform comparably to, or even better than, existing OOD detection approaches under a wide range of data sets, models, model sizes, and complexity estimates.", "Assessing whether input data is novel or significantly different than the one used in training is critical for real-world machine learning applications.", "Such data are known as out-of-distribution (OOD) inputs, and detecting them should facilitate safe and reliable model operation.", "This is particularly necessary for deep neural network classifiers, which can be easily fooled by OOD data (Nguyen et al., 2015) .", "Several approaches have been proposed for OOD detection on top of or within a neural network classifier (Hendrycks & Gimpel, 2017; Lakshminarayanan et al., 2017; Liang et al., 2018; Lee et al., 2018) .", "Nonetheless, OOD detection is not limited to classification tasks nor to labeled data sets.", "Two examples of that are novelty detection from an unlabeled data set and next-frame prediction from video sequences.", "A rather obvious strategy to perform OOD detection in the absence of labels (and even in the presence of them) is to learn a density model M that approximates the true distribution p * (X ) of training inputs x ∈ X (Bishop, 1994) .", "Then, if such approximation is good enough, that is, p(x|M) ≈ p * (x), OOD inputs should yield a low likelihood under model M. With complex data like audio or images, this strategy was long thought to be unattainable due to the difficulty of learning a sufficiently good model.", "However, with current approaches, we start having generative models that are able to learn good approximations of the density conveyed by those complex data.", "Autoregressive and invertible models such as PixelCNN++ (Salimans et al., 2017) and Glow (Kingma & Dhariwal, 2018) perform well in this regard and, in addition, can approximate p(x|M) with arbitrary accuracy.", "Figure 1 : Likelihoods from a Glow model trained on CIFAR10.", "Qualitatively similar results are obtained for other generative models and data sets (see also results in Choi et al., 2018; Nalisnick et al., 2019a) .", "trained on CIFAR10, generative models report higher likelihoods for SVHN than for CIFAR10 itself ( Fig. 1 ; data descriptions are available in Appendix A).", "Intriguingly, this behavior is not consistent across data sets, as other ones correctly tend to produce likelihoods lower than the ones of the training data (see the example of TrafficSign in Fig. 
1).", "A number of explanations have been suggested for the root cause of this behavior (Choi et al., 2018; Nalisnick et al., 2019a; Ren et al., 2019) but, to date, a full understanding of the phenomenon remains elusive.", "In this paper, we shed light on the above phenomenon, showing that likelihoods computed from generative models exhibit a strong bias towards the complexity of the corresponding inputs.", "We find that qualitatively complex images tend to produce the lowest likelihoods, and that simple images always yield the highest ones.", "In fact, we show a clear negative correlation between quantitative estimates of complexity and the likelihood of generative models.", "In the second part of the paper, we propose to leverage such estimates of complexity to detect OOD inputs.", "To do so, we introduce a widely-applicable OOD score for individual inputs that corresponds, conceptually, to a likelihood-ratio test statistic.", "We show that such a score turns likelihood-based generative models into practical and effective OOD detectors, with performances comparable to, or even better than the state-of-the-art.", "We base our experiments on an extensive collection of alternatives, including a pool of 12 data sets, two conceptually-different generative models, increasing model sizes, and three variants of complexity estimates.", "We illustrate a fundamental insight with regard to the use of generative models' likelihoods for the task of detecting OOD data.", "We show that input complexity has a strong effect on those likelihoods, and pose that it is the main culprit for the puzzling results of using generative models' likelihoods for OOD detection.", "In addition, we show that an estimate of input complexity can be used to compensate standard negative log-likelihoods in order to produce an efficient and reliable OOD score.", "Table: comparison with results (a) by Ren et al. (2019), (b) by Lee et al. (2018), and (c) by Choi et al. (2018).", "Results for Typicality test correspond to using batches of 2 samples of the same type.", "Trained on: FashionMNIST, CIFAR10; OOD data: MNIST, Omniglot, SVHN, CelebA, CIFAR100.", "Rows: Classifier-based approaches; ODIN (Liang et al., 2018); (Choi et al., 2018): 0.766, 0.796, 1.000, 0.997, -; Outlier exposure (Hendrycks et al., 2019): -, -, 0.758, -, 0.685; Typicality test (Nalisnick et al., 2019b): 0.140, -, 0.420, -, -; Likelihood-ratio (Ren et al., 2019): 0.997, -, 0.912, -, -; S using Glow and FLIF (ours): 0.998, 1.000, 0.950, 0.863, 0.736; S using PixelCNN++ and FLIF (ours): 0.967, 1.000, 0.929, 0.776, 0.535.", "We also offer an interpretation of our score as a likelihood-ratio akin to Bayesian model comparison.", "Such a score performs comparably to, or even better than several state-of-the-art approaches, with results that are consistent across a range of data sets, models, model sizes, and compression algorithms.", "The proposed score has no hyper-parameters besides the definition of a generative model and a compression algorithm, which makes it easy to employ in a variety of practical problems and situations." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2978723347187042, 0.21276594698429108, 0.3181818127632141, 0.145454540848732, 0.15094339847564697, 0.04444443807005882, 0.19512194395065308, 0.04347825422883034, 0.037735845893621445, 0.05405404791235924, 0.1463414579629898, 0.16129031777381897, 0.14492753148078918, 0.25, 0.03703703358769417, 0.05714285373687744, 0.1304347813129425, 0.1249999925494194, 0.11320754140615463, 0.1090909019112587, 0.2800000011920929, 0.2380952388048172, 0.1904761791229248, 0.19999998807907104, 0.1860465109348297, 0.20408162474632263, 0.1538461446762085, 0.3720930218696594, 0.37735849618911743, 0.09090908616781235, 0.1249999925494194, 0.06666666269302368, 0.10526315122842789, 0, 0.04444444179534912, 0.14999999105930328, 0.15094339847564697, 0.23529411852359772 ]
SyxIWpVYvr
true
[ "We pose that generative models' likelihoods are excessively influenced by the input's complexity, and propose a way to compensate it when detecting out-of-distribution inputs" ]
[ " Several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as RMSProp, Adam, Adadelta, Nadam are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients.", "In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings).", "We show that one cause for such failures is the exponential moving average used in the algorithms.", "We provide an explicit example of a simple convex optimization setting where Adam does not converge to the optimal solution, and describe the precise problems with the previous analysis of Adam algorithm.", "Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with ``long-term memory'' of past gradients, and propose new variants of the Adam algorithm which not only fix the convergence issues but often also lead to improved empirical performance.", "Stochastic gradient descent (SGD) is the dominant method to train deep networks today.", "This method iteratively updates the parameters of a model by moving them in the direction of the negative gradient of the loss evaluated on a minibatch.", "In particular, variants of SGD that scale coordinates of the gradient by square roots of some form of averaging of the squared coordinates in the past gradients have been particularly successful, because they automatically adjust the learning rate on a per-feature basis.", "The first popular algorithm in this line of research is ADAGRAD BID2 BID5 , which can achieve significantly better performance compared to vanilla SGD when the gradients are sparse, or in general small.Although ADAGRAD works well for sparse settings, its performance has been observed to deteriorate in settings where the loss functions are nonconvex and gradients are dense due to rapid decay of the learning rate in these settings since it uses all the past gradients in the update.", "This problem is especially exacerbated in high dimensional problems arising in deep learning.", "To tackle this issue, several variants of ADAGRAD, such as RMSPROP BID7 , ADAM BID3 , ADADELTA (Zeiler, 2012) , NADAM BID1 , etc, have been proposed which mitigate the rapid decay of the learning rate using the exponential moving averages of squared past gradients, essentially limiting the reliance of the update to only the past few gradients.", "While these algorithms have been successfully employed in several practical applications, they have also been observed to not converge in some other settings.", "It has been typically observed that in these settings some minibatches provide large gradients but only quite rarely, and while these large gradients are quite informative, their influence dies out rather quickly due to the exponential averaging, thus leading to poor convergence.In this paper, we analyze this situation in detail.", "We rigorously prove that the intuition conveyed in the above paragraph is indeed correct; that limiting the reliance of the update on essentially only the past few gradients can indeed cause significant convergence issues.", "In particular, we make the following key contributions:• We elucidate how the exponential moving average in the RMSPROP and ADAM algorithms can cause non-convergence by providing an example of simple convex optimization prob-lem where RMSPROP and ADAM provably do not converge to an optimal 
solution.", "Our analysis easily extends to other algorithms using exponential moving averages such as ADADELTA and NADAM as well, but we omit this for the sake of clarity.", "In fact, the analysis is flexible enough to extend to other algorithms that employ averaging of squared gradients over essentially a fixed-size window (for exponential moving averages, the influences of gradients beyond a fixed window size become negligibly small) in the immediate past.", "We omit the general analysis in this paper for the sake of clarity.", "The above result indicates that in order to have guaranteed convergence the optimization algorithm must have \"long-term memory\" of past gradients.", "Specifically, we point out a problem with the proof of convergence of the ADAM algorithm given by BID3 .", "To resolve this issue, we propose new variants of ADAM which rely on long-term memory of past gradients, but can be implemented in the same time and space requirements as the original ADAM algorithm.", "We provide a convergence analysis for the new variants in the convex setting, based on the analysis of BID3 , and show a data-dependent regret bound similar to the one in ADAGRAD.", "We provide a preliminary empirical study of one of the variants we proposed and show that it either performs similarly, or better, on some commonly used problems in machine learning.", "In this paper, we study exponential moving average variants of ADAGRAD and identify an important flaw in these algorithms which can lead to undesirable convergence behavior.", "We demonstrate these problems through carefully constructed examples where RMSPROP and ADAM converge to highly suboptimal solutions.", "In general, any algorithm that relies on an essentially fixed-size window of past gradients to scale the gradient updates will suffer from this problem. We proposed fixes to this problem by slightly modifying the algorithms, essentially endowing the algorithms with a long-term memory of past gradients.", "These fixes retain the good practical performance of the original algorithms, and in some cases actually show improvements. The primary goal of this paper is to highlight the problems with popular exponential moving average variants of ADAGRAD from a theoretical perspective.", "RMSPROP and ADAM have been immensely successful in the development of several state-of-the-art solutions for a wide range of problems.", "Thus, it is important to understand their behavior in a rigorous manner and be aware of potential pitfalls while using them in practice.", "We believe this paper is a first step in this direction and suggests good design principles for faster and better stochastic optimization." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.12121211737394333, 0.23728813230991364, 0.1818181723356247, 0.3571428656578064, 0.3333333432674408, 0.09756097197532654, 0.1249999925494194, 0.12903225421905518, 0.2222222238779068, 0.04999999701976776, 0.1621621549129486, 0.25, 0.1944444328546524, 0.178571417927742, 0.38805970549583435, 0.18518517911434174, 0.158730149269104, 0.19512194395065308, 0.2448979616165161, 0.17391303181648254, 0.3050847351551056, 0.4000000059604645, 0.1818181723356247, 0.3396226465702057, 0.2222222238779068, 0.1515151411294937, 0.2153846174478531, 0.1304347813129425, 0.1599999964237213, 0.1666666567325592 ]
ryQu7f-RZ
true
[ "We investigate the convergence of popular optimization algorithms like Adam , RMSProp and propose new variants of these methods which provably converge to optimal solution in convex settings. " ]
[ "Targeted clean-label poisoning is a type of adversarial attack on machine learning systems where the adversary injects a few correctly-labeled, minimally-perturbed samples into the training data thus causing the deployed model to misclassify a particular test sample during inference.", "Although defenses have been proposed for general poisoning attacks (those which aim to reduce overall test accuracy), no reliable defense for clean-label attacks has been demonstrated, despite the attacks' effectiveness and their realistic use cases.", "In this work, we propose a set of simple, yet highly-effective defenses against these attacks. \n", "We test our proposed approach against two recently published clean-label poisoning attacks, both of which use the CIFAR-10 dataset.", "After reproducing their experiments, we demonstrate that our defenses are able to detect over 99% of poisoning examples in both attacks and remove them without any compromise on model performance.", "Our simple defenses show that current clean-label poisoning attack strategies can be annulled, and serve as strong but simple-to-implement baseline defense for which to test future clean-label poisoning attacks.", "Machine-learning-based systems are increasingly deployed in settings with high societal impact, such as biometric applications (Sun et al., 2014) and hate speech detection on social networks (Rizoiu et al., 2019) , as well as settings with high cost of failure, such as autonomous driving (Chen et al., 2017a) and malware detection (Pascanu et al., 2015) .", "In such settings, robustness to not just noise but also adversarial manipulation of system behavior is paramount.", "Complicating matters is the increasing reliance of machine-learning-based systems on training data sourced from public and semi-public places such as social networks, collaboratively-edited forums, and multimedia posting services.", "Sourcing data from uncontrolled environments begets a simple attack vector: an adversary can strategically inject data that can manipulate or degrade system performance.", "Data poisoning attacks on neural networks occur at training time, wherein an adversary places specially-constructed poison instances into the training data with the intention of manipulating the performance of a classifier at test time.", "Most work on data poisoning has focused on either", "(i) an attacker generating a small fraction of new training inputs to degrade overall model performance, or", "(ii) a defender aiming to detect or otherwise mitigate the impact of that attack; for a recent overview, see Koh et al. 
(2018) .", "In this paper, we focus on clean-label data poisoning (Shafahi et al., 2018) , where an attacker injects a few correctly-labeled, minimally-perturbed samples into the training data.", "In contrast to traditional data poisoning, these samples are crafted to cause the model to misclassify a particular target test sample during inference.", "These attacks are plausible in a wide range of applications, as they do not require the attacker to have control over the labeling process.", "The attacker merely inserts apparently benign data into the training process, for example by posting images online which are scraped and (correctly) labeled by human labelers.", "Our contribution: In this paper, we initiate the study of defending against clean-label poisoning attacks on neural networks.", "We begin with a defense that exploits the fact that though the raw poisoned examples are not easily detected by human labelers, the feature representations of poisons are anomalous among the feature representations for data points with their (common) label.", "This intuition lends itself to a defense based on k nearest neighbors (k-NN) in the feature space; furthermore, the parameter k yields a natural lever for trading off between the power of the attack against which it can defend and the impact of running the defense on overall (unpoisoned) model accuracy.", "Next, we adapt a recent traditional data poisoning defense (Steinhardt et al., 2017; Koh et al., 2018) to the clean-label case, and show that-while still simple to implement-its performance in both precision and recall of identifying poison instances is worse than our proposed defense.", "We include a portfolio of additional baselines as well.", "For each defense, we test against state-of-the-art clean-label data poisoning attacks, using a slate of architectures, and show that our initial defense detects nearly all (99%+) of the poison instances without degrading overall performance.", "In summary, we have demonstrated that the simple k-NN baseline approach provides an effective defense against clean-label poisoning attacks with minimal degradation in model performance.", "The k-NN defense mechanism identifies virtually all poisons from two state-of-the-art clean label data poisoning attacks, while only filtering a small percentage of non-poisons.", "The k-NN defense outperforms other simple baselines against the existing attacks; these defenses provide benchmarks that could be used to measure the efficacy of future defense-aware clean label attacks.", "In the bottom two rows, filtered and non-filtered nonpoisons are shown-again there are not visually distinctive differences between pictures in the same class that are filtered rather than not filtered." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13636364042758942, 0.24390242993831635, 0.23999999463558197, 0.2142857164144516, 0.20512820780277252, 0.277777761220932, 0, 0.07692307233810425, 0, 0, 0.10526315122842789, 0.11764705181121826, 0.07692307233810425, 0.06451612710952759, 0.11428570747375488, 0.06666666269302368, 0.1249999925494194, 0, 0.2222222238779068, 0.04878048598766327, 0.03999999538064003, 0.12244897335767746, 0.1111111044883728, 0.0952380895614624, 0.23529411852359772, 0.060606054961681366, 0.1621621549129486, 0 ]
B1xgv0NtwH
true
[ "We present effective defenses to clean-label poisoning attacks. " ]
[ "Abstract reasoning, particularly in the visual domain, is a complex human ability, but it remains a challenging problem for artificial neural learning systems.", "In this work we propose MXGNet, a multilayer graph neural network for multi-panel diagrammatic reasoning tasks.", "MXGNet combines three powerful concepts, namely, object-level representation, graph neural networks and multiplex graphs, for solving visual reasoning tasks.", "MXGNet first extracts object-level representations for each element in all panels of the diagrams, and then forms a multi-layer multiplex graph capturing multiple relations between objects across different diagram panels.", "MXGNet summarises the multiple graphs extracted from the diagrams of the task, and uses this summarisation to pick the most probable answer from the given candidates.", "We have tested MXGNet on two types of diagrammatic reasoning tasks, namely Diagram Syllogisms and Raven Progressive Matrices (RPM).", "For an Euler Diagram Syllogism task MXGNet achieves state-of-the-art accuracy of 99.8%. ", "For PGM and RAVEN, two comprehensive datasets for RPM reasoning, MXGNet outperforms the state-of-the-art models by a considerable margin.", "Abstract reasoning has long been thought of as a key part of human intelligence, and a necessary component towards Artificial General Intelligence.", "When presented in complex scenes, humans can quickly identify elements across different scenes and infer relations between them.", "For example, when you are using a pile of different types of LEGO bricks to assemble a spaceship, you are actively inferring relations between each LEGO brick, such as in what ways they can fit together.", "This type of abstract reasoning, particularly in the visual domain, is a crucial key to human ability to build complex things.", "Many tests have been proposed to measure human ability for abstract reasoning.", "The most popular test in the visual domain is the Raven Progressive Matrices (RPM) test (Raven (2000) ).", "In the RPM test, the participants are asked to view a sequence of contextual diagrams, usually given as a 3 × 3 matrices of diagrams with the bottom-right diagram left blank.", "Participants should infer abstract relationships in rows or columns of the diagram, and pick from a set of candidate answers the correct one to fill in the blank.", "Figures 1 (a) shows an example of RPM tasks containing XOR relations across diagrams in rows.", "More examples can be found in Appendix C. Another widely used test for measuring reasoning in psychology is Diagram Syllogism task (Sato et al. (2015) ), where participants need to infer conclusions based on 2 given premises.", "Figure 1c shows an example of Euler Diagram Syllogism task.", "Barrett et al. (2018) recently published a large and comprehensive RPM-style dataset named Procedurally Generated Matrices 'PGM', and proposed Wild Relation Network (WReN), a state-of-the-art neural net for RPM-style tasks.", "While WReN outperforms other state-of-the-art vision models such as Residual Network He et al. (2016) , the performance is still far from deep neural nets' performance on other vision or natural language processing tasks.", "Recently, there has been a focus on object-level representations (Yi et al. (2018) ; Hu et al. (2017) ; Hudson & Manning (2018) ; Mao et al. (2019) ; Teney et al. (2017) ; Zellers et al. 
(2018) ) for visual reasoning tasks, which enable the use of inductive-biased architectures such as symbolic programs and scene graphs to directly capture relations between objects.", "For RPM-style tasks, symbolic programs are less suitable as these programs are generated from given questions in the Visual-Question Answering setting.", "In RPM-style tasks there are no explicit questions.", "Encoding RPM tasks into graphs is a more natural choice.", "However, previous works on scene graphs (Teney et al. (2017) ; Zellers et al. (2018) ) model a single image as graphs, which is not suitable for RPM tasks as there are many different layers of relations across different subsets of diagrams in a single task.", "In this paper we introduce MXGNet, a multi-layer multiplex graph neural net architecture for abstract diagram reasoning.", "Here 'Multi-layer' means the graphs are built across different diagram panels, where each diagram is a layer.", "'Multiplex' means that edges of the graphs encode multiple relations between different element attributes, such as colour, shape and position.", "Multiplex networks are discussed in detail by Kao & Porter (2018) .", "We first tested the application of multiplex graph on a Diagram Syllogism dataset (Wang et al. (2018a) ), and confirmed that multiplex graph improves performance on the original model.", "For RPM task, MXGNet encodes subsets of diagram panels into multi-layer multiplex graphs, and combines summarisation of several graphs to predict the correct candidate answer.", "With a hierarchical summarisation scheme, each graph is summarised into feature embeddings representing relationships in the subset.", "These relation embeddings are then combined to predict the correct answer.", "For PGM dataset (Barrett et al. (2018) (Zhang et al. 
(2019) ), MXGNet, without any auxiliary training with additional labels, achieves 83.91% test accuracy, outperforming 59.56% accuracy by the best model with auxiliary training for the RAVEN dataset.", "We also show that MXGNet is robust to variations in forms of object-level representations.", "Both variants of MXGNet achieve higher test accuracies than existing best models for the two datasets.", "We presented MXGNet, a new graph-based approach to diagrammatic reasoning problems in the style of Raven Progressive Matrices (RPM).", "MXGNet combines three powerful ideas, namely, object-level representation, graph neural networks and multiplex graphs, to capture relations present in the reasoning task.", "Through experiments we showed that MXGNet performs better than previous models on two RPM datasets.", "We also showed that MXGNet has better generalisation performance.", "One important direction for future work is to make MXGNet interpretable, and thereby extract logic rules from MXGNet.", "Currently, the learnt representations in MXGNet are still entangled, providing little in the way of understanding its mechanism of reasoning.", "Rule extraction can provide people with better understanding of the reasoning problem, and may allow neural networks to work seamlessly with more programmable traditional logic engines.", "While the multi-layer multiplex graph neural network is designed for RPM style reasoning task, it can be readily extended to other diagrammatic reasoning tasks where relations are present between multiple elements across different diagrams.", "One example of a real-world application scenario is robots assembling parts of an object into a whole, such as building a LEGO model from a room of LEGO blocks.", "MXGNet provides a suitable way of capturing relations between parts, such as ways of piecing and locking two parts together." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10256409645080566, 0.3030303120613098, 0.277777761220932, 0.17391303181648254, 0.052631575614213943, 0.2222222238779068, 0.12903225421905518, 0.1111111044883728, 0.10810810327529907, 0, 0.0416666604578495, 0.10810810327529907, 0.06896550953388214, 0.060606054961681366, 0.04651162400841713, 0.04878048226237297, 0.060606054961681366, 0.15094339847564697, 0, 0.09090908616781235, 0.1666666567325592, 0.1230769157409668, 0, 0.07999999821186066, 0.2222222238779068, 0.178571417927742, 0.29411762952804565, 0.12121211737394333, 0, 0, 0.2380952388048172, 0.09756097197532654, 0.1764705777168274, 0, 0.03999999538064003, 0.12903225421905518, 0.060606054961681366, 0.1666666567325592, 0.20512819290161133, 0.1249999925494194, 0.1538461446762085, 0.11764705181121826, 0.11764705181121826, 0.0476190410554409, 0.23999999463558197, 0.09999999403953552, 0.1111111044883728 ]
ByxQB1BKwH
true
[ "MXGNet is a multilayer, multiplex graph based architecture which achieves good performance on various diagrammatic reasoning tasks." ]
[ "Semantic structure extraction for spreadsheets includes detecting table regions, recognizing structural components and classifying cell types.", "Automatic semantic structure extraction is key to automatic data transformation from various table structures into canonical schema so as to enable data analysis and knowledge discovery.", "However, they are challenged by the diverse table structures and the spatial-correlated semantics on cell grids.", "To learn spatial correlations and capture semantics on spreadsheets, we have developed a novel learning-based framework for spreadsheet semantic structure extraction.", "First, we propose a multi-task framework that learns table region, structural components and cell types jointly; second, we leverage the advances of the recent language model to capture semantics in each cell value; third, we build a large human-labeled dataset with broad coverage of table structures.", "Our evaluation shows that our proposed multi-task framework is highly effective that outperforms the results of training each task separately.", "Spreadsheets are the most popular end-user development tool for data management and analysis.", "Unlike programming languages or databases, no syntax, data models or even vague standards are enforced for spreadsheets.", "Figure1(a) shows a real-world spreadsheet.", "To enable intelligent data analysis and knowledge discovery for the data in range B4:H24, one needs to manually transform the data to a standard form as shown in Figure1(e).", "It would be highly desirable to develop techniques to extract the semantic structure information for automated spreadsheet data transformation.", "Semantic structure extraction entails three chained tasks to: (1) We also show the transformed data in Figure1(e", "), where different cell types are highlighted using the same coloring scheme as in Figure1(d", ").", "Learning the semantic structure for spreadsheets is challenging.", "While table detection is confounded by the diverse multi-table layouts, component recognition is confounded by the various structures of table components , and cell type classification requires semantic-level understanding of cell values.", "Moreover, the tasks are chained in the sense that latter tasks need to leverage the outcomes of prior tasks.", "This poses challenges on preventing error propagation, but also provides opportunities for utilizing additional cues from other tasks to improve the current task.", "For example, header extraction may help table detection since headers need to be inside the table region and vice versa.", "In this paper, we present a multi-task learning framework to solve spreadsheet table detection, component recognition, and cell type classification jointly.", "Our contributions are as follows: 1.", "We formulate spreadsheet table structure extraction as a coarse-to-fine process including table detection, component recognition, and cell type classification.", "We also build a large labeled dataset.", "2. To capture the rich information in spreadsheet cells for model training, we devise a featurization scheme containing both hand-crafted features and model-based semantic representations.", "3. 
We propose a multi-task framework that can be trained to simultaneously locate table ranges, recognize table components and extract cell types.", "Our evaluation shows that the proposed multi-task framework is highly effective and outperforms the results of training each task separately.", "Cell type classification is the task of classifying each cell into a certain type such as value, value name, index, and index name.", "A value is a basic unit in the value region.", "A value name is a summary term that describes values.", "As shown in Figure 1(a), \"Cost\" at E6 is a value name to describe the values in E8:H24.", "After the data extraction, as shown in Figure 1(e), \"Cost\" at D1 is the label of Column D. An index refers to individual values that can be used for indexing data records.", "In Figure 1(a), \"January\" -\"October\" at E5:H5 are indexes of columns E -H respectively.", "A group of indexes is used to break down the dataset into subsets.", "After data transformation, it will form a single data field as Column C shows in Figure 1(e).", "An index name is a summary term that describes the indexes.", "In the previous example, \"Month\" is the index name of indexes \"January\" -\"October\".", "After data transformation, the \"Month\" in Figure 1(a) corresponds to the column label at C1 in Figure 1(e)." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.20512819290161133, 0.12765957415103912, 0.15789473056793213, 0.3181818127632141, 0.32258063554763794, 0.1904761791229248, 0.1111111044883728, 0.05128204822540283, 0.1428571343421936, 0.12765957415103912, 0.1463414579629898, 0.04999999329447746, 0.052631575614213943, 0.12903225421905518, 0.2916666567325592, 0.052631575614213943, 0.04347825422883034, 0.0952380895614624, 0.5, 0, 0.4878048598766327, 0.13333332538604736, 0.2083333283662796, 0.40909090638160706, 0.19512194395065308, 0.2222222238779068, 0.0624999962747097, 0.12121211737394333, 0.05128204822540283, 0.07692307233810425, 0, 0, 0.052631575614213943, 0.11764705181121826, 0, 0 ]
r1x3GTq5IB
true
[ "We propose a novel multi-task framework that learns table detection, semantic component recognition and cell type classification for spreadsheet tables with promising results." ]
[ "Open-domain dialogue generation has gained increasing attention in Natural Language Processing.", "Comparing these methods requires a holistic means of dialogue evaluation.", "Human ratings are deemed as the gold standard.", "As human evaluation is inefficient and costly, an automated substitute is desirable.", "In this paper, we propose holistic evaluation metrics which capture both the quality and diversity of dialogues.", "Our metrics consists of (1) GPT-2 based context coherence between sentences in a dialogue, (2) GPT-2 based fluency in phrasing, and, (3) $n$-gram based diversity in responses to augmented queries.", "The empirical validity of our metrics is demonstrated by strong correlation with human judgments.", "We provide the associated code, datasets and human ratings.", "This paper provides a holistic and automatic evaluation method of open-domain dialogue models.", "In contrast to prior art, our means of evaluation captures not only the quality of generation, but also the diversity of responses.", "We recruit GPT-2 as a strong language model to evaluate the fluency and context-coherency of a dialogue.", "For diversity evaluation, the diversity of queries is controlled while the diversity of responses is evaluated by n-gram entropy.", "Two methods for controlled diversity are proposed, WordNet Substitution and Conditional Text Generator.", "The proposed metrics show strong correlation with human judgments.", "We are providing the implementations of our proposed metrics, associated fine-tuned models and datasets to accelerate the research on open-domain dialogue systems.", "It is our hope the proposed holistic metrics may pave the way towards comparability of open-domain dialogue methods." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.07407406717538834, 0.07692307233810425, 0, 0.2222222238779068, 0.24242423474788666, 0.09756097197532654, 0.19999998807907104, 0.23999999463558197, 0.20689654350280762, 0.11428570747375488, 0.25, 0, 0.06896550953388214, 0.23999999463558197, 0.1621621549129486, 0.060606054961681366 ]
BJg_FgBtPH
true
[ "We propose automatic metrics to holistically evaluate open-dialogue generation and they strongly correlate with human evaluation." ]
[ "Convolution Neural Network (CNN) has gained tremendous success in computer vision tasks with its outstanding ability to capture the local latent features.", "Recently, there has been an increasing interest in extending CNNs to the general spatial domain.", "Although various types of graph convolution and geometric convolution methods have been proposed, their connections to traditional 2D-convolution are not well-understood.", "In this paper, we show that depthwise separable convolution is a path to unify the two kinds of convolution methods in one mathematical view, based on which we derive a novel Depthwise Separable Graph Convolution that subsumes existing graph convolution methods as special cases of our formulation.", "Experiments show that the proposed approach consistently outperforms other graph convolution and geometric convolution baselines on benchmark datasets in multiple domains.", "Convolution Neural Network (CNN) BID14 has been proven to be an efficient model family in extracting hierarchical local patterns from grid-structured data, which has significantly advanced the state-of-the-art performance of a wide range of machine learning tasks, including image classification, object detection and audio recognition BID15 .", "Recently, growing attention has been paid to dealing with data with an underlying graph/non-Euclidean structure, such as prediction tasks in sensor networks BID23 , transportation systems , and 3D shape correspondence application in the computation graphics BID2 .", "How to replicate the success of CNNs for manifold-structured data remains an open challenge.Many graph convolution and geometric convolution methods have been proposed recently.", "The spectral convolution methods BID3 BID5 BID11 are the mainstream algorithm developed as the graph convolution methods.", "Because their theory is based on the graph Fourier analysis BID20 ), one of their major limitations is that in this model the knowledge learned from different graphs is not transferrable BID19 .", "Other group of approaches is geometric convolution methods, which focuses on various ways to leverage spatial information about nodes BID17 BID19 .", "Existing models mentioned above are either not capable of capturing spatial-wise local information as in the standard convolution, or tend to have very large parameter space and hence, are prone to overfitting.", "As a result, both the spectral and the geometric convolution methods have not produced the results comparable to CNNs on related tasks.", "Such a misalignment makes it harder to leverage the rapidly developing 2D-convolution techniques in the generic spatial domain.", "We note graph convolution methods are also widely used in the pure graph structure data, like citation networks and social networks BID11 .", "Our paper will only focus on the data with the spatial information.In this paper, we provide a unified view of the graph convolution and traditional 2D-convolution methods with the label propagation process BID24 .", "It helps us better understand and compare the difference between them.", "Based on it, we propose a novel Depthwise Separable Graph Convolution (DSGC), which inherits the strength of depthwise separable convolution that has been extensively used in different state-of-the-art image classification frameworks including Inception Network BID22 , Xception Network BID4 and MobileNet (Howard et al., 2017) .", "Compared with previous graph and geometric methods, the DSGC is more expressive and aligns closer to the depthwise 
separable convolution network, and shares the desirable characteristic of small parameter size as in the depthwise separable convolution.", "In experiments section, we evaluate the DSGC and baselines in three different machine learning tasks.", "The experiment results show that the performance of the proposed method is close to the standard convolution network in the image classification task on CIFAR dataset.", "And it outperforms previous graph convolution and geometric convolution methods in all tasks.", "Furthermore, we demonstrate that the proposed method can easily leverage the advanced technique developed for the standard convolution network to enhance the model performance, such as the Inception module BID22 , the DenseNet architecture BID8 and the Squeeze-and-Excitation block BID7 .The", "main contribution of this paper is threefold:• A unified view of traditional 2D-convolution and graph convolution methods by introducing depthwise separable convolution.• A", "novel Depthwise Separable Graph Convolution (DSGC) for spatial domain data.• We", "demonstrate the efficiency of the DSGC with extensive experiments and show that it can facilitate the advanced technique of the standard convolution network to improve the model performance." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.13333332538604736, 0.15789473056793213, 0.04651162400841713, 0.380952388048172, 0.09302324801683426, 0.1492537260055542, 0.07017543166875839, 0.12765957415103912, 0.10810810327529907, 0.07843136787414551, 0.1818181723356247, 0.037735845893621445, 0.1395348757505417, 0.25, 0.1860465109348297, 0.18867923319339752, 0.05882352590560913, 0.3283582031726837, 0.23529411852359772, 0.052631575614213943, 0.1304347813129425, 0.05714285373687744, 0.10344827175140381, 0.1818181723356247, 0.5714285373687744, 0.1304347813129425 ]
H139Q_gAW
true
[ "We devise a novel Depthwise Separable Graph Convolution (DSGC) for the generic spatial domain data, which is highly compatible with depthwise separable convolution." ]
[ "Generating musical audio directly with neural networks is notoriously difficult because it requires coherently modeling structure at many different timescales.", "Fortunately, most music is also highly structured and can be represented as discrete note events played on musical instruments.", "Herein, we show that by using notes as an intermediate representation, we can train a suite of models capable of transcribing, composing, and synthesizing audio waveforms with coherent musical structure on timescales spanning six orders of magnitude (~0.1 ms to ~100 s), a process we call Wave2Midi2Wave.", "This large advance in the state of the art is enabled by our release of the new MAESTRO (MIDI and Audio Edited for Synchronous TRacks and Organization) dataset, composed of over 172 hours of virtuosic piano performances captured with fine alignment (~3 ms) between note labels and audio waveforms.", "The networks and the dataset together present a promising approach toward creating new expressive and interpretable neural models of music.", "Since the beginning of the recent wave of deep learning research, there have been many attempts to create generative models of expressive musical audio de novo.", "These models would ideally generate audio that is both musically and sonically realistic to the point of being indistinguishable to a listener from music composed and performed by humans.However, modeling music has proven extremely difficult due to dependencies across the wide range of timescales that give rise to the characteristics of pitch and timbre (short-term) as well as those of rhythm (medium-term) and song structure (long-term).", "On the other hand, much of music has a large hierarchy of discrete structure embedded in its generative process: a composer creates songs, sections, and notes, and a performer realizes those notes with discrete events on their instrument, creating sound.", "The division between notes and sound is in many ways analogous to the division between symbolic language and utterances in speech.The WaveNet model by BID18 may be the first breakthrough in generating musical audio directly with a neural network.", "Using an autoregressive architecture, the authors trained a model on audio from piano performances that could then generate new piano audio sample-bysample.", "However, as opposed to their highly convincing speech examples, which were conditioned on linguistic features, the authors lacked a conditioning signal for their piano model.", "The result was audio that sounded very realistic at very short time scales (1 or 2 seconds), but that veered off into chaos beyond that.", "BID4 made great strides towards providing longer term structure to WaveNet synthesis by implicitly modeling the discrete musical structure described above.", "This was achieved by training a hierarchy of VQ-VAE models at multiple time-scales, ending with a WaveNet decoder to generate piano audio as waveforms.", "While the results are impressive in their ability to capture long-term structure directly from audio waveforms, the resulting sound suffers from various artifacts at the fine-scale not present in the unconditional WaveNet, clearly distinguishing it from real musical audio.", "Also, while the model learns a version of discrete structure from the audio, it is not Transcription: onsets & frames (section 4) Synthesis: conditional WaveNet (section 6) Piano roll (MIDI)", "We have demonstrated the Wave2Midi2Wave system of models for factorized piano music modeling, all enabled by the new MAESTRO 
dataset.", "In this paper we have demonstrated all capabilities on the same dataset, but thanks to the new state-of-the-art piano transcription capabilities, any large set of piano recordings could be used, 6 which we plan to do in future work.", "After transcribing the recordings, the transcriptions could be used to train a WaveNet and a Music Transformer model, and then new compositions could be generated with the Transformer and rendered with the WaveNet.", "These new compositions would have similar musical characteristics to the music in the original dataset, and the audio renderings would have similar acoustical characteristics to the source piano.The most promising future work would be to extend this approach to other instruments or even multiple simultaneous instruments.", "Finding a suitable training dataset and achieving sufficient transcription performance will likely be the limiting factors.The new dataset (MIDI, audio, metadata, and train/validation/test split configurations) is available at https://g.co/magenta/maestro-datasetunder a Creative Commons Attribution NonCommercial Share-Alike 4.0 license.", "The Online Supplement, including audio examples, is available at https://goo.gl/magenta/maestro-examples." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1395348757505417, 0.0952380895614624, 0.4848484694957733, 0.307692289352417, 0.3333333432674408, 0.21739129722118378, 0.18421052396297455, 0.17241378128528595, 0.25, 0.1860465109348297, 0.08510638028383255, 0.04444443807005882, 0.1395348757505417, 0.30434781312942505, 0.1090909019112587, 0.11764705181121826, 0.4285714328289032, 0.10344827175140381, 0.2666666507720947, 0.17241378128528595, 0.1666666567325592, 0.05882352590560913 ]
r1lYRjC9F7
true
[ "We train a suite of models capable of transcribing, composing, and synthesizing audio waveforms with coherent musical structure, enabled by the new MAESTRO dataset." ]
[ "Variational inference based on chi-square divergence minimization (CHIVI) provides a way to approximate a model's posterior while obtaining an upper bound on the marginal likelihood.", "However, in practice CHIVI relies on Monte Carlo (MC) estimates of an upper bound objective that at modest sample sizes are not guaranteed to be true bounds on the marginal likelihood.", "This paper provides an empirical study of CHIVI performance on a series of synthetic inference tasks.", "We show that CHIVI is far more sensitive to initialization than classic VI based on KL minimization, often needs a very large number of samples (over a million), and may not be a reliable upper bound.", "We also suggest possible ways to detect and alleviate some of these pathologies, including diagnostic bounds and initialization strategies.", "Estimating the marginal likelihood in probabilistic models is the holy grail of Bayesian inference.", "Marginal likelihoods allow us to compute the posterior probability of model parameters or perform Bayesian model selection (Bishop et al., 1995) .", "While exact computation of the marginal is not tractable for most models, variational inference (VI) (Jordan et al., 1999 ) offers a promising and scalable approximation.", "VI suggests choosing a simple family of approximate distributions q and then optimizing the parameters of q to minimize its divergence from the true (intractable) posterior.", "The canonical choice is the KL divergence, where minimizing corresponds to tightening a lower bound on the marginal likelihood.", "Recently, (Dieng et al., 2017a) showed that minimizing a χ 2 divergence leads to a chi-divergence upper bound (\"CUBO\").", "Practitioners often wish to combine upper and lower bound estimates to \"sandwich\" the model evidence in a narrow range for later decision making, so the CUBO's flexible applicability to all latent variable models is appealing.", "However, both the estimation of the upper bound and computing its gradient for minimization require Monte Carlo estimators to approximate tough integrals.", "These estimators may have large variance even at modest number of samples.", "A natural question is then how reliable CUBO minimization is in practice.", "In this paper, we provide empirical evidence that CUBO optimization is often tricky, and the bound itself ends up being too loose even Figure 1: Minimizing χ 2 divergence using MC gradient estimates via the reparametrization trick can be challenging even with simple univariate Gaussian distributions.", "Each column shows results under a different number of MC samples.", "The last column compares ELBO and CUBO traces for S = 10 4 ; diamonds correspond to sanity-check estimator from Eq. (2).", "Top row : variational parameter traces with fixed true variance but changing starting mean locations.", "Bottom row: same, but with fixed true mean and changing start variance values.", "using hundreds of samples.", "Our contributions include:", "i) evaluation of the CUBO in two simple scenarios, and comparison to other bounds to gauge its utility;", "ii) empirical analysis of CUBO optimization in both scenarios, in terms of convergence rate and sensitivity to the number of samples;", "iii) review of alternative upper bounds and best practices for diagnosing and testing new bounds." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.27272728085517883, 0.15686273574829102, 0.277777761220932, 0.2545454502105713, 0.051282044500112534, 0.23529411852359772, 0.0952380895614624, 0.2083333283662796, 0.13636362552642822, 0.20512819290161133, 0.14999999105930328, 0.07547169178724289, 0.0952380895614624, 0.060606054961681366, 0.1249999925494194, 0.1846153736114502, 0.0624999962747097, 0.09302324801683426, 0.0555555522441864, 0, 0.07999999821186066, 0, 0.15789473056793213, 0.20512819290161133, 0.05882352590560913 ]
BJxk51h4FS
true
[ "An empirical study of variational inference based on chi-square divergence minimization, showing that minimizing the CUBO is trickier than maximizing the ELBO" ]
[ "It has been widely recognized that adversarial examples can be easily crafted to fool deep networks, which mainly root from the locally non-linear behavior nearby input examples.", "Applying mixup in training provides an effective mechanism to improve generalization performance and model robustness against adversarial perturbations, which introduces the globally linear behavior in-between training examples.", "However, in previous work, the mixup-trained models only passively defend adversarial attacks in inference by directly classifying the inputs, where the induced global linearity is not well exploited.", "Namely, since the locality of the adversarial perturbations, it would be more efficient to actively break the locality via the globality of the model predictions.", "Inspired by simple geometric intuition, we develop an inference principle, named mixup inference (MI), for mixup-trained models.", "MI mixups the input with other random clean samples, which can shrink and transfer the equivalent perturbation if the input is adversarial.", "Our experiments on CIFAR-10 and CIFAR-100 demonstrate that MI can further improve the adversarial robustness for the models trained by mixup and its variants.", "Deep neural networks (DNNs) have achieved state-of-the-art performance on various tasks (Goodfellow et al., 2016) .", "However, counter-intuitive adversarial examples generally exist in different domains, including computer vision (Szegedy et al., 2014) , natural language processing (Jin et al., 2019) , reinforcement learning (Huang et al., 2017) , speech and graph data (Dai et al., 2018) .", "As DNNs are being widely deployed, it is imperative to improve model robustness and defend adversarial attacks, especially in safety-critical cases.", "Previous work shows that adversarial examples mainly root from the locally unstable behavior of classifiers on the data manifolds (Goodfellow et al., 2015; Fawzi et al., 2016; 2018; Pang et al., 2018b) , where a small adversarial perturbation in the input space can lead to an unreasonable shift in the feature space.", "On the one hand, many previous methods try to solve this problem in the inference phase, by introducing transformations on the input images.", "These attempts include performing local linear transformation like adding Gaussian noise (Tabacof & Valle, 2016) , where the processed inputs are kept nearby the original ones, such that the classifiers can maintain high performance on the clean inputs.", "However, as shown in Fig. 1(a) , the equivalent perturbation, i.e., the crafted adversarial perturbation, is still δ and this strategy is easy to be adaptively evaded since the randomness of x 0 w.r.t x 0 is local (Athalye et al., 2018) .", "Another category of these attempts is to apply various non-linear transformations, e.g., different operations of image processing (Guo et al., 2018; Raff et al., 2019) .", "They are usually off-the-shelf for different classifiers, and generally aim to disturb the adversarial perturbations, as shown in Fig. 
1(b ).", "Yet these methods are not quite reliable since there is no illustration or guarantee on to what extent they can work.", "On the other hand, many efforts have been devoted to improving adversarial robustness in the training phase.", "For examples, the adversarial training (AT) methods (Madry et al., 2018; Shafahi et al., 2019) induce locally stable behavior via data augmentation on adversarial examples.", "However, AT methods are usually computationally expensive, and will often degenerate model performance on the clean inputs or under general-purpose transformations like rotation (Engstrom et al., 2019) .", "In contrast, the mixup training method introduces globally linear behavior in-between the data manifolds, which can also improve adversarial robustness (Zhang", "Generality.", "According to Sec. 3, except for the mixup-trained models, the MI method is generally compatible with any trained model with induced global linearity.", "These models could be trained by other methods, e.g., manifold mixup (Verma et al., 2019a; Inoue, 2018; .", "Besides, to better defend white-box adaptive attacks, the mixup ratio λ in MI could also be sampled from certain distribution to put in additional randomness.", "Empirical gap.", "As demonstrated in Fig. 2 , there is a gap between the empirical results and the theoretical formulas in Table 1 .", "This is because that the mixup mechanism mainly acts as a regularization in training, which means the induced global linearity may not satisfy the expected behaviors.", "To improve the performance of MI, a stronger regularization can be imposed, e.g., training with mixup for more epochs, or applying matched λ both in training and inference." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1463414579629898, 0.19512194395065308, 0.4000000059604645, 0.3529411852359772, 0.19354838132858276, 0.11764705181121826, 0.1621621549129486, 0, 0.08510638028383255, 0.1666666567325592, 0.17543859779834747, 0.2222222238779068, 0.04081632196903229, 0.18867924809455872, 0.10256409645080566, 0.2222222238779068, 0.0555555522441864, 0.25806450843811035, 0.10526315122842789, 0.04651162400841713, 0.11428570747375488, 0.277777761220932, 0.05882352590560913, 0.15789473056793213, 0.11764705181121826, 0.20512820780277252, 0.1818181723356247 ]
ByxtC2VtPB
true
[ "We exploit the global linearity of the mixup-trained models in inference to break the locality of the adversarial perturbations." ]
[ "Fine-tuning language models, such as BERT, on domain specific corpora has proven to be valuable in domains like scientific papers and biomedical text.", "In this paper, we show that fine-tuning BERT on legal documents similarly provides valuable improvements on NLP tasks in the legal domain.", "Demonstrating this outcome is significant for analyzing commercial agreements, because obtaining large legal corpora is challenging due to their confidential nature.", "As such, we show that having access to large legal corpora is a competitive advantage for commercial applications, and academic research on analyzing contracts.", "Businesses rely on contracts to capture critical obligations with other parties, such as: scope of work, amounts owed, and cancellation policies.", "Various efforts have gone into automatically extracting and classifying these terms.", "These efforts have usually been modeled as: classification, entity and relation extraction tasks.", "In this paper we focus on classification, but in our application we have found that our findings apply equally and sometimes, more profoundly, on other tasks.", "Recently, numerous studies have shown the value of fine-tuning language models such as ELMo [3] and BERT [4] to achieve state-of-the-art results [5] on domain specific tasks [6, 7] .", "In this paper we investigate and quantify the impact of utilizing a large domain-specific corpus of legal agreements to improve the accuracy of classification models by fine-tuning BERT.", "Specifically, we assess:", "(i) the performance of a simple model that only uses the pre-trained BERT language model,", "(ii) the impact of further fine tuning BERT, and", "(iii) how this impact changes as we train on larger corpora.", "Ultimately, our investigations show marginal, but valuable, improvements that increase as we grow the size of the legal corpus used to fine-tine BERT -and allow us to confidently claim that not only is this approach valuable for increasing accuracy, but commercial enterprises seeking to create these models will have an edge if they can amass a corpus of legal documents." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2631579041481018, 0.5714285373687744, 0.11428570747375488, 0.1538461446762085, 0.0555555522441864, 0, 0.0714285671710968, 0.21052631735801697, 0.22727271914482117, 0.14999999105930328, 0, 0.13793103396892548, 0.0833333283662796, 0.1538461446762085, 0.20895521342754364 ]
rkeRMT9cLH
true
[ "Fine-tuning BERT on legal corpora provides marginal, but valuable, improvements on NLP tasks in the legal domain." ]
[ "We present a deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques.", "Our probabilistic approach models non-parallel data from two domains as a partially observed parallel corpus.", "By hypothesizing a parallel latent sequence that generates each observed sequence, our model learns to transform sequences from one domain to another in a completely unsupervised fashion.", "In contrast with traditional generative sequence models (e.g. the HMM), our model makes few assumptions about the data it generates: it uses a recurrent language model as a prior and an encoder-decoder as a transduction distribution.", "While computation of marginal data likelihood is intractable in this model class, we show that amortized variational inference admits a practical surrogate.", "Further, by drawing connections between our variational objective and other recent unsupervised style transfer and machine translation techniques, we show how our probabilistic view can unify some known non-generative objectives such as backtranslation and adversarial loss.", "Finally, we demonstrate the effectiveness of our method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related language translation.", "Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes.", "Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art.", "Text sequence transduction systems convert a given text sequence from one domain to another.", "These techniques can be applied to a wide range of natural language processing applications such as machine translation (Bahdanau et al., 2015) , summarization (Rush et al., 2015) , and dialogue response generation (Zhao et al., 2017) .", "In many cases, however, parallel corpora for the task at hand are scarce.", "Therefore, unsupervised sequence transduction methods that require only non-parallel data are appealing and have been receiving growing attention (Bannard & Callison-Burch, 2005; Ravi & Knight, 2011; Mizukami et al., 2015; Shen et al., 2017; Lample et al., 2018; .", "This trend is most pronounced in the space of text style transfer tasks where parallel data is particularly challenging to obtain (Hu et al., 2017; Shen et al., 2017; Yang et al., 2018) .", "Style transfer has historically referred to sequence transduction problems that modify superficial properties of texti.e.", "style rather than content.", "We focus on a standard suite of style transfer tasks, including formality transfer (Rao & Tetreault, 2018) , author imitation (Xu et al., 2012) , word decipherment (Shen et al., 2017) , sentiment transfer (Shen et al., 2017) , and related language translation (Pourdamghani & Knight, 2017) .", "General unsupervised translation has not typically been considered style transfer, but for the purpose of comparison we also conduct evaluation on this task (Lample et al., 2017) .", "Recent work on unsupervised text style transfer mostly employs non-generative or non-probabilistic modeling approaches.", "For example, Shen et al. (2017) and Yang et al. 
(2018) design adversarial discriminators to shape their unsupervised objective -an approach that can be effective, but often introduces training instability.", "Other work focuses on directly designing unsupervised training objectives by incorporating intuitive loss terms (e.g. backtranslation loss), and demonstrates state-ofthe-art performance on unsupervised machine translation (Lample et al., 2018; Artetxe et al., 2019) and style transfer (Lample et al., 2019) .", "However, the space of possible unsupervised objectives is extremely large and the underlying modeling assumptions defined by each objective can only be reasoned about indirectly.", "As a result, the process of designing such systems is often heuristic.", "In contrast, probabilistic models (e.g. the noisy channel model (Shannon, 1948) ) define assumptions about data more explicitly and allow us to reason about these assumptions during system design.", "Further, the corresponding objectives are determined naturally by principles of probabilistic inference, reducing the need for empirical search directly in the space of possible objectives.", "That said, classical probabilistic models for unsupervised sequence transduction (e.g. the HMM or semi-HMM) typically enforce overly strong independence assumptions about data to make exact inference tractable (Knight et al., 2006; Ravi & Knight, 2011; Pourdamghani & Knight, 2017) .", "This has restricted their development and caused their performance to lag behind unsupervised neural objectives on complex tasks.", "Luckily, in recent years, powerful variational approximation techniques have made it more practical to train probabilistic models without strong independence assumptions (Miao & Blunsom, 2016; Yin et al., 2018) .", "Inspired by this, we take a new approach to unsupervised style transfer.", "We directly define a generative probabilistic model that treats a non-parallel corpus in two domains as a partially observed parallel corpus.", "Our model makes few independence assumptions and its true posterior is intractable.", "However, we show that by using amortized variational inference (Kingma & Welling, 2013) , a principled probabilistic technique, a natural unsupervised objective falls out of our modeling approach that has many connections with past work, yet is different from all past work in specific ways.", "In experiments across a suite of unsupervised text style transfer tasks, we find that the natural objective of our model actually outperforms all manually defined unsupervised objectives from past work, supporting the notion that probabilistic principles can be a useful guide even in deep neural systems.", "Further, in the case of unsupervised machine translation, our model matches the current state-of-the-art non-generative approach." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3499999940395355, 0.10526315122842789, 0.25, 0.14814814925193787, 0.17777776718139648, 0.2142857164144516, 0.31372547149658203, 0.13333332538604736, 0.13636362552642822, 0.2222222238779068, 0.14814814925193787, 0, 0.10526315122842789, 0.23529411852359772, 0.20512819290161133, 0.07407407462596893, 0.24561403691768646, 0.15686273574829102, 0.21621620655059814, 0.11764705181121826, 0.1428571343421936, 0.12765957415103912, 0.11428570747375488, 0.15686273574829102, 0.09090908616781235, 0.12903225421905518, 0.19999998807907104, 0.07547169178724289, 0.2857142686843872, 0.19512194395065308, 0.17142856121063232, 0.1538461446762085, 0.3125, 0.15789473056793213 ]
HJlA0C4tPS
true
[ "We formulate a probabilistic latent sequence model to tackle unsupervised text style transfer, and show its effectiveness across a suite of unsupervised text style transfer tasks. " ]
[ "Current practice in machine learning is to employ deep nets in an overparametrized limit, with the nominal number of parameters typically exceeding the number of measurements.", "This resembles the situation in compressed sensing, or in sparse regression with $l_1$ penalty terms, and provides a theoretical avenue for understanding phenomena that arise in the context of deep nets.", "One such phenonemon is the success of deep nets in providing good generalization in an interpolating regime with zero training error.", "Traditional statistical practice calls for regularization or smoothing to prevent \"overfitting\" (poor generalization performance).", "However, recent work shows that there exist data interpolation procedures which are statistically consistent and provide good generalization performance\\cite{belkin2018overfitting} (\"perfect fitting\").", "In this context, it has been suggested that \"classical\" and \"modern\" regimes for machine learning are separated by a peak in the generalization error (\"risk\") curve, a phenomenon dubbed \"double descent\"\\cite{belkin2019reconciling}.", "While such overfitting peaks do exist and arise from ill-conditioned design matrices, here we challenge the interpretation of the overfitting peak as demarcating the regime where good generalization occurs under overparametrization. \n\n", "We propose a model of Misparamatrized Sparse Regression (MiSpaR) and analytically compute the GE curves for $l_2$ and $l_1$ penalties.", "We show that the overfitting peak arising in the interpolation limit is dissociated from the regime of good generalization.", "The analytical expressions are obtained in the so called \"thermodynamic\" limit.", "We find an additional interesting phenomenon: increasing overparametrization in the fitting model increases sparsity, which should intuitively improve performance of $l_1$ penalized regression.", "However, at the same time, the relative number of measurements decrease compared to the number of fitting parameters, and eventually overparametrization does lead to poor generalization.", "Nevertheless, $l_1$ penalized regression can show good generalization performance under conditions of data interpolation even with a large amount of overparametrization.", "These results provide a theoretical avenue into studying inverse problems in the interpolating regime using overparametrized fitting functions such as deep nets.", "Modern machine learning has two salient characteristics: large numbers of measurements m, and non-linear parametric models with very many fitting parameters p, with both m and p in the range of 10 6" ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19230768084526062, 0.27586206793785095, 0.1599999964237213, 0.045454539358615875, 0.039215680211782455, 0.09999999403953552, 0.1355932205915451, 0.20408162474632263, 0.12765957415103912, 0.09756097197532654, 0.18867923319339752, 0.15686273574829102, 0.03999999538064003, 0.19230768084526062, 0.13333332538604736 ]
BklhoQ258B
true
[ "Proposes an analytically tractable model and inference procedure (misparametrized sparse regression, inferred using L_1 penalty and studied in the data-interpolation limit) to study deep-net related phenomena in the context of inverse problems. " ]
[ "Hashing-based collaborative filtering learns binary vector representations (hash codes) of users and items, such that recommendations can be computed very efficiently using the Hamming distance, which is simply the sum of differing bits between two hash codes.", "A problem with hashing-based collaborative filtering using the Hamming distance, is that each bit is equally weighted in the distance computation, but in practice some bits might encode more important properties than other bits, where the importance depends on the user. \n", "To this end, we propose an end-to-end trainable variational hashing-based collaborative filtering approach that uses the novel concept of self-masking: the user hash code acts as a mask on the items (using the Boolean AND operation), such that it learns to encode which bits are important to the user, rather than the user's preference towards the underlying item property that the bits represent.", "This allows a binary user-level importance weighting of each item without the need to store additional weights for each user.", "We experimentally evaluate our approach against state-of-the-art baselines on 4 datasets, and obtain significant gains of up to 12% in NDCG.", "We also make available an efficient implementation of self-masking, which experimentally yields <4% runtime overhead compared to the standard Hamming distance.", "Collaborative filtering (Herlocker et al., 1999) is an integral part of personalized recommender systems and works by modelling user preference on past item interactions to predict new items the user may like (Sarwar et al., 2001 ).", "Early work is based on matrix factorization approaches (Koren et al., 2009 ) that learn a mapping to a shared m-dimensional real-valued space between users and items, such that user-item similarity can be estimated by the inner product.", "The purpose of hashing-based collaborative filtering (Liu et al., 2014) is the same as traditional collaborative filtering, but allows for fast similarity searches to massively increase efficiency (e.g., realtime brute-force search in a billion items (Shan et al., 2018) ).", "This is done by learning semantic hash functions that map users and items into binary vector representations (hash codes) and then using the Hamming distance (the sum of differing bits between two hash codes) to compute user-item similarity.", "This leads to both large storage reduction (floating point versus binary representations) and massively faster computation through the use of the Hamming distance.", "One problem with hashing-based collaborative filtering is that each bit is weighted equally when computing the Hamming distance.", "This is a problem because the importance of each bit in an item hash code might differ between users.", "The only step towards addressing this problem has been to associate a weight with k-bit blocks of each hash code (Liu et al., 2019) .", "However, smaller values of k lead to increased storage cost, but also significantly slower computation due to the need of computing multiple weighted Hamming distances.", "To solve this problem, without using any additional storage and only a marginal increase in computation time, we present Variational Hashing-based collaborative filtering with Self-Masking (VaHSM-CF).", "VaHSM-CF is our novel variational deep learning approach for hashing-based collaborative filtering that learns hash codes optimized for selfmasking.", "Self-masking is a novel technique that we propose in this paper for user-level bit-weighting on all 
items.", "Self-masking modifies item hash codes by applying an AND operation between an item and user hash code, before computing the standard Hamming distance between the user and selfmasked item hash codes.", "Hash codes optimized with self-masking represent which bit-dimensions encode properties that are important for the user (rather than a bitwise -1/1 preference towards each property).", "In practice, when ranking a set of items for a specific user, self-masking ensures that only bit differences on bit-dimensions that are equal to 1 for the user hash code are considered, while ignoring the ones with a -1 value, thus providing a user-level bitwise binary weigthing.", "Since selfmasking is applied while having the user and item hash codes in the lowest levels of memory (i.e., register), it only leads to a very marginal efficiency decrease.", "We contribute", "(i) a new variational hashing-based collaborative filtering approach, which is optimized for", "(ii) a novel self-masking technique, that outperforms state-of-the-art baselines by up to 12% in NDCG across 4 different datasets, while experimentally yielding less than 4% runtime overhead compared to the standard Hamming distance.", "We publicly release the code for our model, as well as an efficient implementation of the Hamming distance with self-masking 1 .", "We proposed an end-to-end trainable variational hashing-based collaborative filtering method, which optimizes hash codes using a novel modification to the Hamming distance, which we call selfmasking.", "The Hamming distance with self-masking first creates a modified item hash code, by applying an AND operation between the user and item hash codes, before computing the Hamming distance.", "Intuitively, this can be seen as ignoring user-specified bits when computing the Hamming distance, corresponding to applying a binary importance weight to each bit, but without using more storage and only a very marginal runtime overhead.", "We verified experimentally that our model outperforms state-of-the-art baselines by up to 12% in NDCG at different cutoffs, across 4 widely used datasets.", "These gains come at a minimal cost in recommendation time (self-masking only increased computation time by less than 4%)." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.22580644488334656, 0.21875, 0.3291139304637909, 0.21739129722118378, 0.375, 0.25, 0.22580644488334656, 0.15625, 0.24242423474788666, 0.16129031777381897, 0.16326530277729034, 0.22727271914482117, 0.1304347813129425, 0.11538460850715637, 0.1599999964237213, 0.11320754140615463, 0.35555556416511536, 0.22727271914482117, 0.1249999925494194, 0.19230768084526062, 0.17910447716712952, 0.14035087823867798, 0.4615384638309479, 0.37288135290145874, 0.21739129722118378, 0.4615384638309479, 0.15686273574829102, 0.16393442451953888, 0.3199999928474426, 0.08888888359069824 ]
rylDzTEKwr
true
[ "We propose a new variational hashing-based collaborative filtering approach optimized for a novel self-mask variant of the Hamming distance, which outperforms state-of-the-art by up to 12% on NDCG." ]
[ "Determining the appropriate batch size for mini-batch gradient descent is always time consuming as it often relies on grid search.", "This paper considers a resizable mini-batch gradient descent (RMGD) algorithm based on a multi-armed bandit that achieves performance equivalent to that of best fixed batch-size.", "At each epoch, the RMGD samples a batch size according to a certain probability distribution proportional to a batch being successful in reducing the loss function.", "Sampling from this probability provides a mechanism for exploring different batch size and exploiting batch sizes with history of success. ", "After obtaining the validation loss at each epoch with the sampled batch size, the probability distribution is updated to incorporate the effectiveness of the sampled batch size.", "Experimental results show that the RMGD achieves performance better than the best performing single batch size.", "It is surprising that the RMGD achieves better performance than grid search.", "Furthermore, it attains this performance in a shorter amount of time than grid search.", "Gradient descent (GD) is a common optimization algorithm for finding the minimum of the expected loss.", "It takes iterative steps proportional to the negative gradient of the loss function at each iteration.", "It is based on the observation that if the multi-variable loss functions f (w) is differentiable at point w, then f (w) decreases fastest in the direction of the negative gradient of f at w, i.e., −∇f (w).", "The model parameters are updated iteratively in GD as follows: DISPLAYFORM0 where w t , g t , and η t are the model parameters, gradients of f with respect to w, and learning rate at time t respectively.", "For small enough η t , f (w t ) ≥ f (w t+1 ) and ultimately the sequence of w t will move down toward a local minimum.", "For a convex loss function, GD is guaranteed to converge to a global minimum with an appropriate learning rate.There are various issues to consider in gradient-based optimization.", "First, GD can be extremely slow and impractical for large dataset: gradients of all the data have to be evaluated for each iteration.", "With larger data size, the convergence rate, the computational cost and memory become critical, and special care is required to minimize these factors.", "Second, for non-convex function which is often encountered in deep learning, GD can get stuck in a local minimum without the hope of escaping.", "Third, stochastic gradient descent (SGD), which is based on the gradient of a single training sample, has large gradient variance, and it requires a large number of iterations.", "This ultimately translates to slow convergence.", "Mini-batch gradient descent (MGD), which is based on the gradient over a small batch of training data, trades off between the robustness of SGD and the stability of GD.", "There are three advantages for using MGD over GD and SGD:", "1) The batching allows both the efficiency of memory usage and implementations;", "2) The model update frequency is higher than GD which allows for a more robust convergence avoiding local minimum;", "3) MGD requires less iteration per epoch and provides a more stable update than SGD.", "For these reasons, MGD has been a popular algorithm for machine learning.", "However, selecting an appropriate batch size is difficult.", "Various studies suggest that there is a close link between performance and batch size used in MGD Breuel (2015) ; Keskar et al. 
(2016) ; Wilson & Martinez (2003) .There", "are various guidelines for selecting a batch size but have not been completely practical BID1 . Grid", "search is a popular method but it comes at the expense of search time. There", "are a small number of adaptive MGD algorithms to replace grid search BID3 ; BID4 Friedlander & Schmidt (2012) . These", "algorithms increase the batch size gradually according to their own criterion. However", ", these algorithms are based on convex loss function and hard to be applied to deep learning. For non-convex", "optimization, it is difficult to determine the optimal batch size for best performance.This paper considers a resizable mini-batch gradient descent (RMGD) algorithm based on a multi-armed bandit for achieving best performance in grid search by selecting an appropriate batch size at each epoch with a probability defined as a function of its previous success/failure. At each epoch,", "RMGD samples a batch size from its probability distribution, then uses the selected batch size for mini-batch gradient descent. After obtaining", "the validation loss at each epoch, the probability distribution is updated to incorporate the effectiveness of the sampled batch size. The benefit of", "RMGD is that it avoids the need for cumbersome grid search to achieve best performance and that it is simple enough to apply to any optimization algorithm using MGD. The detailed algorithm", "of RMGD are described in Section 4, and experimental results are presented in Section 5.", "Selecting batch size affects the model quality and training efficiency, and determining the appropriate batch size is time consuming and requires considerable resources as it often relies on grid search.", "The focus of this paper is to design a simple robust algorithm that is theoretically sound and applicable in many situations.", "This paper considers a resizable mini-batch gradient descent (RMGD) algorithm based on a multiarmed bandit that achieves equivalent performance to that of best fixed batch-size.", "At each epoch, the RMGD samples a batch size according to certain probability distribution of a batch being successful in reducing the loss function.", "Sampling from this probability provides a mechanism for exploring different batch size and exploiting batch sizes with history of success.", "After obtaining the validation loss at each epoch with the sampled batch size, the probability distribution is updated to incorporate the effectiveness of the sampled batch size.The goal of this algorithm is not to achieve state-of-the-art accuracy but rather to select appropriate batch size which leads low misupdating and performs better.", "The RMGD essentially assists the learning process to explore the possible domain of the batch size and exploit successful batch size.", "The benefit of RMGD is that it avoids the need for cumbersome grid search to achieve best performance and that it is simple enough to apply to various field of machine learning including deep learning using MGD.", "Experimental results show that the RMGD achieves the best grid search performance on various dataset, networks, and optimizers.", "Furthermore, it, obviously, attains this performance in a shorter amount of time than the grid search.", "Also, there is no need to worry about which batch size set or cost function to choose when setting RMGD.", "In conclusion, the RMGD is effective and flexible mini-batch gradient descent algorithm." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.14999999105930328, 0.1860465109348297, 0.24390242993831635, 0.25, 0.24390242993831635, 0.17142856121063232, 0.0624999962747097, 0, 0.17142856121063232, 0.05714285373687744, 0.16326530277729034, 0.038461532443761826, 0.045454539358615875, 0.13333332538604736, 0.04878048226237297, 0.04878048226237297, 0.04651162400841713, 0.1860465109348297, 0, 0.22727271914482117, 0.06451612710952759, 0.0624999962747097, 0.051282044500112534, 0.05714285373687744, 0.0624999962747097, 0.1428571343421936, 0.16326530277729034, 0.1666666567325592, 0, 0, 0.1249999925494194, 0.21052631735801697, 0.17391304671764374, 0.1538461446762085, 0.25641024112701416, 0.17391303181648254, 0.0624999962747097, 0.17777776718139648, 0.14999999105930328, 0.1860465109348297, 0.24390242993831635, 0.25641024112701416, 0.2666666507720947, 0.21621620655059814, 0.11999999731779099, 0.21621620655059814, 0, 0.1538461446762085, 0.1249999925494194 ]
H1lGHsA9KX
true
[ "An optimization algorithm that explores various batch sizes based on probability and automatically exploits successful batch size which minimizes validation loss." ]
[ "Knowledge graph has gained increasing attention in recent years for its successful applications of numerous tasks.", "Despite the rapid growth of knowledge construction, knowledge graphs still suffer from severe incompletion and inevitably involve various kinds of errors.", "Several attempts have been made to complete knowledge graph as well as to detect noise.", "However, none of them considers unifying these two tasks even though they are inter-dependent and can mutually boost the performance of each other.", "In this paper, we proposed to jointly combine these two tasks with a unified Generative Adversarial Networks (GAN) framework to learn noise-aware knowledge graph embedding.", "Extensive experiments have demonstrated that our approach is superior to existing state-of-the-art algorithms both in regard to knowledge graph completion and error detection.", "Knowledge graph, as a well-structured effective representation of knowledge, plays a pivotal role in many real-world applications such as web search (Graupmann et al., 2005) , question answering (Hao et al., 2017; Yih et al., 2015) , and personalized recommendation (Zhang et al., 2016) .", "It is constructed by extracting information as the form of triple from unstructured text using information extraction systems.", "Each triple (h, r, t) represents a relation r between a head entity h and a tail entity t.", "Recent years have witnessed extensive construction of knowledge graph, such as Freebase (Bollacker et al., 2008) , DBPedia (Auer et al., 2007) , and YAGO (Suchanek et al., 2007) .", "However, these knowledge graphs suffer from severe sparsity as we can never collect all the information.", "Moreover, due to the huge volumes of web resources, the task to construct knowledge graph usually involves automatic mechanisms to avoid human supervision and thus inevitably introduces many kinds of errors, including ambiguous, conflicting and erroneous and redundant information.", "To address these shortcomings, various methods for knowledge graph refinement have been proposed, whose goals can be arguably classified into two categories: (1) knowledge graph completion, the task to add missing knowledge to the knowledge graph, and (2) error detection, the task to identify incorrect triples in the knowledge graph.", "Knowledge graph embedding (KGE) currently hold the state-of-the-art in knowledge graph completion for their promising results (Bordes et al., 2013; Yang et al., 2014) .", "Nonetheless, they highly rely on high quality training data and thus are lack of robustness to noise (Pujara et al., 2017) .", "Error detection in knowledge graph is a challenging problem due to the difficulty of obtaining noisy data.", "Reasoning based methods are the most widely used methods for this task (Paulheim, 2017) .", "Without the guidance of noisy data, they detect errors by performing reasoning over the knowledge graph to determine the correctness of a triple.", "A rich ontology information is required for such kind of methods and thus impede its application for real-world knowledge graphs.", "Existing works consider knowledge graph embedding and error detection independently whereas these two tasks are inter-dependent and can greatly influence each other.", "On one hand, error detection model is extremely useful to prepare reliable data for knowledge graph embedding.", "On the other hand, high quality embedding learned by KGE model provides a basis for reasoning to identify noisy data.", "Inspired by the recent advances of generative adversarial 
deep models (Goodfellow et al., 2014) , in this paper, we proposed to jointly combine these two tasks with a unified GAN framework, known as NoiGAN, to learn noise-aware knowledge graph embedding.", "In general, NoiGAN consists of two main components, a noise-aware KGE model to learn robuster representation of knowledge and an adversarial learning framework for error detection.", "During the training, noiseaware KGE model takes the confidence score learned by GAN as guidance to eliminate the noisy data from the learning process whereas the GAN requires that KGE model continuously provides high quality embedding as well as credible positive examples to model the discriminator and the generator.", "Cooperation between the two components drives both to improve their capability.", "The main contributions of this paper are summarized as follows:", "• We propose a unified generative adversarial framework NoiGAN, to learn noise-aware knowledge graph embedding.", "Under the framework, the KGE model and error detection model could benefit from each other: the error detection model prepares reliable data for KGE model to improve the quality of embedding it learns, while the KGE model provides a promising reasoning model for the error detection model to better distinguish noisy triples from the correct one.", "• Our proposed framework can be easily generalized to various KGE models to enhance their ability in dealing with noisy knowledge graph.", "• We experimentally demonstrate that our new algorithm is superior to existing state-of-the-art algorithms.", "The KGE model and GAN can alternately and iteratively boost performance in terms of both knowledge graph completion and noise detection.", "In this paper, we propose a novel framework NoiGAN, to jointly combine the tasks of knowledge graph completion and error detection for noise aware knowledge graph embedding learning.", "It consists of two main components, a noise-aware KGE model for knowledge graph completion and an adversarial learning framework for error detection.", "Under the framework, the noise-aware KGE model and the adversarial learning framework can alternately and iteratively boost performance of each other.", "Extensive experiments show the superiority of our proposed NoiGAN both in regard to knowledge graph completion and error detection." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.06451612710952759, 0.05882352590560913, 0.2142857164144516, 0, 0.7179487347602844, 0.1621621549129486, 0.03999999538064003, 0, 0.06451612710952759, 0.052631575614213943, 0.06451612710952759, 0.1249999925494194, 0.11320754140615463, 0.1621621549129486, 0.05405404791235924, 0.25, 0, 0.22857142984867096, 0.05882352590560913, 0.1666666567325592, 0.25, 0.17142856121063232, 0.3333333432674408, 0.29999998211860657, 0.07843136787414551, 0.07692307233810425, 0, 0.6666666865348816, 0.11999999731779099, 0.277777761220932, 0.13793103396892548, 0.11764705181121826, 0.2926829159259796, 0.277777761220932, 0.12121211737394333, 0.23529411852359772 ]
rkgTdkrtPH
true
[ "We proposed a unified Generative Adversarial Networks (GAN) framework to learn noise-aware knowledge graph embedding." ]
[ "Energy-based models (EBMs), a.k.a.", "un-normalized models, have had recent successes in continuous spaces.", "However, they have not been successfully applied to model text sequences. ", "While decreasing the energy at training samples is straightforward, mining (negative) samples where the energy should be increased is difficult. ", "In part, this is because standard gradient-based methods are not readily applicable when the input is high-dimensional and discrete. ", "Here, we side-step this issue by generating negatives using pre-trained auto-regressive language models. ", "The EBM then works\nin the {\\em residual} of the language model; and is trained to discriminate real text from text generated by the auto-regressive models.\n", "We investigate the generalization ability of residual EBMs, a pre-requisite for using them in other applications. ", "We extensively analyze generalization for the task of classifying whether an input is machine or human generated, a natural task given the training loss and how we mine negatives.", "Overall, we observe that EBMs can generalize remarkably well to changes in the architecture of the generators producing negatives.", "However, EBMs exhibit more sensitivity to the training set used by such generators.", "Energy-based models (EBMs) have a long history in machine learning (Hopfield, 1982; Hinton, 2002; LeCun et al., 2006) .", "Their appeal stems from the minimal assumptions they make about the generative process of the data.", "Unlike directed or auto-regressive models which are defined in terms of a sequence of conditional distributions, EBMs are defined in terms of a single scalar energy function, representing the joint compatibility between all input variables.", "EBMs are a strict generalization of probability models, as the energy function need not be normalized or even have convergent integral.", "Training an EBM consists of decreasing the energy function at the observed training data points (a.k.a. positives), while increasing it at other data points (a.k.a. 
negatives) (LeCun et al., 2006) .", "Different learning strategies mainly differ in how negatives are mined (Ranzato et al., 2007) .", "Some find negatives by gradient descent, or using Monte Carlo methods like Gibbs sampling (Welling et al., 2005) and hybrid Monte Carlo (Teh et al., 2003) , which enable the loss to approximate maximum likelihood training (Hinton, 2002) .", "Other approaches instead use implicit negatives, by enforcing global constraints on the energy function, like sparsity of the internal representation (Ranzato et al., 2007) , for instance.", "GANs (Goodfellow et al., 2014) can be interpreted as a particular form of EBM where the negatives are generated by a learned model.", "While there are works exploring the use of EBMs for modeling images (Teh et al., 2003; Ranzato et al., 2013; Du & Mordatch, 2019) , they have not been successfully applied to text.", "One reason is that text consists of sequences of discrete variables, which makes the energy function not differentiable with respect to its inputs.", "Therefore, it is not possible to mine negatives using gradient-based methods.", "Other approaches to mine negatives are also not immediately applicable or may be too inefficient to work at scale.", "In this work, we start from the observation that current large auto-regressive locally-normalized language models are already strong , and therefore, it may be beneficial to use them to constrain the search space of negatives.", "We propose to learn in the residual space of a pre-trained language model (LM), which we accomplish by using such LM to generate negatives for the EBM.", "Given a dataset of positives and pre-generated negatives, the EBM can be trained using either a binary cross-entropy loss or a ranking loss, to teach the model to assign a lower energy to true human generated text than to the text generated by the pre-trained LM.", "The question we ask in this work is whether such an EBM can generalize well.", "Understanding this is important for two reason.", "First, this generalization is a prerequisite for using residual EBMs for modeling text.", "Second, in our setting, this generalization question is equivalent to the question of whether it is possible for a learned model (the energy function) to discriminate real text from text generated by an auto-regressive model.", "Discriminating real vs. machine-generated text is an important task on its own that has recently gained a lot of attention (Gehrmann et al., 2019; Zellers et al., 2019) .", "Our contribution is an extensive study of the generalization ability of such residual EBMs, or in other words, the generalization ability of models trained to detect real text from machine generated text.", "In particular, we assess how well the energy function is robust to changes in the architecture of the generator and to changes in the data used to train the generator.", "The overall finding is that the energy function is remarkably robust, and the bigger the model and the longer the generation the better its performance.", "Moreover, the energy function is robust to changes in the architecture of the LM producing negatives at test time.", "However, it is sensitive to the training dataset of the test generator." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0, 0, 0.12121211737394333, 0.052631575614213943, 0.09999999403953552, 0, 0.2666666507720947, 0.21052631735801697, 0.2916666567325592, 0.051282044500112534, 0.05882352590560913, 0.04999999329447746, 0, 0.04081632196903229, 0.0476190410554409, 0.04081632196903229, 0, 0.072727270424366, 0.04255318641662598, 0.09090908616781235, 0.11538460850715637, 0.1860465109348297, 0.1249999925494194, 0.051282044500112534, 0.07407406717538834, 0.21739129722118378, 0.2142857164144516, 0.1111111044883728, 0.1428571343421936, 0.3030303120613098, 0.2745097875595093, 0.1249999925494194, 0.3404255211353302, 0.1428571343421936, 0.1538461446762085, 0.10526315122842789, 0.1249999925494194 ]
SkgpGgrYPH
true
[ "A residual EBM for text whose formulation is equivalent to discriminating between human and machine generated text. We study its generalization behavior." ]
[ "Semi-supervised learning, i.e. jointly learning from labeled an unlabeled samples, is an active research topic due to its key role on relaxing human annotation constraints.", "In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples.", "We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions.", "We show that a naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias and demonstrate that mixup augmentation and setting a minimum number of labeled samples per mini-batch are effective regularization techniques for reducing it.", "The proposed approach achieves state-of-the-art results in CIFAR-10/100 and Mini-ImageNet despite being much simpler than other state-of-the-art.", "These results demonstrate that pseudo-labeling can outperform consistency regularization methods, while the opposite was supposed in previous work.", "Code will be made available.", "Convolutional neural networks (CNNs) have become the dominant approach in computer vision (Lin et al., 2017; Liu et al., 2018; Kim et al., 2018; Xie et al., 2018) .", "To best exploit them, vast amounts of labeled data are required.", "Obtaining such labels, however, is not trivial, and the research community is exploring alternatives to alleviate this (Li et al., 2017; Oliver et al., 2018; Liu et al., 2019) .", "Knowledge transfer via deep domain adaptation (Wang & Deng, 2018 ) is a popular alternative that seeks to learn transferable representations from source to target domains by embedding domain adaptation in the learning pipeline.", "Other approaches focus exclusively on learning useful representations from scratch in a target domain when annotation constraints are relaxed (Oliver et al., 2018; Arazo et al., 2019; Gidaris et al., 2018) .", "Semi-supervised learning (SSL) (Oliver et al., 2018) focuses on scenarios with sparsely labeled data and extensive amounts of unlabeled data; learning with label noise (Arazo et al., 2019) seeks robust learning when labels are obtained automatically and may not represent the image content; and self-supervised learning (Gidaris et al., 2018) uses data supervision to learn from unlabeled data in a supervised manner.", "This paper focuses on SSL for image classification, a recently very active research area (Li et al., 2019) .", "SSL is a transversal task for different domains including images (Oliver et al., 2018) , audio (Zhang et al., 2016) , time series (González et al., 2018) , and text (Miyato et al., 2016) .", "Recent approaches in image classification primarily focus on exploiting the consistency in the predictions for the same sample under different perturbations (consistency regularization) (Sajjadi et al., 2016; Li et al., 2019) , while other approaches directly generate labels for the unlabeled data to guide the learning process (pseudo-labeling) (Lee, 2013; Iscen et al., 2019) .", "These two alternatives differ importantly in the mechanism they use to exploit unlabeled samples.", "Consistency regularization and pseudo-labeling approaches apply different strategies such as a warm-up phase using labeled data (Tarvainen & Valpola, 2017; Iscen et al., 2019) , uncertainty weighting (Shi et al., 2018; Li et al., 2019) , adversarial attacks (Miyato et al., 2018; Qiao et al., 2018) , or graph-consistency (Luo et al., 2018; 
Iscen et al., 2019) .", "These strategies deal with confirmation bias (Tarvainen & Valpola, 2017; Li et al., 2019) , also known as noise accumulation (Zhang et al., 2016) .", "This bias stems from using incorrect predictions on unlabeled data for training in subsequent epochs and, thereby increasing confidence in incorrect predictions and producing a model that will tend to resist new changes.", "This paper explores pseudo-labeling for semi-supervised deep learning from the network predictions and shows that, contrary to previous attempts on pseudo-labeling (Iscen et al., 2019; Oliver et al., 2018; Shi et al., 2018) , simple modifications to prevent confirmation bias lead to state-of-the-art performance without adding consistency regularization strategies.", "We adapt the approach proposed by Tanaka et al. (2018) in the context of label noise and apply it exclusively on unlabeled samples.", "Experiments show that this naive pseudo-labeling is limited by confirmation bias as prediction errors are fit by the network.", "To deal with this issue, we propose to use mixup augmentation as an effective regularization that helps calibrate deep neural networks (Thulasidasan et al., 2019) and, therefore, alleviates confirmation bias.", "We find that mixup alone does not guarantee robustness against confirmation bias when reducing the amount of labeled samples or using certain network architectures (see Subsection 4.3), and show that, when properly introduced, dropout regularization (Srivastava et al., 2014) and data augmentation mitigates this issue.", "Our purely pseudo-labeling approach achieves state-of-the-art results (see Subsection 4.4) without requiring multiple networks (Tarvainen & Valpola, 2017; Qiao et al., 2018; Li et al., 2019; Verma et al., 2019) , nor does it require over a thousand epochs of training to achieve peak performance in every dataset (Athiwaratkun et al., 2019; Berthelot et al., 2019) , nor needs many (ten) forward passes for each sample (Li et al., 2019) .", "Compared to other pseudo-labeling approaches, the proposed approach is simpler in that it does not require graph construction and diffusion (Iscen et al., 2019) or combination with consistency regularization methods (Shi et al., 2018) , but still achieves state-of-the-art results.", "This paper presented a semi-supervised learning approach for image classification based on pseudolabeling.", "We proposed to directly use the network predictions as soft pseudo-labels for unlabeled data together with mixup augmentation, a minimum number of labeled samples per mini-batch, dropout and data augmentation to alleviate confirmation bias.", "This conceptually simple approach outperforms related work in four datasets, demonstrating that pseudo-labeling is a suitable alternative to the dominant approach in recent literature: consistency-regularization.", "The proposed approach is, to the best of our knowledge, both simpler and more accurate than most recent approaches.", "Future work should explore SSL in class-unbalanced and large-scale datasets, synergies of pseudo-labelling and consistency regularization, and careful hyperparameter tuning.", "Figure 3 presents the cross-entropy loss for labeled samples when training with 13-CNN, WR-28 and PR-18 and using 500 and 250 labels in CIFAR-10.", "This loss is a good indicator of a robust convergence to reasonable performance as the interquartile range for cases failing (250 labels for WR-28 and PR-18) is much higher." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.07999999821186066, 0.11538460850715637, 0.1463414579629898, 0.2711864411830902, 0.04878048226237297, 0.1395348757505417, 0.06666666269302368, 0.04255318641662598, 0, 0.03999999538064003, 0.178571417927742, 0.07547169178724289, 0.10810810327529907, 0.09090908616781235, 0.08163265138864517, 0.08955223113298416, 0.05128204822540283, 0.0634920597076416, 0.12765957415103912, 0.1818181723356247, 0.20895521342754364, 0.04255318641662598, 0.1860465109348297, 0.178571417927742, 0.08571428060531616, 0.12345678359270096, 0.158730149269104, 0.2631579041481018, 0.21052631735801697, 0.2916666567325592, 0.09090908616781235, 0, 0.08510638028383255, 0.11764705181121826 ]
rJel41BtDH
true
[ "Pseudo-labeling has shown to be a weak alternative for semi-supervised learning. We, conversely, demonstrate that dealing with confirmation bias with several regularizations makes pseudo-labeling a suitable approach." ]
[ "Model-free reinforcement learning (RL) has been proven to be a powerful, general tool for learning complex behaviors.", "However, its sample efficiency is often impractically large for solving challenging real-world problems, even for off-policy algorithms such as Q-learning.", "A limiting factor in classic model-free RL is that the learning signal consists only of scalar rewards, ignoring much of the rich information contained in state transition tuples.", "Model-based RL uses this information, by training a predictive model, but often does not achieve the same asymptotic performance as model-free RL due to model bias.", "We introduce temporal difference models (TDMs), a family of goal-conditioned value functions that can be trained with model-free learning and used for model-based control.", "TDMs combine the benefits of model-free and model-based RL: they leverage the rich information in state transitions to learn very efficiently, while still attaining asymptotic performance that exceeds that of direct model-based RL methods.", "Our experimental results show that, on a range of continuous control tasks, TDMs provide a substantial improvement in efficiency compared to state-of-the-art model-based and model-free methods.", "Reinforcement learning (RL) algorithms provide a formalism for autonomous learning of complex behaviors.", "When combined with rich function approximators such as deep neural networks, RL can provide impressive results on tasks ranging from playing games BID23 BID29 , to flying and driving BID36 , to controlling robotic arms BID14 .", "However, these deep RL algorithms often require a large amount of experience to arrive at an effective solution, which can severely limit their application to real-world problems where this experience might need to be gathered directly on a real physical system.", "Part of the reason for this is that direct, model-free RL learns only from the reward: experience that receives no reward provides minimal supervision to the learner.In contrast, model-based RL algorithms obtain a large amount of supervision from every sample, since they can use each sample to better learn how to predict the system dynamics -that is, to learn the \"physics\" of the problem.", "Once the dynamics are learned, near-optimal behavior can in principle be obtained by planning through these dynamics.", "Model-based algorithms tend to be substantially more efficient BID9 BID24 , but often at the cost of larger asymptotic bias: when the dynamics cannot be learned perfectly, as is the case for most complex problems, the final policy can be highly suboptimal.", "Therefore, conventional wisdom holds that model-free methods are less efficient but achieve the best asymptotic performance, while model-based methods are more efficient but do not produce policies that are as optimal.Can we devise methods that retain the efficiency of model-based learning while still achieving the asymptotic performance of model-free learning?", "This is the question that we study in this paper.", "The search for methods that combine the best of model-based and model-free learning has been ongoing for decades, with techniques such as synthetic experience generation BID31 , partial modelbased backpropagation BID25 , and layering model-free learning on the residuals of model-based estimation BID6 ) being a few examples.", "However, a direct connection between model-free and model-based RL has remained elusive.", "By effectively bridging the gap between model-free and model-based RL, we should 
be able to smoothly transition from learning models to learning policies, obtaining rich supervision from every sample to quickly gain a moderate level of proficiency, while still converging to an unbiased solution. To arrive at a method that combines the strengths of model-free and model-based RL, we study a variant of goal-conditioned value functions BID32 BID28 BID0 .", "Goal-conditioned value functions learn to predict the value function for every possible goal state.", "That is, they answer the following question: what is the expected reward for reaching a particular state, given that the agent is attempting (as optimally as possible) to reach it?", "The particular choice of reward function determines what such a method actually does, but rewards based on distances to a goal hint at a connection to model-based learning: if we can predict how easy it is to reach any state from any current state, we must have some kind of understanding of the underlying \"physics.\"", "In this work, we show that we can develop a method for learning variable-horizon goal-conditioned value functions where, for a specific choice of reward and horizon, the value function corresponds directly to a model, while for larger horizons, it more closely resembles model-free approaches.", "Extension toward more model-free learning is thus achieved by acquiring \"multi-step models\" that can be used to plan over progressively coarser temporal resolutions, eventually arriving at a fully model-free formulation. The principal contribution of our work is a new RL algorithm that makes use of this connection between model-based and model-free learning to learn a specific type of goal-conditioned value function, which we call a temporal difference model (TDM).", "This value function can be learned very efficiently, with sample complexities that are competitive with model-based RL, and can then be used with an MPC-like method to accomplish desired tasks.", "Our empirical experiments demonstrate that this method achieves substantially better sample complexity than fully model-free learning on a range of challenging continuous control tasks, while outperforming purely model-based methods in terms of final performance.", "Furthermore, the connection that our method elucidates between model-based and model-free learning may lead to a range of interesting future methods.", "In this paper, we derive a connection between model-based and model-free reinforcement learning, and present a novel RL algorithm that exploits this connection to greatly improve on the sample efficiency of state-of-the-art model-free deep RL algorithms.", "Our temporal difference models can be viewed both as goal-conditioned value functions and implicit dynamics models, which enables them to be trained efficiently on off-policy data while still minimizing the effects of model bias.", "As a result, they achieve asymptotic performance that compares favorably with model-free algorithms, but with a sample complexity that is comparable to purely model-based methods. While the experiments focus primarily on the new RL algorithm, the relationship between model-based and model-free RL explored in this paper provides a number of avenues for future work.", "We demonstrated the use of TDMs with a very basic planning approach, but further exploring how TDMs can be incorporated into powerful constrained optimization methods for model-predictive control or trajectory optimization is an exciting avenue for future work.", "Another direction for future work is to further explore how 
TDMs can be applied to complex state representations, such as images, where simple distance metrics may no longer be effective.", "Although direct application of TDMs to these domains is not straightforward, a number of works have studied how to construct metric embeddings of images that could in principle provide viable distance functions.", "We also note that while the presentation of TDMs has been in the context of deterministic environments, the extension to stochastic environments is straightforward: TDMs would learn to predict the expected distance between the future state and a goal state.", "Finally, the promise of sample-efficient learning with the performance of model-free RL and the efficiency of model-based RL is to enable widespread RL application on real-world systems.", "Many applications in robotics, autonomous driving and flight, and other control domains could be explored in future work.", "The maximum distance was set to 5 rather than 6 for this experiment, so the numbers should be lower than the ones reported in the paper." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09302324801683426, 0.08695651590824127, 0.07692307233810425, 0.11538460850715637, 0.4313725531101227, 0.21052631735801697, 0.26923075318336487, 0.05128204822540283, 0.13114753365516663, 0.0937499925494194, 0.15584415197372437, 0.1395348757505417, 0.0937499925494194, 0.158730149269104, 0.10810810327529907, 0.17910447716712952, 0.1538461446762085, 0.17283950746059418, 0.09999999403953552, 0.07407406717538834, 0.10666666179895401, 0.2153846174478531, 0.2142857164144516, 0.37735849618911743, 0.29999998211860657, 0.2083333283662796, 0.21052631735801697, 0.19999998807907104, 0.24657534062862396, 0.19354838132858276, 0.07407406717538834, 0.1071428507566452, 0.16949151456356049, 0.20408162474632263, 0.1395348757505417, 0.07999999821186066 ]
Skw0n-W0Z
true
[ "We show that a special goal-condition value function trained with model free methods can be used within model-based control, resulting in substantially better sample efficiency and performance." ]
[ "We introduce a neural architecture to perform amortized approximate Bayesian inference over latent random permutations of two sets of objects.", "The method involves approximating permanents of matrices of pairwise probabilities using recent ideas on functions defined over sets.", "Each sampled permutation comes with a probability estimate, a quantity unavailable in MCMC approaches.", "We illustrate the method in sets of 2D points and MNIST images.\n", "Posterior inference in generative models with discrete latent variables presents well-known challenges when the variables live in combinatorially large spaces.", "In this work we focus on the popular and non-trivial case where the latent variables represent random permutations.", "While inference in these models has been studied in the past using MCMC techniques (Diaconis, 2009 ) and variational methods , here we propose an amortized approach, whereby we invest computational resources to train a model, which later is used for very fast posterior inference (Gershman and Goodman, 2014) .", "Unlike the variational autoencoder approach (Kingma and Welling, 2013) , in our case we do not learn a generative model.", "Instead, the latter is postulated (through its samples) and posterior inference is the main focus of the learning phase.", "This approach has been recently explored in sundry contexts, such as Bayesian networks (Stuhlmüller et al., 2013) , sequential Monte Carlo (Paige and Wood, 2016) , probabilistic programming (Ritchie et al., 2016; Le et al., 2016) , neural decoding (Parthasarathy et al., 2017) and particle tracking (Sun and Paninski, 2018) .", "Our method is inspired by the technique introduced in (Pakman and Paninski, 2018 ) to perform amortized inference over discrete labels in mixture models.", "The basic idea is to use neural networks to express posteriors in the form of multinomial distributions (with varying support) in terms of fixed-dimensional, distributed representations that respect the permutation symmetries imposed by the discrete variables.", "After training the neural architecture using labeled samples from a particular generative model, we can obtain independent approximate posterior samples of the permutation posterior for any new set of observations of arbitrary size.", "These samples can be used to compute approximate expectations, as high quality importance samples, or as independent Metropolis-Hastings proposals.", "Our results on simple datasets validate this approach to posterior inference over latent permutations.", "More complex generative models with latent permutations can be approached using similar tools, a research direction we are presently exploring.", "The curves show mean training negative log-likelihood/iteration in the MNIST example.", "f = 0 is a baseline model, were we ignore the unassigned points in (9).", "The other two curves correspond to encoding the symmetry p(y n , x cn ) = p(x cn , y n ) as f (g(H x,cn ) + g(H y )) or as f (H x,cn H y )." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.46666666865348816, 0.0714285671710968, 0, 0, 0.13793103396892548, 0.1428571343421936, 0.1071428507566452, 0, 0.07407406717538834, 0.03999999538064003, 0.1764705777168274, 0.0476190447807312, 0.1538461446762085, 0, 0.3199999928474426, 0.12903225421905518, 0, 0, 0 ]
rJgFtkhEtr
true
[ "A novel neural architecture for efficient amortized inference over latent permutations " ]
[ "Machine learned large-scale retrieval systems require a large amount of training data representing query-item relevance.", "However, collecting users' explicit feedback is costly.", "In this paper, we propose to leverage user logs and implicit feedback as auxiliary objectives to improve relevance modeling in retrieval systems.", "Specifically, we adopt a two-tower neural net architecture to model query-item relevance given both collaborative and content information.", "By introducing auxiliary tasks trained with much richer implicit user feedback data, we improve the quality and resolution for the learned representations of queries and items.", "Applying these learned representations to an industrial retrieval system has delivered significant improvements.", "In this paper, we propose a novel transfer learning model architecture for large-scale retrieval systems.", "The retrieval problem is defined as follows: given a query and a large set of candidate items, retrieve the top-k most relevant candidates.", "Retrieval systems are useful in many real-world applications such as search BID28 and recommendation BID6 BID31 BID10 .", "The recent efforts on building large-scale retrieval systems mostly focus on the following two aspects:• Better representation learning.", "Many machine learning models have been developed to learn the mapping of queries and candidate items to an embedding space BID14 BID15 .", "These models leverage various features such as collaborative and content information BID29 the top-k relevant items given the similarity (distance) metric associated with the embedding space BID3 BID8 .However", ", it is challenging to design and develop real-world large-scale retrieval systems for many reasons:• Sparse relevance data. It is", "costly to collect users' true opinions regarding item relevance. Often", ", researchers and engineers design human-eval templates with Likert scale questions for relevance BID5 , and solicit feedback via crowd-sourcing platforms (e.g., Amazon Mechnical Turk).• Noisy", "feedback. In addition", ", user feedback is often highly subjective and biased, due to human bias in designing the human-eval templates, as well as the subjectivity in providing feedback.• Multi-modality", "feature space. We need to learn", "relevance in a feature space generated from multiple modalities, e.g., query content features, candidate content features, context features, and graph features from connections between query and candidate BID29 BID21 BID7 .In this paper, we", "propose to learn relevance by leveraging both users' explicit answers on relevance and users' implicit feedback such as clicks and other types of user engagement. Specifically, we", "develop a transfer-learning framework which first learns the effective query and candidate item representations using a large quantity of users' implicit feedback, and then refines these representations using users' explicit feedback collected from survey responses. The proposed model", "architecture is depicted in FIG1 .Our proposed model", "is based on a two-tower deep neural network (DNN) commonly deployed in large-scale retrieval systems BID15 . This model architecture", ", as depicted in FIG0 , is capable of learning effective representations from multiple modalities of features. 
These representations", "can be subsequently served using highly efficient nearest neighbor search systems BID8 .To transfer the knowledge", "learned from implicit feedback to explicit feedback, we extend the two-tower model by adopting a shared-bottom architecture which has been widely used in the context of multi-task learning BID4 . Specifically, the final loss", "includes training objectives for both the implicit and explicit feedback tasks. These two tasks share some hidden", "layers, and each task has its own independent sub-tower. At serving time, only the representations", "learned for explicit feedback are used and evaluated.Our experiments on an industrial large-scale retrieval system have shown that by transferring knowledge from rich implicit feedback, we can significantly improve the prediction accuracy of sparse relevance feedback.In summary, our contributions are as follows:• We propose a transfer learning framework which leverages rich implicit feedback in order to learn better representations for sparse explicit feedback.• We design a novel model architecture which", "optimizes two training objectives sequentially.• We evaluate our model on a real-world large-scale", "retrieval system and demonstrate significant improvements.The rest of this paper is organized as follows: Section 2 discusses related work in building large-scale retrieval systems. Section 3 introduces our problem and training objectives", ". Section 4 describes our proposed approach. Section 5 reports", "the experimental results on a large-scale", "retrieval system. Finally, in Section 6, we conclude with our findings.", "The success of transfer learning hinges on a proper parameterization of both the auxiliary and main tasks.", "On one hand, we need sufficient capacity to learn a high-quality representation from a large amount of auxiliary data.", "On the other hand, we want to limit the capacity for the main task to avoid over-fitting to its sparse labels.", "As a result, our proposed model architecture is slightly different from the traditional pre-trained and fine-tuning model BID12 .", "Besides shared layers, each task has its own hidden layers with different capacities.", "In addition, we apply a two-stage training with stop gradients to avoid potential issues caused by the extreme data skew between the main task and auxiliary task.Our experiences have motivated us to continue our work in the following directions:• We will consider multiple types of user implicit feedback using different multi-task learning frameworks, such as Multi-gate Mixture-of-Expert BID17 and Sub-Network Routing BID18 .", "We will continue to explore new model architectures to combine transfer learning with multi-task learning.•", "The auxiliary task requires hyper-parameter tuning to learn the optimal representation for the main task. We", "will explore AutoML BID26 techniques to automate the learning of proper parameterizations across tasks for both the query and the candidate towers.", "In this paper, we propose a novel model architecture to learn better query and candidate representations via transfer learning.", "We extend the two-tower neural network approach to enhance sparse task learning by leveraging auxiliary tasks with rich implicit feedback.", "By introducing auxiliary objectives and jointly learning this model using implicit feedback, we observe a significant improvement for relevance prediction on one of Google's large-scale retrieval systems." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.277777761220932, 0, 0.2857142686843872, 0.307692289352417, 0.08888888359069824, 0.11764705181121826, 0.5, 0.09302324801683426, 0.052631575614213943, 0.15789473056793213, 0.0476190410554409, 0, 0.29999998211860657, 0.12903225421905518, 0.08510638028383255, 0, 0.04444443807005882, 0.14814814925193787, 0.11999999731779099, 0.17777776718139648, 0.14814814925193787, 0.13793103396892548, 0.3499999940395355, 0.05405404791235924, 0.10810810327529907, 0.307692289352417, 0.10810810327529907, 0, 0.39024388790130615, 0.22857142984867096, 0.11764705181121826, 0, 0.14814814925193787, 0.0624999962747097, 0.05405404791235924, 0.1538461446762085, 0.10526315122842789, 0.21052631735801697, 0, 0.1012658178806305, 0.17142856121063232, 0.17142856121063232, 0.09756097197532654, 0.29999998211860657, 0.24390242993831635, 0.3333333432674408 ]
SJxPVcSonN
true
[ "We propose a novel two-tower shared-bottom model architecture for transferring knowledge from rich implicit feedbacks to predict relevance for large-scale retrieval systems." ]
[ "The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent, from household robotic vacuums to autonomous vehicles.", "Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry, but fail to model dynamic objects (such as other agents) or semantic constraints (such as wet floors or doorways).", "Learning-based RL agents are an attractive alternative because they can incorporate both semantic and geometric information, but are notoriously sample inefficient, difficult to generalize to novel settings, and are difficult to interpret.", "In this paper, we combine the best of both worlds with a modular approach that {\\em learns} a spatial representation of a scene that is trained to be effective when coupled with traditional geometric planners.", "Specifically, we design an agent that learns to predict a spatial affordance map that elucidates what parts of a scene are navigable through active self-supervised experience gathering.", "In contrast to most simulation environments that assume a static world, we evaluate our approach in the VizDoom simulator, using large-scale randomly-generated maps containing a variety of dynamic actors and hazards.", "We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.", "The ability to explore and navigate within a physical space is a fundamental requirement for virtually any mobile autonomous agent, from household robotic vacuums to autonomous vehicles.", "Traditional approaches for navigation and exploration rely on simultaneous localization and mapping (SLAM) methods to recover scene geometry, producing an explicit geometric map as output.", "Such maps can be used in conjunction with classic geometric motion planners for exploration and navigation (such as those based on graph search).", "However, geometric maps fail to capture dynamic objects within an environment, such as humans, vehicles, or even other autonomous agents.", "In fact, such dynamic obstacles are intentionally treated as outliers to be ignored when learning a geometric map.", "However, autonomous agents must follow a navigation policy that avoids collisions with dynamic obstacles to ensure safe operation.", "Moreover, real-world environments also offer a unique set of affordances and semantic constraints to each agent: a human-sized agent might fit through a particular door, but a car-sized agent may not; similarly, a bicycle lane may be geometrically free of obstacles, but access is restricted to most agents.", "Such semantic and behavioral constraints are challenging to encode with classic SLAM.", "One promising alternative is end-to-end reinforcement learning (RL) of a policy for exploration and navigation.", "Such approaches have the potential to jointly learn an exploration/navigation planner together with an internal representation that captures both geometric, semantic, and dynamic constraints.", "However, such techniques suffer from well-known challenges common to RL such as high sample complexity (because reward signals tend to be sparse), difficulty in generalization to novel environments (due to overfitting), and lack of interpretability.", "We advocate a hybrid approach that combines the best of both worlds.", "Rather than end-to-end learning of both a spatial representation and exploration policy, we apply learning only \"as needed\".", 
"Specifically, we employ off-the-shelf planners, but augment the classic geometric map with a spatial affordance map that encodes where the agent can safely move.", "Crucially, the affordance map is learned through self-supervised interaction with the environment.", "For example, our agent can discover that spatial regions with wet-looking floors are non-navigable and that spatial regions that recently contained human-like visual signatures should be avoided with a large margin of safety.", "Evaluating on an exploration-based task, we demonstrate that affordance map-based approaches are far more sample-efficient, generalizable, and interpretable than current RL-based methods.", "Even though we believe our problem formulation to be rather practical and common, evaluation is challenging in both the physical world and virtual simulators.", "It it notoriously difficult to evaluate real-world autonomous agents over a large and diverse set of environments.", "Moreover, many realistic simulators for navigation and exploration assume a static environment (Wu et al., 2018; Savva et al., 2017; Xia et al., 2018) .", "We opt for first-person game-based simulators that populate virtual worlds with dynamic actors.", "Specifically, we evaluate exploration and navigation policies in VizDoom (Wydmuch et al., 2018) , a popular platform for RL research.", "We demonstrate that affordance maps, when combined with classic planners, dramatically outperform traditional geometric methods by 60% and state-of-the-art RL approaches by 70% in the exploration task.", "Additionally, we demonstrate that by combining active learning and affordance maps with geometry, navigation performance improves by up to 55% in the presence of hazards.", "However, a significant gap still remains between human and autonomous performance, indicating the difficulty of these tasks even in the relatively simple setting of a simulated world.", "We have described a learnable approach for exploration and navigation in novel environments.", "Like RL-based policies, our approach learns to exploit semantic, dynamic, and even behavioural properties of the novel environment when navigating (which are difficult to capture using geometry alone).", "But unlike traditional RL, our approach is made sample-efficient and interpretable by way of a spatial affordance map, a novel representation that is interactively-trained so as to be useful for navigation with off-the-shelf planners.", "Though conceptually simple, we believe affordance maps open up further avenues for research and could help close the gap between human and autonomous exploration performance.", "For example, the dynamics of moving obstacles are currently captured only in an implicit fashion.", "A natural extension is making this explicit, either in the form of a dynamic map or navigability module that makes use of spatio-temporal cues for better affordance prediction." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10169491171836853, 0.0937499925494194, 0.16393442451953888, 0.2153846174478531, 0.19999998807907104, 0.2461538463830948, 0.3448275923728943, 0.10169491171836853, 0.1355932205915451, 0.3103448152542114, 0.1090909019112587, 0.11320754140615463, 0.15094339847564697, 0.10810810327529907, 0.08510638028383255, 0.20000000298023224, 0.10344827175140381, 0.1818181723356247, 0.21276594698429108, 0.23076923191547394, 0.28070175647735596, 0.17391304671764374, 0.2222222238779068, 0.17543859779834747, 0.13793103396892548, 0.1538461446762085, 0.1428571343421936, 0.0833333283662796, 0.2181818187236786, 0.39344263076782227, 0.2711864411830902, 0.23728813230991364, 0.25, 0.12903225421905518, 0.23880596458911896, 0.20338982343673706, 0.11999999731779099, 0.19354838132858276 ]
BJgMFxrYPB
true
[ "We address the task of autonomous exploration and navigation using spatial affordance maps that can be learned in a self-supervised manner, these outperform classic geometric baselines while being more sample efficient than contemporary RL algorithms" ]