source
sequence
source_labels
sequence
rouge_scores
sequence
paper_id
stringlengths
9
11
ic
unknown
target
sequence
[ "Common-sense physical reasoning is an essential ingredient for any intelligent agent operating in the real world.", "For example, it can be used to simulate the environment, or to infer the state of parts of the world that are currently unobserved.", "In order to match real-world conditions, this causal knowledge must be learned without access to supervised data.", "To address this problem we present a novel method that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion.", "It incorporates prior knowledge about the compositional nature of human perception to factor interactions between object-pairs and learn efficiently.", "On videos of bouncing balls we show the superior modelling capabilities of our method compared to other unsupervised neural approaches that do not incorporate such prior knowledge.", "We demonstrate its ability to handle occlusion and show that it can extrapolate learned knowledge to scenes with different numbers of objects.", "Humans rely on common-sense physical reasoning to solve many everyday physics-related tasks BID32 .", "For example, it enables them to foresee the consequences of their actions (simulation), or to infer the state of parts of the world that are currently unobserved.", "This causal understanding is an essential ingredient for any intelligent agent that is to operate within the world. Common-sense physical reasoning is facilitated by the discovery and representation of objects (a core domain of human cognition BID45 ) that serve as primitives of a compositional system.", "They allow humans to decompose a complex visual scene into distinct parts, describe relations between them and reason about their dynamics as well as the consequences of their interactions BID4 BID32 BID48 .", "The most successful machine learning approaches to common-sense physical reasoning incorporate such prior knowledge in their design. 
They", "maintain explicit object representations, which allow for general physical dynamics to be learned between object pairs in a compositional manner BID3 BID8 BID49 .", "However, in these approaches learning is supervised, as it relies on object-representations from external sources (e.g. a physics simulator) that are typically unavailable in real-world scenarios. Neural approaches that learn to directly model motion or physical interactions in pixel space offer an alternative solution BID46 BID47 .", "However, while unsupervised, these methods suffer from a lack of compositionality at the representational level of objects.", "This prevents such end-to-end neural approaches from efficiently learning functions that operate on multiple entities and generalize in a human-like way (cf. BID4 ; BID32 ; BID41 , but see BID39 ). In this work we", "propose Relational N-EM (R-NEM), a novel approach to common-sense physical reasoning that learns physical interactions between objects from raw visual images in a purely unsupervised fashion.", "At its core is Neural Expectation Maximization (N-EM), a method that allows for the discovery of compositional object-representations, yet is unable to model interactions between objects. 
Therefore, we", "endow N-EM with a relational mechanism inspired by previous work BID3 BID8 BID41 , enabling it to factor interactions between object-pairs, learn efficiently, and generalize to visual scenes with a varying number of objects without re-training.", "We have argued that the ability to discover and describe a scene in terms of objects provides an essential ingredient for common-sense physical reasoning.", "This is supported by converging evidence from cognitive science and developmental psychology that intuitive physics and reasoning capabilities are built upon the ability to perceive objects and their interactions BID43 BID48 .", "The fact that young infants already exhibit this ability may even suggest an innate bias towards compositionality BID32 BID37 BID45 .", "Inspired by these observations we have proposed R-NEM, a method that incorporates inductive biases about the existence of objects and interactions, implemented by its clustering objective and interaction function respectively.", "The specific nature of the objects, and their dynamics and interactions, can then be learned efficiently purely from visual observations. In our experiments we find that R-NEM indeed captures the (physical) dynamics of various environments more accurately than other methods, and that it exhibits improved generalization to environments with different numbers of objects.", "It can be used as an approximate simulator of the environment, and to predict movement and collisions of objects, even when they are completely occluded.", "This demonstrates a notion of object permanence and aligns with evidence that young infants seem to infer that occluded objects move in connected paths and continue to maintain object-specific properties BID44 .", "Moreover, young infants also appear to expect that objects only interact when they come into contact BID44 , which is analogous to the behaviour of R-NEM, which attends to other objects only when a collision is imminent.", "In summary, we 
believe that our method presents an important step towards learning a more human-like model of the world in a completely unsupervised fashion. Current limitations of our approach revolve around grouping and prediction.", "What aspects of a scene humans group together typically varies as a function of the task at hand.", "One may perceive a stack of chairs as a whole if the goal is to move them to another room, or as individual chairs if the goal is to count the number of chairs in the stack.", "In order to facilitate this dynamic grouping one would need to incorporate top-down feedback from an agent into the grouping procedure to deviate from the built-in inductive biases.", "Another limitation of our approach is the need to incentivize R-NEM to produce useful groupings by injecting noise or reducing capacity.", "The former may prevent very small regularities in the input from being detected.", "Finally, the interaction in the E-step among the groups makes it difficult to increase the number of components above ten without causing harmful training instabilities.", "Due to the multitude of interactions and objectives in R-NEM (and RNN-EM) we find that they are sometimes challenging to train. In terms of prediction we have implicitly assumed that objects in the environment behave according to rules that can be inferred.", "This poses a challenge when objects deform in a manner that is difficult to predict (as is the case for objects in Space Invaders due to downsampling).", "However, in practice we find that (once pixels have been grouped together) the masking of the input helps each component quickly adapt its representation to any unforeseen behaviour across consecutive time steps.", "Perhaps a more severe limitation of R-NEM (and of RNN-EM in general) is that the second loss term of the outer training objective hinders the modelling of more complex, varying backgrounds, as the background group would have to predict the \"pixel prior\" for every other group. We argue 
that the ability to engage in common-sense physical reasoning benefits any intelligent agent that needs to operate in a physical environment, which provides exciting future research opportunities.", "In future work we intend to investigate how top-down feedback from an agent could be incorporated in R-NEM to facilitate dynamic groupings, but also how the compositional representations produced by R-NEM can benefit a reinforcement learner, for example to learn a modular policy that easily generalizes to novel combinations of known objects.", "Other interactions between a controller C and a model of the world M (implemented by R-NEM) as posed in BID42 constitute further research directions." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.14999999105930328, 0.08888888359069824, 0.04878048226237297, 0.7692307829856873, 0.13636362552642822, 0.11764705181121826, 0.21739129722118378, 0.21052631735801697, 0.12765957415103912, 0.2153846174478531, 0.2142857164144516, 0.2857142686843872, 0.1666666567325592, 0.22857142984867096, 0.1395348757505417, 0.16949151456356049, 0.7169811129570007, 0.23999999463558197, 0.20689654350280762, 0.44897958636283875, 0.29629629850387573, 0.04444443807005882, 0.15094339847564697, 0.2571428418159485, 0.0833333283662796, 0.22641508281230927, 0.145454540848732, 0.28070175647735596, 0.09756097197532654, 0.1249999925494194, 0.0833333283662796, 0.08888888359069824, 0.10526315122842789, 0.08510638028383255, 0.20338982343673706, 0.21276594698429108, 0.1071428507566452, 0.19512194395065308, 0.19718308746814728, 0.2083333283662796 ]
ryH20GbRW
true
[ "We introduce a novel approach to common-sense physical reasoning that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion" ]
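The factored pairwise interaction that this record attributes to R-NEM can be illustrated concretely: each object representation receives a summed, attention-gated effect from every other object, computed by one pair function shared across all pairs. The sketch below is a minimal pure-Python illustration of that idea; all weights, dimensions, and function names are illustrative assumptions, not the paper's actual parameterization.

```python
import math
import random

def matvec(W, x):
    """Multiply a matrix W (list of rows) by a vector x."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relational_effects(thetas, W_pair, W_eff, w_att):
    """Sketch of a factored interaction step (hypothetical names):
    for each object k, embed every pair (theta_k, theta_j) with a shared
    weight matrix, gate the pair's effect with a scalar attention weight,
    and sum the gated effects of the other objects j on k."""
    effects = []
    for k, theta_k in enumerate(thetas):
        total = [0.0] * len(theta_k)
        for j, theta_j in enumerate(thetas):
            if j == k:
                continue
            # shared pair embedding with a ReLU nonlinearity
            h = [max(0.0, v) for v in matvec(W_pair, theta_k + theta_j)]
            # scalar attention in (0, 1) deciding how much j influences k
            att = 1.0 / (1.0 + math.exp(-sum(w * v for w, v in zip(w_att, h))))
            eff = matvec(W_eff, h)
            total = [t + att * e for t, e in zip(total, eff)]
        effects.append(total)
    return effects
```

Because the same pair function is shared across all pairs and the gated effects are summed, the result for each object is invariant to the ordering of the other objects and the same parameters apply to scenes with a different number of objects, which is the generalization property the record highlights.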
[ "The idea that neural networks may exhibit a bias towards simplicity has a long history.", "Simplicity bias provides a way to quantify this intuition. ", "It predicts, for a broad class of input-output maps which can describe many systems in science and engineering, that simple outputs are exponentially more likely to occur upon uniform random sampling of inputs than complex outputs are. ", "This simplicity bias behaviour has been observed for systems ranging from the RNA sequence to secondary structure map, to systems of coupled differential equations, to models of plant growth. ", "Deep neural networks can be viewed as a mapping from the space of parameters (the weights) to the space of functions (how inputs get transformed to outputs by the network). ", "We show that this parameter-function map obeys the necessary conditions for simplicity bias, and numerically show that it is hugely biased towards functions with low descriptional complexity. ", "We also demonstrate a Zipf-like power-law probability-rank relation. ", "A bias towards simplicity may help explain why neural nets generalize so well.", "In a recent paper BID4 , an inequality inspired by the coding theorem from algorithmic information theory (AIT) BID5 , applicable to computable input-output maps, was derived using the following simple procedure.", "Consider a map f : I → O between N_I inputs and N_O outputs.", "The size of the input space is parameterized as n, e.g. 
if the inputs are binary strings, then N_I = 2^n.", "Assuming f and n are given, implement the following simple procedure: first enumerate all 2^n inputs and map them to outputs using f .", "Then order the outputs by how frequently they appear.", "(Preliminary work; under review by the International Conference on Machine Learning (ICML); do not distribute.)", "Using a Shannon-Fano code, one can then describe x with a code of length −log_2 P(x) + O(1), which therefore upper bounds the Kolmogorov complexity, giving the relation P(x) ≤ 2^{−K(x|f,n)+O(1)} .", "The O(1) terms are independent of x (but hard to estimate).", "Similar bounds can be found in standard works BID5 .", "As pointed out in BID4 , if the maps are simple, that is, condition (1): K(f) + K(n) ≪ K(x) + O(1) holds, then because K(x) ≤ K(x|f,n) + K(f) + K(n) + O(1), and K(x|f,n) ≤ K(x) + O(1), it follows that K(x|f,n) ≈ K(x) + O(1).", "The problem remains that Kolmogorov complexity is fundamentally uncomputable BID5 , and that the O(1) terms are hard to estimate.", "However, in reference (5) a more pragmatic approach was taken to argue that a bound on the probability P(x) that x obtains upon random sampling of inputs can be approximated as P(x) ≤ 2^{−a K̃(x) − b} (1), where K̃(x) is a suitable approximation to the Kolmogorov complexity of x.", "Here a and b are constants that are independent of x and which can often be determined from some basic information about the map.", "These constants pick up multiplicative and additive factors in the approximation to K(x) and to the O(1) terms. In addition to the simplicity of the input-output map f (condition (1)), the map also needs to obey conditions: (2) Redundancy: the number of inputs N_I is much larger than the number of outputs N_O, as otherwise P(x) can't vary much;", "(3) Large systems, where N_O ≫ 0, so that finite-size effects don't play a dominant role;", "(4) Nonlinear: if the map f is linear it won't show bias; and", "(5) Well-behaved: The map should not 
have a significant fraction of pseudorandom outputs because it is hard to find good approximations K̃(x).", "For example, many random-number generators produce outputs that appear complex, but in fact have low K(x) because they are generated by relatively simple algorithms with short descriptions. Some of the steps above may seem rather rough to AIT purists.", "For example: Can a reasonable approximation to K(x) be found?", "What about O(1) terms?", "And, how do you know condition (5) is fulfilled?", "Notwithstanding these important questions, in reference (5) the simplicity bias bound (1) was tested empirically for a wide range of different maps, ranging from a sequence to RNA secondary structure map, to a set of coupled differential equations, to L-systems (a model for plant morphology and computer graphics), to a stochastic financial model.", "In each case the bound works remarkably well: high-probability outputs have low complexity, and high-complexity outputs have low probability (but not necessarily vice versa).", "A simple matrix map that allows condition (1) to be directly tested also demonstrates that when the map becomes sufficiently complex, simplicity bias phenomena disappear.", "[Fragmentary caption; the recoverable content: the constants a and b can generally be estimated with a limited amount of information, and to first order b can simply be set to zero as long as there is a way to estimate max(K̃(x)); if a different complexity measure K̃_{α,β}(x) = α C_LZ(x) + β is chosen, the constants simply rescale as a_{α,β} = a/α and b_{α,β} = b − aβ/α, so the bound is robust to the chosen approximate measure of complexity. Panels: (a) RNA, n = 10, where the simplest structure does have the largest probability, upper bound a = 0.23, b = 1.08; (c) RNA, n = 80, upper bound a = 0.33, b = 6.39.]", "FIG0 : Probability that an RNA secondary structure x obtains upon random sampling of length L = 80 sequences versus a Lempel-Ziv measure of the complexity of the structure.", "The black solid line is the simplicity-bias bound (1), while the dashed line denotes the bound with the parameter b set to zero. In FIG0 we illustrate an iconic input-output map for RNA, a linear biopolymer that can fold into well-defined structures due to specific bonding between the four different types of nucleotides ACUG from which its sequences are formed.", "While the full three-dimensional structure is difficult to predict, the secondary structure, which records which nucleotide binds to which nucleotide, can be efficiently and accurately calculated.", "This mapping from sequences to secondary structures fulfills the conditions above.", "Most importantly, the map, which uses the laws of physics to determine the lowest free-energy structure for a given sequence, is independent of the length of the sequences, and so fulfills the simplicity condition (1).", "The structures (the outputs x) can be written in terms of a ternary string, and so simple compression algorithms can be used to estimate their complexity.", "In FIG0 , we observe, as expected, that the probability P(x) that a particular secondary structure x is found upon random sampling of sequences is bounded by Eq. 
(1) as predicted.", "Similar robust simplicity bias behaviour to that seen in this figure was observed for the other maps. Similar scaling (5) was also observed for this map with a series of other approximations to K(x), suggesting that the precise choice of complexity measure was not critical, as long as it captures some basic essential features. In summary then, the simplicity bias bound (1) works robustly well for a wide range of different maps.", "The predictions are strong: the probability that an output obtains upon random sampling of inputs should drop (at least) exponentially with linear increases in the descriptional complexity of the output.", "Nevertheless, it is important to note that while the 5 conditions above are sufficient for the bound (1) to hold, they are not sufficient to guarantee that the map will be biased (and therefore simplicity biased).", "One can easily construct maps that obey them, but do not show bias.", "Understanding the conditions resulting in biased maps is very much an open area of investigation. The question we will address here is: Can deep learning be re-cast into the language of input-output maps, and if so, do these maps also exhibit the very general phenomenon of simplicity bias?", "2. The parameter-function map.", "It is not hard to see that the map above obeys condition 1: The shortest description of the map grows slowly with the logarithm of the size of the space of functions (which determines the typical K(x)).", "Conditions 2-4 are also clearly met.", "Condition 5 is more complex and requires empirical testing.", "But given that simplicity bias was observed for such a wide range of maps, our expectation is that it will hold robustly for neural networks also.", "We have provided evidence that neural networks exhibit simplicity bias.", "The fact that the phenomena observed are remarkably similar to those of a wide range of maps from science and engineering BID4 suggests that this behaviour is general, and will hold for many 
neural network architectures.", "It would be interesting to test this claim for larger systems, which will require new sampling techniques, and to derive analytic arguments for a bias towards simplicity, as done in BID12 ." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19999998807907104, 0.1666666567325592, 0.19672130048274994, 0.19230768084526062, 0.19230768084526062, 0.23076923191547394, 0, 0.1538461446762085, 0.14035087823867798, 0.05128204822540283, 0.1666666567325592, 0.1702127605676651, 0.05714285373687744, 0.0555555522441864, 0, 0.06896550953388214, 0.1621621549129486, 0.17142856121063232, 0.13793103396892548, 0.17777776718139648, 0.1904761791229248, 0.1666666567325592, 0.1599999964237213, 0, 0.20512820780277252, 0.21276594698429108, 0.1818181723356247, 0.1111111044883728, 0, 0, 0.06896551698446274, 0.13740457594394684, 0.0416666604578495, 0.27272728085517883, 0.054054051637649536, 0.0952380895614624, 0.145454540848732, 0, 0.06451612710952759, 0.054054051637649536, 0.05128204822540283, 0, 0.08163265138864517, 0.15189872682094574, 0.1666666567325592, 0.10810810327529907, 0.14814814925193787, 0.23999999463558197, 0.145454540848732, 0.20000000298023224, 0.1538461446762085, 0.2181818187236786, 0.10256409645080566, 0.29411762952804565, 0.2545454502105713, 0, 0.05714285373687744, 0.19999998807907104, 0.1111111044883728, 0.2711864411830902, 0.1818181723356247, 0.15909090638160706 ]
r1l9jy39p4
true
[ "A very strong bias towards simple outputs is observed in many simple input-output maps. The parameter-function map of deep networks is found to be biased in the same way." ]
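The sampling procedure described in this record — enumerate all 2^n inputs, tally how often each output occurs, and compare probability against an approximate complexity — can be reproduced on a toy map. The sketch below uses a hypothetical map (output bit i is the AND of cyclically adjacent input bits) and a crude LZ78-style phrase count as the complexity proxy K̃; both choices are illustrative and are not taken from the paper.

```python
from collections import Counter
from itertools import product

def toy_map(bits):
    """Hypothetical input-output map: output bit i = x_i AND x_{i+1} (cyclic)."""
    n = len(bits)
    return "".join(str(bits[i] & bits[(i + 1) % n]) for i in range(n))

def lz_phrases(s):
    """Crude LZ78-style complexity proxy: number of phrases in a greedy parse."""
    seen, phrase, count = set(), "", 0
    for ch in s:
        phrase += ch
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def output_distribution(n):
    """Enumerate all 2^n inputs and tally how often each output occurs."""
    return Counter(toy_map(bits) for bits in product([0, 1], repeat=n))

n = 10
counts = output_distribution(n)
most_common, freq = counts.most_common(1)[0]
```

On this map the all-zeros string is by far the most probable output (its preimages are exactly the cyclic strings with no two adjacent ones), and it is also among the simplest strings under the phrase-count proxy — the qualitative pattern the simplicity bias bound predicts: high-probability outputs have low complexity.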
[ "Imitation Learning (IL) is an appealing approach to learn desirable autonomous behavior.", "However, directing IL to achieve arbitrary goals is difficult.", "In contrast, planning-based algorithms use dynamics models and reward functions to achieve goals.", "Yet, reward functions that evoke desirable behavior are often difficult to specify.", "In this paper, we propose \"Imitative Models\" to combine the benefits of IL and goal-directed planning.", "Imitative Models are probabilistic predictive models of desirable behavior able to plan interpretable expert-like trajectories to achieve specified goals.", "We derive families of flexible goal objectives, including constrained goal regions, unconstrained goal sets, and energy-based goals.", "We show that our method can use these objectives to successfully direct behavior.", "Our method substantially outperforms six IL approaches and a planning-based approach in a dynamic simulated autonomous driving task, and is efficiently learned from expert demonstrations without online data collection. 
", "We also show our approach is robust to poorly-specified goals, such as goals on the wrong side of the road.", "Imitation learning (IL) is a framework for learning a model to mimic behavior.", "At test-time, the model pursues its best guess of desirable behavior.", "By letting the model choose its own behavior, we cannot direct it to achieve different goals.", "While work has augmented IL with goal conditioning (Dosovitskiy & Koltun, 2016; Codevilla et al., 2018) , it requires goals to be specified during training via explicit goal labels, and the goals are simple (e.g., turning).", "In contrast, we seek flexibility to achieve general goals for which we have no demonstrations.", "In contrast to IL, planning-based algorithms like model-based reinforcement learning (MBRL) methods do not require expert demonstrations.", "MBRL can adapt to new tasks specified through reward functions (Kuvayev & Sutton, 1996; Deisenroth & Rasmussen, 2011) .", "The \"model\" is a dynamics model, used to plan under the user-supplied reward function.", "Planning enables these approaches to perform new tasks at test-time.", "The key drawback is that these models learn dynamics of possible behavior rather than dynamics of desirable behavior.", "This means that the responsibility of evoking desirable behavior is entirely deferred to engineering the input reward function.", "Designing reward functions that cause MBRL to evoke complex, desirable behavior is difficult when the space of possible undesirable behaviors is large.", "In order to succeed, the rewards cannot lead the model astray towards observations significantly different than those with which the model was trained.", "Our goal is to devise an algorithm that combines the advantages of MBRL and IL by offering MBRL's flexibility to achieve new tasks at test-time and IL's potential to learn desirable behavior entirely from offline data.", "To accomplish this, we first train a model to forecast expert trajectories with a density function, which 
can score trajectories and plans by how likely they are to come from the expert.", "A probabilistic model is necessary because expert behavior is stochastic: e.g. at an intersection, the expert could choose to turn left or right.", "Next, we derive a principled probabilistic inference objective to create plans that incorporate both (1) the model and (2) arbitrary new tasks.", "Finally, we derive families of tasks that we can provide to the inference framework.", "Our method can accomplish new tasks specified as complex goals without having seen an expert complete these tasks before.", "We investigate properties of our method on a dynamic simulated autonomous driving task (see Fig. 1 ).", "Videos are available at https://sites.google.com/view/imitative-models.", "Our contributions are as follows:", "Figure 1: Our method: deep imitative models.", "Top Center.", "We use demonstrations to learn a probability density function q of future behavior and deploy it to accomplish various tasks.", "Left: A region in the ground plane is input to a planning procedure that reasons about how the expert would achieve that task.", "It coarsely specifies a destination, and guides the vehicle to turn left.", "Right: Goal positions and potholes yield a plan that avoids potholes and achieves one of the goals on the right.", "1. Interpretable expert-like plans with minimal reward engineering.", "Our method outputs multistep expert-like plans, offering superior interpretability to one-step imitation learning models.", "In contrast to MBRL, our method generates expert-like behaviors with minimal reward engineering.", "2. Flexibility to new tasks: In contrast to IL, our method flexibly incorporates and achieves goals not seen during training, and performs complex tasks that were never demonstrated, such as navigating to goal regions and avoiding test-time only potholes, as depicted in Fig. 1 .", "3. 
Robustness to goal specification noise: We show that our method is robust to noise in the goal specification.", "In our application, we show that our agent can receive goals on the wrong side of the road, yet still navigate towards them while staying on the correct side of the road.", "4. State-of-the-art CARLA performance: Our method substantially outperforms MBRL, a custom IL method, and all five prior CARLA IL methods known to us.", "It learned near-perfect driving through dynamic and static CARLA environments from expert observations alone.", "We proposed \"Imitative Models\" to combine the benefits of IL and MBRL.", "Imitative Models are probabilistic predictive models able to plan interpretable expert-like trajectories to achieve new goals.", "Inference with an Imitative Model resembles trajectory optimization in MBRL, enabling it to both incorporate new goals and plan to them at test-time, which IL cannot.", "Learning an Imitative Model resembles offline IL, enabling it to circumvent the difficult reward-engineering and costly online data collection necessities of MBRL.", "We derived families of flexible goal objectives and showed our model can successfully incorporate them without additional training.", "Our method substantially outperformed six IL approaches and an MBRL approach in a dynamic simulated autonomous driving task.", "We showed our approach is robust to poorly specified goals, such as goals on the wrong side of the road.", "We believe our method is broadly applicable in settings where expert demonstrations are available, flexibility to new situations is demanded, and safety is paramount.", "Future work could investigate methods to handle both observation noise and out-of-distribution observations to enhance the applicability to robust real systems -we expand on this issue in Appendix E. 
Finally, to facilitate more general planning, future work could extend our approach to explicitly reason about all agents in the environment in order to inform a closed-loop plan for the controlled agent." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1463414579629898, 0.21052631735801697, 0.2857142686843872, 0.1463414579629898, 0.5777778029441833, 0.7234042286872864, 0.13636362552642822, 0.0952380895614624, 0.07017543166875839, 0.1666666567325592, 0.09999999403953552, 0.20512820780277252, 0.2222222238779068, 0.16129031777381897, 0.23255813121795654, 0.08695651590824127, 0.08695651590824127, 0.1395348757505417, 0.05128204822540283, 0.1818181723356247, 0.21739129722118378, 0.19999998807907104, 0.12244897335767746, 0.25806450843811035, 0.17543859779834747, 0.15686273574829102, 0.19607841968536377, 0.1904761791229248, 0.08510638028383255, 0.043478257954120636, 0, 0, 0.0555555522441864, 0.1666666567325592, 0.11999999731779099, 0.1463414579629898, 0.21739129722118378, 0.054054051637649536, 0.1395348757505417, 0.1428571343421936, 0.11764705181121826, 0.08888888359069824, 0.18518517911434174, 0.11999999731779099, 0.04651162400841713, 0.3414634168148041, 0.5909090638160706, 0.2222222238779068, 0.19607841968536377, 0.08510638028383255, 0.08510638028383255, 0.2083333283662796, 0.07843136787414551, 0.1265822798013687 ]
Skl4mRNYDr
true
[ "In this paper, we propose Imitative Models to combine the benefits of IL and goal-directed planning: probabilistic predictive models of desirable behavior able to plan interpretable expert-like trajectories to achieve specified goals." ]
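The test-time inference this record describes — combining the learned density over expert-like trajectories with a goal likelihood — can be sketched as scoring candidate plans by log q(trajectory) + log p(goal | trajectory) and keeping the best. The discrete candidate set, the Gaussian goal likelihood, and all names below are illustrative assumptions, not the paper's actual objective or planner.

```python
import math

def log_goal_likelihood(traj, goal, sigma=1.0):
    """Unnormalized Gaussian reward for ending near the goal (an assumption,
    in the spirit of the record's goal-set objectives)."""
    dx, dy = traj[-1][0] - goal[0], traj[-1][1] - goal[1]
    return -(dx * dx + dy * dy) / (2.0 * sigma * sigma)

def plan(candidates, log_q, goal):
    """Pick the candidate maximizing log q(traj) + log p(goal | traj),
    where log_q stands in for the learned density over expert behavior."""
    return max(candidates, key=lambda t: log_q(t) + log_goal_likelihood(t, goal))

# Two toy trajectories in the plane: one stays on the "road" (y = 0),
# one cuts across to the wrong side (y > 0).
on_road = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
off_road = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]

def log_q(traj):
    # hypothetical expert density: heavily penalize leaving the road
    return -100.0 if any(abs(y) > 0.5 for _, y in traj) else 0.0
```

With a goal placed on the wrong side of the road, the goal likelihood alone favors the off-road trajectory, but the imitative prior dominates and the planner still returns the expert-like one — the robustness property the record claims.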
[ "Learning communication via deep reinforcement learning has recently been shown to be an effective way to solve cooperative multi-agent tasks.", "However, learning which communicated information is beneficial for each agent's decision-making remains a challenging task.", "In order to address this problem, we introduce a fully differentiable framework for communication and reasoning, enabling agents to solve cooperative tasks in partially-observable environments.", "The framework is designed to facilitate explicit reasoning between agents, through a novel memory-based attention network that can learn selectively from its past memories.", "The model communicates through a series of reasoning steps that decompose each agent's intentions into learned representations that are used first to compute the relevance of communicated information, and second to extract information from memories given newly received information.", "By selectively interacting with new information, the model effectively learns a communication protocol directly, in an end-to-end manner.", "We empirically demonstrate the strength of our model in cooperative multi-agent tasks, where inter-agent communication and reasoning over prior information substantially improves performance compared to baselines.", "Communication is one of the fundamental building blocks for cooperation in multi-agent systems.", "The ability to effectively represent and communicate information valuable to a task is especially important in multi-agent reinforcement learning (MARL).", "Apart from learning what to communicate, it is critical that agents learn to reason based on the information communicated to them by their teammates.", "Such a capability enables agents to develop sophisticated coordination strategies that would be invaluable in application scenarios such as search-and-rescue for multi-robot systems (Li et al., 2002) , swarming and flocking with adversaries (Kitano et al., 1999) , multiplayer games (e.g., 
StarCraft (Vinyals et al., 2017 ), DoTA (OpenAI, 2018 )), and autonomous vehicle planning (Petrillo et al., 2018 ). Building agents that can solve complex cooperative tasks requires us to answer the question: how do agents learn to communicate in support of intelligent cooperation?", "Indeed, humans inspire this question as they exhibit highly complex collaboration strategies, via communication and reasoning, allowing them to recognize important task information through a structured reasoning process (De Ruiter et al., 2010; Garrod et al., 2010; Fusaroli et al., 2012) .", "Significant progress in multiagent deep reinforcement learning (MADRL) has been made in learning effective communication (protocols) through the following methods:", "(i) broadcasting a vector representation of each agent's private observations to all agents (Sukhbaatar et al., 2016; Foerster et al., 2016) ,", "(ii) selective and targeted communication through the use of soft-attention networks (Vaswani et al., 2017) that compute the importance of each agent and its information (Jiang & Lu, 2018; Das et al., 2018) , and", "(iii) communication through a shared memory channel (Pesce & Montana, 2019; Foerster et al., 2018) , which allows agents to collectively learn and contribute information at every time instant.", "The architecture of (Jiang & Lu, 2018) implements communication by enabling agents to communicate intention as a learned representation of private observations, which is then integrated into the hidden state of a recurrent neural network as a form of agent memory.", "One downside to this approach is that because communication is constrained to the neighborhood of each agent, communicated information does not enrich the actions of all agents, even if certain agent communications may be critical for a task.", "For example, if an agent from afar has covered a landmark, this information would be beneficial to another agent that has a trajectory planned towards the same landmark.", "In 
contrast, Memory Driven Multi-Agent Deep Deterministic Policy Gradient (MD-MADDPG) (Pesce & Montana, 2019) implements a shared memory state between all agents that is updated sequentially after each agent selects an action.", "However, the importance of each agent's update to the memory in MD-MADDPG is solely decided by its interactions with the memory channel.", "In addition, the sequential nature of updating the memory channel restricts the architecture's performance to 2-agent systems.", "Targeted Multi-Agent Communication (TarMAC) (Das et al., 2018) uses soft-attention (Vaswani et al., 2017) as the communication mechanism to infer the importance of each agent's information, albeit without the use of memory in the communication step.", "The paradigm of using relations in agent-based reinforcement learning was proposed by Zambaldi et al. (2018) through multi-headed dot-product attention (MHDPA) (Vaswani et al., 2017).", "The core idea of relational reinforcement learning (RRL) combines inductive logic programming (Lavrac & Dzeroski, 1994; Džeroski et al., 2001) and reinforcement learning to perform reasoning steps iterated over entities in the environment.", "Attention is a widely adopted framework in Natural Language Processing (NLP) and Visual Question Answering (VQA) tasks (Andreas et al., 2016b; a; Hudson & Manning, 2018) for computing these relations and interactions between entities.", "The mechanism (Vaswani et al., 2017) generates an attention distribution over the entities, or more simply a weighted value vector based on importance for the task at hand.", "This method has been adopted successfully in state-of-the-art results for Visual Question Answering (VQA) tasks (Andreas et al., 2016b; a), and more recently (Hudson & Manning, 2018), demonstrating the robustness and generalization capacity of reasoning methods in neural networks.", "In the context of multi-agent cooperation, we draw inspiration from work in
soft-attention (Vaswani et al., 2017) to implement a method for computing relations between agents, coupled with a memory-based attention network from Compositional Attention Networks (MAC) (Hudson & Manning, 2018), yielding a framework for memory-based communication that performs attentive reasoning over new information and past memories.", "Concretely, we develop a communication architecture in MADRL by leveraging the approach of RRL and the capacity to learn from past experiences.", "Our architecture is guided by the belief that structured and iterative reasoning between non-local entities should enable agents to capture the higher-order relations that are necessary for complex problem-solving.", "To seek a balance between computational efficiency and adaptivity to variable team sizes, we exploit soft-attention (Vaswani et al., 2017) as the base operation for selectively attending to an entity or information.", "To capture the information and histories of other entities, and to better equip agents to make a deliberate decision, we separate out the attention and reasoning steps.", "The attention unit informs the agent of which entities are most important for the current time-step, while the reasoning steps use previous memories and the information guided by the attention step to extract the shared information that is most relevant.", "This explicit separation in communication enables agents not only to place importance on new information from other agents, but also to selectively choose information from their past memories given new information.", "This communication framework is learned in an end-to-end fashion, without resorting to any supervision, as a result of task-specific rewards.", "Our empirical study demonstrates the effectiveness of our novel architecture in solving cooperative multi-agent tasks with varying team sizes and environments.", "By leveraging the paradigm of centralized learning and decentralized execution, alongside communication, we
demonstrate the efficacy of the learned cooperative strategies.", "We have introduced a novel framework, SARNet, for communication in multi-agent deep RL which performs structured attentive reasoning between agents to improve coordination skills.", "Through a decomposition of the representations of communication into reasoning steps, our agents exceed baseline methods in overall performance.", "Our experiments demonstrate key benefits of gathering insights from (1) an agent's own memories, and (2) the internal representations of the information available to the agent.", "The communication architecture is learned end-to-end, and is capable of computing the task-relevant importance of each piece of computed information from cooperating agents.", "While this multi-agent communication mechanism shows promising results, we believe that we can further adapt this method to scale to a larger number of agents, through a gating mechanism to initiate communication, and decentralized learning." ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13793103396892548, 0.07999999821186066, 0.11764705181121826, 0.05882352590560913, 0.04444444179534912, 0.0714285671710968, 0.1666666567325592, 0.260869562625885, 0.06896550953388214, 0.0624999962747097, 0.04999999701976776, 0.04255318641662598, 0.0714285671710968, 0.06666666269302368, 0.10256409645080566, 0.10256409645080566, 0.17777776718139648, 0.13333332538604736, 0, 0.04651162400841713, 0.13793103396892548, 0.1599999964237213, 0.25, 0.11764705181121826, 0.04651162400841713, 0.045454543083906174, 0.21052631735801697, 0.0833333283662796, 0.2153846174478531, 0.19354838132858276, 0.10526315122842789, 0.0476190447807312, 0.12121211737394333, 0.1428571343421936, 0.05714285373687744, 0.13333332538604736, 0.19354838132858276, 0.0714285671710968, 0.1764705777168274, 0.1428571343421936, 0.0624999962747097, 0.20689654350280762, 0.20512820780277252 ]
H1lVIxHtPS
true
[ "Novel architecture of memory based attention mechanism for multi-agent communication." ]
[ "In this paper, we introduce Symplectic ODE-Net (SymODEN), a deep learning framework which can infer the dynamics of a physical system from observed state trajectories.", "To achieve better generalization with fewer training samples, SymODEN incorporates appropriate inductive bias by designing the associated computation graph in a physics-informed manner.", "In particular, we enforce Hamiltonian dynamics with control to learn the underlying dynamics in a transparent way which can then be leveraged to draw insight about relevant physical aspects of the system, such as mass and potential energy.", "In addition, we propose a parametrization which can enforce this Hamiltonian formalism even when the generalized coordinate data is embedded in a high-dimensional space or we can only access velocity data instead of generalized momentum.", "This framework, by offering interpretable, physically-consistent models for physical systems, opens up new possibilities for synthesizing model-based control strategies.", "In recent years, deep neural networks (Goodfellow et al., 2016) have become very accurate and widely used in many application domains, such as image recognition (He et al., 2016), language comprehension (Devlin et al., 2019), and sequential decision making (Silver et al., 2017).", "To learn underlying patterns from data and enable generalization beyond the training set, the learning approach incorporates appropriate inductive bias (Haussler, 1988; Baxter, 2000) by promoting representations which are simple in some sense.", "It typically manifests itself via a set of assumptions, which in turn can guide a learning algorithm to pick one hypothesis over another.", "The success in predicting an outcome for previously unseen data then depends on how well the inductive bias captures the ground reality.", "Inductive bias can be introduced as the prior in a Bayesian model, or via the choice of computation graphs in a neural network.", "In a variety of settings,
especially in physical systems, wherein laws of physics are primarily responsible for shaping the outcome, generalization in neural networks can be improved by leveraging the underlying physics in designing the computation graphs.", "Here, by leveraging a generalization of the Hamiltonian dynamics, we develop a learning framework which exploits the underlying physics in the associated computation graph.", "Our results show that incorporation of such physics-based inductive bias offers insight about relevant physical properties of the system, such as inertia, potential energy, and total conserved energy.", "These insights, in turn, enable a more accurate prediction of future behavior and improvement in out-of-sample behavior.", "Furthermore, learning a physically-consistent model of the underlying dynamics can subsequently enable usage of model-based controllers which can provide performance guarantees for complex, nonlinear systems.", "In particular, insight about kinetic and potential energy of a physical system can be leveraged to synthesize appropriate control strategies, such as the method of controlled Lagrangians (Bloch et al., 2001) and interconnection & damping assignment (Ortega et al., 2002), which can reshape the closed-loop energy landscape to achieve a broad range of control objectives (regulation, tracking, etc.)
.", "Here we have introduced Symplectic ODE-Net, which provides a systematic way to incorporate prior knowledge of Hamiltonian dynamics with control into a deep learning framework.", "We show that SymODEN achieves better prediction with fewer training samples by learning an interpretable, physically-consistent state-space model.", "Future work will incorporate a broader class of physics-based priors, such as the port-Hamiltonian system formulation, to learn the dynamics of a larger class of physical systems.", "SymODEN can work with embedded angle data or when we only have access to velocity instead of generalized momentum.", "Future work would explore other types of embedding, such as embedded 3D orientations.", "Another interesting direction could be to combine energy shaping control (potential as well as kinetic energy shaping) with interpretable end-to-end learning frameworks.", "Tianshu Wei, Yanzhi Wang, and Qi Zhu.", "Deep Reinforcement Learning for Building HVAC Control.", "In Proceedings of the 54th Annual Design Automation Conference (DAC), pp. 22:1-22:6, 2017." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.16326530277729034, 0.0416666604578495, 0.2666666507720947, 0.145454540848732, 0.23255813121795654, 0.03278687968850136, 0.10526315122842789, 0.08510638028383255, 0, 0.04444443807005882, 0.0357142798602581, 0.08695651590824127, 0.03999999538064003, 0.04999999701976776, 0.1249999925494194, 0.1621621549129486, 0.20408162474632263, 0.09302324801683426, 0.1702127605676651, 0.22727271914482117, 0.052631575614213943, 0.2222222238779068, 0.0624999962747097, 0, 0 ]
ryxmb1rKDS
true
[ "This work enforces Hamiltonian dynamics with control to learn system models from embedded position and velocity data, and exploits this physically-consistent dynamics to synthesize model-based control via energy shaping." ]
[ "Federated learning, where a global model is trained by iterative parameter averaging of locally-computed updates, is a promising approach for distributed training of deep networks; it provides high communication-efficiency and privacy-preservability, which allows it to fit well into decentralized data environments, e.g., mobile-cloud ecosystems.", "However, despite these advantages, federated learning-based methods still have a challenge in dealing with the non-IID training data of local devices (i.e., learners).", "In this regard, we study the effects of a variety of hyperparametric conditions under non-IID environments, to answer important concerns in practical implementations:", "(i) We first investigate parameter divergence of local updates to explain performance degradation from non-IID data.", "The origin of the parameter divergence is also found both empirically and theoretically.", "(ii) We then revisit the effects of optimizers, network depth/width, and regularization techniques; our observations show that the well-known advantages of these hyperparameter optimization strategies could rather yield diminishing returns with non-IID data.", "(iii) We finally provide the reasons for the failure cases in a categorized way, mainly based on metrics of the parameter divergence.", "Over the recent years, federated learning (McMahan et al., 2017) has been a huge success in reducing the communication overhead in distributed training of deep networks.", "Guaranteeing competitive performance, federated learning permits each learner to compute their local updates of each round for relatively many iterations (e.g., 1 epoch, 10 epochs, etc.), which provides much higher communication-efficiency compared to the conventional data parallelism approaches (for intra-datacenter environments, e.g., Dean et al. (2012); Chen et al.
(2016)) that generally require very frequent gradient aggregation.", "Furthermore, federated learning can also significantly reduce data privacy and security risks by making it possible to conceal the on-device data of each learner from the server or other learners; thus the approach can be applied well to environments with highly private data (e.g., personal medical data), and it is now emerging as a promising methodology for privacy-preserving distributed learning along with differential privacy-based methods (Hard et al., 2018; Yang et al., 2018; Bonawitz et al., 2019; Chen et al., 2019).", "In this way, federated learning takes a simple approach that performs iterative parameter averaging of local updates computed from each learner's own dataset, which suggests an efficient way to learn a shared model without centralizing training data from multiple sources; but here, since the local data of each device is created based on its usage pattern, heterogeneity of training data distributions across the learners might be naturally assumed in real-world cases.", "Hence, each local dataset would not follow the population distribution, and handling the decentralized non-IID data still remains a statistical challenge in the field of federated learning (Smith et al., 2017).", "For instance, Zhao et al.
(2018) observed severe performance degradation in multi-class classification accuracy under highly skewed non-IID data; it was reported that more diminishing returns could be yielded as the probabilistic distance of learners' local data from the population distribution increases.", "We now explain the internal reasons for the observations in the previous subsection.", "Through the experimental results, we were able to classify the causes of the failures under non-IID data into three categories; the following discussions are described based on this.", "8 Note that our discussion in this subsection is mostly made from the results under Nesterov momentum SGD and on CIFAR-10; the complete results including other optimizers (e.g., pure SGD, Polyak momentum SGD, and Adam) and datasets (e.g., SVHN) are given in Appendix C.", "Inordinate magnitude of parameter divergence.", "As mentioned before, bigger parameter divergence is the root cause of diminishing returns under federated learning methods with non-IID data.", "By extension, here we observe that even under the same non-IID data setting, some of the considered hyperparametric methods yield greater parameter divergence than when they are not applied.", "For example, from the left plot of Figure 3, we see that under the Non-IID(2) setting, the parameter divergence values (in the last fully-connected layer) become greater as the network depth increases (note that NetA-Baseline, NetA-Deeper, and NetA-Deepest have 3, 6, and 9 convolutional layers, respectively; see also Appendix A.1 for their detailed architecture).", "The corresponding final test accuracy was found to be 74.11%, 73.67%, and 68.98%, respectively, in order of the degree of shallowness; this fits well with the parameter divergence results.", "Since NetA-Deeper and NetA-Deepest have twice and three times as many model parameters as NetA-Baseline, it can be expected that the deeper models yield bigger parameter divergence in the whole model; but our
results also show its qualitative increase at the layer level.", "In relation, we also provide the results using a modern network architecture (e.g., ResNet (He et al., 2016)) in Table 8 of the appendix.", "From the middle plot of the figure, we can also observe bigger parameter divergence for a high level of weight decay under the Non-IID(2) setting.", "Under the non-IID data setting, a test accuracy of about 72 ∼ 74% was achieved at the low levels (≤ 0.0001), but a weight decay factor of 0.0005 yielded only 54.11%.", "Hence, this suggests that with non-IID data we should apply much smaller weight decay to federated learning-based methods.", "Here we note that if a single iteration is considered for each learner's local update per round, the corresponding parameter divergence will of course be the same without regard to the degree of weight decay.", "However, in our experiments, the large number of local iterations per round (i.e., 100) made a big difference in the divergence values under the non-IID data setting; this eventually yielded the accuracy gap.", "We additionally observe for the non-IID cases that even with a weight decay factor of 0.0005, the parameter divergence values are similar to those with the smaller factors at very early rounds, in which the norms of the weights are relatively very small.", "In addition, it is observed from the right plot of the figure that Dropout (Hinton et al., 2012; Srivastava et al., 2014) also yields bigger parameter divergence under the non-IID data setting.", "The corresponding test accuracy was seen to be a diminishing return with Nesterov momentum SGD (i.e., using Dropout we can achieve +2.85% under IID, but only +1.69% is obtained under non-IID(2), compared to when it is not applied; see Table 2); however, it was observed that the generalization effect of Dropout is still valid in test accuracy for pure SGD and Adam (refer also to Table 13 in the appendix).", "Steep fall phenomenon.", "As we saw
previously, inordinate magnitude of parameter divergence is one of the notable characteristics of failure cases under federated learning with non-IID data.", "However, under the non-IID data setting, some failure cases have been observed where the test accuracy is still low but the parameter divergence values of the last fully-connected layer decrease (rapidly) over rounds; as the rounds go on, the values were sometimes seen to be even lower than those of the comparison targets.", "We refer to this phenomenon as the steep fall phenomenon.", "It is inferred that these (unexpected, abnormal) sudden drops of parameter divergence values indicate going into poor local minima (or saddles); this can be supported by the behavior that test accuracy increases plausibly at very early rounds, but the growth rate quickly stagnates and eventually becomes much lower than the comparison targets.", "The left plot of Figure 4 shows the effect of the Adam optimizer with respect to its implementations.", "Through the experiments, we identified that under non-IID data environments, the performance of Adam is very sensitive to the range of model variables to be averaged, unlike the non-adaptive optimizers (e.g., momentum SGD); its moment variables should also be considered in the parameter averaging together with weights and biases (see also Table 3).", "The poor performance of Adam-WB under the Non-IID(2) setting would come from twice as many momentum variables as the momentum SGD, which indicates the increased number of them affected by the non-IIDness; thus, originally we had thought that extreme parameter divergence could appear if the momentum variables are not averaged together with weights and biases.", "However, the parameter divergence values under Adam-WB were seen to be similar to or even smaller than those under Adam-A (see also Figure 11 in the appendix).", "Instead, from the left panel we can observe that the parameter divergence of Adam-WB in the last
fully-connected layer is bigger than that of Adam-A at the very early rounds (as we expected), but soon it is abnormally sharply reduced over rounds; this is considered the steep fall phenomenon.", "The middle and the right plots of the figure also show the steep fall phenomenon in the last fully-connected layer, with respect to network width and whether to use Batch Normalization, respectively.", "In the case of the NetC models, NetC-Baseline, NetC-Wider, and NetC-Widest use the global average pooling, the max pooling with stride 4, and the max pooling with stride 2, respectively, after the last convolutional layer; the number of neurons in the output layer becomes 2560, 10240, and 40960, respectively (see also Appendix A.1 for their detailed architecture).", "Under the Non-IID(2) setting, the corresponding test accuracy was found to be 64.06%, 72.61%, and 73.64%, respectively, in order of the degree of wideness.", "In addition, we can see that under Non-IID(2), Batch Normalization 9 yields not only big parameter divergence (especially before the first learning rate drop) but also the steep fall phenomenon; the corresponding test accuracy was seen to be very low (see Table 3).", "The failure of Batch Normalization stems from the fact that the dependence of batch-normalized hidden activations makes each learner's update too overfitted to the distribution of their local training data.", "Batch Renormalization, by relaxing the dependence, yields a better outcome; however, it still fails to exceed the performance of the baseline due to the significant parameter divergence.", "To explain the impact of the steep fall phenomenon on test accuracy, we provide Figure 5, which indicates that the loss landscapes for the failure cases (e.g., Adam-WB and with Batch Normalization) commonly show sharper minima that lead to poorer generalization (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017), and the minimal value in the bowl is relatively greater. 9 For its implementations into the considered federated learning algorithm, we let the server get the proper moving variance by", "10 Here it is also observed that going into sharp minima starts even in early rounds such as the 25th.", "Excessively high training loss of local updates.", "The final cause that we consider for the failure cases is excessively high training loss of local updates.", "For instance, from the left plot of Figure 6, we see that under the Non-IID(2) setting, NetB-Baseline gives much higher training loss than the other models.", "Here we note that for the NetB-Baseline model, the global average pooling is applied after the last convolutional layer, and the number of neurons in the first fully-connected layer thus becomes 256 · 256; on the other hand, NetB-Wider and NetB-Widest use the max pooling with stride 4 and 2, which make the number of neurons in that layer become 1024 · 256 and 4096 · 256, respectively (see also Appendix A.1 for their details).", "The experimental results showed that NetB-Baseline has notably lower test accuracy (see Table 4).", "We additionally remark that for NetB-Baseline, very high losses are observed under the IID setting, and their values are even greater than in the non-IID case; however, one has to be aware that local updates are extremely easy to overfit to each training dataset under non-IID data environments, so the converged training losses being high is more critical than in the IID cases.", "The middle and the right plots of the figure show the excessive training loss under the non-IID setting when applying the weight decay factor of 0.0005 and the data augmentation, respectively.", "In the case of the high level of weight decay, severe performance degradation appears compared to when the levels are low (i.e., ≤ 0.0001), as already discussed.", "In addition, we observed that with Nesterov momentum SGD, the data augmentation yields a diminishing return in test accuracy (i.e., with the data augmentation we can
achieve +3.36% under IID, but −0.16% is obtained under non-IID(2), compared to when it is not applied); with Adam the degree of the diminishment becomes higher (refer to Table 12 in the appendix).", "10 Based on Li et al. (2018), the visualization of the loss surface was conducted by L(α, β) = L(θ* + αδ + βγ), where θ* is a center point of the model parameters, and δ and γ are the orthogonal direction vectors. In the data augmentation cases, judging from the fact that the", "parameter divergence values are not so different between with and without it, we can identify that the performance degradation stems from the high training loss (see Figures 30 and 31 in the appendix).", "Here we additionally note that unlike on CIFAR-10, in the experiments on SVHN it was seen that the generalization effect of the data augmentation is still valid in test accuracy (see Table 12).", "In this paper, we explored the effects of various hyperparameter optimization strategies for optimizers, network depth/width, and regularization on federated learning of deep networks.", "Our primary concern in this study lay in non-IID data, for which we found that under non-IID data settings many of the probed factors show somewhat different behaviors compared to under the IID setting and vanilla training.", "To explain this, the concept of parameter divergence was utilized, and its origin was identified both empirically and theoretically.", "We also provided the internal reasons for our observations across a number of the experimental cases.", "In the meantime, federated learning has been vigorously studied for decentralized data environments due to its inherent strengths, i.e., high communication-efficiency and privacy-preservability.", "However, so far most of the existing works mainly dealt with only IID data, and research to address non-IID data has just entered the beginning stage very recently despite its high real-world relevance.", "Our study, as one of the openings, handles the
essential factors in federated training under non-IID data environments, and we expect that it will provide fresh perspectives for upcoming works.", "A EXPERIMENTAL DETAILS" ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0952380895614624, 0.2666666507720947, 0.1860465109348297, 0.3243243098258972, 0.11764705181121826, 0.4313725531101227, 0.25, 0.1702127605676651, 0.1265822798013687, 0.1627907007932663, 0.17073170840740204, 0.2745097875595093, 0.19354838132858276, 0.3125, 0.21739129722118378, 0.1355932205915451, 0.07692307233810425, 0.39024388790130615, 0.20408162474632263, 0.08571428060531616, 0.07843136787414551, 0.06557376682758331, 0.08695651590824127, 0.09302324801683426, 0.15686273574829102, 0.20512819290161133, 0.07547169178724289, 0.19230768084526062, 0.1428571343421936, 0.19607841968536377, 0.0731707289814949, 0, 0.27272728085517883, 0.1230769157409668, 0.06896550953388214, 0.05714285373687744, 0.10810810327529907, 0.11764705181121826, 0.08955223113298416, 0.04347825422883034, 0.09836065024137497, 0.0833333283662796, 0.0615384578704834, 0.08888888359069824, 0.0634920597076416, 0.17391303181648254, 0.09090908616781235, 0.08888888359069824, 0, 0.0714285671710968, 0.10256409645080566, 0.1304347813129425, 0.07894736528396606, 0, 0.11594202369451523, 0.17391303181648254, 0.0833333283662796, 0.11267605423927307, 0.1515151411294937, 0.07843136787414551, 0.1599999964237213, 0.3636363446712494, 0.1818181723356247, 0.10256409645080566, 0.34285715222358704, 0.21739129722118378, 0.15094339847564697, 0.19999998807907104, 0 ]
SJeOAJStwB
true
[ "We investigate the internal reasons of our observations, the diminishing effects of the well-known hyperparameter optimization methods on federated learning from decentralized non-IID data." ]
[ "Deep learning models are known to be vulnerable to adversarial examples.", "A practical adversarial attack should require as little knowledge as possible of the attacked model T. Current substitute attacks need pre-trained models to generate adversarial examples, and their attack success rates heavily rely on the transferability of adversarial examples.", "Current score-based and decision-based attacks require many queries to the T. In this study, we propose a novel adversarial imitation attack.", "First, it produces a replica of the T via a two-player game like the generative adversarial networks (GANs).", "The objective of the generative model G is to generate examples which lead D to return outputs different from T. The objective of the discriminative model D is to output the same labels as T under the same inputs.", "Then, the adversarial examples generated by D are utilized to fool the T. Compared with current substitute attacks, the imitation attack can use less training data to produce a replica of T and improve the transferability of adversarial examples.", "Experiments demonstrate that our imitation attack requires less training data than black-box substitute attacks, but achieves an attack success rate close to the white-box attack on unseen data with no query.", "Deep neural networks are often vulnerable to imperceptible perturbations of their inputs, causing incorrect predictions (Szegedy et al., 2014).", "Studies on adversarial examples developed attacks and defenses to assess and increase the robustness of models, respectively.", "Adversarial attacks include white-box attacks, where the attack method has full access to models, and black-box attacks, where the attacks do not need knowledge of model structures and weights.", "White-box attacks need training data and the gradient information of models, such as FGSM (Fast Gradient Sign Method) (Goodfellow et al., 2015), BIM (Basic Iterative Method) (Kurakin et al., 2017a) and JSMA
(Jacobian-based Saliency Map Attack) (Papernot et al., 2016b).", "However, since the gradient information of attacked models is hard to access, the white-box attack is not practical in real-world tasks.", "The literature shows that adversarial examples have a transferability property and can affect different models, even when the models have different architectures (Szegedy et al., 2014; Papernot et al., 2016a; Liu et al., 2017).", "Such a phenomenon is closely related to linearity and over-fitting of models (Szegedy et al., 2014; Hendrycks & Gimpel, 2017; Goodfellow et al., 2015; Tramèr et al., 2018).", "Therefore, substitute attacks are proposed to attack models without the gradient information.", "Substitute black-box attacks utilize pre-trained models to generate adversarial examples and apply these examples to attacked models.", "Their attack success rates rely on the transferability of adversarial examples and are often lower than those of white-box attacks.", "Black-box score-based attacks (Ilyas et al., 2018a; b) do not need pre-trained models; they access the output probabilities of the attacked model to generate adversarial examples iteratively.", "Black-box decision-based attacks (Brendel et al., 2017; Cheng et al., 2018; Chen et al., 2019) require less information than the score-based attacks.", "They utilize hard labels of the attacked model to generate adversarial examples.", "Adversarial attacks need knowledge of models.", "However, a practical attack method should require as little knowledge of attacked models as possible, which includes training data and procedure, model weights and architectures, output probabilities and hard labels (Athalye et al., 2018).", "The disadvantage of current substitute black-box attacks is that they need pre-trained substitute models trained on the same dataset as the attacked model T (Hendrycks & Gimpel, 2017; Goodfellow et al., 2015; Kurakin et al., 2017a) or a number of images to imitate the outputs of T to produce substitute
networks .", "In practice, the prerequisites of these attacks are hard to obtain in real-world tasks.", "Substitute models trained on limited images can hardly generate adversarial examples with good transferability.", "The disadvantage of current decision-based and score-based black-box attacks is that every adversarial example is synthesized through numerous queries.", "Hence, developing a practical attack mechanism is necessary.", "In this paper, we propose adversarial imitation training, which is a special two-player game.", "The game has a generative model G and an imitation model D. G is designed to produce examples on which the predicted labels of the attacked model T and of D differ, while the imitation model D strives to output the same label as T .", "The proposed imitation training needs much less training data than T, does not need the labels of these data, and the data need not coincide with the training data of T.", "Then, the adversarial examples generated by D are utilized to fool T, as in substitute attacks.", "We call this new attack mechanism the adversarial imitation attack.", "Compared with current substitute attacks, our adversarial imitation attack requires less training data.", "Score-based and decision-based attacks need many queries to generate each adversarial example.", "The similarity between the proposed method and current score-based and decision-based attacks is that the adversarial imitation attack also requires many queries in the training stage.", "The difference between these two kinds of attacks is that, like other substitute attacks, our method does not need any additional queries in the test stage.", "Experiments show that our proposed method achieves state-of-the-art performance compared with current substitute attacks and decision-based attacks.", "We summarize our main contributions as follows:", "• The proposed new attack mechanism needs less training data of attacked models than current substitute attacks, but 
achieves an attack success rate close to that of white-box attacks.", "• The proposed new attack mechanism requires the same information about attacked models as decision-based attacks in the training stage, but is query-independent in the testing stage.", "Practical adversarial attacks should require as little knowledge of the attacked model T as possible.", "Current black-box attacks need numerous training images or queries to generate adversarial images.", "In this study, to address this problem, we combined the advantages of current black-box attacks and proposed a new attack mechanism, the imitation attack, which replicates the information of T and generates adversarial examples that fool deep learning models efficiently.", "Compared with substitute attacks, the imitation attack requires much less data than the training set of T and does not need the labels of the training data, yet adversarial examples generated by the imitation attack have stronger transferability against T .", "Compared with score-based and decision-based attacks, our imitation attack needs only the same information as decision-based attacks, but achieves state-of-the-art performance and is query-independent in the testing stage.", "Experiments showed the superiority of the proposed imitation attack.", "Additionally, we observed that a deep learning classification model T can easily be stolen using a limited number of unlabeled images, far fewer than the training images of T .", "In future work, we will evaluate the performance of the proposed adversarial imitation attack on tasks other than image classification.", "A NETWORK ARCHITECTURES Figure 2 and Figure 3 .", "The experiments show that adversarial examples generated by the proposed imitation attack can fool the attacked model with a small perturbation." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.4000000059604645, 0.24390242993831635, 0.25, 0.07692307233810425, 0.05714285373687744, 0.24390242993831635, 0.15789473056793213, 0.06666666269302368, 0.1538461446762085, 0.1764705777168274, 0, 0.2142857164144516, 0.1111111044883728, 0.11428570747375488, 0.27272728085517883, 0.25, 0.13793103396892548, 0.10810810327529907, 0, 0.1818181723356247, 0.1249999925494194, 0.0952380895614624, 0.07692307233810425, 0.08695651590824127, 0.1666666567325592, 0.0714285671710968, 0.1111111044883728, 0.1599999964237213, 0.10256409645080566, 0.12121211737394333, 0.23999999463558197, 0.31578946113586426, 0.260869562625885, 0.25, 0.21621620655059814, 0.05714285373687744, 0.07407406717538834, 0, 0.1621621549129486, 0.11764705181121826, 0.08695651590824127, 0.1818181723356247, 0.27272728085517883, 0.1428571343421936, 0.11764705181121826, 0.2222222238779068, 0.10810810327529907, 0.20000000298023224, 0.11764705181121826, 0.2666666507720947 ]
SJlVVAEKwS
true
[ "A novel adversarial imitation attack to fool machine learning models." ]
[ "Stochastic Gradient Descent (SGD) methods using randomly selected batches are widely-used to train neural network (NN) models.", "Performing design exploration to find the best NN for a particular task often requires extensive training with different models on a large dataset, which is very computationally expensive.", "The most straightforward method to accelerate this computation is to distribute the batch of SGD over multiple processors.", "However, large batch training often times leads to degradation in accuracy, poor generalization, and even poor robustness to adversarial attacks. ", "Existing solutions for large batch training either do not work or require massive hyper-parameter tuning.", "To address this issue, we propose a novel large batch training method which combines recent results in adversarial training (to regularize against ``sharp minima'') and second order optimization (to use curvature information to change batch size adaptively during training).", "We extensively evaluate our method on Cifar-10/100, SVHN, TinyImageNet, and ImageNet datasets, using multiple NNs, including residual networks as well as compressed networks such as SqueezeNext. 
", "Our new approach exceeds the performance of the existing solutions in terms of both accuracy and the number of SGD iterations (up to 1\\% and $3\\times$, respectively).", "We emphasize that this is achieved without any additional hyper-parameter tuning to tailor our method to any of these experiments.\n", "Finding the right NN architecture for a particular application requires extensive hyper-parameter tuning and architecture search, oftentimes on a very large dataset.", "The delays associated with training NNs are often the main bottleneck in the design process.", "One of the ways to address this issue is to use large distributed processor clusters; however, to efficiently utilize each processor, the portion of the batch associated with each processor (sometimes called the mini-batch) must grow correspondingly.", "In the ideal case, the hope is to decrease the computational time proportionally to the increase in batch size, without any drop in generalization quality.", "However, large batch training has a number of well-known drawbacks.", "These include degradation of accuracy, poor generalization, and poor robustness to adversarial perturbations BID17 BID36 .", "In order to address these drawbacks, many solutions have been proposed BID14 BID37 BID7 BID29 BID16 . However", ", these methods either work only for particular models on a particular dataset, or they require massive hyper-parameter tuning, which is oftentimes not discussed in the presentation of results. Note that", "while extensive hyper-parameter tuning may result in good result tables, it is antithetical to the original motivation of using large batch sizes to reduce training time. One solution to reduce the brittleness of SGD to hyper-parameter tuning is to use second-order methods. The full Newton", "method with line search is parameter-free, and it does not require a learning rate. 
This is achieved", "by using a second-order Taylor series approximation to the loss function, instead of a first-order one as in SGD, to obtain curvature information. BID25 ; BID34 BID2", "show that Newton/quasi-Newton methods outperform SGD for training NNs. However, their results", "only consider simple fully connected NNs and auto-encoders. A problem with second-order", "methods is that they can exacerbate the large batch problem, as by construction they have a higher tendency to get attracted to local minima compared to SGD. For these reasons, early attempts", "at using second-order methods for training convolutional NNs have so far not been successful. Ideally, if we could find a regularization scheme to avoid local/bad minima during training, this could resolve many of these issues. In the seminal works of El Ghaoui", "& BID9 ; BID33 , a very interesting connection was made between robust optimization and regularization. It was shown that the solution to", "a robust optimization problem for least squares is the same as the solution of a Tikhonov regularized problem BID9 . This was also extended to the Lasso", "problem in BID33 . Adversarial learning/training methods", ", which are a special case of robust optimization methods, are usually described as a min-max optimization procedure to make the model more robust. Recent studies with NNs have empirically", "found that robust optimization usually converges to points in the optimization landscape that are flatter and more robust to adversarial perturbation BID36 . Inspired by these results, we explore whether", "second-order information regularized by robust optimization can be used to do large batch size training of NNs. 
We show that both classes of methods have properties", "that can be exploited in the context of large batch training to help reduce the brittleness of SGD with large batch sizes, thereby leading to significantly improved results.", "We introduce an adaptive batch size algorithm based on Hessian information to speed up the training process of NNs, and we combine this approach with adversarial training (which is a form of robust optimization, and which could be viewed as a regularization term for large batch training).", "We extensively test our method on multiple datasets (SVHN, Cifar-10/100, TinyImageNet and ImageNet) with multiple NN models (AlexNet, ResNet, Wide ResNet and SqueezeNext).", "As the goal of large batch training is to reduce training time, we did not perform any hyper-parameter tuning to tailor our method for any of these tests.", "Our method allows one to increase the batch size and learning rate automatically, based on Hessian information.", "This helps significantly reduce the number of parameter updates, and it achieves superior generalization performance, without the need to tune any of the additional hyper-parameters.", "Finally, we show that a block Hessian can be used to approximate the trend of the full Hessian to reduce the overhead of using second-order information.", "These improvements are useful for reducing NN training time in practice.", "• L(θ) is continuously differentiable and the gradient function of L is Lipschitz continuous with Lipschitz constant L g , i.e. DISPLAYFORM0 for all θ 1 and θ 2 .", "Also, the global minimum of L(θ) is achieved at θ * and L(θ * ) = L * . • Each", "gradient of each individual l i (z i ) is an unbiased estimation of the true gradient, i.e. DISPLAYFORM1 where V(·) is the variance operator, i.e. DISPLAYFORM2 From Assumption 2, it is not hard to get DISPLAYFORM3 DISPLAYFORM4 With Assumption 2, the following two lemmas can be found in any optimization reference, e.g. . 
We give", "the proofs here for completeness. Lemma 3", ". Under", "Assumption 2, after one iteration of the stochastic gradient update with step size η t at θ t , we have DISPLAYFORM5 where DISPLAYFORM6 Proof. With the", "L g -smoothness of L(θ), we have DISPLAYFORM7 From above, the result follows. Lemma 4. Under", "Assumption 2, for any θ, we have DISPLAYFORM8 Proof. Let DISPLAYFORM9", "Then h(θ) has a unique global minimum atθ DISPLAYFORM10 The following lemma is trivial; we omit the proof here. DISPLAYFORM11 PROOF", "OF THEOREM 1. Given these lemmas, we now proceed with the proof of Theorem 1. Proof. Assume the batch used", "at step t is b t ; according to Lemmas 3 and 5, DISPLAYFORM12 where the last inequality is from Lemma 4. This yields DISPLAYFORM13", "It is not hard to see that DISPLAYFORM14 which concludes DISPLAYFORM15 Therefore, DISPLAYFORM16 We show a toy example of binary logistic regression on the mushroom classification dataset 2 . We split the whole dataset into", "6905 for training and 1819 for validation. η 0 = 1.2 for SGD with batch", "size 100 and full gradient descent. We set 100 ≤ b t ≤ 3200 for", "our algorithm, i.e. ABS. Here we mainly focus on the", "training losses of different optimization algorithms. The results are shown in FIG3", ". In order to see if η 0 is not", "an optimal step size for full gradient descent, we vary η 0 for full gradient descent; see results in FIG3 ." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.07407406717538834, 0.054054051637649536, 0.07407406717538834, 0.27586206793785095, 0.1599999964237213, 0.3478260934352875, 0.11764705181121826, 0.0624999962747097, 0, 0.06451612710952759, 0.0833333283662796, 0.052631575614213943, 0.06666666269302368, 0.1818181723356247, 0.1599999964237213, 0.07692307233810425, 0, 0.13636362552642822, 0.07692307233810425, 0.11428570747375488, 0.09090908616781235, 0.09090908616781235, 0.05128204822540283, 0.07999999821186066, 0.0624999962747097, 0, 0, 0, 0.11428570747375488, 0.3333333432674408, 0.1764705777168274, 0.23076923191547394, 0.06451612710952759, 0.11764705181121826, 0.307692289352417, 0.0624999962747097, 0.12903225421905518, 0.0952380895614624, 0.0555555522441864, 0.0714285671710968, 0, 0, 0.05714285373687744, 0, 0, 0, 0.0714285671710968, 0.0624999962747097, 0, 0.25, 0.17391303181648254, 0, 0.09090908616781235, 0.10526315122842789, 0.0714285671710968 ]
H1lnJ2Rqt7
true
[ "Large batch size training using adversarial training and second order information" ]
[ "Inspired by neurophysiological discoveries of navigation cells in the mammalian\n", "brain, we introduce the first deep neural network architecture for modeling Egocentric\n", "Spatial Memory (ESM).", "It learns to estimate the pose of the agent and\n", "progressively construct top-down 2D global maps from egocentric views in a spatially\n", "extended environment.", "During the exploration, our proposed ESM network\n", "model updates belief of the global map based on local observations using a recurrent\n", "neural network.", "It also augments the local mapping with a novel external\n", "memory to encode and store latent representations of the visited places based on\n", "their corresponding locations in the egocentric coordinate.", "This enables the agents\n", "to perform loop closure and mapping correction.", "This work contributes in the\n", "following aspects: first, our proposed ESM network provides an accurate mapping\n", "ability which is vitally important for embodied agents to navigate to goal locations.\n", "In the experiments, we demonstrate the functionalities of the ESM network in\n", "random walks in complicated 3D mazes by comparing with several competitive\n", "baselines and state-of-the-art Simultaneous Localization and Mapping (SLAM)\n", "algorithms.", "Secondly, we faithfully hypothesize the functionality and the working\n", "mechanism of navigation cells in the brain.", "Comprehensive analysis of our model\n", "suggests the essential role of individual modules in our proposed architecture and\n", "demonstrates efficiency of communications among these modules.", "We hope this\n", "work would advance research in the collaboration and communications over both\n", "fields of computer science and computational neuroscience.", "Egocentric spatial memory (ESM) refers to a memory system that encodes, stores, recognizes and recalls the spatial information about the environment from an egocentric perspective BID24 .", "Such information is vitally 
important for embodied agents to construct spatial maps and reach goal locations in navigation tasks. For the past decades, a wealth of neurophysiological results has shed light on the underlying neural mechanisms of ESM in mammalian brains.", "Mostly through single-cell electrophysiological recording studies in mammals BID23 , four types of cells have been identified as specialized for processing spatial information: head-direction cells (HDC), border and boundary vector cells (BVC), place cells (PC) and grid cells (GC).", "Their functionalities are: (1) According to BID38 , HDC, together with view cells BID5 , fires whenever the mammal's head orients in certain directions.", "(2) The firing behavior of BVC depends on the proximity to environmental boundaries BID22 and directions relative to the mammals' heads BID1 .", "(3) PC resides in the hippocampus and increases firing rates when the animal is in specific locations, independent of head orientations BID1 .", "(4) GC, as a metric of space BID35 , are regularly distributed in a grid across the environment BID11 .", "They are updated based on the animal's speed and orientation BID1 .", "The cooperation of these cell types enables mammals to navigate and reach goal locations in complex environments; hence, we are motivated to endow artificial agents with a similar memory capability, but a computational architecture for such ESM is still absent. Inspired by neurophysiological discoveries, we propose the first computational architecture, named the Egocentric Spatial Memory Network (ESMN), for modeling ESM using a deep neural network.", "ESMN unifies the functionalities of different navigation cells within one end-to-end trainable framework and accurately constructs top-down 2D global maps from egocentric views.", "To the best of our knowledge, we are the first to encapsulate the four cell types with functionally similar neural network-based modules within one integrated architecture.", "In navigation tasks, the agent 
with the ESMN takes one egomotion from a discrete set of macro-actions.", "ESMN fuses the observations from the agent over time and produces a top-down 2D local map using a recurrent neural network.", "In order to align the spatial information at the current step with all the past predicted local maps, ESMN estimates the agent's egomotion and transforms all the past information using a spatial transformer neural network.", "ESMN also augments the local mapping module with a novel spatial memory capable of integrating local maps into global maps and storing the discriminative representations of the visited places.", "The loop closure component then detects whether the current place has been visited by comparing its observation with the representations in the external memory, which subsequently contributes to global map correction. Neuroscience-inspired AI is an emerging research field BID12 .", "Our novel deep learning architecture for modeling ESM in the mammalian navigation system attempts to narrow the gap between computer science (CS) and computational neuroscience (CN) and to bring interest to both communities.", "On the one hand, our novel ESMN outperforms several competitive baselines and state-of-the-art monocular visual SLAMs.", "Our outstanding performance in map construction brings great advancements in robotics and CS.", "It could also have many potential engineering applications, such as path planning for robots.", "(2) In CN, a neuroplausible navigation system with the four types of cells integrated is still under development.", "In our work, we put forward a bold hypothesis about how these navigation cells may cooperate and perform integrated navigation functions.", "We also faithfully propose several possible communication links among them in the form of deep architectures. We evaluate ESMN in eight 3D maze environments which feature complex geometry, a variety of textures, and varying lighting conditions.", "In the experiments, we demonstrate the acquired 
skills of ESMN in terms of positional inference, free space prediction, loop closure classification and map correction, which play important roles in navigation.", "We provide a detailed analysis of each module in ESMN as well as their functional mappings to the four cell types.", "Lastly, we conduct ablation studies, compare with state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithms, and show the efficacy of our integrated framework in unifying the four modules.", "We take inspiration from neurophysiological discoveries and propose the first deep neural network architecture for modeling ESM, which unifies the functionalities of the four navigation cell types: head-direction cells, border cells and boundary vector cells, place cells and grid cells.", "Our learnt model demonstrates the capacity to estimate the pose of the agent and construct a top-down 2D spatial representation of the physical environment in egocentric coordinates, which could have many potential applications, such as path planning for robot agents.", "Our ESMN accumulates the belief about free space by integrating egocentric views.", "To eliminate errors during mapping, ESMN also augments the local mapping module with an external spatial memory to keep track of the discriminative representations of the visited places for loop closure detection.", "We conduct exhaustive evaluation experiments by comparing our model with several competitive baselines and state-of-the-art SLAM algorithms.", "The experimental results demonstrate that our model surpasses all these methods.", "The comprehensive ablation study suggests the essential role of individual modules in our proposed architecture and the efficiency of communications among these modules." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.5517241358757019, 0.4516128897666931, 0.1818181723356247, 0.0714285671710968, 0.06451612710952759, 0.07692307233810425, 0.060606054961681366, 0, 0.0624999962747097, 0.07692307233810425, 0, 0, 0.0833333283662796, 0.06666666269302368, 0.0624999962747097, 0.20689654350280762, 0.13333332538604736, 0, 0, 0.38461539149284363, 0.0833333283662796, 0.12903225421905518, 0.07692307233810425, 0, 0.06666666269302368, 0.07692307233810425, 0.0476190410554409, 0.24561403691768646, 0.15094339847564697, 0.0952380895614624, 0.051282044500112534, 0.10256409645080566, 0.10810810327529907, 0, 0.33766233921051025, 0.1463414579629898, 0.09302324801683426, 0.11428570747375488, 0.10526315122842789, 0.08695651590824127, 0.04651162400841713, 0.0714285671710968, 0.1702127605676651, 0, 0.06451612710952759, 0.060606054961681366, 0.1666666567325592, 0.10526315122842789, 0.11538460850715637, 0.1304347813129425, 0.10526315122842789, 0.04444443807005882, 0.42307692766189575, 0.1111111044883728, 0.06451612710952759, 0.0833333283662796, 0.0555555522441864, 0, 0.10256409645080566 ]
SkmM6M_pW
true
[ "first deep neural network for modeling Egocentric Spatial Memory inspired by neurophysiological discoveries of navigation cells in mammalian brain" ]
[ "We show that if the usual training loss is augmented by a Lipschitz regularization term, then the networks generalize. ", "We prove generalization by first establishing a stronger convergence result, along with a rate of convergence. ", "A second result resolves a question posed in Zhang et al. (2016): how can a model distinguish between the case of clean labels, and randomized labels? ", "Our answer is that Lipschitz regularization using the Lipschitz constant of the clean data makes this distinction. ", "In this case, the model learns a different function, which we hypothesize correctly fails to learn the dirty labels. ", "While deep neural networks (DNNs) give more accurate predictions than other machine learning methods BID30 , they lack some of the performance guarantees of these other methods.", "One step towards performance guarantees for DNNs is a proof of generalization with a rate.", "In this paper, we present such a result, for Lipschitz regularized DNNs.", "In fact, we prove a stronger convergence result from which generalization follows. We also consider the following problem, inspired by (Zhang et al., 2016) .", "Problem 1.1.", "[Learning from dirty data] Suppose we are given a labelled data set, which has Lipschitz constant Lip(D) = O(1) (see (3) below).", "Consider making copies of 10 percent of the data, adding a vector of norm to the perturbed data points, and changing the label of the perturbed points.", "Call the new, dirty, data setD.", "The dirty data has Lip(D) = O(1/ ).", "However, if we compute the histogram of the pairwise Lipschitz constants, the distribution of the values on the right-hand side of (3) is mostly below Lip(D), with a small fraction of the values being O(1/ ), since the duplicated images are apart but have different labels.", "Thus we can solve (1) with L 0 estimated using the prevalent smaller values, which is an accurate estimate of the clean data Lipschitz constant.", "The solution of (1) using such a value is 
illustrated on the right of Figure 1 .", "Compare to the Tikhonov regularized solution on the right of Figure 2 .", "We hypothesize that on dirty data the solution of (1) replaces the thin tall spikes with short fat spikes, leading to a better approximation of the original clean data. In Figure 1 we illustrate the solution of (1) (with L 0 = 0), using synthetic one-dimensional data.", "In this case, the labels {−1, 0, 1} are embedded naturally into Y = R, and λ = 0.1.", "Notice that the solution matches the labels exactly on a subset of the data.", "In the second part of the figure, we show a solution with dirty labels which introduce a large Lipschitz constant; in this case, the solution reduces the Lipschitz constant, thereby correcting the errors. Learning from dirty labels is studied in §2.4.", "We show that the model learns a different function than the dirty label function.", "We conjecture, based on synthetic examples, that it learns a better approximation to the clean labels. We begin by establishing notation.", "Consider the classification problem to fix ideas, although our results apply to other problems as well.", "Definition 1.2.", "Let D n = x 1 , . . . , x n be a sequence of i.i.d. random variables sampled from the probability distribution ρ.", "The data x i are in X = [0, 1] d .", "Consider the classification problem with D labels, and represent the labels by vertices of the probability simplex, Y ⊂ R D .", "Write y i = u 0 (x i ) for the map from data to labels.", "Write u(x; w) for the map from the input data to the last layer of the network.1", "Augment the training loss with Lipschitz regularization DISPLAYFORM0 The first term in (1) is the usual average training loss.", "The second term in (1) is the Lipschitz regularization term: the excess Lipschitz constant of the map u, compared to the constant L 0 .", "In order to apply the generalization theorem, we need to take L 0 ≥ Lip(u 0 ), the Lipschitz constant of the data on the whole data manifold. 
In", "practice, Lip(u 0 ) can be estimated by the Lipschitz constant of the empirical data. The", "definition of the Lipschitz constants for functions and data, as well as the implementation details, are presented in §1.3 below. Figure 1: Synthetic labelled data and Lipschitz regularized solution u. Left", ": The solution value matches the labels exactly on a large portion of the data set. Right", ": dirty labels: 10% of the data is incorrect; the regularized solution corrects the errors. Our analysis will apply to the problem (1), which is convex in u and does not depend explicitly on the weights, w. Of course", ", once u is restricted to a fixed neural network architecture, the corresponding minimization problem becomes non-convex in the weights. Our analysis", "can avoid the dependence on the weights because we make the assumption that there are enough parameters so that u can exactly fit the training data. The assumption", "is justified by Zhang et al. (2016) . As we send n →", "∞ for convergence, we require that the network also grow, in order to continue to satisfy this assumption. Our results apply", "to other non-parametric methods in this regime." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.380952388048172, 0.31578946113586426, 0.375, 0.20512819290161133, 0.1428571343421936, 0.08510638028383255, 0.21621620655059814, 0.17142856121063232, 0.3333333134651184, 0.08888888359069824, 0.23255813121795654, 0.06896551698446274, 0, 0.13793103396892548, 0.1304347813129425, 0.15789473056793213, 0.1764705777168274, 0.13333332538604736, 0.0476190410554409, 0.17142856121063232, 0.18518517911434174, 0.17142856121063232, 0.23255813121795654, 0.10526315122842789, 0.13333332538604736, 0.05882352590560913, 0.1463414579629898, 0.10526315122842789, 0.15789473056793213, 0.3589743673801422, 0.3333333432674408, 0.2222222238779068, 0.21052631735801697, 0.15686273574829102, 0.1538461446762085, 0.14035087823867798, 0.1818181723356247, 0.08888888359069824, 0.2857142686843872, 0.1395348757505417, 0.13333332538604736 ]
r1l3NiCqY7
true
[ "We prove generalization of DNNs by adding a Lipschitz regularization term to the training loss. We resolve a question posed in Zhang et al. (2016)." ]
[ "For fast and energy-efficient deployment of trained deep neural networks on resource-constrained embedded hardware, each learned weight parameter should ideally be represented and stored using a single bit. ", "Error-rates usually increase when this requirement is imposed.", "Here, we report large improvements in error rates on multiple datasets, for deep convolutional neural networks deployed with 1-bit-per-weight.", "Using wide residual networks as our main baseline, our approach simplifies existing methods that binarize weights by applying the sign function in training; we apply scaling factors for each layer with constant unlearned values equal to the layer-specific standard deviations used for initialization.", "For CIFAR-10, CIFAR-100 and ImageNet, and models with 1-bit-per-weight requiring less than 10 MB of parameter memory, we achieve error rates of 3.9%, 18.5% and 26.0% / 8.5% (Top-1 / Top-5) respectively.", "We also considered MNIST, SVHN and ImageNet32, achieving 1-bit-per-weight test results of 0.27%, 1.9%, and 41.3% / 19.1% respectively.", "For CIFAR, our error rates halve previously reported values, and are within about 1% of our error-rates for the same network with full-precision weights.", "For networks that overfit, we also show significant improvements in error rate by not learning batch normalization scale and offset parameters.", "This applies to both full precision and 1-bit-per-weight networks.", "Using a warm-restart learning-rate schedule, we found that training for 1-bit-per-weight is just as fast as full-precision networks, with better accuracy than standard schedules, and achieved about 98%-99% of peak performance in just 62 training epochs for CIFAR-10/100.", "For full training code and trained models in MATLAB, Keras and PyTorch see https://github.com/McDonnell-Lab/1-bit-per-weight/ .", "Fast parallel computing resources, namely GPUs, have been integral to the resurgence of deep neural networks, and their ascendancy to becoming 
state-of-the-art methodologies for many computer vision tasks.", "However, GPUs are both expensive and wasteful in terms of their energy requirements.", "They typically compute using single-precision floating point (32 bits), which has now been recognized as providing far more precision than needed for deep neural networks.", "Moreover, training and deployment can require the availability of large amounts of memory, both for storage of trained models, and for operational RAM.", "If deep-learning methods are to become embedded in resource-constrained sensors, devices and intelligent systems, ranging from robotics to the internet-of-things to self-driving cars, reliance on high-end computing resources will need to be reduced. To this end, there has been increasing interest in finding methods that drive down the resource burden of modern deep neural networks.", "Existing methods typically exhibit good performance, but for the ideal case of single-bit parameters and/or processing, they still fall well short of state-of-the-art error rates on important benchmarks. In this paper, we report a significant reduction in the gap (see Figure 1 and Results) between Convolutional Neural Networks (CNNs) deployed using weights stored and applied using standard precision (32-bit floating point) and networks deployed using weights represented by a single bit each. In the process of developing our methods, we also obtained significant improvements in error-rates obtained by full-precision versions of the CNNs we used. In addition to having application in custom hardware deploying deep networks, networks deployed using 1-bit-per-weight have previously been shown BID21 to enable significant speedups on regular GPUs, although doing so is not yet possible using standard popular libraries. Aspects of this work were first communicated as a subset of the material in a workshop abstract and talk BID19 .", "Figure 1: Our error-rate gaps between using full-precision and 1-bit-per-weight.", "All points 
except black crosses are data from some of our best results reported in this paper for each dataset.", "Black points are results on the full ImageNet dataset, in comparison with results of BID22 (black crosses).", "The notation 4x, 10x and 15x corresponds to network width (see Section 4). 1.1 RELATED WORK" ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.25925925374031067, 0, 0.2222222238779068, 0.24242423474788666, 0.07017543166875839, 0.0416666604578495, 0.08163265138864517, 0.08510638028383255, 0.05714285373687744, 0.23333333432674408, 0, 0.037735845893621445, 0, 0.15686273574829102, 0.08888888359069824, 0.10666666179895401, 0.10218977928161621, 0.0555555522441864, 0.08695651590824127, 0.0476190410554409, 0 ]
rytNfI1AZ
true
[ "We train wide residual networks that can be immediately deployed using only a single bit for each convolutional weight, with significantly better accuracy than past methods." ]
[ "This paper presents a system for immersive visualization of Non-Euclidean spaces using real-time ray tracing.", "It exploits the capabilities of the new generation of GPUs based on NVIDIA's Turing architecture in order to develop new methods for intuitive exploration of landscapes featuring non-trivial geometry and topology in virtual reality." ]
[ 1, 0 ]
[ 0.23076923191547394, 0.09999999403953552 ]
hPqt8zfmbi
false
[ "Immersive Visualization of the Classical Non-Euclidean Spaces using Real-Time Ray Tracing." ]
[ "We propose the set autoencoder, a model for unsupervised representation learning for sets of elements.", "It is closely related to sequence-to-sequence models, which learn fixed-sized latent representations for sequences, and have been applied to a number of challenging supervised sequence tasks such as machine translation, as well as unsupervised representation learning for sequences.\n", "In contrast to sequences, sets are permutation invariant.", "The proposed set autoencoder considers this fact, both with respect to the input as well as the output of the model.", "On the input side, we adapt a recently-introduced recurrent neural architecture using a content-based attention mechanism.", "On the output side, we use a stable marriage algorithm to align predictions to labels in the learning phase.\n", "We train the model on synthetic data sets of point clouds and show that the learned representations change smoothly with translations in the inputs, preserve distances in the inputs, and that the set size is represented directly.", "We apply the model to supervised tasks on the point clouds using the fixed-size latent representation.", "For a number of difficult classification problems, the results are better than those of a model that does not consider the permutation invariance.", "Especially for small training sets, the set-aware model benefits from unsupervised pretraining.", "Autoencoders are a class of machine learning models that have been used for various purposes such as dimensionality reduction, representation learning, or unsupervised pretraining (see, e.g., BID13 ; BID1 ; BID6 ; BID10 ).", "In a nutshell, autoencoders are feed-forward neural networks which encode the given data in a latent, fixed-size representation, and subsequently try to reconstruct the input data in their output variables using a decoder function.", "This basic mechanism of encoding and decoding is applicable to a wide variety of input distributions.", "Recently, researchers have 
proposed a sequence autoencoder BID5 , a model that is able to handle sequences of inputs by using a recurrent encoder and decoder.", "Furthermore, there has been growing interest to tackle sets of elements with similar recurrent architectures BID21 Xu et al., 2016) .", "In this paper, we propose the set autoencoder -a model that learns to embed a set of elements in a permutation-invariant, fixed-size representation using unlabeled training data only.", "The basic architecture of our model corresponds to that of current sequence-to-sequence models BID20 BID3 BID23 : It consists of a recurrent encoder that takes a set of inputs and creates a fixed-length embedding, and a recurrent decoder that uses the fixedlength embedding and outputs another set.", "As encoder, we use an LSTM network with an attention mechanism as in BID21 .", "This ensures that the embedding is permutation-invariant in the input.", "Since we want the loss of the model to be permutation-invariant in the decoder output, we re-order the output and align it to the input elements, using a stable matching algorithm that calculates a permutation matrix.", "This approach yields a loss which is differentiable with respect to the model's parameters.", "The proposed model can be trained in an unsupervised fashion, i.e., without having a labeled data set for a specific task.", "In a series of experiments, we analyze the properties of the embedding.", "For example, we show that the learned embedding is to some extent distance-preserving, i.e., the distance between two sets of elements correlates with the distances of their embeddings.", "Also, the embedding is smooth, i.e., small changes in the input set lead to small changes of the respective embedding.", "Furthermore, we show Figure 1: Example of a sequence-to-sequence translation model.", "The encoder receives the input characters [\"g\",\"o\"] .", "Its internal state is passed to the decoder, which outputs the translation, i.e., the characters of the word 
\"aller\".that", "pretraining in an unsupervised fashion can help to increase the performance on supervised tasks when using the fixed-size embedding as input to a classification or regression model, especially if training data is limited. The", "rest of the paper is organized as follows. Section", "2 introduces the preliminaries and briefly discusses related work. In Section", "3, we present the details of the set autoencoder. Section 4", "presents experimental setup and results. We discuss", "the results and conclude the paper in Section 5.2 RELATED WORK", "We presented the set autoencoder, a model that can be trained to reconstruct sets of elements using a fixed-size latent representation.", "The model achieves permutation invariance in the inputs by using a content-based attention mechanism, and permutation invariance in the outputs, by reordering the outputs using a stable marriage algorithm during training.", "The fixed-size representation possesses a number of interesting attributes, such as distance preservation.", "We show that, despite the output permutation invariance, the model learns to output elements in a particular order.", "A series of experiments show that the set autoencoder learns representations that can be useful for tasks that require information about each set element, especially if the tasks are more difficult, and few labeled training examples are present.", "There are a number of directions for future research.", "The most obvious is to use non-linear functions for f inp and f out to enable the set autoencoder to capture non-linear structures in the input set, and test the performance on point clouds of 3d data sets such as ShapeNet BID4 .", "Also, changes to the structure of the encoder/decoder (e.g., which variables are interpreted as query or embedding) and alternative methods for aligning the decoder outputs to the inputs can be investigated.", "Furthermore, more research is necessary to get a better understanding for which tasks the 
permutation invariance property is helpful, and unsupervised pretraining can be advantageous.", "BID0 to implement all models.", "For the implementation and experiments, we made the following design choices: Model Architecture • Both the encoder and the decoder LSTMs have peephole connections BID8 .", "We use the LSTM implementation of Tensorflow" ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 1, 0.2448979616165161, 0.09090908616781235, 0.25, 0.13793103396892548, 0.1875, 0.2790697515010834, 0.2857142686843872, 0.23529411852359772, 0.307692289352417, 0.25, 0.09302324801683426, 0.13793103396892548, 0.15789473056793213, 0.17142856121063232, 0.4000000059604645, 0.20408162474632263, 0, 0.08695651590824127, 0.1860465109348297, 0.1428571343421936, 0.277777761220932, 0.25, 0.19512194395065308, 0.19354838132858276, 0.23999999463558197, 0.0952380895614624, 0.12121211737394333, 0.1304347813129425, 0.17391303181648254, 0.07999999821186066, 0.25, 0.0952380895614624, 0.07999999821186066, 0.5882353186607361, 0.1621621549129486, 0.2222222238779068, 0.3333333134651184, 0.17391303181648254, 0.260869562625885, 0.20408162474632263, 0.1395348757505417, 0.21052631735801697, 0, 0.05882352590560913, 0.2857142686843872 ]
r1tJKuyRZ
true
[ "We propose the set autoencoder, a model for unsupervised representation learning for sets of elements." ]
[ "Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a considerable amount of experience to be collected by the agent.", "In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt.", "However, not all tasks are easily or automatically reversible.", "In practice, this learning process requires considerable human intervention.", "In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and backward policy, with the backward policy resetting the environment for a subsequent attempt.", "By learning a value function for the backward policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts.", "Our experiments illustrate that proper use of the backward policy can greatly reduce the number of manual resets required to learn a task and can reduce the number of unsafe actions that lead to non-reversible states.", "Deep reinforcement learning (RL) algorithms have the potential to automate acquisition of complex behaviors in a variety of real-world settings.", "Recent results have shown success on games BID16 ), locomotion BID22 ), and a variety of robotic manipulation skills ; ; BID8 ).", "However, the complexity of tasks achieved with deep RL in simulation still exceeds the complexity of the tasks learned in the real world.", "Why have real-world results lagged behind the simulated accomplishments of deep RL algorithms?One", "challenge with real-world application of deep RL is the scaffolding required for learning: a bad policy can easily put the system into an unrecoverable state from which no further learning is possible. For", "example, an autonomous car might collide at high speed, and a robot learning to clean glasses might break them. 
Even", "in cases where failures are not catastrophic, some degree of human intervention is often required to reset the environment between attempts (e.g., BID2 ).Most", "RL algorithms require sampling from the initial state distribution at the start of each episode. On real-world", "tasks, this operation often corresponds to a manual reset of the environment after every episode, an expensive solution for complex environments. Even when tasks", "are designed so that these resets are easy (e.g., and BID8 ), manual resets are necessary when the robot or environment breaks (e.g., BID7 ). The bottleneck", "for learning many real-world tasks is not that the agent collects data too slowly, but rather that data collection stops entirely when the agent is waiting for a manual reset. To avoid manual", "resets caused by the environment breaking, task designers often add negative rewards to dangerous states and intervene to prevent agents from taking dangerous actions. While this works", "well for simple tasks, scaling to more complex environments requires writing large numbers of rules for types of actions the robot should avoid. For example, a robot", "should avoid hitting itself, except when clapping. One interpretation of", "our method is as automatically learning these safety rules. Decreasing the number", "of manual resets required to learn to a task is important for scaling up RL experiments outside simulation, allowing researchers to run longer experiments on more agents for more hours.We propose to address these challenges by forcing our agent to \"leave no trace.\" The goal is to learn", "not only how to do the task at hand, but also how to undo it. The intuition is that", "the sequences of actions that are reversible are safe; it is always possible to undo them to get back to the original state. This property is also", "desirable for continual learning of agents, as it removes the requirements for manual resets. 
In this work, we learn", "two policies that alternate between attempting the task and resetting the environment. By learning how to reset", "the environment at the end of each episode, the agent we learn requires significantly fewer manual resets. Critically, our value-based", "reset policy restricts the agent to only visit states from which it can return, intervening to prevent the forward policy from taking potentially irreversible actions. Using the reset policy to regularize", "the forward policy encodes the assumption that whether our learned reset policy can reset is a good proxy for whether any reset policy can reset. The algorithm we propose can be applied", "to both deterministic and stochastic MDPs. For stochastic MDPs we say that an action", "is reversible if the probability that an oracle reset policy can successfully reset from the next state is greater than some safety threshold. The set of states from which the agent knows", "how to return grows over time, allowing the agent to explore more parts of the environment as soon as it is safe to do so.The main contribution of our work is a framework for continually and jointly learning a reset policy in concert with a forward task policy. We show that this reset policy not only automates", "resetting the environment between episodes, but also helps ensure safety by reducing how frequently the forward policy enters unrecoverable states. Incorporating uncertainty into the value functions", "of both the forward and reset policy further allows us to make this process risk-aware, balancing exploration against safety. 
Our experiments illustrate that this approach reduces", "the number of \"hard\" manual resets required during learning of a variety of simulated robotic skills.", "In this paper, we presented a framework for automating reinforcement learning based on two principles: automated resets between trials, and early aborts to avoid unrecoverable states.", "Our method simultaneously learns a forward and reset policy, with the value functions of the two policies used to balance exploration against recoverability.", "Experiments in this paper demonstrate that our algorithm not only reduces the number of manual resets required to learn a task, but also learns to avoid unsafe states and automatically induces a curriculum.Our algorithm can be applied to a wide range of tasks, only requiring a few manual resets to learn some tasks.", "During the early stages of learning we cannot accurately predict the consequences of our actions.", "We cannot learn to avoid a dangerous state until we have visited that state (or a similar state) and experienced a manual reset.", "Nonetheless, reducing the number of manual resets during learning will enable researchers to run experiments for longer on more agents.", "A second limitation of our work is that we treat all manual resets as equally bad.", "In practice, some manual resets are more costly than others.", "For example, it is more costly for a grasping robot to break a wine glass than to push a block out of its workspace.", "An approach not studied in this paper for handling these cases would be to specify costs associated with each type of manual reset, and incorporate these reset costs into the learning algorithm.While the experiments for this paper were done in simulation, where manual resets are inexpensive, the next step is to apply our algorithm to real robots, where manual resets are costly.", "A challenge introduced when switching to the real world is automatically identifying when the agent has reset.", "In simulation we can access 
the state of the environment directly to compute the distance between the current state and initial state.", "In the real world, we must infer states from noisy sensor observations to deduce if they are the same." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.15686273574829102, 0.23255813121795654, 0, 0.05882352590560913, 0.9056603908538818, 0.31372547149658203, 0.23076923191547394, 0.1818181723356247, 0.08695651590824127, 0.09756097197532654, 0.052631575614213943, 0.25, 0.22727271914482117, 0.07692307233810425, 0.04878048226237297, 0.20408162474632263, 0.1599999964237213, 0.19607841968536377, 0.11999999731779099, 0.12244897335767746, 0, 0.1621621549129486, 0.1230769157409668, 0.0952380895614624, 0.08510638028383255, 0.1395348757505417, 0.2926829159259796, 0.09302324801683426, 0.1249999925494194, 0.2916666567325592, 0.1621621549129486, 0.1538461446762085, 0.3333333432674408, 0.20408162474632263, 0.19999998807907104, 0.1538461446762085, 0.19607841968536377, 0.3829787075519562, 0.1492537260055542, 0.10526315122842789, 0.17777776718139648, 0.13333332538604736, 0.04878048226237297, 0, 0.08695651590824127, 0.138888880610466, 0.04999999701976776, 0.1428571343421936, 0.04651162400841713 ]
S1vuO-bCW
true
[ "We propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and backward policy, with the backward policy resetting the environment for a subsequent attempt." ]
[ "It has been argued that current machine learning models do not have commonsense, and therefore must be hard-coded with prior knowledge (Marcus, 2018).", "Here we show surprising evidence that language models can already learn to capture certain common sense knowledge.", "Our key observation is that a language model can compute the probability of any statement, and this probability can be used to evaluate the truthfulness of that statement. ", "On the Winograd Schema Challenge (Levesque et al., 2011), language models are 11% higher in accuracy than previous state-of-the-art supervised methods.", "Language models can also be fine-tuned for the task of Mining Commonsense Knowledge on ConceptNet to achieve an F1 score of 0.912 and 0.824, outperforming previous best results (Jastrzebskiet al., 2018). ", "Further analysis demonstrates that language models can discover unique features of Winograd Schema contexts that decide the correct answers without explicit supervision.", "It has been argued that current machine learning models do not have common sense BID4 BID15 .", "For example, even best machine learning models perform poorly on commonsense reasoning tasks such as Winograd Schema Challenge BID11 BID14 .", "This argument is often combined with another important criticism of supervised learning that it only works well on problems that have a lot of labeled data.", "The Winograd Schema Challenge (WSC) is an opposite of such problems because its labeled set size is only on the order of a few hundreds examples, with no official training data.", "Based on this argument, it is suggested that machine learning models must be integrated with prior knowledge BID15 BID10 .As", "an example, consider the following question from the WSC dataset:\"The trophy doesn't fit in the suitcase because it is too big.\" What", "is \"it\"? Answer", "0: the trophy. 
Answer", "1: the suitcase. The main point of this dataset is that no machine learning model today can do a good job at answering this type of question. In this paper, we present surprising evidence that language models do capture certain common sense knowledge and this knowledge can be easily extracted. Key to", "our method is the use of language models (LMs), trained on a large amount of unlabeled data, to score multiple choice questions posed by the challenge and similar datasets. In the", "above example, we will first substitute the pronoun (\"it\") with the candidates (\"the trophy\" and \"the suitcase\"), and then use an LM to compute the probability of the two resulting sentences (\"The trophy doesn't fit in the suitcase because the trophy is too big.\" and \"The trophy doesn't fit in the suitcase because the suitcase is too big.\"). The substitution", "that results in a more probable sentence will be the chosen answer. Using this simple", "method, we are able to achieve 63.7% accuracy, 11% above that of the previous state-of-the-art result 1 . To demonstrate a practical", "impact of this work, we show that the trained LMs can be used to enrich human-annotated knowledge bases, which are known to be low in coverage and expensive to expand. For example, \"Suitcase is", "a type of container\", a relevant knowledge to the above Winograd Schema example, is not present in the ConceptNet knowledge base BID13 . The goal of this task is", "to add such new facts to the knowledge base at a cheaper cost than human annotation, in our case using LM scoring. We followed the Commonsense", "Knowledge Mining task formulation from BID0 BID12 BID8 , which posed the task as a classification problem of unseen facts and non-facts. Without an additional classification", "layer, LMs are fine-tuned to give different scores to fact and non-fact tuples from ConceptNet. 
Results obtained by this method outperform", "all previous results, despite the small training data size (100K instances). On the full test set, LMs can identify commonsense", "facts with 0.912 F1 score, which is 0.02 better than supervised trained networks BID8 .", "We introduced a simple method to apply pretrained language models to tasks that require commonsense knowledge.", "Key to our method is the insight that large LMs trained on massive text corpora can capture certain aspect of human knowledge, and therefore can be used to score textual statements.", "On the Winograd Schema Challenge, LMs are able to achieve 11 points of accuracy above the best previously reported result.", "On mining novel commonsense facts from ConceptNet knowledge base, LM scoring also outperforms previous methods on two different test criteria.", "We analyse the trained language models and observe that key features of the context that identify the correct answer are discovered and used in their predictions.Traditional approaches to capturing common sense usually involve expensive human annotation to build knowledge bases.", "This work demonstrates that commonsense knowledge can alternatively be learned and stored in the form of distributed representations.", "At the moment, we consider language modeling for learning from texts as this supplies virtually unlimited data.", "It remains an open question for unsupervised learning to capture commonsense from other modalities such as images or videos." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1818181723356247, 0.2631579041481018, 0.08888888359069824, 0.1860465109348297, 0.2222222238779068, 0.1428571343421936, 0.21621620655059814, 0.19512194395065308, 0.13333332538604736, 0.19999998807907104, 0.1463414579629898, 0, 0, 0, 0.2461538463830948, 0.08163265138864517, 0.0634920597076416, 0.1111111044883728, 0.09302324801683426, 0.11538460850715637, 0.13333332538604736, 0.08888888359069824, 0.13333332538604736, 0.09756097197532654, 0.04999999329447746, 0.0555555522441864, 0.1111111044883728, 0.19999998807907104, 0.14999999105930328, 0.04878048226237297, 0.17543859779834747, 0.10256409645080566, 0, 0.04999999329447746 ]
rkgfWh0qKX
true
[ "We present evidence that LMs do capture common sense with state-of-the-art results on both Winograd Schema Challenge and Commonsense Knowledge Mining." ]
[ "In many real-world learning scenarios, features are only acquirable at a cost constrained under a budget.", "In this paper, we propose a novel approach for cost-sensitive feature acquisition at the prediction-time.", "The suggested method acquires features incrementally based on a context-aware feature-value function.", "We formulate the problem in the reinforcement learning paradigm, and introduce a reward function based on the utility of each feature.", "Specifically, MC dropout sampling is used to measure expected variations of the model uncertainty which is used as a feature-value function.", "Furthermore, we suggest sharing representations between the class predictor and value function estimator networks.", "The suggested approach is completely online and is readily applicable to stream learning setups.", "The solution is evaluated on three different datasets including the well-known MNIST dataset as a benchmark as well as two cost-sensitive datasets: Yahoo Learning to Rank and a dataset in the medical domain for diabetes classification.", "According to the results, the proposed method is able to efficiently acquire features and make accurate predictions.", "In traditional machine learning settings, it is usually assumed that a training dataset is freely available and the objective is to train models that generalize well.", "In this paradigm, the feature set is fixed, and we are dealing with complete feature vectors accompanied by class labels that are provided for training.", "However, in many real-world scenarios, there are certain costs for acquiring features as well as budgets limiting the total expenditure.", "Here, the notation of cost is more general than financial cost and it also refers to other concepts such as computational cost, privacy impacts, energy consumption, patient discomfort in medical tests, and so forth BID22 .", "Take the example of the disease diagnosis based on medical tests.", "Creating a complete feature vector from all the 
relevant information is synonymous with conducting many tests such as MRI scan, blood test, etc. which would not be practical.", "On the other hand, a physician approaches the problem by asking a set of basic easy-to-acquire features, and then incrementally prescribes other tests based on the current known information (i.e., context) until a reliable diagnosis can be made.", "Furthermore, in many real-world use-cases, due to the volume of data or necessity of prompt decisions, learning and prediction should take place in an online and stream-based fashion.", "In the medical diagnosis example, it is consistent with the fact that the latency of diagnosis is vital (e.g., urgency of specific cases and diagnosis), and it is often impossible to defer the decisions.", "Here, by online we mean processing samples one at a time as they are being received. Various approaches were suggested in the literature for cost-sensitive feature acquisition.", "To begin with, traditional feature selection methods were suggested to limit the set of features being used for training BID11 BID17 .", "For instance, L1 regularization for linear classifiers results in models that effectively use a subset of features BID9 .", "Note that these methods focus on finding a fixed subset of features to be used (i.e., feature selection), while a more optimal solution would be making feature acquisition decisions based on the sample at hand and at the prediction-time. More recently, probabilistic methods were suggested that measure the value of each feature based on the current evidence BID5 .", "However, these methods are usually applicable to Bayesian networks or similar probabilistic models and make limiting assumptions such as having binary features and binary classes .", "Furthermore, these probabilistic methods are computationally expensive and intractable in large scale problems BID5 . Motivated", "by the success of discriminative learning, cascade and tree based classifiers were suggested as an intuitive way 
to incorporate feature costs BID20 BID3 . Nevertheless", ", these methods are basically limited to the modeling capability of tree classifiers and to fixed predetermined structures. A recent work", "by BID27 suggested a gating method that employs adaptive linear or tree-based classifiers, alternating between low-cost models for easy-to-handle instances and higher-cost models to handle more complicated cases. While this method", "outperforms much of the previous work on the tree-based and cascade cost-sensitive classifiers, the low-cost model being used is limited to simple linear classifiers or pruned random forests. As an alternative approach, sensitivity analysis of trained predictors is suggested to measure the importance of each feature given a context BID7 BID18 . These approaches", "either require an exhaustive measurement of sensitivities or rely on approximations of sensitivity. These methods are", "easy to use as they work without any significant modification to the predictor models being trained. However, theoretically", ", finding the global sensitivity is a difficult and computationally expensive problem. Therefore, frequently", ", approximate or local sensitivities are being used in these methods, which may lead to suboptimal solutions. Another approach that is suggested in the literature is modeling the feature acquisition problem as a learning problem in the imitation learning BID13 or reinforcement learning BID14 BID29 BID15 domain. These approaches are", "promising in terms of performance and scalability. However, the value functions", "used in these methods are usually not intuitive and require tuning hyper-parameters to balance the cost vs. accuracy trade-off. More specifically, they often", "rely on one or more hyper-parameters to adjust the average cost at which these models operate. On the other hand, in many real-world", "scenarios it is desirable to adjust the trade-off at the prediction-time rather than the training-time. 
For instance, it might be desirable to", "spend more for a certain instance or continue the feature acquisition until a desired level of prediction confidence is achieved. This paper presents a novel method based", "on deep Q-networks for cost-sensitive feature acquisition. The proposed solution employs uncertainty", "analysis in neural network classifiers as a measure for finding the value of each feature given a context. Specifically, we use variations in the certainty", "of predictions as a reward function to measure the value per unit of the cost given the current context. In contrast to the recent feature acquisition methods", "that use reinforcement learning ideas BID14 BID29 BID15 , the suggested reward function does not require any hyper-parameter tuning to balance cost versus performance trade-off. Here, features are acquired incrementally, while maintaining", "a certain budget or a stopping criterion. Moreover, in contrast to many other work in the literature that", "assume an initial complete dataset BID13 BID5 BID8 BID27 , the proposed solution is stream-based and online which learns and optimizes acquisition costs during the training and the prediction. This might be beneficial as, in many real-world use cases, it might", "be prohibitively expensive to collect all features for all training data. 
Furthermore, this paper suggests a method for sharing the representations", "between the class predictor and action-value models that increases the training efficiency.", "In this paper, we proposed an approach for cost-sensitive learning in stream-based settings.", "We demonstrated that certainty estimation in neural network classifiers can be used as a viable measure for the value of features.", "Specifically, variations of the model certainty per unit of the cost is used as measure of feature value.", "In this paradigm, a reinforcement learning solution is suggested which is efficient to train using a shared representation.", "The introduced method is evaluated on three different real-world datasets representing different applications: MNIST digits recognition, Yahoo LTRC web ranking dataset, and diabetes prediction using health records.", "Based on the results, the suggested method is able to learn from data streams, make accurate predictions, and effectively reduce the prediction-time feature acquisition cost." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.25, 0, 0.1428571343421936, 0, 0.08695651590824127, 0.1818181723356247, 0.09999999403953552, 0.0833333283662796, 0.0624999962747097, 0.1875, 0.0714285671710968, 0.0476190447807312, 0, 0.054054051637649536, 0.045454543083906174, 0.1764705777168274, 0.0555555522441864, 0.2222222238779068, 0.13793103396892548, 0.07407406717538834, 0.1090909093618393, 0.0624999962747097, 0.0833333283662796, 0.1249999925494194, 0.06896550953388214, 0.10526315122842789, 0.072727270424366, 0, 0, 0.08695651590824127, 0.08163265138864517, 0.09999999403953552, 0.0624999962747097, 0, 0, 0.23529411852359772, 0.2857142686843872, 0.12903225421905518, 0.12903225421905518, 0, 0, 0.17777776718139648, 0.0714285671710968, 0.09999999403953552, 0.09090908616781235, 0.06666666269302368, 0.0833333283662796, 0, 0.11428570747375488, 0.1875 ]
S1eOHo09KX
true
[ "An online algorithm for cost-aware feature acquisition and prediction" ]
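The certainty-driven reward described in the row above (gain in prediction certainty per unit of acquisition cost) can be sketched as follows. This is only an illustration: the excerpt does not pin down the exact certainty measure, so the use of the top-class probability here is an assumption.

```python
import numpy as np

def certainty(probs):
    # One simple certainty measure: confidence of the most likely class.
    # (An assumption; the paper's exact measure is not specified in the row.)
    return float(np.max(probs))

def acquisition_reward(probs_before, probs_after, cost):
    # Reward for acquiring one feature: change in prediction certainty
    # per unit of the feature's acquisition cost.
    return (certainty(probs_after) - certainty(probs_before)) / cost
```

For example, moving from a 50/50 prediction to a 90/10 prediction at cost 2.0 yields `acquisition_reward(np.array([0.5, 0.5]), np.array([0.9, 0.1]), 2.0)` = 0.2, so no separate hyper-parameter is needed to trade cost against performance.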
[ "This paper revisits the problem of sequence modeling using convolutional \n", "architectures.", " Although both convolutional and recurrent architectures have", "a\nlong history in sequence prediction, the current \"default\" mindset in much of\n", "the deep learning community is that generic sequence modeling is best handled\n", "using recurrent networks.", " The goal of this paper is to question this assumption", ". \nSpecifically, we consider a simple generic temporal convolution network (TCN", "),\nwhich adopts features from modern ConvNet architectures such as dilations and \n", "residual connections.", " We show that on a variety of sequence modeling tasks", ",\nincluding many frequently used as benchmarks for evaluating recurrent networks", ",\nthe TCN outperforms baseline RNN methods (LSTMs, GRUs, and vanilla RNNs) and\n", "sometimes even highly specialized approaches.", " We further show that the\n", "potential \"infinite memory\" advantage that RNNs have over TCNs is largely\n", "absent in practice: TCNs indeed exhibit longer effective history sizes than their \n", "recurrent counterparts.", " As a whole, we argue that it may be time to (re)consider \n", "ConvNets as the default \"go to\" architecture for sequence modeling.", "Since the re-emergence of neural networks to the forefront of machine learning, two types of network architectures have played a pivotal role: the convolutional network, often used for vision and higher-dimensional input data; and the recurrent network, typically used for modeling sequential data.", "These two types of architectures have become so ingrained in modern deep learning that they can be viewed as constituting the \"pillars\" of deep learning approaches.", "This paper looks at the problem of sequence modeling, predicting how a sequence will evolve over time.", "This is a key problem in domains spanning audio, language modeling, music processing, time series forecasting, and many others.", "Although exceptions 
certainly exist in some domains, the current \"default\" thinking in the deep learning community is that these sequential tasks are best handled by some type of recurrent network.", "Our aim is to revisit this default thinking, and specifically ask whether modern convolutional architectures are in fact just as powerful for sequence modeling. Before making the main claims of our paper, some history of convolutional and recurrent models for sequence modeling is useful.", "In the early history of neural networks, convolutional models were specifically proposed as a means of handling sequence data, the idea being that one could slide a 1-D convolutional filter over the data (and stack such layers together) to predict future elements of a sequence from past ones BID20 BID30 .", "Thus, the idea of using convolutional models for sequence modeling goes back to the beginning of convolutional architectures themselves.", "However, these models were subsequently largely abandoned for many sequence modeling tasks in favor of recurrent networks BID13 .", "The reasoning for this appears straightforward: while convolutional architectures have a limited ability to look back in time (i.e., their receptive field is limited by the size and layers of the filters), recurrent networks have no such limitation.", "Because recurrent networks propagate forward a hidden state, they are theoretically capable of infinite memory, the ability to make predictions based upon data that occurred arbitrarily long ago in the sequence.", "This possibility seems to be realized even more so for the now-standard architectures of Long Short-Term Memory networks (LSTMs) BID21 , or recent incarnations such as the Gated Recurrent Unit (GRU) ; these architectures aim to avoid the \"vanishing gradient\" challenge of traditional RNNs and appear to provide a means to actually realize this infinite memory. Given the substantial limitations of convolutional architectures at the time that RNNs/LSTMs were initially 
proposed (when deep convolutional architectures were difficult to train, and strategies such as dilated convolutions had not reached widespread use), it is no surprise that CNNs fell out of favor relative to RNNs.", "While there have been a few notable examples in recent years of CNNs applied to sequence modeling (e.g., the WaveNet BID40 and PixelCNN BID41 architectures), the general \"folk wisdom\" of sequence modeling prevails, that the first avenue of attack for these problems should be some form of recurrent network. The fundamental aim of this paper is to revisit this folk wisdom, and thereby make a counterclaim.", "We argue that with the tools of modern convolutional architectures at our disposal (namely the ability to train very deep networks via residual connections and other similar mechanisms, plus the ability to increase receptive field size via dilations), in fact convolutional architectures typically outperform recurrent architectures on sequence modeling tasks, especially (and perhaps somewhat surprisingly) on domains where a long effective history length is needed to make proper predictions.", "This paper consists of two main contributions.", "First, we describe a generic, baseline temporal convolutional network (TCN) architecture, combining best practices in the design of modern convolutional architectures, including residual layers and dilation.", "We emphasize that we are not claiming to invent the practice of applying convolutional architectures to sequence prediction, and indeed the TCN architecture here mirrors closely architectures such as WaveNet (in fact TCN is notably simpler in some respects).", "We do, however, want to propose a generic modern form of convolutional sequence prediction for subsequent experimentation.", "Second, and more importantly, we extensively evaluate the TCN model versus alternative approaches on a wide variety of sequence modeling tasks, spanning many domains and datasets that have typically been the purview of recurrent 
models, including word- and character-level language modeling, polyphonic music prediction, and other baseline tasks commonly used to evaluate recurrent architectures.", "Although our baseline TCN can be outperformed by specialized (and typically highly-tuned) RNNs in some cases, for the majority of problems the TCN performs best, with minimal tuning on the architecture or the optimization.", "This paper also analyzes empirically the myth of \"infinite memory\" in RNNs, and shows that in practice, TCNs of similar size and complexity may actually demonstrate longer effective history sizes.", "Our chief claim in this paper is thus an empirical one: rather than presuming that RNNs will be the default best method for sequence modeling tasks, it may be time to (re)consider ConvNets as the \"go-to\" approach when facing a new dataset or task in sequence modeling.", "In this work, we revisited the topic of modeling sequence predictions using convolutional architectures.", "We introduced the key components of the TCN and analyzed some vital advantages and disadvantages of using TCN for sequence predictions instead of RNNs.", "Further, we compared our generic TCN model to the recurrent architectures on a set of experiments that span a wide range of domains and datasets.", "Through these experiments, we have shown that TCN with minimal tuning can outperform LSTM/GRU of the same model size (and with standard regularizations) in most of the tasks.", "Further experiments on the copy memory task and LAMBADA task revealed that TCNs actually have a better capability for long-term memory than the comparable recurrent architectures, which are commonly believed to have unlimited memory. It is important to note, however, that we only presented a generic architecture here, with components all coming from standard modern convolutional networks (e.g., normalization, dropout, residual network).", "And indeed, on specific problems, the TCN model can still be beaten by some specialized 
RNNs that adopt carefully designed optimization strategies.", "Nevertheless, we believe the experimental results in Section 4 might be a good signal that instead of considering RNNs as the \"default\" methodology for sequence modeling, convolutional networks, too, can be a promising and powerful toolkit in studying time-series data." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.307692289352417, 0.08695651590824127, 0.1428571343421936, 0.29629629850387573, 0.10526315867900848, 0, 0, 0, 0.38461539149284363, 0.14814814925193787, 0.0714285671710968, 0, 0.2857142686843872, 0.07407406717538834, 0, 0.2142857164144516, 0.38461539149284363, 0.20000000298023224, 0.1538461446762085, 0.1249999925494194, 0, 0.1395348757505417, 0.22641508281230927, 0.13793103396892548, 0.3125, 0.29411762952804565, 0.15094339847564697, 0.17391303181648254, 0.12631578743457794, 0.1944444477558136, 0.21333332359790802, 0, 0.09756097197532654, 0.19607841968536377, 0.24242423474788666, 0.158730149269104, 0.1304347813129425, 0.09302324801683426, 0.24137930572032928, 0.2666666507720947, 0.22857142984867096, 0.10256409645080566, 0.1463414579629898, 0.13333332538604736, 0.15789473056793213, 0.26923075318336487 ]
rk8wKk-R-
true
[ "We argue that convolutional networks should be considered the default starting point for sequence modeling tasks." ]
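A minimal sketch of the dilated causal convolution at the heart of a TCN-style model may help make the row above concrete. This is numpy-only and omits the actual architecture's layer stacking, residual connections, and learned weights; it only shows the causality and dilation mechanics.

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation):
    """Causal 1-D convolution with dilation: the output at time t depends
    only on x[t], x[t - dilation], x[t - 2*dilation], ... (zero-padded
    on the left, so no future values leak in)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

# Stacking L such layers with dilations 1, 2, 4, ..., 2**(L-1) grows the
# receptive field to 1 + (k - 1) * (2**L - 1) time steps, which is how a
# TCN attains a long effective history with few layers.
```

For instance, with kernel `w = [1, 1]` and `dilation = 2`, each output is `x[t] + x[t-2]`, so the input `[1, 2, 3, 4]` maps to `[1, 2, 4, 6]`.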
[ "Deep neural networks work well at approximating complicated functions when provided with data and trained by gradient descent methods.", "At the same time, there is a vast number of existing functions that programmatically solve different tasks in a precise manner, eliminating the need for training.", "In many cases, it is possible to decompose a task into a series of functions, of which for some we may prefer to use a neural network to learn the functionality, while for others the preferred method would be to use existing black-box functions.", "We propose a method for end-to-end training of a base neural network that integrates calls to existing black-box functions.", "We do so by approximating the black-box functionality with a differentiable neural network in a way that drives the base network to comply with the black-box function interface during the end-to-end optimization process.", "At inference time, we replace the differentiable estimator with its external black-box non-differentiable counterpart such that the base network output matches the input arguments of the black-box function.", "Using this ``Estimate and Replace'' paradigm, we train a neural network, end to end, to compute the input to black-box functionality while eliminating the need for intermediate labels.", "We show that by leveraging the existing precise black-box function during inference, the integrated model generalizes better than a fully differentiable model, and learns more efficiently compared to RL-based methods.", "End-to-end supervised learning with deep neural networks (DNNs) has taken the stage in the past few years, achieving state-of-the-art performance in multiple domains including computer vision BID25 , natural language processing BID23 BID10 , and speech recognition BID29 .", "Many of the tasks addressed by DNNs can be naturally decomposed into a series of functions.", "In such cases, it might be advisable to learn neural network approximations for some of these functions and 
use precise existing functions for others.", "Examples of such tasks include Semantic Parsing and Question Answering.", "Since such a decomposition relies partly on precise functions, it may lead to a superior solution compared to an approximated one based solely on a learned neural model.", "Decomposing a solution into trainable networks and existing functions requires matching the output of the networks to the input of the existing functions, and vice-versa.", "The input and output are defined by the existing functions' interface.", "We shall refer to these functions as black-box functions (bbf), focusing only on their interface.", "For example, consider the question: \"Is 7.2 greater than 4.5?\"", "Given that number comparison is a solved problem in symbolic computation, a natural solution would be to decompose the task into a two-step process of", "(i) converting the natural language to an executable program, and", "(ii) executing the program on an arithmetic module.", "While a DNN may be a good fit for the first step, the second is better served by an existing precise function. We propose an alternative approach called Estimate and Replace that finds a differentiable function approximation, which we term the black-box estimator, for estimating the black-box function.", "We use the black-box estimator as a proxy for the original black-box function during training, and thereby allow the learnable parts of the model to be trained using gradient-based optimization.", "We compensate for not using any intermediate labels to direct the learnable parts by using the black-box function as an oracle for training the black-box estimator.", "During inference, we replace the black-box estimator with the original non-differentiable black-box function. End-to-end training of a solution composed of trainable components and black-box functions poses several challenges we address in this work: coping with non-differentiable black-box functions, fitting the network to call these functions with the correct arguments, and doing so without any intermediate labels.",
"Two more challenges are the lack of prior knowledge on the distribution of inputs to the black-box function, and the use of gradient-based methods when the function approximation is near perfect and gradients are extremely small.", "This work is organized as follows: In Section 2, we formulate the problem of decomposing the task to include calls to a black-box function.", "Section 3 describes the network architecture and training procedures.", "In Section 4, we present experiments and comparison to Policy Gradient-based RL, and to fully neural models.", "We further discuss the potential and benefits of the modular nature of our approach in Section 6.", "Interpretability via Composability: Lipton (2016) identifies composability as a strong contributor to model interpretability.", "They define composability as the ability to divide the model into components and interpret them individually to construct an explanation from which a human can predict the model's output.", "The Estimate and Replace approach solves the black-box interface learning problem in a way that is modular by design.", "As such, it provides an immediate interpretability benefit.", "Training a model to comply with a well-defined and well-known interface inherently supports model composability and, thus, directly contributes to its interpretability. For example, suppose you want to let a natural language processing model interface with a WordNet service to receive additional synonym and antonym features for selected input words.", "Because the WordNet interface is interpretable, the intermediate output of the model to the WordNet service (the words for which the model requested additional features) can serve as an explanation of the model's final prediction.", "Knowing which words the model chose to obtain additional features for gives insight into how it made its final decision. Reusability via Composability: An additional clear benefit of model composability in the context of our solution is 
reusability.", "Training a model to comply with a well-defined interface induces well-defined module functionality, which is a necessary condition for module reuse.", "Current solutions for learning using black-box functionality in neural network prediction have critical limitations which manifest themselves in at least one of the following aspects:", "(i) poor generalization,", "(ii) low learning efficiency,", "(iii) under-utilization of available optimal functions, and", "(iv) the need for intermediate labels. In this work, we proposed an architecture, termed EstiNet, and a training and deployment process, termed Estimate and Replace, which aim to overcome these limitations.", "We then showed empirical results that validate our approach. Estimate and Replace is a two-step training and deployment approach by which we first estimate a given black-box functionality to allow end-to-end training via back-propagation, and then replace the estimator with its concrete black-box function at inference time.", "By using a differentiable estimation module, we can train an end-to-end neural network model using gradient-based optimization.", "We use labels that we generate from the black-box function during the optimization process to compensate for the lack of intermediate labels.", "We show that our training process is more stable and has lower sample complexity compared to policy gradient methods.", "By leveraging the concrete black-box function at inference time, our model generalizes better than end-to-end neural network models.", "We validate the advantages of our approach with a series of simple experiments.", "Our approach implies a modular neural network that enjoys added interpretability and reusability benefits. Future Work: We limit the scope of this work to tasks that can be solved with a single black-box function.", "Solving the general case of this problem requires learning of multiple black-box interfaces, along with unbounded successive calls, where the final prediction 
is a computed function over the output of these calls.", "This introduces several difficult challenges.", "For example, computing the final prediction over a set of black-box functions, rather than a direct prediction of a single one, requires an additional network output module.", "The input of this module must be compatible with the output of the previous layer, be it an estimation function at training time, or its black-box function counterpart at inference time, which belong to different distributions.", "We reserve this area of research for future work. As difficult as it is, we believe that artificial intelligence does not lie in mere knowledge, nor in learning from endless data samples.", "Rather, much of it is in the ability to extract the right piece of information from the right knowledge source for the right purpose.", "Thus, training a neural network to intelligibly interact with black-box functions is a leap forward toward stronger AI." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1395348757505417, 0.1666666567325592, 0.13793103396892548, 0.1904761791229248, 0.23999999463558197, 0.1666666567325592, 0.16326530277729034, 0.15094339847564697, 0.06779660284519196, 0.3589743673801422, 0.1304347813129425, 0, 0.0833333283662796, 0.1428571343421936, 0.17142856121063232, 0.15789473056793213, 0.0555555522441864, 0.1702127605676651, 0.1764705777168274, 0.1249999925494194, 0.15094339847564697, 0.2800000011920929, 0.4000000059604645, 0.24242423474788666, 0.07692307233810425, 0.08695651590824127, 0.12121211737394333, 0.05128204822540283, 0.05128204822540283, 0.052631575614213943, 0.1599999964237213, 0.1860465109348297, 0.0624999962747097, 0.1269841194152832, 0.23529411852359772, 0.07017543166875839, 0.19512194395065308, 0.0833333283662796, 0, 0, 0, 0.23076923191547394, 0.21875, 0.14999999105930328, 0.23255813121795654, 0.1395348757505417, 0.0476190410554409, 0.1111111044883728, 0.2142857164144516, 0.039215680211782455, 0, 0.08510638028383255, 0.2222222238779068, 0.0363636314868927, 0.0952380895614624, 0.19512194395065308 ]
r1e13s05YX
true
[ "Training DNNs to interface w\\ black box functions w\\o intermediate labels by using an estimator sub-network that can be replaced with the black box after training" ]
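The Estimate and Replace paradigm summarized above can be illustrated with a deliberately tiny sketch, under strong simplifying assumptions: the black box is an exact numeric comparison (echoing the "Is 7.2 greater than 4.5?" example in the row), the estimator is a one-parameter sigmoid rather than a network, and the estimator is fit only on labels the black box itself generates, so no intermediate labels are needed.

```python
import numpy as np

def black_box(a, b):
    # External, non-differentiable function (here: exact comparison).
    return 1.0 if a > b else 0.0

def estimator(a, b, k):
    # Differentiable proxy: a smooth, sigmoid approximation of the comparison.
    return 1.0 / (1.0 + np.exp(-k * (a - b)))

# Training: fit the estimator using the black box as an oracle for labels.
rng = np.random.default_rng(0)
k = 1.0
for _ in range(1000):
    a, b = rng.uniform(-1.0, 1.0, size=2)
    y = black_box(a, b)           # oracle label, no human annotation
    p = estimator(a, b, k)
    # Gradient of the squared error (p - y)**2 with respect to k.
    k -= 2.0 * (p - y) * p * (1.0 - p) * (a - b)

# Inference: replace the estimator with the exact black box.
print(black_box(7.2, 4.5))  # prints 1.0
```

In this toy, every gradient step sharpens the sigmoid (k grows), mirroring how the differentiable estimator only needs to be good enough to train the surrounding model, since the precise black box takes over at inference time.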
[ "This paper proposes a new actor-critic-style algorithm called Dual Actor-Critic or Dual-AC. ", "It is derived in a principled way from the Lagrangian dual form of the Bellman optimality equation, which can be viewed as a two-player game between the actor and a critic-like function, which is named the dual critic. ", "Compared to its actor-critic relatives, Dual-AC has the desired property that the actor and dual critic are updated cooperatively to optimize the same objective function, providing a more transparent way for learning the critic that is directly related to the objective function of the actor.", "We then provide a concrete algorithm that can effectively solve the minimax optimization problem, using techniques of multi-step bootstrapping, path regularization, and a stochastic dual ascent algorithm.", "We demonstrate that the proposed algorithm achieves state-of-the-art performance across several benchmarks.", "Reinforcement learning (RL) algorithms aim to learn a policy that maximizes the long-term return by sequentially interacting with an unknown environment.", "Value-function-based algorithms first approximate the optimal value function, which can then be used to derive a good policy.", "These methods BID23 BID28 often take advantage of the Bellman equation and use bootstrapping to make learning more sample efficient than Monte Carlo estimation BID25 .", "However, the relation between the quality of the learned value function and the quality of the derived policy is fairly weak BID6 .", "Policy-search-based algorithms such as REINFORCE BID29 and others (Kakade, 2002; BID18 , on the other hand, assume a fixed space of parameterized policies and search for the optimal policy parameter based on unbiased Monte Carlo estimates.", "The parameters are often updated incrementally along stochastic directions that on average are guaranteed to increase the policy quality.", "Unfortunately, they often have a greater variance that results in a higher sample 
complexity. Actor-critic methods combine the benefits of these two classes, and have proved successful in a number of challenging problems such as robotics (Deisenroth et al., 2013) , meta-learning BID3 , and games (Mnih et al., 2016 ).", "An actor-critic algorithm has two components: the actor (policy) and the critic (value function).", "As in policy-search methods, the actor is updated toward the direction of policy improvement.", "However, the update directions are computed with the help of the critic, which can be more efficiently learned as in value-function-based methods BID24 Konda & Tsitsiklis, 2003; BID13 BID7 BID19 .", "Although the use of a critic may introduce bias in learning the actor, it reduces variance, and thus the sample complexity as well, compared to pure policy-search algorithms. While the use of a critic is important for the efficiency of actor-critic algorithms, it is not entirely clear how the critic should be optimized to facilitate improvement of the actor.", "For some parametric family of policies, it is known that a certain compatibility condition ensures the actor parameter update is an unbiased estimate of the true policy gradient BID24 .", "In practice, temporal-difference methods are perhaps the most popular choice to learn the critic, especially when nonlinear function approximation is used (e.g., BID19 ). In", "this paper, we propose a new actor-critic-style algorithm where the actor and the critic-like function, which we name the dual critic, are trained cooperatively to optimize the same objective function. The", "algorithm, called Dual Actor-Critic , is derived in a principled way by solving a dual form of the Bellman equation BID6 . The", "algorithm can be viewed as a two-player game between the actor and the dual critic, and in principle can be solved by standard optimization algorithms like stochastic gradient descent (Section 2). 
We", "emphasize that the dual critic is not fitting the value function of the current policy, but that of the optimal policy. We", "then show that, when function approximation is used, direct application of standard optimization techniques can result in instability in training, because of the lack of convex-concavity in the objective function (Section 3). Inspired", "by the augmented Lagrangian method (Luenberger & Ye, 2015; Boyd et al., 2010) , we propose path regularization for enhanced numerical stability. We also", "generalize the two-player game formulation to the multi-step case to yield a better bias/variance tradeoff. The full", "algorithm is derived and described in Section 4, and is compared to existing algorithms in Section 5. Finally,", "our algorithm is evaluated on several locomotion tasks in the MuJoCo benchmark BID27 , and compares favorably to state-of-the-art algorithms across the board. Notation. We denote", "a discounted MDP by M = (S, A, P, R, γ), where S is the state space, A the action space, P (·|s, a) the transition probability kernel defining the distribution over the next state upon taking action a in state s, R(s, a) the corresponding immediate reward, and γ ∈ (0, 1) the discount factor. 
If there", "is no ambiguity, we will use ∑_a f(a) and ∫ f(a) da interchangeably.", "In this paper, we revisited the linear program formulation of the Bellman optimality equation, whose Lagrangian dual form yields a game-theoretic view of the roles of the actor and the dual critic.", "Although such a framework for the actor and dual critic allows them to be optimized for the same objective function, parametrizing the actor and dual critic unfortunately induces instability in optimization.", "We analyze the sources of instability, which is corroborated by numerical experiments.", "We then propose Dual Actor-Critic , which exploits a stochastic dual ascent algorithm for the path-regularized, multi-step bootstrapping two-player game DISPLAYFORM0 to bypass these issues. (Figure 2: The results of Dual-AC against TRPO and PPO baselines.)", "Each plot shows average reward during training across 5 random seeded runs, with a 50% confidence interval.", "The x-axis is the number of training iterations.", "Dual-AC achieves comparable performance with TRPO and PPO on some tasks, but outperforms them on more challenging tasks.", "Proof: We rewrite the linear program (3) as DISPLAYFORM1 Recall that T is monotonic, i.e., if DISPLAYFORM2 Theorem 1 (Optimal policy from occupancy): ∑_{(s,a)∈S×A} ρ*(s, a) = 1, and π*(a|s) = ρ*(s, a) / ∑_{a∈A} ρ*(s, a).", "Proof: For the optimal occupancy measure, it must satisfy DISPLAYFORM4 where P denotes the transition distribution and I denotes a |S| × |S||A| matrix where I_ij = 1 if and only if j ∈ [(i − 1) |A| + 1, . . . 
, i |A|].", "Multiplying both sides by 1, since µ and P are probability distributions, we have ⟨1, ρ*⟩ = 1. Without loss of generality, we assume there is only one best action in each state.", "Therefore, by the KKT complementary slackness conditions of (3), i.e., ρ(s, a)(R(s, a) + γE_{s'|s,a}[V(s')] − V(s)) = 0, which implies ρ*(s, a) ≠ 0 if and only if a = a*; therefore, π* follows by normalization. Theorem 2: The optimal policy π* and its corresponding value function V* are the solution to the following saddle-point problem DISPLAYFORM5 Proof: Due to the strong duality of the optimization (3), we have DISPLAYFORM6 Then, plugging in the property of the optimum from Theorem 1, we achieve the final optimization (6)." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1904761791229248, 0.5, 0.1904761791229248, 0.2222222238779068, 0.4390243887901306, 0.07999999821186066, 0.12765957415103912, 0.14814814925193787, 0.17777776718139648, 0.09677419066429138, 0.08510638028383255, 0.11267605423927307, 0.0952380895614624, 0.1904761791229248, 0.14035087823867798, 0.13698630034923553, 0.145454540848732, 0.07407406717538834, 0.24561403691768646, 0.6399999856948853, 0.21052631735801697, 0.21276594698429108, 0.1428571343421936, 0.15094339847564697, 0.13636362552642822, 0.1860465109348297, 0.29629629850387573, 0.10666666179895401, 0.09756097197532654, 0.290909081697464, 0.15094339847564697, 0.24390242993831635, 0.35087719559669495, 0.04444443807005882, 0.21621620655059814, 0.1428571343421936, 0.1230769157409668, 0.05882352590560913, 0.09836065024137497, 0.13861384987831116 ]
BkUp6GZRW
true
[ "We propose Dual Actor-Critic algorithm, which is derived in a principled way from the Lagrangian dual form of the Bellman optimality equation. The algorithm achieves the state-of-the-art performances across several benchmarks." ]
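The Lagrangian dual that this row refers to can be written out explicitly. The following is the standard linear-program form of the Bellman optimality equation and its saddle-point Lagrangian; the paper's exact normalization and its path-regularization terms may differ, so treat this as a sketch of the two-player game between the actor/value function and the dual critic.

```latex
% Primal LP over value functions:
%   min_V  (1-\gamma)\,\mathbb{E}_{s \sim \mu}[V(s)]
%   s.t.   V(s) \ge R(s,a) + \gamma\,\mathbb{E}_{s' \mid s,a}[V(s')] \quad \forall (s,a).
% Introducing multipliers \rho(s,a) \ge 0 (the occupancy measure, i.e. the
% dual critic) yields the saddle-point problem:
\min_{V}\ \max_{\rho \ge 0}\;
  (1-\gamma)\,\mathbb{E}_{s \sim \mu}\!\left[V(s)\right]
  + \sum_{s,a} \rho(s,a)\Bigl(R(s,a)
      + \gamma\,\mathbb{E}_{s' \mid s,a}\!\bigl[V(s')\bigr] - V(s)\Bigr),
% with the optimal policy recovered from the dual variables by normalization:
\pi^{*}(a \mid s) \;=\; \frac{\rho^{*}(s,a)}{\sum_{a \in \mathcal{A}} \rho^{*}(s,a)}.
```

This makes the cooperative structure in the row concrete: both players optimize the one Lagrangian, so the dual critic is tied to the actor's objective rather than to a separately fitted value estimate.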
[ "Training a model to perform a task typically requires a large amount of data from the domains in which the task will be applied.", "However, it is often the case that data are abundant in some domains but scarce in others.", "Domain adaptation deals with the challenge of adapting a model trained on a data-rich source domain to perform well in a data-poor target domain.", "In general, this requires learning plausible mappings between domains.", "CycleGAN is a powerful framework that efficiently learns to map inputs from one domain to another using adversarial training and a cycle-consistency constraint.", "However, the conventional approach of enforcing cycle-consistency via reconstruction may be overly restrictive in cases where one or more domains have limited training data.", "In this paper, we propose an augmented cyclic adversarial learning model that enforces the cycle-consistency constraint via an external task specific model, which encourages the preservation of task-relevant content as opposed to exact reconstruction.", "This task specific model both relaxes the cycle-consistency constraint and complements the role of the discriminator during training, serving as an augmented information source for learning the mapping.", "We explore adaptation in speech and visual domains in a low-resource supervised setting.", "In speech domains, we adopt a speech recognition model from each domain as the task specific model.", "Our approach improves the absolute performance of speech recognition by 2% for female speakers in the TIMIT dataset, where the majority of training samples are from male voices.", "In low-resource visual domain adaptation, the results show that our approach improves absolute performance by 14% and 4% when adapting SVHN to MNIST and vice versa, respectively, which outperforms unsupervised domain adaptation methods that require a high-resource unlabeled target domain. 
\n", "Domain adaptation BID14 BID31 BID1 aims to generalize a model from a source domain to a target domain.", "Typically, the source domain has a large amount of training data, whereas the data are scarce in the target domain.", "This challenge is typically addressed by learning a mapping between domains, which allows data from the source domain to enrich the available data for training in the target domain.", "One way of learning such mappings is through Generative Adversarial Networks (GANs; BID7 ) with a cycle-consistency constraint (CycleGAN; Zhu et al., 2017), which enforces that mapping an example from the source to the target and then back to the source domain results in the same example (and vice versa for a target example).", "Due to this constraint, CycleGAN learns to preserve the 'content' from the source domain while only transferring the 'style' to match the distribution of the target domain.", "This is a powerful constraint, and various works BID32 BID20 BID10 have demonstrated its effectiveness in learning cross domain mappings. Enforcing cycle-consistency is appealing as a technique for preserving semantic information of the data with respect to a task, but implementing it through reconstruction may be too restrictive when data are imbalanced across domains.", "This is because the reconstruction error encourages an exact match of samples from the reverse mapping, which may in turn encourage the forward-mapping to keep the sample close to the original domain.", "Normally, the adversarial objectives would counter this effect; however, when data from the target domain are scarce, it is very difficult to learn a powerful discriminator that can capture meaningful properties of the target distribution.", "Therefore, the resulting learned mappings are likely to be sub-optimal.", "Importantly, for the learned mapping to be meaningful, it is not necessary to have the exact reconstruction.", "As long as the 'semantic' information is preserved and the 'style' 
matches the corresponding distribution, it would be a valid mapping. To address this issue, we propose an augmented cyclic adversarial learning model (ACAL) for domain adaptation.", "In particular, we replace the reconstruction objective with a task specific model.", "The model learns to preserve the 'semantic' information from the data samples in a particular domain by minimizing the loss of the mapped samples for the task specific model.", "On the other hand, the task specific model also serves as an additional source of information for the corresponding domain and hence supplements the discriminator in that domain to facilitate better modeling of the distribution.", "The task specific model can also be viewed as an implicit way of disentangling the information essential to the task from the 'style' information that relates to the data distribution of different domains.", "We show that our approach improves the performance by 40% as compared to the baseline on digit domain adaptation.", "We improve the phoneme error rate by ∼5% on the TIMIT dataset when adapting a speech model trained on one gender to the other.", "In this paper, we propose to use augmented cycle-consistency adversarial learning for domain adaptation and introduce a task specific model to facilitate learning domain-related mappings.", "We enforce cycle-consistency using a task specific loss instead of the conventional reconstruction objective.", "Additionally, we use the task specific model as an additional source of information for the discriminator in the corresponding domain.", "We demonstrate the effectiveness of our proposed approach by evaluating on two domain adaptation tasks, and in both cases we achieve significant performance improvement as compared to the baseline. By extending the definition of the task-specific model to unsupervised learning, such as a reconstruction loss using an autoencoder, or self-supervision, our proposed method would work on all settings of domain adaptation.", 
"Such unsupervised task can be speech modeling using wavenet BID30 , or language modeling using recurrent or transformer networks BID24 ." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1764705777168274, 0.06666666269302368, 0.22857142984867096, 0.08695651590824127, 0.17142856121063232, 0.052631575614213943, 0.21739129722118378, 0.1538461446762085, 0.1538461446762085, 0.27586206793785095, 0.10256409645080566, 0.11999999731779099, 0.2142857164144516, 0.19354838132858276, 0.25641024112701416, 0.13114753365516663, 0.05714285373687744, 0.125, 0.09999999403953552, 0.1304347813129425, 0, 0, 0.2448979616165161, 0.23076923191547394, 0.37837836146354675, 0.1860465109348297, 0.14999999105930328, 0.1875, 0.0555555522441864, 0.37837836146354675, 0.2857142686843872, 0.25, 0.16393442451953888, 0.06451612710952759 ]
HJxjSR5so7
true
[ "A robust domain adaptation by employing a task specific loss in cyclic adversarial learning" ]
[ "The success of popular algorithms for deep reinforcement learning, such as policy-gradients and Q-learning, relies heavily on the availability of an informative reward signal at each timestep of the sequential decision-making process.", "When rewards are only sparsely available during an episode, or a rewarding feedback is provided only after episode termination, these algorithms perform sub-optimally due to the difficultly in credit assignment.", "Alternatively, trajectory-based policy optimization methods, such as cross-entropy method and evolution strategies, do not require per-timestep rewards, but have been found to suffer from high sample complexity by completing forgoing the temporal nature of the problem.", "Improving the efficiency of RL algorithms in real-world problems with sparse or episodic rewards is therefore a pressing need.", "In this work, we introduce a self-imitation learning algorithm that exploits and explores well in the sparse and episodic reward settings.", "We view each policy as a state-action visitation distribution and formulate policy optimization as a divergence minimization problem.", "We show that with Jensen-Shannon divergence, this divergence minimization problem can be reduced into a policy-gradient algorithm with shaped rewards learned from experience replays.", "Experimental results indicate that our algorithm works comparable to existing algorithms in environments with dense rewards, and significantly better in environments with sparse and episodic rewards.", "We then discuss limitations of self-imitation learning, and propose to solve them by using Stein variational policy gradient descent with the Jensen-Shannon kernel to learn multiple diverse policies.", "We demonstrate its effectiveness on a challenging variant of continuous-control MuJoCo locomotion tasks.", "Deep reinforcement learning (RL) has demonstrated significant applicability and superior performance in many problems outside the reach of traditional 
algorithms, such as computer and board games BID28 , continuous control BID25 , and robotics.", "Using deep neural networks as function approximators, many classical RL algorithms have been shown to be very effective in solving sequential decision problems.", "For example, a policy that selects actions under a certain state observation can be parameterized by a deep neural network that takes the current state observation as input and gives an action or a distribution over actions as output.", "Value functions that take both state observation and action as inputs and predict expected future reward can also be parameterized as neural networks.", "In order to optimize such neural networks, policy gradient methods BID29 BID37 BID38 and Q-learning algorithms BID28 capture the temporal structure of the sequential decision problem and decompose it into a supervised learning problem, guided by the immediate and discounted future reward from rollout data. Unfortunately, when the reward signal becomes sparse or delayed, these RL algorithms may suffer from inferior performance and inefficient sample complexity, mainly due to the scarcity of immediate supervision when training happens in a single-timestep manner.", "This is known as the temporal credit assignment problem BID44 .", "For instance, consider the Atari Montezuma's Revenge game - a reward is received after collecting certain items or arriving at the final destination in the lowest level, while no reward is received as the agent is trying to reach these goals.", "The sparsity of the reward makes the neural network training very inefficient and also poses challenges in exploration.", "It is not hard to see that many real-world problems tend to be of the form where rewards are either only sparsely available during an episode, or the rewards are episodic, meaning that a non-zero reward is only provided at the end of the trajectory or episode. In addition to policy-gradient and Q-learning, alternative algorithms, 
such as those for global- or stochastic-optimization, have recently been studied for policy search.", "These algorithms do not decompose trajectories into individual timesteps, but instead apply zeroth-order finite-difference gradient or gradient-free methods to learn policies based on the cumulative rewards of the entire trajectory.", "Usually, trajectory samples are first generated by running the current policy and then the distribution of policy parameters is updated according to the trajectory-returns.", "The cross-entropy method (CEM, Rubinstein & Kroese (2016)) and evolution strategies BID36 are two notable examples.", "Although their sample efficiency is often not comparable to the policy gradient methods when dense rewards are available from the environment, they are more widely applicable in the sparse or episodic reward settings as they are agnostic to the task horizon, and only the trajectory-based cumulative reward is needed. Our contribution is the introduction of a new algorithm based on policy-gradients, with the objective of achieving better performance than existing RL algorithms in sparse and episodic reward settings.", "Using the equivalence between the policy function and its state-action visitation distribution, we formulate policy optimization as a divergence minimization problem between the current policy's visitation and the distribution induced by a set of experience replay trajectories with high returns.", "We show that with the Jensen-Shannon divergence (D_JS), this divergence minimization problem can be reduced to a policy-gradient algorithm with shaped, dense rewards learned from these experience replays.", "This algorithm can be seen as self-imitation learning, in which the expert trajectories in the experience replays are self-generated by the agent during the course of learning, rather than using some external demonstrations.", "We combine the divergence minimization objective with the standard RL objective, and empirically show that 
the shaped, dense rewards significantly help in sparse and episodic settings by improving credit assignment.", "Following that, we qualitatively analyze the shortcomings of the self-imitation algorithm.", "Our second contribution is the application of Stein variational policy gradient (SVPG) with the Jensen-Shannon kernel to simultaneously learn multiple diverse policies.", "We demonstrate the benefits of this addition to the self-imitation framework by considering difficult exploration tasks with sparse and deceptive rewards. Related Works.", "Divergence minimization has been used in various policy learning algorithms.", "Relative Entropy Policy Search (REPS) BID33 restricts the loss of information between policy updates by constraining the KL-divergence between the state-action distributions of the old and new policies.", "Policy search can also be formulated as an EM problem, leading to several interesting algorithms, such as RWR BID32 and PoWER BID20 .", "Here the M-step minimizes a KL-divergence between trajectory distributions, leading to an update rule which resembles return-weighted imitation learning.", "Please refer to BID7 for a comprehensive exposition.", "MATL BID47 uses adversarial training to bring the state occupancies of a real and a simulated agent close to each other for efficient transfer learning.", "In Guided Policy Search (GPS, BID21 ), a parameterized policy is trained by constraining the divergence between the current policy and a controller learnt via trajectory optimization. Learning from Demonstrations (LfD).", "The objective in LfD, or imitation learning, is to train a control policy to produce a trajectory distribution similar to the demonstrator's.", "Approaches for self-driving cars BID4 and drone manipulation BID34 have used human-expert data, along with the behavioral cloning algorithm, to learn good control policies.", "Deep Q-learning has been combined with human demonstrations to achieve performance gains in Atari and robotics tasks BID46 BID30 .", 
"Human data has also been used in the maximum entropy IRL framework to learn cost functions under which the demonstrations are optimal .", "BID17 use the same framework to derive an imitation-learning algorithm (GAIL) which is motivated by minimizing the divergence between agent's rollouts and external expert demonstrations.", "Besides humans, other sources of expert supervision include planningbased approaches such as iLQR and MCTS .", "Our algorithm departs from prior work in forgoing external supervision, and instead using the past experiences of the learner itself as demonstration data.Exploration and Diversity in RL.", "Count-based exploration methods utilize state-action visitation counts N (s, a), and award a bonus to rarely visited states BID42 .", "In large statespaces, approximation techniques BID45 , and estimation of pseudo-counts by learning density models BID3 BID13 has been researched.", "Intrinsic motivation has been shown to aid exploration, for instance by using information gain or prediction error BID41 as a bonus.", "Hindsight Experience Replay adds additional goals (and corresponding rewards) to a Q-learning algorithm.", "We also obtain additional rewards, but from a discriminator trained on past agent experiences, to accelerate a policy-gradient algorithm.", "Prior work has looked at training a diverse ensemble of agents with good exploratory skills BID27 BID6 BID12 .", "To enjoy the benefits of diversity, we incorporate a modification of SVPG BID27 in our final algorithm.In very recent work, BID31 propose exploiting past good trajectories to drive exploration.", "Their algorithm buffers (s, a) and the corresponding return for each transition in rolled trajectories, and reuses them for training if the stored return value is higher than the current state-value estimate.", "Our approach presents a different objective for self-imitation based on divergence-minimization.", "With this view, we learn shaped, dense rewards which are then used for 
policy optimization.", "We further improve the algorithm with SVPG.", "Reusing high-reward trajectories has also been explored for program synthesis and semantic parsing tasks BID23 BID0 .", "We approached policy optimization for deep RL from the perspective of JS-divergence minimization between the state-action distributions of a policy and its own past good rollouts.", "This leads to a self-imitation algorithm which improves upon standard policy-gradient methods via the addition of a simple gradient term obtained from implicitly shaped dense rewards.", "We observe substantial performance gains over the baseline for high-dimensional, continuous-control tasks with episodic and noisy rewards.", "Further, we discuss the potential limitations of the self-imitation approach, and propose ensemble training with the SVPG objective and JS-kernel as a solution.", "Through experimentation, we demonstrate the benefits of a self-imitating, diverse ensemble for efficient exploration and avoidance of local minima. An interesting direction for future work is to improve our algorithm using the rich literature on exploration in RL.", "Since ours is a population-based exploration method, techniques for efficient single-agent exploration can be readily combined with it.", "For instance, parameter-space noise or curiosity-driven exploration can be applied to each agent in the SI-interact-JS ensemble.", "Secondly, our algorithm for training diverse agents could be used more generally.", "In Appendix 5.6, we show preliminary results for two cases:", "a) hierarchical RL, where a diverse group of Swimmer bots is trained for downstream use in a complex Swimming+Gathering task;", "b) RL without environment rewards, relying solely on diversity as the optimization objective.", "Further investigation is left for future work." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.07843136787414551, 0.07843136787414551, 0.14035087823867798, 0.1463414579629898, 0.0952380895614624, 0.10810810327529907, 0.2222222238779068, 0.09090908616781235, 0.16326530277729034, 0, 0.07547169178724289, 0, 0.07547169178724289, 0, 0.09195402264595032, 0.0624999962747097, 0.0363636314868927, 0.10256409645080566, 0.07792207598686218, 0.07843136787414551, 0.09302324801683426, 0, 0.09999999403953552, 0.18518517911434174, 0.19999998807907104, 0.11999999731779099, 0.20408162474632263, 0.0624999962747097, 0.09302324801683426, 0.22727271914482117, 0.0624999962747097, 0.13636362552642822, 0.04651162400841713, 0.09756097197532654, 0.06666666269302368, 0.13636362552642822, 0.2800000011920929, 0.04878048226237297, 0.13333332538604736, 0.04878048226237297, 0.04651162400841713, 0.17391303181648254, 0, 0.1702127605676651, 0.04878048226237297, 0.0952380895614624, 0.1395348757505417, 0, 0.09999999403953552, 0.09999999403953552, 0.19607841968536377, 0.08163265138864517, 0.060606054961681366, 0.1621621549129486, 0.20689654350280762, 0.052631575614213943, 0.31111109256744385, 0.21276594698429108, 0.20512819290161133, 0.1904761791229248, 0.14814814925193787, 0.19999998807907104, 0.10256409645080566, 0.05882352590560913, 0.060606054961681366, 0.04878048226237297, 0.11428570747375488, 0.06896551698446274 ]
HyxzRsR9Y7
true
[ "Policy optimization by using past good rollouts from the agent; learning shaped rewards via divergence minimization; SVPG with JS-kernel for population-based exploration." ]
[ "We study the precise mechanisms which allow autoencoders to encode and decode a simple geometric shape, the disk.", "In this carefully controlled setting, we are able to describe the specific form of the optimal solution to the minimisation problem of the training step.", "We show that the autoencoder indeed approximates this solution during training.", "Secondly, we identify a clear failure in the generalisation capacity of the autoencoder, namely its inability to interpolate data.", "Finally, we explore several regularisation schemes to resolve the generalisation problem.", "Given the great attention that has been recently given to the generative capacity of neural networks, we believe that studying in depth simple geometric cases sheds some light on the generation process and can provide a minimal requirement experimental setup for more complex architectures. \n", "Autoencoders are neural networks, often convolutional neural networks, whose purpose is twofold.", "Firstly, to compress some input data by transforming it from the input domain to another space, known as the latent, or code, space.", "The second goal of the autoencoder is to take this latent representation and transform it back to the original space, such that the output is similar, with respect to some criterion, to the input.", "One of the main objectives of this learning process being to reveal important structure in the data via the latent space, and therefore to represent this data in a more meaningful fashion or one that is easier to model.", "Autoencoders have been proven to be extremely useful in many tasks ranging from image compression to synthesis.", "Many variants on the basic idea of autoencoders have been proposed, the common theme being how to impose useful properties on the learned latent space.", "However, very little is known about the actual inner workings and mechanisms of the autoencoder.The goal of this work is to investigate these mechanisms and describe how the autoencoder 
functions.", "Many applications of autoencoders or similar networks consider relatively high-level input objects, ranging from the MNIST handwritten digits to abstract sketches of conceptual objects BID18 ; BID7 ).", "Here, we take a radically different approach.", "We consider, in depth, the encoding/decoding processes of a simple geometric shape, the disk, and investigate how the autoencoder functions in this case.", "There are several important advantages to such an approach.", "Firstly, since the class of objects we consider has an explicit parametrisation, it is possible to describe the \"optimal\" performance of the autoencoder, ie.", "can it compress and uncompress a disk to and from a code space of dimensionality 1 ?", "Secondly, the setting of this study fixes certain architecture characteristics of the network, such as the number of layers, leaving fewer free parameters to tune.", "This means that the conclusions which we obtain are more likely to be robust than in the case of more high-level applications.", "Finally, it is easier to identify the roles of different components of the network, which enables us to carry out an instructive ablation study.Using this approach, we show that the autoencoder approximates the theoretical solution of the training problem when no biases are involved in the network.", "Secondly, we identify certain limitations in the generalisation capacity of autoencoders when the training database is incomplete with respect to the underlying manifold.", "We observe the same limitation using the architecture of BID18 , which is considerably more complex and is proposed to encode natural images.", "Finally, we analyse several regularisation schemes and identify one in particular which greatly aids in overcoming this generalisation problem.", "We have investigated in detail the specific mechanisms which allow autoencoders to encode image information in an optimal manner in the specific case of disks.", "We have shown that, in this 
case, the encoder functions by integrating over the disk, and so the code z represents the area of the disk.", "In the case where the autoencoder is trained with no bias, the decoder learns a single function which is multiplied by a scalar depending on the input.", "We have shown that this function corresponds to the optimal function.", "The bias is then used to induce a thresholding process that ensures the disk is correctly decoded.", "We have also illustrated certain limitations of the autoencoder with respect to generalisation when datapoints are missing in the training set.", "This is especially problematic for higher-level applications, whose data have higher intrinsic dimensionality and therefore are more likely to include such \"holes\".", "Finally, we identify a regularisation approach which is able to overcome this problem particularly well.", "This regularisation is asymmetrical as it consists of regularising the encoder while leaving more freedom to the decoder. An important future goal is to extend the theoretical analyses obtained to increasingly complex visual objects, in order to understand whether the same mechanisms remain in place.", "We have experimented with other simple geometric objects such as squares and ellipses, with similar results for the optimal code size.", "Another question is how the decoder functions with the biases included.", "This requires a careful study of the different non-linearity activations as the radius increases.", "Finally, the ultimate goal of these studies is to determine the capacity of autoencoders to encode and generate images representing more complex objects or scenes.", "As we have seen, the proposed framework can help identify some limitations of complex networks such as the one from BID18 and future works should investigate whether this framework can help develop the right regularisation scheme or architecture.", "Figure 7: Verification of the hypothesis that y(t, r) = h(r)f(t) for decoding in the case where the autoencoder contains no bias (value of ⟨f, 1_Br⟩ plotted against z).", "We have determined the average profile of the output of the autoencoder when no biases are involved. On the left, we have divided several random experimental profiles y by the function h, and plotted the result, which is close to constant (spatially) for a fixed radius of the input disk. On the right, we plot z against the theoretically optimal value of h (C⟨f, 1_Br⟩, where C is some constant accounting for the arbitrary normalisation of f). This experimental sanity check confirms our theoretical derivations." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3478260934352875, 0.12244897335767746, 0.09999999403953552, 0.25531914830207825, 0.20000000298023224, 0.2535211145877838, 0, 0.08163265138864517, 0.2142857164144516, 0.23333333432674408, 0.13333332538604736, 0.19607841968536377, 0.15094339847564697, 0.1428571343421936, 0.0555555522441864, 0.2857142686843872, 0.052631575614213943, 0.11999999731779099, 0.1818181723356247, 0.19999998807907104, 0.20408162474632263, 0.14492753148078918, 0.2800000011920929, 0.19999998807907104, 0.1702127605676651, 0.2800000011920929, 0.19607841968536377, 0.11764705181121826, 0.1538461446762085, 0.13333332538604736, 0.2857142686843872, 0.11764705181121826, 0.13636362552642822, 0.1818181723356247, 0.20408162474632263, 0.10256409645080566, 0.1904761791229248, 0.19607841968536377, 0.09677419066429138, 0.13333332538604736, 0.1428571343421936 ]
r111KtCp-
true
[ "We study the functioning of autoencoders in a simple setting and advise new strategies for their regularisation in order to obtain bettre generalisation with latent interpolation in mind for image sythesis. " ]
[ "We present a simple idea that allows to record a speaker in a given language and synthesize their voice in other languages that they may not even know.", "These techniques open a wide range of potential applications such as cross-language communication, language learning or automatic video dubbing.", "We call this general problem multi-language speaker-conditioned speech synthesis and we present a simple but strong baseline for it.\n\n", "Our model architecture is similar to the encoder-decoder Char2Wav model or Tacotron.", "The main difference is that, instead of conditioning on characters or phonemes that are specific to a given language, we condition on a shared phonetic representation that is universal to all languages.", "This cross-language phonetic representation of text allows to synthesize speech in any language while preserving the vocal characteristics of the original speaker.", "Furthermore, we show that fine-tuning the weights of our model allows us to extend our results to speakers outside of the training dataset." ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[ 1, 0.09302324801683426, 0.22727271914482117, 0.05714285373687744, 0.19607841968536377, 0.27272728085517883, 0.1395348757505417 ]
HkltHkBB37
false
[ "We present a simple idea that allows to record a speaker in a given language and synthesize their voice in other languages that they may not even know." ]
[ "The goal of imitation learning (IL) is to enable a learner to imitate expert behavior given expert demonstrations.", "Recently, generative adversarial imitation learning (GAIL) has shown significant progress on IL for complex continuous tasks.", "However, GAIL and its extensions require a large number of environment interactions during training.", "In real-world environments, the more an IL method requires the learner to interact with the environment for better imitation, the more training time it requires, and the more damage it causes to the environments and the learner itself.", "We believe that IL algorithms could be more applicable to real-world problems if the number of interactions could be reduced. \n", "In this paper, we propose a model-free IL algorithm for continuous control.", "Our algorithm is made up mainly three changes to the existing adversarial imitation learning (AIL) methods –", "(a) adopting off-policy actor-critic (Off-PAC) algorithm to optimize the learner policy,", "(b) estimating the state-action value using off-policy samples without learning reward functions, and", "(c) representing the stochastic policy function so that its outputs are bounded.", "Experimental results show that our algorithm achieves competitive results with GAIL while significantly reducing the environment interactions.", "Recent advances in reinforcement learning (RL) have achieved super-human performance on several domains BID20 BID21 .", "On most of such domains with the success of RL, the design of reward, that explains what agent's behavior is favorable, is obvious for humans.", "Conversely, on domains where it is unclear how to design the reward, agents trained by RL algorithms often obtain poor policies and behave worse than what we expect them to do.", "Imitation learning (IL) comes in such cases.", "The goal of IL is to enable the learner to imitate expert behavior given the expert demonstrations without the reward signal.", "We are interested in IL because 
we desire an algorithm that can be applied to real-world problems for which it is often hard to design the reward.", "In addition, since it is generally hard to model a variety of real-world environments with an algorithm, and the state-action pairs in a vast majority of real-world applications such as robotics control can be naturally represented in continuous spaces, we focus on model-free IL for continuous control. A wide variety of IL methods have been proposed in the last few decades.", "The simplest IL method among those is behavioral cloning (BC) BID23 which learns an expert policy in a supervised fashion without environment interactions during training.", "BC can be the first IL option when enough demonstrations are available.", "However, when only a limited number of demonstrations are available, BC often fails to imitate the expert behavior because of the problem referred to as compounding error BID25 - inaccuracies compound over time and can lead the learner to encounter unseen states in the expert demonstrations.", "Since it is often hard to obtain a large number of demonstrations in real-world environments, BC is often not the best choice for real-world IL scenarios. Another widely used approach, which overcomes the compounding error problem, is Inverse Reinforcement Learning (IRL) BID27 BID22 BID0 BID33 .", "Recently, BID15 have proposed generative adversarial imitation learning (GAIL) which is based on prior IRL works.", "Since GAIL has achieved state-of-the-art performance on a variety of continuous control tasks, the adversarial IL (AIL) framework has become a popular choice for IL BID1 BID11 BID16 .", "It is known that the AIL methods are more sample-efficient than BC in terms of the expert demonstrations.", "However, as pointed out by BID15 , the existing AIL methods have sample complexity in terms of the environment interactions.", "That is, even if enough demonstrations are given by the expert before training the learner, the AIL methods require a 
large number of state-action pairs obtained through the interaction between the learner and the environment 1 .", "The sample complexity keeps existing AIL methods from being employed in real-world applications for two reasons.", "First, the more interactions an AIL method requires, the more training time it requires.", "Second, even if the expert demonstrates safely, the learner may adopt policies that damage the environments and the learner itself during training.", "Hence, the more interactions it performs, the higher the possibility of getting damaged.", "For real-world applications, we desire algorithms that can reduce the number of interactions while imitating as well as the existing AIL methods do. The following three properties of the existing AIL methods may cause the sample complexity in terms of the environment interactions:(a", ") Adopting on-policy RL methods which fundamentally have sample complexity in terms of the environment interactions.(b", ") Alternating three optimization processes - learning reward functions, value estimation with learned reward functions, and RL to update the learner policy using the estimated value. In", "general, as the number of parameterized functions which are related to each other increases, the training progress may be unstable or slower, and thus more interactions may be performed during training.(c)", "Adopting a Gaussian policy as the learner's stochastic policy, which has infinite support on a continuous action space. In", "common IL settings, we observe the action space of the expert policy from the demonstrations, where the expert action can take on values within a bounded (finite) interval. 
As", "BID3 suggests, the policy which can select actions outside the bound may slow down the training progress and make the problem harder to solve, and thus more interactions may be performed during training.In this paper, we propose an IL algorithm for continuous control to improve the sample complexity of the existing AIL methods. Our", "algorithm is made up mainly three changes to the existing AIL methods as follows:(a)", "Adopting off-policy actor-critic (Off-PAC) algorithm BID5 to optimize the learner policy instead of on-policy RL algorithms. Off-policy", "learning is commonly known as the promising approach to improve the complexity.(b) Estimating", "the state-action value using off-policy samples without learning reward functions instead of using on-policy samples with the learned reward functions. Omitting the reward", "learning reduces functions to be optimized. It is expected to make", "training progress stable and faster and thus reduce the number of interactions during training.(c) Representing the stochastic", "policy function of which outputs are bounded instead of adopting Gaussian policy. Bounding action values may make", "the problem easier to solve and make the training faster, and thus reduce the number of interactions during training.Experimental results show that our algorithm enables the learner to imitate the expert behavior as well as GAIL does while significantly reducing the environment interactions. 
Ablation experimental results show", "that (a) adopting the off-policy scheme", "requires about 100 times fewer environment interactions to imitate the expert behavior than on-policy IL algorithms require, (b) omitting the reward learning makes", "the training stable and faster, and (c) bounding action values makes the training", "faster.", "In this paper, we proposed a model-free IL algorithm for continuous control.", "Experimental results showed that our algorithm achieves competitive performance with GAIL while significantly reducing the environment interactions. A DETAILED DESCRIPTION OF EXPERIMENT TAB0 summarizes the description of each task, the performance of an agent with a random policy, and the performance of the experts." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.045454539358615875, 0.13636362552642822, 0.1904761791229248, 0.2222222238779068, 0.1702127605676651, 0.5, 0.08888888359069824, 0.1538461446762085, 0.09756097197532654, 0.09999999403953552, 0.6818181872367859, 0, 0.16326530277729034, 0.06896550953388214, 0, 0.08888888359069824, 0.2222222238779068, 0.2531645596027374, 0.15094339847564697, 0.09999999403953552, 0.060606054961681366, 0.11764705181121826, 0.045454539358615875, 0.2641509473323822, 0.08695651590824127, 0.08510638028383255, 0.10169491171836853, 0.04651162400841713, 0.05128204822540283, 0.08695651590824127, 0.05128204822540283, 0.1846153736114502, 0.13333332538604736, 0.11999999731779099, 0.0714285671710968, 0.17391303181648254, 0.1538461446762085, 0.29729729890823364, 0.0952380895614624, 0.13333332538604736, 0.04878048226237297, 0.13636362552642822, 0, 0.09302324801683426, 0, 0.375, 0.1764705926179886, 0.15686273574829102, 0.052631575614213943, 0.550000011920929, 0.5161290168762207 ]
BkN5UoAqF7
true
[ "In this paper, we proposed a model-free, off-policy IL algorithm for continuous control. Experimental results showed that our algorithm achieves competitive results with GAIL while significantly reducing the environment interactions." ]
[ "The dominant approach to unsupervised \"style transfer\" in text is based on the idea of learning a latent representation, which is independent of the attributes specifying its \"style\".", "In this paper, we show that this condition is not necessary and is not always met in practice, even with domain adversarial training that explicitly aims at learning such disentangled representations.", "We thus propose a new model that controls several factors of variation in textual data, where this condition on disentanglement is replaced with a simpler mechanism based on back-translation.", "Our method allows control over multiple attributes, like gender, sentiment, product type, etc., and a more fine-grained control on the trade-off between content preservation and change of style with a pooling operator in the latent space.", "Our experiments demonstrate that the fully entangled model produces better generations, even when tested on new and more challenging benchmarks comprising reviews with multiple sentences and multiple attributes.", "One of the objectives of unsupervised learning is to learn representations of data that enable fine control over the underlying latent factors of variation, e.g., pose and viewpoint of objects in images, or writer style and sentiment of a product review.", "In conditional generative modeling, these latent factors are given BID38 BID31 BID9 , or automatically inferred via observation of samples from the data distribution BID4 BID15 ). More", "recently, several studies have focused on learning unsupervised mappings between two data domains such as images BID39 , words or sentences from different languages BID6 BID26 . In this", "problem setting, the generative model is conditioned not only on the desired attribute values, but also on an initial input, which it must transform. Generations", "should retain as many of the original input characteristics as possible, provided the attribute constraint is not violated. 
This learning", "task is typically unsupervised because no example of an input and its corresponding output with the specified attribute is available during training. The model only", "sees random examples and their attribute values. The dominant approach to learning such a mapping in text is via an explicit constraint on disentanglement BID17 BID10 BID37 : the learned representation should be invariant to the specified attribute, and retain only attribute-agnostic information about the \"content\". Changing the style", "of an input at test time then amounts to generating an output based on the disentangled latent representation computed from the input and the desired attributes. Disentanglement is", "often achieved through an adversarial term in the training objective that aims at making the attribute value unrecoverable from the latent representation. This paper aims to", "extend previous studies on \"style transfer\" along three axes. (i) First, we seek", "to gain a better understanding of what is necessary to make things work, and in particular, whether Table 1: Our approach can be applied to many different domains beyond sentiment flipping, as illustrated here with example re-writes by our model on public social media content. The first line in", "each box is an input provided to the model with the original attribute, followed by its rewrite when given a different attribute value. disentanglement is", "key, or even actually achieved by an adversarial loss in practice. In Sec. 3.1 we provide", "strong empirical evidence that disentanglement is not necessary to enable control over the factors of variation, and that even a method using an adversarial loss to disentangle BID10 does not actually learn representations that are disentangled. 
(ii) Second, we introduce", "a model which replaces the adversarial term with a back-translation BID35 objective that exposes the model to a pseudo-supervised setting, where the model's outputs act as supervised training data for the ultimate task at hand. The resulting model is similar", "to recently proposed methods for unsupervised machine translation BID24 BID0 BID44 ), but with two major differences: (a) we use a pooling operator", "which is used to control the trade-off between style transfer and content preservation; and (b) we extend this model to support", "multiple attribute control. (iii) Finally, in Sec. 4.1 we point", "out that current style transfer benchmarks based on collections of user reviews have severe limitations, as they consider only a single controlled attribute (sentiment), and very short sentences in isolation with noisy labels. To address this issue, we propose a", "new set of benchmarks based on existing review datasets, which comprise full reviews, where multiple attributes are extracted from each review. The contributions of this paper are thus: (1) a deeper understanding of the necessary components of style transfer through extensive experiments, resulting in (2) a generic and simple learning framework based on mixing a denoising auto-encoding loss with an online back-translation technique and a novel neural architecture combining a pooling operator and support for multiple attributes, and (3) a new, more challenging and realistic version of existing benchmarks which uses full reviews and multiple attributes per review, as well as a comparison of our approach w.r.t. baselines using both new metrics and human evaluations. We will open-source our code and release", "the new benchmark datasets used in this work, as well as our pre-trained classifiers and language models for reproducibility. 
This will also enable fair empirical comparisons", "on automatic evaluation metrics in future work on this problem.", "We present a model capable of re-writing sentences conditioned on given attributes, which is not based on a disentanglement criterion as often used in the literature.", "We demonstrate our model's ability to generalize to a realistic setting of restaurant/product reviews consisting of several sentences per review.", "We also present model components that allow fine-grained control over the trade-off between attribute control and preserving the content of the input.", "Experiments with automatic and human-based metrics show that our model significantly outperforms the current state of the art not only on existing datasets, but also on the large-scale datasets we created.", "The source code and benchmarks will be made available to the research community after the reviewing process. A SUPPLEMENTARY MATERIAL" ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17142856121063232, 0, 0.054054051637649536, 0.09302325546741486, 0.1666666567325592, 0, 0, 0.052631575614213943, 0.12121211737394333, 0, 0, 0.07407407462596893, 0.11764705181121826, 0, 0.08695651590824127, 0.035087715834379196, 0, 0, 0, 0.0476190447807312, 0.060606054961681366, 0, 0.0952380895614624, 0.04081632196903229, 0.07843136787414551, 0.05714285373687744, 0.10526315122842789, 0.11764705181121826, 0, 0, 0.052631575614213943, 0.06896550953388214 ]
H1g2NhC5KQ
true
[ "A system for rewriting text conditioned on multiple controllable attributes" ]
[ "Vanilla RNNs with ReLU activation have a simple structure that is amenable to systematic dynamical systems analysis and interpretation, but they suffer from the exploding vs. vanishing gradients problem.", "Recent attempts to retain this simplicity while alleviating the gradient problem are based on proper initialization schemes or orthogonality/unitary constraints on the RNN’s recurrency matrix, which, however, come with limitations to their expressive power with regard to dynamical systems phenomena like chaos or multi-stability.", "Here, we instead suggest a regularization scheme that pushes part of the RNN’s latent subspace toward a line attractor configuration that enables long short-term memory and arbitrarily slow time scales.", "We show that our approach excels on a number of benchmarks like the sequential MNIST or multiplication problems, and enables reconstruction of dynamical systems which harbor widely different time scales.", "Theories of complex systems in biology and physics are often formulated in terms of sets of stochastic differential or difference equations, i.e. 
as stochastic dynamical systems (DS).", "A long-standing desire is to retrieve these generating dynamical equations directly from observed time series data (Kantz & Schreiber, 2004) .", "A variety of machine and deep learning methodologies toward this goal have been introduced in recent years (Chen et al., 2017; Champion et al., 2019; Jordan et al., 2019; Duncker et al., 2019; Ayed et al., 2019; Durstewitz, 2017; Koppe et al., 2019) , many of them based on recurrent neural networks (RNN) which can universally approximate any DS (i.e., its flow field) under some mild conditions (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998) .", "However, vanilla RNN as often used in this context are well known for their problems in capturing long-term dependencies and slow time scales in the data (Hochreiter & Schmidhuber, 1997; Bengio et al., 1994) .", "In DS terms, this is generally due to the fact that flexible information maintenance over long periods requires precise fine-tuning of model parameters toward 'line attractor' configurations ( Fig. 1) , a concept first propagated in computational neuroscience for addressing animal performance in parametric working memory tasks (Seung, 1996; Seung et al., 2000; Durstewitz, 2003) .", "Line attractors introduce directions of zero-flow into the model's state space that enable long-term maintenance of arbitrary values (Fig. 1) .", "Specially designed RNN architectures equipped with gating mechanisms and (linear) memory cells have been suggested for solving this issue (Hochreiter & Schmidhuber, 1997; Cho et al., 2014) .", "However, from a DS perspective, simpler models that can more easily be analyzed and interpreted in DS terms, and for which more efficient inference algorithms exist that emphasize approximation of the true underlying DS would be preferable.", "Recent solutions to the vanishing vs. 
exploding gradient problem attempt to retain the simplicity of vanilla RNN by initializing or constraining the recurrent weight matrix to be the identity (Le et al., 2015) , orthogonal (Henaff et al., 2016; Helfrich et al., 2018) or unitary (Arjovsky et al., 2016) .", "In this way, in a system including piecewise linear (PL) components like rectified-linear units (ReLU), line attractor dimensions are established from the start by construction or ensured throughout training by a specifically parameterized matrix decomposition.", "However, for many DS problems, line attractors instantiated by mere initialization procedures may be unstable and quickly dissolve during training.", "On the other hand, orthogonal or unitary constraints are too restrictive for reconstructing DS, and more generally from a computational perspective as well (Kerg et al., 2019) : For instance, neither", "2) with flow field (grey) and nullclines (set of points at which the flow of one of the variables vanishes, in blue and red).", "Insets: Time graphs of z 1 for T = 30 000.", "A) Perfect line attractor.", "The flow converges to the line attractor from all directions and is exactly zero on the line, thus retaining states indefinitely in the absence of perturbations, as illustrated for 3 example trajectories (green) started from different initial conditions.", "B) Slightly detuned line attractor (cf.", "Durstewitz (2003) ).", "The system's state still converges toward the 'line attractor ghost ' (Strogatz, 2015) , but then very slowly crawls up within the 'attractor tunnel' (green trajectory) until it hits the stable fixed point at the intersection of nullclines.", "Within the tunnel, flow velocity is smoothly regulated by the gap between nullclines, thus enabling arbitrary time constants.", "Note that along other, not illustrated dimensions of the system's state space the flow may still evolve freely in all directions.", "C) Simple 2-unit solution to the addition problem exploiting the line 
attractor properties of ReLUs in the positive quadrant.", "The output unit serves as a perfect integrator, while the input unit will only convey those input values to the output unit that are accompanied by a '1' in the second input stream (see 7.1.1 for complete parameters).", "chaotic behavior (that requires diverging directions) nor settings with multiple isolated fixed point or limit cycle attractors are possible.", "Here we therefore suggest a different solution to the problem, by pushing (but not strictly enforcing) ReLU-based, piecewise-linear RNN (PLRNN) toward line attractor configurations along some (but not all) directions in state space.", "We achieve this by adding special regularization terms for a subset of RNN units to the loss function that promote such a configuration.", "We demonstrate that our approach outperforms, or is on par with, LSTM and other, initialization-based, methods on a number of 'classical' machine learning benchmarks (Hochreiter & Schmidhuber, 1997) .", "More importantly, we demonstrate that while with previous methods it was difficult to capture slow behavior in a DS that exhibits widely different time scales, our new regularization-supported inference efficiently captures all relevant time scales.", "In this work we have introduced a simple solution to the long short-term memory problem in RNN that on the one hand retains the simplicity and tractability of vanilla RNN, yet on the other hand does not curtail the universal computational capabilities of RNN (Koiran et al., 1994; Siegelmann & Sontag, 1995) and their ability to approximate arbitrary DS (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998; Trischler & D'Eleuterio, 2016) .", "We achieved this by adding regularization terms to the loss function that encourage the system to form a 'memory subspace', that is, line attractor dimensions (Seung, 1996; Durstewitz, 2003) which would store arbitrary values for, if unperturbed, arbitrarily long periods.", "At the same time we did 
not rigorously enforce this constraint, which has important implications for capturing slow time scales in the data: It allows the RNN to slightly depart from a perfect line attractor, which has been shown to constitute a general dynamical mechanism for regulating the speed of flow and thus the learning of arbitrary time constants that are not naturally included qua RNN design (Durstewitz, 2003; 2004) .", "This is because as we come infinitesimally close to a line attractor and thus a bifurcation in the system's parameter space, the flow along this direction becomes arbitrarily slow until it vanishes completely in the line attractor configuration (Fig. 1) .", "Moreover, part of the RNN's latent space was not regularized at all, leaving the system enough degrees of freedom for realizing arbitrary computations or dynamics.", "We showed that the rPLRNN is on par with or outperforms initialization-based approaches and LSTMs on a number of classical benchmarks, and, more importantly, that the regularization strongly facilitates the identification of challenging DS with widely different time scales in PLRNN-based algorithms for DS reconstruction.", "Future work will explore a wider range of DS models and empirical data with diverse temporal and dynamical phenomena.", "Another future direction may be to replace the EM algorithm by black-box variational inference, using the re-parameterization trick for gradient descent (Kingma & Welling, 2013; Rezende et al., 2014; Chung et al., 2015) .", "While this would come with better scaling in M , the number of latent states (the scaling in T is linear for EM as well, see Paninski et al. (2010) ), the EM used here efficiently exploits the model's piecewise linear structure in finding the posterior over latent states and computing the parameters (see Suppl. 
7.1.3).", "It may thus be more accurate and suitable for smaller-scale problems where high precision is required, as often encountered in neuroscience or physics.", "7 SUPPLEMENTARY MATERIAL 7.1 SUPPLEMENTARY TEXT 7.1.1 Simple exact PLRNN solution for addition problem", "The exact PLRNN parameter settings (cf. eq. 1) for solving the addition problem with 2 units (cf. Fig. 1C ) are as follows:" ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.25, 0.0923076868057251, 0.3636363446712494, 0.4285714328289032, 0.1599999964237213, 0.08510638028383255, 0.04597700759768486, 0.19999998807907104, 0.14814814925193787, 0.1304347813129425, 0.1818181723356247, 0.17241378128528595, 0.0937499925494194, 0.03333332762122154, 0.08510638028383255, 0.10169491171836853, 0.1304347813129425, 0.10526315122842789, 0, 0.09677419066429138, 0, 0, 0.032258059829473495, 0.09090908616781235, 0.08510638028383255, 0.045454539358615875, 0.10344827175140381, 0.04347825422883034, 0.06896550953388214, 0.2448979616165161, 0.2181818187236786, 0.23333333432674408, 0.23255813121795654, 0.15625, 0.24096384644508362, 0.06557376682758331, 0.11999999731779099, 0.3333333432674408, 0.2222222238779068, 0.03448275476694107, 0.10810810327529907, 0.07999999821186066, 0.05128204822540283, 0.08163265138864517 ]
rylZKTNYPr
true
[ "We develop a new optimization approach for vanilla ReLU-based RNN that enables long short-term memory and identification of arbitrary nonlinear dynamical systems with widely differing time scales." ]
[ "In the field of Generative Adversarial Networks (GANs), how to design a stable training strategy remains an open problem.", "Wasserstein GANs have largely promoted the stability over the original GANs by introducing Wasserstein distance, but still remain unstable and are prone to a variety of failure modes.", "In this paper, we present a general framework named Wasserstein-Bounded GAN (WBGAN), which improves a large family of WGAN-based approaches by simply adding an upper-bound constraint to the Wasserstein term.", "Furthermore, we show that WBGAN can reasonably measure the difference of distributions which almost have no intersection.", "Experiments demonstrate that WBGAN can stabilize as well as accelerate convergence in the training processes of a series of WGAN-based variants.", "Over the past few years, Generative Adversarial Networks (GANs) have shown impressive results in many generative tasks.", "They are inspired by the game theory, that two models compete with each other: a generator which seeks to produce samples from the same distribution as the data, and a discriminator whose job is to distinguish between real and generated data.", "Both models are forced stronger simultaneously during the training process.", "GANs are capable of producing plausible synthetic data across a wide diversity of data modalities, including natural images (Karras et al., 2017; Brock et al., 2018; Lucic et al., 2019) , natural language (Press et al., 2017; Lin et al., 2017; Rajeswar et al., 2017) , music Mogren, 2016; Dong et al., 2017; Dong & Yang, 2018) , etc.", "Despite their success, it is often difficult to train a GAN model in a fast and stable way, and researchers are facing issues like vanishing gradients, training instability, mode collapse, etc.", "This has led to a proliferation of works that focus on improving the quality of GANs by stabilizing the training procedure (Radford et al., 2015; Salimans et al., 2016; Zhao et al., 2016; Nowozin et al., 2016; Qi, 
2017; Deshpande et al., 2018) .", "In particular, a variant of GANs based on the Wasserstein distance was introduced, which relieves the problem of gradient disappearance to some extent.", "However, WGANs limit the weights within a range to enforce Lipschitz continuity, which can easily cause over-simplified critic functions (Gulrajani et al., 2017) .", "To solve this issue, Gulrajani et al. (2017) proposed a gradient penalty method termed WGAN-GP, which replaces the weight clipping in WGANs with a gradient penalty term.", "As such, WGAN-GP provides a more stable training procedure and succeeds in a variety of generating tasks.", "Based on WGAN-GP, more works (Wei et al., 2018; Petzka et al., 2017; Wu et al., 2018; Mescheder et al., 2018; Thanh-Tung et al., 2019; Kodali et al., 2017; adopt different forms of gradient penalty terms to further improve training stability.", "However, it is often observed that such gradient penalty strategies sometimes generate samples with unsatisfying quality, or even fail to converge to the equilibrium point (Mescheder et al., 2018) .", "In this paper, we propose a general framework named Wasserstein-Bounded GAN (WBGAN), which improves the stability of WGAN training by bounding the Wasserstein term.", "The key insight is that the instability of WGANs also resides in the dramatic changes of the estimated Wasserstein distance during the initial iterations.", "Many previous works focused only on improving the gradient penalty term for stable training, while ignoring the bottleneck of the Wasserstein term.", "The proposed training strategy is able to adaptively enforce the Wasserstein term within a certain value, so as to balance the Wasserstein loss and gradient penalty loss dynamically and make the training process more stable.", "WBGANs are general: they can be instantiated using different kinds of bound estimation, and incorporated into any variant of WGANs to improve the training stability and accelerate the 
convergence.", "Specifically, with the Sinkhorn distance (Cuturi, 2013; Genevay et al., 2017) for bound estimation, we test three representative variants of WGANs (WGAN-GP (Gulrajani et al., 2017) , WGAN-div (Wu et al., 2018) , and WGAN-GPReal (Mescheder et al., 2018) ) on the CelebA dataset (Liu et al., 2015) .", "As shown in Fig. 1", "This paper introduced a general framework called WBGANs, which can be applied to a variety of WGAN variants to stabilize the training process and improve the performance.", "We clarify that WBGANs can stabilize the Wasserstein term at the beginning of the iterations, which is beneficial for smoother convergence of WGAN-based methods.", "We present an instantiated bound estimation method via the Sinkhorn distance and give a theoretical analysis of it.", "How to set a better bound for higher-resolution image generation tasks remains an open topic." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ 0.060606054961681366, 0.05128204822540283, 0.09302324801683426, 0, 0.12121211737394333, 0.06451612710952759, 0.03999999538064003, 0, 0, 0.09302324801683426, 0, 0.05882352590560913, 0.05128204822540283, 0.10526315122842789, 0.13333332538604736, 0, 0, 0.05405404791235924, 0.12121211737394333, 0.05882352590560913, 0.0476190447807312, 0.09999999403953552, 0.11999999731779099, 0.10526315122842789, 0.15789473056793213, 0.05714285373687744, 0.12903225421905518, 0.1875 ]
BkxgrAVFwH
true
[ "Propose an improved framework for WGANs and demonstrate its better performance in theory and practice." ]
[ "Generative models such as Variational Auto Encoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian.\n", "After a trained model is obtained, one can sample the Generator in various forms for exploration and understanding, such as interpolating between two samples, sampling in the vicinity of a sample or exploring differences between a pair of samples applied to a third sample.\n", "In this paper, we show that the latent space operations used in the literature so far induce a distribution mismatch between the resulting outputs and the prior distribution the model was trained on.", "To address this, we propose to use distribution matching transport maps to ensure that such latent space operations preserve the prior distribution, while minimally modifying the original operation. \n", "Our experimental results validate that the proposed operations give higher quality samples compared to the original operations.", "Generative models such as Variational Autoencoders (VAEs) BID6 and Generative Adversarial Networks (GANs) BID3 have emerged as popular techniques for unsupervised learning of intractable distributions.", "In the framework of Generative Adversarial Networks (GANs) BID3 , the generative model is obtained by jointly training a generator G and a discriminator D in an adversarial manner.", "The discriminator is trained to classify synthetic samples from real ones, whereas the generator is trained to map samples drawn from a fixed prior distribution to synthetic examples which fool the discriminator.", "Variational Autoencoders (VAEs) BID6 are also trained for a fixed prior distribution, but this is done through the loss of an Autoencoder that minimizes the variational lower bound of the data likelihood.", "For both VAEs and GANs, using some data X we end up with a trained generator G, that is supposed to map latent samples z from the fixed prior distribution 
to output samples G(z) which (hopefully) have the same distribution as the data. In order to understand and visualize the learned model G(z), it is a common practice in the literature of generative models to explore how the output G(z) behaves under various arithmetic operations on the latent samples z.", "In this paper, we show that the operations typically used so far, such as linear interpolation BID3 , spherical interpolation (White, 2016) , vicinity sampling and vector arithmetic BID12 , cause a distribution mismatch between the latent prior distribution and the results of the operations.", "This is problematic, since the generator G was trained on a fixed prior and expects to see inputs with statistics consistent with that distribution.", "We show that this, somewhat paradoxically, is also a problem if the support of the resulting (mismatched) distribution is within the support of a uniformly distributed prior, whose points all have equal likelihood during training. To address this, we propose to use distribution matching transport maps, to obtain analogous latent space operations (e.g. 
interpolation, vicinity sampling) which preserve the prior distribution of the latent space, while minimally changing the original operation.", "In Figure 1 we showcase how our proposed technique gives an interpolation operator which avoids distribution mismatch when interpolating between samples of a uniform distribution.", "(a) Uniform prior: Trajectories of linear interpolation, our matched interpolation and the spherical interpolation (White, 2016) .", "(White, 2016)", "Figure 1: We show examples of distribution mismatches induced by the previous interpolation schemes when using a uniform prior in two dimensions.", "Our matched interpolation avoids this with a minimal modification to the linear trajectory, traversing through the space such that all points along the path are distributed identically to the prior. The points of the (red) matched trajectories are obtained as minimal deviations (in expectation of l 1 distance) from the points of the (blue) linear trajectory.", "We have shown that the common latent space operations used for Generative Models induce distribution mismatch from the prior distribution the models were trained for.", "This problem has been mostly ignored by the literature so far, partially due to the belief that this should not be a problem for uniform priors.", "However, our statistical and experimental analysis shows that the problem is real, with the operations used so far producing significantly lower quality samples compared to their inputs.", "To address the distribution mismatch, we propose to use optimal transport to minimally modify (in l 1 distance) the operations such that they fully preserve the prior distribution.", "We give analytical formulas of the resulting (matched) operations for various examples, which are easily implemented.", "The matched operators give significantly higher quality samples compared to the originals, having the potential to become standard tools 
for evaluating and exploring generative models.", "We note that the analysis here can be seen as a more rigorous version of an observation made by White (2016) , who experimentally shows that there is a significant difference between the average norm of the midpoint of linear interpolation and the points of the prior, for uniform and Gaussian distributions. Suppose our latent space has a prior with DISPLAYFORM0 In this case, we can look at the squared norm DISPLAYFORM1 From the Central Limit Theorem (CLT), we know that as d → ∞, DISPLAYFORM2 in distribution.", "Thus, assuming d is large enough such that we are close to convergence, we can approximate the distribution of z 2 as N (dµ Z 2 , dσ 2 Z 2 ).", "In particular, this implies that almost all points lie on a relatively thin spherical shell, since the mean grows as O(d) whereas the standard deviation grows only as O( DISPLAYFORM3 We note that this property is well known for i.i.d. Gaussian entries (see e.g. Ex. 6.14 in MacKay FORMULA5 ).", "For the uniform distribution on the hypercube it is also well known that the mass is concentrated in the corner points (which is consistent with the claim here since the corner points lie on a sphere). Now", "consider an operator such as the midpoint of linear interpolation, y = DISPLAYFORM4 In this case, we can compute: DISPLAYFORM5 Thus, the distribution of y 2 can be approximated with N ( DISPLAYFORM6 . Therefore", ", y also mostly lies on a spherical shell, but with a different radius than z. In fact,", "the shells will intersect at regions which have a vanishing probability for large d. In other", "words, when looking at the squared norm y 2 , y 2 is a (strong) outlier with respect to the distribution of z 2 ." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2641509473323822, 0.2295081913471222, 0.4150943458080292, 0.38461539149284363, 0.14999999105930328, 0.0833333283662796, 0.19230768084526062, 0.1666666567325592, 0.14814814925193787, 0.21176470816135406, 0.26229506731033325, 0.2083333283662796, 0.24390242993831635, 0.16326530277729034, 0.0833333283662796, 0.21276594698429108, 0.158730149269104, 0.260869562625885, 0.16326530277729034, 0.15686273574829102, 0.2857142686843872, 0.04878048226237297, 0.20408162474632263, 0.23157894611358643, 0.19230768084526062, 0.10958904027938843, 0.1538461446762085, 0.1818181723356247, 0.0476190410554409, 0.09756097197532654, 0.17777776718139648 ]
SyBBgXWAZ
true
[ "Operations in the GAN latent space can induce a distribution mismatch compared to the training distribution, and we address this using optimal transport to match the distributions. " ]
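As an illustration of the matched operations described in this record, here is a sketch (an illustrative construction, not the paper's exact derivation) of a coordinate-wise monotone transport map that sends the midpoint of two Uniform(-1, 1) samples, whose coordinates follow a triangular law, back to the uniform prior:

```python
import random

def tri_cdf(y):
    """CDF of (z + z')/2 for independent z, z' ~ Uniform(-1, 1):
    the triangular distribution with density 1 - |y| on [-1, 1]."""
    if y <= 0.0:
        return (y + 1.0) ** 2 / 2.0
    return 1.0 - (1.0 - y) ** 2 / 2.0

def matched_midpoint(z1, z2):
    """Push each midpoint coordinate through its own CDF and back through
    the inverse Uniform(-1,1) CDF, so every output coordinate is again
    uniform (in one dimension, monotone maps are optimal transport maps)."""
    return [2.0 * tri_cdf((a + b) / 2.0) - 1.0 for a, b in zip(z1, z2)]
```

Sampling many midpoints and applying `matched_midpoint` yields coordinates whose empirical second moment is close to the uniform prior's 1/3 rather than the raw midpoint's 1/6.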
[ "Neural program embeddings have shown much promise recently for a variety of program analysis tasks, including program synthesis, program repair, code completion, and fault localization.", "However, most existing program embeddings are based on syntactic features of programs, such as token sequences or abstract syntax trees.", "Unlike images and text, a program has well-defined semantics that can be difficult to capture by only considering its syntax (i.e. syntactically similar programs can exhibit vastly different run-time behavior), which makes syntax-based program embeddings fundamentally limited.", "We propose a novel semantic program embedding that is learned from program execution traces.", "Our key insight is that program states expressed as sequential tuples of live variable values not only capture program semantics more precisely, but also offer a more natural fit for Recurrent Neural Networks to model.", "We evaluate different syntactic and semantic program embeddings on the task of classifying the types of errors that students make in their submissions to an introductory programming class and on the CodeHunt education platform.", "Our evaluation results show that the semantic program embeddings significantly outperform the syntactic program embeddings based on token sequences and abstract syntax trees.", "In addition, we augment a search-based program repair system with predictions made from our semantic embedding and demonstrate significantly improved search efficiency.\n", "Recent breakthroughs in deep learning techniques for computer vision and natural language processing have led to a growing interest in their applications in programming languages and software engineering.", "Several well-explored areas include program classification, similarity detection, program repair, and program synthesis.", "One of the key steps in using neural networks for such tasks is to design suitable program representations for the networks to exploit.", "Most existing 
approaches in the neural program analysis literature have used syntax-based program representations.", "BID6 proposed a convolutional neural network over abstract syntax trees (ASTs) as the program representation to classify programs based on their functionalities and to detect different sorting routines.", "DeepFix BID4 , SynFix BID1 , and sk p BID9 are recent neural program repair techniques for correcting errors in student programs for MOOC assignments, and they all represent programs as sequences of tokens.", "Even program synthesis techniques that generate programs as output, such as RobustFill BID3 , adopt a token-based program representation for the output decoder.", "The only exception is BID8 , which introduces a novel perspective of representing programs using input-output pairs.", "However, such representations are too coarse-grained to accurately capture program properties: programs with the same input-output behavior may have very different syntactic characteristics.", "Consequently, the embeddings learned from input-output pairs are not precise enough for many program analysis tasks.", "Although these pioneering efforts have made significant contributions to bridge the gap between deep learning techniques and program analysis tasks, syntax-based program representations are fundamentally limited due to the enormous gap between program syntax (i.e. 
static expression) and semantics (i.e. dynamic execution).", "Figure 1: Bubble sort and insertion sort (the code highlighted in the shadow box is the only syntactic difference between the two algorithms).", "Their execution traces for the input vector A = [8, 5, 1, 4, 3] are displayed on the right, where, for brevity, only values for variable A are shown.", "This gap can be illustrated as follows.", "First, when a program is executed at runtime, its statements are almost never interpreted in the order in which the corresponding token sequence is presented to the deep learning models (the only exception being straight-line programs, i.e., ones without any control-flow statements).", "For example, a conditional statement only executes one branch each time, but its token sequence is expressed sequentially as multiple branches.", "Similarly, when iterating over a looping structure at runtime, it is unclear in which order any two tokens are executed when considering different loop iterations.", "Second, program dependency (i.e. 
data and control) is not exploited in token sequences and ASTs despite its essential role in defining program semantics.", "FIG0 shows an example using a simple max function.", "On line 8, the assignment statement means variable max_val is data-dependent on item.", "In addition, the execution of this statement depends on the evaluation of the if condition on line 7, i.e., max_val is also control-dependent on item as well as itself.", "Third, from a pure program analysis standpoint, the gap between program syntax and semantics is manifested in that similar program syntax may lead to vastly different program semantics.", "For example, consider the two sorting functions shown in Figure 1 .", "Both functions sort the array via two nested loops, compare the current element to its successor, and swap them if the order is incorrect.", "However, the two functions implement different algorithms, namely Bubble Sort and Insertion Sort.", "Therefore, minor syntactic discrepancies can lead to significant semantic differences.", "This intrinsic weakness will be inherited by any deep learning technique that adopts a syntax-based program representation.", "We have evaluated our dynamic program embeddings in the context of automated program repair.", "In particular, we use the program embeddings to classify the type of mistakes students made in their programming assignments based on a set of common error patterns (described in the appendix).", "The dataset for the experiments consists of the programming submissions made to the Module 2 assignment in Microsoft-DEV204.1X and two additional problems from the Microsoft CodeHunt platform.", "The results show that our dynamic embeddings significantly outperform syntax-based program embeddings, including those trained on token sequences and abstract syntax trees.", "In addition, we show that our dynamic embeddings can be leveraged to significantly improve the efficiency of a search-based program corrector SARFGEN (BID13) (the algorithm is 
presented in the appendix).", "More importantly, we believe that our dynamic program embeddings can be useful for many other program analysis tasks, such as program synthesis, fault localization, and similarity detection.", "To summarize, the main contributions of this paper are: (1) we show the fundamental limitation of representing programs using syntax-level features; (2) we propose dynamic program embeddings learned from runtime execution traces to overcome key issues with syntactic program representations; (3) we evaluate our dynamic program embeddings for predicting common mistake patterns students make in program assignments, and results show that the dynamic program embeddings outperform state-of-the-art syntactic program embeddings; and (4) we show how the dynamic program embeddings can be utilized to improve an existing production program repair system.", "We have presented a new program embedding that learns program representations from runtime execution traces.", "We have used the new embeddings to predict error patterns that students make in their online programming submissions.", "Our evaluation shows that the dynamic program embeddings significantly outperform those learned via program syntax.", "We also demonstrate, via an additional application, that our dynamic program embeddings yield more than 10x speedups compared to an enumerative baseline for search-based program repair.", "Beyond neural program repair, we believe that our dynamic program embeddings can be fruitfully utilized for many other neural program analysis tasks such as program induction and synthesis.", "for Pc ∈ Pcs do // generate the syntactic discrepancies w.r.t. each Pc C(P , Pc) ← DiscrepanciesGeneration(P , Ps) // execute P to extract the dynamic execution trace T (P ) ← DynamicTraceExtraction(P ) // prioritize subsets of C(P , Pc) through the pre-trained model C subs (P , Pc) ← Prioritization(C(P , Pc), T (P ), M) for C sub (P , Pc) ∈ C subs (P , Pc) do" ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13333332538604736, 0.1428571343421936, 0.045454543083906174, 0.2857142686843872, 0.09756097197532654, 0.1621621549129486, 0.1428571343421936, 0.19354838132858276, 0.060606058686971664, 0.10526315122842789, 0.14814814925193787, 0.0952380895614624, 0.05714285373687744, 0.10526315122842789, 0.06666666269302368, 0.07999999821186066, 0.06451612710952759, 0.042553190141916275, 0.0624999962747097, 0, 0, 0.0833333283662796, 0, 0, 0.06896550953388214, 0, 0, 0.05882352590560913, 0.06451612710952759, 0, 0, 0, 0.1111111044883728, 0.1599999964237213, 0.1904761791229248, 0.11428570747375488, 0.060606058686971664, 0.06666666269302368, 0.10256409645080566, 0.045454543083906174, 0.27272728085517883, 0.07692307233810425, 0.09090908616781235, 0.0624999962747097, 0.0624999962747097, 0.0357142835855484 ]
BJuWrGW0Z
true
[ "A new way of learning semantic program embeddings" ]
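To make the notion of an execution trace in this record concrete, here is a minimal sketch (a toy helper, plain Python) that records the successive states of variable A during bubble sort, mirroring the traces shown in Figure 1:

```python
def bubble_sort_trace(a):
    """Run bubble sort and record the value of A after every swap.
    The resulting sequence of variable states is the kind of semantic
    signal that trace-based embeddings feed to an RNN."""
    a = list(a)
    trace = [tuple(a)]
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                trace.append(tuple(a))
    return trace
```

For the input [8, 5, 1, 4, 3] the trace starts at (8, 5, 1, 4, 3) and ends at (1, 3, 4, 5, 8); insertion sort on the same input would produce a different state sequence even though the two programs differ in only a few tokens.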
[ "In this paper, we propose a generalization of the BN algorithm, diminishing batch normalization (DBN), where we update the BN parameters in a diminishing moving average manner.", "Batch normalization (BN) is so effective in accelerating the convergence of neural network training that it has become common practice. \n", "Our proposed DBN algorithm retains the overall structure of the original BN algorithm while introducing a weighted averaging update to some trainable parameters. \n", "We provide a convergence analysis showing that the DBN algorithm converges to a stationary point with respect to the trainable parameters.", "Our analysis can be easily generalized to the original BN algorithm by setting some parameters to constants.", "To the best of the authors' knowledge, this is the first convergence analysis for an algorithm that incorporates batch normalization.", "We analyze a two-layer model with an arbitrary activation function. \n", "The primary challenge of the analysis is the fact that some parameters are updated by gradients while others are not. \n", "The convergence analysis applies to any activation function that satisfies our common assumptions.\n", "For the analysis, we also show the sufficient and necessary conditions on the stepsizes and diminishing weights to ensure convergence. 
\n", "In the numerical experiments, we use more complex models with more layers and ReLU activation.", "We observe that DBN outperforms the original BN algorithm on the ImageNet, MNIST, NI and CIFAR-10 datasets with reasonably complex FNN and CNN models.", "Deep neural networks (DNNs) have shown unprecedented success in various applications such as object detection.", "However, it still takes a long time to train a DNN until it converges.", "Ioffe & Szegedy identified a critical problem involved in training deep networks, internal covariate shift, and then proposed batch normalization (BN) to mitigate this phenomenon.", "BN addresses this problem by normalizing the distribution of every hidden layer's input.", "In order to do so, it calculates the preactivation mean and standard deviation using mini-batch statistics at each iteration of training and uses these estimates to normalize the input to the next layer.", "The output of a layer is normalized by using the batch statistics, and two new trainable parameters per neuron are introduced that capture the inverse operation.", "It is now standard practice Bottou et al. (2016) ; He et al. 
(2016) .", "While this approach leads to a significant performance jump, to the best of our knowledge, there is no known theoretical guarantee for the convergence of an algorithm with BN.", "The difficulty of analyzing the convergence of the BN algorithm comes from the fact that not all of the BN parameters are updated by gradients.", "Thus, it invalidates most of the classical convergence analyses for gradient methods.", "In this paper, we propose a generalization of the BN algorithm, diminishing batch normalization (DBN), where we update the BN parameters in a diminishing moving average way.", "It essentially means that the BN layer adjusts its output according to all past mini-batches instead of only the current one.", "This helps to mitigate a problem of the original BN, pointed out by Bottou et al., that the output of a BN layer on a particular training pattern depends on the other patterns in the current mini-batch.", "By setting the layer parameter we introduce into DBN to a specific value, we recover the original BN algorithm.", "We give a convergence analysis of the algorithm with a two-layer batch-normalized neural network and diminishing stepsizes.", "We assume two layers (the generalization to multiple layers can be made by using the same approach but substantially complicating the notation) and an arbitrary loss function.", "The convergence analysis applies to any activation function that follows our common assumptions.", "The main result shows that under diminishing stepsizes on gradient updates and updates on mini-batch statistics, and standard Lipschitz conditions on loss functions, DBN converges to a stationary point.", "As already pointed out, the primary challenge is the fact that some trainable parameters are updated by gradients while others are updated by a minor recalculation.", "Contributions.", "The main contribution of this paper is in providing a general convergence guarantee for DBN.", "Specifically, we make the following contributions.", "• In 
Section 4, we show the sufficient and necessary conditions on the stepsizes and diminishing weights to ensure the convergence of the BN parameters.", "• We show that the algorithm converges to a stationary point under a general nonconvex objective function.", "This paper is organized as follows.", "In Section 2, we review the related work and the development of the BN algorithm.", "We formally state our model and algorithm in Section 3.", "We present our main results in Section 4.", "In Section 5, we numerically show that the DBN algorithm outperforms the original BN algorithm.", "Proofs for the main steps are collected in the Appendix." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2916666567325592, 0.3265306055545807, 0.1666666567325592, 0.31111109256744385, 0.1428571343421936, 0.27272728085517883, 0.1111111044883728, 0.17777776718139648, 0.14999999105930328, 0.22727271914482117, 0.14999999105930328, 0.2083333283662796, 0.04878048226237297, 0.10526315122842789, 0.19607841968536377, 0.1538461446762085, 0.14814814925193787, 0.23529411852359772, 0.052631575614213943, 0.26923075318336487, 0.17777776718139648, 0.3448275923728943, 0.1304347813129425, 0.21052631735801697, 0.2857142686843872, 0.11764705181121826, 0.1538461446762085, 0.11764705181121826, 0.12244897335767746, 0.2926829159259796, 0.060606058686971664, 0.25531914830207825, 0.2083333283662796, 0.20512820780277252, 0.11428570747375488, 0.11764705181121826, 0.19999998807907104, 0.1764705777168274 ]
SkzK4iC5Ym
true
[ "We propose an extension of batch normalization, show a first-of-its-kind convergence analysis for this extension and show in numerical experiments that it has better performance than the original batch normalization." ]
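The diminishing moving-average update at the heart of DBN can be sketched for a single feature as follows (a toy illustration with an assumed 1/t weight schedule, not the paper's exact algorithm):

```python
class DiminishingBatchNorm1D:
    """Normalization statistics are a weighted average over all past
    mini-batches; using alpha_t = 1 for every t would degenerate to the
    original BN, which relies on the current batch alone."""

    def __init__(self):
        self.mu, self.var, self.t = 0.0, 1.0, 0

    def update(self, batch):
        self.t += 1
        alpha = 1.0 / self.t  # diminishing weight schedule (assumption)
        m = sum(batch) / len(batch)
        v = sum((x - m) ** 2 for x in batch) / len(batch)
        self.mu = (1.0 - alpha) * self.mu + alpha * m
        self.var = (1.0 - alpha) * self.var + alpha * v

    def normalize(self, batch, eps=1e-5):
        s = (self.var + eps) ** 0.5
        return [(x - self.mu) / s for x in batch]
```

With the 1/t schedule, the stored mean after t batches equals the average of all past batch means, so the layer's output on a given pattern no longer depends solely on the other patterns in the current mini-batch.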
[ "Generative models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian.", "After a trained model is obtained, one can sample the Generator in various forms for exploration and understanding, such as interpolating between two samples, sampling in the vicinity of a sample or exploring differences between a pair of samples applied to a third sample.", "However, the latent space operations commonly used in the literature so far induce a distribution mismatch between the resulting outputs and the prior distribution the model was trained on.", "Previous works have attempted to reduce this mismatch with heuristic modifications to the operations or by changing the latent distribution and re-training models.", "In this paper, we propose a framework for modifying the latent space operations such that the distribution mismatch is fully eliminated.", "Our approach is based on optimal transport maps, which adapt the latent space operations such that they fully match the prior distribution, while minimally modifying the original operation.", "Our matched operations are readily obtained for the commonly used operations and distributions and require no adjustment to the training procedure.", "Generative models such as Variational Autoencoders (VAEs) BID7 and Generative Adversarial Networks (GANs) BID3 have emerged as popular techniques for unsupervised learning of intractable distributions.", "In the framework of Generative Adversarial Networks (GANs) BID3 , the generative model is obtained by jointly training a generator G and a discriminator D in an adversarial manner.", "The discriminator is trained to classify synthetic samples from real ones, whereas the generator is trained to map samples drawn from a fixed prior distribution to synthetic examples which fool the discriminator.", "Variational Autoencoders (VAEs) BID7 are also trained for a 
fixed prior distribution, but this is done through the loss of an Autoencoder that minimizes the variational lower bound of the data likelihood.", "For both VAEs and GANs, using some data X we end up with a trained generator G that is supposed to map latent samples z from the fixed prior distribution to output samples G(z) which (hopefully) have the same distribution as the data. In order to understand and visualize the learned model G(z), it is a common practice in the literature of generative models to explore how the output G(z) behaves under various arithmetic operations on the latent samples z.", "However, the operations typically used so far, such as linear interpolation BID3 , spherical interpolation BID20 , vicinity sampling and vector arithmetic BID12 , cause a distribution mismatch between the latent prior distribution and the results of the operations.", "This is problematic, since the generator G was trained on a fixed prior and expects to see inputs with statistics consistent with that distribution. To address this, we propose to use distribution matching transport maps to obtain analogous latent space operations (e.g. 
interpolation, vicinity sampling) which preserve the prior distribution of the latent space, while minimally changing the original operation.", "Figure 1: We show examples of distribution mismatches induced by the previous interpolation schemes when using a uniform prior in two dimensions. (a) Uniform prior: trajectories of linear interpolation, our matched interpolation and the spherical interpolation BID20 . (e) Spherical midpoint distribution BID20 .", "Our matched interpolation avoids this with a minimal modification to the linear trajectory, traversing through the space such that all points along the path are distributed identically to the prior.", "In Figure 1 we showcase how our proposed technique gives an interpolation operator which avoids distribution mismatch when interpolating between samples of a uniform distribution.", "The points of the (red) matched trajectories are obtained as minimal deviations (in expectation of l 1 distance) from the points of the (blue) linear trajectory.", "We proposed a framework that fully eliminates the distribution mismatch in the common latent space operations used for generative models.", "Our approach uses optimal transport to minimally modify (in l 1 distance) the operations such that they fully preserve the prior distribution.", "We give analytical formulas of the resulting (matched) operations for various examples, which are easily implemented.", "The matched operators give significantly higher quality samples compared to the originals, having the potential to become standard tools for evaluating and exploring generative models." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3333333134651184, 0.29032257199287415, 0.6274510025978088, 0.25, 0.6808510422706604, 0.4150943458080292, 0.17777776718139648, 0.11999999731779099, 0.25925925374031067, 0.23999999463558197, 0.25, 0.29885056614875793, 0.35087719559669495, 0.3294117748737335, 0.18867923319339752, 0.22641508281230927, 0.11428570747375488, 0.15686273574829102, 0.0416666604578495, 0.5652173757553101, 0.2916666567325592, 0.23255813121795654, 0.19607841968536377 ]
BklCusRct7
true
[ "We propose a framework for modifying the latent space operations such that the distribution mismatch between the resulting outputs and the prior distribution the generative model was trained on is fully eliminated." ]
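For a standard Gaussian prior, a matched linear interpolation reduces to a simple rescaling, since t·z + (1−t)·z′ ~ N(0, (t² + (1−t)²) I) for independent z, z′ ~ N(0, I). A minimal sketch (illustrative of the distribution-matching idea, not the paper's general optimal-transport construction):

```python
import math
import random

def matched_gauss_interp(z1, z2, t):
    """Rescale the linear interpolation so the result is exactly N(0, I)
    again; dividing by sqrt(t^2 + (1-t)^2) undoes the variance shrinkage
    that causes the distribution mismatch at intermediate t."""
    c = math.sqrt(t * t + (1.0 - t) * (1.0 - t))
    return [(t * a + (1.0 - t) * b) / c for a, b in zip(z1, z2)]
```

At t = 0.5 the plain midpoint has per-coordinate variance 1/2, so its samples land on a smaller shell than the prior's; the matched version restores unit variance.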
[ "The problem of building a coherent and non-monotonous conversational agent with proper discourse and coverage is still an area of open research.", "Current architectures only take care of semantic and contextual information for a given query and fail to completely account for syntactic and external knowledge, which are crucial for generating responses in a chit-chat system.", "To overcome this problem, we propose an end-to-end multi-stream deep learning architecture which learns unified embeddings for query-response pairs by leveraging contextual information from memory networks and syntactic information by incorporating Graph Convolution Networks (GCN) over their dependency parse.", "A stream of this network also utilizes transfer learning by pre-training a bidirectional transformer to extract a semantic representation for each input sentence, and incorporates external knowledge through the neighbourhood of the entities from a Knowledge Base (KB).", "We benchmark these embeddings on the next sentence prediction task and significantly improve upon the existing techniques.", "Furthermore, we use AMUSED to represent queries and responses along with their context to develop a retrieval-based conversational agent, which has been validated by expert linguists to have comprehensive engagement with humans.", "With significant advancements in automatic speech recognition systems (Hinton et al., 2012; Kumar et al., 2018) and the field of natural language processing, conversational agents have become an important part of current research.", "They find usage in multiple domains ranging from self-driving cars (Chen et al., 2017b) to social robots and virtual assistants (Chen et al., 2017a) .", "Conversational agents can be broadly classified into two categories: a task-oriented chatbot and a chit-chat based system.", "The former works towards completion of a certain goal and is specifically designed for domain-specific needs such as restaurant reservations (Wen et 
al., 2017) , movie recommendation (Dhingra et al., 2017) , and flight ticket booking systems, among many others.", "The latter is more of a personal companion and engages in human-computer interaction for entertainment or emotional companionship.", "An ideal chit-chat system should be able to carry on interesting, non-monotonous conversations with context and coherence.", "Current chit-chat systems are either generative (Vinyals & Le, 2015) or retrieval-based in nature.", "The generative ones tend to generate natural language sentences as responses and enjoy scalability to multiple domains without much change in the network.", "Even though easier to train, they suffer from error-prone responses (Zhang et al., 2018b) .", "IR-based methods select the best response from a given set of answers, which makes them error-free.", "But, since the responses come from a specific dataset, they might suffer from distribution bias during the course of conversation.", "A chit-chat system should capture semantic, syntactic, contextual and external knowledge in a conversation to model human-like performance.", "Recent work by Bordes et al. 
(2016) proposed a memory-network-based approach to encode contextual information for a query while performing generation and retrieval later.", "Such networks can capture long-term context but fail to encode relevant syntactic information through their model.", "Phenomena like anaphora resolution are properly handled only if we incorporate syntax.", "Our work improves upon previous architectures by creating enhanced representations of the conversation using multiple streams, which include Graph Convolution Networks (Bruna et al., 2014) , transformers (Vaswani et al., 2017) and memory networks (Bordes et al., 2016) in an end-to-end setting, where each component captures conversation-relevant information from queries, subsequently leading to better responses.", "Figure 1: Overview of AMUSED.", "AMUSED first encodes each sentence by concatenating embeddings (denoted by ⊕) from the Bi-LSTM and Syntactic GCN for each token, followed by word attention.", "The sentence embedding is then concatenated with the knowledge embedding from the Knowledge Module ( Figure 2 ).", "The query embedding passes through the Memory Module ( Figure 3 ) before being trained using triplet loss.", "Please see Section 4 for more details.", "Our contributions in this paper can be summarized as follows:", "• We propose AMUSED, a novel multi-stream deep learning model which learns rich unified embeddings for query-response pairs using triplet loss as a training metric.", "• We perform multi-head attention over query-response pairs, which has proven to be much more effective than unidirectional or bi-directional attention.", "• We use Graph Convolution Networks in a chit-chat setting to incorporate syntactic information in the dialogue using its dependency parse.", "• Even with the lack of a concrete metric to judge a conversational agent, our embeddings have been shown to perform interesting response retrieval on the Persona-Chat dataset.", "In this paper, we propose AMUSED, a multi-stream architecture 
which effectively encodes semantic information from the query while properly utilizing external knowledge to improve performance on natural dialogue.", "It also employs a GCN to capture long-range syntactic information and improves context-awareness in dialogue by incorporating a memory network.", "Through our experiments and results using different metrics, we demonstrate that learning these rich representations through smart training (using triplets) improves the performance of chit-chat systems.", "The ablation studies show the importance of the different components for better dialogue.", "Our ideas can easily be extended to various conversational tasks which would benefit from such enhanced representations." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.08510638028383255, 0.2857142686843872, 0.3384615480899811, 0.2295081913471222, 0.09302324801683426, 0.14035087823867798, 0.10344827175140381, 0.11999999731779099, 0.08695651590824127, 0.1269841194152832, 0.17777776718139648, 0.09090908616781235, 0.09302324801683426, 0.12244897335767746, 0.0476190447807312, 0.045454539358615875, 0.04444443807005882, 0.260869562625885, 0.26923075318336487, 0.09090908616781235, 0, 0.03448275476694107, 0.1702127605676651, 0, 0, 0.05882352590560913, 0.178571417927742, 0.10810810327529907, 0.22641508281230927, 0.12765957415103912, 0.21276594698429108, 0.11764705181121826, 0.2222222238779068, 0.2666666507720947, 0.07407406717538834, 0.14999999105930328, 0.045454539358615875 ]
rJe6t1SFDB
true
[ "This paper provides a multi-stream end-to-end approach to learn unified embeddings for query-response pairs in dialogue systems by leveraging contextual, syntactic, semantic and external information together." ]
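The triplet objective used in this record to train unified query-response embeddings can be sketched as follows (a generic formulation on toy vectors; the margin value and Euclidean distance are assumptions, not AMUSED's exact configuration):

```python
def triplet_loss(query, pos_response, neg_response, margin=1.0):
    """Hinge loss over (query, matching response, mismatched response)
    embeddings: pull the matching pair together and push the mismatched
    pair at least `margin` further away than the matching one."""
    d_pos = sum((q - p) ** 2 for q, p in zip(query, pos_response)) ** 0.5
    d_neg = sum((q - n) ** 2 for q, n in zip(query, neg_response)) ** 0.5
    return max(0.0, d_pos - d_neg + margin)
```

When the mismatched response is already more than `margin` farther from the query than the matching one, the loss is zero and the triplet contributes no gradient, which is why retrieval systems typically mine informative negatives.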
[ "We examine techniques for combining generalized policies with search algorithms to exploit the strengths and overcome the weaknesses of each when solving probabilistic planning problems.", "The Action Schema Network (ASNet) is a recent contribution to planning that uses deep learning and neural networks to learn generalized policies for probabilistic planning problems.", "ASNets are well suited to problems where local knowledge of the environment can be exploited to improve performance, but may fail to generalize to problems they were not trained on.", "Monte-Carlo Tree Search (MCTS) is a forward-chaining state space search algorithm for optimal decision making which performs simulations to incrementally build a search tree and estimate the values of each state.", "Although MCTS can achieve state-of-the-art results when paired with domain-specific knowledge, without this knowledge, MCTS requires a large number of simulations in order to obtain reliable estimates in the search tree.", "By combining ASNets with MCTS, we are able to improve the capability of an ASNet to generalize beyond the distribution of problems it was trained on, as well as enhance the navigation of the search space by MCTS.\n", "Planning is the essential ability of a rational agent to solve the problem of choosing which actions to take in an environment to achieve a certain goal.", "This paper is mainly concerned with combining the advantages of forward-chaining state space search through UCT BID11 , an instance of Monte-Carlo Tree Search (MCTS) BID5 , with the domain-specific knowledge learned by Action Schema Networks (ASNets) BID18 , a domain-independent learning algorithm.", "By combining UCT and ASNets, we hope to more effectively solve planning problems and achieve the best of both worlds. The Action Schema Network (ASNet) is a recent contribution in planning that uses deep learning and neural networks to learn generalized policies for planning problems.", "A generalized policy is a 
policy that can be applied to any problem from a given planning domain.", "Ideally, this generalized policy is able to reliably solve all problems in the given domain, although this is not always feasible.", "ASNets are well suited to problems where \"local knowledge of the environment can help to avoid certain traps\" BID18 .", "In such problems, an ASNet can significantly outperform traditional planners that use heuristic search.", "Moreover, a significant advantage of ASNets is that a network can be trained on a limited number of small problems, and generalize to problems of any size.", "However, an ASNet is not guaranteed to reliably solve all problems of a given domain.", "For example, an ASNet could fail to generalize to difficult problems that it was not trained on, an issue often encountered with machine learning algorithms.", "Moreover, the policy learned by an ASNet could be suboptimal due to a poor choice of hyperparameters that has led to an undertrained or overtrained network.", "Although our discussion is closely tied to ASNets, our contributions are more generally applicable to any method of learning a (generalized) policy.", "Monte-Carlo Tree Search (MCTS) is a state-space search algorithm for optimal decision making which relies on performing Monte-Carlo simulations to build a search tree and estimate the values of each state BID5 .", "As we perform more and more of these simulations, the state estimates become more accurate.", "MCTS-based game-playing algorithms have often achieved state-of-the-art performance when paired with domain-specific knowledge, the most notable being AlphaGo (Silver et al. 
2016) .", "One significant limitation of vanilla MCTS is that we may require a large number of simulations in order to obtain reliable estimates in the search tree.", "Moreover, because simulations are random, the search may not be able to sense that certain branches of the tree will lead to sub-optimal outcomes.", "We are concerned with UCT, a variant of MCTS that balances the trade-off between exploration and exploitation.", "However, our work can be more generally used with other search algorithms. Combining ASNets with UCT achieves three goals. (1", ") Learn what we have not learned: improve the capability of an ASNet to generalize beyond the distribution of problems it was trained on, and of UCT to bias the exploration of actions to those that an ASNet wishes to exploit. (", "2) Improve on sub-optimal learning: obtain reasonable evaluation-time performance even when an ASNet was trained with suboptimal hyperparameters, and allow UCT to converge to the optimal action in a smaller number of trials.", "(3) Be robust to changes in the environment or domain: improve performance when the test environment differs substantially from the training environment. The rest of the paper is organized as follows.", "Section 2 formalizes probabilistic planning as solving a Stochastic Shortest Path problem and gives an overview of ASNets and MCTS along with its variants.", "Section 3 defines a framework for Dynamic Programming UCT (DP-UCT) BID10 .", "Next, Section 4 examines techniques for combining the policy learned by an ASNet with DP-UCT.", "Section 5 then presents and analyzes our results.", "Finally, Section 6 summarizes our contributions and discusses related and future work.", "In this paper, we have investigated techniques to improve search using generalized policies.", "We discussed a framework for DP-UCT, extended from THTS, that allowed us to generate different flavors of DP-UCT including those that exploited the generalized policy learned by an ASNet.", "We then 
introduced methods of using this generalized policy in the simulation function, through STOCHASTIC ASNETS and MAXIMUM ASNETS.", "These allowed us to obtain more accurate state-value estimates and action-value estimates in the search tree.", "We also extended UCB1 to bias the navigation of the search space to the actions that an ASNet wants to exploit whilst maintaining the fundamental balance between exploration and exploitation, by introducing SIMPLE-ASNET and RANKED-ASNET action selection. We have demonstrated through our experiments that our algorithms are capable of improving the capability of an ASNet to generalize beyond the distribution of problems it was trained on, as well as improve sub-optimal learning.", "By combining DP-UCT with ASNets, we are able to bias the exploration of actions to those that an ASNet wishes to exploit, and allow DP-UCT to converge to the optimal action in a smaller number of trials.", "Our experiments have also demonstrated that by harnessing the power of search, we may overcome any misleading information provided by an ASNet due to a change in the environment.", "Hence, we achieved the three following goals: (1) Learn what we have not learned, (2) Improve on sub-optimal learning, and (3) Be robust to changes in the environment or domain. It is important to observe that our contributions are more generally applicable to any method of learning a (generalized) policy (not just ASNets), and potentially to other trial-based search algorithms including (L)RTDP. In the deterministic setting, there has been a long tradition of learning generalized policies and using them to guide heuristic Best First Search (BFS).", "For instance, Yoon et al. 
BID20 add the states resulting from selecting actions prescribed by the learned generalized policy to the queue of a BFS guided by a relaxed-plan heuristic, and de la BID7 learn and use generalized policies to generate lookahead states within a BFS guided by the FF heuristic.", "These authors observe that generalized policies provide effective search guidance, and that search helps correct deficiencies in the learned policy.", "Search control knowledge à la TLPlan, Talplanner or SHOP2 has been successfully used to prune the search of probabilistic planners BID13 BID17 .", "More recently, BID15 have also experimented with the use of preferred actions in variants of RTDP BID1 and AO* BID14 , albeit with limited success.", "Our work differs from these approaches by focusing explicitly on MCTS as the search algorithm and, unlike existing work combining deep learning and MCTS (e.g. AlphaGo (Silver et al. 2016)), looks not only at using neural network policies as a simulation function for rollouts, but also as a means to bias the UCB1 action selection rule. There are still many potential avenues for future work.", "We may investigate how to automatically learn the influence parameter M for SIMPLE-ASNET and RANKED-ASNET action selection, or how to combat bad information provided by an ASNet in a simulation function by mixing ASNet simulations with random simulations.", "We may also investigate techniques to interleave planning with learning by using UCT with ASNets as a 'teacher' for training an AS" ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.9130434989929199, 0.3478260934352875, 0.1666666567325592, 0.2800000011920929, 0.23999999463558197, 0.25925925374031067, 0.13636362552642822, 0.16393442451953888, 0.32258063554763794, 0.15789473056793213, 0.19512194395065308, 0.19999998807907104, 0.0555555522441864, 0.17777776718139648, 0.1621621549129486, 0.17391303181648254, 0.1304347813129425, 0.20588235557079315, 0.17142856121063232, 0.1818181723356247, 0.17391303181648254, 0.1818181723356247, 0.20512819290161133, 0.1463414579629898, 0.2222222238779068, 0.2222222238779068, 0.1666666567325592, 0.2666666507720947, 0.060606054961681366, 0.21621620655059814, 0.06666666269302368, 0.060606054961681366, 0.22857142984867096, 0.19999998807907104, 0.19999998807907104, 0.21621620655059814, 0.21052631735801697, 0.23076923191547394, 0.16326530277729034, 0.1666666567325592, 0.19999998807907104, 0.25, 0.22727271914482117, 0.17777776718139648, 0.17499999701976776, 0.1818181723356247, 0.1860465109348297 ]
B1gqEZwpvE
true
[ "Techniques for combining generalized policies with search algorithms to exploit the strengths and overcome the weaknesses of each when solving probabilistic planning problems" ]
[ "Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice.", "When the train and test distributions are mismatched, accuracy can plummet.", "Currently there are few techniques that improve robustness to unforeseen data shifts encountered during deployment.", "In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers.", "We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions.", "AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half.", "Current machine learning models depend on the ability of training data to faithfully represent the data encountered during deployment.", "In practice, data distributions evolve (Lipton et al., 2018) , models encounter new scenarios (Hendrycks & Gimpel, 2017) , and data curation procedures may capture only a narrow slice of the underlying data distribution (Torralba & Efros, 2011) .", "Mismatches between the train and test data are commonplace, yet the study of this problem is not.", "As it stands, models do not robustly generalize across shifts in the data distribution.", "If models could identify when they are likely to be mistaken, or estimate uncertainty accurately, then the impact of such fragility might be ameliorated.", "Unfortunately, modern models already produce overconfident predictions when the training examples are independent and identically distributed to the test distribution.", "This overconfidence and miscalibration is greatly exacerbated by mismatched training and testing distributions.", "Small corruptions to the data distribution are enough to subvert existing 
classifiers, and techniques to improve corruption robustness remain few in number.", "Hendrycks & Dietterich (2019) show that classification error of modern models rises from 22% on the usual ImageNet test set to 64% on ImageNet-C, a test set consisting of various corruptions applied to ImageNet test images.", "Even methods which aim to explicitly quantify uncertainty, such as probabilistic and Bayesian neural networks, struggle under data shift, as recently demonstrated by Ovadia et al. (2019) .", "Improving performance in this setting has been difficult.", "One reason is that training against corruptions only encourages networks to memorize the specific corruptions seen during training and leaves models unable to generalize to new corruptions (Vasiljevic et al., 2016; Geirhos et al., 2018) .", "Further, networks trained on translation augmentations remain highly sensitive to images shifted by a single pixel (Gu et al., 2019; Hendrycks & Dietterich, 2019) .", "Others have proposed aggressive data augmentation schemes (Cubuk et al., 2018) , though at the cost of a computational increase.", "demonstrates that many techniques may improve clean accuracy at the cost of robustness while many techniques which improve robustness harm uncertainty, and contrariwise.", "In all, existing techniques have considerable trade-offs.", "In this work, we propose a technique to improve both the robustness and uncertainty estimates of classifiers under data shift.", "We propose AUGMIX, a method which simultaneously achieves new state-of-the-art results for robustness and uncertainty estimation while maintaining or improving accuracy on standard benchmark datasets.", "AUGMIX utilizes stochasticity and diverse augmentations, a Jensen-Shannon Divergence consistency loss, and a formulation to mix multiple augmented images to achieve state-of-the-art performance.", "On CIFAR-10 and CIFAR-100, our method roughly halves the corruption robustness error of standard training procedures from 
28.4% to 12.4% and 54.3% to 37.8% error, respectively.", "On ImageNet, AUGMIX also achieves state-of-the-art corruption robustness and decreases perturbation instability from 57.2% to 37.4%.", "Code is available at https://github.com/google-research/augmix.", "AUGMIX is a data processing technique which mixes randomly generated augmentations and uses a Jensen-Shannon loss to enforce consistency.", "Our simple-to-implement technique obtains state-of-the-art performance on CIFAR-10/100-C, ImageNet-C, CIFAR-10/100-P, and ImageNet-P.", "AUGMIX models achieve state-of-the-art calibration and can maintain calibration even as the distribution shifts.", "We hope that AUGMIX will enable more reliable models, a necessity for models deployed in safety-critical environments." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13636362552642822, 0.13333332538604736, 0.1764705777168274, 0.2222222238779068, 0.19512194395065308, 0.12765957415103912, 0.1666666567325592, 0.07547169178724289, 0.11428570747375488, 0.060606054961681366, 0.0952380895614624, 0.15789473056793213, 0.06451612710952759, 0.20512819290161133, 0.0833333283662796, 0.17777776718139648, 0, 0.0833333283662796, 0.09090908616781235, 0.10256409645080566, 0.15789473056793213, 0, 0.3589743673801422, 0.27272728085517883, 0.1538461446762085, 0.1304347813129425, 0.21621620655059814, 0, 0.1621621549129486, 0.19354838132858276, 0.3125, 0.0555555522441864 ]
S1gmrxHFvB
true
[ "We obtain state-of-the-art on robustness to data shifts, and we maintain calibration under data shift even though even when accuracy drops" ]
[ "Automatic Piano Fingering is a hard task which computers can learn using data.", "As data collection is hard and expensive, we propose to automate this process by automatically extracting fingerings from public videos and MIDI files, using computer-vision techniques.", "Running this process on 90 videos results in the largest dataset for piano fingering with more than 150K notes.", "We show that when running a previously proposed model for automatic piano fingering on our dataset and then fine-tuning it on manually labeled piano fingering data, we achieve state-of-the-art results.\n", "In addition to the fingering extraction method, we also introduce a novel method for transferring deep-learning computer-vision models to work on out-of-domain data, by fine-tuning it on out-of-domain augmentation proposed by a Generative Adversarial Network (GAN).\n\n", "For demonstration, we anonymously release a visualization of the output of our process for a single video on https://youtu.be/Gfs1UWQhr5Q", "Learning to play the piano is a hard task taking years to master.", "One of the challenging aspects when learning a new piece is the fingering choice in which to play each note.", "While beginner booklets contain many fingering suggestions, advanced pieces often contain none or a select few.", "Automatic prediction of PIANO-FINGERING can be a useful addition to new piano learners, to ease the learning process of new pieces.", "As manually labeling fingering for different sheet music is an exhausting and expensive task, in practice previous work (Parncutt et al., 1997; Hart et al., 2000; Jacobs, 2001; Kasimi et al., 2007; Nakamura et al., 2019 ) used very few tagged pieces for evaluation, with minimal or no training data.", "In this paper, we propose an automatic, low-cost method for detecting PIANO-FINGERING from piano playing performances captured on videos which allows training modern, data-hungry, neural networks.", "We introduce a novel pipeline that adapts and 
combines several deep learning methods which leads to an automatically labeled PIANO-FINGERING dataset.", "Our method can serve two purposes: (1) an automatic \"transcript\" method that detects PIANO-FINGERING from video and MIDI files, when these are available, and (2) serve as a dataset for training models and then generalize to new pieces.", "Given a video and a MIDI file, our system produces a probability distribution over the fingers for each note played.", "Running this process on large corpora of piano pieces played by different artists, yields a total of 90 automatically finger-tagged pieces (containing 155,107 notes in total) and results in the first public large scale PIANO-FINGERING dataset, which we name APFD.", "This dataset will grow over time, as more videos are uploaded to YouTube.", "We provide empirical evidence that APFD is valuable, both by evaluating a model trained on it on manually labeled videos, and by fine-tuning the model on a manually created dataset, which achieves state-of-the-art results.", "The process of extracting PIANO-FINGERING from videos alone is a hard task as it needs to detect keyboard presses, which are often subtle even for the human eye.", "We, therefore, turn to MIDI files to obtain this information.", "The extraction steps are as follows: We begin by locating the keyboard and identifying each key on the keyboard ( §3.2).", "Then, we identify the playing hands on top of the keyboard ( §3.3), and detect the fingers given the hands bounding boxes ( §3.4).", "Next, we align the MIDI file and its corresponding video ( §3.6) and finally assign for every pressed note, the finger which was most likely used to play it ( §3.5).", "Despite expectations for steps like hand detection and pose estimation, which were extensively studied in the computer-vision literature, we find that in practice, state-of-the-art models do not excel in these tasks for our scenario.", "We therefore address these weaknesses by 
fine-tuning an object detection model ( §3.3) on a new dataset we introduce and training a CycleGAN (Zhu et al., 2017) to address the different lighting scenarios with the pose estimation model ( §3.4).", "In this work, we present an automatic method for detecting PIANO-FINGERING from MIDI and video files of a piano performance.", "We employ this method on a large set of videos, and create the first large scale PIANO-FINGERING dataset, containing 90 unique pieces, with 155,107 notes in total.", "We show this dataset, although noisy, is valuable, by training a neural network model on it, fine-tuning on a gold dataset, where we achieve state-of-the-art results.", "In future work, we intend to improve the data collection by improving the pose-estimation model, better handling high-speed movements and the proximity of the hands, which often cause errors in estimating their pose.", "Furthermore, we intend to design improved neural models that can take previous fingering predictions into account, in order to have a better global fingering transition." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.1904761791229248, 0.2222222238779068, 0.17777776718139648, 0.12244897335767746, 0.05714285373687744, 0.13793103396892548, 0.2222222238779068, 0.0624999962747097, 0.2857142686843872, 0.06451612710952759, 0.1395348757505417, 0.15789473056793213, 0.15686273574829102, 0, 0.15094339847564697, 0.13333332538604736, 0.04081632196903229, 0.17777776718139648, 0.1538461446762085, 0.05405404791235924, 0.05405404791235924, 0.08510638028383255, 0.12244897335767746, 0.07547169178724289, 0.21621620655059814, 0.1395348757505417, 0.04999999329447746, 0.1249999925494194, 0.19999998807907104 ]
H1MOqeHYvB
true
[ "We automatically extract fingering information from videos of piano performances, to be used in automatic fingering prediction models." ]
[ "Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable.", "A recent approach for finding a common representation of the two domains is via domain adversarial training (Ganin & Lempitsky, 2015), which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space.", "However, domain adversarial training faces two critical limitations:", "1) if the feature extraction function has high-capacity, then feature distribution matching is a weak constraint,", "2) in non-conservative domain adaptation (where no single classifier can perform well in both the source and target domains), training the model to do well on the source domain hurts performance on the target domain.", "In this paper, we address these issues through the lens of the cluster assumption, i.e., decision boundaries should not cross high-density data regions.", "We propose two novel and related models:", "1) the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes violation of the cluster assumption;", "2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation.", "Extensive empirical results demonstrate that the combination of these two models significantly improves the state-of-the-art performance on the digit, traffic sign, and Wi-Fi recognition domain adaptation benchmarks.", "The development of deep neural networks has enabled impressive performance in a wide variety of machine learning tasks.", "However, these advancements often rely on the existence of a large amount of labeled training data.", "In many cases, direct access to vast quantities of labeled data for the task of interest (the target 
domain) is either costly or otherwise absent, but labels are readily available for related training sets (the source domain).", "A notable example of this scenario occurs when the source domain consists of richly-annotated synthetic or semi-synthetic data, but the target domain consists of unannotated real-world data BID28 Vazquez et al., 2014) .", "However, the source data distribution is often dissimilar to the target data distribution, and the resulting significant covariate shift is detrimental to the performance of the source-trained model when applied to the target domain BID27 .Solving", "the covariate shift problem of this nature is an instance of domain adaptation BID2 . In this", "paper, we consider a challenging setting of domain adaptation where 1) we are", "provided with fully-labeled source samples and completely-unlabeled target samples, and 2) the existence", "of a classifier in the hypothesis space with low generalization error in both source and target domains is not guaranteed. Borrowing approximately", "the terminology from BID2 , we refer to this setting as unsupervised, non-conservative domain adaptation. We note that this is in", "contrast to conservative domain adaptation, where we assume our hypothesis space contains a classifier that performs well in both the source and target domains.To tackle unsupervised domain adaptation, BID9 proposed to constrain the classifier to only rely on domain-invariant features. This is achieved by training", "the classifier to perform well on the source domain while minimizing the divergence between features extracted from the source versus target domains. To achieve divergence minimization", ", BID9 employ domain adversarial training. 
We highlight two issues with this", "approach: 1) when the feature function has", "high-capacity and the source-target supports are disjoint, the domain-invariance constraint is potentially very weak (see Section 3), and 2) good generalization on the source", "domain hurts target performance in the non-conservative setting. BID24 addressed these issues by replacing", "domain adversarial training with asymmetric tri-training (ATT), which relies on the assumption that target samples that are labeled by a source-trained classifier with high confidence are correctly labeled by the source classifier. In this paper, we consider an orthogonal", "assumption: the cluster assumption BID5 , that the input distribution contains separated data clusters and that data samples in the same cluster share the same class label. This assumption introduces an additional", "bias where we seek decision boundaries that do not go through high-density regions. Based on this intuition, we propose two", "novel models: 1) the Virtual Adversarial Domain Adaptation", "(VADA) model which incorporates an additional virtual adversarial training BID20 and conditional entropy loss to push the decision boundaries away from the empirical data, and 2) the Decision-boundary Iterative Refinement", "Training with a Teacher (DIRT-T) model which uses natural gradients to further refine the output of the VADA model while focusing purely on the target domain. We demonstrate that 1. In conservative domain", "adaptation, where the", "classifier is trained to perform well on the source domain, VADA can be used to further constrain the hypothesis space by penalizing violations of the cluster assumption, thereby improving domain adversarial training. 2. In non-conservative domain adaptation, where", "we account for the mismatch between the source and target optimal classifiers, DIRT-T allows us to transition from a joint (source and target) classifier (VADA) to a better target domain classifier. 
Interestingly, we demonstrate the advantage", "of natural gradients in DIRT-T refinement steps. We report results for domain adaptation in digits classification (MNIST-M, MNIST, SYN DIGITS, SVHN), traffic sign classification (SYN SIGNS, GTSRB), general object classification (STL-10, CIFAR-10), and Wi-Fi activity recognition (Yousefi et al., 2017) . We show that, in nearly all experiments, VADA", "improves upon previous methods and that DIRT-T improves upon VADA, setting new state-of-the-art performances across a wide range of domain adaptation benchmarks. In adapting MNIST → SVHN, a very challenging", "task, we outperform ATT by over 20%.", "In this paper, we presented two novel models for domain adaptation inspired by the cluster assumption.", "Our first model, VADA, performs domain adversarial training with an added term that penalizes violations of the cluster assumption.", "Our second model, DIRT-T, is an extension of VADA that recursively refines the VADA classifier by untethering the model from the source training signal and applying approximate natural gradients to further minimize the cluster assumption violation.", "Our experiments demonstrate the effectiveness of the cluster assumption: VADA achieves strong performance across several domain adaptation benchmarks, and DIRT-T further improves VADA performance. Our proposed models open up several possibilities for future work.", "One possibility is to apply DIRT-T to weakly supervised learning; another is to improve the natural gradient approximation via K-FAC BID18 and PPO BID25 .", "Given the strong performance of our models, we also recommend them for other downstream domain adaptation applications.", "DISPLAYFORM0 Gaussian noise, σ = 1 DISPLAYFORM1 Gaussian noise, σ = 1 DISPLAYFORM2" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.22857142984867096, 0.08695651590824127, 0.1111111044883728, 0.07999999821186066, 0.22857142984867096, 0.11764705181121826, 0, 0.1875, 0.15789473056793213, 0.22857142984867096, 0, 0.1599999964237213, 0.04651162400841713, 0.10526315122842789, 0.1111111044883728, 0.25, 0.1818181723356247, 0.09090908616781235, 0.06451612710952759, 0.20000000298023224, 0.20000000298023224, 0.19354838132858276, 0.09090908616781235, 0.11764705181121826, 0.12903225421905518, 0.25, 0.2380952388048172, 0.1764705777168274, 0.06896550953388214, 0.1111111044883728, 0.054054051637649536, 0.1538461446762085, 0.1538461446762085, 0.22727271914482117, 0.10526315122842789, 0.07407407462596893, 0.10810810327529907, 0.11764705181121826, 0.4615384638309479, 0.27586206793785095, 0.190476194024086, 0.20512820780277252, 0.06451612710952759, 0.2222222238779068, 0 ]
H1q-TM-AW
true
[ "SOTA on unsupervised domain adaptation by leveraging the cluster assumption." ]
[ "In this paper, we propose Continuous Graph Flow, a generative continuous flow-based method that aims to model complex distributions of graph-structured data. ", "Once learned, the model can be applied to an arbitrary graph, defining a probability density over the random variables represented by the graph.", "It is formulated as an ordinary differential equation system with shared and reusable functions that operate over the graphs. ", "This leads to a new type of neural graph message passing scheme that performs continuous message passing over time.", "This class of models offers several advantages: a flexible representation that can generalize to variable data dimensions; ability to model dependencies in complex data distributions; reversible and memory-efficient; and exact and efficient computation of the likelihood of the data.", "We demonstrate the effectiveness of our model on a diverse set of generation tasks across different domains: graph generation, image puzzle generation, and layout generation from scene graphs.", "Our proposed model achieves significantly better performance compared to state-of-the-art models.", "Modeling and generating graph-structured data has important applications in various scientific fields such as building knowledge graphs (Lin et al., 2015; Bordes et al., 2011) , inventing new molecular structures (Gilmer et al., 2017) and generating diverse images from scene graphs (Johnson et al., 2018) .", "Being able to train expressive graph generative models is an integral part of AI research.", "Significant research effort has been devoted in this direction.", "Traditional graph generative methods (Erdős & Rényi, 1959; Leskovec et al., 2010; Albert & Barabási, 2002; Airoldi et al., 2008) are based on rigid structural assumptions and lack the capability to learn from observed data.", "Modern deep learning frameworks within the variational autoencoder (VAE) (Kingma & Welling, 2014) formalism offer promise of learning distributions from 
data.", "Specifically, for structured data, research efforts have focused on bestowing VAE-based generative models with the ability to learn structured latent space models (Lin et al., 2018; He et al., 2018; Kipf & Welling, 2016) .", "Nevertheless, their capacity is still limited mainly because of the assumptions placed on the form of distributions.", "Another class of graph generative models is based on autoregressive methods (You et al., 2018; Kipf et al., 2018) .", "These models construct graph nodes sequentially wherein each iteration involves generation of edges connecting a generated node in that iteration with the previously generated set of nodes.", "Such autoregressive models have been proven to be the most successful so far.", "However, due to the sequential nature of the generation process, the generation suffers from the inability to maintain long-term dependencies in larger graphs.", "Therefore, existing methods for graph generation are yet to realize the full potential of their generative power, particularly, the ability to model complex distributions with the flexibility to address variable data dimensions.", "Alternatively, for modeling the relational structure in data, graph neural networks (GNNs) or message passing neural networks (MPNNs) (Scarselli et al., 2009; Gilmer et al., 2017; Duvenaud et al., 2015; Kipf & Welling, 2017; Santoro et al., 2017; Zhang et al., 2018) have been shown to be effective in learning generalizable representations over variable input data dimensions.", "These models operate on the underlying principle of iterative neural message passing wherein the node representations are updated iteratively for a fixed number of steps.", "Hereafter, we use the term message passing to refer to this neural message passing in GNNs.", "We leverage this representational ability towards graph generation.", "In this paper, we introduce a new class of models, Continuous Graph Flow (CGF): a graph generative model based on continuous 
normalizing flows (Grathwohl et al., 2019) that Figure 1: Illustration of evolution of message passing mechanisms from discrete updates", "(a) to our proposed continuous updates", "(b).", "Continuous Graph Flow leverages normalizing flows to transform simple distributions (e.g. Gaussian) at t 0 to the target distributions at t 1 .", "The distribution of only one graph node is shown here for visualization, but, all the node distributions transform over time.", "generalizes the message passing mechanism in GNNs to continuous time.", "Specifically, to model continuous time dynamics of the graph variables, we adopt a neural ordinary differential equation (ODE) formulation.", "Our CGF model has both the flexibility to handle variable data dimensions (by using GNNs) and the ability to model arbitrarily complex data distributions due to free-form model architectures enabled by the neural ODE formulation.", "Inherently, the ODE formulation also imbues the model with the following properties: reversibility and exact likelihood computation.", "Concurrent work on Graph Normalizing Flows (GNF) (Liu et al., 2019) also proposes a reversible graph neural network using normalizing flows.", "However, their model requires a fixed number of transformations.", "In contrast, while our proposed CGF is also reversible and memory efficient, the underlying flow model relies on a continuous message passing scheme.", "Moreover, the message passing in GNF involves partitioning of data dimensions into two halves and employs coupling layers to couple them back.", "This leads to several constraints on function forms and model architectures that have a significant impact on performance (Kingma & Dhariwal, 2018) .", "In contrast, our CGF model has unconstrained (free-form) Jacobians, enabling it to learn more expressive transformations.", "Moreover, other similar work GraphNVP Madhawa et al. 
(2019) is also based on normalizing flows, whereas CGF models continuous time dynamics.", "We demonstrate the effectiveness of our CGF-based models on three diverse tasks: graph generation, image puzzle generation, and layout generation based on scene graphs.", "Experimental results show that our proposed model achieves significantly better performance than state-of-the-art models.", "In this paper, we presented continuous graph flow, a generative model that generalizes the neural message passing in graphs to continuous time.", "We formulated the model as a neural ordinary differential equation system with shared and reusable functions that operate over the graph structure.", "We conducted evaluation for a diverse set of generation tasks across different domains: graph generation, image puzzle generation, and layout generation for scene graphs.", "Experimental results showed that continuous graph flow achieves significant performance improvement over various state-of-the-art baselines.", "For future work, we will focus on generation tasks for large-scale graphs, which is promising as our model is reversible and memory-efficient." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.29999998211860657, 0.05405404791235924, 0.1111111044883728, 0.3636363446712494, 0.12765957415103912, 0.09756097197532654, 0.14814814925193787, 0, 0.25806450843811035, 0, 0.16326530277729034, 0.0555555522441864, 0.21276594698429108, 0.12903225421905518, 0.29411762952804565, 0.10256409645080566, 0.13793103396892548, 0.11764705181121826, 0.13636362552642822, 0.09836065024137497, 0.25641024112701416, 0.20689654350280762, 0, 0.3214285671710968, 0.1818181723356247, 0.11428570747375488, 0.11428570747375488, 0.38461539149284363, 0.2857142686843872, 0.09090908616781235, 0, 0.1538461446762085, 0.07999999821186066, 0.21052631735801697, 0.21052631735801697, 0.10810810327529907, 0.0624999962747097, 0.29999998211860657, 0.21052631735801697, 0.06666666269302368, 0.3243243098258972, 0.10810810327529907, 0.0555555522441864, 0.1249999925494194, 0.05405404791235924 ]
BkgZSCEtvr
true
[ "Graph generative models based on generalization of message passing to continuous time using ordinary differential equations " ]
[ "The practical successes of deep neural networks have not been matched by theoretical progress that satisfyingly explains their behavior.", "In this work, we study the information bottleneck (IB) theory of deep learning, which makes three specific claims: first, that deep networks undergo two distinct phases consisting of an initial fitting phase and a subsequent compression phase; second, that the compression phase is causally related to the excellent generalization performance of deep networks; and third, that the compression phase occurs due to the diffusion-like behavior of stochastic gradient descent.", "Here we show that none of these claims hold true in the general case.", "Through a combination of analytical results and simulation, we demonstrate that the information plane trajectory is predominantly a function of the neural nonlinearity employed: double-sided saturating nonlinearities like tanh yield a compression phase as neural activations enter the saturation regime, but linear activation functions and single-sided saturating nonlinearities like the widely used ReLU in fact do not.", "Moreover, we find that there is no evident causal connection between compression and generalization: networks that do not compress are still capable of generalization, and vice versa.", "Next, we show that the compression phase, when it exists, does not arise from stochasticity in training by demonstrating that we can replicate the IB findings using full batch gradient descent rather than stochastic gradient descent.", "Finally, we show that when an input domain consists of a subset of task-relevant and task-irrelevant information, hidden representations do compress the task-irrelevant information, although the overall information about the input may monotonically increase with training time, and that this compression happens concurrently with the fitting process rather than during a subsequent compression period.", "Deep neural networks (Schmidhuber, 2015) are the tool
of choice for real-world tasks ranging from visual object recognition BID16, to unsupervised learning BID11 BID19 and reinforcement learning (Silver et al., 2016).", "These practical successes have spawned many attempts to explain the performance of deep learning systems BID12, mostly in terms of the properties and dynamics of the optimization problem in the space of weights (Saxe et al., 2014; BID8 BID1), or the classes of functions that can be efficiently represented by deep networks (BID20; Poggio et al., 2017).", "This paper analyzes a recent inventive proposal to study the dynamics of learning through the lens of information theory (Tishby & Zaslavsky, 2015; Shwartz-Ziv & Tishby, 2017).", "In this view, deep learning is a question of representation learning: each layer of a deep neural network can be seen as a set of summary statistics which contain some but not all of the information present in the input, while retaining as much information about the target output as possible.", "The amount of information in a hidden layer regarding the input and output can then be measured over the course of learning, yielding a picture of the optimization process in the information plane.", "Crucially, this method holds the promise to serve as a general analysis that can be used to compare different architectures, using the common currency of mutual information.", "Moreover, the elegant information bottleneck (IB) theory provides a fundamental bound on the amount of input compression and target output information that any representation can achieve (Tishby et al., 1999).", "The IB bound thus serves as a method-agnostic ideal to which different architectures and algorithms may be compared. A preliminary empirical exploration of these ideas in deep neural networks has yielded striking findings (Shwartz-Ziv & Tishby, 2017).", "Most saliently, trajectories in the information plane appear to consist of two distinct phases: an initial \"fitting\" phase where mutual
information between the hidden layers and both the input and output increases, and a subsequent "compression" phase where mutual information between the hidden layers and the input decreases.", "It has been hypothesized that this compression phase is responsible for the excellent generalization performance of deep networks, and further, that this compression phase occurs due to the random diffusion-like behavior of stochastic gradient descent. Here we study these phenomena using a combination of analytical methods and simulation.", "In Section 2, we show that the compression observed by Shwartz-Ziv & Tishby (2017) arises primarily due to the double-saturating tanh activation function used.", "Using simple models, we elucidate the effect of neural nonlinearity on the compression phase.", "Importantly, we demonstrate that the ReLU activation function, often the nonlinearity of choice in practice, does not exhibit a compression phase.", "We discuss how this compression via nonlinearity is related to the assumption of binning or noise in the hidden layer representation.", "To better understand the dynamics of learning in the information plane, in Section 3 we study deep linear networks in a tractable setting where the mutual information can be calculated exactly.", "We find that deep linear networks do not compress over the course of training for the setting we examine.", "Further, we show a dissociation between generalization and compression.", "In Section 4, we investigate whether stochasticity in the training process causes compression in the information plane.", "We train networks with full batch gradient descent, and compare the results to those obtained with stochastic gradient descent.", "We find comparable compression in both cases, indicating that the stochasticity of SGD is not a primary factor in the observed compression phase.", "Moreover, we show that the two phases of SGD occur even in networks that do not compress, demonstrating that the phases are not
causally related to compression.", "These results may seem difficult to reconcile with the intuition that compression can be necessary to attain good performance: if some input channels primarily convey noise, good generalization requires excluding them.", "Therefore, in Section 5 we study a situation with explicitly task-relevant and task-irrelevant input dimensions.", "We show that the hidden-layer mutual information with the task-irrelevant subspace does indeed drop during training, though the overall information with the input increases.", "However, instead of a secondary compression phase, this task-irrelevant information is compressed at the same time that the task-relevant information is boosted.", "Our results highlight the importance of noise assumptions in applying information-theoretic analyses to deep learning systems, and put in doubt the generality of the IB theory of deep learning as an explanation of generalization performance in deep architectures.", "Our results suggest that compression dynamics in the information plane are not a general feature of deep networks, but are critically influenced by the nonlinearities employed by the network.", "Double-saturating nonlinearities lead to compression if mutual information is estimated by binning activations or by adding homoscedastic noise, while single-sided saturating nonlinearities like ReLUs do not compress in general.", "Consistent with this view, we find that stochasticity in the training process does not contribute to compression in the cases we investigate.", "Furthermore, we have found instances where generalization performance does not clearly track information plane behavior, questioning the causal link between compression and generalization.", "Hence information compression may parallel the situation with sharp minima: although empirical evidence has shown a correlation with generalization error in certain settings and architectures, further theoretical analysis has shown that sharp minima can
in fact generalize well BID9.", "We emphasize that compression may still occur within a subset of the input dimensions if the task demands it.", "This compression, however, is interleaved rather than occurring in a secondary phase, and may not be visible to information metrics that track the overall information between a hidden layer and the input.", "Finally, we note that our results address the specific claims of one scheme to link the information bottleneck principle with current practice in deep networks.", "The information bottleneck principle itself is more general and may yet offer important insights into deep networks BID0.", "Moreover, the information bottleneck principle could yield fundamentally new training algorithms for networks that are inherently stochastic and where compression is explicitly encouraged with appropriate regularization terms BID5 BID2." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.21621620655059814, 0.20000000298023224, 0.5625, 0.1875, 0.1860465109348297, 0.20408162474632263, 0.1666666567325592, 0.16326530277729034, 0.1875, 0.2380952388048172, 0.24137930572032928, 0.1860465109348297, 0.23255813121795654, 0.25531914830207825, 0.1090909019112587, 0.1599999964237213, 0.13793103396892548, 0.1463414579629898, 0.12903225421905518, 0.2631579041481018, 0.21052631735801697, 0.27272728085517883, 0.3333333134651184, 0.07407406717538834, 0.1818181723356247, 0.11428570747375488, 0.31578946113586426, 0.3499999940395355, 0.08510638028383255, 0.060606054961681366, 0.2702702581882477, 0.21621620655059814, 0.2978723347187042, 0.41860464215278625, 0.17777776718139648, 0.21621620655059814, 0.14999999105930328, 0.1538461446762085, 0.2222222238779068, 0.2222222238779068, 0.380952388048172, 0.2222222238779068, 0.21276594698429108 ]
ry_WPG-A-
true
[ "We show that several claims of the information bottleneck theory of deep learning are not true in the general case." ]
[ "Over the past four years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions.", "We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs.", "For most current network architectures, we prove that the L1-norm of these gradients grows as the square root of the input size.", "These nets therefore become increasingly vulnerable with growing image size.", "Our proofs rely on the network’s weight distribution at initialization, but extensive experiments confirm that our conclusions still hold after usual training.", "Following the work of BID7, Convolutional Neural Networks (CNNs) have been found vulnerable to adversarial examples: an adversary can drive the performance of state-of-the-art CNNs down to chance level with imperceptible changes of the inputs.", "A number of studies have tried to address this issue, but only a few have stressed that, because adversarial examples are essentially small input changes that create large output variations, they are inherently caused by large gradients of the neural network with respect to its inputs.", "Of course, this view, which we will focus on here, assumes that the network and loss are differentiable.", "It has the advantage of yielding a large body of specific mathematical tools, but might not be easily extendable to masked gradients, non-smooth models or the 0-1-loss.", "Nevertheless, our conclusions might even hold for non-smooth models, given that the latter can often be viewed as smooth at a coarser level. Contributions.", "More specifically, we provide theoretical and empirical arguments supporting the existence of a monotonic relationship between the gradient norm of the training objective (of a differentiable classifier) and its adversarial vulnerability.", "Evaluating this norm based on the weight statistics at initialization, we show that
CNNs and most feed-forward networks, by design, exhibit increasingly large gradients with input dimension d, almost independently of their architecture.", "That leaves them increasingly vulnerable to adversarial noise.", "We corroborate our theoretical results by extensive experiments.", "Although some of those experiments involve adversarial regularization schemes, our goal is not to advocate a new adversarial defense (these schemes are already known), but to show how their effect can be explained by our first-order analysis.", "We do not claim to explain all aspects of adversarial vulnerability, but we claim that our first-order argument suffices to explain a significant part of the empirical findings on adversarial vulnerability.", "This calls for researching the design of neural network architectures with inherently smaller gradients and provides useful guidelines to practitioners and network designers.", "For differentiable classifiers and losses, we showed that adversarial vulnerability increases with the gradients ∂_x L of the loss, which is confirmed by the near-perfect functional relationship between gradient norms and vulnerability (FIG1). We then evaluated the size of ‖∂_x L‖_q and showed that, at initialization, usual feed-forward nets (convolutional or fully connected) are increasingly vulnerable to ℓ_p-attacks with growing input dimension d (the image size), almost independently of their architecture.", "Our experiments show that, on the tested architectures, usual training escapes those prior gradient (and vulnerability) properties on the training set, but not on the test set.", "BID14 suggest that alleviating this generalization gap requires more data.", "But a natural (complementary) alternative would be to search for architectures with naturally smaller gradients and, in particular, with well-behaved priors.", "Despite all their limitations (being only first-order, assuming a prior weight distribution and a differentiable loss and architecture), our
theoretical insights may thereby still prove to be precious future allies." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11428570747375488, 0.13333332538604736, 0.12903225421905518, 0.1818181723356247, 0.05882352590560913, 0.13636362552642822, 0.19230769574642181, 0.06666666269302368, 0.05405404791235924, 0.0555555522441864, 0, 0.17777776718139648, 0.19999998807907104, 0.09999999403953552, 0.04255318641662598, 0.05128204822540283, 0.060606054961681366, 0.13698630034923553, 0, 0.09090908616781235, 0, 0 ]
H1MzKs05F7
true
[ "Neural nets have large gradients by design; that makes them adversarially vulnerable." ]
[ "Effective performance of neural networks depends critically on effective tuning of optimization hyperparameters, especially learning rates (and schedules thereof).", "We present Amortized Proximal Optimization (APO), which takes the perspective that each optimization step should approximately minimize a proximal objective (similar to the ones used to motivate natural gradient and trust region policy optimization).", "Optimization hyperparameters are adapted to best minimize the proximal objective after one weight update.", "We show that an idealized version of APO (where an oracle minimizes the proximal objective exactly) achieves global convergence to a stationary point and local second-order convergence to a global optimum for neural networks.", "APO incurs minimal computational overhead.", "We experiment with using APO to adapt a variety of optimization hyperparameters online during training, including (possibly layer-specific) learning rates, damping coefficients, and gradient variance exponents.", "For a variety of network architectures and optimization algorithms (including SGD, RMSprop, and K-FAC), we show that with minimal tuning, APO performs competitively with carefully tuned optimizers.", "Tuning optimization hyperparameters can be crucial for effective performance of a deep learning system.", "Most famously, carefully selected learning rate schedules have been instrumental in achieving state-of-the-art performance on challenging datasets such as ImageNet BID6 and WMT BID36.", "Even algorithms such as RMSprop BID34 and Adam (Kingma & Ba, 2015), which are often interpreted in terms of coordinatewise adaptive learning rates, still have a global learning rate parameter which is important to tune.", "A wide variety of learning rate schedules have been proposed BID24 BID14 BID2.", "Seemingly unrelated phenomena have been explained in terms of effective learning rate schedules BID35.", "Besides learning rates, other hyperparameters have been identified as
important, such as the momentum decay factor BID31, the batch size BID28, and the damping coefficient in second-order methods BID20 BID19. There", "have been many attempts to adapt optimization hyperparameters to minimize the training error after a small number of updates BID24 BID1 BID2. This", "approach faces two fundamental obstacles: first, learning rates and batch sizes have been shown to affect generalization performance because stochastic updates have a regularizing effect BID5 BID18 BID27 BID35. Second", ", minimizing the short-horizon expected loss encourages taking very small steps to reduce fluctuations at the expense of long-term progress BID37. While", "these effects are specific to learning rates, they present fundamental obstacles to tuning any optimization hyperparameter, since basically any optimization hyperparameter somehow influences the size of the updates. In this paper, we take the perspective that the optimizer's job in each iteration is to approximately minimize a proximal objective which trades off the loss on the current batch with the average change in the predictions. Specifically", ", we consider proximal objectives of the form J(φ) = h(f(g(θ, φ))) + λD(f(θ), f(g(θ, φ))), where f is a model with parameters θ, h is an approximation to the objective function, g is the base optimizer update with hyperparameters φ, and D is a distance metric. Indeed, approximately", "solving such a proximal objective motivated the natural gradient algorithm BID0, as well as proximal reinforcement learning algorithms BID26. We introduce Amortized", "Proximal Optimization (APO), an approach which adapts optimization hyperparameters to minimize the proximal objective in each iteration.
We use APO to tune hyperparameters", "of SGD, RMSprop, and K-FAC; the hyperparameters we consider include (possibly layer-specific) learning rates, damping coefficients, and the power applied to the gradient covariances. Notice that APO has a hyperparameter λ which controls the aggressiveness of the updates. We believe such a hyperparameter is", "necessary until the aforementioned issues surrounding stochastic regularization and short-horizon bias are better understood. However, in practice we find that by", "performing a simple grid search over λ, we can obtain automatically-tuned learning rate schedules that are competitive with manual learning rate decay schedules. Furthermore, APO can automatically adapt", "several optimization hyperparameters with only a single hand-tuned hyperparameter. We provide theoretical justification for APO by proving strong convergence results for an oracle which solves the proximal objective exactly in each iteration. In particular, we show global linear convergence", "and locally quadratic convergence under mild assumptions. These results motivate the proximal objective as", "a useful target for meta-optimization. We evaluate APO on real-world tasks including image classification on MNIST, CIFAR-10, CIFAR-100, and SVHN. We show that adapting learning rates online via", "APO yields faster training convergence than the best fixed learning rates for each task, and is competitive with manual learning rate decay schedules.
Although we focus on fast optimization of the training", "objective, we also find that the solutions found by APO generalize at least as well as those found by fixed hyperparameters or fixed schedules.", "We introduced amortized proximal optimization (APO), a method for online adaptation of optimization hyperparameters, including global and per-layer learning rates, and damping parameters for approximate second-order methods.", "We evaluated our approach on real-world neural network optimization tasks (training MLP and CNN models) and showed that it converges faster and generalizes better than optimal fixed learning rates.", "Empirically, we showed that our method overcomes short-horizon bias and performs well with sensible default values for the meta-optimization parameters. Guodong Zhang, Chaoqi Wang, Bowen Xu, and Roger Grosse.", "Three mechanisms of weight decay regularization.", "arXiv preprint arXiv:1810.12281, 2018. A PROOF OF THEOREM 1. We first introduce the following lemma: Lemma 1.", "Assume the manifold is smooth with C-bounded curvature and that the gradient norm of the loss function L is upper bounded by G. If the effective gradient at a point Z_k ∈ M is g_k, then for any DISPLAYFORM0 Proof.", "We construct a Z satisfying the above inequality.", "Consider the following point in R^d: DISPLAYFORM1 We show that Z is a point satisfying the inequality in the lemma.", "Firstly, we notice that DISPLAYFORM2 This is because, when we introduce the extra curve ṽ, DISPLAYFORM3 Here we use the fact that v̄ = 0 and ‖v‖ ≤ C. Therefore we have DISPLAYFORM4 Here the first equality is by introducing the extra Y, the first inequality is by the triangle inequality, the second equality is by the definition of g_k being ∇_Z L(Z_k) projected onto a plane, the second inequality is due to the above bound on ‖Y − Z‖, and the last inequality is due to DISPLAYFORM5; therefore DISPLAYFORM6, which completes the proof." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1395348757505417, 0.28070175647735596, 0.1538461446762085, 0.18867923319339752, 0, 0.7843137383460999, 0.19999998807907104, 0.25641024112701416, 0.08163265138864517, 0.20338982343673706, 0.15789473056793213, 0.10256409645080566, 0.18518517911434174, 0.25531914830207825, 0.14814814925193787, 0.08695651590824127, 0.17721518874168396, 0.1764705777168274, 0.260869562625885, 0.260869562625885, 0.3606557250022888, 0.04347825422883034, 0.12244897335767746, 0.16129031777381897, 0.10256409645080566, 0.23529411852359772, 0.14814814925193787, 0.04444443807005882, 0.5714285373687744, 0.15686273574829102, 0.07407406717538834, 0.06451612710952759, 0.04878048226237297, 0.06896550953388214, 0.0624999962747097, 0.09302324801683426, 0.11363635957241058 ]
rJl6M2C5Y7
true
[ "We introduce amortized proximal optimization (APO), a method to adapt a variety of optimization hyperparameters online during training, including learning rates, damping coefficients, and gradient variance exponents." ]
[ "Dense word vectors have proven their value in many downstream NLP tasks over the past few years.", "However, the dimensions of such embeddings are not easily interpretable.", "Out of the d dimensions in a word vector, we would not be able to understand what high or low values mean.", "Previous approaches addressing this issue have mainly focused on either training sparse/non-negative constrained word embeddings, or post-processing standard pre-trained word embeddings.", "On the other hand, we analyze conventional word embeddings trained with Singular Value Decomposition, and reveal similar interpretability.", "We use a novel eigenvector analysis method inspired by Random Matrix Theory and show that semantically coherent groups not only form in the row space, but also in the column space.", "This allows us to view individual word vector dimensions as human-interpretable semantic features.", "Understanding words has a fundamental impact on many natural language processing tasks, and has been modeled with the Distributional Hypothesis BID0.", "Dense d-dimensional vector representations of words created from this model are often referred to as word embeddings, and have successfully captured similarities between words, as in word2vec and GloVe BID1 BID2.", "They have also been applied to downstream NLP tasks as word representation features, ranging from sentiment analysis to machine translation BID3 BID4. Despite", "their widespread popularity in usage, the dimensions of these word vectors are difficult to interpret BID5. Consider", "w_president = [0.1, 2.4, 0.3] as the 3-dimensional vector of \"president\" from word2vec. In this", "3-dimensional space (or the row space), semantically similar words like \"minister\" and \"president\" are closely located. However", ", it is unclear what the dimensions represent, as we do not know the meaning of the 2.4 in w_president.
It is", "difficult to answer questions like 'what is the meaning of high and low values in the columns of W' and 'how can we interpret the dimensions of word vectors'. To address", "this problem, previous literature focused on the column space by either training word embeddings with sparse and non-negative constraints BID6 BID7 BID8, or post-processing pre-trained word embeddings BID5 BID9 BID10. We instead", "investigate this problem from a random matrix perspective. In our work, we analyze the eigenvectors of word embeddings obtained with truncated Singular Value Decomposition (SVD) BID11 BID12 of the Positive Pointwise Mutual Information (PPMI) matrix BID13. Moreover,", "we compare this analysis with the row and column space analysis of Skip-Gram Negative Sampling (SGNS), a model used to train word2vec BID14. From the", "works of BID15 proving that both SVD and SGNS factorize and approximate the same matrix, we hypothesize that a study of the principal eigenvectors of the PPMI matrix reflects the information contained in SGNS. Contributions: Without requiring any constraints or post-processing, we show that the dimensions of word vectors can be interpreted as semantic features.
In doing", "so, we also introduce novel word embedding analysis methods inspired by the literature on eigenvector analysis techniques from Random Matrix Theory.", "In this work, we analyzed the eigenvectors, or the column space, of the word embeddings obtained from the Singular Value Decomposition of the PPMI matrix.", "We revealed that the significant participants of the eigenvectors form semantically coherent groups, allowing us to view each word vector as an interpretable feature vector composed of semantic groups.", "These results can be very useful for error analysis in downstream NLP tasks, or for cherry-picking useful feature dimensions to easily create compressed and efficient task-specific embeddings.", "Future work will proceed in this direction, applying interpretability to practical usage." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.1538461446762085, 0.1875, 0.2790697515010834, 0.0952380895614624, 0.14999999105930328, 0.11764705181121826, 0.2857142686843872, 0.0476190410554409, 0.11764705181121826, 0.09090908616781235, 0.25641024112701416, 0.1463414579629898, 0.05128204822540283, 0.23255813121795654, 0.25, 0.15094339847564697, 0.1428571343421936, 0.1304347813129425, 0.5970149040222168, 0.1904761791229248, 0.2380952388048172, 0.25, 0.17391303181648254, 0 ]
rJfJiR5ooX
true
[ "Without requiring any constraints or post-processing, we show that the salient dimensions of word vectors can be interpreted as semantic features. " ]
[ "Neural networks in the brain and in neuromorphic chips confer systems with the ability to perform multiple cognitive tasks.", "However, both kinds of networks experience a wide range of physical perturbations, ranging from damage to edges of the network to complete node deletions, that ultimately could lead to network failure.", "A critical question is to understand how the computational properties of neural networks change in response to node damage, and whether there exist strategies to repair these networks in order to compensate for performance degradation.", "Here, we study the damage-response characteristics of two classes of neural networks, namely multilayer perceptrons (MLPs) and convolutional neural networks (CNNs), trained to classify images from the MNIST and CIFAR-10 datasets, respectively.", "We also propose a new framework to discover efficient repair strategies to rescue damaged neural networks.", "The framework involves defining damage and repair operators for dynamically traversing the neural network's loss landscape, with the goal of mapping its salient geometric features.", "Using this strategy, we discover features that resemble path-connected attractor sets in the loss landscape.", "We also identify that a dynamic recovery scheme, where networks are constantly damaged and repaired, produces a group of networks resilient to damage, as they can be quickly rescued.", "Broadly, our work shows that we can design fault-tolerant networks by applying on-line retraining consistently during damage for real-time applications in biology and machine learning." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.17391304671764374, 0.125, 0.22857142984867096, 0.1764705926179886, 0.4761904776096344, 0.20000000298023224, 0, 0.1818181723356247, 0.06451612710952759 ]
HkgVE7YUIH
false
[ "strategy to repair damaged neural networks" ]
[ "Automatic question generation from paragraphs is an important and challenging problem, particularly due to the long context from paragraphs.", "In this paper, we propose and study two hierarchical models for the task of question generation from paragraphs.", "Specifically, we propose", "(a) a novel hierarchical BiLSTM model with selective attention and", "(b) a novel hierarchical Transformer architecture, both of which learn hierarchical representations of paragraphs. \n", "We model a paragraph in terms of its constituent sentences, and a sentence in terms of its constituent words.", "While the introduction of the attention mechanism benefits the hierarchical BiLSTM model, the hierarchical Transformer, with its inherent attention and positional encoding mechanisms, also performs better than the flat Transformer model.\n", "We conducted empirical evaluation on the widely used SQuAD and MS MARCO datasets using standard metrics. \n", "The results demonstrate the overall effectiveness of the hierarchical models over their flat counterparts.
\n", "Qualitatively, our hierarchical models are able to generate fluent and relevant questions.\n", "Question Generation (QG) from text has gained significant popularity in recent years in both academia and industry, owing to its wide applicability in a range of scenarios including conversational agents, automating reading comprehension assessment, and improving question answering systems by generating additional training data.", "Neural network-based methods represent the state-of-the-art for automatic question generation.", "These models do not require templates or rules, and are able to generate fluent, high-quality questions.", "Most of the work in question generation takes sentences as input (Du & Cardie, 2018; Kumar et al., 2018; Song et al., 2018; Kumar et al., 2019 ).", "QG at the paragraph level is much less explored and has remained a challenging problem.", "The main challenges in paragraph-level QG stem from the larger context that the model needs to assimilate in order to generate relevant questions of high quality.", "Existing question generation methods are typically based on recurrent neural networks (RNN), such as the bi-directional LSTM.", "Equipped with different enhancements such as the attention, copy and coverage mechanisms, RNN-based models (Du et al., 2017; Kumar et al., 2018; Song et al., 2018) achieve good results on sentence-level question generation.", "However, due to their ineffectiveness in dealing with long sequences, paragraph-level question generation remains a challenging problem for these models.", "Recently, Zhao et al. 
(2018) proposed a paragraph-level QG model with maxout pointers and a gated self-attention encoder.", "To the best of our knowledge, this is the only model designed to support paragraph-level QG, and it outperforms other models on the SQuAD dataset (Rajpurkar et al., 2016) .", "One straightforward extension to such a model would be to reflect the structure of a paragraph in the design of the encoder.", "Our first attempt is indeed a hierarchical BiLSTM-based paragraph encoder (HPE), wherein the hierarchy comprises the word-level encoder that feeds its encoding to the sentence-level encoder.", "Further, dynamic paragraph-level contextual information in the BiLSTM-HPE is incorporated via both word- and sentence-level selective attention.", "However, the LSTM is based on the recurrent architecture of RNNs, making the model somewhat rigid and less dynamically sensitive to different parts of the given sequence.", "Also, LSTM models are slower to train.", "In our case, a paragraph is a sequence of sentences and a sentence is a sequence of words.", "The Transformer (Vaswani et al., 2017 ) is a recently proposed neural architecture designed to address some deficiencies of RNNs.", "Specifically, the Transformer is based on the (multi-head) attention mechanism, completely discarding recurrence in RNNs.", "This design choice allows the Transformer to effectively attend to different parts of a given sequence.", "Also, the Transformer is much faster to train and test than RNNs.", "As humans, when reading a paragraph, we look for important sentences first and then important keywords in those sentences to find a concept around which a question can be generated.", "Taking this inspiration, we give the same power to our model by incorporating word-level and sentence-level selective attention to generate high-quality questions from paragraphs.", "In this paper, we present and contrast novel approaches to QG at the level of paragraphs.", "Our main contributions are as follows:", 
"• We present two hierarchical models for encoding the paragraph based on its structure.", "We analyse the effectiveness of these models for the task of automatic question generation from paragraphs.", "• Specifically, we propose a novel hierarchical Transformer architecture.", "At the lower level, the encoder first encodes words and produces a sentence-level representation.", "At the higher level, the encoder aggregates the sentence-level representations and learns a paragraph-level representation.", "• We also propose a novel hierarchical BiLSTM model with selective attention, which learns to attend to important sentences and words from the paragraph that are relevant for generating meaningful and fluent questions about the encoded answer.", "• We also present attention mechanisms for dynamically incorporating contextual information in the hierarchical paragraph encoders and experimentally validate their effectiveness.", "In Table 1 and Table 2 we present automatic evaluation results of all models on the SQuAD and MS MARCO datasets, respectively.", "We present human evaluation results in Table 3 and Table 4 , respectively.", "A number of interesting observations can be made from the automatic evaluation results in Table 1 and Table 4 : Human evaluation results (column \"Score\") as well as inter-rater agreement (column \"Kappa\") for each model on the MS MARCO test set.", "The scores are between 0-100, 0 being the worst and 100 being the best.", "Best results for each metric (column) are bolded.", "The three evaluation criteria are: (1) syntactically correct (Syntax), (2) semantically correct (Semantics), and (3) relevant to the text (Relevance).", "• Overall, the hierarchical BiLSTM model HierSeq2Seq + AE shows the best performance, achieving the best results on the BLEU2-BLEU4 metrics on the SQuAD dataset, whereas the hierarchical Transformer model TransSeq2Seq + AE performs best on BLEU1 and ROUGE-L on the SQuAD dataset.", "• Compared to the flat LSTM and Transformer models, their 
respective hierarchical counterparts always perform better on both the SQuAD and MS MARCO datasets.", "• On the MS MARCO dataset, we observe the best consistent performance using the hierarchical BiLSTM models on all automatic evaluation metrics.", "• On the MS MARCO dataset, the two LSTM-based models outperform the two Transformer-based models.", "Interestingly, human evaluation results, as tabulated in Table 3 and Table 4 , demonstrate that the hierarchical Transformer model TransSeq2Seq + AE outperforms all the other models on both datasets in both syntactic and semantic correctness.", "However, the hierarchical BiLSTM model HierSeq2Seq + AE achieves best, and significantly better, relevance scores on both datasets.", "From the evaluation results, we can see that our proposed hierarchical models demonstrate benefits over their respective flat counterparts in a significant way.", "Thus, for paragraph-level question generation, the hierarchical representation of paragraphs is a worthy pursuit.", "Moreover, the Transformer architecture shows great potential over the more traditional RNN models such as BiLSTM as shown in human evaluation.", "Thus the continued investigation of hierarchical Transformer is a promising research avenue.", "In the Appendix, in Section B, we present several examples that illustrate the effectiveness of our Hierarchical models.", "In Section C of the appendix, we present some failure cases of our model, along with plausible explanations.", "We proposed two hierarchical models for the challenging task of question generation from paragraphs, one of which is based on a hierarchical BiLSTM model and the other is a novel hierarchical Transformer architecture.", "We perform extensive experimental evaluation on the SQuAD and MS MARCO datasets using standard metrics.", "Results demonstrate the hierarchical representations to be overall much more effective than their flat counterparts.", "The hierarchical models for both Transformer and BiLSTM 
clearly outperform their flat counterparts on all metrics in almost all cases.", "Further, our experimental results validate that hierarchical selective attention benefits the hierarchical BiLSTM model.", "Qualitatively, our hierarchical models also exhibit a better capability of generating fluent and relevant questions." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3199999928474426, 0.38461539149284363, 0, 0.1111111044883728, 0.0952380895614624, 0.0952380895614624, 0.05882352590560913, 0.07999999821186066, 0.1818181723356247, 0.1904761791229248, 0.08163265138864517, 0.21052631735801697, 0.0833333283662796, 0.13333332538604736, 0.0833333283662796, 0.06451612710952759, 0.1666666567325592, 0.15789473056793213, 0.2142857164144516, 0, 0.0555555522441864, 0.07999999821186066, 0.1249999925494194, 0, 0, 0.13333332538604736, 0.09999999403953552, 0, 0, 0, 0, 0.05882352590560913, 0.06451612710952759, 0, 0, 0.27272728085517883, 0.45454543828964233, 0.11764705181121826, 0, 0, 0.1463414579629898, 0.13793103396892548, 0.07407406717538834, 0, 0.04651162400841713, 0, 0, 0, 0.0555555522441864, 0.06666666269302368, 0.2142857164144516, 0.10526315122842789, 0.10256409645080566, 0.07692307233810425, 0.12903225421905518, 0.1818181723356247, 0.07407406717538834, 0.09999999403953552, 0.07999999821186066, 0, 0.2857142686843872, 0.08695651590824127, 0.08695651590824127, 0.14814814925193787, 0.0952380895614624, 0.1818181723356247 ]
BJeVXgBKDH
true
[ "Automatic question generation from paragraph using hierarchical models" ]
[ "Many deployed learned models are black boxes: given input, returns output.", "Internal information about the model, such as the architecture, optimisation procedure, or training data, is not disclosed explicitly as it might contain proprietary information or make the system more vulnerable.", "This work shows that such attributes of neural networks can be exposed from a sequence of queries.", "This has multiple implications.", "On the one hand, our work exposes the vulnerability of black-box neural networks to different types of attacks -- we show that the revealed internal information helps generate more effective adversarial examples against the black box model.", "On the other hand, this technique can be used for better protection of private content from automatic recognition models using adversarial examples.", "Our paper suggests that it is actually hard to draw a line between white box and black box models.", "Black-box models take a sequence of query inputs, and return corresponding outputs, while keeping internal states such as model architecture hidden.", "They are deployed as black boxes usually on purpose -for protecting intellectual properties or privacy-sensitive training data.", "Our work aims at inferring information about the internals of black box models -ultimately turning them into white box models.", "Such a reverse-engineering of a black box model has many implications.", "On the one hand, it has legal implications to intellectual properties (IP) involving neural networks -internal information about the model can be proprietary and a key IP, and the training data may be privacy sensitive.", "Disclosing hidden details may also render the model more susceptible to attacks from adversaries.", "On the other hand, gaining information about a black-box model can be useful in other scenarios.", "E.g. there has been work on utilising adversarial examples for protecting private regions (e.g. 
faces) in photographs from automatic recognisers BID12 .", "In such scenarios, gaining more knowledge on the recognisers will increase the chance of protecting one's privacy.", "Either way, it is a crucial research topic to investigate the type and amount of information that can be gained from black-box access to a model.", "We make a first step towards understanding the connection between white box and black box approaches - which were previously thought of as distinct classes. We introduce the term \"model attributes\" to refer to various types of information about a trained neural network model.", "We group them into three types: (1) architecture (e.g. type of non-linear activation), (2) optimisation process (e.g. SGD or ADAM?), and (3) training data (e.g. which dataset?).", "We approach the problem as a standard supervised learning task applied over models.", "First, collect a diverse set of white-box models (\"meta-training set\") that are expected to be similar to the target black box at least to a certain extent.", "Then, over the collected meta-training set, train another model (\"metamodel\") that takes a model as input and returns the corresponding model attributes as output.", "Importantly, since we want to predict attributes at test time for black-box models, the only information available for attribute prediction is the query input-output pairs.", "As we will see in the experiments, such input-output pairs allow us to predict model attributes surprisingly well. In summary, we contribute: (1) Investigation of the type and amount of internal information about the black-box model that can be extracted from querying; (2) Novel metamodel methods that not only reason over outputs from static query inputs, but also actively optimise query inputs that can extract more information; (3) Study of factors like the size of the meta-training set, the quantity and quality of queries, and the dissimilarity between the meta-training models and the test black box 
(generalisability); (4) Empirical verification that the revealed information leads to greater susceptibility of a black-box model to an adversarial example based attack.", "We have verified through our novel kennen metamodels that black-box access to a neural network exposes much internal information.", "We have shown that only 100 single-label outputs already reveal a great deal about a black box.", "When the black-box classifier is quite different from the metatraining classifiers, the performance of our best metamodel, kennen-io, decreases; however, the prediction accuracy for black-box internal information is still surprisingly high.", "Our metamodel can predict architecture families for ImageNet classifiers with high accuracy.", "We additionally show that this reverse-engineering enables more focused attacks on black boxes.", "We have presented first results on the inference of diverse neural network attributes from a sequence of input-output queries.", "Our novel metamodel methods, kennen, can successfully predict attributes related not only to the architecture but also to training hyperparameters (optimisation algorithm and dataset) even in difficult scenarios (e.g. single-label output, or a distribution gap between the metatraining models and the target black box).", "We have additionally shown in ImageNet experiments that reverse-engineering a black box makes it more vulnerable to adversarial examples." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.0624999962747097, 0.08695651590824127, 0.21621620655059814, 0, 0.25925925374031067, 0.1395348757505417, 0.1538461446762085, 0.0952380895614624, 0.052631575614213943, 0.25641024112701416, 0.25806450843811035, 0.1538461446762085, 0.05714285373687744, 0.2222222238779068, 0.09302324801683426, 0.05405404791235924, 0.2222222238779068, 0.27586206793785095, 0.04255318641662598, 0.05882352590560913, 0.17777776718139648, 0.04878048226237297, 0.1818181723356247, 0.1682242900133133, 0.29999998211860657, 0.2702702581882477, 0.2916666567325592, 0.060606054961681366, 0, 0.25641024112701416, 0.09677419066429138, 0.14999999105930328 ]
BydjJte0-
true
[ "Querying a black-box neural network reveals a lot of information about it; we propose novel \"metamodels\" for effectively extracting information from a black box." ]
[ "Inverse reinforcement learning (IRL) is used to infer the reward function from the actions of an expert running a Markov Decision Process (MDP).", "A novel approach using variational inference for learning the reward function is proposed in this research.", "Using this technique, the intractable posterior distribution of the continuous latent variable (the reward function in this case) is analytically approximated to appear to be as close to the prior belief while trying to reconstruct the future state conditioned on the current state and action.", "The reward function is derived using a well-known deep generative model known as Conditional Variational Auto-encoder (CVAE) with Wasserstein loss function, thus referred to as Conditional Wasserstein Auto-encoder-IRL (CWAE-IRL), which can be analyzed as a combination of the backward and forward inference.", "This can then form an efficient alternative to the previous approaches to IRL while having no knowledge of the system dynamics of the agent.", "Experimental results on standard benchmarks such as objectworld and pendulum show that the proposed algorithm can effectively learn the latent reward function in complex, high-dimensional environments.", "Reinforcement learning, formalized as Markov decision process (MDP), provides a general solution to sequential decision making, where given a state, the agent takes an optimal action by maximizing the long-term reward from the environment Bellman (1957) .", "However, in practice, defining a reward function that weighs the features of the state correctly can be challenging, and techniques like reward shaping are often used to solve complex real-world problems Ng et al. (1999) .", "The process of inferring the reward function given the demonstrations by an expert is defined as inverse reinforcement learning (IRL) or apprenticeship learning Ng et al. 
(2000) ; Abbeel & Ng (2004) .", "The fundamental problem with IRL lies in the fact that the algorithm is under-defined and infinitely many different reward functions can yield the same policy Finn et al. (2016) .", "Previous approaches have used preferences on the reward function to address the non-uniqueness.", "Ng et al. (2000) suggested a reward function that maximizes the difference in the values of the expert's policy and the second-best policy.", "Ziebart et al. (2008) adopted the principle of maximum entropy for learning the policy whose feature expectations are constrained to match those of the expert's.", "Ratliff et al. (2006) applied the structured max-margin optimization to IRL and proposed a method for finding the reward function that maximizes the margin between the expert's policy and all other policies.", "Neu & Szepesvári (2009) unified a direct method that minimizes deviation from the expert's behavior and an indirect method that finds an optimal policy from the learned reward function using IRL.", "Syed & Schapire (2008) used a game-theoretic framework to find a policy that improves with respect to an expert's.", "Another challenge for IRL is that some variant of the forward reinforcement learning problem needs to be solved in a tightly coupled manner to obtain the corresponding policy, and then compare this policy to the demonstrated actions Finn et al. (2016) .", "Most early IRL algorithms proposed solving an MDP in the inner loop Ng et al. (2000) ; Abbeel & Ng (2004); Ziebart et al. (2008) .", "This requires perfect knowledge of the expert's dynamics, which is almost always impossible to have.", "Several works have proposed to relax this requirement, for example by learning a value function instead of a cost Todorov (2007) , solving an approximate local control problem Levine & Koltun (2012) or generating a discrete graph of states Byravan et al. 
(2015) .", "However, all these methods still require some partial knowledge of the system dynamics.", "Most of the early research in this field has expressed the reward function as a weighted linear combination of hand selected features Ng et al. (2000) ; Ramachandran & Amir (2007); Ziebart et al. (2008) .", "Non-parametric methods such as Gaussian Processes (GPs) have also been used for potentially complex, nonlinear reward functions Levine et al. (2011) .", "While in principle this helps extend the IRL paradigm to flexibly account for non-linear reward approximation; the use of kernels simultaneously leads to higher sample size requirements.", "Universal function approximators such as non-linear deep neural network have been proposed recently Wulfmeier et al. (2015) ; Finn et al. (2016) .", "This moves away from using hand-crafted features and helps in learning highly non-linear reward functions but they still need the agent in the loop to generate new samples to \"guide\" the cost to the optimal reward function.", "Fu et al. (2017) has recently proposed deriving an adversarial reward learning formulation which disentangles the reward learning process by a discriminator trained via binary regression data and uses policy gradient algorithms to learn the policy as well.", "The Bayesian IRL (BIRL) algorithm proposed by Ramachandran & Amir (2007) uses the expert's actions as evidence to update the prior on reward functions.", "The reward learning and apprenticeship learning steps are solved by performing the inference using a modified Markov Chain Monte Carlo (MCMC) algorithm.", "Zheng et al. (2014) described an expectation-maximization (EM) approach for solving the BIRL problem, referring to it as the Robust BIRL (RBIRL).", "Variational Inference (VI) has been used as an efficient and alternative strategy to MCMC sampling for approximating posterior densities Jordan et al. (1999); Wainwright et al. 
(2008) .", "The Variational Auto-encoder (VAE) was proposed by Kingma & Welling (2014) as a neural network version of the approximate inference model.", "The loss function of the VAE is defined in such a way that it automatically tries to maximize the likelihood of the data given the current latent variables (reconstruction loss), while encouraging the latent variables to be close to our prior belief of how the variables should look (Kullback-Leibler divergence loss).", "This can be seen as a generalization of EM from maximum a-posteriori (MAP) estimation of a single parameter to an approximation of the complete posterior distribution.", "The Conditional VAE (CVAE) was proposed by Sohn et al. (2015) to develop a deep conditional generative model for structured output prediction using Gaussian latent variables.", "The Wasserstein Auto-encoder (WAE) was proposed by Tolstikhin et al. (2017) to utilize the Wasserstein loss function in place of the KL divergence loss for robustly estimating the loss in the case of small samples, where the VAE fails.", "This research is motivated by the observation that IRL can be formulated as a supervised learning problem with latent variable modelling.", "This intuition is not unique.", "It has been proposed by Klein et al. 
(2013) using the Cascaded Supervised IRL (CSI) approach.", "However, CSI uses non-generalizable heuristics to classify the dataset and find the decision rule to estimate the reward function.", "Here, I propose to utilize the CVAE framework with Wasserstein loss function to determine the non-linear, continuous reward function utilizing the expert trajectories without the need for system dynamics.", "The encoder step of the CVAE is used to learn the original reward function from the next state conditioned on the current state and action.", "The decoder step is used to recover the next state given the current state, action and the latent reward function.", "The likelihood loss, composed of the reconstruction error and the Wasserstein loss, is then fed to optimize the CVAE network.", "The Gaussian distribution is used here as the prior distribution; however, Ramachandran & Amir (2007) has described various other prior distributions which can be used based on the class of problem being solved.", "Since, the states chosen are supplied by the expert's trajectories, the CWAE-IRL algorithm is run only on those states without the need to run an MDP or have the agent in the loop.", "Two novel contributions are made in this paper:", "• Proposing a generative model such as an auto-encoder for estimating the reward function leads to a more effective and efficient algorithm with locally optimal, analytically approximate solution.", "• Using only the expert's state-action trajectories provides a robust generative solution without any knowledge of system dynamics.", "Section 2 gives the background on the concepts used to build our model; Section 3 describes the proposed methodology; Section 4 gives the results and Section 5 provides the discussion and conclusions." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2702702581882477, 0.19354838132858276, 0.23529411852359772, 0.11538460850715637, 0.05714285373687744, 0.14999999105930328, 0.12765957415103912, 0.1666666567325592, 0.1818181723356247, 0.0952380895614624, 0.14814814925193787, 0.11764705181121826, 0.10810810327529907, 0.1395348757505417, 0.09756097197532654, 0.1875, 0.19230768084526062, 0.05405404791235924, 0.06666666269302368, 0.1090909019112587, 0, 0.1304347813129425, 0.0555555522441864, 0.14999999105930328, 0, 0.17777776718139648, 0.16326530277729034, 0.10526315122842789, 0.1666666567325592, 0.05714285373687744, 0.04999999701976776, 0.05714285373687744, 0.14814814925193787, 0.05405404791235924, 0.1463414579629898, 0.08888888359069824, 0.277777761220932, 0, 0, 0.12903225421905518, 0.20512820780277252, 0.1111111044883728, 0.1818181723356247, 0.0624999962747097, 0, 0.09756097197532654, 0.08695651590824127, 0.1428571343421936, 0.12121211737394333, 0.052631575614213943 ]
rJlCXlBtwH
true
[ "Using a supervised latent variable modeling framework to determine reward in inverse reinforcement learning task" ]
[ "The travelling salesman problem (TSP) is a well-known combinatorial optimization problem with a variety of real-life applications.", "We tackle TSP by incorporating machine learning methodology and leveraging the variable neighborhood search strategy.", "More precisely, the search process is considered as a Markov decision process (MDP), where a 2-opt local search is used to search within a small neighborhood, while a Monte Carlo tree search (MCTS) method (which iterates through simulation, selection and back-propagation steps), is used to sample a number of targeted actions within an enlarged neighborhood.", "This new paradigm clearly distinguishes itself from the existing machine learning (ML) based paradigms for solving the TSP, which either uses an end-to-end ML model, or simply applies traditional techniques after ML for post optimization.", "Experiments based on two public data sets show that, our approach clearly dominates all the existing learning based TSP algorithms in terms of performance, demonstrating its high potential on the TSP.", "More importantly, as a general framework without complicated hand-crafted rules, it can be readily extended to many other combinatorial optimization problems.", "The travelling salesman problem (TSP) is a well-known combinatorial optimization problem with various real-life applications, such as transportation, logistics, biology, circuit design.", "Given n cities as well as the distance d ij between each pair of cities i and j, the TSP aims to find a cheapest tour which starts from a beginning city (arbitrarily chosen), visits each city exactly once, and finally returns to the beginning city.", "This problem is NP-hard, thus being extremely difficult from the viewpoint of theoretical computer science.", "Due to its importance in both theory and practice, many algorithms have been developed for the TSP, mostly based on traditional operations research (OR) methods.", "Among the existing TSP algorithms, the best exact 
solver Concorde (Applegate et al., 2009 ) succeeded in demonstrating the optimality of a Euclidean TSP instance with 85,900 cities, while the leading heuristics (Helsgaun, 2017) and (Taillard & Helsgaun, 2019) are capable of obtaining near-optimal solutions for instances with millions of cities.", "However, these algorithms are very complicated: they generally consist of many hand-crafted rules and rely heavily on expert knowledge, and are thus difficult to generalize to other combinatorial optimization problems.", "To overcome these limitations, recent years have seen a number of ML-based algorithms proposed for the TSP (briefly reviewed in the next section), which attempt to automate the search process by learning mechanisms.", "These methods do not rely on expert knowledge and can be easily generalized to various combinatorial optimization problems, thus becoming a promising research direction at the intersection of ML and OR.", "For the TSP, existing ML-based algorithms can be roughly classified into two paradigms, i.e.: (1) the end-to-end ML paradigm, which uses an ML model alone to directly convert the input instance to a solution.", "(2) the ML-followed-by-OR paradigm, which applies ML first to provide some additional information to guide the following OR procedure towards promising regions.", "Despite their high potential, existing ML-based methods for the TSP are still in their infancy, struggling to solve instances with more than 100 cities and leaving much room for further improvement compared with traditional methods.", "To this end, we propose a novel framework that combines Monte Carlo tree search (MCTS) with a basic OR method (2-opt based local search) using a variable neighborhood strategy to solve the TSP.", "The main contributions are summarized as follows.", "• Framework: We propose a new paradigm which combines OR and ML using a variable neighborhood strategy.", "Starting from an initial state, a basic 2-opt based local search is firstly used to search within a 
small neighborhood.", "When no improvement is possible within the small neighborhood, the search turns into an enlarged neighborhood, where a reinforcement learning (RL) based method is used to identify a sample of promising actions, and iteratively choose one action to apply.", "Under this new paradigm, OR and ML respectively work within disjoint space, being flexible and targeted, and clearly different from the two paradigms mentioned above.", "More importantly, as a general framework without complicated hand-crafted rules, this framework could not only be applied to the TSP, but also be easily extended to many other combinatorial optimization problems.", "• Methodology: When we search within an enlarged neighborhood, it is infeasible to enumerate all the actions.", "We then seek to sample a number of promising actions.", "To do this automatically, we implement a MCTS method which iterates through simulation, selection and back-propagation steps, to collect useful information that guides the sampling process.", "To the best of our knowledge, there is only one existing paper (Shimomura & Takashima, 2016) which also uses MCTS to solve the TSP.", "However, their method is a constructive approach, where each state is a partial TSP tour, and each action adds a city to increase the partial tour, until forming a complete tour.", "By contrast, our MCTS method is a conversion based approach, where each state is a complete TSP tour, and each action converts the original state to a new state (also a complete TSP tour).", "The methodology and implementation details of our MCTS are very different from the MCTS method developed in (Shimomura & Takashima, 2016 ).", "• Results: We carry out experiments on two sets of public TSP instances.", "Experimental results (detailed in Section 4) show that, on both data sets our MCTS algorithm obtains (within reasonable time) statistically much better results with respect to all the existing learning based algorithms.", "These results 
clearly indicate the potential of our new method for solving the TSP.", "The rest of this paper is organized as follows: Section 2 briefly reviews the existing learning-based methods for the TSP.", "Section 3 describes in detail the new paradigm and the MCTS method.", "Section 4 provides and analyzes the experimental results.", "Finally, Section 5 concludes this paper.", "This paper develops a novel, flexible paradigm for solving the TSP, which combines OR and ML in a variable neighborhood search strategy and achieves highly competitive performance with respect to the existing learning-based TSP algorithms.", "However, how to combine ML and OR reasonably is still an open question, which deserves continued investigation.", "In the future, we will explore further paradigms to better answer this question, and extend the work to other combinatorial optimization problems." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11428570747375488, 0.2857142686843872, 0.32258063554763794, 0.07692307233810425, 0.12765957415103912, 0.09756097197532654, 0.09756097197532654, 0.145454540848732, 0.11428570747375488, 0.13333332538604736, 0.1230769157409668, 0.0416666604578495, 0.22641508281230927, 0.11999999731779099, 0.11764705181121826, 0.0952380895614624, 0.19607841968536377, 0.5098038911819458, 0, 0.2222222238779068, 0.31578946113586426, 0.14814814925193787, 0.04651162400841713, 0.1249999925494194, 0.1621621549129486, 0.13333332538604736, 0.1304347813129425, 0.23255813121795654, 0.1818181723356247, 0.17777776718139648, 0.09756097197532654, 0.060606054961681366, 0.15686273574829102, 0.12121211737394333, 0.14999999105930328, 0.12903225421905518, 0.0714285671710968, 0.07692307233810425, 0.4444444477558136, 0.05405404791235924, 0.09756097197532654 ]
ByxtHCVKwB
true
[ "This paper combines Monte Carlo tree search with 2-opt local search in a variable neighborhood mode to solve the TSP effectively." ]
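The entry above describes a conversion-based search in which every state is a complete tour and each action converts it into another complete tour, with 2-opt local search used inside the small neighborhood. A minimal 2-opt sketch follows; it is an illustration rather than the paper's implementation, and the point-list representation and `tour_length` helper are assumptions.

```python
import math

def tour_length(tour, pts):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Greedy 2-opt: reverse segments while any reversal shortens the tour.
    Each accepted move converts one complete tour into another complete tour,
    matching the 'conversion based' view of states described above."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n - (i == 0)):
                new = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(new, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = new, True
    return tour
```

Starting from a self-crossing tour over the corners of a unit square, the sketch recovers the perimeter tour of length 4.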
[ "Significant strides have been made toward designing better generative models in recent years.", "Despite this progress, however, state-of-the-art approaches are still largely unable to capture complex global structure in data.", "For example, images of buildings typically contain spatial patterns such as windows repeating at regular intervals; state-of-the-art generative methods can’t easily reproduce these structures.", "We propose to address this problem by incorporating programs representing global structure into the generative model—e.g., a 2D for-loop may represent a configuration of windows.", "Furthermore, we propose a framework for learning these models by leveraging program synthesis to generate training data.", "On both synthetic and real-world data, we demonstrate that our approach is substantially better than the state-of-the-art at both generating and completing images that contain global structure.\n", "There has been much interest recently in generative models, following the introduction of both variational autoencoders (VAEs) BID13 and generative adversarial networks (GANs) BID6 .", "These models have successfully been applied to a range of tasks, including image generation BID16 , image completion BID10 , texture synthesis BID12 ; BID22 , sketch generation BID7 , and music generation BID3 .Despite", "their successes, generative models still have difficulty capturing global structure. For example", ", consider the image completion task in Figure 1 . The original", "image (left) is of a building, for which the global structure is a 2D repeating pattern of windows. Given a partial", "image (middle left), the goal is to predict the completion of the image. 
As can be seen,", "a state-of-the-art image completion algorithm has trouble reconstructing the original image (right) BID10 .1 Real-world data often contains such global structure, including repetitions, reflectional or rotational symmetry, or even more complex patterns.In the past few years, program synthesis Solar- BID17 has emerged as a promising approach to capturing patterns in data BID4 ; BID19 . The idea is that", "simple programs can capture global structure that evades state-of-the-art deep neural networks. A key benefit of", "using program synthesis is that we can design the space of programs to capture different kinds of structure-e.g., repeating patterns BID5 , symmetries, or spatial structure BID2 -depending on the application domain. The challenge is", "that for the most part, existing approaches have synthesized programs that operate directly over raw data. Since programs have", "difficulty operating over perceptual data, existing approaches have largely been limited to very simple data-e.g., detecting 2D repeating patterns of simple shapes BID5 .We propose to address", "these shortcomings by synthesizing programs that represent the underlying structure of high-dimensional data. In particular, we decompose", "programs into two parts: (i) a sketch s ∈ S that represents", "the skeletal structure of the program BID17 , with holes that are left unimplemented, and (ii) components c ∈ C that can be", "used to fill these holes. We consider perceptual components-i.e", "., holes in the sketch are filled with raw perceptual data. For example, the original image x *", "partial image x part completionx (ours) completionx (baseline) Figure 1 : The task is to complete the partial image x part (middle left) into an image that is close to the original image x * (left). By incorporating programmatic structure", "into generative models, the completion (middle right) is able to substantially outperform a state-of-the-art baseline BID10 (right) . 
Note that not all non-zero pixels in the", "sketch rendering retain the same value in the completed picture due to the nature of the following completion process program represents the structure in the original image x * in Figure 1 (left). The black text is the sketch, and the component", "is a sub-image taken from the given partial image. Then, the draw function renders the given sub-image", "at the given position. We call a sketch whose holes are filled with perceptual", "components a neurosymbolic program.Building on these ideas, we propose an approach called program-synthesis (guided) generative models (PS-GM) that combines neurosymbolic programs representing global structure with state-of-the-art deep generative models. By incorporating programmatic structure, PS-GM substantially", "improves the quality of these state-of-the-art models. As can be seen, the completion produced using PS-GM (middle", "right of Figure 1 ) substantially outperforms the baseline.We show that PS-GM can be used for both generation from scratch and for image completion. The generation pipeline is shown in FIG0 . At a high level,", "PS-GM for generation operates in two phases:•", "First, it generates a program that represents the global structure in the image to be generated.In particular, it generates a program P = (s, c) representing the latent global structure in the image (left in FIG0 , where s is a sketch (in the domain considered here, a list of 12 for-loop structures) and c is a perceptual component (in the domain considered here, a list of 12 sub-images).• Second, our algorithm executes P to obtain a structure rendering", "x struct representing the program as an image (middle of FIG0 ). Then, our algorithm uses a deep generative model to complete x struct", "into a full image (right of FIG0 ). 
The structure in x struct helps guide the deep generative model towards", "images that preserve the global structure.The image-completion pipeline (see Figure 3 ) is similar.Training these models end-to-end is challenging, since a priori, ground truth global structure is unavailable. Furthermore, representative global structure is very sparse, so approaches", "such as reinforcement learning do not scale. Instead, we leverage domain-specific program synthesis algorithms to produce", "examples of programs that represent global structure of the training data. In particular, we propose a synthesis algorithm tailored to the image domain", ", which extracts programs with nested for-loops that can represent multiple 2D repeating patterns in images. Then, we use these example programs as supervised training data.Our programs", "can capture rich spatial structure in the training data. For example, in FIG0 , the program structure encodes a repeating structure of", "0's and 2's on the whole image, and a separate repeating structure of 3's on the right-hand side of the image. Furthermore, in Figure 1 , the generated image captures the idea that the repeating", "pattern of windows does not extend to the bottom portion of the image.for loop from sampled program P structure rendering x struct completed image x (ii) Our model executes P to obtain a rendering of the program structure x struct (", "middle). (iii) Our model samples a completion x ∼ p θ (x | s, c) of x struct into a full image", "(right).Contributions. We propose an architecture of generative models that incorporates programmatic", "structure, as", "well as an algorithm for training these models (Section 2). Our learning algorithm depends on a domain-specific program synthesis algorithm for extracting", "global structure from the training data; we propose such an algorithm for the image domain (Section 3). 
Finally, we evaluate our approach on synthetic data and on a real-world dataset of building facades", "Tyleček &Šára (2013), both on the task of generation from scratch and on generation from a partial image. We show that our approach substantially outperforms several state-of-the-art deep generative models", "(Section 4).Related work. There has been growing interest in applying program synthesis to machine learning, for", "purposes of interpretability", "BID21 ; BID20 , safety BID1 , and lifelong learning BID19 . Most relevantly, there has been interest in using programs to capture structure that deep learning models have difficulty", "representing Lake et al. (2015) ; BID4 ; . For instance, BID4 proposes an unsupervised learning algorithm for capturing repeating patterns in simple line drawings;", "however, not only are their domains simple, but they can only handle a very small amount of noise. Similarly, BID5 captures 2D repeating patterns of simple circles and polygons; however, rather than synthesizing programs", "with perceptual components, they learn a simple mapping from images to symbols as a preprocessing step. The closest work we are aware of is BID19 , which synthesizes programs with neural components (i.e., components implemented", "as neural networks); however, their application is to lifelong learning, not generation, and to learning with supervision (labels) rather than to unsupervised learning of structure.Additionally, there has been work extending neural module networks BID0 to generative models BID2 . These algorithms essentially learn a collection of neural components that can be composed together based on hierarchical structure", ". 
However, they require that the structure be available (albeit in natural language form) both for training the model and for generating", "new images.Finally, there has been work incorporating spatial structure into generative models for generating textures BID12 ; however, their work only handles a single infinite repeating 2D pattern. In contrast, we can capture a rich variety of spatial patterns parameterized by a space of programs. For example, the image in Figure", "1 generated by our technique contains different repeating patterns in different parts of the image.", "We have proposed a new approach to generation that incorporates programmatic structure into state-ofthe-art deep learning models.", "In our experiments, we have demonstrated the promise of our approach to improve generation of high-dimensional data with global structure that current state-of-the-art deep generative models have difficulty capturing.", "We leave a number of directions for future work.", "Most importantly, we have relied on a custom synthesis algorithm to eliminate the need for learning latent program structure.", "Learning to synthesize latent structure during training is an important direction for future work.", "In addition, future work will explore more expressive programmatic structures, including if-then-else statements.A EXPERIMENTAL DETAILS" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.060606054961681366, 0.04999999701976776, 0.1904761791229248, 0.3636363446712494, 0.09756097197532654, 0.1538461446762085, 0.35555556416511536, 0, 0.2222222238779068, 0.24242423474788666, 0.3333333134651184, 0.20588235557079315, 0.1249999925494194, 0.20408162474632263, 0.0624999962747097, 0.09302324801683426, 0.12121211737394333, 0.0714285671710968, 0.21621620655059814, 0.07692307233810425, 0.12121211737394333, 0.13333332538604736, 0.19999998807907104, 0.2857142686843872, 0.20689654350280762, 0.13333332538604736, 0.12244897335767746, 0.1875, 0.2800000011920929, 0.08695651590824127, 0.20895521342754364, 0.3684210479259491, 0.277777761220932, 0.0833333283662796, 0.25, 0.3243243098258972, 0, 0.23529411852359772, 0.24390242993831635, 0.2666666507720947, 0.2222222238779068, 0.0714285671710968, 0.22857142984867096, 0.21739129722118378, 0.3255814015865326, 0.1818181723356247, 0.10526315867900848, 0.1860465109348297, 0.052631575614213943, 0.12765957415103912, 0.12244897335767746, 0.1538461446762085, 0.11428570747375488, 0.1269841194152832, 0.19999998807907104, 0.3030303120613098, 0.2380952388048172, 0.1599999964237213, 0.34285715222358704, 0.06666666269302368, 0 ]
S1gUCFx4dN
true
[ "Applying program synthesis to the tasks of image completion and generation within a deep learning framework" ]
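The sketch-plus-component decomposition described in this entry (a 2-D for-loop whose hole is filled with a sub-image) can be illustrated with a toy interpreter for a single nested loop. The function name and the list-of-rows patch representation are assumptions for illustration, not the paper's code.

```python
def render_loop(canvas_h, canvas_w, patch, x0, y0, nx, ny, dx, dy):
    """Execute a 2-D for-loop program: stamp `patch` (a list of rows)
    at positions (x0 + i*dx, y0 + j*dy) for i < nx, j < ny.
    Returns the structure rendering as a 2-D list (0 = background)."""
    canvas = [[0] * canvas_w for _ in range(canvas_h)]
    ph, pw = len(patch), len(patch[0])
    for i in range(nx):
        for j in range(ny):
            top, left = y0 + j * dy, x0 + i * dx
            for r in range(ph):
                for c in range(pw):
                    if 0 <= top + r < canvas_h and 0 <= left + c < canvas_w:
                        canvas[top + r][left + c] = patch[r][c]
    return canvas
```

A 3-by-2 grid of single-pixel patches yields a structure rendering with six foreground pixels, mirroring how an executed program P produces x_struct before a deep generative model completes it.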
[ "Learning control policies in robotic tasks requires a large number of interactions due to small learning rates, bounds on the updates or unknown constraints.", "In contrast humans can infer protective and safe solutions after a single failure or unexpected observation. \n", "In order to reach similar performance, we developed a hierarchical Bayesian optimization algorithm that replicates the cognitive inference and memorization process for avoiding failures in motor control tasks.", "A Gaussian Process implements the modeling and the sampling of the acquisition function.", "This enables rapid learning with large learning rates while a mental replay phase ensures that policy regions that led to failures are inhibited during the sampling process. \n", "The features of the hierarchical Bayesian optimization method are evaluated in a simulated and physiological humanoid postural balancing task.", "We quantitatively compare the human learning performance to our learning approach by evaluating the deviations of the center of mass during training.", "Our results show that we can reproduce the efficient learning of human subjects in postural control tasks which provides a testable model for future physiological motor control tasks.", "In these postural control tasks, our method outperforms standard Bayesian Optimization in the number of interactions to solve the task, in the computational demands and in the frequency of observed failures.", "Autonomous systems such as anthropomorphic robots or self-driving cars must not harm cooperating humans in co-worker scenarios, pedestrians on the road or them selves.", "To ensure safe interactions with the environment state-of-the-art robot learning approaches are first applied to simulations and afterwards an expert selects final candidate policies to be run on the real system.", "However, for most autonomous systems a fine-tuning phase on the real system is unavoidable to compensate for unmodelled dynamics, motor noise or 
uncertainties in the hardware fabrication.", "Several strategies were proposed to ensure safe policy exploration.", "In special tasks like in robot arm manipulation the operational space can be constrained, for example, in classical null-space control approaches Baerlocher & Boulic (1998) ; Slotine (1991) ; Choi & Kim (2000) ; Gienger et al. (2005) ; Saab et al. (2013) ; Modugno et al. (2016) or constraint black-box optimizer Hansen et al. (2003) ; Wierstra et al. (2008) ; Kramer et al. (2009) ; Sehnke et al. (2010) ; Arnold & Hansen (2012) .", "While this null-space strategy works in controlled environments like research labs where the environmental conditions do not change, it fails in everyday life tasks as in humanoid balancing where the priorities or constraints that lead to hardware damages when falling are unknown.", "Alternatively, limiting the policy updates by applying probabilistic bounds in the robot configuration or motor command space Bagnell & Schneider (2003) ; ; Rueckert et al. (2014) ; Abdolmaleki et al. (2015) ; Rueckert et al. (2013) were proposed.", "These techniques do not assume knowledge about constraints.", "Closely related are also Bayesian optimization techniques with modulated acquisition functions Gramacy & Lee (2010) ; Berkenkamp et al. (2016) ; Englert & Toussaint (2016) ; Shahriari et al. 
(2016) to avoid exploring policies that might lead to failures.", "However, all these approaches do not avoid failures but rather an expert interrupts the learning process when it anticipates a potential dangerous situation.", "Figure 1: Illustration of the hierarchical BO algorithm.", "In standard BO (clock-wise arrow), a mapping from policy parameters to rewards is learned, i.e., φ → r ∈ R 1 .", "We propose a hierarchical process, where first features κ are sampled and later used to predict the potential of policies conditioned on these features, φ|κ → r.", "The red dots show the first five successive roll-outs in the feature and the policy space of a humanoid postural control task.", "All the aforementioned strategies cannot avoid harming the system itself or the environment without thorough experts knowledge, controlled environmental conditions or human interventions.", "As humans require just few trials to perform reasonably well, it is desired to enable robots to reach similar performance even for high-dimensional problems.", "Thereby, most approaches are based on the assumption of a \"low effective dimensionality\", thus most dimensions of a high-dimensional problem do not change the objective function significantly.", "In Chen et al. (2012) a method for relevant variable selection based on Hierarchical Diagonal Sampling for both, variable selection and function optimization, has been proposed.", "Randomization combined with Bayesian Optimization is proposed in Wang et al. (2013) to exploit effectively the aforementioned \"low effective dimensionality\".", "In Li et al. (2018) a dropout algorithm has been introduced to overcome the high-dimensionality problem by only train onto a subset of variables in each iteration, evaluating a \"regret gap\" and providing strategies to reduce this gap efficiently.", "In Rana et al. 
(2017) an algorithm has been proposed which optimizes an acquisition function by building new Gaussian Processes with sufficiently large kernellengths scales.", "This ensures significant gradient updates in the acquisition function to be able to use gradient-dependent methods for optimization.", "The contribution of this paper is a computational model for psychological motor control experiments based on hierarchical acquisition functions in Bayesian Optimization (HiBO).", "Our motor skill learning method uses features for optimization to significantly reduce the number of required roll-outs.", "In the feature space, we search for the optimum of the acquisition function by sampling and later use the best feature configuration to optimize the policy parameters which are conditioned on the given features, see also Figure 1 .", "In postural control experiments, we show that our approach reduces the number of required roll-outs significantly compared to standard Bayesian Optimization.", "The focus of this study is to develop a testable model for psychological motor control experiments where well known postural control features could be used.", "These features are listed in Table 3 .", "In future work we will extend our model to autonomous feature learning and will validate the approach in more challenging robotic tasks where 'good' features are hard to hand-craft.", "We introduced HiBO, a hierarchical approach for Bayesian Optimization.", "We showed that HiBO outperforms standard BO in a complex humanoid postural control task.", "Moreover, we demonstrated the effects of the choice of the features and for different number of mental replay episodes.", "We compared our results to the learning performance of real humans at the same task.", "We found that the learning behavior is similar.", "We found that our proposed hierarchical BO algorithm can reproduce the rapid motor adaptation of human subjects.", "In contrast standard BO, our comparison method, is about four times 
slower.", "In future work, we will examine the problem of simultaneously learning task-relevant features with neural networks." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13333332538604736, 0.10526315122842789, 0.16326530277729034, 0.0624999962747097, 0.1702127605676651, 0.19999998807907104, 0.051282044500112534, 0.2978723347187042, 0.1304347813129425, 0.045454539358615875, 0.07999999821186066, 0.1304347813129425, 0, 0.054794516414403915, 0, 0, 0, 0.11538460850715637, 0.045454539358615875, 0.06896550953388214, 0.045454539358615875, 0.1666666567325592, 0.1463414579629898, 0.04878048226237297, 0.04651162400841713, 0.13636362552642822, 0.1818181723356247, 0.04878048226237297, 0.035087715834379196, 0.08888888359069824, 0.15789473056793213, 0.5, 0.10526315122842789, 0.11320754140615463, 0.0952380895614624, 0.2666666507720947, 0.0714285671710968, 0.0833333283662796, 0.20000000298023224, 0.17142856121063232, 0.1111111044883728, 0, 0, 0.15789473056793213, 0, 0.052631575614213943 ]
S1eYchEtwH
true
[ "This paper presents a computational model for efficient human postural control adaptation based on hierarchical acquisition functions with well-known features. " ]
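The hierarchical search described in this entry (first sample feature configurations κ, then optimize policy parameters φ conditioned on the best features) can be sketched as a two-level loop. Exhaustive scoring over small discrete spaces stands in for the GP-based acquisition sampling used in the paper; the function and argument names are illustrative assumptions.

```python
def hibo_step(objective, feature_space, policy_space):
    """Hierarchical search sketch in the spirit of HiBO: first select a
    promising feature configuration kappa, then optimize the policy phi
    conditioned on it."""
    # Stage 1: rate each feature configuration by the best policy it admits.
    def potential(kappa):
        return max(objective(kappa, phi) for phi in policy_space)
    best_kappa = max(feature_space, key=potential)
    # Stage 2: optimize the policy conditioned on the chosen features.
    best_phi = max(policy_space, key=lambda phi: objective(best_kappa, phi))
    return best_kappa, best_phi
```

On a separable toy objective the two stages recover the joint optimum, which is the intended benefit of searching the (lower-dimensional) feature space before the policy space.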
[ "Deep reinforcement learning has achieved great success in many previously difficult reinforcement learning tasks, yet recent studies show that deep RL agents are also unavoidably susceptible to adversarial perturbations, similar to deep neural networks in classification tasks.", "Prior works mostly focus on model-free adversarial attacks and agents with discrete actions.", "In this work, we study the problem of continuous control agents in deep RL with adversarial attacks and propose the first two-step algorithm based on learned model dynamics.", "Extensive experiments on various MuJoCo domains (Cartpole, Fish, Walker, Humanoid) demonstrate that our proposed framework is much more effective and efficient than model-free based attacks baselines in degrading agent performance as well as driving agents to unsafe states.", "Deep reinforcement learning (RL) has revolutionized the fields of AI and machine learning over the last decade.", "The introduction of deep learning has achieved unprecedented success in solving many problems that were intractable in the field of RL, such as playing Atari games from pixels and performing robotic control tasks (Mnih et al., 2015; Lillicrap et al., 2015; Tassa et al., 2018) .", "Unfortunately, similar to the case of deep neural network classifiers with adversarial examples, recent studies show that deep RL agents are also vulnerable to adversarial attacks.", "A commonly-used threat model allows the adversary to manipulate the agent's observations at every time step, where the goal of the adversary is to decrease the agent's total accumulated reward.", "As a pioneering work in this field, (Huang et al., 2017) show that by leveraging the FGSM attack on each time frame, an agent's average reward can be significantly decreased with small input adversarial perturbations in five Atari games.", "(Lin et al., 2017) further improve the efficiency of the attack in (Huang et al., 2017) by leveraging heuristics of detecting a good time to 
attack and luring agents to bad states with sample-based Monte-Carlo planning on a trained generative video prediction model.", "Since the agents have discrete actions in Atari games (Huang et al., 2017; Lin et al., 2017) , the problem of attacking Atari agents often reduces to the problem of finding adversarial examples on image classifiers, also pointed out in (Huang et al., 2017) , where the adversaries intend to craft the input perturbations that would drive agent's new action to deviate from its nominal action.", "However, for agents with continuous actions, the above strategies can not be directly applied.", "Recently, (Uesato et al., 2018) studied the problem of adversarial testing for continuous control domains in a similar but slightly different setting.", "Their goal was to efficiently and effectively find catastrophic failure given a trained agent and to predict its failure probability.", "The key to success in (Uesato et al., 2018) is the availability of agent training history.", "However, such information may not always be accessible to the users, analysts, and adversaries.", "Therefore, in this paper we study the robustness of deep RL agents in a more challenging setting where the agent has continuous actions and its training history is not available.", "We consider the threat models where the adversary is allowed to manipulate an agent's observations or actions with small perturbations, and we propose a two-step algorithmic framework to find efficient adversarial attacks based on learned dynamics models.", "Experimental results show that our proposed modelbased attack can successfully degrade agent performance and is also more effective and efficient than model-free attacks baselines.", "The contributions of this paper are the following: Figure 1: Two commonly-used threat models.", "• To the best of our knowledge, we propose the first model-based attack on deep RL agents with continuous actions.", "Our proposed attack algorithm is a general two-step 
algorithm and can be directly applied to the two commonly-used threat models (observation manipulation and action manipulation).", "• We study the efficiency and effectiveness of our proposed model-based attack with modelfree attack baselines based on random searches and heuristics (rand-U, rand-B, flip, see Section 4).", "We show that our model-based attack can degrade agent performance more significantly and efficiently than model-free attacks, which remain ineffective in numerous MuJoCo domains ranging from Cartpole, Fish, Walker, and Humanoid.", "Evaluating on the total reward.", "Often times, the reward function is a complicated function and its exact definition is often unavailable.", "Learning the reward function is also an active research field, which is not in the coverage of this paper.", "Nevertheless, as long as we have some knowledge of unsafe states (which is often the case in practice), then we can define unsafe states that are related to low reward and thus performing attacks based on unsafe states (i.e. minimizing the total loss of distance to unsafe states) would naturally translate to decreasing the total reward of agent.", "As demonstrated in Table 2 , the results have the same trend of the total loss result in Table 1 , where our proposed attack significantly outperforms all the other three baselines.", "In particular, our method can lower the average total reward up to 4.96× compared to the baselines result, while the baseline results are close to the perfect total reward of 1000.", "Evaluating on the efficiency of attack.", "We also study the efficiency of the attack in terms of sample complexity, i.e. 
how many episodes do we need to perform an effective attack?", "Here we adopt the convention of the control suite (Tassa et al., 2018), where one episode corresponds to 1000 time steps (samples), and we learn the neural network dynamics model f with different numbers of episodes.", "Figure 3 plots the total head height loss of the walker (task stand) for the three baselines and for our method with the dynamics model f trained on three different numbers of samples: {5e5, 1e6, 5e6}, or equivalently {500, 1000, 5000} episodes.", "We note that the hyperparameter sweep is the same for all three models; the only difference is the number of training samples.", "The results show that for the baselines rand-U and flip the total losses are roughly on the order of 1400-1500, while a stronger baseline, rand-B, still has total losses of 900-1200.", "However, if we solve Equation 3 with f trained on 5e5 or 1e6 samples, the total losses can be decreased to the order of 400-700, already beating the three baselines by a significant margin.", "As expected, if we use more samples (e.g.
5e6, which is 5-10 times more) to learn a more accurate dynamics model, our attack method benefits: the total losses can be further decreased by more than 2× and are on the order of 50-250 over 10 different runs.", "Here we also compare our model-based attack to existing works (Uesato et al., 2018; Gleave et al., 2019) on sample complexity.", "In (Uesato et al., 2018), 3e5 episodes of training data are used to learn the adversarial value function, which is roughly 1000× more data than even our strongest adversary (with 5e3 episodes).", "Similarly, (Gleave et al., 2019) use roughly 2e4 episodes to train an adversary via deep RL, which is roughly 4× more data than ours.", "In this paper, we study the problem of adversarial attacks in deep RL with continuous control under two commonly-used threat models (observation manipulation and action manipulation).", "Based on these threat models, we proposed the first model-based attack algorithm and showed that our formulation can be solved by off-the-shelf gradient-based solvers.", "Through extensive experiments on 4 MuJoCo domains (Cartpole, Fish, Walker, Humanoid), we show that our proposed algorithm outperforms all model-free attack baselines by a large margin.", "There are several interesting future directions that can be investigated based on this work; they are detailed in the Appendix." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17543859779834747, 0.31578946113586426, 0.807692289352417, 0.22580644488334656, 0.14999999105930328, 0.1875, 0.3333333134651184, 0.1249999925494194, 0.1875, 0.2950819730758667, 0.1944444328546524, 0.20512820780277252, 0.3333333134651184, 0.0952380895614624, 0.1428571343421936, 0.10256409645080566, 0.37735849618911743, 0.4067796468734741, 0.1249999925494194, 0.10256409645080566, 0.3636363446712494, 0.25, 0.3529411852359772, 0.1090909019112587, 0.13333332538604736, 0.1538461446762085, 0.1428571343421936, 0.2028985470533371, 0.15686273574829102, 0.07999999821186066, 0.19354838132858276, 0.20408162474632263, 0.23728813230991364, 0.1666666567325592, 0.17777776718139648, 0.12121211737394333, 0.16949151456356049, 0.13513512909412384, 0.12244897335767746, 0.1071428507566452, 0.03999999538064003, 0.5098038911819458, 0.20408162474632263, 0.22641508281230927, 0.1860465109348297 ]
SylL0krYPS
true
[ "We study the problem of continuous control agents in deep RL under adversarial attacks and propose a two-step algorithm based on learned model dynamics." ]
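The observation-manipulation threat model described in this entry (perturb the agent's observation, then use a learned dynamics model to drive the next state toward an unsafe state) can be sketched as follows. Finite differences stand in for the paper's gradient-based solver, and all names, the 1-D system, and the L-infinity projection radius are illustrative assumptions.

```python
def attack_observation(dynamics, policy, state, unsafe_state, eps,
                       steps=200, lr=0.05):
    """Two-step model-based attack sketch: given a (learned) dynamics model
    and a fixed policy, search for a bounded observation perturbation delta
    that drives the next state toward an unsafe state."""
    def loss(delta):
        # Distance between the perturbed-observation rollout and the unsafe state.
        obs = [s + d for s, d in zip(state, delta)]
        nxt = dynamics(state, policy(obs))
        return sum((a - b) ** 2 for a, b in zip(nxt, unsafe_state))
    delta = [0.0] * len(state)
    h = 1e-4
    for _ in range(steps):
        grad = []
        for i in range(len(delta)):
            bumped = delta[:]
            bumped[i] += h
            grad.append((loss(bumped) - loss(delta)) / h)
        # Gradient step, then projection onto the L-infinity ball of radius eps.
        delta = [max(-eps, min(eps, d - lr * g)) for d, g in zip(delta, grad)]
    return delta
```

On a toy 1-D system with a stabilizing policy, the perturbation saturates at the budget boundary, pushing the next state as close to the unsafe state as the constraint allows.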
[ "A leading hypothesis for the surprising generalization of neural networks is that the dynamics of gradient descent bias the model towards simple solutions, by searching through the solution space in an incremental order of complexity.", "We formally define the notion of incremental learning dynamics and derive the conditions on depth and initialization for which this phenomenon arises in deep linear models.", "Our main theoretical contribution is a dynamical depth separation result, proving that while shallow models can exhibit incremental learning dynamics, they require the initialization to be exponentially small for these dynamics to present themselves.", "However, once the model becomes deeper, the dependence becomes polynomial and incremental learning can arise in more natural settings.", "We complement our theoretical findings by experimenting with deep matrix sensing, quadratic neural networks and with binary classification using diagonal and convolutional linear networks, showing all of these models exhibit incremental learning.", "Neural networks have led to a breakthrough in modern machine learning, allowing us to efficiently learn highly expressive models that still generalize to unseen data.", "The theoretical reasons for this success are still unclear, as the generalization capabilities of neural networks defy the classic statistical learning theory bounds.", "Since these bounds, which depend solely on the capacity of the learned model, are unable to account for the success of neural networks, we must examine additional properties of the learning process.", "One such property is the optimization algorithm -while neural networks can express a multitude of possible ERM solutions for a given training set, gradient-based methods with the right initialization may be implicitly biased towards certain solutions which generalize.", "A possible way such an implicit bias may present itself, is if gradient-based methods were to search the hypothesis space for 
possible solutions of gradually increasing complexity.", "This would suggest that while the hypothesis space itself is extremely complex, our search strategy favors the simplest solutions and thus generalizes.", "One of the leading results along these lines has been by Saxe et al. (2013) , deriving an analytical solution for the gradient flow dynamics of deep linear networks and showing that for such models, the singular values converge at different rates, with larger values converging first.", "At the limit of infinitesimal initialization of the deep linear network, Gidel et al. (2019) show these dynamics exhibit a behavior of \"incremental learning\" -the singular values of the model are learned separately, one at a time.", "Our work generalizes these results to small but finite initialization scales.", "Incremental learning dynamics have also been explored in gradient descent applied to matrix completion and sensing with a factorized parameterization (Gunasekar et al. (2017) , Arora et al. (2018) , Woodworth et al. (2019) ).", "When initialized with small Gaussian weights and trained with a small learning rate, such a model is able to successfully recover the low-rank matrix which labeled the data, even if the problem is highly over-determined and no additional regularization is applied.", "In their proof of low-rank recovery for such models, Li et al. (2017) show that the model remains low-rank throughout the optimization process, leading to the successful generalization.", "Additionally, Arora et al. (2019) explore the dynamics of such models, showing the singular values are learned at different rates and that deeper models exhibit stronger incremental learning dynamics.", "Our work deals with a more simplified setting, allowing us to determine explicitly under which conditions depth leads to this dynamical phenomenon.", "Finally, the learning dynamics of nonlinear models have been studied as well.", "Combes et al. (2018) and Williams et al. 
(2019) study the gradient flow dynamics of shallow ReLU networks under restrictive distributional assumptions, Ronen et al. (2019) show that shallow networks learn functions of gradually increasing frequencies and Nakkiran et al. (2019) show how deep ReLU networks correlate with linear classifiers in the early stages of training.", "These findings, along with others, suggest that the generalization ability of deep networks is at least in part due to the incremental learning dynamics of gradient descent.", "Following this line of work, we begin by explicitly defining the notion of incremental learning for a toy model which exhibits this sort of behavior.", "Analyzing the dynamics of the model for gradient flow and gradient descent, we characterize the effect of the model's depth and initialization scale on incremental learning, showing how deeper models allow for incremental learning in larger (realistic) initialization scales.", "Specifically, we show that a depth-2 model requires exponentially small initialization for incremental learning to occur, while deeper models only require the initialization to be polynomially small.", "Once incremental learning has been defined and characterized for the toy model, we generalize our results theoretically and empirically for larger linear and quadratic models.", "Examples of incremental learning in these models can be seen in figure 1, which we discuss further in section 4.", "Gradient-based optimization for deep linear models has an implicit bias towards simple (sparse) solutions, caused by an incremental search strategy over the hypothesis space.", "Deeper models have a stronger tendency for incremental learning, exhibiting it in more realistic initialization scales.", "This dynamical phenomenon exists for the entire optimization process for regression as well as classification tasks, and for many types of models -diagonal networks, convolutional networks, matrix completion and even the nonlinear quadratic network.", "We 
believe this kind of dynamical analysis may be able to shed light on the generalization of deeper nonlinear neural networks as well, with shallow quadratic networks being only a first step towards that goal.", "It may seem that the variance loss is an unnatural loss function to analyze, since it isn't used in practice.", "While this is true, we will show how the dynamics of this loss function are an approximation of the square loss dynamics.", "We begin by describing the dynamics of both losses, showing how incremental learning can't take place for quadratic networks as defined over the squared loss.", "Then, we show how adding a global bias to the quadratic network leads to similar dynamics for small initialization scales." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23255813121795654, 0.3243243098258972, 0.1304347813129425, 0.13333332538604736, 0.23255813121795654, 0, 0.17142856121063232, 0.14999999105930328, 0.0833333283662796, 0.1538461446762085, 0.05882352590560913, 0.2181818187236786, 0.1818181723356247, 0, 0.09302324801683426, 0.08695651590824127, 0.20512820780277252, 0.25, 0, 0.3199999928474426, 0.19230769574642181, 0.2631579041481018, 0.22857142984867096, 0.1860465109348297, 0.10810810327529907, 0.11428570747375488, 0.12903225421905518, 0.277777761220932, 0, 0.09756097197532654, 0.1304347813129425, 0.0624999962747097, 0.19999998807907104, 0.3243243098258972, 0.1875 ]
H1lj0nNFwB
true
[ "We study the sparsity-inducing bias of deep models, caused by their learning dynamics." ]
[ "Generative modeling of high dimensional data like images is a notoriously difficult and ill-defined problem.", "In particular, how to evaluate a learned generative model is unclear.\n", "In this paper, we argue that *adversarial learning*, pioneered with generative adversarial networks (GANs), provides an interesting framework to implicitly define more meaningful task losses for unsupervised tasks, such as for generating \"visually realistic\" images.", "By relating GANs and structured prediction under the framework of statistical decision theory, we put into light links between recent advances in structured prediction theory and the choice of the divergence in GANs.", "We argue that the insights about the notions of \"hard\" and \"easy\" to learn losses can be analogously extended to adversarial divergences.", "We also discuss the attractive properties of parametric adversarial divergences for generative modeling, and perform experiments to show the importance of choosing a divergence that reflects the final task.", "For structured prediction and data generation the notion of final task is at the same time crucial and not well defined.", "Consider machine translation; the goal is to predict a good translation, but even humans might disagree on the correct translation of a sentence.", "Moreover, even if we settle on a ground truth, it is hard to define what it means for a candidate translation to be close to the ground truth.", "In the same way, for data generation, the task of generating pretty pictures or more generally realistic samples is not well defined.", "Nevertheless, both for structured prediction and data generation, we can try to define criteria which characterize good solutions such as grammatical correctness for translation or non-blurry pictures for image generation.", "By incorporating enough criteria into a task loss, one can hope to approximate the final task, which is otherwise hard to formalize.Supervised learning and structured prediction 
are well-defined problems once they are formulated as the minimization of such a task loss.", "The usual task loss in object classification is the generalization error associated with the classification error, or 0-1 loss.", "In machine translation, where the goal is to predict a sentence, a structured loss, such as the BLEU score BID37 , formally specifies how close the predicted sentence is from the ground truth.", "The generalization error is defined through this structured loss.", "In both cases, models can be objectively compared and evaluated with respect to the task loss (i.e., generalization error).", "On the other hand, we will show that it is not as obvious in generative modeling to define a task loss that correlates well with the final task of generating realistic samples. Traditionally in statistics, distribution learning is formulated as density estimation where the task loss is the expected negative-log-likelihood.", "Although log-likelihood works fine in low-dimension, it was shown to have many problems in high-dimension .", "Among others, because the Kullback-Leibler is too strong of a divergence, it can easily saturate whenever the distributions are too far apart, which makes it hard to optimize.", "Additionally, it was shown in BID47 that the KL-divergence is a bad proxy for the visual quality of samples. In this work we give insights on how adversarial divergences BID26 can be considered as task losses and how they address some problems of the KL by indirectly incorporating hard-to-define criteria.", "We define parametric adversarial divergences as the following : DISPLAYFORM0 where {f φ : X → R d ; φ ∈ Φ} is a class of parametrized functions, such as neural networks, called the discriminators in the Generative Adversarial Network (GAN) framework BID15 .", "The constraints Φ and the function ∆ : R d × R d → R determine properties of the resulting divergence.", "Using these notations, we adopt the view 1 that training a GAN can be seen as training a 
generator network q θ (parametrized by θ) to minimize the parametric adversarial divergence Div NN (p||q θ ), where the generator network defines the probability distribution q θ over x.Our contributions are the following:• We show that compared to traditional divergences, parametric adversarial divergences offer a good compromise in terms of sample complexity, computation, ability to integrate prior knowledge, flexibility and ease of optimization.•", "We relate structured prediction and generative adversarial networks using statistical decision theory, and argue that they both can be viewed as formalizing a final task into the minimization of a statistical task loss.•", "We explain why it is necessary to choose a divergence that adequately reflects our final task in generative modeling. We", "make a parallel with results in structured learning (also dealing with high-dimensional data), which quantify the importance of choosing a good objective in a specific setting.• We", "explore with some simple experiments how the properties of the discriminator transfer to the adversarial divergence. 
Our", "experiments suggest that parametric adversarial divergences are especially adapted to problems such as image generation, where it is hard to formally define a perceptual loss that correlates well with human judgment.• We", "illustrate the importance of having a parametric discriminator by running experiments with the true (nonparametric) Wasserstein, and showing its shortcomings on complex datasets, on which GANs are known to perform well.• We", "perform qualitative and quantitative experiments to compare maximum-likelihood and parametric adversarial divergences under two settings: very high-dimensional images, and learning data with specific constraints.", "We gave arguments in favor of using adversarial divergences rather than traditional divergences for generative modeling, the most important of which being the ability to account for the final task.", "After linking structured prediction and generative modeling under the framework of statistical decision theory, we interpreted recent results from structured prediction, and related them to the notions of strong and weak divergences.", "Moreover, viewing adversarial divergences as statistical task losses led us to believe that some adversarial divergences could be used as evaluation criteria in the future, replacing hand-crafted criteria which cannot usually be exhaustive.", "In some sense, we want to extrapolate a few desirable properties into a meaningful task loss.", "In the future we would like to investigate how to define meaningful evaluation criteria with minimal human intervention." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.09090908616781235, 0.09756097197532654, 0.380952388048172, 0.2222222238779068, 0.2857142686843872, 0.4000000059604645, 0.25, 0.11999999731779099, 0.19230768084526062, 0.19999998807907104, 0.24561403691768646, 0.21212120354175568, 0.13333332538604736, 0.10526315122842789, 0.052631575614213943, 0.19999998807907104, 0.23529411852359772, 0.04651162400841713, 0.1111111044883728, 0.23999999463558197, 0.1492537260055542, 0.17391303181648254, 0.18947367370128632, 0.2711864411830902, 0.1249999925494194, 0.18867923319339752, 0.27272728085517883, 0.1666666567325592, 0.1666666567325592, 0.19607841968536377, 0.37037035822868347, 0.3214285671710968, 0.21052631735801697, 0.22727271914482117, 0.260869562625885 ]
rkEtzzWAb
true
[ "Parametric adversarial divergences implicitly define more meaningful task losses for generative modeling, we make parallels with structured prediction to study the properties of these divergences and their ability to encode the task of interest." ]
[ "Experimental reproducibility and replicability are critical topics in machine learning.", "Authors have often raised concerns about their lack in scientific publications to improve the quality of the field.", "Recently, the graph representation learning field has attracted the attention of a wide research community, which resulted in a large stream of works.\n", "As such, several Graph Neural Network models have been developed to effectively tackle graph classification.", "However, experimental procedures often lack rigorousness and are hardly reproducible.", "Motivated by this, we provide an overview of common practices that should be avoided to fairly compare with the state of the art.", "To counter this troubling trend, we ran more than 47000 experiments in a controlled and uniform framework to re-evaluate five popular models across nine common benchmarks.", "Moreover, by comparing GNNs with structure-agnostic baselines we provide convincing evidence that, on some datasets, structural information has not been exploited yet.", "We believe that this work can contribute to the development of the graph learning field, by providing a much needed grounding for rigorous evaluations of graph classification models.", "Over the years, researchers have raised concerns about several flaws in scholarship, such as experimental reproducibility and replicability in machine learning (McDermott, 1976; Lipton & Steinhardt, 2018) and science in general (National Academies of Sciences & Medicine, 2019) .", "These issues are not easy to address, as a collective effort is required to avoid bad practices.", "Examples include the ambiguity of experimental procedures, the impossibility of reproducing results and the improper comparison of machine learning models.", "As a result, it can be difficult to uniformly assess the effectiveness of one method against another.", "This work investigates these issues for the graph representation learning field, by providing a uniform and rigorous 
benchmarking of state-of-the-art models.", "Graph Neural Networks (GNNs) (Micheli, 2009; Scarselli et al., 2008) have recently become the standard tool for machine learning on graphs.", "These architectures effectively combine node features and graph topology to build distributed node representations.", "GNNs can be used to solve node classification (Kipf & Welling, 2017) and link prediction tasks, or they can be applied to downstream graph classification (Bacciu et al., 2018) .", "In literature, such models are usually evaluated on chemical and social domains (Xu et al., 2019) .", "Given their appeal, an ever increasing number of GNNs is being developed (Gilmer et al., 2017) .", "However, despite the theoretical advancements reached by the latest contributions in the field, we find that the experimental settings are in many cases ambiguous or not reproducible.", "Some of the most common reproducibility problems we encounter in this field concern hyperparameters selection and the correct usage of data splits for model selection versus model assessment.", "Moreover, the evaluation code is sometimes missing or incomplete, and experiments are not standardized across different works in terms of node and edge features.", "These issues easily generate doubts and confusion among practitioners that need a fully transparent and reproducible experimental setting.", "As a matter of fact, the evaluation of a model goes through two different phases, namely model selection on the validation set and model assessment on the test set.", "Clearly, to fail in keeping these phases well separated could lead to over-optimistic and biased estimates of the true performance of a model, making it hard for other researchers to present competitive results without following the same ambiguous evaluation procedures.", "With this premise, our primary contribution is to provide the graph learning community with a fair performance comparison among GNN architectures, using a standardized and 
reproducible experimental environment.", "More in detail, we performed a large number of experiments within a rigorous model selection and assessment framework, in which all models were compared using the same features and the same data splits.", "Secondly, we investigate if and to what extent current GNN models can effectively exploit graph structure.", "To this end, we add two domain-specific and structure-agnostic baselines, whose purpose is to disentangle the contribution of structural information from node features.", "Much to our surprise, we found out that these baselines can even perform better than GNNs on some datasets; this calls for moderation when reporting improvements that do not clearly outperform structure-agnostic competitors.", "Our last contribution is a study on the effect of node degrees as features in social datasets.", "Indeed, we show that providing the degree can be beneficial in terms of performances, and it has also implications in the number of GNN layers needed to reach good results.", "We publicly release code and dataset splits to reproduce our results, in order to allow other researchers to carry out rigorous evaluations with minimum additional effort 1 .", "Disclaimer Before delving into the work, we would like to clarify that this work does not aim at pinpointing the best (or worst) performing GNN, nor it disavows the effort researchers have put in the development of these models.", "Rather, it is intended to be an attempt to set up a standardized and uniform evaluation framework for GNNs, such that future contributions can be compared fairly and objectively with existing architectures.", "In this paper, we wanted to show how a rigorous empirical evaluation of GNNs can help design future experiments and better reason about the effectiveness of different architectural choices.", "To this aim, we highlighted ambiguities in the experimental settings of different papers, and we proposed a clear and reproducible procedure for future 
comparisons.", "We then provided a complete re-evaluation of five GNNs on nine datasets, which required a significant amount of time and computational resources.", "This uniform environment helped us reason about the role of structure, as we found that structure-agnostic baselines outperform GNNs on some chemical datasets, thus suggesting that structural properties have not been exploited yet.", "Moreover, we objectively analyzed the effect of the degree feature on performances and model selection in social datasets, unveiling an effect on the depth of GNNs.", "Finally, we provide the graph learning community with reliable and reproducible results to which GNN practitioners can compare their architectures.", "We hope that this work, along with the library we release, will prove useful to researchers and practitioners that want to compare GNNs in a more rigorous way." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.06666666269302368, 0.1764705777168274, 0.2857142686843872, 0, 0.11764705181121826, 0.05128204822540283, 0.05714285373687744, 0.3684210479259491, 0.04255318641662598, 0.06896550953388214, 0.13793103396892548, 0.13333332538604736, 0.29411762952804565, 0.22857142984867096, 0.07692307233810425, 0.10256409645080566, 0, 0.06666666269302368, 0, 0.10810810327529907, 0.1111111044883728, 0.06666666269302368, 0.1764705777168274, 0.12244897335767746, 0.20000000298023224, 0.1463414579629898, 0.06896550953388214, 0.0555555522441864, 0.04444444179534912, 0.13333332538604736, 0.04999999701976776, 0.10526315122842789, 0.04081632196903229, 0.0952380895614624, 0.19512194395065308, 0.22857142984867096, 0.1818181723356247, 0.04444444179534912, 0.05882352590560913, 0.12121211737394333, 0.1538461446762085 ]
HygDF6NFPB
true
[ "We provide a rigorous comparison of different Graph Neural Networks for graph classification." ]
[ "Data augmentation (DA) is fundamental against overfitting in large convolutional neural networks, especially with a limited training dataset.", "In images, DA is usually based on heuristic transformations, like geometric or color transformations.", "Instead of using predefined transformations, our work learns data augmentation directly from the training data by learning to transform images with an encoder-decoder architecture combined with a spatial transformer network.", "The transformed images still belong to the same class, but are new, more complex samples for the classifier.", "Our experiments show that our approach is better than previous generative data augmentation methods, and comparable to predefined transformation methods when training an image classifier.", "Convolutional neural networks have shown impressive results in visual recognition tasks.", "However, for proper training and good performance, they require large labeled datasets.", "If the amount of training data is small, data augmentation is an effective way to improve the final performance of the network BID6 ; BID9 ).", "In images, data augmentation (DA) consists of applying predefined transformations such as flip, rotations or color changes BID8 ; BID3 ).", "This approach provides consistent improvements when training a classifier.", "However, the required transformations are dataset dependent.", "For instance, flipping an image horizontally makes sense for natural images, but produces ambiguities on datasets of numbers (e.g. 2 and 5).Several", "recent studies investigate automatic DA learning as a method to avoid the manual selection of transformations. BID10 define", "a large set of transformations and learn how to combine them. This approach", "works well however, as it is based on predefined transformations, it prevents the model from finding other transformations that could be useful for the classifier. 
Alternatively", ", BID2 and BID12 generate new samples via a generative adversarial networks model (GAN) from the probability distribution of the data p(X), while BID0 learn the transformations of images, instead of generating images from scratch. These alternative", "methods show their limits when the number of training samples is low, given the difficulty of training a high-performing generative model with a reduced dataset. BID5 learn the natural", "transformations in a dataset by aligning pairs of samples from the same class. This approach produces", "good results on easy datasets like MNIST however, it does not appear to be applicable to more complex datasets.Our work combines the advantages of generative models and transformation learning approaches in a single end-to-end network architecture. Our model is based on", "a conditional GAN architecture that learns to generate transformations of a given image that are useful for DA. In other words, instead", "of learning to generate samples from p(X), it learns to generate samples from the conditional distribution p(X|X), withX a reference image. As shown in FIG0 , our", "approach combines a global transformation defined by an affine matrix with a more localized transformation defined by ensures that the transformed sample G(x i , z) is dissimilar from the input sample x i but similar to a sample x j from the same class. (b) Given an input image", "x i and a random noise vector z, our generator first performs a global transformation using a spatial transformer network followed by more localized transformations using a convolutional encoder-decoder network.a convolutional encoder-decoder architecture. The global transformations", "are learned by an adaptation of spatial transformer network (STN) BID7 ) so that the entire architecture is differentiable and can be learned with standard back-propagation. 
In its normal use, the purpose", "of STN is to learn how to transform the input data, so that the model becomes invariant to certain transformations. In contrast, our approach uses", "STN to generate augmented samples in an adversarial way. With the proposed model we show", "that, for optimal performance, it is important to jointly train the generator of the augmented samples with the classifier in an end-to-end fashion. By doing that, we can also add", "an adversarial loss between the generator and classifier such that the generated samples are difficult, or adversarial, for the classifier. To summarize, the contributions of this paper are: i) We propose a DA network that", "can automatically learn to generate augmented samples without expensive searches for the optimal data transformations; ii) Our model trains jointly with", "a classifier, is fully differentiable, trainable end-to-end, and can significantly improve the performance of any image classifier; iii) In the low-data regime it outperforms", "models trained with strong predefined DA; iv) Finally, we notice that, for optimal", "performance, it is fundamental to train the model jointly with the image classifier.", "In this work, we have presented a new approach for improving the learning of a classifier through an automatic generation of augmented samples.", "The method is fully differentiable and can be learned end-to-end.", "In our experiments, we have shown several elements contributing to an improved classification performance.", "First, the generator and the classifier should be trained jointly.", "Second, the combined use of global transformations with STN and local transformation with U-Net is essential to reach the highest accuracy levels.", "For future work, we want to include more differentiable transformations such as deformations and color transformations and evaluate how these additional sample augmentations affect the final accuracy." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.12121211737394333, 0.06896550953388214, 0.3720930218696594, 0.1249999925494194, 0.29999998211860657, 0, 0, 0.3333333432674408, 0.1666666567325592, 0.1666666567325592, 0, 0.1538461446762085, 0.1818181723356247, 0.2142857164144516, 0.10256409645080566, 0.12765957415103912, 0.10256409645080566, 0.12903225421905518, 0.18867924809455872, 0.34285715222358704, 0.21052631735801697, 0.15686273574829102, 0.1428571343421936, 0.13333332538604736, 0.10810810327529907, 0.13333332538604736, 0.1904761791229248, 0.17777776718139648, 0.1111111044883728, 0.21052631735801697, 0, 0.2222222238779068, 0.2222222238779068, 0, 0.13793103396892548, 0.0833333283662796, 0.11428570747375488, 0.04999999701976776 ]
BylSX4meOV
true
[ "Automatic Learning of data augmentation using a GAN based architecture to improve an image classifier" ]
[ "We consider the problem of information compression from high dimensional data.", "Where many studies consider the problem of compression by non-invertible transformations, we emphasize the importance of invertible compression.", "We introduce a new class of likelihood-based auto encoders with pseudo bijective architecture, which we call Pseudo Invertible Encoders.", "We provide the theoretical explanation of their principles.", "We evaluate Gaussian Pseudo Invertible Encoder on MNIST, where our model outperforms WAE and VAE in sharpness of the generated images.", "We consider the problem of information compression from high dimensional data.", "Where many studies consider the problem of compression by non-invertible transformations, we emphasize the importance of invertible compression as there are many cases where one cannot or will not decide a priori what part of the information is important and what part is not.", "Compression of images for person ID in a small company requires less resolution than person ID at an airport.", "To lose part of the information without harm to the future purpose of viewing the picture requires knowing the purpose upfront.", "Therefore, the fundamental advantage of invertible information compression is that compression can be undone if a future purpose so requires. Recent advances of classification models have demonstrated that deep learning architectures of proper design do not lead to information loss while still being able to achieve state-of-the-art in classification performance.", "These i-RevNet models BID5 implement a small but essential modification of the popular RevNet models while achieving invertibility and a performance similar to the standard RevNet BID2 .", "This is of great interest as it contradicts the intuition that information loss is essential to achieve good performance in classification BID13 .", "Despite the requirement of the invertibility, flow-based generating models BID0 ; BID11 ; BID6 demonstrate that 
the combination of bijective mappings allows one to transform the raw distribution of the input data to any desired distribution and perform the manipulation of the data. On the other hand, Auto-Encoders have provided the ideal mechanism to reduce the data to the bare minimum while retaining all essential information for a specific task, the one implemented in the loss function.", "Variational Auto Encoders (VAE) BID7 and Wasserstein Auto Encoders (WAE) BID14 are performing best.", "They provide an approach for stable training of autoencoders, which demonstrate good results at reconstruction and generation.", "However, both of these methods involve the optimization of the objective defined on the pixel level.", "We would emphasise the importance of avoiding the separate decoder part and training the model without relying on the reconstruction quality directly. Combining the best of Invertible mappings and Auto-Encoders, we introduce Pseudo Invertible Encoder.", "Our model combines bijectives with restriction and extension of the mappings to the dependent sub-manifolds FIG0 .", "The main contributions of this paper are the following:• We introduce a new class of likelihood-based Auto-Encoders, which we call Pseudo Invertible Encoders.", "We provide the theoretical explanation of their principles.•", "We demonstrate the properties of Gaussian Pseudo Invertible Encoder in manifold learning.•", "We compare our model with WAE and VAE on MNIST, and report that the sharpness of the images, generated by our models is better.", "RELATED WORK", "In this paper we have proposed the new class of Auto Encoders, which we call Pseudo Invertible Encoder.", "We provided a theory which bridges the gap between Auto Encoders and Normalizing Flows.", "The experiments demonstrate that the proposed model learns the manifold structure and generates sharp images." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10526315122842789, 0.1666666567325592, 0.23076923191547394, 0.1249999925494194, 0.06896550953388214, 0.10526315122842789, 0.0952380895614624, 0.07999999821186066, 0.0833333283662796, 0.07843136787414551, 0.06451612710952759, 0.06896550953388214, 0.032258063554763794, 0, 0.07999999821186066, 0.0952380895614624, 0.0555555522441864, 0.17391303181648254, 0.06896550953388214, 0.11764705181121826, 0.0952380895614624, 0.13333332538604736, 0.07999999821186066, 0, 0 ]
SkgiX2Aqtm
true
[ "New Class of Autoencoders with pseudo invertible architecture" ]
[ "We exploit a recently derived inversion scheme for arbitrary deep neural networks to develop a new semi-supervised learning framework that applies to a wide range of systems and problems. \n", "The approach reaches current state-of-the-art methods on MNIST and provides reasonable performances on SVHN and CIFAR10.", "Through the introduced method, residual networks are for the first time applied to semi-supervised tasks.", "Experiments with one-dimensional signals highlight the generality of the method.", "Importantly, our approach is simple, efficient, and requires no change in the deep network architecture.", "Deep neural networks (DNNs) have made great strides recently in a wide range of difficult machine perception tasks.", "They consist of parametric functionals f Θ with internal parameters Θ.", "However, those systems are still trained in a fully supervised fashion using a large set of labeled data, which is tedious and costly to acquire. Semi-supervised learning relaxes this requirement by learning Θ based on two datasets: a labeled set D of N training data pairs and an unlabeled set D u of N u training inputs.", "Unlabeled training data is useful for learning, as unlabeled inputs provide information on the statistical distribution of the data that can both guide the learning required to classify the supervised dataset and characterize the unlabeled samples in D u , hence improving generalization.", "Limited progress has been made on semi-supervised learning algorithms for DNNs BID15 ; BID16 BID14 , but today's methods suffer from a range of drawbacks, including training instability, lack of topology generalization, and computational complexity. In this paper, we take two steps forward in semi-supervised learning for DNNs.", "First, we introduce a universal methodology to equip any deep neural net with an inverse that enables input reconstruction.", "Second, we introduce a new semi-supervised learning approach whose loss function features an additional term based 
on this aforementioned inverse, guiding weight updates such that information contained in unlabeled data is incorporated into the learning process.", "Our key insight is that the defined and general inverse function can be easily derived and computed; thus for unlabeled data points we can both compute and minimize the error between the input signal and the estimate provided by applying the inverse function to the network output, without extra cost or change in the used model.", "The simplicity of this approach, coupled with its universal applicability, promises to significantly advance the purview of semi-supervised and unsupervised learning.", "We have presented a well-justified inversion scheme for deep neural networks with an application to semi-supervised learning.", "The demonstrated ability of the method to beat current state-of-the-art results on MNIST with different possible topologies supports the portability of the technique as well as its potential.", "These results open up many questions in this yet undeveloped area of DNN inversion, input reconstruction, and their impact on learning and stability. Among the possible extensions, one can develop the reconstruction loss into a per-layer reconstruction loss.", "Doing so opens the possibility of weighting each layer's penalty, bringing flexibility as well as meaningful reconstruction.", "Define the per-layer loss as DISPLAYFORM0 with DISPLAYFORM1 Doing so, one can adopt a strategy in favor of a high reconstruction objective for inner layers, close to the final latent representation z (L) , in order to lessen the reconstruction cost for layers closer to the input X n .", "In fact, inputs of standard datasets are usually noisy, with background, and the object of interest only contains a small energy with respect to the total energy of X n .", "Another extension would be to update the weighting while performing learning.", "Hence, if we denote by t the position in time such as the current epoch or batch, we now have 
the previous loss becoming DISPLAYFORM2 One approach would be to impose some deterministic policy based on heuristics, such as favoring reconstruction at the beginning and then switching to classification and entropy minimization.", "Finer approaches could rely on explicit optimization schemes for those coefficients.", "One way to perform this would be to optimize the loss weighting coefficients α, β, γ ( ) after each batch or epoch by backpropagation on the updated weights.", "Define DISPLAYFORM3 as a generic iterative update based on a given policy such as gradient descent.", "One can thus adopt the following update strategy for the hyper-parameters as DISPLAYFORM4 and so for all hyper-parameters.", "Another approach would be to use adversarial training to update those hyper-parameters, where both updates cooperate, trying to accelerate learning. EBGANs BID18 are GANs where the discriminant network D measures the energy of a given input X. D is formulated such that generated data produce high energy and real data produce lower energy.", "The same authors propose the use of an auto-encoder to compute such an energy function.", "We plan to replace this autoencoder using our proposed method to reconstruct X and compute the energy; hence D(X) = R(X) and only one-half of the parameters will be needed for D. Finally, our approach opens the possibility of performing unsupervised tasks such as clustering.", "In fact, by setting α = 0, we are in a fully unsupervised framework.", "Moreover, β can push the mapping f Θ to produce a low-entropy, clustered representation, or rather simply to produce optimal reconstruction.", "Even in a fully unsupervised and reconstruction case (α = 0, β = 1), the proposed framework is not similar to a deep autoencoder for two main reasons.", "First, there is no greedy (per-layer) reconstruction loss; only the final output is considered in the reconstruction loss.", "Second, while in both cases there is parameter sharing, in our case there is also 
\"activation\" sharing that corresponds to the states (splines) used in the forward pass, which will also be used for the backward one.", "In a deep autoencoder, the backward activation states are induced by the backward projection and will most likely not be equal to the forward ones." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.6808510422706604, 0, 0.23529411852359772, 0, 0.05714285373687744, 0.15789473056793213, 0, 0.1230769157409668, 0.1090909019112587, 0.1269841194152832, 0.21052631735801697, 0.1818181723356247, 0.0624999962747097, 0.14999999105930328, 0.6486486196517944, 0.09090908616781235, 0.14814814925193787, 0.05405404791235924, 0.09999999403953552, 0.08888888359069824, 0.12903225421905518, 0.03076922707259655, 0.06451612710952759, 0.04255318641662598, 0.05882352590560913, 0.05714285373687744, 0.0937499925494194, 0.12121211737394333, 0.10169491171836853, 0.11764705181121826, 0.10256409645080566, 0.17777776718139648, 0, 0.0833333283662796, 0.1428571343421936 ]
B1i7ezW0-
true
[ "We exploit an inversion scheme for arbitrary deep neural networks to develop a new semi-supervised learning framework applicable to many topologies." ]
[ "Deep learning has become a widely used tool in many computational and classification problems. \n", "Nevertheless, obtaining and labeling data, which is needed for strong results, is often expensive or even not possible. \n", "In this paper three different algorithmic approaches to deal with limited access to data are evaluated and compared to each other. \n", "We show the drawbacks and benefits of each method. \n", "One successful approach, especially in one- or few-shot learning tasks, is the use of external data during the classification task. \n", "Another successful approach, which achieves state-of-the-art results in semi-supervised learning (SSL) benchmarks, is consistency regularization.\n", "Especially virtual adversarial training (VAT) has shown strong results and will be investigated in this paper. \n", "The aim of consistency regularization is to force the network not to change the output when the input or the network itself is perturbed.\n", "Generative adversarial networks (GANs) have also shown strong empirical results. \n", "In many approaches the GAN architecture is used in order to create additional data and therefore to increase the generalization capability of the classification network.\n", "Furthermore, we consider the use of unlabeled data for further performance improvement. \n", "The use of unlabeled data is investigated both for GANs and VAT. 
\n", "Deep neural networks have shown great performance in a variety of tasks, like speech or image recognition.", "However, extremely large datasets are often necessary to achieve this.", "In real-world applications, collecting data is often very expensive in terms of cost or time.", "Furthermore, collected data is often unbalanced or even incorrectly labeled.", "Hence, performance achieved in academic papers is hard to match. Recently, different approaches have tackled these problems and tried to achieve good performance when otherwise fully supervised baselines failed to do so.", "One approach to learn from very few examples, the so-called few-shot learning task, consists of giving a collection of inputs and their corresponding similarities instead of input-label pairs.", "This approach was thoroughly investigated in BID9 , BID33 , BID28 and gave impressive results tested on the Omniglot dataset BID12 .", "In essence, a task-specific similarity measure is learned that embeds the inputs before comparison. Furthermore, semi-supervised learning (SSL) achieved strong results in image classification tasks.", "In SSL a labeled set of input-target pairs (x, y) ∈ D L and additionally an unlabeled set of inputs x ∈ D U L is given.", "Generally speaking, the use of D U L shall provide additional information about the structure of the data.", "Generative models can be used to create additional labeled or unlabeled samples and leverage information from these samples BID26 , BID18 .", "Furthermore, in BID2 it is argued that GAN-based semi-supervised frameworks perform best when the generated images are of poor quality.", "Using these badly generated images, a classifier with better generalization capability is obtained.", "On the other side, generative models are used to learn feature representations instead of generating additional data. Another approach to deal with limited data is consistency regularization.", "The main point of consistency regularization is that the 
output of the network shall not change when the input or the network itself is perturbed.", "These perturbations may also result in inputs which are no longer realistic.", "This way, a smooth manifold is found on which the data lies.", "Different approaches to consistency regularization can be found in BID15 , BID23 , BID11 , and BID32 . The", "aim of this paper is to investigate how different approaches behave compared to each other. Therefore", "a specific image and sound recognition task is created with varying amounts of labeled data. Beyond that", "it is further explored how different amounts of unlabeled data support the tasks, whilst also varying the size of labeled data. The possible", "accuracy improvement by labeled and unlabeled examples is compared to each other. Since there", "is a reported correlation between category mismatch of unlabeled data and labeled data BID20 , we investigate how this correlation behaves for different approaches and datasets.", "In this paper, three methods for dealing with little data have been compared to each other.", "When the amount of labeled data is very little and no unlabeled data is available, siamese neural networks offer the best alternative in order to achieve good results in terms of accuracy.", "Furthermore, when there is additional unlabeled data available, using GANs or VAT offers a good option.", "VAT outperforms GAN when the amount of data is low.", "In contrast, GANs should be preferred for moderate or high amounts of data.", "Nevertheless, both methods must be tested for any individual use case, since the behavior of these methods may change for different datasets. Surprising results have been obtained on the class mismatch experiment.", "It was observed that adding samples which do not belong to the target classes does not necessarily reduce the accuracy.", "Whether adding such samples improves or reduces the accuracy may heavily depend on how closely these samples/classes are related to the target samples/ 
classes.", "An interesting question remains whether datasets which perform well in transfer learning tasks (e.g. transferring from ImageNet to CIFAR-10) may also be suitable for such semi-supervised learning tasks. Furthermore, any combination of the three examined methods can bear interesting results, e.g. VAT could be applied to the discriminator in the GAN framework.", "Also, a combination of GANs and siamese neural networks could be useful; in this case, the siamese neural network would have two outputs, one for the source and one for the similarity." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2142857164144516, 0.19354838132858276, 0.12121211737394333, 0.260869562625885, 0.1818181723356247, 0.1875, 0.13333332538604736, 0.1249999925494194, 0.0833333283662796, 0.1666666567325592, 0.23076923191547394, 0.38461539149284363, 0.13333332538604736, 0.08695651590824127, 0.06896550953388214, 0, 0.0476190447807312, 0.20000000298023224, 0.05882352590560913, 0.05128204822540283, 0.11428570747375488, 0.0714285671710968, 0.05882352590560913, 0.060606054961681366, 0, 0.05128204822540283, 0.060606054961681366, 0, 0, 0.06896550953388214, 0.0714285671710968, 0.13333332538604736, 0.060606054961681366, 0.07407406717538834, 0.1666666567325592, 0.06896550953388214, 0.20000000298023224, 0.06896550953388214, 0.17391303181648254, 0.1538461446762085, 0.0952380895614624, 0, 0, 0.13793103396892548, 0.2631579041481018 ]
rylU8oRctX
true
[ "Comparison of siamese neural networks, GANs, and VAT for few shot learning. " ]
[ "This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives.", "First, it puts forward a novel concept of \"History of Word\" to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation.", "Second, it identifies an attention scoring function that better utilizes the \"history of word\" concept.", "Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer.", "We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble models on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017).", "Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets, where it sets the new state-of-the-art on both: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.", "Context: The Alpine Rhine is part of the Rhine, a famous European river.", "The Alpine Rhine begins in the most western part of the Swiss canton of Graubünden, and later forms the border between Switzerland to the West and Liechtenstein and later Austria to the East.", "On the other hand, the Danube separates Romania and Bulgaria.", "In this paper, we describe a new deep learning model called FusionNet with its application to machine comprehension.", "FusionNet proposes a novel attention mechanism with the following three contributions:", "1. the concept of history-of-word to build the attention using complete information from the lowest word-level embedding up to the highest semantic-level representation;", "2. an attention scoring function to effectively and efficiently utilize history-of-word;", "3. 
a fully-aware multi-level fusion to exploit information layer by layer discriminatingly.", "We applied FusionNet to the MRC task and experimental results show that FusionNet outperforms existing machine reading models on both the SQuAD dataset and the adversarial SQuAD dataset.", "We believe FusionNet is a general and improved attention mechanism that can be applied to many tasks.", "Our future work is to study its capability in other NLP problems." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.23529411852359772, 0.14999999105930328, 0.0624999962747097, 0.17777776718139648, 0.25, 0.1818181723356247, 0.06666666269302368, 0.09756097197532654, 0.07692307233810425, 0.11428570747375488, 0.14814814925193787, 0.1111111044883728, 0.2142857164144516, 0.1428571343421936, 0.307692289352417, 0.3030303120613098, 0.06896550953388214 ]
BJIgi_eCZ
true
[ "We propose a light-weight enhancement for attention and a neural architecture, FusionNet, to achieve SotA on SQuAD and adversarial SQuAD." ]
[ "We propose a novel hierarchical generative model with a simple Markovian structure and a corresponding inference model.", "Both the generative and inference models are trained using the adversarial learning paradigm.", "We demonstrate that the hierarchical structure supports the learning of progressively more abstract representations as well as providing semantically meaningful reconstructions with different levels of fidelity.", "Furthermore, we show that minimizing the Jensen-Shannon divergence between the generative and inference networks is enough to minimize the reconstruction error.", "The resulting semantically meaningful hierarchical latent structure discovery is exemplified on the CelebA dataset.", "There, we show that the features learned by our model in an unsupervised way outperform the best handcrafted features.", "Furthermore, the extracted features remain competitive when compared to several recent deep supervised approaches on an attribute prediction task on CelebA.", "Finally, we leverage the model's inference network to achieve state-of-the-art performance on a semi-supervised variant of the MNIST digit classification task.", "Deep generative models represent powerful approaches to modeling highly complex high-dimensional data.", "There has been a lot of recent research geared towards the advancement of deep generative modeling strategies, including Variational Autoencoders (VAE) BID16 , autoregressive models BID32 b) and hybrid models BID9 BID31 .", "However, Generative Adversarial Networks (GANs) BID8 have emerged as the learning paradigm of choice across a varied range of tasks, especially in computer vision BID47 , simulation and robotics BID7 BID41 .", "GANs cast the learning of a generative network in the form of a game between the generative and discriminator networks.", "While the discriminator is trained to distinguish between the true and generated examples, the generative model is trained to fool the discriminator.", "Using a 
discriminator network in GANs avoids the need for an explicit reconstruction-based loss function.", "This allows this model class to generate visually sharper images than VAEs while simultaneously enjoying faster sampling than autoregressive models. Recent work, known as either ALI or BiGAN , has shown that the adversarial learning paradigm can be extended to incorporate the learning of an inference network.", "While the inference network, or encoder, maps training examples x to a latent space variable z, the decoder plays the role of the standard GAN generator, mapping from the space of the latent variables (that is typically sampled from some factorial distribution) into the data space.", "In ALI, the discriminator is trained to distinguish between the encoder and the decoder, while the encoder and decoder are trained to conspire together to fool the discriminator.", "Unlike some approaches that hybridize VAE-style inference with GAN-style generative learning (e.g. BID20 ), the encoder and decoder in ALI use a purely adversarial approach.", "One big advantage of adopting an adversarial-only formalism is demonstrated by the high quality of the generated samples.", "Additionally, we are given a mechanism to infer the latent code associated with a true data example. One interesting feature highlighted in the original ALI work is that even though the encoder and decoder models are never explicitly trained to perform reconstruction, this can nevertheless be easily done by projecting data samples via the encoder into the latent space, copying these values across to the latent variable layer of the decoder, and projecting them back to the data space.", "Doing this yields reconstructions that often preserve some semantic features of the original input data, but are perceptually relatively different from the original samples.", "These observations naturally lead to the question of the source of the discrepancy between the data samples and their ALI reconstructions.", "Is the 
discrepancy due to a failure of the adversarial training paradigm, or is it due to the more standard challenge of compressing the information from the data into a rather restrictive latent feature vector?", "BID44 show that an improvement in reconstructions is achievable when additional terms which explicitly minimize reconstruction error in the data space are added to the training objective.", "BID23 palliates the non-identifiability issues pertaining to bidirectional adversarial training by augmenting the generator's loss with an adversarial cycle consistency loss. In this paper we explore issues surrounding the representation of complex, richly-structured data, such as natural images, in the context of a novel, hierarchical generative model, Hierarchical Adversarially Learned Inference (HALI), which represents a hierarchical extension of ALI.", "We show that within a purely adversarial training paradigm, and by exploiting the model's hierarchical structure, we can modulate the perceptual fidelity of the reconstructions.", "We provide theoretical arguments for why HALI's adversarial game should be sufficient to minimize the reconstruction cost and show empirical evidence supporting this perspective.", "Finally, we evaluate the usefulness of the learned representations on a semi-supervised task on MNIST and an attribute prediction task on the CelebA dataset.", "In this paper, we introduced HALI, a novel adversarially trained generative model.", "HALI learns a hierarchy of latent variables with a simple Markovian structure in both the generator and inference networks.", "We have shown both theoretically and empirically the advantages gained by extending the ALI framework to a hierarchy. While there are many potential applications of HALI, one important future direction of research is to explore ways to render the training process more stable and straightforward.", "GANs are well-known to be challenging to train and the introduction of a hierarchy of latent 
variables only adds to this." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.38461539149284363, 0.3333333134651184, 0.17142856121063232, 0.12903225421905518, 0.23076923191547394, 0.13793103396892548, 0, 0, 0.0833333283662796, 0.0952380895614624, 0.0476190447807312, 0.14814814925193787, 0.29629629850387573, 0, 0.0363636314868927, 0.04255318641662598, 0.13333332538604736, 0.1538461446762085, 0, 0.11267605423927307, 0, 0.06896550953388214, 0.05128204822540283, 0, 0.16393442451953888, 0.11428570747375488, 0.0555555522441864, 0.12903225421905518, 0.25, 0.19999998807907104, 0.039215683937072754, 0.13333332538604736 ]
HyXNCZbCZ
true
[ "Adversarially trained hierarchical generative model with robust and semantically learned latent representation." ]
[ "Conservation laws are considered to be fundamental laws of nature.", "They have broad applications in many fields including physics, chemistry, biology, geology, and engineering.", "Solving the differential equations associated with conservation laws is a major branch in computational mathematics.", "The recent success of machine learning, especially deep learning, in areas such as computer vision and natural language processing, has attracted a lot of attention from the community of computational mathematics and inspired many intriguing works in combining machine learning with traditional methods.", "In this paper, we are the first to explore the possibility and benefit of solving nonlinear conservation laws using deep reinforcement learning.", "As a proof of concept, we focus on 1-dimensional scalar conservation laws.", "We deploy the machinery of deep reinforcement learning to train a policy network that can decide how the numerical solutions should be approximated in a sequential and spatial-temporal adaptive manner.", "We will show that the problem of solving conservation laws can be naturally viewed as a sequential decision-making process and that the numerical schemes learned in such a way can easily enforce long-term accuracy. 
\n", "Furthermore, the learned policy network is carefully designed to determine a good local discrete approximation based on the current state of the solution, which essentially makes the proposed method a meta-learning approach.\n", "In other words, the proposed method is capable of learning how to discretize for a given situation, mimicking human experts.", "Finally, we will provide details on how the policy network is trained, how well it performs compared with some state-of-the-art numerical solvers such as WENO schemes, and how well it generalizes.", "Our code is released anonymously at \\url{https://github.com/qwerlanksdf/L2D}.", "Conservation laws are considered to be among the fundamental laws of nature, and have broad applications in multiple fields such as physics, chemistry, biology, geology, and engineering.", "For example, Burgers' equation, a very classic partial differential equation (PDE) in conservation laws, has important applications in fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow.", "Solving the differential equations associated with conservation laws has been a major branch of computational mathematics (LeVeque, 1992; 2002) , and a lot of effective methods have been proposed, from classic methods such as the upwind scheme and the Lax-Friedrichs scheme, to advanced ones such as the ENO/WENO schemes (Liu et al., 1994; Shu, 1998) , the flux-limiter methods (Jerez Galiano & Uh Zapata, 2010) , etc.", "In the past few decades, these traditional methods have been proven successful in solving conservation laws.", "Nonetheless, the design of some of the high-end methods heavily relies on expert knowledge, and the coding of these methods can be a laborious process.", "To ease the usage and potentially improve these traditional algorithms, machine learning, especially deep learning, has recently been incorporated into this field.", "For example, the ENO scheme requires lots of 'if/else' logical judgments when used to solve 
complicated systems of equations or high-dimensional equations.", "This very much resembles the old-fashioned expert systems.", "The recent trend in artificial intelligence (AI) is to replace expert systems with the so-called 'connectionism', e.g., deep neural networks, which has led to the recent bloom of AI.", "Therefore, it is natural and potentially beneficial to introduce deep learning into traditional numerical solvers of conservation laws.", "In this paper, we proposed a general framework to learn how to solve 1-dimensional conservation laws via deep reinforcement learning.", "We first discussed how the procedure of numerically solving conservation laws can be naturally cast in the form of a Markov Decision Process.", "We then elaborated how to relate notions in numerical schemes of PDEs with those of reinforcement learning.", "In particular, we introduced a numerical flux policy which was able to decide how the numerical flux should be designed locally based on the current state of the solution.", "We carefully design the action of our RL policy to make it a meta-learner.", "Our numerical experiments showed that the proposed RL-based solver was able to outperform high-order WENO and generalized well in various cases.", "As part of future work, we would like to consider using the numerical flux policy to infer more complicated numerical fluxes with guaranteed consistency and stability.", "Furthermore, we can use the proposed framework to learn a policy that can generate adaptive grids and the associated numerical schemes.", "Lastly, we would like to consider systems of conservation laws in 2- and 3-dimensional space.", "A COMPLEMENTARY EXPERIMENTS" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.1818181723356247, 0.052631575614213943, 0, 0.06666666269302368, 0.08888888359069824, 0.0555555522441864, 0.2641509473323822, 0.25, 0.037735845893621445, 0.045454539358615875, 0.15686273574829102, 0, 0.20408162474632263, 0.03999999538064003, 0.07894736528396606, 0, 0.13636362552642822, 0.04444443807005882, 0.09090908616781235, 0, 0.039215680211782455, 0.1904761791229248, 0.09302324801683426, 0.1818181723356247, 0.14999999105930328, 0.12244897335767746, 0.10526315122842789, 0.1702127605676651, 0.1249999925494194, 0.2790697515010834, 0.05128204822540283, 0 ]
rygBVTVFPB
true
[ "We observe that numerical PDE solvers can be regarded as Markov Desicion Processes, and propose to use Reinforcement Learning to solve 1D scalar Conservation Laws" ]
[ "We present a neural rendering architecture that helps variational autoencoders (VAEs) learn disentangled representations.", "Instead of the deconvolutional network typically used in the decoder of VAEs, we tile (broadcast) the latent vector across space, concatenate fixed X- and Y-“coordinate” channels, and apply a fully convolutional network with 1x1 stride.", "This provides an architectural prior for dissociating positional from non-positional features in the latent space, yet without providing any explicit supervision to this effect.", "We show that this architecture, which we term the Spatial Broadcast decoder, improves disentangling, reconstruction accuracy, and generalization to held-out regions in data space. ", "We show the Spatial Broadcast Decoder is complementary to state-of-the-art (SOTA) disentangling techniques and when incorporated improves their performance.", "Knowledge transfer and generalization are hallmarks of human intelligence.", "From grammatical generalization when learning a new language to visual generalization when interpreting a Picasso, humans have an extreme ability to recognize and apply learned patterns in new contexts.", "Current machine learning algorithms pale in contrast, suffering from overfitting, adversarial attacks, and domain specialization BID12 BID16 .", "We believe that one fruitful approach to improve generalization in machine learning is to learn compositional representations in an unsupervised manner.", "A compositional representation consists of components that can be recombined, and such recombination underlies generalization.", "For example, consider a pink elephant.", "With a representation that composes color and object independently, imagining a pink elephant is trivial.", "However, a pink elephant may not be within the scope of a representation that mixes color and object.", "Compositionality comes in a variety of flavors, including feature compositionality (e.g. 
pink elephant), multi-object compositionality (e.g. elephant next to a penguin), and relational compositionality (e.g. the smallest elephant).", "In this work we focus on feature compositionality.", "Here we present the Spatial Broadcast decoder for Variational Autoencoders.", "We demonstrate that it improves learned latent representations, most dramatically for datasets with objects varying in position.", "It also improves generalization in latent space and can be incorporated into SOTA models to boost their performance in terms of both disentangling and reconstruction accuracy.", "We believe that learning compositional representations is an important ingredient for flexibility and generalization in many contexts, from supervised learning to reinforcement learning, and the Spatial Broadcast decoder is one step towards robust compositional visual representation learning." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.7407407164573669, 0.09302324801683426, 0.05405404791235924, 0.10526315122842789, 0.0624999962747097, 0, 0.05405404791235924, 0, 0.25, 0.0714285671710968, 0.10526315122842789, 0.14814814925193787, 0.13333332538604736, 0.05405404791235924, 0, 0, 0.19999998807907104, 0.05405404791235924, 0.13333332538604736 ]
S1x7WjnzdV
true
[ "We introduce a neural rendering architecture that helps VAEs learn disentangled latent representations." ]
[ "Similar to humans and animals, deep artificial neural networks exhibit critical periods during which a temporary stimulus deficit can impair the development of a skill.", "The extent of the impairment depends on the onset and length of the deficit window, as in animal models, and on the size of the neural network.", "Deficits that do not affect low-level statistics, such as vertical flipping of the images, have no lasting effect on performance and can be overcome with further training. ", "To better understand this phenomenon, we use the Fisher Information of the weights to measure the effective connectivity between layers of a network during training. ", "Counterintuitively, information rises rapidly in the early phases of training, and then decreases, preventing redistribution of information resources in a phenomenon we refer to as a loss of \"Information Plasticity\". ", "Our analysis suggests that the first few epochs are critical for the creation of strong connections that are optimal relative to the input data distribution.", "Once such strong connections are created, they do not appear to change during additional training.", "These findings suggest that the initial learning transient, under-scrutinized compared to asymptotic behavior, plays a key role in determining the outcome of the training process.", "Our findings, combined with recent theoretical results in the literature, also suggest that forgetting (decrease of information in the weights) is critical to achieving invariance and disentanglement in representation learning.", "Finally, critical periods are not restricted to biological systems, but can emerge naturally in learning systems, whether biological or artificial, due to fundamental constrains arising from learning dynamics and information processing.", "Critical periods are time windows of early post-natal development during which sensory deficits can lead to permanent skill impairment BID12 .", "Researchers have documented critical periods 
affecting a range of species and systems, from visual acuity in kittens BID35 BID33 to song learning in birds BID18 .", "Uncorrected eye defects (e.g., strabismus, cataracts) during the critical period for visual development lead to amblyopia in one in fifty adults. The cause of critical periods is ascribed to the biochemical modulation of windows of neuronal plasticity BID10 .", "In this paper, however, we show that deep neural networks (DNNs), while completely devoid of such regulations, respond to sensory deficits in ways similar to those observed in humans and animal models.", "This surprising result suggests that critical periods may arise from information processing, rather than biochemical, phenomena. We propose using the information in the weights, measured by an efficient approximation of the Fisher Information, to study critical period phenomena in DNNs.", "We show that, counterintuitively, the information in the weights does not increase monotonically during training.", "Instead, a rapid growth in information (\"memorization phase\") is followed by a reduction of information (\"reorganization\" or \"forgetting\" phase), even as classification performance keeps increasing.", "This behavior is consistent across different tasks and network architectures.", "Critical periods are centered in the memorization phase. Performance is permanently impaired if the deficit is not corrected early enough, regardless of how much additional training is performed.", "As in animal models, critical periods coincide with the early learning phase during which test accuracy would rapidly increase in the absence of deficits (dashed).", "(B) For comparison, we report acuity for kittens monocularly deprived since birth and tested at the time of eye-opening (solid), and normal development visual acuity in kittens as a function of age (dashed) BID7 BID23 . artificial", "neural networks (ANNs) are only loosely inspired by biological 
systems (Hassabis et al., 2017) . Most studies", "to date have focused either on the behavior of networks at convergence (Representation Learning) or on the asymptotic properties of the numerical scheme used to get there (Optimization). The role of", "the initial transient, especially its effect in biasing the network towards \"good\" regions of the complex and high-dimensional optimization problem, is rarely addressed. To study this", "initial learning phase of ANNs, we replicate experiments performed in animal models and find that the responses to early deficits are remarkably similar, despite the large underlying differences between the two systems. In particular", ", we show that the quality of the solution depends only minimally on the final, relatively well-understood, phase of the training process or on its very first epochs; instead, it depends critically on the period prior to initial convergence. In animals, sensory deficits introduced during critical periods induce changes in the architecture of the corresponding areas BID4 BID34 BID9 . To determine", "whether a similar phenomenon exists in ANNs, we compute the Fisher Information of the weights of the network as a proxy to measure its \"effective connectivity\", that is, the density of connections that are effectively used by the network in order to solve the task. Like others", "before us BID28 , we observe two distinct phases during the training, first a \"learning phase\" in which the Fisher Information of the weights increases as the network learns from the data, followed by a \"consolidation\" or \"compression\" phase in which the Fisher Information decreases and stabilizes. 
Sensitivity", "to critical-period-inducing deficits is maximal exactly when the Fisher Information peaks.A layer-wise analysis of the network's effective connectivity shows that, in the tasks and deficits we consider, the hierarchy of low-level and high-level features in the training data is a key aspect behind the observed phenomena. In particular", ", our experiments suggest that the existence of critical periods in deep neural networks depends on the inability of the network to change its effective connectivity pattern in order to process different information (in response to deficit removal). We call this", "phenomenon, which is not mediated by any external factors, a loss of the \"Information Plasticity\" of the network.", "Critical periods have thus far been considered an exclusively biological phenomenon.", "At the same time, the analysis of DNNs has focused on asymptotic properties and neglected the initial transient behavior.", "To the best of our knowledge, we are the first to show that artificial neural networks exhibit critical period phenomena, and to highlight the critical role of the transient in determining the asymptotic performance of the network.", "Inspired by the role of synaptic connectivity in modulating critical periods, we introduce the use of Fisher Information to study this initial phase.", "We show that the initial sensitivity to deficits closely follows changes in the FIM, both global, as the network first rapidly increases and then decreases the amount of stored information, and layer-wise, as the network \"reorganizes\" its effective connectivity in order to optimally process information.Our work naturally relates to the extensive literature on critical periods in biology.", "Despite artificial networks being an extremely reductionist approximation of neuronal networks, they exhibit behaviors that are qualitatively similar to the critical periods observed in human and animal models.", "Our information analysis shows that the initial 
rapid memorization phase is followed by a loss of Information Plasticity which, counterintuitively, further improves the performance.", "On the other hand, when combined with the analysis of BID0 this suggests that a \"forgetting\" phase may be desirable, or even necessary, in order to learn robust, nuisance-invariant representations. The existence of two distinct phases of training has been observed and discussed by BID28 , although their analysis builds on the (Shannon) information of the activations, rather than the (Fisher) information in the weights.", "On a multi-layer perceptron (MLP), BID28 empirically link the two phases to a sudden increase in the gradients' covariance.", "It may be tempting to compare these results with our Fisher Information analysis.", "However, it must be noted that the FIM is computed using the gradients with respect to the model prediction, not to the ground truth label, leading to important qualitative differences.", "In Figure 6 , we show that the covariance and norm of the gradients exhibit no clear trends during training with and without deficits, and, therefore, unlike the FIM, do not correlate with the sensitivity to critical periods.", "However, a connection between our FIM analysis and the information in the activations can be established based on the work of BID0 , which shows that the FIM of the weights can be used to bound the information in the activations.", "In fact, we may intuitively expect that pruning of connections naturally leads to loss of information in the corresponding activations.", "Thus, our analysis corroborates and expands on some of the claims of BID28 , while using an independent framework. Aside from being more closely related to the deficit sensitivity during critical periods, Fisher's Information also has a number of technical advantages: Its diagonal is simple to estimate, even on modern state-of-the-art architectures and compelling datasets, and it is 
less sensitive to the choice of estimator of mutual information, avoiding some of the common criticisms of the use of information quantities in the analysis of deep learning models.", "Finally, the FIM allows us to probe fine changes in the effective connectivity across the layers of the network FIG5 ), which are not visible in BID28 .A", "complete analysis of the activations should account not only for the amount of information (both task- and nuisance-related), but also for its accessibility, e.g., how easily task-related information can be extracted by a linear classifier. Following", "a similar idea, BID24 aim to study the layer-wise, or \"spatial\" (but not temporal) evolution of the simplicity of the representation by performing a principal component analysis (PCA) of a radial basis function (RBF) kernel embedding of each layer representation. They show", "that, on a multi-layer perceptron, task-relevant information increasingly concentrates on the first principal components of the representation's embedding, implying that it becomes more easily \"accessible\" layer after layer, while nuisance information (when it is codified at all) is encoded in the remaining components. In our work", "we instead focus on the temporal evolution of the weights. However, it", "'s important to notice that a network with simpler weights (as measured by the FIM) also requires a simpler smooth representation (as measured, e.g., by the RBF embedding) in order to operate properly, since it needs to be resistant to perturbations of the weights. Thus our analysis", "is wholly compatible with the intuitions of BID24 . It would also be", "interesting to study the joint spatio-temporal evolution of the network using both frameworks at once. One advantage of focusing on the information of the weights rather than on the activations, or behavior of the network, is to have a readout of the \"effective connectivity\" during critical periods, which can be compared to similar readouts in animals. 
In fact, \"behavioral", "\" readouts upon deficit removal, both in artificial and neuronal networks, can potentially be confounded by deficit-coping changes at different levels of the visual pathways BID4 BID16 . On the other hand,", "deficits in deprived animals are mirrored by abnormalities in the circuitry of the visual pathways, which we characterize in DNNs using the FIM to study its \"effective connectivity\", i.e., the connections that are actually employed by the network to solve the task. Sensitivity to critical", "periods and the trace of the Fisher Information peak at the same epochs, in accord with the evidence that skill development and critical periods in neuronal networks are modulated by changes (generally experience-dependent) in synaptic plasticity BID16 BID10 . Our layer-wise analysis", "of the Fisher Information FIG5 ) also shows that visual deficits reinforce higher layers to the detriment of intermediate layers, leaving low-level layers virtually untouched. If the deficit is removed", "after the critical period ends, the network is not able to reverse these effects. Although the two systems", "are radically different, a similar response can be found in the visual pathways of animal models: Lower levels (e.g., retina, lateral geniculate nucleus) and higher-level visual areas (e.g., V2 and post-V2) show little remodeling upon deprivation, while most changes happen in different layers of V1 BID34 BID9 ).An insightful interpretation", "of critical periods in animal models was proposed by BID16 : The initial connections of neuronal networks are unstable and easily modified (highly plastic), but as more \"samples\" are observed, they change and reach a more stable configuration which is difficult to modify. Learning can, however, still", "happen within the newly created connectivity pattern. 
This is largely compatible with", "our findings: Sensitivity to critical-period-inducing deficits peaks when connections are remodeled (Figure 4, Left) , and different connectivity profiles are observed in networks trained with and without a deficit ( FIG5 ). Moreover, high-level deficits such", "as image-flipping and label permutation, which do not require restructuring of the network's connections in order to be corrected, do not exhibit a critical period.", "Our goal in this paper is not so much to investigate the human (or animal) brain through artificial networks, as to understand fundamental information processing phenomena, both in their biological or artificial implementations.", "It is also not our goal to suggest that, since they both exhibit critical periods, DNNs are necessarily a valid model of neurobiological information processing, although recent work has emphasized this aspect.", "We engage in an \"Artificial Neuroscience\" exercise in part to address a technological need to develop \"explainable\" artificial intelligence systems whose behavior can be understood and predicted.", "While traditionally well-understood mathematical models were used by neuroscientists to study biological phenomena, information processing in modern artificial networks is often just as poorly understood as in biology, so we chose to exploit well-known biological phenomena as probes to study information processing in artificial networks. Conversely, it would also be interesting to explore ways to test whether biological networks prune connections as a consequence of a loss of Information Plasticity, rather than as a cause.", "The mechanisms underlying network reconfiguration during learning and development might be an evolutionary outcome obtained under the pressure of fundamental information processing phenomena.", "In order to avoid interference between the annealing scheme and the architecture, in these experiments we fix the learning rate to 0.001. The 
Fully Connected network used for the MNIST experiments has hidden layers of size [2500, 2000, 1500, 1000, 500] .", "All hidden layers use batch normalization followed by ReLU activations.", "We fix the learning rate to 0.005.", "Weight decay is not used.", "We use data augmentation with random translations up to 4 pixels and random horizontal flipping.", "For MNIST, we pad the images with zeros to bring them to size 32 × 32." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2222222238779068, 0.20408162474632263, 0.24137930572032928, 0.15094339847564697, 0.3571428656578064, 0.11764705181121826, 0.08888888359069824, 0.22641508281230927, 0.24561403691768646, 0.17543859779834747, 0.23999999463558197, 0.14814814925193787, 0.1875, 0.1666666567325592, 0.1875, 0.1818181723356247, 0.18867923319339752, 0.04999999701976776, 0.1875, 0.18867923319339752, 0.19354838132858276, 0, 0.1090909019112587, 0.18518517911434174, 0.25806450843811035, 0.17499999701976776, 0.1538461446762085, 0.1764705777168274, 0.23188404738903046, 0.1538461446762085, 0.1304347813129425, 0, 0.21276594698429108, 0.27586206793785095, 0.19607841968536377, 0.2631579041481018, 0.27586206793785095, 0.22641508281230927, 0.1904761791229248, 0.1702127605676651, 0.04651162400841713, 0.072727270424366, 0.158730149269104, 0.2461538463830948, 0.2448979616165161, 0.1458333283662796, 0.14814814925193787, 0.1269841194152832, 0.0937499925494194, 0.11428570747375488, 0.09756097197532654, 0.11594202369451523, 0.0952380895614624, 0.18421052396297455, 0.29999998211860657, 0.1515151411294937, 0.15625, 0.1428571343421936, 0.08695651590824127, 0.12820512056350708, 0.19178082048892975, 0.0476190447807312, 0.1269841194152832, 0.22641508281230927, 0.2666666507720947, 0.12903225421905518, 0.1818181723356247, 0.1904761791229248, 0.18867923319339752, 0.14705881476402283, 0, 0.10526315122842789, 0, 0.09090908616781235, 0.09090908616781235 ]
BkeStsCcKQ
true
[ "Sensory deficits in early training phases can lead to irreversible performance loss in both artificial and neuronal networks, suggesting information phenomena as the common cause, and point to the importance of the initial transient and forgetting." ]
[ "We propose a method to incrementally learn an embedding space over the domain of network architectures, to enable the careful selection of architectures for evaluation during compressed architecture search.", "Given a teacher network, we search for a compressed network architecture by using Bayesian Optimization (BO) with a kernel function defined over our proposed embedding space to select architectures for evaluation.", "We demonstrate that our search algorithm can significantly outperform various baseline methods, such as random search and reinforcement learning (Ashok et al., 2018).", "The compressed architectures found by our method are also better than the state-of-the-art manually-designed compact architecture ShuffleNet (Zhang et al., 2018).", "We also demonstrate that the learned embedding space can be transferred to new settings for architecture search, such as a larger teacher network or a teacher network in a different architecture family, without any training.", "In many application domains, it is common practice to make use of well-known deep network architectures (e.g., VGG BID30 , GoogleNet BID33 , ResNet BID8 ) and to adapt them to a new task without optimizing the architecture for that task.", "While this process of transfer learning is surprisingly successful, it often results in over-sized networks which have many redundant or unused parameters.", "Inefficient network architectures can waste computational resources and over-sized networks can prevent them from being used on embedded systems.", "There is a pressing need to develop algorithms that can take large networks with high accuracy as input and compress their size while maintaining similar performance.", "In this paper, we focus on the task of compressed architecture search -the automatic discovery of compressed network architectures based on a given large network.One significant bottleneck of compressed architecture search is the need to repeatedly evaluate different compressed network 
architectures, as each evaluation is extremely costly (e.g., backpropagation to learn the parameters of a single deep network can take several days on a single GPU).", "This means that any efficient search algorithm must be judicious when selecting architectures to evaluate.", "Learning a good embedding space over the domain of compressed network architectures is important because it can be used to define a distribution on the architecture space that can be used to generate a priority ordering of architectures for evaluation.", "To enable the careful selection of architectures for evaluation, we propose a method to incrementally learn an embedding space over the domain of network architectures. In the network compression paradigm, we are given a teacher network and we aim to search for a compressed network architecture, a student network that contains as few parameters as possible while maintaining similar performance to the teacher network.", "We address the task of compressed architecture search by using Bayesian Optimization (BO) with a kernel function defined over our proposed embedding space to select architectures for evaluation.", "As modern neural architectures can", "We address the task of searching for a compressed network architecture by using BO.", "Our proposed method can find more efficient architectures than all the baselines on CIFAR-10 and CIFAR-100.", "Our key contribution is the proposed method to learn an embedding space over the domain of network architectures.", "We also demonstrate that the learned embedding space can be transferred to new settings for architecture search without any training.", "Possible future directions include extending our method to the general NAS problem to search for desired architectures from scratch and combining our proposed embedding space with BID9 to identify the Pareto set of the architectures that are both small and accurate." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 1, 0.4444444477558136, 0.08163265138864517, 0.2083333283662796, 0.3214285671710968, 0.25, 0.0416666604578495, 0.09090908616781235, 0.07692307233810425, 0.3199999928474426, 0.1463414579629898, 0.5, 0.6000000238418579, 0.5185185074806213, 0.06451612710952759, 0.4000000059604645, 0.1428571343421936, 0.5581395030021667, 0.3478260934352875, 0.29999998211860657 ]
S1xLN3C9YX
true
[ "We propose a method to incrementally learn an embedding space over the domain of network architectures, to enable the careful selection of architectures for evaluation during compressed architecture search." ]
[ "Reinforcement Learning (RL) problem can be solved in two different ways - the Value function-based approach and the policy optimization-based approach - to eventually arrive at an optimal policy for the given environment.", "One of the recent breakthroughs in reinforcement learning is the use of deep neural networks as function approximators to approximate the value function or q-function in a reinforcement learning scheme.", "This has led to results with agents automatically learning how to play games like alpha-go showing better-than-human performance.", "Deep Q-learning networks (DQN) and Deep Deterministic Policy Gradient (DDPG) are two such methods that have shown state-of-the-art results in recent times.", "Among the many variants of RL, an important class of problems is where the state and action spaces are continuous --- autonomous robots, autonomous vehicles, optimal control are all examples of such problems that can lend themselves naturally to reinforcement based algorithms, and have continuous state and action spaces.", "In this paper, we adapt and combine approaches such as DQN and DDPG in novel ways to outperform the earlier results for continuous state and action space problems. 
", "We believe these results are a valuable addition to the fast-growing body of results on Reinforcement Learning, more so for continuous state and action space problems.", "Reinforcement learning (RL) is about an agent interacting with the environment, learning an optimal policy, by trail and error, for sequential decision making problems in a wide range of fields such that the agent learns to control a system and maximize a numerical performance measure that expresses a long-term objective BID6 ).", "To summarize, this paper discusses the state of the art methods in reinforcement learning with our improvements that have led to RL algorithms in continuous state and action spaces that outperform the existing ones.The proposed algorithm combines the concept of prioritized action replay with deep deterministic policy gradients.", "As it has been shown, on a majority of the mujoco environments this algorithm vastly outperforms the DDPG algorithm both in terms of overall reward achieved and the average reward for any hundred epochs over the thousand epochs over which both were run.Hence, it can be concluded that the proposed algorithm learns much faster than the DDPG algorithm.", "Secondly, the fact that current reward is higher coupled with the observation that rate of increase in reward also being higher for the proposed algorithm, shows that it is unlikely for DDPG algorithm to surpass the results of the proposed algorithm on that majority of environments.", "Also, certain kinds of noises further improve PDDPG to help attain higher rewards.", "One other important conclusion is that different kinds of noises work better for different environments which was evident in how drastically the results changed based on the parameter noise.The presented algorithm can also be extended and improved further by finding more concepts in value based methods, which can be used in policy based methods.", "The overall improvements in the area of continuous space and action state 
space can help in making reinforcement learning more applicable in real-world scenarios, as real-world systems provide continuous inputs.", "These methods can potentially be extended to safety-critical systems, by incorporating the notion of safety during the training of an RL algorithm.", "This is currently a big challenge because of the necessary unrestricted exploration process of a typical RL algorithm." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.16326530277729034, 0.13636362552642822, 0.052631575614213943, 0.0952380895614624, 0.24137930572032928, 0.2916666567325592, 0.30434781312942505, 0.25, 0.2950819730758667, 0.1230769157409668, 0.11538460850715637, 0.05882352590560913, 0.20895521342754364, 0.3404255211353302, 0.19512194395065308, 0.1621621549129486 ]
ryGiYoAqt7
true
[ "Improving the performance of an RL agent in the continuous action and state space domain by using prioritised experience replay and parameter noise." ]
[ "Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis.", "We introduce Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space.", "We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and a denser interaction tensor contains richer semantic information.", "One instance of such architecture, Densely Interactive Inference Network (DIIN), demonstrates the state-of-the-art performance on large scale NLI copora and large-scale NLI alike corpus.", "It's noteworthy that DIIN achieve a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI) dataset with respect to the strongest published system.", "Natural Language Inference (NLI also known as recognizing textual entiailment, or RTE) task requires one to determine whether the logical relationship between two sentences is among entailment (if the premise is true, then the hypothesis must be true), contradiction (if the premise is true, then the hypothesis must be false) and neutral (neither entailment nor contradiction).", "NLI is known as a fundamental and yet challenging task for natural language understanding , not only because it requires one to identify the language pattern, but also to understand certain common sense knowledge.", "In TAB0 , three samples from MultiNLI corpus show solving the task requires one to handle the full complexity of lexical and compositional semantics.", "The previous work on NLI (or RTE) has extensively researched on conventional approaches BID25 Bos & Markert, 2005; BID39 .", "Recent progress on NLI is enabled by the availability of 570k human annotated dataset and the advancement of representation learning technique.Among the 
core representation learning techniques, the attention mechanism has been broadly applied in many NLU tasks since its introduction: machine translation BID15 , abstractive summarization BID50 , Reading Comprehension , dialog system BID41 , etc.", "As described by BID57 , \"An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key\".", "The attention mechanism is known for its alignment between representations, focusing one part of a representation over another, and modeling the dependency regardless of sequence length.", "Observing attention's powerful capability, we hypothesize that the attention weight can help a machine understand the text. A regular attention weight, the core component of the attention mechanism, encodes the cross-sentence word relationship into an alignment matrix.", "However, a multi-head attention weight Vaswani et al. 
(2017) can encode such interaction into multiple alignment matrices, which shows a more powerful alignment.", "In this work, we push the multi-head attention to an extreme by building a word-by-word dimension-wise alignment tensor which we call the interaction tensor.", "The interaction tensor encodes the high-order alignment relationship between sentence pairs.", "Our experiments demonstrate that by capturing the rich semantic features in the interaction tensor, we are able to solve the natural language inference task well, especially in cases with paraphrase, antonyms and overlapping words. We dub the general framework the Interactive Inference Network (IIN).", "To the best of our knowledge, it is the first attempt to solve the natural language inference task in the interaction space.", "We further explore one instance of the Interactive Inference Network, Densely Interactive Inference Network (DIIN), which achieves new state-of-the-art performance on both the SNLI and MultiNLI corpora.", "To test the generality of the architecture, we interpret the paraphrase identification task as a natural language inference task where matching is entailment and not-matching is neutral.", "We test the model on the Quora Question Pair dataset, which contains over 400k real-world question pairs, and achieve new state-of-the-art performance. We introduce the related work in Section 2, and discuss the general framework of IIN along with a specific instance that enjoys state-of-the-art performance on multiple datasets in Section", "3. We describe experiments and analysis in Section", "4. 
Finally, we conclude and discuss future work in Section 5.", "We show the interaction tensor (or attention weight) contains semantic information to understand natural language.", "We introduce the Interactive Inference Network, a novel class of architectures that allows the model to solve NLI or NLI-alike tasks via extracting semantic features from the interaction tensor end-to-end.", "One instance of such architecture, Densely Interactive Inference Network (DIIN), achieves state-of-the-art performance on multiple datasets.", "By ablating each component in DIIN and changing the dimensionality, we show the effectiveness of each component in DIIN. Though we have made an initial exploration of natural language inference in interaction space, the full potential is not yet clear.", "We will keep exploring the potential of interaction space.", "Incorporating common-sense knowledge from external resources such as knowledge bases to leverage the capacity of the model is another research goal of ours." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ 0.24242423474788666, 0.09090908616781235, 0.42424240708351135, 0, 0.05405404791235924, 0.072727270424366, 0.17777776718139648, 0.1666666567325592, 0, 0.03389830142259598, 0.10526315122842789, 0, 0.13333332538604736, 0.060606054961681366, 0.11764705181121826, 0, 0.26923075318336487, 0.375, 0, 0.24242423474788666, 0.0363636314868927, 0, 0, 0.5, 0.19512194395065308, 0, 0.1904761791229248, 0, 0.060606054961681366 ]
r1dHXnH6-
true
[ "show multi-channel attention weight contains semantic feature to solve natural language inference task." ]
[ "Determinantal Point Processes (DPPs) provide an elegant and versatile way to sample sets of items that balance the point-wise quality with the set-wise diversity of selected items.", "For this reason, they have gained prominence in many machine learning applications that rely on subset selection.", "However, sampling from a DPP over a ground set of size N is a costly operation, requiring in general an O(N^3) preprocessing cost and an O(Nk^3) sampling cost for subsets of size k. We approach this problem by introducing DppNets: generative deep models that produce DPP-like samples for arbitrary ground sets. ", "We develop an inhibitive attention mechanism based on transformer networks that captures a notion of dissimilarity between feature vectors. ", "We show theoretically that such an approximation is sensible as it maintains the guarantees of inhibition or dissimilarity that makes DPP so powerful and unique. ", "Empirically, we demonstrate that samples from our model receive high likelihood under the more expensive DPP alternative.", "Selecting a representative sample of data from a large pool of available candidates is an essential step of a large class of machine learning problems: noteworthy examples include automatic summarization, matrix approximation, and minibatch selection.", "Such problems require sampling schemes that calibrate the tradeoff between the point-wise quality -e.g. 
the relevance of a sentence to a document summary - of selected elements and the set-wise diversity of the sampled set as a whole. Determinantal Point Processes (DPPs) are probabilistic models over subsets of a ground set that elegantly model the tradeoff between these often competing notions of quality and diversity.", "Given a ground set of size N, DPPs allow for O(N^3) sampling over all 2^N possible subsets of elements, assigning to any subset S of a ground set Y of elements the probability P_L(S) = det(L_S) / det(L + I), where L ∈ R^{N×N} is the DPP kernel and L_S = [L_{ij}]_{i,j∈S} denotes the principal submatrix of L indexed by items in S. Intuitively, DPPs measure the volume spanned by the feature embedding of the items in feature space (Figure 1).", "Introduced by BID31 to model the distribution of possible states of fermions obeying the Pauli exclusion principle, the properties of DPPs have since been studied in depth (see e.g. BID19 BID6).", "As DPPs capture repulsive forces between similar elements, they arise in many natural processes, such as the distribution of non-intersecting random walks BID22 , spectra of random matrix ensembles BID37 BID13 , and zero-crossings of polynomials with Gaussian coefficients BID20 .", "More recently, DPPs have become a prominent tool in machine learning due to their elegance and tractability: recent applications include video recommendation BID10 , minibatch selection BID46 , and kernel approximation BID28 BID35 . However", ", the O(N^3) sampling cost makes the practical application of DPPs intractable for large datasets, requiring additional work such as subsampling from Y, structured kernels (Gartrell et al., 2017; BID34), or approximate sampling methods BID2 BID27 BID0 .", "Figure 1: Geometric intuition for DPPs: let φ_i, φ_j be two feature vectors of Φ such that the DPP kernel verifies L = ΦΦ^T; then P_L({i, j}) ∝ Vol(φ_i, φ_j). Increasing", "the norm of a vector (quality) or increasing the angle between the vectors (diversity) increases the volume spanned by the vectors (BID25 , Section 2.2.1). Nonetheless", ", even such methods require significant pre-processing time, and scale poorly with the size of the dataset. Furthermore", ", when dealing with ground sets with variable components, pre-processing costs cannot be amortized, significantly impeding the application of DPPs in practice. These setbacks motivate us to investigate the use of more scalable models to generate high-quality, diverse samples from datasets to obtain highly-scalable methods with flexibility to adapt to constantly changing datasets. Specifically", ", we use generative deep models to approximate the DPP distribution over a ground set of items with both fixed and variable feature representations. We show that", "a simple, carefully constructed neural network, DPPNET, can generate DPP-like samples with very little overhead, while maintaining fundamental theoretical properties of DPP measures. Furthermore,", "we show that DPPNETs can be trivially employed to sample from a conditional DPP (i.e.
sampling S such that A ⊆ S is predefined) and for greedy mode approximation.", "We introduced DPPNETs, generative networks trained on DPPs over static and varying ground sets which enable fast and modular sampling in a wide variety of scenarios.", "We showed experimentally on several datasets and standard DPP applications that DPPNETs obtain competitive performance as evaluated in terms of NLLs, while being amenable to the extensive recent advances in speeding up computation for neural network architectures. Although we trained our models on DPPs with exponentiated quadratic and linear kernels, we can train on any kernel type built from a feature representation of the dataset.", "This is not the case for dual DPP exact sampling, which requires that the DPP kernel be L = ΦΦ^T for faster sampling. DPPNETs are not exchangeable: that is, two sequences i_1, ..., i_k and σ(i_1), ..., σ(i_k), where σ is a permutation of [k], which represent the same set of items, will not in general have the same probability under a DPPNET.", "Exchangeability can be enforced by leveraging previous work BID45 ; however, non-exchangeability can be an asset when sampling a ranking of items. Our models are trained to take as input a fixed-size subset representation; we aim to investigate the ability to take a variable-length encoding as input as future work.", "The scaling of the DPPNET's complexity with the ground set size also remains an open question.", "However, standard tricks to enforce fixed-size ground sets such as sub-sampling from the dataset may be applied to DPPNETs.", "Similarly, if further speedups are necessary, sub-sampling from the ground set - a standard approach for DPP sampling over very large set sizes - can be combined with DPPNET sampling. In light of our results on dataset sampling, the question of whether encoders can be trained to produce encodings conducive to dataset summarization via DPPNETs seems of particular interest.", "Assuming knowledge of the 
(encoding-independent) relative diversity of a large quantity of subsets, an end-to-end training of the encoder and the DPPNET simultaneously may yield interesting results. Finally, although Corollary 1.1 shows the log-submodularity of the DPP can be transferred to a generative model, understanding which additional properties of training distributions may be conserved through careful training remains an open question which we believe to be of high significance to the machine learning community in general. A MAINTAINING LOG-SUBMODULARITY IN THE GENERATIVE MODEL THEOREM 2.", "Let p be a strictly submodular distribution over subsets of a ground set Y, and q be a distribution over the same space such that DISPLAYFORM0 Then q is also submodular. Proof.", "In all the following, we assume that S, T are subsets of a ground set Y such that S ≠ T and S, T ∉ {∅, Y} (the inequalities being immediate in these corner cases)." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.25641024112701416, 0, 0.06896551698446274, 0.05714285373687744, 0.14999999105930328, 0.1875, 0.045454539358615875, 0.158730149269104, 0.026315785944461823, 0.045454539358615875, 0.07692307233810425, 0.04255318641662598, 0, 0, 0.04444444179534912, 0.1249999925494194, 0.03333333134651184, 0.2380952388048172, 0.10256409645080566, 0.09302324801683426, 0.09999999403953552, 0.138888880610466, 0.02985074371099472, 0.03703703358769417, 0.06666666269302368, 0, 0.0624999962747097, 0.05063290894031525, 0.04999999701976776, 0.08695651590824127 ]
B1euhoAcKX
true
[ "We approximate Determinantal Point Processes with neural nets; we justify our model theoretically and empirically." ]
[ "This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature-metric bundle adjustment (BA), which explicitly enforces multi-view geometry constraints in the form of feature-metric error.", "The whole pipeline is differentiable, so that the network can learn suitable features that make the BA problem more tractable.", "Furthermore, this work introduces a novel depth parameterization to recover dense per-pixel depth.", "The network first generates several basis depth maps according to the input image, and optimizes the final depth as a linear combination of these basis depth maps via feature-metric BA.", "The basis depth maps generator is also learned via end-to-end training.", "The whole system nicely combines domain knowledge (i.e. hard-coded multi-view geometry constraints) and deep learning (i.e. feature learning and basis depth maps learning) to address the challenging dense SfM problem.", "Experiments on large-scale real data prove the success of the proposed method.", "The Structure-from-Motion (SfM) problem has been extensively studied in the past few decades.", "Almost all conventional SfM algorithms BID46 BID39 BID16 BID13 jointly optimize scene structures and camera motion via the Bundle-Adjustment (BA) algorithm BID43 BID1 , which minimizes the geometric BID46 BID39 or photometric BID17 BID13 error through the Levenberg-Marquardt (LM) algorithm BID35 .", "Some recent works BID44 attempt to solve SfM using deep learning techniques, but most of them do not enforce the geometric constraints between 3D structures and camera motion in their networks.", "For example, in the recent work DeMoN BID44 , the scene depths and the camera motion are estimated by two individual sub-network branches.", "This paper formulates BA as a differentiable layer, the BA-Layer, to bridge the gap between classic methods and recent deep learning based approaches.", "To this end, we learn a feed-forward multilayer perceptron (MLP) to 
predict the damping factor in the LM algorithm, which makes all involved computation differentiable.", "Furthermore, unlike conventional BA that minimizes geometric or photometric error, our BA-Layer minimizes the distance between aligned CNN feature maps.", "Our novel feature-metric BA takes CNN features of multiple images as inputs and optimizes for the scene structures and camera motion.", "This feature-metric BA is desirable, because it has been observed by BID17 that the geometric BA does not exploit all image information, while the photometric BA is sensitive to moving objects, exposure or white balance changes, etc.", "Most importantly, our BA-Layer can back-propagate loss from scene structures and camera motion to learn appropriate features that are most suitable for structure-from-motion and bundle adjustment.", "In this way, our network hard-codes the multi-view geometry constraints in the BA-Layer and learns suitable feature representations from training data. We strive to estimate a dense per-pixel depth, because dense depth is critical for many tasks such as object detection and robot navigation.", "A major challenge in solving dense per-pixel depth is to find a compact parameterization.", "Direct per-pixel depth is computationally expensive, which makes the network training intractable.", "So we train a network to generate a set of basis depth maps for an arbitrary input image and represent the resulting depth map as a linear combination of these basis depth maps. 2 RELATED WORK. Monocular Depth Estimation Networks. Estimating depth from a monocular image is an ill-posed problem because an infinite number of possible scenes may have produced the same image.", "Before the rise of deep learning based methods, some works predict depth from a single image based on MRF BID37 BID36 , semantic segmentation BID29 , or manually designed features BID27 .", "BID15 propose a multi-scale approach for depth prediction with two CNNs, where a coarse-scale network first predicts the 
scene depth at the global level and then a fine-scale network will refine the local regions.", "This approach was extended in BID14 to handle semantic segmentation and surface normal estimation as well.", "Recently, BID30 propose to use a ResNet BID24 based structure to predict depth, and BID47 construct multi-scale CRFs for depth prediction.", "In comparison, we exploit a monocular image depth estimation network for depth parameterization, which only produces a set of basis depth maps, and the final result will be further improved through optimization. Structure-from-Motion Networks. Recently, some works exploit CNNs to resolve the SfM problem.", "BID22 solve the camera motion by a network from a pair of images with known depth.", "employ two CNNs for depth and camera motion estimation respectively, where both CNNs are trained jointly by minimizing the photometric loss in an unsupervised manner.", "implement the direct method BID40 as a differentiable component to compute camera motion after scene depth is estimated by the method in .", "In BID44 , the scene depth and the camera motion are predicted from optical flow features, which helps it generalize better to unseen data.", "However, the scene depth and the camera motion are solved by two separate network branches, so multi-view geometry constraints between depth and motion are not enforced.", "Recently, propose to solve nonlinear least squares in two-view SfM using an LSTM-RNN BID26 as the optimizer. Our method belongs to this category.", "Unlike all previous works, we propose the BA-Layer to simultaneously predict the scene depth and the camera motion from CNN features, which explicitly enforces multi-view geometry constraints.", "The hard-coded multi-view geometry constraints enable our method to reconstruct more than two images, while most deep learning methods can only handle two images.", "Furthermore, we propose to minimize a feature-metric error instead of the photometric error in to enhance robustness.", 
"This paper presents the BA-Net, a network that explicitly enforces multi-view geometry constraints in terms of feature-metric error.", "It optimizes scene depths and camera motion jointly via feature-metric bundle adjustment.", "The whole pipeline is differentiable and thus end-to-end trainable, such that the features are learned from data to facilitate structure-from-motion.", "The dense depth is parameterized as a linear combination of several basis depth maps generated from the network.", "Our BA-Net nicely combines domain knowledge (hard-coded multi-view geometry constraint) with deep learning (learned feature representation and basis depth maps generator).", "It outperforms conventional BA and recent deep learning based methods.", "DISPLAYFORM0" ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.6818181872367859, 0.17142856121063232, 0.20689654350280762, 0.2380952388048172, 0.0714285671710968, 0.17777776718139648, 0.06896550953388214, 0.25806450843811035, 0.11538460850715637, 0.1249999925494194, 0.052631575614213943, 0.25641024112701416, 0.1463414579629898, 0.1111111044883728, 0.05405404791235924, 0.11999999731779099, 0.1904761791229248, 0.17241379618644714, 0.12903225421905518, 0.13793103396892548, 0.1538461446762085, 0.08695651590824127, 0.13333332538604736, 0.12121211737394333, 0.0555555522441864, 0.178571417927742, 0.25, 0.04878048226237297, 0.1621621549129486, 0.09756097197532654, 0.10810810327529907, 0.20512819290161133, 0.0952380895614624, 0.04999999329447746, 0.1875, 0.2857142686843872, 0.20689654350280762, 0.1621621549129486, 0.1764705777168274, 0.052631575614213943, 0 ]
B1gabhRcYX
true
[ "This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature bundle adjustment (BA)" ]
[ "Temporal Difference Learning with function approximation is known to be unstable.", "Previous work like \citet{sutton2009fast} and \citet{sutton2009convergent} has presented alternative objectives that are stable to minimize.", "However, in practice, TD-learning with neural networks requires various tricks like using a target network that updates slowly \citep{mnih2015human}.", "In this work we propose a constraint on the TD update that minimizes change to the target values.", "This constraint can be applied to the gradients of any TD objective, and can be easily applied to nonlinear function approximation.", "We validate this update by applying our technique to deep Q-learning, and training without a target network.", "We also show that adding this constraint on Baird's counterexample keeps Q-learning from diverging.", "Temporal Difference learning is one of the most important paradigms in Reinforcement Learning (Sutton & Barto).", "Techniques based on nonlinear function approximators and stochastic gradient descent such as deep networks have led to significant breakthroughs in the class of problems that these methods can be applied to BID9 BID13 BID12 .", "However, the most popular methods, such as TD(λ), Q-learning and Sarsa, are not true gradient descent techniques BID2 and do not converge on some simple examples BID0 .", "BID0 and BID1 propose residual gradients as a way to overcome this issue.", "Residual methods, also called backwards bootstrapping, work by splitting the TD error over both the current state and the next state.", "These methods are substantially slower to converge, however, and BID16 show that the fixed point that they converge to is not the desired fixed point of TD-learning methods.", "BID16 propose an alternative objective function formulated by projecting the TD target onto the basis of the linear function approximator, and prove convergence to the fixed point of this projected Bellman error, which is the ideal fixed point for TD 
methods.", "BID5 extend this technique to nonlinear function approximators by projecting instead onto the tangent space of the function at that point.", "Subsequently, BID11 has combined these techniques of residual gradient and projected Bellman error by proposing an oblique projection, and BID8 has shown that the projected Bellman objective is a saddle point formulation which allows a finite sample analysis. However, when using deep networks for approximating the value function, simpler techniques like Q-learning and Sarsa are still used in practice with stabilizing techniques like a target network that is updated more slowly than the actual parameters BID10 . In", "this work, we propose a constraint on the update to the parameters that minimizes the change to target values, freezing the target that we are moving our current predictions towards. Subject", "to this constraint, the update minimizes the TD-error as much as possible. We show", "that this constraint can be easily added to existing techniques, and works with all the techniques mentioned above. We validate our method by showing convergence on Baird's counterexample and a gridworld domain. On the", "gridworld domain we parametrize the value function using a multi-layer perceptron, and show that we do not need a target network.", "In this paper we introduce a constraint on the updates to the parameters for TD learning with function approximation.", "This constraint forces the targets in the Bellman equation to not move when the update is applied to the parameters.", "We enforce this constraint by projecting the gradient of the TD error with respect to the parameters for state s_t onto the space orthogonal to the gradient with respect to the parameters for state s_{t+1}. We", "show in our experiments that this added constraint stops parameters in Baird's counterexample from exploding when we use TD-learning. 
But", "since we do not allow changes to target parameters, this also keeps Residual Gradients from converging to the true values of the Markov Process. On a Gridworld domain we demonstrate that we can perform TD-learning using a 2-layer neural network, without the need for a target network that updates more slowly. We", "compare the solution obtained with DQN and show that it is closer to the solution obtained by tabular policy evaluation. Finally", ", we also show that constrained DQN can learn faster and with less variance on the classical Cartpole domain. For future work, we hope to scale this approach to larger problems such as the Atari domain BID4 . We would", "also like to prove convergence of TD-learning with this added constraint." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.06896550953388214, 0.1818181723356247, 0.2702702581882477, 0.34285715222358704, 0.22857142984867096, 0.4000000059604645, 0.375, 0.05882352590560913, 0.11764705181121826, 0.09302324801683426, 0.19354838132858276, 0.1111111044883728, 0.19999998807907104, 0.16326530277729034, 0.10810810327529907, 0.17283950746059418, 0.2380952388048172, 0.19999998807907104, 0.23999999463558197, 0.3243243098258972, 0.3333333134651184, 0.11764705181121826, 0.19512194395065308, 0.1621621549129486, 0.26229506731033325, 0.2222222238779068, 0.18867924809455872, 0.13793103396892548 ]
Bk-ofQZRb
true
[ "We show that adding a constraint to TD updates stabilizes learning and allows Deep Q-learning without a target network" ]
[ "We propose DuoRC, a novel dataset for Reading Comprehension (RC) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing RC datasets.", "DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie - one from Wikipedia and the other from IMDb - written by two different authors.", "We asked crowdsourced workers to create questions from one version of the plot and a different set of workers to extract or synthesize corresponding answers from the other version.", "This unique characteristic of DuoRC where questions and answers are created from different versions of a document narrating the same underlying story, ensures by design, that there is very little lexical overlap between the questions created from one version and the segments containing the answer in the other version.", "Further, since the two versions have different level of plot detail, narration style, vocabulary, etc., answering questions from the second version requires deeper language understanding and incorporating background knowledge not available in the given text.", "Additionally, the narrative style of passages arising from movie plots (as opposed to typical descriptive passages in existing datasets) exhibits the need to perform complex reasoning over events across multiple sentences.", "Indeed, we observe that state-of-the-art neural RC models which have achieved near human performance on the SQuAD dataset, even when coupled with traditional NLP techniques to address the challenges presented in DuoRC exhibit very poor performance (F1 score of 37.42% on DuoRC v/s 86% on SQuAD dataset).", "This opens up several interesting research avenues wherein DuoRC could complement other Reading Comprehension style datasets to explore novel neural approaches for studying language understanding.", "Natural Language 
Understanding is widely accepted to be one of the key capabilities required for AI systems.", "Scientific progress on this endeavor is measured through multiple tasks such as machine translation, reading comprehension, question-answering, and others, each of which requires the machine to demonstrate the ability to \"comprehend\" the given textual input (apart from other aspects) and achieve their task-specific goals.", "In particular, Reading Comprehension (RC) systems are required to \"understand\" a given text passage as input and then answer questions based on it.", "It is therefore critical, that the dataset benchmarks established for the RC task keep progressing in complexity to reflect the challenges that arise in true language understanding, thereby enabling the development of models and techniques to solve these challenges.For RC in particular, there has been significant progress over the recent years with several benchmark datasets, the most popular of which are the SQuAD dataset BID11 , TriviaQA BID4 , MS MARCO BID8 , MovieQA BID16 and cloze-style datasets BID6 BID9 BID2 .", "However, these benchmarks, owing to both the nature of the passages and the question-answer pairs to evaluate the RC task, have 2 primary limitations in studying language understanding:", "(i) Other than MovieQA, which is a small dataset of 15K QA pairs, all other large-scale RC datasets deal only with factual descriptive passages and not narratives (involving events with causality linkages that require reasoning and background knowledge) which is the case with a lot of real-world content such as story books, movies, news reports, etc.", "(ii) their questions possess a large lexical overlap with segments of the passage, or have a high noise level in Q/A pairs themselves.", "As demonstrated by recent work, this makes it easy for even simple keyword matching algorithms to achieve high accuracy BID19 .", "In fact, these models have been shown to perform poorly in the presence of 
adversarially inserted sentences which have a high word overlap with the question but do not contain the answer BID3 .", "While this problem does not exist in TriviaQA it is admittedly noisy because of the use of distant supervision.", "Similarly, for cloze-style datasets, due to the automatic question generation process, it is very easy for current models to reach near human performance BID1 .", "This therefore limits the complexity in language understanding that a machine is required to demonstrate to do well on the RC task.Motivated by these shortcomings and to push the state-of-the-art in language understanding in RC, in this paper we propose DuoRC, which specifically presents the following challenges beyond the existing datasets:1.", "DuoRC is especially designed to contain a large number of questions with low lexical overlap between questions and their corresponding passages.2.", "It requires the use of background and common-sense knowledge to arrive at the answer and go beyond the content of the passage itself.3.", "It contains narrative passages from movie plots that require complex reasoning across multiple sentences to infer the answer.4.", "Several of the questions in DuoRC, while seeming relevant, cannot actually be answered from the given passage, thereby requiring the machine to detect the unanswerability of questions.In order to capture these four challenges, DuoRC contains QA pairs created from pairs of documents describing movie plots which were gathered as follows.", "Each document in a pair is a different version of the same movie plot written by different authors; one version of the plot is taken from the Wikipedia page of the movie whereas the other from its IMDb page (see FIG0 for portions of an example pair of plots from the movie \"Twelve Monkeys\").", "We first showed crowd workers on Amazon Mechanical Turk (AMT) the first version of the plot and asked them to create QA pairs from it.", "We then showed the second version of the plot along 
with the questions created from the first version to a different set of workers on AMT and asked them to provide answers by reading the second version only.", "Since the two versions contain different levels of plot detail, narration style, vocabulary, etc., answering questions from the second version exhibits all of the four challenges mentioned above.We now make several interesting observations from the example in FIG0 .", "For 4 out of the 8 questions (Q1, Q2, Q4, and Q7), though the answers extracted from the two plots are exactly the same, the analysis required to arrive at this answer is very different in the two cases.", "In particular, for Q1 even though there is no explicit mention of the prisoner living in a subterranean shelter and hence no lexical overlap with the question, the workers were still able to infer that the answer is Philadelphia because that is the city to which James Cole travels to for his mission.", "Another interesting characteristic of this dataset is that for a few questions (Q6, Q8) alternative but valid answers are obtained from the second plot.", "Further, note the kind of complex reasoning required for answering Q8 where the machine needs to resolve coreferences over multiple sentences (that man refers to Dr. Peters) and use common sense knowledge that if an item clears an airport screening, then a person can likely board the plane with it.", "To re-emphasize, these examples exhibit the need for machines to demonstrate new capabilities in RC such as:", "(i) employing a knowledge graph (e.g. to know that Philadelphia is a city in Q1),", "(ii) common-sense knowledge (e.g., clearing airport security implies boarding)", "(iii) paraphrase/semantic understanding (e.g. 
revolver is a type of handgun in Q7)", "(iv) multiple-sentence inferencing across events in the passage including coreference resolution of named entities and nouns, and", "(v) educated guesswork when the question is not directly answerable but there are subtle hints in the passage (as in Q1).", "Finally, for quite a few questions, there wasn't sufficient information in the second plot to obtain their answers.", "In such cases, the workers marked the question as \"unanswerable\".", "This brings out a very important challenge for machines to exhibit (i.e. detect unanswerability of questions) because a practical system should be able to know when it is not possible for it to answer a particular question given the data available to it, and in such cases, possibly delegate the task to a human instead.Current RC systems built using existing datasets are far from possessing these capabilities to solve the above challenges.", "In Section 4, we seek to establish solid baselines for DuoRC employing state-of-the-art RC models coupled with a collection of standard NLP techniques to address few of the above challenges.", "Proposing novel neural models that solve all of the challenges in DuoRC is out of the scope of this paper.", "Our experiments demonstrate that when the existing state-of-the-art RC systems are trained and evaluated on DuoRC they perform poorly leaving a lot of scope for improvement and open new avenues for research in RC.", "Do note that this dataset is not a substitute for existing RC datasets but can be coupled with them to collectively address a large set of challenges in language understanding with RC (the more the merrier).", "The results of our experiments are summarized in TAB3 which we discuss in the following sub-sections.", "• SpanModel v/s GenModel: Comparing the first two rows (SelfRC) and the last two rows (ParaphraseRC) of TAB4 we see that the SpanModel clearly outperforms the GenModel.", "This is not very surprising for two reasons.", "First, 
around 70% (and 50%) of the answers in SelfRC (and ParaphraseRC) respectively, match an exact span in the document, so the span-based model still has scope to do well on these answers.", "On the other hand, even if the first stage of the GenModel predicts the span correctly, the second stage could make an error in generating the correct answer from it because generation is a harder problem.", "For the second stage, it is expected that the GenModel should learn to copy the predicted span to produce the answer output (as is required in most cases) and only occasionally, where necessary, generate an answer.", "However, surprisingly the GenModel fails to even do this.", "Manual inspection of the generated answers shows that in many cases the generator ends up generating either more or fewer words compared to the true answer.", "This demonstrates that there is clearly scope for the GenModel to perform better.•", "SelfRC v/s ParaphraseRC: Comparing the SelfRC and ParaphraseRC numbers in TAB4 , we observe that the performance of the models clearly drops for the latter task, thus validating our hypothesis that ParaphraseRC is indeed a much harder task.•", "Effect of NLP pre-processing: As mentioned in Section 4, for ParaphraseRC, we first perform a few pre-processing steps to identify relevant sentences in the longer document. 
In", "order to evaluate whether the pre-processing method is effective, we compute: (i", ") the percentage of the document that gets pruned, and (", "ii) whether the true answer is present in the pruned document (i.e., average recall of the answer).", "We can compute the recall only for the span-based subset of the data since for the remaining data we do not know the true span.", "In TAB3 , we report these two quantities for the span-based subset using different pruning strategies.", "Finally, comparing the SpanModel with and without Paraphrasing in TAB4 for ParaphraseRC, we observe that the pre-processing step indeed improves the performance of the Span Detection Model.•", "Effect of oracle pre-processing: As noted in Section 3, the ParaphraseRC plot is almost double in length in comparison to the SelfRC plot, which, while adding to the complexity of the former task, is clearly not the primary reason for the model's poor performance on it. To", "empirically validate this, we perform an Oracle pre-processing step, where, starting with the knowledge of the span containing the true answer, we extract a subplot around it such that the span is randomly located within that subplot and the average length of the subplot is similar to the SelfRC plots. The", "SpanModel with this Oracle-preprocessed data exhibits a minor improvement in performance over that with rule-based preprocessing (1.6% in Accuracy and 4.3% in F1 over the Span Test), still failing to bridge the wide performance gap between the SelfRC and ParaphraseRC task.• Cross", "Testing: We wanted to examine whether a model trained on SelfRC performs well on ParaphraseRC and vice-versa. We also", "wanted to evaluate if merging the two datasets improves the performance of the model. For this", "we experimented with various combinations of train and test data. The results", "of these experiments for the SpanModel are summarized in TAB5 . We make two", "main observations. 
First, training", "on one dataset and evaluating on the other results in a drop in performance. Merging the training", "data from the two datasets exhibits better performance on the individual test sets. Based on our experiments and empirical observations, we believe that the DuoRC dataset indeed holds a lot of potential for advancing the horizon of complex language understanding by exposing newer challenges in this area.", "In this paper we introduced DuoRC, a large-scale RC dataset of 186K human-generated question-answer pairs created from 7680 pairs of parallel movie plots, each pair taken from Wikipedia and IMDb.", "We then showed that this dataset, by design, ensures very little or no lexical overlap between the questions created from one version and the segments containing the answer in the other version.", "With this, we hope to introduce the RC community to new research challenges on question-answering requiring external knowledge and common-sense driven reasoning, deeper language understanding and multiple-sentence inferencing.", "Through our experiments, we show how the state-of-the-art RC models, which have achieved near-human performance on the SQuAD dataset, perform poorly on our dataset, thus emphasizing the need to explore further avenues for research." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.3544303774833679, 0.3529411852359772, 0.3199999928474426, 0.35164836049079895, 0.25581395626068115, 0.19999998807907104, 0.10638297349214554, 0.20779220759868622, 0.14492753148078918, 0.1538461446762085, 0.18666666746139526, 0.1538461446762085, 0.18421052396297455, 0.178217813372612, 0.18918918073177338, 0.0833333283662796, 0.1463414579629898, 0.05714285373687744, 0.10810810327529907, 0.23655913770198822, 0.21917808055877686, 0.11267605423927307, 0.19718308746814728, 0.25531914830207825, 0.2790697515010834, 0.2666666507720947, 0.32098764181137085, 0.18390804529190063, 0.16470587253570557, 0.1702127605676651, 0.21052631735801697, 0.18367347121238708, 0.11594202369451523, 0.05970148742198944, 0, 0.0923076868057251, 0.0882352888584137, 0.05633802339434624, 0.1428571343421936, 0.06557376682758331, 0.17543859779834747, 0.17499999701976776, 0.08695651590824127, 0.14457830786705017, 0.21176470816135406, 0.05970148742198944, 0.08219178020954132, 0.03333333134651184, 0.07499999552965164, 0.12195121496915817, 0.0731707289814949, 0.06557376682758331, 0.053333330899477005, 0.09090908616781235, 0.16470587253570557, 0.12820512056350708, 0.0624999962747097, 0.09677419066429138, 0.057971011847257614, 0.11267605423927307, 0.05882352590560913, 0.10389609634876251, 0.11363635957241058, 0.15555554628372192, 0.13333332538604736, 0.11594202369451523, 0.09090908616781235, 0.0624999962747097, 0.1230769231915474, 0, 0.1818181723356247, 0.23404255509376526, 0.3037974536418915, 0.375, 0.23076923191547394, 0.12195121496915817 ]
HJQhyvYwG
true
[ "We propose DuoRC, a novel dataset for Reading Comprehension (RC) containing 186,089 human-generated QA pairs created from a collection of 7680 pairs of parallel movie plots and introduce a RC task of reading one version of the plot and answering questions created from the other version; thus by design, requiring complex reasoning and deeper language understanding to overcome the poor lexical overlap between the plot and the question." ]
[ "We consider the problem of weakly supervised structured prediction (SP) with reinforcement learning (RL) – for example, given a database table and a question, perform a sequence of computation actions on the table, which generates a response and receives a binary success-failure reward. ", "This line of research has been successful by leveraging RL to directly optimize the desired metrics of the SP tasks – for example, the accuracy in question answering or BLEU score in machine translation. ", "However, different from the common RL settings, the environment dynamics is deterministic in SP, which hasn’t been fully utilized by the model-free RL methods that are usually applied.", "Since SP models usually have full access to the environment dynamics, we propose to apply model-based RL methods, which rely on planning as a primary model component.", "We demonstrate the effectiveness of planning-based SP with a Neural Program Planner (NPP), which, given a set of candidate programs from a pretrained search policy, decides which program is the most promising considering all the information generated from executing these programs.", "We evaluate NPP on weakly supervised program synthesis from natural language (semantic parsing) by stacked learning a planning module based on pretrained search policies.", "On the WIKITABLEQUESTIONS benchmark, NPP achieves a new state-of-the-art of 47.2% accuracy.", "Numerous results from natural language processing tasks have shown that Structured Prediction (SP) can be cast into a reinforcement learning (RL) framework, and known RL techniques can give formal performance bounds on SP tasks BID3 BID13 BID0 .", "RL also directly optimizes task metrics, such as the accuracy in question answering or BLEU score in machine translation, and avoids the exposure bias problem when compared to maximum likelihood training that is commonly used in SP BID13 BID12 . However", ", previous works on applying RL to SP problems often use model-free RL 
algorithms (e.g., REINFORCE or actor-critic) and fail to leverage the characteristics of SP, which are different from typical RL tasks, e.g., playing video games BID9 or the game of Go BID15 . In most", "SP problems conditioned on the input x, the environment dynamics, except for the reward signal, is known, deterministic, reversible, and therefore can be searched. This means", "that there is a perfect model 1 of the environment, which can be used to apply model-based RL methods that utilize planning 2 as a primary model component. Take semantic parsing BID1 BID11 as an example: semantic parsers trained by RL such as Neural Semantic Machine (NSM) BID8 typically rely on beam search for inference - the program with the highest probability in the beam is used for execution and generating the answer. However, the", "policy, which is used for beam search, may not be able to assign the highest probability to the correct program. (Footnote 1: A model of the environment usually means anything that an agent can use to predict how the environment will respond to its actions BID17 . Footnote 2: Planning usually refers to any computational process that takes a model as input and produces or improves a policy for interacting with the modeled environment BID17 .) This limitation", "is due to the policy predicting locally normalized probabilities for each possible action based on the partially generated program, and the probability of a program is a product of these local probabilities. For example, when applied to the WEBQUESTIONSSP task, NSM made mistakes with two common patterns: (1) the program would ignore important information in the context; (2) the generated program does not execute to a reasonable output, but still receives high probability (spurious programs). Resolving this", "issue requires using the information of the full program and its execution output to further evaluate its quality based on the context, which can be seen as planning. 
This can be observed", "in Figure 4 where the model is asked a question \"Which programming is played the most?\". The full context of", "the input table (shown in TAB0 ) contains programming for a television station. The top program generated", "by a search policy produces the wrong answer, filtering by a column not relevant to the question. If provided the correct contextual", "features, and if allowed to evaluate the full program forward and backward through time, we observe that a planning model would be able to better evaluate which program would produce the correct answer. To handle errors related to context, we propose to train a value function to compute the utility of each token in a program. This utility is evaluated by considering", "the program and token probability as well as the attention mask generated by the sequence-to-sequence (seq2seq) model for the underlying policy. We also introduce beam and question context", "with a binary feature representing overlap from question/program and program/program, such as how many programs share a token at a given timestep. In the experiments, we found that applying a planner that uses a learned value function to re-rank the candidates in the beam can significantly and consistently improve the accuracy. On the WIKITABLEQUESTIONS benchmark, we improve", "the state-of-the-art by 0.9%, achieving an accuracy of 47.2%.", "Reinforcement learning applied to structured prediction suffers from limited use of the world model as well as not being able to consider future and past program context when generating a sequence.", "To overcome these limitations, we proposed Neural Program Planner (NPP), which is a planning step after candidate program generation.", "We show that an additional planning model can better evaluate overall structure value.", "When applied to a difficult SP task, NPP improves the state of the art by 0.9% and allows intuitive analysis of its scoring model per program token." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.04255318641662598, 0, 0, 0.2222222238779068, 0, 0.1249999925494194, 0, 0.04444444179534912, 0, 0.039215683937072754, 0.05882352590560913, 0.17391304671764374, 0.0882352888584137, 0.02857142686843872, 0.10810810327529907, 0, 0, 0, 0.035087715834379196, 0, 0, 0, 0, 0.06896550953388214, 0.08695651590824127, 0.09999999403953552 ]
HJxPAFgEON
true
[ "A model-based planning component improves RL-based semantic parsing on WikiTableQuestions." ]
[ "Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost.", "To address this cost, a number of quantization schemes have been proposed - but most of these techniques focused on quantizing weights, which are relatively smaller in size compared to activations.", "This paper proposes a novel quantization scheme for activations during training - that enables neural networks to work well with ultra low precision weights and activations without any significant accuracy degradation. ", "This technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale.", "PACT allows quantizing activations to arbitrary bit precisions, while achieving much better accuracy relative to published state-of-the-art quantization schemes.", "We show, for the first time, that both weights and activations can be quantized to 4-bits of precision while still achieving accuracy comparable to full precision networks across a range of popular models and datasets.", "We also show that exploiting these reduced-precision computational units in hardware can enable a super-linear improvement in inferencing performance due to a significant reduction in the area of accelerator compute engines coupled with the ability to retain the quantized model and activation data in on-chip memories.", "Deep Convolutional Neural Networks (CNNs) have achieved remarkable accuracy for tasks in a wide range of application domains including image processing (He et al. (2016b) ), machine translation (Gehring et al. (2017) ), and speech recognition (Zhang et al. 
(2017) ).", "These state-of-the-art CNNs use very deep models, consuming 100s of ExaOps of computation during training and GBs of storage for model and data.", "This poses a tremendous challenge to widespread deployment, especially in resource-constrained edge environments - leading to a plethora of explorations in compressed models that minimize memory footprint and computation while preserving model accuracy as much as possible. Recently, a whole host of different techniques have been proposed to alleviate these computational costs.", "Among them, reducing the bit-precision of key CNN data structures, namely weights and activations, has gained attention due to its potential to significantly reduce both storage requirements and computational complexity.", "In particular, several weight quantization techniques (Li & Liu (2016) and Zhu et al. (2017) ) showed significant reduction in the bit-precision of CNN weights with limited accuracy degradation.", "However, prior work (Hubara et al. (2016b) ; Zhou et al. (2016) ) has shown that a straightforward extension of weight quantization schemes to activations incurs significant accuracy degradation in large-scale image classification tasks such as ImageNet (Russakovsky et al. (2015) ).", "Recently, activation quantization schemes based on greedy layer-wise optimization were proposed (Park et al. (2017) ; Graham (2017) ; Cai et al. 
(2017) ), but achieve limited accuracy improvement. In this paper, we propose a novel activation quantization technique, PArameterized Clipping acTivation function (PACT), that automatically optimizes the quantization scales during model training.", "PACT allows significant reductions in the bit-widths needed to represent both weights and activations and opens up new opportunities for trading off hardware complexity with model accuracy. The primary contributions of this work include: 1) PACT: A new activation quantization scheme for finding the optimal quantization scale during training.", "We introduce a new parameter α that is used to represent the clipping level in the activation function and is learnt via back-propagation.", "α sets the quantization scale smaller than ReLU to reduce the quantization error, but larger than a conventional clipping activation function (used in previous schemes) to allow gradients to flow more effectively.", "In addition, regularization is applied to α in the loss function to enable faster convergence. 2) We provide reasoning and analysis on the expected effectiveness of PACT in preserving model accuracy. 3) Quantitative results demonstrating the effectiveness of PACT on a spectrum of models and datasets.", "Empirically, we show that:", "(a) for extremely low bit-precision (≤ 2-bits for weights and activations), PACT achieves the highest model accuracy compared to all published schemes and", "(b) 4-bit quantized CNNs based on PACT achieve accuracies similar to single-precision floating point representations. 4) System performance analysis to demonstrate the trade-offs in hardware complexity for different bit representations vs. 
model accuracy.", "We show that a dramatic reduction in the area of the computing engines is possible and use it to estimate the achievable system-level performance gains. The rest of the paper is organized as follows: Section 2 provides a summary of related prior work on quantized CNNs.", "Challenges in activation quantization are presented in Section", "3. We present PACT, our proposed solution for activation quantization in Section", "4. In Section 5 we demonstrate the effectiveness of PACT relative to prior schemes using experimental results on popular CNNs.", "Overall system performance analysis for a representative hardware system is presented in Section 6 demonstrating the observed trade-offs in hardware complexity for different bit representations." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1764705777168274, 0.16326530277729034, 0.039215680211782455, 0.2380952388048172, 0.10810810327529907, 0.07843136787414551, 0.10169491171836853, 0.1090909019112587, 0.04999999329447746, 0.03076922707259655, 0.0833333283662796, 0.12244897335767746, 0.07017543166875839, 0.12121211737394333, 0.21875, 0.24390242993831635, 0.21276594698429108, 0.072727270424366, 0, 0.04878048226237297, 0.039215680211782455, 0.06779660284519196, 0.14814814925193787, 0.1249999925494194, 0.09999999403953552, 0.04878048226237297 ]
By5ugjyCb
true
[ "A new way of quantizing activation of Deep Neural Network via parameterized clipping which optimizes the quantization scale via stochastic gradient descent." ]
[ "Pruning neural network parameters is often viewed as a means to compress models, but pruning has also been motivated by the desire to prevent overfitting.", "This motivation is particularly relevant given the perhaps surprising observation that a wide variety of pruning approaches increase test accuracy despite sometimes massive reductions in parameter counts.", "To better understand this phenomenon, we analyze the behavior of pruning over the course of training, finding that pruning's effect on generalization relies more on the instability it generates (defined as the drops in test accuracy immediately following pruning) than on the final size of the pruned model.", "We demonstrate that even the pruning of unimportant parameters can lead to such instability, and show similarities between pruning and regularizing by injecting noise, suggesting a mechanism for pruning-based generalization improvements that is compatible with the strong generalization recently observed in over-parameterized networks.", "Pruning weights and/or convolutional filters from deep neural networks (DNNs) can substantially shrink parameter counts with minimal loss in accuracy (LeCun et al., 1990; Hassibi & Stork, 1993; Han et al., 2015a; Li et al., 2016; Molchanov et al., 2017; Louizos et al., 2017; Liu et al., 2017; Ye et al., 2018) , enabling broader application of DNNs via reductions in memory-footprint and inference-FLOPs requirements.", "Moreover, many pruning methods have been found to actually improve generalization (measured by model accuracy on previously unobserved inputs) (Narang et al., 2017; Frankle & Carbin, 2018; You et al., 2019) .", "Consistent with this, pruning was originally motivated as a means to prevent over-parameterized networks from overfitting to comparatively small datasets (LeCun et al., 1990) .", "Concern about over-parameterizing models has weakened, however, as many recent studies have found that adding parameters can actually reduce a DNN's 
generalization-gap (the drop in performance when moving from previously seen to previously unseen inputs), even though it has been shown that the same networks have enough parameters to fit large datasets of randomized data (Neyshabur et al., 2014; Zhang et al., 2016) .", "Potential explanations for this unintuitive phenomenon have come via experiments (Keskar et al., 2016; Morcos et al., 2018; Yao et al., 2018; Belkin et al., 2018; Nagarajan & Kolter, 2019) , and the derivation of bounds on DNN generalization-gaps that suggest less overfitting might occur as parameter counts increase (Neyshabur et al., 2018) .", "This research has implications for neural network pruning, where a puzzling question has arisen: if larger parameter counts don't increase overfitting, how does pruning parameters throughout training improve generalization?", "To address this question we first introduce the notion of pruning instability, which we define to be the size of the drop in network accuracy caused by a pruning iteration (Section 3).", "We then empirically analyze the instability and generalization associated with various magnitude-pruning (Han et al., 2015b) algorithms in different settings, making the following contributions:", "1. We find a tradeoff between the stability and potential generalization benefits of pruning, and show iterative pruning's similarity to regularizing with noise-suggesting a mechanism unrelated to parameter counts through which pruning appears to affect generalization.", "2. 
We characterize the properties of pruning algorithms which lead to instability and correspondingly higher generalization.", "In this study, we defined the notion of pruning algorithm instability, and applied several pruning approaches to multiple neural networks, assessing the approaches' effects on instability and generalization.", "Throughout these experiments, we observed that pruning algorithms that generated more instability led to better generalization (as measured by test accuracy).", "For a given pruning target and total pruning percentage, instability and generalization could be fueled by raising iterative pruning rates (Figure 4, Section 4.3).", "Additionally, targeting more important weights, again holding total parameters pruned constant, led to more instability and generalization than targeting less important weights (Figure 1, Section 4.1).", "These results support the idea that the generalization benefits of pruning cannot be explained solely by pruning's effect on parameter counts; the properties of the pruning algorithm must be taken into account.", "Our analysis also suggests that the capacity effects of weight removal may not even be necessary to explain how pruning improves generalization.", "Indeed, we provide an interpretation of iterative pruning as noise injection, a popular approach to regularizing DNNs, and find that making pruning noise impermanent provides pruning-like generalization benefits while not removing as much capacity as permanent pruning (Figure 5, Section 4.4)." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1304347813129425, 0.12244897335767746, 0.19672130048274994, 0.19999998807907104, 0.0555555522441864, 0.07692307233810425, 0.04347825422883034, 0.05128204822540283, 0.1230769157409668, 0.03999999538064003, 0.20408162474632263, 0.17391303181648254, 0.18867923319339752, 0.31578946113586426, 0.2083333283662796, 0.1428571343421936, 0.13636362552642822, 0.08695651590824127, 0.2083333283662796, 0.1860465109348297, 0.09836065024137497 ]
B1eCk1StPH
true
[ "We demonstrate that pruning methods which introduce greater instability into the loss also confer improved generalization, and explore the mechanisms underlying this effect." ]
[ "Neural networks trained with backpropagation, the standard algorithm of deep learning which uses weight transport, are easily fooled by existing gradient-based adversarial attacks.", "This class of attacks is based on certain small perturbations of the inputs to make networks misclassify them.", "We show that less biologically implausible deep neural networks trained with feedback alignment, which do not use weight transport, can be harder to fool, providing actual robustness.", "Tested on MNIST, deep neural networks trained without weight transport (1) have an adversarial accuracy of 98% compared to 0.03% for neural networks trained with backpropagation and (2) generate non-transferable adversarial examples.", "However, this gap decreases on CIFAR-10 but is still significant, particularly for small perturbation magnitudes less than 1/2.", "Deep neural networks trained with backpropagation (BP) are not robust against certain hardly perceptible perturbations, known as adversarial examples, which are found by slightly altering the network input and nudging it along the gradient of the network's loss function [1] .", "The feedback-path synaptic weights of these networks use the transpose of the forward-path synaptic weights to run error propagation.", "This problem is commonly named the weight transport problem.", "Here we consider more biologically plausible neural networks introduced by Lillicrap et al. [2] to run error propagation using feedback-path weights that are not the transpose of the forward-path ones, i.e. 
without weight transport.", "This mechanism was called feedback alignment (FA).", "The introduction of a separate feedback path in [2] in the form of random fixed synaptic weights makes the feedback gradients a rough approximation of those computed by backpropagation.", "Since gradient-based adversarial attacks are very sensitive to the quality of gradients to perturb the input and fool the neural network, we suspect that the gradients computed without weight transport cannot be accurate enough to design successful gradient-based attacks.", "Here we compare the robustness of neural networks trained with either BP or FA on three well-known gradient-based attacks, namely the fast gradient sign method (FGSM) [3] , the basic iterative method (BIM) and the momentum iterative fast gradient sign method (MI-FGSM) [4] .", "To the best of our knowledge, no prior adversarial attacks have been applied for deep neural networks without weight transport.", "We perform an empirical evaluation investigating both the robustness of deep neural networks without weight transport and the transferability of adversarial examples generated with gradient-based attacks.", "The results on MNIST clearly show that (1) FA networks are robust to adversarial examples generated with FA and (2) the adversarial examples generated by FA are not transferable to BP networks.", "On the other hand, we find that these two conclusions are not true on CIFAR-10 even if FA networks showed a significant robustness to gradient-based attacks. (In Figure 1b , we denote by \"BP → FA\" the generation of adversarial examples using BP to fool the FA network, and \"FA → BP\" the generation of adversarial examples using FA to fool the BP network.)", "Therefore, one should consider performing more exhaustive analysis on more complex datasets to understand the impact of the approximated gradients provided by feedback alignment on the adversarial accuracy of biologically plausible neural networks attacked with gradient-based 
methods." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.21052631735801697, 0.1249999925494194, 0.5238094925880432, 0.3636363446712494, 0, 0.11538460850715637, 0.13333332538604736, 0.17391303181648254, 0.2857142686843872, 0, 0, 0.30434781312942505, 0.12244897335767746, 0.34285715222358704, 0.307692289352417, 0.10256409645080566, 0.0952380895614624, 0.1702127605676651 ]
HkgSXQtIIB
true
[ "Less biologically implausible deep neural networks trained without weight transport can be harder to fool." ]
[ "Chemical reactions can be described as the stepwise redistribution of electrons in molecules.", "As such, reactions are often depicted using \"arrow-pushing\" diagrams which show this movement as a sequence of arrows.", "We propose an electron path prediction model (ELECTRO) to learn these sequences directly from raw reaction data.", "Instead of predicting product molecules directly from reactant molecules in one shot, learning a model of electron movement has the benefits of", "(a) being easy for chemists to interpret,", "(b) incorporating constraints of chemistry, such as balanced atom counts before and after the reaction, and", "(c) naturally encoding the sparsity of chemical reactions, which usually involve changes in only a small number of atoms in the reactants.", "We design a method to extract approximate reaction paths from any dataset of atom-mapped reaction SMILES strings.", "Our model achieves excellent performance on an important subset of the USPTO reaction dataset, comparing favorably to the strongest baselines.", "Furthermore, we show that our model recovers a basic knowledge of chemistry without being explicitly trained to do so.", "The ability to reliably predict the products of chemical reactions is of central importance to the manufacture of medicines and materials, and to understand many processes in molecular biology.", "Theoretically, all chemical reactions can be described by the stepwise rearrangement of electrons in molecules (Herges, 1994b) .", "This sequence of bond-making and breaking is known as the reaction mechanism.", "Understanding the reaction mechanism is crucial because it not only determines the products (formed at the last step of the mechanism), but it also provides insight into why the products are formed on an atomistic level.", "Mechanisms can be treated at different levels of abstraction.", "On the lowest level, quantum-mechanical simulations of the electronic structure can be performed, which are prohibitively 
computationally expensive for most systems of interest.", "On the other end, chemical reactions can be treated as rules that \"rewrite\" reactant molecules to products, which abstracts away the individual electron redistribution steps into a single, global transformation step.", "To combine the advantages of both approaches, chemists use a powerful qualitative model of quantum chemistry colloquially called \"arrow pushing\", which simplifies the stepwise electron shifts using sequences of arrows which indicate the path of electrons throughout molecular graphs (Herges, 1994b) . Recently", ", there have been a number of machine learning models proposed for directly predicting the products of chemical reactions (BID2 Jin et al., 2017; Schwaller et al., 2018; Segler and Waller, 2017a; Segler et al., 2018; Wei et al., 2016) , largely using graph-based or machine translation models. The task", "of reaction product prediction is shown on the left-hand side of FIG0 . In this paper", "we propose a machine learning model to predict the reaction mechanism, as shown on the right-hand side of FIG0 , for a particularly important subset of organic reactions. We argue that", "our model is not only more interpretable than product prediction models, but also allows easier encoding of constraints imposed by chemistry. (FIG0 caption, left: The reaction product prediction problem: Given the reactants and reagents, predict the structure of the product. Right: The reaction", "mechanism", "prediction problem: Given the reactants and reagents, predict how the reaction occurred to form the products.) Proposed approaches to", "predicting reaction mechanisms have often been based on combining hand-coded heuristics and quantum mechanics (BID0 Kim et al., 2018; Nandi et al., 2017; Segler and Waller, 2017b; Rappoport et al., 2014; Simm and Reiher, 2017; Zimmerman, 2013) , rather than using machine learning. 
We call our model ELECTRO", as it directly predicts the path of electrons through molecules (i.e., the reaction mechanism). To train the model we devise", "a general technique to obtain approximate reaction mechanisms purely from data about the reactants and products. This allows one to train our", "model on large, unannotated reaction datasets such as USPTO (Lowe, 2012) . We demonstrate that not only", "does our model achieve impressive results; surprisingly, it also learns chemical properties it was not explicitly trained on.", "In this paper we proposed ELECTRO, a model for predicting electron paths for reactions with linear electron flow.", "These electron paths, or reaction mechanisms, describe how molecules react together.", "Our model", "(i) produces output that is easy for chemists to interpret, and", "(ii) exploits the sparsity and compositionality involved in chemical reactions.", "As a byproduct of predicting reaction mechanisms we are also able to perform reaction product prediction, comparing favorably to the strongest baselines on this task." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.12903225421905518, 0.1111111044883728, 0.4571428596973419, 0.37837836146354675, 0.07999999821186066, 0.12121211737394333, 0.1621621549129486, 0.23529411852359772, 0.21621620655059814, 0.21621620655059814, 0.09756097197532654, 0.11428570747375488, 0.19999998807907104, 0.1249999925494194, 0.07407406717538834, 0.1538461446762085, 0.2083333283662796, 0.18518517911434174, 0.178571417927742, 0.25, 0.30434781312942505, 0.24242423474788666, 0.18867924809455872, 0.06779660284519196, 0.2631579041481018, 0.25641024112701416, 0.22857142984867096, 0.11428570747375488, 0.23529411852359772, 0.13793103396892548, 0.13793103396892548, 0.0714285671710968, 0.19512194395065308 ]
r1x4BnCqKX
true
[ "A generative model for reaction prediction that learns the mechanistic electron steps of a reaction directly from raw reaction data." ]
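The ELECTRO record above describes predicting a reaction mechanism as a linear electron path through a molecular graph. As a minimal, hypothetical sketch of that framing (none of this is the paper's code; the uniform `step_probs` stand-in and the toy graph are invented for illustration), an electron path can be scored as the product of per-step probabilities over adjacent atoms:

```python
import math

def step_probs(current_atom, candidates):
    """Toy stand-in for a learned per-step distribution over next atoms."""
    # Uniform over the candidate atoms adjacent to the current one.
    p = 1.0 / len(candidates)
    return {a: p for a in candidates}

def path_log_prob(path, adjacency):
    """Log-probability of an electron path under the toy step model."""
    logp = 0.0
    for src, dst in zip(path, path[1:]):
        probs = step_probs(src, adjacency[src])
        if dst not in probs:
            return float("-inf")  # step not allowed by the graph
        logp += math.log(probs[dst])
    return logp

# Tiny molecular graph: atom 0 bonded to 1 and 2; atom 1 bonded to 3.
adjacency = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(path_log_prob([0, 1, 3], adjacency))  # feasible path
print(path_log_prob([0, 3], adjacency))     # infeasible step -> -inf
```

In the real model the per-step distribution would be produced by a learned network conditioned on the molecular graph; the factorization into sequential steps is the point being illustrated.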
[ "When machine learning models are used for high-stakes decisions, they should predict accurately, fairly, and responsibly.", "To fulfill these three requirements, a model must be able to output a reject option (i.e. say \"I Don't Know\") when it is not qualified to make a prediction.", "In this work, we propose learning to defer, a method by which a model can defer judgment to a downstream decision-maker such as a human user.", "We show that learning to defer generalizes the rejection learning framework in two ways: by considering the effect of other agents in the decision-making process, and by allowing for optimization of complex objectives.", "We propose a learning algorithm which accounts for potential biases held by decision-makers later in a pipeline.", "Experiments on real-world datasets demonstrate that learning", "to defer can make a model not only more accurate but also less biased.", "Even when", "operated by highly biased users, we show that", "deferring models can still greatly improve the fairness of the entire pipeline.", "Recent machine learning advances have increased our reliance on learned automated systems in complex, high-stakes domains such as loan approvals BID6 , medical diagnosis BID12 , and criminal justice BID22 .", "This growing use of automated decision-making has raised questions about the obligations of classification systems.", "In many high-stakes situations, machine learning systems should satisfy (at least) three objectives: predict accurately (predictions should be broadly effective indicators of ground truth), predict fairly (predictions should be unbiased with respect to different types of input), and predict responsibly (predictions should not be made if the model cannot confidently satisfy the previous two objectives). Given", "these requirements, we propose learning to defer. 
When", "deferring, the algorithm does not output a prediction; rather it says \"I Don't Know\" (IDK), indicating it has insufficient information to make a responsible prediction, and that a more qualified external decision-maker (DM) is required. For", "example, in medical diagnosis, the deferred cases would lead to more medical tests, and a second expert opinion. Learning", "to defer extends the common rejection learning framework (Chow, 1957; BID9) in two ways. Firstly,", "it considers the expected output of the DM on each example, more accurately optimizing the output of the joint DM-model system. Furthermore", ", it can be used with a variety of training objectives, whereas most rejection learning research focuses solely on classification accuracy. We believe", "that algorithms that can defer, i.e., yield to more informed decision-makers when they cannot predict responsibly, are an essential component to accountable and reliable automated systems. In this work, we show that the standard rejection learning paradigm (learning to punt) is inadequate if these models are intended to work as part of a larger system. We propose", "an alternative decision making framework (learning to defer) to learn and evaluate these models. We find that", "embedding a deferring model in a pipeline can improve the accuracy and fairness of the pipeline as a whole, particularly if the model has insight into decision makers later in the pipeline. We simulate", "such a pipeline where our model can defer judgment to a better-informed decision maker, echoing real-world situations where downstream decision makers have more resources or information. We propose", "different formulations of these models along with a learning algorithm for training a model that can work optimally with such a decision-maker. 
Our experimental", "results on two real-world datasets, from the legal and health domains, show that this algorithm learns models which, through deferring, can work with users to make fairer, more responsible decisions.", "In this work, we propose the idea of learning to defer.", "We propose a model which learns to defer fairly, and show that these models can better navigate the accuracy-fairness tradeoff.", "We also consider deferring models as one part of a decision pipeline.", "To this end, we provide a framework for evaluating deferring models by incorporating other decision makers' output into learning.", "We give an algorithm for learning to defer in the context of a larger system, and show how to train a deferring model to optimize the performance of the pipeline as a whole.", "This is a powerful, general framework, with ramifications for many complex domains where automated models interact with other decision-making agents.", "A model with a low deferral rate could be used to cull a large pool of examples, with all deferrals requiring further examination.", "Conversely, a model with a high deferral rate can be thought of as flagging the most troublesome, incorrect, or biased decisions by a DM, with all non-deferrals requiring further investigation.", "Automated models often operate within larger systems, with many moving parts.", "Through deferring, we show how models can learn to predict responsibly within their surrounding systems.", "Automated models often operate within larger systems, with many moving parts.", "Through deferring, we show how models can learn to predict responsibly within their surrounding systems.", "Building models which can defer to more capable decision makers is an essential step towards fairer, more responsible machine learning." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.04651162400841713, 0.14814814925193787, 0.20408162474632263, 0.18518517911434174, 0.0476190447807312, 0, 0.1463414579629898, 0, 0.2631579041481018, 0.0357142798602581, 0.1463414579629898, 0.11267605423927307, 0.05714285373687744, 0.1666666567325592, 0.17777776718139648, 0.0952380895614624, 0.09090908616781235, 0.11999999731779099, 0.19999998807907104, 0.09302324801683426, 0.38461539149284363, 0.1538461446762085, 0.1249999925494194, 0.17543859779834747, 0.21052631735801697, 0.21276594698429108, 0.10256409645080566, 0.1304347813129425, 0.18867923319339752, 0.08695651590824127, 0.1249999925494194, 0.14814814925193787, 0, 0.0952380895614624, 0, 0.0952380895614624, 0.08695651590824127 ]
SJUX_MWCZ
true
[ "Incorporating the ability to say I-don't-know can improve the fairness of a classifier without sacrificing too much accuracy, and this improvement magnifies when the classifier has insight into downstream decision-making." ]
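The learning-to-defer record above centers on a model that outputs a reject option and yields to a downstream decision-maker (DM). A toy sketch of that control flow (the confidence threshold and DM behaviour are illustrative assumptions, not the paper's learned objective, which trains the deferral decision jointly with the DM's expected output):

```python
def predict_or_defer(model_prob, dm_prediction, threshold=0.8):
    """Return (label, deferred) for a binary task.

    model_prob: model's predicted probability of class 1.
    dm_prediction: label the downstream decision-maker would output.
    """
    confidence = max(model_prob, 1.0 - model_prob)
    if confidence >= threshold:
        # Model is qualified to predict on its own.
        return (1 if model_prob >= 0.5 else 0), False
    # Say "I Don't Know" and defer to the DM.
    return dm_prediction, True

print(predict_or_defer(0.95, dm_prediction=0))  # confident: (1, False)
print(predict_or_defer(0.55, dm_prediction=0))  # uncertain: defers, (0, True)
```

A learned deferral policy would replace the fixed threshold with a rule optimized against the joint DM-model system's accuracy and fairness.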
[ "Hierarchical Task Network (HTN) planners generate plans using a decomposition process guided by extra domain knowledge that steers the search toward a solution of the planning task.", "While many HTN planners can make calls to external processes (e.g. to a simulator interface) during the decomposition process, such calls are computationally expensive, so planner implementations often use them in an ad-hoc way, relying on very specialized domain knowledge to limit the number of calls.", "Conversely, the few classical planners that are capable of using external calls (often called semantic attachments) during planning do so in much more limited ways by generating a fixed number of ground operators at problem grounding time.", "In this paper we develop the notion of semantic attachments for HTN planning using semi co-routines, allowing such procedurally defined predicates to link the planning process to custom unifications outside of the planner.", "The resulting planner can then use such co-routines as part of its backtracking mechanism to search through parallel dimensions of the state-space (e.g. 
through numeric variables).", "We show empirically that our planner outperforms the state-of-the-art numeric planners in a number of domains using minimal extra domain knowledge.", "Planning in domains that require numerical variables, for example, to drive robots in the physical world, must represent and search through a space defined by real-valued functions with a potentially infinite domain, range, or both.", "This type of numeric planning problem poses challenges in two ways.", "First, the description formalisms BID6 might not make it easy to express the numeric functions and its variables, resulting in a description process that is time consuming and error-prone for real-world domains BID17 .", "Second, the planners that try to solve such numeric problems must find efficient strategies to find solutions through this type of state-space.", "Previous work on formalisms for domains with numeric values developed the Semantic Attachment (SA) construct BID3 ) in classical planning.", "Semantic attachments were coined by (Weyhrauch 1981) to describe the attachment of an interpretation to a predicate symbol using an external procedure.", "Such construct allows the planner to reason about fluents where numeric values come from externally defined functions.", "In this paper, we extend the basic notion of semantic attachment for HTN planning by defining the semantics of the functions used as semantic attachments in a way that allows the HTN search and backtracking mechanism to be substantially more efficient.", "Our current approach focused on depth-first search HTN implementation without heuristic guidance, with free variables expected to be fullyground before task decomposition continues.Most planners are limited to purely symbolic operations, lacking structures to optimize usage of continuous resources involving numeric values BID9 .", "Floating point numeric values, unlike discrete logical symbols, have an infinite domain and are harder to compare as one must 
consider rounding errors.", "One could overcome such errors with delta comparisons, but this solution becomes cumbersome as objects are represented by several numeric values which must be handled and compared as one, such as points or polygons.", "Planning descriptions usually simplify such complex objects to symbolic values (e.g. p25 or poly2) that are easier to compare.", "Detailed numeric values are ignored during planning or left to be decided later, which may force replanning BID17 .", "Instead of simplifying the description or doing multiple comparisons in the description itself, our goal is to exploit external formalisms orthogonal to the symbolic description.", "To achieve that we build a mapping from symbols to objects generated as we query semantic attachments.", "Semantic attachments have already been used in classical planning BID3 ) to unify values just like predicates, and their main advantage is that new users do not need to discern between them and common predicates.", "Thus, we extend classical HTN planning algorithms and their formalism to support semantic attachment queries.", "While external function calls map to functions defined outside the HTN description, we implement SAs as semi co-routines BID1 , subroutines that suspend and resume their state, to iterate across zero or more values provided one at a time by an external implementation, mitigating the potentially infinite range of the external function.Our contributions are threefold.", "First, we introduce SAs for HTN planning as a mechanism to describe and evaluate external predicates at execution time.", "Second, we introduce a symbol-object table to improve the readability of symbolic descriptions and the plans generated, while making it easier to handle external objects and structures.", "Finally, we empirically compare the resulting HTN planner with a modern classical planner BID10 in a number of mixed symbolic/numeric domains showing substantial gains in speed with minimal domain 
knowledge.", "We developed a notion of semantic attachments for HTN planners that not only allows a domain expert to easily define external numerical functions for real-world domains, but also provides substantial improvements on planning speed over comparable classical planning approaches.", "The use of semantic attachments improves the planning speed as one can express a potentially infinite state representation with procedures that can be exploited by a strategy described as HTN tasks.", "As only semantic attachments present in the path decomposed during planning are evaluated, a smaller amount of time is required when compared with approaches that precompute every possible value during operator grounding.", "Our description language is arguably more readable than the commonly used strategy of developing a domain specific planner with customized heuristics.", "Specifically, we allow designers to easily define external functions in a way that is readable within the domain knowledge encoded in HTN methods at design time, and also dynamically generate symbolic representations of external values at planning time, which makes generated plans easier to understand.Our work is the first attempt at defining the syntax and operation of semantic attachments for HTNs, allowing further research on search in SA-enabled domains within HTN planners.", "Future work includes implementing a cache to reuse previous values from external procedures applied to similar previous states BID4 ) and a generic construction to access such values in the symbolic layer, to obtain data from explored branches outside the state structure, i.e. 
to hold mutually exclusive predicate information.", "We plan to develop more domains, with varying levels of domain knowledge and SA usage, to obtain better comparison with other planners and their resulting plan quality.", "The advantage of being able to exploit external implementations conflicts with the ability to incorporate such domain knowledge into heuristic functions, as such knowledge is outside the description.", "Further work is required to expose possible metrics from a SA to heuristic functions." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1666666567325592, 0.1428571343421936, 0.19607843458652496, 0.23255813121795654, 0.04999999701976776, 0.0555555522441864, 0.0416666641831398, 0.07692307233810425, 0.04444444179534912, 0.05714285373687744, 0.05714285373687744, 0.17142856121063232, 0.0624999962747097, 0.11999999731779099, 0.1071428507566452, 0.052631575614213943, 0, 0.05882352590560913, 0.12121211737394333, 0.11428570747375488, 0.06451612710952759, 0.1249999925494194, 0.19999998807907104, 0.1230769231915474, 0.4117647111415863, 0.10256409645080566, 0.04878048226237297, 0.15686273574829102, 0.1395348757505417, 0.043478257954120636, 0, 0.13513512909412384, 0.1071428507566452, 0.052631575614213943, 0.10256409645080566, 0.0714285671710968 ]
BkxAIrBZtE
true
[ "An approach to perform HTN planning using external procedures to evaluate predicates at runtime (semantic attachments)." ]
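The HTN record above implements semantic attachments as semi co-routines that yield candidate numeric bindings one at a time, so the planner's backtracking can pull the next value on demand instead of grounding a potentially infinite numeric range up front. Python generators capture this suspend/resume behaviour; the sketch below is a hypothetical illustration (the function names and the obstacle predicate are invented), not the planner's actual code:

```python
def positions_within(origin, limit, step=1.0):
    """Semantic attachment sketch: lazily yield candidate x-positions
    within `limit` of `origin`, one binding at a time."""
    x = origin - limit
    while x <= origin + limit:
        yield x       # suspend here; resume when the planner backtracks
        x += step

def first_valid(candidates, is_valid):
    """Backtracking loop: resume the co-routine until a binding succeeds."""
    for value in candidates:
        if is_valid(value):
            return value
    return None  # attachment exhausted: no usable binding

# Find the first candidate position that clears a (toy) obstacle at x < 1.5.
binding = first_valid(positions_within(0.0, 3.0), lambda x: x >= 1.5)
print(binding)  # 2.0
```

The key property is laziness: only the candidates actually visited during decomposition are ever computed, mirroring the paper's point that semantic attachments are evaluated only along the decomposed path.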
[ "Recent literature suggests that averaged word vectors followed by simple post-processing outperform many deep learning methods on semantic textual similarity tasks.", "Furthermore, when averaged word vectors are trained supervised on large corpora of paraphrases, they achieve state-of-the-art results on standard STS benchmarks.", "Inspired by these insights, we push the limits of word embeddings even further.", "We propose a novel fuzzy bag-of-words (FBoW) representation for text that contains all the words in the vocabulary simultaneously but with different degrees of membership, which are derived from similarities between word vectors.", "We show that max-pooled word vectors are only a special case of fuzzy BoW and should be compared via fuzzy Jaccard index rather than cosine similarity.", "Finally, we propose DynaMax, a completely unsupervised and non-parametric similarity measure that dynamically extracts and max-pools good features depending on the sentence pair.", "This method is both efficient and easy to implement, yet outperforms current baselines on STS tasks by a large margin and is even competitive with supervised word vectors trained to directly optimise cosine similarity.", "Natural languages are able to encode sentences with similar meanings using very different vocabulary and grammatical constructs, which makes determining the semantic similarity between pieces of text a challenge.", "It is common to cast semantic similarity between sentences as the proximity of their vector representations.", "More than half a century since it was first proposed, the Bag-of-Words (BoW) representation (Harris, 1954; BID47 BID37 remains a popular baseline across machine learning (ML), natural language processing (NLP), and information retrieval (IR) communities.", "In recent years, however, BoW was largely eclipsed by representations learned through neural networks, ranging from shallow BID36 BID21 to recurrent BID12 BID53 , recursive BID51 BID55 , convolutional 
BID30 BID32 , self-attentive BID57 BID9 and hybrid architectures BID19 BID56 BID66 . Interestingly", ", BID5 showed that averaged word vectors BID38 BID44 BID6 BID29 weighted with the Smooth Inverse Frequency (SIF) scheme and followed by a Principal Component Analysis (PCA) post-processing procedure were a formidable baseline for Semantic Textual Similarity (STS) tasks, outperforming deep representations. Furthermore,", "BID59 and BID58 showed that averaged word vectors trained supervised on large corpora of paraphrases achieve state-of-the-art results, outperforming even the supervised systems trained directly on STS. Inspired by these insights, we push the boundaries of word vectors even further. We propose a", "novel fuzzy bag-of-words (FBoW) representation for text. Unlike classical", "BoW, fuzzy BoW contains all the words in the vocabulary simultaneously but with different degrees of membership, which are derived from similarities between word vectors. Next, we show that max-pooled word vectors are a special case of fuzzy BoW. Max-pooling significantly", "outperforms averaging on standard benchmarks when word vectors are trained unsupervised. Since max-pooled vectors are just a special case of fuzzy BoW, we show that the fuzzy Jaccard index is a more suitable alternative to cosine similarity for comparing these representations. By contrast, the fuzzy Jaccard", "index completely fails for averaged word vectors as there is no connection between the two. 
The max-pooling operation is commonplace", "throughout NLP and has been successfully used to extract features in supervised systems BID10 BID32 BID31 BID13 BID12 BID15 ; however, to the best of our knowledge, the present work is the first to study max-pooling of pre-trained word embeddings in isolation and to suggest theoretical underpinnings behind this operation. Finally, we propose DynaMax, a completely unsupervised and non-parametric similarity measure that dynamically extracts and max-pools good features depending on the sentence pair. DynaMax outperforms averaged word vectors", "with cosine similarity on every benchmark STS task when word vectors are trained unsupervised. It even performs comparably to BID58's", "vectors under cosine similarity, which is a striking result as the latter are in fact trained supervised to directly optimise cosine similarity between paraphrases, while our approach is completely unrelated to that objective. We believe this makes DynaMax a strong", "baseline that future algorithms should aim to beat in order to justify more complicated approaches to semantic similarity. As an additional contribution, we conduct significance analysis of our results. We found that recent literature on STS", "tends to apply unspecified or inappropriate parametric tests, or leave out significance analysis altogether in the majority of cases. By contrast, we rely on nonparametric", "approaches with much milder assumptions on the test statistic; specifically, we construct bias-corrected and accelerated (BCa) bootstrap confidence intervals BID17 for the delta in performance between two systems. 
We are not aware of any prior works that", "apply such methodology to STS benchmarks and hope the community finds our analysis to be a good starting point for conducting thorough significance testing on these types of experiments.", "In this work we combine word embeddings with classic BoW representations using fuzzy set theory.", "We show that max-pooled word vectors are a special case of FBoW, which implies that they should be compared via the fuzzy Jaccard index rather than the more standard cosine similarity.", "We also present a simple and novel algorithm, DynaMax, which corresponds to projecting word vectors onto a subspace dynamically generated by the given sentences before max-pooling over the features.", "DynaMax outperforms averaged word vectors compared with cosine similarity on every benchmark STS task when word vectors are trained unsupervised.", "It even performs comparably to supervised vectors that directly optimise cosine similarity between paraphrases, despite being completely unrelated to that objective.Both max-pooled vectors and DynaMax constitute strong baselines for further studies in the area of sentence representations.", "Yet, these methods are not limited to NLP and word embeddings, but can in fact be used in any setting where one needs to compute similarity between sets of elements that have rich vector representations.", "We hope to have demonstrated the benefits of experimenting more with similarity metrics based on the building blocks of meaning such as words, rather than complex representations of the final objects such as sentences.", "In the word fuzzification step the membership values for a word w are obtained through a similarity function sim (w, u (j) ) between the word embedding w and the rows of the universe matrix U , i.e. 
µ_j = sim(w, u^(j)). In Section 2.2, sim(w, u^(j)) was the dot product w · u^(j) and we could simply write µ = wU^T.", "There are several reasons why we chose a similarity function that takes values in R as opposed to [0, 1]. First, we can always map the membership values from R to (0, 1) and vice versa using, e.g. the logistic function σ(x) = 1/(1 + e^(−ax)) with an appropriate scaling factor a > 0.", "Intuitively, large negative membership values would imply the element is really not in the set and large positive values mean it is really in the set.", "Of course, here both 'large' and 'really' depend on the scaling factor a.", "In any case, we see that the choice of R vs. [0, 1] is not very important mathematically.", "Interestingly, since we always max-pool with a zero vector, fuzzy BoW will not contain any negative membership values.", "This was not our intention, just a by-product of the model.", "For completeness, let us insist on the range [0, 1] and choose sim(w, u^(j)) to be the clipped cosine similarity max(0, cos(w, u^(j))).", "This is in fact equivalent to simply normalising the word vectors.", "Indeed, the dot product and cosine similarity become the same after normalisation, and max-pooling with the zero vector removes all the negative values, so the resulting representation is guaranteed to be a [0, 1]-fuzzy set.", "Our results for normalised word vectors are presented in TAB3 . After", "comparing TAB0 we can draw two conclusions. Namely", ", DynaMax still outperforms avg-cos by a large margin even when word vectors are normalised. However", ", normalisation hurts both approaches and should generally be avoided. This is", "not surprising since the length of word vectors is correlated with word importance, so normalisation essentially makes all words equally important BID48 ." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.25531914830207825, 0.1304347813129425, 0.1538461446762085, 0.3103448152542114, 0.31372547149658203, 0.2083333283662796, 0.24561403691768646, 0.1818181723356247, 0.0952380895614624, 0.06666666269302368, 0, 0.20588235557079315, 0.2295081913471222, 0.11428570747375488, 0.2666666507720947, 0.307692289352417, 0.13333332538604736, 0.1318681240081787, 0.2978723347187042, 0.16129031777381897, 0.20338982343673706, 0.03999999538064003, 0.16129031777381897, 0.07407406717538834, 0.24390242993831635, 0.290909081697464, 0.15094339847564697, 0.22727271914482117, 0.19672130048274994, 0.1355932205915451, 0.07407406717538834, 0.1621621549129486, 0.19178082048892975, 0.045454539358615875, 0.05128204822540283, 0.09090908616781235, 0.1818181723356247, 0.054054051637649536, 0.038461532443761826, 0.10810810327529907, 0.1428571343421936, 0.21621620655059814, 0.05882352590560913, 0.2380952388048172, 0, 0.12765957415103912 ]
SkxXg2C5FX
true
[ "Max-pooled word vectors with fuzzy Jaccard set similarity are an extremely competitive baseline for semantic similarity; we propose a simple dynamic variant that performs even better." ]
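The DynaMax record above builds on two simple operations: max-pooling word vectors into a sentence representation and comparing such representations with the fuzzy Jaccard index. A minimal sketch of both with toy 3-d vectors (DynaMax's dynamic universe construction is omitted for brevity; the numbers are illustrative):

```python
def max_pool(word_vectors):
    """Elementwise max over the word vectors and the zero vector."""
    dim = len(word_vectors[0])
    pooled = [0.0] * dim  # pooling with zero removes negative values
    for v in word_vectors:
        pooled = [max(p, x) for p, x in zip(pooled, v)]
    return pooled

def fuzzy_jaccard(a, b):
    """Fuzzy Jaccard index: sum of elementwise mins over sum of maxes."""
    num = sum(min(x, y) for x, y in zip(a, b))
    den = sum(max(x, y) for x, y in zip(a, b))
    return num / den if den else 1.0

s1 = max_pool([[0.9, 0.1, -0.3], [0.2, 0.8, 0.4]])  # two-word "sentence"
s2 = max_pool([[0.7, 0.2, 0.5]])                    # one-word "sentence"
print(s1)  # [0.9, 0.8, 0.4]
print(fuzzy_jaccard(s1, s2))
```

Note how the negative component −0.3 is clipped by pooling with zero, which is what makes the result interpretable as a fuzzy set with non-negative memberships.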
[ "State-of-the-art results in imitation learning are currently held by adversarial methods that iteratively estimate the divergence between student and expert policies and then minimize this divergence to bring the imitation policy closer to expert behavior.", "Analogous techniques for imitation learning from observations alone (without expert action labels), however, have not enjoyed the same ubiquitous successes.", "Recent work in adversarial methods for generative models has shown that the measure used to judge the discrepancy between real and synthetic samples is an algorithmic design choice, and that different choices can result in significant differences in model performance.", "Choices including Wasserstein distance and various $f$-divergences have already been explored in the adversarial networks literature, while more recently the latter class has been investigated for imitation learning.", "Unfortunately, we find that in practice this existing imitation-learning framework for using $f$-divergences suffers from numerical instabilities stemming from the combination of function approximation and policy-gradient reinforcement learning.", "In this work, we alleviate these challenges and offer a reparameterization of adversarial imitation learning as $f$-divergence minimization before further extending the framework to handle the problem of imitation from observations only.", "Empirically, we demonstrate that our design choices for coupling imitation learning and $f$-divergences are critical to recovering successful imitation policies.", "Moreover, we find that with the appropriate choice of $f$-divergence, we can obtain imitation-from-observation algorithms that outperform baseline approaches and more closely match expert performance in continuous-control tasks with low-dimensional observation spaces.", "With high-dimensional observations, we still observe a significant gap with and without action labels, offering an interesting avenue for future work.", 
"Imitation Learning (IL) (Osa et al., 2018) refers to a paradigm of reinforcement learning in which the learning agent has access to an optimal, reward-maximizing expert for the underlying environment.", "In most work, this access is provided through a dataset of trajectories where each observed state is annotated with the action prescribed by the expert policy.", "This is often an extremely powerful learning paradigm in contrast to standard reinforcement learning, since not all tasks of interest admit easily-specified reward functions.", "Additionally, not all environments are amenable to the prolonged and potentially unsafe exploration needed for reward-maximizing agents to arrive at satisfactory policies (Achiam et al., 2017; Chow et al., 2019) .", "While the traditional formulation of the IL problem assumes access to optimal expert action labels, the provision of such information can often be laborious (in the case of a real, human expert) or incur significant financial cost (such as using elaborate instrumentation to record expert actions).", "Additionally, this restrictive assumption removes a vast number of rich, observation-only data sources from consideration (Zhou et al., 2018) .", "To bypass these challenges, recent work (Liu et al., 2018; Torabi et al., 2018a; b; Edwards et al., 2019; has explored what is perhaps a more natural problem formulation in which an agent must recover an imitation policy from a dataset containing only expert observation sequences.", "While this Imitation Learning from Observations (ILfO) setting carries tremendous potential, such as enabling an agent to learn complex tasks from watching freely available videos on the Internet, it also is fraught with significant additional challenges.", "In this paper, we show how to incorporate recent advances in generative-adversarial training of deep neural networks to tackle imitation-learning problems and advance the state-of-the-art in ILfO.", "With these considerations in mind, the 
overarching goal of this work is to enable sample-efficient imitation from expert demonstrations, both with and without the provision of expert action labels.", "Figure 1: Evaluating the original f -VIM framework (and its ILfO counterpart, f -VIMO) in the Ant (S ∈ R^111) and Hopper (S ∈ R^11) domains with the Total Variation distance.", "f -VIM/VIMO-sigmoid denotes our instantiation of the frameworks, detailed in Sections 4.2 and 4.3.", "Note that, in both plots, the lines for TV-VIM and TV-VIMO overlap.", "The rich literature on Generative Adversarial Networks (Goodfellow et al., 2014) has expanded in recent years to include alternative formulations of the underlying objective that yield qualitatively different solutions to the saddle-point optimization problem (Dziugaite et al., 2015; Nowozin et al., 2016) .", "Of notable interest are the findings of Nowozin et al. (2016) who present Variational Divergence Minimization (VDM), a generalization of the generative-adversarial approach to arbitrary choices of distance measures between probability distributions drawn from the class of f -divergences (Ali & Silvey, 1966) .", "Applying VDM with varying choices of f -divergence, Nowozin et al. 
(2016) encounter learned synthetic distributions that can exhibit differences from one another while producing equally realistic samples.", "Translating this idea for imitation is complicated by the fact that the optimization of the generator occurs via policy-gradient reinforcement learning (Sutton et al., 2000) .", "Existing work in combining adversarial IL and f -divergences , despite being well-motivated, fails to account for this difference; the end results (shown partially in Figure 1 , where TV-VIM is the method of , and discussed further in later sections) are imitation-learning algorithms that scale poorly to environments with higher-dimensional observations.", "In this work, we assess the effect of the VDM principle and consideration of alternative fdivergences in the contexts of IL and ILfO.", "We begin by reparameterizing the framework of for the standard IL problem.", "Our version transparently exposes the choices practitioners must make when designing adversarial imitation algorithms for arbitrary choices of f -divergence.", "We then offer a single instantiation of our framework that, in practice, allows stable training of good policies across multiple choices of f -divergence.", "An example is illustrated in Figure 1 where our methods (TV-VIM-sigmoid and TV-VIMO-sigmoid) result in significantly superior policies.", "We go on to extend our framework to encapsulate the ILfO setting and examine the efficacy of the resulting new algorithms across a range of continuous-control tasks in the MuJoCo (Todorov et al., 2012) domain.", "Our empirical results validate our framework as a viable unification of adversarial imitation methods under the VDM principle.", "With the assistance of recent advances in stabilizing regularization for adversarial training (Mescheder et al., 2018) , improvements in performance can be attained under an appropriate choice of f -divergence.", "However, there is still a significant performance gap between the recovered imitation 
policies and expert behavior for tasks with high dimensional observations, leaving open directions for future work in developing improved ILfO algorithms.", "To highlight the importance of carefully selecting the variational function activation g f and validate our modifications to the f -VIM framework, we present results in Figure 2 comparing to the original f -VIM framework of and its natural ILfO counterpart.", "Activation functions for the original methods are chosen according to the choices outlined in ; Nowozin et al. (2016) .", "In our experiments using the KL and reverse KL divergences, we found that none of the trials reached completion due to numerical instabilities caused by exploding policy gradients.", "Consequently, we only present results for the Total Variation distance.", "We observe that under the original f -GAN activation selection, we fail to produce meaningful imitation policies with learning stagnating after 100 iterations or less.", "As previously mentioned, we suspect that this stems from the use of tanh with TV leading to a dissipating reward signal.", "We present results in Figure 3 to assess the utility of varying the choice of divergence in f -VIM and f -VIMO across each domain.", "In considering the impact of f -divergence choice, we find that most of the domains must be examined in isolation to observe a particular subset of f -divergences that stand out.", "In the IL setting, we find that varying the choice of f -divergence can yield different learning curves but, ultimately, produce near-optimal (if not optimal) imitation policies across all domains.", "In contrast, we find meaningful choices of f -divergence in the ILfO setting including {KL, TV} for Hopper, RKL for HalfCheetah, and {GAN, TV} for Walker.", "We note that the use of discriminator regularization per Mescheder et al. 
(2018) is crucial to achieving these performance gains, whereas the regularization generally fails to help performance in the IL setting.", "This finding is supportive of the logical intuition that ILfO poses a fundamentally more-challenging problem than standard IL.", "As a negative result, we find that the Ant domain (the most difficult environment with S ⊂ R 111 and A ⊂ R 8 ) still poses a challenge for ILfO algorithms across the board.", "More specifically, we observe that discriminator regularization hurts learning in both the IL and ILfO settings.", "While the choice of RKL does manage to produce a marginal improvement over GAIFO, the gap between existing state-of-the-art and expert performance remains unchanged.", "It is an open challenge for future work to either identify the techniques needed to achieve optimal imitation policies from observations only or characterize a fundamental performance gap when faced with sufficiently large observation spaces.", "In Figure 4 , we vary the total number of expert demonstrations available during learning and observe that certain choices of f -divergences can be more robust in the face of less expert data, both in the IL and ILfO settings.", "We find that KL-VIM and TV-VIM are slightly more performant than GAIL when only provided with a single expert demonstration.", "Notably, in each domain we see that certain choices of divergence for f -VIMO do a better job of residing close to their f -VIM counterparts suggesting that future improvements may come from examining f -divergences in the small-data regime.", "This idea is further exemplified when accounting for results collected while using discriminator regularization (Mescheder et al., 2018) .", "We refer readers to the Appendix for the associated learning curves.", "Our work leaves many open directions for future work to close the performance gap between student and expert policies in the ILfO setting.", "While we found the sigmoid function to be a suitable
instantiation of our framework, exploring alternative choices of variational function activations could prove useful in synthesizing performant ILfO algorithms.", "Alternative choices of f -divergences could lead to more substantial improvements than the choices we examine in this paper.", "Moreover, while this work has a direct focus on f -divergences, Integral Probability Metrics (IPMs) (Müller, 1997; ) represent a distinct but well-established family of divergences between probability distributions.", "The success of Total Variation distance in our experiments, which doubles as both an f -divergence and IPM (Sriperumbudur et al., 2009) , is suggestive of future work building IPM-based ILfO algorithms .", "In this work, we present a general framework for imitation learning and imitation learning from observations under arbitrary choices of f -divergence.", "We empirically validate a single instantiation of our framework across multiple f -divergences, demonstrating that we overcome the shortcomings of prior work and offer a wide class of IL and ILfO algorithms capable of scaling to larger problems.", "(Schaal, 1997; Atkeson & Schaal, 1997; Argall et al., 2009) , where an agent must leverage demonstration data (typically provided as trajectories, each consisting of expert state-action pairs) to produce an imitation policy that correctly captures the demonstrated behavior.", "Within the context of LfD, a finer distinction can be made between behavioral cloning (BC) (Bain & Sammut, 1999; Pomerleau, 1989) and inverse reinforcement learning (IRL) (Ng et al.; Abbeel & Ng, 2004; Syed & Schapire, 2007; Ziebart et al., 2008; Finn et al., 2016; Ho & Ermon, 2016) approaches; BC approaches view the demonstration data as a standard dataset of input-output pairs and apply traditional supervised-learning techniques to recover an imitation policy.", "Alternatively, IRL-based methods synthesize an estimate of the reward function used to train the expert policy before subsequently
applying a reinforcement-learning algorithm (Sutton & Barto, 1998; Abbeel & Ng, 2004) to recover the corresponding imitation policy.", "Although not a focus of this work, we also acknowledge the myriad of approaches that operate at the intersection of IL and reinforcement learning or augment reinforcement learning with IL (Rajeswaran et al., 2017; Hester et al., 2018; Salimans & Chen, 2018; Sun et al., 2018; Borsa et al., 2019; Tirumala et al., 2019) .", "While BC approaches have been successful in some settings (Niekum et al., 2015; Giusti et al., 2016; Bojarski et al., 2016) , they are also susceptible to failures stemming from covariate shift where minute errors in the actions of the imitation policy compound and force the agent into regions of the state space not captured in the original demonstration data.", "While some preventative measures for covariate shift do exist (Laskey et al., 2017b) , a more principled solution can be found in methods like DAgger (Ross et al., 2011) and its descendants (Ross & Bagnell, 2014; Sun et al., 2017; Le et al., 2018 ) that remedy covariate shift by querying an expert to provide on-policy action labels.", "It is worth noting, however, that these approaches are only feasible in settings that admit such online interaction with an expert (Laskey et al., 2016) and, even then, failure modes leading to poor imitation policies do exist (Laskey et al., 2017a) .", "The algorithms presented in this work fall in with IRL-based approaches to IL.", "Early successes in this regime tend to rely on hand-engineered feature representations for success (Abbeel & Ng, 2004; Ziebart et al., 2008; Levine et al., 2011) .", "Only in recent years, with the aid of deep neural networks, has there been a surge in the number of approaches that are capable of scaling to the raw, high-dimensional observations found in real-world control problems (Finn et al., 2016; Ho & Ermon, 2016; Duan et al., 2017; Li et al., 2017; Fu et al., 2017; Kim & Park, 2018) .",
"Our work focuses attention exclusively on adversarial methods for their widespread effectiveness across a range of imitation tasks without requiring interactive experts (Ho & Ermon, 2016; Li et al., 2017; Fu et al., 2017; Kostrikov et al., 2018) ; at the heart of these methods is the Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016) approach which produces high-fidelity imitation policies and achieves state-of-the-art results across numerous continuous-control benchmarks by leveraging the expressive power of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) for modeling complex distributions over a high-dimensional support.", "From an IRL perspective, GAIL can be viewed as iteratively optimizing a parameterized reward function (discriminator) that, when used to optimize an imitation policy (generator) via policy-gradient reinforcement learning (Sutton et al., 2000) , allows the agent to shift its own behavior closer to that of the expert.", "From the perspective of GANs, this is achieved by discriminating between the respective distributions over state-action pairs visited by the imitation and expert policies before training a generator to fool the discriminator and induce a state-action visitation distribution similar to that of the expert.", "While a large body of prior work exists for IL, numerous recent works have drawn attention to the more challenging problem of imitation learning from observation (Sermanet et al., 2017; Liu et al., 2018; Goo & Niekum, 2018; Kimura et al., 2018; Torabi et al., 2018a; b; Edwards et al., 2019; .", "In an effort to more closely resemble observational learning in humans and leverage the wealth of publicly-available, observation-only data sources, the ILfO problem considers learning from expert demonstration data where no expert action labels are provided.", "Many early approaches to ILfO use expert observation sequences to learn a semantic embedding space so that distances between observation 
sequences of the imitation and expert policies can serve as a cost signal to be minimized via reinforcement learning (Gupta et al., 2017; Sermanet et al., 2017; Liu et al., 2018 choice and the multimodality of the expert trajectory distribution, we provide an empirical evaluation of their f -VIM framework across a range of continuous control tasks in the MuJoCo domain (Todorov et al., 2012) .", "Empirically, we find that some of the design choices f -VIM inherits from the original f -GAN work are problematic when coupled with adversarial IL and training of the generator by policy-gradient reinforcement learning, instead of via direct backpropagation as in traditional GANs.", "Consequently, we refactor their framework to expose this point and provide one practical instantiation that works well empirically.", "We then go on to extend the f -VIM framework to the ILfO problem (f -VIMO) and evaluate the resulting algorithms empirically against the state-of-the-art, GAIFO." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2222222238779068, 0.21739129722118378, 0.1666666567325592, 0.11764705181121826, 0.19230768084526062, 0.25925925374031067, 0.13636362552642822, 0.18518517911434174, 0.260869562625885, 0.15094339847564697, 0.3265306055545807, 0.12244897335767746, 0.11320754140615463, 0.21875, 0.13333332538604736, 0.1515151411294937, 0.19999998807907104, 0.19999998807907104, 0.7450980544090271, 0.11320754140615463, 0.14999999105930328, 0.1621621549129486, 0.12903225421905518, 0.1269841194152832, 0.11538460850715637, 0.20408162474632263, 0.22857142984867096, 0.1860465109348297, 0.1111111044883728, 0.13636362552642822, 0.04255318641662598, 0.0952380895614624, 0.1428571343421936, 0.1395348757505417, 0.07547169178724289, 0.24561403691768646, 0.14035087823867798, 0.09302324801683426, 0.15686273574829102, 0.05714285373687744, 0.1599999964237213, 0.30434781312942505, 0.17391303181648254, 0.11764705181121826, 0.1111111044883728, 0.1249999925494194, 0.19230768084526062, 0.1395348757505417, 0.1071428507566452, 0.1463414579629898, 0.2083333283662796, 0.23728813230991364, 0.16949151456356049, 0.13333332538604736, 0.13333332538604736, 0.045454539358615875, 0.11428570747375488, 0.21739129722118378, 0.11538460850715637, 0.1860465109348297, 0.11320754140615463, 0.2181818187236786, 0.2222222238779068, 0.17241378128528595, 0.16129031777381897, 0.11235954612493515, 0.178571417927742, 0.15625, 0.1621621549129486, 0.10810810327529907, 0.158730149269104, 0.2702702581882477, 0.07999999821186066, 0.11428570747375488, 0.14432989060878754, 0.14492753148078918, 0.27586206793785095, 0.1818181723356247, 0.24561403691768646, 0.1573033630847931, 0.1904761791229248, 0.1395348757505417, 0.12765957415103912 ]
SyxDXJStPS
true
[ "The overall goal of this work is to enable sample-efficient imitation from expert demonstrations, both with and without the provision of expert action labels, through the use of f-divergences." ]
[ "Momentary fluctuations in attention (perceptual accuracy) correlate with neural activity fluctuations in primate visual areas.", "Yet, the link between such momentary neural fluctuations and attention state remains to be shown in the human brain.", "We investigate this link using a real-time cognitive brain machine interface (cBMI) based on steady state visually evoked potentials (SSVEPs): occipital EEG potentials evoked by rhythmically flashing stimuli.", "Tracking momentary fluctuations in SSVEP power, in real-time, we presented stimuli time-locked to when this power reached (predetermined) high or low thresholds.", "We observed a significant increase in discrimination accuracy (d') when stimuli were triggered during high (versus low) SSVEP power epochs, at the location cued for attention.", "Our results indicate a direct link between attention’s effects on perceptual accuracy and neural gain in EEG-SSVEP power, in the human brain.\n" ]
[ 0, 0, 0, 0, 0, 1 ]
[ 0.10810810327529907, 0.380952388048172, 0.19999998807907104, 0.13333332538604736, 0.1599999964237213, 0.739130437374115 ]
ryeT47FIIS
false
[ "With a cognitive brain-machine interface, we show a direct link between attentional effects on perceptual accuracy and neural gain in EEG-SSVEP power, in the human brain." ]
[ "Recent studies have demonstrated the vulnerability of deep convolutional neural networks against adversarial examples.", "Inspired by the observation that the intrinsic dimension of image data is much smaller than its pixel space dimension and the vulnerability of neural networks grows with the input dimension, we propose to embed high-dimensional input images into a low-dimensional space to perform classification.", "However, arbitrarily projecting the input images to a low-dimensional space without regularization will not improve the robustness of deep neural networks.", "We propose a new framework, Embedding Regularized Classifier (ER-Classifier), which improves the adversarial robustness of the classifier through embedding regularization.", "Experimental results on several benchmark datasets show that our proposed framework achieves state-of-the-art performance against strong adversarial attack methods.", "Deep neural networks (DNNs) have been widely used for tackling numerous machine learning problems that were once believed to be challenging.", "With their remarkable ability of fitting training data, DNNs have achieved revolutionary successes in many fields such as computer vision, natural language processing, and robotics.", "However, they were shown to be vulnerable to adversarial examples that are generated by adding carefully crafted perturbations to original images.", "The adversarial perturbations can arbitrarily change the network's prediction but are often too small to affect human recognition (Szegedy et al., 2013; Kurakin et al., 2016) .", "This phenomenon brings out security concerns for practical applications of deep learning.", "Two main types of attack settings have been considered in recent research (Goodfellow et al.; Carlini & Wagner, 2017a; Chen et al., 2017; Papernot et al., 2017) : black-box and white-box settings.", "In the black-box setting, the attacker can provide any inputs and receive the corresponding predictions.", "However, the
attacker cannot get access to the gradients or model parameters under this setting; whereas in the white-box setting, the attacker is allowed to analytically compute the model's gradients, and have full access to the model architecture and weights.", "In this paper, we focus on defending against the white-box attack which is the harder task.", "Recent work (Simon-Gabriel et al., 2018) presented both theoretical arguments and an empirical one-to-one relationship between input dimension and adversarial vulnerability, showing that the vulnerability of neural networks grows with the input dimension.", "Therefore, reducing the data dimension may help improve the robustness of neural networks.", "Furthermore, a consensus in the high-dimensional data analysis community is that, a method working well on the high-dimensional data is because the data is not really of high-dimension (Levina & Bickel, 2005) .", "These high-dimensional data, such as images, are actually embedded in a low dimensional space.", "Hence, carefully reducing the input dimension may improve the robustness of the model without sacrificing performance.", "Inspired by the observation that the intrinsic dimension of image data is actually much smaller than its pixel space dimension (Levina & Bickel, 2005) and the vulnerability of a model grows with its input dimension (Simon-Gabriel et al., 2018) , we propose a defense framework that embeds input images into a low-dimensional space using a deep encoder and performs classification based on the latent embedding with a classifier network.", "However, an arbitrary projection does not guarantee improving the robustness of the model, because there are a lot of mapping functions including non-robust ones from the raw input space to the low-dimensional space capable of minimizing the classification loss.", "To constrain the mapping function, we employ distribution regularization in the embedding space leveraging optimal transport theory.", "We call our new
classification framework Embedding Regularized Classifier (ER-Classifier).", "To be more specific, we introduce a discriminator in the latent space which tries to separate the generated code vectors from the encoder network and the ideal code vectors sampled from a prior distribution, i.e., a standard Gaussian distribution.", "Employing a similar powerful competitive mechanism as demonstrated by Generative Adversarial Networks (Goodfellow et al., 2014) , the discriminator enforces the embedding space of the model to follow the prior distribution.", "In our ER-Classifier framework, the encoder and discriminator structures together project the input data to a low-dimensional space with a nice shape, then the classifier performs prediction based on the low-dimensional embedding.", "Based on the optimal transport theory, the proposed ER-Classifier minimizes the discrepancy between the distribution of the true label and the distribution of the framework output, thus only retaining important features for classification in the embedding space.", "With a small embedding dimension, the effect of the adversarial perturbation is largely diminished through the projection process.", "We compare ER-Classifier with other state-of-the-art defense methods on MNIST, CIFAR10, STL10 and Tiny Imagenet.", "Experimental results demonstrate that our proposed ER-Classifier outperforms other methods by a large margin.", "To sum up, this paper makes the following three main contributions:", "• A novel unified end-to-end robust deep neural network framework against adversarial attacks is proposed, where the input image is first projected to a low-dimensional space and then classified.", "• An objective is induced to minimize the optimal transport cost between the true class distribution and the framework output distribution, guiding the encoder and discriminator to project the input image to a low-dimensional space without losing important features for classification.", "• Extensive
experiments demonstrate the robustness of our proposed ER-Classifier framework under the white-box attacks, and show that ER-Classifier outperforms other state-of-the-art approaches on several benchmark image datasets.", "As far as we know, (1) our approach is the first that applies optimal transport theory, i.e., a Wasserstein distance regularization, to a bottleneck embedding layer of a deep neural network in a purely supervised learning setting without considering any reconstruction loss, although optimal transport theory or a discriminator loss has been applied to generative models in an unsupervised learning setting (Makhzani et al., 2015; Tolstikhin et al., 2017) ; (2) Our method is also the first that establishes the connection between a Wasserstein distance regularization and the robustness of deep neural networks for defending against adversarial examples.", "In this paper, we propose a new defense framework, ER-Classifier, which projects the input images to a low-dimensional space to remove adversarial perturbation and stabilize the model through minimizing the discrepancy between the true label distribution and the framework output distribution.", "We empirically show that ER-Classifier is much more robust than other state-of-the-art defense methods on several benchmark datasets.", "Future work will include further exploration of the low-dimensional space to improve the robustness of deep neural networks." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.25806450843811035, 0.18867924809455872, 0.2702702581882477, 0.4444444477558136, 0.1111111044883728, 0.052631575614213943, 0.0952380895614624, 0.1111111044883728, 0.09756097197532654, 0.13793103396892548, 0.08695651590824127, 0.13333332538604736, 0.08695651590824127, 0.0624999962747097, 0.21276594698429108, 0.20689654350280762, 0.0952380895614624, 0, 0.19354838132858276, 0.22857142984867096, 0.16326530277729034, 0.1818181723356247, 0.14814814925193787, 0.07999999821186066, 0.13333332538604736, 0.13333332538604736, 0.2666666507720947, 0.3030303120613098, 0.0624999962747097, 0.06451612710952759, 0.0714285671710968, 0.2666666507720947, 0.15686273574829102, 0.2790697515010834, 0.21739129722118378, 0.19999998807907104, 0.05714285373687744, 0.24242423474788666 ]
BygpAp4Ywr
true
[ "A general and easy-to-use framework that improves the adversarial robustness of deep classification models through embedding regularization." ]
[ "We investigate task clustering for deep learning-based multi-task and few-shot learning in the settings with large numbers of diverse tasks.", "Our method measures task similarities using a cross-task transfer performance matrix.", "Although this matrix provides us with critical information regarding similarities between tasks, the uncertain task-pairs, i.e., the ones with extremely asymmetric transfer scores, may collectively mislead clustering algorithms to output an inaccurate task-partition.", "Moreover, when the number of tasks is large, generating the full transfer performance matrix can be very time consuming.", "To overcome these limitations, we propose a novel task clustering algorithm to estimate the similarity matrix based on the theory of matrix completion.", "The proposed algorithm can work on partially-observed similarity matrices based on only sampled task-pairs with reliable scores, ensuring its efficiency and robustness.", "Our theoretical analysis shows that under mild assumptions, the reconstructed matrix perfectly matches the underlying “true” similarity matrix with an overwhelming probability.", "The final task partition is computed by applying an efficient spectral clustering algorithm to the recovered matrix.", "Our results show that the new task clustering method can discover task clusters that benefit both multi-task learning and few-shot learning setups for sentiment classification and dialog intent classification tasks.", "This paper leverages knowledge distilled from a large number of learning tasks BID0 BID19 , or MAny Task Learning (MATL), to achieve the goal of", "(i) improving the overall performance of all tasks, as in multi-task learning (MTL); and (ii) rapid-adaptation to a new task by using previously learned knowledge, similar to few-shot learning (FSL) and transfer learning.", "Previous work on multi-task learning and transfer learning used small numbers of related tasks (usually ∼10) picked by human experts.", "By contrast, MATL
tackles hundreds or thousands of tasks BID0 BID19 , with unknown relatedness between pairs of tasks, introducing new challenges such as task diversity and model inefficiency. MATL scenarios are increasingly common in a wide range of machine learning applications with potentially huge impact.", "Examples include reinforcement learning for game playing -where many numbers of sub-goals are treated as tasks by the agents for joint-learning, e.g. BID19 achieved the state-of-the-art on the Ms. Pac-Man game by using a multi-task learning architecture to approximate rewards of over 1,000 sub-goals (reward functions).", "Another important example is enterprise AI cloud services -where many clients submit various tasks/datasets to train machine learning models for business-specific purposes.", "The clients could be companies who want to know opinion from their customers on products and services, agencies that monitor public reactions to policy changes, and financial analysts who analyze news as it can potentially influence the stock-market.", "Such MATL-based services thus need to handle the diverse nature of clients' tasks. Challenges on Handling Diverse (Heterogeneous) Tasks Previous multi-task learning and few-shot learning research usually work on homogeneous tasks, e.g. all tasks are binary classification problems, or tasks are close to each other (picked by human experts) so the positive transfer between tasks is guaranteed.", "However, with a large number of tasks in a MATL setting, the above assumption may not hold, i.e.
we need to be able to deal with tasks with larger diversity.", "Such diversity can be reflected as", "(i) tasks with varying numbers of labels: when tasks are diverse, different tasks could have different numbers of labels; and the labels might be defined in different label spaces without relatedness.", "Most of the existing multi-task and few-shot learning methods will fail in this setting; and more importantly (ii) tasks with positive and negative transfers: since tasks are not guaranteed to be similar to each other in the MATL setting, they are not always able to help each other when trained together, i.e. negative transfer BID22 between tasks.", "For example, in dialog services, the sentences \"What fast food do you have nearby\" and \"Could I find any Indian food\" may belong to two different classes \"fast_food\" and \"indian_food\" for a restaurant recommendation service in a city; while for a travel-guide service for a park, those two sentences could belong to the same class \"food_options\".", "In this case the two tasks may hurt each other when trained jointly with a single representation function, since the first task tends to give similar representations to both sentences while the second one tends to distinguish them in the representation space. A Task Clustering Based Solution To deal with the second challenge above, we propose to partition the tasks into clusters, making the tasks in each cluster more likely to be related.", "Common knowledge is only shared across tasks within a cluster, thus the negative transfer problem is alleviated.", "There are a few task clustering algorithms proposed mainly for convex models BID12 BID9 BID5 BID0 , but they assume that the tasks have the same number of labels (usually binary classification).", "In order to handle tasks with varying numbers of labels, we adopt a similarity-based task clustering algorithm.", "The task similarity is measured by cross-task transfer performance, which is a matrix S whose (i,", "j)-entry
S ij is the estimated accuracy by adapting the learned representations on the i-th (source) task to the j-th (target) task.", "The above task similarity computation does not require the source task and target task to have the same set of labels, as a result, our clustering algorithm could naturally handle tasks with varying numbers of labels. Although cross-task transfer performance can provide critical information of task similarities, directly using it for task clustering may suffer from both efficiency and accuracy issues.", "First and most importantly, evaluation of all entries in the matrix S involves conducting the source-target transfer learning O(n 2 ) times, where n is the number of tasks.", "For a large number of diverse tasks where the n can be larger than 1,000, evaluation of the full matrix is unacceptable (over 1M entries to evaluate).", "Second, the estimated cross-task performance (i.e. some S ij or S ji scores) is often unreliable due to small data size or label noises.", "When the number of the uncertain values is large, they can collectively mislead the clustering algorithm to output an incorrect task-partition. To address the aforementioned challenges, we propose a novel task clustering algorithm based on the theory of matrix completion BID2 .", "Specifically, we deal with the huge number of entries by randomly sampling task pairs to evaluate the S ij and S ji scores; and deal with the unreliable entries by keeping only task pairs (i,", "j) with consistent S ij and S ji scores.", "Given a set of n tasks, we first construct an n × n partially-observed matrix Y, where its observed entries correspond to the sampled and reliable task pairs (i,", "j) with consistent S ij and S ji scores.", "Otherwise, if the task pairs (i,", "j) are not sampled to compute the transfer scores or the scores are inconsistent, we mark both Y ij and Y ji as unobserved.", "Given the constructed partially-observed matrix Y, our next step is to recover an n × n full similarity
matrix using a robust matrix completion approach, and then generate the final task partition by applying spectral clustering to the completed similarity matrix.", "The proposed approach has a 2-fold advantage.", "First, our method carries a strong theoretical guarantee, showing that the full similarity matrix can be perfectly recovered if the number of observed correct entries in the partially observed similarity matrix is at least O(n log 2 n).", "This theoretical result allows us to only compute the similarities of O(n log 2 n) instead of O(n 2 ) pairs, thus greatly reducing the computation when the number of tasks is large.", "Second, by filtering out uncertain task pairs, the proposed algorithm will be less sensitive to noise, leading to a more robust clustering performance. The task clusters allow us to handle", "(i) diverse MTL problems, by model sharing only within clusters such that the negative transfer from irrelevant tasks can be alleviated; and (ii) diverse FSL problems, where a new task can be assigned a task-specific metric, which is a linear combination of the metrics defined by different clusters, such that the diverse few-shot tasks could derive different metrics from the previous learning experience.", "Our results show that the proposed task clustering algorithm, combined with the above MTL and FSL strategies, could give us significantly better deep MTL and FSL algorithms on sentiment classification and intent classification tasks.", "In this paper, we propose a robust task-clustering method that not only has strong theoretical guarantees but also demonstrates significant empirical improvements when equipped by our MTL and FSL algorithms.", "Our empirical studies verify that", "(i) the proposed task clustering approach is very effective in the many-task learning setting especially when tasks are diverse;", "(ii) our approach could efficiently handle large numbers of tasks as suggested by our theory; and", "(iii) cross-task transfer performance can
serve as a powerful task similarity measure.", "Our work opens up many future research directions, such as supporting online many-task learning with incremental computation on task similarities, and combining our clustering approach with the recent learning-to-learn methods (e.g. BID18 ), to enhance our MTL and FSL methods." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.8372092843055725, 0.060606054961681366, 0.1071428507566452, 0.1463414579629898, 0.3636363446712494, 0.1818181723356247, 0.09302324801683426, 0.19999998807907104, 0.375, 0.25531914830207825, 0.3461538553237915, 0.2857142686843872, 0.2461538463830948, 0.25806450843811035, 0.08888888359069824, 0.06896550953388214, 0.19178082048892975, 0.2857142686843872, 0, 0.2916666567325592, 0.2647058665752411, 0.1515151411294937, 0.1818181723356247, 0.1538461446762085, 0.29629629850387573, 0.4000000059604645, 0.10526315122842789, 0.09756097197532654, 0.2933333218097687, 0.2448979616165161, 0.25, 0.04347825422883034, 0.28070175647735596, 0.2083333283662796, 0.12903225421905518, 0.19999998807907104, 0.12903225421905518, 0.13793103396892548, 0.09302324801683426, 0.178571417927742, 0.06666666269302368, 0.1428571343421936, 0.1599999964237213, 0.19999998807907104, 0.260869562625885, 0.2745097875595093, 0.11320754140615463, 0, 0.2926829159259796, 0.21052631735801697, 0.11428570747375488, 0.19999998807907104 ]
B11bwYgfM
true
[ "We propose a matrix-completion based task clustering algorithm for deep multi-task and few-shot learning in the settings with large numbers of diverse tasks." ]
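The completion-then-cluster pipeline summarized above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: iterative SVD imputation stands in for their robust matrix-completion step, and a sign-of-Fiedler-vector bipartition stands in for general k-way spectral clustering; the function names and toy similarity values are invented for the example.

```python
import numpy as np

def complete_matrix(S_obs, mask, rank=2, iters=200):
    # Fill unobserved entries by iterative low-rank SVD imputation,
    # keeping the observed entries fixed at every step.
    S = np.where(mask, S_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(S, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        S = np.where(mask, S_obs, low_rank)
    return (S + S.T) / 2.0          # symmetrize before clustering

def spectral_bipartition(S):
    # 2-way spectral clustering: sign of the Fiedler vector of L = D - S.
    L = np.diag(S.sum(axis=1)) - S
    _, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    return (vecs[:, 1] > 0).astype(int)

# Toy example: 6 tasks in two groups; some cross-group pairs unobserved.
n = 6
S_true = 0.1 * np.ones((n, n))
S_true[:3, :3] = S_true[3:, 3:] = 0.9
np.fill_diagonal(S_true, 1.0)
mask = np.ones((n, n), dtype=bool)
for i, j in [(0, 4), (1, 5), (2, 3)]:
    mask[i, j] = mask[j, i] = False  # pairs whose similarity was never computed
labels = spectral_bipartition(complete_matrix(S_true, mask))
```

With only a subset of pairwise similarities observed, the completed matrix still exposes the two task groups, which is exactly the saving the O(n log^2 n) result is about.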
[ "The resemblance between the methods used in studying quantum many-body physics and in machine learning has drawn considerable attention.", "In particular, tensor networks (TNs) and deep learning architectures bear striking similarities, to the extent that TNs can be used for machine learning.", "Previous results used one-dimensional TNs in image recognition, showing limited scalability and a need for high bond dimension.", "In this work, we train two-dimensional hierarchical TNs to solve image recognition problems, using a training algorithm derived from the multipartite entanglement renormalization ansatz (MERA).", "This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning.", "While keeping the TN unitary in the training phase, TN states can be defined which optimally encode each class of the images into a quantum many-body state.", "We study the quantum features of the TN states, including quantum entanglement and fidelity.", "We suggest these quantities could be novel properties that characterize the image classes, as well as the machine learning tasks.", "Our work could be further applied to identifying possible quantum properties of certain artificial intelligence methods.", "Over the past years, we have witnessed booming progress in applying quantum theories and technologies to realistic problems.", "Paradigmatic examples include quantum simulators BID31 and quantum computers (Steane, 1998; BID16 BID2) aimed at tackling challenging problems that are beyond the capability of classical digital computations.", "The power of these methods stems from the properties of quantum many-body systems. Tensor networks (TNs) belong to the most powerful numerical tools for studying quantum many-body systems BID22 BID13 BID26 .", "The main challenge lies in the exponential growth of the Hilbert space with the system size, making exact descriptions of such quantum
states impossible even for systems as small as O(10^2) electrons.", "To break the \"exponential wall\", TNs were suggested as an efficient ansatz that lowers the computational cost to a polynomial dependence on the system size.", "Astonishing achievements have been made in studying, e.g., spins, bosons, fermions, anyons, gauge fields, and so on (Cirac & Verstraete, 2009; BID23 BID26 ).", "TNs are also exploited to predict interactions that are used to design quantum simulators BID25 . As", "TNs allowed the numerical treatment of difficult physical systems by providing layers of abstraction, deep learning achieved similar striking advances in automated feature extraction and pattern recognition BID19 . The", "resemblance between the two approaches is beyond superficial. At", "the theoretical level, there is a mapping between deep learning and the renormalization group BID1 , which in turn connects holography and deep learning BID37 BID10 , and also allows studying network design from the perspective of quantum entanglement BID20 . In", "turn, neural networks can represent quantum states BID3 BID4 BID15 BID11 . Most", "recently, TNs have been applied to solve machine learning problems such as dimensionality reduction BID5 and handwriting recognition BID30 BID12 . Through", "a feature mapping, an image described as classical information is transferred into a product state defined in a Hilbert space. Then these", "states are acted on by a TN, giving an output vector that determines the classification of the images into a predefined number of classes. Going further", "with this clue, it can be seen that when using a vector space for solving image recognition problems, one faces a similar \"exponential wall\" as in quantum many-body systems. For recognizing", "an object in the real world, there exist infinite possibilities, since the shapes and colors change, in principle, continuously.
An image or a gray-scale", "photo provides an approximation, where the total number of possibilities is lowered to 256^N per channel, with N describing the number of pixels, which is assumed to be fixed for simplicity. Similar to the applications", "in quantum physics, TNs show a promising way to lower such an exponentially large space to a polynomial one. This work contributes in two aspects. Firstly, we derive an efficient", "quantum-inspired learning algorithm based on a hierarchical representation that is known as a tree TN (TTN) (see, e.g., BID21 ). Compared with Refs. BID30 BID12", "where a one-dimensional", "(1D) TN (called a matrix product state (MPS) (Östlund & Rommer, 1995) ) is used, a TTN is better suited to the two-dimensional (2D) nature of images. The algorithm is inspired by the multipartite", "entanglement renormalization ansatz (MERA) approach BID35 BID36 BID7 BID9 , where the tensors in the TN are kept unitary during the training. We test the algorithm on both the MNIST (handwriting", "recognition with binary images) and CIFAR (recognition of color images) databases and obtain accuracies comparable to the performance of convolutional neural networks. More importantly, TN states can then be defined", "that optimally encode each class of images as a quantum many-body state, which is akin to the study of a duality between probabilistic graphical models and TNs BID27 . We contrast the bond dimension and model complexity", ", with results indicating that a growing bond dimension overfits the data. We study the representation in the different layers", "in the hierarchical TN with t-SNE BID32 , and find that the level of abstraction changes the same way as in a deep convolutional neural network BID18 or a deep belief network BID14 , and that the highest level of the hierarchy allows for a clear separation of the classes.
Finally, we show that the fidelities between", "TN states from different image classes are low, and we calculate the entanglement entropy of each TN state, which gives an indication of the difficulty of each class.", "We continued the forays into using tensor networks for machine learning, focusing on hierarchical, two-dimensional tree tensor networks that we found to be a natural fit for image recognition problems.", "This proved to be a scalable approach with high precision, and we can draw the following conclusions: • The limitation of the representation power (learnability) of the TTN model strongly depends on the input bond (physical indexes).", "The virtual bond (geometrical indexes) determines how well the TTNs approximate this limitation. •", "A hierarchical tensor network exhibits the same increase in the level of abstraction as a deep convolutional neural network or a deep belief network. •", "Fidelity can give us insight into how difficult it is to tell two classes apart. •", "Entanglement entropy has the potential to characterize the difficulty of representing a class of problems. In future work, we plan to use fidelity-based training in an unsupervised setting, to apply the trained TTN to recover damaged or compressed images, and to use entanglement entropy to characterize the accuracy." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1666666567325592, 0.14999999105930328, 0.1111111044883728, 0, 1, 0.0952380895614624, 0.13333332538604736, 0.1666666567325592, 0.05882352590560913, 0.10810810327529907, 0.09090908616781235, 0.08888888359069824, 0.0416666604578495, 0, 0.0476190410554409, 0.0624999962747097, 0.08695651590824127, 0, 0.11538460850715637, 0.06666666269302368, 0.10256409645080566, 0.052631575614213943, 0, 0.0833333283662796, 0.04999999329447746, 0.04255318641662598, 0.13636362552642822, 0.045454539358615875, 0, 0, 0.04255318641662598, 0.04444443807005882, 0.11999999731779099, 0, 0.03333332762122154, 0.0476190410554409, 0.04651162400841713, 0.12244897335767746, 0, 0, 0, 0.0363636314868927 ]
ryF-cQ6T-
true
[ "This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning." ]
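The feature mapping described above, where an image becomes a product state in a Hilbert space, can be illustrated without any TN machinery. The cos/sin embedding below is the standard choice in the TN machine-learning literature, assumed here for illustration; the fidelity computation shows why product states avoid the exponential cost: the overlap factorizes over pixels, so the 2^N-dimensional space is never materialized.

```python
import numpy as np

def feature_map(x):
    # Embed a pixel value x in [0, 1] into a normalized two-component
    # vector, so an N-pixel image becomes a product state of N "qubits".
    return np.array([np.cos(np.pi * x / 2.0), np.sin(np.pi * x / 2.0)])

def product_state_fidelity(img_a, img_b):
    # The overlap <a|b> of two product states factorizes into a product
    # of per-pixel inner products: cost O(N), not O(2^N).
    overlap = 1.0
    for xa, xb in zip(img_a.ravel(), img_b.ravel()):
        overlap *= float(feature_map(xa) @ feature_map(xb))
    return overlap
```

Identical images give fidelity 1; the more two images differ pixel-wise, the closer the overlap falls toward 0, which is the quantity the paper studies between class-representative TN states.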
[ "Predicting outcomes and planning interactions with the physical world are long-standing goals for machine learning.", "A variety of such tasks involves continuous physical systems, which can be described by partial differential equations (PDEs) with many degrees of freedom.", "Existing methods that aim to control the dynamics of such systems are typically limited to relatively short time frames or a small number of interaction parameters.", "We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames.", "We propose to split the problem into two distinct tasks: planning and control.", "To this end, we introduce a predictor network that plans optimal trajectories and a control network that infers the corresponding control parameters.", "Both stages are trained end-to-end using a differentiable PDE solver.", "We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs such as the incompressible Navier-Stokes equations.", "Intelligent systems that operate in the physical world must be able to perceive, predict, and interact with physical phenomena (Battaglia et al., 2013) .", "In this work, we consider physical systems that can be characterized by partial differential equations (PDEs).", "PDEs constitute the most fundamental description of evolving systems and are used to describe every physical theory, from quantum mechanics and general relativity to turbulent flows (Courant & Hilbert, 1962; Smith, 1985) .", "We aim to endow artificial intelligent agents with the ability to direct the evolution of such systems via continuous controls.", "Such optimal control problems have typically been addressed via iterative optimization.", "Differentiable solvers and the adjoint method enable efficient optimization of high-dimensional systems (Toussaint et al., 2018; de 
Avila Belbute-Peres et al., 2018; Schenck & Fox, 2018) .", "However, direct optimization through gradient descent (single shooting) at test time is resource-intensive and may be difficult to deploy in interactive settings.", "More advanced methods exist, such as multiple shooting and collocation, but they commonly rely on modeling assumptions that limit their applicability, and still require computationally intensive iterative optimization at test time.", "Iterative optimization methods are expensive because they have to start optimizing from scratch and typically require a large number of iterations to reach an optimum.", "In many real-world control problems, however, agents have to repeatedly make decisions in specialized environments, and reaction times are limited to a fraction of a second.", "This motivates the use of data-driven models such as deep neural networks, which combine short inference times with the capacity to build an internal representation of the environment.", "We present a novel deep learning approach that can learn to represent solution manifolds for a given physical environment, and is orders of magnitude faster than iterative optimization techniques.", "The core of our method is a hierarchical predictor-corrector scheme that temporally divides the problem into easier subproblems.", "This enables us to combine models specialized to different time scales in order to control long sequences of complex high-dimensional systems.", "We train our models using a differentiable PDE solver that can provide the agent with feedback of how interactions at any point in time affect the outcome.", "Our models learn to represent manifolds containing a large number of solutions, and can thereby avoid local minima that can trap classic optimization techniques.", "We evaluate our method on a variety of control tasks in systems governed by advection-diffusion PDEs such as the Navier-Stokes equations.", "We quantitatively evaluate the resulting sequences on how 
well they approximate the target state and how much force was exerted on the physical system.", "Our method yields stable control for significantly longer time spans than alternative approaches.", "We have demonstrated that deep learning models in conjunction with a differentiable physics solver can successfully predict the behavior of complex physical systems and learn to control them.", "The introduction of a hierarchical predictor-corrector architecture allowed the model to learn to reconstruct long sequences by treating the physical behavior on different time scales separately.", "We have shown that using a differentiable solver greatly benefits the quality of solutions since the networks can learn how their decisions will affect the future.", "In our experiments, hierarchical inference schemes outperform traditional sequential agents because they can easily learn to plan ahead.", "To model realistic environments, we have introduced observations to our pipeline which restrict the information available to the learning agent.", "While the PDE solver still requires full state information to run the simulation, this restriction does not apply when the agent is deployed.", "While we do not believe that learning approaches will replace iterative optimization, our method shows that it is possible to learn representations of solution manifolds for optimal control trajectories using data-driven approaches.", "Fast inference is vital in time-critical applications and can also be used in conjunction with classical solvers to speed up convergence and ultimately produce better solutions." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13333332538604736, 0.10810810327529907, 0.20512820780277252, 0.41025641560554504, 0.1428571343421936, 0.1818181723356247, 0.07999999821186066, 0.3255814015865326, 0.15789473056793213, 0.12903225421905518, 0.17777776718139648, 0.24242423474788666, 0.07692307233810425, 0.10256409645080566, 0.05405404791235924, 0, 0.1538461446762085, 0.1538461446762085, 0.14999999105930328, 0.2790697515010834, 0.12121211737394333, 0.23529411852359772, 0.19512194395065308, 0.15789473056793213, 0.2222222238779068, 0.11428570747375488, 0.0714285671710968, 0.3720930218696594, 0.19999998807907104, 0.20512820780277252, 0.060606054961681366, 0.060606054961681366, 0.0555555522441864, 0.2222222238779068, 0.05128204822540283 ]
HyeSin4FPB
true
[ "We train a combination of neural networks to predict optimal trajectories for complex physical systems." ]
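For contrast with the learned predictor-corrector approach, the iterative single-shooting baseline discussed above can be sketched on a toy linear system with an analytic gradient. This is purely illustrative (the paper's setting uses a differentiable PDE solver and neural networks); the dynamics, cost weights, and step size below are invented for the example.

```python
import numpy as np

# Toy single shooting: steer the linear system x_{t+1} = x_t + u_t from
# x_0 = 0 to a target at time T by gradient descent on the cost
# J(u) = (x_T - target)^2 + lam * sum_t u_t^2,
# differentiated analytically through the unrolled dynamics.
T, target, lam, lr = 8, 3.0, 0.1, 0.05
u = np.zeros(T)
for _ in range(500):
    x_T = u.sum()                                # state after T unrolled steps
    grad = 2.0 * (x_T - target) + 2.0 * lam * u  # dJ/du_t
    u -= lr * grad
```

Each iteration is one full rollout of the dynamics, and the optimization restarts from scratch for every new target; this per-instance cost is exactly what motivates amortizing the solution manifold into a network.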
[ "The ability of overparameterized deep networks to generalize well has been linked to the fact that stochastic gradient descent (SGD) finds solutions that lie in flat, wide minima in the training loss -- minima where the output of the network is resilient to small random noise added to its parameters. \n", "So far this observation has been used to provide generalization guarantees only for neural networks whose parameters are either \\textit{stochastic} or \\textit{compressed}.", "In this work, we present a general PAC-Bayesian framework that leverages this observation to provide a bound on the original network learned -- a network that is deterministic and uncompressed. ", "What enables us to do this is a key novelty in our approach: our framework allows us to show that if on training data, the interactions between the weight matrices satisfy certain conditions that imply a wide training loss minimum, these conditions themselves {\\em generalize} to the interactions between the matrices on test data, thereby implying a wide test loss minimum.", "We then apply our general framework in a setup where we assume that the pre-activation values of the network are not too small (although we assume this only on the training data).", "In this setup, we provide a generalization guarantee for the original (deterministic, uncompressed) network, that does not scale with product of the spectral norms of the weight matrices -- a guarantee that would not have been possible with prior approaches.", "Modern deep neural networks contain millions of parameters and are trained on relatively few samples.", "Conventional wisdom in machine learning suggests that such models should massively overfit on the training data, as these models have the capacity to memorize even a randomly labeled dataset of similar size (Zhang et al., 2017; Neyshabur et al., 2015) .", "Yet these models have achieved state-ofthe-art generalization error on many real-world tasks.", "This observation has spurred 
an active line of research (Soudry et al., 2018; BID2 BID11 ) that has tried to understand what properties are possessed by stochastic gradient descent (SGD) training of deep networks that allows these networks to generalize well.One particularly promising line of work in this area (Neyshabur et al., 2017; BID0 has been bounds that utilize the noise-resilience of deep networks on training data i.e., how much the training loss of the network changes with noise injected into the parameters, or roughly, how wide is the training loss minimum.", "While these have yielded generalization bounds that do not have a severe exponential dependence on depth (unlike other bounds that grow with the product of spectral norms of the weight matrices), these bounds are quite limited: they either apply to a stochastic version of the classifier (where the parameters are drawn from a distribution) or a compressed version of the classifier (where the parameters are modified and represented using fewer bits).In", "this paper, we revisit the PAC-Bayesian analysis of deep networks in Neyshabur et al. (2017; and provide a general framework that allows one to use noise-resilience of the deep network on training data to provide a bound on the original deterministic and uncompressed network. 
We", "achieve this by arguing that if on the training data, the interaction between the 'activated weight matrices' (weight matrices where the weights incoming from/outgoing to inactive units are zeroed out) satisfy certain conditions which results in a wide training loss minimum, these conditions themselves generalize to the weight matrix interactions on the test data.After presenting this general PAC-Bayesian framework, we specialize it to the case of deep ReLU networks, showing that we can provide a generalization bound that accomplishes two goals simultaneously: i)", "it applies to the original network and ii", ") it does not scale exponentially with depth in terms of the products of the spectral norms of the weight matrices; instead our bound scales with more meaningful terms that capture the interactions between the weight matrices and do not have such a severe dependence on depth in practice. We", "note that all but one of these terms are indeed quite small on networks in practice. However", ", one particularly (empirically) large term that we use is the reciprocal of the magnitude of the network pre-activations on the training data (and so our bound would be small only in the scenario where the pre-activations are not too small). We emphasize", "that this drawback is more of a limitation in how we characterize noise-resilience through the specific conditions we chose for the ReLU network, rather than a drawback in our PAC-Bayesian framework itself. 
Our hope is", "that, since our technique is quite general and flexible, by carefully identifying the right set of conditions, in the future, one might be able to derive a similar generalization guarantee that is smaller in practice.To the best of our knowledge, our approach of generalizing noise-resilience of deep networks from training data to test data in order to derive a bound on the original network that does not scale with products of spectral norms, has neither been considered nor accomplished so far, even in limited situations." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.21875, 0.2222222238779068, 0.2857142686843872, 0.1875, 0.2745097875595093, 0.2545454502105713, 0.21052631735801697, 0.19999998807907104, 0.11428570747375488, 0.23404255509376526, 0.16438356041908264, 0.48275861144065857, 0.2666666507720947, 0.19354838132858276, 0.16393442451953888, 0.14999999105930328, 0.23728813230991364, 0.19230768084526062, 0.35555556416511536 ]
Hygn2o0qKX
true
[ "We provide a PAC-Bayes based generalization guarantee for uncompressed, deterministic deep networks by generalizing noise-resilience of the network on the training data to the test data." ]
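The notion of noise-resilience used above (how much the training loss changes when Gaussian noise is added to the parameters, i.e., how wide the minimum is) can be illustrated on the simplest possible case: a least-squares minimum, where perturbing the weights can never decrease the loss and the expected increase scales with the noise variance. This is only a sketch of the quantity being discussed, not the paper's PAC-Bayesian analysis of deep ReLU networks; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# An exact minimum of a quadratic training loss.
X = rng.normal(size=(50, 5))
y = X @ rng.normal(size=5)
w_star, *_ = np.linalg.lstsq(X, y, rcond=None)

def loss(w):
    return float(np.mean((X @ w - y) ** 2))

def sharpness(sigma, trials=200):
    # Average loss increase under Gaussian parameter noise of scale sigma;
    # small values for a given sigma indicate a wide (noise-resilient) minimum.
    deltas = [loss(w_star + sigma * rng.normal(size=5)) - loss(w_star)
              for _ in range(trials)]
    return float(np.mean(deltas))
```

At a minimum every perturbation raises the loss, and larger perturbations raise it more; bounds that exploit this behavior on the training data are what the paper generalizes to the test distribution.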
[ "Training neural networks to be certifiably robust is critical to ensure their safety against adversarial attacks.", "However, it is currently very difficult to train a neural network that is both accurate and certifiably robust.", "In this work we take a step towards addressing this challenge.", "We prove that for every continuous function $f$, there exists a network $n$ such that:\n", "(i) $n$ approximates $f$ arbitrarily close, and", "(ii) simple interval bound propagation of a region $B$ through $n$ yields a result that is arbitrarily close to the optimal output of $f$ on $B$.", "Our result can be seen as a Universal Approximation Theorem for interval-certified ReLU networks.", "To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.", "Much recent work has shown that neural networks can be fooled into misclassifying adversarial examples (Szegedy et al., 2014) , inputs which are imperceptibly different from those that the neural network classifies correctly.", "Initial work on defending against adversarial examples revolved around training networks to be empirically robust, usually by including adversarial examples found with various attacks into the training dataset (Gu and Rigazio, 2015; Papernot et al., 2016; Zheng et al., 2016; Athalye et al., 2018; Eykholt et al., 2018; Moosavi-Dezfooli et al., 2017; Xiao et al., 2018) .", "However, while empirical robustness can be practically useful, it does not provide safety guarantees.", "As a result, much recent research has focused on verifying that a network is certifiably robust, typically by employing methods based on mixed integer linear programming (Tjeng et al., 2019) , SMT solvers (Katz et al., 2017) , semidefinite programming (Raghunathan et al., 2018a) , duality (Wong and Kolter, 2018; Dvijotham et al., 2018b) , and linear relaxations (Gehr et al., 2018; Weng et al., 2018; Wang et al., 2018b; Zhang et al., 2018; Singh et al., 2018; Salman et 
al., 2019) .", "Because the certification rates were far from satisfactory, specific training methods were recently developed which produce networks that are certifiably robust: Mirman et al. (2018) ; Raghunathan et al. (2018b) ; Wang et al. (2018a) ; Wong and Kolter (2018) ; Wong et al. (2018) ; Gowal et al. (2018) train the network with standard optimization applied to an over-approximation of the network behavior on a given input region (the region is created around the concrete input point).", "These techniques aim to discover specific weights which facilitate verification.", "There is a tradeoff between the degree of the over-approximation used and the speed of training and certification.", "Recently, (Cohen et al., 2019b) proposed a statistical approach to certification, which unlike the non-probabilistic methods discussed above, creates a probabilistic classifier that comes with probabilistic guarantees.", "So far, some of the best non-probabilistic results achieved on the popular MNIST (Lecun et al., 1998) and CIFAR10 (Krizhevsky, 2009 ) datasets have been obtained with the simple Interval relaxation (Gowal et al., 2018; Mirman et al., 2019) , which scales well at both training and verification time.", "Despite this progress, there are still substantial gaps between known standard accuracy, experimental robustness, and certified robustness.", "For example, for CIFAR10, the best reported certified robustness is 32.04% with an accuracy of 49.49% when using a fairly modest l ∞ region with radius 8/255 (Gowal et al., 2018) .", "The state-of-the-art non-robust accuracy for this dataset is > 95% with experimental robustness > 50%.", "Given the size of this gap, a key question then is: can certified training ever succeed or is there a fundamental limit?", "In this paper we take a step in answering this question by proving a result parallel to the Universal Approximation Theorem (Cybenko, 1989; Hornik et al., 1989) .", "We prove that for any continuous function f 
defined on a compact domain Γ ⊆ R^m and for any desired level of accuracy δ, there exists a ReLU neural network n which can certifiably approximate f up to δ using interval bound propagation.", "As an interval is a fairly imprecise relaxation, our result directly applies to more precise convex relaxations (e.g., Zhang et al. (2018); Singh et al. (2019) ).", "Theorem 1.1 (Universal Interval-Certified Approximation, Figure 1 ).", "Let Γ ⊂ R^m be a compact set and let f : Γ → R be a continuous function.", "For all δ > 0, there exists a ReLU network n such that for all boxes [a, b] in Γ defined by points a, b ∈ Γ where a_k ≤ b_k for all k, the propagation of the box [a, b] using interval analysis through the network n, denoted n([a, b]), approximates the set", "We recover the classical universal approximation theorem (|f(x) − n(x)| ≤ δ for all x ∈ Γ) by considering boxes [a, b] describing points (x = a = b).", "Note that here the lower bound is not [l, u] as the network n is an approximation of f.", "Because interval analysis propagates boxes, the theorem naturally handles l_∞ norm-bound perturbations to the input.", "Other l_p norms can be handled by covering the l_p ball with boxes.", "The theorem can be extended easily to functions f : Γ → R^k by applying the theorem componentwise.", "Practical meaning of the theorem: The practical meaning of this theorem is as follows: if we train a neural network n on a given training data set (e.g., CIFAR10) and we are satisfied with the properties of n (e.g., high accuracy), then because n is a continuous function, the theorem tells us that there exists a network n′ which is as accurate as n and as certifiable with interval analysis as n is with a complete verifier.", "This means that if we fail to find such an n′, then either the network did not possess the required capacity or the optimizer was unsuccessful.", "Focus on the existence of a network: We note that we do not provide a method for training a certified ReLU network; even though our
method is constructive, we aim to answer an existential question and thus focus on proving that a given network exists.", "Interesting future work would be to study the requirements on the size of this network and the inherent hardness of finding it with standard optimization methods.", "Universal approximation is insufficient: We now discuss why classical universal approximation is insufficient for establishing our result.", "While classical universal approximation theorems state that neural networks can approximate a large class of functions f, unlike our result, they do not state that robustness of the approximation n of f is actually certified with a scalable proof method (e.g., interval bound propagation).", "If one uses a non-scalable complete verifier instead, then the standard universal approximation theorem is sufficient.", "To demonstrate this point, consider the function f : R → R (Figure 2b ) mapping all x ≤ 0 to 1, all x ≥ 1 to 0, and all 0 < x < 1 to 1 − x, and two ReLU networks n_1 (Figure 2a ) and n_2 (Figure 2c ) perfectly approximating f, that is, n_1(x) = f(x) = n_2(x) for all x.", "For δ = 1/4, the interval certification that n_1 maps all", "However, interval certification succeeds for n_2, because n_2([0, 1]) = [0, 1].", "To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.", "We proved that for all real-valued continuous functions f on compact sets, there exists a ReLU network n approximating f arbitrarily well with the interval abstraction.", "This means that for arbitrary input sets, analysis using the interval relaxation yields an over-approximation arbitrarily close to the smallest interval containing all possible outputs.", "Our theorem affirmatively answers the open question of whether the Universal Approximation Theorem generalizes to interval analysis.", "Our results address the question of whether the interval abstraction is expressive enough to analyse
networks approximating interesting functions f .", "This is of practical importance because interval analysis is the most scalable non-trivial analysis." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.10810810327529907, 0.25641024112701416, 0.0624999962747097, 0.4324324131011963, 0, 0.2222222238779068, 0.1111111044883728, 0.15789473056793213, 0.07547169178724289, 0.0312499962747097, 0, 0.08219178020954132, 0.15584415197372437, 0.0624999962747097, 0.1111111044883728, 0.1249999925494194, 0.0312499962747097, 0.10256409645080566, 0.1818181723356247, 0.0555555522441864, 0.1860465109348297, 0.0833333283662796, 0.4193548262119293, 0.16326530277729034, 0, 0.10526315122842789, 0.2461538463830948, 0.11764705181121826, 0.25641024112701416, 0.10526315122842789, 0, 0.1463414579629898, 0.1944444328546524, 0.1304347813129425, 0.3448275923728943, 0.1304347813129425, 0.1111111044883728, 0.2950819730758667, 0.05128204822540283, 0.158730149269104, 0.11428570747375488, 0.1111111044883728, 0.15789473056793213, 0.4583333432674408, 0.2666666507720947, 0.05405404791235924, 0.2926829159259796, 0.11764705181121826 ]
B1gX8kBtPr
true
[ "We prove that for a large class of functions f there exists an interval certified robust network approximating f up to arbitrary precision." ]
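Interval bound propagation itself, the relaxation the theorem is about, takes only a few lines: an affine layer routes lower bounds through positive weights and upper bounds through negative ones, and ReLU is monotone so it maps bounds elementwise. A minimal sketch with illustrative random weights (not any network from the paper):

```python
import numpy as np

def ibp_affine(l, u, W, b):
    # Sound interval propagation through x -> W @ x + b: split W into its
    # positive and negative parts so each output bound stays valid.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b

def ibp_relu(l, u):
    # ReLU is monotone, so interval bounds propagate elementwise.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

def ibp_network(l, u, layers):
    for W, b in layers:
        l, u = ibp_relu(*ibp_affine(l, u, W, b))
    return l, u

# Illustrative 2-layer ReLU network and an input box.
rng = np.random.default_rng(1)
layers = [(rng.normal(size=(4, 2)), rng.normal(size=4)),
          (rng.normal(size=(2, 4)), rng.normal(size=2))]
l0, u0 = np.array([-1.0, 0.0]), np.array([1.0, 0.5])
l, u = ibp_network(l0, u0, layers)
```

The bounds are sound for every point in the input box; the theorem's content is that, for a suitable network, they can also be made arbitrarily tight.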
[ "In this paper, we propose an efficient framework to accelerate convolutional neural networks.", "We utilize two types of acceleration methods: pruning and hints.", "Pruning can reduce model size by removing channels from layers.", "Hints can improve the performance of the student model by transferring knowledge from the teacher model.", "We demonstrate that pruning and hints are complementary to each other.", "On one hand, hints can benefit pruning by maintaining similar feature representations.", "On the other hand, the model pruned from the teacher network is a good initialization for the student model, which increases the transferability between the two networks.", "Our approach performs the pruning stage and the hints stage iteratively to further improve the\n", "performance.", "Furthermore, we propose an algorithm to reconstruct the parameters of the hints layer and make the pruned model more suitable for hints.", "Experiments were conducted on various tasks including classification and pose estimation.", "Results on CIFAR-10, ImageNet and COCO demonstrate the generalization and superiority of our framework.", "In recent years, convolutional neural networks (CNN) have been applied to many computer vision tasks, e.g.
classification BID21 ; BID6 , object detection BID8 ; BID30 , and pose estimation BID25 .", "The success of CNNs drives the development of computer vision.", "However, restricted by large model size as well as computational complexity, many CNN models are difficult to put into practical use directly.", "To solve this problem, more and more research has focused on accelerating models without degradation of performance. Pruning and knowledge distillation are two of the mainstream methods in model acceleration.", "The goal of pruning is to remove less important parameters while maintaining performance similar to the original model.", "Despite pruning methods' superiority, we notice that for many pruning methods, as the number of pruned channels increases, the performance of the pruned model drops rapidly.", "Knowledge distillation describes a teacher-student framework: high-level representations from the teacher model are used to supervise the student model.", "The hints method BID31 shares a similar idea with knowledge distillation, where the feature map of the teacher model is used as the high-level representation.", "According to BID36 , the student network can achieve better performance in knowledge transfer if its initialization produces features similar to the teacher model's.", "Inspired by this work, we propose that the pruned model outputs features similar to the original model's and provides a good initialization for the student model, which does help distillation.", "On the other hand, hints can help reconstruct parameters and alleviate the degradation of performance caused by the pruning operation.", "FIG0 illustrates the motivation of our framework.", "Based on this analysis, we propose an algorithm: we perform the pruning and hints operations iteratively.", "For each iteration, we conduct a reconstructing step between the pruning and hints operations.", "We demonstrate that this reconstructing operation can provide a better initialization for the student model and promote the hints step (see FIG1 ).", "We name our 
method as PWH Framework.", "To our best knowledge, we are the first to combine pruning and hints together as a framework. Our framework can be applied to different vision tasks.", "Experiments on CIFAR-10 Krizhevsky & Hinton (2009) , ImageNet Deng et al. (2016) and COCO Lin et al. (2014) demonstrate the effectiveness of our framework.", "(Figure caption) Hints can help the pruned model reconstruct parameters, and the network pruned from the teacher model can provide a good initialization for the student model in hints learning.", "Furthermore, our method is a framework in which different pruning and hints methods can be included. To summarize, the contributions of this paper are as follows: (1) We analyze the properties of pruning and hints methods and show that these two model acceleration methods are complementary to each other.", "(2) To our best knowledge, this is the first work that combines pruning and hints.", "Our framework can easily be extended to different pruning and hints methods.", "(3) Extensive experiments show the effectiveness of our framework on different datasets for different tasks.", "In this paper, we propose PWH Framework, an iterative framework for model acceleration.", "Our framework takes advantage of both pruning and hints methods.", "To our best knowledge, this is the first work that combines these two model acceleration methods.", "Furthermore, we conduct a reconstructing operation between the hints and pruning steps as a cascader.", "We analyze the properties of these two methods and show they are complementary to each other: pruning provides a better initialization for the student model, and the hints method helps to adjust parameters in the pruned model.", "Experiments on the CIFAR-10, ImageNet and COCO datasets for classification and pose estimation tasks demonstrate the superiority of PWH Framework." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.1666666567325592, 0, 0.07407406717538834, 0.1599999964237213, 0.07692307233810425, 0.22857142984867096, 0.23076923191547394, 0.1818181723356247, 0.07999999821186066, 0.14814814925193787, 0.045454539358615875, 0.08695651590824127, 0, 0.09999999403953552, 0.19354838132858276, 0.17142856121063232, 0, 0.22857142984867096, 0.05405404791235924, 0.1463414579629898, 0.1818181723356247, 0.0952380895614624, 0.1428571343421936, 0.2857142686843872, 0.17142856121063232, 0.0952380895614624, 0.20512820780277252, 0.05128204822540283, 0.17142856121063232, 0.22641509771347046, 0.3448275923728943, 0.23076923191547394, 0.1428571343421936, 0.07407406717538834, 0.23999999463558197, 0.19999998807907104, 0.2222222238779068, 0.2666666507720947, 0.1875 ]
Hyffti0ctQ
true
[ "This is a work aiming for boosting all the existing pruning and mimic method." ]
[ "For typical sequence prediction problems such as language generation, maximum likelihood estimation (MLE) has commonly been adopted as it encourages the predicted sequence most consistent with the ground-truth sequence to have the highest probability of occurring.", "However, MLE focuses on once-to-all matching between the predicted sequence and gold-standard, consequently treating all incorrect predictions as being equally incorrect.", "We refer to this drawback as {\\it negative diversity ignorance} in this paper.", "Treating all incorrect predictions as equal unfairly downplays the nuance of these sequences' detailed token-wise structure.", "To counteract this, we augment the MLE loss by introducing an extra Kullback--Leibler divergence term derived by comparing a data-dependent Gaussian prior and the detailed training prediction.", "The proposed data-dependent Gaussian prior objective (D2GPo) is defined over a prior topological order of tokens and is poles apart from the data-independent Gaussian prior (L2 regularization) commonly adopted in smoothing the training of MLE.", "Experimental results show that the proposed method makes effective use of a more detailed prior in the data and has improved performance in typical language generation tasks, including supervised and unsupervised machine translation, text summarization, storytelling, and image captioning.\n", "Language understanding is the crown jewel of artificial intelligence.", "As the well-known dictum by Richard Feynman states, \"what I cannot create, I do not understand.\"", "Language generation therefore reflects the level of development of language understanding.", "Language generation models have seen remarkable advances in recent years, especially with the rapid development of deep neural networks (DNNs).", "There are several models typically used in language generation, namely sequenceto-sequence (seq2seq) models (Kalchbrenner & Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015; 
Luong et al., 2015; Vaswani et al., 2017) , generative adversarial networks (GANs) (Goodfellow et al., 2014) , variational autoencoders (Kingma & Welling, 2013) , and auto-regressive networks (Larochelle & Murray, 2011; Van Oord et al., 2016) .", "Language generation is usually modeled as a sequence prediction task, which adopts maximum likelihood estimation (MLE) as the standard training criterion (i.e., objective).", "MLE has had much success owing to its intuitiveness and flexibility.", "However, sequence prediction has encountered the following series of problems due to MLE.", "• Exposure bias: The model is not exposed to the full range of errors during training.", "• Loss mismatch: During training, we maximize the log-likelihood, whereas, during inference, the model is evaluated by a different metric such as BLEU or ROUGE.", "• Generation diversity: The generations are dull, generic (Sordoni et al., 2015; Serban et al., 2016; Li et al., 2016a) , repetitive, and short-sighted (Li et al., 2016b ).", "• Negative diversity ignorance: MLE fails to assign proper scores to different incorrect model outputs, which means that all incorrect outputs are treated equally during training.", "A variety of work has alleviated the above MLE training shortcomings apart from negative diversity ignorance.", "Negative diversity ignorance is a result of unfairly downplaying the nuance of sequences' detailed token-wise structure.", "When the MLE objective compares its predicted and ground-truth sequences, it takes a once-for-all matching strategy; the predicted sequence is given a binary label, either correct or incorrect.", "However, these incorrect training predictions may be quite diverse, and letting the model be aware of which incorrect predictions are more incorrect or less incorrect than others may more effectively guide model training.", "For instance, an armchair might be mistaken for a deckchair, but it should usually not be mistaken for a mushroom.", "To alleviate the 
issue of negative diversity ignorance, we add an extra Gaussian prior objective to augment the current MLE training with an extra Kullback-Leibler divergence loss term.", "The extra loss is computed by comparing two probability distributions, the first of which is from the detailed model training prediction and the second of which is from a ground-truth token-wise distribution and is defined as a kind of data-dependent Gaussian prior distribution.", "The proposed data-dependent Gaussian prior objective (D2GPo) is then injected into the final loss through a KL divergence term.", "The D2GPo is poles apart from the commonly adopted data-independent Gaussian prior (L2 regularization) for the purpose of smoothing the training of MLE, which is also directly added into the MLE loss.", "Experimental results show that the proposed method makes effective use of a more detailed prior in the data and improves the performance of typical language generation tasks, including supervised and unsupervised machine translation, text summarization, storytelling, and image captioning.", "This work proposed a data-dependent Gaussian prior objective (D2GPo) for language generation tasks with the hope of alleviating the difficulty of negative diversity ignorance.", "D2GPo imposes the prior from (linguistic) data over the sequence prediction models.", "D2GPo outperformed strong baselines in experiments on classic language generation tasks (i.e., neural machine translation, text summarization, storytelling, and image captioning tasks)." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11320754140615463, 0.0952380895614624, 0.1764705777168274, 0.052631575614213943, 0.3404255211353302, 0.31372547149658203, 0.13793103396892548, 0.12903225421905518, 0.052631575614213943, 0.0624999962747097, 0.0952380895614624, 0.02985074184834957, 0.1304347813129425, 0.12121211737394333, 0.17142856121063232, 0.15789473056793213, 0.1304347813129425, 0, 0.1304347813129425, 0.10526315122842789, 0.10810810327529907, 0.21276594698429108, 0.08695651590824127, 0.05128204822540283, 0.42553192377090454, 0.30188679695129395, 0.2926829159259796, 0.2448979616165161, 0.1428571343421936, 0.22727271914482117, 0.1818181723356247, 0.04347825422883034 ]
S1efxTVYDr
true
[ "We introduce an extra data-dependent Gaussian prior objective to augment the current MLE training, which is designed to capture the prior knowledge in the ground-truth data." ]
[ "We propose an interactive classification approach for natural language queries.", "Instead of classifying given the natural language query only, we ask the user for additional information using a sequence of binary and multiple-choice questions.", "At each turn, we use a policy controller to decide if to present a question or pro-vide the user the final answer, and select the best question to ask by maximizing the system information gain.", "Our formulation enables bootstrapping the system without any interaction data, instead relying on non-interactive crowdsourcing an-notation tasks.", "Our evaluation shows the interaction helps the system increase its accuracy and handle ambiguous queries, while our approach effectively balances the number of questions and the final accuracy.", "Responding to natural language queries through simple, single-step classification has been studied extensively in many applications, including user intent prediction Qu et al., 2019) , and information retrieval (Kang & Kim, 2003; Rose & Levinson, 2004) .", "Typical methods rely on a single user input to produce an output, missing an opportunity to interact with the user to reduce ambiguity and improve the final prediction.", "For example, users may under-specify a request due to incomplete understanding of the domain; or the system may fail to correctly interpret the nuances of the input query.", "In both cases, a low quality input could be mitigated by further interaction with the user.", "In this paper we propose a simple but effective interaction paradigm that consists of a sequence of binary and multiple choice questions allowing the system to ask the user for more information.", "Figure 1 illustrates the types of interaction supported by this method, showcasing the opportunity for clarification while avoiding much of the complexity involved in unrestricted natural language interactions.", "Following a natural language query from the user, our system then decides between posing 
another question to obtain more information or finalizing the current prediction.", "Unlike previous work which assumes access to full interaction data (Hu et al., 2018; Rao & Daumé III, 2018) , we are interested in bootstrapping an interaction system using simple and relatively little annotation effort.", "This is particularly important in real-world applications, such as virtual assistants, where the supported classification labels are subject to change and thereby require a lot of re-annotation.", "We propose an effective approach designed for interaction efficiency and simple system bootstrapping.", "Our approach adopts a Bayesian decomposition of the posterior distributions over classification labels and users' responses through the interaction process.", "Due to the decomposition, we can efficiently compute and select the next question that provides the maximal expected information based on the posteriors.", "To further balance the potential increase in accuracy with the cost of asking additional questions, we train a policy controller to decide whether to ask additional questions or return a final prediction.", "Our method also enables separately collecting natural language annotations to model the distributions of class labels and user responses.", "Specifically, we crowdsource initial natural language queries and question-answer pairs for each class label, alleviating the need for Wizard-of-Oz style dialog annotations (Kelley, 1984; Wen et al., 2016) .", "Furthermore, we leverage the natural language descriptions of class labels, questions and answers to help estimate their correlation and reduce the need for heavy annotation.", "Got it!", "The article below might be helpful:", "We propose an approach for interactive classification, where users can provide under-specified natural language queries and the system can inquire about missing information through a sequence of simple binary or multiple-choice questions.", "Our method uses information theory to select 
the best question at every turn, and a lightweight policy to efficiently control the interaction.", "We show how we can bootstrap the system without any interaction data.", "We demonstrate the effectiveness of our approach on two tasks with different characteristics.", "Our results show that our approach outperforms multiple baselines by a large margin.", "In addition, we provide a new annotated dataset for future work on bootstrapping interactive classification systems." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.5454545617103577, 0.4000000059604645, 0.2745097875595093, 0, 0.08695651590824127, 0.17241378128528595, 0.1304347813129425, 0.08888888359069824, 0.10256409645080566, 0.19230768084526062, 0.1666666567325592, 0.1702127605676651, 0.10526315122842789, 0.07999999821186066, 0.3333333432674408, 0.1428571343421936, 0.09302324801683426, 0.19607841968536377, 0.1428571343421936, 0.19607841968536377, 0.17391303181648254, 0, 0.49056604504585266, 0.1860465109348297, 0.05714285373687744, 0.1111111044883728, 0.1666666567325592, 0.1538461446762085 ]
rkx9_gHtvS
true
[ "We propose an interactive approach for classifying natural language queries by asking users for additional information using information gain and a reinforcement learning policy controller." ]
[ "Convolutional neural networks (CNNs) have achieved state of the art performance on recognizing and representing audio, images, videos and 3D volumes; that is, domains where the input can be characterized by a regular graph structure. \n", "However, generalizing CNNs to irregular domains like 3D meshes is challenging.", "Additionally, training data for 3D meshes is often limited.", "In this work, we generalize convolutional autoencoders to mesh surfaces.", "We perform spectral decomposition of meshes and apply convolutions directly in frequency space.", "In addition, we use max pooling and introduce upsampling within the network to represent meshes in a low dimensional space.", "We construct a complex dataset of 20,466 high resolution meshes with extreme facial expressions and encode it using our Convolutional Mesh Autoencoder.", "Despite limited training data, our method outperforms state-of-the-art PCA models of faces with 50% lower error, while using 75% fewer parameters.", "Convolutional neural networks BID27 have achieved state of the art performance in a large number of problems in computer vision BID26 BID22 , natural language processing BID32 and speech processing BID20 .", "In recent years, CNNs have also emerged as rich models for generating both images BID18 and audio .", "These successes may be attributed to the multi-scale hierarchical structure of CNNs that allows them to learn translational-invariant localized features.", "Since the learned filters are shared across the global domain, the number of filter parameters is independent of the domain size.", "We refer the reader to BID19 for a comprehensive overview of deep learning methods and the recent developments in the field.Despite the recent success, CNNs have mostly been successful in Euclidean domains with gridbased structured data.", "In particular, most applications of CNNs deal with regular data structures such as images, videos, text and audio, while the generalization of CNNs to irregular 
structures like graphs and meshes is not trivial.", "Extending CNNs to graph structures and meshes has only recently drawn significant attention BID8 BID14 .", "Following the work of BID14 on generalizing CNNs on graphs using fast Chebyshev filters, we introduce a convolutional mesh autoencoder architecture for realistically representing high-dimensional meshes of 3D human faces and heads. The human face is highly variable in shape, as it is affected by many factors such as age, gender, ethnicity, etc.", "The face also deforms significantly with expressions.", "The existing state of the art 3D face representations mostly use linear transformations BID39 BID29 BID40 or higher-order tensor generalizations BID43 BID9 .", "While these linear models achieve state of the art results in terms of realistic appearance and Euclidean reconstruction error, we show that CNNs can perform much better at capturing highly non-linear extreme facial expressions with many fewer model parameters. One challenge of training CNNs on 3D facial data is the limited size of current datasets.", "Here we demonstrate that, since these networks have fewer parameters than traditional linear models, they can be effectively learned with limited data.", "This reduction in parameters is attributed to the locally invariant convolutional filters that can be shared on the surface of the mesh.", "Recent work has exploited thousands of 3D scans and 4D scan sequences for learning detailed models of 3D faces BID13 BID46 BID37 BID11 .", "The availability of this data enables us to learn a rich non-linear representation of 3D face meshes that cannot be captured easily by existing linear models. In summary, our work introduces a convolutional mesh autoencoder suitable for 3D mesh processing.", "Our main contributions are: • We introduce a mesh convolutional autoencoder consisting of mesh downsampling and mesh upsampling layers with fast localized convolutional filters defined on the mesh surface. •", "We use the 
mesh autoencoder to accurately represent 3D faces in a low-dimensional latent space, performing 50% better than a PCA model that is used in state of the art methods BID39 for face representation. •", "Our autoencoder uses up to 75% fewer parameters than linear PCA models, while being more accurate on the reconstruction error. •", "We provide 20,466 frames of highly detailed and complex 3D meshes from 12 different subjects for a range of extreme facial expressions, along with our code for research purposes. Our", "data and code are located at http://withheld.for.review. This work takes a step towards the application of CNNs to problems in graphics involving 3D meshes. Key", "aspects of such problems are the limited availability of training data and the need for realism. Our", "work addresses these issues and provides a new tool for 3D mesh modeling.", "While our convolutional Mesh Autoencoder leads to a representation that generalizes better for unseen 3D faces than PCA with many fewer parameters, our model has several limitations.", "Our network is restricted to learning a face representation for a fixed topology, i.e., all our data samples need to have the same adjacency matrix, A. The mesh sampling layers are also based on this fixed adjacency matrix A, which defines only the edge connections.", "The adjacency matrix does not take into account the vertex positions, thus affecting the performance of our sampling operations.", "In the future, we would like to incorporate this information into our learning framework.", "Table 5: Quantitative evaluation of the extrapolation experiment (Mesh Autoencoder vs. PCA vs. FLAME BID29 ).", "The training set consists of the rest of the expressions.", "Mean error is of the form [µ ± σ] with mean Euclidean distance µ and standard deviation σ.", "The median error and number of frames in each expression sequence are also shown.", "All errors are in millimeters (mm). The", "amount of data for high-resolution faces is very limited. 
We", "believe that generating more of such data with high variability between faces would improve the performance of Mesh Autoencoders for 3D face representations. The", "data scarcity also limits our ability to learn models that can be trained for superior performance at higher dimensional latent space. The", "data scarcity also produces noise in some reconstructions.", "We have introduced a generalization of convolutional autoencoders to mesh surfaces with mesh downsampling and upsampling layers combined with fast localized convolutional filters in spectral space.", "The locally invariant filters that are shared across the surface of the mesh significantly reduce the number of filter parameters in the network.", "While the autoencoder is applicable to any class of mesh objects, we evaluated its quality on a dataset of realistic extreme facial expressions.", "Table 6 : Comparison of FLAME and FLAME++.", "FLAME++ is obtained by replacing expression model of FLAME with our mesh autoencoder.", "All errors are in millimeters (mm).convolutional", "filters capture a lot of surface details that are generally missed in linear models like PCA while using 75% fewer parameters. Our Mesh Autoencoder", "outperforms the linear PCA model by 50% on interpolation experiments and generalizes better on completely unseen facial expressions.Face models are used in a large number of applications in computer animations, visual avatars and interactions. In recent years, a lot", "of focus has been given to capturing highly detailed static and dynamic facial expressions. This work introduces a", "direction in modeling these high dimensional face meshes that can be useful in a range of computer graphics applications." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1249999925494194, 0.1599999964237213, 0.17391303181648254, 0.3333333432674408, 0.07407406717538834, 0.11764705181121826, 0.277777761220932, 0, 0.0952380895614624, 0.12903225421905518, 0.060606054961681366, 0, 0.1304347813129425, 0.09302324801683426, 0.13793103396892548, 0.12903225421905518, 0.0952380895614624, 0.0555555522441864, 0.158730149269104, 0, 0.11764705181121826, 0.17142856121063232, 0.15686273574829102, 0.10256409645080566, 0.1702127605676651, 0.05714285373687744, 0.2857142686843872, 0.19512194395065308, 0.13793103396892548, 0.29629629850387573, 0.14999999105930328, 0.1071428507566452, 0.060606054961681366, 0.07407406717538834, 0, 0.09090908616781235, 0.0624999962747097, 0.0714285671710968, 0, 0.07999999821186066, 0.10810810327529907, 0.1111111044883728, 0, 0.2702702581882477, 0.060606054961681366, 0.277777761220932, 0.09090908616781235, 0.07407406717538834, 0, 0, 0.11999999731779099, 0.25, 0 ]
SkgYEQ9h4m
true
[ "Convolutional autoencoders generalized to mesh surfaces for encoding and reconstructing extreme 3D facial expressions." ]
[ "Computing distances between examples is at the core of many learning algorithms for time series.", "Consequently, a great deal of work has gone into designing effective time series distance measures.", "We present Jiffy, a simple and scalable distance metric for multivariate time series.", "Our approach is to reframe the task as a representation learning problem---rather than design an elaborate distance function, we use a CNN to learn an embedding such that the Euclidean distance is effective.", "By aggressively max-pooling and downsampling, we are able to construct this embedding using a highly compact neural network.", "Experiments on a diverse set of multivariate time series datasets show that our approach consistently outperforms existing methods.", "Measuring distances between examples is a fundamental component of many classification, clustering, segmentation and anomaly detection algorithms for time series BID38 BID43 BID13 .", "Because the distance measure used can have a significant effect on the quality of the results, there has been a great deal of work developing effective time series distance measures BID18 BID28 BID1 BID15 .", "Historically, most of these measures have been hand-crafted.", "However, recent work has shown that a learning approach can often perform better than traditional techniques BID16 BID33 BID9 .We", "introduce a metric learning model for multivariate time series. Specifically", ", by learning to embed time series in Euclidean space, we obtain a metric that is both highly effective and simple to implement using modern machine learning libraries. Unlike many", "other deep metric learning approaches for time series, we use a convolutional, rather than a recurrent, neural network, to construct the embedding. 
This choice", ", in combination with aggressive maxpooling and downsampling, results in a compact, accurate network.Using a convolutional neural network for metric learning per se is not a novel idea BID35 BID45 ; however, time series present a set of challenges not seen together in other domains, and how best to embed them is far from obvious. In particular", ", time series suffer from:1. A lack of labeled", "data. Unlike text or images", ", time series cannot typically be annotated post-hoc by humans. This has given rise", "to efforts at unsupervised labeling BID4 , and is evidenced by the small size of most labeled time series datasets. Of the 85 datasets", "in the UCR archive BID10 , for example, the largest dataset has fewer than 17000 examples, and many have only a few hundred. 2. A lack of large", "corpora. In addition to the", "difficulty of obtaining labels, most researchers have no means of gathering even unlabeled time series at the same scale as images, videos, or text. Even the largest time", "series corpora, such as those on Physiobank BID19 , are tiny compared to the virtually limitless text, image, and video data available on the web. 3. Extraneous data. There", "is no guarantee that", "the beginning and end of a time series correspond to the beginning and end of any meaningful phenomenon. I.e., examples of the class", "or pattern of interest may take place in only a small interval within a much longer time series. The rest of the time series", "may be noise or transient phenomena between meaningful events BID37 BID21 .4. Need for high speed. One consequence", "of the presence of extraneous", "data is that many time series algorithms compute distances using every window of data within a time series BID34 BID4 BID37 . 
A time series of length T has O(T) windows of", "a given length, so it is essential that the operations done at each window be efficient. As a result of these challenges, an effective time series distance metric must exhibit the following properties: • Efficiency: Distance measurement must be fast, in terms of both training time and inference time. • Simplicity: As evidenced by the continued dominance", "of the Dynamic Time Warping (DTW) distance BID42 in the presence of more accurate but more complicated rivals, a distance measure must be simple to understand and implement. • Accuracy: Given a labeled dataset, the metric should", "yield a smaller distance between similarly labeled time series. This behavior should hold even for small training sets", ". Our primary contribution is a time series metric learning method, Jiffy, that exhibits all of these properties: it is fast at both training and inference time, simple to understand and implement, and consistently outperforms existing methods across a variety of datasets. We introduce the problem statement and the requisite definitions in Section 2. We summarize existing state-of-the-art approaches", "(both neural and non-neural) in Section 3 and go on to detail our own approach in Section 4. We then present our results in Section 5. 
The paper", "concludes with implications of our work and", "avenues for further research.", "We present Jiffy, a simple and efficient metric learning approach to measuring multivariate time series similarity.", "We show that our method learns a metric that leads to consistent and accurate classification across a diverse range of multivariate time series.", "Jiffy's resilience to hyperparameter choices and consistent performance across domains provide strong evidence for its utility on a wide range of time series datasets.Future work includes the extension of this approach to multi-label classification and unsupervised learning.", "There is also potential to further increase Jiffy's speed by replacing the fully connected layer with a structured BID6 or binarized BID39" ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.31578946113586426, 0.2631579041481018, 0.3888888955116272, 0.2800000011920929, 0.09756097197532654, 0.4878048598766327, 0.260869562625885, 0.19230768084526062, 0.06451612710952759, 0.1860465109348297, 0.42424240708351135, 0.3529411852359772, 0.260869562625885, 0.3055555522441864, 0.1875, 0, 0.10810810327529907, 0.2222222238779068, 0.1599999964237213, 0.0714285671710968, 0.1249999925494194, 0.08163265138864517, 0.14814814925193787, 0.24390242993831635, 0.23255813121795654, 0.04878048226237297, 0.07407407462596893, 0.25, 0.2857142686843872, 0.2222222238779068, 0.24390242993831635, 0.3611111044883728, 0.13333332538604736, 0.06666666269302368, 0.07407407462596893, 0.41025641560554504, 0.40909090638160706, 0.3103448152542114, 0.13333332538604736 ]
ryacTMZRZ
true
[ "Jiffy is a convolutional approach to learning a distance metric for multivariate time series that outperforms existing methods in terms of nearest-neighbor classification accuracy." ]
[ "Prefrontal cortex (PFC) is a part of the brain which is responsible for behavior repertoire.", "Inspired by PFC functionality and connectivity, as well as human behavior formation process, we propose a novel modular architecture of neural networks with a Behavioral Module (BM) and corresponding end-to-end training strategy. ", "This approach allows the efficient learning of behaviors and preferences representation.", "This property is particularly useful for user modeling (as for dialog agents) and recommendation tasks, as it allows learning personalized representations of different user states. ", "In the experiment with video games playing, the results show that the proposed method allows separation of main task’s objectives and behaviors between different BMs.", "The experiments also show network extendability through independent learning of new behavior patterns.", "Moreover, we demonstrate a strategy for an efficient transfer of newly learned BMs to unseen tasks.", "Humans are highly intelligent species and are capable of solving a large variety of compound and open-ended tasks.", "The performance on those tasks often varies depending on a number of factors.", "In this work, we group them into two main categories: Strategy and Behaviour.", "The first group contains all the factors leading to the achievement of a defined set of goals.", "On the other hand, Behaviour is responsible for all the factors not directly linked to the goals and having no significant effect on them.", "Examples of such factors can be current sentiment status or the unique personality and preferences that affect the way an individual makes decisions.", "Existing Deep Networks have been focused on learning of a Strategy component.", "This was achieved by optimization of a model for defined sets of goals, also the goal might be decomposed into sub-goals first, as in FeUdal Networks BID29 or Policy Sketches approach BID1 .", "Behavior component, in turn, obtained much less attention from the 
DL community.", "Although some works have been conducted on the identification of Behavior component in the input, such as works in emotion recognition BID15 BID11 BID17 .", "To the best of our knowledge, there was no previous research on incorporation of Behavior Component or Behavior Representation in Deep Networks before.", "Modeling Behaviour along with Strategy component is an important step to mimicking a real human behavior and creation of robust Human-Computer Interaction systems, such as a dialog agent, social robot or recommendation system.", "The early work of artificial neural networks was inspired by brain structure BID9 BID16 , and the convolution operation and hierarchical layer design found in the network designed for visual analytics are inspired by visual cortex BID9 BID16 .", "In this work, we again seek inspiration from the human brain architecture.", "In the neuroscience studies, the prefrontal cortex (PFC) is the region of the brain responsible for the behavioral repertoire of animals BID18 .", "Similar to the connectivity of the brain cortex (as shown in Figure 1 ), we hypothesize that a behavior can be modeled as a standalone module within the deep network architecture.", "Thus, in this work, we introduce a general purpose modular architecture of deep networks with a Behavioural Module (BM) focusing on impersonating the functionality of PFC. Apart from mimicking the PFC connectivity in our model, we also borrow the model training strategy from human behavior formation process.", "As we are trying to mimic the functionality of a human brain we approached the problem from the perspective of Reinforcement Learning.", "This approach also aligns with the process of unique personality development.", "According to BID6 and BID5 unique personality can be explained by different dopamine functions caused by genetic influence.", "These differences are also a reason for different Positive Emotionality (PE) (Figure 1: Abstract illustration of the prefrontal cortex (PFC) connections of the brain BID18 and corresponding parts of the proposed model.)", "patterns (sensitivity to reward stimuli), which are in turn a significant factor in behavior formation process BID5 .", "Inspired by named biological processes we introduce extra positive rewards (referring to positive-stimuli or dopamine release, higher the reward referring to higher sensitivity) to encourage specific actions and provoke the development of specific behavioral patterns in the trained agent. To validate our method, we selected the challenging domain of classic Atari 2600 games BID2 , where the simulated environment allows an AI algorithm to learn game playing by repeatedly seeking to understand the input space, objectives and solution.", "Based on this environment and an established agent (i.e. Deep Q-Network (DQN) BID20 ), the behavior of the agent can be represented by preferences over different sets of actions.", "In other words, in the given setting, each behaviour is represented by a probability distribution over given action space.", "In real-world tasks, the extra-reward can be represented by the human's satisfaction with the taken action along with the correctness of the output (main reward). Importantly", ", the effect of human behavior is not restricted to a single task and can be observed in various similar situations. Although it", "is difficult to correlate the effect of human behavior on completely different tasks, it is often easier to observe akin patterns in similar domains and problems. To verify this", ", we study two BM transfer strategies to transfer a set of newly learned BMs across different tasks. 
As a human PFC", "is responsible for behavior patterns in a variety of tasks, we also aim to achieve a zero-shot transfer of learned modules across different tasks. The contributions of our work are as follows: • We propose a novel modular architecture with behavior module and a learning method for the separation of behavior from a strategy component. • We provide a", "0-shot transfer strategy for newly learned behaviors to previously unseen tasks. The proposed approach", "ensures easy extendability of the model to new behaviors and transferability of learned BMs. • We demonstrate the effectiveness", "of our approach on the video games domain. The experimental results show good", "separation of behavior with different BMs, as well as promising results when transferring learned BMs to new tasks. Along with that, we study the effects", "of different hyper-parameters on the behavior separation process.", "In this work, we have proposed a novel Modular Network architecture with Behavior Module, inspired by human brain Pre-Frontal Cortex connectivity.", "This approach demonstrated the successful separation of the Strategy and Behavior functionalities among different network components.", "This is particularly useful for network expandability through independent learning of new Behavior Modules.", "Adversarial 0-shot transfer approach showed high potential of the learned BMs to be transferred to unseen tasks.", "Experiments showed that learned behaviors are removable and do not degrade the performance of the network on the main task.", "This property allows the model to work in a general setting, when user preferences are unknown.", "The results also align with human behavior formation process.", "We also conducted an exhaustive study on the effect of hyper-parameters on behavior learning process.", "As future work, we are planning to extend the work to other domains, such as style transfer, chat bots, and recommendation systems.", "Also, we will work on improving module transfer 
quality.", "In this appendix, we show the details of our preliminary study on various key parameters.", "The experiments were conducted on the Behavior Separation task." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2222222238779068, 0.04651162400841713, 0.0833333283662796, 0.1666666567325592, 0.11764705181121826, 0.07692307233810425, 0.13793103396892548, 0.1428571343421936, 0.07999999821186066, 0, 0.0714285671710968, 0.11428570747375488, 0.05714285373687744, 0.07999999821186066, 0.13636362552642822, 0.07999999821186066, 0.11764705181121826, 0.11764705181121826, 0.08888888359069824, 0.13636362552642822, 0, 0.19354838132858276, 0.09756097197532654, 0.07692307233810425, 0.06451612710952759, 0.0833333283662796, 0, 0.11764705181121826, 0.06896550953388214, 0.052631575614213943, 0.04999999701976776, 0.12903225421905518, 0.05882352590560913, 0.1666666567325592, 0.14999999105930328, 0.060606054961681366, 0.16949151456356049, 0.14814814925193787, 0.06666666269302368, 0.07999999821186066, 0.0555555522441864, 0.0952380895614624, 0.11764705181121826, 0.0714285671710968, 0.2222222238779068, 0.06896550953388214, 0.06451612710952759, 0.06896550953388214, 0, 0.07407406717538834, 0, 0, 0.0714285671710968, 0 ]
Syl6tjAqKX
true
[ "Extendable Modular Architecture is proposed for developing of variety of Agent Behaviors in DQN." ]
[ "A major component of overfitting in model-free reinforcement learning (RL) involves the case where the agent may mistakenly correlate reward with certain spurious features from the observations generated by the Markov Decision Process (MDP).", "We provide a general framework for analyzing this scenario, which we use to design multiple synthetic benchmarks from only modifying the observation space of an MDP.", "When an agent overfits to different observation spaces even if the underlying MDP dynamics is fixed, we term this observational overfitting.", "Our experiments expose intriguing properties especially with regards to implicit regularization, and also corroborate results from previous works in RL generalization and supervised learning (SL).", "Generalization for RL has recently grown to be an important topic for agents to perform well in unseen environments.", "Complication arises when the dynamics of the environments entangle with the observation, which is often a high-dimensional projection of the true latent state.", "One particular framework, which we denote by zero-shot supervised framework (Zhang et al., 2018a; Nichol et al., 2018; Justesen et al., 2018) and is used to study RL generalization, is to treat it analogous to a classical supervised learning (SL) problem -i.e. 
assume there exists a distribution of MDP's, train jointly on a finite \"training set\" sampled from this distribution, and check expected performance on the entire distribution, with the fixed trained policy.", "In this framework, there is a spectrum of analysis, ranging from almost purely theoretical analysis (Wang et al., 2019; Asadi et al., 2018) to full empirical results on diverse environments (Packer et al., 2018) .", "However, there is a lack of analysis in the middle of this spectrum.", "On the theoretical side, previous work does not provide analysis for the case when the underlying MDP is relatively complex and requires the policy to be a non-linear function approximator such as a neural network.", "On the empirical side, there is no common ground between recently proposed empirical benchmarks.", "This is partially caused by multiple confounding factors for RL generalization that can be hard to identify and separate.", "For instance, an agent can overfit to the MDP dynamics of the training set, such as for control in Mujoco (Pinto et al., 2017; Rajeswaran et al., 2017) .", "In other cases, an RNN-based policy can overfit to maze-like tasks in exploration , or even exploit determinism and avoid using observations (Bellemare et al., 2012; Machado et al., 2018) .", "Furthermore, various hyperparameters such as the batch-size in SGD (Smith et al., 2018) , choice of optimizer (Kingma & Ba, 2014) , discount factor γ (Jiang et al., 2015) and regularizations such as entropy and weight norms (Cobbe et al., 2018) can also affect generalization.", "Due to these confounding factors, it can be unclear what parts of the MDP or policy are actually contributing to overfitting or generalization in a principled manner, especially in empirical studies with newly proposed benchmarks.", "In order to isolate these factors, we study one broad factor affecting generalization that is most correlated with themes in SL, specifically observational overfitting, where an agent overfits 
due to properties of the observation which are irrelevant to the latent dynamics of the MDP family.", "To study this factor, we fix a single underlying MDP's dynamics and generate a distribution of MDP's by only modifying the observational outputs.", "Our contributions in this paper are the following:", "1. We discuss realistic instances where observational overfitting may occur and its difference from other confounding factors, and design a parametric theoretical framework to induce observational overfitting that can be applied to any underlying MDP.", "2. We study observational overfitting with linear quadratic regulators (LQR) in a synthetic environment and neural networks such as multi-layer perceptrons (MLPs) and convolutions in classic Gym environments.", "A primary novel result we demonstrate for all cases is that implicit regularization occurs in this setting in RL.", "We further test the implicit regularization hypothesis on the benchmark CoinRun from using MLPs, even when the underlying MDP dynamics are changing per level.", "3. 
In the Appendix, we expand upon previous experiments by including full training curves and hyperparameters.", "We also provide an extensive analysis of the convex one-step LQR case under the observational overfitting regime, showing that under Gaussian initialization of the policy and using gradient descent on the training cost, a generalization gap must necessarily exist.", "The structure of this paper is outlined as follows: Section 2 discusses the motivation behind this work and the synthetic construction to abstract certain observation effects.", "Section 3 demonstrates numerous experiments using this synthetic construction that suggest implicit regularization is at work.", "Finally, Section 3.4 tests the implicit regularization hypothesis on CoinRun, as well as ablates various ImageNet architectures and margin metrics in the Appendix.", "We have identified and isolated a key component of overfitting in RL as the particular case of \"observational overfitting\", which is particularly attractive for studying architectural implicit regularizations.", "We have analyzed this setting extensively, by examining 3 main components:", "1. The analytical case of LQR and linear policies under exact gradient descent, which lays the foundation for understanding theoretical properties of networks in RL generalization.", "2. The empirical but principled Projected-Gym case for both MLP and convolutional networks which demonstrates the effects of neural network policies under nonlinear environments.", "3. 
The large scale case for CoinRun, which can be interpreted as a case where relevant features are moving across the input, where empirically, MLP overparametrization also improves generalization.", "We noted that current network policy bounds using ideas from SL are unable to explain overparametrization effects in RL, which is an important further direction.", "In some sense, this area of RL generalization is an extension of static SL classification from adding extra RL components.", "For instance, adding a nontrivial \"combination function\" between f and g θ that is dependent on time (to simulate how object pixels move in a real game) is both an RL generalization issue and potentially video classification issue, and extending results to the memory-based RNN case will also be highly beneficial.", "Furthermore, it is unclear whether such overparametrization effects would occur in off-policy methods such as Q-learning and also ES-based methods.", "In terms of architectural design, recent works (Jacot et al., 2018; Garriga-Alonso et al., 2019; Lee et al., 2019) have shed light on the properties of asymptotically overparametrized neural networks in the infinite width and depth cases and their performance in SL.", "Potentially such architectures (and a corresponding training algorithm) may be used in the RL setting which can possibly provide benefits, one of which is generalization as shown in this paper.", "We believe that this work provides an important initial step towards solving these future problems.", "We further verify that explicit regularization (norm based penalties) also reduces generalization gaps.", "However, explicit regularization may be explained due to the bias of the synthetic tasks, since the first layer's matrix may be regularized to only \"view\" the output of f , especially as regularizing the first layer's weights substantially improves generalization.", "Figure A2 : Explicit Regularization on layer norms.", "We provide another deconvolution 
memorization test, using an LQR as the underlying MDP.", "While fg-Gym-Deconv shows that memorization performance is dampened, this test shows that there can exist specific hard limits to memorization.", "Specifically, NatureCNN can memorize 30 levels, but not 50; IMPALA can memorize 2 levels but not 5; IMPALA-LARGE cannot memorize 2 levels at all.", "Figure A3 : Deconvolution memorization test using LQR as underlying MDP." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.24561403691768646, 0.26923075318336487, 0.21276594698429108, 0.19999998807907104, 0.1395348757505417, 0.13333332538604736, 0.13793103396892548, 0.1071428507566452, 0.21052631735801697, 0.14035087823867798, 0.05128204822540283, 0.2222222238779068, 0.19230768084526062, 0.1111111044883728, 0.19354838132858276, 0.17241378128528595, 0.3333333432674408, 0.21276594698429108, 0.1764705777168274, 0.14035087823867798, 0.07692307233810425, 0.22727271914482117, 0.1666666567325592, 0.0952380895614624, 0.19999998807907104, 0.1599999964237213, 0.1428571343421936, 0.1249999925494194, 0.3396226465702057, 0.1621621549129486, 0.23529411852359772, 0.11999999731779099, 0.11320754140615463, 0.15686273574829102, 0.1818181723356247, 0.19178082048892975, 0.09090908616781235, 0.13114753365516663, 0.25925925374031067, 0.1463414579629898, 0.1538461446762085, 0.178571417927742, 0, 0.10256409645080566, 0.1395348757505417, 0, 0 ]
HJli2hNKDH
true
[ "We isolate one factor of RL generalization by analyzing the case when the agent only overfits to the observations. We show that architectural implicit regularizations occur in this regime." ]
[ "We propose a neural language model capable of unsupervised syntactic structure induction.", "The model leverages the structure information to form better semantic representations and better language modeling.", "Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic information.", "On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation.", "In this paper, we propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model.", "In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network.", "Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word/character-level language model tasks.", "Linguistic theories generally regard natural language as consisting of two parts: a lexicon, the complete set of all possible words in a language; and a syntax, the set of rules, principles, and processes that govern the structure of sentences BID46 .", "To generate a proper sentence, tokens are put together with a specific syntactic structure.", "Understanding a sentence also requires lexical information to provide meanings, and syntactical knowledge to correctly combine meanings.", "Current neural language models can provide meaningful word representations BID0 BID41 .", "However, standard recurrent neural networks only implicitly model syntax, thus fail to efficiently use structure information BID53 . Developing", "a deep neural network that can leverage syntactic knowledge to form a better semantic representation has received a great deal of attention in recent years BID50 BID53 BID11 . 
Integrating", "syntactic structure into a language model is important for different reasons: 1) to obtain", "a hierarchical representation with increasing levels of abstraction, which is a key feature of deep neural networks and of the human brain BID1 BID31 BID47 ; 2) to capture complex linguistic phenomena, like long-term dependency problem BID53 and the compositional effects BID50 ; 3) to provide shortcut for gradient back-propagation BID11 . A syntactic parser", "is the most common source for structure information. Supervised parsers", "can achieve very high performance on well constructed sentences. Hence, parsers can", "provide accurate information about how to compose word semantics into sentence semantics BID50 , or how to generate the next word given previous words BID56 . However, only major", "languages have treebank data for training parsers, and it requires expensive human expert annotation. People also tend to", "break language rules in many circumstances (such as writing a tweet). These defects limit", "the generalization capability of supervised parsers. Unsupervised syntactic structure induction has been among the longstanding challenges of computational linguistics BID23 BID25 BID2 . Researchers are interested", "in this problem for a variety of reasons: to be able to parse languages for which no annotated treebanks exist BID35 ; to create a dependency structure to better suit a particular NLP application BID56 ; to empirically argue for or against the poverty of the stimulus BID12 BID10 ; and to examine cognitive issues in language learning BID51 . In this paper, we propose a", "novel neural language model: Parsing-Reading-Predict Networks (PRPN), which can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to form a better language model. With our model, we assume that", "language can be naturally represented as a tree-structured graph. The model is composed of three", "parts: 1. 
A differentiable neural Parsing", "Network uses a convolutional neural network to compute the syntactic distance, which represents the syntactic relationships between all successive pairs of words in a sentence, and then makes soft constituent decisions based on the syntactic distance. 2. A Reading Network that recurrently", "computes an adaptive memory representation to summarize information relevant to the current time step, based on all previous memories that are syntactically and directly related to the current token. 3. A Predict Network that predicts the", "next token based on all memories that are syntactically and directly related to the next token. We evaluate our model on three tasks: word-level language modeling, character-level language modeling, and unsupervised constituency parsing. The proposed model achieves (or is", "close to) the state-of-the-art on both word-level and character-level language modeling. The model's unsupervised parsing outperforms", "some strong baseline models, demonstrating that the structure found by our model is similar to the intrinsic structure provided by human experts.", "In this paper, we propose a novel neural language model that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model.", "We introduce a new neural parsing network: Parsing-Reading-Predict Network, that can make differentiable parsing decisions.", "We use a new structured attention mechanism to control skip connections in a recurrent neural network.", "Hence induced syntactic structure information can be used to improve the model's performance.", "Via this mechanism, the gradient can be directly backpropagated from the language model loss function into the neural Parsing Network.", "The proposed model achieves (or is close to) the state-of-the-art on both word/character-level language modeling tasks.", "Experiments also show that the inferred syntactic structure highly 
correlates with human expert annotation." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3720930218696594, 0.31111109256744385, 0.21276594698429108, 0.04081632196903229, 1, 0.3333333432674408, 0.3199999928474426, 0.22580644488334656, 0.13636362552642822, 0.12765957415103912, 0.1428571343421936, 0.16326530277729034, 0.27586206793785095, 0.2666666507720947, 0.1538461446762085, 0.09756097197532654, 0.0952380895614624, 0.072727270424366, 0.08163265138864517, 0.08888888359069824, 0.1111111044883728, 0.2750000059604645, 0.774193525314331, 0.17391303181648254, 0.0555555522441864, 0.20895521342754364, 0.13114753365516663, 0.2222222238779068, 0.12765957415103912, 0.19999998807907104, 0.8771929740905762, 0.2666666507720947, 0.17391303181648254, 0.22727271914482117, 0.2857142686843872, 0.12765957415103912, 0.2666666507720947 ]
rkgOLb-0W
true
[ "In this paper, We propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model." ]
[ "Unsupervised embedding learning aims to extract good representations from data without the use of human-annotated labels.", "Such techniques are apparently in the limelight because of the challenges in collecting massive-scale labels required for supervised learning.", "This paper proposes a comprehensive approach, called Super-AND, which is based on the Anchor Neighbourhood Discovery model.", "Multiple losses defined in Super-AND make similar samples gather even within a low-density space and keep features invariant against augmentation.", "As a result, our model outperforms existing approaches in various benchmark datasets and achieves an accuracy of 89.2% in CIFAR-10 with the Resnet18 backbone network, a 2.9% gain over the state-of-the-art.", "Deep learning and convolutional neural network have become an indispensable technique in computer vision (LeCun et al., 2015; Krizhevsky et al., 2012; Lawrence et al., 1997) .", "Remarkable developments, in particular, were led by supervised learning that requires thousands or more labeled data.", "However, high annotation costs have become a significant drawback in training a scalable and practical model in many domains.", "In contrast, unsupervised deep learning that requires no label has recently started to get attention in computer vision tasks.", "From clustering analysis (Caron et al., 2018; Ji et al., 2018) , and self-supervised model (Gidaris et al., 2018; Bojanowski & Joulin, 2017) to generative model (Goodfellow et al., 2014; Kingma & Welling, 2013; Radford et al., 2016) , various learning methods came out and showed possibilities and prospects.", "Unsupervised embedding learning aims to extract visually meaningful representations without any label information.", "Here \"visually meaningful\" refers to finding features that satisfy two traits:", "(i) positive attention and", "(ii) negative separation (Ye et al., 2019; Zhang et al., 2017c; Oh Song et al., 2016) .", "Data samples from the same ground truth 
class, i.e., positive samples, should be close in the embedding space (Fig. 1a) ; whereas those from different classes, i.e., negative samples, should be pushed far away in the embedding space (Fig. 1b) .", "However, in the setting of unsupervised learning, a model cannot have knowledge about whether given data points are positive samples or negative samples.", "Several new methods have been proposed to find 'visually meaningful' representations.", "The sample specificity method considers all data points as negative samples and separates them in the feature space (Wu et al., 2018; Bojanowski & Joulin, 2017) .", "Although this method achieves high performance, its decisions are known to be biased from learning only from negative separation.", "One approach utilizes data augmentation to consider positive samples in training (Ye et al., 2019) , which efficiently reduces any ambiguity in supervision while keeping invariant features in the embedding space.", "Another approach is called the Anchor Neighborhood Discovery (AND) model, which alleviates the complexity in boundaries by discovering the nearest neighbor among the data points (Huang et al., 2019) .", "Each of these approaches overcomes different limitations of the sample specificity method.", "However, no unified approach has been proposed.", "This paper presents a holistic method for unsupervised embedding learning, named Super-AND.", "Super-AND extends the AND algorithm and unifies various but dominant approaches in this domain with its unique architecture.", "Our proposed model not only focuses on learning distinctive features across neighborhoods, but also emphasizes edge information in embeddings and maintains the unchanging class information from the augmented data.", "Besides combining existing techniques, we newly introduce Unification Entropy loss (UE-loss), an adversary of sample specificity loss, which is able to gather similar data points within a low-density space.", "Extensive experiments are 
conducted on several benchmark datasets to verify the superiority of the model.", "The results show the synergetic advantages among modules of Super-AND.", "The main contributions of this paper are as follows:", "• We effectively unify various techniques from state-of-the-art models and introduce a new loss, UE-loss, to make similar data samples gather in the low-density space.", "• Super-AND outperforms all baselines in various benchmark datasets.", "It achieved an accuracy of 89.2% in the CIFAR-10 dataset with the ResNet18 backbone network, compared to the state-of-the-art that gained 86.3%.", "• The extensive experiments and the ablation study show that every component in Super-AND contributes to the performance increase, and also indicate their synergies are critical.", "Our model's outstanding performance is a step closer to the broader adoption of unsupervised techniques in computer vision tasks.", "The premise of data-less embedding learning is at its applicability to practical scenarios, where there exists only one or two examples per cluster.", "Codes and trained data for Super-AND are accessible via a GitHub link.", "Generative model.", "This type of model is a powerful branch in unsupervised learning.", "By reconstructing the underlying data distribution, a model can generate new data points as well as features from images without labels.", "Generative adversarial network (Goodfellow et al., 2014) has led to rapid progress in image generation problems Arjovsky et al., 2017) .", "While some attempts have been made in terms of unsupervised embedding learning (Radford et al., 2016) , the main objective of generative models lies at mimicking the true distribution of each class, rather than discovering distinctive categorical information the data contains.", "Self-supervised learning.", "This type of learning uses inherent structures in images as pseudo-labels and exploits labels for back-propagation.", "For example, a model can be trained to create 
embeddings by predicting the relative position of a pixel from other pixels (Doersch et al., 2015) or the degree of changes after rotating images (Gidaris et al., 2018) .", "Predicting future frames of a video can benefit from this technique (Walker et al., 2016) .", "Wu et al. (2018) proposed the sample specificity method that learns feature representations by capturing apparent discriminability among instances.", "All of these methods are suitable for unsupervised embedding learning, although there exists a risk of false knowledge from generated labels that only weakly correlate with the underlying class information.", "Learning invariants from augmentation.", "Data augmentation is a strategy that enables a model to learn from datasets with an increased variety of instances.", "Popular techniques include flipping, scaling, rotation, and grey-scaling.", "These techniques do not deform any crucial features of the data, but only change the style of images.", "Some studies hence use augmentation techniques to train models that learn these invariants. Clustering analysis.", "This type of analysis is an extensively studied area in unsupervised learning, whose main objective is to group similar objects into the same class.", "Many studies either leveraged deep learning for dimensionality reduction before clustering (Schroff et al., 2015; Baldi, 2012) or trained models in an end-to-end fashion (Xie et al., 2016; Yang et al., 2016) .", "Caron et al.
(2018) proposed a concept called deep cluster, an iterative method that updates its weights by predicting cluster assignments as pseudo-labels.", "However, directly reasoning about global structures without any labels is error-prone.", "The AND model, which we extend in this work, combines the advantages of the sample specificity and clustering strategies to mitigate noisy supervision via neighborhood analysis (Huang et al., 2019) .", "This paper presents Super-AND, a holistic technique for unsupervised embedding learning.", "Besides the synergetic advantage of combining existing methods, the newly proposed UE-loss groups nearby data points even in a low-density space, while data augmentation maintains invariant features.", "The experiments with both coarse-grained and fine-grained datasets demonstrate our model's outstanding performance against the state-of-the-art models.", "Our efforts to advance unsupervised embedding learning directly benefit future applications that rely on various image clustering tasks.", "The high accuracy achieved by Super-AND makes the unsupervised learning approach an economically viable option where labels are costly to generate." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.25806450843811035, 0.25, 0.25, 0.05714285373687744, 0.13333332538604736, 0.052631575614213943, 0.06451612710952759, 0.0624999962747097, 0.11764705181121826, 0.03999999538064003, 0.1428571343421936, 0, 0, 0, 0.08695651590824127, 0.21621620655059814, 0.07692307233810425, 0.0476190447807312, 0.060606054961681366, 0.13636362552642822, 0.0952380895614624, 0.1538461446762085, 0.1818181723356247, 0.29629629850387573, 0.1818181723356247, 0.1904761791229248, 0.09090908616781235, 0.20689654350280762, 0.1599999964237213, 0.0833333283662796, 0.14999999105930328, 0, 0.10810810327529907, 0.05128204822540283, 0.23529411852359772, 0.15789473056793213, 0.14814814925193787, 0.307692289352417, 0.11764705181121826, 0, 0.19230768084526062, 0.19354838132858276, 0.1249999925494194, 0.12903225421905518, 0.11764705181121826, 0.2790697515010834, 0, 0.12121211737394333, 0, 0.12903225421905518, 0, 0.15789473056793213, 0.09090908616781235, 0.10526315122842789, 0.07692307233810425, 0.13333332538604736, 0.38461539149284363, 0.1428571343421936, 0.0624999962747097, 0.24242423474788666, 0.2222222238779068 ]
Hkgty1BFDS
true
[ "We proposed a comprehensive approach for unsupervised embedding learning on the basis of AND algorithm." ]
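The sample specificity objective described in the record above (treating every other instance as a negative and separating all instances in feature space) can be sketched as a non-parametric softmax over a memory bank of embeddings. This is a minimal illustration, not Super-AND's exact formulation; the temperature value, array shapes, and function name are assumptions for the example.

```python
import numpy as np

def instance_discrimination_probs(feature, memory_bank, tau=0.07):
    """Non-parametric softmax over all stored instance embeddings:
    every other instance acts as a negative sample (Wu et al., 2018 style)."""
    logits = memory_bank @ feature / tau   # inputs assumed L2-normalised
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits)
    return p / p.sum()

rng = np.random.default_rng(0)
bank = rng.normal(size=(5, 8))
bank /= np.linalg.norm(bank, axis=1, keepdims=True)
probs = instance_discrimination_probs(bank[2], bank)
print(probs.argmax())  # the query's own slot wins: 2
```

Maximizing each instance's own probability under this softmax is what separates all samples in the embedding space; losses like the UE-loss described above would then counteract this pure separation by pulling similar points together.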
[ "Analysis of histopathology slides is a critical step for many diagnoses, and in particular in oncology where it defines the gold standard.", "In the case of digital histopathological analysis, highly trained pathologists must review vast whole-slide-images of extreme digital resolution (100,000^2 pixels) across multiple zoom levels in order to locate abnormal regions of cells, or in some cases single cells, out of millions.", "The application of deep learning to this problem is hampered not only by small sample sizes, as typical datasets contain only a few hundred samples, but also by the generation of ground-truth localized annotations for training interpretable classification and segmentation models.", "We propose a method for disease localization that requires only the image-level labels available during training.", "Even without pixel-level annotations, we are able to demonstrate performance comparable with models trained with strong annotations on the Camelyon-16 lymph node metastases detection challenge.", "We accomplish this through the use of pre-trained deep convolutional networks, feature embedding, as well as learning via top instances and negative evidence, a multiple instance learning technique from the field of semantic segmentation and object detection.", "Histopathological image analysis (HIA) is a critical element of diagnosis in many areas of medicine, and especially in oncology, where it defines the gold standard metric.", "Recent works have sought to leverage modern developments in machine learning (ML) to aid pathologists in disease detection tasks, but the majority of these techniques require localized annotation masks as training data.", "These annotations are even more costly to obtain than the original diagnosis, as pathologists must spend time to assemble pixel-by-pixel segmentation maps of diseased tissue at extreme resolution; thus, HIA datasets with annotations are very limited in size.", "Additionally, such localized annotations may not be available when facing new problems in HIA,
such as new disease subtype classification, prognosis estimation, or drug response prediction.", "Thus, the critical question for HIA is: can one design a learning architecture which achieves accurate classification with no additional localized annotation?", "A successful technique would be able to train algorithms to assist pathologists during analysis, and could also be used to identify previously unknown structures and regions of interest. Indeed, while histopathology is the gold standard diagnostic in oncology, it is extremely costly, requiring many hours of focus from pathologists to make a single diagnosis BID21 BID30 .", "Additionally, as correct diagnosis for certain diseases requires pathologists to identify a few cells out of millions, these tasks are akin to \"finding a needle in a haystack.\"", "Hard numbers on diagnostic error rates in histopathology are difficult to obtain, being dependent upon the disease and tissue in question as well as self-reporting by pathologists of diagnostic errors.", "However, as reported in the review of BID25 , false negatives in cancer diagnosis can lead not only to catastrophic consequences for the patient, but also to incredible financial risk to the pathologist.", "Any tool which can help pathologists focus their attention and effort on the most suspect regions can reduce false negatives and improve patient outcomes through more accurate diagnoses BID8 .", "Medical researchers have looked to computer-aided diagnosis for decades, but the lack of computational resources and data have prevented wide-spread implementation and usage of such tools BID11 .", "Since the advent of automated digital WSI capture in the 1990s, researchers have sought approaches for easing the pathologist's workload and improving patient outcomes through image processing algorithms BID11 BID22 .", "Rather than predicting final diagnosis, many of these procedures focused instead on segmentation, either for cell-counting, or for the detection of
suspect regions in the WSI.", "Historical methods have focused on the use of hand-crafted texture or morphological BID5 features used in conjunction with unsupervised techniques such as K-means clustering or other dimensionality reduction techniques prior to classification via k-Nearest Neighbors or a support vector machine. Over the past decade, fruitful developments in deep learning BID19 have led to an explosion of research into the automation of image processing tasks.", "While the application of such advanced ML techniques to image tasks has been successful for many consumer applications, the adoption of such approaches within the field of medical imaging has been more gradual.", "However, these techniques demonstrate remarkable promise in the field of HIA.", "Specifically, in digital pathology with whole-slide imaging (WSI) BID33 BID26 , highly trained and skilled pathologists review digitally captured microscopy images from prepared and stained tissue samples in order to make diagnoses.", "Digital WSI are massive datasets, consisting of images captured at multiple zoom levels.", "At the greatest magnification levels, a WSI may have a digital resolution upwards of 100 thousand pixels in both dimensions.", "However, since localized annotations are very difficult to obtain, datasets may only contain WSI-level diagnosis labels, falling into the category of weakly-supervised learning. The use of DCNNs was first proposed for HIA in BID3 , where the authors were able to train a model for mitosis detection in H&E stained images.", "A similar technique was applied to WSI for the detection of invasive ductal carcinoma in BID4 .", "These approaches demonstrated the usefulness of learned features as an effective replacement for hand-crafted image features.", "It is possible to train deep architectures from scratch for the classification of tile images BID29 BID13 .", "However, training such DCNN architectures can be extremely resource intensive.", "For this reason, many
recent approaches applying DCNNs to HIA make use of large pre-trained networks to act as rich feature extractors for tiles BID15 BID17 BID21 BID32 BID27 .", "Such approaches have found success, as aggregation of rich representations from pre-trained DCNNs has proven to be quite effective, even without from-scratch training on WSI tiles. In this paper, we propose CHOWDER 1 , an approach for the interpretable prediction of general localized diseases in WSI with only weak, whole-image disease labels and without any additional expert-produced localized annotations, i.e. per-pixel segmentation maps, of diseased areas within the WSI.", "To accomplish this, we adapt an existing architecture from the field of multiple instance learning and object region detection BID9 to WSI diagnosis prediction.", "By modifying the pre-trained DCNN model BID12 , introducing an additional set of fully-connected layers for context-aware classification from tile instances, developing a random tile sampling scheme for efficient training over massive WSI, and enforcing a strict set of regularizations, we are able to demonstrate performance equivalent to the best human pathologists.", "Notably, while the approach we propose makes use of a pre-trained DCNN as a feature extractor, the entire procedure is a true end-to-end classification technique, and therefore the transferred pre-trained layers can be fine-tuned to the context of H&E WSI.", "We demonstrate, using only whole-slide labels, performance comparable to top-10 ranked methods trained with strong, pixel-level labels on the Camelyon-16 challenge dataset, while also producing disease segmentations that closely match ground-truth annotations.", "We also present results for diagnosis prediction on WSI obtained from The Cancer Genome Atlas (TCGA), where strong annotations are not available and diseases may not be strongly localized within the tissue sample.", "We have shown that using state-of-the-art techniques from MIL in computer vision,
such as the top instance and negative evidence approach of BID9 , one can construct an effective technique for diagnosis prediction and disease location for WSI in histopathology without the need", "for expensive localized annotations produced by expert pathologists.", "Table 2: Final leaderboards for the Camelyon-16 competition. All competition methods had access to the full set of strong annotations for training their models.", "In contrast, our proposed approach only utilizes image-wide diagnosis labels and obtains performance comparable to the top-10 methods.", "By removing this requirement, we hope to accelerate the production of computer-assistance tools for pathologists to greatly improve the turn-around time in pathology labs and help surgeons and oncologists make rapid and effective patient care decisions.", "This also opens the way to tackle problems where expert pathologists may not know precisely where relevant tissue is located within the slide image, for instance for prognosis estimation or prediction of drug response tasks.", "The ability of our approach to discover associated regions of interest without prior localized annotations hence appears as a novel discovery approach for the field of pathology.", "Moreover, using the suggested localization from CHOWDER, one may considerably speed up the process of obtaining ground-truth localized annotations. A number of improvements can be made to the CHOWDER method, especially in the production of disease localization maps.", "As presented, we use the raw values from the convolutional embedding layer, which means that the resolution of the produced disease localization map is fixed to that of the sampled tiles.", "However, one could also sample overlapping tiles and then use a data fusion technique to generate a final localization map.", "Additionally, as a variety of annotations may be available, CHOWDER could be extended to the case of heterogeneous annotation, e.g.
some slides with expert-produced localized annotations and those with only whole-slide annotations. (Appendix A, Further Results.) Figure 5: Visualization of metastasis detection on test image 2 of the Camelyon-16 dataset using our proposed approach.", "Left: Full WSI at zoom level 6 with ground truth annotation of metastases shown via black border.", "Tiles with positive feature embeddings are colored from white to red according to their magnitude, with red representing the largest magnitude.", "Right: Detail of metastases at zoom level 2 overlaid with the classification output of our proposed approach.", "Here, the outputs of all tested tiles are shown and colored according to their value, from blue to white to red, with blue representing the most negative values, and red the most positive.", "Tiles without color were not included when randomly selecting tiles for inference.", "Figure 6: Visualization of metastasis detection on test image 92 of the Camelyon-16 dataset using our proposed approach.", "Left: Full WSI at zoom level 6 with ground truth annotation of metastases shown via black border.", "Tiles with positive feature embeddings are colored from white to red according to their magnitude, with red representing the largest magnitude.", "Right: Detail of metastases at zoom level 2 overlaid with the classification output of our proposed approach.", "Here, the outputs of all tested tiles are shown and colored according to their value, from blue to white to red, with blue representing the most negative values, and red the most positive.", "Tiles without color were not included when randomly selecting tiles for inference." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2978723347187042, 0.13114753365516663, 0.25, 0.2857142686843872, 0.03999999538064003, 0.20689654350280762, 0.19999998807907104, 0.1428571343421936, 0.09836065024137497, 0.03999999538064003, 0.2083333283662796, 0.18918918073177338, 0.1538461446762085, 0.18867923319339752, 0.18518517911434174, 0.07547169178724289, 0.1599999964237213, 0.18518517911434174, 0.16326530277729034, 0.14999999105930328, 0.11538460850715637, 0.1621621549129486, 0.1090909019112587, 0.10256409645080566, 0.2222222238779068, 0.2222222238779068, 0.19512194395065308, 0.1463414579629898, 0.23255813121795654, 0.0555555522441864, 0.07407406717538834, 0.17977528274059296, 0.1599999964237213, 0.1690140813589096, 0.20338982343673706, 0.17241378128528595, 0.13793103396892548, 0.21917808055877686, 0.1428571343421936, 0.1395348757505417, 0.05882352590560913, 0.17241378128528595, 0.13793103396892548, 0.1599999964237213, 0.17543859779834747, 0.15686273574829102, 0.13333332538604736, 0.16438356041908264, 0.04651162400841713, 0.045454539358615875, 0.09756097197532654, 0.11538460850715637, 0.052631575614213943, 0.13636362552642822, 0.04651162400841713, 0.045454539358615875, 0.09756097197532654, 0.11538460850715637, 0.052631575614213943 ]
ryserbZR-
true
[ "We propose a weakly supervised learning method for the classification and localization of cancers in extremely high resolution histopathology whole slide images using only image-wide labels." ]
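The "top instances and negative evidence" selection described in the record above can be sketched as follows: given per-tile scores for one slide, keep only the r highest and r lowest scores before the slide-level classifier. This is a hedged sketch of the idea, not CHOWDER's exact implementation; the function name, the value of r, and the toy scores are assumptions.

```python
import numpy as np

def top_and_negative_instances(tile_scores, r=2):
    """Select the r highest (top instances) and r lowest (negative
    evidence) tile scores of one slide; in CHOWDER-style models these
    2r values would feed a small MLP for the slide-level prediction."""
    s = np.sort(np.asarray(tile_scores))[::-1]    # descending
    return np.concatenate([s[:r], s[-r:]])

scores = [0.1, 0.9, -0.5, 0.3, 0.0, -0.2]
print(top_and_negative_instances(scores, r=2))   # 0.9, 0.3, -0.2, -0.5
```

Keeping the most negative scores alongside the most positive ones is what lets the model use healthy-looking tiles as evidence against disease, rather than relying on the maximum tile score alone.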
[ "Massively multi-label prediction/classification problems arise in environments like health care or biology where it is useful to make very precise predictions.", "One challenge with massively multi-label problems is that there is often a long-tailed frequency distribution for the labels, resulting in few positive examples for the rare labels.", "We propose a solution to this problem by modifying the output layer of a neural network to create a Bayesian network of sigmoids which takes advantage of ontology relationships between the labels to help share information between the rare and the more common labels.", "We apply this method to the two massively multi-label tasks of disease prediction (ICD-9 codes) and protein function prediction (Gene Ontology terms) and obtain significant improvements in per-label AUROC and average precision.", "In this paper, we study general techniques for improving predictive performance in massively multi-label classification/prediction problems in which there is an ontology providing relationships between the labels.", "Such problems have practical applications in biology, precision health, and computer vision where there is a need for very precise categorization.", "For example, in health care we have an increasing number of treatments that are only useful for small subsets of the patient population.", "This forces us to create large and precise labeling schemes when we want to find patients for these personalized treatments. One large issue with massively multi-label prediction is that there is often a long-tailed frequency distribution for the labels, with a large fraction of the labels having very few positive examples in the training data.", "The corresponding low amount of training data for rare labels makes it difficult to train individual classifiers.", "Current multi-task learning approaches enable us to somewhat circumvent this bottleneck through sharing information between the rare and common labels in a manner that
enables us to train classifiers even for the data-poor rare labels BID6 . In", "this paper, we introduce a new method for massively multi-label prediction, a Bayesian network of sigmoids, that helps achieve better performance on rare classes by using ontological information to better share information between the rare and common labels. This", "method is based on similar ideas found in Bayesian networks and hierarchical softmax BID18 . The", "main distinction between this paper and prior work is that we focus on improving multi-label prediction performance with more complicated directed acyclic graph (DAG) structures between the labels, while previous hierarchical softmax work focuses on improving runtime performance on multi-class problems (where labels are mutually exclusive) with simpler tree structures between the labels. In order to demonstrate the empirical predictive performance of our method, we test it on two very different massively multi-label tasks. The", "first is a disease prediction task where we predict ICD-9 (diagnosis) codes from medical record data using the ICD-9 hierarchy to tie the labels together. The", "second task is a protein function prediction task where we predict Gene Ontology terms BID0 BID5 from sequence information using the Gene Ontology DAG to combine the labels.
Our", "experiments indicate that our new method obtains better average predictive performance on rare labels while maintaining similar performance on common labels.", "This paper introduces a new method for improving the performance of rare labels in massively multi-label problems with ontologically structured labels.", "Our new method uses the ontological relationships to construct a Bayesian network of sigmoid outputs which enables us to express the probability of rare labels as a product of conditional probabilities of more common higher-level labels.", "This enables us to share information between the labels and achieve empirically better performance in both AUROC and average precision for rare labels than flat sigmoid baselines in three separate experiments covering the two very different domains of protein function prediction and disease prediction.", "This improvement in performance for rare labels enables us to make more precise predictions for smaller label categories and should be applicable to a variety of tasks that contain an ontology that defines relationships between labels." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.21621620655059814, 0.24390242993831635, 0.23999999463558197, 0.21739129722118378, 0.23255813121795654, 0.15789473056793213, 0.05128204822540283, 0.16129031777381897, 0.11764705181121826, 0.1599999964237213, 0.42307692766189575, 0.1249999925494194, 0.1599999964237213, 0.1463414579629898, 0.1904761791229248, 0.22857142984867096, 0.4324324131011963, 0.17391303181648254, 0.145454540848732, 0.20408162474632263 ]
r1g1LoAcFm
true
[ " We propose a new method for using ontology information to improve performance on massively multi-label prediction/classification problems." ]
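The core idea of the record above, expressing a rare label's probability as a product of conditional sigmoids along its ontology path, can be illustrated with a small numeric sketch. This is an assumption-laden toy (the logit values and path are hypothetical), not the paper's trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float)))

def leaf_probability(path_logits):
    """P(leaf label) as the product of conditional sigmoids along the
    ontology path from the root down to the label; each logit scores a
    label conditioned on its parent label being positive."""
    return float(np.prod(sigmoid(path_logits)))

# hypothetical path: root term -> intermediate term -> rare leaf term
p = leaf_probability([2.0, 1.0, -1.0])
print(round(p, 3))  # ~0.173
```

Because the root and intermediate conditionals are trained on the (much more frequent) ancestor labels, the rare leaf's probability reuses those better-trained classifiers, which is the information-sharing mechanism the record describes.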
[ "Generative Adversarial Networks have made data generation possible in various use cases, but for complex, high-dimensional distributions they can be difficult to train because of convergence problems and the appearance of mode collapse.\n", "Sliced Wasserstein GANs, and especially the application of the Max-Sliced Wasserstein distance, made it possible to approximate the Wasserstein distance during training in an efficient and stable way and helped ease the convergence problems of these architectures.\n\n", "This method transforms sample assignment and distance calculation into sorting the one-dimensional projections of the samples, which results in a sufficient approximation of the high-dimensional Wasserstein distance. \n\n", "In this paper we will demonstrate that approximating the Wasserstein distance by sorting the samples is not always the optimal approach, and that greedy assignment of the real and fake samples can result in faster convergence and a better approximation of the original distribution.", "Generative Adversarial Networks (GANs) were first introduced in Goodfellow et al. (2014) , where instead of a mathematically well-established loss function, another differentiable neural network, a discriminator, was applied to approximate the distance between two distributions.", "These methods are widely applied in data generation and have significantly improved the modelling capabilities of neural networks.", "It was demonstrated in various use cases that these approaches can approximate complex high-dimensional distributions in practice Karras et al. (2017) , Yu et al. (2017) , Brock et al. (2018) .", "Apart from the theoretical advantage of GANs of applying a discriminator network instead of a distance metric (e.g.
ℓ1 or ℓ2 loss), modelling high-dimensional distributions with GANs often proves to be problematic in practice.", "The two most common problems are mode collapse, where the generator gets stuck in a state where only a small portion of the whole distribution is modeled, and convergence problems, where either the generator or the discriminator solves its task almost perfectly, providing low or no gradients for training the other network.", "Convergence problems were eased by introducing the Wasserstein distance Gulrajani et al. (2017) , which instead of a point-wise distance calculation (e.g. cross-entropy or ℓ1 distance) calculates a minimal transportation distance (earth mover's distance) between the two distributions.", "The approximation and calculation of the Wasserstein distance is complex and difficult in high dimensions, since for a large sample size the calculation and minimization of the transport becomes exponentially complex, and distances can have various magnitudes in the different dimensions.", "In Deshpande et al. (2018) it was demonstrated that high-dimensional distributions can be approximated by using a high number of one-dimensional projections.", "For a selected projection, the minimal transport between the one-dimensional samples can be calculated by sorting both the real and the fake samples and assigning them to each other according to their sorted indices.", "As an additional advantage, it was also demonstrated in Deshpande et al. (2018) that instead of the regular mini-max game of adversarial training, the distribution of the real samples could be approximated directly by the generator alone, omitting the discriminator and turning training into a simpler and more stable minimization problem.", "The theory of this novel method is well described and it was demonstrated to work in practice, but unfortunately for complex, high-dimensional distributions a large number of projections is needed.", "In Deshpande et al.
(2019) it was demonstrated how the high number of random projections could be substituted by a single, continuously optimized plane.", "The parameters of this projection are optimized in an adversarial manner by selecting the \"worst\" projection, which maximizes separation between the real and fake samples using a surrogate function.", "This modification brought regular adversarial training back and created a mini-max game again, where the generator creates samples which resemble the original distribution according to the selected plane, and the discriminator tries to find a projection which separates the real and fake samples from each other.", "The essence of Sliced Wasserstein distances is that they provide a method to calculate the minimal transportation between the projected samples in one dimension with ease, which approximates the Wasserstein distance in the original high-dimensional space.", "In theory this approach is sound and works well in practice.", "It was proven in Kolouri et al.
(2019) that the sliced Wasserstein distance satisfies the properties of non-negativity, identity of indiscernibles, symmetry, and the triangle inequality, thus forming a true metric.", "Although it approximates high-dimensional distributions well, we would like to demonstrate in this paper that assigning real and fake samples by sorting them in one dimension also has its flaws, and that a greedy assignment approach can perform better on commonly applied datasets.", "We would also like to raise a point regarding the application of the Wasserstein distance.", "We will demonstrate that in many cases various assignments can result in the same minimal transportation during training, and that calculating the Wasserstein distance with sorting can alter the distribution of perfectly modeled samples even when only a single sample differs from the approximated distribution.", "In this paper we have introduced greedy sample assignment for Max-Sliced Wasserstein GANs.", "We have shown that, using one-dimensional samples, in many cases multiple assignments can result in optimal transportation, and that in most cases sorting changes all the samples, even though those parts of the distribution which are at a \"good\" position should not generate error.", "We proposed greedy assignment as a possible solution, where samples are assigned to their most similar counterparts.", "We have also introduced how the combination of the two methods can be applied, resulting in a hybrid approach in which the assignment to be used can be selected automatically, based on the difference of the two measures.", "We have demonstrated on simple toy datasets that greedy assignment performs better than sorting the samples, and we have evaluated both the greedy and the hybrid methods on commonly investigated datasets (MNIST and CelebA).", "With all datasets, the greedy assignment resulted in lower Kullback-Leibler divergence and higher correlation than the traditional approach.", "We have used the Max-Sliced Wasserstein distance as the
basis of our comparison, since this is the most recent version, which also yields the best performance, but all the approaches can be exploited with regular Sliced Wasserstein distances as well.", "Our approach changes only the distance calculation, so it can be applied together with various other improved techniques and architectures used in GAN training." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.11999999731779099, 0.2666666507720947, 0.3589743673801422, 0.3333333432674408, 0.2641509473323822, 0.11764705181121826, 0.04999999701976776, 0.25, 0.10169491171836853, 0.23999999463558197, 0.21276594698429108, 0.10256409645080566, 0.22727271914482117, 0.1666666567325592, 0.08888888359069824, 0.14999999105930328, 0.1860465109348297, 0.15094339847564697, 0.35555556416511536, 0, 0.2222222238779068, 0.31578946113586426, 0.38461539149284363, 0.29629629850387573, 0.20689654350280762, 0.18867924809455872, 0.3529411852359772, 0.260869562625885, 0.3333333432674408, 0.1875, 0.19230768084526062, 0.0952380895614624 ]
BJgRsyBtPB
true
[ "We apply a greedy assignment on the projected samples instead of sorting to approximate Wasserstein distance" ]
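The sorting-versus-greedy contrast discussed in the record above can be made concrete for one-dimensional projections. Below, `sorted_transport` is the standard sliced-Wasserstein matching (sorting gives the minimal 1-D transport), while `greedy_transport` is a generic nearest-unassigned-neighbor sketch of the greedy idea; the paper's exact assignment rule and the toy numbers here are assumptions. Note that greedy cost can exceed the sorted (minimal) cost; the record's claim is about training behavior, not about achieving a lower transport cost.

```python
import numpy as np

def sorted_transport(real, fake):
    """Minimal 1-D transport cost: match sorted projections, as in
    (Max-)Sliced Wasserstein training."""
    return float(np.abs(np.sort(real) - np.sort(fake)).mean())

def greedy_transport(real, fake):
    """Greedy alternative: pair each fake sample with its closest
    still-unassigned real sample, in the given order."""
    pool = list(real)
    total = 0.0
    for f in fake:
        i = int(np.argmin([abs(f - r) for r in pool]))
        total += abs(f - pool.pop(i))
    return total / len(fake)

real = np.array([0.0, 1.0])
fake = np.array([0.6, 0.7])
print(sorted_transport(real, fake), greedy_transport(real, fake))  # 0.45 vs 0.55
```

In this example the two rules produce different pairings: sorting matches (0.0, 0.6) and (1.0, 0.7), while greedy first snaps 0.6 to its nearest real sample 1.0, which illustrates how the choice of assignment changes which samples receive gradient.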
[ "Lifelong machine learning focuses on adapting to novel tasks without forgetting the old tasks, whereas few-shot learning strives to learn a single task given a small amount of data.", "These two research areas are both crucial for artificial general intelligence; however, their existing studies have assumed somewhat impractical settings when training the models.", "For lifelong learning, the nature (or the quantity) of incoming tasks during inference time is assumed to be known at training time.", "As for few-shot learning, it is commonly assumed that a large number of tasks is available during training.", "Humans, on the other hand, can perform these learning tasks without regard to the aforementioned assumptions.", "Inspired by how the human brain works, we propose a novel model, called Slow Thinking to Learn (STL), that makes sophisticated (and slightly slower) predictions by iteratively considering interactions between current and previously seen tasks at runtime.", "Our experiments empirically demonstrate the effectiveness of STL for more realistic lifelong and few-shot learning settings.", "Deep Learning has been successful in various applications.", "However, it still has a lot of areas to improve on to reach humans' lifelong learning ability.", "As one of its drawbacks, neural networks (NNs) need to be trained on large datasets before giving satisfactory performance.", "Additionally, they usually suffer from the problem of catastrophic forgetting (McCloskey & Cohen (1989); French (1999) ), whereby a neural network performs poorly on old tasks after learning a novel task.", "In contrast, humans are able to incorporate new knowledge even from few examples, and continually throughout much of their lifetime.", "To bridge this gap between machine and human abilities, effort has been made to study few-shot learning (Fei-Fei et al. (2006) ; Lake et al. (2011); Santoro et al. (2016) ; Vinyals et al. (2016) ; Snell et al.
(2017) ; Ravi & Larochelle (2017b) ; Finn et al. (2017) ; ; Garcia & Bruna (2018) ; Qi et al. (2018) ), lifelong learning (Gepperth & Karaoguz (2016) ; Rusu et al. (2016) ; Kirkpatrick et al. (2017) ; Yoon et al. (2018) ; ; ; Serrà et al. (2018) ; Schwarz et al. (2018) ; Sprechmann et al. (2018) ; Riemer et al. (2019) ), and both (Kaiser et al. (2017) ).", "The learning tasks performed by humans are, however, more complicated than the settings used by existing lifelong and few-shot learning works.", "Task uncertainty: currently, lifelong learning models are usually trained with hyperparameters (e.g., number of model weights) optimized for a sequence of tasks arriving at test time.", "The knowledge about future tasks (even their quantity) may be too strong an assumption in many real-world applications, yet without this knowledge, it is hard to decide the appropriate model architecture and capacity when training the models.", "Sequential few-shot tasks: existing few-shot learning models are usually (meta-)trained using a large collection of tasks.", "1 Unfortunately, this collection is not available in the lifelong learning scenarios where tasks come in sequentially.", "Without seeing many tasks at training time, it is hard for an existing few-shot model to learn the shared knowledge behind the tasks and use the knowledge to speed up the learning of a novel task at test time.", "Humans, on the other hand, are capable of learning well despite having only limited information and/or even when not purposely preparing for a particular set of future tasks.", "Comparing how humans learn and think to how the current machine learning models are trained to learn and make predictions, we observe that the key difference lies in the thinking part, which is the decision-making counterpart of models when making predictions.", "While most NN-based supervised learning models use a single forward pass to predict, humans make careful and less error-prone decisions in a more
sophisticated manner.", "Studies in biology, psychology, and economics (Parisi et al. (2019) ; Kahneman & Egan (2011) ) have shown that, while humans make fast predictions (like machines) when dealing with daily familiar tasks, they tend to rely on a slow-thinking system that deliberately and iteratively considers interactions between current and previously learned knowledge in order to make correct decisions when facing unfamiliar or uncertain tasks.", "We hypothesize that this slow, effortful, and less error-prone decision-making process can help bridge the gap of learning abilities between humans and machines.", "We propose a novel brain-inspired model, called the Slow Thinking to Learn (STL), for taskuncertain lifelong and sequential few-shot machine learning tasks.", "STL has two specialized but dependent modules, the cross-task Slow Predictor (SP) and per-task Fast Learners (FLs), that output lifelong and few-shot predictions, respectively.", "We show that, by making the prediction process of SP more sophisticated (and slightly slower) at runtime, the learning process of all modules can be made easy at training time, eliminating the need to fulfill the aforementioned impractical settings.", "Note that the techniques for slow predictions (Finn et al. (2017) ; Ravi & Larochelle (2017b) ; Nichol & Schulman (2018) ; Sprechmann et al. (2018) ) and fast learning (McClelland et al. (1995) ; Kumaran et al. (2016) ; Kaiser et al. 
(2017) ) have already been proposed in the literature.", "Our contributions are that we", "1) explicitly model and study the interactions between these two techniques, and", "2) demonstrate, for the first time, how such interactions can greatly improve machine capability to solve the joint lifelong and few-shot learning problems encountered by humans every day.", "2 Slow Thinking to Learn (STL)", "Figure 1: The Slow Thinking to Learn (STL) model.", "To model the interactions between the shared SP f and per-task FLs {(g (t) , M (t) )} t , we feed the output of FLs into the SP while simultaneously letting the FLs learn from the feedback given by SP.", "We focus on a practical lifelong and few-shot learning set-up:", "given tasks T (1) , T (2) , · · · arriving in sequence and the labeled examples", "also coming in sequence, the goal is to design a model such that it can be properly trained by data", "collected up to any given time point s, and then make correct predictions for unlabeled data X (t) = {x (t,i) } i in any of the seen tasks, t ≤ s.", "Note that, at training time s, the future tasks are unknown. To solve Problem 1, we propose the Slow Thinking to Learn (STL) model, whose architecture is shown in Figure 1 .", "The STL is a cascade where the shared Slow Predictor (SP) network f parameterized by θ takes the output of multiple task-specific Fast Learners (FLs) {(g (t) , M (t) )} t , t ≤ s, as input.", "An FL for task T (t) consists of an embedding network g (t) parameterized by φ (t) and augmented with an external, episodic, non-parametric memory", "Here, we use the Memory Module (Kaiser et al.
(2017) ) as the external memory which saves the clusters of seen examples {(x (t,i) , y (t,i) )} i to achieve better storage efficiency-the h (t,j) of an entry (h (t,j) , v (t,j) ) denotes the embedding of a cluster of x (t,i) 's with the same label while the v (t,j) denotes the shared label.", "We use the FL (g (t) , M (t) ) and SP f to make few-shot and lifelong predictions for task T (t) , respectively.", "We let the number of FLs grow with the number of seen tasks in order to ensure that the entire STL model will have enough complexity to learn from possibly endless tasks in a lifelong manner.", "This does not imply that the SP will consume unbounded memory space to make predictions at runtime, as the FL for a specific task can be stored on a hard disk and loaded into the main memory only when necessary.", "Slow Predictions.", "The FL predicts the label of a test instance x using a single feedforward pass just like most existing machine learning models.", "As shown in Figure 2 (a), the FL for task T (t) first embeds the instance to get h = g (t) (x ) and then predicts the label ŷ FL of x by averaging the cluster labels", "where KNN(h ) is the set of K nearest neighboring embeddings of h .", "We have", "where ⟨h (t,j) , h ⟩ denotes the cosine similarity between h (t,j) and h .", "On the other hand, the SP predicts the label of x with a slower, iterative process, which is shown in Figure 2 (b).", "The SP first adapts (i.e., fine-tunes) its weights θ to KNN(h ) and their corresponding values stored in M (t) to get θ by solving", "where loss(·) denotes a loss function.", "Then, the SP makes a prediction by ŷ SP = f (h ; θ ).", "The adapted network f θ is discarded after making the prediction.", "The slower decision-making process of SP may seem unnecessary and wasteful of computing resources at first glance.", "Next, we explain why it is actually a good bargain.", "Life-Long Learning with Task Uncertainty.", "Since the SP makes predictions after runtime adaptation, we define the training
objective of θ for task T (s) such that it minimizes the losses after being adapted for each seen task", "The term loss(f (h; θ * ), v) denotes the empirical slow-prediction loss of the adapted SP on an example (x, y) in M (t) , where θ * denotes the weights of the adapted SP for x following Eq. (1):", "Solving Eq. (2) requires recursively solving θ * for each (x, y) remembered by the FLs.", "We use an efficient gradient-based approach proposed by Finn et al. (2017) to solve Eq. (2).", "Please refer to Section 2.1 of the Appendix for more details.", "Since the SP learns from the output of FLs, the θ * in Eq. (2) approximates a hypothesis used by an FL to predict the label of x.", "The θ, after being trained, will be close to every θ * and can be fine-tuned to become a hypothesis, meaning that θ encodes the invariant principles 3 underlying the hypotheses for different tasks.", "(a)", "(b)", "(c) Figure 3 : The relative positions between the invariant representations θ and the approximate hypotheses θ (t) 's of FLs for different tasks T (t) 's on the loss surface defined by FLs after seeing the", "(a) first,", "(b) second, and", "(c) third task.", "Since ‖θ − θ (t) ‖ ≤ R for any t in Eq. (2), the effective capacity of SP (at runtime) is the union of the capacity of all possible points within the dashed R-circle centered at θ.", "Furthermore, after being sequentially trained by two tasks using Eq. (3), the θ will easily get stuck in the middle of θ", "(1) and θ (2) .", "To solve the third task, the third FL needs to change its embedding function (and therefore the loss surface) such that θ (3) falls into the R-circle centered at θ.", "Recall that in Problem 1, the nature of tasks arriving after a training process is unknown; thus, it is hard to decide the right model capacity at training time.", "A solution to this problem is to use an expandable network (Rusu et al. (2016) ; Yoon et al.
(2018) ) and expand the network when training it for a new task, but the number of units to add during each expansion remains unclear.", "Our STL works around this problem by not letting the SP learn the tasks directly but making it learn the invariant principles behind the tasks.", "Assuming that the underlying principles of the learned hypotheses for different tasks are universal and relatively simple, 4 one only needs to choose a model architecture with enough capacity to learn the shared principles in a lifelong manner.", "Note that limiting the capacity of SP at training time does not imply underfitting.", "As shown in Figure 3 , the post-adaptation capacity of SP at runtime can be much larger than the capacity decided during training.", "Sequential Few-Shot Learning.", "Although each FL is augmented with an external memory that has been shown to improve learning efficiency by the theory of complementary learning systems (McClelland et al. (1995) ; Kumaran et al. (2016) ), it is not sufficient for FLs to perform few-shot predictions.", "Normally, these models need to be trained on many existing few-shot tasks in order to obtain good performance at test time.", "Without assuming s in Problem 1 to be a large number, the STL takes a different approach that quickly stabilizes θ and then lets the FL for a new incoming task learn a good hypothesis by extrapolating from θ.", "We define the training objective of g (s) , which is parameterized by φ (s) and augmented with memory M (s) , for the current task T (s) as follows:", "where", ") is the empirical loss term whose specific form depends on the type of external memory used (see Section 2.2 of the Appendix for more details), and", ") is a regularization term, which we call the feedback term, whose inverse value denotes the usefulness of the FL in helping SP (f parameterized by θ) adapt.", "Specifically, it is written as", "The feedback term encourages each FL to learn unique and salient features for the respective
task so the SP will not be confused by two tasks having similar embeddings.", "As shown in Figure 3 (b), the relative position of θ gets \"stuck\" easily after seeing a few previous tasks.", "To solve the current task, g (s) needs to change the loss surface for θ such that θ (s) falls into the R-circle centered at θ (Figure 3(c) ).", "This makes θ an efficient guide (through the feedback term) to finding g (s) when there are only a few examples and also few previous tasks.", "We use an alternating training procedure to train the SP and FLs.", "Please see Section 2.3 of the Appendix for more details.", "Note that when sequentially training STL for task T (s) in a lifelong manner, we can safely discard the data", "in the previous tasks because the FLs are task-specific (see Eq. (3)) and the SP does not require raw examples to train (see Eq. (2)).", "Inspired by the thinking process that humans undergo when making decisions, we propose STL, a cascade of per-task FLs and shared SP.", "To the best of our knowledge, this is the first work that studies the interactions between the fast-learning and slow-prediction techniques and shows how such interactions can greatly improve machine capability to solve the joint lifelong and few-shot learning problems under challenging settings.", "For future work, we will focus on integrating the STL with different types of external memory and studying the performance of STL in real-world deployments." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19999998807907104, 0.12244897335767746, 0.13636362552642822, 0.04878048226237297, 0.20512820780277252, 0.19999998807907104, 0.2857142686843872, 0, 0.19999998807907104, 0.04651162400841713, 0.07547169178724289, 0.09090908616781235, 0.1818181723356247, 0.23255813121795654, 0.11764705181121826, 0.13333332538604736, 0.1538461446762085, 0.14999999105930328, 0.178571417927742, 0.07843136787414551, 0.24561403691768646, 0.1666666567325592, 0.09756097197532654, 0.21739129722118378, 0.30434781312942505, 0.1702127605676651, 0.14035087823867798, 0.10169491171836853, 0, 0.22857142984867096, 0.6399999856948853, 0.06666666269302368, 0.060606054961681366, 0.14814814925193787, 0.1764705777168274, 0.12121211737394333, 0.1818181723356247, 0.1071428507566452, 0.11538460850715637, 0.03448275476694107, 0.04255318641662598, 0.0555555522441864, 0.2222222238779068, 0.11764705181121826, 0.1666666567325592, 0.17777776718139648, 0.1071428507566452, 0.0555555522441864, 0.1764705777168274, 0.04444443807005882, 0.08163265138864517, 0, 0.0555555522441864, 0.05882352590560913, 0.04999999701976776, 0, 0, 0.07843136787414551, 0.072727270424366, 0.0555555522441864, 0.09756097197532654, 0.1111111044883728, 0.0833333283662796, 0.14814814925193787, 0.1111111044883728, 0.07407407462596893, 0, 0.037735845893621445, 0.045454539358615875, 0, 0.16326530277729034, 0.07999999821186066, 0.09677419066429138, 0.045454539358615875, 0.13793103396892548, 0.052631575614213943, 0.08888888359069824, 0, 0.158730149269104, 0.13636362552642822, 0.10344827175140381, 0.08163265138864517, 0.0833333283662796, 0.04081632196903229, 0, 0.11538460850715637, 0.045454539358615875, 0.1666666567325592, 0.1666666567325592, 0.1666666567325592, 0.05714285373687744, 0.0952380895614624, 0.13333332538604736, 0.08695651590824127, 0.6666666865348816, 0.08695651590824127 ]
HklbIerFDS
true
[ "This paper studies the interactions between the fast-learning and slow-prediction models and demonstrate how such interactions can improve machine capability to solve the joint lifelong and few-shot learning problems." ]
[ "Hypernetworks are meta neural networks that generate weights for a main neural network in an end-to-end differentiable manner.", "Despite extensive applications ranging from multi-task learning to Bayesian deep learning, the problem of optimizing hypernetworks has not been studied to date.", "We observe that classical weight initialization methods like Glorot & Bengio (2010) and He et al. (2015), when applied directly on a hypernet, fail to produce weights for the mainnet in the correct scale.", "We develop principled techniques for weight initialization in hypernets, and show that they lead to more stable mainnet weights, lower training loss, and faster convergence.", "Meta-learning describes a broad family of techniques in machine learning that deals with the problem of learning to learn.", "An emerging branch of meta-learning involves the use of hypernetworks, which are meta neural networks that generate the weights of a main neural network to solve a given task in an end-to-end differentiable manner.", "Hypernetworks were originally introduced by Ha et al. 
(2016) as a way to induce weight-sharing and achieve model compression by training the same meta network to learn the weights belonging to different layers in the main network.", "Since then, hypernetworks have found numerous applications including but not limited to: weight pruning (Liu et al., 2019) , neural architecture search (Brock et al., 2017; , Bayesian neural networks (Krueger et al., 2017; Ukai et al., 2018; Pawlowski et al., 2017; Henning et al., 2018; Deutsch et al., 2019) , multi-task learning (Pan et al., 2018; Shen et al., 2017; Klocek et al., 2019; Serrà et al., 2019; Meyerson & Miikkulainen, 2019) , continual learning (von Oswald et al., 2019) , generative models (Suarez, 2017; Ratzlaff & Fuxin, 2019) , ensemble learning (Kristiadi & Fischer, 2019) , hyperparameter optimization (Lorraine & Duvenaud, 2018) , and adversarial defense (Sun et al., 2017) .", "Despite the intensified study of applications of hypernetworks, the problem of optimizing them to this day remains significantly understudied.", "Given the lack of principled approaches to training hypernetworks, prior work in the area is mostly limited to ad-hoc approaches based on trial and error (cf. Section 3).", "For example, it is common to initialize the weights of a hypernetwork by sampling a \"small\" random number.", "Nonetheless, these ad-hoc methods do lead to successful hypernetwork training primarily due to the use of the Adam optimizer (Kingma & Ba, 2014) , which has the desirable property of being invariant to the scale of the gradients.", "However, even Adam will not work if the loss diverges (i.e. integer overflow) at initialization, which will happen in sufficiently big models.", "The normalization of badly scaled gradients also results in noisy training dynamics where the loss function suffers from bigger fluctuations during training compared to vanilla stochastic gradient descent (SGD).", "Wilson et al.
(2017) showed that while adaptive optimizers like Adam may exhibit lower training error, they fail to generalize as well to the test set as non-adaptive gradient methods.", "Moreover, Adam incurs a computational overhead and requires 3X the amount of memory for the gradients compared to vanilla SGD.", "Small random number sampling is reminiscent of early neural network research (Rumelhart et al., 1986) before the advent of classical weight initialization methods like Xavier init (Glorot & Bengio, 2010) and Kaiming init (He et al., 2015) .", "Since then, a big lesson learned by the neural network optimization community is that architecture-specific initialization schemes are important to the robust training of deep networks, as shown recently in the case of residual networks (Zhang et al., 2019) .", "In fact, weight initialization for hypernetworks was recognized as an outstanding open problem by prior work (Deutsch et al., 2019) that had questioned the suitability of classical initialization methods for hypernetworks.", "Our results: We show that when classical methods are used to initialize the weights of hypernetworks, they fail to produce mainnet weights in the correct scale, leading to exploding activations and losses.", "This is because classical network weights transform one layer's activations into another, while hypernet weights have the added function of transforming the hypernet's activations into the mainnet's weights.", "Our solution is to develop principled techniques for weight initialization in hypernetworks based on variance analysis.", "The hypernet case poses unique challenges.", "For example, in contrast to variance analysis for classical networks, the case for hypernetworks can be asymmetrical between the forward and backward pass.", "The asymmetry arises when the gradient flow from the mainnet into the hypernet is affected by the biases, whereas in general, this does not occur for gradient flow in the mainnet.", "This underscores again why
architecture-specific initialization schemes are essential.", "We show both theoretically and experimentally that our methods produce hypernet weights in the correct scale.", "Proper initialization mitigates exploding activations and gradients and removes the need to depend on Adam.", "Our experiments reveal that it leads to more stable mainnet weights, lower training loss, and faster convergence.", "Section 2 briefly covers the relevant technical preliminaries and Section 3 reviews problems with the ad-hoc methods currently deployed by hypernetwork practitioners.", "We derive novel weight initialization formulae for hypernetworks in Section 4, empirically evaluate our proposed methods in Section 5, and finally conclude in Section 6.", "In all our experiments, hyperfan-in and hyperfan-out both lead to successful hypernetwork training with SGD.", "We did not find a good reason to prefer one over the other (similar to He et al. (2015) 's observation in the classical case for fan-in and fan-out init).", "For a long time, the promise of deep nets to learn rich representations of the world was left unfulfilled due to the inability to train these models.", "The discovery of greedy layer-wise pre-training (Hinton et al., 2006; Bengio et al., 2007) and later, Xavier and Kaiming init, as weight initialization strategies to enable such training was a pivotal achievement that kickstarted the deep learning revolution.", "This underscores the importance of model initialization as a fundamental step in learning complex representations.", "In this work, we developed the first principled weight initialization methods for hypernetworks, a rapidly growing branch of meta-learning.", "We hope our work will spur momentum towards the development of principled techniques for building and training hypernetworks, and eventually lead to significant progress in learning meta representations.", "Other non-hypernetwork methods of neural network generation (Stanley et al., 2009; Koutnik et al., 2010)
can also be improved by considering whether their generated weights result in exploding activations and, if so, how to avoid that." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.07999999821186066, 0.06896550953388214, 0.1463414579629898, 0.25, 0, 0, 0, 0.056338027119636536, 0, 0.05882352590560913, 0, 0, 0, 0.0555555522441864, 0, 0.07407406717538834, 0.0952380895614624, 0.043478257954120636, 0.21621620655059814, 0, 0, 0.4166666567325592, 0.1428571343421936, 0.13793103396892548, 0.12903225421905518, 0.1111111044883728, 0, 0.09090908616781235, 0, 0, 0.27586206793785095, 0, 0.0555555522441864, 0, 0.13636364042758942, 0.08695651590824127, 0.37037035822868347, 0.11428570747375488, 0 ]
H1lma24tPB
true
[ "The first principled weight initialization method for hypernetworks" ]
[ "For bidirectional joint image-text modeling, we develop variational hetero-encoder (VHE) randomized generative adversarial network (GAN), a versatile deep generative model that integrates a probabilistic text decoder, probabilistic image encoder, and GAN into a coherent end-to-end multi-modality learning framework.", "VHE randomized GAN (VHE-GAN) encodes an image to decode its associated text, and feeds the variational posterior as the source of randomness into the GAN image generator.", "We plug three off-the-shelf modules, including a deep topic model, a ladder-structured image encoder, and StackGAN++, into VHE-GAN, which already achieves competitive performance.", "This further motivates the development of VHE-raster-scan-GAN that generates photo-realistic images in not only a multi-scale low-to-high-resolution manner, but also a hierarchical-semantic coarse-to-fine fashion.", "By capturing and relating hierarchical semantic and visual concepts with end-to-end training, VHE-raster-scan-GAN achieves state-of-the-art performance in a wide variety of image-text multi-modality learning and generation tasks." ]
[ 0, 0, 0, 0, 1 ]
[ 0.2711864411830902, 0.1249999925494194, 0.1702127605676651, 0.1249999925494194, 0.4000000059604645 ]
H1x5wRVtvS
false
[ "A novel Bayesian deep learning framework that captures and relates hierarchical semantic and visual concepts, performing well on a variety of image and text modeling and generation tasks." ]
[ "Current classical planners are very successful in finding (non-optimal) plans, even for large planning instances.", "To do so, most planners rely on a preprocessing stage that computes a grounded representation of the task.", "Whenever the grounded task is too big to be generated (i.e., whenever this preprocess fails) the instance cannot even be tackled by the actual planner.", "To address this issue, we introduce a partial grounding approach that grounds only a projection of the task, when complete grounding is not feasible.", "We propose a guiding mechanism that, for a given domain, identifies the parts of a task that are relevant to find a plan by using off-the-shelf machine learning methods.", "Our empirical evaluation attests that the approach is capable of solving planning instances that are too big to be fully grounded.", "Given a model of the environment, classical planning attempts to find a sequence of actions that lead from an initial state to a state that satisfies a set of goals.", "Planning models are typically described in the Planning Domain Definition Language (PDDL) BID16 ) in terms of predicates and action schemas with arguments that can be instantiated with a set of objects.", "However, most planners work on a grounded representation without free variables, like STRIPS BID4 or FDR BID1 .", "Grounding is the process of translating a task in the lifted (PDDL) representation to a grounded representation.", "It requires to compute all valid instantiations that assign objects to the arguments of predicates and action parameters, even though only a small fraction of these instantiations might be necessary to solve the task.The size of the grounded task is exponential in the number of arguments in predicates and action schemas.", "Although this is constant for all tasks of a given domain, and grounding can be done in polynomial time, it may still be prohibitive when the number of objects is large and/or some predicates or actions have many 
parameters. The success of planners like FF BID9 or LAMA BID24 in finding plans for large planning tasks is undeniable.", "However, since most planners rely on grounding for solving a task, they fail without even starting the search for a plan whenever an instance cannot be grounded, making grounding a bottleneck for the success of satisficing planners. Grounding is particularly challenging in open multi-task environments, where the planning task is automatically generated with all available objects even if only a few of them are relevant to achieve the goals.", "For example, in robotics, the planning task may contain all objects with which the robot may interact even if they are not needed BID13 ).", "In network-security environments, like the one modeled in the Caldera domain BID17 , the planning task may contain all details about the network.", "However, to the best of our knowledge, no method exists that attempts to focus the grounding on relevant parts of the task. We propose partial grounding, where, instead of instantiating the full planning task, we focus on the parts that are required to find a plan.", "The approach is sound -if a plan is found for the partially grounded task then it is a valid plan for the original task -but incomplete -the partially grounded task will only be solvable if the operators in at least one plan have been grounded.", "To do so, we give priority to operators that we deem more relevant to achieve the goal.", "Inspired by relational learning approaches to domain control knowledge (e.g., BID31 , BID3 , BID11 ), we use machine learning methods to predict the probability that a given operator belongs to a plan.", "We learn from small training instances, and generalize to larger ones by using relational features in standard classification and regression algorithms (e.g., BID12 ).", "As an alternative model, we also experiment with relational trees to learn the probabilities BID18 .", "Empirical results show that our learning models can
predict which operators are relevant with high accuracy in several domains, leading to a very strong reduction of task size when grounding and solving huge tasks.", "In this paper, we proposed an approach to partial grounding of planning tasks, to deal with tasks that cannot be fully grounded under the available time and memory resources.", "Our algorithm heuristically guides the grounding process giving preference to operators that are deemed most relevant for solving the task.", "To determine which operators are relevant, we train different machine learning models using optimal plans from small instances of the same domain.", "We consider two approaches, a direct application of relational decision trees, and using relational features with standard classification and regression algorithms.", "The empirical results show the effectiveness of the approach.", "In most domains, the learned models are able to identify which operators are relevant with high accuracy, helping to reduce the number of grounded operators by several orders of magnitude, and greatly increasing coverage in large instances.", "Figure 3 : The scatter plots show the number of operators of a fully grounded task on the x-axis.", "The y-axis shows the number of operators that are needed to make the goal reachable in the grounding (leftmost two columns), and the number of operators that are needed to solve the task (rightmost two columns), for several priority functions." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.23076923191547394, 0.20338982343673706, 0.28070175647735596, 0.19672130048274994, 0.1818181723356247, 0.17543859779834747, 0.1269841194152832, 0.1538461446762085, 0.2857142686843872, 0.1944444328546524, 0.1927710771560669, 0.1538461446762085, 0.06896550953388214, 0.145454540848732, 0.260869562625885, 0.11940298229455948, 0.11999999731779099, 0.15625, 0.06666666269302368, 0.07843136787414551, 0.20588234066963196, 0.2539682388305664, 0.18518517911434174, 0.07017543166875839, 0.07407406717538834, 0.09302325546741486, 0.08955223113298416, 0.1538461446762085, 0.19354838132858276 ]
r1e44bPpP4
true
[ "This paper introduces partial grounding to tackle the problem that arises when the full grounding process, i.e., the translation of a PDDL input task into a ground representation like STRIPS, is infeasible due to memory or time constraints." ]