Common-sense physical reasoning is an essential ingredient for any intelligent agent operating in the real world. For example, it can be used to simulate the environment, or to infer the state of parts of the world that are currently unobserved. In order to match real-world conditions, this causal knowledge must be learned without access to supervised data. To address this problem we present a novel method that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion. It incorporates prior knowledge about the compositional nature of human perception to factor interactions between object pairs and learn efficiently. On videos of bouncing balls we show the superior modelling capabilities of our method compared to other unsupervised neural approaches that do not incorporate such prior knowledge. We demonstrate its ability to handle occlusion and show that it can extrapolate learned knowledge to scenes with different numbers of objects.

Humans rely on common-sense physical reasoning to solve many everyday physics-related tasks BID32. For example, it enables them to foresee the consequences of their actions (simulation), or to infer the state of parts of the world that are currently unobserved. This causal understanding is an essential ingredient for any intelligent agent that is to operate within the world. Common-sense physical reasoning is facilitated by the discovery and representation of objects (a core domain of human cognition BID45) that serve as primitives of a compositional system. They allow humans to decompose a complex visual scene into distinct parts, describe relations between them, and reason about their dynamics as well as the consequences of their interactions BID4 BID32 BID48. The most successful machine learning approaches to common-sense physical reasoning incorporate such prior knowledge in their design. They maintain explicit object representations, which allow general physical dynamics to be learned between object pairs in a compositional manner BID3 BID8 BID49. However, in these approaches learning is supervised, as it relies on object representations from external sources (e.g. a physics simulator) that are typically unavailable in real-world scenarios. Neural approaches that learn to directly model motion or physical interactions in pixel space offer an alternative solution BID46 BID47. However, while unsupervised, these methods suffer from a lack of compositionality at the representational level of objects. This prevents such end-to-end neural approaches from efficiently learning functions that operate on multiple entities and generalize in a human-like way (cf. BID4; BID32; BID41, but see BID39). In this work we propose Relational N-EM (R-NEM), a novel approach to common-sense physical reasoning that learns physical interactions between objects from raw visual images in a purely unsupervised fashion. At its core is Neural Expectation Maximization (N-EM), a method that allows for the discovery of compositional object representations, yet is unable to model interactions between objects. Therefore, we endow N-EM with a relational mechanism inspired by previous work BID3 BID8 BID41, enabling it to factor interactions between object pairs, learn efficiently, and generalize to visual scenes with a varying number of objects without re-training. Our goal is to learn common-sense physical reasoning in a purely unsupervised fashion directly from visual observations.
We have argued that in order to solve this problem we need to exploit the compositional structure of a visual scene. Conventional unsupervised representation learning approaches (e.g. VAEs BID31; GANs BID17) learn a single distributed representation that superimposes information about the input, without imposing any structure regarding objects or other low-level primitives. These monolithic representations cannot factorize physical interactions between pairs of objects and therefore lack an essential inductive bias to learn these efficiently. Hence, we require an alternative approach that can discover object representations as primitives of a visual scene in an unsupervised fashion. One such approach is Neural Expectation Maximization (N-EM), which learns a separate distributed representation for each object, described in terms of the same features, through an iterative process of perceptual grouping and representation learning. The compositional nature of these representations enables us to formulate Relational N-EM (R-NEM): a novel unsupervised approach to common-sense physical reasoning that combines N-EM (Section 2.1) with an interaction function that models relations between objects efficiently (Section 2.2).

Neural Expectation Maximization (N-EM) is a differentiable clustering method that learns a representation of a visual scene composed of primitive object representations. These representations adhere to many useful properties of a symbolic representation of objects, and can therefore be used as primitives of a compositional system BID26. They are described in the same format and each contain only information about the object in the visual scene that they correspond to. Together, they form a representation of a visual scene composed of objects that is learned in an unsupervised way, which therefore serves as a starting point for our approach. The goal of N-EM is to group pixels in the input that belong to the same object (perceptual grouping) and capture this information efficiently in a distributed representation θ_k for each object. At a high level, the idea is that if we were to have access to the family of distributions P(x|θ_k) (a statistical model of images given object representations θ_k) then we could formalize our objective as inference in a mixture of these distributions. By using Expectation Maximization (EM; BID9) to compute a Maximum Likelihood Estimate (MLE) of the parameters of this mixture (θ_1, ..., θ_K), we obtain a grouping (clustering) of the pixels to each object (component) and their corresponding representation. In reality we do not have access to P(x|θ_k), which N-EM learns instead by parameterizing the mixture with a neural network and back-propagating through the iterations of the unrolled generalized EM procedure. Following previous work, we model each image x ∈ R^D as a spatial mixture of K components parameterized by vectors θ_1, ..., θ_K ∈ R^M. A neural network f_φ is used to transform these representations θ_k into parameters ψ_{i,k} = f_φ(θ_k)_i for separate pixel-wise distributions. A set of binary latent variables z ∈ {0, 1}^{D×K} encodes the unknown true pixel assignments, such that z_{i,k} = 1 iff pixel i was generated by component k.

Figure 1: Illustration of the different computational aspects of R-NEM when applied to a sequence of images of bouncing balls. Note that γ, ψ at the Representations level correspond to the γ (E-step) and ψ (Group Reconstructions) from the previous time-step.
Different colors correspond to different cluster components (object representations). The right side shows a computational overview of Υ^R-NEM, a function that computes the pairwise interactions between the object representations.

The full likelihood for x given θ = (θ_1, ..., θ_K) is given by P(x|θ) = ∏_{i=1}^{D} Σ_{k=1}^{K} P(z_{i,k} = 1) P(x_i | ψ_{i,k}, z_{i,k} = 1). If f_φ has learned a statistical model of images given object representations θ_k, then we can compute the object representations for a given image x by maximizing P(x|θ). Marginalization over z complicates this process, thus we use generalized EM to maximize the following lower bound instead: Q(θ, θ^old) = Σ_i Σ_k γ_{i,k} log P(x_i, z_{i,k} = 1 | ψ_{i,k}). Each iteration of generalized EM consists of two steps: the E-step computes a new estimate of the posterior probability distribution over the latent variables, γ_{i,k} := P(z_{i,k} = 1 | x_i, ψ_i^old), given θ^old from the previous iteration. It yields a new soft-assignment of the pixels to the components (clusters), based on how accurately they model x. The generalized M-step updates θ^old by taking a gradient ascent step on this lower bound, using the previously computed soft-assignments: θ^new = θ^old + η ∂Q/∂θ. The unrolled computational graph of the generalized EM steps is differentiable, which provides a means to train f_φ to implement a statistical model of images given object representations. Using back-propagation through time, we train f_φ to minimize the following loss: L(x) = −Σ_i Σ_k [ γ_{i,k} log P(x_i, z_{i,k} = 1 | ψ_{i,k}) − (1 − γ_{i,k}) D_KL(P(x_i) ∥ P(x_i | ψ_{i,k}, z_{i,k} = 1)) ]. The intra-cluster term is identical to the lower bound above, and credits each component for accurately representing pixels that have been assigned to it. The inter-cluster term ensures that each representation only captures the information about the pixels that have been assigned to it. A more powerful variant of N-EM (RNN-EM) can be obtained by substituting the generalized M-step with a recurrent neural network having hidden state θ_k. In this case, the entirety of f_φ consists of a recurrent encoder-decoder architecture that receives γ_k (x − ψ_k) as input at each step. The learning objective is prone to trivial solutions in case of overcapacity, which could prevent the network from modelling the statistical regularities in the data that correspond to objects. By adding noise to the input image or reducing the dimensionality of θ we can guide learning to avert this. Moreover, in the case of RNN-EM one can evaluate the loss at the following time-step (predictive coding) to encourage learning of object representations and their corresponding dynamics. One intuitive interpretation of using denoising or next-step prediction as part of the training objective is that it guides the network to learn essential properties of objects, in this case those that correspond to the Gestalt principles of prägnanz and common fate BID22. RNN-EM (unlike N-EM) is able to capture the dynamics of individual objects through a parametrized recurrent connection that operates on the object representation θ_k across consecutive time-steps. However, the relations and interactions that take place between objects cannot be captured in this way. In order to overcome this shortcoming we propose Relational N-EM (R-NEM), which adds relational structure to the recurrence to model interactions between objects without violating key properties of the learned object representations.
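Before turning to the relational extension, the E-step and M-step above can be made concrete with a minimal sketch. The linear sigmoid decoder standing in for the neural network f_φ, the Bernoulli pixel model, the uniform mixing prior, and the learning rate are illustrative assumptions rather than the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, M = 64 * 64, 5, 250                  # pixels, components, representation size

# Illustrative stand-in for the decoder f_phi: a linear map followed by a sigmoid.
W_dec = rng.normal(0, 0.01, size=(M, D))

def decode(theta):
    """Map representations theta (K, M) to pixel-wise Bernoulli means psi (K, D)."""
    return 1.0 / (1.0 + np.exp(-theta @ W_dec))

def em_iteration(x, theta, eta=0.1):
    """One generalized EM step: E-step soft-assignments gamma, M-step gradient ascent on Q."""
    psi = decode(theta)                                     # (K, D)
    # E-step: gamma_{i,k} proportional to P(z_{i,k}=1) * P(x_i | psi_{i,k}), uniform prior.
    log_lik = x * np.log(psi + 1e-8) + (1 - x) * np.log(1 - psi + 1e-8)
    gamma = np.exp(log_lik - log_lik.max(axis=0))
    gamma /= gamma.sum(axis=0, keepdims=True)               # normalize over components
    # M-step: gradient ascent on Q = sum_i sum_k gamma_{i,k} log P(x_i | psi_{i,k}).
    # With a sigmoid decoder, the gradient w.r.t. the decoder logits is gamma * (x - psi).
    theta = theta + eta * (gamma * (x - psi)) @ W_dec.T
    return theta, gamma, psi

x = rng.integers(0, 2, size=D).astype(float)                # toy binary image
theta = rng.normal(0, 0.1, size=(K, M))
for _ in range(15):
    theta, gamma, psi = em_iteration(x, theta)
```

In RNN-EM the gradient-ascent M-step is replaced by a learned recurrent update, but the E-step computation of γ stays exactly the same.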
Consider a generalized form of the standard RNN-EM dynamics equation, which computes the object representation θ_k at time t as a function of all object representations θ := [θ_1, ..., θ_K] at the previous time-step through an interaction function Υ: θ_k^(t) = σ(W x̃^(t) + R Υ_k(θ^(t−1))). Here W, R are weight matrices, σ is the sigmoid activation function, and x̃^(t) is the input to the recurrent model at time t (possibly transformed by an encoder). When Υ_k^RNN-EM(θ) := θ_k, this dynamics model coincides with a standard RNN update rule, thereby recovering the original RNN-EM formulation. The inductive bias incorporated in Υ reflects the modeling assumptions about the interactions between objects in the environment, and therefore the nature of the interdependence of the θ_k's. If Υ incorporates the assumption that no interaction takes place between objects, then the θ_k's are fully independent and we recover Υ^RNN-EM. On the other hand, if we do assume that interactions among objects take place, but assume very little about the structure of the interdependence between the θ_k's, then we forfeit useful properties of θ_k such as compositionality. For example, if Υ := MLP(θ) we can no longer extrapolate learned knowledge to environments with more or fewer than K objects, and we lose overall data efficiency BID41. Instead, we can make efficient use of compositionality among the learned object representations θ_k to incorporate general but guiding constraints on how these may influence one another BID3 BID8. In doing so we constrain Υ to capture the interdependence between the θ_k's in a compositional manner that enables physical dynamics to be learned efficiently, and allows learned dynamics to be extrapolated to a variable number of objects. We propose a parametrized interaction function Υ^R-NEM that incorporates these modeling assumptions and updates θ_k based on the pairwise effects of the objects i ≠ k on k: θ̂_i = MLP_enc(θ_i), ξ_{k,i} = MLP_emb([θ̂_k; θ̂_i]), e_{k,i} = MLP_eff(ξ_{k,i}), α_{k,i} = MLP_att(ξ_{k,i}), E_k = Σ_{i≠k} α_{k,i} · e_{k,i}, where [·; ·] is the concatenation operator and MLP(·) corresponds to a multi-layer perceptron. First, each θ_i is transformed using MLP_enc to obtain θ̂_i, which enables information that is relevant for the object dynamics to be made more explicit in the representation. Next, each pair (θ̂_k, θ̂_i) is concatenated and processed by MLP_emb, which computes a shared embedding ξ_{k,i} that encodes the interaction between object k and object i. Notice that we opt for a clear separation between the focus object k and the context object i, as in previous work BID8. From ξ_{k,i} we compute e_{k,i}, the effect of object i on object k, and an attention coefficient α_{k,i} that encodes whether interaction between object i and object k takes place. These attention coefficients BID2 BID53 help to select relevant context objects, and can be seen as a more flexible unsupervised replacement of the distance-based heuristic that was used in previous work BID8. Finally, we compute the total effect E_k of θ_{i≠k} on θ_k as a weighted sum of the effects multiplied by their attention coefficients. A visual overview of Υ^R-NEM can be seen on the right side of Figure 1.
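The pairwise structure just described translates almost directly into code. In the sketch below, the single-layer numpy stand-ins for MLP_enc, MLP_emb, MLP_eff and MLP_att, the layer sizes, and the choice to return the encoded focus representation concatenated with the aggregated effect are illustrative assumptions; the actual layer sizes used in the experiments are given in Section 4.

```python
import numpy as np

rng = np.random.default_rng(1)
M, H = 250, 250                       # representation size, hidden/effect size

def mlp(d_in, d_out):
    """Single-layer ReLU 'MLP' stand-in for the networks named in the text."""
    W, b = rng.normal(0, 0.1, (d_in, d_out)), np.zeros(d_out)
    return lambda v: np.maximum(0.0, v @ W + b)

enc = mlp(M, H)                       # MLP_enc
emb = mlp(2 * H, H)                   # MLP_emb
eff = mlp(H, H)                       # MLP_eff
att_w = rng.normal(0, 0.1, (H, 1))    # MLP_att stand-in (sigmoid output)

def upsilon_rnem(theta):
    """Pairwise interaction function: effects of all context objects i != k on each focus k."""
    K = theta.shape[0]
    theta_hat = np.stack([enc(theta[i]) for i in range(K)])            # (K, H)
    out = np.zeros((K, 2 * H))
    for k in range(K):
        total_effect = np.zeros(H)
        for i in range(K):
            if i == k:
                continue
            xi = emb(np.concatenate([theta_hat[k], theta_hat[i]]))      # shared embedding
            e = eff(xi)                                                 # effect of i on k
            alpha = 1.0 / (1.0 + np.exp(-(xi @ att_w)[0]))              # attention in (0, 1)
            total_effect += alpha * e
        # Assumption: concatenate the encoded focus object with the aggregated effect.
        out[k] = np.concatenate([theta_hat[k], total_effect])
    return out

theta = rng.normal(0, 0.1, (5, M))    # K = 5 object representations
print(upsilon_rnem(theta).shape)      # (5, 500)
```

Because the same parameters are applied to every ordered pair (k, i) and the effects are simply summed, this function can be evaluated for any number of components K, which is what allows R-NEM to be applied to scenes with more objects than it was trained on.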
Machine learning approaches to common-sense physical reasoning can roughly be divided into two groups: symbolic approaches and approaches that perform state-to-state prediction. The former group performs inference over the parameters of a symbolic physics engine BID4 BID48 BID52, which restricts them to synthetic environments. The latter group employs machine learning methods to make state-to-state predictions, often describing the state of a system as a set of compact object-descriptions that are either used as an input to the system BID3 BID8 BID14 BID21 or for training purposes BID49. By incorporating information (e.g. position, velocity) about objects these methods have achieved excellent generalization and simulation capabilities. Purely unsupervised approaches for state-to-state prediction BID0 BID33 BID35 BID47 that use raw visual inputs as state-descriptions have yet to rival these capabilities. Our method is a purely unsupervised state-to-state prediction method that operates in pixel space, taking a first step towards unsupervised learning of common-sense reasoning in real-world environments. The proposed interaction function Υ^R-NEM can be seen as a type of Message Passing Neural Network (MPNN; BID16) that incorporates a variant of neighborhood attention BID11. In light of other recent work BID54 it can be seen as a permutation-equivariant set function. R-NEM relies on N-EM to discover a compositional object representation from raw visual inputs. A closely related approach to N-EM is the TAG framework BID18, which utilizes a similar mechanism to perform inference over group representations, but in addition performs inference over the group assignments. In recent work TAG was combined with a recurrent ladder network BID27 to obtain a powerful model (RTagger) that can be applied to sequential data. However, the lack of a single compact representation that captures all information about a group (object) makes a compositional treatment of physical interactions more difficult. Other unsupervised approaches rely on attention to group together parts of the visual scene corresponding to objects BID13 BID20. These approaches suffer from a similar problem in that their sequential nature prevents a coherent object representation from taking shape. Other related work has also taken steps towards combining the learnability of neural networks with the compositionality of symbolic programs in modeling physics BID3 BID8, playing games BID10 BID29, learning algorithms BID6 BID7 BID34 BID40, visual understanding BID12 BID28, and natural language processing BID1 BID24.

In this section we evaluate R-NEM on three different physical reasoning tasks that each vary in their dynamical and visual complexity: bouncing balls with variable mass, bouncing balls with an invisible curtain, and the Arcade Learning Environment BID5. We compare R-NEM to other unsupervised neural methods that do not incorporate any inductive biases reflecting real-world dynamics and show that these are indeed beneficial. All experiments use ADAM BID30 with default parameters, on 50K train + 10K validation + 10K test sequences and early stopping with a patience of 10 epochs. For each of MLP_enc, MLP_emb, and MLP_eff we used a unique single-layer neural network with 250 rectified linear units. For MLP_att we used a two-layer neural network: 100 tanh units followed by a single sigmoid unit. A detailed overview of the experimental setup can be found in Appendix A.

Figure 2: R-NEM applied to a sequence of 4 bouncing balls. Each column corresponds to a time-step, which coincides with an EM step. At each time-step, R-NEM computes K = 5 new representations θ_k (see also Representations in Figure 1) from the input x with added noise (bottom row). From each new θ_k a group reconstruction ψ_k is produced (rows 2-6 from bottom) that predicts the state of the environment at the next time-step.
Attention coefficients are visualized by overlaying a colored reconstruction of a context object on the white reconstruction of the focus object (see Attention in Section 4). Based on the prediction accuracy of ψ, the E-step (see Figure 1) computes new soft-assignments γ (row 7 from bottom), visualized by coloring each pixel i according to its distribution over components γ_i. Row 8 visualizes the total prediction by the network (Σ_k ψ_k · γ_k) and row 9 the ground-truth sequence at the next time-step.

Figure 3: Performance of each method on the bouncing balls task. Each method was trained on a dataset with 4 balls, evaluated on a test set with 4 balls (left), and on a test set with 6-8 balls (middle). The losses are reported relative to the loss of a baseline for each dataset that always predicts the current frame. The ARI score (right) is used to evaluate the degree of compositionality that is achieved. The last ten time-steps of the sequences produced by R-NEM and RNN are simulated. Right: the BCE loss on the entire test set for these same time-steps.

Bouncing Balls. We study the physical reasoning capabilities of R-NEM on the bouncing balls task, a standard environment to evaluate physical reasoning capabilities that exhibits low visual complexity and complex non-linear physical dynamics. We train R-NEM on sequences of 64 × 64 binary images over 30 time-steps that contain four bouncing balls with different masses corresponding to their radii. The balls are initialized with random initial positions, masses and velocities. Balls bounce elastically against each other and the image window. Figure 1 presents a qualitative evaluation of R-NEM on the bouncing balls task. After 10 time-steps it can be observed that the pixels that belong to each of the balls are grouped together and assigned to a unique component (with a saturated color), and that the background (colored grey) has been divided among all components (resulting in a grey coloring). This indicates that the representation θ_k from which each component produces the group reconstruction ψ_k does indeed only contain information about a unique object, such that together the θ_k's yield a compositional object representation of the scene. The total reconstruction (that combines the group reconstructions and the soft-assignments) displays an accurate reconstruction of the input sequence at the next time-step, indicating that R-NEM has learned to model the dynamics of bouncing balls.

Comparison. We compare the modelling capabilities of R-NEM to an RNN, LSTM BID15 BID23 and RNN-EM in terms of the Binomial Cross-Entropy (BCE) loss between the predicted image and the ground-truth image of the last frame, as well as the relational BCE that only takes into account objects that currently take part in a collision. Unless specified otherwise we use K = 5. On a test set with sequences containing four balls we observe that R-NEM produces markedly lower losses when compared to all other methods (left plot in Figure 3). Moreover, in order to validate that each component captures only a single ball (and thus compositionality is achieved), we report the Adjusted Rand Index (ARI; BID25) score between the soft-assignments γ and the ground-truth assignment of pixels to objects. In the left column of the ARI plot (right side in Figure 3) we find that R-NEM achieves an ARI score of 0.8, meaning that in roughly 80% of the cases each ball is modeled by a single component. This suggests that a compositional object representation is achieved for most of the sequences.
Together these observations are in line with our qualitative evaluation and validate that incorporating real-world priors is greatly beneficial (comparing to RNN, LSTM) and that Υ^R-NEM enables interactions to be modelled more accurately than RNN-EM in terms of the relational BCE. Similar to previous work, we find that further increasing the number of components during training (leaving additional groups empty) increases the quality of the grouping; see R-NEM K = 8 in Figure 3. In addition we observe that the loss (in particular the relational BCE) is reduced further, which matches our hypothesis that compositional object representations are greatly beneficial for modelling physical interactions.

Extrapolating learned knowledge. We use a test set with sequences containing 6-8 balls to evaluate the ability of each method to extrapolate its learned knowledge about physical interactions between four balls to environments with more balls. We use K = 8 when evaluating R-NEM and RNN-EM on this test set in order to accommodate the increased number of objects. As can be seen from the middle plot in Figure 3, R-NEM again greatly outperforms all other methods. Notice that, since we report the loss relative to a baseline, we roughly factor out the increased complexity of the task. Perfect extrapolation of the learned knowledge would therefore amount to no change in relative performance. In contrast, we observe far worse performance for the LSTM (relative to the baseline) when evaluated on this dataset with extra balls. It suggests that the gating mechanism of the LSTM has allowed it to learn a sophisticated and overly specialized solution for sequences with four balls that does not generalize to a dataset with 6-8 balls. R-NEM and RNN-EM scale markedly better to this dataset than the LSTM. Although the RNN similarly suffers, to a lesser extent, from this type of "overfitting", this is most likely due to its inability to learn a reasonable solution on sequences of four balls to begin with. Hence, we conclude that the superior extrapolation capabilities of RNN-EM and R-NEM are inherent to their ability to factor a scene in terms of permutation-invariant object representations (see right side of the right plot in Figure 3).

Attention. Further insight into the role of the attention mechanism can be gained by visualizing the attention coefficients, as is done in Figure 2. For each component k we draw α_{k,i} · ψ_i on top of the reconstruction ψ_k, colored according to the color of component i. These correspond to the colored balls (for example seen in time-steps 13, 14), which indicate whether component k took information about component i into account when computing the new state (recall the interaction function Υ^R-NEM). It can be observed that the attention coefficient α_{k,i} becomes non-zero whenever collision takes place, such that a colored ball lights up in the following time-steps. The attention mechanism learned by R-NEM thus assumes the role of the distance-based heuristic in previous work BID8, matching our own intuitions of how this mechanism would best be utilized. A quantitative evaluation of the attention mechanism is obtained by comparing R-NEM to a variant of itself that does not incorporate attention (R-NEM no att). Figure 3 shows that both methods perform equally well on the regular test set (4 balls), but that R-NEM no att performs worse at extrapolating from its learned knowledge (6-8 balls). A likely reason for this behavior is that the range of the sum over context objects in Υ^R-NEM changes with K.
Thus, when extrapolating to an environment with more balls the total sum may exceed previous boundaries and impede the learned dynamics.

Simulation. Once a scene has been accurately modelled, R-NEM can approximately simulate its dynamics through recursive application of the dynamics model for each θ_k. (Note that in this case the input to the neural network encoder in component k corresponds to γ_k (x^(t) − ψ^(t−1)), such that the output of the encoder x̃^(t) ≈ 0 when ψ^(t−1) ≈ x^(t).) In FIG2 we compare the simulation capabilities of R-NEM to RNN-EM and an RNN on the bouncing balls environment. On the left it shows for R-NEM and an RNN a sequence with five normal steps followed by 10 simulation steps, as well as the ground-truth sequence. From the last frame in the sequence it can clearly be observed that R-NEM has managed to accurately simulate the environment. Each ball is approximately in the correct place, and the shape of each ball is preserved. The balls simulated by the RNN, on the other hand, deviate substantially from their ground-truth position and their size has increased. In general we find that R-NEM produces mostly very accurate simulations, whereas the RNN consistently fails. Interestingly, we found that the cases in which R-NEM frequently fails are those for which a single component models more than one ball. The right side of FIG2 summarizes the BCE loss for these same time-steps across the entire test set. Although this is a crude measure of simulation performance (since it does not take into account the identity of the balls), we still observe that R-NEM consistently outperforms RNN-EM and an RNN.

Hidden Factors. Occlusion is abundant in the real world, and the ability to handle hidden factors is crucial for any physical reasoning system. We therefore evaluate the capability of R-NEM to handle occlusion using a variant of bouncing balls that contains an invisible "curtain." Figure 5 shows that R-NEM accurately models the sequence and can maintain object states, even when confronted with occlusion. For example, note that in step 36 the "blue" ball is completely occluded and is about to collide with the "orange" ball. In step 38 the ball is accurately predicted to re-appear at the bottom of the curtain (since collision took place) as opposed to the left side of the curtain. This demonstrates that R-NEM has a notion of object permanence and implies that it understands a scene on a level beyond pixels: it assigns persistence and identity to the objects.

Figure 5: R-NEM applied to a sequence of bouncing balls with an invisible curtain. The ground-truth sequence is displayed in the top row, followed by the prediction of R-NEM (middle) and the soft-assignments of pixels to components (bottom). R-NEM models objects, as well as their interactions, even when an object is completely occluded (step 36). Only a subset of the steps is shown.

Figure 6: R-NEM accurately models a sequence of frames obtained by an agent playing Space Invaders.
A group no longer corresponds to an object, but instead assumes the role of a high-level entity that engages in similar movement patterns. In terms of test-set performance we find that R-NEM (BCE: 46.22, relational BCE: 2.33) outperforms an RNN (BCE: 94.64, relational BCE: 4.14) and an LSTM (BCE: 59.32, relational BCE: 2.72).

Space Invaders. To test the performance of R-NEM in a visually more challenging environment, we train it on sequences of 84 × 84 binarized images over 25 time-steps of game-play on Space Invaders from the Arcade Learning Environment BID5. We use K = 4 and also feed the action of the agent to the interaction function. Figure 6 confirms that R-NEM is able to accurately model the environment, even though the visual complexity has increased. Notice that these visual scenes comprise a large number of (small) primitive objects that behave similarly. Since we trained R-NEM with four components it is unable to group pixels according to individual objects and is forced to consider a different grouping. We find that R-NEM assigns different groups to every other column of aliens together with the spaceship, and to the three large "shields." These groupings seem to be based on movement, which to some degree coincides with the semantic roles of these entities in the environment. In other examples (not shown) we also found that R-NEM frequently assigns different groups to every other column of the aliens, and to the three large "shields." Individual bullets and the spaceship are less frequently grouped separately, which may have to do with the action-noise of the environment (that controls the movement of the spaceship) and the small size of the bullets at the current resolution, which makes them less predictable.

We have argued that the ability to discover and describe a scene in terms of objects provides an essential ingredient for common-sense physical reasoning. This is supported by converging evidence from cognitive science and developmental psychology that intuitive physics and reasoning capabilities are built upon the ability to perceive objects and their interactions BID43 BID48. The fact that young infants already exhibit this ability may even suggest an innate bias towards compositionality BID32 BID37 BID45. Inspired by these observations we have proposed R-NEM, a method that incorporates inductive biases about the existence of objects and interactions, implemented by its clustering objective and interaction function respectively. The specific nature of the objects, and their dynamics and interactions, can then be learned efficiently purely from visual observations. In our experiments we find that R-NEM indeed captures the (physical) dynamics of various environments more accurately than other methods, and that it exhibits improved generalization to environments with different numbers of objects. It can be used as an approximate simulator of the environment, and to predict movement and collisions of objects, even when they are completely occluded. This demonstrates a notion of object permanence and aligns with evidence that young infants seem to infer that occluded objects move in connected paths and continue to maintain object-specific properties BID44. Moreover, young infants also appear to expect that objects only interact when they come into contact BID44, which is analogous to the behaviour of R-NEM, which only attends to other objects when a collision is imminent.
In summary, we believe that our method presents an important step towards learning a more human-like model of the world in a completely unsupervised fashion. Current limitations of our approach revolve around grouping and prediction. What aspects of a scene humans group together typically varies as a function of the task in mind. One may perceive a stack of chairs as a whole if the goal is to move them to another room, or as individual chairs if the goal is to count the number of chairs in the stack. In order to facilitate this dynamic grouping one would need to incorporate top-down feedback from an agent into the grouping procedure to deviate from the built-in inductive biases. Another limitation of our approach is the need to incentivize R-NEM to produce useful groupings by injecting noise or reducing capacity. The former may prevent very small regularities in the input from being detected. Finally, the interaction in the E-step among the groups makes it difficult to increase the number of components above ten without causing harmful training instabilities. Due to the multitude of interactions and objectives in R-NEM (and RNN-EM) we find that they are sometimes challenging to train. In terms of prediction we have implicitly assumed that objects in the environment behave according to rules that can be inferred. This poses a challenge when objects deform in a manner that is difficult to predict (as is the case for objects in Space Invaders due to downsampling). However, in practice we find that (once pixels have been grouped together) the masking of the input helps each component quickly adapt its representation to any unforeseen behaviour across consecutive time steps. Perhaps a more severe limitation of R-NEM (and of RNN-EM in general) is that the second loss term of the outer training objective hinders the modelling of more complex, varying backgrounds, as the background group would have to predict the "pixel prior" for every other group. We argue that the ability to engage in common-sense physical reasoning benefits any intelligent agent that needs to operate in a physical environment, which provides exciting future research opportunities. In future work we intend to investigate how top-down feedback from an agent could be incorporated in R-NEM to facilitate dynamic groupings, but also how the compositional representations produced by R-NEM can benefit a reinforcement learner, for example to learn a modular policy that easily generalizes to novel combinations of known objects. Other interactions between a controller C and a model of the world M (implemented by R-NEM) as posed in BID42 constitute further research directions.

In all experiments we train the networks using ADAM BID30 with default parameters, a batch size of 64, and 50 000 train + 10 000 validation + 10 000 test inputs. The quality of the learned groupings is evaluated by computing the Adjusted Rand Index (ARI; BID25) with respect to the ground truth, while ignoring the background and overlap regions (as is consistent with earlier work). We use early stopping when the validation loss has not improved for 10 epochs. The bouncing balls data is similar to previous work BID47 with a few modifications. The data consists of sequences of 64 × 64 binary images over 30 time-steps, and balls are randomly sampled from two types: one ball is six times heavier and 1.25 times larger in radius than the other. The balls are initialized with random initial positions and velocities. Balls bounce elastically against each other and the image window.
As in previous work, we use a convolutional encoder-decoder architecture with a recurrent neural network as the bottleneck, which is updated according to the dynamics equation from Section 2 (with Υ^R-NEM as the interaction function).
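The exact parametrization of this update is not reproduced in the text above, so the following is only a hedged sketch of the shape of one component: a flattened linear stand-in for the convolutional encoder and decoder, and a recurrent bottleneck updated with the sigmoid rule θ_k = σ(W x̃ + R Υ_k(θ)) from Section 2. All sizes, the linear encoder/decoder, and the assumption that Υ_k has the same dimensionality as θ_k are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
D, M = 64 * 64, 250                                    # pixels, representation size

# Flattened linear stand-ins for the convolutional encoder and decoder.
W_enc = rng.normal(0, 0.01, (D, M))
W_dec = rng.normal(0, 0.01, (M, D))
W = rng.normal(0, 0.01, (M, M))                        # input-to-state weights
R = rng.normal(0, 0.01, (M, M))                        # interaction-to-state weights

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def step_component(x, psi_prev, gamma_k, upsilon_k):
    """One time-step of component k: encode the masked error, update theta_k, decode psi_k."""
    x_tilde = (gamma_k * (x - psi_prev)) @ W_enc       # encoder input is gamma_k * (x - psi)
    theta_k = sigmoid(x_tilde @ W + upsilon_k @ R)     # recurrent bottleneck update
    psi_k = sigmoid(theta_k @ W_dec)                   # group reconstruction / next-step prediction
    return theta_k, psi_k

# upsilon_k would come from the interaction function applied to all previous theta's.
x = rng.integers(0, 2, size=D).astype(float)
theta_k, psi_k = step_component(x, np.zeros(D), np.full(D, 0.2), rng.normal(0, 0.1, M))
```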
We introduce a novel approach to common-sense physical reasoning that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion.
The idea that neural networks may exhibit a bias towards simplicity has a long history. Simplicity bias provides a way to quantify this intuition. It predicts, for a broad class of input-output maps which can describe many systems in science and engineering, that simple outputs are exponentially more likely to occur upon uniform random sampling of inputs than complex outputs are. This simplicity bias behaviour has been observed for systems ranging from the RNA sequence to secondary structure map, to systems of coupled differential equations, to models of plant growth. Deep neural networks can be viewed as a mapping from the space of parameters (the weights) to the space of functions (how inputs get transformed to outputs by the network). We show that this parameter-function map obeys the necessary conditions for simplicity bias, and numerically show that it is hugely biased towards functions with low descriptional complexity. We also demonstrate a Zipf-like power-law probability-rank relation. A bias towards simplicity may help explain why neural nets generalize so well.

In a recent paper BID4, an inequality inspired by the coding theorem from algorithmic information theory (AIT) BID5, and applicable to computable input-output maps, was derived using the following simple procedure. Consider a map f: I → O between N_I inputs and N_O outputs. The size of the input space is parameterized as n, e.g. if the inputs are binary strings, then N_I = 2^n. Assuming f and n are given, implement the following simple procedure: first enumerate all 2^n inputs and map them to outputs using f. Then order the outputs by how frequently they appear. Using a Shannon-Fano code, one can then describe x with a code of length −log_2 P(x) + O(1), which therefore upper bounds the Kolmogorov complexity, giving the relation P(x) ≤ 2^{−K(x|f,n) + O(1)} BID0. The O(1) terms are independent of x (but hard to estimate). Similar bounds can be found in standard works BID5. As pointed out in BID4, if the maps are simple, that is, condition 1: K(f) + K(n) ≪ K(x) holds, then because K(x) ≤ K(x|f,n) + K(f) + K(n) + O(1), and K(x|f,n) ≤ K(x) + O(1), it follows that K(x|f,n) ≈ K(x) + O(1). The problem remains that Kolmogorov complexity is fundamentally uncomputable BID5, and that the O(1) terms are hard to estimate. However, in that work a more pragmatic approach was taken to argue that a bound on the probability P(x) that x obtains upon random sampling of inputs can be approximated as P(x) ≤ 2^{−(a K̃(x) + b)}, where K̃(x) is a suitable approximation to the Kolmogorov complexity of x. Here a and b are constants that are independent of x and which can often be determined from some basic information about the map. These constants pick up multiplicative and additive factors in the approximation to K(x) and to the O(1) terms. In addition to the simplicity of the input-output map f (condition 1), the map also needs to obey the following conditions. 2) Redundancy: the number of inputs N_I is much larger than the number of outputs N_O, as otherwise P(x) can't vary much. 3) Large systems: N_O should be large enough that finite-size effects don't play a dominant role. 4) Nonlinearity: if the map f is linear it won't show bias. 5) Well-behaved: the map should not have a significant fraction of pseudorandom outputs, because for those it is hard to find good approximations K̃(x).
For example, many random-number generators produce outputs that appear complex, but in fact have low K(x) because they are generated by relatively simple algorithms with short descriptions. Some of the steps above may seem rather rough to AIT purists. For example: Can a reasonable approximation to K(x) be found? What about the O(1) terms? And how do you know condition 5) is fulfilled? Notwithstanding these important questions, in that work the simplicity bias bound was tested empirically for a wide range of different maps, ranging from the RNA sequence-to-secondary-structure map, to a set of coupled differential equations, to L-systems (a model for plant morphology and computer graphics), to a stochastic financial model. In each case the bound works remarkably well: high-probability outputs have low complexity, and high-complexity outputs have low probability (but not necessarily vice versa). A simple matrix map that allows condition 1 to be directly tested also demonstrates that when the map becomes sufficiently complex, simplicity bias phenomena disappear. The constants a and b can generally be estimated with a limited amount of information about the map: as long as there is a way to estimate max K̃(x), b can to first order simply be set to zero, and if an affinely rescaled approximation to the complexity is used, a and b simply rescale accordingly, so the bound is robust to the chosen approximate measure of complexity.

FIG0: Probability that an RNA secondary structure x obtains upon random sampling of length L = 80 sequences, versus a Lempel-Ziv measure of the complexity of the structure. The black solid line is the simplicity-bias bound, while the dashed line denotes the bound with the parameter b set to zero.

In FIG0 we illustrate an iconic input-output map for RNA, a linear biopolymer that can fold into well-defined structures due to specific bonding between the four different types of nucleotides ACUG from which its sequences are formed. While the full three-dimensional structure is difficult to predict, the secondary structure, which records which nucleotide binds to which nucleotide, can be efficiently and accurately calculated. This mapping from sequences to secondary structures fulfills the conditions above. Most importantly, the map, which uses the laws of physics to determine the lowest free-energy structure for a given sequence, is independent of the length of the sequences, and so fulfills the simplicity condition. The structures (the outputs x) can be written in terms of a ternary string, and so simple compression algorithms can be used to estimate their complexity. In FIG0 we observe, as expected, that the probability P(x) that a particular secondary structure x is found upon random sampling of sequences is bounded as predicted.
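The sampling test behind these results is easy to reproduce for any small computable map: draw inputs uniformly at random, estimate P(x) for every output that appears, and check that the high-probability outputs have low descriptional complexity. The sketch below uses a toy pairwise-AND map and the (weak) entropy measure from Appendix C as the complexity proxy; both are illustrative assumptions and not the maps or the Lempel-Ziv measure used in the paper (a sketch of the latter appears in Appendix C below).

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)

def toy_map(bits):
    """Toy computable map: output bit j is the AND of input bits 2j and 2j+1."""
    return "".join(str(bits[2 * j] & bits[2 * j + 1]) for j in range(len(bits) // 2))

def entropy_complexity(x):
    """Weak complexity proxy from Appendix C: binary entropy of the fraction of ones."""
    p1 = x.count("1") / len(x)
    if p1 in (0.0, 1.0):
        return 0.0
    return float(-p1 * np.log2(p1) - (1 - p1) * np.log2(1 - p1))

n_in, n_samples = 40, 200_000
counts = Counter(toy_map(rng.integers(0, 2, n_in)) for _ in range(n_samples))

# High-probability outputs should have low complexity (the converse need not hold).
for x, c in counts.most_common(8):
    print(f"P(x) ~ {c / n_samples:.2e}   entropy = {entropy_complexity(x):.3f}   {x}")
```

Running this prints the most frequently sampled outputs; they are the ones with few ones and hence low entropy, while complex outputs essentially never appear, which is the qualitative signature of simplicity bias.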
Similar robust simplicity bias behaviour to that seen in this figure was observed for the other maps. Similar scaling was also observed for this map with a series of other approximations to K(x), suggesting that the precise choice of complexity measure is not critical, as long as it captures some basic essential features. In summary, then, the simplicity bias bound works robustly well for a wide range of different maps. The predictions are strong: the probability that an output obtains upon random sampling of inputs should drop (at least) exponentially with linear increases in the descriptional complexity of the output. Nevertheless, it is important to note that while the 5 conditions above are sufficient for the bound to hold, they are not sufficient to guarantee that the map will be biased (and therefore simplicity biased). One can easily construct maps that obey them, but do not show bias. Understanding the conditions resulting in biased maps is very much an open area of investigation. The question we will address here is: Can deep learning be re-cast into the language of input-output maps, and if so, do these maps also exhibit the very general phenomenon of simplicity bias?

2. The parameter-function map. It is not hard to see that the parameter-function map described above (from a network's parameters to the function the network computes) obeys condition 1: the shortest description of the map grows slowly with the logarithm of the size of the space of functions (which determines the typical K(x)). Conditions 2-4 are also clearly met. Condition 5 is more complex and requires empirical testing. But given that simplicity bias was observed for such a wide range of maps, our expectation is that it will hold robustly for neural networks also. In order to explore the properties of the parameter-function map, we consider random neural networks. We put a probability distribution over the space of parameters Θ, and are interested in the distribution over functions induced by this distribution via the parameter-function map of a given neural network. We consider Gaussian and uniform distributions over parameters. In the following, when we say "probability of a function", we imply we are using a Gaussian or uniform i.i.d. distribution over parameters unless otherwise specified. For a convolutional network with 4 layers and no pooling (see Appendix A for more details on the architecture), we have used the Gaussian process approximation (7; 8) (see Appendix A.1) to estimate the probability of different labellings on a random sample of 1000 images from CIFAR10. We used the critical sample ratio (CSR) as a measure of the complexity of the functions (see Appendix C for more details on complexity measures). FIG1 depicts the strong correlation between the log probability and CSR, consistent with the simplicity bias bound. In FIG7 we also show the values of the log probability for a CNN and a FC network, on a larger sample of 10k images from CIFAR10, MNIST, and fashion-MNIST. This illustrates the large range of orders of magnitude for the probability of different labellings, as well as the negative correlation with increasing label corruption. While label corruption is not a direct measure of descriptional complexity, it is almost certainly the case that increasing label corruption generally corresponds to an increase in measures of Kolmogorov complexity.
Direct measurements of simplicity bias are extremely computationally expensive, and so far we have not been able to obtain directly sampled results for datasets larger than CIFAR10.

FIG7: log probability of a labelling on 10k random images on three different datasets, as the fraction of label corruption increases. The probability is computed using the Gaussian process approximation for (a) a CNN with 4 layers and no pooling, and (b) a 1-hidden-layer FC network.

To further isolate the phenomenon of simplicity bias in the parameter-function map, and to study the effect of using different complexity measures, we look at a series of MLPs with 7 Boolean input neurons, 1 Boolean output neuron, and a number of layers of 40 hidden neurons each. We use ReLU activations in all hidden layers. For each such architecture, we sample the parameters i.i.d. according to a Gaussian or uniform distribution. We then count how often individual Boolean functions are obtained, and use the normalized empirical frequencies as an estimate of their probability. In FIG2 we plot the probability (in log scale) versus the Lempel-Ziv complexity, which is defined in Appendix C.1. In Appendix C.3 we show similar results for other complexity measures, demonstrating that the simplicity bias is robust to the choice of complexity measure, as long as it is not too simple (like entropy). The simplicity bias observed in FIG2 very closely resembles that seen for the other maps. In FIG2 in Appendix C.3 we show the same data as in FIG2, but as a histogram, highlighting that most of the probability mass, when sampling parameters, is close to the upper bound. In addition, in FIG9 in Appendix D we show the probability versus complexity plots for fully-connected networks with an increasing number of layers, showing that the correlation is similar in all cases, although for deeper networks the bias is a bit stronger. In our experiments in the previous two sections, we observe that the range of probabilities of functions spans many orders of magnitude. Here we show that for the experiment in Section 4, when we plot the probabilities versus the rank of the function (when ranked by probability), we find that the probabilities asymptotically follow Zipf's law: P(x) ≈ 1 / (ln(N_O) · rank(x)), where N_O is the total number of functions. This was also observed previously for networks of Boolean functions and for more general circuits. We formalize our findings for neural networks in the following conjecture. Conjecture 1 (Zipf's law behaviour): the probability-rank relation of functions for sufficiently over-parametrized neural networks follows Zipf's law (Equation 2). By "sufficiently over-parametrized" we mean that the networks are in the regime where the Gaussian process approximation to the distribution over functions is a good approximation (which appears to be the case in practice (11; 8; 12)). We now explain the experiments we conducted to test this conjecture.
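A minimal version of the sampling experiment described above, which also produces the samples behind the probability-rank plots, could look as follows: draw i.i.d. Gaussian parameters for a small ReLU MLP, evaluate it on all 128 Boolean inputs of length 7, record the induced function as a 128-bit string, and estimate probabilities from empirical frequencies. The two hidden layers of 40 units follow the text; the weight scaling, the sample count, the thresholded output unit, and the run-count complexity proxy (a quick stand-in for the Lempel-Ziv measure of Appendix C) are illustrative assumptions.

```python
from collections import Counter
from itertools import product
import numpy as np

rng = np.random.default_rng(0)
inputs = np.array(list(product([0, 1], repeat=7)), dtype=float)     # all 128 Boolean inputs

def random_boolean_function(widths=(7, 40, 40, 1), sigma_w=1.0):
    """Sample i.i.d. Gaussian parameters and return the induced function as a 128-bit string."""
    a = inputs
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        W = rng.normal(0.0, sigma_w / np.sqrt(d_in), (d_in, d_out))
        b = rng.normal(0.0, 1.0, d_out)
        a = a @ W + b
        if d_out != 1:
            a = np.maximum(a, 0.0)                                   # ReLU on hidden layers
    return "".join("1" if v > 0 else "0" for v in a[:, 0])           # thresholded output unit

def run_count(s):
    """Crude complexity stand-in: number of maximal runs of equal symbols."""
    return 1 + sum(s[i] != s[i - 1] for i in range(1, len(s)))

n_samples = 100_000
counts = Counter(random_boolean_function() for _ in range(n_samples))

print(f"{len(counts)} distinct functions from {n_samples} samples")
for f, c in counts.most_common(5):
    print(f"P ~ {c / n_samples:.3e}   runs = {run_count(f)}   {f[:32]}...")
```

Sorting the resulting empirical probabilities in decreasing order also gives the probability-rank curve that is compared against Zipf's law in FIG3.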
In FIG3 we show the probabilities of the functions versus their rank (when ranked by probability), for different choices of parameter distribution. We observe that for all the parameter distributions the probability-rank plot appears to approach Zipf's law. To determine the parameter N_O in Zipf's law, we checked that this architecture can produce almost all Boolean functions of 7 inputs, by finding that it could fit all the functions in a sample of 1000 random Boolean functions. We thus used N_O = 2^128 in the Zipf's law curve in FIG3, finding excellent agreement with no free parameters. These results imply in particular that the distribution is extremely biased, with probabilities spanning a huge range of orders of magnitude. Note that the mean P(x) over all functions is 1/N_O ≈ 3 × 10^−39, so that in our plots we are only showing a tiny fraction of functions, for which P(x) is orders of magnitude larger than the mean. We can't reliably obtain a probability-rank plot for the experiments we did on CIFAR10, because our sampling of the space of functions is much sparser in this case. [FIG3 legend fragment: the quantities shown are the weight and bias variances, respectively, where n is the number of hidden neurons.] However, the probabilities still span a huge range of orders of magnitude, as seen in FIG1. In fact, the range of orders of magnitude appears to scale with m, the size of the input space. This is consistent with a probability-rank relation following Zipf's law. The fact that neural networks are biased towards simple solutions has been conjectured to be the main reason they generalize (4; 3; 12). Here we have explored a form of simplicity bias encoded in the parameter-function map. It turns out that this bias is enough to guarantee good generalization, as was shown via a PAC-Bayesian formalism. In FIG0 in Appendix B we show the PAC-Bayesian bounds versus the true generalization error they obtained, finding that the bounds are not only non-vacuous but follow the trends of the true error closely. Simplicity bias also provides a lens through which to naturally understand various phenomena observed in deep networks, for example that the number of parameters does not strongly affect generalization. We have provided evidence that neural networks exhibit simplicity bias. The fact that the phenomena observed are remarkably similar to those of a wide range of maps from science and engineering BID4 suggests that this behaviour is general, and will hold for many neural network architectures. It would be interesting to test this claim for larger systems, which will require new sampling techniques, and to derive analytic arguments for a bias towards simplicity, as done in BID12.

In the main experiments of the paper we used two classes of architectures. Here we describe them in more detail.
• Fully connected networks (FCs), with a varying number of layers. The size of the hidden layers was the same as the input dimension, and the nonlinearity was ReLU. The last layer was a single Softmax neuron. We used default Keras settings for initialization (Glorot uniform).
• Convolutional neural networks (CNNs), with a varying number of layers. The number of filters was 200, and the nonlinearity was ReLU. The last layer was a fully connected single Softmax neuron. The filter sizes alternated between two sizes, and the padding between SAME and VALID; the strides were 1 (the same default settings as in the referenced code). We used default Keras settings for initialization (Glorot uniform).
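As a concrete reading of the FC description above, the sketch below builds such a network with the Keras functional API. The input dimension, the number of hidden layers, and the use of a sigmoid on the single output unit (the text calls it a single Softmax neuron) are illustrative assumptions.

```python
import tensorflow as tf

def build_fc(input_dim=784, n_hidden_layers=2):
    """FC net as described above: hidden layers the size of the input, ReLU, one output unit."""
    inputs = tf.keras.Input(shape=(input_dim,))
    x = inputs
    for _ in range(n_hidden_layers):
        x = tf.keras.layers.Dense(input_dim, activation="relu")(x)   # Glorot uniform init by default
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)      # single output unit for binary labels
    return tf.keras.Model(inputs, outputs)

model = build_fc()
model.summary()
```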
In recent work ((2; 3; 1; 4)) it was shown that infinitely wide neural networks (including convolutional and residual networks) are equivalent to Gaussian processes. This means that if the parameters are distributed i.i.d. (for instance with a Gaussian with diagonal covariance), then the (real-valued) outputs of the neural network, corresponding to any finite set of inputs, are jointly distributed with a Gaussian distribution. More precisely, assume the i.i.d. distribution over parameters is P̃ with zero mean; then for a set of n inputs (x_1, ..., x_n), ỹ ∼ N(0, K), where ỹ = (ỹ_1, ..., ỹ_n) is the vector of network outputs. The entries of the covariance matrix K are given by the kernel function k as K_{ij} = k(x_i, x_j). The kernel function depends on the choice of architecture, and on properties of P̃, in particular the weight variance σ_w²/n (where n is the size of the input to the layer) and the bias variance σ_b². The kernel for fully connected ReLU networks has a well-known analytical form known as the arccosine kernel, while for convolutional and residual networks it can be efficiently computed BID0 (we use existing code to compute the kernel for convolutional networks). The main quantity in the PAC-Bayes theorem, P(U), is precisely the probability of a given set of output labels for the set of instances in the training set, also known as the marginal likelihood, a connection explored in recent work ((6; 7)). For binary classification, these labels are binary, and are related to the real-valued outputs of the network via a nonlinear function such as a step function, which we denote σ. Then, for a training set U = {(x_1, y_1), ..., (x_m, y_m)}, P(U) = ∫ ∏_{i=1}^{m} 1[y_i = σ(ỹ_i)] P(ỹ) dỹ. This distribution no longer has a Gaussian form because of the output nonlinearity σ. We will discuss approximations to deal with this in the following. There is also the more fundamental issue of neural networks not being infinitely wide in practice. However, the Gaussian process limit has been found to be a good approximation for nets with reasonable widths (3; 4; 8). In order to calculate P(U) using the GPs, we use the expectation-propagation (EP) approximation, which is more accurate than the Laplacian approximation (see BID9 for a description and comparison of the algorithms; the comparison there finds that EP appears to give better approximations).

One of the key steps to practical application of the simplicity bias framework of Dingle et al. is the identification of a suitable complexity measure K̃(x) which mimics aspects of the (uncomputable) Kolmogorov complexity K(x) for the problem being studied. It was shown for the maps studied there that several different complexity measures all generate the same qualitative simplicity bias behaviour, P(x) ≤ 2^{−(a K̃(x) + b)}, but with different values of a and b depending on the complexity measure and of course on the map, though independent of the output x. Showing that the same qualitative results obtain for different complexity measures is a sign of robustness for simplicity bias. Below we list a number of different descriptional complexity measures which we used, to extend the experiments in Section 4 of the main text.

Lempel-Ziv complexity (LZ complexity for short).
The Boolean functions studied in the main text can be written as binary strings, which makes it possible to use measures of complexity based on finding regularities in binary strings. One of the best is Lempel-Ziv complexity, based on the Lempel-Ziv compression algorithm. It has many nice properties, like asymptotic optimality and being asymptotically equal to the Kolmogorov complexity for an ergodic source. We use a variation of Lempel-Ziv complexity based on the 1976 Lempel-Ziv algorithm: K_LZ(x) = log_2(n) if x is the all-zeros or all-ones string, and K_LZ(x) = log_2(n) [N_w(x_1...x_n) + N_w(x_n...x_1)] / 2 otherwise, where n is the length of the binary string and N_w(x_1...x_n) is the number of words in the Lempel-Ziv "dictionary" when it compresses output x. The symmetrization makes the measure more fine-grained, and the log_2(n) factor as well as the value for the simplest strings ensures that it scales as expected for Kolmogorov complexity. This complexity measure is the primary one used in the main text. We note that the binary string representation depends on the order in which inputs are listed to construct it, which is not a feature of the function itself. This may affect the LZ complexity, although for low-complexity input orderings (we use numerical ordering of the binary inputs) it has a negligible effect, so that K̃(x) will be very close to the Kolmogorov complexity of the function.

Entropy. A fundamental, though weak, measure of complexity is the entropy. For a given binary string this is defined as S(x) = −(n_0/N) log_2(n_0/N) − (n_1/N) log_2(n_1/N), where n_0 is the number of zeros in the string, n_1 is the number of ones, and N = n_0 + n_1. This measure is close to 1 when the number of ones and zeros is similar, and is close to 0 when the string is mostly ones or mostly zeros. Entropy and K_LZ(x) are compared in FIG1, and in more detail in supplementary note 7 (and supplementary information figure 1) of reference BID10. They correlate, in the sense that low entropy S(x) means low K_LZ(x), but it is also possible to have large entropy but low K_LZ(x), for example for a string such as 10101010....

Boolean expression complexity. Boolean functions can be compressed by finding simpler ways to represent them. We used the standard SciPy implementation of the Quine-McCluskey algorithm to minimize the Boolean function into a small sum-of-products form, and then defined the number of operations in the resulting Boolean expression as a Boolean complexity measure.

Generalization complexity. L. Franco et al. have introduced a complexity measure for Boolean functions, designed to capture how difficult the function is to learn and generalize, which was used to empirically find that simple functions generalize better in a neural network. The measure consists of a sum of terms, each measuring, averaged over all inputs, the fraction of neighbours that change the output. The first term considers neighbours at Hamming distance 1, the second at Hamming distance 2, and so on. The first term is also known (up to a normalization constant) as average sensitivity. The terms in the series have also been called "generalized robustness" in the evolutionary theory literature. Here we use the sum of the first two of these terms as the measure, where Nei_i(x) denotes all neighbours of x at Hamming distance i.

Critical sample ratio. A measure of the complexity of a function was introduced in BID16 to explore the dependence of generalization on complexity. In general, it is defined with respect to a sample of inputs as the fraction of those samples which are critical samples, defined to be inputs such that there is another input within a ball of radius r producing a different output (for discrete outputs). Here, we define it as the fraction of all inputs that have another input at Hamming distance 1 producing a different output.
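For reference, the LZ measure defined at the top of this appendix can be sketched as follows. The phrase-counting routine is a standard Lempel-Ziv-76 style parse and should be read as an approximation of the exact dictionary count used in the paper.

```python
import numpy as np

def lz76_phrases(s):
    """Count phrases in a Lempel-Ziv-76 style parse of the binary string s."""
    i, n, phrases = 0, len(s), 0
    while i < n:
        l = 1
        # Extend the current phrase while it already occurs in the preceding history
        # (overlap with all but its own last character is allowed).
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases

def k_lz(s):
    """Symmetrized LZ complexity as described above: log2(n) for the trivial strings,
    otherwise log2(n) * (N_w(forward) + N_w(reversed)) / 2."""
    n = len(s)
    if s.count(s[0]) == n:                        # all-zeros or all-ones string
        return float(np.log2(n))
    return float(np.log2(n)) * (lz76_phrases(s) + lz76_phrases(s[::-1])) / 2.0

examples = [
    "0" * 128,
    "01" * 64,
    "0110100110010110" * 8,
    "".join(np.random.default_rng(0).choice(["0", "1"], 128)),
]
for s in examples:
    print(f"K_LZ = {k_lz(s):6.1f}   {s[:32]}...")
```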
A measure of the complexity of a function was introduced in BID16 to explore the dependence of generalization on complexity. In general, it is defined with respect to a sample of inputs as the fraction of those samples which are critical samples: inputs such that there is another input within a ball of radius r producing a different output (for discrete outputs). Here, we define it as the fraction of all inputs that have another input at Hamming distance 1 producing a different output. In FIG1, we compare the different complexity measures against one another. We also plot the frequency of each complexity; generally, more functions are found at higher complexity. In FIG7 we show how the probability versus complexity plots look for other complexity measures. The behaviour is similar to that seen for the LZ complexity measure in FIG0 of the main text. In FIG3 we show probability versus LZ complexity plots for other choices of parameter distributions. In FIG2, we show a histogram of the functions in the log-probability versus complexity plane. The histogram counts are weighted by the probability of the function, so that the total weight is proportional to the probability of obtaining a function in a particular bin when sampling the parameters. In FIG9 we show the effect of the number of layers on the bias (for feedforward neural networks with 40 neurons per layer). The left figures show the probability of individual functions versus their complexity; the right figure shows the histogram of complexities, weighted by the probability with which each function appeared in the sample of parameters. [Figure caption (fragment): probabilities are estimated from a sample of parameters, for a network of shape BID6 40, 40, BID0. Points with a frequency of 10^-8 are removed for clarity because these suffer from finite-size effects (see Appendix E). The measures of complexity are described in Appendix C.] FIG2. Histogram of functions in the probability versus Lempel-Ziv complexity plane, weighted according to their probability. Probabilities are estimated from a sample of 10^8 parameters, for a network of shape BID6 40, 40, BID0. The histograms therefore show the distribution over complexities when randomly sampling parameters BID1. We can see that between the 0-layer perceptron and the 2-layer network there is an increased number of higher-complexity functions, most likely because of the increasing expressivity of the network. For 2 layers and above, the expressivity does not change significantly, and instead we observe a shift of the distribution towards lower complexity. E. Finite-size effects for sampling probability. Since for a sample of size N the minimum estimated probability is 1/N, many of the low-probability samples that arise just once may in fact have a much lower probability than suggested. See Figure 7 for an illustration of how this finite-size sampling effect manifests with changing sample size N.
For this reason, these points are typically removed from plots.
A very strong bias towards simple outputs is observed in many simple input-output maps. The parameter-function map of deep networks is found to be biased in the same way.
301
scitldr
Imitation Learning (IL) is an appealing approach to learn desirable autonomous behavior. However, directing IL to achieve arbitrary goals is difficult. In contrast, planning-based algorithms use dynamics models and reward functions to achieve goals. Yet, reward functions that evoke desirable behavior are often difficult to specify. In this paper, we propose "Imitative Models" to combine the benefits of IL and goal-directed planning. Imitative Models are probabilistic predictive models of desirable behavior able to plan interpretable expert-like trajectories to achieve specified goals. We derive families of flexible goal objectives, including constrained goal regions, unconstrained goal sets, and energy-based goals. We show that our method can use these objectives to successfully direct behavior. Our method substantially outperforms six IL approaches and a planning-based approach in a dynamic simulated autonomous driving task, and is efficiently learned from expert demonstrations without online data collection. We also show our approach is robust to poorly-specified goals, such as goals on the wrong side of the road. Imitation learning (IL) is a framework for learning a model to mimic behavior. At test-time, the model pursues its best-guess of desirable behavior. By letting the model choose its own behavior, we cannot direct it to achieve different goals. While work has augmented IL with goal conditioning , it requires goals to be specified during training, explicit goal labels, and are simple (e.g., turning). In contrast, we seek flexibility to achieve general goals for which we have no demonstrations. In contrast to IL, planning-based algorithms like model-based reinforcement learning (MBRL) methods do not require expert demonstrations. MBRL can adapt to new tasks specified through reward functions . The "model" is a dynamics model, used to plan under the user-supplied reward function. Planning enables these approaches to perform new tasks at test-time. The key drawback is that these models learn dynamics of possible behavior rather than dynamics of desirable behavior. This means that the responsibility of evoking desirable behavior is entirely deferred to engineering the input reward function. Designing reward functions that cause MBRL to evoke complex, desirable behavior is difficult when the space of possible undesirable behaviors is large. In order to succeed, the rewards cannot lead the model astray towards observations significantly different than those with which the model was trained. Our goal is to devise an algorithm that combines the advantages of MBRL and IL by offering MBRL's flexibility to achieve new tasks at test-time and IL's potential to learn desirable behavior entirely from offline data. To accomplish this, we first train a model to forecast expert trajectories with a density function, which can score trajectories and plans by how likely they are to come from the expert. A probabilistic model is necessary because expert behavior is stochastic: e.g. at an intersection, the expert could choose to turn left or right. Next, we derive a principled probabilistic inference objective to create plans that incorporate both the model and arbitrary new tasks. Finally, we derive families of tasks that we can provide to the inference framework. Our method can accomplish new tasks specified as complex goals without having seen an expert complete these tasks before. We investigate properties of our method on a dynamic simulated autonomous driving task (see Fig. 1). 
Videos are available at https://sites.google.com/view/imitative-models. Our contributions are as follows: Figure 1: Our method: deep imitative models. Top Center. We use demonstrations to learn a probability density function q of future behavior and deploy it to accomplish various tasks. Left: A region in the ground plane is input to a planning procedure that reasons about how the expert would achieve that task. It coarsely specifies a destination, and guides the vehicle to turn left. Right: Goal positions and potholes yield a plan that avoids potholes and achieves one of the goals on the right. 1. Interpretable expert-like plans with minimal reward engineering. Our method outputs multistep expert-like plans, offering superior interpretability to one-step imitation learning models. In contrast to MBRL, our method generates expert-like behaviors with minimal reward engineering. 2. Flexibility to new tasks: In contrast to IL, our method flexibly incorporates and achieves goals not seen during training, and performs complex tasks that were never demonstrated, such as navigating to goal regions and avoiding test-time only potholes, as depicted in Fig. 1. 3. Robustness to goal specification noise: We show that our method is robust to noise in the goal specification. In our application, we show that our agent can receive goals on the wrong side of the road, yet still navigate towards them while staying on the correct side of the road. 4. State-of-the-art CARLA performance: Our method substantially outperforms MBRL, a custom IL method, and all five prior CARLA IL methods known to us. It learned near-perfect driving through dynamic and static CARLA environments from expert observations alone. We begin by formalizing assumptions and notation. We model continuous-state, discrete-time, Partially-Observed Markov Decision Processes (POMDPs). For brevity, we call the components of state of which we have direct observations the agent's "state", although we explicitly assume these states do not represent the full Markovian world state. Our agent's state at time t is s t ∈ R D; t = 0 refers to the current time step, and φ is all of the agent's observations. Variables are bolded. Random variables are capitalized. Absent subscripts denote all future time steps, e.g. S. = S 1:T ∈ R T ×D. We denote a probability density function of a random variable S as p(S), and its value as p(s). = p(S = s). To learn agent dynamics that are possible and preferred, we construct a model of expert behavior. We fit an "Imitative Model" q(S 1:T |φ) = T t=1 q(S t |S 1:t−1, φ) to a dataset of expert trajectories drawn from a (unknown) distribution of expert behavior s i ∼ p(S|φ i). By training q(S|φ) to forecast expert trajectories with high likelihood, we model the scene-conditioned expert dynamics, which can score trajectories by how likely they are to come from the expert. After training, q(S|φ) can generate trajectories that resemble those that the expert might generatee.g. trajectories that navigate roads with expert-like maneuvers. However, these maneuvers will not have a specific goal. Beyond generating human-like behaviors, we wish to direct our agent to goals and have the agent automatically reason about the necessary mid-level details. We define general tasks by a set of goal variables G. The probability of a plan s conditioned on the goal G is modelled by a posterior p(s|G, φ). This posterior is implemented with q(s|φ) as a learned imitation prior and p(G|s, φ) as a test-time goal likelihood. 
We give examples of p(G|s, φ) after deriving a maximum a posteriori inference procedure to generate expert-like plans that achieve abstract goals: We perform gradient-based optimization of Eq. 1, and defer this discussion to Appendix A. Next, we discuss several goal likelihoods, which direct the planning in different ways. They communicate goals they desire the agent to achieve, but not how to achieve them. The planning procedure determines how to achieve them by producing paths similar to those an expert would have taken to reach the given goal. In contrast to black-box one-step IL that predicts controls, our method produces interpretable multi-step plans accompanied by two scores. One estimates the plan's "expertness", the second estimates its probability to achieve the goal. Their sum communicates the plan's overall quality. Our approach can also be viewed as a learning-based method to integrate mid-level and high-level controllers together, where demonstrations from both are available at train-time, only the highlevel controller is available at test-time, and the high-level controller can vary. The high-level controller's action specifies a subgoal for the mid-level controller. A density model of future trajectories of an expert mid-level controller is learned at train-time, and is amenable to different types of direction as specified by the high-level controller. In this sense, the model is an "apprentice", having learned to imitate mid-level behaviors. In our application, the high-level controller is composed of an A * path-planning algorithm and one of a library of components that forms goal likelihoods from the waypoints produced by A *. Connecting this to related approaches, learning the midlevel controller (Imitative Model) resembles offline IL, whereas inference with an Imitative Model resembles trajectory optimization in MBRL, given goals provided by the high-level controller. Constraint-based planning to goal sets (hyperparameter-free): Consider the setting where we have access to a set of desired final states, one of which the agent should achieve. We can model this by applying a Dirac-delta distribution on the final state, to ensure it lands in a goal set G ⊂ R D: δ s T (G)'s partial support of s T ∈ G ⊂ R D constrains s T and introduces no hyperparameters into p(G|s, φ). For each choice of G, we have a different way to provide high-level task information to the agent. The simplest choice for G is a finite set of points: a (A) Final-State Indicator likelihood. We applied (A) to a sequence of waypoints received from a standard A * planner (provided by the CARLA simulator), and outperformed all prior dynamic-world CARLA methods known to us. We can also consider providing an infinite number of points. Providing a set of line-segments as G yields a (B) Line-Segment Final-State Indicator likelihood, which encourages the final state to land along one of the segments. Finally, consider a (C) Region Final-State Indicator likelihood in which G is a polygon (see Figs. 1 and 4). Solving Eq. 1 with (C) amounts to planning the most expert-like trajectory that ends inside a goal region. Appendix B provides derivations, implementation details, and additional visualizations. We found these methods to work well when G contains "expert-like" final position(s), as the prior strongly penalizes plans ending in non-expert-like positions. 
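For reference, the maximum a posteriori objective referred to above as Eq. 1 can be written out from the posterior defined in the previous paragraph; the form below is a reconstruction from that context rather than a quote of the paper's own equation:

```latex
\mathbf{s}^{*} \;=\; \arg\max_{\mathbf{s} \in \mathbb{R}^{T \times D}}
\;\underbrace{\log q(\mathbf{s} \mid \phi)}_{\text{imitation prior}}
\;+\;\underbrace{\log p(\mathcal{G} \mid \mathbf{s}, \phi)}_{\text{goal likelihood}}.
```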
Unconstrained planning to goal sets (hyperparameter-based): Instead of constraining that the final state of the trajectory reach a goal, we can use a goal likelihood with full support (s T ∈ R D), centered at a desired final state. This lets the goal likelihood encourage goals, rather than dictate them. If there is a single desired goal (G = {g T}), the (D) Gaussian Final-State likelihood p(G|s, φ) ← N (g T ; s T, I) treats g T as a noisy observation of a final future state, and encourages the plan to arrive at a final state. We can also plan to K successive states G = (g T −K+1, . . ., g T) with a (E) Gaussian State Sequence: p(G|s, φ) ← T k=T −K+1 N (g k ; s k, I) if a program wishes to specify a desired end velocity or acceleration when reaching the final state g T (Fig. 2). Alternatively, a planner may propose a set of states with the intention that the agent should reach any one of them. This is ) and is useful if some of those final states are not reachable with an expert-like plan. Unlike A-C, D-F introduce a hyperparameter " ". However, they are useful when no states in G correspond to observed expert behavior, as they allow the imitation prior to be robust to poorly specified goals. Costed planning: Our model has the additional flexibility to accept arbitrary user-specified costs c at test-time. For example, we may have updated knowledge of new hazards at test-time, such as a given map of potholes or a predicted cost map. Cost-based knowledge c(s i |φ) can be incorporated as an (G) Energy-based likelihood: p(G|s, φ) ∝ T t=1 e −c(st|φ) . This can be combined with other goal-seeking objectives by simply multiplying the likelihoods together. Examples of combining G (energy-based) with F (Gaussian mixture) were shown in Fig. 1 and are shown in Fig. 3. Next, we describe instantiating q(S|φ) in CARLA . Designing general goal likelihoods can be considered a form of reward engineering if there are no restrictions on the goal likelihoods. This connection is best seen in (G), which has an explicit cost term. One reason why it is easier to design goal likelihoods than to design reward functions is that the task of evoking most aspects of goal-driven behavior is already learned by the prior q(s|φ), which models desirable behavior. This is in contrast to model-free RL, which entirely relies on the reward design to evoke goal-driven behavior, and in contrast to model-based RL, which heavily relies on the reward design to evoke goal-driven behavior, as its dynamics model learns what is possible, rather than what is desirable. Additionally, it is easy to design goal likelihoods when goals provide a significant amount of information that obviates the need to do any manual tuning. The main assumption is that one of the goals in the goal set is reachable within the model's time-horizon. In our autonomous driving application, we model the agent's state at time t as s t ∈ R D with D = 2; s t represents our agent's location on the ground plane. The agent has access to environment perception φ ← {s −τ :0, χ, λ}, where τ is the number of past positions we condition on, χ is a high-dimensional observation of the scene, and λ is a low-dimensional traffic light signal. χ could represent either LIDAR or camera images (or both), and is the agent's observation of the world. In our setting, we featurize LIDAR to χ = R 200×200×2, with χ ij representing a 2-bin histogram of points above and at ground level in a 0.5m 2 cell at position (i, j). CARLA provides ground-truth s −τ:0 and λ. 
Their availability is a realistic input assumption in perception-based autonomous driving pipelines. Model requirements: A deep imitative model forecasts future expert behavior. It must be able to compute q(s|φ)∀s ∈ R T ×D. The ability to compute ∇ s q(s|φ) enables gradient-based optimization for planning. provide a recent survey on forecasting agent behavior. As many forecasting methods cannot compute trajectory probabilities, we must be judicious in choosing q(S|φ). A model that can compute probabilities R2P2 , a generative autoregressive flow . We extend R2P2 to instantiate the deep imitative model q(S|φ). R2P2 was previously used to forecast vehicle trajectories: it was not demonstrated or developed to plan or execute controls. Although we used R2P2, other futuretrajectory density estimation techniques could be used -designing q(s|φ) is not the primary focus of this work. In R2P2, q θ (S|φ) is induced by an invertible, differentiable function:; f θ warps a latent sample from a base distribution Z ∼ q 0 = N (0, I) to S. θ is trained Figure 5: Illustration of our method applied to autonomous driving. Our method trains an imitative model from a dataset of expert examples. After training, the model is repurposed as an imitative planner. At test-time, a route planner provides waypoints to the imitative planner, which computes expert-like paths to each goal. The best plan is chosen according to the planning objective and provided to a low-level PID-controller in order to produce steering and throttle actions. This procedure is also described with pseudocode in Appendix A. to maximize q θ (S|φ) of expert trajectories. f θ is defined for 1..T as follows: where µ θ (S 1:t−1, φ) = 2S t−1 −S t−2 +m θ (S 1:t−1, φ) encodes a constant-velocity inductive bias. The m θ ∈ R 2 and σ θ ∈ R 2×2 are computed by expressive neural networks. The ing trajectory distribution is complex and multimodal (Appendix C.1 depicts samples). Because traffic light state was not included in the φ of R2P2's "RNN" model, it could not react to traffic lights. We created a new model that includes λ. It fixed cases where q(S|φ) exhibited no forward-moving preference when the agent was already stopped, and improved q(S|φ)'s stopping preference at red lights. We used T = 40 trajectories at 10Hz (4 seconds), and τ = 3. Fig. 12 in Appendix C depicts the architecture of µ θ and σ θ. We now instantiate a complete autonomous driving framework based on imitative models to study in our experiments, seen in Fig. 5. We use three layers of spatial abstraction to plan to a faraway destination, common to autonomous vehicle setups: coarse route planning over a road map, path planning within the observable space, and feedback control to follow the planned path . For instance, a route planner based on a conventional GPS-based navigation system might output waypoints roughly in the lanes of the desired direction of travel, but not accounting for environmental factors such as the positions of other vehicles. This roughly communicates possibilities of where the vehicle could go, but not when or how it could get to them, or any environmental factors like other vehicles. A goal likelihood from Sec. 2.2 is formed from the route and passed to the planner, which generates a state-space plan according to the optimization in Eq. 1. The ing plan is fed to a simple PID controller on steering, throttle, and braking. Pseudocode of the driving and inference algorithms are given in Algs 1 and 2. The PID algorithm is given in Appendix A. 
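The following is a schematic of how such an autoregressive flow can generate a trajectory and score its density with the change-of-variables formula. It is only a sketch under stated assumptions: in the paper, m_θ and σ_θ are expressive networks conditioned on the LIDAR features χ, past states and the light state λ, whereas here a small placeholder MLP and a generic context vector are assumed, and all names are illustrative.

```python
import math
import torch
import torch.nn as nn

class TinyImitativeFlow(nn.Module):
    """Schematic autoregressive flow: s_t = mu_t + S_t z_t, with
    mu_t = 2 s_{t-1} - s_{t-2} + m_theta(...)  (constant-velocity inductive bias)."""
    def __init__(self, T=40, D=2, ctx_dim=64, hidden=64):
        super().__init__()
        self.T, self.D = T, D
        self.net = nn.Sequential(
            nn.Linear(2 * D + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, D + D * D))          # outputs m_theta and sigma_theta

    def forward(self, z, s_past, ctx):
        """Warp a latent z ~ N(0, I) of shape (T, D) into a trajectory s_{1:T}."""
        s_prev2, s_prev1 = s_past[-2], s_past[-1]
        states, log_det = [], torch.zeros(())
        for t in range(self.T):
            h = self.net(torch.cat([s_prev1, s_prev2, ctx]))
            m = h[:self.D]
            raw = h[self.D:].view(self.D, self.D)
            diag = torch.nn.functional.softplus(torch.diagonal(raw)) + 1e-3
            S = torch.tril(raw, diagonal=-1) + torch.diag(diag)   # lower-triangular scale
            mu = 2 * s_prev1 - s_prev2 + m         # constant-velocity inductive bias
            s_t = mu + S @ z[t]
            log_det = log_det + torch.log(diag).sum()
            states.append(s_t)
            s_prev2, s_prev1 = s_prev1, s_t
        return torch.stack(states), log_det

    def log_prob(self, z, log_det):
        """log q(s|phi) = log N(z; 0, I) - log |det ds/dz|."""
        log_q0 = -0.5 * (z ** 2).sum() - 0.5 * z.numel() * math.log(2 * math.pi)
        return log_q0 - log_det
```

The constant-velocity term means the network only has to model deviations from straight-line motion, which is one way to read the inductive bias described above.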
G ← ROUTEPLAN(φ) {Generate goals from a route} 4: φ ← ENVIRONMENT(u) {Execute control} 8: end for 9: end while 3 RELATED WORK A body of previous work has explored offline IL (Behavior Cloning -BC) in the CARLA simulator (; ; ; ; . These BC Algorithm 2 IMITATIVEPLAN(q θ, f, p, G, φ) 1: Initialize z 1:T ∼ q 0 2: while not converged do 3: approaches condition on goals drawn from a small discrete set of directives. Despite BC's theoretical drift shortcomings , these methods still perform empirically well. These approaches and ours share the same high-level routing algorithm: an A * planner on route nodes that generates waypoints. In contrast to our approach, these approaches use the waypoints in a Waypoint Classifier, which reasons about the map and the geometry of the route to classify the waypoints into one of several directives: {Turn left, Turn right, Follow Lane, Go Straight}. One of the original motivations for these type of controls was to enable a human to direct the robot . However, in scenarios where there is no human in the loop (i.e. autonomous driving), we advocate for approaches to make use of the detailed spatial information inherent in these waypoints. Our approach and several others we designed make use of this spatial information. One of these is CIL-States (CILS): whereas the approach in uses images to directly generate controls, CILS uses identical inputs and PID controllers as our method. With respect to prior conditional IL methods, our main approach has more flexibility to handle more complex directives post-training, the ability to learn without goal labels, and the ability to generate interpretable planned and unplanned trajectories. These contrasting capabilities are illustrated in Table 1. Our approach is also related to MBRL. MBRL can also plan with a predictive model, but its model only represents possible dynamics. The task of evoking expert-like behavior is offloaded to the reward function, which can be difficult and time-consuming to craft properly. We know of no MBRL approach previously applied to CARLA, so we devised one for comparison. This MBRL approach also uses identical inputs to our method, instead to plan a reachability tree over an dynamic obstacle-based reward function. See Appendix D for further details of the MBRL and CILS methods, which we emphasize use the same inputs as our method. Several prior works (; ;) used imitation learning to train policies that contain planning-like modules as part of the model architecture. While our work also combines planning and imitation learning, ours captures a distribution over possible trajectories, and then plan trajectories at test-time that accomplish a variety of given goals with high probability under this distribution. Our approach is suited to offline-learning settings where interactively collecting data is costly (time-consuming or dangerous). However, there exists online IL approaches that seek to be safe (; ;). We evaluate our method using the CARLA driving simulator . We seek to answer four primary questions: Can we generate interpretable, expert-like plans with offline learning and minimal reward engineering? Neither IL nor MBRL can do so. It is straightforward to interpret the trajectories by visualizing them on the ground plane; we thus seek to validate whether these plans are expert-like by equating expert-like behavior with high performance on the CARLA benchmark. Can we achieve state-of-the-art CARLA performance using resources commonly available in real autonomous vehicle settings? 
There are several differences between the approaches, as discussed in Sec 3 and shown in Tables 1 and 2. Our approach uses the CARLA toolkit's resources that are commonly available in real autonomous vehicle settings: waypoint-based routes (all prior approaches use these), LIDAR and traffic-light observations (both are CARLAprovided, but only the approaches we implemented use it). Furthermore, the two additional methods of comparison we implemented (CILS and MBRL) use the exact same inputs as our algorithm. These reasons justify an overall performance comparison to answer: whether we can achieve state-ofthe-art performance using commonly available resources. We advocate that other approaches also make use of such resources. How flexible is our approach to new tasks? We investigate by applying each of the goal likelihoods we derived and observing the ing performance. How robust is our approach to error in the provided goals? We do so by injecting two different types of error into the waypoints and observing the ing performance. We begin by training q(S|φ) on a dataset of 25 hours of driving we collected in Town01, detailed in Appendix C.2. Following existing protocol, each test episode begins with the vehicle starting in one of a finite set of starting positions provided by the CARLA simulator in Town01 or Town02 maps in one of two settings: static-world (no other vehicles) or dynamic-world (with other vehicles). We ran the same benchmark 3 times across different random seeds to quantify means and their standard errors. We construct the goal set G for the Final-State Indicator (A) directly from the route output by CARLA's waypointer. B's line segments are formed by connecting the waypoints to form a piecewise linear set of segments. C's regions are created a polygonal goal region around the segments of (B). Each represents an increasing level of coarseness of direction. Coarser directions are easier to specify when there is ambiguity in positions (both the position of the vehicle and the position of the goals). Further details are discussed in Appendix B.3. We use three metrics: (a) success rate in driving to the destination without any collisions (which all prior work reports); (b) red-light violations; and (c) proportion of time spent driving in the wrong lane and off road. With the exception of metric (a), lower numbers are better. Results: Towards questions and (expert-like plans and flexibility), we apply our approach with a variety of goal likelihoods to the CARLA simulator. Towards question, we compare our methods against CILS, MBRL, and prior work. These are shown in Table 3. The metrics for the methods we did not implement are from the aggregation reported in. We observe our method to outperform all other approaches in all settings: static world, dynamic world, training conditions, and test conditions. We observe the Goal Indicator methods are able to perform well, despite having no hyperparameters to tune. We found that we could further improve our approach's performance if we use the light state to define different goal sets, which defines a "smart" waypointer. The settings where we use this are suffixed with "S." in the Tables. We observed the planner prefers closer goals when obstructed, when the vehicle was already stopped, and when a red light was detected; we observed the planner prefers farther goals when unobstructed and when green lights or no lights were observed. 
Examples of these and other interesting behaviors are best seen in the videos on the website (https://sites.google.com/view/imitative-models). These behaviors follow from the method leveraging q(S|φ)'s internalization of aspects of expert behavior in order to reproduce them in new situations. Altogether, these provide affirmative answers to questions and. Towards question, these show that our approach is flexible to different directions defined by these goal likelihoods. Towards questions (flexibility) and (noise-robustness), we analyze the performance of our method when the path planner is heavily degraded, to understand its stability and reliability. We use the Gaussian Final-State Mixture goal likelihood. Navigating with high-variance waypoints. As a test of our model's capability to stay in the distribution of demonstrated behavior, we designed a "decoy waypoints" experiment, in which half of the waypoints are highly perturbed versions of the other half, serving as distractions for our Gaussian Final-State Mixture imitative planner. We observed surprising robustness to decoy waypoints. Examples of this robustness are shown in Fig. 6. In Table 4, we report the success rate and the mean number of planning rounds for failed episodes in the " 1 /2 distractors" row. These numbers indicate our method can execute dozens of planning rounds without decoy waypoints causing a catastrophic failure, and often it can execute the hundreds necessary to achieve the goal. See Appendix E for details. Navigating with waypoints on the wrong side of the road. We also designed an experiment to test our method under systemic bias in the route planner. Our method is provided waypoints on the wrong side of the road (in CARLA, the left side), and tasked with following the directions of these waypoints while staying on the correct side of the road (the right side). In order for the value of q(s|φ) to outweigh the influence of these waypoints, we increased the hyperparameter. We found our method to still be very effective at navigating, and report in Table 4. We also investigated providing very coarse 8-meter wide regions to the Region Final-State likelihood; these always include space in the wrong lane and off-road (Fig. 11 in Appendix B.4 provides visualization). Nonetheless, on Town01 Dynamic, this approach still achieved an overall success rate of 48%. Taken together towards question, our indicate that our method is fairly robust to errors in goal-specification. To further investigate our model's flexibility to test-time objectives (question 3), we designed a pothole avoidance experiment. We simulated potholes in the environment by randomly inserting them in the cost map near waypoints. We ran our method with a test-time-only cost map of the simulated potholes by combining goal likelihoods (F) and (G), and compared to our method that did not incorporate the cost map (using (F) only, and thus had no incentive to avoid potholes). We recorded the number of collisions with potholes. In Table 4, our method with cost incorporated avoided most potholes while avoiding collisions with the environment. To do so, it drove closer to the centerline, and occasionally entered the opposite lane. Our model internalized obstacle avoidance by staying on the road and demonstrated its flexibility to obstacles not observed during training. Fig. 7 shows an example of this behavior. See Appendix F for details of the pothole generation. We proposed "Imitative Models" to combine the benefits of IL and MBRL. 
Imitative Models are probabilistic predictive models able to plan interpretable expert-like trajectories to achieve new goals. Inference with an Imitative Model resembles trajectory optimization in MBRL, enabling it to both incorporate new goals and plan to them at test-time, which IL cannot. Learning an Imitative Model resembles offline IL, enabling it to circumvent the difficult reward-engineering and costly online data collection necessities of MBRL. We derived families of flexible goal objectives and showed our model can successfully incorporate them without additional training. Our method substantially outperformed six IL approaches and an MBRL approach in a dynamic simulated autonomous driving task. We showed our approach is robust to poorly specified goals, such as goals on the wrong side of the road. We believe our method is broadly applicable in settings where expert demonstrations are available, flexibility to new situations is demanded, and safety is paramount. Future work could investigate methods to handle both observation noise and out-of-distribution observations to enhance the applicability to robust real systems -we expand on this issue in Appendix E. Finally, to facilitate more general planning, future work could extend our approach to explicitly reason about all agents in the environment in order to inform a closed-loop plan for the controlled agent. In Algorithm 1, we provided pseudocode for receding-horizon control via our imitative model. In Algorithm 2 we provided pesudocode that describes how we plan in the latent space of the trajectory. In Algorithm 3, we detail the speed-based throttle and position-based steering PID controllers. Algorithm 3 PIDCONTROLLER(φ = {s 0, s −1, . . .}, s Since s 1:T = f (z 1:T) in our implementation, and f is differentiable, we can perform gradient descent of the same objective in terms of z 1:T, as shown in Algorithm 2.Since q is trained with z 1:T ∼ N (0, I), the latent space is likelier to be better numerically conditioned than the space of s 1:T, although we did not compare the two approaches formally. We implemented the following optimizations to improve this procedure's output and practical run time. 1) We started with N = 120 different z initializations, optimized them in batch, and returned the highest-scoring value across the entire optimization. 2) We observed the ing planning procedure to usually converge quickly, so instead of specifying a convergence threshold, we simply ran the optimization for a small number of steps, M = 10, and found that we obtained good performance. Better performance could be obtained by performing a larger number of steps. We now derive an approach to optimize our main objective with set constraints. Although we could apply a constrained optimizer, we find that we are able to exploit properties of the model and constraints to derive differentiable objectives that enable approximate optimization of the corresponding closed-form optimization problems. These enable us to use the same straightforward gradient-descent-based optimization approach described in Algorithm 2. Shorthand notation: In this section we omit dependencies on φ for brevity, and use short hand µ t. = µ θ (s 1:t−1) and Σ t. = Σ θ (s 1:t−1). For example, q(s t |s 1:t−1) = N (s t ; µ t, Σ t). Let us begin by defining a useful delta function: which serves as our goal likelihood when using goal with set constraints: p(G|s 1:T) ← δ S T (G). 
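Restating the latent-space optimization of Algorithm 2 in code form, with the (F) Gaussian Final-State Mixture and an optional (G) energy-based cost as the goal likelihood. This is a sketch under stated assumptions: it reuses the schematic flow interface from the earlier block, drops constant normalizers (they do not affect the arg max), loops over the N = 120 restarts that the paper optimizes in batch, and uses illustrative names throughout.

```python
import torch

def log_goal_likelihood(s, goals, eps=1.0, cost_fn=None):
    """(F) Gaussian mixture around candidate final states g_T^k, optionally
    combined with a (G) energy-based cost by adding log-likelihoods."""
    comps = -0.5 * ((goals - s[-1]) ** 2).sum(dim=-1) / eps
    log_p = torch.logsumexp(comps, dim=0) - torch.log(torch.tensor(float(len(goals))))
    if cost_fn is not None:
        log_p = log_p - cost_fn(s).sum()           # subtract sum_t c(s_t | phi)
    return log_p

def imitative_plan(flow, s_past, ctx, goals, cost_fn=None,
                   n_init=120, n_steps=10, lr=0.1):
    """Gradient ascent on log q(f(z)|phi) + log p(G|f(z), phi) in z-space."""
    best_plan, best_score = None, -float("inf")
    for _ in range(n_init):
        z = torch.randn(flow.T, flow.D, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(n_steps):
            s, log_det = flow(z, s_past, ctx)
            score = flow.log_prob(z, log_det) + log_goal_likelihood(s, goals, cost_fn=cost_fn)
            opt.zero_grad()
            (-score).backward()
            opt.step()
        with torch.no_grad():
            s, log_det = flow(z, s_past, ctx)
            score = flow.log_prob(z, log_det) + log_goal_likelihood(s, goals, cost_fn=cost_fn)
        if score.item() > best_score:
            best_score, best_plan = score.item(), s
    return best_plan, best_score
```

The returned score is the sum of the two interpretable terms discussed earlier: how expert-like the plan is and how likely it is to satisfy the goal.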
We now derive the corresponding maximum a posteriori optimization problem: arg max q(s t |s 1:t−1) = arg max arg max By exploiting the fact that q(s T |s 1:T −1) = N (s T ; µ T, Σ T), we can derive closed-form solutions for when G has special structure, which enables us to apply gradient descent to solve this constrainedoptimization problem (examples below). With a closed form solution to equation 6, we can easily compute equation 5 using unconstrained-optimization as follows: The solution to equation 6 in the case of a single desired goal g ∈ R D is simply: More generally, multiple point goals help define optional end points for planning: where the agent only need reach one valid end point (see Fig. 8 for examples), formulated as: We can form a goal-set as a finite-length line segment, connecting point a ∈ R D to point b ∈ R D: The solution to equation 6 in the case of line-segment goals is: Proof: To solve equation 15 is to find which point along the line g line (u) maximizes N (·; µ T, Σ T) subject to the constraint 0 ≤ u ≤ 1: Since L u is convex, the optimal value u * is value closest to the unconstrained arg max of L u (u), subject to 0 ≤ u ≤ 1: We now solve for u * R: which gives us: B.1.3 MULTIPLE-LINE-SEGMENT GOAL-SET: More generally, we can combine multiple line-segments to form piecewise linear "paths" we wish to follow. By defining a path that connects points line-segment, select the optimal segment i * = arg max i L i u, and use the segment i *'s solution to u * to compute s * T. Examples shown in Fig. 9. Instead of a route or path, a user (or program) may wish to provide a general region the agent should go to, and state within that region being equally valid. Polygon regions (including both boundary and interior) offer closed form solution to equation 6 and are simple to specify. A polygon can be specified by an ordered sequence of vertices (p 0, p 1, . .., p N) ∈ R N ×2. Edges are then defined as the sequence of line-segments between successive vertices (and a final edge between first and last vertex): ((p 0, p 1),..., (p N −1, p N), (p N, p 0) ). Examples shown in Fig. 10 and 11. Solving equation 6 with a polygon has two cases: depending whether µ T is inside the polygon, or outside. If µ T lies inside the polygon, then the optimal value for s * T that maximizes N (s * T ; µ T, Σ T) is simply µ T: the mode of the Gaussian distribution. Otherwise, if µ T lies outside the polygon, then the optimal value s * T will lie on one of the polygon's edges, solved using B.1.3. The waypointer uses the CARLA planner's provided route to generate waypoints. In the constrainedbased planning goal likelihoods, we use this route to generate waypoints without interpolating between them. In the relaxed goal likelihoods, we interpolate this route to every 2 meters, and use the first 20 waypoints. As mentioned in the main text, one variant of our approach uses a "smart" waypointer. This waypointer simply removes nearby waypoints closer than 5 meters from the vehicle when a green light is observed in the measurements provided by CARLA, to encourage the agent to move forward, and removes far waypoints beyond 5 meters from the vehicle when a red light is observed in the measurements provided by CARLA. 
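Returning to the closed-form solutions derived above: for the line-segment case the objective is a concave quadratic in u, so the constrained optimum is the unconstrained optimum clipped to [0, 1], and the multi-segment (and, edge by edge, the polygon) case simply keeps the best candidate. A small sketch with hypothetical values:

```python
import numpy as np

def line_segment_goal(mu_T, Sigma_T, a, b):
    """arg max over u in [0,1] of N(a + u*(b - a); mu_T, Sigma_T).

    Maximizing the Gaussian is minimizing a convex quadratic in u, so the
    constrained optimum is the unconstrained optimum clipped to [0, 1]."""
    P = np.linalg.inv(Sigma_T)
    d = b - a
    u_star = float(d @ P @ (mu_T - a)) / float(d @ P @ d)
    u_star = np.clip(u_star, 0.0, 1.0)
    return a + u_star * d

def multi_segment_goal(mu_T, Sigma_T, vertices):
    """Best final state over a piecewise-linear path connecting waypoints."""
    def log_gauss(x):
        diff = x - mu_T
        return -0.5 * diff @ np.linalg.inv(Sigma_T) @ diff
    candidates = [line_segment_goal(mu_T, Sigma_T, p, q)
                  for p, q in zip(vertices[:-1], vertices[1:])]
    return max(candidates, key=log_gauss)

# Hypothetical example: the mode of the prior lies slightly off the route.
mu = np.array([3.0, 1.0]); Sig = np.eye(2)
waypoints = [np.array([0., 0.]), np.array([2., 0.]), np.array([4., 0.])]
print(multi_segment_goal(mu, Sig, waypoints))   # -> [3. 0.]
```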
Note that the performance differences between our method without the smart waypointer and our method with the smart waypointer are small: the only signal in the metrics is that the smart waypointer improves the vehicle's ability to stop for red lights, however, it is quite adept at doing so without the smart waypointer. Given the in-lane waypoints generated by CARLA's route planner, we use these to create Point goal sets, Line-Segment goal sets, and Polygon Goal-Sets, which respectively correspond to the (A) Final-State Indicator, (B) Line-Segment Final-State Indicator, and (C) Final-State Region Indicator described in Section 2.2. For (A), we simply feed the waypoints directly into the Final-State Indicator, which in a constrained optimization to ensure that. We also included the vehicle's current position in the goal set, in order to allow it to stop. The gradient-descent based optimization is then formed from combining Eq. 8 with Eq. 12. The gradient to the nearest goal of the final state of the partially-optimized plan encourage the optimization to move the plan closer to that goal. We used K = 10. We applied the same procedure to generate the goal set for the (B) Line Segment indicator, as the waypoints returned by the planner are ordered. Finally, for the (C) Final-State Region Indicator (polygon), we used the ordered waypoints as the "skeleton" of a polygon that surrounds. It was created by adding a two vertices for each point v t in the skeleton at a distance 1 meter from v t perpendicular to the segment connecting the surrounding points (v t−1, v t+1). This ed in a goal set G polygon ⊃ G line-segment, as it surrounds the line segments. The (F) Gaussian Final-State Mixture goal set was constructed in the same way as (A), and also used when the pothole costs were added. For the methods we implemented, the task is to drive the furthest road location from the vehicle's initial position. Note that this protocol more difficult than the one used in prior work;;;; , which has no distance guarantees between start positions and goals, and often in shorter paths. Visualizations of examples of our method deployed with different goal likelihoods are shown in Fig. 8, Fig. 9, Fig. 10, and Fig. 11. The architecture of q(S|φ) is shown in Table 5. C.2 DATASET Before training q(S|φ), we ran CARLA's expert in the dynamic world setting of Town01 to collect a dataset of examples. We have prepared the dataset of collected data for public release upon publication. We ran the autopilot in Town01 for over 900 episodes of 100 seconds each in the presence of 100 other vehicles, and recorded the trajectory of every vehicle and the autopilot's LIDAR observation. We randomized episodes to either train, validation, or test sets. We created sets of 60,701 train, 7586 validation, and 7567 test scenes, each with 2 seconds of past and 4 seconds of future position information at 10Hz. The dataset also includes 100 episodes obtained by following the same procedure in Town02. We designed a conditional imitation learning baseline that predicts the setpoint for the PID-controller. Each receives the same scene observations (LIDAR) and is trained with the same set of trajectories as our main method. It uses nearly the same architecture as that of the original CIL, except it outputs setpoints instead of controls, and also observes the traffic light information. We found it very effective for stable control on straightaways. 
When the model encounters corners, however, prediction is more difficult, as in order to successfully avoid the curbs, the model must implicitly plan a safe path. We found that using the traffic light information allowed it to stop more frequently. Static-world To compare against a purely model-based reinforcement learning algorithm, we propose a model-based reinforcement learning baseline. This baseline first learns a forwards dynamics model s t+1 = f (s t−3:t, a t) given observed expert data (a t are recorded vehicle actions). We use an MLP with two hidden layers, each 100 units. Note that our forwards dynamics model does not imitate the expert preferred actions, but only models what is physically possible. Together with the same LIDAR map χ our method uses to locate obstacles, this baseline uses its dynamics model to plan a reachability tree through the free-space to the waypoint while avoiding obstacles. The planner opts for the lowest-cost path that ends near the goal C(s 1:T ; g T) = ||s T − g T || 2 + T t=1 c(s t), where cost of a position is determined by c(s t) = 1.51(s t < 1 meters from any obstacle) + 0.751(1 <= s t < 2 meters from any obstacle) +... We plan forwards over 20 time steps using a breadth-first search over CARLA steering angle {−0.3, −0.1, 0., 0.1, 0.3}, noting valid steering angles are normalized to [−1, 1], with constant throttle at 0.5, noting the valid throttle range is. Our search expands each state node by the available actions and retains the 50 closest nodes to the waypoint. The planned trajectory efficiently reaches the waypoint, and can successfully plan around perceived obstacles to avoid getting stuck. To convert the LIDAR images into obstacle maps, we expanded all obstacles by the approximate radius of the car, 1.5 meters. We use the same setup as the Static-MBRL method, except we add a discrete temporal dimension to the search space (one R 2 spatial dimension per T time steps into the future). All static obstacles remain static, however all LIDAR points that were known to collide with a vehicle are now removed: and replaced at every time step using a constant velocity model of that vehicle. We found that the main failure mode was due to both to inaccuracy in constant velocity prediction as well as the model's inability to perceive lanes in the LIDAR. The vehicle would sometimes wander into the opposing traffic's lane, having failed to anticipate an oncoming vehicle blocking its path. In the decoy waypoints experiment, the perturbation distribution is N (0, σ = 8m): each waypoint is perturbed with a standard deviation of 8 meters. One failure mode of this approach is when decoy waypoints lie on a valid off-route path at intersections, which temporarily confuses the planner about the best route. Additional visualizations are shown in Fig. 14. Besides using our model to make a best-effort attempt to reach a user-specified goal, the fact that our model produces explicit likelihoods can also be leveraged to test the reliability of a plan by evaluating whether reaching particular waypoints will in human-like behavior or not. This capability can be quite important for real-world safety-critical applications, such as autonomous driving, and can be used to build a degree of fault tolerance into the system. We designed a classification experiment to evaluate how well our model can recognize safe and unsafe plans. We planned our model to known good waypoints (where the expert actually went) and known bad waypoints (off-road) on 1650 held-out test scenes. 
We used the planning criterion to classify these as good and bad plans and found that we can detect these bad plans with 97.5% recall and 90.2% precision. This indicates imitative models could be effective in estimating the reliability of plans. We determined a threshold on the planning criterion by single-goal planning to the expert's final location on offline validation data and setting it to the criterion's mean minus one stddev. Although a more intelligent calibration could be performed by analyzing the information retrieval statistics on the offline validation, we found this simple calibration to yield reasonably good performance. We used 1650 test scenes to perform classification of plans to three different types of waypoints 1) where the expert actually arrived at time T (89.4% reliable), 2) waypoints 20m ahead along the waypointer-provided route, which are often near where the expert arrives (73.8% reliable) 3) the same waypoints from 2), shifted 2.5m off of the road (2.5% reliable). This shows that our learned model exhibits a strong preference for valid waypoints. Therefore, a waypointer that provides expert waypoints via 1) half of the time, and slightly out-of-distribution waypoints via 3) in the other half, an "unreliable" plan classifier achieves 97.5% recall and 90.2% precision. The existence of both observation noise and uncertain/out-of-distribution observations is an important practical issue for autonomous vehicles. Although our current method only conditions on our current observation, several extensions could help mitigate the negative effects of both and. For, a Bayesian filtering formulation is arguably most ideal, to better estimate (and track) the location of static and dynamic obstacles under noise. However, such high-dimensional filtering are often intractable, and might necessitate approximate Bayesian deep learning techniques, RNNs, or frame stacking, to benefit from multiple observations. Addressing would ideally be done by placing a prior over our neural network weights, to derive some measure of confidence in our density estimation of how expert each plan is, such that unfamiliar scenes generate large uncertainty on our density estimate that we could detect, and react cautiously (pessimistically) to. One way to address the situation if the distributions are very different is to adopt an ensembling approach in order for the method to determine when the inputs are out of distribution -the ensemble will usually have higher variance (i.e. disagree) when each element of the ensemble is provided with an out-of-distribution input. For instance, this variance could be used as a penalization in the planning criterion. As discussed, our model assumes access to the traffic-light state provided by the simulator, which we call λ. However, access to this state would be noisy in practice, because it relies on a sensor-based (usually image-based) detection and classification module. We performed an experiment to assess robustness to noise in λ: we simulated noise in λ by "flipping" the light state with 20% probability, corresponding to a light state detector that has 80% accuracy on average. "Flipping" means that if the light is green, then changingλ to indicate red, and if the light is red, then changing λ to indicate green. We performed this following the experimental method of "Region Final-St. Indicator S." in dynamic Town02, and ran it with three separate seeds. The means and their standard errors are reported in Table 6. 
The we draw is that the approach can still achieve success most of the time, although it tends to violate red-lights more often. Qualitatively, we observed the ing behavior near intersections to sometimes be "jerky", with the model alternating between stopping and non-stopping plans. We hypothesize that the model itself could be made more robust if the noise in λ was also present in the training data. Table 6: We evaluate the effect of noise in the traffic-light state (λ) on CARLA's Dynamic Navigation task. Noise in the light state predictably degrades overall and red-light performance, but not to the point of preventing the method from operating at all. Success Ran Red Light Wrong lane Off road Region Final-St. Indicator S. (original) 88% ± 3.3 2.57% ± 0.04 0.49% ± 0.32 2.6% ± 1.06 Region Final-St. Indicator S. (noisy λ) 76% ± 5.0 34.8% ± 2.4 0.15% ± 0.04 1.79% ± 0.34 We simulated potholes in the environment by randomly inserting them in the cost map near each waypoint i with offsets distributed N i (µ= [−15m, 0m], Σ = diag([1, 0.01])), (i.e. mean-centered on the right side of the lane 15m before each waypoint). We inserted pixels of root cost −1e3 in the cost map at a single sample of each N i, binary-dilated the cost map by 1 /3 of the lane-width (spreading the cost to neighboring pixels), and then blurred the cost map by convolving with a normalized truncated Gaussian filter of σ = 1 and truncation width 1. See Fig. 15 for a visualization of our baseline methods. In order to tune the hyperparameter of the unconstrained likelihoods, we undertook the following binary-search procedure. When the prior frequently overwhelmed the posterior, we set ← 0.2, to yield tighter covariances, and thus more penalty for failing to satisfy the goals. When the posterior frequently overwhelmed the prior, we set ← 5, to yield looser covariances, and thus less penalty for failing to satisfy the goals. We executed this process three times: once for the "Gaussian Final-State Mixture" experiments (Section 4), once for the "Noise Robustness" Experiments (Section 4.1), and once for the pothole-planning experiments (Section 4.2). Note that the Constrained-Goal Likelihoods introduced no hyperparameters to tune.
In this paper, we propose Imitative Models to combine the benefits of IL and goal-directed planning: probabilistic predictive models of desirable behavior able to plan interpretable expert-like trajectories to achieve specified goals.
302
scitldr
Learning communication via deep reinforcement learning has recently been shown to be an effective way to solve cooperative multi-agent tasks. However, learning which communicated information is beneficial for each agent's decision-making remains a challenging task. In order to address this problem, we introduce a fully differentiable framework for communication and reasoning, enabling agents to solve cooperative tasks in partially-observable environments. The framework is designed to facilitate explicit reasoning between agents, through a novel memory-based attention network that can learn selectively from its past memories. The model communicates through a series of reasoning steps that decompose each agent's intentions into learned representations that are used first to compute the relevance of communicated information, and second to extract information from memories given newly received information. By selectively interacting with new information, the model effectively learns a communication protocol directly, in an end-to-end manner. We empirically demonstrate the strength of our model in cooperative multi-agent tasks, where inter-agent communication and reasoning over prior information substantially improves performance compared to baselines. Communication is one of the fundamental building blocks for cooperation in multi-agent systems. The ability to effectively represent and communicate information valuable to a task is especially important in multi-agent reinforcement learning (MARL). Apart from learning what to communicate, it is critical that agents learn to reason based on the information communicated to them by their teammates. Such a capability enables agents to develop sophisticated coordination strategies that would be invaluable in application scenarios such as search-and-rescue for multi-robot systems , swarming and flocking with adversaries , multiplayer games (e.g., StarCraft, , DoTA, , and autonomous vehicle planning, Building agents that can solve complex cooperative tasks requires us to answer the question: how do agents learn to communicate in support of intelligent cooperation? Indeed, humans inspire this question as they exhibit highly complex collaboration strategies, via communication and reasoning, allowing them to recognize important task information through a structured reasoning process, (; ;). Significant progress in multiagent deep reinforcement learning (MADRL) has been made in learning effective communication (protocols), through the following methods: (i) broadcasting a vector representation of each agent's private observations to all agents , (ii) selective and targeted communication through the use of soft-attention networks, , that compute the importance of each agent and its information, , and (iii) communication through a shared memory channel , which allows agents to collectively learn and contribute information at every time instant. The architecture of implements communication by enabling agents to communicate intention as a learned representation of private observations, which are then integrated in the hidden state of a recurrent neural network as a form of agent memory. One downside to this approach is that as the communication is constrained in the neighborhood of each agent, communicated information does not enrich the actions of all agents, even if certain agent communications may be critical for a task. 
For example, if an agent from afar has covered a landmark, this information would be beneficial to another agent that has a trajectory planned towards the same landmark. In contrast, Memory Driven Multi-Agent Deep Deterministic Policy Gradient (MD-MADDPG), , implements a shared memory state between all agents that is updated sequentially after each agent selects an action. However, the importance of each agent's update to the memory in MD-MADDPG is solely decided by its interactions with the memory channel. In addition, the sequential nature of updating the memory channel restricts the architecture's performance to 2-agent systems. Targeted Multi-Agent Communication (TarMAC), , uses soft-attention for the communication mechanism to infer the importance of each agent's information, however without the use of memory in the communication step. The paradigm of using relations in agent-based reinforcement learning was proposed by through multi-headed dot-product attention (MHDPA) . The core idea of relational reinforcement learning (RRL) combines inductive logic programming (; Džeroski et al., 2001) and reinforcement learning to perform reasoning steps iterated over entities in the environment. Attention is a widely adopted framework in Natural Language Processing (NLP) and Visual Question Answering (VQA) tasks (b; a;) for computing these relations and interactions between entities. The mechanism generates an attention distribution over the entities, or more simply a weighted value vector based on importance for the task at hand. This method has been adopted successfully in state-of-the-art for Visual Question Answering (VQA) tasks (b), (a), and more recently , demonstrating the robustness and generalization capacity of reasoning methods in neural networks. In the context of multi-agent cooperation, we draw inspiration from work in soft-attention to implement a method for computing relations between agents, coupled with a memory based attention network from Compositional Attention Networks (MAC) , yielding a framework for a memory-based communication that performs attentive reasoning over new information and past memories. Concretely, we develop a communication architecture in MADRL by leveraging the approach of RRL and the capacity to learn from past experiences. Our architecture is guided by the belief that a structured and iterative reasoning between non-local entities should enable agents to capture higherorder relations that are necessary for complex problem-solving. To seek a balance between computational efficiency and adaptivity to variable team sizes, we exploit the soft-attention as the base operation for selectively attending to an entity or information. To capture the information and histories of other entities, and to better equip agents to make a deliberate decision, we separate out the attention and reasoning steps. The attention unit informs the agent of which entities are most important for the current time-step, while the reasoning steps use previous memories and the information guided by the attention step to extract the shared information that is most relevant. This explicit separation in communication enables agents to not only place importance on new information from other agents, but to selectively choose information from its past memories given new information. This communication framework is learned in an end-to-end fashion, without resorting to any supervision, as a of task-specific rewards. 
Our empirical study demonstrates the effectiveness of our novel architecture to solve cooperative multi-agent tasks, with varying team sizes and environments. By leveraging the paradigm of centralized learning and decentralized execution, alongside communication, we demonstrate the efficacy of the learned cooperative strategies. We consider a team of N agents and model it as a cooperative multi-agent extension of a partially observable Markov decision process (POMDP) . We characterize this POMDP by the set of state values, S, describing all the possible configurations of the agents in the environment, control actions {A 1, A 2, ..., A N}, where each agent i performs an action A i, and set of observations {O 1, O 2, ..., O N}, where each agent i's local observation, O i is not shared globally. Actions are selected through a stochastic policy π π π θi: with policy parameters θ i, and a new state is generated by the environment according to the transition function T: S × A 1 ×... × A N → S. At every step, the environment generates a reward, r i: S × A i → R, for each agent i and a new local observation o i: S → O i. The goal is to learn a policy such that each agent maximizes the total expected return where T is the time horizon and γ is the discount factor. We choose the deterministic policy gradient algorithms for all our training. In this framework, the parameters θ of the policy, µ µ µ θ, are updated such that the objective J(θ) = E s∼p π π π,a∼µ θ [R(s, a)] and the policy gradient, (see Appendix A), is given by: Deep Deterministic Policy Gradient (DDPG) is an adaptation of DPG where the policy µ µ µ and critic Q µ µ µ are approximated as neural networks. DDPG is an off-policy method, where experience replay buffers, D, are used to sample system trajectories which are collected throughout the training process. These samples are then used to calculate gradients for the policy and critic networks to stabilize training. In addition, DDPG makes use of a target network, similar to Deep Q-Networks (DQN) , such that the parameters of the primary network are updated every few steps, reducing the variance in learning. Recent work proposes a multi-agent extension to the DDPG algorithm, socalled MADDPG, adapted through the use of centralized learning and decentralized execution. Each agent's policy is instantiated similar to DDPG, as µ µ µ θi (a i |o i) conditioned on its local observation o i. The major underlying difference is that the critic is centralized such that it estimates the joint action- We operate under this scheme of centralized learning and decentralized execution of MADDPG, , as the critics are not needed during the execution phase. We introduce a communication architecture that is an adaptation of the attention mechanism of the Transformer network, , and the structured reasoning process used in the MAC Cell, . The framework holds memories from previous time-steps separately for each agent, to be used for reasoning on new information received by communicating teammates. Through a structured reasoning, the model interacts with memories and communications received from other agents to produce a memory for the agent that contains the most valuable information for the task at hand. An agent's memory is then used to predict the action of the agent, such that policy is given by µ µ µ θ: O i × M i → A i, where M i is the memory of agent i. 
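Several expressions referenced in this paragraph, the total expected return, the deterministic policy gradient, and the centralized action-value function, can be written out as follows. These are the standard DPG/MADDPG forms consistent with the surrounding definitions; the exact symbols (e.g., the replay distribution) are our assumption rather than quotations from the paper.

```latex
% Total expected return for agent i (T: time horizon, \gamma: discount factor)
R_i = \sum_{t=0}^{T} \gamma^{t} r_i^{t}

% Deterministic policy gradient (off-policy form, samples drawn from the replay buffer D)
\nabla_{\theta} J(\theta) =
  \mathbb{E}_{s \sim \mathcal{D}}\!\left[
    \nabla_{\theta}\, \boldsymbol{\mu}_{\theta}(s)\,
    \nabla_{a} Q^{\boldsymbol{\mu}}(s, a)\big|_{a = \boldsymbol{\mu}_{\theta}(s)}
  \right]

% MADDPG-style centralized action-value function and target for agent i,
% where x collects the observations of all agents
Q^{\boldsymbol{\mu}}_i(\mathbf{x}, a_1, \ldots, a_N), \qquad
y = r_i + \gamma\, Q^{\boldsymbol{\mu}'}_i(\mathbf{x}', a_1', \ldots, a_N')
      \big|_{a_j' = \boldsymbol{\mu}'_j(o_j')}
```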
To summarize, before any agent takes an action, the agent performs four operations via the following architectural features: Thought Unit, where each agent encodes its local observations into appropriate representations for communication and action selection, Question Unit, which is used to generate the importance of all information communicated to the agent, Memory Unit, which controls the final message to be used for predicting actions by combining new information from other agents with its own memory, through the attention vector generated in the Question unit, and Action Unit, that predicts the action. In Figure 1 we illustrate our proposed Structured Attentive Reasoning Network (SARNet). The thought unit at each time-step t transforms an agent's private observations into three separate vector representations: query, q Figure 1: SARNet consists of a thought unit, question unit, memory unit and action unit. The thought unit operates over the current observation, to represent the information for the communication and action step. The question unit attends to the information from communicating agents, representing the importance of each neighboring agent's information. The memory unit, guided by the Question Unit, performs a reasoning operation to extract relevant information from communicated information, which is then integrated into memory. The Action Unit, processes the information from the memory and the observation encoding to predict an action for the current time-step. The query is used by each agent i to inform the Question Unit which aspects of the communicated information are relevant to the current step. The key and value are broadcast to all communicating agents. The key vector is used in the Question Unit to infer the relevance of the broadcasting agent to the current reasoning step, and the value vector is subsequently used to update the information into the memory of agent i. The ing information broadcasted by each agent i to all the cooperating agents is then: This component is designed to capture the importance of each agent in the environment, including the reasoning agent i, similar to the self-attention mechanism in . In the attention mechanism used in , the attention computes a weight for each entity through the use of the sof tmax. However, we generate the attention mechanism over all individual representations in the vector for each entity, using Eq. 5. This allows the agent to compute the importance of each individual communicated information from other agents for a particular timestep. This is performed through a soft attention-based weighted average using the query generated by agent i, and the set of keys, K, that contain the keys, {k} from all agents. The recipient agent, i, upon receiving the set of keys, K, from all agents, computes the interaction with every agent through a Hadamard product,, of its own query vector, q i and all the keys, k j, in the set K. qh, is then applied to every interaction, qh t ij, that defines the query targeted for each communicating agent j, including self, to produce a scalar defining the weight of the particular agent. A sof tmax operation is then used over the new scalars for each agent to generate the weights specific to each agent. The use of the linear transformation in Eq. 
5 allows the model to specify an importance not only for each individual agent, but more specifically it learns to assign an importance to each element in the information vector, as compared to the approach used in standard soft-attention based networks, such as Transformer , which only perform a dot-product computation between the query and keys. The memory unit is responsible for decomposing the set of new values, V, that contain, {v into relevant information for the current time-step. Specifically, it computes the interaction of this new knowledge with the memory aggregated from the preceding time-step. The new retrieved information, from the memory and the values, is then measured in terms of relevance based on the importance of each agent generated in the Question unit. As a first step, an agent computes a direct interaction between the new values from other agents, v j ∈ V, and its own memory, m i . This step performs a relative reasoning between newly received information and the memory from the previous step. This allows the model to potentially highlight information from new communications, given information from prior memory. The new interaction per agent j is evaluated relative to the memory, mi t ij, and current knowledge, V, is then used to compute a new representation for the final attention stage, through a feed-forward network, W . This enables the model to reason independently on the interaction between new information and previous memory, and new information alone. Finally, we aggregate the important information, mr ij, based on the weighting calculated in the Question unit, in. This step generates a weighted average of the new information, mr ij, gathered from the reasoning process, based on the attention values computed in. A linear transformation, W, is applied to the of the reasoning operation to prepare the information for input to the action cell. The action unit, as the name implies, predicts the final action of the agent, i, based on the new memory, Eq. 10, computed from the Memory unit and an encoding, e t i, Eq. 2, of its local observation, o i, from the Thought unit. where ϕ a θ a i is a multi-layer perceptron (MLP) parameterised by θ a i. We incorporate the centralized learning-decentralized execution framework from to implement an actor-critic model to learn the policy parameters for each agent. We use parameter sharing across agents to ease the training process. The policy network µ θi produces actions a i, which is then evaluated by the critic Q µ θ i, which aims to minimize the loss function., are the observations when actions {a 1, a 2, ..., a N} are performed, the experience replay buffer, D, contains the tuples (x, x, m 1, m 2, ..., m N, a 1, a 2, ...a N, r 1, r 2, ...r N), and the target Q-value is defined as y. To keep the training stable, delayed updates to the target network Q µ θ i is implemented, such that current parameters of Q µ θ i, are only copied periodically. The goal of the loss function, L(θ i) is to minimize the expectation of the difference between the current and the target action-state function. The gradient of the ing policy, with communication, to maximize the expectation of the rewards, J(θ i) = E[R i], can be written as: We evaluate our communication architecture on OpenAI's multi-agent particle environment, , a two-dimensional environment consisting of agents and landmarks with cooperative tasks. Each agent receives a private observation that includes only partial observations of the environment depending on the task. 
The agents act independently and collect a shared global reward for cooperative tasks. We consider different experimental scenarios where a team of agents cooperate to complete tasks against static goals, or compete against non-communicating agents. We compare our communication architecture, SARNet, to communication mechanisms of CommNet, , TarMAC and the non-communicating policy of MADDPG, . In this environment, N agents need to cooperate to reach L landmarks. Each agent observes the relative positions of the neighboring agents and landmarks. The agents are penalized if they collide with each other, and positively rewarded based on the proximity to the nearest landmark. At each time-step, the agent receives a reward of −d, where d is the distance to the closest landmark, and penalized a reward of −1 if a collision occurs with another agent. In this cooperative task, all agents strive to maximize a shared global reward. Performance is evaluated per episode by average reward, number of collisions, and occupied landmarks. Our model outperforms all the baselines achieving a higher reward through lower metrics of average distance to the landmark and collisions for N = L = 3 and N = L = 6 agents as shown in Table 1. We hypothesize that in an environment with more agents, the effect of retaining a memory of previous communications from other agents allows the policy to make a more informed decision. This leads to a significant reduction in collisions, and lower distance to the landmark. Our architecture outperforms TarMAC, which uses a similar implementation of soft-attention, albeit without a memory, and computing the communication for the next time-step, unlike SARNet, where the communication mechanism in time t, shapes the action, a t i for the current time-step. We also show the attention metrics for agent 0 at a single time-step during the execution of the tasks with N = 6 agents in Fig. 3a. Table 1: Partially observable cooperative navigation. For N = L = 3, the agents can observe the nearest 1 agent and 2 landmarks, and for N = L = 6, the agents can observe, 2 agents and 3 landmarks. Number of collisions between agents, and average distance at the end of the episode are measured. This task involves a slower moving team of N cooperating agents chasing M faster moving agents in an environment with L static landmarks. Each agent receives its own local observation, where it can observe the nearest prey, predator, and landmarks. Predators are rewarded by +10 every time they collide with a prey, and subsequently the prey is penalized −10. Since the environment is unbounded, the prey are also penalized for moving out of the environment. Predators are trained using the SARNet, TarMAC, CommNet, and MADDPG and prey are trained using DDPG. Due to the nature of dynamic intelligent agents competing with the communicating agents, the complexity of the task increases substantially. As shown in Fig. 3b, we observe that agent 0 learns to place a higher importance on agent 1's information over itself. This dynamic nature of the agent, in selecting which information is beneficial, coupled with extracting relevant information from the memory, enables our architecture to substantially outperform the baseline methods, Table 2a. Table 2: In 2a, N = L = 3, M = 1, the agents can observe the nearest 1 predator, 1 prey and 2 landmarks, and for N = L = 6, M = 2, the agents can observe, 3 predators, 1 prey and 3 landmarks. Number of prey captures by the predators per episode is measured. 
For Table 2b, we measure the avg. success rate of the communicating agents N, to reach the target landmark, and the same for the adversary M. Larger values for N are desired, and lower for M. (a) Partially observable predatory-prey Table 2b, both TarMAC and CommNet agents choose to stay far away from the target landmark, such that the adversarial agent, follow them. In contrast, SARNet and MADDPG, learn to spread out over all the landmarks, with a higher adversarial score, but achieving an overall higher mean reward for the complete task. (a) Attention for agent 0, at a single time-step for cooperative navigation environment for N = L = 6. (b) Attention for agent 0, at a single time-step in a predator-prey environment, for N = 6, and M = 2. Figure 3: Visualizing attention predicted by agent i over all agents during the initial stages of the task. We observe that agents learn to devalue their own information if it is more advantageous to place importance on information from other agents when reading and writing to the memory unit. We perform additional benchmarks showcasing the importance of memory in our communication architecture. By introducing noise in the memory at each time-step, we evaluate the performance of SARNet on partially-observable cooperative navigation and predator-prey environments. We find a general worsening of the for both tasks as demonstrated by the metrics in Table 3. However, we note that in spite of the corrupted memory, the agent's policy is robust to adversarial noise, as it learns to infer the important information from the thorough reasoning process in the communication. Table 3: Performance metrics when a Gaussian Noise of mean 0 and variance 1 is introduced in the memory (MC-SARNet) during execution of the tasks for predator-prey and cooperative navigation. We have introduced a novel framework, SARNet, for communication in multi-agent deep RL which performs a structured attentive reasoning between agents to improve coordination skills. Through a decomposition of the representations of communication into reasoning steps, our agents exceed baseline methods in overall performance. Our experiments demonstrate key benefits of gathering insights from its own memories, and the internal representations of the information available to agent. The communication architecture is learned end-to-end, and is capable of computing taskrelevant importance of each piece of computed information from cooperating agents. While this multi-agent communication mechanism shows promising , we believe that we can further adapt this method to scale to a larger number of agents, through a gating mechanism to initiate communication, and decentralized learning. Policy gradient (PG) methods are the popular choice for a variety of reinforcement learning (RL) tasks in the context described above. In the PG framework, the parameters θ of the policy are directly adjusted to maximize the objective J(θ) = E s∼p π π π,a∼π π π θ [R], by taking steps in the direction of ∇ θ J(θ), where p π π π, is the state distribution, s is the sampled state and a is the action sampled from the stochastic policy. Through learning a value function for the state-action pair, Q π π π (s, a), which estimates how good an optimal action a is for an agent in state s, the policy gradient is then written as, : Several variations of PG have been developed, primarily focused on techniques for estimating Q π π π. 
For example, the REINFORCE algorithm uses a rather simplistic estimate of the return, computed as the discounted cumulative reward of an episode, R_t = \sum_{i=t}^{T} \gamma^{i-t} r_i, with discount factor γ. When temporal-difference learning is used, the learned function Q^π(s, a) is described as the critic, which leads to several different actor-critic algorithms, where the actor can be a stochastic policy π_θ or a deterministic policy μ_θ for predicting actions. Hyperparameters. We use the batch-synchronous method for off-policy gradient methods for all the experiments. Each environment instance has a separate replay buffer of size 10^6. All policies are trained using the Adam optimizer with a learning rate of 5 × 10^{-4}, a discount factor γ = 0.96, and τ = 0.001 for the soft update of the target network. The query, key, and value vectors share a common first layer of size 128 and are subsequently projected linearly to 32 dimensions. Batch normalization and dropout are applied in the communication channel. The observation and action units have two hidden layers of 128 units each with ReLU activations. The critic is implemented as a 2-layer MLP with 1024 and 512 units. We use a batch size of 128, and updates are initiated after accumulating experiences for 1280 episodes. Exploration noise is implemented through an Ornstein-Uhlenbeck process with θ = 0.15 and σ = 0.3. All experimental results are averaged over 3 separate runs with different random seeds. Baseline Hyperparameters. Policies for TarMAC, CommNet, and MADDPG are instantiated as MLPs, similar to SARNet. All layers in the policy network have 128 units, while the critic is a 2-layer network with 1024 and 512 units. Both TarMAC and CommNet are implemented with 1-stage communication. For TarMAC, the queries and keys have 16 units and the values 32 units, as described in the original work. All other training parameters are kept the same as in the SARNet implementation.
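To make the Question Unit's attention step concrete, the following is a minimal PyTorch sketch (our illustration, not the authors' code; the 32-dimensional key size follows the hyperparameters above, and all other names and shapes are assumptions). It forms the Hadamard product between agent i's query and every broadcast key, maps each interaction to a scalar with a learned linear layer (the W_qh of the text), and normalizes over agents with a softmax.

```python
import torch
import torch.nn as nn

class QuestionUnit(nn.Module):
    """Illustrative sketch: per-agent attention weights from query/key interactions."""
    def __init__(self, key_dim=32):
        super().__init__()
        self.w_qh = nn.Linear(key_dim, 1)  # maps each element-wise interaction to a scalar

    def forward(self, query_i, keys):
        # query_i: (key_dim,) for the receiving agent i
        # keys:    (num_agents, key_dim), one key per agent (including agent i itself)
        interactions = query_i.unsqueeze(0) * keys    # Hadamard product q_i (.) k_j
        scores = self.w_qh(interactions).squeeze(-1)  # one scalar per agent j
        return torch.softmax(scores, dim=0)           # attention weights over agents

# hypothetical usage: 6 agents, 32-dimensional keys
attn_weights = QuestionUnit()(torch.randn(32), torch.randn(6, 32))
```

In contrast to plain dot-product attention, the learned linear map over the element-wise interaction lets the model weight each element of the communicated vector before producing the per-agent score, which is the distinction emphasized in the text.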
Novel architecture of a memory-based attention mechanism for multi-agent communication.
303
scitldr
In this paper, we introduce Symplectic ODE-Net (SymODEN), a deep learning framework which can infer the dynamics of a physical system from observed state trajectories. To achieve better generalization with fewer training samples, SymODEN incorporates appropriate inductive bias by designing the associated computation graph in a physics-informed manner. In particular, we enforce Hamiltonian dynamics with control to learn the underlying dynamics in a transparent way which can then be leveraged to draw insight about relevant physical aspects of the system, such as mass and potential energy. In addition, we propose a parametrization which can enforce this Hamiltonian formalism even when the generalized coordinate data is embedded in a high-dimensional space or we can only access velocity data instead of generalized momentum. This framework, by offering interpretable, physically-consistent models for physical systems, opens up new possibilities for synthesizing model-based control strategies. In recent years, deep neural networks have become very accurate and widely used in many application domains, such as image recognition , language comprehension , and sequential decision making . To learn underlying patterns from data and enable generalization beyond the training set, the learning approach incorporates appropriate inductive bias by promoting representations which are simple in some sense. It typically manifests itself via a set of assumptions, which in turn can guide a learning algorithm to pick one hypothesis over another. The success in predicting an outcome for previously unseen data then depends on how well the inductive bias captures the ground reality. Inductive bias can be introduced as the prior in a Bayesian model, or via the choice of computation graphs in a neural network. In a variety of settings, especially in physical systems, wherein laws of physics are primarily responsible for shaping the outcome, generalization in neural networks can be improved by leveraging underlying physics for designing the computation graphs. Here, by leveraging a generalization of the Hamiltonian dynamics, we develop a learning framework which exploits the underlying physics in the associated computation graph. Our show that incorporation of such physics-based inductive bias offers insight about relevant physical properties of the system, such as inertia, potential energy, total conserved energy. These insights, in turn, enable a more accurate prediction of future behavior and improvement in out-of-sample behavior. Furthermore, learning a physically-consistent model of the underlying dynamics can subsequently enable usage of model-based controllers which can provide performance guarantees for complex, nonlinear systems. In particular, insight about kinetic and potential energy of a physical system can be leveraged to synthesize appropriate control strategies, such as the method of controlled Lagrangian and interconnection & damping assignment , which can reshape the closed-loop energy landscape to achieve a broad range of control objectives (regulation, tracking, etc.). Inferring underlying dynamics from time-series data plays a critical role in controlling closed-loop response of dynamical systems, such as robotic manipulators and building HVAC systems . Although the use of neural networks towards identification and control of dynamical systems dates back to more than three decades , recent advances in deep neural networks have led to renewed interest in this domain. 
learn dynamics with control from highdimensional observations (raw image sequences) using a variational approach and synthesize an iterative LQR controller to control physical systems by imposing a locally linear constraint. and adopt a variational approach and use recurrent architectures to learn state-space models from noisy observation. SE3-Nets learn SE transformation of rigid bodies from point cloud data. use partial information about the system state to learn a nonlinear state-space model. However, this body of work, while attempting to learn state-space models, does not take physics-based priors into consideration. The main contribution of this work is two-fold. First, we introduce a learning framework called Symplectic ODE-Net (SymODEN) which encodes a generalization of the Hamiltonian dynamics. This generalization, by adding an external control term to the standard Hamiltonian dynamics, allows us to learn the system dynamics which conforms to Hamiltonian dynamics with control. With the learned structured dynamics, we are able to synthesize controllers to control the system to track a reference configuration. Moreover, by encoding the structure, we can achieve better predictions with smaller network sizes. Second, we take one step forward in combining the physics-based prior and the data-driven approach. Previous approaches require data in the form of generalized coordinates and their derivatives up to the second order. However, a large number of physical systems accommodate generalized coordinates which are non-Euclidean (e.g., angles), and such angle data is often obtained in the embedded form, i.e., (cos q, sin q) instead of the coordinate (q) itself. The underlying reason is that an angular coordinate lies on S 1 instead of R 1. In contrast to previous approaches which do not address this aspect, SymODEN has been designed to work with angle data in the embedded form. Additionally, we leverage differentiable ODE solvers to avoid the need for estimating second-order derivatives of generalized coordinates. Lagrangian dynamics and Hamiltonian dynamics are both reformulations of Newtonian dynamics. They provide novel insights into the laws of mechanics. In these formulations, the configuration of a system is described by its generalized coordinates. Over time, the configuration point of the system moves in the configuration space, tracing out a trajectory. Lagrangian dynamics describes the evolution of this trajectory, i.e., the equations of motion, in the configuration space. Hamiltonian dynamics, however, tracks the change of system states in the phase space, i.e. the product space of generalized coordinates q = (q 1, q 2, ..., q n) and generalized momenta p = (p 1, p 2, ..., p n). In other words, Hamiltonian dynamics treats q and p on an equal footing. This not only provides symmetric equations of motion but also leads to a whole new approach to classical mechanics . Hamiltonian dynamics is also widely used in statistical and quantum mechanics. In Hamiltonian dynamics, the time-evolution of a system is described by the Hamiltonian H(q, p), a scalar function of generalized coordinates and momenta. Moreover, in almost all physical systems, the Hamiltonian is the same as the total energy and hence can be expressed as where the mass matrix M(q) is symmetric positive definite and V (q) represents the potential energy of the system. Correspondingly, the time-evolution of the system is governed bẏ where we have dropped explicit dependence on q and p for brevity of notation. 
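For reference, the Hamiltonian of such systems and the time-evolution it induces take the following standard form, consistent with the definitions above:

```latex
H(\mathbf{q}, \mathbf{p}) = \tfrac{1}{2}\, \mathbf{p}^{\top} M^{-1}(\mathbf{q})\, \mathbf{p} + V(\mathbf{q}),
\qquad
\dot{\mathbf{q}} = \frac{\partial H}{\partial \mathbf{p}}, \qquad
\dot{\mathbf{p}} = -\,\frac{\partial H}{\partial \mathbf{q}},

% and along any trajectory of the unforced system the total energy is conserved:
\dot{H} = \Big(\frac{\partial H}{\partial \mathbf{q}}\Big)^{\!\top}\dot{\mathbf{q}}
        + \Big(\frac{\partial H}{\partial \mathbf{p}}\Big)^{\!\top}\dot{\mathbf{p}} = 0 .
```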
Moreover, sincė the total energy is conserved along a trajectory of the system. The RHS of Equation is called the symplectic gradient of H, and Equation shows that moving along the symplectic gradient keeps the Hamiltonian constant. In this work, we consider a generalization of the Hamiltonian dynamics which provides a means to incorporate external control (u), such as force and torque. As external control is usually affine and only influences changes in the generalized momenta, we can express this generalization as where the input matrix g(q) is typically assumed to have full column rank. For u = 0, the generalized dynamics reduces to the classical Hamiltonian dynamics and the total energy is conserved; however, when u = 0, the system has a dissipation-free energy exchange with the environment. Once we have learned the dynamics of a system, the learned model can be used to synthesize a controller for driving the system to a reference configuration q. As the proposed approach offers insight about the energy associated with a system, it is a natural choice to exploit this information for synthesizing controllers via energy shaping . As energy is a fundamental aspect of physical systems, reshaping the associated energy landscape enables us to specify a broad range of control objectives and synthesize nonlinear controllers with provable performance guarantees. If rank(g(q)) = rank(q), the system is fully-actuated and we have control over any dimension of "acceleration" inṗ. For such fully-actuated systems, a controller u(q, p) = β β β(q) + v(p) can be synthesized via potential energy shaping β β β(q) and damping injection v(p). For completeness, we restate this procedure using our notation. As the name suggests, the goal of potential energy shaping is to synthesize β β β(q) such that the closed-loop system behaves as if its time-evolution is governed by a desired Hamiltonian H d. With this, we have where the difference between the desired Hamiltonian and the original one lies in their potential energy term, i.e. In other words, β β β(q) shape the potential energy such that the desired Hamiltonian H d (q, p) has a minimum at (q, 0). Then, by substituting Equation and Equation into Equation, we get Thus, with potential energy shaping, we ensure that the system has the lowest energy at the desired reference configuration. Furthermore, to ensure that trajectories actually converge to this configuration, we add an additional damping term 2 given by However, for underactuated systems, potential energy shaping alone cannot 3 drive the system to a desired configuration. We also need kinetic energy shaping for this purpose . Remark If the desired potential energy is chosen to be a quadratic of the form the external forcing term can be expressed as This can be interpreted as a PD controller with an additional energy compensation term. In this section, we introduce the network architecture of Symplectic ODE-Net. In Subsection 3.1, we show how to learn an ordinary differential equation with a constant control term. In Subsection 3.2, we assume we have access to generalized coordinate and momentum data and derive the network architecture. In Subsection 3.3, we take one step further to propose a data-driven approach to deal with data of embedded angle coordinates. In Subsection 3.4, we put together the line of reasoning introduced in the previous two subsections to propose SymODEN for learning dynamics on the hybrid space R n × T m. 
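For reference, the control-affine generalization and the energy-shaping controller discussed above can be written out as follows. This is a plausible reconstruction under the stated fully-actuated assumption (invertible g(q)); the damping-injection term uses one standard form, and the exact expressions displayed in the paper may differ in matrix placement.

```latex
% Hamiltonian dynamics with an affine control input acting only on the momenta:
\begin{bmatrix} \dot{\mathbf{q}} \\ \dot{\mathbf{p}} \end{bmatrix}
  = \begin{bmatrix} \partial H / \partial \mathbf{p} \\
                    -\,\partial H / \partial \mathbf{q} + g(\mathbf{q})\,\mathbf{u} \end{bmatrix}

% Potential-energy shaping (so the closed loop follows H_d with V_d in place of V)
% and damping injection with K_d positive definite:
\boldsymbol{\beta}(\mathbf{q}) =
  g(\mathbf{q})^{-1}\!\left( \frac{\partial V}{\partial \mathbf{q}}
                            - \frac{\partial V_d}{\partial \mathbf{q}} \right),
\qquad
\mathbf{v}(\mathbf{p}) = -\,K_d\, g(\mathbf{q})^{\top} \frac{\partial H}{\partial \mathbf{p}}
                       = -\,K_d\, g(\mathbf{q})^{\top} M^{-1}(\mathbf{q})\,\mathbf{p}.
```

With u = β(q) + v(p), the closed-loop energy then satisfies \dot{H}_d = -\dot{\mathbf{q}}^{\top} g K_d g^{\top} \dot{\mathbf{q}} \le 0, which is what drives trajectories toward the desired configuration.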
Now we focus on the problem of learning the ordinary differential equation (ODE) from time series data. Consider an ODE:ẋ = f (x). Assume we don't know the analytical expression of the right hand side (RHS) and we approximate it with a neural network. If we have time series data X = (x t0, x t1, ..., x tn), how could we learn f (x) from the data? introduced Neural ODE, differentiable ODE solvers with O-memory backpropagation. With Neural ODE, we make predictions by approximating the RHS function using a neural network f θ and feed it into an ODE solver x t1,x t2,...,x tn = ODESolve(x t0, f θ, t 1, t 2, ..., t n) We can then construct the loss function L = X−X 2 2 and update the weights θ by backpropagating through the ODE solver. In theory, we can learn f θ in this way. In practice, however, the neural net is hard to train if n is large. If we have a bad initial estimate of the f θ, the prediction error would in general be large. Although |x t1 −x t1 | might be small,x t N would be far from x t N as error accumulates, which makes the neural network hard to train. In fact, the prediction error ofx t N is not as important asx t1. In other words, we should weight data points in a short time horizon more than the rest of the data points. In order to address this and better utilize the data, we introduce the time horizon τ as a hyperparameter and predict x ti+1, x ti+2,..., x ti+τ from initial condition x ti, where i = 0,..., n − τ. One challenge toward leveraging Neural ODE to learn state-space models is the incorporation of the control term into the dynamics. Equation has the formẋ = f (x, u) with x = (q, p). A function of this form cannot be directly fed into Neural ODE directly since the domain and range of f have different dimensions. In general, if our data consist of trajectories of (x, u) t0,...,tn where u remains the same in a trajectory, we can leverage the augmented dynamics 2 if we have access toq instead of p, we useq instead in Equation 3 As gg T is not invertible, we cannot solve the matching condition given by Equation 7. 4 Under review as a conference paper at ICLR 2020 With this improvisation, we can match the input and output dimension off θ, which enables us to feed it into Neural ODE. The idea here is to use different constant external forcing to get the system responses and use those responses to train the model. With a trained model, we can apply a timevarying u to the dynamicsẋ = f θ (x, u) and generate estimated trajectories. When we synthesize the controller, u remains constant in each integration step. As long as our model interpolates well among different values of constant u, we could get good estimated trajectories with a time-varying u. The problem is then how to design the network architecture off θ, or equivalently f θ such that we can learn the dynamics in an efficient way. Suppose we have trajectory data consisting of (q, p, u) t0,...,tn, where u remains constant in a trajectory. If we have the prior knowledge that the unforced dynamics of q and p is governed by Hamiltonian dynamics, we can use three neural nets -M −1 θ1 (q), V θ2 (q) and g θ3 (q) -as function approximators to represent the inverse of mass matrix, potential energy and the control coefficient. Thus, where The partial derivative in the expression can be taken care of by automatic differentiation. by putting the designed f θ (q, p, u) into Neural ODE, we obtain a systematic way of adding the prior knowledge of Hamiltonian dynamics into end-to-end learning. 
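As a concrete illustration of this construction, the following is a minimal sketch in PyTorch with a single generalized coordinate; the network names, layer sizes, and the fixed-step RK4 rollout are ours, not the authors' code. Three small networks approximate M^{-1}(q), V(q) and g(q), the partial derivatives of the Hamiltonian are obtained by automatic differentiation, and the control input is held constant over each integration step as assumed in the text.

```python
import torch
import torch.nn as nn

def mlp(n_in, n_hidden, n_out):
    return nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh(), nn.Linear(n_hidden, n_out))

class SymODENFunc(nn.Module):
    """Sketch of the structured RHS f_theta(q, p, u) for a scalar coordinate q."""
    def __init__(self, hidden=64):
        super().__init__()
        self.M_inv = mlp(1, hidden, 1)  # approximates the inverse mass
        self.V = mlp(1, hidden, 1)      # approximates the potential energy
        self.g = mlp(1, hidden, 1)      # approximates the input coefficient

    def forward(self, q, p, u):
        # q and p must require grad so the symplectic gradient can be taken by autograd
        H = 0.5 * self.M_inv(q) * p ** 2 + self.V(q)
        dHdq, dHdp = torch.autograd.grad(H.sum(), (q, p), create_graph=True)
        return dHdp, -dHdq + self.g(q) * u  # (dq/dt, dp/dt)

def rk4_step(f, q, p, u, dt):
    # classic fixed-step RK4; the control u is held constant over the step
    k1 = f(q, p, u)
    k2 = f(q + 0.5 * dt * k1[0], p + 0.5 * dt * k1[1], u)
    k3 = f(q + 0.5 * dt * k2[0], p + 0.5 * dt * k2[1], u)
    k4 = f(q + dt * k3[0], p + dt * k3[1], u)
    q_next = q + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    p_next = p + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return q_next, p_next

# roll out a few steps from an initial state under a constant control,
# e.g. to form the trajectory loss against observed states over the horizon tau
f_theta = SymODENFunc()
q = torch.tensor([0.1], requires_grad=True)
p = torch.tensor([0.0], requires_grad=True)
predicted = []
for _ in range(3):  # tau = 3 as in the experiments
    q, p = rk4_step(f_theta, q, p, u=torch.tensor([1.0]), dt=0.05)
    predicted.append((q, p))
```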
In the previous subsection, we assume (q, p, u) t0,...,tn. In a lot of physical system models, the state variables involve angles which reside in the interval [−π, π). In other words, each angle resides on the manifold S 1. From a data-driven perspective, the data that respects the geometry is a 2 dimensional embedding (cos q, sin q). Furthermore, the generalized momentum data is usually not available. Instead, the velocity is often available. For example, in OpenAI Gym Pendulum-v0 task, the observation is (cos q, sin q,q). From a theoretical perspective, however, the angle itself is often used, instead of the 2D embedding. The reason being both the Lagrangian and the Hamiltonian formulations are derived using generalized coordinates. Using an independent generalized coordinate system makes it easier to solve for the equations of motion. In this subsection, we take the data-driven standpoint. We assume all the generalized coordinates are angles and the data comes in the form of (x 1 (q), x 2 (q), x 3 (q), u) t0,...,tn = (cos q, sin q,q, u) t0,...,tn. We aim to incorporate our theoretical prior -Hamiltonian dynamics -into the data-driven approach. The goal is to learn the dynamics of x 1, x 2 and x 3. Noticing p = M(x 1, x 2)q, we can write down the derivative of x 1, x 2 and x 3, where "•" represents the elementwise product (Hadamard product). We assume q and p evolve with the generalized Hamiltonian dynamics Equation. Here the Hamiltonian H(x 1, x 2, p) is a function of x 1, x 2 and p instead of q and p. Then the right hand side of Equation can be expressed as a function of state variables and control (x 1, x 2, x 3, u). Thus, it can be fed into the Neural ODE. We use three neural nets -M 2 ) and g θ3 (x 1, x 2) -as function approximators. Substitute Equation and Equation into Equation, then the RHS serves as f θ (x 1, x 2, x 3, u). where In Subsection 3.2, we treated the generalized coordinates as translational coordinates. In Subsection 3.3, we developed a method to better deal with embedded angle data. In most of physical systems, these two types of coordinates coexist. For example, robotics systems are usually modelled as interconnected rigid bodies. The positions of joints or center of mass are translational coordinates and the orientations of each rigid body are angular coordinates. In other words, the generalized coordinates lie on R n × T m, where T m denotes the m-torus, with T 1 = S 1 and In this subsection, we put together the architecture of the previous two subsections. We assume the generalized coordinates are q = (r, φ φ φ) ∈ R n × T m and the data comes in the form of (x 1, x 2, x 3, x 4, x 5, u) t0,...,tn = (r, cos φ φ φ, sin φ φ φ,ṙ,φ φ φ, u) t0,...,tn. With similar line of reasoning, we use three neural nets -M with Hamiltonian dynamics, we havė where theṙ andφ φ φ come from Equation. Now we obtain a f θ which can be fed into Neural ODE. Figure 1 shows the flow of the computation graph based on Equation-. In real physical systems, the mass matrix M is positive definite, which ensures a positive kinetic energy with a non-zero velocity. The positive definiteness of M implies the positive definiteness of 4 In Equation, the derivative of M −1 θ 1 (x1, x2) can be expanded using chain rule and expressed as a function of the states., where L θ1 is a lower-triangular matrix. The positive definiteness is ensured if the diagonal elements of M −1 θ1 are positive. In practice, this can be done by adding a small constant to the diagonal elements of M θ1. 
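The positive-definite parametrization just described can be sketched as follows (our illustration; the layer sizes, the class name, and the exact way the small diagonal constant enters are assumptions). A network outputs the entries of a lower-triangular L, and M^{-1} = L L^T plus a small multiple of the identity stays symmetric positive definite for any input:

```python
import torch
import torch.nn as nn

class PosDefMassInv(nn.Module):
    """Illustrative sketch of M^{-1}_theta1 built from a Cholesky-like factor L(x1, x2);
    the input is the embedded angle (cos q, sin q); non-batched for brevity."""
    def __init__(self, q_dim=1, hidden=64, eps=1e-2):
        super().__init__()
        self.q_dim, self.eps = q_dim, eps
        n_tril = q_dim * (q_dim + 1) // 2
        self.net = nn.Sequential(nn.Linear(2 * q_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_tril))

    def forward(self, x1, x2):
        entries = self.net(torch.cat([x1, x2], dim=-1))
        rows, cols = torch.tril_indices(self.q_dim, self.q_dim)
        L = torch.zeros(self.q_dim, self.q_dim)
        L[rows, cols] = entries
        # L L^T is positive semi-definite; the eps * I term keeps it strictly positive definite
        return L @ L.T + self.eps * torch.eye(self.q_dim)
```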
It not only makes M θ1 invertible, but also stabilize the training. We use the following four tasks to evaluate the performance of Symplectic ODE-Net model -(i) Task 1: a pendulum with generalized coordinate and momentum data (learning on R 1); (ii) Task 2: a pendulum with embedded angle data (learning on S 1); (iii) Task 3: a cart-pole system (learning on R 1 × S 1); and (iv) Task 4: an acrobot (learning on T 2). Model Variants. Besides the Symplectic ODE-Net model derived above, we consider a variant by approximating the Hamiltonian using a fully connected neural net H θ1,θ2. We call it Unstructured Symplectic ODE-Net (Unstructured SymODEN) since this model does not exploit the structure of the Hamiltonian. Baseline Models. In order to show that we can learn the dynamics better with less parameters by leveraging prior knowledge, we set up baseline models for all four experiments. For the pendulum with generalized coordinate and momentum data, the naive baseline model approximates Equation -f θ (x, u) -by a fully connected neural net. For all the other experiments, which involves embedded angle data, we set up two different baseline models: naive baseline approximates f θ (x, u) by a fully connected neural net. It doesn't respect the fact that the coordinate pair, cos φ φ φ and sin φ φ φ, lie on T m. Thus, we set up the geometric baseline model which approximatesq andṗ with a fully connected neural net. This ensures that the angle data evolves on T m. Data Generation. For all tasks, we randomly generated initial conditions of states and subsequently combined them with 5 different constant control inputs, i.e., u = −2.0, −1.0, 0.0, 1.0, 2.0 to produce the initial conditions and input required for simulation. The simulators integrate the corresponding dynamics for 20 time steps to generate trajectory data which is then used to construct the training set. The simulators for different tasks are different. For Task 1, we integrate the true generalized Hamiltonian dynamics with a time interval of 0.05 seconds to generate trajectories. All the other tasks deal with embedded angle data and velocity directly, so we use OpenAI Gym simulators to generate trajectory data. One drawback of using OpenAI Gym is that not all environments use the Runge-Kutta method (RK4) to carry out the integration. OpenAI Gym favors other numerical schemes over RK4 because of speed, but it is harder to learn the dynamics with inaccurate data. For example, if we plot the total energy as a function of time from data generated by Pendulum-v0 environment with zero action, we see that the total energy oscillates around a constant by a significant amount, even though the total energy should be conserved. Thus, for Task 2 and Task 3, we use Pendulum-v0 and CartPole-v1, respectively, and replace the numerical integrator of the environments to RK4. For Task 4, we use the Acrobot-v1 environment which is already using RK4. We also change the action space of Pendulum-v0, CartPole-v1 and Acrobot-v1 to a continuous space with a large enough bound. Model training. In all the tasks, we train our model using Adam optimizer with 1000 epochs. We set a time horizon τ = 3, and choose "RK4" as the numerical integration scheme in Neural ODE. We vary the size of the training set by doubling from 16 initial state conditions to 1024 initial state conditions. Each initial state condition is combined with five constant control u = −2.0, −1.0, 0.0, 1.0, 2.0 to produce initial condition and input for simulation. 
Each trajectory is generated by integrating the dynamics 20 time steps forward. We set the size of minibatches to be the number of initial state conditions. We logged the train error per trajectory and the prediction error per trajectory in each case for all the tasks. The train error per trajectory is the mean squared error (MSE) between the estimated trajectory and the ground truth over 20 time steps. To evaluate the performance of each model in terms of long time prediction, we construct the metric of prediction error per trajectory by using the same initial state condition in the training set with a constant control of u = 0.0, integrating 40 time steps forward, and calculating the MSE over 40 time steps. The reason for using only the unforced trajectories is that a constant nonzero control might cause the velocity to keep increasing or decreasing over time, and large absolute values of velocity are of little interest for synthesizing controllers. In this task, we use the model described in Section 3.2 and present the predicted trajectories of the learned models as well as the learned functions of SymODEN. We also point out the drawback of treating the angle data as a Cartesian coordinate. The dynamics of this task has the following forṁ with Hamiltonian H(q, p) = 1.5p 2 + 5(1 − cos q). In other words M (q) = 3, V (q) = 5(1 − cos q) and g(q) = 1. In Figure 2, The ground truth is an unforced trajectory which is energyconserved. The prediction trajectory of the baseline model does not conserve energy, while both the SymODEN and its unstructured variant predict energy-conserved trajectories. For SymODEN, the learned g θ3 (q) and M −1 θ1 (q) matches the ground truth well. V θ2 (q) differs from the ground truth with a constant. This is acceptable since the potential energy is a relative notion. Only the derivative of V θ2 (q) plays a role in the dynamics. Here we treat q as a variable in R 1 and our training set contains initial conditions of q ∈ [−π, 3π]. The learned functions do not extrapolate well outside this range, as we can see from the left part in the figures of M −1 θ1 (q) and V θ2 (q). We address this issue by working directly with embedded angle data, which leads us to the next subsection. In this task, the dynamics is the same as Equation but the training data are generated by the OpenAI Gym simulator, i.e. we use embedded angle data and assume we only have access toq instead of p. We use the Under review as a conference paper at ICLR 2020 model described in Section 3.3 and synthesize an energy-based controller (Section 2.2). Without true p data, the learned function matches the ground truth with a scaling β, as shown in Figure 3. To explain the scaling, let us look at the following dynamicṡ with Hamiltonian H = p 2 /(2α) + 15α(1 − cos q). If we only look at the dynamics of q, we havë q = −15 sin q+3u, which is independent of α. If we don't have access to the generalized momentum p, our trained neural network may converge to a Hamiltonian with a α e which is different from the true value, α t = 1/3, in this task. By a scaling β = α t /α e = 0.357, the learned functions match the ground truth. Even we are not learning the true α t, we can still perform prediction and control since we are learning the dynamics of q correctly. We let V d = −V θ2 (q), then the desired Hamiltonian has minimum energy when the pendulum rests at the upward position. For the damping injection, we let K d = 3. 
Then from Equation and, the controller we synthesize is Only SymODEN out of all models we consider provides the learned potential energy which is required to synthesize the controller. Figure 4 shows how the states evolve when the controller is fed into the OpenAI Gym simulator. We can successfully control the pendulum into the inverted position using the controller based on the learned model even though the absolute maximum control u, 7.5, is more than three times larger than the absolute maximum u in the training set, which is 2.0. This shows SymODEN extrapolates well. The CartPole system is an underactuated system and to synthesize a controller to balance the pole from arbitrary initial condition requires trajectory optimization or kinetic energy shaping. We show that we can learn its dynamics and perform prediction in Section 4.6. We also train SymODEN in a fully-actuated version of the CartPole system (see Appendix E). The corresponding energy-based controller can bring the pole to the inverted position while driving the cart to the origin. The Acrobot is an underactuated double pendulum. As this system exhibits chaotic motion, it is not possible to predict its long-term behavior. However, Figure 6 shows that SymODEN can provide reasonably good short-term prediction. We also train SymODEN in a fully-actuated version of the Acrobot and show that we can control this system to reach the inverted position (see Appendix E). In this subsection, we show the train error, prediction error, as well as the MSE and total energy of a sample test trajectory for all the tasks. Figure 5 shows the variation in train error and prediction error with changes in the number of initial state conditions in the training set. We can see that SymODEN yields better generalization in every task. In Task 3, although the Geometric Baseline Model yields lower train error in comparison to the other models, SymODEN generates more accurate predictions, indicating overfitting in the Geometric Baseline Model. By incorporating the physics-based prior of Hamiltonian dynamics, SymODEN learns dynamics that obeys physical laws and thus provides better predictions. In most cases, SymODEN trained with a smaller training dataset performs better than other models in terms of the train and prediction error, indicating that better generalization can be achieved even with fewer training samples. R2-C4 R2-C5 in the training set. Both the horizontal axis and vertical axis are in log scale. Figure 6 shows the evolution of MSE and total energy along a trajectory with a previously unseen initial condition. For all the tasks, MSE of the baseline models diverges faster than SymODEN. Unstructured SymODEN performs well in all tasks except Task 3. As for the total energy, in Task 1 and Task 2, SymODEN and Unstructured SymODEN conserve total energy by oscillating around a constant value. In these models, the Hamiltonian itself is learned and the prediction of the future states stay around a level set of the Hamiltonian. Baseline models, however, fail to find the conservation and the estimation of future states drift away from the initial Hamiltonian level set. Here we have introduced Symplectic ODE-Net which provides a systematic way to incorporate prior knowledge of Hamiltonian dynamics with control into a deep learning framework. We show that SymODEN achieves better prediction with fewer training samples by learning an interpretable, physically-consistent state-space model. 
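For reference, the two pendulum systems discussed above can be written out from their stated Hamiltonians (a reconstruction; note that H(q, p) = 1.5 p^2 + 5(1 - cos q) corresponds to an inverse mass M^{-1}(q) = 3):

```latex
% Task 1 / Task 2 ground truth, with V(q) = 5(1 - \cos q) and g(q) = 1:
H(q, p) = 1.5\, p^{2} + 5\,(1 - \cos q), \qquad
\dot{q} = \frac{\partial H}{\partial p} = 3p, \qquad
\dot{p} = -\frac{\partial H}{\partial q} + u = -5 \sin q + u,
\quad\text{so}\quad \ddot{q} = -15 \sin q + 3u .

% The alpha-scaled family used to explain the learned scaling, with g_\alpha = 3\alpha
% (implied by requiring the q-dynamics to be independent of alpha):
H_\alpha(q, p) = \frac{p^{2}}{2\alpha} + 15\,\alpha\,(1 - \cos q), \qquad
\dot{q} = \frac{p}{\alpha}, \qquad
\dot{p} = -15\,\alpha \sin q + 3\alpha\, u,
\quad\text{so}\quad \ddot{q} = -15 \sin q + 3u \ \text{for every } \alpha .
```

Setting α_t = 1/3 recovers the ground truth above, which is why only the scaled quantities can be identified when p is not observed.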
Future works will incorporate a broader class of physicsbased prior, such as the port-Hamiltonian system formulation, to learn dynamics of a larger class of physical systems. SymODEN can work with embedded angle data or when we only have access to velocity instead of generalized momentum. Future works would explore other types of embedding, such as embedded 3D orientations. Another interesting direction could be to combine energy shaping control (potential as well as kinetic energy shaping) with interpretable end-to-end learning frameworks. Tianshu Wei, Yanzhi Wang, and Qi Zhu. Deep Reinforcement Learning for Building HVAC Control. In Proceedings of the 54th Annual Design Automation Conference (DAC), pp. 22:1-22:6, 2017. The architectures used for our experiments are shown below. For all the tasks, SymODEN has the lowest number of total parameters. To ensure that the learned function is smooth, we use Tanh activation function instead of ReLu. As we have differentiation in the computation graph, nonsmooth activation functions would lead to discontinuities in the derivatives. This, in turn, would in an ODE with a discontinuous RHS which is not desirable. All the architectures shown below are fully-connected neural networks. The first number indicates the dimension of the input layer. The last number indicates the dimension of output layer. The dimension of hidden layers is shown in the middle along with the activation functions. Task 1: Pendulum • Input: 2 state dimensions, 1 action dimension • Baseline Model (0.36M parameters): 2 -600Tanh -600Tanh -2Linear • Unstructured SymODEN (0.20M parameters): • SymODEN (0.13M parameters): Task 2: Pendulum with embedded data • Input: 3 state dimensions, 1 action dimension • Naive Baseline Model (0.65M parameters): 4 -800Tanh -800Tanh -3Linear • Geometric Baseline Model (0.46M parameters):, where L θ1: 1 -300Tanh -300Tanh -300Tanh -1Linear -approximate (q,ṗ): 4 -600Tanh -600Tanh -2Linear, where Task 3: CartPole • Input: 5 state dimensions, 1 action dimension • Naive Baseline Model (1.01M parameters): 6 -1000Tanh -1000Tanh -5Linear, where L θ1: 3 -400Tanh -400Tanh -400Tanh -3Linear -H θ2: 5 -500Tanh -500Tanh -1Linear -g θ3: 3 -300Tanh -300Tanh -2Linear, where L θ1: 3 -400Tanh -400Tanh -400Tanh -3Linear -V θ2: 3 -300Tanh -300Tanh -1Linear -g θ3: 3 -300Tanh -300Tanh -2Linear Task 4:Acrobot • Input: 6 state dimensions, 1 action dimension • Naive Baseline Model (1.46M parameters): 7 -1200Tanh -1200Tanh -6Linear, where L θ1: 4 -400Tanh -400Tanh -400Tanh -3Linear -approximate (q,ṗ): 7 -800Tanh -800Tanh -4Linear, where L θ1: 4 -400Tanh -400Tanh -400Tanh -3Linear -H θ2: 6 -600Tanh -600Tanh -1Linear -g θ3: 4 -300Tanh -300Tanh -2Linear, where L θ1: 4 -400Tanh -400Tanh -400Tanh -3Linear -V θ2: 4 -300Tanh -300Tanh -1Linear -g θ3: 4 -300Tanh -300Tanh -2Linear The energy-based controller has the form u(q, p) = β β β(q) + v(p), where the potential energy shaping term β β β(q) and the damping injection term v(p) are given by Equation and Equation, respectively. If the desired potential energy V q (q) is given by a quadratic, as in Equation, then and the controller can be expressed as The corresponding external forcing term is then given by which is same as Equation in the main body of the paper. The first term in this external forcing provides an energy compensation, whereas the second term and the last term are proportional and derivative control terms, respectively. Thus, this control can be perceived as a PD controller with an additional energy compensation. 
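The elided expressions of that derivation can plausibly be reconstructed as follows, again under the fully-actuated assumption with invertible g(q) and the same damping-injection form as before (hedged; the paper's exact equations may differ):

```latex
% quadratic desired potential and its gradient:
V_d(\mathbf{q}) = \tfrac{1}{2}\,(\mathbf{q} - \mathbf{q}^{\star})^{\top} K_p\,(\mathbf{q} - \mathbf{q}^{\star}),
\qquad
\frac{\partial V_d}{\partial \mathbf{q}} = K_p\,(\mathbf{q} - \mathbf{q}^{\star}),

% resulting controller and external forcing:
\mathbf{u}(\mathbf{q}, \mathbf{p}) =
  g(\mathbf{q})^{-1}\!\left( \frac{\partial V}{\partial \mathbf{q}} - K_p\,(\mathbf{q} - \mathbf{q}^{\star}) \right)
  - K_d\, g(\mathbf{q})^{\top} M^{-1}(\mathbf{q})\,\mathbf{p},
\qquad
g\,\mathbf{u} =
  \frac{\partial V}{\partial \mathbf{q}}
  - K_p\,(\mathbf{q} - \mathbf{q}^{\star})
  - g K_d g^{\top} \dot{\mathbf{q}} .
```

In the external forcing, the first term cancels the gradient of the learned potential (the energy compensation), the second is the proportional term, and the third, since M^{-1} p = \dot{q}, is the derivative term.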
In Hamiltonian Neural Networks (HNN), incorporate the Hamiltonian structure into learning by minimizing the difference between the symplectic gradients and the true gradients. When the true gradient is not available, which is often the case, the authors suggested using finite difference approximations. In SymODEN, true gradients or gradient approximations are not necessary since we integrate the estimated gradient using differentiable ODE solvers and set up the loss function with the integrated values. Here we perform an ablation study of the differentiable ODE Solver. Both HNN and the Unstructured SymODEN approximate the Hamiltonian by a neural network and the main difference is the differentiable ODE solver, so we compare the performance of HNN and the Unstructured SymODEN. We set the time horizon τ = 1 since it naturally corresponds to the finite difference estimate of the gradient. A larger τ would correspond to higher-order estimates of gradients. Since there is no angle-aware design in HNN, we use Task 1 to compare the performance of these two models. We generate 25 training trajectories, each of which contains 45 time steps. This is consistent with the HNN paper. In the HNN paper , the initial conditions of the trajectories are generated randomly in an annulus, whereas in this paper, we generate the initial state conditions uniformly in a reasonable range in each state dimension. We guess the reason the authors of HNN choose the annulus data generation is that they do not have an angle-aware design. Take the pendulum for example; all the training and test trajectories they generate do not pass the inverted position. If they make prediction on a trajectory with a large enough initial speed, the angle would go over ±2π, ±4π, etc. in the long run. Since these are away from the region where the model gets trained, we can expect the prediction would be poor. In fact, this motivates us to design the angle-aware SymODEN in Section 3.3. In this ablation study, we generate the training data in both ways. Table 1 shows the train error and the prediction error per trajectory of the two models. We can see Unstructured SymODEN performs better than HNN. This is an expected . To see why this is the case, let us assume the training loss per time step of HNN is similar to that of Unstructured SymODEN. Since the training loss is on the symplectic gradient, the error would accumulate while integrating the symplectic gradient to get the estimated state values, and MSE of the state values would likely be one order of magnitude greater than that of Unstructured SymODEN. Figure 7 shows the MSE and total energy of a particular trajectory. It is clear that the MSE of the Unstructured SymODEN is lower than that of HNN. The MSE of HNN periodically touches zero does not mean it has a good prediction at that time step. Since the trajectories in the phase space are closed circles, those zeros mean the predicted trajectory of HNN lags behind (or runs ahead of) the true trajectory by one or more circles. Also, the energy of the HNN trajectory drifts instead of staying constant, probably because the finite difference approximation is not accurate enough. Incorporating the differential ODE solver also introduces two hyperparameters: solver types and time horizon τ. For the solver types, the Euler solver is not accurate enough for our tasks. The adaptive solver "dopri5" lead to similar train error, test error and prediction error as the RK4 solver, but requires more time during training. 
Thus, in our experiments, we choose RK4. Time horizon τ is the number of points we use to construct our loss function. Table 2 shows the train error, test error and prediction error per trajectory in Task 2 when τ is varied from 1 to 5. We can see that longer time horizons lead to better models. This is expected since long time horizons penalize worse long term predictions. We also observe in our experiments that longer time horizons require more time to train the models. CartPole and Acrobot are underactuated systems. Incorporating the control of underactuated systems into the end-to-end learning framework is our future work. Here we trained SymODEN on Under review as a conference paper at ICLR 2020 fully actuated versions of Cartpole and Acrobot and synthesized controllers based on the learned model. For the fully-actuated CartPole, Figure 8 shows the snapshots of the system of a controlled trajectory with an initial condition where the pole is below the horizon. Figure 9 shows the time series of state variables and control inputs. We can successfully learn the dynamics and control the pole to the inverted position and the cart to the origin. For the fully-actuated Acrobot, Figure 10 shows the snapshots of a controlled trajectory. Figure 11 shows the time series of state variables and control inputs. We can successfully control the Acrobot from the downward position to the upward position, though the final value of q 2 is a little away from zero. Taking into account that the dynamics has been learned with only 64 different initial state conditions, it is most likely that the upward position did not show up in the training data. Here we show statistics of train, test, and prediction per trajectory in all four tasks. The train errors are based on 64 initial state conditions and 5 constant inputs. The test errors are based on 64 previously unseen initial state conditions and the same 5 constant inputs. Each trajectory in the train and test set contains 20 steps. The prediction error is based on the same 64 initial state conditions (during training) and zero inputs.
This work enforces Hamiltonian dynamics with control to learn system models from embedded position and velocity data, and exploits the learned, physically consistent dynamics to synthesize model-based control via energy shaping.
304
scitldr
Federated learning, where a global model is trained by iterative parameter averaging of locally-computed updates, is a promising approach for distributed training of deep networks; it provides high communication-efficiency and privacy-preservability, which allows to fit well into decentralized data environments, e.g., mobile-cloud ecosystems. However, despite the advantages, the federated learning-based methods still have a challenge in dealing with non-IID training data of local devices (i.e., learners). In this regard, we study the effects of a variety of hyperparametric conditions under the non-IID environments, to answer important concerns in practical implementations: (i) We first investigate parameter divergence of local updates to explain performance degradation from non-IID data. The origin of the parameter divergence is also found both empirically and theoretically. (ii) We then revisit the effects of optimizers, network depth/width, and regularization techniques; our observations show that the well-known advantages of the hyperparameter optimization strategies could rather yield diminishing returns with non-IID data. (iii) We finally provide the reasons of the failure cases in a categorized way, mainly based on metrics of the parameter divergence. Over the recent years, federated learning has been a huge success to reduce the communication overhead in distributed training of deep networks. Guaranteeing competitive performance, the federated learning permits each learner to compute their local updates of each round for relatively many iterations (e.g., 1 epoch, 10 epochs, etc.), which provides much higher communication-efficiency compared to the conventional data parallelism approaches (for intra-datacenter environments, e.g., ;) that generally require very frequent gradient aggregation. Furthermore, the federated learning can also significantly reduce data privacy and security risks by enabling to conceal on-device data of each learner from the server or other learners; thus the approach can be applied well to environments with highly private data (e.g., personal medical data), it is now emerging as a promising methodology for privacypreserving distributed learning along with differential privacy-based methods (; ; ;). On this wise, the federated learning takes a simple approach that performs iterative parameter averaging of local updates computed from each learners' own dataset, which suggests an efficient way to learn a shared model without centralizing training data from multiple sources; but hereby, since the local data of each device is created based on their usage pattern, the heterogeneity of training data distributions across the learners might be naturally assumed in real-world cases. Hence, each local dataset would not follow the population distribution, and handling the decentralized non-IID data still remains a statistical challenge in the field of federated learning . For instance, observed severe performance degradation in multi-class classification accuracy under highly skewed non-IID data; it was reported that more diminishing returns could be yielded as the probabilistic distance of learners' local data from the population distribution increases. LearnerUpdate(k, w): // Run on learner k B ←(split P k into batches of size B) for each local epoch ε from 1 to E do for each batch b ∈ B do w ← w − η∇ (w; b) end for end for return w to server Contributions. 
To address the non-IID issue under federated learning, there have been a variety of recent works 1; nevertheless, in this paper we explore more fundamental factors, the effects of various hyperparameters. The optimization for the number of local iterations per round or learning rates has been handled in several literatures (e.g., ; Li et al. (2019c); ); by extension we discuss, for the first time to the best of our knowledge, the effects of optimizers, network depth/width, and regularization techniques. Our contributions are summarized as follows: First, as a root cause of performance degradation from non-IID data, we investigate parameter divergence of local updates at each round. The parameter divergence can be regarded as a direct response to learners' local data being non-IID sampled from the population distribution, of which the excessive magnitude could disturb the performance of the consequent parameter averaging. We also investigate the origin of the parameter divergence in both empirical and theoretical ways. Second, we observe the effects of well-known hyperparameter optimization methods 2 under the non-IID data environments; interestingly, some of our findings show highly conflicted aspects with their positive outcomes under "vanilla" training 3 or the IID data setting. Third, we analyze the internal reasons of our observations in a unified way, mainly using the parameter divergence metrics; it is identified that the rationale of the failures under non-IID data lies in some or all of (i) inordinate magnitude of parameter divergence, (ii) its steep fall phenomenon (described in Section 4.2), and (iii) excessively high training loss of local updates. In this study, Algorithm 1 is considered as a federated learning method, and it is written based on FedAvg . 4 We note that this kind of parameter averaging-based approach has been widely discussed in the literature, under various names, e.g., parallel (restarted) SGD and local SGD . In our experiments with Tensorflow , 5 we consider the multi-class classification tasks on CIFAR-10 and SVHN datasets. 2 we use the term hyperparameter optimization methods and hyperparametric methods interchangeably. 3 This term refers to the non-distributed training with a single machine, using the whole training examples. 4 Regarding the significance of the algorithm, we additionally note that Google is currently employing it on their mobile keyboard application (Gboard) (; ; ;). In this study we deal with image classification, which is also considered as the main applications of federated learning along with the language models . 5 Our source code is available at https://github.com/fl-noniid/fl-noniid Baseline network model: For the baseline deep network, we consider a CNN model that has three 3 × 3 convolutional layers with 64, 128, 256 output channels, respectively; and then three fully-connected layers with 512, 256, 10 output sizes, respectively (see Appendix A.1 for more detailed description). We use the term NetA-Baseline to denote this baseline model throughout this paper. Regularization configuration: For weight decay, we apply the method of decoupled weight decay regularization based on the fact that weight decay is equivalent to L 2 regularization only for pure SGD . The baseline value of the weight decay factor is set to 0.00005. As our regularization baseline, we consider not to apply any other regularization techniques additionally. 
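To make Algorithm 1 concrete, the following is a minimal sketch of one parameter-averaging round, matching the learner-update pseudocode shown earlier (E local epochs of minibatch SGD, followed by server-side averaging). The linear model, squared loss, and equal-sized local datasets (so that a plain mean coincides with the weighted FedAvg average) are simplifying assumptions made only to keep the sketch self-contained and runnable; they are not the configuration used in the experiments.

```python
import numpy as np

def learner_update(w, X_k, y_k, lr=0.05, epochs=1, batch_size=50):
    """LearnerUpdate(k, w): E epochs of minibatch SGD on learner k's local data.
    The gradient shown is for a linear model with squared loss (illustrative only)."""
    n = len(X_k)
    for _ in range(epochs):
        order = np.random.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            grad = 2.0 * X_k[idx].T @ (X_k[idx] @ w - y_k[idx]) / len(idx)
            w = w - lr * grad
    return w

def federated_round(w_global, local_datasets, **kwargs):
    """Server broadcasts w_global, collects the local updates, and averages them."""
    updates = [learner_update(w_global.copy(), X_k, y_k, **kwargs)
               for X_k, y_k in local_datasets]
    return np.mean(updates, axis=0)

# Toy usage: 10 learners with 5000 local examples each, as in the setup above.
rng = np.random.default_rng(0)
data = [(rng.normal(size=(5000, 20)), rng.normal(size=5000)) for _ in range(10)]
w = np.zeros(20)
for t in range(3):
    w = federated_round(w, data, lr=0.01, epochs=1, batch_size=50)
```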
We importantly note that if without any particular comments, the described in the following sections are ones obtained using the above baseline configurations of the network model and regularization. Environmental configuration: We consider 10 learners to have each 5000 nonoverlapping training examples; Table 1 summarizes our configuration of data settings; Non-IID(N) denotes a data setting that lets each learner to have training examples only for N class(es). The data settings in the Table 1 deal with data balanced cases where learners have the same amount of local data, and they are mainly considered in the following sections; we additionally note that one can refer to Appendix C.8 for the experiments with data unbalanced cases. For the IID and the non-IID data settings, T = 200 and 300 are used respectively, while E = 1 and minibatch size of 50 are considered commonly for the both. 6 One can find the remaining configurations for the experiments in Appendix A. Parameter divergence is recently being regarded as a strong cause of diminishing returns from decentralized non-IID data in federated learning (it is sometimes expressed in another way, gradient/loss divergence (b; c; ;) ). For the divergence metrics, many of the literatures usually handle the difference of each learner's local model parameters from one computed with the population distribution; it eventually also causes parameter diversity between the local updates as the data distributions become heterogeneous across learners. A pleasant level of parameter divergence could rather imply exploiting rich decentralized data (IID cases); however, if the local datasets are far from the population distribution, the consequent parameter averaging of the highly diverged local updates could lead to bad solutions away from the global optimum (non-IID cases). and Non-IID from the population distribution are 1.0 and 1.6, respectively. The origin of parameter divergence. In relation, it has been theoretically proven that the parameter divergence (between the global model parameters under FedAvg and those computed by vanilla SGD training) is directly related to the probabilistic distance of local datasets from the population distribution (see Proposition 3.1 in). In addition to it, for multi-class classification tasks, we here identify in lower level, that if data distributions in each local dataset are highly skewed and heterogeneous over classes, subsets of neurons, which have especially big magnitudes of the gradients in back propagation, become significantly different across learners; this leads to inordinate parameter divergence between them. As illustrated in Figure 1, under the IID data setting, the weight values in the output layer are evenly distributed relatively evenly across classes if the neurons of the model are initialized uniformly. However, we can observe under the non-IID data settings that the magnitudes of the gradients are distributed depending on each learner's data distribution. We also provide the corresponding theoretical analysis in Appendix B. Metrics. To capture parameter divergence under federated learning, we define the following two metrics using the notations in Algorithm 1. 
Since in our analysis we compare different network architectures or training settings together in a set, the number of neurons in the probed layers can become different, and values of model parameters can highly depend on the experimental manipulations; thus instead of Euclidean distance, in the two divergence metrics we use cosine distance that enables normalized (qualitative) measures. We also note that PD-VL is defined assuming the balancedness of data amount between learners, i.e., the same numbers of local iterations per round. The reason of probing parameter divergence being important is that the federated learning are performed based on iterative parameter averaging. That is, investigating how local updates are diverged can give a clue whether the subsequent parameter averaging yields positive returns; the proposed divergence metrics provide two ways for it. k is a subset (or the universal set) of w t k, we define parameter divergence between local updates as In addition, assume that P k is identical ∀k ∈ K, and let w t −1 be the vanilla-updated parameters, that is, the model parameters updated on the global parameters (i.e., w t−1) using IID training data during the same number of iterations with the actual learners (i.e., P k /B). Then, for z Relationship among probabilistic distance, parameter divergence, and learning performance. We consider Non-IID and Non-IID for non-IID data settings. Here we use earth mover's distance (EMD), also known as Wasserstein distance, to measure probabilistic distance of each data settings from the population distribution; the value becomes 1.0 and 1.6 for Non-IID and Non-IID, respectively. From the middle and right panels of Figure 2, it is seen that greater EMDs lead to bigger parameter divergence (refer to also Figure 9 in the appendix). Also, together with the left panel, we can observe the positive correlation between parameter divergence and learning performance. Therefore, we believe the parameter divergence metrics can help to reveal the missing link between data non-IIDness and the consequent learning performance. Note that one can also refer to the similar analysis with more various EMD in. From now on we describe our findings for the effects of various hyperparameter optimization methods with non-IID data on the federated learning algorithm. The considered hyperparametric methods have been a huge success to improve performance in deep learning; however, here we newly identify that under non-IID data settings, they could give negative/diminishing effects on performance of the federated learning algorithm. The following is the summary of our findings; we provide the complete experimental and further discussion in the next subsection and the appendix. Effects of optimizers. Unlike non-adaptive optimizers such as pure SGD and momentum SGD , Adam could give poor performance from non-IID data if the parameter averaging is performed only for weights and biases, compared to all the model variables (including the first and second moment) being averaged. Here we importantly note that both momentum SGD and Adam require the additional variables related to momentum as well as weights and biases; throughout the rest of the paper, the terms (optimizer name)-A and (optimizer name)-WB are used to refer to the parameter averaging being performed for all the variables 7 and only for weights & biases, respectively. Effects of network depth/width. 
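Before turning to the effects of network depth and width, it is worth making the two divergence metrics concrete. The displayed definitions did not survive extraction, so the sketch below follows only what the description above states: cosine distance over a probed layer's parameters, measured pairwise between learners' local updates and against a vanilla-updated reference computed on IID data for the same number of iterations. The function names and the averaging over pairs are assumptions.

```python
import numpy as np
from itertools import combinations

def cosine_distance(a, b):
    a, b = np.ravel(a), np.ravel(b)
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def divergence_between_learners(local_layer_params):
    """Mean pairwise cosine distance between learners' local updates of a probed layer."""
    return np.mean([cosine_distance(a, b)
                    for a, b in combinations(local_layer_params, 2)])

def divergence_from_vanilla(local_layer_params, vanilla_layer_params):
    """Mean cosine distance of each local update from the vanilla-updated reference
    (the global parameters updated on IID data for the same number of local iterations)."""
    return np.mean([cosine_distance(p, vanilla_layer_params)
                    for p in local_layer_params])
```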
It is also known that deepening "plain" networks (which simply stacks layers, without techniques such as information highways and shortcut connection ) yields performance degradation at a certain depth, even under vanilla training; however this phenomenon gets much worse under non-IID data environments. On the contrary, widening networks could help to achieve better outcomes; in that sense, the global average pooling could fail in this case since it significantly reduces the channel dimension of the (last) fully-connected layer, compared to using the max pooling. Effects of Batch Normalization. The well-known strength of Batch Normalization , the dependence of hidden activations in the minibatch , could become a severe drawback in non-IID data environments. Batch Renormalization helps to mitigate this, but it also does not resolve the problem completely. Effects of regularization techniques. With non-IID data, regularizations techniques such as weight decay and data augmentation could give excessively high training loss of local updates even in a modest level, which offsets the generalization gain. We now explain the internal reasons of the observations in the previous subsection. Through the experimental , we were able to classify the causes of the failures under non-IID data into three categories; the following discussions are described based on this. 8 Note that our discussion in this subsection is mostly made from the under Nesterov momentum SGD and on CIFAR-10; the complete including other optimizers (e.g., pure SGD, Polyak momentum SGD, and Adam) and datasets (e.g., SVHN) are given in Appendix C. Inordinate magnitude of parameter divergence. As mentioned before, bigger parameter divergence is the root cause of diminishing returns under federated learning methods with non-IID data. By extension, here we observe that even under the same non-IID data setting, some of the considered hyperparametric methods yield greater parameter divergence than when they are not applied. For example, from the left plot of Figure 3, we see that under the Non-IID setting, the parameter divergence values (in the last fully-connected layer) become greater as the network depth increases (note that NetA-Baseline, NetA-Deeper, and NetA-Deepest have 3, 6, and 9 convolutional layers, respectively; see also Appendix A.1 for their detailed architecture). The corresponding final test accuracy was found to be 74.11%, 73.67%, and 68.98%, respectively, in order of the degree of shallowness; this fits well into the parameter divergence . Since the NetA-Deeper and NetA-Deepest have twice and three times as many model parameters as NetA-Baseline, it can be expected enough that the deeper models yield bigger parameter divergence in the whole model; but our also show its qualitative increase in a layer level. In relation, we also provide the using the modern network architecture (e.g., ResNet ) in Table 8 of the appendix. From the middle plot of the figure, we can also observe bigger parameter divergence in a high level of weight decay under the Non-IID setting. Under the non-IID data setting, the test accuracy of about 72 ∼ 74% was achieved in the low levels (≤ 0.0001), but weight decay factor of 0.0005 yielded only that of 54.11%. Hence, this suggests that with non-IID data we should apply much smaller weight decay to federated learning-based methods. 
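Since the regularization baseline uses decoupled weight decay (which coincides with L2 regularization only for pure SGD), the distinction can be sketched for a momentum-SGD step as follows. The plain (non-Nesterov) momentum form and the exact placement of the decay term are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.01, momentum=0.9, wd=5e-5, decoupled=True):
    """One momentum-SGD step with weight decay.
    decoupled=True : decay is applied directly to the weights, so it does not
                     pass through the momentum buffer (decoupled weight decay).
    decoupled=False: classic L2 regularization, i.e. wd*w is folded into the gradient."""
    if not decoupled:
        grad = grad + wd * w
    v = momentum * v + grad
    w = w - lr * v
    if decoupled:
        w = w - lr * wd * w
    return w, v
```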
Here we note that if a single iteration is considered for each learner's local update per round, the corresponding parameter divergence will be of course the same without regard to degree of weight decay. However, in our experiments, the great number of local iterations per round (i.e., 100) made a big difference of the divergence values under the non-IID data setting; this eventually yielded the accuracy gap. We additionally observe for the non-IID cases that even with weight decay factor of 0.0005, the parameter divergence values are similar to those with the smaller factors at very early rounds in which the norms of the weights are relatively very small. In addition, it is observed from the right plot of the figure that Dropout also yields bigger parameter divergence under the non-IID data setting. The corresponding test accuracy was seen to be a diminishing return with Nesterov momentum SGD (i.e., using Dropout we can achieve +2.85% under IID, but only +1.69% is obtained under non-IID, compared to when it is not applied; see Table 2 ); however, it was observed that the generalization effect of the Dropout is still valid in test accuracy for the pure SGD and the Adam (refer to also Table 13 in the appendix). Steep fall phenomenon. As we see previously, inordinate magnitude of parameter divergence is one of the notable characteristics for failure cases under federated learning with non-IID data. However, under the non-IID data setting, some of the failure cases have been observed where the test accuracy is still low but the parameter divergence values of the last fully-connected layer decrease (rapidly) over rounds; as the round goes, even the values were sometimes seen to be lower than those of the comparison targets. We refer to this phenomenon as steep fall phenomenon. It is inferred that these (unexpected abnormal) sudden drops of parameter divergence values indicate going into poor local minima (or saddles); this can be supported by the behaviors that test accuracy increases plausibly at very early rounds, but the growth rate quickly stagnates and eventually becomes much lower than the comparison targets. The left plot of Figure 4 shows the effect of the Adam optimizer with respect to its implementations. Through the experiments, we identified that under non-IID data environments, the performance of Adam is very sensitive to the range of model variables to be averaged, unlike the non-adaptive optimizers (e.g., momentum SGD); its moment variables should be also considered in the parameter averaging together with weights and biases (see also Table 3). The poor performance of the Adam-WB under the Non-IID setting would be from twice as many momentum variables as the momentum SGD, which indicates the increased number of them affected by the non-IIDness; thus, originally we had thought that extreme parameter divergence could appear if the momentum variables are not averaged together with weights and biases. However, it was seen that the parameter divergence values under the Adam-WB was seen to be similar or even smaller than under Adam-A (see also Figure 11 in the appendix). Instead, from the left panel we can observe that the parameter divergence of Adam-WB in the last fully-connected layer is bigger than that of Adam-A at the very early rounds (as we expected), but soon it is abnormally sharply reduced over rounds; this is considered the steep fall phenomenon. 
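To make the Adam-A / Adam-WB distinction concrete, the sketch below shows the two averaging choices over per-learner optimizer state. The dictionary layout and key names are assumed; the only point illustrated is which variables participate in the round's average.

```python
import numpy as np

def average_learner_states(states, include_moments):
    """Average per-learner training state at the end of a round.
    include_moments=True  -> 'Adam-A' : weights, biases, and Adam's first/second
                                        moment estimates are all replaced by their average.
    include_moments=False -> 'Adam-WB': only weights and biases are averaged;
                                        moment estimates stay local to each learner."""
    shared = {}
    for key in states[0]:
        is_moment = key.startswith("exp_avg")   # assumed naming for moment estimates
        if is_moment and not include_moments:
            continue  # not broadcast back; each learner keeps its own copy
        shared[key] = np.mean([s[key] for s in states], axis=0)
    return shared  # broadcast to every learner at the start of the next round
```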
The middle and the right plots of the figure also show the steep fall phenomenon in the last fullyconnected layer, with respect to network width and whether to use Batch Normalization, respectively. In the case of the NetC models, NetC-Baseline, NetC-Wider, and NetC-Widest use the global average pooling, the max pooling with stride 4, and the max pooling with stride 2, respectively, after the last convolutional layer; the number of neurons in the output layer becomes 2560, 10240, and 40960, respectively (see also Appendix A.1 for their detailed architecture). Under the Non-IID setting, the corresponding test accuracy was found to be 64.06%, 72.61%, and 73.64%, respectively, in order of the degree of wideness. In addition, we can see that under Non-IID, Batch Normalization 9 yields not only big parameter divergence (especially before the first learning rate drop) but also the steep fall phenomenon; the corresponding test accuracy was seen to be very low (see Table 3). The failure of the Batch Normalization stems from that the dependence of batchnormalized hidden activations makes each learner's update too overfitted to the distribution of their local training data. Batch Renormalization, by relaxing the dependence, yields a better outcome; however, it still fails to exceed the performance of the baseline due to the significant parameter divergence. To explain the impact of the steep fall phenomenon in test accuracy, we provide Figure 5, which indicates that the loss landscapes for the failure cases (e.g., Adam-WB and with Batch Normalization) commonly show sharper minima that leads to poorer generalization (Hochreiter & Schmidhuber, 9 For its implementations into the considered federated learning algorithm, we let the server get the proper moving variance by 1997;), and the minimal value in the bowl is relatively greater. 10 Here it is also observed that going into sharp minima starts even in early rounds such as 25th. Excessively high training loss of local updates. The final cause that we consider for the failure cases is excessively high training loss of local updates. For instance, from the left plot of Figure 6, we see that under the Non-IID setting, NetB-Baseline gives much higher training loss than the other models. Here we note that for the NetB-Baseline model, the global average pooling is applied after the last convolutional layer, and the number of neurons in the first fully-connected layer thus becomes 256 · 256; on the other hand, NetB-Wider and NetB-Widest use the max pooling with stride 4 and 2, which make the number of neurons in that layer become 1024 · 256 and 4096 · 256, respectively (see also Appendix A.1 for their details). The experimental were shown that NetB-Baseline has notably lower test accuracy (see Table 4). We additionally remark that for NetBBaseline, very high losses are observed under the IID setting, and their values even are greater than in the non-IID case; however, note that one have to be aware that local updates are extremely easy to be overfitted to each training dataset under non-IID data environments, thus the converged training losses being high is more critical than the IID cases. The middle and the right plot of the figure show the excessive training loss under the non-IID setting when applying the weight decay factor of 0.0005 and the data augmentation, respectively. In the cases of the high level of weight decay, the severe performance degradation appears compared to when the levels are low (i.e., ≤ 0.0001) as already discussed. 
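Returning briefly to the loss-landscape comparison of Figure 5: the probe presumably takes the form L(α, β) = L(θ* + αδ + βγ), i.e., the training loss evaluated on a two-dimensional slice through the trained parameters θ* along direction vectors δ and γ. A minimal sketch of such a probe is given below; the random, norm-matched directions and the quadratic toy loss are assumptions (the cited visualization method normalizes directions filter-wise), so this is an illustration rather than the exact procedure used for the figure.

```python
import numpy as np

def loss_surface(loss_fn, theta_star, alphas, betas, seed=0):
    """Evaluate L(alpha, beta) = loss_fn(theta* + alpha*delta + beta*gamma)
    along two random directions scaled to the norm of theta*."""
    rng = np.random.default_rng(seed)
    delta = rng.normal(size=theta_star.shape)
    gamma = rng.normal(size=theta_star.shape)
    delta *= np.linalg.norm(theta_star) / np.linalg.norm(delta)
    gamma *= np.linalg.norm(theta_star) / np.linalg.norm(gamma)
    return np.array([[loss_fn(theta_star + a * delta + b * gamma) for b in betas]
                     for a in alphas])

# Toy usage with a quadratic stand-in for the training loss.
theta = np.ones(1000)
grid = loss_surface(lambda w: float(w @ w), theta,
                    alphas=np.linspace(-1, 1, 11), betas=np.linspace(-1, 1, 11))
```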
In addition, we observed that with Nesterov momentum SGD, the data augmentation yields a diminishing return in test accuracy (i.e., with the data augmentation we can achieve +3.36% under IID, but −0.16% is obtained under non-IID, compared to when it is not applied); with Adam the degree of the diminishment becomes higher (refer to Table 12 in the appendix). In the data augmentation cases, judging from that the 10 Based on , the visualization of loss surface was conducted by L(α, β) = (θ * + αδ + βγ), where θ * is a center point of the model parameters, and δ and γ is the orthogonal direction vectors. parameter divergence values are not so different between with and without it, we can identify that the performance degradation stems from the high training loss (see Figures 30 and 31 in the appendix). Here we additionally note that unlike on the CIFAR-10, in the experiments on SVHN it was seen that the generalization effect of the data augmentation is still valid in test accuracy (see Table 12). In this paper, we explored the effects of various hyperparameter optimization strategies for optimizers, network depth/width, and regularization on federated learning of deep networks. Our primary concern in this study was lied on non-IID data, in which we found that under non-IID data settings many of the probed factors show somewhat different behaviors compared to under the IID setting and vanilla training. To explain this, a concept of the parameter divergence was utilized, and its origin was identified both empirically and theoretically. We also provided the internal reasons of our observations with a number of the experimental cases. In the meantime, the federated learning has been vigorously studied for decentralized data environments due to its inherent strength, i.e., high communication-efficiency and privacy-preservability. However, so far most of the existing works mainly dealt with only IID data, and the research to address non-IID data has just entered the beginning stage very recently despite its high real-world possibility. Our study, as one of the openings, handles the essential factors in the federated training under the non-IID data environments, and we expect that it will provide refreshing perspectives for upcoming works. A EXPERIMENTAL DETAILS In the experiments, we consider CNN architectures, as illustrated in Figure 7. In the network configurations, three groups of 3 × 3 convolutional layers are included that have 16 · m, 128, and 256 output channels, respectively; n denotes the number of the layers in each convolutional group. The first two groups are followed by 3 × 3 max pooling with stride 2; the last convolutional layer is followed by either the 3 × 3 max pooling with stride s or the global average pooling. In the case of fully-connected layers, we use two types of the stacks: (i) three layers, of which the output sizes are 256 · u, 256, and 10, respectively; and (ii) a single layer, of which the output size is 10. In addition, we use the ReLU and the softmax activation for the hidden weight layers and the output layer, respectively. Table 6 summarizes the network models used in the experiments. In the experiments, we initialize the network models to mostly follow the truncated normal distribution with a mean of 0 based on , however we fix the standard deviation to 0.05 for the first convolutional group and the last fully-connected layer. For training, minibatch stochastic optimization with cross-entropy loss is considered. 
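A runnable PyTorch rendering of a NetA-Baseline-style model, following the description above (three 3×3 convolutional layers with 64, 128, and 256 output channels, each followed here by 3×3 max pooling with stride 2, then fully-connected layers of sizes 512, 256, and 10 with ReLU hidden activations), is sketched below. The padding, the pooling stride after the last convolution, and the 32×32 input size are assumptions needed to fix the tensor shapes; the authors' TensorFlow implementation may differ in these details.

```python
import torch
import torch.nn as nn

net_a_baseline = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),                 # 32x32 -> 15x15
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),                 # 15x15 -> 7x7
    nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),                 # 7x7 -> 3x3
    nn.Flatten(),
    nn.Linear(256 * 3 * 3, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 10),                        # softmax is applied by the loss
)

logits = net_a_baseline(torch.randn(2, 3, 32, 32))   # -> shape (2, 10)
```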
Specifically, we use pure SGD, Nesterov momentum SGD , and Adam as optimization methods; initial learning rates are set to 0.05, 0.01, and 0.001, respectively for each optimizer. We drop the learning rate by 0.1 at 50% and 75% of the total training iterations, respectively. Regarding the environmental configurations, we predetermine each of learners' local training dataset in a random seed; the training examples are allocated so that they do not overlap between the learners. To report the experimental , we basically considered to run the trials once, but as for unstable ones in the preliminary tests, we chose the middle of several runs. In every plot, the values are plotted at each round. In relation to the federated learning under non-IID data, so far there have been several works for providing theoretical bounds to explain how does the degree of the non-IIDness of decentralized data affect the performance, with respect to its degree (e.g., ). Inspired by them, here we further study how does the non-IIDness make the model parameters of each learner diverged. In this analysis, we consider training deep networks for multi-class classification. Based on the notations in Algorithm 1, the SGD update of learner k at round t + 1 is given as where f q (x; w) is the posterior probability for class q ∈ Q (Q is the label space), obtained from model parameters w with data examples (x, y), and p k (y = q) is the probability that the label of a data example in P k is q. In this equation, w is the model parameters after the τ -th local iterations in the round t + 1 (R is the number of local iterations of each learner per round). Herein we note that w t+0 k (w t) is the global model parameters received from the server at the round t + 1; we use the term to distinguish it from the term w t k (which indicates the local update that has sent back to the server at round t). Then, by the linearity of the gradient, we obtain where d q denote the neurons, in the (dense) output layer of the model w, that are connected to the output node for class q. with the fixed k. At round t+1, suppose that for learner where (a q). Then, we can get From this, we can identify that the parameter difference, with the fixed q. At round t+1, suppose that for class q, Then, similar with Equation 1, we can have C THE COMPLETE EXPERIMENTAL In this section we provide our complete experimental . Before the main statement, we first note that in the following figures, C ij denotes the j-th convolutional layer in the i-th group, and F j denotes the j-th fully-connected layer (in relation, refer to Appendix A.1). In addition, we remind that in this paper "vanilla" training refers to non-distributed training with a single machine, using the whole data examples; for the vanilla training, we trained the networks for 100 epochs. Here we investigate the effect of optimizers. We importantly note that both momentum SGD and Adam require the additional variables related to momentum as well as weights and biases; the terms (optimizer name)-A and (optimizer name)-WB are used to refer to the parameter averaging being performed for all the variables and only for weights & biases, respectively. The experimental are provided in Table 7 and Figures 10 and 11. From the table, interestingly we can notice that under the non-IID data setting, there exists a huge performance gap between Adam-A and Adam-WB (≈ 7%), unlike the momentum SGD trials. 
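The displayed equations of the theoretical analysis of parameter divergence above (the local SGD update under cross-entropy loss and its class-wise decomposition) are missing. A plausible form, consistent with the stated ingredients (the posterior f_q(x; w), the class prior p_k(y = q), and the linearity of the gradient), is the following; it should be read as a reconstruction under those assumptions rather than the authors' exact display.

```latex
% Local SGD update of learner k at local iteration \tau of round t+1:
w_{k}^{t,\tau} \;=\; w_{k}^{t,\tau-1}
  - \eta \, \nabla_{w}\!\left[ -\sum_{q \in Q} p_{k}(y{=}q)\,
      \mathbb{E}_{x \mid y = q}\!\left[ \log f_{q}\!\left(x;\, w_{k}^{t,\tau-1}\right) \right] \right],
  \qquad \tau = 1, \dots, R .
```

Under this form, the gradient reaching the output-layer neurons d_q connected to class q is weighted by p_k(y = q), which is consistent with the earlier observation that heavily skewed local label distributions concentrate large gradients on different subsets of neurons across learners.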
At the initial steps of this study, we had thought that the poor performance of Adam-WB would be from the following: Since Adam requires twice as many momentum variables as momentum SGD, extreme parameter divergence could appear if they are not averaged together with weights and biases. However, unlike our expectations, the parameter divergence values under the Adam-WB was seen to be similar or even smaller than under Adam-A. Nevertheless, we can observe the followings for the non-IID cases: First, the parameter divergence of Adam-WB in F 3 is bigger than that of Adam-A at the very early rounds (as we expected), but soon it is abnormally sharply reduced over rounds; this can be considered the steep fall phenomenon. Second, Adam-WB leads to higher training loss of each learner. We guess that these two caused the severe degradation of Adam-WB in test accuracy. Here we investigate the effect of network depth. Since deepening networks also indicates that there becomes having more parameters to be averaged in the considered federated learning algorithm, we had predicted especially under non-IID data settings that depending on their depth, it would yield bigger parameter divergence in the whole model and the consequent diminishing returns compared to under the vanilla training and the IID data setting; the test accuracy show it as expected (see Table 8). 11 Moreover, it is also seen from Figure 12 that parameter divergence increases also qualitatively (i.e., in a layer level) under the non-IID data setting, as the number of convolutional layers increases. Note that for C 21 and C 31, the divergence pattern is ed as opposed to that of C 11 and F 3; however, the values of C 11 and F 3 would be more impactful as mentioned in Footnote 8. We additionally remark from the figure that the sharp reduction of parameter divergence (in the convolutional layers) at the very early rounds when using NetA-Deepest indicates the parameter averaging algorithm did not work properly. Correspondingly, the test accuracy values in the early period were seen to be not much different from the initial one. Following the previous subsection, from now on we investigate the effect of network width. Contrary to the in the Section C.2, it is seen from Table 9 that widening networks provides positive effects for the considered federated learning algorithm under the non-IID data setting. Especially, one can see that compared to the max pooling trials, while the global average pooling yields higher test accuracy in the vanilla training (with the minibatch size of 50), its performance gets significantly worse under the non-IID data setting (remind that NetB-Baseline and NetC-Baseline use the global average pooling after the last convolutional layer). Focusing on the NetC models, we here make the following observations for the non-IID data setting from Figures 15, 18, and 21: First, the considered federated learning algorithm provides bigger parameter divergence in F 1 as its width decreases (note that each input size of F 1 is 256, 1024, and 4096 for NetC-Baseline, NetC-Wider, and NetC-Widest, respectively), especially during the beginning rounds (e.g., for the NMom-A case, until about 50 rounds). Unlike in Section C.2, here we can identify that even though the parameter size of is the smallest under the global averaging pooling, it rather yields the biggest qualitative parameter divergence. Second, the steep fall phenomenon appears in F 1 for the NetC- Baseline case. 
Third, the global average pooling gives too high training loss of each learner. All the three observations fit well into the failure of the global average pooling. We additionally note that when using NetC-Baseline, the under the IID data setting shows very high loss values; this leads to diminishing returns for the pure SGD and the NMom-A cases, compared to the vanilla training with the minibatch size of 50. However, the corresponding degradation rate is seen to be much higher under the non-IID data setting. This is because the local updates are extremely easy to be overfitted to the training data under the non-IID data setting; thus the converged training losses being high becomes much more critical. Here we investigate the effect of weight decay. From Table 10, it is seen that under the non-IID data setting we should apply much smaller weight decay for the considered federated learning algorithm than under the vanilla training or the IID data setting. For its internal reason, Figures 22, 23, and 24 show that under the non-IID data setting, the considered federated learning algorithm not only converges to too high training loss (of each learner) but also causes excessive parameter divergence when the weight decay factor is set to 0.0005. Here we note that if a single iteration is considered for each learner's local update per round, the corresponding parameter divergence will be of course the same without regard to degree of weight decay. However, in our experiments, the great number of local iterations per round (i.e., 100) made a big difference of the divergence values under the non-IID data setting; this eventually yielded the accuracy gap. In addition, we further observe under the non-IID data setting that even with the weight decay factor of 0.0005, the test accuracy increases similarly with its smaller values at very early rounds, in which the norm values of the weights are relatively much smaller. Moreover we also conducted additional experiments for the related regularization techniques, FedProx (b). Under the FedProx, in order to make local updates do not deviate excessively from the current global model parameters, at each round t each learner uses the following surrogate loss function that adds a proximal term to the original objective function: (w) + µ 2 w − w t−1 2. Figure 8 shows the experimental ; as seen from the figure, in our implementation FedProx did not provide dramatic improvement in final accuracy, but we can observe that it could yield not only lower parameter divergence but also faster convergence speed (especially before the first learning rate drop). One can find the corresponding complete in Figure 25. Here we investigate the effect of Batch Normalization. For its implementations into the considered federated learning algorithm, we let the server get the proper moving variance by − E φ 2 at each round, by allowing each learner k collect E φ 2 k as well). It is natural to take this strategy especially under the non-IID data setting; otherwise, a huge problem would arise due to bad approximation of the moving statistics. Also, it is additionally remarked that for Batch Renormalization we simply used α = 0.01, r max = 2, and d max = 2 in the experiments (see for the description of the three hyperparameters). 
Table 11 that under the non-IID data setting, the performance significantly gets worse if Batch Normalization is employed to the baseline; this would be rooted in that the dependence of batch-normalized hidden activations makes each learner's update too overfitted to the distribution of their local training data. The consequent bigger parameter divergence is observed in Figures 26, 27, and 28. On the contrary, Batch Renormalization, by relaxing the dependence, yields a better outcome; although its parameter divergence is seen greater in some layers than under Batch Normalization, it does not lead to the steep fall phenomenon while the Batch Normalization does in F 3. Nevertheless, the Batch Renormalization was still not able to exceed the performance of the baseline due to the significant parameter divergence. In the implementation of data augmentation, we used random horizontal flipping, brightness & contrast adjustment, and 24×24 cropping & resizing in the pipeline. From Table 12, we identify that under the non-IID data setting, the data augmentation yields diminishing returns for the PMom-A, NMom-A, and Adam-A cases on CIFAR-10, compared to under the IID data setting; under Adam-A, especially it gives even a worse outcome. However, it is seen that the corresponding parameter divergence is almost similar between with and without the data augmentation (refer to Figures 30 and 31). Instead, we are able to notice that the diminishing outcomes from the data augmentation had been eventually rooted in local updates' high training losses. Here we note that in the pure SGD case, very high training loss values are found as well under the IID data setting when the data augmentation was applied (see Figure 29); this leads to lower test accuracy compared to the baseline, 83.09 (83.20) 82.01 (82.11) 76.63 (76.68) similar to under the non-IID cases. Also, it is additionally noted that unlike on the CIFAR-10, in the experiments on SVHN it was observed that the generalization effect of the data augmentation is still valid in test accuracy. In the experiments, we employed Dropout with the rates 0.2 and 0.5 for convolutional layers and fully-connected layers, respectively. The show that under the non-IID data setting, the Dropout provides greater parameter divergence compared to the baselines, especially in F 3 (see Figures 32, 33, and 34); this leads to diminishing returns for the PMom-A and NMom-A cases on CIFAR-10, compared to under the IID data setting. However, we can observe from Table 13 that the effect of the Dropout is still maintained positive for the rest of the cases. As remarked in , since the federated learning do not require centralizing local data, data unbalancedness (i.e., each learner has various numbers of local data examples) would be also naturally assumed in the federated learning along with non-IIDness. In relation, we also conducted the experiments under the unbalanced cases. Table 14 summarizes the considered unbalanced data settings; they were constructed similarly to (b) so that the number of data examples per learner follows a power law. The experimental under the unbalanced settings are summarized in Table 15. From the table, it is observed that our findings in Section 4.1 are still valid under the unbalanced data settings. 
In addition, we can also see that for the unbalanced cases, the performance under the Non-IID settings is mostly worse than that of the balanced cases, while the two show similar values under the IID data setting; this indicates that the negative impact of data unbalancedness is not as great as that of the non-IIDness, but it becomes much bigger when the two are combined.
We investigate the internal reasons behind our observations, namely the diminishing effects of well-known hyperparameter optimization methods on federated learning over decentralized non-IID data.
305
scitldr
Deep learning models are known to be vulnerable to adversarial examples. A practical adversarial attack should require as little as possible knowledge of attacked models T. Current substitute attacks need pre-trained models to generate adversarial examples and their attack success rates heavily rely on the transferability of adversarial examples. Current score-based and decision-based attacks require lots of queries for the T. In this study, we propose a novel adversarial imitation attack. First, it produces a replica of the T by a two-player game like the generative adversarial networks (GANs). The objective of the generative model G is to generate examples which lead D returning different outputs with T. The objective of the discriminative model D is to output the same labels with T under the same inputs. Then, the adversarial examples generated by D are utilized to fool the T. Compared with the current substitute attacks, imitation attack can use less training data to produce a replica of T and improve the transferability of adversarial examples. Experiments demonstrate that our imitation attack requires less training data than the black-box substitute attacks, but achieves an attack success rate close to the white-box attack on unseen data with no query. Deep neural networks are often vulnerable to imperceptible perturbations of their inputs, causing incorrect predictions . Studies on adversarial examples developed attacks and defenses to assess and increase the robustness of models, respectively. Adversarial attacks include white-box attacks, where the attack method has full access to models, and black-box attacks, where the attacks do not need knowledge of models structures and weights. White-box attacks need training data and the gradient information of models, such as FGSM (Fast Gradient Sign Method) , BIM (Basic Iterative Method) (a) and JSMA (Jacobian-based Saliency Map Attack) (b). However, the gradient information of attacked models is hard to access, the white-box attack is not practical in real-world tasks. Literature shows adversarial examples have transferability property and they can affect different models, even the models have different architectures (; a;). Such a phenomenon is closely related to linearity and over-fitting of models (; ; ; Tramèr et al., 2018). Therefore, substitute attacks are proposed to attack models without the gradient information. Substitute black-box attacks utilize pre-trained models to generate adversarial examples and apply these examples to attacked models. Their attack success rates rely on the transferability of adversarial examples and are often lower than that of white-box attacks. Black-box score-based attacks a; b) do not need pre-trained models, they access the output probabilities of the attacked model to generate adversarial examples iteratively. Black-box decisionbased attacks (; ;) require less information than the score-based attacks. They utilize hard labels of the attacked model to generate adversarial examples. Adversarial attacks need knowledge of models. However, a practical attack method should require as little as possible knowledge of attacked models, which include training data and procedure, models weights and architectures, output probabilities and hard labels . The disadvantage of current substitute black-box attacks is that they need pre-trained substitute models trained by the same dataset with attacked model T (; ; a) or a number of images to imitate the outputs of T to produce substitute networks. 
Actually, the prerequisites of these attacks are hard to obtain in real-world tasks. The substitute models trained by limited images hardly generate adversarial examples with well transferability. The disadvantage of current decision-based and score-based black-box attacks is that every adversarial example is synthesized by numerous queries. Hence, developing a practical attack mechanism is necessary. In this paper, we propose an adversarial imitation training, which is a special two-player game. The game has a generative model G and a imitation model D. The G is designed to produce examples to make the predicted label of the attacked model T and D different, while the imitation model D fights for outputting the same label with T. The proposed imitation training needs much less training data than the T and does not need the labels of these data, and the data do not need to coincide with the training data. Then, the adversarial examples generated by D are utilized to fool the T like substitute attacks. We call this new attack mechanism as adversarial imitation attack. Compared with current substitute attacks, our adversarial imitation attack requires less training data. Score-based and decision-based attacks need a lot of queries to generate each adversarial attack. The similarity between the proposed method and current score-based and decision-based attacks is that adversarial imitation attack also needs to obtain a lot of queries in the training stage. The difference between these two kinds of attack is our method do not need any additional queries in the test stage like other substitute attacks. Experiments show that our proposed method achieves state-of-the-art performance compared with current substitute attacks and decision-based attack. We summarize our main contributions as follows: • The proposed new attack mechanism needs less training data of attacked models than current substitute attacks, but achieves an attack success rate close to the white-box attacks. • The proposed new attack mechanism requires the same information of attacked models with decision attacks on the training stage, but is query-independent on the testing stage. Adversarial Scenes Adversarial attacks happen in two scenes, namely the white-box and the black-box settings. In the white-box settings, the attack method has complete access to attacked models, such as models internal, training strategy and data. While in the black-box settings, the attack method has little knowledge of attacked models. The black-box attack utilizes the transferability property of adversarial examples, only needs the labeled training data, but its attack success rate is often lower than that of the white-box attack method if attacked models have no defense. Actually, attack methods requiring lots of prior knowledge of attacked models are difficult to apply in practical applications . Adversarial Attacks Several methods for generating adversarial examples were proposed. proposed a one-step attack called FGSM. On the basis of the FGSM, Kurakin et al. (2017a) came up with BIM, an iterative optimization-based attack. Another iterative attack called DeepFool aims to find an adversarial example that would cross the decision boundary. Carlini & Wagner (2017b) provided a stronger attack by simultaneously minimizing the perturbation and the L F norm of the perturbation. generated adversarial examples through decoupling the direction and the norm of the perturbation, which is also constrained by the L F norm. 
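As a concrete reference for the gradient-based attacks named above, a minimal PyTorch sketch of FGSM and its iterative variant BIM follows. It assumes a differentiable classifier that returns logits and inputs scaled to [0, 1]; it is illustrative only and not the implementation used in the experiments.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM: x' = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def bim(model, x, y, eps, alpha, steps):
    """Iterative FGSM (BIM): small steps, projected back into the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv
```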
showed that targeted adversarial examples hardly have transferability, they proposed ensemble-based methods to generate adversarial examples having stronger transferability. proposed a practical black-box attack which accesses the hard label to train substitute models. For score-based attacks, proposed the zeroth-order based attack (ZOO) which uses gradient estimates to attack a black-box model. Ilyas et al. (2018b) improves the way to estimate gradients. proposed a simple black-box score-based attack on DCT space. For decision-based attacks, first proposed decision-based attacks which do not rely on gradients. and improve the query efficiency of the decision-based attack. Adversarial Defenses To increase the robustness of models, methods for defending against adversarial attacks are being proposed. Adversarial training (; ; b; Tramèr et al., 2018) can be considered as a kind of data augmentation. It applies adversarial examples to the training data, ing in a robust model against adversarial attacks. Defenses based on gradient masking (Tramèr et al., 2018;) provide robustness against optimization-based attacks. Random transformation (a; ;) on inputs of models hide the gradient information, eliminate the perturbation. proposed thermometer encoding based on one-hot encoding, it applied a nonlinear transformation to inputs of models, aiming to reduce the linearity of the model. However, most defenses above are still unsafe against some attacks (a;). showed that defenses based on gradient masking actually are unsafe against attacks. Instead, some researchers focus on detecting adversarial examples. Some use a neural network (; ;) to distinguish between adversarial examples and clean examples. Some achieve statistical properties (; ; ; ; to detect adversarial examples. In this section, we introduce the definition of adversarial examples and then propose a new attack mechanism based on adversarial imitation training. X refers to the samples from the input space of the attacked model T, y true refers to the true labels of the samples X. T (y|X, θ) is the attacked model parameterized by θ. For a non-targeted attack, the objective of the adversarial attack can be formulated as: where the and r are perturbation of the sample and upper bound of the perturbation, respectively. To guarantee that is imperceptible, r is set to a small value in applications. X = X + are the adversarial examples which can fool the attacked model T. θ refers to the parameters of the model T. For white-box attacks, they obtain gradient information of T to generate adversarial examples, and attack the T directly. For substitute attacks, they generate adversarial examples from a substitute model T, and transfer the examples to attack the T. The key point of a successful attack is the transferability of the adversarial examples. To improve the transferability of adversarial examples and avoid output query, we utilize an imitation network to imitate the characteristics of the T by accessing its output labels to improve the transferability of adversarial examples, which are generated by the imitation network. After the adversarial imitation training, adversarial examples generated by imitation network do not need additional query. In the next subsection, we introduce the proposed adversarial imitation training and imitation attack. Inspired by the generative adversarial network (GAN), we use the adversarial framework to copy the attacked model. 
We propose a two-player game based adversarial imitation training to replicate the information of attacked model T, which is shown in Figure 1. To learn the characteristics of T, we define an imitation network D, and train it using disturbed input X = G(X) + X and the corresponding output label y T (X) of attacked model. X denotes training samples here. The role of G is to create new samples that y T (X) = y D (X). Thus, D, G and T play a special two-player game. To simplify the expression but without loss of generality, we just analyze the case of binary classification. The value function of players can be presented as: The proposed adversarial imitation attack. For the training stage, the objective of G is to generate samples X = G(X) + X and let For the testing stage, the imitation model D is utilized to generate adversarial examples to attack T. Note that the T is equivalent to the referee of this game. The two players G and D optimize their parameters based on the output y T (X). We suppose that adversarial perturbation ≤ r 1, and, our imitation attack will have the same success rate as the white-box attack without the gradient information of T. Therefore, for a well-trained imitation network, adversarial examples generated by D have strong transferability for T. A proper upper bound (r 2 ≥ r 1) of G(X) is the key points for training an efficient imitation network D. Especially in targeted attacks (T outputs the specified wrong label), if the characteristics of D is more similar to that of the attacked model, the transferability of adversarial examples will be stronger. In the training stage, the loss function of D is J D = V G,D. Because G is more hard to train than D, sometimes the ability of D is much stronger than G, so the loss of G fluctuates during the training stage. In order to maintain the stability of training, the loss function of G is designed as J G = e −V G,D. Therefore, the global optimal imitation network D is obtained if and only if ∀ X, D(X) = y T (X). At this point, J D = 0 and J G = e 0 = 1. The loss of G is always in a controllable value in training stage. As we discussed before, if r 1 ≤ r 2, the adversarial examples generated by a well-trained D have strong transferability for T. Because the attack perturbation of adversarial examples is set to be a small value, we constrain the G(X) in training stage to limit the search space of G, which can reduce the number of queries efficiently. For training methodology, we alternately train the G and D in every mini-batch, and use L 2 penalty to constrain the search space of G. The procedure is shown in algorithm 1. Algorithm 1 Mini-batch stochastic gradient descent training of imitation network. 1: for number of training iterations do 2: Sample mini-batch of m examples{X,..., X (m) } from training set. Update the imitation model by descending its loss function: 4: Update the generative model by descending its loss function: 6: e −V G,D + α G(X). In adversarial attacks, when the optimal D is obtained, the adversarial examples generated by D are utilized to attack T. In this subsection, we introduce the settings for our experiments. Datasets: we evaluate our proposed method on MNIST and CIFAR-10 . Because we need to use data different with the training set of T to train the imitation network, we divided the test sets (10k) from MNIST and CIFAR-10 into two parts. One part contains 9500 images for training and another part contains 500 images for evaluating the attack performance. 
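The pseudocode of the imitation training above was flattened; a minimal PyTorch sketch of one alternating update is given below. The value function V is not fully recoverable from the text, so the sketch instantiates it as the cross-entropy between D's prediction and T's hard label on the perturbed input, which is consistent with the stated objectives (D agrees with T, G descends e^{-V}, and the magnitude of G(X) is penalized with weight α); treat this as one plausible instantiation rather than the authors' exact losses.

```python
import torch
import torch.nn.functional as F

def imitation_training_step(G, D, T, x, opt_D, opt_G, alpha=1e-3):
    """One alternating update of the imitation game; only T's hard labels are queried."""
    with torch.no_grad():
        x_pert = (x + G(x)).clamp(0, 1)
        y_T = T(x_pert).argmax(dim=1)

    # D step: agree with T on the perturbed inputs (J_D = V).
    loss_D = F.cross_entropy(D(x_pert), y_T)
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # G step: push D to disagree with T (J_G = exp(-V) + alpha * ||G(x)||^2).
    pert = G(x)
    x_pert = (x + pert).clamp(0, 1)
    with torch.no_grad():
        y_T = T(x_pert).argmax(dim=1)
    V = F.cross_entropy(D(x_pert), y_T)
    loss_G = torch.exp(-V) + alpha * pert.pow(2).mean()
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```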
Model architecture and attack method: in order to get the information of the attacked model T as little as possible, we only utilize the output label (not the probability) of the T to train the imitation network. The imitation network has no prior knowledge of the attacked model, which means it does not load any pre-trained model in experiments. For the experiments on MNIST, we design 3 different network architectures with different capacity (small network, medium network and large network) for evaluating the performance of our imitation attack with models having different capacity. We utilize the pre-trained medium network and VGG-16 as the attacked model on MNIST and CIFAR-10, respectively. In order to compare the success rate of the proposed imitation attack with current substitute attack, we utilize 4 attack methods, FGSM, BIM, projected gradient descent (PGD) , C&W to generate adversarial examples. For testing, we use AdverTorch library to generate adversarial examples. On the other hand, for comparing the performance of our method with current decision-based attacks and score-based attacks, we utilize Boundary Attack , HSJA Attack , SimBA-DCT Attack as comparison methods. Note that score-based attacks require output probabilities of T, which contain much more information than labels. Evaluation criteria: the goals of non-targeted attack and targeted attack are to lead the attacked model to output a wrong label and a specific wrong label, respectively. In non-targeted attacks, we only generate adversarial examples on the images classified correctly by the attacked model. In targeted attacks, we only generate adversarial examples on the images which are not classified to the specific wrong labels. The success rates of adversarial attack are calculated by n/m, where n and m are the number of adversarial examples which can fool the attacked model and the total number of adversarial examples, respectively. In this subsection, we utilize the proposed adversarial imitation training to train imitation models and evaluate the performance in terms of attack success rate. To compare our method with substitute attack, We utilize the medium network and VGG-16 as attacked models on MNIST and CIFAR-10, respectively. Then we use the same train dataset to obtain a pre-trained large network (the architecture is also in Table 9) and ResNet-50 to generate substitute adversarial examples. We obtain imitation networks by using the proposed adversarial imitation training. The large network and ResNet-16 are used as the model architectures of imitation networks on MNIST and CIFAR-10, respectively. The imitation models are only trained by 9500 samples of the test dataset, which are much less than the training sets of MNIST (60000 samples) and CIFAR-10 (50000 samples). The of experiments are evaluated on the other 500 samples of the test dataset and are shown in Table 1 and Table 2. The success rates of the proposed imitation attack far exceed the success rates of the substitute attack in all experiments. The experiments of Table 1 and Table 2 show that the proposed new attack mechanism needs less training images than substitute attacks, but achieves an attack success rate close to the white-box attack. The experiments indicate that the adversarial samples generated by a well-trained imitation model have a higher transferability than the substitute attack. To compare our method with decision-based and score-based attacks, we evaluate the performance of these attacks. 
We utilize 9500 images from the test set of MNIST and CIFAR-10 to train the imitation network, and use other 500 images from the test set as unseen data for our method. The other decision-based and score-based attacks are evaluated on the test set of MNIST and CIFAR-10 dataset. Note that score-based attacks require much more information (output probabilities) than decision-based attacks (output labels). The on MNIST and CIFAR-10 are shown in Table 3 and Table 4, respectively. Because our imitation attack only needs queries on the training stage, we evaluate performances of our method on its train and unseen sets. We set the iteration of adversarial imitation training to 1800, so the average number of query per image is 1800. We utilize BIM attack to generate adversarial examples as our imitation attack in this experiment. This experiment shows that our imitation attack achieves state-of-the-art performance in decisionbased methods. Even compared with the score-based attack, our imitation attack outperforms it in terms of perturbation distance and attack success rate. More importantly, it also obtains good on unseen data, which indicates our imitation attack can be applied to query-independent scenarios. In the above subsection, we utilize a more complex network than the attacked model to replicate the attacked model. In this subsection, we study the impact of model capacity on the ability of imitation. To evaluate the imitation performance of the network with less capacity than the attacked model, we train the small network, medium network, and large network to imitate the pre-trained medium network on MNIST dataset, and train VGG-13, VGG-16 and ResNet-50 to imitate VGG-16 on CIFAR-10. The performance of models with different capacities are shown in Table 5 and 6. The show that an imitation model with a lower capacity than the attacked model can also achieve a good imitation performance. Attack success rates of all imitation models far exceed the substitute attacks in Table 1 and 2. Most experiments show that the larger capacity the imitation network has, the higher attack success rate it can achieve (FGSM, BIM, PGD in MNIST and BIM in CIFAR-10). However, some experiments show models having a larger capacity do not have a higher attack success rate (FGSM and PGD in CIFAR-10). We surmise that the performance of imitating an attacked model is not only influenced by the capacity of the imitation model D, but also influenced by the capability of the G. In this subsection, we only use 200 images (20 samples per class) to train the imitation networks and discuss characteristics of the model replication. We train the imitation network using 200 images from MNIST and CIFAR-10 test set, and compare its performance with Practical Attack on other images from MNIST and CIFAR-10 test set. The are shown in Table 7 and 8. The practical attack uses the output labels of attacked models to train substitute models under the scenario, which they can make an infinite number of queries for attacked models. It is hard to generate adversarial examples to fool the attacked models by limited training samples. Note that what the substitute models imitate is the response for perturbations of the attacked model. A substitute model that can generate adversarial examples with a higher attack success rate is a better replica. 
Our adversarial imitation attack can produce a substitute model with much higher classification accuracy and attack success rate than Practical Attack for both non-targeted and targeted attacks in this scenario (with an infinite number of queries). We also show the performances of these two methods with limited query numbers in Figure 6. Additionally, the imitation model with low classification accuracy can still produce adversarial examples with a high attack success rate. Practical adversarial attacks should have as little knowledge of the attacked model T as possible. Current black-box attacks need numerous training images or queries to generate adversarial images. In this study, to address this problem, we combine the advantages of current black-box attacks and propose a new attack mechanism, the imitation attack, to replicate the information of T and generate adversarial examples that fool deep learning models efficiently. Compared with substitute attacks, the imitation attack requires much less data than the training set of T and does not need the labels of the training data, yet the adversarial examples it generates have stronger transferability to T. Compared with score-based and decision-based attacks, our imitation attack only needs the same information as decision-based attacks, but achieves state-of-the-art performance and is query-independent at the testing stage. Experiments showed the superiority of the proposed imitation attack. Additionally, we observed that the deep learning classification model T can easily be stolen with a limited number of unlabeled images, far fewer than the training images of T. In future work, we will evaluate the performance of the proposed adversarial imitation attack on tasks other than image classification. A NETWORK ARCHITECTURES: the network architectures are shown in Figure 2 and Figure 3. The experiments show that adversarial examples generated by the proposed imitation attack can fool the attacked model with a small perturbation.
A novel adversarial imitation attack to fool machine learning models.
306
scitldr
Stochastic Gradient Descent (SGD) methods using randomly selected batches are widely-used to train neural network (NN) models. Performing design exploration to find the best NN for a particular task often requires extensive training with different models on a large dataset, which is very computationally expensive. The most straightforward method to accelerate this computation is to distribute the batch of SGD over multiple processors. However, large batch training often times leads to degradation in accuracy, poor generalization, and even poor robustness to adversarial attacks. Existing solutions for large batch training either do not work or require massive hyper-parameter tuning. To address this issue, we propose a novel large batch training method which combines recent results in adversarial training (to regularize against ``sharp minima'') and second order optimization (to use curvature information to change batch size adaptively during training). We extensively evaluate our method on Cifar-10/100, SVHN, TinyImageNet, and ImageNet datasets, using multiple NNs, including residual networks as well as compressed networks such as SqueezeNext. Our new approach exceeds the performance of the existing solutions in terms of both accuracy and the number of SGD iterations (up to 1\% and $3\times$, respectively). We emphasize that this is achieved without any additional hyper-parameter tuning to tailor our method to any of these experiments. Finding the right NN architecture for a particular application requires extensive hyper-parameter tuning and architecture search, often times on a very large dataset. The delays associated with training NNs are often the main bottleneck in the design process. One of the ways to address this issue is to use large distributed processor clusters; however, to efficiently utilize each processor, the portion of the batch associated with each processor (sometimes called the mini-batch) must grow correspondingly. In the ideal case, the hope is to decrease the computational time in proportion to the increase in batch size, without any drop in generalization quality. However, large batch training has a number of well-known drawbacks. These include degradation of accuracy, poor generalization, and poor robustness to adversarial perturbations BID17 BID36. In order to address these drawbacks, many solutions have been proposed BID14 BID37 BID7 BID29 BID16. However, these methods either work only for particular models on a particular dataset, or they require massive hyper-parameter tuning, which is often times not discussed in the presentation of results. Note that while extensive hyper-parameter tuning may result in good tables, it is antithetical to the original motivation of using large batch sizes to reduce training time. One solution to reduce the brittleness of SGD to hyper-parameter tuning is to use second-order methods. The full Newton method with line search is parameter-free, and it does not require a learning rate. This is achieved by using a second-order Taylor series approximation to the loss function, instead of a first-order one as in SGD, to obtain curvature information. BID25; BID34 BID2 show that Newton/quasi-Newton methods outperform SGD for training NNs. However, their results only consider simple fully-connected NNs and auto-encoders. A problem with second-order methods is that they can exacerbate the large batch problem, as by construction they have a higher tendency to get attracted to local minima as compared to SGD.
For these reasons, early attempts at using second-order methods for training convolutional NNs have so far not been successful. Ideally, if we could find a regularization scheme to avoid local/bad minima during training, this could resolve many of these issues. In the seminal works of El Ghaoui & BID9; BID33, a very interesting connection was made between robust optimization and regularization. It was shown that the solution to a robust optimization problem for least squares is the same as the solution of a Tikhonov regularized problem BID9. This was also extended to the Lasso problem in BID33. Adversarial learning/training methods, which are a special case of robust optimization methods, are usually described as a min-max optimization procedure to make the model more robust. Recent studies with NNs have empirically found that robust optimization usually converges to points in the optimization landscape that are flatter and are more robust to adversarial perturbation BID36. Inspired by these results, we explore whether second order information regularized by robust optimization can be used to do large batch size training of NNs. We show that both classes of methods have properties that can be exploited in the context of large batch training to help reduce the brittleness of SGD with large batch size training, thereby leading to significantly improved results. In more detail, we propose an adaptive batch size method based on curvature information extracted from the Hessian, combined with a robust optimization method. The latter helps regularize against sharp minima, especially during early stages of training. We show that this combination leads to superior testing performance, as compared to previously proposed methods for large batch size training. Furthermore, in addition to achieving better testing performance, we show that the total number of SGD updates of our method is significantly lower than state-of-the-art methods for large batch size training. We achieve these results without any additional hyper-parameter tuning of our algorithm (which would, of course, have helped us to tailor our solution to these experiments). Here is a more detailed itemization of the main contributions of this work: • We propose an Adaptive Batch Size method for SGD training that is based on second order information, computed by backpropagating the Hessian operator. Our method automatically changes the batch size and learning rate based on Hessian information. We state and prove a result showing that this method is convergent for a convex problem. More importantly, we empirically test the algorithm for important non-convex problems in deep learning and show that it achieves equal or better test performance, as compared to small batch SGD (we refer to this method as ABS). • We propose a regularization method using robust training by solving a min-max optimization problem. We combine the second order adaptive batch size method with recent results of BID36, which show that robust training can be used to regularize against sharp minima. We show that this combination of Hessian-based adaptive batch size and robust optimization achieves significantly better test performance with little computational overhead (we refer to this Adaptive Batch Size Adversarial method as ABSA). • We test the proposed strategies extensively on a wide range of datasets (Cifar-10/100, SVHN, TinyImageNet, and ImageNet), using different NNs, including residual networks.
Importantly, we use the same hyper-parameters for all of the experiments, and we do not perform any kind of tuning of our hyper-parameters to tailor our results. The empirical results show the clear benefit of our proposed method, as compared to the state-of-the-art. The proposed algorithm achieves equal or better test accuracy (up to 1%) and requires significantly fewer SGD updates (up to 5×). • We empirically show that we can use a block approximation of the Hessian operator (i.e. the Hessian of the last few layers) to reduce the computational overhead of backpropagating the second order information. This approximation is especially effective for deep NNs. While a number of recent works have discussed adaptive batch size or increasing batch size during training BID7 BID29 BID10 BID1, to the best of our knowledge this is the first paper to introduce Hessian information and adversarial training in adaptive batch size training, with extensive testing on many datasets. We believe that it is important for every work to state its limitations (in general, but in particular in this area). We were particularly careful to perform extensive experiments and repeated all the reported tests multiple times. We test the algorithm on models ranging from a few layers to hundreds of layers, including residual networks as well as smaller networks such as SqueezeNext. An important limitation is that second order methods have additional overhead for backpropagating the Hessian. Currently, most of the existing frameworks do not support (memory) efficient backpropagation of the Hessian (thus providing a structural bias against these powerful methods). However, the complexity of each Hessian matvec is the same as a gradient computation BID21. Our method requires the Hessian spectrum, which typically needs ten Hessian matvecs (for power method iterations to reach a tolerance of 1e-2). Thus, the benefits that we show in terms of testing accuracy and reduced number of updates do come at a cost (see Table 3 for details). We measure this additional overhead and report it in terms of wall clock time. Furthermore, we (empirically) show that this power iteration needs to be done only at the end of every epoch, thus significantly reducing the additional overhead. Another limitation is that our theory only holds for convex problems (under certain smoothness assumptions). Proving convergence for the non-convex setting requires more involved analysis. Recently, BID32 has provided interesting theoretical guarantees for AdaGrad in the non-convex setting. Exploring a similar direction for our method is of interest for future work. Another point is that an adaptive batch size prevents one from utilizing all of the processes, as compared to using a large batch throughout the training. However, a large data center can handle and accommodate a growing number of requests for processor resources, which could alleviate this. Optimization methods based on SGD are currently the most effective techniques for training NNs, and this is commonly attributed to SGD's ability to escape saddle-points and "bad" local minima BID5. The sequential nature of weight updates in synchronous SGD limits possibilities for parallel computing. In recent years, there has been considerable effort on breaking this sequential nature, through asynchronous methods or symbolic execution techniques BID20. A main problem with asynchronous methods is reproducibility, which, in this case, depends on the number of processes used (BID0).
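As a concrete illustration of the Hessian matvec and power-iteration procedure mentioned above, the following hedged PyTorch sketch estimates the top Hessian eigenvalue with roughly ten Hessian-vector products; restricting params to the last few layers gives the block-Hessian approximation discussed later. Names such as model, loss_fn and batch are placeholders, and this is not the authors' implementation.

```python
import torch

def hessian_vector_product(loss, params, vec):
    """Hv via double backprop: gradient of (grad(loss) . v) with respect to params."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(dot, params, retain_graph=True)

def top_eigenvalue(model, loss_fn, batch, n_iter=10, params=None):
    """Power iteration on the Hessian of the loss over one (large) batch."""
    # Passing only the last few layers' parameters gives the block-Hessian variant.
    params = params if params is not None else [p for p in model.parameters() if p.requires_grad]
    x, y = batch
    loss = loss_fn(model(x), y)
    vec = [torch.randn_like(p) for p in params]
    norm = torch.sqrt(sum((v ** 2).sum() for v in vec))
    vec = [v / norm for v in vec]
    eig = 0.0
    for _ in range(n_iter):                                   # ~10 matvecs in practice
        hv = hessian_vector_product(loss, params, vec)
        eig = sum((h * v).sum() for h, v in zip(hv, vec)).item()  # Rayleigh quotient
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv)) + 1e-12
        vec = [h / norm for h in hv]
    return eig
```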
Due to this issue, recently there have been attempts to increase parallelization opportunities in synchronous SGD by using large batch size training. With large batches, it is possible to distribute more efficiently the computations to parallel compute nodes BID11, thus reducing the total training time. However, large batch training often leads to sub-optimal test performance BID17 BID36 . This has been attributed to the observation that large batch size training tends to get attracted to local minima or sharp curvature directions, which are not robust to (possible) mismatch between training and testing curves BID17. A full understanding of this, however, remains elusive. There have been several solutions proposed for alleviating the problem with large batch size training. The first notable work here is BID14, where it was shown that by scaling the learning rate, it is possible to achieve the same testing accuracy for large batches. In particular, ResNet-50 model was tested on ImageNet dataset, and it was shown that the baseline accuracy could be recovered up to a batch size of 8192. However, this approach does not generalize to other networks such as AlexNet BID37, or other tasks such as NLP. In BID37, an adaptive learning rate method (called LARS) was proposed which allowed scaling training to a much larger batch size of 32K with more hyper-parameter tuning. Another notable work is Smith et al. FORMULA0 (and also BID7), which proposed a hybrid increase of batch size and learning rate to accelerate training. In this approach, one would select a strategy to "anneal" the batch size during the training. This is based on the idea that large batches contain less "noise," and that could be used much the same way as reducing learning rate during training. More recent work BID16; BID24 proposed mix-precision method to further explore the limit of large batch training. A recent study has shown that anisotropic noise injection could also help in escaping sharp minima . The authors showed that the noise from SGD could be viewed as anisotropic, with the Hessian as its covariance matrix. Injecting random noise using the Hessian as covariance was proposed as a method to avoid sharp minima. Another recent work by BID36 has shown that adversarial training (or robust optimization) could be used to "regularize" against these sharp minima, with preliminary showing superior testing performance as compared to other methods. The link between robust optimization and regularization is a very interesting observation that has been theoretically proved in the case of Ridge regression (El Ghaoui & BID9, and Lasso BID2 . BID26 ; BID27 used adversarial training and showed that the model training using robust optimization is often times more robust to perturbations, as compared to normal SGD training. Similar observations have been made by others BID30 BID13 . We consider a supervised learning framework where the goal is to minimize a loss function L(θ): DISPLAYFORM0 where θ are the model weight parameters, Z = X × Y is the training dataset, and l(z, θ) is the loss for a datum z ∈ Z. Here, X is the input, Y is the corresponding label, and N = |Z| is the cardinality of the training set. SGD is typically used to optimize Eqn. FORMULA0 by taking steps of the form: DISPLAYFORM1 where B is a mini-batch of examples drawn randomly from Z, and η t is the step size (learning rate) at iteration t. In the case of large batch size training, the batch size is increased to large values. 
The analysis in BID28 views the learning rate and batch size as noise injected during optimization: both a large learning rate and a small batch size can be considered equivalent to high noise injection. This is explained by modeling the behavior of NNs as a stochastic differential equation (SDE) of the following form: DISPLAYFORM2 where (t) is the noise injected by SGD (see BID28 for details). The authors then argue that the noise magnitude is proportional to g = η t (|Z|/|B| − 1). For a mini-batch with |B| ≪ |Z|, the noise magnitude can be estimated as g ≈ η t |Z|/|B|. Hence, in order to achieve the benefits of small batch size training, i.e., the noise generated by small batch training, the learning rate η t should increase proportionally to the batch size, and vice versa. That is, the same annealing behavior could be achieved by increasing the batch size, which is the method used by BID29. The need for annealing can be understood by considering a convex problem. When we get closer to a local minimum, a more accurate descent direction with less noise is preferable to a noisier direction, since less noise helps converge to, rather than oscillate around, the local minimum. This explains the manual batch size and learning rate changes proposed in BID29 BID7. Ideally, we would like to have an automatic method that could provide us with such information and regularize against local minima with poor generalization. As we show next, this is possible through the use of second order information combined with robust optimization. In this section, we propose a method for utilizing second order information to adaptively change the batch size. We refer to this as the Adaptive Batch Size (ABS) method; see Alg. 1. Intuitively, using a larger batch size in regions where the loss has a "flatter" landscape, and using a smaller batch size in regions with a "sharper" loss landscape, could help to avoid attraction to local minima with poor generalization. This information can be obtained through the lens of the Hessian operator. The inputs of Alg. 1 are: the learning rate lr, learning rate decay steps A, and learning rate decay ratio ρ; the initial batch size B, minimum batch size Bmin, maximum batch size Bmax, input x, and label y; the eigenvalue decreasing ratio α, the eigenvalue computation frequency n (i.e., after training n samples we compute the eigenvalue), the batch increasing ratio β, and the duration factor κ (i.e., if we compute the Hessian κ times but the eigenvalue does not decrease, we increase the batch size); and, if adversarial training is used, the perturbation magnitude adv, the perturbation ratio γ (γmax) of the training data, the decay ratio ω, and the vanishing step τ. The algorithm is initialized with Eig = None and Visiting Sample = 0. We adaptively increase the batch size as the Hessian eigenvalue decreases or stays stable for several epochs (fixed to be ten in all of the experiments). The second component of our framework is robust optimization. In the seminal works of (El Ghaoui & BID9; BID33), a connection between robust optimization and regularization was proved in the context of ridge and lasso regression. In BID36, the authors empirically showed that adversarial training leads to more robust models with respect to adversarial perturbation. An interesting corollary was that, after adversarial training, the model converges to regions that are considerably flatter, as compared to the baseline. Thus, we can combine our ABS algorithm with adversarial training as a form of regularization against "sharp" minima. We refer to this as the Adaptive Batch Size Adversarial (ABSA) method; see Alg. 1.
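A hedged sketch of the adaptive batch-size logic described above is given below: the top Hessian eigenvalue (computed, for example, with the power-iteration sketch earlier) is checked periodically, and the batch size and learning rate are scaled by β when the eigenvalue drops by roughly a factor of α or stays stable for κ checks, keeping the SGD noise scale η t |Z|/|B| approximately constant. The exact bookkeeping in Alg. 1 may differ; this is an assumption-laden illustration with placeholder names.

```python
# `top_eigenvalue` as sketched earlier (power iteration on Hessian-vector products).
def abs_schedule_step(state, model, loss_fn, batch,
                      alpha=2.0, beta=2.0, kappa=10, b_max=4096):
    """Update batch size / learning rate after one eigenvalue check."""
    eig = top_eigenvalue(model, loss_fn, batch)
    grow = False
    if state["prev_eig"] is not None and eig <= state["prev_eig"] / alpha:
        grow = True                        # curvature dropped enough: flatter region
        state["stall"] = 0
    else:
        state["stall"] += 1
        if state["stall"] >= kappa:        # eigenvalue stayed stable for kappa checks
            grow, state["stall"] = True, 0
    if grow and state["batch_size"] < b_max:
        state["batch_size"] = min(int(beta * state["batch_size"]), b_max)
        state["lr"] *= beta                # keep noise scale ~ lr * |Z| / |B| roughly constant
    state["prev_eig"] = eig
    return state

# Example state: {"batch_size": 128, "lr": 0.1, "prev_eig": None, "stall": 0}
```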
In practice, ABSA is often more stable than ABS. This corresponds to solving a min-max problem instead of a normal minimization problem BID17 BID36. Solving this min-max problem for NNs is an intractable problem, and thus we approximately solve the maximization problem through the Fast Gradient Sign Method (FGSM) proposed by BID13. This basically corresponds to generating adversarial inputs using one gradient ascent step (i.e., the perturbation is computed from ∆x = ∇ x l(z, θ)). Other possible choices are proposed by BID31 BID4 BID22. FIG1 illustrates our ABS schedule as compared to a normal training strategy and the increasing batch size strategy of Smith et al.; BID7. Note that our learning rate adaptively changes based on the Hessian eigenvalue in order to keep the same noise level as in the baseline SGD training. As we show in section 4, our combined approach (second order and robust optimization) not only achieves better accuracy, but it also requires significantly fewer SGD updates, as compared to Smith et al.; BID7. Before discussing the empirical results, an important question is whether ABS is a convergent algorithm even for a convex problem. Here, we show that our ABS algorithm does converge for strongly convex problems. Based on an assumption about the loss (Assumption 2 in Appendix A), it is not hard to prove the following theorem. Theorem 1. Under Assumption 2, assume that at step t the batch size used for the parameter update is b t and the step size is b t η 0, where η 0 is fixed and satisfies DISPLAYFORM0 where B max is the maximum batch size during training. Then, with θ 0 as the initialization, the expected optimality gap satisfies the following inequality: DISPLAYFORM1 From Theorem 1, if b t ≡ 1, the convergence rate for t steps, based on equation 5, is (1 − η 0 c s). However, the convergence rate of Alg. 1 becomes DISPLAYFORM2 With an adaptive b t, Alg. 1 can converge faster than basic SGD. We show empirical results for a logistic regression problem, which is a simple convex problem, in Appendix A. We evaluate the performance of our ABS and ABSA methods on different datasets (ranging from O(1E4) to O(1E7) training examples) and multiple NN models. We compare against the baseline performance (i.e., small batch size), along with other state-of-the-art methods proposed for large batch training BID29 BID14. The two main metrics for comparison are the final accuracy and the total number of updates. Preferably we would want a higher testing accuracy along with fewer SGD updates. We emphasize that, for all of the datasets and models we tested, we do not change any of the hyper-parameters in our algorithm. We use the exact same parameters used in the baseline model, and we do not tailor any parameters to suit our algorithm. A detailed explanation of the different NN models and datasets is given in Appendix B. Section 4.1 shows the results of ABS (ABSA) compared to the BaseLine (BL), FB BID14 and GG BID29. Section 4.2 presents the results on the more challenging TinyImageNet and ImageNet datasets. The superior performance of our method does come at the cost of backpropagating the Hessian. Thus, in section 4.3, we discuss how approximate Hessian information could be used to alleviate these costs. We start by discussing the results of ABS and ABSA on the SVHN and Cifar-10/100 datasets. Notice that GG and our ABS (ABSA) have different batch sizes during training. Hence the batch size reported in our results represents the maximum batch size during training.
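To illustrate the FGSM-based regularization used in ABSA, the following sketch perturbs a fraction γ of each mini-batch with a one-step input-gradient ascent of magnitude adv before the SGD update. The ε·sign(·) form and the exact mixing of clean and perturbed samples are assumptions, and all names (model, loss_fn, optimizer, eps, gamma) are placeholders.

```python
import torch

def absa_training_step(model, loss_fn, optimizer, x, y, eps=0.005, gamma=0.2):
    """One SGD update with a gamma-fraction of the batch replaced by FGSM-style inputs."""
    n_adv = int(gamma * x.size(0))
    if n_adv > 0:
        x_adv = x[:n_adv].clone().detach().requires_grad_(True)
        loss_adv = loss_fn(model(x_adv), y[:n_adv])
        grad, = torch.autograd.grad(loss_adv, x_adv)
        # one ascent step on the inputs; the text writes the perturbation as the input gradient
        x = torch.cat([(x_adv + eps * grad.sign()).detach(), x[n_adv:]], dim=0)
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```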
To allow for a direct comparison, we also report the number of weight updates in our results (lower is better). It should be mentioned that the number of SGD updates is not necessarily the same as the wall-clock time. Therefore, we also report a simulated training time of the I3 model in Appendix C. TAB1 reports the test accuracy and the number of parameter updates for different datasets and models. First, note the drop in BL accuracy for large batch, confirming the accuracy degradation problem. Moreover, note that the FB strategy only works well for moderate batch sizes (it diverges for large batch). The GG method has a very consistent performance, but its number of parameter updates is usually greater than that of our method. Looking at the last two major columns of TAB1 -7, ABS achieves test accuracy similar to BL. Overall, the number of updates of ABS is 3-10 times smaller than BL with batch size 128. However, in most cases ABSA achieves superior results. This confirms the effectiveness of adversarial training combined with the second order information. SVHN is a very simple dataset, and Cifar-10/100 are relatively small datasets, so one might wonder whether the improvements we reported in section 4.1 hold for more complex problems. Here, we report the results of the ABSA method on more challenging datasets, i.e., TinyImageNet and ImageNet. We use the exact same hyper-parameters in our algorithm, even though tuning them could potentially be preferable for us. TinyImageNet is an image classification problem with 200 classes and only 500 images per class. Thus it is easy to overfit the training data. The results for the I1 model are reported in TAB2. Note that with fewer SGD iterations, ABSA can achieve better test accuracy than other methods. The performance of ABSA is actually about 1% higher (the training loss and test performance of I1 on TinyImageNet are shown in FIG4 in the appendix). Note that we do not tune the hyper-parameters, e.g., α, β, and perhaps one could close the gap between 70.24% and 70.4% with fine tuning of our hyper-parameters. However, from a practical point of view such tuning is antithetical to the goal of large batch size training, as it would increase the total training time, and we specifically did not want to tailor any new parameters for a particular model/dataset. One of the limitations of our ABS (ABSA) method is the additional computational cost for computing the top Hessian eigenvalue. If we use the full Hessian operator, the second backpropagation needs to be done all the way to the first layer of the NN. For deep networks this could lead to high cost. Here, we empirically explore whether we could use approximate second order information, and in particular we test a block Hessian approximation (Figure 6). The block approximation corresponds to only analyzing the Hessian of the last few layers. In Figure 6 (see Appendix D), we plot the trace of the top eigenvalues of the full Hessian and the block Hessian for the C1 model. Although the top eigenvalue of the block Hessian has more variance than that of the full Hessian, the overall trends are similar for C1. The test performance of C1 on Cifar-10 with the block Hessian is 84.82% with 4600 parameter updates (as compared to 84.42% for full Hessian ABSA). The test performance of C4 on Cifar-100 with the block Hessian is 68.01% with 12500 parameter updates (as compared to 68.43% for full Hessian ABSA). These results suggest that using a block Hessian to estimate the trend of the full Hessian might be a good choice to overcome the computation cost, but a more detailed analysis is needed.
We introduce an adaptive batch size algorithm based on Hessian information to speed up the training process of NNs, and we combine this approach with adversarial training (which is a form of robust optimization, and which could be viewed as a regularization term for large batch training). We extensively test our method on multiple datasets (SVHN, Cifar-10/100, TinyImageNet and ImageNet) with multiple NN models (AlexNet, ResNet, Wide ResNet and SqueezeNext). As the goal of large batch is to reduce training time, we did not perform any hyper-parameter tuning to tailor our method for any of these tests. Our method allows one to increase batch size and learning rate automatically, based on Hessian information. This helps significantly reduce the number of parameter updates, and it achieves superior generalization performance, without the need to tune any of the additional hyper-parameters. Finally, we show that a block Hessian can be used to approximate the trend of the full Hessian to reduce the overhead of using second-order information. These improvements are useful to reduce NN training time in practice. • L(θ) is continuously differentiable and the gradient function of L is Lipschitz continuous with Lipschitz constant L g, i.e. DISPLAYFORM0 for all θ 1 and θ 2.Also, the global minima of L(θ) is achieved at θ * and L(θ *) = L *.• Each gradient of each individual l i (z i) is an unbiased estimation of the true gradient, i.e. DISPLAYFORM1 where V(·) is the variance operator, i.e. DISPLAYFORM2 From the Assumption 2, it is not hard to get, DISPLAYFORM3 DISPLAYFORM4 With Assumption 2, the following two lemmas could be found in any optimization reference, e.g.. We give the proofs here for completeness. Lemma 3. Under Assumption 2, after one iteration of stochastic gradient update with step size η t at θ t, we have DISPLAYFORM5 where DISPLAYFORM6 Proof. With the L g smooth of L(θ), we have DISPLAYFORM7 From above, the follows. Lemma 4. Under Assumption 2, for any θ, we have DISPLAYFORM8 Proof. Let DISPLAYFORM9 Then h(θ) has a unique global minima atθ DISPLAYFORM10 The following lemma is trivial, we omit the proof here. DISPLAYFORM11 PROOF OF THEOREM 1Given these lemmas, we now proceed with the proof of Theorem 1.Proof. Assume the batch used at step t is b t, according to Lemma 3 and 5, DISPLAYFORM12 where the last inequality is from Lemma 4. This yields DISPLAYFORM13 It is not hard to see, DISPLAYFORM14 which concludes DISPLAYFORM15 Therefore, DISPLAYFORM16 We show a toy example of binary logistic regression on mushroom classification dataset 2. We split the whole dataset to 6905 for training and 1819 for validation. η 0 = 1.2 for SGD with batch size 100 and full gradient descent. We set 100 ≤ b t ≤ 3200 for our algorithm, i.e. ABS. Here we mainly focus on the training losses of different optimization algorithms. The are shown in FIG3. In order to see if η 0 is not an optimal step size of full gradient descent, we vary η 0 for full gradient descent; see in FIG3. In this section, we give the detailed outline of our training datasets, models, strategy as well as hyper-parameter used in Alg 1.Dataset. We consider the following datasets.• SVHN. The original SVHN BID23 dataset is small. However, in this paper, we choose the additional dataset, which contains more than 500k samples, as our training dataset. • Cifar. The two Cifar (i.e., Cifar-10 and Cifar-100) datasets BID18 ) have same number of images but different number of classes.• TinyImageNet. 
TinyImageNet consists of a subset of ImangeNet images BID6, which contains 200 classes. Each of the class has 500 training and 50 validation images. 3 The size of each image is 64 × 64.• ImageNet. The ILSVRC 2012 classification dataset BID6 ) consists of 1000 images classes, with a total of 1.2 million training images and 50,000 validation images. During training, we crop the image to 224 × 224.Model Architecture. We implement the following convolution NNs. When we use data augmentation, it is exactly same the standard data augmentation scheme as in the corresponding model.• S1. AlexNet like model on SVHN as same as BID36 [C1]. We training it for 20 epochs with initial learning rate 0.01, and decay a factor of 5 at epoch 5, 10 and 15. There is no data augmentation.• C1. ResNet18 on Cifar-10 dataset BID15. We training it for 90 epochs with initial learning rate 0.1, and decay a factor of 5 at epoch 30, 60, 80. There is no data augmentation. • C2. WResNet 16-4 on Cifar-10 dataset . We training it for 90 epochs with initial learning rate 0.1, and decay a factor of 5 at epoch 30, 60, 80. There is no data augmentation.• C3. SqueezeNext on Cifar-10 dataset BID12. We training it for 200 epochs with initial learning rate 0.1, and decay a factor of 5 at epoch 60, 120, 160. Data augmentation is implemented.• C4. ResNet18 on Cifar-100 dataset BID15. We training it for 160 epochs with initial learning rate 0.1, and decay a factor of 10 at epoch 80, 120. Data augmentation is implemented. • I1. ResNet50 on TinyImageNet dataset BID15. We training it for 120 epochs with initial learning rate 0.1, and decay a factor of 10 at epoch 60, 90. Data augmentation is implemented. • I2. AlexNet on ImageNet dataset BID19. We training it for 90 epochs with initial learning rate 0.01, and decay it to 0.0001 quadratically at epoch 60, then keeps it as 0.0001 for the rest 30 epochs. Data augmentation is implemented.• I3 ResNet18 on ImageNet dataset BID15. We training it for 90 epochs with initial learning rate 0.1, and decay a factor of 10 at epoch 30, 60 and 80. Data augmentation is implemented. Training Strategy. We use the following training strategies• BL. Use the standard training procedure.• FB. Use linear scaling rule BID14 with warm-up stage.• GG. Use increasing batch size instead of decay learning rate BID29.• ABS. Use our adaptive batch size strategy without adversarial training.• ABSA. Use our adaptive batch size strategy with adversarial training. For adversarial training, the adversarial data are generated using Fast Gradient Sign Method (FGSM) BID13. The hyper-parameters in Alg. 1 (α and β) are chosen to be 2, κ = 10, adv = 0.005, γ = 20%, and ω = 2 for all the experiments. The only change is that for SVHN, the frequency to compute Hessian information is 65536 training examples as compared to one epoch, due to the small number of total training epochs (only 20).C SIMULATED TRAINING TIMEAs discussed above, the number of SGD updates does not necessarily correlate with wall-clock time, and this is particularly the case because our method require Hessian backpropagation. Here, we use the method suggested in BID11, to approximate the wall-clock time of our algorithm when utilizing p parallel processes. For the ring algorithm BID31, the communication time per SGD iteration for p processes is: DISPLAYFORM0 where α latency is the network latency, β bandwidth is the inverse bandwidth, and |θ| is the size number of model parameters measured in terms of Bits. 
Moreover, we manually measure the wall-clock time of computing the Hessian information using our in-house code, as well as the cost of forward/backward calculations on a V100 GPU. The total time consists of this computation time and the communication time, along with the Hessian computation overhead (if any). Therefore we have: DISPLAYFORM1 where T compute is the time to compute the forward and backward propagation, T communication is the time to communicate between different machines, and T Hessian is the time to compute the top eigenvalues. We use the latency and bandwidth values of α latency = 2 µs and β bandwidth = 1/6 Gb/s, based on NERSC's Cori2 supercomputing platform. Based on the above formulas, we give an example of the simulated computation time cost of I3 on ImageNet. Note that for a large number of processes and small latency terms, the communication time formula simplifies to T comm = 2 β bandwidth |θ|. In Table 3 we report the simulation time of I3 on ImageNet with 512 processes. For GG, we assume it increases the batch size by a factor of 10 at epochs 30, 60 and 80. The batch size per GPU core is set to 16 for SGD (and 8 for the Hessian computation due to memory limits), and the total batch size used for the Hessian computation is set to 4096 images. T comp and T comm are for one SGD update, and T Hessian is for one complete Hessian eigenvalue computation (including communication for the Hessian computation). Note that the total Hessian computation time for ABS/ABSA is only 1.15 × 90 = 103.5 s, even though the Hessian computation is not efficiently implemented in the existing frameworks. Note that even with the additional Hessian overhead, ABS/ABSA is still much faster than BL (and these numbers are with an in-house and not highly optimized code for Hessian computations). We furthermore note that we have added the additional computational overhead of adversarial computations to the ABSA method. Table 3: Below we present the breakdown of one SGD update's training time in terms of the forward/backward computation (T comp), one step of communication time (T comm), one total Hessian spectrum computation (if any, T Hess), and the total training time. The results correspond to the I3 model on ImageNet (for accuracy please see FIG2). In this section, we present additional empirical results (see TAB2 for details). As one can see, from epoch 60 to 80, the test performance drops due to overfitting. However, ABSA achieves the best performance with apparently less overfitting (it has a higher training loss).
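For reference, a small sketch of the simulated-time bookkeeping is shown below. Since the per-iteration communication formula is not reproduced here, a standard ring-allreduce estimate with latency and inverse-bandwidth terms is assumed in its place, and all constants in the usage example (number of updates, parameter count, per-update compute time) are hypothetical.

```python
def comm_time(p, num_bits, alpha_latency=2e-6, beta_bandwidth=1.0 / 6e9):
    """Per-iteration all-reduce time for p processes (assumed ring-allreduce form)."""
    return 2.0 * (p - 1) * alpha_latency + 2.0 * (p - 1) / p * beta_bandwidth * num_bits

def total_time(n_sgd_updates, n_eig_computations, t_compute, p, num_bits, t_hessian):
    """T_total = per-update compute + communication, plus the Hessian overhead (if any)."""
    t_comm = comm_time(p, num_bits)
    return n_sgd_updates * (t_compute + t_comm) + n_eig_computations * t_hessian

# Hypothetical usage: 512 processes, ~11.7M parameters in 32-bit floats, one eigenvalue
# computation per epoch for 90 epochs at 1.15 s each.
# t = total_time(n_sgd_updates=45_000, n_eig_computations=90,
#                t_compute=0.05, p=512, num_bits=11_700_000 * 32, t_hessian=1.15)
```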
Large batch size training using adversarial training and second order information
307
scitldr
Inspired by neurophysiological discoveries of navigation cells in the mammalian brain, we introduce the first deep neural network architecture for modeling Egocentric Spatial Memory (ESM). It learns to estimate the pose of the agent and progressively construct top-down 2D global maps from egocentric views in a spatially extended environment. During the exploration, our proposed ESM network model updates belief of the global map based on local observations using a recurrent neural network. It also augments the local mapping with a novel external memory to encode and store latent representations of the visited places based on their corresponding locations in the egocentric coordinate. This enables the agents to perform loop closure and mapping correction. This work contributes in the following aspects: first, our proposed ESM network provides an accurate mapping ability which is vitally important for embodied agents to navigate to goal locations. In the experiments, we demonstrate the functionalities of the ESM network in random walks in complicated 3D mazes by comparing with several competitive baselines and state-of-the-art Simultaneous Localization and Mapping (SLAM) algorithms. Secondly, we faithfully hypothesize the functionality and the working mechanism of navigation cells in the brain. Comprehensive analysis of our model suggests the essential role of individual modules in our proposed architecture and demonstrates efficiency of communications among these modules. We hope this work would advance research in the collaboration and communications over both fields of computer science and computational neuroscience. Egocentric spatial memory (ESM) refers to a memory system that encodes, stores, recognizes and recalls the spatial information about the environment from an egocentric perspective BID24. Such information is vitally important for embodied agents to construct spatial maps and reach goal locations in navigation tasks. For the past decades, a wealth of neurophysiological have shed lights on the underlying neural mechanisms of ESM in mammalian brains. Mostly through single-cell electrophysiological recordings studies in mammals BID23, there are four types of cells identified as specialized for processing spatial information: head-direction cells (HDC), border and boundary vector cells (BVC), place cells (PC) and grid cells (GC). Their functionalities are: According to BID38, HDC, together with view cells BID5, fires whenever the mammal's head orients in certain directions. The firing behavior of BVC depends on the proximity to environmental boundaries BID22 and directions relative to the mammals' heads BID1. PC resides in hippocampus and increases firing rates when the animal is in specific locations independent of head orientations BID1. GC, as a metric of space BID35, are regularly distributed in a grid across the environment BID11. They are updated based on animal's speed and orientation BID1. The corporation of these cell types enables mammals to navigate and reach goal locations in complex environments; hence, we are motivated to endow artificial agents with the similar memory capability but a computational architecture for such ESM is still absent. Inspired by neurophysiological discoveries, we propose the first computational architecture, named as the Egocentric Spatial Memory Network (ESMN), for modeling ESM using a deep neural network. 
ESMN unifies functionalities of different navigation cells within one end-to-end trainable framework and accurately constructs top-down 2D global maps from egocentric views. To our best knowledge, we are the first to encapsulate the four cell types respectively with functionally similar neural networkbased modules within one integrated architecture. In navigation tasks, the agent with the ESMN takes one egomotion from a discrete set of macro-actions. ESMN fuses the observations from the agent over time and produces a top-down 2D local map using a recurrent neural network. In order to align the spatial information at the current step with all the past predicted local maps, ESMN estimates the agent's egomotion and transforms all the past information using a spatial transformer neural network. ESMN also augments the local mapping module with a novel spatial memory capable of integrating local maps into global maps and storing the discriminative representations of the visited places. The loop closure component will then detect whether the current place was visited by comparing its observation with the representations in the external memory which subsequently contributes to global map correction. Neuroscience-inspired AI is an emerging research field BID12. Our novel deep learning architecture to model ESMN in the mammalian navigation system attempts to narrow the gap between computer science (CS) and computational neuroscience (CN) and bring interests to both communities. On one hand, our novel ESMN outperforms several competitive baselines and the state-of-the-art monocular visual SLAMs. Our outstanding performance in map construction brings great advancements in robotics and CS. It could also have many potential engineering applications, such as path planning for robots. In CN, the neuroplausible navigation system with four types of cells integrated is still under development. In our work, we put forward bold hypothesis about how these navigation cells may cooperate and perform integrated navigation functions. We also faithfully propose several possible communication links among them in the form of deep architectures. We evaluate ESMN in eight 3D maze environments where they feature complex geometry, varieties of textures, and variant lighting conditions. In the experiments, we demonstrate the acquired skills of ESMN in terms of positional inference, free space prediction, loop closure classification and map correction which play important roles in navigation. We provide detailed analysis of each module in ESMN as well as their functional mappings with the four cell types. Lastly, we conduct ablation studies, compare with state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithms and show the efficacy of our integrated framework on unifying the four modules. There is a rich literature on computational models of egocentric spatial memory (ESM) primarily in cognitive science and AI. For brevity, we focus on the related works in machine learning and robotics. Reward-based learning is frequently observed in spatial memory experiments BID14. In machine learning, reinforcement learning is commonly used in attempts to mathematically formalize the reward-based learning. Deep Q-networks, one of the reinforcement learning frameworks, have been employed in the navigation tasks where the agent aims to maximize the rewards while navigating to the goal locations BID26; BID28. The representation of spatial memory is expressed implicitly using long short term memory (LSTM) BID13. 
In one of the most relevant works BID10, Gupta et al. introduced a mapper-planner pipeline where the agent is trained in a supervised way to produce a top-down belief map of the world and thus plan paths; however, the authors assume that the agents perform mapping tasks in an ideal scenario where the agent knows which macro-action to take and takes the macro-action without control errors at each time step. Different from their work, our ESM network takes into account the action uncertainty and predicts the agent's pose based on egocentric views. Moreover, we propose the mechanism of loop closure and map correction for long-term exploration. Apart from reinforcement learning, there are works on navigation in the domain of robotics where the spatial memory is often explicitly represented in the form of grid maps or topological structures (Elfes; BID33; BID20). SLAM tackles the problem of positional inference and mapping BID36 BID4; BID3; BID39. While classical SLAM achieves good performance with the aid of multimodal sensors BID21, SLAM using monocular cameras still has limitations, as the feature matching process relies heavily on hand-crafted features extracted from the visual scenes BID40; BID34. As great strides have been made using deep learning in computer vision tasks, resulting in significant performance boosts, several works endeavor to replace parts of the classical SLAM workflows with deep learning based modules BID17; BID0; BID16. However, to the best of our knowledge there is no existing end-to-end neural network for visual SLAM. Inspired by the neurophysiological results, we propose the first unified ESM model using deep neural networks. This model simultaneously solves the problems of positional inference, intrinsic representation of places and space geometry during the process of 2D map construction. Before we elaborate on our proposed architecture, we introduce the Egocentric Spatial Memory Network (ESMN) modeling problem and relevant notations. ESM involves an object-to-self representational system which constantly requires encoding, transforming and integrating spatial information from the first-person view into a global map. At the current time step t, the agent, equipped with one RGB camera, takes the camera view I t as the current visual input. Without the aid of any other sensors, it predicts the egomotion from the pair of camera views (I t−1, I t). At time step t, the agent is allowed to take only one egomotion A θ,d,l,t out of a discrete set of macro-actions which include rotating left/right by θ degrees and moving in the directions l: forward/backward/left/right by the distance d relative to the agent's current pose. We assume the agent moves in a 2D grid world and the camera coordinate is fixed with respect to the agent's body. The starting location p 0 of the agent is fixed, where the triplet denotes the positions along the x- and y-axes and the orientation in the world coordinate. The problem of modeling ESM is to learn a global map in the egocentric coordinate based on the visual input pairs (I t−1, I t) for t = 1,..., T. We define the global map as a top-view 2D probabilistic grid map where the probability denotes the agent's belief of free space. For precise control of the experimental variables, we consider the egocentric spatial memory (ESM) modeling problem in artificial 3D maze environments. In order to tackle this problem, we propose a unified neural network architecture named ESMN.
The architecture of ESMN is illustrated in FIG0 and is elaborated in details in Figure 2. Inspired by the navigation cells mentioned in the introduction, our proposed ESMN comprises four modules: Head Direction Unit (HDU), Boundary Vector Unit (BVU), Place Unit (PU), and Grid Unit (GU). This is to incorporate multiple objectives for modeling ESM. spatial information to the current egocentric coordinate via a 2D-CNN. PU learns to encode the latent representation of visited places in a 2D-CNN pre-trained using a triplet loss. GU, as a memory, integrates the predicted local maps from BVU over time and keeps track of all the visited places. In GU, we add a gating mechanism to constantly detect the loop closure and thus eliminate the accumulated errors during long-term mapping. In this paper, we adopt a "divide and conquer" approach by composing different navigation modules within one framework systematically. The division of these modules are inspired by scientific supports which have shown the advantages in biological systems BID31. Leveraging on rich features extracted from deep networks, the learnt features are suitable for recognizing visual scenes and hence, boost up the map construction and correction performances. The efficiency and robustness of our algorithm are demonstrated by comparing with other spatial navigation methods replied on visual inputs. At each time step, the RGB camera image (or frame) is the only input to the agent. In order to make spatial reasoning about the topological structure of the spatially extended environment, the agent has to learn to take actions to explore its surroundings and predict their own poses by integrating the egomotion over time. The HDU module in our proposed ESMN model employs a 2D-CNN to predict the macro-action the agent has taken at the time step t based on the inputs of two camera views {I t−1, I t}. Though there are two possible implementations for the loss in training the HDU module, i.e. regression and classification, we consider the latter in order to reduce the dimensionality of action space. The macro-actions A θ,d,l are discretized into 2 classes for rotation and 4 classes for translation as explained at the beginning of Section 3. Though we solve the problem of positional inference using a feed-forward neural network based on two most recent observations alone, the extensions to use recurrent neural network would be interesting to explore in the future. We explain how ESMN integrates egocentric views into a top-down 2D representation of the environment using a recurrent neural network. Similar to BID10, the BVU in ESMN serves as a local mapper and maintains the accumulative free space representations in the egocentric coordinate for a short-term period. Given the current observation I t, function g first encodes its geometric representation about space g(I t) via a 2D-CNN and then transform g(I t) into egocentric top-down view m t via de-convolutional layers. Together with the accumulative space representation m t−1 at the previous step and the estimated egomotion A θ,d,l,t from t − 1 to t, BVU estimates the current local space representation m t using the following update rule: DISPLAYFORM0 where W is a function that transforms the previous accumulative space representation m t−1 to the current egocentric coordinate based on the estimated egomotion A θ,d,l,t. 
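A minimal sketch of such an HDU egomotion classifier is given below: a small 2D-CNN over the channel-wise concatenation of two consecutive 64×64 RGB frames with a softmax over the discrete macro-actions. The layer sizes and the choice of a single 6-way output (2 rotation + 4 translation classes) are assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn

class HeadDirectionUnit(nn.Module):
    def __init__(self, num_actions=6):          # assumed: 2 rotation + 4 translation classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, 5, stride=2, padding=2), nn.ReLU(),    # (I_{t-1}, I_t) stacked
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(128 * 8 * 8, num_actions)

    def forward(self, frame_prev, frame_curr):
        x = torch.cat([frame_prev, frame_curr], dim=1)       # B x 6 x 64 x 64
        return self.classifier(self.features(x).flatten(1))  # logits over macro-actions
```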
We parameterize W by using a spatial transformer network BID15 composing of two key elements: it generates the sampling grid which maps the input coordinates of m t−1 to the corresponding output locations after egomotion A θ,d,l,t transformation; the sampling kernel then takes the bilinear interpolation of the values on m t−1 and outputs the transformed m t−1 in the current egocentric coordinate. U is a function which merges the free space prediction m t from the current observation with the accumulative free space representation at the previous step. Specifically, we simplify merging function U as a weighted summation parameterized by λ followed by hyperbolic tangent operation: DISPLAYFORM1 Loop closure is valuable for the agents in the navigation tasks as it encourages efficient exploration and spatial reasoning. In order to detect loop closure during an episode, given the current observation I t, ESMN learns to encode the discriminative representation h(I t) of specific places independent of scaling and orientations via an embedding function F. Based on the similarity of all the past observations Ω t = {I 1, I 2, ..., I t} at corresponding locations P t = {p 1, p 2, ..., p t}, we create training targets by making an analogy to the image retrieval tasks BID8 and define the triplet (I t, I +, I −) as anchor sample (current camera view), positive and negative samples drawn from Ω t respectively. PU tries to minimize the triplet loss: DISPLAYFORM0 where we parameterize F using a three-stream 2D-CNN where the weights are shared across streams. D is a distance measure between pairs of embedding. Here, mean squared error is used. A loop closure label equals 1 if the mean squared error is below the threshold α which is set empirically. Apart from the similarity comparison of observations, we consider two extra criteria for determining whether the place was visited: the current position of the agent p t is near to the positions visited at the earlier times. We empirically set a threshold distance between the current and the recently visited locations based on the loop closure accuracy during training. We implemented it via a binary mask in the center of the egocentric global map where the binary states denote the accepted "closeness". the agent only compares those positions which are far from the most recent visited positions to avoid trivial loop closure detection at consecutive time steps. It is implemented via another binary mask which tracks these recently visited places. These criterion largely reduce the false alarm rate and improves the searching speed during loop closure classification. While BVU provides accumulative local free space representations in high resolution for a short-term period, we augment the local mapping framework with memory for long-term integration of the local maps and storage of location representations. Different from Neural Turing Machine BID9 where the memory slots are arranged sequentially, our addressable memory, of size F × H × W, is indexed by spatial coordinates {(i, j): i ∈ {1, 2, ..., H}, j ∈ {1, 2, ..., W}} with memory vector M (i, j) of size F at location (i, j). Because ESM is often expressed in the coordinate frame of the agent itself, we use location-based addressing mechanism and the locations of reading or writing heads are fixed to be always in the center of the memory. Same as BVU, all the past spatial information in the memory is transformed based on the estimated egomotion A θ,d,l,t. 
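The BVU update in Equations 1 and 2 can be sketched as follows: the previous accumulative map is warped into the current egocentric frame with an affine sampling grid and bilinear interpolation (the spatial transformer W), and then merged with the map predicted from the current view by a λ-weighted sum followed by tanh (U). The exact parameterization of the warp (sign conventions, normalized translations) and of the weighting is an assumption.

```python
import math
import torch
import torch.nn.functional as F

def warp_previous_map(m_prev, rot_deg, dx, dy):
    """W: rigid transform of the previous egocentric map given the estimated egomotion."""
    a = math.radians(rot_deg)
    theta = torch.tensor([[[math.cos(a), -math.sin(a), dx],
                           [math.sin(a),  math.cos(a), dy]]], dtype=torch.float32)
    theta = theta.expand(m_prev.size(0), -1, -1)                     # B x 2 x 3
    grid = F.affine_grid(theta, list(m_prev.shape), align_corners=False)
    return F.grid_sample(m_prev, grid, align_corners=False)          # bilinear sampling kernel

def merge_maps(m_warped, m_obs, lam=0.5):
    """U: weighted summation followed by tanh (one plausible reading of Equation 2)."""
    return torch.tanh(lam * m_warped + (1.0 - lam) * m_obs)

# m_prev, m_obs: B x 1 x 32 x 32 local free-space maps; dx, dy in normalized coordinates.
```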
Mathematically, we formulate the returned reading vector r h,w as DISPLAYFORM0 where the memory patch covers the area of memory vectors with the width w and height h. We simplify the writing mechanism for GU and use Equation 4 for writing the vector r h,w. Figure 3: Overview of maze layouts, with differing geometries, textures and lighting conditions. Each maze is stimulated with normal, weak and strong lighting conditions. Maze 5_S and 5_W refers to Maze 5 with strong and weak lighting conditions respectively. Maze 1 is adopted from. Maze 8 is inspired by radial arm maze used to test spatial memory in rats BID32. The digits are pasted on walls along specific pathways for loop closure classification tasks. In our case, two writing heads and two reading heads are necessary to achieve the following objectives: one reading head returns the memory vectors m t−1 in order for BVU to predict m t using Equation 1; GU performs updates by writing the predicted local accumulative space representation m t back into the memory to construct the global map in the egocentric coordinate; GU keeps track of the visited places by writing the discriminative representation h(I t) at the center of the egocentric global map denoted as (DISPLAYFORM1 W 2); GU returns the memory vectors near to the current location for loop closure classification where the size of the searching area is parameterized by w and h. The choice of w and h are application-based. We simplify the interaction between local space representations m t and m t−1 with GU and set w and h to be the same size as m t and m t−1.If the loop closure classifies the current observation as "visited", GU eliminates the discrepancies on the global map by merging the two places together. The corrected map has to preserve the topological structure in the discovered areas and ensure the connectivity of the different parts on the global map is maintained. To realize this, we take three inputs in a stack of 3D convolution and de-convolution layers for map correction. The inputs are: the local map predicted at the anchor place; the local map predicted at the recalled place in the current egocentric coordinate; all the past integrated global maps. To make the training targets, we perturb the sequence of ground truth egomotions with random ones and generate synthetic integrated global maps with rotation and scaling augmentations. We minimize regression loss between the predicted maps and the ground truth. We train ESMN end-to-end by stochastic gradient descent with learning rate 0.002 and momentum 0.5. Adam Optimizer BID18 is used. At each time step, the ground truths are provided: local map, egomotion and loop closure classification label. For faster training convergence, we first train each module separately and then load these pre-trained networks into ESMN for fine-tuning. The input frame size is 3 × 64 × 64. We normalize all the input RGB images to be within the range [−1, 1]. The batch size is 10. The discriminative representation h(I t) is of dimension 128. The size of the local space representation m t is h × w = 32 × 32 covering the area 7.68 meters × 7.68 meters in the physical world whereas the global map (the addressable memory) is of size H × W = 500 × 500. We set λ = 0.5 in BVU and α = 0.01 in PU. The memory vector in GU is of size F = 128. Refer to Supplementary Material for detailed architecture. We conduct experiments on eight 3D mazes in Gazebo for robotic simulation BID19. We will make our simulation source code public once our paper is accepted. 
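A minimal sketch of the location-based read/write used by GU follows: because the memory is kept in the agent's egocentric frame, both heads address a fixed, center-aligned patch of the F × H × W memory, whether reading back the accumulative local map for BVU or writing the predicted map and the 128-d place code at the agent's current position. The indexing conventions below are assumptions.

```python
import torch

def read_center(memory, h, w):
    """Return the h x w patch of memory vectors centred on the agent (fixed read head)."""
    f, H, W = memory.shape
    top, left = H // 2 - h // 2, W // 2 - w // 2
    return memory[:, top:top + h, left:left + w]

def write_center(memory, patch):
    """Write a patch back at the center: the local map, or a 128-d place code as a 1x1 patch."""
    f, h, w = patch.shape
    H, W = memory.shape[1:]
    top, left = H // 2 - h // 2, W // 2 - w // 2
    memory[:, top:top + h, left:left + w] = patch
    return memory
```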
Figure 3 shows an overview of eight mazes. They feature complex geometry and rich varieties of textures. To prevent over-fitting in a single-modality environment, we augment these observations with different illumination conditions classified into three categories: strong, normal and weak, and train our ESMN in these mixed lighting conditions. Based on the in Section 4.1, our ESMN performs equally well in the test set across permutations of these conditions which shows its capacity of generalizing in various environments. For loop closure classification tasks, we create digits on walls as the unique features of specific pathways. The agent randomly walks through the mazes with obstacle avoidance. The ground truths for 2D top-view of mazes are obtained by using one virtual 2D laser scanner attached to the agent. We use the data collected in maze 1 to 5 for training and validation and the data in maze 6 to 8 for testing. The macro-actions for egomotions are divided into 2 classes for rotation by θ = 10 degrees and 4 classes for translation with d = 0.1 meters relative to the agent's current pose in the physical world. The simulation environment enables us to conduct exhaustive evaluation experiments and train individual module separately using fully supervised signals for better functionality evaluation. The integrated framework is then trained jointly after loading the pre-trained weights from individual modules. Head Direction Unit (HDU): Positional Inference. Head Direction Cells fire whenever the animal's head orients in certain directions. The study BID25 suggests that the rodent's orientation system is as a of neural integration of head angular velocities derived from vestibular system. Thus, after jointly training ESMN, we decode HDU for predicting head direction in the world coordinate by accumulating the estimated egomotions over time. FIG2 shows the head direction prediction in the world coordinate compared with the agent's ground truth global poses. We observe that the head directions integrated from the estimated egomotions are very similar to the ground truths. Moreover, we compute the egomotion classification errors from 2 classes of rotation and 4 classes of translation under three illumination conditions, see Table 1. There is a slight degradation of rotation + translation compared with rotation alone. One possible reason is that the change of egocentric camera views after 10 degrees of rotation is more observable than the one induced from translation by 0.1 meters. We also provide quantitative analysis of our predicted local maps across the first 32 time steps in FIG4 using Mean Squared Error (MSE), Correlation (Cor), Mutual Information (MI) which are standard image similarity metrics. At each time step, the predicted local maps are compared with the ground truth maps at t = 32. As the agent continues to explore in the environment, the area of the predicted free space expands leading to the decrease of MSE and the increase of correlation and MI in our test set. This validates that ESMN is able to accurately estimate the proximity to the physical obstacles relative to the agent itself and continuously accumulate the belief of free space representations based on the egomotions. Place Unit (PU): Loop Closure Classification. One of the critical aspects of ESM is the ability to recall and recognize a previously visited place independent of head orientation. Place cell (PC) resides in hippocampus and increases firing patterns when the animal is in specific locationsBurgess. 
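Returning briefly to the HDU decoding described above: accumulating the discrete egomotion estimates into a world-frame pose is simple dead reckoning. The sketch below assumes a particular encoding of the 2 rotation and 4 translation classes, which the text does not specify; only the magnitudes of 10 degrees and 0.1 metres are given.

```python
import numpy as np

# Hypothetical class-to-motion mapping (an assumption).
ROTATIONS = {0: +10.0, 1: -10.0}                      # degrees
TRANSLATIONS = {2: (0.1, 0.0), 3: (-0.1, 0.0),        # metres, agent frame
                4: (0.0, 0.1), 5: (0.0, -0.1)}

def integrate_pose(actions, x=0.0, y=0.0, heading_deg=0.0):
    # Dead-reckon the agent's world-frame pose from the estimated egomotions.
    for a in actions:
        if a in ROTATIONS:
            heading_deg = (heading_deg + ROTATIONS[a]) % 360.0
        else:
            dx, dy = TRANSLATIONS[a]
            th = np.deg2rad(heading_deg)
            x += dx * np.cos(th) - dy * np.sin(th)   # rotate the step into the world frame
            y += dx * np.sin(th) + dy * np.cos(th)
    return x, y, heading_deg
```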
Recent anatomical connectivity research BID37 implies that grid cells (GC) are the principle cortical inputs to PC. In our proposed ESMN, we design a novel memory unit (GU) for storing the discriminative representations of visited places based on their relative locations with respect to the agent itself in the egocentric coordinate. GU interacts constantly with PU by reading and writing the memory vectors. We evaluate the learnt discriminative representations of visited Table 2: Ablation study on the global map performance in Maze 6 at t = 448 and t = 1580 using metrics in Section 4.1. From top to bottom, the models are: 3D-CNN baseline, LSTM baseline, our ablated model with PU and GU removed, our proposed architecture. The best values are highlighted in bold.places by comparing the current observation (anchor input) with all the past visited places. FIG5 presents example pairs of observations when the loop closure is detected. Qualitative imply that our ESMN can accurately detect loop closure when the agent is at the previously visited places irrespective of large differences between these camera views. Grid Unit: Mapping Integration and Correction. After the loop closure is detected, ESMN performs map correction to eliminate the discrepancies during long-term mapping. FIG6 presents the example of the predicted global maps after map correction. It is shown that the map gets corrected at t = 448 (Column 1). Thus, the predicted global map (Row 3) is structurally more accurate compared with the one without loop closure (Row 2). Our ablation analysis is as follows: FORMULA0 To study the necessity of egomotion estimation from HDU, we take a sequence of camera views at all the previous time steps as inputs to predict the global map directly. We implement this by using a feed-forward 3D convolution neural network (3D-CNN). Practically, since it is hard to take all the past camera views across very long time period, we choose the input sequence with one representative frame every 15 time steps. As ESM requires sustainable mapping over long durations, we create one more baseline by taking the sequence of camera views as inputs and using Long Short Term Memory architecture to predict global maps directly (LSTM Direct). To maintain the same model complexity, we attach the same 2D-CNN in our BVU module before LSTM and fully connected layers after LSTM. To explore the effect of loop closure and thus map correction, we create one ablated model with PU and GU removed (HDU + BVU). FORMULA3 We present the of our integrated architecture with loop closure classification and map correction enabled (HDU + BVU + PU + GU). We report the evaluation in Table 2 using the metrics MSE, correlation and MI as introduced in Section 4.1.We observe that our proposed architecture surpasses the competitive baselines and the ablated models. At t = 448, compared with the first baseline (3D-CNN), there is decrease of 0.03 in MSE and increase of 0.17 in correlation and 0.09 in MI. The significant improvement infers that it is necessary to estimate egomotion for better map construction. Additionally, the integration of local maps based on the egomotion makes the computation more flexible and efficient by feeding back the accumulative maps to the system for future time steps. In the second baseline (LSTM Direct), we observe that the performance drops significantly when it constructs global maps for longer durations. 
As GU serves as an external memory to integrate local maps, this baseline confirms that GU has advantages over LSTM Direct in terms of long-lasting memory. To explore the effect of loop closure, and thus map correction, we use the ablated model with PU and GU removed (HDU + BVU). Compared with our proposed architecture with all four modules enabled at t = 448 and t = 1580, its decreased performance validates that these steps are necessary to eliminate the errors during long-term mapping.
Table 3: Comparison of our method with state-of-the-art SLAM algorithms. The global maps after the agent completes one loop closure in Maze 6, 7 and 8 are reported using the metrics introduced in Section 4.1. The numbers highlighted in bold are the best.
We evaluate our ESMN by comparing it with state-of-the-art monocular visual SLAM methods, including ORBslam and EKFslam. We used the code provided by the authors and replaced the intrinsic camera parameters with ours. In order to construct maps at world scale, we provide an explicit scaling factor for both methods. Results show that our ESMN significantly outperforms the rest in terms of MSE, Cor and MI, as shown in Table 3. In the experiments, we also observe that ORBslam has a high false-positive rate in loop closure detection, which leads to incorrect map correction. This may be caused by the limitations of local feature descriptors. Different from these methods, our ESMN can robustly detect loop closure and correct maps over time. Furthermore, we notice that both SLAM methods tend to fail to track feature descriptors between frames, especially during rotations. The parameters of the search windows used to match features have to be set manually and cannot adapt to large motions. The results in Maze 8 using EKFslam are not provided in Table 3, as the original algorithm requires dense local feature matching, which is not applicable in very low-texture environments. Drawing inspiration from neurophysiological discoveries, we propose the first deep neural network architecture for modeling ESM, which unifies the functionalities of the four navigation cell types: head-direction cells, border cells and boundary vector cells, place cells, and grid cells. Our learnt model demonstrates the capacity to estimate the pose of the agent and to construct a top-down 2D spatial representation of the physical environment in the egocentric coordinate frame, which could have many potential applications, such as path planning for robot agents. Our ESMN accumulates the belief about free space by integrating egocentric views. To eliminate errors during mapping, ESMN also augments the local mapping module with an external spatial memory that keeps track of the discriminative representations of visited places for loop closure detection. We conduct exhaustive evaluation experiments by comparing our model with competitive baselines and state-of-the-art SLAM algorithms. The experimental results demonstrate that our model surpasses all these methods. The comprehensive ablation study suggests the essential role of the individual modules in our proposed architecture and the efficiency of the communication among these modules. We introduce the anatomy of our model for reproducibility in this section. We follow exactly the same convention as Torch. Inspired by neurophysiological discoveries, our framework consists of four units: Head Direction Unit (HDU), Boundary Vector Unit (BVU), Place Unit (PU), and Grid Unit (GU). Refer to Table 7 for HDU, Table 6 for BVU, Table 8 for PU, and Table 9 for global map correction.
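For reference, the three map-quality metrics used throughout the evaluation above (MSE, Cor and MI, introduced in Section 4.1) can be computed as in the sketch below; the bin count used for the mutual-information histogram is an assumption, since the text only names the metrics.

```python
import numpy as np

def map_similarity(pred, gt, bins=20):
    # MSE, Pearson correlation and histogram-based mutual information
    # between a predicted map and a ground-truth map, both 2D arrays in [0, 1].
    p, g = pred.ravel(), gt.ravel()
    mse = float(np.mean((p - g) ** 2))
    cor = float(np.corrcoef(p, g)[0, 1])
    joint, _, _ = np.histogram2d(p, g, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))
    return mse, cor, mi
```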
In GU, as explained in Section 3.5 of the main manuscript, egocentric spatial memory is expressed in the coordinate frame of the agent itself, so we use a location-based addressing mechanism in which the reading and writing heads are always fixed at the center of the external memory, and the receptive fields parameterized by w and h for reading and writing are also fixed. We designed a 2D binary masking map in which 1 denotes the places to read and write, and 0 elsewhere. We use element-wise multiplication between each feature map of the external memory and the binary masking map for the reading and writing operations. Refer to Table 4 for reading and writing local maps in GU, and Table 5 for reading and writing spatial representations of visited places in GU.
Table 5: Reading Head 2 and Writing Head 2 in Grid Unit (GU). It either reads memory vectors or writes the discriminative spatial representation h(I_t) of size 1 × 128 from or into the global map of size 128 × 500 × 500 in GU.
Figure 9: Example of predicted local maps over the first 32 time steps in Maze 6. Frames #1, 5, 9, 13, 17, 21, 25, 29, 32 are shown (left to right columns). The topmost row shows the camera view. Row 2 shows the ground truth, with red arrows denoting the agent's position and orientation from the top view; the white region denotes free space while black denotes unknown areas. Row 3 shows the corresponding top-view accumulative belief of the predicted local maps, where red denotes higher belief of free space. Best viewed in color.
Table 6: Architecture of Boundary Vector Unit (BVU). Inputs: the current camera view, the estimated egomotion, and the predicted local map at the previous time step. [Flattened Torch layer listing: a spatial-transformer branch (nn.AffineGridGeneratorBHWD, nn.BilinearSamplerBHWD) that warps the previous 32 × 32 local map by the estimated egomotion, in parallel (nn.ParallelTable) with a convolutional encoder-decoder over the current view (nn.SpatialConvolution / nn.SpatialFullConvolution with nn.SpatialBatchNormalization and nn.ReLU); the two branches are merged by nn.CAddTable followed by nn.HardTanh to produce the 32 × 32 output.]
Figure 10: Example of predicted local maps over the first 32 time steps in Maze 7; layout as in Figure 9. Best viewed in color.
[Additional local-map figure with the same layout as Figure 9; frames #1, 5, 9, 13, 17, 21, 25, 29, 32 are shown. Best viewed in color.]
Table 7: Architecture of Head Direction Unit (HDU). Input: the two RGB camera views at t and t − 1, concatenated.
nn.Sequential:
nn.SpatialConvolution, nn.ReLU(true)
nn.SpatialConvolution, nn.SpatialBatchNormalization(256, 1e-3), nn.ReLU(true)
nn.SpatialConvolution, nn.SpatialBatchNormalization(512, 1e-3), nn.ReLU(true)
nn.SpatialConvolution, nn.SpatialBatchNormalization(1024, 1e-3), nn.ReLU(true)
nn.SpatialConvolution, nn.SpatialBatchNormalization(1024, 1e-3), nn.ReLU(true)
nn.Reshape, nn.Linear, nn.ReLU, nn.Linear, nn.ReLU, nn.Linear
Loss: nn.CrossEntropyCriterion
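A rough PyTorch analogue of the Torch listing in Table 7 is given below. Kernel sizes, strides, the width of the first convolution and the hidden widths of the linear layers are assumptions, since the flattened table does not preserve them; only the overall conv, batch-norm, ReLU stack over the concatenated frame pair, followed by linear layers and a cross-entropy objective, is taken from the table.

```python
import torch
import torch.nn as nn

class HeadDirectionUnit(nn.Module):
    # Sketch of the HDU: concatenated RGB views at t and t-1 -> egomotion class logits.
    def __init__(self, num_egomotion_classes=6):   # 2 rotation + 4 translation classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.BatchNorm2d(256, eps=1e-3), nn.ReLU(inplace=True),
            nn.Conv2d(256, 512, 4, stride=2, padding=1), nn.BatchNorm2d(512, eps=1e-3), nn.ReLU(inplace=True),
            nn.Conv2d(512, 1024, 4, stride=2, padding=1), nn.BatchNorm2d(1024, eps=1e-3), nn.ReLU(inplace=True),
            nn.Conv2d(1024, 1024, 4, stride=2, padding=1), nn.BatchNorm2d(1024, eps=1e-3), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(1024 * 2 * 2, 512), nn.ReLU(inplace=True),   # assumes 64 x 64 inputs
            nn.Linear(512, 128), nn.ReLU(inplace=True),
            nn.Linear(128, num_egomotion_classes),
        )

    def forward(self, frame_t, frame_tm1):
        x = torch.cat([frame_t, frame_tm1], dim=1)   # concatenate the two RGB views
        return self.classifier(self.features(x))

# Training objective, as in the table:
# loss = nn.CrossEntropyLoss()(model(frame_t, frame_tm1), egomotion_labels)
```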
The first deep neural network for modeling Egocentric Spatial Memory, inspired by neurophysiological discoveries of navigation cells in the mammalian brain.
308
scitldr
We show that if the usual training loss is augmented by a Lipschitz regularization term, then the networks generalize. We prove generalization by first establishing a stronger convergence , along with a rate of convergence. A second resolves a question posed in: how can a model distinguish between the case of clean labels, and randomized labels? Our answer is that Lipschitz regularization using the Lipschitz constant of the clean data makes this distinction. In this case, the model learns a different function which we hypothesize correctly fails to learn the dirty labels. While deep neural networks networks (DNNs) give more accurate predictions than other machine learning methods BID30, they lack some of the performance guarantees of these other methods. One step towards performance guarantees for DNNs is a proof of generalization with a rate. In this paper, we present such a , for Lipschitz regularized DNNs. In fact, we prove a stronger convergence from which generalization follows. We also consider the following problem, inspired by . Problem 1.1. [Learning from dirty data] Suppose we are given a labelled data set, which has Lipschitz constant Lip(D) = O (see below). Consider making copies of 10 percent of the data, adding a vector of norm to the perturbed data points, and changing the label of the perturbed points. Call the new, dirty, data setD. The dirty data has Lip(D) = O(1/). However, if we compute the histogram of the pairwise Lipschitz constants, the distribution of the values on the right hand side of, are mostly below Lip(D) with a small fraction of the values being O(1/), since the duplicated images are apart but with different labels. Thus we can solve with L 0 estimate using the prevalent smaller values, which is an accurate estimate of the clean data Lipschitz constant. The solution of using such a value is illustrated on the right of Figure 1. Compare to the Tychonoff regularized solution on the right of Figure 2. We hypothesis that on dirty data the solution of replaces the thin tall spikes with short fat spikes leading to better approximation of the original clean data. In Figure 1 we illustrate the solution of (with L 0 = 0), using synthetic one dimensional data. In this case, the labels {−1, 0, 1} are embedded naturally into Y = R, and λ = 0.1. Notice that the solution matches the labels exactly on a subset of the data. In the second part of the figure, we show a solution with dirty labels which introduce a large Lipschitz constant, in this case, the solution reduces the Lipschitz constant, thereby correcting the errors. Learning from dirty labels is studied in §2.4. We show that the model learns a different function than the dirty label function. We conjecture, based on synthetic examples, that it learns a better approximation to the clean labels. We begin by establishing notation. Consider the classification problem to fix ideas, although our restuls apply to other problems as well. Definition 1.2. Let D n = x 1,..., x n be a sequence of i.i.d. random variables sampled from the probability distribution ρ. The data x i are in X = d. Consider the classification problem with D labels, and represent the labels by vertices of the probability simplex, Y ⊂ R D. Write y i = u 0 (x i) for the map from data to labels. Write u(x; w) for the map from the input to data to the last layer of the network.1 Augment the training loss with Lipschitz regularization DISPLAYFORM0 The first term in is the usual average training loss. 
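The constant L_0 appearing in the second term (discussed next) can be estimated directly from the labelled data, as in Problem 1.1: compute the pairwise ratios ||y_i − y_j|| / ||x_i − x_j|| and read off the prevalent values, ignoring the few large outliers produced by relabelled near-duplicates. A minimal sketch follows; summarizing the histogram by a fixed quantile is an assumption about how one would read off the prevalent values.

```python
import numpy as np

def estimate_L0(X, Y, q=0.9):
    # Pairwise Lipschitz ratios over the data; a high quantile (rather than
    # the max) discards the O(1/eps) outliers from perturbed, relabelled
    # duplicates.  X: (n, d) inputs, Y: (n, D) labels embedded in R^D.
    ratios = []
    for i in range(X.shape[0] - 1):
        dx = np.linalg.norm(X[i + 1:] - X[i], axis=1)
        dy = np.linalg.norm(Y[i + 1:] - Y[i], axis=1)
        keep = dx > 0
        ratios.append(dy[keep] / dx[keep])
    return float(np.quantile(np.concatenate(ratios), q))
```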
The second term in the Lipschitz regularization term: the excess Lipschitz constant of the map u, compared to the constant L 0.In order to apply the generalization theorem, we need to take L 0 ≥ Lip(u 0), the Lipschitz constant of the data on the whole data manifold. In practice, Lip(u 0) can be estimated by the Lipschitz constant of the empirical data. The definition of the Lipschitz constants for functions and data, as well as the implementation details are presented in §1.3 below. Figure 1: Synthetic labelled data and Lipschitz regularized solution u. Left: The solution value matches the labels exactly on a large portion of the data set. Right: dirtly labels: 10% of the data is incorrect; the regularized solution corrects the errors. Our analysis will apply to the problem which is convex in u, and does not depend explicitly on the weights, w. Of course, once u is restricted to a fixed neural network architecture, the corresponding minimization problem becomes non-convex in the weights. Our analysis can avoid the dependence on the weights because we make the assumption that there are enough parameters so that u can exactly fit the training data. The assumption is justified by. As we send n → ∞ for convergence, we require that the network also grow, in order to continue to satisfy this assumption. Our apply to other non-parametric methods in this regime. Generalization bounds have been obtained previously via VC dimension analysis of neural networks . The generalization rates have factors of the form A k for a k-layer neural network with bounds w i ≤ A for all weight vectors w i in the network. Such bounds are only applicable for low-complexity networks. Other works have considered connections between generalization and stability BID8 ). More recently, BID5 proposed the Lipschitz constant of the network as a candidate measure for the Rademacher complexity, which is a measure of generalization (, Chapter 26). Also, BID12 showed that Lipschitz regularization can be viewed as a special case of distributional robustness. Unlike other recent contributions such as BID26, our analysis does not depend on the training method. In fact, our analysis has more in common with inverse problems in image processing, such as Total Variation denoising and inpainting BID6 BID36. For further discussion, see Appendix C.The estimate of Lip(u; X) provided by can be quite different from the the Tychonoff gradient regularization BID14, DISPLAYFORM0 since corresponds to a maximum of the values of the norms, and the previous equation corresponds to the mean-squared values. In fact, recent work on semi-supervised learning suggests that higher p-norms of the gradient are needed for generalization when the data manifold is not well approximated by the data BID15 BID10 BID29 BID39. In Figure 2 we compare to the problems in Figure 1 using Tychonoff regularization. The Tychonoff regularization is less effective at correcting errors. The effect is more pronounced in higher dimensions. Figure 2: Synthetic labelled data and Tychonoff regularized solution u. Left: The solution value matches the labels exactly on a large portion of the data set. Right dirty labels: 10% of the data is incorrect; the regularized solution is not as effective at correcting errors. The effect is more pronounced in higher dimensions. An upper bound for the Lipschitz constant of the model is given by the norm of the product of the weight matrices (, Section 4.3). Let w = (w 1, . . ., w J) be the weight matrices for each layer. 
Then DISPLAYFORM0 Regularization of the network using methods based on has been implemented recently in BID23 and . Because the upper bound in does not take into account the coefficients in weight matrices which are zero due to the activation functions, the gap in the inequality can be off by factors of many orders of magnitude for deep networks BID19.Implementing can be accomplished using backpropagation in the x variable on each label, which can become costly for D large. Special architectures could also be used to implement Lipschitz regularization, for example, on a restricted architecture, BID31 renormalized the weight matrices of each layer to be norm 1.Lipschitz regularization may help with adversarial examples BID40 BID22 which poses a problem for model reliability BID21. Since the Lipschitz constant L ℓ of the loss, ℓ, controls the norm of a perturbation DISPLAYFORM1 maps with smaller Lipschitz constants may be more robust to adversarial examples. BID19 implemented Lipschitz regularization of the loss, and achieved better robustness against adversarial examples, compared to adversarial training BID22 alone. Lipschitz regularization may also improve stability of GANs. 1-Lipschitz networks are also important for Wasserstein-GANs ) BID0. In the gradient penalty away from norm 1 is implemented, augmented by a penalty around perturbed points, with the goal of improved stability. Spectral regularization for GANs was implemented in BID33 ). Definition 1.3 (Lipschitz constants of functions and data). Choose norms · Y, and · X on X and Y, respectively. The Lipschitz constant (in these norms) of a function u: DISPLAYFORM0 When X 0 is all of X, we write Lip(u; X) = Lip(u). The Lipschitz constant of the data is given by DISPLAYFORM1 Finlay & Oberman FORMULA0 implement Lipschitz regularization as follows. The basis for the implementation of the Lipschitz constant is Rademacher's Theorem BID18, §3.1), which states that if a function g(x) is Lipschitz continuous then it is differentiable almost everywhere and DISPLAYFORM2 Restricting to a mini-batch, we obtain the following method for estimating the Lipschitz constant. Let u(x; w) be a Lipschitz continuous function. Then max DISPLAYFORM3 For vector valued functions, the appropriate matrix norm must be used, see §B. The variational problem admits Lipschitz continuous minimizers, but in general the minimizers are not unique. When L 0 = Lip(u 0), it is clear that u 0, is a solution of FORMULA0: both the loss term and the regularization term are zero when applied to u 0. In addition, any L 0 -Lipschitz extension of u 0 | Dn is also a minimizer of, so solutions are not unique. Let u n be any solution of the Lipschitz regularized variational problem. We study the limit of u n as n → ∞. Since the empirical probability measures ρ n converge to the data distribution ρ, the continuum variational problem corresponding to is min DISPLAYFORM0 where in FORMULA8 we have introduced the following notation. Definition 2.1. Given the loss function, ℓ, a map u: X → Y, and a probability measure, µ, supported on X, define DISPLAYFORM1 to be the expectation of the loss with respect to the measure. In particular, the generalization loss of the map u: DISPLAYFORM2 for the average loss on the data set D n, where ρ n:= 1 n δ xi is the empirical measure corresponding to D n. Remark 2.2. Generalization is defined in (, Section 5.2) as the expected value of the loss function on a new input sampled from the data distribution. 
As defined, the full generalization error includes the training data, but it is of measure zero, so removing it does not change the value. We introduce the following assumption on the loss function. Assumption 2.3 (Loss function). The function ℓ: Y × Y → R is a loss function if it satisfies (i) ℓ ≥ 0, (ii) ℓ(y 1, y 2) = 0 if and only if y 1 = y 2, and (iii) ℓ is strictly convex in y 1.Example 2.4 (R D with L 2 loss). Set Y = R D, and let each label be a basis vector. Set ℓ(y 1, y 2) = y 1 − y 2 2 2 to be the L 2 loss. Example 2.5 (Classification). In classification, the output of the network is a probability vector on the labels. Thus Y = ∆ D, the D-dimensional probability simplex, and each label is mapped to a basis vector. The cross-entropy loss ℓ DISPLAYFORM0 Example 2.6 (Regularized cross-entropy). In the classification setting, it is often the case that the softmax function DISPLAYFORM1 is combined with the cross-entropy loss. In this paper, we regard softmax as the last layer of the DNN, so we assume the output u(x) of the network lies in the probability simplex. If the output, z, of the second to last layer of the DNN, which is the input to softmax in, lies in a compact set, i.e., |z j | ≤ C for all i and some C > 0, then softmax(z) j ≥ e −2C, and so the range of softmax lies in the set DISPLAYFORM2 which is strictly interior to the probability simplex. Restricted to A, the cross-entropy loss ℓ KL is strongly convex and Lipschitz continuous, which is required in Theorems 2.12 and 2.11 below. In our analysis, it is slightly more convenient to define the regularized cross entropy loss with parameter > 0 DISPLAYFORM3 For classification problems, where z = e k, we have ℓ KL (y, e k) = −(1 +) log((y k +)/(1 +)), which is Lipschitz and strongly convex for any 0 ≤ y i ≤ 1 within the probability simplex. Thus, the regularized cross entropy ℓ KL satisfies the strong convexity and Lipschitz regularity required by Theorems 2.12 and 2.11 on the whole probability simplex. Here, we show that solutions of the random variational problem converge to solutions of. We make the standard manifold assumption BID11, and assume the data distribution ρ is a probability density supported on a compact, smooth, DISPLAYFORM0 We denote the probability density again by ρ: M → [0, ∞). Hence, the data D n is a sequence x 1,..., x n of i.i.d. random variables on M with probability density ρ. Associated with the random sample we have the closet point projection map σ n: X → {x 1, . . ., x n} ⊂ X that satisfies DISPLAYFORM1 for all x ∈ X. We recall that W 1,∞ (X; Y) is the space of Lipschitz mappings from X to Y. Throughout this section, C, c > 0 denote positive constants depending only on M, and we assume C ≥ 1 and 0 < c < 1. We follow the analysis tradition of allowing the particular values of C and c to change from line to line. We establish that that minimizers of are unique on M in Theorem A.1, which follows from the strict convexity of the loss restricted to the data manifold M. See also FIG0 which shows how the solutions need not be unique off the data manifold. Our first is in the case where Lip[u 0] ≤ L 0, and so the Lipschitz regularizer is not fully active. This corresponds to the case of clean labels. We state our in generality, for approximate minimizers of, and specialize to the case DISPLAYFORM2 Theorem 2.7 (Convergence ). Assume inf x∈M ρ(x) > 0. 
For any t > 0, with probability at least 1 − Ct DISPLAYFORM3 is any sequence of minimizers of FORMULA0 and DISPLAYFORM4 and Theorem 2.7 applies to the sequence u n, yielding DISPLAYFORM5 It is important to note that Theorem 2.7 does not requires u n to be minimizers of-we just require zero empirical loss, which is often achieved in practice . This allows for approximation errors in solving on the whole domain X, due to the restriction that u must be expressed via a Deep Neural Network. As an immediate corollary, we can prove that the generalization loss converges to zero, and so we obtain generalization. Corollary 2.9. Assume that for some q ≥ 1 the loss ℓ satisfies DISPLAYFORM6 Then under the assumptions of Theorem 2.7 DISPLAYFORM7 holds with probability at least 1 − Ct −1 n −(ct−1).Proof. By FORMULA21, we can bound the generalization loss as follows DISPLAYFORM8 The proof is completed by invoking Theorem 2.7.We now turn to the proof of Theorem 2.7, which requires a bound on the distance between the closest point projection σ n and the identity. The is standard in probability, and we include it for completeness in Lemma 2.10 proved in §A.1. We refer the interested reader to BID34 for more details. Lemma 2.10. Suppose that inf M ρ > 0. Then for any t > 0 DISPLAYFORM9 with probability at least 1 − Ct −1 n −(ct−1).We now give the proof of Theorem 2.7.Proof of Theorem 2.7. Since L[u n, ρ n] = 0 we have u 0 (x i) = u n (x i) for all 1 ≤ i ≤ n. Thus for any x ∈ X we have DISPLAYFORM10 Therefore, we deduce DISPLAYFORM11 The proof is completed by invoking Lemma 2.10. We now consider the setting of Problem 1.1, illustrated in Figure 1 right. We assume that we only have access to a "dirty" label function, which corresponds to an additive error of the form DISPLAYFORM0 where u clean is the label function, and u e: X → Y is some error function, which is assumed to be zero with high probability. Assume that the error vector e has a much larger Lipschitz constant than the labels, so that Lip(u 0) ≫ Lip(u clean).We wish to fit the clean labels, while not fitting the errors, having access only to u 0. The labels correspond to the subset of the data which generate the low Lipschitz constant L clean, while the errors correspond to pairs of labels that generate a high Lipschitz constant. Thus L clean can easily be estimated from the distribution of the pairwise Lipschitz constants of the data. With the goal in mind, we set L 0 = L clean in. The Lipschitz regularizer is active in, which can lead to the solution succeeding in avoiding the dirty labels, as in Figure 1 right. Our main (Theorems 2.12 and 2.11) show that minimizers of J n converge to minimizers of J almost surely as the number of training points n tends to ∞. It is beyond the scope of this work to estimate to what extent the errors are corrected, however we do know that the solution cannot fit u 0 due to the value of the Lipschitz constant, which is already an improvement over the case λ = 0.The proofs for this section can be found in Section A.2. Theorem 2.11. Suppose that ℓ: Y × Y → R is Lipschitz and strongly convex and let L = Lip(u 0). Then for any t > 0, with probability at least 1 − 2t − m m+2 n −(ct−1) all minimizing sequences u n of and all minimizers u * of satisfy DISPLAYFORM1 The next drops the assumption of strong convexity of the loss. Theorem 2.12. Suppose that inf M ρ > 0, ℓ: Y × Y → R is Lipschitz, and let u * ∈ W 1,∞ (X; Y) be any minimizer of. Then with probability one DISPLAYFORM2 where u n is any sequence of minimizers of. 
Furthermore, every uniformly convergent subsequence of u n converges on X to a minimizer of. Remark 2.13. In Theorem 2.12 and Theorem 2.11, the sequence u n does not, in general, converge on the whole domain X. The important point is that the sequence converges on the data manifold M, and solves the variational problem off of the manifold, which ensures that the output of the DNN is stable with respect to the input. See FIG0. In this section we provide the proof of stated in §2.3. Theorem A.1. Suppose the loss function satisfies Assumption 2.3. If u, v ∈ W 1,∞ (X; Y) are two minimizers of and DISPLAYFORM0 Therefore, w is a minimizer of J and so we have equality above, which yields DISPLAYFORM1 Since ℓ is strictly convex in its first argument, it follows that u = v on M.Proof of Lemma 2.10 of §2.3. There exists M such that for any 0 < ≤ M, we can cover M with N geodesic balls B 1, B 2,..., B N of radius, where N ≤ C −m and C depends only on M BID25. Let Z i denote the number of random variables x 1,..., x n falling in B i. Then DISPLAYFORM2 m. Let A n denote the event that at least one B i is empty (i.e., Z i = 0 for some i). Then by the union bound we deduce DISPLAYFORM3 In the event that A n does not occur, then each B i has at least one point, and so |x DISPLAYFORM4 The proof of Theorem 2.12 requires a preliminary Lemma. Let DISPLAYFORM5 holds with probability at least 1 − 2t DISPLAYFORM6 The estimate FORMULA0 is called a discrepancy BID41 BID25, and is a uniform version of concentration inequalities. A key tool in the proof of Lemma A.5 is Bernstein's inequality BID7 ), which we recall now for the reader's convenience. For X 1,..., X n i.i.d. with variance DISPLAYFORM7 surely for all i then Bernstein's inequality states that for any > 0 DISPLAYFORM8 Proof of Lemma A.5. We note that it is sufficient to prove the for w ∈ H L (X; Y) with M wρ dV ol(x) = 0. In this case, we have w(x) = 0 for some x ∈ M, and so w L ∞ (X;Y) ≤ CL.We first give the proof for M = X = m. We partition X into hypercubes B 1,..., B N of side length h > 0, where N = h −m. Let Z j denote the number of x 1,..., x n falling in B j. Then Z j is a Binomial random variable with parameters n and p j = Bj ρ dx ≥ ch m. By the Bernstein inequality we have for each j that DISPLAYFORM9 provided 0 < ≤ h m. Therefore, we deduce DISPLAYFORM10 holds with probability at least 1−2h DISPLAYFORM11 holds with probability at least 1 − 2t DISPLAYFORM12 trivially holds, and hence we can allow t > n/ log(n) as well. We sketch here how to prove the on the manifold M. We cover M with k geodesic balls of radius > 0, denoted B M (x 1,),..., B M (x k,), and let ϕ 1,..., ϕ k be a partition of unity subordinate to this open covering of M. For > 0 sufficiently small, the Riemannian exponential map exp x: B(0,) ⊂ T x M → M is a diffeomorphism between the ball B(0, r) ⊂ T x M and the geodesic ball B M (x,) ⊂ M, where T x M ∼ = R m. Furthermore, the Jacobian of exp x at v ∈ B(0, r) ⊂ T x M, denoted by J x (v), satisfies (by the Rauch Comparison Theorem) DISPLAYFORM13 Therefore, we can run the argument above on the ball B(0, r) ⊂ R m in the tangent space, lift the to the geodesic ball B M (x i,) via the Riemannian exponential map exp x, and apply the bound DISPLAYFORM14 to complete the proof. Remark A.6. The exponent 1/(m + 2) is not optimal, but affords a very simple proof. It is possible to prove a similar with the optimal exponent 1/m in dimension m ≥ 3, but the proof is significantly more involved. We refer the reader to BID41 for details. 
Remark A.7. The proof of Theorem 2.12 shows that Γ-converges to almost surely as n → ∞ in the L ∞ (X; Y) topology. Γ-convergence is a notion of convergence for functionals that ensures minimizers along a sequence of functionals converge to a minimizer of the Γ-limit. While we do not use the language of Γ-convergence here, the ideas are present in the proof of Theorem 2.12. We refer to BID9 for details on Γ-convergence. Proof of Theorem 2.12. By Lemma A.5 the event that lim DISPLAYFORM15 for all Lipschitz constants L > 0 has probability one. For the rest of the proof we restrict ourselves to this event. Let u n ∈ W 1,∞ (X; Y) be a sequence of minimizers of, and let u * ∈ W 1,∞ (X; Y) be any minimizer of. Then since DISPLAYFORM16 we have Lip(u n) ≤ Lip(u 0) =: L for all n. By the Arzelà-Ascoli Theorem BID37 there exists a subsequence u nj and a function u ∈ W 1,∞ (X; Y) such that u nj → u uniformly as n j → ∞. Note we also have Lip(u) ≤ lim inf j→∞ Lip(u nj). Since DISPLAYFORM17 Therefore, u is a minimizer of J. By Theorem A.1, u = u * on M, and so u nj → u * uniformly on M as j → ∞. Now, suppose that does not hold. Then there exists a subsequence u nj and δ > 0 such that DISPLAYFORM18 for all j ≥ 1. However, we can apply the argument above to extract a further subsequence of u nj that converges uniformly on M to u *, which is a contradiction. This completes the proof. Proof of Theorem 2.11. Let L = Lip(u 0). By Lemma A.5 DISPLAYFORM19 (FORMULA0 holds with probability at least 1 − 2t − m m+2 n −(ct−1) for any t > 0. Let us assume for the rest of the proof that holds. (LE) is not practical for large scale problems. There has be extensive work on the Lipschitz Extension problem, see, BID28, for example. More recently, optimal Lipschitz extensions have been studied, with connections to Partial Differential Equations, see BID2. We can interpret as a relaxed version of (LE), where λ −1 is a parameter which replaces the unknown Lagrange multiplier for the constraint. Variational problems are fundamental tools in mathematical approaches to image processing BID3 Lipschitz regularization in not nearly as common. It appears in image processing in (, §4.4) BID16 and BID24 ). Variational problems of the form can be studied by the direct method in the calculus of variations BID13. The problem can be discretized to obtain a finite dimensional convex convex optimization problem. The variational problem can also be studied by finding the first variation, which is a Partial Differential Equation BID17, which can then be solved numerically. Both approaches are discussed in BID3. In FIG1 we compare different regularization terms, in one dimension. The difference between the regularizers is more extreme in higher dimensions.
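To make the comparison of the two regularizers concrete, the sketch below evaluates both on a minibatch using the gradient-norm characterisation of §1.3: the Lipschitz penalty takes the maximum per-example input-gradient norm and penalizes only its excess over L_0, while Tychonoff regularization averages the squared norms. Summing the vector-valued output before differentiating is a simplification made here; for vector-valued u the appropriate matrix norm should be used, as noted in §1.3.

```python
import torch

def gradient_penalties(model, x, L0=0.0):
    # Per-example input-gradient norms of the network output on a minibatch.
    x = x.clone().requires_grad_(True)
    out = model(x).sum()                       # scalar proxy for the vector output
    grad, = torch.autograd.grad(out, x, create_graph=True)
    norms = grad.flatten(1).norm(dim=1)
    lipschitz_term = torch.clamp(norms.max() - L0, min=0.0)   # excess of the max norm over L0
    tychonoff_term = (norms ** 2).mean()                      # mean-squared gradient norm
    return lipschitz_term, tychonoff_term
```

Either term can then be added to the average training loss with weight λ, as in the objectives discussed above.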
We prove generalization of DNNs by adding a Lipschitz regularization term to the training loss. We resolve a question posed in Zhang et al. (2016).
309
scitldr
For fast and energy-efficient deployment of trained deep neural networks on resource-constrained embedded hardware, each learned weight parameter should ideally be represented and stored using a single bit. Error-rates usually increase when this requirement is imposed. Here, we report large improvements in error rates on multiple datasets, for deep convolutional neural networks deployed with 1-bit-per-weight. Using wide residual networks as our main baseline, our approach simplifies existing methods that binarize weights by applying the sign function in training; we apply scaling factors for each layer with constant unlearned values equal to the layer-specific standard deviations used for initialization. For CIFAR-10, CIFAR-100 and ImageNet, and models with 1-bit-per-weight requiring less than 10 MB of parameter memory, we achieve error rates of 3.9%, 18.5% and 26.0% / 8.5% (Top-1 / Top-5) respectively. We also considered MNIST, SVHN and ImageNet32, achieving 1-bit-per-weight test of 0.27%, 1.9%, and 41.3% / 19.1% respectively. For CIFAR, our error rates halve previously reported values, and are within about 1% of our error-rates for the same network with full-precision weights. For networks that overfit, we also show significant improvements in error rate by not learning batch normalization scale and offset parameters. This applies to both full precision and 1-bit-per-weight networks. Using a warm-restart learning-rate schedule, we found that training for 1-bit-per-weight is just as fast as full-precision networks, with better accuracy than standard schedules, and achieved about 98%-99% of peak performance in just 62 training epochs for CIFAR-10/100. For full training code and trained models in MATLAB, Keras and PyTorch see https://github.com/McDonnell-Lab/1-bit-per-weight/. Fast parallel computing resources, namely GPUs, have been integral to the resurgence of deep neural networks, and their ascendancy to becoming state-of-the-art methodologies for many computer vision tasks. However, GPUs are both expensive and wasteful in terms of their energy requirements. They typically compute using single-precision floating point (32 bits), which has now been recognized as providing far more precision than needed for deep neural networks. Moreover, training and deployment can require the availability of large amounts of memory, both for storage of trained models, and for operational RAM. If deep-learning methods are to become embedded in resourceconstrained sensors, devices and intelligent systems, ranging from robotics to the internet-of-things to self-driving cars, reliance on high-end computing resources will need to be reduced. To this end, there has been increasing interest in finding methods that drive down the resource burden of modern deep neural networks. Existing methods typically exhibit good performance but for the ideal case of single-bit parameters and/or processing, still fall well-short of state-of-the-art error rates on important benchmarks. In this paper, we report a significant reduction in the gap (see Figure 1 and Results) between Convolutional Neural Networks (CNNs) deployed using weights stored and applied using standard precision (32-bit floating point) and networks deployed using weights represented by a single-bit each. In the process of developing our methods, we also obtained significant improvements in error-rates obtained by full-precision versions of the CNNs we used. 
In addition to having application in custom hardware deploying deep networks, networks deployed using 1-bit-per-weight have previously been shown BID21 to enable significant speedups on regular GPUs, although doing so is not yet possible using standard popular libraries. Aspects of this work was first communicated as a subset of the material in a workshop abstract and talk BID19. Figure 1: Our error-rate gaps between using full-precision and 1-bit-per-weight. All points except black crosses are data from some of our best reported in this paper for each dataset. Black points are on the full ImageNet dataset, in comparison with of BID22 (black crosses). The notation 4x, 10x and 15x corresponds to network width (see Section 4).1.1 RELATED WORK In 2015, a new form of CNN called a "deep residual network," or "ResNet" BID8 was developed and used to set many new accuracy records on computer-vision benchmarks. In comparison with older CNNs such as Alexnet BID14 and VGGNet BID24, ResNets achieve higher accuracy with far fewer learned parameters and FLOPs (FLoating-point OPerations) per image processed. The key to reducing the number of parameters in ResNets was to replace "all-to-all layers" in VGG-like nets with "global-average-pooling" layers that have no learned parameters BID17 BID25, while simultaneously training a much deeper network than previously. The key new idea that enabled a deeper network to be trained effectively was the introduction of so-called "skip-connections" BID8. Many variations of ResNets have since been proposed. ResNets offer the virtue of simplicity, and given the motivation for deployment in custom hardware, we have chosen them as our primary focus. Despite the increased efficiency in parameter usage, similar to other CNNs the accuracy of ResNets still tends to increase with the total number of parameters; unlike other CNNs, increased accuracy can either from deeper or wider networks.In this paper, we use Wide Residual Networks, as they have been demonstrated to produce better accuracy in less training time than deeper networks. Achieving the best accuracy and speed possible when deploying ResNets or similar networks on hardware-constrained mobile devices will require minimising the total number of bits transferred between memory and processors for a given number of parameters. Motivated by such considerations, a lot of recent attention has been directed towards compressing the learned parameters (model compression) and reducing the precision of computations carried out by neural networks-see BID12 for a more detailed literature review. Recently published strategies for model compression include reducing the precision (number of bits used for numerical representation) of weights in deployed networks by doing the same during training BID2 BID12 BID20 BID22, reducing the number of weights in trained neural networks by pruning BID6 BID13, quantizing or compressing weights following training BID6 BID29, reducing the precision of computations performed in forward-propagation during inference BID2 BID12 BID20 BID22, and modifying neural network architectures BID10. 
A theoretical analysis of various methods proved on the convergence of a variety of weight-binarization methods BID16.From this range of strategies, we are focused on an approach that simultaneously contributes two desirable attributes: simplicity, in the sense that deployment of trained models immediately follows training without extra processing; implementation of convolution operations can be achieved without multipliers, as demonstrated by BID22. Our strategy for improving methods that enable inference with 1-bit-per-weight was threefold:1. State-of-the-art baseline. We sought to begin with a baseline full-precision deep CNN variant with close to state-of-the-art error rates. At the time of commencement in 2016, the state-of-the-art on CIFAR-10 and CIFAR-100 was held by Wide Residual Networks, so this was our starting point. While subsequent approaches have exceeded their accuracy, ResNets offer superior simplicity, which conforms with our third strategy in this list.2. Make minimal changes when training for 1-bit-per-weight. We aimed to ensure that training for 1-bit-per-weight could be achieved with minimal changes to baseline training.3. Simplicity is desirable in custom hardware. With custom hardware implementations in mind, we sought to simplify the design of the baseline network (and hence the version with 1-bit weights) as much as possible without sacrificing accuracy. Although this paper is chiefly about 1-bit-per-weight, we exceeded our objectives for the fullprecision baseline network, and surpassed reported error rates for CIFAR-10 and CIFAR-100 using Wide ResNets. This was achieved using just 20 convolutional layers; most prior work has demonstrated best wide ResNet performance using 28 layers. Our innovation that achieves a significant error-rate drop for CIFAR-10 and CIFAR-100 in Wide ResNets is to simply not learn the per-channel scale and offset factors in the batch-normalization layers, while retaining the remaining attributes of these layers. It is important that this is done in conjunction with exchanging the ordering of the final weight layer and the global average pooling layer (see FIG2 .We observed this effect to be most pronounced for CIFAR-100, gaining around 3% in test-error rate. But the method is advantageous only for networks that overfit; when overfitting is not an issue, such as for ImageNet, removing learning of batch-norm parameters is only detrimental. Ours is the first study we are aware of to consider how the gap in error-rate for 1-bit-per-weight compared to full-precision weights changes with full-precision accuracy across a diverse range of image classification datasets (Figure 1).Our approach surpasses by a large margin all previously reported error rates for CIFAR-10/100 (error rates halved), for networks constrained to run with 1-bit-per-weight at inference time. One reason we have achieved lower error rates for the 1-bit case than previously is to start with a superior baseline network than in previous studies, namely Wide ResNets. However, our approach also in smaller error rate increases relative to full-precision error rates than previously, while training requires the same number of epochs as for the case of full-precision weights. Our main innovation is to introduce a simple fixed scaling method for each convolutional layer, that permits activations and gradients to flow through the network with minimum change in standard deviation, in accordance with the principle underlying popular initialization methods BID7. 
We combine this with the use of a warm-restart learning-rate method BID18 ) that enables us to report close-to-baseline for the 1-bit case in far fewer epochs of training than reported previously. We follow the approach of BID2; BID22; BID20, in that we find good when using 1-bit-per-weight at inference time if during training we apply the sign function to real-valued weights for the purpose of forward and backward propagation, but update full-precision weights using SGD with gradients calculated using full-precision. However, previously reported methods for training using the sign of weights either need to train for many hundreds of epochs BID2 BID20, or use computationallycostly normalization scaling for each channel in each layer that changes for each minibatch during training, i.e. the BWN method of BID22. We obtained our using a simple alternative approach, as we now describe. We begin by noting that the standard deviation of the sign of the weights in a convolutional layer with kernels of size F × F will be close to 1, assuming a mean of zero. In contrast, the standard deviation of layer i in full-precision networks is initialized in the method of BID7 to 2/(F 2 C i−1), where C i−1 is the number of input channels to convolutional layer i, and i = 1,.., L, where L is the number of convolutional layers and C 0 = 3 for RGB inputs. When applying the sign function alone, there is a mismatch with the principled approach to controlling gradient and activation scaling through a deep network BID7. Although the use of batch-normalization can still enable learning, convergence is empirically slow and less effective. To address this problem, for training using the sign of weights, we use the initialization method of BID7 for the full-precision weights that are updated, but also introduce a layer-dependent scaling applied to the sign of the weights. This scaling has a constant unlearned value equal to the initial standard deviation of 2/(F 2 C i−1) from the method of BID7. This enables the standard deviation of forward-propagating information to be equal to the value it would have initially in full-precision networks. In implementation, during training we multiply the sign of the weights in each layer by this value. For inference, we do this multiplication using a scaling layer following the weight layer, so that all weights in the network are stored using 0 and 1, and deployed using ±1 (see https://github.com/McDonnell-Lab/1-bit-per-weight/). Hence, custom hardware implementations would be able to perform the model's convolutions without multipliers BID22, and significant GPU speedups are also possible BID21.The fact that we scale the weights explicitly during training is important. Although for forward and backward propagation it is equivalent to scale the input or output feature maps of a convolutional layer, doing so also scales the calculated gradients with respect to weights, since these are calculated by convolving input and output feature maps. As a consequence, learning is dramatically slower unless layer-dependent learning rates are introduced to cancel out the scaling. Our approach to this is similar to the BWN method of BID22, but our constant scaling method is faster and less complex. In summary, the only differences we make in comparison with full-precision training are as follows. Let W i be the tensor for the convolutional weights in the i-th convolutional layer. 
These weights are processed in the following way only for forward propagation and backward propagation, not for weight updates: W̄_i = sqrt(2/(F_i^2 C_{i−1})) sign(W_i), for i = 1, ..., L, where F_i is the spatial size of the convolutional kernel in layer i; see FIG1. Our ResNets use the 'pre-activation' and identity-mapping approach for residual connections. For ImageNet, we use an 18-layer design, as in BID8. For all other datasets we mainly use a 20-layer network, but also report some results for 26 layers. Each residual block includes two convolutional layers, each preceded by batch normalization (BN) and Rectified Linear Unit (ReLU) layers. Rather than train very deep ResNets, we use wide residual networks (Wide ResNets). Although prior work reported that 28/26-layer networks result in better test accuracy than 22/20-layer networks, we found for CIFAR-10/100 that just 20 layers typically produces the best results, which is possibly due to our approach of not learning the batch-norm scale and offset parameters. Our baseline ResNet design used in most of our experiments (see Figures 3 and 4) has several differences in comparison to earlier designs. These details are articulated in Appendix A, and are mostly for simplicity, with little impact on accuracy. The exception is our approach of not learning batch-norm parameters.
Figure 3: The design is mostly a standard pre-activation ResNet. The first (stand-alone) convolutional layer ("conv") and the first 2 or 3 residual blocks have 64k (ImageNet) or 16k (other datasets) output channels. The next 2 or 3 blocks have 128k or 32k output channels, and so on, where k is the widening parameter. The final (stand-alone) convolutional layer is a 1 × 1 convolutional layer that gives N output channels, where N is the number of classes. Importantly, this final convolutional layer is followed by batch-normalization ("BN") prior to global-average-pooling ("GAP") and softmax ("SM"). The blocks where the number of channels doubles are downsampling blocks (details are depicted in FIG4) that reduce each spatial dimension of the feature map by a factor of two. The rectified-linear-unit ("ReLU") layer closest to the input is optional, but when included, it is best to learn the BN scale and offset in the subsequent layer.
Figure 4: Downsampling (stride-2 convolution) is used in the convolutional layers where the number of output channels increases. The corresponding downsampling for skip connections is done in the same residual block. Unlike standard pre-activation ResNets, we use an average pooling layer ("avg pool") in the residual path when downsampling.
We trained our models following, for most aspects, the standard stochastic gradient descent methods used previously for Wide ResNets. Specifically, we use cross-entropy loss, minibatches of size 125, and momentum of 0.9 (both for learning weights, and in situations where we learn batch-norm scales and offsets). For CIFAR-10/100, SVHN and MNIST, where overfitting is evident in Wide ResNets, we use a larger weight decay of 0.0005. For ImageNet32 and full ImageNet we use a weight decay of 0.0001. Apart from one set of experiments where we added a simple extra approach called cutout, we use standard 'light' data augmentation, including randomly flipping each image horizontally with probability 0.5 for CIFAR-10/100 and ImageNet32. For the same 3 datasets plus SVHN, we pad by 4 pixels on all sides (using random values between 0 and 255) and crop a 32 × 32 patch from a random location in the resulting 40 × 40 image. For full ImageNet, we scale, crop and flip, as in BID8. We did not use any image-preprocessing, i.e.
we did not subtract the mean, as the initial BN layer in our network performs this role, and we did not use whitening or augment using color or brightness. We use the initialization method of BID7.We now describe important differences in our training approach compared to those usually reported. When training on CIFAR-10/100 and SVHN, in all batch-norm layers (except the first one at the input when a ReLU is used there), we do not learn the scale and offset factor, instead initializating these to 1 and 0 in all channels, and keeping those values through training. Note that we also do not learn any biases for convolutional layers. The usual approach to setting the moments for use in batch-norm layers in inference mode is to keep a running average through training. When not learning batch-normalization parameters, we found a small benefit in calculating the batch-normalization moments used in inference only after training had finished. We simply form as many minibatches as possible from the full training-set, each with the same data augmentation as used during training applied, and pass these through the trained network, averaging the returned moments for each batch. For best when using this method using matconvnet, we found it necessary to ensure the parameter that is used to avoid divisions by zero is set to 1 × 10 −5; this is different to the way it is used in keras and other libraries. FORMULA0 is that we exchange the ordering of the global average pooling layer and the final weight layer, so that our final weight layer becomes a 1 × 1 convolutional layer with as many channels as there are classes in the training set. This design is not new, but it does seem to be new to ResNets: it corresponds to the architecture of BID17, which originated the global average pooling concept, and also to that used by BID25.As with all other convolutional layers, we follow this final layer with a batch-normalization layer; the benefits of this in conjunction with not learning the batch-normalization scale and offset are described in the Discussion section. We use a'warm restarts' learning rate schedule that has reported state-of-the-art Wide ResNet BID18 whilst also speeding up convergence. The method constantly reduces the learning rate from 0.1 to 1 × 10 −4 according to a cosine decay, across a certain number of epochs, and then repeats across twice as many epochs. We restricted our attention to a maximum of 254 epochs (often just 62 epochs, and no more than 30 for ImageNet32) using this method, which is the total number of epochs after reducing the learning rate from maximum to minimum through 2 epochs, then 4, 8, 16, 32, 64 and 128 epochs. For CIFAR-10/100, we typically found that we could achieve test error rates after 32 epochs within 1-2% of the error rates achievable after 126 or 254 epochs. In the literature, most experiments with CIFAR-10 and CIFAR-100 use simple "standard" data augmentation, consisting of randomly flipping each training image left-right with probability 0.5, and padding each image on all sides by 4 pixels, and then cropping a 32 × 32 version of the image from a random location. We use this augmentation, although with the minor modification that we pad with uniform random integers between 0 and 255, rather than zero-padding. Additionally, we experimented with "cutout" BID3. This involves randomly selecting a patch of each raw training image to remove. The method was shown to combine with other state-of-the-art methods to set the latest state-of-the-art on CIFAR-10/100 (see TAB0). 
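Returning to the learning-rate schedule described above, a minimal sketch of the warm-restart schedule is given below; updating the learning rate once per epoch (rather than per iteration) is an assumption.

```python
import math

def warm_restart_lr(epoch, eta_max=0.1, eta_min=1e-4, first_cycle=2):
    # Cosine decay from eta_max to eta_min over a cycle, with cycle lengths
    # doubling: 2, 4, 8, ..., 128 epochs (254 epochs in total).
    cycle_len, start = first_cycle, 0
    while epoch >= start + cycle_len:
        start += cycle_len
        cycle_len *= 2
    t = (epoch - start) / cycle_len
    return eta_min + 0.5 * (eta_max - eta_min) * (1.0 + math.cos(math.pi * t))
```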
We found better results using larger cutout patches for CIFAR-100 than those reported by BID3; hence for both CIFAR-10 and CIFAR-100 we choose patches of size 18 × 18. Following the method of BID3, we ensured that all pixels are chosen for inclusion in a patch equally frequently throughout training, by ensuring that if the chosen patch location is near the image border, the patch affects the image only for the part of the patch that lies inside the image. Differently to BID3, as for our padding, we use uniform random integers to replace the image pixel values in the location of the patches. We did not apply cutout to other datasets. We conducted experiments on six databases: four databases of 32×32 RGB images (CIFAR-10, CIFAR-100, SVHN and ImageNet32) and the full ILSVRC ImageNet database BID23, as well as MNIST BID15. Details of the first three 32×32 datasets can be found in many papers. ImageNet32 is a downsampled version of ImageNet, where the training and validation images are cropped using their annotated bounding boxes, and then downsampled to 32 × 32 BID1; see http://image-net.org/download-images. All experiments were carried out on a single GPU using MATLAB with GPU acceleration from MatConvNet and cuDNN. We report results for Wide ResNets, which (except when applied to ImageNet) are 4× and 10× wider than baseline ResNets, to use the usual Wide ResNet terminology, where the baseline has 16 channels in the layers at the first spatial scale. We use notation of the form 20-10 to denote Wide ResNets with 20 convolutional layers and 160 channels in the first spatial scale, and hence width 10×. For the full ImageNet dataset, we use 18-layer Wide ResNets with 160 channels in the first spatial scale, but given the standard ResNet baseline is width 64, this corresponds to width 2.5× on this dataset. We denote these networks as 18-2.5. TAB0 lists our top-1 error rates for CIFAR-10/100; C10 indicates CIFAR-10, C100 indicates CIFAR-100; the superscript + indicates standard crop and flip augmentation; and ++ indicates the use of cutout. TAB1 lists error rates for SVHN, ImageNet32 (I32) and full ImageNet; we did not use cutout on these datasets. Both top-1 and top-5 error rates are tabulated for I32 and ImageNet. For ImageNet, we provide results for single-center-crop testing, and also for multi-crop testing. In the latter, the decision for each test image is obtained by averaging the softmax output after passing through the network 25 times, corresponding to crops obtained by rescaling to 5 scales as described by BID8, and from 5 random positions at each scale. Our full-precision ImageNet error rates are slightly lower than expected for a wide ResNet of this size, probably due to the fact we did not use color augmentation. TAB2 shows comparison results from the original work on Wide ResNets, and subsequent papers that have reduced error rates on the CIFAR-10 and CIFAR-100 datasets. We also show the only results, to our knowledge, for ImageNet32. The current state-of-the-art for SVHN without augmentation is 1.59% BID11, and with cutout augmentation is 1.30% BID3. Our full-precision result for SVHN (1.75%) is only a little short of these, even though we used only a 4× ResNet, with less than 5 million parameters, and only 30 training epochs. Table 4 shows comparison results for previous work that trains models by using the sign of weights during training. Additional results appear in BID12, where activations are quantized, and so the error rates are much larger.
Inspection of Tables 1 to 4 reveals that our baseline full-precision 10× networks, when trained with cutout, surpass the performance of deeper Wide ResNets trained with dropout.

Table 4: Test error rates using 1-bit-per-weight at test time and propagation during training.
Method | C10 | C100 | SVHN | ImageNet
BC BID2 | 8.27 | - | 2.30 | -
Weight binarization BID20 | 8.25 | - | - | -
BWN - Googlenet BID22 | 9.88 | - | - | 34.5 / 13.9 (full ImageNet)
VGG+HWGQ BID0 | 7.49 | - | - | -
BC with ResNet + ADAM BID16 |

Even without the use of cutout, our 20-10 network surpasses by over 2% the CIFAR-100 error rate reported for essentially the same network, and is also better than previous Wide ResNet results on CIFAR-10 and ImageNet32. As elaborated on in Section 5.3, this improved accuracy is due to our approach of not learning the batch-norm scale and offset parameters. For our 1-bit networks, we observe that there is always an accuracy gap compared to full-precision networks. This is discussed in Section 5.1. Using cutout for CIFAR-10/100 reduces error rates as expected. In comparison with training very wide 20× ResNets on CIFAR-100, as shown in TAB0, it is more effective to use cutout augmentation in the 10× network to reduce the error rate, while using only a quarter of the weights. Clearly, even for width-4 ResNets, the gap in error rate between full-precision weights and 1-bit-per-weight is small. Also noticeable is that the warm-restart method enables convergence to very good solutions after just 30 epochs; training longer to 126 epochs reduces test error rates further by between 2% and 5%. It can also be observed that the 20-4 network is powerful enough to model the CIFAR-10/100 training sets to well over 99% accuracy, but the modelling power is reduced in the 1-bit version, particularly for CIFAR-100. The reduced modelling capacity for single-bit weights is consistent with the gap in test-error rate performance between the 32-bit and 1-bit cases. When using cutout, training for longer gives improved error rates, but when not using cutout, 126 epochs suffices for peak performance. Finally, for MNIST and a 4× wide ResNet without any data augmentation, our full-precision method achieved 0.71% after just 1 epoch of training, and 0.28% after 6 epochs, whereas our 1-bit method achieved 0.81%, 0.36% and 0.27% after 1, 6 and 14 epochs. In comparison, 1.29% was reported for the 1-bit-per-weight case by BID2, and 0.96% by BID12.

FIG6: Convergence through training. Left: each marker shows the error rates on the test set and the training set at the end of each cycle of the warm-restart training method, for 20-4 ResNets (less than 5 million parameters). Right: each marker shows the test error rate for 20-10 ResNets, with and without cutout. C10 indicates CIFAR-10, and C100 indicates CIFAR-100.

For CIFAR-10/100, both our full-precision Wide ResNets and our 1-bit-per-weight versions benefit from our method of not learning batch-norm scale and offset parameters. Accuracy in the 1-bit-per-weight case also benefits from our use of a warm-restart training schedule BID18. To demonstrate the influence of these two aspects, in FIG8 we show, for CIFAR-100, how the test error rate changes through training when either or both of these methods are not used. We did not use cutout for the purpose of this figure. The comparison learning-rate schedule drops the learning rate from 0.1 to 0.01 to 0.001 after 85 and 170 epochs.
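For reference, the warm-restart schedule compared against that step schedule can be sketched as below. This is only our reading of the description given earlier (0.1 down to 1 × 10^-4 by cosine decay, with cycle lengths doubling from 2 epochs); the per-epoch granularity is an assumption, since the decay could equally be applied per iteration.

```python
import math

def warm_restart_lr(epoch, lr_max=0.1, lr_min=1e-4, first_cycle=2):
    """Cosine decay from lr_max to lr_min within each cycle; cycles last 2, 4, 8, ... epochs,
    and the rate jumps back to lr_max at the start of every new cycle."""
    cycle_len, cycle_start = first_cycle, 0
    while epoch >= cycle_start + cycle_len:          # locate the cycle containing this epoch
        cycle_start += cycle_len
        cycle_len *= 2
    progress = (epoch - cycle_start) / cycle_len     # fraction of the current cycle completed
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * progress))

# Restarts then fall after 2, 6, 14, 30, 62, 126 and 254 cumulative epochs.
```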
It is clear that our methods lower the final error rate by around 3% absolute by not learning the batch-norm parameters. The warm-restart method enables faster convergence for the full-precision case, but is not significant in reducing the error rate. However, for the 1-bit-per-weight case, it is clear that for best results it is best to both use warm-restart and to not learn batch-norm parameters. It is to be expected that a smaller error rate gap will exist between the same network using full-precision and 1-bit-per-weight when the test error rate on the full-precision network gets smaller. Indeed, TAB0 quantifies how the gap in error rate between the full-precision and 1-bit-per-weight cases tends to grow as the error rate in the full-precision network grows. To further illustrate this trend for our approach, we have plotted in Figure 1 the gap in Top-1 error rates vs the Top-1 error rate for the full-precision case, for some of the best performing networks for the six datasets we used. Strong conclusions from this data can only be made relative to alternative methods, but ours is the first study to our knowledge to consider more than two datasets. Nevertheless, we have also plotted in Figure 1 the error rate and the gap reported by BID22 for two different networks using their BWN 1-bit weight method. The reasons for the larger gaps for those points are unclear, but what is clear is that better full-precision accuracy results in a smaller gap in all cases. A challenge for further work is to derive theoretical bounds that predict the gap. How the magnitude of the gap changes with full-precision error rate is dependent on many factors, including the method used to generate models with 1-bit-per-weight. For high-error-rate cases, the loss function throughout training is much higher for the 1-bit case than the 32-bit case, and hence the 1-bit-per-weight network is not able to fit the training set as well as the 32-bit one. Whether this is because of the loss of precision in weights, or due to the mismatch in gradients inherent in propagating with 1-bit weights while updating full-precision weights during training, is an open question. If it is the latter case, then it is possible that principled refinements to the weight update method we used will further reduce the gap. However, it is also interesting that for our 26-layer networks applied to CIFAR-10/100 the gap is much smaller, despite no benefits in the full-precision case from extra depth, and this also warrants further investigation. Our approach differs from the BWN method of BID22 in two ways. First, we do not need to calculate mean absolute weight values of the underlying full-precision weights for each output channel in each layer following each minibatch, and this enables faster training. Second, we do not need to adjust for a gradient term corresponding to the appearance of each weight in the mean absolute value. We found overall that the two methods work equally effectively, but ours has two advantages: faster training, and fewer overall parameters. As a note, we found that the method of BID22 also works equally effectively on a per-layer basis, rather than per-channel. We also note that the focus of BID22 was much more on the case that combines single-bit activations and 1-bit-per-weight than solely 1-bit-per-weight. It remains to be tested how our scaling method compares in that case.
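As a rough illustration of what propagating with 1-bit weights while updating full-precision weights means in practice, here is a sketch using a straight-through estimator. The particular scaling constant shown (the He-initialization standard deviation) is our assumption, not necessarily the constant used by our method or by BWN.

```python
import torch

def binarized_weights(w_full, scale=None):
    """Return weights whose forward value is a fixed per-layer constant times sign(w_full),
    while gradients flow to the underlying full-precision weights (straight-through trick)."""
    if scale is None:
        fan_in = w_full[0].numel()            # k*k*c_in for a conv weight of shape (c_out, c_in, k, k)
        scale = (2.0 / fan_in) ** 0.5         # assumed constant; see lead-in
    w_bin = scale * torch.sign(w_full)
    return w_full + (w_bin - w_full).detach() # forward: w_bin; backward: gradient w.r.t. w_full
```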
It is also interesting to understand whether the use of batch-norm renders scaling of the sign of weights robust to different scalings, and whether networks that do not use batch-norm might be more sensitive to the precise method used. The unusual design choice of not learning the batch normalization parameters was made for CIFAR-10/100, SVHN and MNIST because, for Wide ResNets, overfitting is very evident on these datasets (see FIG6): by the end of training, the loss function typically becomes very close to zero, corresponding to severe overfitting. Inspired by label-smoothing regularization BID26, which aims to reduce overconfidence following the softmax layer, we hypothesized that imposing more control over the standard deviation of inputs to the softmax might have a similar regularizing effect. This is why we removed the final all-to-all layer of our ResNets and replaced it with a 1×1 convolutional layer followed by a batch-normalization layer prior to the global average pooling layer. In turn, not learning a scale and offset for this batch-normalization layer ensures that batches flowing into the softmax layer have standard deviations that do not grow throughout training, which tends to increase the entropy of predictions following the softmax, which is equivalent to lower confidence. After observing success with these methods in 10× Wide ResNets, we then observed that learning the batch-norm parameters in other layers also led to increased overfitting and increased test error rates, and so removed that learning in all layers (except the first one applied to the input, when ReLU is used there). As shown in FIG8, there are significant benefits from this approach, for both full-precision and 1-bit-per-weight networks. It is why, in TAB2, our results surpass those reported for effectively the same Wide ResNet (our 20-10 network is essentially the same as the 22-10 comparison network, where the extra 2 layers appear due to the use of learned 1×1 convolutional projections in downsampling residual paths, whereas we use average pooling instead). As expected from the motivation, we found our method is not appropriate for datasets such as ImageNet32 where overfitting is not as evident, in which case learning the batch normalization parameters significantly reduces test error rates. Here we compare our approach with SqueezeNet BID13, which reported significant memory savings for a trained model relative to an AlexNet. The SqueezeNet approach uses two strategies to achieve this: replacing many 3x3 kernels with 1x1 kernels, and deep compression BID6. Regarding the first strategy, we note that SqueezeNet is an all-convolutional network that closely resembles the ResNets used here. We experimented briefly with our 1-bit-per-weight approach in many all-convolutional variants (e.g. plain all-convolutional BID25, SqueezeNet BID13, MobileNet BID10, ResNeXt BID27) and found its effectiveness relative to full-precision baselines to be comparable for all variants. We also observed in many experiments that the total number of learned parameters correlates very well with classification accuracy. When we applied a SqueezeNet variant to CIFAR-100, we found that to obtain the same accuracy as our ResNets for about the same depth, we had to increase the width until the SqueezeNet had approximately the same number of learned parameters as the ResNet. We conclude that our method therefore reduces the model size of the baseline SqueezeNet architecture (i.e. when no deep compression is used) by a factor of 32, albeit with an accuracy gap.
Regarding the second strategy, the SqueezeNet paper reports that deep compression BID6 was able to reduce the model size by approximately a factor of 10 with no accuracy loss. Our method reduces the same model size by a factor of 32, but with a small accuracy loss that gets larger as the full-precision accuracy gets smaller. It would be interesting to explore whether deep compression might be applied to our 1-bit-per-weight models, but our own focus is on methods that minimally alter training, and we leave investigation of more complex methods for future work. Regarding SqueezeNet performance, the best accuracy reported in the SqueezeNet paper for ImageNet is 39.6% top-1 error, requiring 4.8MB for the model's weights. Our single-bit-weight models achieve better than 33% top-1 error, and require 8.3 MB for the model's weights. In this paper we focus only on reducing the precision of weights to a single bit, with benefits for model compression, and enabling of inference with very few multiplications. It is also interesting and desirable to reduce the computational load of inference using a trained model, by carrying out layer computations using very few bits BID12 BID22 BID0. Facilitating this requires modifying non-linear activations in a network from ReLUs to quantized ReLUs, or in the extreme case, binary step functions. Here we use only full-precision calculations. It can be expected that combining our methods with reduced-precision processing will inevitably increase error rates. We have addressed this extension in a forthcoming submission. When the first batch-normalization layer, applied directly to the network input, is followed by the optional ReLU then, unlike all other batch-normalization layers, we enable learning of the scale and offset factors. This first BN layer enables us to avoid doing any pre-processing on the inputs to the network, since the BN layer provides the necessary normalization. When the optional ReLU is included, we found that the learned offset ensures the input to the first ReLU is never negative. In accordance with our strategy of simplicity, all weight layers can be thought of as a block of three operations in the same sequence, as indicated in FIG2. Conceptually, batch-norm followed by ReLU can be thought of as a single layer consisting of a ReLU that adaptively changes its centre point and positive slope for each channel and relative to each mini-batch. We also precede the global average pooling layer by a BN layer, but do not use a ReLU at this point, since nonlinear activation is provided by the softmax layer. We found that including the ReLU leads to differences early in training but not by the completion of training. The standard Wide ResNet is specified as having a first convolutional layer that always has a constant number of output channels, even when the number of output channels for other layers increases. We found there is no need to impose this constraint, and instead always allow the first layer to share the same number of output channels as all blocks at the first spatial scale. The increase in the total number of parameters from doing this is small relative to the total number of parameters, since the number of input channels to the first layer is just 3. The benefit of this change is increased simplicity in the network definition, by ensuring one fewer change in the dimensionality of the residual pathway. We were interested in understanding whether the good results we achieved for single-bit weights were a consequence of the skip connections in residual networks.
We therefore applied our method to plain all-convolutional networks identical to our 4× residual networks, except with the skip connections removed. Initially, training indicated a much slower convergence, but we found that altering the initial weights' standard deviations to be proportional to 2 instead of √2 helped, so this was the only other change made. The change was also applied in Equation. As summarized in Figure 7, we found that convergence remained slower than for our ResNets, but there was only a small accuracy penalty in comparison with ResNets after 126 epochs of training. This is consistent with the findings of BID8, where only ResNets deeper than about 20 layers showed a significant advantage over plain all-convolutional networks. This experiment, and others we have done, support our view that our method is not particular to ResNets.

Figure 7: Residual networks compared with all-convolutional networks. The data in this figure is for networks with width 4×, i.e. with about 4.3 million learned parameters.
We train wide residual networks that can be immediately deployed using only a single bit for each convolutional weight, with signficantly better accuracy than past methods.
310
scitldr
This paper presents a system for immersive visualization of Non-Euclidean spaces using real-time ray tracing. It exploits the capabilities of the new generation of GPU’s based on the NVIDIA’s Turing architecture in order to develop new methods for intuitive exploration of landscapes featuring non-trivial geometry and topology in virtual reality.
Immersive Visualization of the Classical Non-Euclidean Spaces using Real-Time Ray Tracing.
311
scitldr
We propose the set autoencoder, a model for unsupervised representation learning for sets of elements. It is closely related to sequence-to-sequence models, which learn fixed-sized latent representations for sequences, and have been applied to a number of challenging supervised sequence tasks such as machine translation, as well as unsupervised representation learning for sequences. In contrast to sequences, sets are permutation invariant. The proposed set autoencoder considers this fact, both with respect to the input as well as the output of the model. On the input side, we adapt a recently-introduced recurrent neural architecture using a content-based attention mechanism. On the output side, we use a stable marriage algorithm to align predictions to labels in the learning phase. We train the model on synthetic data sets of point clouds and show that the learned representations change smoothly with translations in the inputs, preserve distances in the inputs, and that the set size is represented directly. We apply the model to supervised tasks on the point clouds using the fixed-size latent representation. For a number of difficult classification problems, the are better than those of a model that does not consider the permutation invariance. Especially for small training sets, the set-aware model benefits from unsupervised pretraining. Autoencoders are a class of machine learning models that have been used for various purposes such as dimensionality reduction, representation learning, or unsupervised pretraining (see, e.g., BID13 ; BID1 ; BID6 ; BID10). In a nutshell, autoencoders are feed-forward neural networks which encode the given data in a latent, fixed-size representation, and subsequently try to reconstruct the input data in their output variables using a decoder function. This basic mechanism of encoding and decoding is applicable to a wide variety of input distributions. Recently, researchers have proposed a sequence autoencoder BID5, a model that is able to handle sequences of inputs by using a recurrent encoder and decoder. Furthermore, there has been growing interest to tackle sets of elements with similar recurrent architectures BID21 ). In this paper, we propose the set autoencoder -a model that learns to embed a set of elements in a permutation-invariant, fixed-size representation using unlabeled training data only. The basic architecture of our model corresponds to that of current sequence-to-sequence models BID20 BID3 BID23: It consists of a recurrent encoder that takes a set of inputs and creates a fixed-length embedding, and a recurrent decoder that uses the fixedlength embedding and outputs another set. As encoder, we use an LSTM network with an attention mechanism as in BID21. This ensures that the embedding is permutation-invariant in the input. Since we want the loss of the model to be permutation-invariant in the decoder output, we re-order the output and align it to the input elements, using a stable matching algorithm that calculates a permutation matrix. This approach yields a loss which is differentiable with respect to the model's parameters. The proposed model can be trained in an unsupervised fashion, i.e., without having a labeled data set for a specific task. In a series of experiments, we analyze the properties of the embedding. For example, we show that the learned embedding is to some extent distance-preserving, i.e., the distance between two sets of elements correlates with the distances of their embeddings. 
Also, the embedding is smooth, i.e., small changes in the input set lead to small changes of the respective embedding. Furthermore, we show Figure 1: Example of a sequence-to-sequence translation model. The encoder receives the input characters ["g","o"]. Its internal state is passed to the decoder, which outputs the translation, i.e., the characters of the word "aller".that pretraining in an unsupervised fashion can help to increase the performance on supervised tasks when using the fixed-size embedding as input to a classification or regression model, especially if training data is limited. The rest of the paper is organized as follows. Section 2 introduces the preliminaries and briefly discusses related work. In Section 3, we present the details of the set autoencoder. Section 4 presents experimental setup and . We discuss the and conclude the paper in Section 5.2 RELATED WORK Sequence-to-sequence models have been applied very successfully in various domains such as automatic translation BID20, speech recognition BID3, or image captioning BID23. In all these domains, the task is to model P (Y |X), i.e., to predict an output sequence Y = (y 1, y 2, . . ., y m) given an input sequence X = (x 1, x 2, . . ., x n). Figure 1 shows the basic architecture of a sequence-to-sequence model. It consists of an encoder and a decoder, both of which are usually recurrent neural networks (RNNs). In the figure, the sequence-to-sequence model translates the input sequence X = (g, o) to the output sequence Y = (a, l, l, e, r). One by one, the elements of the input sequence are passed to the encoder as inputs. The encoder always updates its internal state given the previous state and the new input. Now, the last internal state of the encoder represents a fixed-size embedding of the input sequence (and is sometimes referred to as the thought vector). The decoder network's internal state is now initialized with the thought vector, and a special "start" token is passed as the input. One by one, the decoder will now output the tokens of the output sequence, each of which is used as input in the next decoder step. By calculating a loss on the output sequence, the complete sequence-to-sequence model can be trained using backpropagation. A special case of a sequence-to-sequence model is the sequence autoencoder BID5, where the task is to reconstruct the input in the output. For a more formal description of sequence-to-sequence models, please refer to BID20. Researchers have tackled sets of elements directly with neural networks, without using explicit but lossy set representations such as the popular bag-of-words-model BID12. Vinyals et al. raise the question of how the sequence-to-sequence architecture can be adapted to handle sets. They propose an encoder that achieves the required permutation-invariance to the input elements by using a content-based attention mechanism. Using a pointer network BID22 as decoder, the model can then be trained to sort input sets and outperforms a model without a permutation-invariant encoder. The proposed attention-based encoder has been used successfully in other tasks such as one-shot or few-shot learning BID3 ). Another approach BID18 tackles sets of fixed size by introducing a permutation-equivariant 1 layer in standard neural networks. For sets containing more than a few elements, the proposed layer helps to solve problems like point cloud classification, calculating sums of images depicting numbers, or set anomaly detection. 
The proposed models can fulfill complex supervised tasks and operate on sets of elements by exploiting permutation equi-or invariance. However, they do not make use of unlabeled data. The objective of the set autoencoder is very similar to that of the sequence autoencoder BID5: to create a fixed-size, permutation-invariant embedding for an input set X = {x 1, x 2, . . ., x n}, x i ∈ R d by using unsupervised learning, i.e., unlabeled data. The motivation is that unlabeled data is much easier to come by, and can be used to pretrain representations, which facilitate subsequent supervised learning on a specific task BID6. In contrast to the sequence autoencoder, the set autoencoder needs to be permutation invariant, both in the input and the output set. The first can be achieved directly by employing a recurrent encoder architecture using content-based attention similar to the one proposed by BID21 ) (see Section 3.1). Achieving permutation invariance in the output set is not straightforward. When training a set autoencoder, we need to provide the desired outputs Y in some order. By definition, all possible orders should be equally good, as long as all elements of the input set and no surplus elements are present. In theory, the order in which the outputs are presented to the model (or, more specifically: to the loss function) should be irrelevant: by using a chain rule-based model, the RNN can, in principle, model any joint probability distribution, and we could simply enlarge the data set by including all permutations. Since the number of permutations grows exponentially in the number of elements, this is not a feasible way: The data set quickly becomes huge, and the model has to learn to create every possible permutation of output sets. Therefore, we need to tackle the problem of random permutations in the outputs differently, while maintaining a differentiable architecture (see Section 3.2). The encoder takes the input set and embeds it into the fixed-sized latent representation. This representation should be permutation invariant to the order of the inputs. We use an architecture with content-based attention mechanism similar to the one proposed in BID21 ) (see FIG2): First, each element x i of the input set X is mapped to a memory slot m i ∈ R l using a mapping function f inp (Eq. 1). We use a linear mapping as f inp, the same for all i 3. Then, an LSTM network BID14 BID8 with l cells performs n steps of calculation. In each step, it calculates its new cell state c t ∈ R l and hidden state h t ∈ R l using the previous cell-and hidden state c t−1 and h t−1, as well as the previous read vector r t−1, all of which are initialized to zero in the first step. The new read vector r t is then calculated as weighted combination of all memory locations, using the attention mechanism (Eq. 5). For each memory location i, the attention mechanism calculates a scalar value e i,t which increases based on the similarity between the memory value m i and the hidden state h t, which is interpreted as a query to the memory (Eq. 3). We set f dot to be a dot product. Then, the normalized weighting a i for all memory locations is calculated using a softmax (Eq. 4). After n steps, the concatenation of c n,h n and r n can be seen as a fixed-size embedding of X 4. Note that permuting the elements of X has no effect on the embedding, since the memory locations m i are weighted by content, and the sum in Eq. 5 is commutative. 
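A compact sketch of the encoder loop described by Eqs. 1-5 is given below. The original implementation uses Tensorflow; this PyTorch-style version is only illustrative, processes a single set for clarity, and uses the dimension names from the text (d-dimensional elements, l LSTM cells). Everything else is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SetEncoder(nn.Module):
    """Attention-based set encoder sketch: each element is mapped linearly to a memory slot;
    an LSTM then runs n steps, each time querying the memory with its hidden state via a
    dot-product attention and reading a convex combination of the slots."""
    def __init__(self, d, l):
        super().__init__()
        self.f_inp = nn.Linear(d, l, bias=False)   # Eq. 1: m_i = f_inp(x_i)
        self.cell = nn.LSTMCell(input_size=l, hidden_size=l)

    def forward(self, x):                          # x: (n, d), one input set
        m = self.f_inp(x)                          # memory slots, (n, l)
        n, l = m.shape
        h = c = r = m.new_zeros(l)
        for _ in range(n):
            h, c = self.cell(r.unsqueeze(0), (h.unsqueeze(0), c.unsqueeze(0)))
            h, c = h.squeeze(0), c.squeeze(0)
            e = m @ h                              # Eq. 3: similarity of each slot to the query h
            a = F.softmax(e, dim=0)                # Eq. 4: attention weights
            r = a @ m                              # Eq. 5: read vector (order-independent sum)
        return torch.cat([c, h, r])                # fixed-size, permutation-invariant embedding
```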
DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 Section 3.1 defined the forward propagation from the input set X to the fixed-size embedding [c t, h t, r t]. We now define the output of the set autoencoder that allows us to train the model using a loss function L. Like in the original sequence-to-sequence model, the core of the decoder is an LSTM network (see Figure 3): DISPLAYFORM0 DISPLAYFORM1 In each step, the decoder LSTM calculates its internal cell stateĉ t and its hidden stateĥ t (Eq. 6). The cell-and hidden state are initialized with the cell-and hidden state from the embedding, produced by the encoder (Eq. 7 and Eq. 8). In the first step, the decoder also gets an additional inputr 0, which is set to be the last read vector of the encoder (Eq. 9). In all following steps,r t is a vector of all zeros. We calculate the decoder's output o t at step t by using the linear function f out (Eq. 10). Each output element o t is of the same dimensionality as the input elements x i. The underlying idea is that f out is the "reverse" operation to f inp, such that encoder and decoder LSTM can operate on similar representations. Furthermore, in each step, the function f eos calculates ω t (Eq. 11), which we interpret as the probability that o t is an element of the set. In other words, if ω t = 0, we can stop sampling. In principle, we could use the LSTM's output sequence O = (o 1, o 2, . . ., o m) directly as elements of the output set. However, this does not take into account the following issues: First, the number m of outputs produced by the decoder should be equal to the size n of the input set X. This can be achieved by learning to output the correct ω t (see Eq. 12 below). Second, the order of the set elements should be irrelevant, i.e., the loss function should be permutation-invariant in the outputs. To address the latter point, we introduce an additional mapping layer D = (d 0, d 1, . . ., d n) between the decoder output and the loss function. The mapping rearranges the first n decoder outputs in the order of the inputs X. That is, it should make sure that the distance between d i and x i is small for all i. The mapping is defined as: d i = n j=1 o j w ij Here, w ij are the elements of a permutation matrix W of size n × n with the properties DISPLAYFORM0 In other words, each output o i is mapped to exactly one d j, and vice versa. For now, we assume that W has been parametrized such that the re-ordered elements in D match the elements in input set X well (see Section 3.3). Then, the set autoencoder loss function can be calculated as DISPLAYFORM1 The function L(x i, d i) is small if x i and d i are similar, and large if x i and d i are dissimilar. In other words, for each element in the input set, the distance to a matching output element (as mapped by W) will be decreased by minimizing L. For discrete elements, L can be calculated as the cross entropy loss L(x, d) = − i x i log d i. For elements that are vectors of real numbers, L is a norm of these vectors, e.g., L(x, d) = ||x − d||. The function L eos calculates the cross-entropy loss between ω i and ω * t, where ω * i indicates if an i th element should be present in the output, i.e., ω * i = 1 if i <= n, 0 else (recall that the decoder can produce m outputs, and m is not necessarily equal to n). Since the whole set autoencoder is differentiable, we train all weights except W using gradient descent. Re-ordering the decoder outputs resembles a point cloud correspondence problem from the domain of computer vision BID15 BID19. 
Methods like the iterative closest points algorithm Figure 4: Examples of sets in shapes data set BID2 find closest points between two sets, and find a transformation that aligns the second set to the first. Since we are only interested in the correspondence step, we notice its similarity to the stable marriage problem BID11: We want to find matching pairs P i = {man i, woman i} of two sets of men and women, such that there are no two pairs P i, P j where element man i prefers woman j over woman i, and, at the same time, woman j prefers man i over man j.5 To solve this problem greedily, we can use the Gale-Shapely algorithm BID7, which has a run time complexity of O(n 2) (see Algorithm 1) 6. Since its solution is permutation invariant in the set that proposes first BID11 )(p. 10), we consider the input elements x i to be the men, and let them propose first. After the stable marriage step, w ij = 1 if x i is "engaged" to o j. We use a number of synthetic data sets of point clouds for the unsupervised experiments. Each data set consists of sets of up to k items with d dimensions. In the random data sets, the k values of each element are randomly drawn from a uniform distribution between -0.5 and 0.5. In the following experiments, we set k = 16 and d ∈ {1, 2, 3}. In other words, we consider sets of up to 16 elements that are randomly distributed along a zero-centered 1d-line, 2d-plane, or 3-d cube with side length 1. We choose random distributions to evaluate the architecture's capability of reconstructing elements from sets, rather than learning common structure within those sets. In the shapes data set, we create point clouds with up to k = 32 elements of d = 2 dimensions. In each set, the points form either a circle, a square, or a cross (see Figure 4). The shapes can occur in different positions and vary in size. To convey enough information, each set consists of at least 10 points. Each data set contains 500k examples in the training set, 100k examples in the validation set, and another 500k examples in the test set. For each of the data sets we train a set autoencoder to minimize the reconstruction error of the sets, i.e., to minimize Eq. 12 (please refer to the appendix for details of the training procedure, including all hyperparameters). Figure 5 shows the mean euclidean distance of the reconstructed elements for the three random data sets (left-hand side) and the shapes data set (right-hand side), for different set sizes. For the random data sets, the mean error increases with the number of dimensions d of the elements, and with the number of elements within a set. This is to be expected, since all values are completely uncorrelated. For the shapes data set, the average error is lower than the errors for the 2d-random data set with more than two elements. Furthermore, the error decreases with the number of elements in the set. We hypothesize that with a growing number of points, the points become more evenly distributed along the outlines of the shapes, and it is therefore easier for the model to reconstruct them (visual inspection of the reconstructions suggests that the model tends to distribute points more uniformly on the shapes' outlines). We now take a closer look at the embeddings of the set autoencoder (i.e., the vector [c t, h t, r t]) for random sets. Some of the embedding variables have a very strong correlation with the set size (Pearson correlation coefficients of > 0.97 and <-0.985, respectively). 
In other words, the size of the set is encoded almost explicitly in the embedding. The embeddings seem to be reasonably smooth (see FIG4). We take a set with a single 2d-element and calculate its embeddings (leftmost points in FIG4). We then move this element smoothly in the 2d-plane and observe the ing embeddings. Most of the time, the embeddings change smoothly as well. The discontinuities occur when the element crosses the center region of the 2d plane. Apart from this center region, the embedding preserves a notion of distance of two item sets. This becomes apparent when looking at the correlations between the euclidean distances of two sets X 1 and X 2 and their corresponding embeddings enc(X 1) and enc(X 2). 7 The correlation coefficients for random sets of size one to four are 0.81, 0.71, 0.67, and 0.64. In other words, similar sets tend to yield similar embeddings. Vinyals et al. show that order matters for sequence-to-sequence and set-to-set models. This is the case both for the order of input sequences -specific orders improve the performance of the model's taskas well as for the order of the output sets, i.e., the order in which the elements of the set are processed to calculate the loss function. Recall that the proposed set autoencoder is invariant to the order of the elements both in the input set (using the attention mechanism) and the target set (by reordering the outputs before calculating the loss). Nevertheless, we observe that the decoder learns to output elements in a particular order: We now consider sets with exactly 8 random 2-dimensional elements. We use a pretrained set autoencoder from above, encode over 6,000 sets, and subsequently reconstruct the sets using the decoder. Figure 7 shows heat maps of the 2d-coordinates of the i'th reconstructed element. The first reconstructed element tends to be in the center right area. Accordingly, the second element tends to be in the lower-right region, the third element in the center-bottom region, and so on. The decoder therefore has learned to output the elements within a set in a particular order. Note that most of the distributions put some mass in every area, since the decoder must be able to reproduce sets where the items are not distributed equally in all areas (e.g., in a set where all elements are in the top right corner, the first element must be in the top right corner as well). Figure 8 shows the effect of the set size n on the distribution of the first element. If there is only one element (n = 1), it can be anywhere in the 2d plane. The more elements there are, the more likely it is that at least one of them will be in the center right region, so the distribution of the first element gets more peaked. We speculate that the roughly circular arrangement of the element distributions (which occurs for other set sizes as well) could be an implicit of using the cosine similarity f dot in the attention mechanism of the encoder. Also, this behavior is likely to be the reason for the discontinuity in FIG4 around. 7 We align the elements of X 1 and X 2 using the Gale-Shapely algorithm before calculating distances. DISPLAYFORM0 Figure 7: Heat maps of the location of the ith element in sets of size 8 with 2d-elements (decoder output). Darker shadings indicate more points at these coordinates. n=1 n=6 n=11 n=16Figure 8: Heat maps of the location of the first element in sets of various sizes n with 2d-elements (decoder output). Darker shadings indicate more points at these coordinates. 
We derive a number of classification and regression tasks based on the data sets in Section 4.1. On the random data sets, we define a number of binary classification tasks. The 1d-, 2d-or 3d-area is partitioned into two, four, or eight areas of equal sizes. Then, two classes of sets are defined: A set is of class 1, if all of its elements are within i of the j defined areas. All other sets are of class two. For example, if d = 2, i = 2, and j = 4, a set is of class 1 if all its elements are in the top left or bottom left area, or any other combination of two areas. 8 Furthermore, we define two regression tasks on the random data sets: The target for the first one is the maximum distance between any two elements in the set, the second one is the volume of the d-dimensional bounding box of all elements. On the shapes data set, the three-class classification problem is to infer the prototypical shape represented by the elements in the set. In the following, we use a set autoencoder as defined above, add a standard two-layer neural network f supervised (enc(X)) on top of the embedding, and use an appropriate loss function for the task (for implementation details see supplementary material). We compare the of the set autoencoder (referred to as set-AE) to those of two vanilla sequence autoencoders, which ignore the fact that the elements form a set. The first autoencoder (seq-AE) gets the same input as the set model, the input for the second autoencoder has been ordered along the first element dimensions (seq-AE (ordered)). Furthermore, we consider three training fashions: For direct training, we train the respective model in a purely supervised fashion on the available training data (in other words: the models are not trained as autoencoders). In the pretrained fashion (pre), we initialize the encoder weights using unsupervised pretraining on the respective (unlabeled) 500k training set, and subsequently train the parameters of f supervised, holding the encoder weights fixed. In the fine tuning setting (fine), we continue training from the pretrained model, and fine-tune all weights. Tables 1(a) and (b) show the accuracy on the test set for the i-of-j-areas classification tasks, for 1,000 and 10,000 training examples. The plain sequence autoencoder is only competitive for the most simple task (first row). For the simple and moderately complex tasks (first three rows), the ordered sequence autoencoder and the set autoencoder reach high accuracies close to 100%, both Table 3: Accuracy for object shape classification task, higher is better. Averaged over 10 runs.for the small and the large training set. When the task gets more difficult (higher i, j, or d), the set autoencoder clearly outperforms both other models. For the small training set, the pre and fine training modes of the set autoencoder usually lead to better than direct training. In other words, the unsupervised pretraining of the encoder weights leads to a representation which can be used to master the classification tasks with a low number of labeled examples. For the larger training set, unsupervised pretraining is still very useful for the more complicated classification tasks. On the other hand, unsupervised pretraining only helps in a few rare cases if the elements are treated as a sequence -the representation learned by the sequence autoencoders does not seem to be useful for the particular classification tasks. The or the regression task are shown in Tables 2 (a) and (b). 
Again, the ordered sequence autoencoder shows good results for small d (recall that the first element dimension is the one that has been ordered), but fails to compete with the set-aware model in the higher dimensions. However, unsupervised pretraining helps the set model in the regression task only for small d. Tables 3 (a) and (b) show the results for the shapes classification task. Here, the ordered sequence autoencoder with fine tuning clearly dominates both other models. The set model is unable to capitalize on the proper handling of permutation invariance. In sum, the results show that unsupervised pretraining of the set autoencoder creates representations that can be useful for subsequent supervised tasks. This is primarily the case if the supervised task requires knowledge of the individual locations of the elements, as in the i-of-j-areas classification task. If the precise locations of a subset of the elements are required (as in the bounding box or maximum distance regression tasks), direct training yields better results. We hypothesize that the failure of the set-aware model on the shapes classification is due to the linear mapping functions f inp and f out: they might be too simple to capture the strong but non-linear structures in the data. We presented the set autoencoder, a model that can be trained to reconstruct sets of elements using a fixed-size latent representation. The model achieves permutation invariance in the inputs by using a content-based attention mechanism, and permutation invariance in the outputs by reordering the outputs using a stable marriage algorithm during training. The fixed-size representation possesses a number of interesting attributes, such as distance preservation. We show that, despite the output permutation invariance, the model learns to output elements in a particular order. A series of experiments show that the set autoencoder learns representations that can be useful for tasks that require information about each set element, especially if the tasks are more difficult, and few labeled training examples are present. There are a number of directions for future research. The most obvious is to use non-linear functions for f inp and f out to enable the set autoencoder to capture non-linear structures in the input set, and to test the performance on point clouds of 3d data sets such as ShapeNet BID4. Also, changes to the structure of the encoder/decoder (e.g., which variables are interpreted as query or embedding) and alternative methods for aligning the decoder outputs to the inputs can be investigated. Furthermore, more research is necessary to get a better understanding of the tasks for which the permutation invariance property is helpful and unsupervised pretraining can be advantageous. We used Tensorflow BID0 to implement all models. For the implementation and experiments, we made the following design choices: Model Architecture: • Both the encoder and the decoder LSTMs have peephole connections BID8. We use the LSTM implementation of Tensorflow. • The input mapping f inp and output mapping f out functions are simple linear functions. Note that we cannot re-use f inp on the decoder side to transform the supervised labels in a "backwards" fashion: in this case, learning could parametrize f inp such that all set elements are mapped to the same value, and the decoder learns to output this element only. • For the supervised experiments (classification and regression), we add a simple two-layer neural network f supervised on top of the embedding.
The hidden layer of this network has the same number of units as the embedding, and uses ReLu activations BID17. For the two-class problems (i of j areas), we use a single output neuron and a cross-entropy loss, for the multi-class problems (object shapes) we use three output neurons and a cross-entropy loss. For the regression problems (bounding box and maximum distance), we optimize the mean squared error.• All parameters are initialized using Xavier initialization BID9 • Batch handling: For efficiency, we use minibatches. Therefore, within a batch, there can be different set sizes n i for each example i in the set. For simplicity, the encoder keeps processing all sets in a batch, i.e., it always performs n = k steps, where k is the maximum set size. Preliminary experiments showed only minor variations in the performance when processing is stopped after n i steps, where n i corresponds to the actual size of set i in the minibatch. • The number l of LSTM cells is automatically determined by the dimensionality d and maximum set size k of the input. We set l = k * d, therefore c t, h t,ĉ t,ĥ t ∈ R l. As a consequence, the embedding for all models (set-and sequence autoencoder) could, in principle, comprise the complete information of the set (recall that the goal was not to find the most compact or efficient embedding) • For simplicity, the dimensionality of each the memory cell m i and the read vector r i is equal to the number of LSTM cells, i.e., m i, r i ∈ R l (in principle, the memory could have any other dimensionality, but this would require an additional mapping step, since the query needs to be of the same dimensionality as the memory). We use Adam BID16 to optimize all parameters. We keep Adam's hyperparameters (except for the learning rate) at Tensorflow's default values (β 1 = 0.9, β 2 = 0.999, = 1e − 08). We use minibatch training with a batch size of 100. We keep track of the optimization objective during Table 4: Training hyperparameters.training and reduce the learning rate by 33% / stop training once there has been no improvement for a defined number of epochs, depending on the training mode (see Table 4). For the classification tasks, we couple the learning rate decrease/early stopping mechanism to the missclassification error (1 − accuracy) rather than the loss function. • The values of stalled-epochs-before-X are much higher for the supervised learning scenarios, since the training sets are much smaller (e.g., when using 1,000 examples and a batch size of 100, a single epoch only in 10 gradient update steps).• It is possible that the for supervised training with fine tuning improve if the encoder weights are regularized as well (the weights are prone to overfitting, since we use a low number of training examples).
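To make the output-alignment step of Section 3.3 concrete, here is an illustrative sketch of the Gale-Shapley matching between input elements and decoder outputs, with the inputs proposing first and preferences given by Euclidean distance. It is not the authors' implementation; function and variable names are our own.

```python
import numpy as np

def align_outputs(x, o):
    """Stable-marriage alignment of decoder outputs o (n, d) to inputs x (n, d).
    Returns a permutation matrix W with W[i, j] = 1 if input x_i is matched to output o_j."""
    n = len(x)
    dist = np.linalg.norm(x[:, None, :] - o[None, :, :], axis=-1)   # (n, n) pairwise distances
    pref = np.argsort(dist, axis=1)            # each input ranks outputs, closest first
    next_choice = np.zeros(n, dtype=int)       # next output each input will propose to
    match_of_output = -np.ones(n, dtype=int)   # output j's current partner, -1 if free
    free_inputs = list(range(n))
    while free_inputs:
        i = free_inputs.pop()
        j = pref[i, next_choice[i]]
        next_choice[i] += 1
        k = match_of_output[j]
        if k == -1:
            match_of_output[j] = i             # output j was free: accept
        elif dist[i, j] < dist[k, j]:          # output j prefers the closer input
            match_of_output[j] = i
            free_inputs.append(k)
        else:
            free_inputs.append(i)              # rejected: i proposes again later
    W = np.zeros((n, n))
    W[match_of_output, np.arange(n)] = 1.0
    return W
```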
We propose the set autoencoder, a model for unsupervised representation learning for sets of elements.
312
scitldr
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a considerable amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires considerable human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and backward policy, with the backward policy resetting the environment for a subsequent attempt. By learning a value function for the backward policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the backward policy can greatly reduce the number of manual resets required to learn a task and can reduce the number of unsafe actions that lead to non-reversible states. Deep reinforcement learning (RL) algorithms have the potential to automate acquisition of complex behaviors in a variety of real-world settings. Recent have shown success on games BID16 ), locomotion BID22 ), and a variety of robotic manipulation skills;; BID8 ). However, the complexity of tasks achieved with deep RL in simulation still exceeds the complexity of the tasks learned in the real world. Why have real-world lagged behind the simulated accomplishments of deep RL algorithms?One challenge with real-world application of deep RL is the scaffolding required for learning: a bad policy can easily put the system into an unrecoverable state from which no further learning is possible. For example, an autonomous car might collide at high speed, and a robot learning to clean glasses might break them. Even in cases where failures are not catastrophic, some degree of human intervention is often required to reset the environment between attempts (e.g., BID2).Most RL algorithms require sampling from the initial state distribution at the start of each episode. On real-world tasks, this operation often corresponds to a manual reset of the environment after every episode, an expensive solution for complex environments. Even when tasks are designed so that these resets are easy (e.g., and BID8), manual resets are necessary when the robot or environment breaks (e.g., BID7). The bottleneck for learning many real-world tasks is not that the agent collects data too slowly, but rather that data collection stops entirely when the agent is waiting for a manual reset. To avoid manual resets caused by the environment breaking, task designers often add negative rewards to dangerous states and intervene to prevent agents from taking dangerous actions. While this works well for simple tasks, scaling to more complex environments requires writing large numbers of rules for types of actions the robot should avoid. For example, a robot should avoid hitting itself, except when clapping. One interpretation of our method is as automatically learning these safety rules. Decreasing the number of manual resets required to learn to a task is important for scaling up RL experiments outside simulation, allowing researchers to run longer experiments on more agents for more hours. We propose to address these challenges by forcing our agent to "leave no trace." 
The goal is to learn not only how to do the task at hand, but also how to undo it. The intuition is that the sequences of actions that are reversible are safe; it is always possible to undo them to get back to the original state. This property is also desirable for continual learning of agents, as it removes the requirements for manual resets. In this work, we learn two policies that alternate between attempting the task and resetting the environment. By learning how to reset the environment at the end of each episode, the agent we learn requires significantly fewer manual resets. Critically, our value-based reset policy restricts the agent to only visit states from which it can return, intervening to prevent the forward policy from taking potentially irreversible actions. Using the reset policy to regularize the forward policy encodes the assumption that whether our learned reset policy can reset is a good proxy for whether any reset policy can reset. The algorithm we propose can be applied to both deterministic and stochastic MDPs. For stochastic MDPs we say that an action is reversible if the probability that an oracle reset policy can successfully reset from the next state is greater than some safety threshold. The set of states from which the agent knows how to return grows over time, allowing the agent to explore more parts of the environment as soon as it is safe to do so. The main contribution of our work is a framework for continually and jointly learning a reset policy in concert with a forward task policy. We show that this reset policy not only automates resetting the environment between episodes, but also helps ensure safety by reducing how frequently the forward policy enters unrecoverable states. Incorporating uncertainty into the value functions of both the forward and reset policy further allows us to make this process risk-aware, balancing exploration against safety. Our experiments illustrate that this approach reduces the number of "hard" manual resets required during learning of a variety of simulated robotic skills. Our method builds off previous work in areas of safe exploration, multiple policies, and automatic curriculum generation. Previous work has examined safe exploration in small MDPs. Moldovan & Abbeel (2012a) examine risk-sensitive objectives for MDPs, and propose a new objective of which minmax and expectation optimization are both special cases. Moldovan & Abbeel (2012b) consider safety using ergodicity, where an action is safe if it is still possible to reach every other state after having taken that action. These methods are limited to small, discrete MDPs where exact planning is straightforward. Our work includes a similar notion of safety, but can be applied to solve complex, high-dimensional tasks. Thomas et al. (2015a; b) prove high confidence bounds for off policy evaluation and policy improvement. While these works look at safety as guaranteeing some reward, our work defines safety as guaranteeing that an agent can reset. Previous work has also used multiple policies for safety and for learning complex tasks. BID9 learn a sequence of forward and reset policies to complete a complex manipulation task. Similar to BID9, our work learns a reset policy to undo the actions of the forward policy. While BID9 engage the reset policy when the forward policy fails, we preemptively predict whether the forward policy will fail, and engage the reset policy before allowing the forward policy to fail. 
Similar to our approach, BID21 also propose to use a safety policy that can trigger an "abort" to prevent a dangerous situation. However, in contrast to our approach, BID21 use a heuristic, hand-engineered reset policy, while our reset policy is learned simultaneously with the forward policy. BID11 uses uncertainty estimation via bootstrap to provide for safety. Our approach also uses bootstrap for uncertainty estimation, but unlike our method, BID11 does not learn a reset or safety policy. Learning a reset policy is related to curriculum generation: the reset controller is engaged in increasingly distant states, naturally providing a curriculum for the reset policy. Prior methods have studied curriculum generation by maintaining a separate goal-setting policy or network BID25 BID14. In contrast to these methods, we do not set explicit goals, but only allow the reset policy to abort an episode. When learning the forward and reset policies jointly, the training dynamics of our reset policy resemble those of reverse curriculum generation, but in reverse. In particular, reverse curriculum learning can be viewed as a special case of our method: our reset policy is analogous to the learner in the reverse curriculum, while the forward policy plays a role similar to the initial state selector. However, reverse curriculum generation requires that the agent can be reset to any state (e.g., in a simulator), while our method is specifically aimed at streamlining real-world learning, through the use of uncertainty estimation and early aborts. In this section, we discuss the episodic RL problem setup, which motivates our proposed joint learning of forward and reset policies. RL considers decision-making problems that consist of a state space S, action space A, transition dynamics P(s' | s, a), an initial state distribution p_0(s), and a scalar reward function r(s, a). In episodic, finite-horizon tasks, the objective is to find the optimal policy π*(a | s) that maximizes the expected sum of γ-discounted returns, E_π[Σ_{t=0}^{T} γ^t r(s_t, a_t)], where the expectation is taken over trajectories generated by π. Typical RL training routines involve iteratively sampling new episodes; at the end of each episode, a new starting state s_0 is sampled from a given initial state distribution p_0. In practical applications, such as robotics, this procedure involves a hard-coded reset policy or a human intervention to manually reset the agent. Our work is aimed at avoiding these manual resets by learning an additional reset policy that satisfies the following property: when the reset policy is executed from any state reached by the forward policy, the distribution over final states is close to the initial state distribution p_0. If we learn such a reset policy, then the agent never requires querying the black-box distribution p_0 and can continually learn on its own. Our method for continual learning relies on jointly learning a forward policy and reset policy, using early aborts to avoid manual resets. The forward policy aims to maximize the task reward, while the reset policy takes actions to reset the environment. Both have the same state and action spaces, but are given different reward objectives. The forward policy reward r_f(s, a) is the usual task reward given by the environment. The reset policy reward r_r(s) is designed to approximate the initial state distribution. In practice, we found that a very simple design worked well for our experiments. We used the negative distance to some start state, plus any reward shaping included in the forward reward. 
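The reset reward described above is simple enough to sketch in a few lines. The snippet below is a minimal illustration, not the authors' code: the start state, the Euclidean distance metric, and the optional reuse of the forward shaping term are all assumptions made for the example.

```python
import numpy as np

def make_reset_reward(start_state, forward_shaping=None, scale=1.0):
    """Return a reset reward r_r(s): negative distance to a nominal start state.

    start_state: np.ndarray, the state the reset policy should return to.
    forward_shaping: optional callable s -> float, a shaping term reused from
        the forward reward (an assumption for this sketch).
    scale: weight on the distance term.
    """
    def reset_reward(state):
        # Negative Euclidean distance to the start state ...
        reward = -scale * np.linalg.norm(state - start_state)
        # ... plus any shaping already present in the forward reward.
        if forward_shaping is not None:
            reward += forward_shaping(state)
        return reward
    return reset_reward

# Example usage with a toy 2-D state space.
reset_reward = make_reset_reward(start_state=np.zeros(2))
print(reset_reward(np.array([0.3, -0.4])))  # prints -0.5
```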
To make this set-up applicable for solving the task, we make two assumptions on the task environment. First, we make the weak assumption that there exists a policy that can reset from at least one of the reachable states with maximum reward in the environment. This assumption ensures that it is possible to solve the task without any manual resets. Many manipulation and locomotion tasks in robotics satisfy this assumption. As a counterexample, the Atari game Ms. Pacman violates this assumption because transitioning from one level to the next level is not reversible; the agent cannot transition from level 3 back to level 1. Second, we assume that the initial state distribution is unimodal and has narrow support. This assumption ensures that the distribution over the reset policy's final state is close to the initial state distribution p_0. If the initial state distribution were multi-modal, the reset policy might only learn to return to one of these modes. Detecting whether an environment violates this second assumption is straightforward. A mismatch between p_0 and the reset policy's final state distribution will cause the forward policy to earn a small reward when the initial state is sampled from p_0 and a larger reward when the initial state is the final state of the reset policy. We choose off-policy actor-critic as the base RL algorithm BID24 BID13, since its off-policy learning allows sharing of the experience between the forward and reset policies. Additionally, the Q-functions can be used to signal early aborts. Our method can also be used directly with any other Q-learning method (BID16; BID8; BID0; BID15). The reset policy learns how to transition from the forward policy's final state back to an initial state. In challenging domains where the reset policy is unable to reset from some states or would take prohibitively long to reset, a costly manual reset is required. The reset policy offers a natural mechanism for reducing these manual resets. We observe that, for states from which we cannot quickly reset, the value function of the reset policy will be low. We can therefore use this value function (or, specifically, its Q-function) as a metric to determine when to terminate the forward policy, performing an early abort. Before an action proposed by the forward policy is executed in the environment, it must be "approved" by the reset policy. In particular, if the reset policy's Q-value for the proposed action is too small, then an early abort is performed: the proposed action is not taken and the reset policy takes control. Formally, early aborts restrict exploration to a 'safe' subspace of the MDP. Let E ⊆ S × A be the set of (possibly stochastic) transitions, and let Q_reset(s, a) be the Q-value of our reset policy at state s taking action a. The subset of transitions E* ⊆ E allowed by our algorithm is E* = {(s, a) ∈ E : Q_reset(s, a) > Q_min}. Noting that V(s) = max_{a∈A} Q(s, a), we see that given access to the true Q-values, Leave No Trace only visits safe states, i.e., states in S* = {s ∈ S : V_reset(s) > Q_min}. In Appendix A, we prove that if we learn the true Q-values for the reset policy, then early aborts restrict the forward policy to visiting states that are safe in expectation at convergence. Early aborts can be interpreted as a learned, dynamic safety constraint, and a viable alternative for the manual constraints that are typically used for real-world RL experiments. Early aborts promote safety by preventing the agent from taking actions from which it cannot recover. 
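To make the "approval" step concrete, here is a minimal sketch of the early abort rule. The `q_reset` function, the policies, and the environment interface are placeholders for this illustration; in the method itself the reset Q-function is learned, while here it is simply passed in as an argument.

```python
def approve_action(state, action, q_reset, q_min):
    """Early abort rule: the forward action is executed only if the reset
    policy's Q-value for (state, action) stays above the threshold q_min."""
    return q_reset(state, action) > q_min

def forward_step(state, forward_policy, reset_policy, q_reset, q_min, env):
    """One decision of the forward agent with the reset agent as a safety check.
    All arguments are assumed interfaces, not the authors' actual API.
    Returns the environment transition and a flag indicating an early abort."""
    action = forward_policy(state)
    if approve_action(state, action, q_reset, q_min):
        return env.step(action), False   # proceed with the forward policy
    # Early abort: the proposed action is discarded and the reset policy acts.
    return env.step(reset_policy(state)), True
```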
These aborts are dynamic because the states at which they occur change throughout training as more states are considered safe. Early aborts can make learning the forward policy easier by preventing the agent from entering unsafe states. We experimentally analyze early aborts in Section 6.3, and discuss how our approach handles over/under-estimates of Q-values in Appendix B. A hard reset is an action that resamples the state from the initial state distribution. Hard resets are available to an external agent (e.g., a human) but not the learned agent. Early aborts decrease the requirement for "hard" resets, but do not eliminate them, since an imperfect reset policy might still miss a dangerous state early in the training process. It is challenging to identify whether any policy can reset from the current state. Formally, we define a set of states S_reset that give a reward greater than r_min to the reset policy: S_reset = {s ∈ S : r_r(s) > r_min}. We say that we are in an irreversible state if we have not visited a state in S_reset within the past N episodes, where N is a hyperparameter. This is a necessary but not sufficient condition, as the reset policy may have not yet learned to reset from a safe state. Increasing N decreases the number of hard resets. However, when we are in an irreversible state, increasing N means that we remain in that state (learning nothing) for more episodes. Section 6.4 empirically examines this trade-off. In practice, the setting of this parameter should depend on the cost of hard resets. Our full algorithm (Algorithm 1) consists of alternately running a forward policy and reset policy. When running the forward policy, we perform an early abort if the Q-value for the reset policy is less than Q_min. Only if the reset policy fails to reset after N episodes do we do a manual reset. [Algorithm 1 listing (recovered fragments): step the forward agent, taking actions a, observing (s, r) from ENVIRONMENT.STEP(a), and updating the forward policy; then, for at most the maximum steps per episode, step the reset agent and update the reset policy; let S^N_reset be the final states from the last N reset episodes, and call ENVIRONMENT.RESET (a hard reset) only if none of them lies in S_reset.] The accuracy of the Q-value estimates directly affects task reward and indirectly affects safety (through early aborts). Our Q-values may not be good estimates of the true value function for previously-unseen states. To address this, we train Q-functions for both the forward and reset policies that provide uncertainty estimates. Several prior works have explored how uncertainty estimates can be obtained in such settings BID6 BID19. In our method, we train an ensemble of Q-functions, each with a different random initialization. This technique has been established in the literature as a principled way to provide a distribution over Q-values at each state given the observed data BID19 BID3. Given this distribution over Q-values, we can propose three strategies for early aborts: Optimistic Aborts: Perform an early abort only if all the Q-values are less than Q_min. Equivalently, do an early abort if max_θ Q^θ_reset(s, a) < Q_min. Realist Aborts: Perform an early abort if the mean Q-value is less than Q_min. Pessimistic Aborts: Perform an early abort if any of the Q-values are less than Q_min. Equivalently, do an early abort if min_θ Q^θ_reset(s, a) < Q_min. We expect that optimistic aborts will provide better exploration at the cost of more hard resets, while pessimistic aborts should decrease hard resets, but may be unable to effectively explore. We empirically test this hypothesis in Appendix C. 
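Putting these pieces together, the sketch below loosely mirrors the alternation of Algorithm 1 and the three ensemble abort rules. The environment and agent interfaces (`env.step`, `agent.act`, `agent.update`, the list of reset Q-functions) and the use of the final reset reward as a proxy for membership in S_reset are all simplifying assumptions made for illustration, not the original implementation.

```python
import numpy as np

def ensemble_abort(q_values, q_min, mode="optimistic"):
    """Early-abort test over an ensemble of reset Q-value estimates.
    optimistic:  abort only if all members fall below q_min (max < q_min).
    realist:     abort if the mean falls below q_min.
    pessimistic: abort if any member falls below q_min (min < q_min)."""
    q = np.asarray(q_values, dtype=float)
    if mode == "optimistic":
        return q.max() < q_min
    if mode == "realist":
        return q.mean() < q_min
    return q.min() < q_min  # pessimistic

def leave_no_trace_episode(env, forward_agent, reset_agent, q_reset_fns,
                           q_min, r_min, recent_reset_rewards,
                           n_attempts=1, max_steps=100, mode="optimistic"):
    """One forward episode followed by one reset episode (a rough sketch of
    the alternation described for Algorithm 1; interfaces are assumptions)."""
    s = env.current_state()
    for _ in range(max_steps):                    # forward phase
        a = forward_agent.act(s)
        q_vals = [q(s, a) for q in q_reset_fns]
        if ensemble_abort(q_vals, q_min, mode):   # early abort: reset takes over
            break
        s_next, r = env.step(a)
        forward_agent.update(s, a, r, s_next)
        s = s_next
    best_reset_reward = float("-inf")
    for _ in range(max_steps):                    # reset phase
        a = reset_agent.act(s)
        s_next, r = env.step(a)
        reset_agent.update(s, a, r, s_next)
        s = s_next
        best_reset_reward = max(best_reset_reward, r)
    # Record whether this reset episode ever reached a reward above r_min
    # (a simplified stand-in for "visited a state in S_reset").
    recent_reset_rewards.append(best_reset_reward)
    if len(recent_reset_rewards) >= n_attempts and all(
            rr <= r_min for rr in recent_reset_rewards[-n_attempts:]):
        env.hard_reset()                          # costly manual reset
        recent_reset_rewards.clear()
```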
We first present a small didactic example to illustrate how our forward and reset policies interact and how cautious exploration reduces the number of hard resets. We begin with the gridworld in FIG0. The states with red borders are absorbing, meaning that the agent cannot leave them and must use a hard reset. The agent receives a reward of 1 for reaching the goal state, and 0 otherwise. States are colored based on the number of early aborts triggered in each state. Note that most aborts occur next to the initial state, when the forward policy attempts to enter the absorbing state South-East of the start state, but is blocked by the reset policy. In FIG1, we present a harder environment, where the task can be successfully completed by reaching one of the two goals, exactly one of which is reversible. The forward policy has no preference for which goal is better, but the reset policy successfully prevents the forward policy from entering the absorbing goal state, as indicated by the much larger early abort count in the blue-colored state next to the absorbing goal. [Figure caption fragment: increasing the early abort threshold Q_min reduced hard resets by 78% without increasing the number of steps to solve the task.] In a real-world setting, this might produce a substantial gain in efficiency, as time spent waiting for a hard reset could be better spent collecting more experience. Thus, for some real-world experiments, increasing Q_min can decrease training time even if it requires more steps to learn. [Environment illustrations: Pusher, Cliff Cheetah, Cliff Walker, Peg Insertion.] In this section, we use the five complex, continuous control environments shown above to answer questions about our approach. While ball in cup and peg insertion are completely reversible, the other environments are not: the pusher can knock the puck outside its workspace and the cheetah and walker can jump off a cliff. Crucially, reaching the goal states or these irreversible states does not terminate the episode, so the agent remains in the irreversible state until it calls for a hard reset. To ensure fair evaluation of all approaches, we use a different procedure for evaluation than for training. We evaluate the performance of a policy by creating a copy of the policy in a separate thread, running the forward policy for a fixed number of steps, and computing the average per-step reward. All approaches observe the same amount of data during training. We visualize the training dynamics and provide additional plots and experimental details in the Appendix. Figure 5: We compare our method to a non-episodic ("forward-only") approach on ball in cup. Although neither uses hard resets, only our method learns to catch the ball. As an upper bound, we also show the "status quo" approach that performs a hard reset after every episode, which is often impractical outside simulation. One proposal for learning without resets is to run the forward policy until the task is learned. This "forward-only" approach corresponds to the standard, fully online, non-episodic lifelong RL setting, commonly studied in the context of temporal difference learning (BID26). We show that this approach fails, even on reversible environments where safety is not a concern. We benchmarked the forward-only approach and our method on ball in cup, using no hard resets for either. Figure 5 shows that our approach solves the task while the "forward-only" approach fails to learn how to catch the ball when initialized below the cup. Note that the x-axis includes steps taken by the reset policy. 
Once the forward-only approach catches the ball, it gets maximum reward by keeping the ball in the cup. In contrast, our method learns to solve this task by automatically resetting the environment after each attempt, so the forward policy can practice catching the ball without hard resets. As an upper bound, we show policy reward for the "status quo" approach, which performs a hard reset after every attempt. Note that the dependence on hard resets makes this third method impractical outside simulation. [Panels: Pusher, Cliff Cheetah.] Figure 6: Our method achieves equal or better rewards than the status quo with fewer manual resets. Our first goal is to reduce the number of hard resets during learning. In this section, we compare our algorithm to the standard, episodic learning setup ("status quo"), which only learns a forward policy. As shown in Figure 6 (left), the conventional approach learns the pusher task somewhat faster than ours, but our approach eventually achieves the same reward with half the number of hard resets. In the cliff cheetah task (Figure 6 (right)), not only does our approach use an order of magnitude fewer hard resets, but the final reward of our method is substantially higher. This suggests that, besides reducing the number of resets, the early aborts can actually aid learning by preventing the forward policy from wasting exploration time waiting for resets in irreversible states. [Panels: Pusher, Cliff Cheetah.] Figure 7: Early abort threshold: Increasing the early abort threshold to act more cautiously avoids many hard resets, indicating that early aborts help avoid irreversible states. To test whether early aborts prevent hard resets, we can see if the number of hard resets increases when we lower the early abort threshold. Figure 7 shows the effect of three values for Q_min while learning the pusher and cliff cheetah. In both environments, decreasing the early abort threshold increased the number of hard resets, supporting our hypothesis that early aborts prevent hard resets. On pusher, increasing Q_min to 80 allowed the agent to learn a policy that achieved nearly the same reward using 33% fewer hard resets. Because the cliff cheetah task has lower rewards than pusher, even an early abort threshold of 10 is enough to prevent 69% of the hard resets that the status quo would have performed. While early aborts help avoid hard resets, our algorithm includes a mechanism for requesting a manual reset if the agent reaches an unresettable state. As described in Section 4.2, we only perform a hard reset if the reset agent fails to reset in N consecutive episodes. Figure 8 shows how the number of reset attempts, N, affects hard resets and reward. On the pusher task, when our algorithm was given a single reset attempt, it used 64% fewer hard resets than the status quo approach would have. Increasing the number of reset attempts to 4 resulted in another 2.5x reduction in hard resets, while decreasing the reward by less than 25%. On the cliff cheetah task, increasing the number of reset attempts brought the number of resets down to nearly zero, without changing the reward. Surprisingly, these results indicate that for some tasks, it is possible to learn an equally good policy with significantly fewer hard resets. Our approach uses an ensemble of value functions to trigger early aborts. Our hypothesis was that our algorithm would be sensitive to bias in the value function if we used a single Q-network. To test this hypothesis, we varied the ensemble size from 1 to 50. 
Figure 9 shows the effect on learning the pushing task. An ensemble with one network failed to learn and still required many hard resets. Increasing the ensemble size slightly decreased the number of hard resets without affecting the reward. Our method can automatically produce a curriculum in settings where the desired skill is performed by the reset policy, rather than the forward policy. As an example, we evaluate our method on a peg insertion task (FIG0: Our method automatically induces a curriculum, allowing the agent to solve peg insertion with sparse rewards), where the reset policy inserts the peg and the forward policy removes it. The reward for a successful peg insertion is provided only when the peg is in the hole, making this task challenging to learn with random exploration. Hard resets provide illustrations of what a successful outcome looks like, but do not show how to achieve it. Our algorithm starts with the peg in the hole and runs the forward (peg removal) policy until an early abort occurs. As the reset (peg insertion) policy improves, early aborts occur further and further from the hole. Thus, the initial state distribution for the reset (peg insertion) policy moves further and further from the hole, increasing the difficulty of the task as the policy improves. We compare our approach to an "insert-only" baseline that only learns the peg insertion policy; we manually remove the peg from the hole after every episode. For evaluation, both approaches start outside the hole. FIG0 shows that only our method solves the task. The number of resets required by our method plateaus after one million steps, indicating that it has solved the task and no longer requires hard resets at the end of the episode. In contrast, the "insert-only" baseline fails to solve the task, never improving its reward. Thus, even if reducing manual resets is not important, the curriculum automatically created by Leave No Trace can enable agents to learn policies for tasks they otherwise would be unable to solve. In this paper, we presented a framework for automating reinforcement learning based on two principles: automated resets between trials, and early aborts to avoid unrecoverable states. Our method simultaneously learns a forward and reset policy, with the value functions of the two policies used to balance exploration against recoverability. Experiments in this paper demonstrate that our algorithm not only reduces the number of manual resets required to learn a task, but also learns to avoid unsafe states and automatically induces a curriculum. Our algorithm can be applied to a wide range of tasks, only requiring a few manual resets to learn some tasks. One limitation is that during the early stages of learning we cannot accurately predict the consequences of our actions. We cannot learn to avoid a dangerous state until we have visited that state (or a similar state) and experienced a manual reset. Nonetheless, reducing the number of manual resets during learning will enable researchers to run experiments for longer on more agents. A second limitation of our work is that we treat all manual resets as equally bad. In practice, some manual resets are more costly than others. For example, it is more costly for a grasping robot to break a wine glass than to push a block out of its workspace. An approach not studied in this paper for handling these cases would be to specify costs associated with each type of manual reset, and incorporate these reset costs into the learning algorithm. 
While the experiments for this paper were done in simulation, where manual resets are inexpensive, the next step is to apply our algorithm to real robots, where manual resets are costly. A challenge introduced when switching to the real world is automatically identifying when the agent has reset. In simulation we can access the state of the environment directly to compute the distance between the current state and initial state. In the real world, we must infer states from noisy sensor observations to deduce if they are the same. In this section, we prove that if we indeed learn the true Q-values for the reset policy, then the abort condition stipulated by our method will keep the forward policy safe (able to reset) for deterministic infinite-horizon discounted reward MDPs. For stochastic MDPs, the abort condition will keep the forward policy safe in expectation. Thus, the abort condition is effective at convergence. Before convergence, the reliability of this abort condition depends on the accuracy of the learned Q-values. In practice, we partially mitigate issues with imperfect Q functions by means of the Q-function ensemble (Section 4.4). In the proofs that follow, we make the following assumptions:1. The reward function for the reset policy depends only on the current state. We use r r (s) as the reset reward received for arriving at state s throughout this proof. 2. From every state s t, there exists an action a t such that the expected reset reward at the next state is at least as large as the reset reward at the current state: DISPLAYFORM0 For example, if the reset reward is uniform over S reset and zero everywhere else, then this assumption requires that for every state S reset, there exists an action that deterministically transitions to another state in S reset. As a counterexample, a modified cliff cheetah environment where the cheetah is initialized a meter above the ground does not satisfy this assumption. If the cheetah in S reset (above the ground), there are no actions it can take to stop itself from falling to the ground and leaving S reset. In the proofs below, we assume a stochastic MDP. For a deterministic MDP, we can remove the expectations over s t+1 (Lemma 4). To begin, we use the two assumptions to establish a lower bound for the value of any state for the reset policy. Lemma 1. For every state s ∈ S, the expected cumulative discounted reward for the reset agent is greater than a term that depends on the discount γ and the reward of the current state r r (s): DISPLAYFORM0 Proof. As a consequence of the assumptions, a reset policy at state s that acts optimally is guaranteed to receive an expected reset reward of at least r r (s) in every future time step. Thus, its expected cumulative discounted reward is at least DISPLAYFORM1 Next, we show that the reset policy can choose actions so that the Q-values do not decrease in expectation. Theorem 1. For any state s t ∈ S and action a t ∈ A, there exists another action a * t+1 ∈ A such that DISPLAYFORM2 In the proof that follows, note that the next state s t+1 is an unknown random variable. Functions of s t+1 are also random variables. Proof. Let state s t and action a t be given and let s t+1 be a random variable indicating the next state following a possibly stochastic transition. 
Let a*_{t+1} be the action with the largest Q-value at the next state: a*_{t+1} = argmax_{a∈A} Q_reset(s_{t+1}, a). We want to bound the expected difference between Q_reset(s_t, a_t) and Q_reset(s_{t+1}, a*_{t+1}), where the expectation is with respect to the unknown next state s_{t+1}. We begin by unrolling the first term of the Q-value. Because a*_{t+1} is defined to be the action with the largest Q-value, we can replace the first Q-value expression with the value function. We then apply the bound from Lemma 1. For brevity, we omit the subscript reset and omit that the expectation is over s_{t+1}: E[Q(s_{t+1}, a*_{t+1}) − Q(s_t, a_t)] = E[V(s_{t+1}) − r_r(s_{t+1}) − γV(s_{t+1})] = E[(1 − γ)V(s_{t+1}) − r_r(s_{t+1})] ≥ 0. Next, we want to show that if one state is safe, the next state will also be safe in expectation. As a reminder, we say transitions (s, a) in E* are safe and states s in S* are safe (Eq. 1 and 3): E* = {(s, a) ∈ E : Q_reset(s, a) > Q_min} and S* = {s ∈ S : V_reset(s) > Q_min}. Lemma 2. Let safe state s_t ∈ S* be given and choose an action a_t such that (s_t, a_t) ∈ E*. Then the following state s_{t+1} is also safe in expectation: E_{s_{t+1}}[V_reset(s_{t+1})] > Q_min. Proof. By our assumption that (s_t, a_t) ∈ E*, we know Q_reset(s_t, a_t) > Q_min. Combining with Theorem 1, we get E_{s_{t+1}}[V_reset(s_{t+1})] ≥ E_{s_{t+1}}[Q_reset(s_{t+1}, a*_{t+1})] ≥ Q_reset(s_t, a_t) > Q_min. Lemma 3. If the initial state s_0 is safe, then every state visited by Leave No Trace is safe in expectation. Proof. Proof by induction. We assumed that the initial state s_0 is safe. Lemma 2 shows that safety is a preserved invariant. Thus, each future state s_t is also safe in expectation. Leave No Trace being safe in expectation means that if we look an arbitrary number of steps into the future, the expected Q-value for that state is at least Q_min. Equivalently, the probability that the state we arrive at is safe is greater than 50%. Deterministic MDPs are a special case for which we can prove that Leave No Trace only visits safe states (not just in expectation). Lemma 4. For deterministic MDPs, if the initial state s_0 is safe, then Leave No Trace only visits states that are also safe (not just in expectation). Proof. When the next state s_{t+1} is a deterministic function of the current state s_t and action a_t, we can remove the expectation over s_{t+1} from Theorem 1 and Lemma 2. Thus, if the initial state s_0 is safe, we are guaranteed that every future state is also safe. In practice, Leave No Trace does visit unsafe states (though significantly less frequently than existing approaches). First, the proofs above only show that each state is safe in expectation. We do not prove that every state is safe with high probability. Second, we do not have access to the true Q-values. Our learned Q-function may overestimate the Q-value of some action, leading us to take an unsafe action. Empirically, we found that using an ensemble of Q-functions helped mitigate this problem, decreasing the number of unsafe actions taken as compared to using a single Q-function (Section 6.5). We introduced early aborts in Section 4.1 and analyzed them in Appendix A under the assumption that we had access to the true Q-values. In practice, our learned Q-values may over/under-estimate the Q-value for the reset policy. First, consider the case that the Q-function overestimates the reset Q-value for state s_u, so the agent mistakenly thinks that unsafe state s_u is safe. The agent will visit state s_u and discover that it cannot reset. When the reset Q-function is updated with this experience, it will decrease its predicted reset Q-value for state s_u. Second, consider the case that the Q-function underestimates the reset Q-value for state s_s, so the agent mistakenly thinks that safe state s_s is unsafe. 
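For reference, the core inequalities behind the safety argument above can be restated compactly (this is a recap under the same assumptions, with the reset subscript written explicitly; it adds no new result).

```latex
% Lemma 1 (geometric-series bound): acting optimally, the reset agent can
% secure an expected reward of at least r_r(s) at every future step, so
V_{\mathrm{reset}}(s) \;\ge\; \sum_{t=0}^{\infty} \gamma^{t}\, r_r(s)
  \;=\; \frac{r_r(s)}{1-\gamma}.

% Safety invariant: if (s_t, a_t) \in E^{*}, i.e.
% Q_{\mathrm{reset}}(s_t, a_t) > Q_{\min}, then by Theorem 1
\mathbb{E}_{s_{t+1}}\!\left[ V_{\mathrm{reset}}(s_{t+1}) \right]
  \;\ge\; Q_{\mathrm{reset}}(s_t, a_t) \;>\; Q_{\min},
% so s_{t+1} is safe in expectation, and by induction (Lemma 3) every
% state visited by the forward policy is safe in expectation.
```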
For continuous tasks, once the agent learns to reset from a nearby safe state, generalization of the Q-function across states will lead the reset policy to assume that it can also reset from the state s_s. For discrete-state tasks where the Q-function does not generalize across states, we act optimistically in the face of uncertainty by acting based on the largest predicted Q-value from our ensemble, helping to avoid this second case (see Appendix C). We benchmarked three methods for combining our ensemble of value functions (optimistic, realistic, and pessimistic, as discussed in Section 4.4). FIG0 compares the three methods on the gridworld environment from Section 5. Only the optimistic agent efficiently explored. As expected, the realistic and pessimistic agents, which are more conservative in letting the forward policy continue, fail to explore when Q_min is too large. Interestingly, for the continuous control environments, the ensembling method makes relatively little difference for the number of resets or final performance, as shown in FIG0. This suggests that much of the benefit of the ensemble comes from its ability to produce less biased abort predictions in novel states, rather than the particular risk-sensitive rule that is used. This also indicates that no Q-function in the ensemble significantly overestimates or underestimates the value function; such a Q-function would result in bogus Q-value estimates when the ensemble was combined by taking the max or min (respectively). In this section, we provide some intuition for the training dynamics of our algorithm. In particular, we visualize the number of steps taken by the forward policy before an early abort occurs. FIG0 shows this quantity (the episode length for the forward policy) as a function of training iteration. Note that we stop the forward policy after a fixed number of steps (500 steps for cliff cheetah and cliff walker, 100 steps for pusher) if an early abort has not already occurred. For all tasks, initially the reset policy is unable to reset from any state, so early aborts happen almost immediately. As the reset policy improves, early aborts occur further and further from the initial state distribution, corresponding to longer forward episode lengths. In all tasks, increasing the safety threshold Q_min caused early aborts to occur sooner, especially early in training. For the cliff cheetah, another curious pattern emerges when Q_min is 10 and 20. After 200 thousand steps, the agent had learned rudimentary policies for running forwards and backwards. As the agent learns to run forwards faster, it reaches the cliff sooner and does an early abort, so the forward episode length actually decreases. For cliff walker, we do not see the same pattern because the forward task is more difficult, so the agent only reaches the cliff near the end of training. Both the cliff walker and pusher environments highlight the sensitivity of our method to Q_min. If Q_min is too small, early aborts will never occur. Automatically tuning the safety threshold based on the real-world cost of hard resets is an exciting direction for future research. [Figure: number of steps taken before an early abort for cliff cheetah (top row), cliff walker (middle row), and pusher (bottom row). Increasing the safety threshold causes early aborts to occur earlier, causing the agent to explore more cautiously. These plots are the average across 5 random seeds.] 
For each experiment in the main paper, we chose one or two demonstrative environments. Below, we show all experiments run on cliff cheetah, cliff walker, and pusher. This experiment, described in Section 6.2, compared our method to the status quo approach (resetting after every episode). FIG0 shows plots for all environments. This experiment, described in Section 6.3, shows the effect of varying the early abort threshold. FIG0 shows plots for all environments. To generate FIG0, we averaged early abort counts across 10 random seeds. For Figure 3 we took the median across 10 random seeds. Both gridworld experiments used 5 models in the ensemble. In this section, we provide additional information on the continuous control environments in our experiments. We use the ball in cup environment implemented in. The cliff cheetah and cliff walker environments are modified versions of the cheetah and walker environments in. The pusher environment is a modified version of Pusher-v0 environment in BID1. Finally, the peg insertion environment is based on BID4.
We propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and backward policy, with the backward policy resetting the environment for a subsequent attempt.
313
scitldr
It has been argued that current machine learning models do not have common sense, and therefore must be hard-coded with prior knowledge. Here we show surprising evidence that language models can already learn to capture certain common sense knowledge. Our key observation is that a language model can compute the probability of any statement, and this probability can be used to evaluate the truthfulness of that statement. On the Winograd Schema Challenge, language models are 11% higher in accuracy than previous state-of-the-art supervised methods. Language models can also be fine-tuned for the task of Mining Commonsense Knowledge on ConceptNet to achieve F1 scores of 0.912 and 0.824, outperforming the previous best (Jastrzebski et al., 2018). Further analysis demonstrates that language models can discover unique features of Winograd Schema contexts that decide the correct answers without explicit supervision. It has been argued that current machine learning models do not have common sense BID4 BID15. For example, even the best machine learning models perform poorly on commonsense reasoning tasks such as the Winograd Schema Challenge BID11 BID14. This argument is often combined with another important criticism of supervised learning: that it only works well on problems that have a lot of labeled data. The Winograd Schema Challenge (WSC) is the opposite of such problems because its labeled set size is only on the order of a few hundred examples, with no official training data. Based on this argument, it is suggested that machine learning models must be integrated with prior knowledge BID15 BID10. As an example, consider the following question from the WSC dataset: "The trophy doesn't fit in the suitcase because it is too big." What is "it"? Answer 0: the trophy. Answer 1: the suitcase. The main point of this dataset is that no machine learning model today can do a good job at answering this type of question. In this paper, we present surprising evidence that language models do capture certain common sense knowledge and this knowledge can be easily extracted. Key to our method is the use of language models (LMs), trained on a large amount of unlabeled data, to score multiple-choice questions posed by the challenge and similar datasets. In the above example, we will first substitute the pronoun ("it") with the candidates ("the trophy" and "the suitcase"), and then use an LM to compute the probability of the two resulting sentences ("The trophy doesn't fit in the suitcase because the trophy is too big." and "The trophy doesn't fit in the suitcase because the suitcase is too big."). The substitution that results in a more probable sentence will be the chosen answer. Using this simple method, we are able to achieve 63.7% accuracy, 11% above that of the previous state-of-the-art. To demonstrate a practical impact of this work, we show that the trained LMs can be used to enrich human-annotated knowledge bases, which are known to be low in coverage and expensive to expand. For example, "Suitcase is a type of container", a piece of knowledge relevant to the above Winograd Schema example, is not present in the ConceptNet knowledge base BID13. The goal of this task is to add such new facts to the knowledge base at a cheaper cost than human annotation, in our case using LM scoring. We followed the Commonsense Knowledge Mining task formulation from BID0 BID12 BID8, which posed the task as a classification problem of unseen facts and non-facts. 
Without an additional classification layer, LMs are fine-tuned to give different scores to fact and non-fact tuples from ConceptNet. Results obtained by this method outperform all previous results, despite the small training data size (100K instances). On the full test set, LMs can identify commonsense facts with a 0.912 F1 score, which is 0.02 better than supervised-trained networks BID8. Previous attempts at solving the Winograd Schema Challenge usually involve heavy utilization of annotated knowledge bases, rule-based reasoning, or hand-crafted features BID21 BID1 BID27. BID29 rely on a semantic parser to understand the question, query Google Search, and perform rule-based reasoning. BID27 formalizes the knowledge-graph data structure and a reasoning process based on cognitive linguistics theories. BID1 introduce a mathematical reasoning framework with knowledge bases as axioms. BID24 is an early empirical work towards WSC making use of learning. Their SVM, however, utilizes nearly 70K hand-crafted features and additional supervised training data, while being tested on a less restricted version of WSC. Concurrent work from BID23 attempts WSC by fine-tuning pretrained Transformer LMs on supervised training data, but did not produce better results than previous methods. In contrast, we make use of LSTMs, which are shown to be qualitatively different (BID30), and obtain significant improvements without fine-tuning. The previous best method on WSC makes use of the skip-gram model to learn word representations BID14. Their model, however, also includes supervised neural networks and three knowledge bases. Our work uses the same intuition that unsupervised learning from texts such as a skip-gram model can capture some aspect of common sense. For example, BID16 BID10 show that by learning to predict adjacent words in a sentence, word vectors can be used to answer analogy questions such as Man:King::Woman:?. The difference is that WSC requires more contextual information, and hence we use LMs instead of just word vectors. By training LMs on very large text corpora, we obtain good results without any supervised learning nor the aid of knowledge bases. Closely related to our substitution method on the Winograd Schema Challenge are Cloze-type reading comprehension tasks such as LAMBADA BID20 or the Story Cloze Test BID19, where LM scoring also reported great successes BID2 BID28. On a broader impact, neural LMs have been applied to improve downstream applications BID3 BID26 BID22 BID7 BID23 by providing better sentence or paragraph vector representations. Knowledge bases constructed by humans are high in precision, but low in coverage. Since increasing the coverage by more human annotation is expensive, automated methods have been proposed. Previous attempts using deep neural networks are known to produce limited success on the ConceptNet knowledge base, where training data is limited. BID12 shows that a supervised LSTM is outperformed by a simpler model in scoring unseen facts from ConceptNet. Furthermore, BID8 find that deep neural network performance degrades significantly on a selected subset of the most novel test instances in comparison to training data. In Section 5.2, we demonstrate that our trained LMs do not suffer from this phenomenon and outperform all previous methods on both test criteria. In this section, we introduce a simple and straightforward application of pretrained language models on the Winograd Schema Challenge. 
Our method is based on the observation that a language model can compute the probability of any given statement. We use this probability to judge the truthfulness of the statement. We first substitute the pronoun in the original sentence with each of the candidate choices. The problem of coreference resolution then reduces to identifying which substitution results in a more probable sentence. Language modeling subsequently becomes a natural solution by its definition. Namely, language models are trained on text corpora, which encode human knowledge in the form of natural language. During inference, LMs are able to assign probability to any given text based on what they have learned from training data. An overview of our method is shown in Figure 1. Figure 1: Overview of our method and analysis. We consider the test "The trophy doesn't fit in the suitcase because it is too big." Our method first substitutes two candidate references trophy and suitcase into the pronoun position. We then use an LM to score the two resulting substitutions. By looking at the probability ratio at every word position, we are able to detect "big" as the main contributor to trophy being the chosen answer. When "big" is switched to "small", the answer changes to suitcase. This switching behaviour is an important feature characterizing the Winograd Schema Challenge. Consider a sentence S consisting of n consecutive words whose pronoun to be resolved is specified at the k-th position: S = {w_1, ..., w_{k−1}, w_k ≡ p, w_{k+1}, ..., w_n}. We make use of a trained language model P_θ(w_t | w_1, w_2, ..., w_{t−1}), which defines the probability of word w_t conditioned on the previous words w_1, ..., w_{t−1}. The substitution of a candidate reference c into the pronoun position k results in a new sentence S_{w_k←c} (we use notation w_k ← c to mean that word w_k is substituted by candidate c). We consider two different ways of scoring the substitution: • Score_full(w_k ← c) = P_θ(w_1, w_2, ..., w_{k−1}, c, w_{k+1}, ..., w_n), which scores how probable the resulting full sentence is, and • Score_partial(w_k ← c) = P_θ(w_{k+1}, ..., w_n | w_1, ..., w_{k−1}, c), which scores how probable the part of the resulting sentence following c is, given its antecedent. In other words, it only scores a part of S_{w_k←c} conditioned on the rest of the substituted sentence. An example of these two scores is shown in Table 1. In our experiments, we find that the partial scoring strategy is generally better than the naive full scoring strategy. More comparison and analysis on scoring type is done in Section 6.3. Table 1: Example of full and partial scoring for the test "The trophy doesn't fit in the suitcase because it is too big." with two reference choices "the suitcase" and "the trophy". c = the suitcase: Score_full(w_k ← "the suitcase") = P(The trophy doesn't fit in the suitcase because the suitcase is too big); Score_partial(w_k ← "the suitcase") = P(is too big | The trophy doesn't fit in the suitcase because the suitcase). c = the trophy: Score_full(w_k ← "the trophy") = P(The trophy doesn't fit in the suitcase because the trophy is too big); Score_partial(w_k ← "the trophy") = P(is too big | The trophy doesn't fit in the suitcase because the trophy). 
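Both scores are easy to express with any autoregressive LM that exposes per-token conditional probabilities. The sketch below assumes a hypothetical `lm.log_prob(tokens)` interface returning per-token log-probabilities; it illustrates the two scoring rules and is not the authors' implementation.

```python
import math

def substitute(tokens, k, candidate_tokens):
    """Replace the pronoun at position k with the candidate's tokens."""
    return tokens[:k] + candidate_tokens + tokens[k + 1:]

def score_full(lm, tokens):
    """log P(w_1, ..., w_n): sum of per-token conditional log-probabilities."""
    return sum(lm.log_prob(tokens))

def score_partial(lm, tokens, start):
    """log P(w_start, ..., w_n | w_1, ..., w_{start-1}): only tokens after
    the substituted candidate contribute to the score."""
    return sum(lm.log_prob(tokens)[start:])

def resolve(lm, tokens, k, candidates, partial=True):
    """Pick the candidate whose substitution yields the higher LM score."""
    best, best_score = None, -math.inf
    for cand in candidates:
        cand_tokens = cand.split()
        sub = substitute(tokens, k, cand_tokens)
        start = k + len(cand_tokens)   # first token after the candidate
        s = score_partial(lm, sub, start) if partial else score_full(lm, sub)
        if s > best_score:
            best, best_score = cand, s
    return best
```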
To handle human names in the Winograd Schema Challenge, we simply make use of a very large vocabulary (approximately 800K tokens). We follow the architectural design and training scheme in BID9, with additional modifications to create more LM variants. More details about our LMs can be found in Appendix A. In this section we describe the training text corpora used in our experiments. We also detail tests for commonsense reasoning and commonsense knowledge mining. Training text corpora. We perform experiments on several different text corpora to examine the effect of training data type on test accuracy. Namely, we consider LM-1-Billion, CommonCrawl, SQuAD, and Gutenberg Books. For SQuAD, we collect context passages from the Stanford Question Answering Dataset BID25 to form its training and validation set accordingly. Commonsense Reasoning Tests. We consider two tests: Pronoun Disambiguation Problems and the Winograd Schema Challenge. The former consists of 60 pronoun disambiguation questions (PDP-60). The latter consists of 273 questions and is designed to work against techniques such as traditional linguistic restrictions, common heuristics or simple statistical tests. A Winograd Schema-like dataset was also built with relaxed criteria, allowing the context wording to reveal information about the correct answer. We also found instances of incorrect annotation and ambiguous tests in its training and test sets (see Appendix C). In this work, therefore, we focus on the official Winograd Schema Challenge test set. Commonsense Knowledge Mining test. Following BID0 BID12, we use the same data split on the ConceptNet knowledge base, which results in training, validation, and test sets of sizes 100K, 1200, and 2400 respectively. With one half of the validation and test sets being non-facts, the commonsense knowledge mining task is posed as performing classification between facts and non-facts on these sets. Another test set is included which consists of 800 instances with the highest novelty measurement computed against the training set. We first train our LMs on all text corpora and test them on the two Commonsense Reasoning tests. The LMs are then fine-tuned for mining novel commonsense knowledge on ConceptNet. We first examine PDP-60 with unsupervised single-model resolvers by training one word-level LM on the Gutenberg corpus. In TAB0, this resolver outperforms the previous best by more than 11% in accuracy. Next, we compare against systems that make use of both supervised and unsupervised training data. As can be seen in TAB0, the single-model LM can still produce better results when its competing systems include either supervised deep neural networks or knowledge bases. By training more LMs for ensembling, we are able to reach 70% accuracy, outperforming the previous state-of-the-art of 66.7%. For this task, we found full scoring gives better results than partial scoring. In Section 6.3, we provide more comparison between these two types of scoring. On the harder task WSC-273, where questions are designed to exclude relevant knowledge in their wording, incorporating supervised learning and knowledge bases into USSM BID14 provides insignificant gain this time (+3%), compared to the large gain on PDP-60 (+19%). On the other hand, our single-model resolver can still outperform the other methods by a large margin as shown in TAB1. By ensembling predictions from multiple LMs, we obtain nearly 10% absolute accuracy improvement compared to the previous state-of-the-art. 
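Ensembling here simply averages the per-position conditional probabilities of the member LMs before scoring, which is what allows word-level and character-level models (both producing word outputs) to be combined. A minimal sketch follows; the `cond_probs` interface is an assumption for illustration.

```python
import numpy as np

def ensemble_log_score(lms, tokens, start=0):
    """Average the per-position conditional probabilities over ensemble
    members, then sum log-probabilities from `start` onward (start=0 gives
    full scoring, start=k+1 gives partial scoring).
    Assumes lm.cond_probs(tokens)[t] returns P(w_t | w_1..w_{t-1})."""
    probs = np.mean([lm.cond_probs(tokens) for lm in lms], axis=0)
    return float(np.sum(np.log(probs[start:])))
```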
We note that BID29 also attempted WSC but their approach is only applicable to 53 out of 273 test cases, therefore not comparable to our results. Customizing training data for the Winograd Schema Challenge. As previous systems collect relevant data from knowledge bases after observing questions during evaluation BID24 BID29, we also explored using this option. Namely, we build a customized text corpus based on questions in commonsense reasoning tasks. It is important to note that this does not include the answers and therefore does not provide supervision to our resolvers. In particular, we aggregate documents from the CommonCrawl dataset that have the most overlapping n-grams with the questions. The score for each document is a weighted sum of F_1(n) scores when counting overlapping n-grams: Score = Σ_n w_n F_1(n). The top 0.1% highest-ranked documents are chosen as our new training corpus. This procedure resulted in nearly 1,000,000 documents, with the highest-ranking document having a score of 8×10^−2, still relatively small compared to a perfect score of 1.0. We name this dataset Stories since most of the constituent documents take the form of a story with a long chain of coherent events. More statistics on Stories can be found in Appendix B. We train four different LMs on Stories and add them to the previous ensemble of 10 LMs, resulting in an accuracy of 63.7% in the final system. Remarkably, single models trained on this corpus are already extremely strong, with one word-level LM achieving 62.6% accuracy. In the previous sections, we show that unsupervised LMs can outperform other methods equipped with additional knowledge bases on two Commonsense Reasoning tests. In this section, we demonstrate how these trained LMs can help expand the coverage of these human-annotated knowledge bases. To make LM scoring applicable, knowledge tuples of the form Relation(head, tail) from ConceptNet are first converted to a form that resembles natural language sentences. For example, UsedFor(post office, mail letter) is converted to "Post office is used for mail letter." by simply concatenating its head, relation, and tail phrases in order. Although this simple procedure results in ungrammatical sentences, we find our LMs can still adapt to this new data distribution and generalize extremely well to test instances. For fine-tuning, each commonsense fact in the training set is accompanied by a negative example, generated by replacing its tail phrase by another random phrase (for example "Post office is used for dance."). Instead of adding a classification layer, we add to the original LM objective a term that encourages a perplexity discrepancy between the pair of positive and negative examples: Loss_new = Loss_LM + max(0, log(Perp_positive) − log(Perp_negative) + α), where α indicates how much of a discrepancy is needed beyond which no loss is added. Perp is the perplexity evaluated on the tail phrase, given the corresponding head and relation phrases. During evaluation, a threshold is used to classify low-perplexity and high-perplexity instances as fact and non-fact. We found a word-level LM with α = 0.5 performs best on the validation set. As shown in TAB2, our fine-tuned LM outperforms other methods on both tests. Unlike DNN BID12, LM ranking is robust to the novelty-based test instances, while supervised DNN performance degrades significantly on this test despite good performance on the full test. We suggest that this happened because supervised-trained DNNs tend to overfit easily when training data is limited. 
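The pairwise objective above translates directly into a margin on tail-phrase log-perplexities. The PyTorch-style sketch below assumes hypothetical helpers on the LM object (`lm_loss`, `token_nll`) and simple fact containers with `context` and `tail` fields; everything named here is illustrative rather than the authors' API.

```python
import torch

def tail_log_perplexity(lm, head_rel_tokens, tail_tokens):
    """log(Perp) of the tail phrase given head and relation: the mean
    negative log-likelihood of the tail tokens under the LM (assumed API)."""
    nll = lm.token_nll(context=head_rel_tokens, targets=tail_tokens)
    return nll.mean()

def commonsense_margin_loss(lm, positive, negative, alpha=0.5):
    """Loss_new = Loss_LM + max(0, log(Perp_pos) - log(Perp_neg) + alpha)."""
    lm_loss = lm.lm_loss(positive)   # the usual LM objective on the fact sentence
    log_perp_pos = tail_log_perplexity(lm, positive.context, positive.tail)
    log_perp_neg = tail_log_perplexity(lm, negative.context, negative.tail)
    margin = torch.clamp(log_perp_pos - log_perp_neg + alpha, min=0.0)
    return lm_loss + margin
```

At evaluation time, the same `tail_log_perplexity` quantity can be thresholded to classify a tuple as fact or non-fact, matching the low-/high-perplexity rule described above.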
On the other hand, by leveraging a massive amount of unsupervised training data, the LM does not overfit to the limited training data for this task (100K instances) despite its large size of approximately 2 billion parameters. In this section, we perform analysis on both correct and incorrect predictions made by LM scoring on the Winograd Schema, and the influence of training data types on test performance. We introduce a method to detect keywords from the context at which our proposed resolvers make the decision between the two candidates c_correct and c_incorrect. We then demonstrate that these detected keywords surprisingly match the annotated features in each Winograd Schema question that play the role of identifying the correct answer. Namely, we look at the following ratio: q_t = P_θ(w_t | w_1, ..., w_{t−1}; w_k ← c_correct) / P_θ(w_t | w_1, ..., w_{t−1}; w_k ← c_incorrect), where 1 ≤ t ≤ n for full scoring, and k + 1 ≤ t ≤ n for partial scoring. It follows that the choice between c_correct or c_incorrect is made by whether the value of Q = Π_t q_t is bigger than 1.0 or not. By looking at the value of each individual q_t, it is possible to retrieve the words with the largest values of q_t and hence most responsible for the final value of Q. We visualize the probability ratios q_t to have more insights into the decisions of our resolvers. Figure 2 displays a sample of incorrect decisions made by full scoring which are corrected by partial scoring. Figure 2: A sample of questions from WSC-273 predicted incorrectly by full scoring, but corrected by partial scoring. Here we mark the correct prediction by an asterisk and display the normalized probability ratio q̄_t by coloring its corresponding word. It can be seen that the wrong predictions are made mainly due to q_t at the pronoun position, where the LM has not observed the full sentence. Partial scoring shifts the attention to later words and places the highest q values on the special keywords, marked by square brackets. These keywords characterize the Winograd Schema Challenge, as they uniquely decide the correct answer. In the last question, since the special keyword appears before the pronoun, our resolver instead chose "upset", as a reasonable switch word could be "annoying". Interestingly, we found that large values of q_t coincide with the special keyword of each Winograd Schema in several cases. Intuitively, this means our LMs assigned very low probability to the keyword after observing the wrong substitution. It follows that we can predict the keyword in each Winograd Schema question by selecting the word positions with the highest value of q_t. For questions with keywords appearing before the reference, we detect them by backward-scoring models. Namely, we ensemble 6 LMs, each trained on one text corpus with word order reversed. Overall, we are able to discover a significant number of special keywords (115 out of 178 correctly answered questions) as shown in TAB3. This strongly indicates a correct understanding of the context and a good grasp of commonsense knowledge in the resolver's decision process. In the original proposal of the Winograd Schema Challenge, BID11 argue that by careful wording of the context, no relevant knowledge is revealed about the correct answer. For example, "big" is not a property exclusive to either "trophy" or "suitcase". This forces the system to resort to the context for the correct answer, as opposed to exploiting simple statistical correlation in the training data to cheat the test. 
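Keyword detection falls out of the same per-position probabilities: compute q_t for each scored position and keep the largest ratios. A sketch under the same assumed `cond_probs` interface as before; the equal-length assumption on the two substituted sentences is a simplification.

```python
import numpy as np

def position_ratios(lm, tokens_correct, tokens_incorrect, start=0):
    """q_t = P(w_t | ...; correct substitution) / P(w_t | ...; incorrect one),
    computed for every scored position (start=k+1 for partial scoring).
    Assumes both substituted sentences have the same length."""
    p_c = np.asarray(lm.cond_probs(tokens_correct))
    p_i = np.asarray(lm.cond_probs(tokens_incorrect))
    return p_c[start:] / p_i[start:]

def detect_keywords(lm, tokens_correct, tokens_incorrect, start=0, top=1):
    """Return the words with the largest q_t, i.e. the positions most
    responsible for the final decision Q = prod_t q_t."""
    q = position_ratios(lm, tokens_correct, tokens_incorrect, start)
    order = np.argsort(q)[::-1][:top]
    return [(tokens_correct[start + t], float(q[t])) for t in order]
```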
We use the trained LMs to expose such correlation in the used training data by gradually ablating the context from a WSC question. For example, at 100% ablation, the scoring reduces to comparing only "the trophy is too big" versus "the suitcase is too big". FIG0 -left shows that there is indeed such bias. For some LMs the bias made them perform worse than random at 100% ablation, while for others they perform better than random without any context. Note that this bias, however, does not necessarily affect the corresponding LM-scoring when context is included. As more context is included, all LMs improve and reach the best performance at 0% ablation, indicating the critical role of context in their scoring.6.3 SCORING TYPE AND EFFECT OF TRAINING DATA.We look at incorrect predictions from a word-level LM. With full scoring strategy, we observe that q t at the pronoun position is most responsible for a very large percentage of incorrect decisions as shown in Figure 2. For example, with the test "The trophy cannot fit in the suitcase because it is too big.", the system might return c incorrect ="suitcase" simply because c correct = "trophy" is a very rare word in its training corpus and therefore is assigned a very low probability, overpowering subsequent q t values. To verify this observation, we apply a simple fix to full scoring by normalizing its score with the unigram count of c: Score f ull normalized = Score f ull /Count(c). This normalization indeed fixes full scoring in 9 out of 10 tested LMs on PDP-60. On WSC-273, the observation is again confirmed as partial scoring, which ignores c altogether, strongly outperforms the other two scorings in all cases as shown in FIG0 -middle. We therefore attribute the different behaviour observed on PDP-60 as an atypical case due to its very small size. Next, we examine the effect of training data on commonsense reasoning test performance. An ensemble of 10 LMs is trained on each of the five corpora: LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and Stories. A held-out dataset from each text corpus is used for early stopping on the corresponding training data.8 FIG0 -right shows that among single training text corpora, test performance improves as the training text contains longer documents (LM-1-Billion is a set of mostly independent sentences, while Gutenberg or Stories are full books or very-long documents). Finally, the ensemble trained on a mix of different datasets perform best, highlighting the important role of diversity in training data for commonsense reasoning accuracy of the final system. We introduced a simple method to apply pretrained language models to tasks that require commonsense knowledge. Key to our method is the insight that large LMs trained on massive text corpora can capture certain aspect of human knowledge, and therefore can be used to score textual statements. On the Winograd Schema Challenge, LMs are able to achieve 11 points of accuracy above the best previously reported . On mining novel commonsense facts from ConceptNet knowledge base, LM scoring also outperforms previous methods on two different test criteria. We analyse the trained language models and observe that key features of the context that identify the correct answer are discovered and used in their predictions. Traditional approaches to capturing common sense usually involve expensive human annotation to build knowledge bases. 
This work demonstrates that commonsense knowledge can alternatively be learned and stored in the form of distributed representations. At the moment, we consider language modeling for learning from texts as this supplies virtually unlimited data. It remains an open question for unsupervised learning to capture common sense from other modalities such as images or videos. The base model consists of two layers of Long Short-Term Memory (LSTM) BID6 with 8192 hidden units. The output gate of each LSTM uses peepholes and a projection layer to reduce its output dimensionality to 1024. We perform dropout on the LSTM's outputs with probability 0.25. For word inputs, we use an embedding lookup of 800000 words, each with dimension 1024. For character inputs, we use an embedding lookup of 256 characters, each with dimension 16. We concatenate all characters in each word into a tensor of shape (word length, 16) and add to its two ends the <begin of word> and <end of word> tokens. The resulting concatenation is zero-padded to produce a fixed-size tensor. This tensor is then processed by eight different 1-D convolution (Conv) kernels of different sizes and numbers of output channels, listed in TAB4, each followed by a ReLU activation. The outputs of all CNNs are then concatenated and processed by two other fully-connected layers with highway connections that preserve the input dimensionality. The resulting tensor is projected down to a 1024-feature vector. For both word input and character input, we perform dropout on the tensors that go into the LSTM layers with probability 0.25. We use a single fully-connected layer followed by a Softmax operator to process the LSTM's output and produce a distribution over the word vocabulary of size 800K. During training, the LM loss is evaluated using importance sampling with a negative sample size of 8192. This loss is minimized using the AdaGrad BID5 algorithm with a learning rate of 0.2. All gradients on LSTM parameters and character embedding parameters are clipped by their global norm at 1.0. To avoid storing large matrices in memory, we shard them into 32 equal-sized smaller pieces. In our experiments, we used 8 different variants of this base model as listed in TAB5. In TAB6, we listed all LMs and their training text corpora used in each of the experiments in Section 5. FIG2 shows a histogram of the similarity score introduced in Section 5.1. Inspecting an excerpt from the highest-ranking document reveals many complex references from pronouns, within long chains of events. We hypothesize that this allows LMs trained on this corpus to learn to disambiguate pronouns and make correct predictions. One day when John and I had been out on some business of our master's, and were returning gently on a long, straight road, at some distance we saw a boy trying to leap a pony over a gate; the pony would not take the leap, and the boy cut him with the whip, but he only turned off on one side. He whipped him again, but the pony turned off on the other side. Then the boy got off and gave him a hard thrashing, and knocked him about the head... 
Comment: Found in test set; there is no clear answer to this question, as communists can also attack their enemies. Using the similarity scoring technique in Section 5.1, we observe a large amount of low-quality training text on the lower end of the ranking. Namely, these are documents whose content is mostly unintelligible or unrecognized by our vocabulary. Training LMs for commonsense reasoning tasks on the full CommonCrawl, therefore, might not be ideal. On the other hand, we detected and removed a portion of PDP-122 questions that appeared as an extremely highly ranked document.
We present evidence that LMs do capture common sense with state-of-the-art results on both Winograd Schema Challenge and Commonsense Knowledge Mining.
314
scitldr
In many real-world learning scenarios, features are only acquirable at a cost constrained under a budget. In this paper, we propose a novel approach for cost-sensitive feature acquisition at prediction-time. The suggested method acquires features incrementally based on a context-aware feature-value function. We formulate the problem in the reinforcement learning paradigm, and introduce a reward function based on the utility of each feature. Specifically, MC dropout sampling is used to measure expected variations of the model uncertainty, which are used as a feature-value function. Furthermore, we suggest sharing representations between the class predictor and value function estimator networks. The suggested approach is completely online and is readily applicable to stream learning setups. The solution is evaluated on three different datasets including the well-known MNIST dataset as a benchmark as well as two cost-sensitive datasets: Yahoo Learning to Rank and a dataset in the medical domain for diabetes classification. According to the results, the proposed method is able to efficiently acquire features and make accurate predictions. In traditional machine learning settings, it is usually assumed that a training dataset is freely available and the objective is to train models that generalize well. In this paradigm, the feature set is fixed, and we are dealing with complete feature vectors accompanied by class labels that are provided for training. However, in many real-world scenarios, there are certain costs for acquiring features as well as budgets limiting the total expenditure. Here, the notion of cost is more general than financial cost and it also refers to other concepts such as computational cost, privacy impacts, energy consumption, patient discomfort in medical tests, and so forth BID22. Take the example of disease diagnosis based on medical tests. Creating a complete feature vector from all the relevant information is synonymous with conducting many tests such as MRI scans, blood tests, etc., which would not be practical. On the other hand, a physician approaches the problem by asking for a set of basic, easy-to-acquire features, and then incrementally prescribes other tests based on the current known information (i.e., context) until a reliable diagnosis can be made. Furthermore, in many real-world use-cases, due to the volume of data or necessity of prompt decisions, learning and prediction should take place in an online and stream-based fashion. In the medical diagnosis example, this is consistent with the fact that the latency of diagnosis is vital (e.g., urgency of specific cases and diagnosis), and it is often impossible to defer the decisions. Here, by online we mean processing samples one at a time as they are being received. Various approaches were suggested in the literature for cost-sensitive feature acquisition. To begin with, traditional feature selection methods suggest limiting the set of features used for training BID11 BID17. For instance, L1 regularization for linear classifiers results in models that effectively use a subset of features BID9. Note that these methods focus on finding a fixed subset of features to be used (i.e., feature selection), while a better solution would be to make feature acquisition decisions based on the sample at hand, at prediction-time. More recently, probabilistic methods were suggested that measure the value of each feature based on the current evidence BID5.
However, these methods are usually applicable to Bayesian networks or similar probabilistic models and make limiting assumptions such as having binary features and binary classes. Furthermore, these probabilistic methods are computationally expensive and intractable in large-scale problems BID5. Motivated by the success of discriminative learning, cascade and tree-based classifiers were suggested as an intuitive way to incorporate feature costs BID20 BID3. Nevertheless, these methods are basically limited to the modeling capability of tree classifiers and to fixed, predetermined structures. A recent work by BID27 suggested a gating method that employs adaptive linear or tree-based classifiers, alternating between low-cost models for easy-to-handle instances and higher-cost models to handle more complicated cases. While this method outperforms much of the previous work on tree-based and cascaded cost-sensitive classifiers, the low-cost model being used is limited to simple linear classifiers or pruned random forests. As an alternative approach, sensitivity analysis of trained predictors is suggested to measure the importance of each feature given a context BID7 BID18. These approaches either require an exhaustive measurement of sensitivities or rely on approximations of sensitivity. These methods are easy to use as they work without any significant modification to the predictor models being trained. However, theoretically, finding the global sensitivity is a difficult and computationally expensive problem. Therefore, approximate or local sensitivities are frequently used in these methods, which may lead to suboptimal solutions. Another approach that is suggested in the literature is modeling the feature acquisition problem as a learning problem in the imitation learning BID13 or reinforcement learning BID14 BID29 BID15 domain. These approaches are promising in terms of performance and scalability. However, the value functions used in these methods are usually not intuitive and require tuning hyper-parameters to balance the cost vs. accuracy trade-off. More specifically, they often rely on one or more hyper-parameters to adjust the average cost at which these models operate. On the other hand, in many real-world scenarios it is desirable to adjust the trade-off at prediction-time rather than training-time. For instance, it might be desirable to spend more for a certain instance or continue the feature acquisition until a desired level of prediction confidence is achieved. This paper presents a novel method based on deep Q-networks for cost-sensitive feature acquisition. The proposed solution employs uncertainty analysis in neural network classifiers as a measure for finding the value of each feature given a context. Specifically, we use variations in the certainty of predictions as a reward function to measure the value per unit of the cost given the current context. In contrast to the recent feature acquisition methods that use reinforcement learning ideas BID14 BID29 BID15, the suggested reward function does not require any hyper-parameter tuning to balance the cost versus performance trade-off. Here, features are acquired incrementally, while maintaining a certain budget or a stopping criterion. Moreover, in contrast to many other works in the literature that assume an initially complete dataset BID13 BID5 BID8 BID27, the proposed solution is stream-based and online, learning and optimizing acquisition costs during both training and prediction.
This might be beneficial as, in many real-world use cases, it might be prohibitively expensive to collect all features for all training data. Furthermore, this paper suggests a method for sharing the representations between the class predictor and action-value models that increases the training efficiency. In this paper, we consider the general scenario of having a stream of samples as input (S i). Each sample S i corresponds to a data point of a certain class in R d, where there is a cost for acquiring each feature (c j ; 1 ≤ j ≤ d). For each sample, initially, we do not know the value of any feature. Subsequently, at each time step t, we only have access to a partial realization of the feature vector denoted by x t i that consists of features that are acquired so far. There is a maximum feature acquisition budget (B) that is available for each sample. Note that the acquisition may also be terminated before reaching the maximum budget, based on any other termination condition such as reaching a certain prediction confidence. Furthermore, for each S i, there is a ground truth target labelỹ i. It is also worth noting that we consider the online stream processing task in which acquiring features is only possible for the current sample being processed. In other words, any decision should take place in an online fashion. In this setting, the goal of an Opportunistic Learning (OL) solution is to make accurate predictions for each sample by acquiring as many features as necessary. At the same time, learning should take place by updating the model while maintaining the budgets. Please note that, in this setup, we are assuming that the feature acquisition algorithm is processing a stream of input samples and there are no distinct training or test samples. However, we assume that ground truth labels are only available to us after the prediction and for a subset of samples. More formally, we define a mask vector k DISPLAYFORM0 d where each element of k indicates if the corresponding feature is available in x t i. Using this notation, the total feature acquisition cost at each time step can be represented as DISPLAYFORM1 Furthermore, we define the feature query operator (q) as DISPLAYFORM2 In Section 3, we use these primitive operations and notations for presenting the suggested solution. As prediction certainty is used extensively throughout this paper, we devote this section to certainty measurement. The softmax output layer in neural networks are traditionally used as a measure of prediction certainty. However, interpreting softmax values as probabilities is an ad hoc approach prone to errors and inaccurate certainty estimates BID30. In order to mitigate this issue, we follow the idea of Bayesian neural networks and Monte Carlo dropout (MC dropout) BID31 BID10. Here we consider the distribution of model parameters at each layer l in an L layer neural network as:ω DISPLAYFORM0 whereω l is a realization of layer parameters from the probability distribution of p(ω l). In this setting, a probability estimate conditioned on the input and stochastic model parameters is represented as: DISPLAYFORM1 where fω D is the output activation of a neural network with parametersω trained on dataset D. 
In order to find the uncertainty of final predictions with respect to inputs, we integrate equation 4 with respect to ω: DISPLAYFORM2 Finally, MC dropout suggests interpreting the dropout forward path evaluations as Monte Carlo samples (ω_t) from the ω distribution and approximating the prediction probability as: DISPLAYFORM3 With a reasonable dropout probability and number of samples, the MC dropout estimate can be considered an accurate estimate of the prediction uncertainty. Readers are referred to BID10 for a more detailed discussion. In this paper, we denote the certainty of prediction for a given sample (Cert(x_i^t)) as a vector providing the probability of the sample belonging to each class in equation 6. We formulate the problem at hand as a generic reinforcement learning problem. Each episode basically consists of a sequence of interactions between the suggested algorithm and a single data instance (i.e., sample). At each point, the current state is defined as the current realization of the feature vector (i.e., x_i^t) for a given instance. At each state, the set of valid actions consists of acquiring any feature that is not acquired yet (i.e., A_i^t = {j = 1 . . . d | k_{i,j}^t = 0}). In this setting, each action, along with the state transition and a reward defined in the following, characterizes an experience. We suggest incremental feature acquisition based on the value per unit cost of each feature. Here, the value of acquiring a feature is defined as the expected amount of change in the prediction uncertainty that acquiring the feature causes. Specifically, we define the value of each unknown feature as: DISPLAYFORM0 where r_{i,j}^t is the value of acquiring feature j for sample i at time step t. It can be interpreted as the expected change of the hypothesis due to acquiring each feature per unit of the cost. Other reinforcement learning based feature acquisition methods in the literature usually use the final prediction accuracy and feature acquisition costs as components of the reward function BID14 BID29 BID15. However, the reward function of equation 7 models the weighted change of the hypothesis after acquiring each feature. Consequently, it results in an incremental solution that selects the most informative feature to acquire at each point. As demonstrated in our experiments, this property is particularly beneficial when a single model is to be used under a budget determined at prediction-time or under any other termination condition that is not predefined. While it is possible to directly use the measure introduced in equation 7 to find features to be acquired at each time, it would be computationally expensive, because it requires exhaustively measuring the value function for all features at each time. Instead, in addition to a predictor model, we train an action value (i.e., feature value) function which estimates the gain of acquiring each feature based on the current context. For this purpose, we follow the idea of the deep Q-network (DQN) BID26. Briefly, DQN suggests end-to-end learning of the action-value function. It is achieved by exploring the space through taking actions using an ε-greedy policy, storing experiences in a replay memory, and gradually updating the value function used for exploration. Due to space limitations, readers are referred to BID26 for a more detailed discussion. FIG0 presents the network architecture of the proposed method for prediction and feature acquisition.
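Before describing that architecture, the certainty estimate of equation 6 and the certainty-change reward of equation 7 can be summarized in a short sketch. This is our own illustration under stated assumptions: the P-Network is a PyTorch classifier containing dropout layers, and the reward uses the L1 norm of the change in the class-probability vector, which is one plausible reading of the (elided) equation 7.

    import torch
    import torch.nn.functional as F

    def mc_dropout_certainty(p_network, x, n_samples=100):
        # Approximate p(y | x) by averaging softmax outputs over stochastic
        # dropout forward passes (MC dropout), as in equation 6.
        p_network.train()                      # keep dropout active at inference time
        with torch.no_grad():
            probs = torch.stack([F.softmax(p_network(x), dim=-1) for _ in range(n_samples)])
        return probs.mean(dim=0)               # certainty vector Cert(x)

    def feature_value(p_network, x, x_with_j, cost_j, n_samples=100):
        # Reward of equation 7: change in prediction certainty caused by acquiring
        # feature j, per unit of its cost; x_with_j is the feature vector after the
        # value of feature j has been filled in.
        cert_before = mc_dropout_certainty(p_network, x, n_samples)
        cert_after = mc_dropout_certainty(p_network, x_with_j, n_samples)
        return (cert_after - cert_before).abs().sum().item() / cost_j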
In this architecture, a predictor network (P-Network) is trained jointly with an action value network (Q-Network). The P-Network is responsible for making prediction and consists of dropout layers that are sampled in order to find the prediction uncertainty. The Q-Network estimates the value of each unknown feature being acquired. Here, we suggest sharing the representations learned from the P-Network with the Q-Network. Specifically, the activations of each layer in the P-Network serve as input to the adjacent layers of the Q-Network (see FIG0 . Note that, in order to increase model stability during the training, we do not allow back-propagation from Q-Network outputs to P-Network weights. We also explored other architectures and sharing methods including using fully-shared layers between P-and Q-Networks that are trained jointly or only sharing the first few layers. According to our experiments, the suggested sharing method of FIG0 is reasonably efficient, while introducing a minimal burden on the prediction performance. Algorithm 1 summarizes the procedures for cost-sensitive feature acquisition and training the networks. This algorithm is designed to operate on a stream of input instances, actively acquire features, make predictions, and optimize the models. In this algorithm, if any features are available for free we include them in the initial feature vector; otherwise, we start with all features not being available initially. Here, the feature acquisition is terminated when either a maximum budget is exceeded, a user-defined stopping function decides to stop, or there is no unknown feature left to acquire. It is worth noting that, in Algorithm 1, to simplify the presentation, we assumed that ground-truth labels are available at the beginning of each episode. However, in the actual implementation, we store experiences within an episode in a temporary buffer, excluding the label. At last, after the termination of the feature acquisition procedure, a prediction is being made and upon the availability of label for that sample, the temporary experiences along with the ground-truth label are pushed to the experience replay memory. In our experiments, for the simplicity of presentation, we assume that all features are independently acquirable at a certain cost, while in many scenarios, features are bundled and acquired together (e.g., certain clinical measurements). However, it should be noted that the current formulation presented in this paper allows for having bundled feature sets. In this case, each action would be acquiring each bundle and the reward function is evaluated for the acquisition of the bundle by measuring the variations of uncertainty before and after acquiring the bundle. In this paper, PyTorch numerical computational library BID28 is used for the implementation of the proposed method. The experiments took between a few hours to a couple days on a GPU server, depending on the experiment. Here, we explored fully connected multi-layer neural network architectures; however, the approach taken can be readily applied to other neural network and deep learning architectures. We normalize features prior to our experiments statistically (µ = 0, σ = 1) and impute missing features with zeros. Note that, in our implementation, for efficiency reasons, we use NaN (not a number) values to represent features that are not available and impute them with zeros during the forward/backward computation. 
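The following is a minimal PyTorch sketch of the joint architecture just described: a P-Network with dropout whose hidden activations feed the adjacent Q-Network layers, with the shared activations detached so that Q-Network gradients do not propagate back into the P-Network weights. Layer sizes and names are illustrative, not the authors' exact configuration.

    import torch
    import torch.nn as nn

    class PQNetwork(nn.Module):
        # P-Network (class predictor) sharing hidden representations with a
        # Q-Network that outputs one value per acquirable feature.
        def __init__(self, n_features, n_classes, hidden=128):
            super().__init__()
            self.p1 = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(0.5))
            self.p2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.5))
            self.p_out = nn.Linear(hidden, n_classes)
            # Q-layers take the current input plus the (detached) P activations
            self.q1 = nn.Sequential(nn.Linear(n_features + hidden, hidden), nn.ReLU())
            self.q2 = nn.Sequential(nn.Linear(hidden + hidden, hidden), nn.ReLU())
            self.q_out = nn.Linear(hidden, n_features)

        def forward(self, x):
            h1 = self.p1(x)
            h2 = self.p2(h1)
            logits = self.p_out(h2)
            # detach() blocks back-propagation from the Q-Network into P-Network weights
            g1 = self.q1(torch.cat([x, h1.detach()], dim=-1))
            g2 = self.q2(torch.cat([g1, h2.detach()], dim=-1))
            q_values = self.q_out(g2)
            return logits, q_values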
Cross-entropy and mean squared error (MSE) loss functions were used as the objective functions for the P and Q networks, respectively. Furthermore, the Adam optimization algorithm BID21 was used throughout this work for training the networks. We used dropout with a probability of 0.5 for all hidden layers of the P-Network and no dropout for the Q-Network. The target Q-Network was updated softly with a rate of 0.001. We update P, Q, and target Q networks every 1 + n_fe/100 experiences, where n_fe is the total number of features in an experiment. In addition, the replay memory size is set to store the 1000 × n_fe most recent experiences. The random exploration probability is decayed such that it eventually reaches a probability of 0.1. We determined these hyper-parameters using the validation set. Based on our experiments, the suggested solution is not very sensitive to these values, and any reasonable setting, given enough training iterations, would result in reasonable performance. Further implementation details for each specific experiment are provided in Section 4. We evaluated the proposed method on three different datasets: MNIST handwritten digits BID24, Yahoo Learning to Rank (LTRC) BID2, and a health informatics dataset. The MNIST dataset is used as it is a widely used benchmark. For this dataset, we assume an equal feature acquisition cost of 1 for all features. It is worth noting that we are considering the permutation-invariant setup for MNIST, where each pixel is a feature and spatial information is discarded. Regarding the LTRC dataset, we use feature acquisition costs provided by Yahoo! that correspond to the computational cost of each feature. Furthermore, we evaluated our method using a real-world health dataset for diabetes classification where feature acquisition costs and budgets are natural and essential considerations. The national health and nutrition examination survey (NAHNES) data (nha, 2018) was used for this purpose. A feature set including: (i) demographic information (age, gender, ethnicity, etc.), (ii) lab results (total cholesterol, triglyceride, etc.), (iii) examination data (weight, height, etc.), and (iv) questionnaire answers (smoking, alcohol, sleep habits, etc.) is used here. An expert with experience in medical studies was asked to suggest costs for each feature based on the overall financial burden, patient privacy, and patient inconvenience. Finally, the fasting glucose values were used to define three classes: normal, pre-diabetes, and diabetes based on standard threshold values. The final dataset consists of 92062 samples of 45 features. In the current study, we use reinforcement learning as an optimization algorithm, while processing data in a sequential manner. Throughout all experiments, we used fully-connected neural networks with ReLU non-linearity and dropout applied to hidden layers. We apply MC dropout sampling using 1000 evaluations of the predictor network for confidence measurements and finding the reward values. Meanwhile, 100 evaluations are used for prediction at test-time. We selected these values for our experiments as they showed stable prediction and uncertainty estimates. Each dataset was randomly split into 15% for testing, 15% for validation, and the rest for training. During the training and validation phase, we use the random exploration mechanism.
For comparison of the results with other work in the literature, as they are all offline methods, the random exploration is not used during the feature acquisition. However, we intuitively believe that in datasets with non-stationary distributions, it may be helpful to use random exploration as it helps to capture concept drift. Furthermore, we train the model multiple times for each experiment and average the outcomes. It is also worth noting that, as the proposed method is incremental, we continued feature acquisition until all features were acquired and reported the average accuracy corresponding to each feature acquisition budget. TAB0 presents a summary of datasets and network architectures used throughout the experiments. In this table, we report the number of hidden neurons at each network layer of the P and Q networks. For the Q-Network architecture, the number of neurons in each hidden layer is reported as the number of shared neurons from the P-Network plus the number of neurons specific to the Q-Network. FIG2 presents the accuracy versus acquisition cost curve for the MNIST dataset. Here, we compare the results of the proposed method (OL) with a feature acquisition method based on recurrent neural networks (RADIN) BID6, a tree-based feature acquisition method (GreedyMiser), and a recent work using reinforcement learning ideas (RL-Based) BID15. As can be seen from this figure, our cost-sensitive feature acquisition method achieves higher accuracies at a lower cost compared to other competitors. Regarding the RL-Based method, BID15, to make a fair comparison, we used similar network sizes and learning algorithms as with the OL method. Also, it is worth mentioning that the RL-based curve is the result of training many models with different cost-accuracy trade-off hyper-parameter values, while training the OL model gives us a complete curve. Accordingly, evaluating the method of BID15 took more than 10 times longer than OL. FIG3 presents the accuracy versus acquisition cost curve for the LTRC dataset. As LTRC is a ranking dataset, in order to have a fair comparison with other work in the literature, we have used the normalized discounted cumulative gain (NDCG) BID16 performance measure. In short, NDCG is the ratio of the discounted relevance achieved using a suggested ranking method to the discounted relevance achieved using the ideal ranking. As can be inferred from FIG3, the proposed method is able to achieve higher NDCG values using a much lower acquisition budget than competing approaches, including an exhaustive sensitivity-based method (Exhaustive) BID7, the method suggested by BID15 (RL-Based), a method using gating functions and adaptively trained random forests BID27 (Adapt-GBRT), and tree-based approaches in the literature including CSTC, Cronus BID3, and Early Exit. FIG4 shows a visualization of the OL feature acquisition on the diabetes dataset. In this figure, the y-axis corresponds to 50 random test samples and the x-axis corresponds to each feature. Here, warmer colors represent features that were acquired with more priority and colder colors represent less acquisition priority. It can be observed from this figure that OL acquires features based on the available context rather than having a static feature importance and ordering. It can also be seen that OL gives more priority to less costly and yet informative features such as demographics and examinations. Furthermore, FIG4 demonstrates the accuracy versus acquisition cost for the diabetes classification.
As can be observed from this figure, OL achieves a superior accuracy with a lower acquisition cost compared to other approaches. Here, we used the exhaustive feature query method as suggested by BID7 using sensitivity as the utility function, the method suggested by BID15 (RL-Based), as well as a recent paper using gating functions and adaptively trained random forests BID27 (Adapt-GBRT). In this section we show the effectiveness of three ideas suggested by this paper, i.e., using model uncertainty as a feature-value measure, representation sharing between the P and Q networks, and using MC-dropout as a measure of prediction uncertainty. Additionally, we study the influence of the available budget on the performance of the algorithm. In these experiments, we used the diabetes dataset. A comparison of the feature-value function suggested in this paper (OL) with a traditional feature-value function (RL-Based) was presented in FIG2 and FIG4. We implemented the RL-Based method such that it uses a similar architecture and learning algorithm as OL, while the reward function is simply the negative of feature costs for acquiring each feature and a positive value for making correct predictions. As can be seen from the comparison of these approaches, the reward function suggested in this paper results in more efficient feature acquisition. In order to demonstrate the importance of MC-dropout, we measured the average accuracy at each certainty value. Statistically, confidence values indicate the average accuracy of predictions BID12. For instance, if we measure the certainty of prediction for a group of samples to be 90%, we expect to correctly classify samples of that group 90% of the time. Figure 5 shows the average prediction accuracy versus the certainty of samples reported using the MC-dropout method (using 1000 samples) and directly using the softmax output values. As can be inferred from this figure, MC-dropout estimates are highly accurate, while softmax estimates are mostly over-confident and inaccurate. Note that the accuracy of the certainty estimates is crucially important to us, as any inaccuracy in these values results in inaccurate reward values. Figure 5b shows the accuracy versus cost curves that the suggested architecture achieves using the accurate MC-dropout certainty estimates and using the inaccurate softmax estimates. It can be seen from this figure that more accurate MC-dropout estimates are essential. Figure 6 demonstrates the speed of convergence using the suggested sharing between the P and Q networks (W/ Sharing) as well as not using the sharing architecture (W/O Sharing). Here, we use the normalized area under the accuracy-cost curve (AUACC) as a measure of acquisition performance at each episode. Please note that we adjust the number of hidden neurons such that the number of Q-Network parameters is the same for each corresponding layer between the two cases. As can be seen from this figure, the suggested representation sharing between the P and Q networks increases the speed of convergence. Figure 7 shows the performance of the OL method under various limited budgets during operation. Here, we report the accuracy-cost curves for 25%, 50%, 75%, and 100% of the budget required to acquire all features. As can be inferred from this figure, the suggested method is able to efficiently operate at different enforced budget constraints.
Figures 8a and 8b demonstrate the validation accuracy and AUACC values measured during the processing of the data stream at each episode for the MNIST and Diabetes datasets, respectively. As can be seen from this figure, as the algorithm observes more data samples, it achieves higher validation accuracy/AUACC values, and it eventually converges after a certain number of episodes. It should be noted that, in general, convergence in reinforcement learning setups is dependent on the training algorithm and parameters used. For instance, the random exploration strategy, the update condition, and the update strategy for the target Q network would influence the overall time behavior of the algorithm. In this paper, we use conservative and reasonable strategies, as reported in Section 3.2, that result in stable behavior across a wide range of experiments. In this paper, we proposed an approach for cost-sensitive learning in stream-based settings. We demonstrated that certainty estimation in neural network classifiers can be used as a viable measure for the value of features. Specifically, the variation of the model certainty per unit of cost is used as a measure of feature value. In this paradigm, a reinforcement learning solution is suggested which is efficient to train using a shared representation. The introduced method is evaluated on three different real-world datasets representing different applications: MNIST digit recognition, the Yahoo LTRC web ranking dataset, and diabetes prediction using health records. Based on the results, the suggested method is able to learn from data streams, make accurate predictions, and effectively reduce the prediction-time feature acquisition cost.
An online algorithm for cost-aware feature acquisition and prediction
315
scitldr
This paper revisits the problem of sequence modeling using convolutional architectures. Although both convolutional and recurrent architectures have a long history in sequence prediction, the current "default" mindset in much of the deep learning community is that generic sequence modeling is best handled using recurrent networks. The goal of this paper is to question this assumption. Specifically, we consider a simple generic temporal convolution network (TCN), which adopts features from modern ConvNet architectures such as a dilations and residual connections. We show that on a variety of sequence modeling tasks, including many frequently used as benchmarks for evaluating recurrent networks, the TCN outperforms baseline RNN methods (LSTMs, GRUs, and vanilla RNNs) and sometimes even highly specialized approaches. We further show that the potential "infinite memory" advantage that RNNs have over TCNs is largely absent in practice: TCNs indeed exhibit longer effective history sizes than their recurrent counterparts. As a whole, we argue that it may be time to (re)consider ConvNets as the default "go to" architecture for sequence modeling. Since the re-emergence of neural networks to the forefront of machine learning, two types of network architectures have played a pivotal role: the convolutional network, often used for vision and higher-dimensional input data; and the recurrent network, typically used for modeling sequential data. These two types of architectures have become so ingrained in modern deep learning that they can be viewed as constituting the "pillars" of deep learning approaches. This paper looks at the problem of sequence modeling, predicting how a sequence will evolve over time. This is a key problem in domains spanning audio, language modeling, music processing, time series forecasting, and many others. Although exceptions certainly exist in some domains, the current "default" thinking in the deep learning community is that these sequential tasks are best handled by some type of recurrent network. Our aim is to revisit this default thinking, and specifically ask whether modern convolutional architectures are in fact just as powerful for sequence modeling. Before making the main claims of our paper, some history of convolutional and recurrent models for sequence modeling is useful. In the early history of neural networks, convolutional models were specifically proposed as a means of handling sequence data, the idea being that one could slide a 1-D convolutional filter over the data (and stack such layers together) to predict future elements of a sequence from past ones BID20 BID30. Thus, the idea of using convolutional models for sequence modeling goes back to the beginning of convolutional architectures themselves. However, these models were subsequently largely abandoned for many sequence modeling tasks in favor of recurrent networks BID13. The reasoning for this appears straightforward: while convolutional architectures have a limited ability to look back in time (i.e., their receptive field is limited by the size and layers of the filters), recurrent networks have no such limitation. Because recurrent networks propagate forward a hidden state, they are theoretically capable of infinite memory, the ability to make predictions based upon data that occurred arbitrarily long ago in the sequence. 
This possibility seems to be realized even moreso for the now-standard architectures of Long ShortTerm Memory networks (LSTMs) BID21, or recent incarnations such as the Gated Recurrent Unit (GRU); these architectures aim to avoid the "vanishing gradient" challenge of traditional RNNs and appear to provide a means to actually realize this infinite memory. Given the substantial limitations of convolutional architectures at the time that RNNs/LSTMs were initially proposed (when deep convolutional architectures were difficult to train, and strategies such as dilated convolutions had not reached widespread use), it is no surprise that CNNs fell out of favor to RNNs. While there have been a few notable examples in recent years of CNNs applied to sequence modeling (e.g., the WaveNet BID40 and PixelCNN BID41 architectures), the general "folk wisdom" of sequence modeling prevails, that the first avenue of attack for these problems should be some form of recurrent network. The fundamental aim of this paper is to revisit this folk wisdom, and thereby make a counterclaim. We argue that with the tools of modern convolutional architectures at our disposal (namely the ability to train very deep networks via residual connections and other similar mechanisms, plus the ability to increase receptive field size via dilations), in fact convolutional architectures typically outperform recurrent architectures on sequence modeling tasks, especially (and perhaps somewhat surprisingly) on domains where a long effective history length is needed to make proper predictions. This paper consists of two main contributions. First, we describe a generic, baseline temporal convolutional network (TCN) architecture, combining best practices in the design of modern convolutional architectures, including residual layers and dilation. We emphasize that we are not claiming to invent the practice of applying convolutional architectures to sequence prediction, and indeed the TCN architecture here mirrors closely architectures such as WaveNet (in fact TCN is notably simpler in some respects). We do, however, want to propose a generic modern form of convolutional sequence prediction for subsequent experimentation. Second, and more importantly, we extensively evaluate the TCN model versus alternative approaches on a wide variety of sequence modeling tasks, spanning many domains and datasets that have typically been the purview of recurrent models, including word-and character-level language modeling, polyphonic music prediction, and other baseline tasks commonly used to evaluate recurrent architectures. Although our baseline TCN can be outperformed by specialized (and typically highly-tuned) RNNs in some cases, for the majority of problems the TCN performs best, with minimal tuning on the architecture or the optimization. This paper also analyzes empirically the myth of "infinite memory" in RNNs, and shows that in practice, TCNs of similar size and complexity may actually demonstrate longer effective history sizes. Our chief claim in this paper is thus an empirical one: rather than presuming that RNNs will be the default best method for sequence modeling tasks, it may be time to (re)consider ConvNets as the "go-to" approach when facing a new dataset or task in sequence modeling. In this section we highlight some of the key innovations in the history of recurrent and convolutional architectures for sequence prediction. 
Recurrent networks broadly refer to networks that maintain a vector of hidden activations, which are kept over time by propagating them through the network. The intuitive appeal of this approach is that the hidden state can act as a sort of "memory" of everything that has been seen so far in a sequence, without the need for keeping an explicit history. Unfortunately, such memory comes at a cost, and it is well-known that the naïve RNN architecture is difficult to train due to the exploding/vanishing gradient problem BID2.A number of solutions have been proposed to address this issue. More than twenty years ago, BID21 introduced the now-ubiquitous Long Short-Term Memory (LSTM) which uses a set of gates to explicitly maintain memory cells that are propagated forward in time. Other solutions or refinements include a simplified variant of LSTM, the Gated Recurrent Unit (GRU), peephole connections BID15, Clockwork RNN BID26 and recent works such as MI-RNN and the Dilated RNN BID7. Alternatively, several regularization techniques have been proposed to better train LSTMs, such as those based upon the properties of the RNN dynamical system BID43; more recently, strategies such as Zoneout BID28 and AWD-LSTM BID36 were also introduced to regularize LSTM in various ways, and have achieved exceptional in the field of language modeling. While it is frequently criticized as a seemingly "ad-hoc" architecture, LSTMs have proven to be extremely robust and is very hard to improve upon by other recurrent architectures, at least for general problems. BID23 concluded that if there were "architectures much better than the LSTM", then they were "not trivial to find". However, while they evaluated a variety of recurrent architectures with different combinations of components via an evolutionary search, they did not consider architectures that were fundamentally different from the recurrent ones. The history of convolutional architectures for time series is comparatively shorter, as they soon fell out of favor compared to recurrent architectures for these tasks, though are also seeing a resurgence in recent years. BID49 and BID4 studied the usage of time-delay networks (TDNNs) for sequences, one of the earliest local-connection-based networks in this domain. BID30 then proposed and examined the usage of CNNs on time-series data, pointing out that the same kind of feature extraction used in images could work well on sequence modeling with convolutional filters. Recent years have seen a re-emergence of convolutional models for sequence data. Perhaps most notably, the WaveNet BID40 ) applied a stacked convolutional architecture to model audio signals, using a combination of dilations BID52, skip connections, gating, and conditioning on context stacks; the WaveNet mode was additionally applied to a few other contexts, such as financial applications BID3. Non-dilated gated convolutions have also been applied in the context of language modeling. 
And finally, convolutional models have seen a recent adoption in sequence to sequence modeling and machine translations applications, such as the ByteNet BID24 and ConvS2S architectures BID14.Despite these successes, the general consensus of the deep learning community seems to be that RNNs (here meaning all RNNs including LSTM and its variants) are better suited to sequence modeling for two apparent reasons: 1) as discussed before, RNNs are theoretically capable of infinite memory; and 2) RNN models are inherently suitable for sequential inputs of varying length, whereas CNNs seem to be more appropriate in domains with fixed-size inputs (e.g., vision).With this as the context, this paper reconsiders convolutional sequence modeling in general, first introducing a simple general-purpose convolutional sequence modeling architecture that can be applied in all the same scenarios as an RNN (the architecture acts as a "drop-in" replacement for RNNs of any kind). We then extensively evaluate the performance of the architecture on tasks from different domains, focusing on domains and settings that have been used explicitly as applications and benchmarks for RNNs in the recent past. With regard to the specific architectures mentioned above (e.g. WaveNet, ByteNet, gated convolutional language models), the primary goal here is to describe a simple, application-independent architecture that avoids much of the extra specialized components of these architectures (gating, complex residuals, context stacks, or the encoder-decoder architectures of seq2seq models), and keeps only the "standard" convolutional components from most image architectures, with the restriction that the convolutions be causal. In several cases we specifically compare the architecture with and without additional components (e.g., gating elements), and highlight that it does not seem to substantially improve performance of the architecture across domains. Thus, the primary goal of this paper is to provide a baseline architecture for convolutional sequence prediction tasks, and to evaluate the performance of this model across multiple domains. In this section, we propose a generic architecture for convolutional sequence prediction, and generally refer to it as Temporal Convolution Networks (TCNs). We emphasize that we adopt this term not as a label for a truly new architecture, but as a simple descriptive term for this and similar architectures. The distinguishing characteristics of the TCN are that: 1) the convolutions in the architecture are causal, meaning that there is no information "leakage" between future and past; 2) the architecture can take a sequence of any length and map it to an output sequence of the same length, just as with an RNN. Beyond this, we emphasize how to build very long effective history sizes (i.e., the ability for the networks to look very far into the past to make a prediction) using a combination of very deep networks (augmented with residual layers) and dilated convolutions. Before defining the network structure, we highlight the nature of the sequence modeling task. We suppose that we are given a sequence of inputs x 0,..., x T, and we wish to predict some correspond- ing outputs y 0,..., y T at each time. The key constraint is that to predict the output y t for some time t, we are constrained to only use those inputs that have been previously observed: x 0,..., x t. 
Formally, a sequence modeling network is any function f: DISPLAYFORM0 DISPLAYFORM1 if it satisfies the causal constraint that y_t depends only on x_0, ..., x_t, and not on any "future" inputs x_{t+1}, ..., x_T. The goal of learning in the sequence modeling setting is to find the network f minimizing some expected loss between the actual outputs and predictions DISPLAYFORM2 where the sequences and outputs are drawn according to some distribution. This formalism encompasses many settings such as auto-regressive prediction (where we try to predict some signal given its past) by setting the target output to be simply the input shifted by one time step. It does not, however, directly capture domains such as machine translation, or sequence-to-sequence prediction in general, since in these cases the entire input sequence (including "future" states) can be used to predict each output (though the techniques can naturally be extended to work in such settings). As mentioned above, the TCN is based upon two principles: the fact that the network produces an output of the same length as the input, and the fact that there can be no leakage from the future into the past. To accomplish the first point, the TCN uses a 1D fully-convolutional network (FCN) architecture BID32, where each hidden layer is the same length as the input layer, and zero padding of length (kernel size − 1) is added to keep subsequent layers the same length as previous ones. To achieve the second point, the TCN uses causal convolutions, convolutions where a subsequent output at time t is convolved only with elements from time t and before in the previous layer. Graphically, the network is shown in FIG0. Put in a simple manner: DISPLAYFORM0 It is worth emphasizing that this is essentially the same architecture as the time delay neural network proposed nearly 30 years ago by BID49, with the sole tweak of zero padding to ensure equal sizes of all layers. However, a major disadvantage of this "naïve" approach is that in order to achieve a long effective history size, we need an extremely deep network or very large filters, neither of which were particularly feasible when the methods were first introduced. Thus, in the following sections, we describe how techniques from modern convolutional architectures can be integrated into the TCN to allow for both very deep networks and very long effective history. (Figure 2: A dilated causal convolution with dilation factors d = 1, 2, 4 and filter size k = 3. The receptive field is able to cover all values from the input sequence.) Through convolutional filters, as previously addressed, a simple causal convolution is only able to look back at a history with size linear in the depth of the network. This makes it challenging to apply the aforementioned causal convolution on sequence tasks, especially those requiring longer history. Our solution here, used previously for example in audio synthesis by BID40, is to employ dilated convolutions BID52 that enable an exponentially large receptive field. More formally, for a 1-D sequence input x ∈ R^n and a filter f: {0, ..., k − 1} → R, the dilated convolution operation F on element s of the sequence is defined as DISPLAYFORM0 where d is the dilation factor and k is the filter size. Dilation is thus equivalent to introducing a fixed step between every two adjacent filter taps. When taking d = 1, for example, a dilated convolution is trivially a normal convolution operation.
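A dilated causal convolution of this form can be obtained from a standard convolution by left-padding the input by (k − 1)d steps and trimming the same amount from the right, so that the output at time t depends only on inputs up to time t. The following PyTorch sketch is our own illustrative implementation of this idea, not reference code from the paper.

    import torch
    import torch.nn as nn

    class CausalConv1d(nn.Module):
        # Dilated causal convolution: the output at time t depends only on inputs
        # at times <= t. Input/output shape: (batch, channels, seq_len).
        def __init__(self, in_channels, out_channels, kernel_size, dilation=1):
            super().__init__()
            self.pad = (kernel_size - 1) * dilation        # effective history of this layer
            self.conv = nn.Conv1d(in_channels, out_channels, kernel_size,
                                  padding=self.pad, dilation=dilation)

        def forward(self, x):
            out = self.conv(x)
            # Conv1d pads both sides, so chop the trailing pad steps to keep the
            # output the same length as the input and strictly causal.
            return out[:, :, :-self.pad] if self.pad > 0 else out

    # Stacking layers with d = 1, 2, 4 and k = 3, as in Figure 2:
    x = torch.randn(8, 16, 100)
    stack = nn.Sequential(*[CausalConv1d(16, 16, kernel_size=3, dilation=d) for d in (1, 2, 4)])
    y = stack(x)   # shape (8, 16, 100); receptive field 1 + 2*(1 + 2 + 4) = 15 time steps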
Using larger dilations enables an output at the top level to represent a wider range of inputs, thus effectively expanding the receptive field of a ConvNet. This gives us two ways to increase the receptive field of the TCN: by choosing larger filter sizes k, and by increasing the dilation factor d, where the effective history of one such layer is (k − 1)d. As is common when using dilated convolutions, we increase d exponentially with the depth of the network (i.e., d = O(2^i) at level i of the network). This ensures that there is some filter that hits each input within the effective history, while also allowing for an extremely large effective history using deep networks. We provide an illustration in Figure 2. Using filter size k = 3 and dilation factors d = 1, 2, 4, the receptive field is able to cover all values from the input sequence. Proposed by BID19, residual functions have proven to be especially useful in effectively training deep networks. In a residual network, each residual block contains a branch leading out to a series of transformations F, whose outputs are added to the input x of the block: DISPLAYFORM0 This effectively allows for the layers to learn modifications to the identity mapping rather than the entire transformation, which has been repeatedly shown to benefit very deep networks. As the TCN's receptive field depends on the network depth n as well as filter size k and dilation factor d, stabilization of deeper and larger TCNs becomes important. For example, in a case where the prediction could depend on a history of size 2^12 and a high-dimensional input sequence, a network of up to 12 layers could be needed. Each layer, more specifically, consists of multiple filters for feature extraction. In our design of the generic TCN model, we therefore employed a generic residual module in place of a convolutional layer. The residual block for our baseline TCN is shown in FIG2 (a TCN residual block; a 1x1 convolution is added when residual input and output have different dimensions). Within a residual block, the TCN has 2 layers of dilated causal convolution and non-linearity, for which we used the rectified linear unit (ReLU) BID39. For normalization, we applied Weight Normalization BID45 to the filters in the dilated convolution (where we note that the filters are essentially vectors of size k × 1). In addition, a 2-D dropout BID46 layer was added after each dilated convolution for regularization: at each training step, a whole channel (in the width dimension) is zeroed out. However, whereas in standard ResNet the input is passed in and added directly to the output of the residual function, in TCN (and ConvNets in general) the input and output could have different widths. Therefore in our TCN, when the input-output widths disagree, we use an additional 1x1 convolution to ensure that element-wise addition ⊕ receives tensors of the same shape (see FIG2, 3b). Note that many further optimizations (e.g., gating, skip connections, context stacking as in audio generation using WaveNet) are possible in a TCN beyond what we described here. However, in this paper, we aim to present a generic, general-purpose TCN, to which additional twists can be added as needed. As we are going to show in Section 4, this general-purpose architecture is already able to outperform recurrent units like LSTM on a number of tasks by a good margin. There are several key advantages to a TCN model with the ingredients that we described above. • Parallelism.
Unlike in RNNs where the predictions for later timesteps must wait for their predecessors to complete, in a convolutional architecture these computations can be done in parallel since the same filter is used in each layer. Therefore, in training and evaluation, a (possibly long) input sequence can be processed as a whole in TCN, instead of serially as in RNN, which depends on the length of the sequence and could be less efficient.• Flexible receptive field size. With a TCN, we can change its receptive field size in multiple ways. For instance, stacking more dilated (causal) convolutional layers, using larger dilation factors, or increasing the filter size are all viable options (with possibly different interpretations). TCN is thus easy to tune and adapt to different domains, since we now can directly control the size of the model's memory.• Stable gradients. Unlike recurrent architectures, TCN has a backpropagation path that is different from the temporal direction of the sequence. This enables it to avoid the problem of exploding/vanishing gradients, which is a major issue for RNNs (and which led to the development of LSTM, GRU, HF-RNN, etc.).• Low memory requirement for training. In a task where the input sequence is long, a structure such as LSTM can easily use up a lot of memory to store the partial for backpropagation (e.g., the for each gate of the cell). However, in TCN, the backpropagation path only depends on the network depth and the filters are shared in each layer, which means that in practice, as model size or sequence length gets large, TCN is likely to use less memory than RNNs. We also summarize two disadvantages of using TCN instead of RNNs.• Data storage in evaluation. In evaluation/testing, RNNs only need to maintain a hidden state and take in a current input x t in order to generate a prediction. In other words, a "summary" of the entire history is provided by the fixed-length set of vectors h t, which means that the actual observed sequence can be discarded (and indeed, the hidden state can be used as a kind of encoder for all the observed history). In contrast, the TCN still needs to take in a sequence with non-trivial length (precisely the effective history length) in order to predict, thus possibly requiring more memory during evaluation.• Potential parameter change for a transfer of domain. Different domains can have different requirements on the amount of history the model needs to memorize. Therefore, when transferring a model from a domain where only little memory is needed (i.e., small k and d) to a domain where much larger memory is required (i.e., much larger k and d), TCN may perform poorly for not having a sufficiently large receptive field. We want to emphasize, though, that we believe the notable lack of "infinite memory" for a TCN is decidedly not a practical disadvantage, since, as we show in Section 4, the TCN method actually outperforms RNNs in terms of the ability to deal with long temporal dependencies. In this section, we conduct a series of experiments using the baseline TCN (described in section 3) and generic RNNs (namely LSTMs, GRUs, and vanilla RNNs). These experiments cover tasks and datasets from various domains, aiming to test different aspects of a model's ability to learn sequence modeling. In several cases, specialized RNN models, or methods with particular forms of regularization can indeed vastly outperform both generic RNNs and the TCN on particular problems, which we highlight when applicable. 
But as a general-purpose architecture, we believe the experiments make a compelling case for the TCN as the "first attempt" approach for many sequential problems. All experiments reported in this section used the same TCN architecture, just varying the depth of the network and occasionally the kernel size. We use an exponential dilation d = 2^n for layer n in the network, and the Adam optimizer BID25 with learning rate 0.002 for TCN (unless otherwise noted). We also empirically find that gradient clipping helped training convergence of TCN, and we pick the maximum norm to clip from [0.3, 1]. When training recurrent models, we use a simple grid search to find a good set of hyperparameters (in particular, optimizer, recurrent dropout p ∈ [0.05, 0.5], the learning rate, gradient clipping, and initial forget-gate bias), while keeping the network around the same size as TCN. No other optimizations, such as gating mechanisms (see Appendix D) or highway networks, were added to TCN or the RNNs. The hyperparameters we use for TCN on different tasks are reported in TAB1 in Appendix B. In addition, we conduct a series of controlled experiments to investigate the effects of filter size and residual function on the TCN's performance. These can be found in Appendix C. In this section we highlight the general performance of generic TCNs vs generic LSTMs for a variety of domains from the sequential modeling literature. A complete description of each task, as well as references to some prior works that evaluated them, is given in Appendix A. In brief, the tasks we consider are: the adding problem, sequential MNIST, permuted MNIST (P-MNIST), the copy memory task, the Nottingham and JSB Chorales polyphonic music tasks, Penn Treebank (PTB), Wikitext-103 and LAMBADA word-level language modeling, as well as PTB and text8 character-level language modeling. The results are summarized in TAB0. We will highlight many of these below, and want to emphasize that for several tasks the baseline RNN architectures are still far from the state of the art (see TAB3), but in total the results make a strong case that the TCN architecture, as a generic sequence modeling framework, is often superior to generic RNN approaches. We now consider several of these experiments in detail, generally distinguishing between the "recurrent benchmark" tasks designed to show the limitations of networks for sequence modeling (adding problem, sequential & permuted MNIST, copy memory), and the "applied" tasks (polyphonic music and language modeling). We first compare the results of the TCN architecture to those of RNNs on the toy baseline tasks that have been frequently used to evaluate sequential modeling BID21 BID34 BID43 BID29 BID11 BID53 BID28 BID50 BID1. The Adding Problem. Convergence results for the adding problem, for problem sizes T = 200, 400, 600, are shown in FIG3; all models were chosen to have roughly 70K parameters. In all three cases, TCNs quickly converged to a virtually perfect solution (i.e., an MSE loss very close to 0). LSTMs and vanilla RNNs performed significantly worse, while on this task GRUs also performed quite well, even though their convergence was slightly slower than TCNs. Sequential MNIST and P-MNIST. Results on sequential and permuted MNIST, run over 10 epochs, are shown in Figures 5a and 5b; all models were picked to have roughly 70K parameters. For both problems, TCNs substantially outperform the alternative architectures, both in terms of convergence and final accuracy. Copy Memory Task. Finally, FIG5 shows the results of the different methods (with roughly the same size) on the copy memory task.
Again, the TCNs quickly converge to correct answers, while the LSTM and GRU simply converge to the same loss as predicting all zeros. In this case we also compare to the recently-proposed EURNN BID22, which was highlighted to perform well on this task. While both perform well for sequence length T = 500, the TCN again has a clear advantage for T = 1000 and T = 2000 (in terms of both loss and convergence). Next, we compare the of the TCN architecture to recurrent architectures on 6 different real datasets in polyphonic music as well as word-and character-level language modeling. These are areas where sequence modeling has been used most frequently. As domains where there is considerable practical interests, there have also been many specialized RNNs developed for these tasks (e.g., BID53 ; BID18 ; BID28 ; BID16 ; BID17 ; BID36). We mention some of these comparisons when useful, but the primary goal here is to compare the generic TCN model to other generic RNN architectures, so we focus mainly on these comparisons. On the Nottingham and JSB Chorales datasets, the TCN with virtually no tuning is again able to beat the other models by a considerable margin (see TAB0), and even outperforms some improved recurrent models for this task such as HF-RNN BID5 and Diagonal RNN BID47. Note however that other models such as the Deep Belief Net LSTM BID48 perform substantially better on this task; we believe this is likely due to the fact that the datasets involved in polyphonic music are relatively small, and thus the right regularization method or generative modeling procedure can improve performance significantly. This is largely orthogonal to the RNN/TCN distinction, as a similar variant of TCN may well be possible. Word-level Language Modeling. Language modeling remains one of the primary applications of recurrent networks in general, where many recent works have been focusing on optimizing the usage of LSTMs (see BID28 ; BID36). In our implementation, we follow standard practices such as tying the weights of encoder and decoder layers for both TCN and RNNs BID44, which significantly reduces the number of parameters in the model. When training the language modeling tasks, we use SGD optimizer with annealing learning rate (by a factor of 0.5) for TCN and RNNs. Results on word-level language modeling are reported in TAB0. With a fine-tuned LSTM (i.e., with recurrent and embedding dropout, etc.), we find LSTM can outperform TCN in perplexity on the Penn TreeBank (PTB) dataset, where the TCN model still beats both GRU and vanilla RNN. On the much larger Wikitext-103 corpus, however, without performing much hyperparameter search (due to lengthy training process), we still observe that TCN outperforms the state of the art LSTM (48.4 in perplexity) by BID16 (without continuous cache pointer; see TAB3). The same superiority is observed on the LAMBADA test BID42, where TCN achieves a much lower perplexity than its recurrent counterparts in predicting the last word based on a very long context (see Appendix A). We will further analyze this in section 4.4.Character-level Language Modeling. The of applying TCN and alternative models on PTB and text8 data for character-level language modeling are shown in TAB0, with performance measured in bits per character (bpc). While beaten by the state of the art (see TAB3), the generic TCN outperforms regularized LSTM and GRU as well as methods such as Norm-stabilized LSTM BID27. 
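The weight tying between the encoder (embedding) and decoder layers mentioned above for the word-level language models amounts to sharing one weight matrix; a generic sketch follows. It assumes the embedding dimension equals the final feature dimension of the sequence model, and the class name is ours.

    import torch.nn as nn

    class TiedLM(nn.Module):
        """Language-model head whose output projection shares its weight matrix
        with the input embedding, substantially reducing the parameter count."""
        def __init__(self, vocab_size, emb_size, seq_model):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, emb_size)
            self.seq_model = seq_model                      # a TCN (or RNN) over embeddings
            self.decoder = nn.Linear(emb_size, vocab_size)
            self.decoder.weight = self.embedding.weight     # the tied weights

        def forward(self, tokens):                          # tokens: (batch, time)
            x = self.embedding(tokens).transpose(1, 2)      # (batch, emb, time) for convolutions
            h = self.seq_model(x).transpose(1, 2)           # back to (batch, time, emb)
            return self.decoder(h)                          # logits over the vocabulary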
Moreover, we note that using a filter size of k ≤ 4 works better than larger filter sizes in character-level language modeling, which suggests that capturing short history is more important than longer dependencies in these tasks. Finally, one of the important reasons why RNNs have been preferred over CNNs for general sequence modeling is that theoretically, recurrent architectures are capable of an infinite memory. We therefore attempt to study here how much memory TCN and LSTM/GRU are able to actually "backtrack", via the copy memory task and the LAMBADA language modeling task. The copy memory task is a simple but perfect task to examine a model's ability to pick up its memory from a (possibly) distant past (by varying the value of sequence length T). However, different from the setting in Section 4.2, in order to compare the for different sequence lengths, here we only report the accuracy on the last 10 elements of the output sequence. We used a model size of 10K for both TCN and RNNs. The are shown in FIG6. TCNs consistently converge to 100% accuracy for all sequence lengths, whereas it is increasingly challenging for recurrent models to memorize as T grows (with accuracy converging to that of a random guess). LSTM's accuracy quickly falls below 20% for T ≥ 50, which suggests that instead of infinite memory, LSTMs are only good at recalling a short history instead. This observation is also backed up by the experiments of TCN on the LAMBADA dataset, which is specifically designed to test a model's textual understanding in a broader discourse. The objective of LAMBADA dataset is to predict the last word of the target sentence given a sufficiently long context (see Appendix A for more details). Most of the existing models fail to guess accurately on this task. As shown in TAB0, TCN outperforms LSTMs by a significant margin in perplexity on LAMBADA (with a smaller network and virtually no tuning).These indicate that TCNs, despite their apparent finite history, in practice maintain a longer effective history than their recurrent counterparts. We would like to emphasize that this empirical observation does not contradict the good that prior works have achieved using LSTM, such as in language modeling on PTB. In fact, the very success of n-gram models BID6 suggested that language modeling might not need a very long memory, a also reached by prior works such as BID12. In this work, we revisited the topic of modeling sequence predictions using convolutional architectures. We introduced the key components of the TCN and analyzed some vital advantages and disadvantages of using TCN for sequence predictions instead of RNNs. Further, we compared our generic TCN model to the recurrent architectures on a set of experiments that span a wide range of domains and datasets. Through these experiments, we have shown that TCN with minimal tuning can outperform LSTM/GRU of the same model size (and with standard regularizations) in most of the tasks. Further experiments on the copy memory task and LAMBADA task revealed that TCNs actually has a better capability for long-term memory than the comparable recurrent architectures, which are commonly believed to have unlimited memory. It is still important to note that, however, we only presented a generic architecture here, with components all coming from standard modern convolutional networks (e.g., normalization, dropout, residual network). 
And indeed, on specific problems, the TCN model can still be beaten by some specialized RNNs that adopt carefully designed optimization strategies. Nevertheless, we believe the experiment in Section 4 might be a good signal that instead of considering RNNs as the "default" methodology for sequence modeling, convolutional networks too, can be a promising and powerful toolkit in studying time-series data. The Adding Problem: In this task, each input consists of a length-n sequence of depth 2, with all values randomly chosen in, and the second dimension being all zeros expect for two elements that are marked by 1. The objective is to sum the two random values whose second dimensions are marked by 1. Simply predicting the sum to be 1 should give an MSE of about 0.1767. First introduced by BID21, the addition problem have been consistently used as a pathological test for evaluating sequential models BID43 BID29 BID53 BID1.Sequential MNIST & P-MNIST: Sequential MNIST is frequently used to test a recurrent network's ability to combine its information from a long memory context in order to make classification prediction BID29 BID53 BID11 BID28 BID22. In this task, MNIST BID31 images are presented to the model as a 784 × 1 sequence for digit classification In a more challenging setting, we also permuted the order of the sequence by a random (fixed) order and tested the TCN on this permuted MNIST (P-MNIST) task. Copy Memory Task: In copy memory task, each input sequence has length T + 20. The first 10 values are chosen randomly from digit with the rest being all zeros, except for the last 11 entries which are marked by 9 (the first "9" is a delimiter). The goal of this task is to generate an output of same length that is zero everywhere, except the last 10 values after the delimiter, where the model is expected to repeat the same 10 values at the start of the input. This was used by prior works such as BID1; BID50 BID22; but we also extended the sequence lengths to up to T = 2000.JSB Chorales: JSB Chorales dataset BID0 ) is a polyphonic music dataset consisting of the entire corpus of 382 four-part harmonized chorales by J. S. Bach. In a polyphonic music dataset, each input is a sequence of elements having 88 dimensions, representing the 88 keys on a piano. Therefore, each element x t is a chord written in as binary vector, in which a "1" indicates a key pressed. 2 is a collection of 1200 British and American folk tunes. Nottingham is a much larger dataset than JSB Chorales. Along with JSB Chorales, Nottingham has been used in a number of works that investigated recurrent models' applicability in polyphonic music BID17 BID9, and the performance for both tasks are measured in terms of negative log-likelihood (NLL) loss. PennTreebank: We evaluated TCN on the PennTreebank (PTB) dataset BID33 ), for both character-level and word-level language modeling. When used as a character-level language corpus, PTB contains 5059K characters for training, 396K for validation and 446K for testing, with an alphabet size of 50. When used as a word-level language corpus, PTB contains 888K words for training, 70K for validation and 79K for testing, with vocabulary size 10000. This is a highly studied dataset in the field of language modeling BID38 BID28 BID36, with exceptional have been achieved by some highly optimized RNNs. Wikitext-103: Wikitext-103 BID35 ) is almost 110 times as large as PTB, featuring a vocabulary size of about 268K. 
The dataset contains 28K Wikipedia articles (about 103 million words) for training, 60 articles (about 218K words) for validation and 60 articles (246K words) for testing. This is a more representative (and realistic) dataset than PTB as it contains a much larger vocabulary, including many rare vocabularies. LAMBADA: Introduced by BID42, LAMBADA (LA nguage Modeling Boadened to Account for Discourse Aspects) is a dataset consisting of 10K passages extracted from novels, with on average 4.6 sentences as context, and 1 target sentence whose last word is to be predicted. This dataset was built so that human can guess naturally and perfectly when given the context, but would fail to do so when only given the target sentence. Therefore, LAMBADA is a very challenging dataset that evaluates a model's textual understanding and ability to keep track of information in the broader discourse. Here is an example of a test in the LAMBADA dataset, where the last word "miscarriage" is to be predicted (which is not in the context):Context: "Yes, I thought I was going to lose the baby.""I was scared too." he stated, sincerity flooding his eyes. " You were?""Yes, of course. Why do you even ask?""This baby wasn't exactly planned for." Target Sentence: "Do you honestly think that I would want you to have a "Target Word: miscarriage This dataset was evaluated in prior works such as BID42; BID16. In general, better on LAMBADA indicate that a model is better at capturing information from longer and broader context. The training data for LAMBADA is the full text of 2,662 novels with more than 200M words 3, and the vocabulary size is about 93K.text8: We also used text8 4 dataset for character level language modeling BID37. Compared to PTB, text8 is about 20 times as large, with about 100 million characters from Wikipedia (90M for training, 5M for validation and 5M for testing). The corpus contains 27 unique alphabets. In this supplementary section, we report in a table (see TAB1) the hyperparameters we used when applying the generic TCN model on the different tasks/datasets. The most important factor for picking parameters is to make sure that the TCN has a sufficiently large receptive field by choosing k and n that can cover the amount of context needed for the task. As previously mentioned in Section 4, the number of hidden units was chosen based on k and n such that the model size is approximately at the same level as the recurrent models. In the table above, a gradient clip of N/A means no gradient clipping was applied. However, in larger tasks, we empirically found that adding a gradient clip value (we randomly picked from [0.2, 1]) helps the training convergence. We also report the parameter setting for LSTM in TAB2. These values are picked from hyperparameter search for LSTMs that have up to 3 layers, and the optimizers are chosen from {SGD, Adam, RMSprop, Adagrad}.GRU hyperparameters were chosen in a similar fashion, but with more hidden units to keep the total model size approximately the same (since a GRU cell is smaller). As previously noted, TCN can still be outperformed by optimized RNNs in some of the tasks, whose are summarized in TAB3 below. The same TCN architecture is used across all tasks. Note that the size of the SoTA model may be different from the size of the TCN. 
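The synthetic benchmarks specified in Appendix A above can be generated in a few lines; the sketch below follows those descriptions (two-channel inputs for the adding problem, and the length-(T + 20) layout with a "9" delimiter for the copy memory task). Where the text is ambiguous we assume values in [0, 1) for the adding problem and memorized digits drawn from 1-8.

    import numpy as np

    def adding_problem_batch(batch_size, seq_len):
        """Depth-2 sequences: channel 0 holds random values, channel 1 marks two
        positions with 1; the target is the sum of the two marked values."""
        values = np.random.rand(batch_size, seq_len)
        markers = np.zeros((batch_size, seq_len))
        for i in range(batch_size):
            idx = np.random.choice(seq_len, size=2, replace=False)
            markers[i, idx] = 1.0
        x = np.stack([values, markers], axis=1)             # (batch, 2, seq_len)
        y = (values * markers).sum(axis=1)                   # (batch,)
        return x, y

    def copy_memory_batch(batch_size, T):
        """Inputs of length T + 20: ten random digits, T zeros, then eleven 9s
        (the first 9 is the delimiter). The target repeats the ten digits at the
        very end and is zero everywhere else."""
        digits = np.random.randint(1, 9, size=(batch_size, 10))
        x = np.zeros((batch_size, T + 20), dtype=np.int64)
        x[:, :10] = digits
        x[:, -11:] = 9
        y = np.zeros_like(x)
        y[:, -10:] = digits
        return x, y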
[Recovered fragment of TAB3 (generic TCN vs. state of the art): word-level LAMBADA (ppl): TCN 1279 with 56M parameters vs. 138 with >100M for the Neural Cache Model (Large) BID16; character-level PTB (bpc): TCN 1.35 with 3M vs. 1.22 with 14M for the 2-LayerNorm HyperLSTM BID18; character-level text8 (bpc): TCN 1.45 with 4.6M vs. 1.29 with >12M for the HM-LSTM BID10.]
In this section we briefly study, via controlled experiments, the effect of filter size and the residual block on the TCN's ability to model different sequential tasks. FIG7 shows the results of this ablative analysis. We kept the model size and the depth of the networks exactly the same within each experiment so that the dilation factor is controlled. We conducted the experiment on three very different tasks: the copy memory task, permuted MNIST (P-MNIST), as well as word-level PTB language modeling. Through these experiments, we empirically confirm that both filter sizes and residuals play important roles in the TCN's capability of modeling potentially long dependencies. In both the copy memory and the permuted MNIST task, we observed faster convergence and better results for larger filter sizes (e.g., in the copy memory task, a filter size k ≤ 3 led to only suboptimal convergence). In word-level PTB, we find that a filter size of k = 3 works best. This is not a complete surprise, since a size-k filter on the inputs is analogous to a k-gram model in language modeling. Results of the control experiments on the residual function are shown in FIG7, 8e and 8f. In all three scenarios, we observe that the residual stabilizes training by bringing faster convergence as well as better final results, compared to a TCN with the same model size but no residual block.
One component that has been shown to be effective in adapting a TCN to language modeling is the gating mechanism within the residual block, which was used in works such as BID12. In this section, we empirically evaluate the effects of adding gated units to the TCN. We replace the ReLU within the TCN residual block with a gating mechanism, represented by an elementwise product between two convolutional layers, with one of them also passing through a sigmoid function σ(x). Prior works such as BID12 have used similar gating to control the path through which information flows in the network, and achieved strong performance on language modeling tasks. Through these comparisons, we notice that gating components do further improve the TCN results on certain language modeling datasets like PTB, which agrees with prior work. However, we do not observe such benefits to exist in general on sequence prediction tasks, such as on the polyphonic music datasets and the simpler benchmark tasks requiring longer-term memory. For example, on the copy memory task with T = 1000, we find that the gating mechanism deteriorates the convergence of the TCN to a suboptimal result that is only slightly better than random guessing.
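The gating variant evaluated in this appendix, an elementwise product of two convolution branches with one passed through a sigmoid in place of the ReLU, can be written as a small module; this is a generic GLU-style block in the spirit of the description rather than the exact implementation.

    import torch
    import torch.nn as nn

    class GatedCausalBlock(nn.Module):
        """Dilated causal block whose activation is conv_a(x) * sigmoid(conv_b(x)),
        replacing the ReLU of the plain residual block."""
        def __init__(self, channels, kernel_size, dilation):
            super().__init__()
            self.left_pad = (kernel_size - 1) * dilation
            self.conv_a = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
            self.conv_b = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

        def forward(self, x):                                # x: (batch, channels, time)
            xp = nn.functional.pad(x, (self.left_pad, 0))    # left padding keeps causality
            gated = self.conv_a(xp) * torch.sigmoid(self.conv_b(xp))
            return gated + x                                 # residual connection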
We argue that convolutional networks should be considered the default starting point for sequence modeling tasks.
316
scitldr
Deep neural networks work well at approximating complicated functions when provided with data and trained by gradient descent methods. At the same time, there is a vast amount of existing functions that programmatically solve different tasks in a precise manner eliminating the need for training. In many cases, it is possible to decompose a task to a series of functions, of which for some we may prefer to use a neural network to learn the functionality, while for others the preferred method would be to use existing black-box functions. We propose a method for end-to-end training of a base neural network that integrates calls to existing black-box functions. We do so by approximating the black-box functionality with a differentiable neural network in a way that drives the base network to comply with the black-box function interface during the end-to-end optimization process. At inference time, we replace the differentiable estimator with its external black-box non-differentiable counterpart such that the base network output matches the input arguments of the black-box function. Using this ``Estimate and Replace'' paradigm, we train a neural network, end to end, to compute the input to black-box functionality while eliminating the need for intermediate labels. We show that by leveraging the existing precise black-box function during inference, the integrated model generalizes better than a fully differentiable model, and learns more efficiently compared to RL-based methods. End-to-end supervised learning with deep neural networks (DNNs) has taken the stage in the past few years, achieving state-of-the-art performance in multiple domains including computer vision BID25, natural language processing BID23 BID10, and speech recognition BID29. Many of the tasks addressed by DNNs can be naturally decomposed to a series of functions. In such cases, it might be advisable to learn neural network approximations for some of these functions and use precise existing functions for others. Examples of such tasks include Semantic Parsing and Question Answering. Since such a decomposition relies partly on precise functions, it may lead to a superior solution compared to an approximated one based solely on a learned neural model. Decomposing a solution into trainable networks and existing functions requires matching the output of the networks to the input of the existing functions, and vice-versa. The input and output are defined by the existing functions' interface. We shall refer to these functions as black-box functions (bbf), focusing only on their interface. For example, consider the question: "Is 7.2 greater than 4.5?" Given that number comparison is a solved problem in symbolic computation, a natural solution would be to decompose the task into a two-step process of (i) converting the natural language to an executable program, and (ii) executing the program on an arithmetic module. While a DNN may be a good fit forWe propose an alternative approach called Estimate and Replace that finds a differentiable function approximation, which we term black-box estimator, for estimating the black-box function. We use the black-box estimator as a proxy to the original black-box function during training, and by that allow the learnable parts of the model to be trained using gradient-based optimization. We compensate for not using any intermediate labels to direct the learnable parts by using the black-box function as an oracle for training the black-box estimator. 
During inference, we replace the black-box estimator with the original non-differentiable black-box function. End-to-end training of a solution composed of trainable components and black-box functions poses several challenges that we address in this work: coping with non-differentiable black-box functions, fitting the network to call these functions with the correct arguments, and doing so without any intermediate labels. Two more challenges are the lack of prior knowledge on the distribution of inputs to the black-box function, and the use of gradient-based methods when the function approximation is near perfect and gradients are extremely small. This work is organized as follows: In Section 2, we formulate the problem of decomposing the task to include calls to a black-box function. Section 3 describes the network architecture and training procedures. In Section 4, we present experiments and a comparison to Policy Gradient-based RL and to fully neural models. We further discuss the potential and benefits of the modular nature of our approach in Section 6.
In this work, we consider the problem of training a DNN model to interact with black-box functions to achieve a predefined task. Formally, given a labeled pair (x, y), such that some target function h*: X → Y satisfies h*(x) = y, we assume that there exist an argument extractor h_arg: X → A and a black-box function h_bbf: A → Y such that h*(x) = h_bbf(h_arg(x)), where n is the number of arguments in the black-box input domain A = (A_1, ..., A_n). The domains {A_i} can be structures of discrete, continuous, and nested variables. The problem then is to fit h* given a dataset {(x_j, y_j) | j ≤ D} and given oracle access to h_bbf. Here h_arg: X → (A_1, ..., A_n) is an argument extractor function, which takes as input x and outputs a tuple of arguments (a_1, ..., a_n), and h_bbf is a black-box function, which takes these arguments and outputs the final result. Importantly, we require no sample (x, (a_1, ..., a_n), y) for which the intermediate black-box interface argument labels are available. We note that this formulation allows the use of multiple functions simultaneously, e.g., by defining an additional argument that specifies the "correct" function, or a set of arguments that specify ways to combine the functions' results.
In this section we present the Estimate and Replace approach, which aims to address the problem defined in Section 2. The approach enables training a DNN that interacts with non-differentiable black-box functions (bbf), as illustrated in FIG0 (a). The complete model, termed EstiNet, is a composition of two modules, the argument extractor and the black-box estimator, which learn h_arg and h_bbf respectively. The black-box estimator sub-network serves as a differentiable estimator of the black-box function during end-to-end gradient-based optimization. We encourage the estimator to directly fit the black-box functionality by using the black-box function as a label generator during training. At inference time, we replace the estimator with its black-box function counterpart, and let this hybrid solution solve the end-to-end task at hand in an accurate and efficient way. In this way, we eliminate the need for intermediate labels. We refer to running a forward pass with the black-box estimator as test mode and running a forward pass with the black-box function as inference mode. By leveraging the black-box function in this mode, EstiNet shows better generalization than an end-to-end neural network model.
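A minimal sketch of the composition just described: an argument extractor feeds either the differentiable estimator (during training and test mode) or the exact, non-differentiable black-box function (during inference mode), with a simple argmax adapter turning soft selections into hard ones. The class and adapter here are ours and purely schematic; the real EstiNet modules and adaptation functions are task-specific.

    import torch
    import torch.nn as nn

    class EstimateAndReplace(nn.Module):
        def __init__(self, argument_extractor, estimator, black_box_fn):
            super().__init__()
            self.argument_extractor = argument_extractor    # learns h_arg
            self.estimator = estimator                      # differentiable stand-in for h_bbf
            self.black_box_fn = black_box_fn                # exact, non-differentiable h_bbf

        def forward(self, x, mode="train"):
            args = self.argument_extractor(x)               # soft argument distributions
            if mode == "inference":
                hard_args = args.argmax(dim=-1)             # adapter: soft -> hard selection
                return self.black_box_fn(hard_args)         # exact function, no gradients flow
            return self.estimator(args)                     # differentiable path (train / test mode)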
In addition, EstiNet suggests a modular architecture with the added benefits of module reuse and model interpretability. Adapters EstiNet uses an adaptation function to adapt the argument extractor's output to the blackbox function input, and to adapt black-box function's output to the appropriate final output label format (see FIG0). For example, EstiNet uses such a function to convert soft classification distributions to hard selections, or to map classes of text token to concrete text. The modular nature of the EstiNet model presents a unique training challenge: EstiNet is a modular architecture where each of its two modules, namely the argument extractor and the black-box estimator is trained using its own input-label pair samples and loss function. We optimize EstiNet model parameters with two distinct loss functions-the target loss and the black-box loss. Specifically, we optimize the argument extractor's parameters with respect to the target loss using the task's dataset during end-to-end training. We optimize the black-box estimator's parameters with respect to the black-box loss while training it on the black-box dataset:The black-box dataset We generate input-output pairs for the black-box dataset by sending an input sample to the black-box function and recording its output as the label. We experimented in generating input samples in two ways: offline sampling-in which we sample from an a-priori black-box input distribution, or from a uniform distribution in absence of such; and online sampling-in which we use the output of the argument extractor module during a forward pass as an input to the black-box function, using an adaptation function as needed for recording the output (see FIG0). Having two independent datasets and loss functions suggest multiple training procedure options. In the next section we discuss the most prominent ones along with their advantages and disadvantages. We provide empirical evaluation of these procedures in Section 4.Offline Training In offline training we first train the black-box estimator using offline sampling. We then fix its parameters and load the trained black-box estimator into the EstiNet model and train the argument extractor with the task's dataset and target loss function. A disadvantage of offline training is noisy training due to the distribution difference between the offline black-box a-priori dataset and the actual posterior inputs that the argument extractor computes given the task's dataset during training. That is, the distribution of the dataset with which we trained the black-box estimator is different than the distribution of input it receives during the target loss training. Online Training In online training we aim to solve the distribution difference problem by jointly training the argument extractor and the black-box estimator using the target loss and black-box loss respectively. Specifically, we train the black-box estimator with the black-box dataset generated via online sampling during the training process.1 FIG0 (b) presents a schematic diagram of the online training procedure. We note that the online training procedure suffers from a cold start problem of the argument extractor. Initially, the argument extractor generates noisy input for the black-box function, which prevents it from generating meaningful labels for the black-box estimator. Hybrid Training In hybrid training we aim to solve the cold start problem by first training the black-box estimator offline, but refraining from freezing its parameters. 
We load the estimator into the EstiNet model and continue to train it in parallel with the argument extractor as in online training. In all of the above training procedures, we essentially replace the use of intermediate labels with the use of a black-box dataset for implicitly training the argument extractor via back-propagation. As a consequence, if the gradients of the black-box estimator are small, it will make it difficult for the argument extractor to learn. Furthermore, if the black-box estimator is a classifier, it tends to grow overly confident as it trains, assigning very high probabilities to specific answers and very low probabilities for the rest BID21. Since these classification functions are implemented with a softmax layer, output values that are close to the function limits in extremely small gradients. Meaning that in the scenario where the estimator reaches local optima and is very confident, its gradient updates become small. Through the chain rule of back-propagation, this means that even if the argument extractor is not yet at local optima, its gradient updates become small as well, which complicates training. 1 We note that this problem is reminiscent of, but different from, Multi-Task Learning, which involves training the same parameters using multiple loss functions. In our case, we train non-overlapping parameters using two losses: Let Ltarget and Lbbf be the two respective losses, and θarg and θbbf be the parameters of the argument extractor and black-box estimator modules. Then the gradient updates of the EstiNet during Online Training are: To overcome this phenomenon, we follow BID24 and BID21, regularizing the high confidence by introducing (i) Entropy Loss -adding the negative entropy of the output distribution to the loss, therefore maximizing the entropy and encouraging less confident distributions, and (ii) Label Smoothing Regularization -adding the cross entropy (CE) loss between the output and the training set's label distribution (for example, uniform distribution) to the standard CE loss between the predicted and ground truth distributions. Empirical validation of the phenomenon and our proposed solution are detailed in Section 4.3. DISPLAYFORM0 We present four experiments in increasing complexity to test the Estimate and Replace approach and compare its performance against existing solutions. Specifically, the experiments demonstrate that by leveraging external black-box functions, we achieve better generalization and better learning efficiency in comparison with existing competing solutions, without using intermediate labels. Appendix A contains concrete details of the experiments. We start with a simple experiment that presents the ability of our Estimate and Replace approach to learn a proposed decomposition solution. We show that by leveraging a precise external function, our method performs better with less training data. In this experiment, we train a network to answer simple greater-than/less-than logical questions on real numbers, such as: "is 7.5 greater than 8.2?" We solve the text-logic task by constructing an EstiNet model with an argument extractor layer that extracts the arguments and operator (7.5, 8.2 and ">" in the above example), and a black-box estimator that performs simple logic operations (greater than and less than). We generate the Text-Logic questions from ten different templates, all requiring a true/false answer for two float numbers. Results We compare the performance of the EstiNet model with a baseline model. 
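One way to read the online training procedure described in Section 3.1 is the loop below: the target loss updates only the argument extractor, while the black-box loss updates only the estimator, using labels produced by querying the real black box with the extractor's own adapted outputs. All names are hypothetical and the adapters are simplified; this is our paraphrase of the procedure, not the authors' code.

    import torch

    def online_training_step(model, x, y, target_loss_fn, bbf_loss_fn, opt_arg, opt_est):
        # 1) Target loss: end-to-end prediction through the estimator,
        #    but only the argument extractor's parameters are stepped.
        pred = model.estimator(model.argument_extractor(x))
        target_loss = target_loss_fn(pred, y)
        opt_arg.zero_grad()
        target_loss.backward()
        opt_arg.step()

        # 2) Black-box loss on an online-sampled batch: the extractor's hard outputs
        #    are sent to the real black box, whose outputs label the estimator.
        with torch.no_grad():
            args = model.argument_extractor(x)
            bbf_labels = model.black_box_fn(args.argmax(dim=-1))   # oracle labels
        est_pred = model.estimator(args)
        opt_est.zero_grad()
        bbf_loss = bbf_loss_fn(est_pred, bbf_labels)
        bbf_loss.backward()
        opt_est.step()
        return target_loss.item(), bbf_loss.item()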
This baseline model is equivalent to our model in its architecture, but is trained end-to-end with the task labels as supervision. This supervision allows the model to learn the input-to-output mapping, but does not provide any guidance for decomposing the task and learning the black-box function interface. We used online training for the EstiNet model. TAB0 summarizes the performance differences. The EstiNet model generalizes better than the baseline, and the accuracy difference between the two training procedures increases as the amount of training data decreases. This experiment presents the advantage of the Estimate and Replace approach to train a DNN with less data. For example, to achieve accuracy of 0.97, our model requires only 5% of the data that the baseline training requires. With the second experiment we seek to present the ability of our Estimate and Replace approach to generalize by leveraging a precise external function. In addition, we compare our approach to an Actor Critic-based RL algorithm. The Image-Addition task is to sum the values captured by a sequence of MNIST images. Previously, BID26 have shown that their proposed Neural Arithmetic Logic Unit (NALU) cell generalizes better than previous solutions while solving this task 2 with standard end-to-end training. We solve the task by constructing an EstiNet model with an argument extractor layer that classifies the digit in a given MNIST image, and a black-box estimator that performs the sum operation. The argument extractor takes an unbounded series of MNIST images as input, and outputs a sequence of MNIST classifications of the same length. The black-box estimator, which is a composition of a Long Short-Term Memory (LSTM) layer and a NALU cell, then takes the argument extractor's output as its input and outputs a single regression number. Solving the Image-Addition task requires the argument extractor to classify every MNIST image correctly without intermediate digit labels. Furthermore, because the sequence length is unbounded, unseen sequence lengths in unseen sum ranges which the solution must generalize to. TAB1 shows a comparison of EstiNet performance with an end-to-end NALU model. Both models were trained on sequences of length k = 10. The argument extractor achieves 98.6% accuracy on MNIST test set classification. This high accuracy indicates that the EstiNet is able to learn the desired h arg behavior, where the arguments are the digits shown in the MNIST images. Thus, it can generalize to any sequence length by leveraging the sum operation. Our NALU-based EstiNet outperforms the plain NALU-based end-to-end network. Results vs. RL We compare the EstiNet performance with an AC-based RL agent as an existing solution for training a neural network calling a black-box function without intermediate labels. We compare the learning efficiency of the two models by the amount of gradient updates required to reach optima. Results in FIG1 show that EstiNet significantly outperforms the RL agent. The third experiment tests the capacity of our approach to deal with non-differentiable tasks, in our case a lookup operation, as oppose to the differentiable addition operation presented in the previous section. With this experiment, we present the effect of replacing the black-box estimator with the original black-box function. We are given a k dimensional lookup table T: D k → D where D is the digits domain in the range of. 
The image-lookup input is a sequence of length k of MNIST images (x 1, ..., x k) with corresponding digits (a 1, . . ., a k) ∈ D k. The label y ∈ D for (x 1, ..., x k) is T (a 1, . . ., a k). We solve the image-lookup task by constructing an EstiNet model with an argument extractor similar to the previous task and a black-box estimator that outputs the classification prediction. Results Results are shown in Table 3. Successfully solving this task infers the ability to generalize to the black-box function, which in our case is the ability to replace or update the original lookup table with another at inference time without the need to retrain our model. To verify this we replace Table 3: Accuracy for the Image-Lookup task on the MNIST test-set for the three model configurations: train, test, and inference. We also report the accuracy of the argument extractor and estimator. The estimator accuracy is evaluated on the online sampling dataset. k is the digit of MNIST images in the input, and the dimension of the lookup table. Train Test Inference Argument Extractor Estimator k = 2 0.98 0.11 0.97 0.99 0.98 k = 3 0.97 0.1 0.97 0.99 0.98 k = 4 0.69 0.1 0.95 0.986 0.7the lookup table with a randomly generated one at test mode and observe performance decrease, as the black-box estimator did not learn the correct lookup functionality. However, in inference mode, where we replace the black-box estimator with the unseen black-box function, performance remains high. We also used the Image-Lookup task to validate the need for confidence regularization as described in Section 3.1.3. FIG2 shows empirical of correlation between over-confidence at the black-box estimator output distribution and small gradients corresponding to the argument extractor, as well as the vice versa when confidence regularizers are applied. For the last experiment, we applied the Estimate and Replace approach to solve a more challenging task. The task combines logic and lookup operations. In this task, we demonstrate the generalization ability on the input -a database table in this instance. The table can be replaced with a different one at inference time, like the black-box function from the previous tasks. In addition, with this experiment we compare the offline, online and hybrid training modes. For this task, we generated a table-based question answering dataset. For example, consider a table that describes the number of medals won by each country during the last Olympics, and a query such as: "Which countries won more than 7 gold medals?". We solve this task by constructing an EstiNet model with an argument extractor layer that (i) extracts the argument from the text, (ii) chooses the logical operation to perform (out of: equal-to, less-than, greater-than, max and min), and (iii) chooses the relevant column to perform the operation on, along with a black-box estimator that performs the logic operation on the relevant column. Results TAB2 summarizes the TLL model performance for the training procedures described in Section 3.1. In offline training the model fails to fit the training set. Consequently, low training model accuracy in low inference performance. We hypothesize that fixing the estimator parameters during the end-to-end training process prevents the rest of the model from fitting the train set. The online training procedure indeed led to significant improvement in inference performance. Hybrid training further improved upon online training fitting the training set and performance carried similarly to inference mode. 
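For reference, the two confidence regularizers referred to above (entropy loss and label smoothing regularization) are standard and can be written as below; how they are weighted into the total loss (the λ- and β-weighted terms listed in Appendix A) is task-specific, and the function names are ours.

    import torch
    import torch.nn.functional as F

    def entropy_loss(logits):
        """Negative entropy of the output distribution; adding it to the loss
        maximizes entropy and discourages over-confident predictions."""
        log_p = F.log_softmax(logits, dim=-1)
        return (log_p.exp() * log_p).sum(dim=-1).mean()     # = -H(p), batch average

    def label_smoothing_loss(logits, target, smoothing=0.1):
        """Cross entropy against the gold label mixed with a uniform prior."""
        log_p = F.log_softmax(logits, dim=-1)
        nll = F.nll_loss(log_p, target)                     # standard CE term
        uniform = -log_p.mean(dim=-1).mean()                # CE against the uniform distribution
        return (1.0 - smoothing) * nll + smoothing * uniform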
End-to-End Learning Task-specific architectures for end-to-end deep learning require large datasets and work very well when such data is available, as in the case of neural machine translation BID2. General purpose end-to-end architectures, suitable for multiple tasks, include the Neural Turing Machine BID6 and its successor, the Differential Neural Computer. Other architectures, such as the Neural Programmer architecture BID19 ) allow end-to-end training while constraining parts of the network to execute predefined operations by re-implementing specific operations as static differentiable components. This approach has two drawbacks: it requires re-implementation of the black-box function in a differentiable way, which may be difficult, and it lacks the accuracy and possibly also scalability of an exisiting black-box function. Similarly BID26 present a Neural Arithmetic Logic Unit (NALU) which uses gated base functions to allow better generalization to arithmetic functionality. Program induction is a different approach to interaction with black-box functions. The goal is to construct a program comprising a series of operations based on the input, and then execute the program to get the . When the input is a natural language query, it is possible to use semantic parsing to transform the query into a logical form that describes the program BID15. Early works required natural language query-program pairs to learn the mapping, i.e., intermediate labels. Recent works, (e.g., BID20) require only query-answer pairs for training. Other approaches include neural network-based program induction translation of a query into a program using sequence-to-sequence deep learning methods BID16, and learning the program from execution traces BID22 BID4.Reinforcement Learning Learning to execute the right operation can be viewed as a reinforcement learning problem. For a given input, the agent must select an action (input to black-box function) from a set of available actions. The action selection repeats following feedback based on the previous action selection. Earlier works that took this approach include BID3, and BID1. Recently, BID30 proposed a reinforcement extension to NTMs. overcome the difficulty of discrete selections, necessary for interfacing with an external function, by substituting the gradient with an estimate using RL. Recent work by BID14 and BID11 has shown to achieve state-of-the-art in Semantic Parsing and Question Answering, respectively, using RL. Interpretability via identifies composability as a strong contributor to model interpretability. They define composability as the ability to divide the model into components and interpret them individually to construct an explanation from which a human can predict the model's output. The Estimate and Replace approach solves the black-box interface learning problem in a way that is modular by design. As such, it provides an immediate interpretability benefit. Training a model to comply with a well-defined and well-known interface inherently supports model composability and, thus, directly contributes to its interpretability. For example, suppose you want to let a natural language processing model interface with a WordNet service to receive additional synonym and antonym features for selected input words. Because the WordNet interface is interpretable, the intermediate output of the model to the WordNet service (the words for which the model requested additional features) can serve as an explanation to the model's final prediction. 
Knowing which words the model chose to obtain additional features for gives insight to how it made its final decision. Reusability via Composability An additional clear benefit of model composability in the context of our solution is reusability. Training a model to comply with a well-defined interface induces well-defined module functionality which is a necessary condition for module reuse. Current solutions for learning using black-box functionality in neural network prediction have critical limitations which manifest themselves in at least one of the following aspects: (i) poor generalization, (ii) low learning efficiency, (iii) under-utilization of available optimal functions, and (iv) the need for intermediate labels. In this work, we proposed an architecture, termed EstiNet, and a training and deployment process, termed Estimate and Replace, which aim to overcome these limitations. We then showed empirical that validate our approach. Estimate and Replace is a two-step training and deployment approach by which we first estimate a given black-box functionality to allow end-to-end training via back-propagation, and then replace the estimator with its concrete black-box function at inference time. By using a differentiable estimation module, we can train an end-to-end neural network model using gradient-based optimization. We use labels that we generate from the black-box function during the optimization process to compensate for the lack of intermediate labels. We show that our training process is more stable and has lower sample complexity compared to policy gradient methods. By leveraging the concrete black-box function at inference time, our model generalizes better than end-to-end neural network models. We validate the advantages of our approach with a series of simple experiments. Our approach implies a modular neural network that enjoys added interpretability and reusability benefits. Future Work We limit the scope of this work to tasks that can be solved with a single black-box function. Solving the general case of this problem requires learning of multiple black-box interfaces, along unbounded successive calls, where the final prediction is a computed function over the output of these calls. This introduces several difficult challenges. For example, computing the final prediction over a set of black-box functions, rather than a direct prediction of a single one, requires an additional network output module. The input of this module must be compatible with the output of the previous layer, be it an estimation function at training time, or its black-box function counterpart at inference time, which belong to different distributions. We reserve this area of research for future work. As difficult as it is, we believe that artificial intelligence does not lie in mere knowledge, nor in learning from endless data samples. Rather, much of it is in the ability to extract the right piece of information from the right knowledge source for the right purpose. Thus, training a neural network to intelligibly interact with black-box functions is a leap forward toward stronger AI. The Image-Addition and Image-Lookup tasks use the MNIST training and test sets. The input is a sequence of MNIST images, sampled uniformly from the training set. The black-box function is a sum operation which receives a sequence of digits in range [0 − 9] represented as one-hot vectors. For Image-Lookup, the input sequence length defines the task (we've tested k ∈ {2, 3, 4}. 
k = 4 implies a lookup table of size 10 4 ). For Image-Addition, we've trained on input length k = 10 and tested on k = 100. The implementation was done in PyTorch. Architecture The argument extractor for both tasks is a composition of two convolutional layers (conv 1, conv 2), each followed by local 2 × 2 max-pooling, and a fully-connected layer (fc arg), which outputs the MNIST classification for an image. The argument extractors for each image share their parameters and each one outputs an MNIST classification for one image in the sequence. The sum estimator is an LSTM network, followed by a NALU cell on the final LSTM output, which in a regression floating number. The lookup estimator is a composition of fully-connected layers (fc lookup est) with ReLU activations. The architecture parameters are detailed in Training We used the hybrid training procedure where the pre-training of the estimator (offline training) continued until either performance reached 90%, or stopped increasing, on synthetic 10-class (MNIST) distributions which were sampled uniformly. The hyper-parameters of the model are in TAB6. We note that confidence regularization was necessary to stabilize learning and mitigate vanishing gradients. The target losses are cross-entropy and squared distance for lookup and addition respectively. The loss functions are: DISPLAYFORM0 Where LSR stands for Label Smoothing Regularization loss, H stands for entropy, p stands for the output classification, q stands for the gold label (one-hot), and y and y stand for the model and gold MNIST sum regressions, respectively. The β-weighted component of the loss is the online loss. The λ-weighted component is threshold entropy loss regularization on the argument extractor's MNIST classifications. In the following we describe the RL environment and architecture used in our experiments. We employed fixed length episodes and experimented with k ∈ {2, 3}. The MDP was modeled as follows: at each step a sample (x t, y t) is randomly selected from the MNIST dataset, where the handwritten image is used as the state, i.e. s t = x t. The agent responds with an action a t from the set. The reward, r t, in all steps except the last step is 0, and equals to the sum of absolute errors between the labels of the presented examples and the agent responses in the last step: DISPLAYFORM1 Where y t is the digit label. We use A3C as detailed by BID18 as the learning algorithm containing a single worker which updates the master network at the end of each episode. The agent model was implemented using two convolutional layers with filters of size 5 × 5 followed by a max-pooling size 2 × 2. The first convolutional layer contains 10 filters while the second contains 20 filters. The last two layers were fully connected of sizes 256 and 10 respectively with ELU activation, followed by a softmax. We employed Adam optimization BID13 with learning rate 1e − 5. The Text-Logic and Text-Lookup-Logic experiments were implemented in TensorFlow on synthetic datasets generated from textual templates and sampled numbers. We give concrete details for both experiments. For the TLL task we generated a table-based question answering dataset. The TLL dataset input has two parts: a question and a table. To correctly answer a question from this dataset, the DNN has to access the right table column and apply non-differentiable logic on it using a parameter it extracts from the query. 
For example, consider a table that describes the number of medals won by each country during the last Olympics, and a query such as: "Which countries won more than 7 gold medals?" To answer this query the DNN has to extract the argument (7 in this case) from the query, access the relevant column (namely, gold medals), and execute the greater than operation with the extracted argument and column content (namely a vector of numbers) as its parameters. The operation's output vector holds the indexes of the rows that satisfy the logic condition (greater-than in our example). The final answer contains the names of the countries (i.e., from the countries column) in the selected rows. The black-box function interface Solving the TLL task requires five basic logic functions: equalto, less-than, greater-than, max, and min. Each such function defines an API that is composed of two inputs and one output. The first input is a vector of numbers, namely, a column in the table. The second is a scalar, namely, an argument from the question or NaN if the scalar parameter is not relevant. The output is one binary vector, the same size as the input vector. The output vector indicates the selected rows for a specific query and thus provides the answer. TLL data We generated tables in which the first row contains column names and the first column contains a list of entities (e.g., countries, teams, products, etc.). Subsequent columns contained the quantitative properties of an entity (e.g., population, number of wins, prices, discounts, etc.). Each TLL-generated table consisted of 5 columns and 25 rows. We generated entity names (i.e., nations and clubs) for the first column by randomly selecting from a closed list. We generated values for the rest of the columns by sampling from a uniform distribution. We sampled values between 1 and 100 for the train set tables, and between 300 and 400 for the test set tables. We further created 2 sets of randomly generated questions that use the 5 functions. The set includes 20,000 train questions on the train tables and 4,000 test questions on the test tables. Input representations The TLL input was composed of words, numbers, queries, and tables. We used word pieces as detailed by BID28 to represent words. A word is a concatenation of word pieces: w j ∈ R d is an average value of its piece embedding. The exact numerical value of numbers is important to the decision. To accurately represent a number and embed it into the same word vector space, we used number representation following the float32 scheme BID12. Specifically, it starts by representing a number a ∈ R as a 32 dimension Boolean vector s n. It then adds redundancy factor r, r * 32 < d by multiplying each of the s n digits r times. Last, it pads the s n ∈ R d ing vector with d − r * 32 zeros. We tried several representation schemes. This approach ed in the best EstiNet performance. We represent the query q as a matrix of word embeddings and use an LSTM model BID9 to encode the query matrix into a vector representation: q lstm ∈ R drnn = h last (LSTM(Q)) where h last is the last LSTM output and d rnn is the dimension of the LSTM. n×m×d with n rows and m columns is represented as a three dimensional tensor. It represents a cell in a table as the piece average of its words. Argument Extractors Architecture The EstiNet TLL model uses three types of "selectors" (argument extractors): operation, argument, and column. Operation selectors select the correct black-box function. 
Argument selectors select an argument from the query and hand it to the API. The column selector's role is to select a column from the table and pass it to the black-box function. We implement each selector subnetwork as a classifier. Let C ∈ R cn×dc be the predicted class matrix, where the total number of classes is c n and each class is represented by a vector of size d c. For example, for a selector that has to select a word from a sentence, the C matrix contains the word embeddings of the words in the sentence. One may consider various selector implementation options. We use a simple, fully connected network implementation in which W ∈ R drnn ×cn is the parameter matrix and b ∈ R dc is the bias. We define β = C (q lstm W + b) to be the selector prediction before activation and α = f sel = softmax(β) to be the prediction after the softmax activation layer. At inference time, the selector transforms its soft selection into a hard selection to satisfy the API requirements. EstiNet enables that using Gumbel Softmax hard selection functionality. Estimator Architecture We use five estimators to estimate each of the five logic operations. Each estimator is a general purpose subnetwork that we implement with a transformer network encoder BID27. Specifically, we use n ∈ N identical layers, each of which consists of two sub-layers. The first is a multi-head attention with k ∈ N heads, and the second is a fully connected feed forward two-layer network, activated separately on each cell in the sequence. We then employ a residual connection around each of these two sub-layers, followed by layer normalization. Last, we apply linear transformation on the encoder output, adding bias and applying a Gumbel Softmax. The task input is a sentence that contains a greater-than or less-than question generated from a set of ten possible natural language patterns. The argument extractor must choose the correct tokens from the input to pass to the estimator/black-box function, which executes the greater-than/less-than functionality. For example: Out of x and y, is the first bigger? where x, y are float numbers sampled from a ∼ N (0, 10 10) distribution. The architecture is a very simple derivative of the TLL model with two selectors for the two floating numbers, and a classification of the choice between greater-than or less-than.
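The selector just described (a fully connected scorer producing β from the encoded query and the candidate matrix C, a softmax for soft selection, and Gumbel-Softmax for hard selection at the black-box interface) can be sketched as follows. We use PyTorch's built-in straight-through gumbel_softmax for brevity, and we take the projection to map d_rnn to d_c so that the dimensions line up; the TLL implementation in the paper is in TensorFlow and may differ in detail.

    import torch.nn as nn
    import torch.nn.functional as F

    class Selector(nn.Module):
        """Scores candidate classes C (e.g. embeddings of query tokens) against the
        encoded query; soft selection for training, hard Gumbel-Softmax selection
        when the result must be fed to the black-box interface."""
        def __init__(self, d_rnn, d_c):
            super().__init__()
            self.proj = nn.Linear(d_rnn, d_c)               # plays the role of W and b

        def forward(self, q_lstm, C, hard=False):
            beta = C @ self.proj(q_lstm)                    # C: (c_n, d_c), q_lstm: (d_rnn,) -> (c_n,)
            if hard:
                return F.gumbel_softmax(beta, tau=1.0, hard=True)   # one-hot, straight-through
            return F.softmax(beta, dim=-1)                  # alpha: soft selection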
Training DNNs to interface with black-box functions without intermediate labels by using an estimator sub-network that can be replaced with the black box after training
317
scitldr
This paper proposes a new actor-critic-style algorithm called Dual Actor-Critic or Dual-AC. It is derived in a principled way from the Lagrangian dual form of the Bellman optimality equation, which can be viewed as a two-player game between the actor and a critic-like function, which is named as dual critic. Compared to its actor-critic relatives, Dual-AC has the desired property that the actor and dual critic are updated cooperatively to optimize the same objective function, providing a more transparent way for learning the critic that is directly related to the objective function of the actor. We then provide a concrete algorithm that can effectively solve the minimax optimization problem, using techniques of multi-step bootstrapping, path regularization, and stochastic dual ascent algorithm. We demonstrate that the proposed algorithm achieves the state-of-the-art performances across several benchmarks. Reinforcement learning (RL) algorithms aim to learn a policy that maximizes the long-term return by sequentially interacting with an unknown environment. Value-function-based algorithms first approximate the optimal value function, which can then be used to derive a good policy. These methods BID23 BID28 often take advantage of the Bellman equation and use bootstrapping to make learning more sample efficient than Monte Carlo estimation BID25. However, the relation between the quality of the learned value function and the quality of the derived policy is fairly weak BID6. Policy-search-based algorithms such as REINFORCE BID29 and others (; BID18, on the other hand, assume a fixed space of parameterized policies and search for the optimal policy parameter based on unbiased Monte Carlo estimates. The parameters are often updated incrementally along stochastic directions that on average are guaranteed to increase the policy quality. Unfortunately, they often have a greater variance that in a higher sample complexity. Actor-critic methods combine the benefits of these two classes, and have proved successful in a number of challenging problems such as robotics , meta-learning BID3, and games . An actor-critic algorithm has two components: the actor (policy) and the critic (value function). As in policy-search methods, actor is updated towards the direction of policy improvement. However, the update directions are computed with the help of the critic, which can be more efficiently learned as in value-function-based methods BID24; BID13 BID7 BID19. Although the use of a critic may introduce bias in learning the actor, its reduces variance and thus the sample complexity as well, compared to pure policy-search algorithms. While the use of a critic is important for the efficiency of actor-critic algorithms, it is not entirely clear how the critic should be optimized to facilitate improvement of the actor. For some parametric family of policies, it is known that a certain compatibility condition ensures the actor parameter update is an unbiased estimate of the true policy gradient BID24. In practice, temporaldifference methods are perhaps the most popular choice to learn the critic, especially when nonlinear function approximation is used (e.g., BID19).In this paper, we propose a new actor-critic-style algorithm where the actor and the critic-like function, which we named as dual critic, are trained cooperatively to optimize the same objective function. The algorithm, called Dual Actor-Critic, is derived in a principled way by solving a dual form of the Bellman equation BID6. 
The algorithm can be viewed as a two-player game between the actor and the dual critic, and in principle can be solved by standard optimization algorithms like stochastic gradient descent (Section 2). We emphasize the dual critic is not fitting the value function for current policy, but that of the optimal policy. We then show that, when function approximation is used, direct application of standard optimization techniques can in instability in training, because of the lack of convex-concavity in the objective function (Section 3). Inspired by the augmented Lagrangian method , we propose path regularization for enhanced numerical stability. We also generalize the two-player game formulation to the multi-step case to yield a better bias/variance tradeoff. The full algorithm is derived and described in Section 4, and is compared to existing algorithms in Section 5. Finally, our algorithm is evaluated on several locomotion tasks in the MuJoCo benchmark BID27, and compares favorably to state-of-the-art algorithms across the board. Notation. We denote a discounted MDP by M = (S, A, P, R, γ), where S is the state space, A the action space, P (·|s, a) the transition probability kernel defining the distribution over next-state upon taking action a in state x, R(s, a) the corresponding immediate rewards, and γ ∈ the discount factor. If there is no ambiguity, we will use a f (a) and f (a)da interchangeably. In this section, we first describe the linear programming formula of the Bellman optimality equation BID5 BID14, paving the path for a duality view of reinforcement learning via Lagrangian duality. In the main text, we focus on MDPs with finite state and action spaces for simplicity of exposition. We extend the duality view to continuous state and action spaces in Appendix A.2.Given an initial state distribution µ(s), the reinforcement learning problem aims to find a policy π(·|s): S → P(A) that maximizes the total expected discounted reward with P(A) denoting all the probability measures over A, i.e., DISPLAYFORM0 where DISPLAYFORM1 which can be formulated as a linear program BID14 BID5: DISPLAYFORM2 DISPLAYFORM3 For completeness, we provide the derivation of the above equivalence in Appendix A. Without loss of generality, we assume there exists an optimal policy for the given MDP, namely, the linear programming is solvable. The optimal policy can be obtained from the solution to the linear program via π DISPLAYFORM4 The dual form of the LP below is often easier to solve and yield more direct relations to the optimal policy. DISPLAYFORM5 s.t. a∈A ρ(s, a) = (1 − γ)µ(s) + γ s,a∈S×A ρ(s, a)P (s |s, a)ds, ∀s ∈ S. Since the primal LP is solvable, the dual LP is also solvable, and P * − D * = 0. The optimal dual variables ρ * (s, a) and optimal policy π * (a|s) are closely related in the following manner:Theorem 1 (Policy from dual variables) s,a∈S×A ρ * (s, a) = 1, and π DISPLAYFORM6 Since the goal of reinforcement learning is to learn an optimal policy, it is appealing to deal with the Lagrangian dual which optimizes the policy directly, or its equivalent saddle point problem that jointly learns the optimal policy and value function. Theorem 2 (Competition in one-step setting) The optimal policy π *, actor, and its corresponding value function V *, dual critic, is the solution to the following saddle-point problem DISPLAYFORM7 where DISPLAYFORM8 The saddle point optimization provides a game perspective in understanding the reinforcement learning problem . 
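For concreteness, the one-step Lagrangian referenced in Theorem 2 (its display is elided above) plausibly has the following form; this is a hedged reconstruction based on the surrounding text and the Δ[V] notation that appears later in the appendix, not a verbatim copy of the elided equation:

```latex
% Hedged reconstruction of the one-step saddle-point objective (the elided display above).
\max_{\alpha \in \mathcal{P}(S),\, \pi \in \mathcal{P}(A)} \; \min_{V} \;
L(V, \alpha, \pi)
  = (1-\gamma)\,\mathbb{E}_{s \sim \mu}\big[V(s)\big]
  + \sum_{s \in S}\alpha(s)\sum_{a \in A}\pi(a \mid s)\,\Delta[V](s,a),
\qquad
\Delta[V](s,a) := R(s,a) + \gamma\,\mathbb{E}_{s' \sim P(\cdot \mid s,a)}\big[V(s')\big] - V(s).
```

Under this reading, α(s)π(a|s) is the occupancy-measure factorization from Theorem 1, and the game described next follows directly: V is penalized wherever the weighted actor places mass on a violation of the Bellman equation.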
The learning procedure can be thought of as a game between the dual critic, i.e., the value function of the optimal policy, and the weighted actor, i.e., α(s)π(a|s): the dual critic V seeks a value function that satisfies the Bellman equation, while the actor π tries to generate state-action pairs that break this satisfaction. Such a competition introduces new roles for the actor and the dual critic, and more importantly, bypasses the unnecessary separation of policy evaluation and policy improvement procedures needed in a traditional actor-critic framework. To solve the dual problem in, a straightforward idea is to apply stochastic mirror prox BID8 or a stochastic primal-dual algorithm to address the saddle point problem in. Unfortunately, such algorithms have limited use beyond special cases. For example, for an MDP with finite state and action spaces, the one-step saddle-point problem with tabular parametrization is convex-concave, and finite-sample convergence rates can be established; see e.g., and. However, when the state/action spaces are large or continuous so that function approximation must be used, such convergence guarantees no longer hold due to the lack of convex-concavity. Consequently, directly solving can suffer from severe bias and numerical issues, resulting in poor performance in practice (see, e.g., FIG1): 1. Large bias in the one-step Bellman operator: It is well-known that one-step bootstrapping in temporal difference algorithms has lower variance than Monte Carlo methods and often requires far fewer samples to learn. But it produces biased estimates, especially when function approximation is used. Such a bias is especially troublesome in our case as it introduces substantial noise in the gradients used to update the policy parameters. 2. Absence of local convexity and duality: Using nonlinear parametrization will easily break the local convexity and duality between the original LP and the saddle point problem, which are known to be necessary conditions for the success of applying primal-dual algorithms to constrained problems. Thus none of the existing primal-dual type algorithms will remain stable and convergent when directly optimizing the saddle point problem without local convexity. 3. Biased stochastic gradient estimator with an under-fitted value function: In the absence of local convexity, the stochastic gradient w.r.t. the policy π constructed from an under-fitted value function will presumably be biased and fail to provide any meaningful improvement of the policy. Hence, naively extending the stochastic primal-dual algorithms in; for the parametrized Lagrangian dual will also lead to biased estimators and sample inefficiency. In this section, we will introduce several techniques to bypass the three instability issues identified in the previous section: generalization of the minimax game to the multi-step case to achieve a better bias-variance tradeoff; use of path regularization in the objective function to promote local convexity and duality; and use of stochastic dual ascent to ensure unbiased gradient estimates. In this subsection, we will extend the minimax game between the actor and critic to the multi-step setting, which has been widely utilized in temporal-difference algorithms for better bias/variance tradeoffs BID25).
By the definition of the optimal value function, it is easy to derive the k-step Bellman optimality equation as DISPLAYFORM0 Similar to the one-step case, we can reformulate the multi-step Bellman optimality equation into a form similar to the LP formulation, and then we establish the duality, which leads to the following mimimax problem:Theorem 3 (Competition in multi-step setting) The optimal policy π * and its corresponding value function V * is the solution to the following saddle point problem DISPLAYFORM1 DISPLAYFORM2 The saddle-point problem FORMULA10 is similar to the one-step Lagrangian: the dual critic, V, and weighted k-step actor, α(s 0) k i=0 π(a i |s i), are competing for an equilibrium, in which critic and actor become the optimal value function and optimal policy. However, it should be emphasized that due to the existence of max-operator over the space of distributions P(A), rather than A, in the multi-step Bellman optimality equation FORMULA9, the establishment of the competition in multi-step setting in Theorem 3 is not straightforward: i), its corresponding optimization is no longer a linear programming; ii), the strong duality in FORMULA10 is not obvious because of the lack of the convex-concave structure. We first generalize the duality to multi-step setting. Due to space limit, detailed analyses for generalizing the competition to multi-step setting are provided in Appendix B. When function approximation is used, the one-step or multi-step saddle-point problems will no longer be convex in the primal parameters. This could lead to instability and even divergence when solved by brute-force stochastic primal-dual algorithms. One then desires to partially convexify the objectives without affecting the optimal solutions. The augmented Lagrangian method , also known as the method of multipliers, is designed and widely used for such purposes. However, directly applying this method would require introducing penalty functions of the multi-step Bellman operator, which renders extra complexity and challenges in optimization. Interested readers are referred to Appendix B.2 for further details. Instead, we propose to use path regularization, as a stepping stone for promoting local convexity and computation efficiency. The regularization term is motivated by the fact that the optimal value function satisfies the constraint DISPLAYFORM0 In the same spirit as augmented Lagrangian, we will introduce to the objective the simple penalty function DISPLAYFORM1 2, leading to the following: DISPLAYFORM2 where η V 0 is a hyper-parameter controlling the strength of the regularization. Note that in the penalty function above we use a behavior policy π b instead of an optimal policy, since the latter is unknown. Adding such a regularization enables local duality in the primal parameters. Indeed, this can be easily verified by showing the positive definiteness of the Hessian at a local solution. We call this approach path regularization, since it exploits the rewards in the sample path to regularize the solution path of value function V in the optimization procedure. As a by-product, the regularization also provides a mechanism to utilize off-policy samples from behavior policy π b.One can also see that the regularization indeed provides guidance and preference to search for the solution path. Specifically, in each update of V the learning procedure, it tries to move towards the optimal value function while staying close to the value function of the behavior policy π b. 
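Two of the elided displays in the passage above can be made concrete as follows; both are hedged reconstructions based on the surrounding text rather than the exact original equations. The k-step Bellman optimality equation and the path-regularization penalty plausibly read:

```latex
% Hedged reconstructions of the k-step Bellman optimality equation and the path-regularized objective.
V^{*}(s_0) = \max_{\{\pi_i\}_{i=0}^{k}}\;
  \mathbb{E}\Big[\textstyle\sum_{i=0}^{k}\gamma^{i} R(s_i, a_i) + \gamma^{k+1} V^{*}(s_{k+1}) \,\Big|\, a_i \sim \pi_i(\cdot \mid s_i)\Big],
\qquad
L_r(V, \alpha, \pi) = L_k(V, \alpha, \pi)
  + \eta_V\,\mathbb{E}_{\pi_b}\Big[\Big(V(s_0) - \textstyle\sum_{i=0}^{k}\gamma^{i} R(s_i, a_i) - \gamma^{k+1} V(s_{k+1})\Big)^{2}\Big],
```

where L_k denotes the multi-step Lagrangian from Theorem 3 and π_b is the behavior policy that generates the sample path.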
Intuitively, such regularization restricts the feasible domain of candidates V to be a ball centered at V π b. Besides enhancing local convexity, such a penalty also avoids unboundedness of V in the learning procedure, and is thus more numerically robust. As long as the optimal value function is indeed in such a region, the introduced side-effect can be controlled. Formally, we can show that with an appropriate η V, the optimal solution (V *, α *, π *) is not affected. The main results of this subsection are summarized by the following theorem. Theorem 4 (Property of path regularization) The local duality holds for L r (V, α, π). Denote (V *, α *, π *) as the solution to the Bellman optimality equation; with some appropriate DISPLAYFORM3 The proof of the theorem is given in Appendix B.3. We emphasize that the theorem holds when V is given enough capacity, i.e., in the nonparametric limit. Once a parametrization is introduced, approximation error will certainly be introduced, and the valid range of η V that keeps the optimal solution unchanged will be affected. However, since the function approximation error is still an open problem for general classes of parametrization, we omit such discussion here as it is beyond the scope of this paper. Rather than the primal form, i.e., min DISPLAYFORM0, we optimize the dual form. The major reason is sample efficiency. In the primal form, to apply the stochastic gradient descent algorithm at V t, one needs to solve max α∈P(S),π∈P(A) L r (V t, α, π), which involves sampling from each π and α during the solution path for the subproblem. We define the regularized dual function r (α, π):= min V L r (V, α, π). We first show the unbiased gradient estimator of r w.r.t. θ ρ = (θ α, θ π), which are the parameters associated with α and π. Then, we incorporate the stochastic update rule into the dual ascent algorithm, resulting in the dual actor-critic (Dual-AC) algorithm. The gradient estimators of the dual functions can be derived using the chain rule and are provided below. Theorem 5 The regularized dual function r (α, π) has gradient estimators DISPLAYFORM1 DISPLAYFORM2 Therefore, we can apply the stochastic mirror descent algorithm with the gradient estimators given in Theorem 5 to the regularized dual function r (α, π). Since the dual variables are probability distributions, it is natural to use the KL-divergence as the prox-mapping to characterize the geometry in the family of parameters BID1 BID8. Specifically, in the t-th iteration, θ DISPLAYFORM3 DISPLAYFORM4 denotes the stochastic gradients estimated through and via given samples, and KL(q(s, a)||p(s, a)) = ∫ q(s, a) log (q(s, a)/p(s, a)) ds da. Intuitively, such an update rule emphasizes a trade-off between the current policy and possible improvements based on samples. The update of π shares some similarity with TRPO, which is derived to ensure monotonic improvement of the new policy BID18. We discuss the details in Section 4.4. Rather than just updating V once via the stochastic gradient of ∇ V L r (V, α, π) in each iteration for solving the saddle-point problem BID8, which is only valid in the convex-concave setting, Dual-AC exploits the stochastic dual ascent algorithm, which requires V t = argmin V L r (V, α t−1, π t−1) in the t-th iteration for estimating ∇ θρ r (θ α, θ π). (The corresponding steps of Algorithm 1 are: decay the stepsize (DISPLAYFORM5); compute the stochastic gradients for θ π following; update θ t π according to the exact prox-mapping or the approximate closed-form; end for.)
As we discussed, such operation will keep the gradient estimator of dual variables unbiased, which provides better direction for convergence. In Algorithm 1, we update V t by solving optimization min V L r (V, α t−1, π t−1). In fact, the V function in the path-regularized Lagrangian L r (V, α, π) plays two roles: i), inherited from the original Lagrangian, the first two terms in regularized Lagrangian push the V towards the value function of the optimal policy with on-policy samples; ii), on the other hand, the path regularization enforces V to be close to the value function of behavior policy π b with off-policy samples. Therefore, the V function in the Dual-AC algorithm can be understood as an interpolation between these two value functions learned from both on and off policy samples. In above, we have introduced path regularization for recovering local duality property of the parametrized multi-step Lagrangian dual form and tailored stochastic mirror descent algorithm for optimizing the regularized dual function. Here, we present several strategies for practical computation considerations. Update rule of V t. In each iteration, we need to solve V t = argmin θ V L r (V, α t−1, π t−1), which depends on π b and η V, for estimating the gradient for dual variables. In fact, the closer π b to π * is, DISPLAYFORM0 2 will be. Therefore, we can set η V to be large for better local convexity and faster convergence. Intuitively, the π t−1 is approaching to π * as the algorithm iterates. Therefore, we can exploit the policy obtained in previous iteration, i.e., π t−1, as the behavior policy. The experience replay can also be used. Furthermore, notice the L(V, α t−1, π t−1) is a expectation of functions of V, we will use stochastic gradient descent algorithm for the subproblem. Other efficient optimization algorithms can be used too. Specifically, the unbiased gradient estimator for DISPLAYFORM1 We can use k-step Monte Carlo approximation for E DISPLAYFORM2 in the gradient estimator. As k is large enough, the truncate error is negligible BID25 ). We will iterate via θ DISPLAYFORM3 It should be emphasized that in our algorithm, V t is not trying to approximate the value or advantage function of π t, in contrast to most actor-critic algorithms. Although V t eventually becomes an approximation of the optimal value function once the solution reaches the global optimum, in each update V t is merely a function that helps the current policy to be improved. From this perspective, the Dual-AC bypasses the policy evaluation step. Update rule of α t. In practice, we may face with the situation that the initial sampling distribution is fixed, e.g., in MuJoCo tasks. Therefore, we cannot obtain samples from α t (s) at each iteration. We assume that ∃η µ ∈, such that α(s) = (1 − η µ)β(s) + η µ µ(s) with β(s) ∈ P(S). Hence, we have s). Note that such an assumption is much weaker comparing with the requirement for popular policy gradient algorithms (e.g., BID24 ; BID22) that assumes µ(s) to be a stationary distribution. In fact, we can obtain a closed-form update forα if a square-norm regularization term is introduced into the dual function. Specifically, Theorem 6 In t-th iteration, given V t and π t−1, DISPLAYFORM4 DISPLAYFORM5 Then, we can updateα t through FORMULA0 with Monte Carlo approximation of DISPLAYFORM6, s k+1, avoiding the parametrization ofα. 
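To make the preceding update rules concrete, the following is a minimal, illustrative sketch of one Dual-AC iteration in PyTorch-style Python. Everything here is an assumption rather than the authors' implementation: the network sizes, hyperparameters, and random tensors standing in for rollouts are made up, the α weights are shown as a simple softmax over k-step Bellman residuals instead of the exact closed form of Theorem 6, and the policy step uses plain gradient descent in place of the exact prox-mapping.

```python
# Illustrative sketch of one Dual-AC iteration (not the authors' reference code).
import torch
import torch.nn as nn
import torch.nn.functional as F

gamma, k, eta_v, alpha_temp, v_lr, pi_lr = 0.99, 10, 1.0, 1.0, 1e-3, 1e-3
obs_dim, n_actions, batch = 8, 4, 32

value_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))
policy_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
v_opt = torch.optim.Adam(value_net.parameters(), lr=v_lr)
pi_opt = torch.optim.Adam(policy_net.parameters(), lr=pi_lr)

# Fake k-step segments standing in for replayed rollouts of the behavior policy.
states = torch.randn(batch, k + 2, obs_dim)          # s_0 .. s_{k+1}
actions = torch.randint(n_actions, (batch, k + 1))   # a_0 .. a_k
rewards = torch.randn(batch, k + 1)                  # r_0 .. r_k
discounts = gamma ** torch.arange(k + 1, dtype=torch.float32)

def k_step_target(v):
    """sum_i gamma^i r_i + gamma^{k+1} V(s_{k+1}) for every segment."""
    boot = v(states[:, -1]).squeeze(-1)
    return (discounts * rewards).sum(dim=1) + gamma ** (k + 1) * boot

# Inner update: several SGD steps approximating V_t = argmin_V L_r(V, alpha_{t-1}, pi_{t-1}).
for _ in range(20):
    v0 = value_net(states[:, 0]).squeeze(-1)
    target = k_step_target(value_net).detach()
    bellman_residual = target - v0                    # k-step TD residual at s_0
    path_reg = eta_v * (v0 - target).pow(2).mean()    # path-regularization penalty
    v_loss = bellman_residual.mean() + path_reg       # simplified stand-in for min_V L_r
    v_opt.zero_grad(); v_loss.backward(); v_opt.step()

# alpha_t: reweight segments by their temporal differences (softmax form is an assumption).
with torch.no_grad():
    delta = k_step_target(value_net) - value_net(states[:, 0]).squeeze(-1)
    alpha_w = F.softmax(delta / alpha_temp, dim=0) * batch

# Policy step: weighted k-step policy gradient; plain SGD stands in for the prox-mapping.
logits = policy_net(states[:, :-1].reshape(-1, obs_dim)).view(batch, k + 1, n_actions)
logp = F.log_softmax(logits, dim=-1).gather(-1, actions.unsqueeze(-1)).squeeze(-1)
pi_loss = -(alpha_w * delta * logp.sum(dim=1)).mean()
pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()
```

The sketch only illustrates the division of labor described in the text: the inner loop fits V against both the Lagrangian term and the behavior-policy path regularizer, while α and π are then updated from that fitted V.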
As can be seen from Theorem 6, α̂ t (s) reweights the samples based on the temporal differences, and this offers a principled justification for the heuristic prioritized reweighting trick used in BID17. Update rule of θ t π. The parameters of the dual function, θ ρ, are updated by the prox-mapping operator following the stochastic mirror descent algorithm for the regularized dual function. Specifically, in the t-th iteration, given V t and α t, for θ π, the prox-mapping reduces to DISPLAYFORM7 DISPLAYFORM8. Then, the update rule becomes exactly the natural policy gradient, with a principled way to compute the "policy gradient" ĝ t π. This can be understood as a penalty version of trust region policy optimization BID18, in which a conservative update of the policy parameters in terms of KL-divergence is achieved by adding explicit constraints. Exactly solving the prox-mapping for θ π requires another optimization, which may be expensive. To further accelerate the prox-mapping, we approximate the KL-divergence with its second-order Taylor expansion, and obtain an approximate closed-form update given by DISPLAYFORM9 where DISPLAYFORM10 denotes the Fisher information matrix. Empirically, we may normalize the gradient by its norm √(g t π ⊤ F t −1 g t π) BID15 for better performance. Combining these practical tricks with the stochastic mirror descent update eventually gives rise to the dual actor-critic algorithm outlined in Algorithm 1. The dual actor-critic algorithm includes the learning of both the optimal value function and the optimal policy in a unified framework based on the duality of the linear programming formulation (FORMULA0). Prior work has also exploited this duality, but such methods either do not focus on concrete algorithms for solving the optimization problem, or require certain knowledge of the transition probability function that may be hard to obtain in practice. The duality view has also been exploited in BID10. Their algorithm is based on the duality of the entropy-regularized Bellman equation BID26 BID16; ; BID2), rather than the exact Bellman optimality equation we try to solve in this work. Our dual actor-critic algorithm can be understood as a nontrivial extension of the (approximate) dual gradient method (, Chapter 6.3) using stochastic gradients and Bregman divergences, which essentially parallels the view of the (approximate) stochastic mirror descent algorithm in the primal space. As a result, the algorithm converges with diminishing stepsizes and decaying errors from solving the subproblems. In particular, the update rules of α and π in the dual actor-critic are related to several existing algorithms. As we see in the update of α, the algorithm reweights the samples that are not fitted well. This is related to the heuristic prioritized experience replay BID17. For the update of π, the proposed algorithm bears some similarities with trust region policy optimization (TRPO) BID18 and natural policy gradient BID15. Indeed, TRPO and NPG solve the same prox-mapping but are derived from different perspectives. We emphasize that although the update rules share some resemblance to several reinforcement learning algorithms in the literature, they originate purely from a stochastic dual ascent algorithm for solving the two-player game derived from the Bellman optimality equation. We evaluated the dual actor-critic (Dual-AC) algorithm on several continuous control environments from the OpenAI Gym with the MuJoCo physics simulator BID27.
We compared Dual-AC with several representative actor-critic algorithms, including trust region policy optimization (TRPO) BID18 and proximal policy optimization (PPO) BID20. We ran the algorithms with 5 random seeds and reported the average rewards with 50% confidence intervals. Details of the tasks and setups of these experiments, including the policy/value function architectures and the hyperparameter values, are provided in Appendix C. To justify our analysis in identifying the sources of instability in directly optimizing the parametrized one-step Lagrangian duality, and to show the effect of the corresponding components in the dual actor-critic algorithm, we perform a comprehensive ablation study on the InvertedDoublePendulum-v1, Swimmer-v1, and Hopper-v1 environments. We also considered the effect of k = {10, 50}, in addition to the one-step case, to demonstrate the benefits of multi-step. We compared Dual-AC against its variants, including Dual-AC w/o multi-step, Dual-AC w/o path-regularization, Dual-AC w/o unbiased V, and the naive Dual-AC, to demonstrate the three instability sources in Section 3, respectively, as well as varying k = {10, 50} in Dual-AC. Specifically, Dual-AC w/o path-regularization removes the path-regularization components; Dual-AC w/o multi-step removes the multi-step extension and the path-regularization; Dual-AC w/o unbiased V calculates the stochastic gradient without achieving convergence of the inner optimization on V; and the naive Dual-AC removes all of these components. Moreover, Dual-AC with k = 10 and Dual-AC with k = 50 denote the step length set to 10 and 50, respectively. The empirical performances on the InvertedDoublePendulum-v1, Swimmer-v1, and Hopper-v1 tasks are shown in FIG1. The results are consistent with our analysis across the tasks. The naive Dual-AC performs the worst. The full Dual-AC finds the optimal policy, solving the problem much faster than the alternative variants. Dual-AC w/o unbiased V converges more slowly, showing the sample inefficiency caused by the bias in the gradient calculation. Dual-AC w/o multi-step and Dual-AC w/o path-regularization cannot converge to the optimal policy, indicating the importance of the path-regularization in recovering local duality. Meanwhile, the performance of Dual-AC w/o multi-step is worse than Dual-AC w/o path-regularization, showing that the bias in the one-step case can be alleviated via multi-step trajectories. The performance of Dual-AC improves as the step length k increases on these three tasks. We conjecture that the main reason may be that in these three MuJoCo environments, the bias dominates the variance. Therefore, as k increases, the proposed Dual-AC obtains higher accumulated rewards. In this section, we evaluated Dual-AC against TRPO and PPO across multiple tasks, including InvertedDoublePendulum-v1, Hopper-v1, HalfCheetah-v1, Swimmer-v1 and Walker-v1. These tasks have different dynamic properties, ranging from unstable to stable; therefore, they provide sufficient benchmarks for testing the algorithms. In Figure 2, we report the average rewards across 5 runs of each algorithm with 50% confidence intervals during the training stage. We also report the average final rewards in TAB1. The proposed Dual-AC achieves the best performance in almost all environments, including Pendulum, InvertedDoublePendulum, Hopper, HalfCheetah and Walker.
These results demonstrate that Dual-AC is a viable and competitive RL algorithm for a wide spectrum of RL tasks with different dynamic properties. A notable case is the InvertedDoublePendulum, where Dual-AC substantially outperforms TRPO and PPO in terms of learning speed and sample efficiency, implying that Dual-AC is preferable for unstable dynamics. We conjecture this advantage might come from the different meaning of V in our algorithm. For unstable systems, failure happens frequently, resulting in collected data that are far away from the optimal trajectories. Therefore, policy improvement through the value function corresponding to the current policy is slower, while our algorithm learns the optimal value function and enhances sample efficiency. In this paper, we revisited the linear program formulation of the Bellman optimality equation, whose Lagrangian dual form yields a game-theoretic view of the roles of the actor and the dual critic. Although such a framework allows the actor and dual critic to be optimized for the same objective function, parameterizing the actor and dual critic unfortunately induces instability in optimization. We analyze the sources of instability, which are corroborated by numerical experiments. We then propose Dual Actor-Critic, which exploits the stochastic dual ascent algorithm for the path-regularized DISPLAYFORM0 multi-step bootstrapping two-player game, to bypass these issues. Figure 2: The results of Dual-AC against the TRPO and PPO baselines. Each plot shows average reward during training across 5 random-seeded runs, with 50% confidence intervals. The x-axis is the number of training iterations. Dual-AC achieves comparable performance to TRPO and PPO on some tasks, but outperforms them on more challenging tasks. Proof We rewrite the linear programming 3 as DISPLAYFORM1. Recall that T is monotonic, i.e., if DISPLAYFORM2 Theorem 1 (Optimal policy from occupancy) s,a∈S×A ρ * (s, a) = 1, and π DISPLAYFORM3 a∈A ρ * (s,a). Proof For the optimal occupancy measure, it must satisfy DISPLAYFORM4 where P denotes the transition distribution and I denotes a |S| × |SA| matrix where I ij = 1 if and only if j ∈ [(i − 1) |A| + 1,..., i |A|]. Multiplying both sides by 1, and since µ and P are probabilities, we have ⟨1, ρ *⟩ = 1. Without loss of generality, we assume there is only one best action in each state. Therefore, by the KKT complementary conditions of, i.e., ρ(s, a)[R(s, a) + γE s'|s,a [V (s')] − V (s)] = 0, which implies ρ * (s, a) = 0 if and only if a ≠ a *; therefore, π * follows by normalization. Theorem 2 The optimal policy π * and its corresponding value function V * are the solution to the following saddle-point problem DISPLAYFORM5 Proof Due to the strong duality of the optimization, we have DISPLAYFORM6 Then, plugging in the property of the optimum from Theorem 1, we achieve the final optimization. In this section, we extend the linear programming and its duality to continuous state and action MDPs. In general, only weak duality holds with infinitely many constraints, i.e., P * ≥ D *. With a mild assumption, we will recover strong duality for continuous state and action MDPs, and most of the results for discrete state and action MDPs still hold.
Specifically, without loss of generality, we consider the solvable MDP, i.e., the optimal policy, π DISPLAYFORM0 where the first inequality comes from 2 f (x), g(DISPLAYFORM1 The constraints in the primal form of linear programming can be written as FORMULA1, we have DISPLAYFORM2 DISPLAYFORM3 The solution (V *, *) also satisfies the KKT conditions, DISPLAYFORM4 DISPLAYFORM5 DISPLAYFORM6 where denotes the conjugate operation. By the KKT condition, we have DISPLAYFORM7 The strongly duality also holds, i.e., DISPLAYFORM8 s.t.(DISPLAYFORM9 Proof We compute the duality gap DISPLAYFORM10 which shows the strongly duality holds. Once we establish the k-step Bellman optimality equation FORMULA9, it is easy to derive the λ-Bellman optimality equation, i.e., DISPLAYFORM0 Proof Denote the optimal policy as π * (a|s), we have DISPLAYFORM1 holds for arbitrary ∀k ∈ N. Then, we conduct k ∼ Geo(λ) and take expectation over the countable infinite many equation, ing DISPLAYFORM2 Next, we investigate the equivalent optimization form of the k-step and λ-Bellman optimality equation, which requires the following monotonic property of T k and T λ.Lemma 7 Both T k and T λ are monotonic. Proof Assume U and V are the value functions corresponding to π 1 and π 2, and U V, i.e., U (s) V (s), ∀s ∈ S, apply the operator T k on U and V, we have DISPLAYFORM3, ∀π ∈ P, which leads to the first , DISPLAYFORM4 With the monotonicity of T k and T λ, we can rewrite the V * as the solution to an optimization, The optimal value function V * is the solution to the optimization DISPLAYFORM0 where µ(s) is an arbitrary distribution over S. DISPLAYFORM1 where the last equality comes from the Banach fixed point theorem BID14. Similarly, we can also show that ∀V, V T ∞ λ V = V *. By combining these two inequalities, we achieve the optimization. We rewrite the optimization as min DISPLAYFORM2 (s, a) ∈ S × A, We emphasize that this optimization is no longer linear programming since the existence of maxoperator over distribution space in the constraints. However, Theorem 1 still holds for the dual variables in.Proof Denote the optimal policy asπ * DISPLAYFORM3 the KKT condition of the optimization can be written as DISPLAYFORM4, we simplify the condition, i.e., DISPLAYFORM5 Due to the P π * V k (s |s, a) is a conditional probability for ∀V, with similar argument in Theorem 1, we have s,a ρ * (s, a) = 1.By the KKT complementary condition, the primal and dual solutions, i.e., V * and ρ *, satisfy DISPLAYFORM6 Recall V * denotes the value function of the optimal policy, then, based on the definition,π * V * = π * which denotes the optimal policy. Then, the condition implies ρ(s, a) = 0 if and only if a = a *, therefore, we can decompose ρ * (s, a) = α * (s)π * (a|s).The corresponding Lagrangian of optimization FORMULA1 is DISPLAYFORM7 where DISPLAYFORM8 We further simplify the optimization. Since the dual variables are positive, we have After clarifying these properties of the optimization corresponding to the multi-step Bellman optimality equation, we are ready to prove the Theorem 3.Theorem 3 The optimal policy π * and its corresponding value function V * is the solution to the following saddle point problem max Proof By Theorem 1 in multi-step setting, we can decompose ρ(s, a) = α(s)π(a|s) without any loss. Plugging such decomposition into the Lagrangian 32 and realizing the equivalence among the optimal policies, we arrive the optimization as min V max α∈P(S),π∈P(A) L k (V, α, π). 
Then, because of the strong duality as we proved in Lemma 9, we can switch min and max operators in optimization 8 without any loss. DISPLAYFORM9 The strong duality holds in optimization.Proof Specifically, for every α ∈ P(S), π ∈ P(A), DISPLAYFORM0 On the other hand, since L k (V, α *, π *) is convex w.r.t. V, we have V * ∈ argmin V L k (V, α *, π *), by checking the first-order optimality. Therefore, we have max α∈P(S),π∈P(A) (α, π) = max DISPLAYFORM1 Combine these two conditions, we achieve the strong duality even without convex-concave property (1 − γ k+1)E s∼µ(s) [V * (s)] max α∈P(S),π∈P(A) (α, π) (1 − γ k+1)E s∼µ(s) [V * (s)]. We consider the one-step Lagrangian duality first. Following the vanilla augmented Lagrangian method, one can achieve the dual function as The computation of P c is in general intractable due to the composition of max and the condition expectation in ∆[V](s, a), which makes the optimization for augmented Lagrangian method difficult. For the multi-step Lagrangian duality, the objective will become even more difficult due to constraints are on distribution family P(S) and P(A), rather than S × A. The local duality holds for L r (V, α, π). Denote (V *, α *, π *) as the solution to Bellman optimality equation, with some appropriate η V, (V *, α *, π *) = argmax α∈P(S),π∈P(A) argmin V L r (V, α, π).Proof The local duality can be verified by checking the Hessian of L r (θ V *). We apply the local duality theorem [Chapter 14]. Suppose (Ṽ *,α *,π *) is a local solution to min V max α∈P(S),π∈P(A) L r (V, α, π), then, max α∈P(S),π∈P(A) min V L r (V, α, π) has a local solutionṼ * with correspondingα *,π *.Next, we show that with some appropriate η V, the path regularization does not change the optimum. Let U π (s) = E
We propose the Dual Actor-Critic algorithm, which is derived in a principled way from the Lagrangian dual form of the Bellman optimality equation. The algorithm achieves state-of-the-art performance across several benchmarks.
318
scitldr
Training a model to perform a task typically requires a large amount of data from the domains in which the task will be applied. However, it is often the case that data are abundant in some domains but scarce in others. Domain adaptation deals with the challenge of adapting a model trained from a data-rich source domain to perform well in a data-poor target domain. In general, this requires learning plausible mappings between domains. CycleGAN is a powerful framework that efficiently learns to map inputs from one domain to another using adversarial training and a cycle-consistency constraint. However, the conventional approach of enforcing cycle-consistency via reconstruction may be overly restrictive in cases where one or more domains have limited training data. In this paper, we propose an augmented cyclic adversarial learning model that enforces the cycle-consistency constraint via an external task specific model, which encourages the preservation of task-relevant content as opposed to exact reconstruction. This task specific model both relaxes the cycle-consistency constraint and complements the role of the discriminator during training, serving as an augmented information source for learning the mapping. We explore adaptation in speech and visual domains in low-resource supervised settings. In the speech domain, we adopt a speech recognition model from each domain as the task specific model. Our approach improves the absolute performance of speech recognition by 2% for female speakers in the TIMIT dataset, where the majority of training samples are from male voices. In low-resource visual domain adaptation, the results show that our approach improves absolute performance by 14% and 4% when adapting SVHN to MNIST and vice versa, respectively, which outperforms unsupervised domain adaptation methods that require a high-resource unlabeled target domain. Domain adaptation BID14 BID31 BID1 aims to generalize a model from a source domain to a target domain. Typically, the source domain has a large amount of training data, whereas the data are scarce in the target domain. This challenge is typically addressed by learning a mapping between domains, which allows data from the source domain to enrich the available data for training in the target domain. One way of learning such mappings is through Generative Adversarial Networks (GANs) BID7 with a cycle-consistency constraint, which enforces that mapping an example from the source to the target and then back to the source domain would result in the same example (and vice versa for a target example). Due to this constraint, CycleGAN learns to preserve the 'content' from the source domain while only transferring the 'style' to match the distribution of the target domain. This is a powerful constraint, and various works BID32 BID20 BID10 have demonstrated its effectiveness in learning cross-domain mappings. Enforcing cycle-consistency is appealing as a technique for preserving semantic information of the data with respect to a task, but implementing it through reconstruction may be too restrictive when data are imbalanced across domains. This is because the reconstruction error encourages an exact match of samples from the reverse mapping, which may in turn encourage the forward mapping to keep the sample close to the original domain. Normally, the adversarial objectives would counter this effect; however, when data from the target domain are scarce, it is very difficult to learn a powerful discriminator that can capture meaningful properties of the target distribution.
Therefore, the ing mappings learned is likely to be sub-optimal. Importantly, for the learned mapping to be meaningful, it is not necessary to have the exact reconstruction. As long as the'semantic' information is preserved and the'style' matches the corresponding distribution, it would be a valid mapping. To address this issue, we propose an augmented cyclic adversarial learning model (ACAL) for domain adaptation. In particular, we replace the reconstruction objective with a task specific model. The model learns to preserve the'semantic' information from the data samples in a particular domain by minimizing the loss of the mapped samples for the task specific model. On the other hand, the task specific model also serves as an additional source of information for the corresponding domain and hence supplements the discriminator in that domain to facilitate better modeling of the distribution. The task specific model can also be viewed as an implicit way of disentangling the information essential to the task from the'style' information that relates to the data distribution of different domain. We show that our approach improves the performance by 40% as compared to the baseline on digit domain adaptation. We improve the phoneme error rate by ∼ 5% on TIMIT dataset, when adapting the model trained on one speech from one gender to the other. Our work is broadly related to domain adaptation using neural networks for both supervised and unsupervised domain adaptation. Supervised Domain Adaptation When labels are available in the target domain, a common approach is to utilize the label information in target domain to minimize the discrepancy between source and target domain BID13 BID28 BID6 BID5. For example, BID13 applies the marginal Fisher analysis criteria and Maximum Mean Discrepancy (MMD) to minimize the distribution difference between source and target domain. BID28 proposed to add a domain classifier that predicts domain label of the inputs, with a domain confusion loss. BID6 leverages attributes by using attribute and class level classification loss with attribute consistent loss to fine-tune the target model. Our method also employs models from both domains, however, our models are used to assist adversarial learning for better learning of the target domain distribution. In addition, our final model for supervised domain adaptation is obtained by training on data from target domain as well as the transfered data from the source domain, rather than fine-tuning a source/target domain model. More recently, various work have taken advantage of the substantial generation capabilities of the GAN framework and applied them to domain adaptation BID19 BID2 BID32 BID29 BID16 BID10. However, most of these works focus on high-resource unsupervised domain adaptation, which may be unsuitable for situations where the target domain data are limited. BID2 uses a GAN to adapt data from the source to target domain while simultaneously training a classifier on both the source and adapted data. Our method also employs task specific models; however, we use the models to augment the CycleGAN formulation. We show that having cycles in both directions (i.e. from source to target and vice versa) is important in the case where the target domain has limited data (see sec. 4). BID29 proposes adversarial discriminative domain adaptation (ADDA), where adversarial learning is employed to match the representation learned from the source and target domain. 
Our method also utilizes a pre-trained model from the source domain, but we only implicitly match the representation distributions rather than explicitly enforcing representational similarity. Cycle-consistent adversarial domain adaptation is perhaps the most similar work to our own. This approach uses both ℓ1 and semantic consistency to enforce cycle-consistency. Figure 1: Illustration of the proposed approach. Left: CycleGAN BID34. Middle: Relaxed cycle-consistent model (RCAL), where cycle-consistency is enforced through task specific models in the corresponding domain. Right: Augmented cycle-consistent model (ACAL). In addition to the relaxed model, the task specific model is also used to augment the discriminator of the corresponding domain to facilitate learning. In the diagrams x and L denote data and losses, respectively. We point out that the ultimate goal of our approach is to use the mapped Source → Target samples (x S→T) to augment the limited data of the target domain (x T). An important difference in our work is that we also include another cycle that starts from the target domain. This is important because, if the target domain is of low resource, the adaptation from source to target may fail due to the difficulty in learning a good discriminator in the target domain. BID0 also suggests improving CycleGAN by explicitly enforcing content consistency and style adaptation, augmenting cyclic adversarial learning with the hidden representations of the domains. Our model is different from recent cyclic adversarial learning due to its implicit learning of content and style representations through an auxiliary task, which is more suitable for low-resource domains. Using classification to assist GAN training has also been explored previously BID26 BID27 BID17. BID26 proposed CatGAN, where the discriminator is converted to a multi-class classifier. We extend this idea to any task specific model, including the speech recognition task, and use this model to preserve task specific information regarding the data. We also propose that the definition of the task model can be extended to unsupervised tasks, such as language or speech modeling in each domain, enabling augmented unsupervised domain adaptation. To learn the true data distribution P data (X) in a nonparametric way, BID7 proposed the generative adversarial network (GAN). In this framework, a discriminator network D(x) learns to discriminate between the data produced by a generator network G(z) and the data sampled from the true data distribution P data (X), whereas the generator models the true data distribution by learning to confuse the discriminator. Under certain assumptions BID7, the generator will learn the true data distribution when the game reaches equilibrium. Training of a GAN is in general done by alternately optimizing the following objective for D and G: DISPLAYFORM0 CycleGAN BID34 extends this framework to multiple domains, P S (X) and P T (X), while learning to map samples back and forth between them. Adversarial learning is applied such that the mapping G S →T will match the target distribution P T (X), and similarly for the reverse mapping G T →S. This is accomplished by the following adversarial objectives: DISPLAYFORM0 CycleGAN also introduces cycle-consistency, which enforces that each mapping is able to invert the other. In the original work, this is achieved by including the following reconstruction objective: DISPLAYFORM1 Learning the CycleGAN model involves optimizing a weighted combination of the above objectives 2, 3 and 4.
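The displays for the GAN and CycleGAN objectives are elided above (objectives 1 through 4). For reference, the standard forms that the text describes are reproduced below; this assumes the elided equations match the original GAN and CycleGAN formulations:

```latex
% Standard GAN / CycleGAN objectives, assumed to match the elided displays (eqs. 1-4).
\min_{G}\max_{D}\;\mathbb{E}_{x \sim P_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z}\big[\log(1 - D(G(z)))\big],
\qquad
\mathcal{L}_{\mathrm{adv}}(G_{S\to T}, D_T)
  = \mathbb{E}_{x_t \sim P_T}\big[\log D_T(x_t)\big]
  + \mathbb{E}_{x_s \sim P_S}\big[\log(1 - D_T(G_{S\to T}(x_s)))\big],
\qquad
\mathcal{L}_{\mathrm{cyc}}
  = \mathbb{E}_{x_s \sim P_S}\big[\lVert G_{T\to S}(G_{S\to T}(x_s)) - x_s\rVert_1\big]
  + \mathbb{E}_{x_t \sim P_T}\big[\lVert G_{S\to T}(G_{T\to S}(x_t)) - x_t\rVert_1\big].
```

The adversarial term is applied symmetrically for G T →S and D S, and the full CycleGAN loss is a weighted combination of these terms, as stated above.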
Enforcing cycle-consistency using a reconstruction objective (e.g. eq. 4) may be too restrictive and potentially in sub-optimal mapping functions. This is because the learning dynamics of CycleGAN balance the two contrastive forces. The adversarial objective encourages the mapping functions to generate samples that are close to the true distribution. At the same time, the reconstruction objective encourages identity mapping. Balancing these objectives may works well in the case where both domains have a relatively large number of training samples. However, problems may arise in case of domain adaptation, where data within the target domain are relatively sparse. Let P S (X) and P T (X) denote source and target domain distributions, respectively, and samples from P T (X) are limited. In this case, it will be difficult for the discriminator D T to model the actual distribution P T (X). A discriminator model with sufficient capacity will quickly overfit and the ing D T will act like delta function on the sample points from P T (X). Attempts to prevent this by limiting the capacity or using regularization may easily induce over-smoothing and under-fitting such that the probability outputs of D T are only weakly sensitive to the mapped samples. In both cases, the influence of the reconstruction objective should begin to outweigh that of the adversarial objective, thereby encouraging an identity mapping. More generally, even if we are are able to obtain a reasonable discriminator D T, the support of the distribution learned through it would likely to be small due to limited data. Therefore, the learning signal G S →T gets from D T would be limited. To sum up, limited data within P T (X) would make it less likely that the discriminator will encourage meaningful cross domain mappings. The root of the above issue in domain adaptation is two fold. First, exact reconstruction is a too strong objective for enforcing cycle-consistency. Second, learning a mapping function to a particular domain which solely depends on the discriminator for that domain is not sufficient. To address these two problems, we propose to 1) use a task specific model to enforce the cycle-consistency constraint, and 2) use the same task specific model in addition to the discriminator to train more meaningful cross domain mappings. In more detail, let M S and M T be the task specific models trained on domains P S (X, Y) and P T (X, Y), and L task denotes the task specific loss. Our cycle-consistent objective is then: DISPLAYFORM0 Here, L task enforces cycle-consistency by requiring that the reverse mappings preserve the semantic information of the original sample. Importantly, this constraint is less strict than when using reconstruction, because now as long as the content matches that of the original sample, the incurred loss will not increase. (Some style consistency is implicitly enforced since each model M is trained on data within a particular domain.) This is a much looser constraint than having consistency in the original data space, and thus we refer to this as the relaxed cycle-consistency objective. Input: source domain data P S (x, y), target domain data P T (x, y), pretrained source task model M S Output: target task model M T while not converged do Sample from (x s, y s) from P S if y t in P T then %Supervised% Sample (x t, y t) from P T Finetune source model M S on (x s, y s) and (G T →S (x t), y t ) samples (eq. 6) Train task model M T on (x t, y t) and (G S →T (x s), y s ) samples (eq. 
7) DISPLAYFORM0 To address the second issue, we augment the adversarial objective with corresponding objective: DISPLAYFORM1 Similar to adversarial training, we optimize the above objective by maximizing D S (D T) and minimizing G T →S (G S →T) and M S (M T). With the new terms, the learning of mapping functions G get assists from both the discriminator and the task specific model. The task specific model learns to capture conditional probability distribution P S (Y |X) (P T (Y |X)), that also preserves information regarding P S (X) (P T (X)). This conditional information is different than the information captured through the discriminator D S (D T). The difference is that the model is only required to preserve useful information regarding X respect to predicting Y, for modeling the conditional distribution, which makes learning the conditional model a much easier problem. In addition, the conditional model mediates the influence of data that the discriminator does not have access to (Y), which should further assist learning of the mapping functions G T →S (G S →T).In case of unsupervised domain adaptation, when there is no information of target conditional probability distribution P T (Y |X), we propose to use source model M S to estimate P T (Y |X) through adversarial learning, i.e. DISPLAYFORM2. Therefore, proposed model can be extended to unsupervised domain adaptation, with the corresponding modified objectives: To further extend this approach to semi-supervised domain adaptation, both supervised and unsupervised objectives for labeled and unlabeled target samples are used interchangeably, as explained in Algorithm 1. DISPLAYFORM3 In this section, we evaluate our proposed model on domain adaptation for visual and speech recognition. We continue the convention of referring to the data domains as'source' and'target', where target denotes the domain with either limited or unlabeled training data. Visual domain adaptation is evaluated using the MNIST dataset (M) BID18, Street View House Numbers (SVHN) datasets (S) BID23, USPS (U) BID15, MNISTM (MM) and Synthetic Digits (SD) BID3. Adaptation on speech is evaluated on the domain of gender within the TIMIT dataset BID4, which contains broadband 16kHz recordings of 6300 utterances (5.4 hours) of phonetically-balanced speech. The male/female ratio of speakers across train/validation/test sets is approximately 70% to 30%. Therefore, we treat male speech as the source domain and female speech as the low resource target domain. To get an idea of the contribution from each component of our model, in this section we perform a series of ablations and present the in TAB0. We perform these ablations by treating SVHN as the source domain and MNIST as the target domain. We down sample the MNIST training data so only 10 samples per class are available during training, which is only 0.17% of full training data. The testing performance is calculated on the full MNIST test set. We use a modified LeNet for all experiments in this ablation. The Modified LeNet consists of two convolutional layers with 20 and 50 channels, followed by a dropout layer and two fully connected layers of 50 and 10 dimensionality. There are various ways that one may utilize cycle-consistency or adversarial training to do domain adaptation from components of our model. One way is to use adversarial training on the target domain to ensure matching of distribution of adapted data, and use the task specific model to ensure the'content' of the data from the source domain is preserved. 
This is the model described in BID2, except their model is originally unsupervised. This model is denoted as S → T in TAB0. It is also interesting to examine the importance of the double cycle, which is proposed in BID34 and adopted in our work. Theoretically, one cycle would be sufficient to learn the mapping between domains; therefore, we also investigate the performance of one cycle only models, where one direction would be from source to target and then back, and similarly for the other direction. These models are denoted as (S→T→S)-One Cycle and (T→S→T)-One Cycle in TAB0, respectively. To test the effectiveness of the relaxed cycle-consistency (eq. 5) and augmented adversarial loss (eq. 6 and 7), we also test one cycle models while progressively adding these two losses. Interestingly, the one cycle relaxed and one cycle augmented models are similar to the model proposed in BID10 when their model performs mapping from source to target domain and then back. The difference is that their model is unsupervised and includes more losses at different levels. As can be seen from TAB0, the simple conditional model performed surprisingly well as compared to more complicated cyclic counterparts. This may be attributed to the reduced complexity, since it only needs to learn one set of mapping. As expected, the single cycle performance is poor when the target domain is of limited data due to inefficient learning of discriminator in the target domain (see section 3). When we change the cycle to the other direction, where there are abundant data in the target domain, the performance improves, but is still worse than the simple one without cycle. This is because the adaptation mapping (i.e. G S →T) is only learned via the generated samples from G T →S, which likely deviate from the real examples in practice. This observation also suggests that it would be beneficial to have cycles in both directions when applying the cycle-consistency constraint, since then both mappings can be learned via real examples. The trends get reversed when we are using relaxed implementation of cycle-consistency from the reconstruction error with the task specific losses. This is because now the power of the task specific model is crucial to preserve the content of the data after the reverse mapping. When the source domain dataset is sufficiently large, the cycle-consistency is preserved. As such, the ing learned mapping functions would preserve meaningful semantics of the data while transferring the styles to the target domain, and vice versa. In addition, it is clear that augmenting the discriminator with task specific loss is helpful for learning adaptations. Furthermore, the information added from the task specific model is clearly beneficial for improving the adaptation performance, without this none of the models outperform the baseline model, where no adaptation is performed. Last but not least, it is also clear from the that using task specific model improves the overall adaptation performance. In this section, we experiment on domain adaptation for the task of digit recognition. In each experiment, we select one domain (MNIST, USPS, MNISTM, SVHN, Synthetic Digits) to be the target. We conduct two type of domain adaptation. First, low-resource supervised adaptation where we sub-sample the target to contain only a few examples per class, using the other full dataset as the source domain. Comparison with recent low resource domain adaptation, FADA for MNIST, USPS, and SVHN adaptation is shown in TAB1. 
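For concreteness, before turning to the speech experiments, the following is a minimal PyTorch-style sketch of the objectives optimized in these digit adaptation runs: the relaxed cycle-consistency loss (eq. 5) and the task-augmented adversarial losses (eqs. 6-7). The network definitions, loss weights, batch sizes, and variable names are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of relaxed cycle-consistency and
# augmented adversarial losses for ACAL, assuming classification as the task.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Stand-in network used for generators, discriminators, and task models."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))
    def forward(self, x):
        return self.f(x)

dim, n_classes = 32, 10
G_st, G_ts = SmallNet(dim, dim), SmallNet(dim, dim)                # mappings S->T and T->S
D_s, D_t = SmallNet(dim, 1), SmallNet(dim, 1)                      # domain discriminators
M_s, M_t = SmallNet(dim, n_classes), SmallNet(dim, n_classes)      # task models per domain

x_s, y_s = torch.randn(16, dim), torch.randint(n_classes, (16,))   # abundant source batch
x_t, y_t = torch.randn(4, dim), torch.randint(n_classes, (4,))     # scarce target batch

def generator_losses():
    # Relaxed cycle-consistency (eq. 5): the round trip must keep the label, not the pixels.
    cyc_s = F.cross_entropy(M_s(G_ts(G_st(x_s))), y_s)
    cyc_t = F.cross_entropy(M_t(G_st(G_ts(x_t))), y_t)
    # Adversarial terms: mapped samples should fool the landing domain's discriminator ...
    adv_st = F.binary_cross_entropy_with_logits(D_t(G_st(x_s)), torch.ones(16, 1))
    adv_ts = F.binary_cross_entropy_with_logits(D_s(G_ts(x_t)), torch.ones(4, 1))
    # ... and the augmentation (eqs. 6-7): the landing domain's task model must also
    # classify the mapped samples correctly, supplementing a weak discriminator.
    aug_st = F.cross_entropy(M_t(G_st(x_s)), y_s)
    aug_ts = F.cross_entropy(M_s(G_ts(x_t)), y_t)
    return cyc_s + cyc_t + adv_st + adv_ts + aug_st + aug_ts

def discriminator_losses():
    real_t = F.binary_cross_entropy_with_logits(D_t(x_t), torch.ones(4, 1))
    fake_t = F.binary_cross_entropy_with_logits(D_t(G_st(x_s).detach()), torch.zeros(16, 1))
    real_s = F.binary_cross_entropy_with_logits(D_s(x_s), torch.ones(16, 1))
    fake_s = F.binary_cross_entropy_with_logits(D_s(G_ts(x_t).detach()), torch.zeros(4, 1))
    return real_t + fake_t + real_s + fake_s

g_loss, d_loss = generator_losses(), discriminator_losses()   # optimized alternately
```

The key design choice the sketch highlights is that the scarce target batch never has to be reconstructed pixel-by-pixel: the task models carry the content constraint in both cycles, while the discriminators only shape the style.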
We also apply our proposed model to domain adaptation in speech recognition. We use the TIMIT dataset, where the male to female speaker ratio is about 7:3, and thus we choose the data subset from male speakers as the source and the subset from female speakers as the target domain. We evaluate the FORMULA0; multi-discriminator training significantly impacts adaptation performance. Therefore, we used the multi-discriminator architecture as the discriminator for the adversarial loss in our evaluation. Our task specific model is a pre-trained speech recognition model within each domain in this set of experiments. The results are shown in TAB2. We observe significant performance improvements over the baseline model, as well as comparable or better performance compared to previous methods. It is interesting to note that the performance of the proposed model on the adapted male speech (M → F) almost matches the baseline model performance, where the model is trained on true female speech. In addition, the performance gap in this case is significant compared to other methods, which suggests the adapted distribution is indeed close to the true target distribution. Furthermore, when combined with more data, our model further outperforms the baseline by a noticeable margin. In this paper, we propose to use augmented cycle-consistency adversarial learning for domain adaptation and introduce a task specific model to facilitate learning domain-related mappings. We enforce cycle-consistency using a task specific loss instead of the conventional reconstruction objective. Additionally, we use the task specific model as an additional source of information for the discriminator in the corresponding domain. We demonstrate the effectiveness of our proposed approach by evaluating on two domain adaptation tasks, and in both cases we achieve significant performance improvements compared to the baseline. By extending the definition of the task-specific model to unsupervised learning, such as a reconstruction loss using an autoencoder, or self-supervision, our proposed method would work in all settings of domain adaptation. Such an unsupervised task can be speech modeling using WaveNet BID30, or language modeling using recurrent or transformer networks BID24.
A robust domain adaptation method that employs a task-specific loss in cyclic adversarial learning
319
scitldr
The success of popular algorithms for deep reinforcement learning, such as policy-gradients and Q-learning, relies heavily on the availability of an informative reward signal at each timestep of the sequential decision-making process. When rewards are only sparsely available during an episode, or a rewarding feedback is provided only after episode termination, these algorithms perform sub-optimally due to the difficulty in credit assignment. Alternatively, trajectory-based policy optimization methods, such as the cross-entropy method and evolution strategies, do not require per-timestep rewards, but have been found to suffer from high sample complexity by completely forgoing the temporal nature of the problem. Improving the efficiency of RL algorithms in real-world problems with sparse or episodic rewards is therefore a pressing need. In this work, we introduce a self-imitation learning algorithm that exploits and explores well in the sparse and episodic reward settings. We view each policy as a state-action visitation distribution and formulate policy optimization as a divergence minimization problem. We show that with the Jensen-Shannon divergence, this divergence minimization problem can be reduced into a policy-gradient algorithm with shaped rewards learned from experience replays. Experimental results indicate that our algorithm performs comparably to existing algorithms in environments with dense rewards, and significantly better in environments with sparse and episodic rewards. We then discuss limitations of self-imitation learning, and propose to solve them by using Stein variational policy gradient descent with the Jensen-Shannon kernel to learn multiple diverse policies. We demonstrate its effectiveness on a challenging variant of continuous-control MuJoCo locomotion tasks. Deep reinforcement learning (RL) has demonstrated significant applicability and superior performance in many problems outside the reach of traditional algorithms, such as computer and board games BID28, continuous control BID25, and robotics. Using deep neural networks as function approximators, many classical RL algorithms have been shown to be very effective in solving sequential decision problems. For example, a policy that selects actions given a state observation can be parameterized by a deep neural network that takes the current state observation as input and gives an action or a distribution over actions as output. Value functions that take both a state observation and an action as inputs and predict the expected future reward can also be parameterized as neural networks. In order to optimize such neural networks, policy gradient methods BID29 BID37 BID38 and Q-learning algorithms BID28 capture the temporal structure of the sequential decision problem and decompose it into a supervised learning problem, guided by the immediate and discounted future reward from rollout data. Unfortunately, when the reward signal becomes sparse or delayed, these RL algorithms may suffer from inferior performance and inefficient sample complexity, mainly due to the scarcity of immediate supervision when training happens in a single-timestep manner. This is known as the temporal credit assignment problem BID44. For instance, consider the Atari Montezuma's Revenge game: a reward is received after collecting certain items or arriving at the final destination of the lowest level, while no reward is received as the agent is trying to reach these goals.
The sparsity of the reward makes the neural network training very inefficient and also poses challenges in exploration. It is not hard to see that many of the real-world problems tend to be of the form where rewards are either only sparsely available during an episode, or the rewards are episodic, meaning that a non-zero reward is only provided at the end of the trajectory or episode. In addition to policy-gradient and Q-learning, alternative algorithms, such as those for global-or stochastic-optimization, have recently been studied for policy search. These algorithms do not decompose trajectories into individual timesteps, but instead apply zeroth-order finite-difference gradient or gradient-free methods to learn policies based on the cumulative rewards of the entire trajectory. Usually, trajectory samples are first generated by running the current policy and then the distribution of policy parameters is updated according to the trajectory-returns. The cross-entropy method and evolution strategies BID36 are two nominal examples. Although their sample efficiency is often not comparable to the policy gradient methods when dense rewards are available from the environment, they are more widely applicable in the sparse or episodic reward settings as they are agnostic to task horizon, and only the trajectorybased cumulative reward is needed. Our contribution is the introduction of a new algorithm based on policy-gradients, with the objective of achieving better performance than existing RL algorithms in sparse and episodic reward settings. Using the equivalence between the policy function and its state-action visitation distribution, we formulate policy optimization as a divergence minimization problem between the current policy's visitation and the distribution induced by a set of experience replay trajectories with high returns. We show that with the Jensen-Shannon divergence (D JS), this divergence minimization problem can be reduced into a policy-gradient algorithm with shaped, dense rewards learned from these experience replays. This algorithm can be seen as self-imitation learning, in which the expert trajectories in the experience replays are self-generated by the agent during the course of learning, rather than using some external demonstrations. We combine the divergence minimization objective with the standard RL objective, and empirically show that the shaped, dense rewards significantly help in sparse and episodic settings by improving credit assignment. Following that, we qualitatively analyze the shortcomings of the self-imitation algorithm. Our second contribution is the application of Stein variational policy gradient (SVPG) with the Jensen-Shannon kernel to simultaneously learn multiple diverse policies. We demonstrate the benefits of this addition to the self-imitation framework by considering difficult exploration tasks with sparse and deceptive rewards. Related Works. Divergence minimization has been used in various policy learning algorithms. Relative Entropy Policy Search (REPS) BID33 restricts the loss of information between policy updates by constraining the KL-divergence between the state-action distribution of old and new policy. Policy search can also be formulated as an EM problem, leading to several interesting algorithms, such as RWR BID32 and PoWER BID20. Here the M-step minimizes a KL-divergence between trajectory distributions, leading to an update rule which resembles return-weighted imitation learning. Please refer to BID7 for a comprehensive exposition. 
MATL BID47 uses adversarial training to bring state occupancy from a real and simulated agent close to each other for efficient transfer learning. In Guided Policy Search (GPS, BID21), a parameterized policy is trained by constraining the divergence between the current policy and a controller learnt via trajectory optimization. Learning from Demonstrations (LfD). The objective in LfD, or imitation learning, is to train a control policy to produce a trajectory distribution similar to the demonstrator. Approaches for self-driving cars BID4 and drone manipulation BID34 have used human-expert data, along with Behavioral Cloning algorithm to learn good control policies. Deep Q-learning has been combined with human demonstrations to achieve performance gains in Atari and robotics tasks BID46 BID30. Human data has also been used in the maximum entropy IRL framework to learn cost functions under which the demonstrations are optimal. BID17 use the same framework to derive an imitation-learning algorithm (GAIL) which is motivated by minimizing the divergence between agent's rollouts and external expert demonstrations. Besides humans, other sources of expert supervision include planningbased approaches such as iLQR and MCTS. Our algorithm departs from prior work in forgoing external supervision, and instead using the past experiences of the learner itself as demonstration data. Exploration and Diversity in RL. Count-based exploration methods utilize state-action visitation counts N (s, a), and award a bonus to rarely visited states BID42. In large statespaces, approximation techniques BID45, and estimation of pseudo-counts by learning density models BID3 BID13 has been researched. Intrinsic motivation has been shown to aid exploration, for instance by using information gain or prediction error BID41 as a bonus. Hindsight Experience Replay adds additional goals (and corresponding rewards) to a Q-learning algorithm. We also obtain additional rewards, but from a discriminator trained on past agent experiences, to accelerate a policy-gradient algorithm. Prior work has looked at training a diverse ensemble of agents with good exploratory skills BID27 BID6 BID12. To enjoy the benefits of diversity, we incorporate a modification of SVPG BID27 in our final algorithm. In very recent work, BID31 propose exploiting past good trajectories to drive exploration. Their algorithm buffers (s, a) and the corresponding return for each transition in rolled trajectories, and reuses them for training if the stored return value is higher than the current state-value estimate. Our approach presents a different objective for self-imitation based on divergence-minimization. With this view, we learn shaped, dense rewards which are then used for policy optimization. We further improve the algorithm with SVPG. Reusing high-reward trajectories has also been explored for program synthesis and semantic parsing tasks BID23 BID0. We start with a brief introduction to RL in Section 2.1, and then introduce our main algorithm of self-imitating learning in Section 2.2. Section 2.3 further extends our main method to learn multiple diverse policies using Stein variational policy gradient with Jensen-Shannon kernel. A typical RL setting involves an environment modeled as a Markov Decision Process with an unknown system dynamics model p(s t+1 |s t, a t) and an initial state distribution p 0 (s 0). 
An agent interacts sequentially with the environment in discrete time-steps using a policy π which maps the an observation s t ∈ S to either a single action a t (deterministic policy), or a distribution over the action space A (stochastic policy). We consider the scenario of stochastic policies over high-dimensional, continuous state and action spaces. The agent receives a per-step reward r t (s t, a t) ∈ R, and the RL objective involves maximization of the expected discounted sum of rewards, η(π θ) = E p0,p,π ∞ t=0 γ t r(s t, a t), where γ ∈ is the discount factor. The action-value function is DISPLAYFORM0. We define the unnormalized γ-discounted statevisitation distribution for a policy π by ρ π (s) = ∞ t=0 γ t P (s t = s|π), where P (s t = s|π) is the probability of being in state s at time t, when following policy π and starting state s 0 ∼ p 0. The expected policy return η(π θ) can then be written as E ρπ(s,a) [r(s, a)], where ρ π (s, a) = ρ π (s)π(a|s) is the state-action visitation distribution. Using the policy gradient theorem BID43, we can get the direction of ascent ∇ θ η(π θ) = E ρπ(s,a) ∇ θ log π θ (a|s)Q π (s, a). Although the policy π(a|s) is given as a conditional distribution, its behavior is better characterized by the corresponding state-action visitation distribution ρ π (s, a), which wraps the MDP dynamics and fully decides the expected return via η(π) = E ρπ [r(s, a)]. Therefore, distance metrics on a policy π should be defined with respect to the visitation distribution ρ π, and the policy search should be viewed as finding policies with good visitation distributions ρ π that yield high reward. Suppose we have access to a good policy π *, then it is natural to consider finding a π such that its visitation distribution ρ π matches ρ π *. To do so, we can define a divergence measure D(ρ π, ρ π *) that captures the similarity between two distributions, and minimize this divergence for policy improvement. Assume there exists an expert policy π E, such that policy optimization can be framed as minimizing the divergence min π D(ρ π, ρ π E), that is, finding a policy π to imitate π E. In practice, however, we do not have access to any real guiding expert policy. Instead, we can maintain a selected subset M E of highly-rewarded trajectories from the previous rollouts of policy π, and optimize the policy π to minimize the divergence between ρ π and the empirical state-action pair distribution {(s i, a i)} M E: DISPLAYFORM0 Since it is not always possible to explicitly formulate ρ π even with the exact functional form of π, we generate rollouts from π in the environment and obtain an empirical distribution of ρ π. To measure the divergence between two empirical distributions, we use the Jensen-Shannon divergence, with the following variational form (up to a constant shift) as exploited in GANs BID15: DISPLAYFORM1 where d(s, a) and d E (s, a) are empirical density estimators of ρ π and ρ π E, respectively. Under certain assumptions, we can obtain an approximate gradient of D JS w.r.t the policy parameters, thus enabling us to optimize the policy. Gradient Approximation: Let ρ π (s, a) and ρ π E (s, a) be the state-action visitation distributions induced by two policies π and π E respectively. Let d π and d π E be the surrogates to ρ π and ρ π E, respectively, obtained by solving Equation 2. Then, if the policy π is parameterized by θ, the gradient of D JS (ρ π, ρ π E) with respect to policy parameters (θ) can be approximated as: DISPLAYFORM2 where DISPLAYFORM3.. 
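To make the variational estimate of D_JS concrete, the following is a minimal sketch of the standard GAN-style surrogate: a small discriminator is trained to separate (s, a) samples drawn from the current policy's rollouts from samples drawn from the replay trajectories, and its output yields a per-timestep shaped reward that is large where the replay ("expert") distribution dominates. This is a simplification of the formulation above: it uses a single discriminator in place of separate density estimators for d_π and d_E, and the network sizes, optimizer usage and the -log D reward convention are our assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """D(s, a) in (0, 1); trained to output 1 on agent samples, 0 on replay samples."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def train_discriminator(D, optimizer, agent_sa, replay_sa):
    """One step on the variational D_JS objective (binary cross-entropy)."""
    bce = nn.BCELoss()
    d_agent = D(*agent_sa)      # (s, a) batches from current policy rollouts
    d_replay = D(*replay_sa)    # (s, a) batches from the replay memory M_E
    loss = bce(d_agent, torch.ones_like(d_agent)) + \
           bce(d_replay, torch.zeros_like(d_replay))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def shaped_reward(D, s, a, eps=1e-8):
    """High where the replay trajectories visit more often than the current policy."""
    with torch.no_grad():
        return -torch.log(D(s, a) + eps).squeeze(-1)
```

With an optimal discriminator, D(s, a) approaches d_π / (d_π + d_E), so -log D(s, a) is large exactly in the regions visited more by the replay distribution than by the learner, which matches the interpretation of the shaped reward given below.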
The derivation of the approximation and the underlying assumptions are in Appendix 5.1. Next, we introduce a simple and inexpensive approach to construct the replay memory M E using highreturn past experiences during training. In this way, ρ π E can be seen as a mixture of deterministic policies, each representing a delta point mass distribution in the trajectory space or a finite discrete visitation distribution of state-action pairs. At each iteration, we apply the current policy π θ to sample b trajectories {τ} b 1. We hope to include in M E, the top-k trajectories (or trajectories with returns above a threshold) generated thus far during the training process. For this, we use a priorityqueue list for M E which keeps the trajectories sorted according to the total trajectory reward. The reward for each newly sampled trajectory in {τ} b 1 is compared with the current threshold of the priority-queue, updating M E accordingly. The frequency of updates is impacted by the exploration capabilities of the agent and the stochasticity in the environment. We find that simply sampling noisy actions from Gaussian policies is sufficient for several locomotion tasks (Section 3). To handle more challenging environments, in the next sub-section, we augment our policy optimization procedure to explicitly enhance exploration and produce an ensemble of diverse policies. In the usual imitation learning framework, expert demonstrations of trajectories-from external sources-are available as the empirical distribution of ρ π E of an expert policy π E. In our approach, since the agent learns by treating its own good past experiences as the expert, we can view the algorithm as self-imitation learning from experience replay. As noted in Equation 3, the gradient estimator of D JS has a form similar to policy gradients, but for replacing the true reward function with per-timestep reward defined as log(DISPLAYFORM4). Therefore, it is possible to interpolate the gradient of D JS and the standard policy gradient. We would highlight the benefit of this interpolation soon. The net gradient on the policy parameters is: DISPLAYFORM5 where Q r is the Q function with true rewards, and π E is the mixture policy represented by the DISPLAYFORM6. r φ (s, a) can be computed using parameterized networks for densities d π and d π E, which are trained by solving the D JS optimization (Eq 2) using the current policy rollouts and M E, where φ includes the parameters for d π and d π E. Using Equation 3, the interpolated gradient can be further simplified to: DISPLAYFORM7 where DISPLAYFORM8 is the Q function calculated using − log r φ (s, a) as the reward. This reward is high in the regions of the S × A space frequented more by the expert than the learner, and low in regions visited more by the learner than the expert. The effective Q in Equation 5 is therefore an interpolation between Q r obtained with true environment rewards, and Q r φ obtained with rewards which are implicitly shaped to guide the learner towards expert behavior. In environments with sparse or deceptive rewards, where the signal from Q r is weak or sub-optimal, a higher weight on Q r φ enables successful learning by imitation. We show this empirically in our experiments. We further find that even in cases with dense environment rewards, the two gradient components can be successfully combined for policy optimization. The complete algorithm for self-imitation is outlined in Appendix 5.2 (Algorithm 1).Limitations of self-imitation. 
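Before turning to those limitations, the replay construction and reward interpolation just described can be sketched as follows: a priority queue keeps only the top-k trajectories seen so far, ranked by total return, and a small mixing helper exploits the linearity of Q in the reward to realize the interpolated gradient of Equation 5 by interpolating per-timestep rewards. The capacity default and function names are our choices, not the paper's.

```python
import heapq

class BestTrajectoryBuffer:
    """Keeps the top-k trajectories seen so far, ranked by total return.
    A trajectory is a list of (state, action) pairs; the union of stored
    pairs serves as the empirical surrogate for rho_{pi_E}."""
    def __init__(self, capacity=10):
        self.capacity = capacity
        self._heap = []        # min-heap of (return, counter, trajectory)
        self._counter = 0      # tie-breaker so heapq never compares trajectories

    def add(self, trajectory, total_return):
        item = (total_return, self._counter, trajectory)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif total_return > self._heap[0][0]:   # beats the current threshold
            heapq.heapreplace(self._heap, item)

    def state_action_pairs(self):
        return [sa for (_, _, traj) in self._heap for sa in traj]

def mixed_reward(r_env, r_shaped, nu=0.8):
    """Because Q is linear in the reward, interpolating per-step rewards with
    weight nu is one way to realize the interpolated gradient of Equation 5."""
    return (1.0 - nu) * r_env + nu * r_shaped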
We now elucidate some shortcomings of the self-imitation approach. Since the replay memory M E is only constructed from the past training rollouts, the quality of the trajectories in M E is hinged on good exploration by the agent. Consider a maze environment where the robot is only rewarded when it arrives at a goal G placed in a far-off corner. Unless the robot reaches G once, the trajectories in M E always have a total reward of zero, and the learning signal from Q r φ is not useful. Secondly, self-imitation can lead to sub-optimal policies when there are local minima in the policy optimization landscape; for example, assume the maze has a second goal G in the opposite direction of G, but with a much smaller reward. With simple exploration, the agent may fill M E with below-par trajectories leading to G, and the reinforcement from Q r φ would drive it further to G. Thirdly, stochasticity in the environment may make it difficult to recover the optimal policy just by imitating the past top-k rollouts. For instance, in a 2-armed bandit problem with reward distributions Bernoulli (p) and Bernoulli (p+), rollouts from both the arms get conflated in M E during training with high probability, making it hard to imitate the action of picking the arm with the higher expected reward. We propose to overcome these pitfalls by training an ensemble of self-imitating agents, which are explicitly encouraged to visit different, non-overlapping regions of the state-space. This helps to discover useful rewards in sparse settings, avoids deceptive reward traps, and in environments with reward-stochasticity like the 2-armed bandit, increases the probability of the optimal policy being present in the final trained ensemble. We detail the enhancements next. One approach to achieve better exploration in challenging cases like above is to simultaneously learn multiple diverse policies and enforce them to explore different parts of the high dimensional space. This can be achieved based on the recent work by BID27 on Stein variational policy gradient (SVPG). The idea of SVPG is to find an optimal distribution q(θ) over the policy parameters θ which maximizes the expected policy returns, along with an entropy regularization that enforces diversity on the parameter space, i.e. DISPLAYFORM0 Without a parametric assumption on q, this problem admits a challenging functional optimization problem. Stein variational gradient descent (SVGD, BID26) provides an efficient solution for solving this problem, by approximating q with a delta measure q = n i=1 δ θi /n, where DISPLAYFORM1 is an ensemble of policies, and iteratively update {θ i} with DISPLAYFORM2 where k(θ j, θ i) is a positive definite kernel function. The first term in ∆θ i moves the policy to regions with high expected return (exploitation), while the second term creates a repulsion pressure between policies in the ensemble and encourages diversity (exploration). The choice of kernel is critical. BID27 used a simple Gaussian RBF kernel k(θ j, θ i) = exp(− θ j − θ i 2 2 /h), with the bandwidth h dynamically adapted. This, however, assumes a flat Euclidean distance between θ j and θ i, ignoring the structure of the entities defined by them, which are probability distributions. A statistical distance, such as D JS, serves as a better metric for comparing policies BID1 BID19. 
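A minimal NumPy sketch of the ensemble update above is given below, written with the RBF kernel just described and flattened policy parameters; the median-heuristic bandwidth and learning rate are our assumptions.

```python
import numpy as np

def rbf_kernel(thetas, h=None):
    """thetas: (n, p) array of flattened policy parameters."""
    diffs = thetas[:, None, :] - thetas[None, :, :]
    sq = np.sum(diffs ** 2, axis=-1)
    if h is None:  # median heuristic (computed over all pairs, incl. the zero diagonal)
        h = np.median(sq) / np.log(len(thetas) + 1) + 1e-8
    K = np.exp(-sq / h)
    gradK = -2.0 / h * diffs * K[:, :, None]   # gradK[j, i] = grad_{theta_j} k(theta_j, theta_i)
    return K, gradK

def svpg_step(thetas, policy_grads, alpha=1.0, lr=1e-3):
    """One SVPG update; policy_grads[i] approximates grad_theta eta(pi_theta_i)
    (in our case, the interpolated self-imitation gradient of Equation 5)."""
    n = len(thetas)
    K, gradK = rbf_kernel(thetas)
    # kernel-weighted exploitation term + repulsion term
    phi = (K @ (policy_grads / alpha) + gradK.sum(axis=0)) / n
    return thetas + lr * phi
```

Swapping the Euclidean distance inside the kernel for a statistical distance between visitation distributions is exactly the modification proposed next.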
Motivated by this, we propose to improve SVPG using JS kernel k(θ j, θ i) = exp(−D JS (ρ π θ j, ρ π θ i)/T ), where ρ π θ (s, a) is the state-action visitation distribution obtained by running policy π θ, and T is the temperature. The second exploration term in SVPG involves the gradient of the kernel w.r.t policy parameters. With the JS kernel, this requires estimating gradient of D JS, which as shown in Equation 3, can be obtained using policy gradients with an appropriately trained reward function. Our full algorithm is summarized in Appendix 5.3 (Algorithm 2). In each iteration, we apply the SVPG gradient to each of the policies, where the ∇ θ η(π θ) in Equation 6 is the interpolated gradient from self-imitation (Equation 5). We also utilize state-value function networks as baselines to reduce the variance in sampled policy-gradients. Our goal in this section is to answer the following questions: 1) How does self-imitation fare against standard policy gradients under various reward distributions from the environment, namely episodic, noisy and dense? 2) How far does the SVPG exploration go in overcoming the limitations of selfimitation, such as susceptibility to local-minimas?We benchmark high-dimensional, continuous-control locomotion tasks based on the MuJoCo physics simulator by extending the OpenAI Baselines BID8 framework. Our control policies (θ i) are modeled as unimodal Gaussians. All feed-forward networks have two layers of 64 hidden units each with tanh non-linearity. For policy-gradient, we use the clipped-surrogate based PPO algorithm BID39. Further implementation details are in the Appendix. Table 1: Performance of PPO and Self-Imitation (SI) on tasks with episodic rewards, noisy rewards with masking probability pm, and dense rewards. All runs use 5M timesteps of interaction with the environment. ES performance at 5M timesteps is taken from BID36. Missing entry denotes that we were unable to obtain the 5M timestep performance from the paper. DISPLAYFORM0 We evaluate the performance of self-imitation with a single agent in this sub-section; combination with SVPG exploration for multiple agents is discussed in the next. We consider the locomotion tasks in OpenAI Gym under 3 separate reward distributions: Dense refers to the default reward function in Gym, which provides a reward for each simulation timestep. In episodic reward setting, rather than providing r(s t, a t) at each timestep of an episode, we provide t r(s t, a t) at the last timestep of the episode, and zero reward at other timesteps. This is the case for many practical settings where the reward function is hard to design, but scoring each trajectory, possibly by a human BID5, is feasible. In noisy reward setting, we probabilistically mask out each out each per-timestep reward r(a t, s t) in an episode. Reward masking is done independently for every new episode, and therefore, the agent receives non-zero feedback at different-albeit only few-timesteps in different episodes. The probability of masking-out or suppressing the rewards is denoted by p m.In FIG0, we plot the learning curves on three tasks with episodic rewards. Recall that ν is the hyper-parameter controlling the weight distribution between gradients with environment rewards and the gradients with shaped reward from r φ (Equation 5). The baseline PPO agents use ν = 0, meaning that the entire learning signal comes from the environment. We compare them with selfimitating (SI) agents using a constant value ν = 0.8. 
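For concreteness, the episodic and noisy reward settings described above can be reproduced with simple environment wrappers along the following lines; this is a sketch assuming the classic 4-tuple Gym step API, and the paper's exact implementation may differ.

```python
import numpy as np
import gym

class EpisodicReward(gym.Wrapper):
    """Withhold per-step rewards and pay the accumulated sum at episode end."""
    def __init__(self, env):
        super().__init__(env)
        self._acc = 0.0

    def reset(self, **kwargs):
        self._acc = 0.0
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, rew, done, info = self.env.step(action)
        self._acc += rew
        return obs, (self._acc if done else 0.0), done, info

class NoisyReward(gym.Wrapper):
    """Independently mask out each per-timestep reward with probability p_m."""
    def __init__(self, env, p_m=0.9):
        super().__init__(env)
        self.p_m = p_m

    def step(self, action):
        obs, rew, done, info = self.env.step(action)
        if np.random.rand() < self.p_m:
            rew = 0.0
        return obs, rew, done, info
```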
The capacity of M E is fixed at 10 trajectories. We didn't observe our method to be particularly sensitive to the choice of ν and the capacity value. For instance, ν = 1 works equally well. Further ablation on these two hyper-parameters can be found in the Appendix. In FIG0, we see that the PPO agents are unable to make any tangible progress on these tasks with episodic rewards, possibly due to difficulty in credit assignment -the lumped rewards at the end of the episode can't be properly attributed to the individual state-action pairs during the episode. In case of Self-Imitation, the algorithm has access to the shaped rewards for each timestep, derived from the high-return trajectories in M E. This makes credit-assignment easier, leading to successful learning even for very high-dimensional control tasks such as Humanoid. Table 1 summarizes the final performance, averaged over 5 runs with random seeds, under the various reward settings. For the noisy rewards, we compare performance with two different reward masking values -suppressing each reward r(s t, a t) with 90% probability (p m = 0.9), and with 50% probability (p m = 0.5). The density of rewards increases across the reward settings from left to right in Table 1. We find that SI agents (ν = 0.8) achieve higher average score than the baseline PPO agents (ν = 0) in majority of the tasks for all the settings. This indicates that not only does self-imitation vastly help when the environment rewards are scant, it can readily be incorporated with the standard policy gradients via interpolation, for successful learning across reward settings. For completion, we include performance of CEM and ES since these algorithms depend only on the total trajectory rewards and don't exploit the temporal structure. CEM perform poorly in most of the cases. ES, while being able to solve the tasks, is sample-inefficient. We include ES performance from BID36 after 5M timesteps of training for a fair comparison with our algorithm. We now conduct experiments to show how self-imitation can lead to sub-optimal policies in certain cases, and how the SVPG objective, which trains an ensemble with an explicit D JS repulsion between policies, can improve performance.2D-Navigation. Consider a simple Maze environment where the start location of the agent (blue particle) is shown in the figure on the right, along with two regions -the red region is closer to agent's starting location but has a per-timestep reward of only 1 point if the agent hovers over it; the green region is on the other side of the wall but has a per-timestep reward of 10 points. We run 8 independent, non-interacting, self-imitating (with ν = 0.8) agents on this task. This ensemble is denoted as SI-independent. Figures 2a plots the state-visitation density for SI-independent after training, from which it is evident that the agents get trapped in the local minima. The red-region is relatively easily explored and trajectories leading to it fill the M E, causing sub-optimal imitation. We contrast this with an instantiation of our full algorithm, which is referred to as SI-interact-JS. It is composed of 8 self-imitating agents which share information for gradient calculation with the SVPG objective (Equation 6). The temperature T = 0.5 is held constant, and the weight on exploration-facilitating repulsion term (α) is linearly decayed over time. FIG2 depicts the state-visitation density for this ensemble. 
SI-interact-JS explores wider portions of the maze, with multiple agents reaching the green zone of high reward. Figures 2c and 2d show the kernel matrices for the two ensembles after training. Cell (i, j) in the matrix corresponds to the kernel value k(θ i, θ j) = exp(−JS(ρ i, ρ j)/T ). For SI-independent, many darker cells indicate that policies are closer (low JS). For SI-interact-JS, which explicitly tries to decrease k(θ i, θ j), the cells are noticeably lighter, indicating dissimilar policies (high JS). Behavior of PPO-independent (ν = 0) is similar to SI-independent (ν = 0.8) for the Maze task. Locomotion. To explore the limitations of self-imitation in harder exploration problems in highdimensional, continuous state-action spaces, we modify 3 MuJoCo tasks as follows -SparseHalfCheetah, SparseHopper and SparseAnt yield a forward velocity reward only when the centerof-mass of the corresponding bot is beyond a certain threshold distance. At all timesteps, there is an energy penalty to move the joints, and a survival bonus for bots that can fall over causing premature episode termination (Hopper, Ant). FIG3 plots the performance of PPO-independent, SI-independent, SI-interact-JS and SI-interact-RBF (which uses RBF-kernel from BID27 instead of the JS-kernel) on the tasks. Each of these 4 algorithms is an ensemble of 8 agents using the same amount of simulation timesteps. The are averaged over 3 separate runs, where for each run, the best agent from the ensemble after training is selected. The SI-independent agents rely solely on action-space noise from the Gaussian policy parameterization to find high-return trajectories which are added to M E as demonstrations. This is mostly inadequate or slow for sparse environments. Indeed, we find that all demonstrations in M E for SparseHopper are with the bot standing upright (or tilted) and gathering only the survival bonus, as action-space noise alone can't discover hopping behavior. Similarly, for SparseHalfCheetah, M E has trajectories with the bot haphazardly moving back and forth. On the other hand, in SI-interact-JS, the D JS repulsion term encourages the agents to be diverse and explore the state-space much more effectively. This leads to faster discovery of quality trajectories, which then provide good reinforcement through self-imitation, leading to higher overall score. SI-interact-RBF doesn't perform as well, suggesting that the JS-kernel is more formidable for exploration. PPO-independent gets stuck in the local optimum for SparseHopper and SparseHalfCheetah -the bots stand still after training, avoiding energy penalty. For SparseAnt, the bot can cross our preset distance threshold using only action-space noise, but learning is slow due to naïve exploration. We approached policy optimization for deep RL from the perspective of JS-divergence minimization between state-action distributions of a policy and its own past good rollouts. This leads to a self-imitation algorithm which improves upon standard policy-gradient methods via the addition of a simple gradient term obtained from implicitly shaped dense rewards. We observe substantial performance gains over the baseline for high-dimensional, continuous-control tasks with episodic and noisy rewards. Further, we discuss the potential limitations of the self-imitation approach, and propose ensemble training with the SVPG objective and JS-kernel as a solution. 
Through experimentation, we demonstrate the benefits of a self-imitating, diverse ensemble for efficient exploration and avoidance of local minima. An interesting future work is improving our algorithm using the rich literature on exploration in RL. Since ours is a population-based exploration method, techniques for efficient single agent exploration can be readily combined with it. For instance, parameter-space noise or curiosity-driven exploration can be applied to each agent in the SI-interact-JS ensemble. Secondly, our algorithm for training diverse agents could be used more generally. In Appendix 5.6, we show preliminary for two cases: a) hierarchical RL, where a diverse group of Swimmer bots is trained for downstream use in a complex Swimming+Gathering task; b) RL without environment rewards, relying solely on diversity as the optimization objective. Further investigation is left for future work. Let d * π (s, a) and d * E (s, a) be the exact state-action densities for the current policy (π θ) and the expert, respectively. Therefore, by definition, we have (up to a constant shift): a) is a local surrogate to ρ π θ (s, a). By approximating it to be constant in an −ball neighborhood around θ, we get the following after taking gradient of the above equation w.r.t θ: DISPLAYFORM0 DISPLAYFORM1 where DISPLAYFORM2 The last step follows directly from the policy gradient theorem (Calculate g 1 = ∇ θ η r1 (π θ) with PPO objective using r 1 reward Calculate g 2 = ∇ θ η r2 (π θ) with PPO objective using r 2 reward 9Update θ with (1 − ν)g 1 + νg 2 using ADAM We show the sensitivity of self-imitation to ν and the capacity of M E, denoted by C. The experiments in this subsection are done on Humanoid and Hopper tasks with episodic rewards. The tables show the average performance over 5 random seeds. For ablation on ν, C is fixed at 10; for ablation on C, ν is fixed at 0.8. With episodic rewards, a higher value of ν helps boost performance since the RL signal from the environment is weak. With ν = 0.8, there isn't a single best choice for C, though all values of C give better than baseline PPO (ν = 0). The diversity-promoting D JS repulsion can be used for various other purposes apart from aiding exploration in the sparse environments considered thus far. First, we consider the paradigm of hierarchical reinforcement learning wherein multiple sub-policies (or skills) are managed by a highlevel policy, which chooses the most apt sub-policy to execute at any given time. In FIG7, we use the Swimmer environment from Gym and show that diverse skills (movements) can be acquired in a pre-training phase when D JS repulsion is used. The skills can then be used in a difficult downstream task. During pre-training with SVPG, exploitation is done with policy-gradients calculated using the norm of the velocity as dense rewards, while the exploration term uses the JS-kernel. As before, we compare an ensemble of 8 interacting agents with 8 independent agents. Figures 4a and 4b depict the paths taken by the Swimmer after training with independent and interacting agents, respectively. The latter exhibit variety. FIG7 is the downstream task of Swimming+Gathering where the bot has to swim and collect the green dots, whilst avoiding the red ones. The utility of pre-training a diverse ensemble is shown in FIG7, which plots the performance on this task while training a higher-level categorical manager policy (|A| = 8). 
Diversity can sometimes also help in learning a skill without any rewards from the environment, as observed by BID10 in recent work. We consider a Hopper task with no rewards, but we do require weak supervision in form of the length of each trajectory L. Using policy-gradient Let the policy parameters be parameterized by θ. To achieve diverse, high-return policies, we seek to obtain the distribution q * (θ) which is the solution of the optimization problem: max q E θ∼q [η(θ)] + αH(q), where H(q) = E θ∼q [− log q(θ)] is the entropy of q. Solving the above equation by setting derivative to zero yields the an energy-based formulation for the optimal policy-parameter distribution: q * (θ) ∝ exp(η(θ) α ). Drawing samples from this posterior using traditional methods such as MCMC is computationally intractable. Stein variational gradient descent (SVGD; BID26) is an efficient method for generating samples and also converges to the posterior of the energy-based model. Let {θ} n 1 be the n particles that constitute the policy ensemble. SVGD provides appropriate direction for perturbing each particle such that induced KL-divergence between the particles and the target distribution q * (θ) is reduced. The perturbation (gradient) for particle θ i is given by (please see BID26 for derivation): DISPLAYFORM0 DISPLAYFORM1 where k(θ j, θ i) is a positive definite kernel function. Using q * (θ) ∝ exp(DISPLAYFORM2 α) as target distribution, and k(θ j, θ i) = exp(−D JS (ρ π θ j, ρ π θ i)/T ) as the JS-kernel, we get the gradient direction for ascent: DISPLAYFORM3 where ρ π θ (s, a) is the state-action visitation distribution for policy π θ, and T is the temperature. Also, for our case, ∇ θj η(π θj) is the interpolated gradient from self-imitation (Equation 5). The −∇ θj D JS (ρ π θ j, ρ π θ i) gradient in the above equation is the repulsion factor that pushes π θi away from π θj. Similar repulsion can be achieved by using the gradient +∇ θi D JS (ρ π θ j, ρ π θ i); note that this gradient is w.r.t θ i instead of θ j and the sign is reversed. Empirically, we find that the latter in slightly better performance. DISPLAYFORM0 This can be done in two ways -using implicit and explicit distributions. In the implicit method, we could train a parameterized discriminator network (φ) using state-actions pairs from π i and π j to implicitly approximate the ratio r s, a) ]. We could then use the policy gradient theorem to obtain the gradient of D JS as explained in Section 2.2. This, however, requires us to learn O(n 2) discriminator networks for a population of size n, one for each policy pair (i, j). To reduce the computational and memory resource burden to O(n), we opt for explicit modeling of ρ πi. Specifically, we train a network ρ ψi to approximate the state-action visitation density for each policy π i. The ρ ψ1... ρ ψn networks are learned using the D JS optimization (Equation 2), and we can easily obtain the ratio r ij (s, a) = ρ ψi (s, a)/[ρ ψi (s, a) + ρ ψj (s, a)]. The agent then uses log r ij (s, a) as the SVPG exploration rewards in the policy gradient theorem. DISPLAYFORM1 We use state-value function networks as baselines to reduce the variance in sampled policy-gradients. Each agent θ i in a population of size n trains n + 1 state-value networks corresponding to real environment rewards r(s, a), self-imitation rewards − log r φ (s, a), and n − 1 SVPG exploration rewards log r ij (s, a). In this section, we provide evaluation for a recently proposed method for self-imitation learning (SIL; BID31). 
The SIL loss function take the form: DISPLAYFORM0 In words, the algorithm buffers (s, a) and the corresponding return (R) for each transition in rolled trajectories, and reuses them for training if the stored return value is higher than the current statevalue estimate V θ (s).We use the code provided by the authors 2. As per our understanding, PPO+SIL does not use a single set of hyper-parameters for all the MuJoCo tasks (Appendix A; BID31). We follow their methodology and report numbers for the best configuration for each task. This is different from our experiments since we run all tasks on a single fix hyper-parameter set (Appendix 5.5), and therefore a direct comparison of the average scores between the two approaches is tricky. Table 3: Performance of PPO+SIL BID31 on tasks with episodic rewards, noisy rewards with masking probability pm, and dense rewards. All runs use 5M timesteps of interaction with the environment. Table 3 shows the performance of PPO+SIL on MuJoCo tasks under the various reward distributions explained in Section 3.1 -dense, episodic and noisy. We observe that, compared to the dense rewards setting (default Gym rewards), the performance suffers under the episodic case and when the rewards are masked out with p m = 0.9. Our intuition is as follows. PPO+SIL makes use of the cumulative return (R) from each transition of a past good rollout for the update. When rewards are provided only at the end of the episode, for instance, cumulative return does not help with the temporal credit assignment problem and hence is not a strong learning signal. Our approach, on the other hand, derives dense, per-timestep rewards using an objective based on divergence-minimization. This is useful for credit assignment, and as indicated in Table 1. (Section 3.1) leads to learning good policies even under the episodic and noisy p m = 0.9 settings. Our approach makes use of replay memory M E to store the past good rollouts of the agent. Offpolicy RL methods such as DQN BID28 also accumulate agent experience in a replay buffer and reuse them for learning (e.g. by reducing TD-error). In this section, we evaluate the performance of one such recent algorithm -Twin Delayed Deep Deterministic policy gradient (TD3; BID14) on tasks with episodic and noisy rewards. TD3 builds on DDPG BID25 and surpasses its performance on all the MuJoCo tasks evaluated by the authors. Table 4: Performance of TD3 BID14 on tasks with episodic rewards, noisy rewards with masking probability pm, and dense rewards. All runs use 5M timesteps of interaction with the environment. Table 4 shows that the performance of TD3 suffers appreciably with the episodic and noisy p m = 0.9 reward settings, indicating that popular off-policy algorithms (DDPG, TD3) do not exploit the past experience in a manner that accelerates learning when rewards are scarce during an episode. * For 3 tasks used in our paper-Swimmer and the high-dimensional Humanoid, HumanoidStandup-the TD3 code from the authors 3 is unable to learn a good policy even in presence of dense rewards (default Gym rewards). These tasks are also not included in the evaluation by BID14. We run a new exploration baseline -EX 2 BID13 and compare its performance to SIinteract-JS on the hard exploration MuJoCo tasks considered in Section 3.2. The EX 2 algorithm does implicit state-density ρ(s) estimation using discriminative modeling, and uses it for noveltybased exploration by adding − log ρ(s) as the bonus. We used the author provided code 4 and hyperparameter settings. 
TRPO is used as the policy gradient algorithm. Table: comparison of EX2 BID13 and SI-interact-JS on the hard-exploration MuJoCo tasks from Section 3.2. SparseHalfCheetah, SparseHopper and SparseAnt use 1M, 1M and 2M timesteps of interaction with the environment, respectively. Results are averaged over 3 separate runs.
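For reference, the PPO+SIL objective evaluated above can be sketched as follows, as we understand it from the description of BID31: transitions from the buffer contribute only when their stored return R exceeds the current value estimate V(s), through the clipped advantage (R - V(s))_+. The coefficient name and the batch reduction are our assumptions.

```python
import torch

def sil_loss(log_prob, value, stored_return, value_coef=0.01):
    """Self-imitation (SIL-style) losses on buffered transitions."""
    adv = torch.clamp(stored_return - value, min=0.0)   # (R - V(s))_+
    policy_loss = -(log_prob * adv.detach()).mean()     # imitate good past actions
    value_loss = 0.5 * (adv ** 2).mean()                # push V(s) up towards R
    return policy_loss + value_coef * value_loss
```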
Policy optimization by using past good rollouts from the agent; learning shaped rewards via divergence minimization; SVPG with JS-kernel for population-based exploration.
320
scitldr
We study the precise mechanisms which allow autoencoders to encode and decode a simple geometric shape, the disk. In this carefully controlled setting, we are able to describe the specific form of the optimal solution to the minimisation problem of the training step. We show that the autoencoder indeed approximates this solution during training. Secondly, we identify a clear failure in the generalisation capacity of the autoencoder, namely its inability to interpolate data. Finally, we explore several regularisation schemes to resolve the generalisation problem. Given the great attention that has been recently given to the generative capacity of neural networks, we believe that studying in depth simple geometric cases sheds some light on the generation process and can provide a minimal requirement experimental setup for more complex architectures. Autoencoders are neural networks, often convolutional neural networks, whose purpose is twofold. Firstly, to compress some input data by transforming it from the input domain to another space, known as the latent, or code, space. The second goal of the autoencoder is to take this latent representation and transform it back to the original space, such that the output is similar, with respect to some criterion, to the input. One of the main objectives of this learning process being to reveal important structure in the data via the latent space, and therefore to represent this data in a more meaningful fashion or one that is easier to model. Autoencoders have been proven to be extremely useful in many tasks ranging from image compression to synthesis. Many variants on the basic idea of autoencoders have been proposed, the common theme being how to impose useful properties on the learned latent space. However, very little is known about the actual inner workings and mechanisms of the autoencoder. The goal of this work is to investigate these mechanisms and describe how the autoencoder functions. Many applications of autoencoders or similar networks consider relatively high-level input objects, ranging from the MNIST handwritten digits to abstract sketches of conceptual objects BID18; BID7 ). Here, we take a radically different approach. We consider, in depth, the encoding/decoding processes of a simple geometric shape, the disk, and investigate how the autoencoder functions in this case. There are several important advantages to such an approach. Firstly, since the class of objects we consider has an explicit parametrisation, it is possible to describe the "optimal" performance of the autoencoder, ie. can it compress and uncompress a disk to and from a code space of dimensionality 1? Secondly, the setting of this study fixes certain architecture characteristics of the network, such as the number of layers, leaving fewer free parameters to tune. This means that the which we obtain are more likely to be robust than in the case of more high-level applications. Finally, it is easier to identify the roles of different components of the network, which enables us to carry out an instructive ablation study. Using this approach, we show that the autoencoder approximates the theoretical solution of the training problem when no biases are involved in the network. Secondly, we identify certain limitations in the generalisation capacity of autoencoders when the training database is incomplete with respect to the underlying manifold. 
We observe the same limitation using the architecture of BID18, which is considerably more complex and is proposed to encode natural images. Finally, we analyse several regularisation schemes and identify one in particular which greatly aids in overcoming this generalisation problem. The concept of autoencoders has been present for some time in the learning community (; BID3). The objective is to train two networks, an "encoder" and a "decoder", which transform the input data to and from a code, or latent, space which is learned by the algorithm. In many applications, the dimensionality d of the latent space is smaller than that of the original data, so that the autoencoder is encouraged to discover useful features of the data. In practice, we obviously do not know the exact value of d, but we would still like to impose as much structure in the latent space as possible. This idea lead to the regularisation in the latent space of autoencoders, which comes in several flavours. The first is the sparse autoencoder BID14 ), which attempts to have as few active (non-zero) neurons as possible in the network. This can be done either by modifying the loss function to include sparsity-inducing penalisations, or by acting directly on the values of the code z. In the latter option, one can use rectified linear units (ReLUs) to encourage zeros in the code ) or simply specifying a maximum number of non-zero values as in the "k-sparse" autoencoder BID12 ). Another approach, taken by the variational autoencoder, is to specifying the a priori distribution of the code z. BID9 use the Kullback-Leibler divergence to achieve this goal, and the authors suppose a Gaussian distribution of z. The "contractive" autoencoder BID16 ) encourages the derivatives of the code with respect to the input image to be small, meaning that the representation of the image should be robust to small changes in the input. Autoencoders can be applied to a variety of problems, such as denoising ("denoising autoencoder") or image compression BID1 ). For a good overview of autoencoders, see the book of Goodfellow et al. ). Recently, a great deal of attention has been given to the capacity of CNNs, and in particular generative adversarial networks (GANs) BID13 ) or autoencoders, to generate new images. It is well-known that these networks have important limitations, such as the tendency to produce low quality images or to reproduce images from the training set because of mode collapse. But despite these limitations, many works have investigated the generative capacity of such networks, see for instance BID4; BID17; BID15; BID18 careful study of the autoencoder are the main goals of the paper, and structure our work throughout. Before continuing, we describe our autoencoder in a more formal fashion. We denote input images with x ∈ R m×n and z ∈ R d, where m and n are the height and the width of the image, respectively, and d is the dimension of z. The autoencoder consists of the couple (E, D), the encoder and decoder which transform to and from the "code" space, with E: DISPLAYFORM0 As mentioned, the goal of the auto-encoder is to compress and uncompress a signal into a representation with a smaller dimensionality, while losing as little information as possible. 
Thus, we search for the parameters of the encoder and the decoder, which we denote with Θ E and Θ D respectively, by minimising DISPLAYFORM1 The autoencoder consists of a series of convolutions with filters of small compact support, subsampling/up-sampling, biases and non-linearities. The values of the filters are termed the weights of the network, and we denote the encoding filters with w,i, where is the layer number and i the number of the filter. Similarly, we denote the decoding filters w,i, the encoding and decoding biases b,i and b,i. We choose leaky ReLUs for the non-linearities: DISPLAYFORM2 with parameter α = 0.2. Thus, the output of a given encoding layer is given by DISPLAYFORM3 and similarly for the decoding layers (except for an zero-padding upsampling prior to the convolution), with weights and biases w and b, respectively. We consider images of a fixed (square) spatial support DISPLAYFORM4 and also that the subsampling rate s is fixed. In the encoder, subsampling is carried out until and z is a single scalar. Thus, the number of layers in our encoder and decoder is not an independent parameter. We set the support of all the convolutional filters in our network to 3 × 3. The architecture of our autoencoder remains the same throughout the paper, and is shown in FIG0. We summarise our parameters in Table 1. We now investigate the inner mechanics of autoencoders in the case of a simple geometric shape: the disk. Our training set consists of binary images of centred disks of random radii, with one disk per image in the test database. Each disk image is determined by the indicator function of a disk of radius r, and is therefore binary. Theoretically, an optimal encoder would only need one scalar to represent the image. Therefore the architecture in FIG0 is set up to ensure a code size d = 1. Our first important observation (see FIG1) is that not only can the network learn to encode/decode On the left side, we have interpolated z in the latent space between two encoded input disks (one small and one large), and show the decoded, output image. It can be seen that the training works well, with the ing code space being meaningful. On the right, we plot the radii of the input disks against their codes z ∈ R. The autoencoder appears to represent the disks with their area.disks, but that the code z which is learned can be interpolated and the corresponding decoding is meaningful. Thus, in this case, the autoencoder is able to encode/decode the data in an optimal fashion. We now proceed to see how the autoencoder actually works on a detailed level, starting with the encoding step. Encoding a centred disk of a certain radius to a scalar z can be done in several ways, the most intuitive being integrating over the area of the disk (encoding a scalar proportionate to its area) or integrating over the perimeter of the disk (encoding a scalar proportionate to its radius). The empirical evidence given by our experiments points towards the first option, since z seems to represent the area and not the radius of the input disks (see FIG1). If this is the case, the integration operation can be done by means of a simple cascade of linear filters. As such, we should be able to encode the disks with a network containg only convolutions and sub-sampling, and no having non-linearities. We have verified experimentally this with such an encoder. A more difficult question is how does the autoencoder convert a scalar, z, to an output disk of a certain size (the decoding process). 
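A minimal sketch of this setup is given below: a generator for centred binary disks of random radii, and a small convolutional autoencoder with 3x3 filters, leaky ReLUs (alpha = 0.2) and stride-2 subsampling until the code is a single scalar (d = 1), mirrored in the decoder. The 64x64 image size, channel widths, nearest-neighbour upsampling (the paper describes zero-padding upsampling) and the PyTorch framework are our assumptions, not specifications from the paper.

```python
import numpy as np
import torch
import torch.nn as nn

def disk_image(radius, size=64):
    """Binary image of a centred disk with the given radius."""
    c = (size - 1) / 2.0
    yy, xx = np.mgrid[0:size, 0:size]
    return ((xx - c) ** 2 + (yy - c) ** 2 <= radius ** 2).astype(np.float32)

def make_dataset(n=1000, size=64, r_max=32):
    radii = np.random.uniform(1, r_max, size=n)
    return np.stack([disk_image(r, size) for r in radii])[:, None], radii

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                         nn.LeakyReLU(0.2))

class DiskAutoencoder(nn.Module):
    """Six stride-2 layers bring a 1x64x64 image down to a 1x1x1 code (d = 1);
    the decoder mirrors this with upsampling followed by 3x3 convolutions."""
    def __init__(self, width=8):
        super().__init__()
        chans = [1, width, width, width, width, width, 1]
        self.encoder = nn.Sequential(*[conv_block(chans[i], chans[i + 1])
                                       for i in range(6)])
        dec = []
        for i in range(6):
            dec += [nn.Upsample(scale_factor=2),
                    nn.Conv2d(chans[6 - i], chans[5 - i], 3, padding=1),
                    nn.LeakyReLU(0.2)]
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        z = self.encoder(x)            # shape (batch, 1, 1, 1): one scalar per image
        return self.decoder(z), z

# usage sketch:
# x = torch.from_numpy(make_dataset(n=256)[0])
# recon, z = DiskAutoencoder()(x)
# loss = ((recon - x) ** 2).mean()
```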
One approach to understanding the inner workings of autoencoders, and indeed any neural network, is to remove certain elements of the network and to see how it responds, otherwise known as an ablation study. We found that removing the biases of the autoencoder leads to very interesting observations. While, as we have shown, the encoder is perfectly able to function without these biases, this is not the case for the decoder. FIG2 shows the of this ablation. The decoder learns to spread the energy of z in the output according to a certain function g. Thus, the goal of the biases is to shift the intermediary (hidden layer) images such that a cut-off can be carried out to create a satisfactory decoding. We have investigated the behaviour of the decoder without biases in detail. In particular, we will derive an explicit form for the energy minimized by the network, for which a closed form solution can be found (see Appendix A), but more importantly for which we will show experimentally that the network finds the right solution. We first make a general observation about this configuration (without biases). Consider a decoder, without biases DISPLAYFORM0, where U stands for upsampling with zero-padding. In this case, the decoder acts multiplicatively on z, meaning that ∀z, ∀λ ∈ R +, D(λz) = λD(z). Proof. For a fixed z and for any λ > 0. We have DISPLAYFORM1 This reasoning can be applied successively to each layer up to the output y. When the code z is one dimensional, the decoder can be summarized as two linear functions, one for positive codes and a second one for the negative codes. However, in all our experiments, the autoencoder without bias has chosen to use only one possible sign for the code, ing in a linear decoder. Furthermore, the profiles in FIG2 suggest that a single function is learned, and that this function is multiplied by a factor which is constant for each radius. In light of Proposition 1, this means that the decoder has chosen a fixed sign for the code and that the decoder is linear. This can be expressed as DISPLAYFORM2 where t is a spatial variable and r ∈ (0, m 2] is the radius of the disk. This is checked experimentally in Figure 7 in Appendix A. In this case, we can write the optimisation problem of the decoder aŝ DISPLAYFORM3 where R is the maximum radius observed in the training set, DISPLAYFORM4 is the image domain, and B r is the disk of radius r. Note that we have expressed the minimisation problem for continuous functions f. This is not strictly the case, especially for images of small disk radii, however for our purposes the approximation is good. In this case, we have the following proposition. Proposition 2 (Decoding Energy for an autoencoder without Biases). The decoding training problem of the autoencoder without biases has an optimal solutionf that is radially symmetric and Input Output Figure 4: Autoencoding of disks with a database with limited radii. The autoencoder is not able to extrapolate further than the largest observed radius. The images with a green border represent disks whose radii have been observed during training, while those in red have not been observed.maximises the following energy: DISPLAYFORM5 under the (arbitrary) normalization f 2 2 = 1.Proof. When f is fixed, the optimal h for Equation FORMULA8 is given bŷ DISPLAYFORM6 where f, 1 Br = Ω f (t)1 Br (t) dt. After replacing this in Equation, we find that DISPLAYFORM7 where we have chosen the arbitrary normalisation f 2 2 = 1. 
The form of the last equation shows that the optimal solution is obviously radially symmetric 1. Therefore, after a change of variables, the energy maximised by the decoder can be written as DISPLAYFORM8 such that f 2 2 = 1. In Appendix A, we compare the numerical solution of this problem with the actual profile learned by the network, yielding a very close match. This is very interesting, since it shows that the training process has achieved the optimal solution, in spite of the fact that the loss is non convex. As we have recalled in Section2, many works have recently investigated the generative capacity of autoencoders or GANs. Nevertheless, it is not clear that these architectures truly invent or generalize some visual content. A simpler question is: to what extent is the network able to generalise a simple geometric notion? In this section, we address this issue in our restricted but interpretable case. 1 If not, then consider its mean on every circle, which decreases the L 2 norm of f while maintaining the scalar product with any disk. We then can increase back the energy by deviding by this smaller L 2 norm according to f 2 = 1. Figure 5: Input and output of our network when autoencoding examples of disks when the database contains a "hole". Disks of radii between 11 and 18 pixels (out of 32) were not observed in the database. In green, the disks whose radii have been observed in the database, in red those which have not. For this, we study the behaviour of our autoencoder when examples are removed from the training dataset. In Figure 4, we show the autoencoder when the disks with radii above a certain threshold R are removed. The radii of the left three images (with a green border) are present in the training database, whereas the radii of the right three (red border) have not been observed. It is clear that the network lacks the capacity to extrapolate further than this radius. Indeed, the autoencoder seems to project these disks onto smaller, observed, disks, rather than learning the abstraction of a disk. Again by removing the biases from the network, we may explain why the autoencoder fails to extrapolate when a maximum radius R is imposed. In Appendix B, we show experimental evidence that in this situation, the autoencoder learns a function f whose support is restricted by the value of R, leading to the autoencoder's failure. However, a fair criticism of the previous experiment is simply that the network (and deep learning in general) is not designed to work on data which lie outside of the domain observed in the training data set. Nevertheless, it is reasonable to expect the network to be robust to such "holes" inside the domain. Therefore, we have also analysed the behaviour of the autoencoder when we removed training datapoints whose disks' radii lie within a certain range, between 11 and 18 pixels (out of a total of 32). We then attempt to reconstruct these points in the test data. Figure 5 shows the of this experiment. Once again, in the unknown regions the network is unable to recreate the input disks. (page 521) and BID2 propose several explanations in the deep learning literature of this phenomenon, such as a high curvature of the underlying data manifold, noisy data or high intrinsic dimensionality of the data. In our setting, none of these explanations is sufficient. Thus we conclude that, even in the simple setting of disks, the "classic" autoencoder cannot generalise correctly when a database contains holes. 
This behavior is potentially problematic for applications which deal with more complex natural images, lying on a high-dimensional manifold, as these are likely to contain such holes. We have therefore carried out the same experiments using the state-of-the-art "iGAN" approach of BID18, which is in turn based on the work of BID13, "DCGAN". The visual of their algorithm are displayed in Appendix C. We trained their network using both a code size of d = 100 (as proposed by the authors), and d = 1 in order to ensure fair comparisons. Indeed, in our case, not only the dimension of the latent space should be d = 1, but also the amount of training data is not enough to work with d = 100. Type-3 regularisation Figure 6: Result of different types of regularisation on autoencoding in an "unknown region" of the training database. We have encoded/decoded a disk which was not observed in the training dataset. We show the of four experiments: no regularisation, 2 regularisation in the latent space ("Type 1"), 2 weight penalisation of the encoder and decoder ("Type 2") and 2 weight penalisation of the encoder only ("Type 3").both cases the network fails to correctly autoencode the disks belonging to the unobserved region. This shows that the generalisation problem is likely to be ubiquitous, and indeed observed in more sophisticated networks, designed to learn natural images manifolds, even in the simple case of disks. We therefore believe that this issue deserves careful attention. Actually this experiment suggets that the capacity to generate new and simple geometrical shapes could be taken as a minimal requirement for a given architecture. In order to address the problem, we now investigate several regularisation techniques whose goal is to aid the generalisation capacity of neural networks. We would like to impose some structure on the latent space in order to interpolate correctly in the case of missing datapoints. This is often achieved via some sort of regularisation. This regularisation can come in many forms, such as imposing a certain distribution in the latent space, as in variational autoencoders BID9 ), or by encouraging z to be sparse, as in sparse auto-encoders BID14; BID12 ). In the present case, the former is not particularly useful, since a probabilistic approach will not encourage the latent space to correctly interpolate. The latter regularisation does not apply, since we already have d = 1. Another commonly used approach is to impose an 2 penalisation of the weights of the filters in the network. The idea behind this bears some similarity to sparse regularisation; we wish for the latent space to be as "simple" as possible, and therefore hope to avoid over-fitting. We have implemented several regularisation techniques on our network. Firstly, we attempt a simple regularisation of the latent space by requiring a "locality-preservation" property as suggested in BID8; BID0; BID11, namely that the 2 distance between two images (x,x) be maintained in the latent space. This is done by randomly selecting a neighbour of each element in the training batch. Secondly, we regularise the weights of the encoder and/or the decoder. Thus, our training attempts to minimise the sum of the data term, x − D(E(x)) 2 2, and a regularisation term λψ(x, θ), which can take one of the following forms: DISPLAYFORM0 • Type 2: DISPLAYFORM1 • Type 3: DISPLAYFORM2 2 2; Figure 6 shows the of these experiments. First of all, we observe that the type 1 regularisation does not work satisfactorily. 
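The three penalties can be written schematically as follows before returning to their observed effects; this is a sketch in which the weighting lambda and the encoder/decoder attribute names follow the autoencoder sketch above rather than the paper's code, and the Type-1 term is our reading of the locality-preservation constraint.

```python
import torch

def regulariser(model, x, x_nb, z, z_nb, kind, lam=1e-3):
    """Type 1: preserve pairwise L2 distances in the latent space.
    Type 2: L2 penalty on encoder and decoder weights.
    Type 3: L2 penalty on encoder weights only."""
    if kind == 1:
        d_x = torch.norm((x - x_nb).flatten(1), dim=1)
        d_z = torch.norm((z - z_nb).flatten(1), dim=1)
        return lam * ((d_x - d_z) ** 2).mean()
    params = list(model.encoder.parameters())
    if kind == 2:
        params += list(model.decoder.parameters())
    return lam * sum((w ** 2).sum() for w in params)

# total loss (sketch): reconstruction + chosen penalty
# loss = ((model(x)[0] - x) ** 2).mean() + regulariser(model, x, x_nb, z, z_nb, kind=3)
```

As noted above, it is this locality-preserving Type-1 penalty that fails to help here.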
One interpretation of this is that the manifold in the training data is "discontinuous", and therefore there are no close neighbours for the disks on the edge of the unobserved region. Therefore, this regularisation is to be avoided in cases where there are significant holes in the sampling of the data manifold. The second type of regularisation, minimising the 2 norm of the encoder and decoder weights, produces an interesting effect. Indeed, while the manifold seems reasonable, upon closer inspection, the code z increases in amplitude during the training. Thus, the network cannot converge to a stable solution, which worsens the quality of the . Finally, we observe that regularising the weights of the encoder works particularly well, and that the ing manifold is continuous and correctly represents the area of the disks. Consequently, this asymmetrical regularisation approach is to be encouraged in other applications of autoencoders. At this point, we take the opportunity to note that the clear, marked effects seen with the different regularisation approaches are consistently observed in different training runs. This is due in large part to the controlled, simple setting of autoencoding with disks. Indeed, many other more sophisticated networks, especially GANs, are known to be very difficult to trainSalimans et al. FORMULA2, leading to unstable or poor reproducibility. We believe that our approach can be of use to more high-level applications, by making it easier to clearly identify which components and regularisations schemes best help in processing complex input data. We have investigated in detail the specific mechanisms which allow autoencoders to encode image information in an optimal manner in the specific case of disks. We have shown that, in this case, the encoder functions by integrating over disk, and so the code z represents the area of the disk. In the case where the autoencoder is trained with no bias, the decoder learns a single function which is multiplied by scalar depending on the input. We have shown that this function corresponds to the optimal function. The bias is then used to induce a thresholding process applied to ensure the disk is correctly decoded. We have also illustrated certain limitations of the autoencoder with respect to generalisation when datapoints are missing in the training set. This is especially problematic for higher-level applications, whose data have higher intrinsic dimensionality and therefore are more likely to include such "holes". Finally, we identify a regularisation approach which is able to overcome this problem particularly well. This regularisation is asymmetrical as it consists of regularizing the encoder while leaving more freedom to the decoder. An important future goal is to extend the theoretical analyses obtained to increasingly complex visual objects, in order to understand whether the same mechanisms remain in place. We have experimented with other simple geometric objects such as squares and ellipses, with similar in an optimal code size. Another question is how the decoder functions with the biases included. This requires a careful study of the different non-linearity activations as the radius increases. Finally, the ultimate goal of these studies is to determine the capacity of autoencoders to encode and generate images representing more complex objects or scenes. 
As we have seen, the proposed framework can help identifying some limitations of complex networks such as the one from BID18 and future works should investigate whether this framework can help developing the right regularization scheme or architecture. Value of < f, 1 Br >, plotted against z Figure 7: Verification of the hypothesis that y(t, r) = h(r)f (t) for decoding in the case where the autoencoder contains no bias.. We have determined the average profile of the output of the autoencoder when no biases are involved. On the left, we have divided several random experimental profiles y by the function h, and plotted the , which is close to constant (spatially) for a fixed radius of the input disk. On the right, we plot z against the theoretically optimal value of h (C f, 1 Br, where C is some constant accounting for the arbitrary normalization of f). This experimental sanity check confirms our theoretical derivations. During the training of the autoencoder for the case of disks (with no bias in the autoencoder), the objective of the decoder is to convert a scalar into the image of a disk with the 2 distance as a metric. Given the profiles of the output of the autoencoder, we have made the hypothesis that the decoder approximates a disk of radius r with a function y(t; r) = h(r)f (t), where f is a continuous function. We show that this is true experimentally in Figure 7 by determining f experimentally by taking the average of all output profiles, and showing the pointwise division of f by randomly selected output profiles. We see that h is approximately constant for varying t and fixed r. Please note that we have removed the last spatial coordinate of the profile which suffers from border effects. We now compare the numerical optimisation of the energy in Equation using a gradient descent approach with the profile obtained by the autoencoder without biases. The ing comparison can be seen in FIG5. One can also derive a closed form solution of Equation FORMULA10 by means of the Euler-Lagrange equation and see that the optimal f for Equation FORMULA10 is the solution of the differential equation y = −kty with initial state (y, y) =, where k is a free positive constant that accommodates for the position of the first zero of y. This gives a closed form of the f in terms of Airy functions. In Figure 9, we see the grey-levels of the input/output of an autoencoder trained (without biases) on a restricted database, that is to say a database whose disks have a maximum radius R which is smaller than the image width. We have used R = 18 for these experiments. We see that the decoder learns a useful function f which only extends to this maximum radius. Beyond this radius, another function is used corresponding to the other sign of codes (see proposition 1) that is not tuned. C AUTOENCODING DISKS WITH THE IGAN ZHU In FIG0, we show the autoencoding of the IGAN network of Zhu et al. We trained their network with a code size of both z = 100 and z = 1. Although the IGAN works better in the latter case, in both experiments the network fails to correctly autoencode disks in the missing radius region which has not been observed in the training database. Disk profile Output profile Figure 9: Profile of the encoding/decoding of centred disks, with a restricted database. The decoder learns a profile f which only extends to the largest observed radius R = 18. Beyond this radius, another profile is learned that has is obviously not tuned to any data. 
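The differential equation and its initial state in the closed-form remark above are garbled; reading it as the Airy-type second-order equation f'' = -k t f, the optimal profile can be obtained numerically as below. The initial condition and the value of k are purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0  # free positive constant fixing the position of the first zero (illustrative value)

def airy_like(t, y):
    # State y = (f, f'); the garbled equation is read here as f'' = -k * t * f.
    return [y[1], -k * t * y[0]]

sol = solve_ivp(airy_like, (0.0, 10.0), [1.0, 0.0], dense_output=True)  # assumed initial state
ts = np.linspace(0.0, 10.0, 200)
profile = sol.sol(ts)[0]  # candidate radial profile f(t), up to normalisation
```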
FIG0 (panels: Output, d = 100; Output, d = 1): Input and output of the network of Zhu et al. BID18 ("IGAN") for disks when the database is missing disks of certain radii. We applied the IGAN with a code size of d = 100, as in the original paper, and with d = 1, as in our autoencoder. In both cases, the IGAN interpolates incorrectly in the unknown region. Images whose radii were observed in the training database are outlined in green; those with unobserved radii are outlined in red.
We study the functioning of autoencoders in a simple setting and propose new strategies for their regularisation in order to obtain better generalisation, with latent-space interpolation for image synthesis in mind.
321
scitldr
We present a simple idea that allows to record a speaker in a given language and synthesize their voice in other languages that they may not even know. These techniques open a wide range of potential applications such as cross-language communication, language learning or automatic video dubbing. We call this general problem multi-language speaker-conditioned speech synthesis and we present a simple but strong baseline for it. Our model architecture is similar to the encoder-decoder Char2Wav model or Tacotron. The main difference is that, instead of conditioning on characters or phonemes that are specific to a given language, we condition on a shared phonetic representation that is universal to all languages. This cross-language phonetic representation of text allows to synthesize speech in any language while preserving the vocal characteristics of the original speaker. Furthermore, we show that fine-tuning the weights of our model allows us to extend our to speakers outside of the training dataset. Our approach is to build a model able to generate speech in multiple languages. The model is trained 23 with multiple speakers to let the model be aware of the variations between speakers and also to 24 disentangle speech content from speaker identity. Once the model is trained, we bias the generation 25 process so that it sounds like a specific speaker. This speaker doesn't have to be in the training data. Our work builds upon recent developments in neural network based speech synthesis [Sotelo et al., 28 2017, , , languages to a universal representation. Our model is able to accomplish zero-shot accent transfer, which is very similar to zero-shot machine 42 translation, done by grounding the input from different languages to a common neural representation 43 space, followed by decoding in the audio space []. The training data consists of audio-transcript pairs. The transcript is translated into its IPA equivalent 61 before being fed to the model and the audio is transformed into an intermediate representation (e.g. WORLD vocoder parameters or spectrogram). Each speaker within the training dataset only speaks 63 a single language. However, at synthesis time, we are able to take any combination of speaker and 64 language, and produce natural sounding speech in the voice of the speaker and in the accent matching 65 that of the language. speakers. Crucially, we apply a smaller learning rate to the encoder and decoder parts of the models, and a higher one for the speaker embedding. This improved speaker fidelity considerably. After fine-tuning, the model is able to generate any text in any language 1 with the new speaker's 76 vocal identity. We conduct experiments on our models trained in two distinct settings. First, we train our model 79 with data in two languages (Bilingual Model). Second, we train our model with data in six languages (Multilingual Model). For these experiments, we used several datasets. We used an internal English dataset composed of 82 approximately 20000 speakers, with about 10 utterances per speaker. We also used the TIMIT dataset [] and DIMEx100 []. DIMEx100 is a Spanish dataset composed of 84 100 Spanish native speakers, with about 60 2-seconds utterances per speaker. For all the experiments we provide audio samples 2 rather than an exhaustive quantitative analysis. language. We show that the model is able to generate in any language for any speaker in the dataset. 
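The fine-tuning recipe (a smaller learning rate for the encoder and decoder, a higher one for the speaker embedding) amounts to optimizer parameter groups. The modules below are minimal stand-ins for the attention-based sequence-to-sequence model, and the learning-rate values are illustrative since the paper does not state them here.

```python
import torch
import torch.nn as nn

# Minimal stand-ins; the real encoder/decoder are attention-based sequence-to-sequence modules.
encoder = nn.GRU(256, 256, batch_first=True)
decoder = nn.GRU(256, 80, batch_first=True)          # e.g. predicting 80 vocoder/spectrogram features
speaker_embedding = nn.Embedding(1, 256)             # a single new row for the unseen speaker

optimizer = torch.optim.Adam([
    {"params": list(encoder.parameters()) + list(decoder.parameters()), "lr": 1e-5},  # small LR
    {"params": speaker_embedding.parameters(), "lr": 1e-3},                           # large LR
])
```

Keeping the synthesis core nearly frozen while letting the speaker embedding move quickly is what the text reports as considerably improving speaker fidelity.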
The model also shows robust performance on new, out-of-sample speakers after the fine-tuning step (see FIG4).
We present a simple idea that makes it possible to record a speaker in a given language and synthesize their voice in other languages that they may not even know.
322
scitldr
The goal of imitation learning (IL) is to enable a learner to imitate expert behavior given expert demonstrations. Recently, generative adversarial imitation learning (GAIL) has shown significant progress on IL for complex continuous tasks. However, GAIL and its extensions require a large number of environment interactions during training. In real-world environments, the more an IL method requires the learner to interact with the environment for better imitation, the more training time it requires, and the more damage it causes to the environments and the learner itself. We believe that IL algorithms could be more applicable to real-world problems if the number of interactions could be reduced. In this paper, we propose a model-free IL algorithm for continuous control. Our algorithm is made up mainly three changes to the existing adversarial imitation learning (AIL) methods – (a) adopting off-policy actor-critic (Off-PAC) algorithm to optimize the learner policy, (b) estimating the state-action value using off-policy samples without learning reward functions, and (c) representing the stochastic policy function so that its outputs are bounded. Experimental show that our algorithm achieves competitive with GAIL while significantly reducing the environment interactions. Recent advances in reinforcement learning (RL) have achieved super-human performance on several domains BID20 BID21. On most of such domains with the success of RL, the design of reward, that explains what agent's behavior is favorable, is obvious for humans. Conversely, on domains where it is unclear how to design the reward, agents trained by RL algorithms often obtain poor policies and behave worse than what we expect them to do. Imitation learning (IL) comes in such cases. The goal of IL is to enable the learner to imitate expert behavior given the expert demonstrations without the reward signal. We are interested in IL because we desire an algorithm that can be applied to real-world problems for which it is often hard to design the reward. In addition, since it is generally hard to model a variety of real-world environments with an algorithm, and the state-action pairs in a vast majority of realworld applications such as robotics control can be naturally represented in continuous spaces, we focus on model-free IL for continuous control. A wide variety of IL methods have been proposed in the last few decades. The simplest IL method among those is behavioral cloning (BC) BID23 which learns an expert policy in a supervised fashion without environment interactions during training. BC can be the first IL option when enough demonstration is available. However, when only a limited number of demonstrations are available, BC often fails to imitate the expert behavior because of the problem which is referred to compounding error BID25 -inaccuracies compound over time and can lead the learner to encounter unseen states in the expert demonstrations. Since it is often hard to obtain a large number of demonstrations in real-world environments, BC is often not the best choice for real-world IL scenarios. Another widely used approach, which overcomes the compounding error problem, is Inverse Reinforcement Learning (IRL) BID27 BID22 BID0 BID33. Recently, BID15 have proposed generative adversarial imitation learning (GAIL) which is based on prior IRL works. Since GAIL has achieved state-of-the-art performance on a variety of continuous control tasks, the adversarial IL (AIL) framework has become a popular choice for IL BID1 BID11 BID16. 
It is known that the AIL methods are more sample efficient than BC in terms of the expert demonstration. However, as pointed out by BID15, the existing AIL methods have sample complexity in terms of the environment interaction. That is, even if enough demonstration is given by the expert before training the learner, the AIL methods require a large number of state-action pairs obtained through the interaction between the learner and the environment 1. The sample complexity keeps existing AIL from being employed to real-world applications for two reasons. First, the more an AIL method requires the interactions, the more training time it requires. Second, even if the expert safely demonstrated, the learner may have policies that damage the environments and the learner itself during training. Hence, the more it performs the interactions, the more it raises the possibility of getting damaged. For the real-world applications, we desire algorithms that can reduce the number of interactions while keeping the imitation capability satisfied as well as the existing AIL methods do. The following three properties of the existing AIL methods which may cause the sample complexity in terms of the environment interactions:(a) Adopting on-policy RL methods which fundamentally have sample complexity in terms of the environment interactions.(b) Alternating three optimization processes -learning reward functions, value estimation with learned reward functions, and RL to update the learner policy using the estimated value. In general, as the number of parameterized functions which are related to each other increases, the training progress may be unstable or slower, and thus more interactions may be performed during training.(c) Adopting Gaussian policy as the learner's stochastic policy, which has infinite support on a continuous action space. In common IL settings, we observe action space of the expert policy from the demonstration where the expert action can take on values within a bounded (finite) interval. As BID3 suggests, the policy which can select actions outside the bound may slow down the training progress and make the problem harder to solve, and thus more interactions may be performed during training. In this paper, we propose an IL algorithm for continuous control to improve the sample complexity of the existing AIL methods. Our algorithm is made up mainly three changes to the existing AIL methods as follows:(a) Adopting off-policy actor-critic (Off-PAC) algorithm BID5 to optimize the learner policy instead of on-policy RL algorithms. Off-policy learning is commonly known as the promising approach to improve the complexity.(b) Estimating the state-action value using off-policy samples without learning reward functions instead of using on-policy samples with the learned reward functions. Omitting the reward learning reduces functions to be optimized. It is expected to make training progress stable and faster and thus reduce the number of interactions during training.(c) Representing the stochastic policy function of which outputs are bounded instead of adopting Gaussian policy. Bounding action values may make the problem easier to solve and make the training faster, and thus reduce the number of interactions during training. Experimental show that our algorithm enables the learner to imitate the expert behavior as well as GAIL does while significantly reducing the environment interactions. 
Ablation experimental show that (a) adopting the off-policy scheme requires about 100 times fewer environment interactions to imitate the expert behavior than the one on-policy IL algorithms require, (b) omitting the reward learning makes the training stable and faster, and (c) bounding action values makes the training faster. We consider a Markov Decision Process (MDP) which is defined as a tuple {S, A, T, R, d 0, γ}, where S is a set of states, A is a set of possible actions agents can take, T: S×A×S → is a transition probability, R: S×A → R is a reward function, d 0: S → is a distribution over initial states, and γ ∈ is a discount factor. The agent's behavior is defined by a stochastic policy π: S×A → and Π denotes a set of the stochastic policies. We denote S E ⊂ S and A E ⊂ A as sets of states and actions observed in the expert demonstration, and S π ⊂ S and A π ⊂ A as sets of those observed in rollouts following a policy π. We will use π E, π θ, β ∈ Π to refer to the expert policy, the learner policy parameterized by θ, and a behavior policy, respectively. Given a policy π, performance measure of π is defined as J (π, R) = E ∞ t=0 γ t R(s t, a t)|d 0, T, π where s t ∈ S is a state that the agent receives at discrete time-step t, and a t ∈ A is an action taken by the agent after receiving s t. The performance measure indicates expectation of the discounted return ∞ t=0 γ t R(s t, a t) when the agent follows the policy π in the MDP. Using discounted state visitation distribution denoted by ρ π (s) = ∞ t=0 γ t P(s t = s|d 0, T, π) where P is a probability that the agent receives the state s at time-step t, the performance measure can be rewritten as J (π, R) = E s∼ρπ,a∼π R(s, a). The state-action value function for the agent following π is defined as DISPLAYFORM0 |T, π, and Q π,ν denotes its approximator parameterized by ν. We briefly describe objectives of RL, IRL, and AIL below. We refer the readers to BID15 for details. The goal of RL is to find an optimal policy that maximizes the performance measure. Given the reward function R, the objective of RL with parameterized stochastic policies π θ: S×A → is defined as follows: DISPLAYFORM0 The goal of IRL is to find a reward function based on an assumption that the discounted returns earned by the expert behavior are greater than or equal to those earned by any non-experts behavior. Technically, the objective of IRL is to find reward functions R ω: S × A → R parameterized by ω that satisfies J (π E, R ω) ≥ J (π, R ω) where π denotes the non-expert policy. The existing AIL methods adopt max-margin IRL BID0 of which objective can be defined as follows: DISPLAYFORM1 2) The objective of AIL can be defined as a composition of the objectives and as follows: DISPLAYFORM2 The objective of Off-PAC to train the learner can be described as follows: DISPLAYFORM0 The learner policy is updated by taking the gradient of the state-action value. BID5 proposed the gradient as follows: provided another formula of the gradient using "re-parameterization trick" in the case that the learner policy selects the action as a = π θ (s, z) with random variables z ∼ P z generated by a distribution P z: DISPLAYFORM1 DISPLAYFORM2 3 ALGORITHMAs mentioned in Section.1, our algorithm (a) adopts Off-PAC algorithms to train the learner policy, (b) estimates state-action value without learning the reward functions, and (c) represents the stochastic policy function so that its outputs are bounded. 
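For reference, the background objectives of Section 2, whose displayed forms were replaced by placeholders above, most plausibly read as follows; this is a reconstruction from the surrounding definitions, not a quotation of the paper.

```latex
% RL:
\max_{\theta}\; J(\pi_\theta, R)
% Max-margin IRL (find a reward under which the expert outperforms any other policy):
\max_{\omega}\; \Big[\, J(\pi_E, R_\omega) \;-\; \max_{\pi \in \Pi} J(\pi, R_\omega) \,\Big]
% AIL, as a composition of the two:
\max_{\omega}\; \Big[\, J(\pi_E, R_\omega) \;-\; \max_{\theta} J(\pi_\theta, R_\omega) \,\Big]
% Off-PAC objective for the learner, with behaviour policy \beta:
\max_{\theta}\; \mathbb{E}_{s \sim \rho^{\beta}}\Big[ \textstyle\sum_{a \in A} \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a) \Big]
```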
In this section, we first introduce (b) in 3.1 and describe (c) in 3.2, then present how to incorporate (b) and (c) into (a) in 3.3. In this subsection, we introduce a new IRL objective to learn the reward function in 3.1.1 and a new objective to learn the value function approximator in 3.1.2. Then, we show that combining those objectives derives a novel objective to learn the value function approximator without reward learning in 3.1.3. We define the parameterized reward function as R ω (s, a) = log r ω (s, a), with a function r ω: S×A → parameterized by ω. r ω (s, a) represents a probability that the state-action pairs (s, a) belong to S E × A E. In other words, r ω (s, a) explains how likely the expert executes the action a at the state s. With this reward, we can also define a Bernoulli distribution p ω: Π×S×A → such that p ω (π E |s, a) = r ω (s, a) for the expert policy π E and p ω (π|s, a) = 1 − r ω (s, a) for any other policies π ∈ Π \ {π E} which include π θ and β. A nice property of this definition of the reward is that the discounted return for a trajectory {(s 0, a 0), (s 1, a 1),...} can be written as a log likelihood with p ω (π E |s t, a t): DISPLAYFORM0 Here, we assume Markov property in terms of p ω such that p ω (π E |s t, a t) for t ≥ 1 is independent of p ω (π E |s t−u, a t−u) for u ∈ {1, ..., t}. Under this assumption, the return naturally represents how likely a trajectory is the one the expert demonstrated. The discount factor γ plays a role to make sure the return is finite as in standard RL.The IRL objective can be said to aim at assigning r ω = 1 for state-action pairs (s, a) ∈ S E × A E and r ω = 0 for (s, a) ∈ S π × A π when the same definition of the reward R ω (s, a) = log r ω (s, a) is used. Following this fashion easily leads to a problem where the return earned by the non-expert policy becomes −∞, since log r ω (s, a) = −∞ if r ω (s, a) = 0 and thus log DISPLAYFORM1 The existing AIL methods seem to mitigate this problem by trust region optimization for parameterized value function approximator BID29, and it works somehow. However, we think this problem should be got rid of in a fundamental way. We propose a different approach to evaluate state-action pairs (s, a) ∈ S π × A π. Intuitively, the learner does not know how the expert behaves in the states s ∈ S \ S E -that is, it is uncertain which actions the expert executes in the states the expert has not visited. We thereby define a new IRL objective as follows: DISPLAYFORM2 where H denotes entropy of Bernoulli distribution such that: DISPLAYFORM3 Unlike the existing AIL methods, our IRL objective is to assign p ω (π E |s, a) = p ω (π|s, a) = 0.5 for (s, a) ∈ S π × A π. This uncertainty p ω (π E |s, a) = 0.5 explicitly makes the return earned by the non-expert policy finite. On the other hand, the objective is to assign r ω = 1 for (s, a) ∈ S E × A E as do the existing AIL methods. The optimal solution for the objective satisfies the assumption of IRL: DISPLAYFORM4, even though the objective does not aim at discriminating between (s, a) ∈ S E × A E and (s, a) ∈ S π × A π, As we see in Equation FORMULA7, the discounted return can be represented as a log likelihood. Therefore, a value function approximator Q π θ following the learner policy π θ can be formed as a log probability. 
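The displayed form of the new IRL objective above is likewise missing. From the description (r_ω driven towards 1 on expert pairs, p_ω driven towards the maximum-entropy value 0.5 on the learner's pairs), it is presumably of the following form, again given as a reconstruction rather than a quotation:

```latex
\max_{\omega}\;\;
\mathbb{E}_{(s,a)\,\sim\, S_E \times A_E}\big[\log r_\omega(s,a)\big]
\;+\;
\mathbb{E}_{(s,a)\,\sim\, S_\pi \times A_\pi}\big[\mathcal{H}\big(p_\omega(\cdot \mid s,a)\big)\big],
\qquad
\mathcal{H}\big(p_\omega(\cdot \mid s,a)\big) = -\,r_\omega(s,a)\log r_\omega(s,a) - \big(1 - r_\omega(s,a)\big)\log\big(1 - r_\omega(s,a)\big).
```

The value-function construction below builds directly on this reward definition.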
We introduce a function q π θ,ν: S×A → parameterized by ν to represent the approximator Q π θ,ν as follows: DISPLAYFORM0 The optimal value function following a policy π satisfies the Bellman equation Q π (s t, a t) = R(s t, a t) + γE st+1∼T,at+1∼π Q π (s t+1, a t+1). Substituting π θ for π, log r ω (s t, a t) for R(s t, a t), and log q π θ,ν (s t, a t) for Q π (s t, a t), the Bellman equation for the learner policy π θ can be written as follows: DISPLAYFORM1 We introduce additional Bernoulli distributions P ν: Π × S × A:→ and P ωνγ: Π × S × A × S × A:→ as follows: DISPLAYFORM2 Using P ν and P ωνγ, the loss to satisfy Equation FORMULA13 can be rewritten as follows: DISPLAYFORM3 We use Jensen's inequality with the concave property of logarithm in Equation FORMULA4. Now we see that the loss L(ω, ν, θ) is bounded by the log likelihood ratio between the two Bernoulli distributions P ν and P ωνγ, and L(ω, ν, θ) = 0 if P ν (π E |s t, a t) = E st+1∼T,at+1∼π θ P ωνγ (π E |s t, a t, s t+1, a t+1).In the end, learning the approximator Q π θ,ν turns out to be matching the two Bernoulli distributions. A natural way to measure the difference between two probability distributions is divergence. We choose Jensen-Shannon (JS) divergence to measure the difference because we empirically found it works better, and thereby the objective to optimize Q π θ,ν can be written as follows: DISPLAYFORM4 where D JS denotes JS divergence between two Bernoulli distributions. Suppose the optimal reward function R ω * (s, a) = log r ω * (s, a) for the objective can be obtained, the Bellman equation FORMULA13 can be rewritten as follows: DISPLAYFORM0 Recall that IRL objective aims at assigning r ω * (s t, a t) = 1 for (s t, a t) ∈ S E × A E and r ω * (s t, a t) = 0.5 for (s t, a t) ∈ S π × A π where π ∈ Π \ {π E}. Therefore, the objective FORMULA9 is rewritten as the following objective using the Bellman equation FORMULA13: DISPLAYFORM1 Thus, r ω * can be obtained by the Bellman equation FORMULA6 as long as the solution for the objective can be obtained. We optimize q π θ,ν (s t, a t) in the same way of objective as follows: DISPLAYFORM2 We use P γ ν instead of P ωνγ in objective unlike the objective. Thus, we omit reward learning that the existing AIL methods require, while learning q π θ,ν (s t, a t) to obtain r ω *. Initialize time-step t = 0 and receive initial state s0 5:while not terminate condition do 6:Execute an action at = π θ (st, z) with z ∼ Pz and observe new state st+1 7:Store a state-action triplet (st, at, st+1) in B β. 8: DISPLAYFORM3 end while 10:for u = 1, t do 11:Sample mini-batches of triplets (s t ′ +1). end for 16: end for Recall that the aim of IL is to imitate the expert behavior. It can be summarized that IL attempts to obtain a generative model the expert has over A conditioned on states in S. We see that the aim itself is equivalent to that of conditional generative adversarial networks (cGANs) BID19. The generator of cGANs can generate stochastic outputs of which range are bounded. As mentioned in Section 1, bounding action values is expected to make the problem easier to solve and make the training faster. In the end, we adopt the form of the conditional generator to represent the stochastic learner policy π θ (s, z). The typical Gaussian policy and the proposed policy representations with neural networks are described in FIG0 Algorithm.1 shows the overview of our off-policy actor-critic imitation learning algorithm. 
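Concretely, the bounded stochastic policy takes the conditional-generator form a = π_θ(s, z): the state is concatenated with a noise vector of the same dimensionality and mapped through a tanh output. The layer sizes follow the appendix; the example state and action dimensions are illustrative.

```python
import torch
import torch.nn as nn

class BoundedStochasticPolicy(nn.Module):
    """Policy of the form a = pi_theta(s, z) with z ~ N(0, I); the tanh output keeps
    actions inside a bounded interval, unlike a Gaussian policy with infinite support."""
    def __init__(self, state_dim, action_dim, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim * 2, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )
    def forward(self, state):
        z = torch.randn_like(state)               # noise with the same dimensionality as the state
        return self.net(torch.cat([state, z], dim=-1))

policy = BoundedStochasticPolicy(state_dim=17, action_dim=6)   # e.g. Walker2d-like dimensions
action = policy(torch.randn(1, 17))                            # components lie in (-1, 1)
```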
To learn the value function approximator Q π θ,ν, we adopt a behavior policy β as π in the second term in objective We employ a mixture of the past learner policies as β and a replay buffer B β BID20 ) to perform sampling s t ∼ ρ π, a t ∼ π and s t+1 ∼ T. The buffer B β is a finite cache and stores the (s t, a t, s t+1) triplets in a first-in-first-out manner while the learner interacts with the environment. The approximator Q π θ,ν (s t, a t) = log q π θ,ν (s t, a t) takes (−∞, 0]. With the approximator, using the gradient to update the learner policy always punish (or ignore) the learner's actions. Instead, we adopt the gradient which directly uses Jacobian of Q π θ,ν.As do off-policy RL methods such as BID20 and, we use the target value function approximator, of which parameters are updated to track ν, to optimize Q π θ,ν. We update Q π θ,ν and π θ at the end of each episode rather than following each step of interaction. In recent years, the connection between generative adversarial networks (GAN) BID10 and IL has been pointed out BID15 BID6. BID15 show that IRL is a dual problem of RL which can be deemed as a problem to match the learner's occupancy measure BID31 to that of the expert, and that a choice of regularizer for the cost function yields an objective which is analogous to that of GAN. Their algorithm, namely GAIL, has become a popular choice for IL and some extensions of GAIL have been proposed BID1 BID11 BID16. However, those extensions have never addressed reducing the number of interactions during training. There has been a few attempts that try to improve the sample complexity in IL literatures, such as Guided Cost Learning (GCL) BID7. However, those methods have worse imitation capability in comparison with GAIL, as reported by BID8. As detailed in section 5, our algorithm have comparable imitation capability to GAIL while improving the sample complexity. Hester & Osband FORMULA7 proposed an off-policy algorithm using the expert demonstration. They address problems where both demonstration and hand-crafted rewards are given. Whereas, we address problems where only the expert demonstration is given. There is another line of IL works where the learner can ask the expert which actions should be taken during training, such as DAgger BID24, SEARN (Daumé &), SMILe BID25, and AggreVaTe BID26. As opposed to those methods, we do not suppose that the learner can query the expert during training. In our experiments, we aim to answer the following three questions: Q1. Can our algorithm enable the learner to imitate the expert behavior? Q2. Is our algorithm more sample efficient than BC in terms of the expert demonstration? Q3. Is our algorithm more efficient than GAIL in terms of the training time? To answer the questions above, we use five physics-based control tasks that are simulated with MuJoCo physics simulator BID32. See Appendix A for the description of each task. In the experiments, we compare the performance of our algorithm, BC, GAIL, and GAIL initialized by BC 23. The implementation details can be found in Appendix B. We train an agent on each task by TRPO BID28 using the rewards defined in the OpenAI Gym BID2, then we use the ing agent with a stochastic policy as the expert for the IL algorithms. We store (s t, a t, s t+1) triplets during the expert demonstration, then the triplets are used as training samples in the IL algorithms. In order to study the sample efficiency of the IL algorithms, we arrange two setups. 
The first is sparse sampling setup, where we randomly sample 100 (s t, a t, s t+1) triplets from each trajectory which contains 1000 triplets. Then we perform the IL algorithms using datasets that consist of several 100s triplets. Another setup is dense sampling setup, where we use full (s t, a t, s t+1) triplets in each trajectory, then train the learner using datasets that consist of several trajectories. If an IL algorithm succeeds to imitate the expert behavior in the dense sampling setup whereas it fails in the sparse sampling setup, we evaluate the algorithm as sample inefficient in terms of the expert demonstration. The performance of the experts and the learners are measured by cumulative reward they earned in a trajectory. We run three experiments on each task, and measure the performance during training. Figure 2 shows the experimental in both sparse and dense sampling setup. In comparison with GAIL, our algorithm marks worse performance on Walker2d-v1 and Humanoid-v1 with the datasets of the smallest size in sparse sampling setup, better performance on Ant-v1 in both setups, and competitive performance on the other tasks in both setups. Overall, we conclude that our algorithm is competitive with GAIL with regards to performance. That is, our algorithm enables the learner to imitate the expert behavior as well as GAIL does. BC imitates the expert behavior successfully on all tasks in the dense sampling setup. However, BC often fails to imitate the expert behavior in the sparse sampling setup with smaller datasets. Our algorithm achieves better performance than BC does all over the tasks. It shows that our algorithm is more sample efficient than BC in terms of the expert demonstration. Figure 3 shows the performance plot curves over validation rollouts during training in the sparse sampling setup. The curves on the top row in Figure 3 show that our algorithm denoted by Ours trains the learner more efficiently than GAIL does in terms of training time. In addition, the curves on the bottom row in Figure 3 show that our algorithm trains the learner much more efficiently than GAIL does in terms of the environment interaction. As opposed to BID15 suggestion, GAIL initialized by BC (BC+GAIL) does not improve the sample efficiency, but rather harms the leaner's performance significantly. We conducted additional ablation experiments to demonstrate that our proposed method described in Section.3 improves the sample efficiency. FIG2 shows the ablation experimental on Antv1 task. Ours+OnP, which denotes an on-policy variant of Ours, requires 100 times more interactions than Ours. The with Ours+OnP suggests that adopting off-policy learning scheme instead of on-policy one significantly improves the sample efficiency. Ours+IRL(D) and Ours+IRL(E) are variants of Ours that learn value function approximators using the learned reward function with the objective and FORMULA9, respectively. The with Ours+IRL(D) and Ours+IRL(E) suggests that omitting the reward learning described in 3.1 makes the training stable and faster. The with Ours+GP, which denotes a variant of Ours that adopts the Gaussian policy, suggests that bounding action values described in 3.2 makes the training faster and stable. The with Ours+DP, which denotes a variant of Ours that has a deterministic policy with fixed input noises, fails to imitate the expert behavior. It shows that the input noise variable z in our algorithm plays a role to obtain stochastic policies. 
In this paper, we proposed a model-free IL algorithm for continuous control. Experimental showed that our algorithm achieves competitive performance with GAIL while significantly reducing the environment interactions. A DETAILED DESCRIPTION OF EXPERIMENT TAB0 summarizes the description of each task, the performance of an agent with random policy, and the performance of the experts. We implement our algorithm using two neural networks with two hidden layers. Each network represents π θ and q ν. For convenience, we call those networks for π θ and q ω as policy network (PN) and Q-network (QN), respectively. PN has 100 hidden units in each hidden layer, and its final output is followed by hyperbolic tangent nonlinearity to bound its action range. QN has 500 hidden units in each hidden layer and a single output is followed by sigmoid nonlinearity to bound the output between. All hidden layers are followed by leaky rectified nonlinearity BID18. The parameters in all layers are initialized by Xavier initialization BID9. The input of PN is given by concatenated vector representations for the state s and noise z. The noise vector, of which dimensionality corresponds to that of the state vector, generated by zero-mean normal distribution so that z ∼ P z = N. The input of QN is given by concatenated vector representations for the state s and action a. We employ RMSProp BID14 for learning parameters with a decay rate 0.995 and epsilon 10 −8. The learning rates are initially set to 10 −3 for QN and 10 −4 for PN, respectively. The target QN with parameters ν ′ are updated so that ν ′ = 10 −3 * ν+(1−10 −3) * ν ′ at each update of ν. We linearly decrease the learning rates as the training proceeds. We set minibatch size of (s t, a t, s t+1) triplets 64, the replay buffer size |B β | = 15000, and the discount factor γ = 0.85. We sample 128 noise vectors for calculating empirical expectation E z∼Pz of the gradient. We use publicly available code (https://github.com/openai/imitation) for the implementation of GAIL and BC. Note that, the number of hidden units in PN is the same as that of networks for GAIL. All experiments are run on a PC with a 3.30 GHz Intel Core i7-5820k Processor, a GeForce GTX Titan GPU, and 32GB of RAM.
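As a concrete reading of the implementation details above, the replay buffer B_β is a plain first-in-first-out cache of (s_t, a_t, s_{t+1}) triplets gathered while the learner interacts with the environment; the capacity and mini-batch size below are the ones reported in the appendix.

```python
import random
from collections import deque

class TripletReplayBuffer:
    """FIFO cache of (s_t, a_t, s_{t+1}) triplets; the behaviour policy beta is implicitly
    the mixture of past learner policies that generated the stored transitions."""
    def __init__(self, capacity=15000):
        self.buffer = deque(maxlen=capacity)

    def push(self, s, a, s_next):
        self.buffer.append((s, a, s_next))

    def sample(self, batch_size=64):
        batch = random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
        states, actions, next_states = zip(*batch)
        return list(states), list(actions), list(next_states)
```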
In this paper, we proposed a model-free, off-policy IL algorithm for continuous control. Experimental results showed that our algorithm achieves competitive results with GAIL while significantly reducing the environment interactions.
323
scitldr
The dominant approach to unsupervised "style transfer'' in text is based on the idea of learning a latent representation, which is independent of the attributes specifying its "style''. In this paper, we show that this condition is not necessary and is not always met in practice, even with domain adversarial training that explicitly aims at learning such disentangled representations. We thus propose a new model that controls several factors of variation in textual data where this condition on disentanglement is replaced with a simpler mechanism based on back-translation. Our method allows control over multiple attributes, like gender, sentiment, product type, etc., and a more fine-grained control on the trade-off between content preservation and change of style with a pooling operator in the latent space. Our experiments demonstrate that the fully entangled model produces better generations, even when tested on new and more challenging benchmarks comprising reviews with multiple sentences and multiple attributes. One of the objectives of unsupervised learning is to learn representations of data that enable fine control over the underlying latent factors of variation, e.g., pose and viewpoint of objects in images, or writer style and sentiment of a product review. In conditional generative modeling, these latent factors are given BID38 BID31 BID9, or automatically inferred via observation of samples from the data distribution BID4 BID15 ).More recently, several studies have focused on learning unsupervised mappings between two data domains such as images BID39, words or sentences from different languages BID6 BID26.In this problem setting, the generative model is conditioned not only on the desired attribute values, but also on a initial input, which it must transform. Generations should retain as many of the original input characteristics as possible, provided the attribute constraint is not violated. This learning task is typically unsupervised because no example of an input and its corresponding output with the specified attribute is available during training. The model only sees random examples and their attribute values. The dominant approach to learn such a mapping in text is via an explicit constraint on disentanglement BID17 BID10 BID37: the learned representation should be invariant to the specified attribute, and retain only attribute-agnostic information about the "content". Changing the style of an input at test time then amounts to generating an output based on the disentangled latent representation computed from the input and the desired attributes. Disentanglement is often achieved through an adversarial term in the training objective that aims at making the attribute value unrecoverable from the latent representation. This paper aims to extend previous studies on "style transfer" along three axes. (i) First, we seek to gain a better understanding of what is necessary to make things work, and in particular, whether Table 1: Our approach can be applied to many different domains beyond sentiment flipping, as illustrated here with example re-writes by our model on public social media content. The first line in each box is an input provided to the model with the original attribute, followed by its rewrite when given a different attribute value. disentanglement is key, or even actually achieved by an adversarial loss in practice. In Sec. 
3.1 we provide strong empirical evidence that disentanglement is not necessary to enable control over the factors of variation, and that even a method using adversarial loss to disentangle BID10 does not actually learn representations that are disentangled. (ii) Second, we introduce a model which replaces the adversarial term with a back-translation BID35 objective which exposes the model to a pseudo-supervised setting, where the model's outputs act as supervised training data for the ultimate task at hand. The ing model is similar to recently proposed methods for unsupervised machine translation BID24 BID0 BID44 ), but with two major differences: (a) we use a pooling operator which is used to control the trade-off between style transfer and content preservation; and (b) we extend this model to support multiple attribute control. (iii) Finally, in Sec. 4.1 we point out that current style transfer benchmarks based on collections of user reviews have severe limitations, as they only consider a single attribute control (sentiment), and very small sentences in isolation with noisy labels. To address this issue, we propose a new set of benchmarks based on existing review datasets, which comprise full reviews, where multiple attributes are extracted from each review. The contributions of this paper are thus: a deeper understanding of the necessary components of style transfer through extensive experiments, ing in a generic and simple learning framework based on mixing a denoising auto-encoding loss with an online back-translation technique and a novel neural architecture combining a pooling operator and support for multiple attributes, and a new, more challenging and realistic version of existing benchmarks which uses full reviews and multiple attributes per review, as well as a comparison of our approach w.r.t. baselines using both new metrics and human evaluations. We will open-source our code and release the new benchmark datasets used in this work, as well as our pre-trained classifiers and language models for reproducibility. This will also enable fair empirical comparisons on automatic evaluation metrics in future work on this problem. There is substantial literature on the task of unsupervised image translation. While initial approaches required supervised data of the form (input, transformation, output), e.g., different images of the same object rendered with different viewpoints or/and different lighting conditions BID16 BID41 BID23, current techniques are capable of learning completely unsupervised domain mappings. Given images from two different domains X and Y (where X could be the domain of paintings and Y the domain of realistic photographs), and the task is to learn two mappings F: X → Y and G: Y → X, without supervision, i.e., just based on images sampled from the two domains BID28 BID39. For instance, used a cycle consistency loss to enforce F (G(y)) ≈ y and G(F (x)) ≈ x. This loss is minimized along with an adversarial loss on the generated outputs to constrain the model to generate realistic images. In Fader Networks BID25, a discriminator is applied on the latent representation of an image autoencoder to remove the information about specific attributes. The attribute values are instead given explicitly to the decoder at training time, and can be tuned at inference to generate different realistic versions of an input image with varying attribute values. Different approaches have been proposed for textual data, mainly aiming at controlling the writing style of sentences. 
Unfortunately, datasets of parallel sentences written in a different style are hard to come by. BID3 collected a dataset of 33 English versions of the Bible written in different styles on which they trained a supervised style transfer model. released a small crowdsourced subset of 1,000 Yelp reviews for evaluation purposes, where the sentiment had been swapped (between positive and negative) while preserving the content. Controlled text generation from unsupervised data is thus the focus of more and more research. An theme that is common to most recent studies is that style transfer can be achieved by disentangling sentence representations in a shared latent space. Most solutions use an adversarial approach to learn latent representations agnostic to the style of input sentences BID10 BID17 BID37 BID43 BID40 BID19 BID46. A decoder is then fed with the latent representation along with attribute labels to generate a variation of the input sentence with different attributes. Unfortunately, the discrete nature of the sentence generation process makes it difficult to apply to text techniques such as cycle consistency or adversarial training. For instance, the latter BID37 BID7 BID45 ) requires methods such as REINFORCE or approximating the output softmax layer with a tunable temperature BID17 BID33 BID42, all of which tend to be slow, unstable and hard to tune in practice. Moreover, all these studies control a single attribute (e.g. swapping positive and negative sentiment).The most relevant work to ours is BID44, which also builds on recent advances in unsupervised machine translation. Their approach first consists of learning cross-domain word embeddings in order to build an initial phrase-table. They use this phrase-table to bootstrap an iterative back-translation pipeline containing both phrase-based and neural machine translation systems. Overall, their approach is significantly more complicated than ours, which is end-to-end and does not require any pre-training. Moreover, this iterative back-translation approach has been shown to be less effective than on-the-fly back-translation which is end-to-end trainable BID26. This section briefly introduces notation, the task, and our empirical procedure for evaluating disentanglement before presenting our approach. We consider a training set D = (x i, y i) i∈ [1,n] of n sentences x i ∈ X paired with attribute values y i. y ∈ Y is a set of m attribute values y = (y 1, ..., y m). Each attribute value y k is a discrete value in the set Y k of possible values for attribute k, e.g. Y k = {bad, neutral, good} if y k represents the overall rating of a restaurant review. Our task is to learn a model F: X × Y → X that maps any pair (x,ỹ) of an input sentence x (whose actual set of attributes are y) and a new set of m attribute valuesỹ to a new sentencex that has the specified attribute valuesỹ, subject to retaining as much as possible of the original content from x, where content is defined as anything in x which does not depend on the attributes. The architecture we consider performs this mapping through a sequence-to-sequence auto-encoder that first encodes x into a latent representation z = e(x), then decodes (z,ỹ) intox = d(z,ỹ), where e and d are functions parameterized by the vector of trainable parameters θ. Before giving more detail on the architecture, let us look at disentanglement. Almost all the existing methods are based on the common idea to learn a latent representation z that is disentangled from y. 
We consider z to be disentangled from y if it is impossible to recover y from z. While failure to recover y from z could mean either that z was disentangled or that the classifier chosen to recover y was either not powerful enough or poorly trained, success of any classifier in recovering y demonstrates that z was in fact not invariant to y. Table 2: Recovering the sentiment of the input from the encoder's representations of a domain adversarially-trained Fader model BID10. During training, the discriminator, which was trained adversarially and jointly with the model, gets worse at predicting the sentiment of the input when the coefficient of the adversarial loss λ adv increases. However, a classifier that is separately trained on the ing encoder representations has an easy time recovering the sentiment. We also report the baseline accuracy of a fastText classifier trained on the actual inputs. As a preliminary study, we gauge the degree of disentanglement of the latent representation. Table 2 shows that the value of the attribute can be well recovered from the latent representation of a Faderlike BID10 model even when the model is trained adversarially. A classifier fit post-hoc and trained from scratch, parameterized identically to the discriminator(see paragraph on model architecture in Section 3.3 for details), is able to recover attribute information from the "distengeled" content representation learned via adversarial training. This suggests that disentanglement may not be achieved in practice, even though the discriminator is unable to recover attribute information well during training. We do not assert that disentangled representations are undesirable but simply that it isn't mandatory in the goal of controllable text rewriting. This is our focus in the following sections. Evaluation of controlled text generation can inform the design of a more streamlined approach: generated sentences should be fluent, make use of the specified attribute values, and preserve the rest of the content of the input. Denoising auto-encoding (DAE) BID10 ) is a natural way to learn a generator that is both fluent and that can reconstruct the input, both the content and the attributes. Moreover, DAE is a weak way to learn about how to change the style, or in other words, it is a way to force the decoder to also leverage the externally provided attribute information. Since the noise applied to the encoder input x may corrupt words conveying the values of the input attribute y, the decoder has to learn to use the additional attribute input values in order to perform a better reconstruction. We use the noise function described in BID24 that corrupts the input sentence by performing word drops and word order shuffling. We denote by x c a corrupted version of the sentence x. As discussed in Sec. 3.1, disentanglement is not necessary nor easily achievable, and therefore, we do not seek disentanglement and do not include any adversarial term in the loss. Instead, we consider a more natural constraint which encourages the model to perform well at the task we are ultimately interested in -controlled generation via externally provided attributes. We take an input (x, y) and encode x it into z, but then decode using another set of attribute values,ỹ, yielding the reconstructionx. We now usex as input of the encoder and decode it using the original y to ideally obtain the original x, and we train the model to map (x, y) into x. This technique, called back-translation (BT) BID35 BID24 BID0, has a two-fold benefit. 
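The corruption x_c used for the denoising term follows the word-drop and word-shuffle noise of the cited unsupervised machine translation work; a sketch is given below, with the drop probability and shuffle window chosen for illustration since their values are not specified here.

```python
import random

def corrupt(tokens, p_drop=0.1, shuffle_window=3, rng=random):
    """Noise for denoising auto-encoding: drop words and locally shuffle word order."""
    # Word dropout (keep at least one token so the decoder has something to condition on)
    kept = [t for t in tokens if rng.random() > p_drop] or tokens[:1]
    # Local shuffle: each word moves at most `shuffle_window` positions away from its origin
    keys = [i + rng.uniform(0, shuffle_window) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda kt: kt[0])]

corrupt("the food was great and the service even better".split())
```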
Initially when the DAE is not well trained andx has lost most of the content present in x, the only useful information provided to the decoder is the desired attribute y. This encourages the decoder to leverage the provided attributes. Later on during training when DAE is better, BT helps training the sequence-to-sequence for the desired task. Overall, we minimize: DISPLAYFORM0 where p d is the probability distribution over sequences x induced by the decoder, e(x c) is the encoder output when fed with a corrupted version x c of the input x, and d(e(x),ỹ) is a variation of the input sentence x written with a randomly sampled set of attributesỹ. In practice, we generate sentences during back-translation by sampling from the multinomial distribution over words defined by the decoder at each time step using a temperature T. So far, the model is the same as the model used for unsupervised machine translation by BID26, albeit with a different interpretation of its inner workings, no longer based on disentanglement. Instead, the latent representation z can very well be entangled, but we only require the decoder to eventually "overwrite" the original attribute information with the desired attributes. Unfortunately, this system may be limited to swapping a single binary attribute and may not give us enough control on the trade-off between content preservation and change of attributes. To address this limitations, we introduce the following components:Attribute conditioning In order to handle multiple attributes, we separately embed each target attribute value and then average their embeddings. We then feed the averaged embeddings to the decoder as a start-of-sequence symbol. We also tried an approach similar to BID29, where the output layer of the decoder uses a different bias for each attribute label. We observed that the learned biases tend to reflect the labels of the attributes they represent. Examples of learned biases can be found in Table 14. However, this approach alone did not work as well as using attribute-specific start symbols, nor did it improve when combined with them. Latent representation pooling To control the amount of content preservation, we use pooling. The motivating observation is that models that compute one latent vector representation per input word usually perform individual word replacement, while models without attention are much less literal and tend to lose content but have an easier time changing the input sentence with the desired set of attributes. Therefore, we propose to gain finer control by adding a temporal max-pooling layer on top of the encoder, with non-overlapping windows of width w. Setting w = 1 in a standard model with attention, while setting w to the length of the input sequence boils down to a sequence-to-sequence model without attention. Intermediate values of w allow for different tradeoffs between preserving information about the input sentence and making the decoder less prone to copying words one by one. The hyper-parameters of our model are: λ AE and λ BT trading off the denoising auto-encoder term versus the back-translation term (the smaller the λ BT /λ AE ratio the more the content is preserved and the less well the attributes are swapped), the temperature T used to produce unbiased generations BID8 and to control the amount of content preservation, and the pooling window size w. 
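The displayed loss above is missing; from the definitions of its two terms, a denoising reconstruction term and an on-the-fly back-translation term, it presumably reads as

```latex
\mathcal{L}(\theta) \;=\;
\lambda_{AE}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[-\log p_d\big(x \mid e(x_c),\, y\big)\Big]
\;+\;
\lambda_{BT}\; \mathbb{E}_{(x,y)\sim\mathcal{D},\; \tilde{y}\sim\mathcal{Y}}\Big[-\log p_d\big(x \mid e\big(d(e(x), \tilde{y})\big),\, y\big)\Big],
```

which is a reconstruction from the surrounding text rather than the paper's exact equation.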
We optimize this loss by stochastic gradient descent without back-propagating through the back-translation generation process; back-translated sentences are generated on-the-fly once a new mini-batch arrives. Model Architecture We use an encoder parameterized by a 2-layer bidirectional LSTM and a 2-layer decoder LSTM augmented with an attention mechanism BID1. Both LSTMs and our word embedding lookup tables, trained from scratch, have 512 hidden units. Another embedding lookup table with 512 hidden units is used to embed each attribute value. The decoder conditions on two different sources of information: 1) attribute embedding information that presented that it as the first token, similar to BID26 and at the softmax output as an attribute conditional bias following BID29. When controlling multiple attributes, we average the embeddings and bias vectors that correspond to the different attribute values.2) The decoder also conditions on a temporally downsampled representation of the encoder via an attention mechanism. The representations are downsampled by temporal max-pooling with a non-overlapping window of size 5. Although our best models do not use adversarial training, in ablations and experiments that study disentanglement, we used a discriminator paramaeterized as 3 layer MLP with 128 hidden units and LeakyReLU acivations. We use data from publicly available Yelp restaurant and Amazon product reviews following previous work in the area BID37 and build on them in three ways to make the task more challenging and realistic. Firstly, while previous approaches operate at the sentence level by assuming that every sentence of a review carries the same sentiment as the whole of review, we operate at the granularity of entire reviews. The sentiment, gender 1 of the author and product/restaurant labels are therefore more reliable. Secondly, we relax constraints enforced in prior works that discard reviews with more than 15 words and only consider the 10k most frequent words. In our case, we consider full reviews with up to 100 words, and we consider byte-pair encodings (BPE) BID36 with 60k BPE codes, eliminating the presence of unknown words. Finally, we leverage available meta-data about restaurant and product categories to collect annotations for two additional controllable factors: the gender of the review author and the category of the product or restaurant being reviewed. A small overview of the corresponding datasets is presented below with some statistics presented in Table 3. Following, we also collect human reference edits for sentiment and restaurant/product categories to serve as a reference for automatic metrics as well as an upper bound on human evaluations (examples in Appendix Table 12).Yelp Reviews This dataset consists of restaurant and business reviews provided by the Yelp Dataset Challenge 2. We pre-process this data to remove reviews that are either 1) not written in English according to a fastText BID20 classifier, 2) not about restaurants, 3) rated 3/5 stars as they tend to be neutral in sentiment (following BID37), or 4) where the gender is not identifiable by the same method as in BID34; BID33. We then binarize both sentiment and gender labels. Five coarse-grained restaurant category labels, Asian, American, Mexican, Bars & Dessert, are obtained from the associated meta-data. 
Since a review can be written about a restaurant that has multiple categories (e.g., an Asian restaurant that serves desserts), we train a multi-label fastText classifier on the original data, which has multiple labels per example. We then re-label the entire dataset with this classifier, picking the most likely category, so that the category factor can be modeled as a categorical random variable. (See Appendix section A.2 for more details.) Since there now exist two variants of the Yelp dataset, we refer to the one used by previous work BID37 BID10 as SYelp, and to our created version with full reviews along with gender and category information as FYelp, henceforth. Amazon Reviews: The Amazon product review dataset BID13 is comprised of reviews written by consumers of Amazon products. We followed the same pre-processing steps as for the Yelp dataset, with the exception of collecting gender labels, since a very large fraction of Amazon usernames were not present in a list of gender-annotated names. We labeled reviews with the following product categories based on the meta-data: Books, Clothing, Electronics, Movies, Music. We followed the same protocol as in FYelp to re-label product categories. In this work, we do not experiment with the version of the Amazon dataset used by previous work, and so we refer to our created version with full reviews along with product category information as just Amazon henceforth. Statistics about the dataset can be found in Table 3. Public social media content: We also used an unreleased dataset of public social media content written by English speakers to illustrate the approach with examples from a more diverse set of categories. We used 3 independent pieces of available information about that content: 1) gender (male or female), 2) age group (18-24 or 65+), and 3) writer-annotated feeling (relaxed or annoyed). Table 3: The number of reviews for each attribute for the different datasets. The SYelp, FYelp and Amazon datasets are composed of 443k, 2.7M and 75.2M sentences respectively. Public social media content is collected from 3 different data sources, with 25.5M, 33.0M and 20.2M sentences for the Feeling, Gender and Age attributes respectively. To make the data less noisy, we trained a fastText classifier BID20 for each attribute and only kept the data above a certain confidence threshold. Automatic evaluation of generative models of text is still an open research problem. In this work, we use a combination of multiple automatic evaluation criteria informed by our desiderata. We would like our systems to simultaneously 1) produce sentences that conform to the set of pre-specified attribute(s), 2) preserve the structure and content of the input, and 3) generate fluent language. We therefore evaluate samples from different models along three dimensions: • Attribute control: We measure the extent to which attributes are controlled using fastText classifiers, trained on our datasets, to predict different attributes. • Fluency: Fluency is measured by the perplexity assigned to generated text sequences by a pre-trained Kneser-Ney smoothed 5-gram language model using KenLM BID14. • Content preservation: We measure the extent to which a model preserves the content of a given input using n-gram statistics, by measuring the BLEU score between the generated text and the input itself, which we refer to as self-BLEU.
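The three criteria can be scripted along the following lines; the sketch uses fastText and KenLM as named above and sacrebleu for the BLEU computation, but the file names and the choice of BLEU implementation are ours:

    import fasttext
    import kenlm
    import sacrebleu

    attr_clf = fasttext.load_model("sentiment_clf.bin")  # assumed attribute classifier
    lm = kenlm.Model("reviews.5gram.arpa")               # Kneser-Ney smoothed 5-gram LM

    def attribute_accuracy(generated, target_labels):
        # fraction of generations classified as carrying the requested attribute
        preds = [attr_clf.predict(g)[0][0] for g in generated]
        return sum(p == t for p, t in zip(preds, target_labels)) / len(generated)

    def fluency_perplexity(generated):
        # average language-model perplexity of the generated sequences
        return sum(lm.perplexity(g) for g in generated) / len(generated)

    def content_bleu(generated, references):
        # pass the inputs themselves as references for self-BLEU, or the human
        # reference edits to obtain the "BLEU" score discussed in the next paragraph
        return sacrebleu.corpus_bleu(generated, [references]).score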
When a human reference is provided instead, we compute the BLEU score with respect to it, instead of the input, which we will refer to as just BLEU BID32. "BLEU" scores in this paper correspond to the BLEU score with respect to human references averaged across generations conditioned on all possible attribute values except for that of the input. However, when reporting self-BLEU scores, we also average across cases where generations are also conditioned on the same attribute value as the input. A combination of these metrics, however, only provides a rough understanding of the quality of a particular model. Ultimately, we rely on human evaluations collected via a public crowd-sourcing platform. We carried out two types of evaluations to compare different models. 1) Following a protocol similar to, we ask crowd workers to annotate generated sentences along the three dimensions above. Fluency and content preservation are measured on a likert-scale from 1 to 5 and attribute control is evaluated by asking the worker to predict the attribute present in the generated text. 2) We take a pair of generations from two different models, and ask workers to pick the generation they prefer on the overall task, accounting for all the dimensions simultaneously. They are also presented with a "no preference" option to discard equally good or bad generations from both models. Since our automatic evaluation metrics are only weak proxies for the quality of a model, we set minimum thresholds on the content preservation and attribute control criteria and only consider models above a certain threshold. The few models that met the specified threshold on the validation set were evaluated by humans on the same validation set and the best model was selected to be run on the test set. Table 4: Automatic evaluation of models on the SYelp test set from. The test set is composed of sentences that have been manually written by humans, which we use to compute the BLEU score. Samples for previous models were made available by. For our model, we report different corresponding to different choices of hyper-parameters (pooling kernel width and back-translation temperature) to demonstrate our model's ability to control the trade-off between attribute transfer and content preservation. Our first set of experiments aims at comparing our approach with different models recently proposed, on the SYelp dataset. Results using automatic metrics are presented in Table 4. We compare the same set of models as in with the addition of our model and our own implementation of the Fader network BID25, which corresponds to our model without back-translation and without attention mechanism, but uses domain adversarial training BID11 to remove information about sentiment from the encoder's representation. This is also similar to the StyleEmbedding model presented by BID10. For our approach, we were able to control the trade-off between BLEU and accuracy based on different hyper-parameter choices. We demonstrate that our approach is able to outperform all previous approaches on the three desired criteria simultaneously, while our implementation of the fader is competitive with the previous best work. TAB4: Top: Results from human evaluation to evaluate the fluency / content preservation and successful sentiment control on the SYelp test set. 
The mean and standard deviation of Fluency and Content are measured on a likert scale from 1-5 while sentiment is measured by fraction of times that the controlled sentiment of model matches the judge's evaluation of the sentiment (when also presented with a neutral option). Bottom: Results from human A/B testing of different pairs of models. Each cell indicates the fraction of times that a judge preferred one of the models or neither of them on the overall task.)Since our automatic evaluation metrics are not ideal, we carried out human evaluations using the protocol described in Section 4.2.. We use our implementation of the Fader model since we found it to be better than previous work, by human evaluation TAB4. While we control all attributes simultaneously during training, at test time, for the sake of quantitative evaluations, we change the values only of a single attribute while keeping the others constant. Our model clearly outperforms the baseline Fader model. We also demonstrate that our model does not suffer significant drops in performance when controlling multiple attributes over a single one. Demonstrations of our model's ability to control single and multiple attributes are presented in TAB9 respectively. What is interesting to observe is that our model does not just alter single words in the input to control an attribute, but often changes larger fragments to maintain grammaticality and fluency. Examples of re-writes by our model on social media content in Table 1 show that our model tends to retain the overall structure of input sentences, including punctuation and emojis. Additional examples of re-writes can be found in TAB10 Table 7: Model ablations on 5 model components on the FYelp dataset (Left) and SYelp (Right).In Table 7, we report from an ablation study on the SYelp and FYelp datasets to understand the impact of the different model components on overall performance. The different components are: 1) pooling, 2) temperature based multinomial sampling when back-translating, 3) attention, 4) back-translation, 5) the use of domain adversarial training and 6) attention and back-translation in conjunction. We find that a model with all of these components, except for domain adversarial training, performs the best, further validating our hypothesis in Section 3.1 that it is possible to control attributes of text without disentangled representations. The absence of pooling or softmax temperature when back-translating also has a small negative impact on performance, while the attention and back-translation have much bigger impacts. Without pooling, the model tends to converge to a copy mode very quickly, with a high self-BLEU score and a poor accuracy. The pooling operator alleviates this behaviour and provides models with a different trade-off accuracy / content preservation. Table 13 shows examples of reviews re-written by different models at different checkpoints, showing the trade-off between properly modifying the attribute and preserving the original content. FIG0 shows how the trade-off between content preservation (self-BLEU) and attribute control (accuracy) evolves over the course of training and as a function of the pooling kernel width. We present a model that is capable of re-writing sentences conditioned on given attributes, that is not based on a disentanglement criterion as often used in the literature. We demonstrate our model's ability to generalize to a realistic setting of restaurant/product reviews consisting of several sentences per review. 
We also present model components that allow fine-grained control over the trade-off between attribute control and preserving the content of the input. Experiments with automatic and human-based metrics show that our model significantly outperforms the current state of the art, not only on existing datasets, but also on the large-scale datasets we created. The source code and benchmarks will be made available to the research community after the reviewing process. A SUPPLEMENTARY MATERIAL We used the Adam optimizer BID21 with a learning rate of 10^-4, β_1 = 0.5, and a batch size of 32. As in BID26, we fix λ_BT = 1, set λ_AE to 1 at the beginning of the experiment, and linearly decrease it to 0 over the first 300,000 iterations. We use greedy decoding at inference. When generating pseudo-parallel data via back-translation, we found it useful to increase the temperature over the course of training, from greedy generation to multinomial sampling with a temperature of 0.5, linearly over 300,000 steps BID8. Since the class distributions for the different attributes in both the Yelp and Amazon datasets are skewed, we train with balanced minibatches when there is only a single attribute being controlled, and with independent and uniformly sampled attribute values otherwise. The synthetic target attributes ỹ used during back-translation are also balanced by uniform sampling. In addition to the details presented in Section 4.1, we present additional details on the creation of the FYelp and Amazon datasets. FYelp: Reviews, their rating, user information and restaurant/business categories are obtained from the available metadata. We construct sentiment labels by grouping 1/2 star ratings into the negative category and 4/5 star ratings into the positive category, while discarding 3-star reviews. To determine the gender of the person writing a review, we obtain their name from the available user information and then look it up in a list of gendered names, following Prabhumoye et al.; BID34. We discard reviews for which we were unable to obtain gender information with this technique. Restaurant/business category meta-data is available for each review, from which we discard all reviews that were not written about restaurants. Amongst restaurant reviews, we manually group restaurant categories into "parent" categories to cover a significant fraction of the dataset. The grouping is as follows: • Asian: Japanese, Thai, Ramen, Sushi, Sushi Bar, Chinese, Asian Fusion, Vietnamese, Korean, Noodles, Dim Sum, Cantonese, Filipino, Taiwanese. As described in Section 4.1, we train a classifier on these parent categories and relabel the entire dataset using it. Amazon: Reviews, their rating, user information and product categories are obtained from the metadata made available by BID13. We construct sentiment labels in the same manner as in FYelp. We did not experiment with gender labels, since we found that Amazon usernames seldom use real names. We group Amazon product categories into "parent" categories manually, similar to FYelp, and we relabel reviews with a trained product category classifier, as for FYelp. For the FYelp and Amazon datasets, we normalize, lowercase and tokenize reviews using the Moses BID22 tokenizer. With social media content, we do not lowercase the data, in order to exploit interesting capitalization patterns inherent in the data, but we still run the other pre-processing steps.
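The two schedules mentioned above (the linear decay of λ_AE and the linear ramp of the back-translation sampling temperature) can be written compactly as below; treating greedy decoding as a temperature of 0 at the start of the ramp is our simplification:

    def lambda_ae(step, total_steps=300_000):
        # weight of the denoising auto-encoder term: 1 at the start,
        # linearly decayed to 0 over the first 300k updates (lambda_BT stays at 1)
        return max(0.0, 1.0 - step / total_steps)

    def bt_temperature(step, total_steps=300_000, final_temp=0.5):
        # back-translation sampling temperature: greedy decoding (~0) at the
        # start, linearly increased to 0.5 over 300k updates
        return min(final_temp, final_temp * step / total_steps)

    # per-update objective, following the loss given earlier:
    # loss = lambda_ae(step) * dae_loss + 1.0 * bt_loss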
We use byte-pair encodings (BPE) BID36 ) with 60k replacements, on all 3 datasets, to deal with large vocabulary sizes. released a set of human reference edits when controlling the sentiment of a review, on a test set of 500 examples on the SYelp dataset. We follow suit by collecting a similar dataset of 500 human reference edits, which will be made publicly available, for both sentiment and product categories on the FYelp and Amazon datasets. When collecting such data, we use pre-trained sentiment/category classifiers to interactively guide crowd workers using the ParlAI BID30 platform, to produce edits with the desired attribute value as well as significant content overlap with the input. Negative Dessert the bread here is crummy, half baked and stale even when "fresh." i won't be back. Positive American the burgers here are juicy, juicy and full of flavor! i highly recommend this place. Negative American the bread here is stale, dry and over cooked even though the bread is hard. i won't be back. Positive Asian the sushi here is fresh, tasty and even better than the last. i highly recommend this place. Negative Asian the noodles here are dry, dry and over cooked even though they are supposed to be "fresh." i won't be back. Positive Bar the pizza here is delicious, thin crust and even better cheese (in my opinion). i highly recommend it. Negative Bar the pizza here is bland, thin crust and even worse than the pizza, so i won't be back. Positive Dessert the ice cream here is delicious, soft and fluffy with all the toppings you want. i highly recommend it. Negative Dessert the bread here is stale, stale and old when you ask for a "fresh" sandwich. i won't be back. Positive Mexican the tacos here are delicious, full of flavor and even better hot sauce. i highly recommend this place. Negative Mexican the beans here are dry, dry and over cooked even though they are supposed to be "fresh." i won't be back. Table 9: Demonstrations of our model's ability to control multiple attributes simultaneously on the Amazon dataset (top) and FYelp dataset (bottom). The first two columns indicate the combination of attributes that are being controlled, with the first row indicating a pre-specified input 18-24 i dont love boys they're so urgh 65+What a lovely group of boys! They are so fortunate to have an exceptional faces. Negative too sci fi for me. characters not realistic. situation absurd. i abandoned it halfway through, unusual for me. simply not my taste in mysteries. Positive i love how sci fi this was! the characters are relatable, and the plot was great. i just had to finish it in one sitting, the story got me hooked. this is has to be one of my favorite mysteries. Positive my mom love this case for her i-pod. she uses it a lot and she is one satisfied customer. she would recommended it. Negative my mom initially liked this case for her i-pod. she used it for a while and it broke. since it is not solid she would not recommend it. Clothing ↔ Books (Amazon) Clothing nice suit but i wear a size 8 and ordered at 12 and it was just a bit too small. Books great book but i can't read small text with my bad eyesight, unfortunately, this one happened to be printed rather small. Books great book about the realities of the american dream. well written with great character development. i would definitely recommend this book. Clothing great dress with american flags printed on it. well made with great materials. i would definitely recommend this dress. 
Books ↔ Music (Amazon) Books i loved the book an can't wait to read the sequel! i couldn't put the book down because the plot and characters were so interesting. Music i loved the music and can not wait for the next album! i couldn't stop listening because the music was so interesting. Table 12: Examples of human edits from our FYelp and Amazon datasets. The first line in every box was the input presented to a crowd worker followed by their corresponding edit with the specified attribute. Accuracy Self-BLEU Input / Swap not a fan. food is not the best and the service was terrible the two times i have visited. 78.7% 73.8 great little place. food is great and the service was great the two times i have visited. 83.5% 51.5 best food in town. food is great and the service was the best two times i have visited. 92.8% 27.7 best thai food in town. the food is great and the service is excellent as always. 96.8%13.1 best chinese food in town. great service and the food is the best i have had in a long time.overpriced specialty food. also very crowded. service is slow. would not recommend at any time. 78.7% 73.8 great homemade food. also very crowded. service is fast. would recommend at least once. 83.5% 51.5 great specialty food. also very crowded. service is friendly. would recommend any time at the time. 92.8% 27.7 great variety of food. also very friendly staff. good service. would recommend at least once a week. 96.8%13.1 great tasting food. very friendly staff. definitely recommend this place for a quick bite. Table 13: Examples of controlling sentiment with different model checkpoints that exhibit different trade-offs between content preservation (self-BLEU) and attribute control (Accuracy). The first line in both examples is the input sentence, and subsequent lines are a model's outputs with decreasing content preservation with rows corresponding to model checkpoints at different epochs during training.
A system for rewriting text conditioned on multiple controllable attributes
324
scitldr
Vanilla RNN with ReLU activation have a simple structure that is amenable to systematic dynamical systems analysis and interpretation, but they suffer from the exploding vs. vanishing gradients problem. Recent attempts to retain this simplicity while alleviating the gradient problem are based on proper initialization schemes or orthogonality/unitary constraints on the RNN’s recurrency matrix, which, however, comes with limitations to its expressive power with regards to dynamical systems phenomena like chaos or multi-stability. Here, we instead suggest a regularization scheme that pushes part of the RNN’s latent subspace toward a line attractor configuration that enables long short-term memory and arbitrarily slow time scales. We show that our approach excels on a number of benchmarks like the sequential MNIST or multiplication problems, and enables reconstruction of dynamical systems which harbor widely different time scales. Theories of complex systems in biology and physics are often formulated in terms of sets of stochastic differential or difference equations, i.e. as stochastic dynamical systems (DS). A long-standing desire is to retrieve these generating dynamical equations directly from observed time series data . A variety of machine and deep learning methodologies toward this goal have been introduced in recent years (; ; ; ; ; ;), many of them based on recurrent neural networks (RNN) which can universally approximate any DS (i.e., its flow field) under some mild conditions . However, vanilla RNN as often used in this context are well known for their problems in capturing long-term dependencies and slow time scales in the data . In DS terms, this is generally due to the fact that flexible information maintenance over long periods requires precise fine-tuning of model parameters toward'line attractor' configurations (Fig. 1), a concept first propagated in computational neuroscience for addressing animal performance in parametric working memory tasks (; ;). Line attractors introduce directions of zero-flow into the model's state space that enable long-term maintenance of arbitrary values (Fig. 1). Specially designed RNN architectures equipped with gating mechanisms and (linear) memory cells have been suggested for solving this issue . However, from a DS perspective, simpler models that can more easily be analyzed and interpreted in DS terms, and for which more efficient inference algorithms exist that emphasize approximation of the true underlying DS would be preferable. Recent solutions to the vanishing vs. exploding gradient problem attempt to retain the simplicity of vanilla RNN by initializing or constraining the recurrent weight matrix to be the identity , orthogonal or unitary . In this way, in a system including piecewise linear (PL) components like rectified-linear units (ReLU), line attractor dimensions are established from the start by construction or ensured throughout training by a specifically parameterized matrix decomposition. However, for many DS problems, line attractors instantiated by mere initialization procedures may be unstable and quickly dissolve during training. On the other hand, orthogonal or unitary constraints are too restrictive for reconstructing DS, and more generally from a computational perspective as well : For instance, neither 2) with flow field (grey) and nullclines (set of points at which the flow of one of the variables vanishes, in blue and red). Insets: Time graphs of z 1 for T = 30 000. A) Perfect line attractor. 
The flow converges to the line attractor from all directions and is exactly zero on the line, thus retaining states indefinitely in the absence of perturbations, as illustrated for 3 example trajectories (green) started from different initial conditions. B) Slightly detuned line attractor (cf.). The system's state still converges toward the'line attractor ghost' , but then very slowly crawls up within the'attractor tunnel' (green trajectory) until it hits the stable fixed point at the intersection of nullclines. Within the tunnel, flow velocity is smoothly regulated by the gap between nullclines, thus enabling arbitrary time constants. Note that along other, not illustrated dimensions of the system's state space the flow may still evolve freely in all directions. C) Simple 2-unit solution to the addition problem exploiting the line attractor properties of ReLUs in the positive quadrant. The output unit serves as a perfect integrator, while the input unit will only convey those input values to the output unit that are accompanied by a'1' in the second input stream (see 7.1.1 for complete parameters). chaotic behavior (that requires diverging directions) nor settings with multiple isolated fixed point or limit cycle attractors are possible. Here we therefore suggest a different solution to the problem, by pushing (but not strictly enforcing) ReLU-based, piecewise-linear RNN (PLRNN) toward line attractor configurations along some (but not all) directions in state space. We achieve this by adding special regularization terms for a subset of RNN units to the loss function that promote such a configuration. We demonstrate that our approach outperforms, or is en par with, LSTM and other, initialization-based, methods on a number of'classical' machine learning benchmarks . More importantly, we demonstrate that while with previous methods it was difficult to capture slow behavior in a DS that exhibits widely different time scales, our new regularization-supported inference efficiently captures all relevant time scales. Long-range dependency problems in RNN. Error gradients in vanilla RNN trained by some form of gradient descent, like back-propagation through time , tend to either explode or vanish due to the large product of derivative terms that from recursive application of the chain rule over time steps (; ;). Formally, RNN z t = F θ (z t−1, s t) are discrete time dynamical systems that tend to either converge, e.g. to fixed point or limit cycle attractors, or diverge (to infinity or as in chaotic systems) over time, unless parameters of the system are precisely tuned to create directions of zero-flow in the system's state space (Fig. 1), called line attractors (; ;). Convergence of the RNN in general implies vanishing and global divergence exploding gradients. To address this issue, RNN with gated memory cells have been specifically designed , but these are complicated and tedious to analyze from a DS perspective. observed that initialization of the recurrent weight matrix W to the identity in ReLU-based RNN may yield performance en par with LSTMs on standard machine learning benchmarks. For a ReLU with activity z t ≥ 0, zero bias and unit slope, this in the identity mapping, hence a line attractor configuration. expanded on this idea by initializing the recurrence matrix such that its largest absolute eigenvalue is 1, arguing that this would leave other directions in the system's state space free for computations other than memory maintenance. 
Later work enforced orthogonal (; ;) or unitary constraints on the recurrent weight matrix during training. While this appears to yield long-term memory performance superior to that of LSTMs, these networks are limited in their computational power . This may be a consequence of the fact that RNN with orthogonal recurrence matrix are quite restricted in the range of dynamical phenomena they can produce, e.g. chaotic attractors are not possible since diverging eigen-directions are disabled. Our approach therefore is to establish line attractors only along some but not all directions in state space, and to only push the RNN toward these configurations but not strictly enforce them, such that convergence or divergence of RNN dynamics is still possible. We furthermore implement these concepts through regularization terms in the loss functions, such that they are encouraged throughout training unlike when only established through initialization. Dynamical systems reconstruction. From a natural science perspective, the goal of reconstructing the underlying DS fundamentally differs from building a system that'merely' yields good ahead predictions, as in DS reconstruction we require that the inferred model can freely reproduce (when no longer guided by the data) the underlying attractor geometries and state space properties (see section 3.5, Fig. S2 ;). Earlier work using RNN for DS identification mainly focused on inferring the posterior over latent trajectories Z = {z 1, . . ., z T} given time series data X = {x 1, . . ., x T}, p(Z|X), and on ahead predictions , hence did not show that inferred models can generate the underlying attractor geometries on their own. Others attempt to approximate the flow field, obtained e.g. by numerical differentiation, directly through basis expansions or neural networks, but numerical derivatives are problematic for their high variance and other numerical issues (; ;). Some approaches assume the form of the DS equations basically to be given and focus on estimating the system's latent states and parameters, rather than approximating an unknown DS based on the observed time series information alone. In many biological systems like the brain the intrinsic dynamics are highly stochastic with many noise sources, like probabilistic synaptic release , such that models that do not explicitly account for dynamical process noise may be less suitable. Finally, some fully probabilistic models for DS reconstruction based on GRU (, cf.), LSTM , or radial basis function networks are not easily interpretable and amenable to DS analysis. Most importantly, none of these previous approaches considers the long-range dependency problem within more easily tractable RNN for DS reconstruction. Assume we are given two multivariate time series S = {s t} and X = {x t}, one we will denote as'inputs' (S) and the other as'outputs' (X). We will first consider the'classical' (supervised) machine learning setting where we wish to map S on X through a RNN with latent state equation z t = F θ (z t−1, s t), as for instance in the'addition problem' . In DS reconstruction, in contrast, we usually have a dense time series X from which we wish to infer (unsupervised) the underlying DS, where S may provide an additional forcing function or sparse experimental inputs or perturbations. 
The latent RNN we consider here takes the specific form where the input mapping, h ∈ R M ×1 a bias, and ε t a Gaussian noise term with diagonal covariance ma- This specific formulation is originally motivated by firing rate (population) models in computational neuroscience , where latent states z t may represent membrane voltages or currents, A the neurons' passive time constants, W the synaptic coupling among neurons, and φ(·) the voltage-to-rate transfer function. However, for a RNN in the form z t = W φ (z t−1) + h, note that the simple change of variables y t → W −1 (z t − h) will yield the more familiar form y t = φ (W y t−1 + h) . Besides its neuroscience motivation, note that by letting A = I, W = 0, h = 0, we get a strict line attractor system across the variables' whole support which we conjecture will be of advantage for establishing long short-term memory properties. Also we can solve for all of the system's fixed points analytically by solving the equations z * = (I − A − W D Ω) −1 h, with D Ω as defined in Suppl. 7.1.2, and can determine their stability from the eigenvalues of matrix A + W D Ω. We could do the same for limit cycles, in principle, which are fixed points of the r-times iterated map F r θ, although practically the number of configurations to consider increases exponentially as 2 M ·r. Finally, we remark that a discrete piecewise-linear system can, under certain conditions, be transformed into an equivalent continuous-time (ODE) piecewise-linear systemζ = G Ω (ζ(t), s(t)) (Suppl. 7.1.2,), in the sense that if ζ(t) = z t, then ζ(t + ∆t) = z t+1 after a defined time step ∆t. These are among the properties that make PLRNNs more amenable to rigorous DS analysis than other RNN formulations. We will assume that the latent RNN states z t are coupled to the actual observations x t through a simple observation model of the form in the case of real-valued observations x t ∈ R N ×1, where B ∈ R N ×M is a factor loading matrix and diag(Γ) ∈ R N + the diagonal covariance matrix of the Gaussian observation noise, or in the case of multi-categorical observations x i,t ∈ {0, 1}, i x i,t = 1. We start from a similar idea as , who initialized RNN parameters such that it performs an identity mapping for z i,t ≥ 0. However, 1) we use a neuroscientifically motivated network architecture (eq. 1) that enables the identity mapping across the variables whole support, z i,t ∈ [−∞, +∞], 2) we encourage this mapping only for a subset M reg ≤ M of units (Fig. S1), leaving others free to perform arbitrary computations, and 3) we stabilize this configuration throughout training by introducing a specific L 2 regularization for parameters A, W, and h in eq. 1. That way, we divide the units into two types, where the regularized units serve as a memory that tends to decay very slowly (depending on the size of the regularization term), while the remaining units maintain the flexibility to approximate any underlying DS, yet retaining the simplicity of the original RNN model (eq. 1). Specifically, the following penalty is added to the loss function (Fig. S1): While this formulation allows us to trade off, for instance, the tendency toward a line attractor (A → I, h → 0) vs. the sensitivity to other units' inputs (W → 0), for all experiments performed here a common value, τ A = τ W = τ h = τ, was assumed for the three regularization factors. For comparability with other approaches like LSTMs or iRNN , we will assume that the latent state dynamics eq. 
1 are deterministic (i.e., Σ = 0), will take g(z t) = z t and Γ = I N in eq. 2 (leading to an implicit Gaussian assumption with identity covariance matrix), and will use stochastic gradient descent (SGD) for training to minimize the squared-error loss across R samples,, between estimated and actual outputs for the addition and multiplication problems, and the cross entropy loss i,T ) for sequential MNIST, to which penalty eq. 4 was added for the regularized PLRNN (rPLRNN). We used the Adam algorithm from the PyTorch package with a learning rate of 0.001, a gradient clip parameter of 10, and batch size of 16. In all cases, SGD is stopped after 100 epochs and the fit with the lowest loss across all epochs is chosen. For DS reconstruction we request that the latent RNN approximates the true generating system of equations, which is a taller order than learning the mapping S → X or predicting future values in a time series (cf. sect. 3.5). This point has important implications for the design of models, inference algorithms and performance metrics if the primary goal is DS reconstruction rather than'mere' time series forecasting. In this context we consider the fully probabilistic, generative RNN eq. 1. Together with eq. 2 (where we take g(z t) = φ(z t)) this gives the typical form of a nonlinear state space model with observation and process noise. We solve for the parameters θ = {A, W, C, h, Σ, B, Γ} by maximum likelihood, for which an efficient ExpectationMaximization (EM) algorithm has recently been suggested , which we will briefly summarize here. Since the involved integrals are not tractable, we start off from the evidence-lower bound (ELBO) to the log-likelihood which can be rewritten in various useful ways: In the E-step, given a current estimate θ * for the parameters, we seek to determine the posterior p θ (Z|X) which we approximate by a global Gaussian q(Z|X) instantiated by the maximizer (mode) Z * of p θ (Z|X) as an estimator of the mean, and the negative inverse Hessian around this maximizer as an estimator of the state covariance, i.e. since Z integrates out in p θ (X) (equivalently, this can be derived from a Laplace approximation to the log-likelihood, log p(X|θ) where L * is the Hessian evaluated at the maximizer). We solve this optimization problem by a fixed-point iteration scheme that efficiently exploits the model's piecewise linear structure (see Suppl. 7.1.3, ;). Using this approximate posterior for p θ (Z|X), based on the model's piecewise-linear structure most of the expectation values, and E z∼q [φ(z)φ(z) ], could be solved for (semi-)analytically (where z is the concatenated vector form of Z, as in Suppl. 7.1.3). In the Mstep, we seek θ *:= arg max θ L(θ, q *), assuming proposal density q * to be given from the E-step, which for a Gaussian observation model amounts to a simple linear regression problem (see Suppl. eq. 23). To force the PLRNN to really capture the underlying DS in its governing equations, we use a previously suggested (Koppe et al. 2019) stepwise annealing protocol that gradually shifts the burden of fitting the observations X from the observation model eq. 2 to the latent RNN model eq. 1 during training, the idea of which is to establish a mapping from latent states Z to observations X first, fixing this, and then enforcing the temporal consistency constraints implied by eq. 1 while accounting for the actual observations. Measures of prediction error. 
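Since eq. 4 itself is garbled in this copy, the following sketch spells out one plausible reading of the penalty that is added to the benchmark losses above. It assumes latent dynamics of the form z_t = A z_{t-1} + W φ(z_{t-1}) + C s_t + h + ε_t with diagonal A and φ = ReLU (consistent with the references to eq. 1 in the text), and pushes the first M_reg units toward a line attractor (A → I, W → 0, h → 0 on those units); the function and argument names are ours:

    import torch

    def line_attractor_penalty(A_diag, W, h, m_reg, tau=1.0):
        # A_diag: (M,) diagonal of A; W: (M, M) recurrence matrix; h: (M,) bias;
        # m_reg: number of regularized units; tau: common weight tau_A = tau_W = tau_h
        reg_a = ((A_diag[:m_reg] - 1.0) ** 2).sum()  # pull diagonal entries toward 1
        reg_w = (W[:m_reg, :] ** 2).sum()            # pull weights of regularized units toward 0
        reg_h = (h[:m_reg] ** 2).sum()               # pull biases of regularized units toward 0
        return tau * (reg_a + reg_w + reg_h)

    # total loss for the rPLRNN on the benchmarks:
    # loss = task_loss + line_attractor_penalty(A_diag, W, h, m_reg, tau)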
For the machine learning benchmarks we employed the same criteria as used for optimization (MSE or cross-entropy, sect. 3.3) as performance metrics, evaluated across left-out test sets. In addition, we report the relative frequency P correct of correctly predicted trials Agreement in attractor geometries. From a DS perspective, it is not sufficient or even sensible to judge a method's ability to infer the underlying DS purely based on some form of (ahead-)prediction error like the MSE defined on the time series itself (Ch.12 in). Rather, we require that the inferred model can freely reproduce (when no longer guided by the data) the underlying attractor geometries and state space properties. This is not automatically guaranteed for a model that yields agreeable ahead predictions on a time series. Vice versa, if the underlying attractor is chaotic, with a tiny bit of noise even trajectories starting from the same initial condition will quickly diverge and ahead-prediction errors are not even meaningful as a performance metric (Fig. S2A). To quantify how well an inferred PLRNN captured the underlying dynamics we therefore followed and used the Kullback-Leibler divergence between the true and reproduced probability distributions across states in state space, thus assessing the agreement in attractor geometries (cf. ;) rather than in precise matching of time series, where p true (x) is the true distribution of observations across state space (not time!), p gen (x|z) is the distribution of observations generated by running the inferred PLRNN, and the sum indicates a spatial discretization (binning) of the observed state space (see Suppl. 7.1.4 for more details). We emphasize thatp (k) gen (x|z) is obtained from freely simulated trajectories, i.e. drawn from the prior p(z), not from the inferred posteriorsp(z|x train). (The form ofp(z) is given by the dynamical model eq. 1 and has a'mixture of piecewise-Gaussians' structure, see.) In addition, to assess reproduction of time scales by the inferred PLRNN, we computed the average correlation between the power spectra of the true and generated time series. Fig. S1, with additional regularization term (eq. 4) during training LSTM Long Short-Term Memory 4 NUMERICAL EXPERIMENTS We compared the performance of our rPLRNN to other models on the following three benchmarks requiring long short-term maintenance of information (as in and): 1) The addition problem of time length T consists of 100 000 training and 10 000 test samples of 2 × T input series S = {s 1, . . ., s T}, where entries s 1,: ∈ are drawn from a uniform random distribution and s 2,: ∈ {0, 1} contains zeros except for two indicator bits placed randomly at times t 1 < 10 and t 2 < T /2. Constraints on t 1 and t 2 are chosen such that every trial requires a long memory of at least T /2 time steps. At the last time step T, the target output of the network is the sum of the two inputs in s 1,: indicated by the 1-entries in s 2,:, x target T = s 1,t1 + s 1,t2. 2) The multiplication problem is the same as the addition problem, only that the product instead of the sum has to be produced by the RNN as an output at time T, x target T = s 1,t1 ·s 1,t2. 3) The MNIST dataset consists of 60 000 training and 10 000 28 × 28 test images of hand written digits. To make this a time series problem, in sequential MNIST the images are presented sequentially, pixel-by-pixel, scanning lines from upper left to bottom-right, ing in time series of fixed length T = 784. 
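For reference, one trial of the addition problem described above can be generated as follows; the value range [0, 1) and the exact sampling windows for t1 and t2 are assumptions where the original ranges were lost in extraction:

    import numpy as np

    def addition_trial(T, rng=np.random.default_rng()):
        s1 = rng.random(T)                # input values, assumed uniform in [0, 1)
        s2 = np.zeros(T)
        t1 = rng.integers(0, 10)          # first marker within the first 10 steps
        t2 = rng.integers(10, T // 2)     # second marker before T/2, distinct from t1
        s2[[t1, t2]] = 1.0
        inputs = np.stack([s1, s2])       # 2 x T input series
        target = s1[t1] + s1[t2]          # required network output at time T
        return inputs, target

    # the multiplication problem only changes the target to s1[t1] * s1[t2]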
On all three benchmarks we compare the performance of the rPLRNN (eq. 1) to several other models summarized in Table 1. To achieve a meaningful comparison, all models have the same number of hidden states M, except for the LSTM, which requires three additional parameters for each hidden state and hence has only M/4 hidden states, yielding the overall same number of trainable parameters as for the other models. In all cases, M = 40, which initial numerical exploration suggested to be a good compromise between model complexity (bias) and data fit (variance) (Fig. S3). Fig. 2 summarizes the for the machine learning benchmarks. As can be seen, on the addition and multiplication tasks, and in terms of either the MSE or percentage correct, our rPLRNN outperforms all other tested methods, including LSTMs. Indeed, the LSTM performs even significantly worse than the iRNN and the iPLRNN. The large error bars in Fig. 2 from the fact that the networks mostly learn these tasks in an all-or-none fashion, i.e. either learn the task and succeed in almost 100 percent of the cases or fail completely. The for the sequential MNIST problem are summarized in Fig. 2C. While in this case the LSTM outperforms all other methods, the rPLRNN is almost en par with it. In addition, the iPLRNN outperforms the iRNN. Similar were obtained for M = 100 units (M = 25, respectively, for LSTM; Fig. S6). While the rPLRNN in general outperformed the pure initialization-based models (iRNN, npRNN, iPLRNN), confirming that a line attractor subspace present at initialization may be lost throughout training, we conjecture that this difference in performance will become even more pronounced as noise levels or task complexity increase. Fig. 3: Reconstruction of a 2-time scale DS (biophysical bursting neuron model) in limit cycle regime. A) KL divergence (D KL) between true and generated state space distributions as a function of τ. Unstable (globally diverging) system estimates were removed. B) Average MSE between power spectra of true and reconstructed DS. C) Average normalized MSE between power spectra of true and reconstructed DS split according to low (≤ 50 Hz) and high (> 50 Hz) frequency components. Error bars = SEM in all graphs. D) Example of (best) generated time series (red=reconstruction with τ = 2 3). Here our goal was to examine whether our regularization approach would also help with the identification of DS that harbor widely different time scales. By tuning systems in the vicinity of line attractors, multiple arbitrary time scales can be realized in theory . To test this, we used a biophysically motivated (highly nonlinear) bursting cortical neuron model with one voltage and two conductance recovery variables (see), one slow and one fast (Suppl. 7.1.5). Reproduction of this DS is challenging since it produces very fast spikes on top of a slow nonlinear oscillation (Fig. 3D). Time series of standardized variables of length T = 1500 were generated from this model and provided as observations to the rPLRNN inference algorithm. rPLRNNs with M = {8 . . . 18} states were estimated, with the regularization factor varied within τ ∈ {0, 10 1, 10 2, 10 3, 10 4, 10 5}/1500. Fig. 3A confirms our intuition that stronger regularization leads to better DS reconstruction as assessed by the KL divergence between true and generated state distributions. This decrease in D KL is accompanied by a likewise decrease in the MSE between the power spectra of true (Suppl. eq. 27) and generated (rPLRNN) voltage traces as τ increased (Fig. 3B). 
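The geometric measure used in Fig. 3A can be computed roughly as follows: bin the observed state space, estimate occupancy histograms from the true data and from trajectories freely generated by the trained model, and take the KL divergence between them. The number of bins and the smoothing constant below are our choices, and the histogram approach is only practical for low-dimensional observation spaces such as the three-variable neuron model used here:

    import numpy as np

    def state_space_kl(x_true, x_gen, bins=30, eps=1e-12):
        # x_true, x_gen: (T, N) observations from the true system and from
        # freely simulated model trajectories; returns D_KL(p_true || p_gen)
        edges = [np.linspace(x_true[:, d].min(), x_true[:, d].max(), bins + 1)
                 for d in range(x_true.shape[1])]
        p_true, _ = np.histogramdd(x_true, bins=edges)
        p_gen, _ = np.histogramdd(x_gen, bins=edges)
        p_true = p_true / p_true.sum()
        p_gen = (p_gen + eps) / (p_gen.sum() + eps * p_gen.size)  # avoid log(0)
        mask = p_true > 0
        return float(np.sum(p_true[mask] * np.log(p_true[mask] / p_gen[mask])))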
Fig. 3D gives an example of voltage traces and gating variables freely simulated (i.e., sampled) from the generative rPLRNN trained with τ = 2 3, illustrating that our model is in principle capable of capturing both the stiff spike dynamics and the slower oscillations in the second gating variable at the same time. Fig. 3C provides more insight into how the regularization worked: While the high frequency components (> 50 Hz) related to the repetitive spiking activity hardly benefitted from increasing τ, there was a strong reduction in the MSE computed on the power spectrum for the lower frequency range (≤ 50 Hz), suggesting that increased regularization helps to map slowly evolving components of the dynamics. In this work we have introduced a simple solution to the long short-term memory problem in RNN that on the one hand retains the simplicity and tractability of vanilla RNN, yet on the other hand does not curtail the universal computational capabilities of RNN and their ability to approximate arbitrary DS (; ;). We achieved this by adding regularization terms to the loss function that encourage the system to form a'memory subspace', that is, line attractor dimensions which would store arbitrary values for, if unperturbed, arbitrarily long periods. At the same time we did not rigorously enforce this constraint which has important implications for capturing slow time scales in the data: It allows the RNN to slightly depart from a perfect line attractor, which has been shown to constitute a general dynamical mechanism for regulating the speed of flow and thus the learning of arbitrary time constants that are not naturally included qua RNN design (; 2004). This is because as we come infinitesimally close to a line attractor and thus a bifurcation in the system's parameter space, the flow along this direction becomes arbitrarily slow until it vanishes completely in the line attractor configuration (Fig. 1). Moreover, part of the RNN's latent space was not regularized at all, leaving the system enough degrees of freedom for realizing arbitrary computations or dynamics. We showed that the rPLRNN is en par with or outperforms initialization-based approaches and LSTMs on a number of classical benchmarks, and, more importantly, that the regularization strongly facilitates the identification of challenging DS with widely different time scales in PLRNN-based algorithms for DS reconstruction. Future work will explore a wider range of DS models and empirical data with diverse temporal and dynamical phenomena. Another future direction may be to replace the EM algorithm by black-box variational inference, using the re-parameterization trick for gradient descent (; ;). While this would come with better scaling in M, the number of latent states (the scaling in T is linear for EM as well, see), the EM used here efficiently exploits the model's piecewise linear structure in finding the posterior over latent states and computing the parameters (see Suppl. 7.1.3). It may thus be more accurate and suitable for smaller-scale problems where high precision is required, as often encountered in neuroscience or physics. 7 SUPPLEMENTARY MATERIAL 7.1 SUPPLEMENTARY TEXT 7.1.1 Simple exact PLRNN solution for addition problem The exact PLRNN parameter settings (cf. eq. 1) for solving the addition problem with 2 units (cf. Fig. 1C) are as follows: Under some conditions we can translate the discrete into an equivalent continuous time PLRNN. 
Using D Ω(t) as defined below (7.1.3) for a single time step t, we can rewrite (ignoring the noise term and inputs) PLRNN eq. 1 in the form where Ω(t):= {m|z m,t > 0} is the set of all unit indices with activation larger 0 at time t. To convert this into an equivalent (in the sense defined in eq. 11) system of (piecewise) ordinary differential equations (ODE), we need to find parameters W Ω and h, such that where ∆t is the time step with which the empirically observed time series X was sampled. From these conditions it follows that for each of the s ∈ {1, . . ., 2 M} subregions (orthants) defined by fixed index sets Ω s ⊆ {1, . . ., M} we must have where we assume that D Ω s is constant for one time step, i.e. between 0 and ∆t. We approach this by first solving the homogeneous system using the general ansatz for systems of linear ODEs, where we have used z 0 = k c k v k on lines 15 and 16. Hence we can infer matrix W Ω s from the eigendecomposition of matrix W Ω s, by lettingλ k = 1 ∆t log λ k, where λ k are the eigenvalues of W Ω s, and reassembling We obtain the general solution for the inhomogeneous case by requiring that for all fixed points z * = F (z *) of the map eq. 9 we have G(z *) = 0. Using this we obtaiñ Assuming inputs s t to be constant across time step ∆t, we can apply the same transformation to input matrix C. Fig. S5 illustrates the discrete to continuous PLRNN conversion for a nonlinear oscillator. Note that in the above derivations we have assumed that matrix W Ω s can be diagonalized, and that all its eigenvalues are nonzero (in fact, W Ω s should not have any negative real eigenvalues). In general, not every discrete-time PLRNN can be converted into a continuous-time ODE system in the sense defined above. For instance, we can have chaos in a 1d nonlinear map, while we need at least a 3d ODE system to create chaos . Here we briefly outline the fixed-point-iteration algorithm for solving the maximization problem in eq. 6 (for more details see ;). Given a Gaussian latent PLRNN and a Gaussian observation model, the joint density p(X, Z) will be piecewise Gaussian, hence eq. 6 piecewise quadratic in Z. Let us concatenate all state variables across m and t into one long column vector z = (z 1,1, . . ., z M,1, . . ., z 1,T, . . ., z M,T), arrange matrices A, W into large M T × M T block tri-diagonal matrices, define d Ω:= 1 z1,1>0, 1 z2,1>0,..., 1 z M,T >0 as an indicator vector with a 1 for all states z m,t > 0 and zeros otherwise, and D Ω:= diag(d Ω) as the diagonal matrix formed from this vector. Collecting all terms quadratic, linear, or constant in z, we can then write down the optimization criterion in the form In essence, the algorithm now iterates between the two steps: 2. Given fixed z *, recompute D Ω until either convergence or one of several stopping criteria (partly likelihood-based, partly to avoid loops) is reached. The solution may afterwards be refined by one quadratic programming step. Numerical experiments showed this algorithm to be very fast and efficient . At z *, an estimate of the state covariance is then obtained as the inverse negative Hessian, In the M-step, using the proposal density q * from the E-step, the solution to the maximization problem θ *:= arg max θ L(θ, q *), can generally be expressed in the form where, for the latent model, eq. 1, α t = z t and β t:= z t−1, φ(z t−1), s t, 1 ∈ R 2M +K+1, and for the observation model, eq. 2, α t = x t and β t = g (z t). 
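A compact sketch of the discrete-to-continuous conversion outlined in 7.1.2 is given below for a single subregion Ω: the eigenvalues of the discrete map are transformed via λ̃_k = log(λ_k)/Δt, and the offset is chosen so that the fixed point of the map is also a fixed point of the ODE. It assumes W_Ω is diagonalizable with nonzero eigenvalues off the negative real axis, as stated above; since the original treatment of the offset is partly lost in this copy, the h̃ line is our reconstruction:

    import numpy as np

    def discrete_to_continuous(W_om, h, dt=1.0):
        # discrete map within subregion Omega: z_{t+1} = W_om z_t + h, step size dt
        lam, V = np.linalg.eig(W_om)
        lam_tilde = np.log(lam.astype(complex)) / dt           # lambda_tilde_k = log(lambda_k) / dt
        W_tilde = (V @ np.diag(lam_tilde) @ np.linalg.inv(V)).real
        z_star = np.linalg.solve(np.eye(len(h)) - W_om, h)     # fixed point of the discrete map
        h_tilde = -W_tilde @ z_star                            # same point is a fixed point of the ODE
        return W_tilde, h_tilde                                # ODE: zdot = W_tilde z + h_tilde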
The measure D KL introduced in the main text for assessing the agreement in attractor geometries only works for situations where the ground truth p true (X) is known. , here we would like to briefly indicate how a proxy for D KL may be obtained in empirical situations where no ground truth is available. Reasoning that for a well reconstructed DS the inferred posterior p inf (z|x) given the observations should be a good representative of the prior generative dynamics p gen (z), one may use the Kullback-Leibler divergence between the distribution over latent states, obtained by sampling from the prior density p gen (z), and the (data-constrained) posterior distribution p inf (z|x) (where z ∈ R M ×1 and x ∈ R N ×1), taken across the system's state space: As evaluating this integral is difficult, one could further approximate p inf (z|x) and p gen (z) by Gaussian mixtures across trajectories, i.e., where the mean and covariance of p(z t |x 1:T) and p(z l |z l−1) are obtained by marginalizing over the multivariate distributions p(Z|X) and p gen (Z), respectively, yielding E[z t |x 1:T], E[z l |z l−1], and covariance matrices Var(z t |x 1:T) and Var(z l |z l−1). Supplementary eq. 24 may then be numerically approximated through Monte Carlo sampling by For high-dimensional state spaces, for which MC sampling becomes challenging, there is luckily a variational approximation of eq. 24 available : where the KL divergences in the exponentials are among Gaussians for which we have an analytical expression. The neuron model used in section 4.2 is described by σ(V) = 1 +.33e where C m refers to the neuron's membrane capacitance, the g • to different membrane conductances, E • to the respective reversal potentials, and m, h, and n are gating variables with limiting values given by Different parameter settings in this model lead to different dynamical phenomena, including regular spiking, slow bursting or chaos (see with parameters in the chaotic regime (blue curves) and with simple fixed point dynamics in the limit (red line). Although the system has vastly different limiting behaviors (attractor geometries) in these two cases, as visualized in the state space, the agreement in time series initially seems to indicate a perfect fit. B) Same as in A) for two trajectories drawn from exactly the same DS (i.e., same parameters) with slightly different initial conditions. Despite identical dynamics, the trajectories immediately diverge, ing in a high MSE. Dash-dotted grey lines in top graphs indicate the point from which onward the state space trajectories were depicted.
We develop a new optimization approach for vanilla ReLU-based RNN that enables long short-term memory and identification of arbitrary nonlinear dynamical systems with widely differing time scales.
325
scitldr
In the field of Generative Adversarial Networks (GANs), how to design a stable training strategy remains an open problem. Wasserstein GANs have largely promoted the stability over the original GANs by introducing Wasserstein distance, but still remain unstable and are prone to a variety of failure modes. In this paper, we present a general framework named Wasserstein-Bounded GAN (WBGAN), which improves a large family of WGAN-based approaches by simply adding an upper-bound constraint to the Wasserstein term. Furthermore, we show that WBGAN can reasonably measure the difference of distributions which almost have no intersection. Experiments demonstrate that WBGAN can stabilize as well as accelerate convergence in the training processes of a series of WGAN-based variants. Over the past few years, Generative Adversarial Networks (GANs) have shown impressive in many generative tasks. They are inspired by the game theory, that two models compete with each other: a generator which seeks to produce samples from the same distribution as the data, and a discriminator whose job is to distinguish between real and generated data. Both models are forced stronger simultaneously during the training process. GANs are capable of producing plausible synthetic data across a wide diversity of data modalities, including natural images (; ;), natural language (; ;), music;; ), etc. Despite their success, it is often difficult to train a GAN model in a fast and stable way, and researchers are facing issues like vanishing gradients, training instability, mode collapse, etc. This has led to a proliferation of works that focus on improving the quality of GANs by stabilizing the training procedure (; ; ; ; ;). In particular, introduced a variant of GANs based on the Wasserstein distance, and releases the problem of gradient disappearance to some extent. However, WGANs limit the weight within a range to enforce the continuity of Lipschitz, which can easily cause over-simplified critic functions . To solve this issue, proposed a gradient penalty method termed WGAN-GP, which replaces the weight clipping in WGANs with a gradient penalty term. As such, WGAN-GP provides a more stable training procedure and succeeds in a variety of generating tasks. Based on WGAN-GP, more works (; ; ; ; ; ; adopt different forms of gradient penalty terms to further improve training stability. However, it is often observed that such gradient penalty strategy sometimes generate samples with unsatisfying quality, or even do not always converge to the equilibrium point . In this paper, we propose a general framework named Wasserstein-Bounded GAN (WBGAN), which improve the stability of WGAN training by bounding the Wasserstein term. The highlight is that the instability of WGANs also resides in the dramatic changes of the estimated Wasserstein distance during the initial iterations. Many previous works just focused on improving the gradient penalty term for stable training, while they ignored the bottleneck of the Wasserstein term. The proposed training strategy is able to adaptively enforce the Wasserstein term within a certain value, so as to balance the Wasserstein loss and gradient penalty loss dynamically and make the training process more stable. WBGANs are generalized, which can be instantiated using different kinds of bound estimations, and incorporated into any variant of WGANs to improve the training stability and accelerate the convergence. 
Specifically, with Sinkhorn distance for bound estimation, we test three representative variants of WGANs (WGAN-GP , WGANdiv , and WGAN-GPReal ) on the CelebA dataset . As shown in Fig. 1 Wasserstein GANs (WGANs). WGANs were primarily motivated by unstable training caused by the gradient vanishing problem of the original GANs . They proposed to use 1-Wasserstein distance W 1 (P r, P g) to measure the difference between P r and P g, the real and generated distributions, given that W 1 (P r, P g) is continuous everywhere and differentiable almost everywhere under mild assumptions. The objective of WGAN is formulated using the KantorovichRubinstein duality : where L 1 is the function space of all D satisfying the 1-Lipschitz constraint D L ≤ 1. D is a critic and G is the generator, both of which are parameterized by a neural network. Under an optimal critic, minimizing the objective with respect to G is to minimize W 1 (P r, P g). To enforce the 1-Lipschitz constraint on the critic, WGAN used a weight clipping on the critic to constrain the weights within a compact range, [−c, c], which guarantees the set of critic functions is a subset of the k-Lipschitz functions for some k. With weight clipping, the critic tends to learn over-simplified functions , which may lead to unsatisfying .;;; proposed different forms of gradient penalty as a regularization term, so that a generalized loss function with respect to the critic can be written as: where ] stands for the Wasserstein term, and GP for the gradient penalty term. L D is actually posing a tradeoff between these two objectives. Wasserstein Distance between Empirical Distributions. In practice, we approximate W 1 (P r, P g) using W 1 P r,P g, whereP r andP g denote the empirical version of P r and P g with N samples, Here, y i is randomly sampled from the real image dataset, and δ yi is the Dirac delta function at location y i. Computing W 1 P r,P g is a typical problem named discrete optimal transport. We denote B as the set of probabilistic couplings between two empirical distributions defined as: where 1 N is a N -dimensional all-one vector. Then we have W 1 (P r,P g) = min γ∈B Γ, C F, where ·, · F is the Frobenius dot-product and C is the cost matrix, with each element C i,j = c(G(z i), y j ) denoting the cost to move a probability mass from G(z i) to y j. The optimal coupling is the solution of this minimization problem: Γ 0 = arg min Γ∈B Γ, C F. The Sinkhorn Algorithm. Despite Wasserstein distance has appealing theoretical properties in measuring the difference between distributions, its computational costs for linear programming are often high in particular when the problem size becomes large. To alleviate this burden, Sinkhorn distance was proposed to approximate Wasserstein distance: where U α (P r,P g) is a subset of B defined in Eq. 3: where H(·) is the entropy defined as H(Γ) = − N i,j=1 Γ i,j log Γ i,j and H(P r) = − N n=1p n logp n wherep n is the probability of the n-th sample. Compared to Wasserstein distance, Sinkhorn distance restricts the search space of joint probabilities to those with sufficient smoothness. To compute Sinkhorn distance, a Lagrange multiplier was used: can be computed with a much cheaper cost than the original Wasserstein distance using matrix scaling algorithms. For λ > 0, the solution Γ λ is unique and has the form Γ λ = diag(u)K diag(v), where K is the element-wise exponential of −λC. u and v are two non-negative vectors uniquely defined up to a multiplicative factor (. 
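To make the matrix-scaling procedure concrete, the following is a minimal numpy sketch of the Sinkhorn iterations described above, computing d_λ between two empirical batches with uniform weights. The choice of λ, the iteration count, and the L2 ground cost are illustrative rather than taken from the paper.

    import numpy as np

    def sinkhorn_distance(cost, lam=10.0, n_iters=100):
        # cost: (N, N) matrix with C[i, j] = c(G(z_i), y_j).
        n = cost.shape[0]
        p = np.full(n, 1.0 / n)                 # uniform weights of \hat{P}_g
        q = np.full(n, 1.0 / n)                 # uniform weights of \hat{P}_r
        K = np.exp(-lam * cost)                 # element-wise exponential of -lambda * C
        u = np.ones(n) / n
        for _ in range(n_iters):                # alternating matrix-scaling updates
            v = q / (K.T @ u)
            u = p / (K @ v)
        gamma = np.diag(u) @ K @ np.diag(v)     # smoothed optimal coupling Gamma_lambda
        return float(np.sum(gamma * cost))      # <Gamma_lambda, C>_F

    def l2_cost(generated, real):
        # generated, real: (N, D) arrays of flattened samples.
        diff = generated[:, None, :] - real[None, :, :]
        return np.sqrt((diff ** 2).sum(-1))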
W is often referred to as the Wasserstein term, which is unbounded during the training process. In a wide range of WGAN's variants such as WGAN-GP , the critic defined by L D is to maximize the Wasserstein term W while satisfying the gradient penalty GP. However, in practice, we find that W often rises rapidly to a tremendous value which is far from rational during the initial training procedure. A possible reason may lie in that the critic function does not satisfy the Lipschitz constraint during the initial training stage. As shown in Fig. 2, this leads to dramatic instability in optimization and finally in unsatisfying performance in image generation. Our idea is thus straightforward, i.e., setting an upper-bound for W. The modified critic loss function is written as: Our formulation brings a benefit to the numerical stability of the Wasserstein term. In practice, it remains comparable to the other term, λ GP · GP, so that both W and GP can be optimized in a'mild' manner, i.e., without any one of them dominating or being ignored during training. Note that the Ď W term cannot be chosen arbitrarily. Setting it too small, Ď W will limit the capacity of the critic function, ing in a poor generation. Setting it too large, there will be no effect of bounding the W term. The proposed bounded strategy is a general framework. We name it general in two folds: First, WBGAN can be applied to almost all gradient penalty based WGANs, such as WGAN-GP , WGAN-GPReal , etc. Moreover, there are different ways to estimate the value of Ď W. For example, the linear programming was applied successfully to some existing WGANs like WGAN-TS. In what follows, we present an example which uses Sinkhorn distance to estimate Ď W, while we believe other ways of estimation are also possible. In this section, we give an instantiation, Sikhorn distance , to effectively compute the bounded term Ď W. The motivation of using Sinkhorn distance lies in that in theory, the Wasserstein term of WGAN will eventually converge to the 1-Wasserstein distance between the real distribution P r and the generated distribution P g ). Therefore, we can use the 1-Wasserstein distance between the empirical distributions,P r andP g, as the upper-bound Ď W. Since the computation of Wasserstein distance involves a large linear programming which Algorithm 1 WBGAN with Sinkhorn distance Require: learning rate α, batch size M, the number of iterations of the critic per generator iteration N critic, weight of gradient penalty λ GP, weight of Sinkhorn distance λ s, initial parameters θ and φ 0, other hyper-parameters; 1: while φ t has not converged do 2: Sample a batch {z (m) } M m=1 ∼ P z of prior samples; 5: 7: end for 10: Sample a batch {z (m) } M m=1 ∼ P z of prior samples; 11: 12: φ t+1 ← Adam(L φt, φ t, α, β 1, β 2); 14: end while Ensure: trained parameters θ and φ T (converged). suffers heavy computational costs, we replace it by Sinkhorn distance instead -the Sinkhorn distance betweenP r andP g can be computed using Sinkhorn's matrix scaling algorithm , which is orders of magnitude faster than the linear programming solvers. Mathematically, consider a generator function G φ (z) that produces samples by transforming noise input z drawn from a simple distribution P z, e.g., Gaussian distribution. D θ stands for a critic function parameterized by θ. The objective of the critic is: where d λ (P r,P g) is the Sinkhorn distance defined in Eq. 6. 
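Since the listing of Algorithm 1 above is partially garbled by extraction, the sketch below spells out one plausible reading of a single WBGAN training step in PyTorch: the Sinkhorn distance between the current real and generated batches serves as the bound Ď W on the critic's Wasserstein term, and the generator additionally minimizes λ_s times the same (differentiable) Sinkhorn estimate. The min-style realization of the bound, the hyper-parameter values, and the single critic update per step (Algorithm 1 uses N_critic of them and fresh noise for the generator update) are assumptions; the excerpt does not show the exact equations.

    import torch

    def sinkhorn_distance_torch(cost, lam=10.0, n_iters=100):
        # Differentiable analogue of the numpy routine sketched earlier.
        n = cost.size(0)
        marg = torch.full((n,), 1.0 / n, device=cost.device)
        K = torch.exp(-lam * cost)
        u = marg.clone()
        for _ in range(n_iters):
            v = marg / (K.t() @ u)
            u = marg / (K @ v)
        return (torch.diag(u) @ K @ torch.diag(v) * cost).sum()

    def gradient_penalty(critic, real, fake):
        # Standard WGAN-GP penalty on random interpolates between real and fake.
        eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
        mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
        grad = torch.autograd.grad(critic(mix).sum(), mix, create_graph=True)[0]
        return ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

    def wbgan_step(critic, generator, real, z, opt_d, opt_g,
                   lambda_gp=10.0, lambda_s=0.5):
        fake = generator(z)
        cost = torch.cdist(fake.flatten(1), real.flatten(1))    # ground cost between batches
        w_bound = sinkhorn_distance_torch(cost.detach())        # \bar{W}: Sinkhorn estimate

        # Critic: maximize the Wasserstein term, but never beyond w_bound.
        w_term = critic(real).mean() - critic(fake.detach()).mean()
        d_loss = -torch.minimum(w_term, w_bound) \
                 + lambda_gp * gradient_penalty(critic, real, fake.detach())
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: adversarial term plus lambda_s times the Sinkhorn distance.
        fake = generator(z)
        g_loss = -critic(fake).mean() + lambda_s * sinkhorn_distance_torch(
            torch.cdist(fake.flatten(1), real.flatten(1)))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()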
On the other hand, given a fixed critic function D θ, considering that Sinkhorn distance allows gradient back-propagation , we can find the optimal generator G φ by solving: where λ s is a balancing hyper-parameter, which we set λ s = 0.5 in this paper. In Algorithm 1, we summarize the flowchart of training WBGAN with Sinkhorn distance. ) be a separable metric space. P(X) denotes the set of Borel probability measures. P p (X) denotes the set of all µ ∈ P(X) such that X d(x, y) p dµ(x) < +∞ for some y ∈ X. We can suppose real data distribution P r, generated data distribution P g and their empirical distributionP r andP g all in P p (X). Proposition 1. Let P r and P g be real data distribution and generated data distribution. Suppose that P r andP g are empirical measures of P r and P g. Then we have 0 Proof. Please refer to Appendix A. Proposition 1 tells us that as E[W 1 (P r,P g)] → 0, W 1 (P r, P g) is forced to 0. has pointed out that if λ is chosen large enough, d λ (P r,P g) coincides with W 1 (P r,P g). So, it is reasonable to use d λ (P r,P g) to constrain the Wasserstein term. Most GANs measure the distance between distributions based on probability divergence. We will prove that the Eq. 8 is indeed a valid divergence. First, we have the following definition. Definition 1. Given probability measures p and q, D is a functional of p and q. If D satisfies the following properties: then we say D is a probability divergence between p and q. Remark 1. The following W (P r, P g) satisfies the Definition 1 and is therefore a probability divergence. where L 1 is the 1-Lipschitz constraint. Please see the proof and detailed discussion in. This is the objective of critic used by WGAN. Remark 2. Equation 8 satisfies the Definition 1 and is a probability divergence. Proof. The proof is given in Appendix B. Remark 3. Consider two distributions P r (x) = δ(x − α), P g (x) = δ(x − β) that have no intersection (α = β). δ is the Dirac delta function. In such an extreme case, Eq. 8 can still be optimized by gradient descent. Proof. The proof is in Appendix C Remark 2 tells us that Eq. 8 is a valid divergence. Since the real data distribution is supported by lowdimensional manifolds, the supports of generated distribution and real data distribution are unlikely to have a non-negligible intersection. Remark 3 shows that compared to the standard GAN , WBGAN can continuously measure the difference between two distributions, even if there is almost no intersection between the distributions. To verify that WBGAN is a generalized approach, we select three variants of WGAN, namely, WGAN-GP , WGAN-div and WGAN-GPReal (gradient penalty on real data only) as our baselines. By adding bound constraints to these WGAN variants, we obtain the counterparts WBGAN-GP, WBGAN-div, and WBGANGPReal, respectively. Two different network architectures are used, i.e., DCGAN and BigGAN . For DCGAN, we directly output the activation before the sigmoid layer. BigGAN is a conditional GAN architecture, in which class conditioning is passed to generator by supplying it with class-conditional gains and biases in the batch normalization layer (; de ; . In addition, the discriminator is conditioned by using the cosine similarity between its features and a set of learned class embedding. We use the spectral norm in BigGAN, but for the sake of simplicity, we do not use the self-attention module (; . Other hyper-parameters and the network architecture of BigGAN simply follow the original paper. 
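The class-conditional batch normalization mentioned above is the main non-standard ingredient of the BigGAN backbone used here. A minimal PyTorch sketch of one common realization is given below, with per-class embedded gains and biases; BigGAN itself derives them from a shared embedding and a chunk of the noise vector, and the layer sizes are illustrative.

    import torch
    import torch.nn as nn

    class ConditionalBatchNorm2d(nn.Module):
        # Batch norm whose per-channel gain and bias come from a class embedding
        # instead of being free parameters.
        def __init__(self, num_features, num_classes):
            super().__init__()
            self.bn = nn.BatchNorm2d(num_features, affine=False)
            self.gain = nn.Embedding(num_classes, num_features)
            self.bias = nn.Embedding(num_classes, num_features)
            nn.init.ones_(self.gain.weight)
            nn.init.zeros_(self.bias.weight)

        def forward(self, x, y):
            out = self.bn(x)
            gamma = self.gain(y).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
            beta = self.bias(y).unsqueeze(-1).unsqueeze(-1)
            return gamma * out + beta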
We choose the Fréchet Inception Distance (FID) for quantitative evaluation, which has been proven to be more consistent with individual assessment in evaluating the fidelity and variation of the generated image samples. We first investigate mid-resolution image generation on the CelebA dataset , a large-scale face image dataset with more than 200K face images. During training, we crop 108 × 108 face from the original images and then resize them to 64 × 64. FID Stability. We first use DCGAN to build our generator and discriminator. Training curves are shown in Fig. 2, and quantitative are summarized in Table 1. Each approach is executed for 5 times and the average is reported. All FID curves are obtained from generators directly without using the moving average strategy (; ; ; Yazıcıet al., 2018) to avoid over-smoothing the FID curves, such that we can diagnose the underlying oscillating properties of different methods during training. One can see that WBGAN-based counterparts improve the stability during training, and achieve superior performance over the WGAN-based baselines. We emphasize that the converged FID values reported by WBGAN-div and WBGANGPReal are lower than those reported by WGAN-div and WGAN-GPReal. In particular, WGAN-div suffers several FID fluctuation unexpectedly, and WGAN-GPReal has not ever achieved FID convergence during the entire training process. Regarding WGAN-GP, although the final FID is slightly better than that of WBGAN-GP (6.76 vs. 7.32), we observe a much slower convergence rate in Fig. 2(a). For the generated face images by different approaches, please refer to We also investigate a stronger backbone by replacing the network with BigGAN, a conditional GAN architecture that uses spectral normalization on both generator and discriminator. We set the number of labels to be 1 since the CelebA dataset only contains face images. Training curves are shown in Fig. 3 and quantitative are summarized in Table 1. Among three WGAN-based methods, only WGAN-GP achieves convergence, but its convergence speed and the FID value are inferior to those reported by WBGAN-GP. In opposite, both WGAN-div and WGAN-GPReal fails to converge while the counterparts equipped with WB-GAN perform well. For the generated face images by different approaches, please refer to Fig. 11 and Fig. 12 in Appendix F for details. Epoch Wasserstein Loss and Generator Loss Stability. Next, we evaluate the stability of WBGAN in terms of the Wasserstein term and generator loss. In Fig. 4, we evaluate the impact on WGAN-GP (DCGAN on CelebA). One can see that, after the bound is applied, the Wasserstein term W is stablized especially during the start of training. Due to space limit, more using BigGAN on CelebA are provided in Appendix E. In addition, we compute a new term named the generator loss, which is defined as Fig. 6 shows the curves of this statistics during the starting iterations. Compared to WGAN-based approaches, WBGAN-based approaches produce more stable G loss terms, which verifies that the training process of GAN becomes more stable. Ablation Study. Before continuing to high-resolution experiments, we conduct an ablation study to investigate the contribution made by different components of WBGAN. The backbone network is DCGAN, and the dataset is CelebA. 
We compare four configurations, i.e., WGAN-GP, with the original loss term used in WGAN-GP; WGAN-GP+D-bound, which adds a bound (Sinkhorn distance) to the Wasserstein term of the critic D of WGAN-GP; WGAN-GP+G-Sinkhorn, which adds Sinkhorn distance to the loss function of the generator G in WGAN-GP; and WGAN-GP+D-bound+G-Sinkhorn, which is equivalent to the final WBGAN-GP, with Sinkhorn distance added to both critic D and generator G. Fig. 5 plots the FID curves of all four settings. One can see that, although the FID curves of WGAN-GP and WGAN-GP+G-Sinkhorn descend quickly in the first 10 epochs, they begin to fluctuate between 20 to 40 epochs. On the other hand, when WGAN-GP is combined with D-bound, FID is able to descend smoothly (without fluctuation), showing that it is the bounded constraint that stablizes the training process. Finally, by integrating both D-bound and G-Sinkhorn into WGAN-GP, the FID curve descends not only smoothly but also fast, which is what we desire in real-world applications. In this section, we evaluate our approach on higher-resolution (128 × 128) images. We use the CelebA-HQ dataset , and use BigGAN as the backbone. As the target become larger (128 × 128), the number of images we can feed into a single batch becomes smaller. Since we are using an empirical way of estimating Sinkhorn distance, it becomes less accurate in the scenario of small batch size and large image size. In other words, it is no longer the best choice to use Sinkhorn distance to estimate the upper-bound Ď W. Returning to our generalized formulation, Eq. 7, we note that other forms of bound to constrain the critic. Here we consider a very simple bound, which is also based on empirical study. Note that the baseline methods, though not converging very well, can finally arrive at a stablized W value. Heuristically, we use this constant value (there is no need to be accurate) as the bound, which is 10 for WGAN-GP, 5 for WGAN-div and 3 for WGAN-GPReal, respectively. In Appendix D, we provide the curves of the Wasserstein term for these baselines, which lead to our estimation. FID curves and quantitative using these constant bounds are shown in Fig. 7 and Table 2, respectively. We find that WBGAN-GP produces a similar convergence rate with WGAN-GP, WBGAN-div is slightly better than WGAN-div, and WBGAN-GPReal outperforms WGAN-GPReal and produces the best . For the generated face images by different approaches, please refer to Fig. 13 and Fig. 14 in Appendix F for details. Discussions. From the above experiments, we can see that Sinkhorn distance is just one way of upper-bound estimation. In case that it becomes less accurate, we can freely replace it with other types of estimation. Besides the constant bound used above, there also exist other examples, such as the two-step computation of the exact Wasserstein distance. However, it is still a challenge to estimate the Wasserstein distance between high-resolution (1024 × 1024) image distributions efficiently. Nevertheless, the most important deliveries of our work are that a bounded Wasserstein term can bring benefits on training stability, and that we can use it to a wide range of frameworks based on WGAN. This paper introduced a general framework called WBGANs, which can be applied to a variety of WGAN variants to stabilize the training process and improve the performance. We clarify that WBGANs can stabilize the Wasserstein term at the beginning of the iterations, which is beneficial for smoother convergence of WGAN-based methods. 
We present an instantiated bound estimation method via Sinkhorn distance and give a theoretical analysis on it. It remains an open topic on how to set a better bound for higher resolution image generation tasks. Proof. Suppose µ, υ 1, υ 2 ∈ P p (X), t 1, t 2 ≥ 0, t 1 + t 2 = 1, then there exist γ 1 (x, y) and γ 2 (x, y) with marginals (µ, υ 1) and (µ, υ 2) satisfying: Let υ = t 1 υ 1 + t 2 υ 2, γ(x, y) = t 1 γ 1 (x, y) + t 2 γ 2 (x, y), then γ(x, y) has marginals (µ, υ). We can derive: This can be extended to a general form: where are the independent empirical measures drawn fromP g. From Eq. 15, we can get According to the strong law of large numbers, we can derive that with probability 1, 1 n n i=1P gi → P g as n → ∞ (assuming P g has finite first moments). Since W 1 is continuous in P p (X), we can derive that with probability 1, W 1 (P r, 1 n n i=1P gi) → W 1 (P r, P g) as n → ∞. By the law of large numbers again, with probability 1, 1 n n i=1 W 1 (P r,P gi) → E[W 1 (P r,P g)] as n → ∞. Thus we can deduce that: Similarly, supposeP ri (1 ≤ i ≤ n) are the independent empirical measures drawn fromP r. Since the symmetry of Wasserstein distance, we can deduce that: Therefore, combining Eq. 17 and Eq. 18, we can get B PROOF OF REMARK 2 where d λ (P r,P g) ≥ 0 is the Sinkhorn distance defined in Eq. 6. Next, if P r = P g, then we have L θ (P r, P g) = 0. So we only need to show L θ (P r, P g) > 0 if Applying this into Eq. 8 leads to L θ (P r, P g) = max Since P r = P g, we know that d λ (P r,P g) > 0. Therefore, we have L θ (P r, P g) > 0 while P r = P g. We finish the proof. Proof. Let P r (x) = δ(x − α), P g (x) = δ(x − β) and α = β, then we have We know that Wasserstein distance W (P r, P g) = max D θ ∈L1 D θ (α) − D θ (β). Since P r, P g are Dirac distributions, then we have W (P r, P g) = d λ (P r,P g). Combining this into Eq. 22 leads to L θ (P r, P g) = d λ (P r,P g). Considering that Sinkhorn distance d λ (P r,P g)
We propose an improved framework for WGANs and demonstrate its better performance in theory and practice.
326
scitldr
Generative models such as Variational Auto Encoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian. After a trained model is obtained, one can sample the Generator in various forms for exploration and understanding, such as interpolating between two samples, sampling in the vicinity of a sample or exploring differences between a pair of samples applied to a third sample. In this paper, we show that the latent space operations used in the literature so far induce a distribution mismatch between the ing outputs and the prior distribution the model was trained on. To address this, we propose to use distribution matching transport maps to ensure that such latent space operations preserve the prior distribution, while minimally modifying the original operation. Our experimental validate that the proposed operations give higher quality samples compared to the original operations. Generative models such as Variational Autoencoders (VAEs) BID6 and Generative Adversarial Networks (GANs) BID3 have emerged as popular techniques for unsupervised learning of intractable distributions. In the framework of Generative Adversarial Networks (GANs) BID3, the generative model is obtained by jointly training a generator G and a discriminator D in an adversarial manner. The discriminator is trained to classify synthetic samples from real ones, whereas the generator is trained to map samples drawn from a fixed prior distribution to synthetic examples which fool the discriminator. Variational Autoencoders (VAEs) BID6 are also trained for a fixed prior distribution, but this is done through the loss of an Autoencoder that minimizes the variational lower bound of the data likelihood. For both VAEs and GANs, using some data X we end up with a trained generator G, that is supposed to map latent samples z from the fixed prior distribution to output samples G(z) which (hopefully) have the same distribution as the data. In order to understand and visualize the learned model G(z), it is a common practice in the literature of generative models to explore how the output G(z) behaves under various arithmetic operations on the latent samples z. In this paper, we show that the operations typically used so far, such as linear interpolation BID3, spherical interpolation , vicinity sampling and vector arithmetic BID12, cause a distribution mismatch between the latent prior distribution and the of the operations. This is problematic, since the generator G was trained on a fixed prior and expects to see inputs with statistics consistent with that distribution. We show that this, somewhat paradoxically, is also a problem if the support of ing (mismatched) distribution is within the support of a uniformly distributed prior, whose points all have equal likelihood during training. To address this, we propose to use distribution matching transport maps, to obtain analogous latent space operations (e.g. interpolation, vicinity sampling) which preserve the prior distribution of the latent space, while minimally changing the original operation. In Figure 1 we showcase how our proposed technique gives an interpolation operator which avoids distribution mismatch when interpolating between samples of a uniform distribution. The points of the (red) matched trajectories samples from prior linear matched (ours) spherical (a) Uniform prior: Trajectories of linear interpolation, our matched interpolation and the spherical interpolation . 
Figure 1: We show examples of distribution mismatches induced by the previous interpolation schemes when using a uniform prior in two dimensions. Our matched interpolation avoids this with a minimal modification to the linear trajectory, traversing through the space such that all points along the path are distributed identically to the prior.are obtained as minimal deviations (in expectation of l 1 distance) from the the points of the (blue) linear trajectory. In the literature there are dozens of papers that use sample operations to explore the learned models. BID0 use linear interpolation between neighbors in the latent space to study how well deep vs shallow representations can disentangle the latent space of Contractive Auto Encoders (CAEs) BID14.In the seminal GAN paper of BID3, the authors use linear interpolation between latent samples to visualize the transition between outputs of a GAN trained on MNIST. BID2 linearly interpolate the latent codes of an auto encoder trained on a synthetic chair dataset. BID12 also linearly interpolate between samples to evaluate the quality of the learned representation. Furthermore, motivated by the semantic word vectors of , they explore using vector arithmetic on the samples to change semantics such as adding a smile to a generated face. BID13 use linear interpolation to explore their proposed GAN model which operates jointly in the visual and textual domain. BID1 combine GANs and VAEs for a neural photo editor, using masked interpolations to edit an embedded photo in the latent space. While there are numerous works performing operations on samples, most of them have ignored the problem of distribution mismatch, such as the one presented in Figure 1d. BID6 and BID9 sidestep the problem when visualizing their models, by not performing operations on latent samples, but instead restrict the latent space to 2-d and uniformly sample the percentiles of the distribution on a 2-d grid. This way, the samples have statistics that are consistent with the prior distribution. However, this approach does not scale up to higher dimensions -whereas the latent spaces used in the literature can have hundreds of dimensions. Related to our work, experimentally observe that there is a distribution mismatch between the distance to origin for points drawn from uniform or Gaussian distribution and points obtained with linear interpolation, and propose to use a so-called spherical linear interpolation to reduce the mismatch, obtaining higher quality interpolated samples. However, the proposed approach has no theoretical guarantees. DISPLAYFORM0 Vicinity sampling Table 1: Examples of interesting sample operations which need to be adapted if we want the distribution of the y to match the prior distribution. If the prior is Gaussian, our proposed matched operation simplifies to a proper re-scaling factor (see third column) for additive operations. DISPLAYFORM1 In this work, we propose a generic method to fully preserve the desired prior distribution when using sample operations. The approach works as follows: we are given a'desired' operation, such as linear interpolation y = tz 1 + (1 − t)z 2, t ∈. Since the distribution of y does not match the prior distribution of z, we search for a warping f: DISPLAYFORM2 has the same distribution as z. In order to have the modificationỹ as faithful as possible to the original operation y, we use optimal transform maps BID17; 2008) to find a minimal modification of y which recovers the prior distribution z. 
Figure 1a, where each pointỹ of the matched curve is obtained by warping a corresponding point y of the linear trajectory, while not deviating too far from the line. With implicit models such as GANs BID3 and VAEs BID6, we use the data X, drawn from an unknown random variable x, to learn a generator G: DISPLAYFORM0 with respect to a fixed prior distribution p z, such that G(z) approximates x. Once the model is trained, we can sample from it by feeding latent samples z through G.We now bring our attention to operations on latent samples DISPLAYFORM1 We give a few examples of such operations in Table 1.Since the inputs to the operations are random variables, their output y = κ(z 1, · · ·, z k) is also a random variable (commonly referred to as a statistic). While we typically perform these operations on realized (i.e. observed) samples, our analysis is done through the underlying random variable y. The same treatment is typically used to analyze other statistics over random variables, such as the sample mean, sample variance and test statistics. In Table 1 we show example operations which have been commonly used in the literature. As discussed in the Introduction, such operations can provide valuable insight into how the trained generator G changes as one creates related samples y from some source samples. The most common such operation is the linear interpolation, which we can view as an operation DISPLAYFORM2 where z 1, z 2 are latent samples from the prior p z and y t is parameterized by t ∈. Now, assume z 1 and z 2 are i.i.d, and let Z 1, Z 2 be their (scalar) first components with distribution p Z. Then the first component of y t is Y t = tZ 1 + (1 − t)Z 2, and we can compute: DISPLAYFORM3 Since (1 + 2t(t − 1)) = 1 for all t ∈ \ {0, 1}, it is in general impossible for y t to have the same distribution as z, which means that distribution mismatch is inevitable when using linear interpolation. A similar analysis reveals the same for all of the operations in Table 1.This leaves us with a dilemma: we have various intuitive operations (see Table 1) which we would want to be able to perform on samples, but their ing distribution p yt is inconsistent with the distribution p z we trained G for. Due to the curse of dimensionality, as empirically observed by , this mismatch can be significant in high dimensions. We illustrate this in FIG1, where we plot the distribution of the squared norm y t 2 for the midpoint t = 1/2 of linear interpolation, compared to the prior In order to address the distribution mismatch, we propose a simple and intuitive strategy for constructing distribution preserving operators, via optimal transport: Strategy 1 (Optimal Transport Matched Operations). 2. We analytically (or numerically) compute the ing (mismatched) distribution p y 3. We search for a minimal modificationỹ = f (y) (in the sense that E y [c(ỹ, y)] is minimal with respect to a cost c), such that distribution is brought back to the prior, i.e. pỹ = p z.The cost function in step 3 could e.g. be the euclidean distance c(x, y) = x − y, and is used to measure how faithful the modified operator,ỹ = f (κ(z 1, · · ·, z k)) is to the original operator k. Finding the map f which gives a minimal modification can be challenging, but fortunately it is a well studied problem from optimal transport theory. 
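Before introducing that machinery, the mismatch itself is easy to reproduce numerically. The sketch below uses a Gaussian prior in d = 100 (the setting of FIG1) and shows that linear-interpolation midpoints concentrate on a smaller sphere than the prior; the last line applies the simple rescaling that, as derived later for the Gaussian case, restores the prior distribution. Sample sizes are arbitrary.

    import numpy as np

    d, n, t = 100, 20000, 0.5
    rng = np.random.default_rng(0)

    z1 = rng.standard_normal((n, d))
    z2 = rng.standard_normal((n, d))
    y = t * z1 + (1 - t) * z2                       # linear interpolation midpoints

    print(np.mean(np.sum(z1 ** 2, axis=1)))         # ~d = 100 for the prior
    print(np.mean(np.sum(y ** 2, axis=1)))          # ~(t^2 + (1 - t)^2) d = 50: mismatch

    # Matched operation for a Gaussian prior: a single rescaling (Table 1).
    y_matched = y / np.sqrt(t ** 2 + (1 - t) ** 2)
    print(np.mean(np.sum(y_matched ** 2, axis=1)))  # ~100 again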
We refer to the modified operationỹ as the matched version of y, with respect to the cost c and prior distribution p z.For completeness, we introduce the key concept of optimal transport theory in a simplified setting, i.e. assuming probability distributions are in euclidean space and skipping measure theoretical formalism. We refer to Villani (2003; 2008) and BID17 for a thorough and formal treatment of optimal transport. The problem of step above was first posed by Monge and can more formally be stated as: Problem 1 (Problem 1.1). Given probability distributions p x, p y, with domains X, Y respectively, and a cost function c: X × Y → R +, we want to minimize DISPLAYFORM0 We refer to the minimizer f * X → Y of (MP) (if it exists), as the optimal transport map from p x to p y with respect to the cost c. However, the problem remained unsolved until a relaxed problem was studied by:Problem 2 (Problem 1.2). Given probability distributions p x, p y, with domains X, Y respectively, and a cost function c: X × Y → R +, we want to minimize DISPLAYFORM1 where (x, y) ∼ p x,y, x ∼ p x, y ∼ p y denotes that (x, y) have a joint distribution p x,y which has (previously specified) marginals p x and p y.We refer to the joint p x,y which minimizes (KP) as the optimal transport plan from p x to p y with respect to the cost c. The key difference is to relax the deterministic relationship between x and f (x) to a joint probability distribution p x,y with marginals p x and p y for x and y. In the case of Problem 1, the minimization might be over the empty set since it is not guaranteed that there exists a mapping f such that f (x) ∼ y. In contrast, for Problem 2, one can always construct a joint density p x,y with p x and p y as marginals, such as the trivial construction where x and y are independent, i.e. DISPLAYFORM2 Note that given a joint density p x,y (x, y) over X × Y, we can view y conditioned on x = x for a fixed x as a stochastic function f (x) from X to Y, since given a fixed x do not get a specific function value f (x) but instead a random variable f (x) that depends on x, with f (x) ∼ y|x = x with density p y (y|x = x):= px,y(x,y)px (x). In this case we have (x, f (x)) ∼ p x,y, so we can view the Problem KP as a relaxation of Problem MP where f is allowed to be a stochastic mapping. While the relaxed problem of Kantorovich (KP) is much more studied in the optimal transport literature, for our purposes of constructing operators it is desirable for the mapping f to be deterministic as in (MP).To this end, we will choose the cost function c such that the two problems coincide and where we can find an analytical solution f or at least an efficient numerical solution. In particular, we note that most operators in Table 1 are all pointwise, such that if the points z i have i.i.d. components, then the y will also have i.i.d. components. If we combine this with the constraint for the cost c to be additive over the components of x, y, we obtain the following simplification: Theorem 1. Suppose p x and p y have i.i.d components and c over DISPLAYFORM3 Consequently, the minimization problems (MP) and (KP) turn into d identical scalar problems for the distributions p X and p Y of the components of x and y: DISPLAYFORM4 such that an optimal transport map T for (MP-1-D) gives an optimal transport map f for (MP) by pointwise application of T, i.e. f (x) (i):= T (x (i) ), and an optimal transport plan p X,Y for DISPLAYFORM5 Proof. See Appendix. 
Fortunately, under some mild constraints, the scalar problems have a known solution: Theorem 2 (Theorem 2.9 in BID17). Let h: R → R + be convex and suppose the cost C takes the form C(x, y) = h(x − y). Given an continuous source distribution p X and a target distribution p Y on R having a finite optimal transport cost in (KP-1-D), then defines an optimal transport map from DISPLAYFORM6 DISPLAYFORM7 is the Cumulative Distribution Function (CDF) of X and F DISPLAYFORM8 ≥ y} is the pseudo-inverse of F Y. Furthermore, the joint distribution of (X, T mon X→Y (X)) defines an optimal transport plan for (KP-1-D).The mapping T mon X→Y (x) in Theorem 2 is non-decreasing and is known as the monotone transport map from X to Y. It is easy to verify that T mon X→Y (X) has the distribution of Y, in particular DISPLAYFORM9 Now, combining Theorems 1 and 2, we obtain a concrete realization of the Strategy 1 outlined above. We choose the cost c such that it admits to Theorem 1, such as c(x, y):= x − y 1, and use an operation that is pointwise, so we just need to compute the monotone transport map in. That is, if z has i.i.d components with distribution p Z, we just need to compute the component distribution p Y of the y of the operation, the CDFs F Z, F Y and obtain DISPLAYFORM10 as the component-wise modification of y, i.e.ỹ DISPLAYFORM11 In FIG2 we show the monotone transport map for the linear interpolation y = tz 1 + (1 − t)z 2 for various values of t. The detailed calculations and examples for various operations are given in Appendix 5.3, for both Uniform and Gaussian priors. The Gaussian case has a particularly simple ing transport map for additive operations, where it is just a linear transformation through a scalar multiplication, summarized in the third column of Table 1. To validate the correctness of the matched operators obtained above, we numerically simulate the distributions for toy examples, as well as prior distributions typically used in the literature. Priors vs. interpolations in 2-D For Figure 1, we sample 1 million pairs of points in two dimension, from a uniform prior (on [−1, 1] 2 ), and estimate numerically the midpoint distribution of linear interpolation, our proposed matched interpolation and the spherical interpolation of. It is reassuring to see that the matched interpolation gives midpoints which are identically distributed to the prior. In contrast, the linear interpolation condenses more towards the origin, forming a pyramid-shaped distribution (the of convolving two boxes in 2-d). Since the spherical interpolation of follows a great circle with varying radius between the two points, we see that the ing distribution has a "hole" in it, "circling" around the origin for both priors. distribution of the squared norm of the midpoints. We see there is a dramatic difference between vector lengths in the prior and the midpoints of linear interpolation, with only minimal overlap. We also show the spherical (SLERP) interpolation of which has a matching first moment, but otherwise also induces a distribution mismatch. In contrast, our matched interpolation, fully preserves the prior distribution and perfectly aligns. We note that this setting (d = 100, uniform or Gaussian) is commonly used in the literature. In this section we will present some concrete examples for the differences in generator output dependent on the exact sample operation used to traverse the latent space of a generative model. 
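As a computational aside before those comparisons: when the CDFs in the componentwise map F_Z^{-1}(F_Y(·)) have no convenient closed form, the monotone transport map can simply be estimated from samples. A minimal numpy sketch follows; the reference sample sizes and the uniform-prior example are illustrative.

    import numpy as np

    def matched(y, prior_sampler, op_sampler, n_ref=200000, seed=0):
        # Componentwise monotone transport map F_Z^{-1}(F_Y(y)), estimated from
        # empirical CDFs of the prior component Z and the operation's component Y.
        rng = np.random.default_rng(seed)
        z_ref = np.sort(prior_sampler(rng, n_ref))
        y_ref = np.sort(op_sampler(rng, n_ref))
        ranks = np.searchsorted(y_ref, y) / float(n_ref)        # F_Y(y)
        return np.quantile(z_ref, np.clip(ranks, 0.0, 1.0))     # F_Z^{-1}(.)

    # Example: midpoint of linear interpolation under a Uniform(-1, 1) prior.
    t = 0.5
    prior = lambda rng, n: rng.uniform(-1.0, 1.0, n)
    midpoint = lambda rng, n: t * rng.uniform(-1.0, 1.0, n) + (1 - t) * rng.uniform(-1.0, 1.0, n)

    rng = np.random.default_rng(1)
    y = midpoint(rng, 100000)
    y_tilde = matched(y, prior, midpoint)   # distributed ~Uniform(-1, 1) again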
To this end, the generator output for latent samples produced with linear interpolation, SLERP (spherical linear interpolation) of and our proposed matched interpolation will be compared. Please refer to Table 1 for an overview of the operators used in this Section. Setup We used DCGAN BID12 generative models trained on LSUN bedrooms , CelebA BID7 and LLD BID15, an icon dataset, to qualitatively evaluate. For LSUN, the model was trained for two different output resolutions, providing 64 × 64 pixel and a 128×128 pixel output images (where the latter is used in figures containing larger sample images). The models for LSUN and the icon dataset where both trained on a uniform latent prior distribution, while for CelebA a Gaussian prior was used. The dimensionality of the latent space is 100 for both LSUN and CelebA, and 512 for the model trained on the icon model. Furthermore we use improved Wasserstein GAN (iWGAN) with gradient penalty BID4 ) trained on CIFAR-10 at 32 × 32 pixels with a 128-dimensional Gaussian prior to produce the inception scores presented in Section 3.3. We begin with the classic example of 2-point interpolation: FIG3 shows three examples per dataset for an interpolation between 2 points in latent space. Each example is first done via linear interpolation, then SLERP and finally matched interpolation. In FIG5 in the Appendix we show more densely sampled examples. FIG3 that linear interpolation produces inferior with generally more blurry, less saturated and less detailed output images. SLERP and matched interpolation are slightly different, however it is not visually obvious which one is superior. Differences between the various interpolation methods for CelebA FIG3 ) are much more subtle to the point that they are virtually indistinguishable when viewed side-by-side. This is not an inconsistency though: while distribution mismatch can cause large differences, it can also happen that the model generalizes well enough that it does not matter. In all cases, the point where the interpolation methods diverge the most, is at the midpoint of the interpolation where t = 0.5. Thus we provide 25 such interpolation midpoints in Figures 5 (LLD icons) and 6 (LSUN) for direct comparison. 3.69 ± 0.10 3.91 ± 0.10 2.04 ± 0.04 Table 2: Inception scores on LLD-icon, LSUN, CIFAR-10 and CelebA for the midpoints of various interpolation operations. Scores are reported as mean ± standard deviation (relative change in %). highlights the very apparent loss of detail and increasing prevalence of artifacts towards the midpoint in the linear version, compared to SLERP compared and our matched interpolation. Vicinity sampling Furthermore we provide two examples for vicinity sampling in Figures 9 and 10. Analogous to the previous observations, the output under a linear operator lacks definition, sharpness and saturation when compared to both spherical and matched operators. Random walk An interesting property of our matched vicinity sampling is that we can obtain a random walk in the latent space by applying it repeatedly: we start at a point y 0 = z drawn from the prior, and then obtain point y i by sampling a single point in the vicinity of y i−1, using some fixed'step size'.We show an example of such a walk in FIG8, using = 0.5. As a of the repeated application of the vicinity sampling operation, the divergence from the prior distribution in the non-matched case becomes stronger with each step, ing in completely unrecognizable output images on the LSUN and LLD icon models. 
Even for the CelebA model where differences where minimal before, they are quite apparent in this experiment. The random walk thus perfectly illustrates the need for respecting the prior distribution when performing any operation in latent space, as the adverse effects can cumulate through the repeated application of operators that do not comply to the prior distribution. We quantitatively confirm the observations of the previous section by using the Inception score BID16. In Table 2 we compare the Inception score of our trained models (i.e. using random samples from the prior) with the score when sampling midpoints from the 2-point and 4-point interpolations described above, reporting mean and standard deviation with 50,000 samples, as well as relative change to the original model scores if they are significant. Compared to the original scores of the trained models, our matched operations are statistically indistinguishable (as expected) while the linear interpolation gives a significantly lower score in all settings (up to 29% lower). As observed for the quality visually, the SLERP heuristic gives similar scores to the matched operations. We have shown that the common latent space operations used for Generative Models induce distribution mismatch from the prior distribution the models were trained for. This problem has been mostly ignored by the literature so far, partially due to the belief that this should not be a problem for uniform priors. However, our statistical and experimental analysis shows that the problem is real, with the operations used so far producing significantly lower quality samples compared to their inputs. To address the distribution mismatch, we propose to use optimal transport to minimally modify (in l 1 distance) the operations such that they fully preserve the prior distribution. We give analytical formulas of the ing (matched) operations for various examples, which are easily implemented. The matched operators give a significantly higher quality samples compared to the originals, having the potential to become standard tools for evaluating and exploring generative models. We note that the analysis here can bee seen as a more rigorous version of an observation made by , who experimentally show that there is a significant difference between the average norm of the midpoint of linear interpolation and the points of the prior, for uniform and Gaussian distributions. Suppose our latent space has a prior with DISPLAYFORM0 In this case, we can look at the squared norm DISPLAYFORM1 From the Central Limit Theorem (CLT), we know that as d → ∞, DISPLAYFORM2 in distribution. Thus, assuming d is large enough such that we are close to convergence, we can approximate the distribution of z 2 as N (dµ Z 2, dσ 2 Z 2). In particular, this implies that almost all points lie on a relatively thin spherical shell, since the mean grows as O(d) whereas the standard deviation grows only as O(DISPLAYFORM3 We note that this property is well known for i.i.d Gaussian entries (see e.g. Ex. 6.14 in MacKay FORMULA5). For Uniform distribution on the hypercube it is also well known that the mass is concentrated in the corner points (which is consistent with the claim here since the corner points lie on a sphere).Now consider an operator such as the midpoint of linear interpolation, y = DISPLAYFORM4 In this case, we can compute: DISPLAYFORM5 Thus, the distribution of y 2 can be approximated with N (DISPLAYFORM6 . 
Therefore, y also mostly lies on a spherical shell, but with a different radius than z. In fact, the shells will intersect at regions which have a vanishing probability for large d. In other words, when looking at the squared norm y 2, y 2 is a (strong) outlier with respect to the distribution of z 2. Proof. We will show it for the Kantorovich problem, the Monge version is similar. Starting from (KP), we compute DISPLAYFORM0 where the inequality in is due to each term being minimized separately. DISPLAYFORM1 where p X,Y has marginals p X and p Y. In this case P d (X, Y) is a subset of all joints p x,y with marginals p x and p y, where the pairs (DISPLAYFORM2) are constrained to be i.i.d. Starting again from can compute: DISPLAYFORM3 where the inequality in is due to minimizing over a smaller set. Since the two inequalities above are in the opposite direction, equality must hold for all of the expressions above, in particular: DISPLAYFORM4 Thus, (KP) and (KP-1-D) equal up to a constant, and minimizing one will minimize the other. Therefore the minimization of the former can be done over p X,Y with p x,y (x, y) = DISPLAYFORM5 In the next sections, we illustrate how to compute the matched operations for a few examples, in particular for linear interpolation and vicinity sampling, using a uniform or a Gaussian prior. We picked the examples where we can analytically compute the uniform transport map, but note that it is also easy to compute F DISPLAYFORM0 and (F Y (y)) numerically, since one only needs to estimate CDFs in one dimension. Since the components of all random variables in these examples are i.i.d, for such a random vector x we will implicitly write X for a scalar random variable that has the distribution of the components of x. When computing the monotone transport map T mon X→Y, the following Lemma is helpful. Lemma 1 (Theorem 2.5 in BID17). Suppose a mapping g(x) is non-decreasing and maps a continuous distribution p X to a distribution p Y, i.e. DISPLAYFORM1 then g is the monotone transport map T mon X→Y.According to Lemma 1, an alternative way of computing T mon X→Y is to find some g that is nondecreasing and transforms p X to p Y. Suppose z has uniform components Z ∼ Uniform(−1, 1). In this case, p Z (z) = 1/2 for −1 < z < 1. Now let y t = tz 1 + (1 − t)z 2 denote the linear interpolation between two points z 1, z 2, with component distribution p Yt. Due to symmetry we can assume that t > 1/2, since p Yt = p Y1−t. We then obtain p Yt as the convolution of p tZ and p (1−t)Z, i.e. p Yt = p tZ * p (1−t)Z. First we note that p tZ = 1/(2t) for −t < z < t and p (1−t)Z = 1/(2(1 − t)) for −(1 − t) < z < 1 − t. We can then compute: DISPLAYFORM0 if − t + (1 − t) < y < t − (1 − t) −y + 1 if t − (1 − t) < y < 1 0 if 1 < yThe CDF F Yt is then obtained by computing Since p Z (z) = 1/2 for |z| < 1, we have F Z (z) =. For vicinity sampling, we want to obtain new points z 1, ·, z k which are close to z. We thus define DISPLAYFORM1 where u i also has uniform components, such that each coordinate of z i differs at most by from z. By identifying tZ i = tZ + (1 − t)U i with t = 1/(1 +), we see that tZ i has identical distribution to the linear interpolation Y t in the previous example. Thus g t (Z i):= T mon Yt→Z (tZ i) will have the distribution of Z, and by Lemma1 is then the monotone transport map from Z i to Z. Suppose z has components Z ∼ N (0, σ 2). In this case, we can compute linear interpolation as before, y t = tz 1 + (1 − t)z 2. 
Since the sum of Gaussians is Gaussian, we get Y_t ∼ N(0, (t^2 + (1 − t)^2)σ^2). Now, it is easy to see that with a proper scaling factor, we can adjust the variance of Y_t back to σ^2. That is, the matched linear interpolation is obtained by rescaling: g_t(y) = y / √(t^2 + (1 − t)^2). By adjusting the vicinity sampling operation to z_i = z + ε e_i, where e_i ∼ N(0, σ^2), we can similarly find the monotone transport map g_ε(y) = y / √(1 + ε^2). Another operation which has been used in the literature is the "analogy", where from samples z_1, z_2, z_3, one wants to apply the difference between z_1 and z_2 to z_3. The transport map is then g(y) =
Operations in the GAN latent space can induce a distribution mismatch compared to the training distribution, and we address this using optimal transport to match the distributions.
327
scitldr
Neural program embeddings have shown much promise recently for a variety of program analysis tasks, including program synthesis, program repair, code completion, and fault localization. However, most existing program embeddings are based on syntactic features of programs, such as token sequences or abstract syntax trees. Unlike images and text, a program has well-defined semantics that can be difficult to capture by only considering its syntax (i.e. syntactically similar programs can exhibit vastly different run-time behavior), which makes syntax-based program embeddings fundamentally limited. We propose a novel semantic program embedding that is learned from program execution traces. Our key insight is that program states expressed as sequential tuples of live variable values not only capture program semantics more precisely, but also offer a more natural fit for Recurrent Neural Networks to model. We evaluate different syntactic and semantic program embeddings on the task of classifying the types of errors that students make in their submissions to an introductory programming class and on the CodeHunt education platform. Our evaluation show that the semantic program embeddings significantly outperform the syntactic program embeddings based on token sequences and abstract syntax trees. In addition, we augment a search-based program repair system with predictions made from our semantic embedding and demonstrate significantly improved search efficiency. Recent breakthroughs in deep learning techniques for computer vision and natural language processing have led to a growing interest in their applications in programming languages and software engineering. Several well-explored areas include program classification, similarity detection, program repair, and program synthesis. One of the key steps in using neural networks for such tasks is to design suitable program representations for the networks to exploit. Most existing approaches in the neural program analysis literature have used syntax-based program representations. BID6 proposed a convolutional neural network over abstract syntax trees (ASTs) as the program representation to classify programs based on their functionalities and detecting different sorting routines. DeepFix BID4, SynFix BID1, and sk p BID9 are recent neural program repair techniques for correcting errors in student programs for MOOC assignments, and they all represent programs as sequences of tokens. Even program synthesis techniques that generate programs as output, such as RobustFill BID3, also adopt a token-based program representation for the output decoder. The only exception is BID8, which introduces a novel perspective of representing programs using input-output pairs. However, such representations are too coarse-grained to accurately capture program properties -programs with the same input-output behavior may have very different syntactic characteristics. Consequently, the embeddings learned from input-output pairs are not precise enough for many program analysis tasks. Although these pioneering efforts have made significant contributions to bridge the gap between deep learning techniques and program analysis tasks, syntax-based program representations are fundamentally limited due to the enormous gap between program syntax (i.e. static expression) and Bubble Insertion Figure 1: Bubble sort and insertion sort (code highlighted in shadow box are the only syntactic differences between the two algorithms). 
Their execution traces for the input vector A = are displayed on the right, where, for brevity, only values for variable A are shown. semantics (i.e. dynamic execution). This gap can be illustrated as follows. First, when a program is executed at runtime, its statements are almost never interpreted in the order in which the corresponding token sequence is presented to the deep learning models (the only exception being straightline programs, i.e., ones without any control-flow statements). For example, a conditional statement only executes one branch each time, but its token sequence is expressed sequentially as multiple branches. Similarly, when iterating over a looping structure at runtime, it is unclear in which order any two tokens are executed when considering different loop iterations. Second, program dependency (i.e. data and control) is not exploited in token sequences and ASTs despite its essential role in defining program semantics. FIG0 shows an example using a simple max function. On line 8, the assignment statement means variable max val is data-dependent on item. In addition, the execution of this statement depends on the evaluation of the if condition on line 7, i.e., max val is also control-dependent on item as well as itself. Third, from a pure program analysis standpoint, the gap between program syntax and semantics is manifested in that similar program syntax may lead to vastly different program semantics. For example, consider the two sorting functions shown in Figure 1. Both functions sort the array via two nested loops, compare the current element to its successor, and swap them if the order is incorrect. However, the two functions implement different algorithms, namely Bubble Sort and Insertion Sort. Therefore minor syntactic discrepancies can lead to significant semantic differences. This intrinsic weakness will be inherited by any deep learning technique that adopts a syntax-based program representation. We have evaluated our dynamic program embeddings in the context of automated program repair. In particular, we use the program embeddings to classify the type of mistakes students made to their programming assignments based on a set of common error patterns (described in the appendix). The dataset for the experiments consists of the programming submissions made to Module 2 assignment in Microsoft-DEV204.1X and two additional problems from the Microsoft CodeHunt platform. The show that our dynamic embeddings significantly outperform syntax-based program embeddings, including those trained on token sequences and abstract syntax trees. In addition, we show that our dynamic embeddings can be leveraged to significantly improve the efficiency of a searchbased program corrector SARFGEN 1 BID13 ) (the algorithm is presented in the appendix). More importantly, we believe that our dynamic program embeddings can be useful for many other program analysis tasks, such as program synthesis, fault localization, and similarity detection. 
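For concreteness, execution traces like the ones shown in Figure 1 can be collected with very lightweight instrumentation, as described in the background section below: a statement that records the monitored variable is placed after every statement that modifies it. The Python sketch below mirrors that idea for bubble sort; the paper instruments C#, and the concrete input vector here is made up.

    def bubble_sort_traced(A):
        trace = [list(A)]                             # initial value of A
        for i in range(len(A)):
            for j in range(len(A) - 1 - i):
                if A[j] > A[j + 1]:
                    A[j], A[j + 1] = A[j + 1], A[j]   # side-effecting statement
                    trace.append(list(A))             # monitoring "window" on A
        return A, trace

    _, trace = bubble_sort_traced([8, 5, 1, 4, 3])
    # Each entry of `trace` is one program point at which A changed, i.e. one
    # row of the variable trace that the embedding models consume.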
To summarize, the main contributions of this paper are: we show the fundamental limitation of representing programs using syntax-level features; we propose dynamic program embeddings learned from runtime execution traces to overcome key issues with syntactic program representations; we evaluate our dynamic program embeddings for predicting common mistake patterns students make in program assignments, and show that the dynamic program embeddings outperform state-of-the-art syntactic program embeddings; and we show how the dynamic program embeddings can be utilized to improve an existing production program repair system. This section briefly reviews dynamic program analysis BID0, an influential program analysis technique that lays the foundation for constructing our new program embeddings. Unlike static analysis BID7, i.e., the analysis of program source code, dynamic analysis focuses on program executions. An execution is modeled by a set of atomic actions, or events, organized as a trace (or event history). For simplicity, this paper considers sequential executions only (as opposed to parallel executions) which lead to a single sequence of events, specifically, the executions of statements in the program. Detailed information about executions is often not readily available, and separate mechanisms are needed to capture the tracing information. An often adopted approach is to instrument a program's source code (i.e., by adding additional monitoring code) to record the execution of statements of interest. In particular, those inserted instrumentation statements act as a monitoring window through which the values of variables are inspected. This instrumentation process can occur in a fully automated manner, e.g., a common approach is to traverse a program's abstract syntax tree and insert "write" statements right after each program statement that causes a side-effect (i.e., changing the values of some variables).Consider the two sorting algorithms depicted in Figure 1. If we assume A to be the only variable of interest and subject to monitoring, we can instrument the two algorithms with Console. WriteLine(A) after each program location in the code whenever A is modified 2 (i.e. the lines marked by comments). Given the input vector A =, the execution traces of the two sorting routines are shown on the right in Figure 1.One of the key benefits of dynamic analysis is its ability to easily and precisely identify relevant parts of the program that affect execution behavior. As shown in the example above, despite the very similar program syntax of bubble sort and insertion sort, dynamic analysis is able to discover their distinct program semantics by exposing their execution traces. Since understanding program semantics is a central issue in program analysis, dynamic analysis has seen remarkable success over the past several decades and has ed in many successful program analysis tools such as debuggers, profilers, monitors, or explanation generators. We now present an overview of our approach. Given a program and the execution traces extracted for all its variables, we introduce three neural network models to learn dynamic program embeddings. To demonstrate the utility of these embeddings, we apply them to predict common error patterns (detailed in Section 5) that students make in their submissions to an online introductory programming course. Variable Trace Embedding As shown in TAB1, each row denotes a new program point where a variable gets updated. 
3 The entire variable trace consists of those variable values at all program points. As a subsequent step, we split the complete trace into a list of sub-traces (one for each variable). We use one single RNN to encode each sub-trace independently and then perform max pooling on the final states of the same RNN to obtain the program embedding. Finally, we add a one layer softmax regression to make the predictions. The entire workflow is show in FIG1.State Trace Embedding Because each variable trace is handled individually in the previous approach, variable dependencies/interactions are not precisely captured. To address this issue, we propose the state trace embedding. As depicted in TAB1, each program point l introduces a new program state expressed by the latest variable valuations at l. The entire state trace is a sequence of program states. To learn the state trace embedding, we first use one RNN to encode each program state (i.e., a tuple of values) and feed the ing RNN states as a sequence to another RNN. Note that we do not assume that the order in which variables values are encoded by the RNN for each program state but rather maintain a consistent order throughout all program states for a given trace. Finally, we feed a softmax regression layer with the final state of the second RNN (shown in FIG2 . The benefit of state trace embedding is its ability to capture dependencies among variables in each program state as well as the relationship among program states. Dependency Enforcement for Variable Trace Embedding Although state trace embedding can better capture program dependencies, it also comes with some challenges, the most significant of which is redundancy. Consider a looping structure in a program. During an iteration, whenever one variable gets modified, a new program state will be created containing the values of all variables, even of those unmodified by the loop. This issue becomes more severe for loops with larger numbers of iterations. To tackle this challenge, we propose the third and final approach, dependency enforcement for variable trace embedding (hereinafter referred as dependency enforcement embedding), that combines the advantages of variable trace embedding (i.e., compact representation of execution traces) and state trace embedding (i.e., precise capturing of program dependencies). In dependency enforcement embedding, a program is represented by separate variable traces, with each variable being handled by a different RNN. In order to enforce program dependencies, the hidden states from different RNNs will be interleaved in a way that simulates the needed data and control dependencies. Unlike variable trace embedding, we perform an average pooling on the final states of all RNNs to obtain the program embedding on which we build the final layer of softmax regression. FIG3 describes the workflow. We now formally define the three program embedding models. Given a program P, and its variable set V (v 0, v 1,. .., v n ∈ V), a variable trace is a sequence of values a variable has been assigned during the execution of P.4 Let x t vn denote the value from the variable trace of v n that is fed to the RNN encoder (Gated Recurrent Unit) at time t as the input, and h t vn as the ing RNN's hidden state. 
We compute the variable trace embedding for P in Equation as follows (h T vn denotes the last hidden state of the encoder): DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 We compute the representation of the program trace by performing max pooling over the last hidden state representation of each variable trace embedding. The hidden states h t v1,..., h t vn, h P ∈ R k where k denotes the size of hidden layers of the RNN encoder. Evidence denotes the output of a linear model through the program embedding vector h P, and we obtain the predicted error pattern class Y by using a softmax operation. The key idea in state trace model is to embed each program state as a numerical vector first and then feed all program state embeddings as a sequence to another RNN encoder to obtain the program embedding. Suppose x t vn is the value of variable v n at t-th program state, and h t vn is the ing hidden state of the program state encoder. Equation computes the t-th program state embedding. Equations encode the sequence of all program state embeddings (i.e., h t vn, h t+1 vn, . . ., h t+m vn) with another RNN to compute the program embedding. DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 h t+1 vn = GRU(h t vn, h t+1 vn)... h P = GRU(h t+m−1 vn, x t+m vn) h t v1,..., h t vn ∈ R k1; h t vn,..., h P ∈ R k2 where k 1 and k 2 denote, respectively, the sizes of hidden layers of the first and second RNN encoders. The motivation behind this model is to combine the advantages of the previous two approaches, i.e. representing the execution trace compactly while enforcing the dependency relationship among variables as much as possible. In this model, each variable trace is handled with a different RNN. A potential issue to be addressed is variable matching/renaming (i.e., α-renaming). In other words same variables may be named differently in different programs. Processing each variable id with a single RNN among all programs in the dataset will not only cause memory issues, but more importantly the loss of precision. Our solution is to execute all programs to collect traces for all variables, perform dynamic time wrapping on the variable traces across all programs to find the top-n most used variables that account for the vast majority of variable usage, and rename the top-n most used variables consistently across all programs, and rename all other variables to a same special variable. Given the same set of variables among all programs, the mechanism of dependency enforcement on the top ones is to fuse the hidden states of multiple RNNs based on how a new value of a variable is produced. For example, in FIG0 at line 8, the new value of max val is data-dependent on item, and control-dependent on both item and itself. So at the time step when the new value of max val is produced, the latest hidden states of the RNNs encode variable item as well as itself; they together determine the previous state of the RNN upon which the new value of max val is produced. If a value is produced without any dependencies, this mechanism will not take effect. In other words, the RNN will act normally to handle data sequences on its own. In this work we enforce the data-dependency in assignment statement, declaration statement and method calls; and control-dependency in control statements such as if, f or and while statements. Equations (11 and 12) expose the inner workflow. h LT vm denotes the latest hidden state of the RNN encoding variable trace of v m up to the point of time t when x t vn is the input of the RNN encoding variable trace of v n. 
denotes element-wise matrix product. DISPLAYFORM0 Given v n depends on v 1 and v m We train our dynamic program embeddings on the programming submissions obtained from Assignment 2 from Microsoft-DEV204.1X: "Introduction to C#" offered on edx and two other problems on Microsoft CodeHunt platform. DISPLAYFORM1 • Print Chessboard: Print the chessboard pattern using "X" and "O" to represent the squares as shown in FIG4.• Count Parentheses: Count the depth of nesting parentheses in a given string.• Generate Binary Digits: Generate the string of binary digits for a given integer. Regarding the three programming problems, the errors students made in their submissions can be roughly classified into low-level technical issues (e.g., list indexing, branching conditions or looping bounds) and high-level conceptual issues (e.g., mishandling corner case, misunderstanding problem requirement or misconceptions on the underlying data structure of test inputs). In order to have sufficient data for training our models to predict the error patterns, we convert each incorrect program into multiple programs such that each new program will have only one error, and mutate all the correct programs to generate synthetic incorrect programs such that they exhibit similar errors that students made in real program submissions. These two steps allow us to set up a dataset depicted in TAB4. Based on the same set of training data, we evaluate the dynamic embeddings trained with the three network models and compare them with the syntax-based program embeddings (on the same error prediction task) on the same testing data. The syntax-based models include one trained with a RNN that encodes the run-time syntactic traces of programs BID10; another trained with a RNN that encodes token sequences of programs; and the third trained with a RNN on abstract syntax trees of programs BID11 5 Please refer to the Appendix for a detailed summary of the error patterns for each problem. All models are implemented in TensorFlow. All encoders in each of the trace model have two stacked GRU layers with 200 hidden units in each layer except that the state encoder in the state trace model has one single layer of 100 hidden units. We adopt random initialization for weight initialization. Our vocabulary has 5,568 unique tokens (i.e., the values of all variables at each time step), each of which is embedded into a 100-dimensional vector. All networks are trained using the Adam optimizer BID5 with the learning and the decay rates set to their default values (learning rate = 0.0001, beta1 = 0.9, beta2 = 0.999) and a mini-batch size of 500. For the variable trace and dependency enforcement models, each trace is padded to have the same length across each batch; for the state trace model, both the number of variables in each program state as well as the length of the entire state trace are padded. During the training of the dependency enforcement model, we have observed that when dependencies become complex, the network suffers from optimization issues, such as diminishing and exploding gradients. This is likely due to the complex nature of fusing hidden states among RNNs, echoing the errors back and forth through the network. We resolve this issue by truncating each trace into multiple sub-sequences and only back-propagate on the last sub-sequence while only feedforwarding on the rest. 
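For concreteness, here is a simplified PyTorch sketch of the first of the three models described above (the variable trace embedding). The paper's implementation is in TensorFlow, and the class name, the number of error classes and the toy usage at the bottom are illustrative choices rather than the authors' code; the GRU depth, hidden size and 100-dimensional value embeddings follow the setup reported above.

```python
import torch
import torch.nn as nn

class VariableTraceModel(nn.Module):
    """Sketch of the variable-trace embedding: one shared GRU encodes each
    variable's value sub-trace; the final hidden states are max-pooled into a
    program embedding; a one-layer softmax regression predicts the error class."""

    def __init__(self, vocab_size, num_classes, embed_dim=100, hidden_dim=200, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)       # variable values -> vectors
        self.gru = nn.GRU(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)   # softmax regression layer

    def forward(self, variable_traces):
        # variable_traces: list of LongTensors, one per variable, each of shape
        # (trace_len,), holding token ids of the values the variable took on.
        finals = []
        for trace in variable_traces:
            x = self.embed(trace).unsqueeze(0)                 # (1, T, embed_dim)
            _, h_n = self.gru(x)                               # h_n: (num_layers, 1, hidden_dim)
            finals.append(h_n[-1, 0])                          # final state of the top layer
        program_embedding, _ = torch.stack(finals).max(dim=0)  # element-wise max pooling
        return self.classifier(program_embedding)              # logits over error patterns

# Toy usage (5,568 matches the vocabulary size reported above; the class count is illustrative).
model = VariableTraceModel(vocab_size=5568, num_classes=20)
traces = [torch.tensor([4, 17, 92]), torch.tensor([8, 8, 3, 41])]
loss = nn.functional.cross_entropy(model(traces).unsqueeze(0), torch.tensor([2]))
```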
Regarding the baseline network trained on syntactic traces/token sequences, we use the same encoder architecture (i.e., two layer GRU of 200 hidden units) processing the same 100-dimension embedding vector for each statement/token. As for the AST model, we learn an embedding (100-dimension) for each type of the syntax node by propagating the leaf (a simple look up) to the root through the learned production rules. Finally, we use the root embeddings to represent programs. Table 3: Comparing dynamic program embeddings with syntax-based program embedding in predicting common error patterns made by students. As shown in Table 3, our embeddings trained on execution traces significantly outperform those trained on program syntax (greater than 92% accuracy compared to less than 27% for syntax-based embeddings). We conjecture this is because of the fact that minor syntactic discrepancies can lead to major semantic differences as shown in Figure 1. In our dataset, there are a large number of programs with distinct labels that differ by only a few number of tokens or AST nodes, which causes difficulty for the syntax models to generalize. Even for the simpler syntax-level errors, they are buried in large number of other syntactic variations and the size of the training dataset is relatively small for the syntax-based models to learn precise patterns. In contrast, dynamic embeddings are able to canonicalize the syntactical variations and pinpoint the underlying semantic differences, which in the trace-based models learning the correct error patterns more effectively even with relatively smaller size of the training data. In addition, we incorporated our dynamic program embeddings into SARFGEN BID13 -a program repair system -to demonstrate their benefit in producing fixes to correct students errors in programming assignments. Given a set of potential repair candidates, SARFGEN uses an enumerative search-based technique to find minimal changes to an incorrect program. We use the dynamic embeddings to learn a distribution over the corrections to prioritize the search for the repair algorithm. 6 To establish the baseline, we obtain the set of all corrections from SARFGEN for each of the real incorrect program to all three problems and enumerate each subset until we find the minimum fixes. On the contrary, we also run another experiment where we prioritize each correction according to the prediction of errors with the dynamic embeddings. It is worth mentioning that one incorrect program may be caused by multiple errors. Therefore, we only predict the top-1 error each time and repair the program with the corresponding corrections. If the program is still incorrect, we repeat this procedure till the program is fixed. The comparison between the two approaches is based on how long it takes them to repair the programs. Enumerative Search Table 4: Comparing the enumerative search with those guided by dynamic program embeddings in finding the minimum fixes. Time is measured in seconds. As shown in Table 4, the more fixes required, the more speedups dynamic program embeddings yield -more than an order of magnitude speedups when the number of fixes is four or greater. When the number of fixes is greater than seven, the performance gain drops significantly due to poor prediction accuracy for programs with too many errors. In other words, our dynamic embeddings are not viewed by the network as capturing incorrect execution traces, but rather new execution traces. Therefore, the predictions become unreliable. 
Note that we ignored incorrect programs having greater than 10 errors when most experiments run out of memory for the baseline approach. There has been significant recent interest in learning neural program representations for various applications, such as program induction and synthesis, program repair, and program completion. Specifically for neural program repair techniques, none of the existing techniques, such as DeepFix BID4, SynFix BID1 and sk p BID9, have considered dynamic embeddings proposed in this paper. In fact, dynamic embeddings can be naturally extended to be a new feature dimension for these existing neural program repair techniques. BID8 is a notable recent effort targeting program representation. Piech et al. explore the possibility of using input-output pairs to represent a program. Despite their new perspective, the direct mapping between input and output of programs usually are not precise enough, i.e., the same input-output pair may correspond to two completely different programs, such as the two sorting algorithms in Figure 1. As we often observe in our own dataset, programs with the same error patterns can also in different input-output pairs. Their approach is clearly ineffective for these scenarios. BID10 introduced the novel approach of using execution traces to induce and execute algorithms, such as addition and sorting, from very few examples. The differences from our work are they use a sequence of instructions to represent dynamic execution trace as opposed to using dynamic program states; their goal is to synthesize a neural controller to execute a program as a sequence of actions rather than learning a semantic program representation; and they deal with programs in a language with low-level primitives such as function stack push/pop actions rather than a high-level programming language. As for learning representations, there are several related efforts in modeling semantics in sentence or symbolic expressions BID11 BID14 BID2. These approaches are similar to our work in spirit, but target different domains than programs. We have presented a new program embedding that learns program representations from runtime execution traces. We have used the new embeddings to predict error patterns that students make in their online programming submissions. Our evaluation shows that the dynamic program embeddings significantly outperform those learned via program syntax. We also demonstrate, via an additional application, that our dynamic program embeddings yield more than 10x speedups compared to an enumerative baseline for search-based program repair. Beyond neural program repair, we believe that our dynamic program embeddings can be fruitfully utilized for many other neural program analysis tasks such as program induction and synthesis. for Pc ∈ Pcs do // Generates the syntactic discrepencies w.r.t. each Pc 7 C(P, Pc) ← DiscrepenciesGeneration(P, Ps) // Executing P to extract the dynamic execution trace 8 T (P) ← DynamicTraceExtraction(P) // Prioritizing subsets of C(P, Pc) through pre-trained model 9 C subs (P, Pc) ← Prioritization(C(P, Pc), T (P), M) 10 for C sub (P, Pc) ∈ C subs (P, Pc) do
A new way of learning semantic program embedding
328
scitldr
In this paper, we propose a generalization of the BN algorithm, diminishing batch normalization (DBN), where we update the BN parameters in a diminishing moving average way. Batch normalization (BN) is very effective in accelerating the convergence of a neural network training phase that it has become a common practice. Our proposed DBN algorithm remains the overall structure of the original BN algorithm while introduces a weighted averaging update to some trainable parameters. We provide an analysis of the convergence of the DBN algorithm that converges to a stationary point with respect to trainable parameters. Our analysis can be easily generalized for original BN algorithm by setting some parameters to constant. To the best knowledge of authors, this analysis is the first of its kind for convergence with Batch Normalization introduced. We analyze a two-layer model with arbitrary activation function. The primary challenge of the analysis is the fact that some parameters are updated by gradient while others are not. The convergence analysis applies to any activation function that satisfies our common assumptions. For the analysis, we also show the sufficient and necessary conditions for the stepsizes and diminishing weights to ensure the convergence. In the numerical experiments, we use more complex models with more layers and ReLU activation. We observe that DBN outperforms the original BN algorithm on Imagenet, MNIST, NI and CIFAR-10 datasets with reasonable complex FNN and CNN models. Deep neural networks (DNN) have shown unprecedented success in various applications such as object detection. However, it still takes a long time to train a DNN until it converges. Ioffe & Szegedy identified a critical problem involved in training deep networks, internal covariate shift, and then proposed batch normalization (BN) to decrease this phenomenon. BN addresses this problem by normalizing the distribution of every hidden layer's input. In order to do so, it calculates the preactivation mean and standard deviation using mini-batch statistics at each iteration of training and uses these estimates to normalize the input to the next layer. The output of a layer is normalized by using the batch statistics, and two new trainable parameters per neuron are introduced that capture the inverse operation. It is now a standard practice;. While this approach leads to a significant performance jump, to the best of our knowledge, there is no known theoretical guarantee for the convergence of an algorithm with BN. The difficulty of analyzing the convergence of the BN algorithm comes from the fact that not all of the BN parameters are updated by gradients. Thus, it invalidates most of the classical studies of convergence for gradient methods. In this paper, we propose a generalization of the BN algorithm, diminishing batch normalization (DBN), where we update the BN parameters in a diminishing moving average way. It essentially means that the BN layer adjusts its output according to all past mini-batches instead of only the current one. It helps to reduce the problem of the original BN that the output of a BN layer on a particular training pattern depends on the other patterns in the current mini-batch, which is pointed out by Bottou et al.. By setting the layer parameter we introduce into DBN to a specific value, we recover the original BN algorithm. We give a convergence analysis of the algorithm with a two-layer batch-normalized neural network and diminishing stepsizes. 
We assume two layers (the generalization to multiple layers can be made by using the same approach but substantially complicating the notation) and an arbitrary loss function. The convergence analysis applies to any activation function that follows our common assumption. The main shows that under diminishing stepsizes on gradient updates and updates on mini-batch statistics, and standard Lipschitz conditions on loss functions DBN converges to a stationary point. As already pointed out the primary challenge is the fact that some trainable parameters are updated by gradient while others are updated by a minor recalculation. Contributions. The main contribution of this paper is in providing a general convergence guarantee for DBN. Specifically, we make the following contributions.• In section 4, we show the sufficient and necessary conditions for the stepsizes and diminishing weights to ensure the convergence of BN parameters.• We show that the algorithm converges to a stationary point under a general nonconvex objective function. This paper is organized as follows. In Section 2, we review the related works and the development of the BN algorithm. We formally state our model and algorithm in Section 3. We present our main in Sections 4. In Section 5, we numerically show that the DBN algorithm outperforms the original BN algorithm. Proofs for main steps are collected in the Appendix. Before the introduction of BN, it has long been known in the deep learning community that input whitening and decorrelation help to speed up the training process. In fact, Orr & Müller show that preprocessing the data by subtracting the mean, normalizing the variance, and decorrelating the input has various beneficial effects for back-propagation. Krizhevsky et al. propose a method called local response normalization which is inspired by computational neuroscience and acts as a form of lateral inhibition, i.e., the capacity of an excited neuron to reduce the activity of its neighbors. Gülçehre & Bengio propose a standardization layer that bears significant resemblance to batch normalization, except that the two methods are motivated by very different goals and perform different tasks. Inspired by BN, several new works are taking BN as a basis for further improvements. Layer normalization BID2 is much like the BN except that it uses all of the summed inputs to compute the mean and variance instead of the mini-batch statistics. Besides, unlike BN, layer normalization performs precisely the same computation at training and test times. Normalization propagation that Arpit et al. uses data-independent estimations for the mean and standard deviation in every layer to reduce the internal covariate shift and make the estimation more accurate for the validation phase. Weight normalization also removes the dependencies between the examples in a minibatch so that it can be applied to recurrent models, reinforcement learning or generative models. Cooijmans et al. propose a new way to apply batch normalization to RNN and LSTM models. Given all these flavors, the original BN method is the most popular technique and for this reason our choice of the analysis. To the best of our knowledge, we are not aware of any prior analysis of BN.BN has the gradient and non-gradient updates. Thus, nonconvex convergence do not immediately transfer. Our analysis explicitly considers the workings of BN. However, nonconvex convergence proofs are relevant since some small portions of our analysis rely on known proofs and approaches. 
The optimization problem for a network is an objective function consisting of a large number of component functions, that reads: DISPLAYFORM0 where DISPLAYFORM1.., N, are real-valued functions for any data record X i. Index i associates with data record X i and target response y i (hidden behind the dependency of f on i) in the training set. Parameters θ include the common parameters updated by gradients directly associated with the loss function, i.e., behind the part that we have a parametric model, while BN parameters λ are introduced by the BN algorithm and not updated by gradient methods but by the mini-batch statistics. We define that the derivative of f i is always taken with respect to θ: DISPLAYFORM2 The deep network we analyze has 2 fully-connected layers with D 1 neurons each. The techniques presented can be extended to more layers with additional notation. Each hidden layer computes y = a(W u) with activation function a(·) and u is the input vector of the layer. We do not need to include an intercept term since the BN algorithm automatically adjusts for it. BN is applied to the output of the first hidden layer. We next describe the computation in each layer to show how we obtain the output of the network. The notations introduced here is used in the analysis. FIG0 shows the full structure of the network. The input data is vector X, which is one of DISPLAYFORM3 is the set of all BN parameters and vector θ = W 1, W 2, (β DISPLAYFORM4 is the set of all trainable parameters which are updated by gradients. Matrices W 1, W 2 are the actual model parameters and β, γ are introduced by BN. The value of j th neuron of the first hidden layer is DISPLAYFORM5 where W 1,j,· denotes the weights of the linear transformations for the j th neuron. The j th entry of batch-normalized output of the first layer is DISPLAYFORM6 DISPLAYFORM7 The objective function for the i th sample is DISPLAYFORM8 where l i (·) is the loss function associated with the target response y i. For sample i, we have the following complete expression for the objective function: DISPLAYFORM9 Function f i (X i : θ, λ) is nonconvex with respect to θ and λ. Algorithm 1 shows the algorithm studied herein. There are two deviations from the standard BN algorithm, one of them actually being a generalization. We use the full gradient instead of the more popular stochastic gradient (SG) method. It essentially means that each batch contains the entire training set instead of a randomly chosen subset of the training set. An analysis of SG is potential future research. Although the primary motivation for full gradient update is to reduce the burdensome in showing the convergence, the full gradient method is similar to SG in the sense that both of them go through the entire training set, while full gradient goes through it deterministically and the SG goes through it in expectation. Therefore, it is reasonable to speculate that the SG method has similar convergence property as the full algorithm studied herein. Algorithm 1 DBN: Diminishing Batch-Normalized Network Update Algorithm 1: Initialize θ ∈ R n1 and λ ∈ R n2 2: for iteration m=1,2,... do 3: DISPLAYFORM0 The second difference is that we update the BN parameters (θ, λ) by their moving averages with respect to diminishing α (m). The original BN algorithm can be recovered by setting α (m) = 1 for every m. After introducing diminishing α (m), λ (m) and hence the output of the BN layer is determined by the history of all past data records, instead of those solely in the last batch. 
Thus, the output of the BN layer becomes more general that better reflects the distribution of the entire dataset. We use two strategies to decide the values of α (m). One is to use a constant smaller than 1 for all m, and the other one is to decay the α (m) gradually, such as α (m) = 1/m. In our numerical experiment, we show that Algorithm 1 outperforms the original BN algorithm, where both are based on SG and non-linear activation functions with many layers FNN and CNN models. The main purpose of our work is to show that Algorithm 1 converges. In the general case, we focus on the nonconvex objective function. Here are the assumptions we used for the convergence analysis. Assumption 1 (Lipschitz continuity on θ and λ). For every i we have DISPLAYFORM0 Noted that the Lipschitz constants associated with each of the above inequalities are not necessarily the same. HereL is an upper bound for these Lipschitz constants for simplicity. Assumption 2 (bounded parameters). Sets P and Q are compact set, where θ ∈ P and λ ∈ Q. Thus, there exists a constant M that weights W and parameters λ are bounded element-wise by this constant M. DISPLAYFORM1 This also implies that the updated θ, λ in Algorithm 1 remain in P and Q, respectively. Assumption 3 (diminishing update on θ). The stepsizes of θ update satisfy DISPLAYFORM2 This is a common assumption for diminishing stepsizes in optimization problems. Assumption 4 (Lipschitz continuity of l i (·)). Assume the loss functions l i (·) for every i is continuously differentiable. It implies that there existsM such that DISPLAYFORM3 Assumption 5 (existence of a stationary point). There exists a stationary point (θ DISPLAYFORM4 We note that all these are standard assumptions in convergence proofs. We also stress that Assumption 4 does not directly imply 1. Since we assume that P and Q are compact, then Assumptions 1, 4 and 5 hold for many standard loss function such as softmax and MSE.Assumption 6 (Lipschitz at activation function). The activation function a(·) is Lipschitz with constant k: DISPLAYFORM5 Since for all activation function there is a = 0, the condition is equivalent to |a(x) − a| ≤ k x − 0. We note that this assumption works for many popular choices of activation functions, such as ReLU and LeakyReLu. We first have the following lemma specifying sufficient conditions for λ to converge. Proofs for main steps are given in the Appendix. Theorem 7 Under Assumptions 1, 2, 3 and 6, if {α (m) } satisfies DISPLAYFORM0 We give a discussion of the above conditions for α (m) and η (m) at the end of this section. With the help of Theorem 7, we can show the following convergence . Lemma 8 Under Assumptions 4, 5 and the assumptions of Theorem 7, when DISPLAYFORM1 we have DISPLAYFORM2 This is similar to the classical convergence rate analysis for the non-convex objective function with diminishing stepsizes, which can be found in.Lemma 9 Under the assumptions of Lemma 8, we have DISPLAYFORM3 This theorem states that for the full gradient method with diminishing stepsizes the gradient norms cannot stay bounded away from zero. The following characterizes more precisely the convergence property of Algorithm 1.Lemma 10 Under the assumptions stated in Lemma 8, we have DISPLAYFORM4 Our main is listed next. Theorem 11 Under the assumptions stated in Lemma 8, we have DISPLAYFORM5 We cannot show that {θ (m) }'s converges (standard convergence proofs are also unable to show such a stronger statement). 
For this reason, Theorem 11 does not immediately follow from Lemma 10 together with Theorem 7. The statement of Theorem 11 would easily follow from Lemma 10 if the convergence of {θ (m) } is established and the gradient being continuous. We show in the Appendix that the set of sufficient and necessary conditions to satisfy the assumptions of Theorem 7 are h > 1 and k ≥ 1. The set of sufficient and necessary conditions to satisfy the assumptions of Lemma 8 are h > 2 and k ≥ 1. For example, we can pick DISPLAYFORM0 ) to achieve the above convergence in Theorem 11. We conduct the computational experiments with Theano and Lasagne on a Linux server with a Nvidia Titan-X GPU. We use , CIFAR-10 and Network Intrusion (NI) kdd datasets to compare the performance between DBN and the original BN algorithm. For the MNIST dataset, we use a four-layer fully connected FNN (784 × 300 × 300 × 10) with the ReLU activation function and for the NI dataset, we use a four-layer fully connected FNN (784 × 50 × 50 × 10) with the ReLU activation function. For the CIFAR-10 dataset, we use a reasonably complex CNN network that has a structure of (Conv-Conv-MaxPool-DropoutConv-Conv-MaxPool-Dropout-FC-Dropout-FC), where all four convolution layers and the first fully connected layers are batch normalized. We use the softmax loss function and l 2 regularization with for all three models. All the trainable parameters are randomly initialized before training. For all 3 datasets, we use the standard epoch/minibatch setting with the minibatch size of 100, i.e., we do not compute the full gradient and the statistics are over the minibatch. We use to update the learning rates η (m) for trainable parameters, starting from 0.01. We test all the choices of α (m) with the performances presented in Figure 2. Figure 2 shows that all the non-zero choices of α (m) converge properly. The algorithms converge without much difference even when α (m) in DBN is very small, e.g., 1/m 2. However, if we select α (m) = 0, the algorithm is erratic. Besides, we observe that all the non-zero choices of α (m) converge at a similar rate. The fact that DBN keeps the batch normalization layer stable with a very small α (m) suggests that the BN parameters do not have to be depended on the latest minibatch, i.e., the original BN.We compare a selected set of the most efficient choices of α TAB2 shows the best obtained from each choice of α (m). Most importantly, it suggests that the choices of α (m) = 1/m and 1/m 2 perform better than the original BN algorithm. Besides, all the constant less-than-one choices of α (m) perform better than the original BN, showing the importance of considering the mini-batch history for the update of the BN parameters. The BN algorithm in each figure converges to similar error rates on test datasets with different choices of α (m) except for the α (m) = 0 case. Among all the models we tested, α (m) = 0.25 is the only one that performs top 3 for all three datasets, thus the most robust choice. To summarize, our numerical experiments show that the DBN algorithm outperforms the original BN algorithm on the MNIST, NI and CIFAT-10 datasets with typical deep FNN and CNN models. On the analytical side, we believe an extension to more than 2 layers is doable with significant augmentations of the notation. The following proofs are shortened to corporate with AAAI submission page limit. Proposition 12 There exists a constant M such that, for any θ and fixed λ, we have DISPLAYFORM0 Proof. 
By Assumption 5, we know there exists (θ *, λ *) such that ∇f (θ *, λ *) 2 = 0. Then we have DISPLAYFORM1 where the last inequality is by Assumption 1. We then have ∇f (θ, λ) DISPLAYFORM2 because sets P and Q are compact by Assumption 2. Proof. This is a known of the Lipschitz-continuous condition that can be found in. We have this together with Assumption 1. Lemma 14 When DISPLAYFORM0 is a Cauchy series. Proof. By Algorithm 1, we have DISPLAYFORM1 We defineα DISPLAYFORM2 Then we have DISPLAYFORM3 It remains to show that DISPLAYFORM4 DISPLAYFORM5 implies the convergence of {μ (m) }. By, we have Π DISPLAYFORM6 It is also easy to show that there exists C and Mc such that for all m ≥ Mc, we have DISPLAYFORM7 Therefore, lim DISPLAYFORM8 Thus the following holds: DISPLAYFORM9 and DISPLAYFORM10 From equation 29 and equation 32 it follows that the sequence {μ DISPLAYFORM11} is a Cauchy series. Lemma 15 Since {μ DISPLAYFORM12} is a Cauchy series, {µ DISPLAYFORM13} is a Cauchy series. Proof. We know that µ } is a Cauchy series. Proof. We define σ DISPLAYFORM0 Since {µ (m) j } is convergent, there exists c1, c2 and N1 such that for any m > N1, −∞ < c1 < µ DISPLAYFORM1 Inequality equation 35 is by the following fact: DISPLAYFORM2 where b and ai for every i are arbitrary real scalars. Besides, equation 39 is due to −2aic ≤ max{−2|ai|c, 2|ai|c}.Inequality equation 36 follow from the square function being increasing for nonnegative numbers. Besides these facts, equation 36 is also by the same techniques we used in equation 23-equation 25 where we bound the derivatives with the Lipschitz continuity in the following inequality: DISPLAYFORM3 Inequality equation 37 is by collecting the bounded terms into a single boundML,M. Therefore, DISPLAYFORM4 Using the similar methods in deriving equation 28 and equation 29, it can be seen that a set of sufficient conditions ensuring the convergence for {σ DISPLAYFORM5 Therefore, the convergence conditions for {σ (m) j } are the same as for {µ DISPLAYFORM6 It is clear that these lemmas establish the proof of Theorem 7. Proposition 17 Under the assumptions of Theorem 7, we have |λ (m) −λ|∞ ≤ am, where DISPLAYFORM0 M1 and M2 are constants. Proof. For the upper bound of σ DISPLAYFORM1, by equation 38, we have DISPLAYFORM2 We defineσj:=σ DISPLAYFORM3 The first inequality comes by substituting p by m and by taking lim as q → ∞ in equation 41. The second inequality comes from equation 30. We then obtain, DISPLAYFORM4 The second inequality is by (1 − α )...(1 − α (m) ) < 1, the third inequality is by equation 30 and the last inequality can be easily seen by induction. By equation 44, we obtain DISPLAYFORM5 Therefore, we have DISPLAYFORM6 The first inequality is by equation 45, the second inequality is by equation 41, the third inequality is by equation 31 and the fourth inequality is by adding the nonnegative termσ j C α (m) to the right-hand side. For the upper bound of µ DISPLAYFORM7 Let us define Am:= μ (m) −μ (∞) and Bm:= μj DISPLAYFORM8 } is a Cauchy series, by equation 27, |μ DISPLAYFORM9. Therefore, the first term in equation 47 is bounded by DISPLAYFORM10 For the second term in equation 47, recall that C: DISPLAYFORM11 where the inequality can be easily seen by induction. Therefore, the second term in equation 47 is bounded by DISPLAYFORM12 From these we obtain DISPLAYFORM13 DISPLAYFORM14 where M1 and M2 are constants defined as DISPLAYFORM15 Proposition 18 Under the assumptions of Theorem 7, DISPLAYFORM16, where am is defined in Proposition 17.Proof. 
For simplicity of the proof, let us define DISPLAYFORM17 where √ n2 is the dimension of λ. The second inequality is by Assumption 1 and the fourth inequality is by Proposition 17. Inequality equation 51 implies that for all m and i, we have |x DISPLAYFORM18 This is established by the following four cases. DISPLAYFORM0 by Proposition 12. DISPLAYFORM1 The last inequality is by Proposition 12.All these four cases yield equation 52.Proposition 19 Under the assumptions of Theorem 7, we havē DISPLAYFORM2 where M is a constant and am is defined in Proposition 17.Proof. By Proposition 13, DISPLAYFORM3 2. Therefore, we can sum it over the entire training set from i = 1 to N to obtain DISPLAYFORM4 In Algorithm 1, we define the update of θ in the following full gradient way: DISPLAYFORM5 which implies DISPLAYFORM6 By equation 56 we haveθ DISPLAYFORM7. We now substituteθ:= θ (m+1), θ:= θ (m) and λ:=λ into equation 54 to obtain DISPLAYFORM8 The first inequality is by plugging equation 56 into equation 54, the second inequality comes from Proposition 12 and the third inequality comes from Proposition 18. Here we show Theorem 11 as the consequence of Theorem 7 and Lemmas 8, 9 and 10.6.4.1 PROOF OF LEMMA 8Here we show Lemma 8 as the consequence of Lemmas 20, 21 and 22.Lemma 20 DISPLAYFORM0 Proof. By plugging equation 45 and equation 43 into equation 58, we have the following for all j: DISPLAYFORM1 It is easy to see that the the following conditions are sufficient for right-hand side of equation 59 to be finite: DISPLAYFORM2 Therefore, we obtain DISPLAYFORM3 Lemma 21 Under Assumption 4, DISPLAYFORM4 is a set of sufficient conditions to ensure DISPLAYFORM5 Proof. By Assumption 4, we have DISPLAYFORM6 By the definition of fi(·), we then have DISPLAYFORM7 The first inequality is by the Cauchy-Schwarz inequality, and the second one is by equation 60. To show the finiteness of equation 64, we only need to show the following two statements: DISPLAYFORM8 and DISPLAYFORM9 Proof of equation 65: For all j we have DISPLAYFORM10 The inequality comes from |W Finally, we invoke Lemma 14 to assert that DISPLAYFORM11 Proof of equation 66: For all j we have DISPLAYFORM12 The first term in equation 68 is finite since {µ Noted that function f (σ) = 1 σ + B is Lipschitz continuous since its gradient is bounded by 1 Next we show that each of the four terms in the right-hand side of equation 75 is finite, respectively. For the first term, DISPLAYFORM13 is by the fact that the parameters {θ, λ} are in compact sets, which implies that the image of fi(·) is in a bounded set. For the second term, we showed its finiteness in Lemma 21. The right-hand side of equation 77 is finite because DISPLAYFORM14 and DISPLAYFORM15 The second inequalities in equation 78 and equation 79 come from the stated assumptions of this lemma. For the fourth term, DISPLAYFORM16 holds, because we have In Lemmas 20, 21 and 22, we show that {σ (m) } and {µ (m) } are Cauchy series, hence Lemma 8 holds. This proof is similar to the the proof by.Proof. By Theorem 8, we have DISPLAYFORM0 If there exists a > 0 and an integerm such that ∇f (θ (m),λ) 2 ≥ Since k ≥ 1 due to Assumption 3, we conclude that k + h > 2. Therefore, the conditions for η (m) and α (m) to satisfy the assumptions of Theorem 7 are h > 1 and k ≥ 1. 
For the assumptions of Theorem 7, the first condition DISPLAYFORM0 requires h > 2.Besides, the second condition is DISPLAYFORM1 The inequality holds because for any p > 1, we have DISPLAYFORM2 Therefore, the conditions for η (m) and α (m) to satisfy the assumptions of Lemma 8 are h > 2 and k ≥ 1.
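To make Algorithm 1's update of the BN parameters λ concrete, here is a minimal NumPy sketch of a diminishing-batch-normalized layer as we read it: the normalization statistics are a weighted moving average of all past mini-batch statistics with diminishing weight α(m), and setting α(m) = 1 for every m recovers the original BN behaviour. The class name, the default schedule α(m) = 1/m and the ε constant are illustrative choices; the gradient updates of γ, β and the weights are omitted.

```python
import numpy as np

class DiminishingBatchNorm:
    """Sketch of the DBN layer: lambda = (mu, var) is a moving average of past
    mini-batch statistics with diminishing weight alpha_m; alpha_m = 1 for all m
    recovers the original per-mini-batch normalization."""

    def __init__(self, num_features, alpha_schedule=lambda m: 1.0 / m, eps=1e-5):
        self.gamma = np.ones(num_features)   # trainable scale (updated by gradients, not shown)
        self.beta = np.zeros(num_features)   # trainable shift (updated by gradients, not shown)
        self.mu = np.zeros(num_features)     # lambda: accumulated mean
        self.var = np.ones(num_features)     # lambda: accumulated variance
        self.alpha_schedule = alpha_schedule
        self.eps = eps
        self.m = 0                           # iteration counter

    def forward(self, x):
        # x: (batch_size, num_features) pre-activations of a hidden layer.
        self.m += 1
        alpha = self.alpha_schedule(self.m)
        # Diminishing moving-average update of the BN parameters lambda.
        self.mu = (1.0 - alpha) * self.mu + alpha * x.mean(axis=0)
        self.var = (1.0 - alpha) * self.var + alpha * x.var(axis=0)
        x_hat = (x - self.mu) / np.sqrt(self.var + self.eps)
        return self.gamma * x_hat + self.beta

bn = DiminishingBatchNorm(num_features=300)   # e.g. the 300-unit hidden layer of the MNIST FNN
out = bn.forward(np.random.randn(100, 300))   # one mini-batch of pre-activations
```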
We propose an extension of batch normalization, show a first-of-its-kind convergence analysis for this extension, and show in numerical experiments that it has better performance than the original batch normalization.
329
scitldr
Generative models such as Variational Auto Encoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian. After a trained model is obtained, one can sample the Generator in various forms for exploration and understanding, such as interpolating between two samples, sampling in the vicinity of a sample or exploring differences between a pair of samples applied to a third sample. However, the latent space operations commonly used in the literature so far induce a distribution mismatch between the ing outputs and the prior distribution the model was trained on. Previous works have attempted to reduce this mismatch with heuristic modification to the operations or by changing the latent distribution and re-training models. In this paper, we propose a framework for modifying the latent space operations such that the distribution mismatch is fully eliminated. Our approach is based on optimal transport maps, which adapt the latent space operations such that they fully match the prior distribution, while minimally modifying the original operation. Our matched operations are readily obtained for the commonly used operations and distributions and require no adjustment to the training procedure. Generative models such as Variational Autoencoders (VAEs) BID7 and Generative Adversarial Networks (GANs) BID3 have emerged as popular techniques for unsupervised learning of intractable distributions. In the framework of Generative Adversarial Networks (GANs) BID3, the generative model is obtained by jointly training a generator G and a discriminator D in an adversarial manner. The discriminator is trained to classify synthetic samples from real ones, whereas the generator is trained to map samples drawn from a fixed prior distribution to synthetic examples which fool the discriminator. Variational Autoencoders (VAEs) BID7 are also trained for a fixed prior distribution, but this is done through the loss of an Autoencoder that minimizes the variational lower bound of the data likelihood. For both VAEs and GANs, using some data X we end up with a trained generator G, that is supposed to map latent samples z from the fixed prior distribution to output samples G(z) which (hopefully) have the same distribution as the data. In order to understand and visualize the learned model G(z), it is a common practice in the literature of generative models to explore how the output G(z) behaves under various arithmetic operations on the latent samples z. However, the operations typically used so far, such as linear interpolation BID3, spherical interpolation BID20, vicinity sampling and vector arithmetic BID12, cause a distribution mismatch between the latent prior distribution and the of the operations. This is problematic, since the generator G was trained on a fixed prior and expects to see inputs with statistics consistent with that distribution. To address this, we propose to use distribution matching transport maps, to obtain analogous latent space operations (e.g. interpolation, vicinity sampling) which preserve the prior distribution of samples from prior linear matched (ours) spherical (a) Uniform prior: Trajectories of linear interpolation, our matched interpolation and the spherical interp. BID20. (e) Spherical midpoint distribution BID20 Figure 1: We show examples of distribution mismatches induced by the previous interpolation schemes when using a uniform prior in two dimensions. 
Our matched interpolation avoids this with a minimal modification to the linear trajectory, traversing through the space such that all points along the path are distributed identically to the prior. the latent space, while minimally changing the original operation. In Figure 1 we showcase how our proposed technique gives an interpolation operator which avoids distribution mismatch when interpolating between samples of a uniform distribution. The points of the (red) matched trajectories are obtained as minimal deviations (in expectation of l 1 distance) from the the points of the (blue) linear trajectory. In the literature there are dozens of papers that use sample operations to explore the learned models BID0; BID3; BID2; BID13; BID1; BID13 to name a few), but most of them have ignored the problem of distribution mismatch. BID7 and BID10 sidestep the problem when visualizing their models, by not performing operations on latent samples, but instead restrict the latent space to 2-d and uniformly sample the percentiles of the distribution on a 2-d grid. This way, the samples have statistics that are consistent with the prior distribution. However, this approach does not scale up to higher dimensions -whereas the latent spaces used in the literature can have hundreds of dimensions. BID20 experimentally observe that there is a distribution mismatch between the norm for points drawn from uniform or Gaussian distribution and points obtained with linear interpolation (SLERP), and (heuristically) propose to use a so-called spherical linear interpolation to reduce the mismatch, obtaining higher quality interpolated samples. While SLERP has been subjectively observed to produce better looking samples than linear interpolation and is now commonly, its heuristic nature has limited it from fully replacing the linear interpolation. Furthermore, while perhaps possible it is not obvious how to generalize it to other operations, such as vicinity sampling, n-point interpolation and random walk. In Section 2 we show that for interpolation, in high dimensions SLERP tends to approximately perform distribution matching the approach taken by our framework which can explain why it works well in practice. BID6 further analyze the (norm) distribution mismatch observed by BID20 (in terms of KL-Divergence) for the special case of Gaussian priors, and propose an alternative prior distribution with dependent components which produces less (but still nonzero) distribution mismatch for linear interpolation, at the cost of needing to re-train and re-tune the generative models. In contrast, we propose a framework which allows one to adapt generic operations, such that they fully preserve the original prior distribution while being faithful to the original operation. Thus the KL-Divergence between the prior and the distribution of the from our operations is zero. The approach works as follows: we are given a'desired' operation, such as linear interpolation y = tz 1 + (1 − t)z 2, t ∈. Since the distribution of y does not match the prior distribution of z, we search for a warping f: DISPLAYFORM0 has the same distribution as z. In order to have the modificationỹ as faithful as possible to the original operation y, we use optimal transform Published as a conference paper at ICLR 2019 Operation Expression 2-point interpolation maps BID17 BID18 BID19 to find a minimal modification of y which recovers the prior distribution z. 
Figure 1a, where each pointỹ of the matched curve is obtained by warping a corresponding point y of the linear trajectory, while not deviating too far from the line. DISPLAYFORM1 With implicit models such as GANs BID3 and VAEs BID7, we use the data X, drawn from an unknown random variable x, to learn a generator G: DISPLAYFORM0 with respect to a fixed prior distribution p z, such that G(z) approximates x. Once the model is trained, we can sample from it by feeding latent samples z through G.We now bring our attention to operations on latent samples DISPLAYFORM1 We give a few examples of such operations in TAB0.Since the inputs to the operations are random variables, their output y = κ(z 1, · · ·, z k) is also a random variable (commonly referred to as a statistic). While we typically perform these operations on realized (i.e. observed) samples, our analysis is done through the underlying random variable y. The same treatment is typically used to analyze other statistics over random variables, such as the sample mean, sample variance and test statistics. In TAB0 we show example operations which have been commonly used in the literature. As discussed in the Introduction, such operations can provide valuable insight into how the trained generator G changes as one creates related samples y from some source samples. The most common such operation is the linear interpolation, which we can view as an operation DISPLAYFORM2 where z 1, z 2 are latent samples from the prior p z and y t is parameterized by t ∈. Now, assume z 1 and z 2 are i.i.d, and let Z 1, Z 2 be their (scalar) first components with distribution p Z. Then the first component of y t is Y t = tZ 1 + (1 − t)Z 2, and we can compute: DISPLAYFORM3 Since (1 + 2t(t − 1)) = 1 for all t ∈ \ {0, 1}, it is in general impossible for y t to have the same distribution as z, which means that distribution mismatch is inevitable when using linear interpolation. A similar analysis reveals the same for all of the operations in TAB0.This leaves us with a dilemma: we have various intuitive operations (see TAB0) which we would want to be able to perform on samples, but their ing distribution p yt is inconsistent with the distribution p z we trained G for. Due to the curse of dimensionality, as empirically observed by BID20, this mismatch can be significant in high dimensions. We illustrate this in FIG1, where we plot the distribution of the squared norm y t 2 for the midpoint t = 1/2 of linear interpolation, compared to the prior distribution z 2. With d = 100 (a typical dimensionality for the latent space), the distributions are dramatically different, having almost no common support. BID6 quantify this mismatch for Gaussian priors in terms of KL-Divergence, and show that it grows linearly with the dimension d. In Appendix A (see Supplement) we expand this analysis and show that this happens for all prior distributions with i.i.d. entries (i.e. not only Gaussian), both in terms of geometry and KL-Divergence. In order to address the distribution mismatch, we propose a simple and intuitive framework for constructing distribution preserving operators, via optimal transport: Published as a conference paper at ICLR 2019 BID20. Both linear and spherical interpolation introduce a distribution mismatch, whereas our proposed matched interpolation preserves the prior distribution for both priors. Strategy 1 (Optimal Transport Matched Operations). DISPLAYFORM0 2. We analytically (or numerically) compute the ing (mismatched) distribution p y 3. 
We search for a minimal modificationỹ = f (y) (in the sense that E y [c(ỹ, y)] is minimal with respect to a cost c), such that distribution is brought back to the prior, i.e. pỹ = p z.The cost function in step 3 could e.g. be the euclidean distance c(x, y) = x − y, and is used to measure how faithful the modified operator DISPLAYFORM1 Finding the map f which gives a minimal modification can be challenging, but fortunately it is a well studied problem from optimal transport theory. We refer to the modified operationỹ as the matched version of y, with respect to the cost c and prior distribution p z.For completeness, we introduce the key concepts of optimal transport theory in a simplified setting, i.e. assuming probability distributions are in euclidean space and skipping measure theoretical formalism. We refer to BID18 BID19 and BID17 for a thorough and formal treatment of optimal transport. The problem of step above was first posed by Monge and can more formally be stated as: Problem 1 (Problem 1.1). Given probability distributions p x, p y, with domains X, Y respectively, and a cost function c: X × Y → R +, we want to minimize DISPLAYFORM2 We refer to the minimizer f * X → Y of (MP) (if it exists), as the optimal transport map from p x to p y with respect to the cost c. However, the problem remained unsolved until a relaxed problem was studied by BID5: Problem 2 (Problem 1.2). Given probability distributions p x, p y, with domains X, Y respectively, and a cost function c: X × Y → R +, we want to minimize DISPLAYFORM3 where (x, y) ∼ p x,y, x ∼ p x, y ∼ p y denotes that (x, y) have a joint distribution p x,y which has (previously specified) marginals p x and p y.We refer to the joint p x,y which minimizes (KP) as the optimal transport plan from p x to p y with respect to the cost c. The key difference is to relax the deterministic relationship between x and f (x) to a joint probability distribution p x,y with marginals p x and p y for x and y. In the case of Problem 1, the minimization Published as a conference paper at ICLR 2019 might be over the empty set since it is not guaranteed that there exists a mapping f such that f (x) ∼ y. In contrast, for Problem 2, one can always construct a joint density p x,y with p x and p y as marginals, such as the trivial construction where x and y are independent, i.e. p x,y (x, y):= p x (x)p y (y).Note that given a joint density p x,y (x, y) over X × Y, we can view y conditioned on x = x for a fixed x as a stochastic function f (x) from X to Y, since given a fixed x do not get a specific function value f (x) but instead a random variable f (x) that depends on x, with f (x) ∼ y|x = x with density p y (y|x = x):= px,y(x,y)px (x). In this case we have (x, f (x)) ∼ p x,y, so we can view the Problem KP as a relaxation of Problem MP where f is allowed to be a stochastic mapping. While the relaxed problem of Kantorovich (KP) is much more studied in the optimal transport literature, for our purposes of constructing operators it is desirable for the mapping f to be deterministic as in (MP) (see Appendix C for a more detailed discussion on deterministic vs stochastic operations).To this end, we will choose the cost function c such that the two problems coincide and where we can find an analytical solution f or at least an efficient numerical solution. In particular, we note that the operators in TAB0 are all pointwise, such that if the points z i have i.i.d. components, then the y will also have i.i.d. components. 
If we combine this with the constraint for the cost c to be additive over the components of x, y, we obtain the following simplification: Theorem 1. Suppose p x and p y have i.i.d components and c over DISPLAYFORM4 Consequently, the minimization problems (MP) and (KP) turn into d identical scalar problems for the distributions p X and p Y of the components of x and y: DISPLAYFORM5 such that an optimal transport map T for (MP-1-D) gives an optimal transport map f for (MP) by pointwise application of T, i.e. f (x) (i):= T (x (i) ), and an optimal transport plan p X,Y for (KP-1-D)gives an optimal transport plan p x,y (x, y): DISPLAYFORM6 Proof. See Appendix. Fortunately, under some mild constraints, the scalar problems have a known solution: Theorem 2 (Theorem 2.9 in Santambrogio FORMULA3). Let h: R → R + be convex and suppose the cost C takes the form C(x, y) = h(x − y). Given an continuous source distribution p X and a target distribution p Y on R having a finite optimal transport cost in (KP-1-D), then DISPLAYFORM7 defines an optimal transport map from p X to p Y for (MP-1-D), where DISPLAYFORM8 is the Cumulative Distribution Function (CDF) of X and F DISPLAYFORM9 ≥ y} is the pseudo-inverse of F Y. Furthermore, the joint distribution of (X, T mon X→Y (X)) defines an optimal transport plan for (KP-1-D).The mapping T mon X→Y (x) in Theorem 2 is non-decreasing and is known as the monotone transport map from X to Y. It is easy to verify that T mon X→Y (X) has the distribution of Y, in particular DISPLAYFORM10 Now, combining Theorems 1 and 2, we obtain a concrete realization of the Strategy 1 outlined above. We choose the cost c such that it admits to Theorem 1, such as c(x, y):= x − y 1, and use an operation that is pointwise, so we just need to compute the monotone transport map in. That is, if DISPLAYFORM11 0.8 1 ỹ y t = 0.05 t = 0.25 t = 0.5 −3 −2.5 −2 −1.5 −1 −0.5 0 0.5 1 1.5 2 2.5 3 −3 −2 −1 0 1 2 3 ỹ y t = 0.05 t = 0.25 t = 0.5 (a) Uniform prior (b) Gaussian prioras the component-wise modification of y, i.e.ỹ DISPLAYFORM12 In FIG3 we show the monotone transport map for the linear interpolation y = tz 1 + (1 − t)z 2 for various values of t. The detailed calculations and examples for various operations are given in Appendix B, for both Uniform and Gaussian priors. To validate the correctness of the matched operators computed in Appendix B, we numerically simulate the distributions for toy examples, as well as prior distributions typically used in the literature. Priors vs. interpolations in 2-D For Figure 1, we sample 1 million pairs of points in two dimension, from a uniform prior (on [−1, 1] 2 ), and estimate numerically the midpoint distribution of linear interpolation, our proposed matched interpolation and the spherical interpolation of BID20. It is reassuring to see that the matched interpolation gives midpoints which are identically distributed to the prior. In contrast, the linear interpolation condenses more towards the origin, forming a pyramidshaped distribution (the of convolving two boxes in 2-d). Since the spherical interpolation of BID20 follows a great circle with varying radius between the two points, we see that the ing distribution has a "hole" in it, "circling" around the origin for both priors. FIG1, we sample 1 million pairs of points in d = 100 dimensions, using either i.i.d. uniform components on [−1, 1] or Gaussian N and compute the distribution of the squared norm of the midpoints. 
We see there is a dramatic difference between vector lengths in the prior and the midpoints of linear interpolation, with only minimal overlap. We also see that the spherical interpolation (SLERP) approximately matches the prior (norm) distribution, having a matching first moment, but otherwise also induces a distribution mismatch. In contrast, our matched interpolation fully preserves the prior distribution and aligns perfectly. We note that this setting (d = 100, uniform or Gaussian) is commonly used in the literature. Setup We used DCGAN BID12 generative models trained on LSUN bedrooms BID21, CelebA BID8 and LLD BID14, an icon dataset, to qualitatively evaluate the operations. For LSUN, the model was trained for two different output resolutions, providing 64 × 64 pixel and 128 × 128 pixel output images (where the latter is used in figures containing larger sample images). The models for LSUN and the icon dataset were both trained on a uniform latent prior distribution, while for CelebA a Gaussian prior was used. The dimensionality of the latent space is 100 for both LSUN and CelebA, and 512 for the model trained on the icon dataset. Furthermore, we use an improved Wasserstein GAN (iWGAN) with gradient penalty (Gulrajani et al., 2017) trained on CIFAR-10 at 32 × 32 pixels with a 128-dimensional Gaussian prior to compute inception scores. Table 3: We measure the average (normalized) perturbation ‖ỹ − y‖p / ‖y‖p incurred by our matched interpolation for the latent spaces used in TAB2, for p = 1, 2. To measure the effect of the distribution mismatch, we quantitatively evaluate using the Inception score BID16. In TAB2 we compare the Inception score of our trained models (i.e. using random samples from the prior) with the score when sampling midpoints from the 2-point and 4-point interpolations described above, reporting mean and standard deviation with 50,000 samples, as well as relative change to the original model scores if they are significant. Compared to the original scores of the trained models (random samples), our matched operations are statistically indistinguishable (as expected), while the linear interpolation gives a significantly lower score in all settings (up to 29% lower). However, this is not surprising, since our matched operations are guaranteed to produce samples that come from the same distribution as the random samples. To quantify the effect our matching procedure has on the original operation, in Table 3 we compute the perturbation incurred when warping the linear interpolation y to the matched counterpart ỹ for 2-point interpolation on the latent spaces used in TAB2. We compute the normalized perturbation ‖ỹ t − y t‖p / ‖y t‖p (with p = 1 corresponding to the l1 distance and p = 2 to the l2 distance) over N = 100000 interpolation points y t = tz 1 + (1 − t)z 2, where z 1, z 2 are sampled from the prior and t ∈ [0, 1] is sampled uniformly. We observe that for all priors and both metrics, the perturbation is in the range 0.23 − 0.25, i.e. less than one fourth of ‖y t‖. In the following, we will qualitatively show that our matched operations behave as expected, and that there is a visual difference between the original operations and the matched counterparts. To this end, the generator output for latent samples produced with linear interpolation, SLERP (spherical linear interpolation) of BID20 and our proposed matched interpolation will be compared. We begin with the classic example of 2-point interpolation: Figure 4 shows three examples per dataset for an interpolation between 2 points in latent space.
Each example is first done via linear interpolation, then SLERP and finally matched interpolation. It is immediately obvious in Figures 4a and 4b that linear interpolation produces inferior results, with generally more blurry, less saturated and less detailed output images. The SLERP heuristic and matched interpolation are slightly different visually, but we do not observe a difference in visual quality. However, we stress that the goal of this work is to construct operations in a principled manner, whose samples are consistent with the generative model. In the case of linear interpolation (our framework generalizes to more operations, see below and Appendix), the SLERP heuristic tends to work well in practice, but we provide a principled alternative. 4-point interpolation An even stronger effect can be observed when we do 4-point interpolation, showcased in Figure 5 (LSUN) and Figure 8 (LLD icons). The higher resolution of the LSUN output highlights the very apparent loss of detail and increasing prevalence of artifacts towards the midpoint in the linear version, compared to SLERP and our matched interpolation. Midpoints (Appendix) In all cases, the point where the interpolation methods diverge the most is at the midpoint of the interpolation, where t = 0.5. Thus we provide 25 such interpolation midpoints in Figures 11 (LLD icons) and 12 (LSUN) in the Appendix for direct comparison. Vicinity sampling (Appendix) Furthermore, we provide two examples of vicinity sampling in Figures 9 and 10 in the Appendix. Analogous to the previous observations, the output under a linear operator lacks definition, sharpness and saturation when compared to both the spherical and matched operators. Random walk An interesting property of our matched vicinity sampling is that we can obtain a random walk in the latent space by applying it repeatedly: we start at a point y 0 = z drawn from the prior, and then obtain point y i by sampling a single point in the vicinity of y i−1, using some fixed 'step size'. We show an example of such a walk in FIG5, using a step size of 0.5. As a result of the repeated application of the vicinity sampling operation, the divergence from the prior distribution in the non-matched case becomes stronger with each step, resulting in completely unrecognizable output images on the LSUN and LLD icon models. We proposed a framework that fully eliminates the distribution mismatch in the common latent space operations used for generative models. Our approach uses optimal transport to minimally modify (in l 1 distance) the operations such that they fully preserve the prior distribution. We give analytical formulas for the resulting (matched) operations for various examples, which are easily implemented. The matched operators give significantly higher quality samples compared to the originals, having the potential to become standard tools for evaluating and exploring generative models. This work was partly supported by ETH Zurich General Fund (OK) and Nvidia through a hardware grant. We note that the analysis here can be seen as a more rigorous version of an observation made by BID20, who experimentally show that there is a significant difference between the average norm of the midpoint of linear interpolation and the points of the prior, for uniform and Gaussian distributions.
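That observation can be checked numerically before the derivation below: fit Gaussians to the squared norms of prior samples and of linear-interpolation midpoints, and evaluate the closed-form KL divergence between the two fits for growing d. The uniform prior on [−1, 1]^d, the sample size and the direction of the KL divergence are assumptions made for this illustration.

import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    # KL( N(mu0, var0) || N(mu1, var1) ) between the two Gaussian fits.
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

rng = np.random.default_rng(0)
n = 20_000
for d in (10, 100, 1000):
    z = rng.uniform(-1.0, 1.0, size=(n, d))
    y = 0.5 * (rng.uniform(-1.0, 1.0, size=(n, d)) + rng.uniform(-1.0, 1.0, size=(n, d)))
    nz, ny = np.sum(z ** 2, axis=1), np.sum(y ** 2, axis=1)
    kl = kl_gauss(ny.mean(), ny.var(), nz.mean(), nz.var())
    print(f"d={d:5d}  mean ||z||^2={nz.mean():7.1f}  mean ||y||^2={ny.mean():7.1f}  KL approx {kl:7.1f}")

The estimated divergence grows roughly linearly in d, which is what the analysis below makes precise.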
Suppose our latent space has a prior with DISPLAYFORM0 In this case, we can look at the squared norm DISPLAYFORM1 From the Central Limit Theorem (CLT), we know that as d → ∞, DISPLAYFORM2 in distribution. Thus, assuming d is large enough such that we are close to convergence, we can approximate the distribution of z 2 as N (dµ Z 2, dσ 2 Z 2). In particular, this implies that almost all points lie on a relatively thin spherical shell, since the mean grows as O(d) whereas the standard deviation grows only as O(DISPLAYFORM3 We note that this property is well known for i.i.d Gaussian entries (see e.g. Ex. 6.14 in MacKay FORMULA4). For Uniform distribution on the hypercube it is also well known that the mass is concentrated in the corner points (which is consistent with the claim here since the corner points lie on a sphere).Now consider an operator such as the midpoint of linear interpolation, y = In this case, we can compute: DISPLAYFORM4 Thus, the distribution of y 2 can be approximated with N (DISPLAYFORM5 . Therefore, y also mostly lies on a spherical shell, but with a different radius than z. In fact, the shells will intersect at regions which have a vanishing probability for large d. In other words, when looking at the squared norm y 2, y 2 is a (strong) outlier with respect to the distribution of z 2.This can be quantified in terms of KL-Divergence: DISPLAYFORM6 so D KL (z 2, y 2) grows linearly with the dimensions d. Proof. We will show it for the Kantorovich problem, the Monge version is similar. Published as a conference paper at ICLR 2019Starting from (KP), we compute DISPLAYFORM0 where the inequality in is due to each term being minimized separately. DISPLAYFORM1 where p X,Y has marginals p X and p Y. In this case P d (X, Y) is a subset of all joints p x,y with marginals p x and p y, where the pairs (DISPLAYFORM2 where the inequality in is due to minimizing over a smaller set. Since the two inequalities above are in the opposite direction, equality must hold for all of the expressions above, in particular: DISPLAYFORM3 Thus, (KP) and (KP-1-D) equal up to a constant, and minimizing one will minimize the other. Therefore the minimization of the former can be done over p X,Y with p x,y (x, y) = DISPLAYFORM4 In the next sections, we illustrate how to compute the matched operations for a few examples, in particular for linear interpolation and vicinity sampling, using a uniform or a Gaussian prior. We picked the examples where we can analytically compute the uniform transport map, but note that it is also easy to compute F DISPLAYFORM0 and (F Y (y)) numerically, since one only needs to estimate CDFs in one dimension. Since the components of all random variables in these examples are i.i.d, for such a random vector x we will implicitly write X for a scalar random variable that has the distribution of the components of x. When computing the monotone transport map T mon X→Y, the following Lemma is helpful. Lemma 1 (Theorem 2.5 in BID17). Suppose a mapping g(x) is non-decreasing and maps a continuous distribution p X to a distribution p Y, i.e. DISPLAYFORM1 then g is the monotone transport map T mon X→Y.According to Lemma 1, an alternative way of computing T mon X→Y is to find some g that is nondecreasing and transforms p X to p Y. Suppose z has uniform components Z ∼ Uniform(−1, 1). In this case, p Z (z) = 1/2 for −1 < z < 1. Now let y t = tz 1 + (1 − t)z 2 denote the linear interpolation between two points z 1, z 2, with component distribution p Yt. 
Due to symmetry we can assume that t > 1/2, since p Yt = p Y1−t. We then obtain p Yt as the convolution of p tZ and p (1−t)Z, i.e. p Yt = p tZ * p (1−t)Z. First we note that p tZ = 1/(2t) for −t < z < t and p (1−t)Z = 1/(2(1 − t)) for −(1 − t) < z < 1 − t. We can then compute:p Yt (y) = (p tZ * p (1−t)Z )(y)= 1 2(1 − t)(2t) DISPLAYFORM0 if y < −1 y + 1 if − 1 < y < −t + (1 − t) 2 − 2t if − t + (1 − t) < y < t − (1 − t) −y + 1 if t − (1 − t) < y < 1 0 if 1 < yThe CDF F Yt is then obtained by computing if 1 − 2t < y < 2t − 1 2(1 − t)(3t − 1) + (− 1 2 y 2 + y + 1 2 (2t − 1) 2 − (2t − 1)) if 2t − 1 < y < 1 2(1 − t)(2t) if 1 < ySince p Z (z) = 1/2 for |z| < 1, we have F Z (z) =
We propose a framework for modifying the latent space operations such that the distribution mismatch between the resulting outputs and the prior distribution the generative model was trained on is fully eliminated.
330
scitldr
The problem of building a coherent and non-monotonous conversational agent with proper discourse and coverage is still an area of open research. Current architectures only take care of semantic and contextual information for a given query and fail to completely account for syntactic and external knowledge which are crucial for generating responses in a chit-chat system. To overcome this problem, we propose an end to end multi-stream deep learning architecture which learns unified embeddings for query-response pairs by leveraging contextual information from memory networks and syntactic information by incorporating Graph Convolution Networks (GCN) over their dependency parse. A stream of this network also utilizes transfer learning by pre-training a bidirectional transformer to extract semantic representation for each input sentence and incorporates external knowledge through the neighbourhood of the entities from a Knowledge Base (KB). We benchmark these embeddings on next sentence prediction task and significantly improve upon the existing techniques. Furthermore, we use AMUSED to represent query and responses along with its context to develop a retrieval based conversational agent which has been validated by expert linguists to have comprehensive engagement with humans. With significant advancements in Automatic speech recognition systems and the field of natural language processing, conversational agents have become an important part of the current research. It finds its usage in multiple domains ranging from self-driving cars (b) to social robots and virtual assistants (a). Conversational agents can be broadly classified into two categories: a task oriented chat bot and a chit-chat based system respectively. The former works towards completion of a certain goal and are specifically designed for domain-specific needs such as restaurant reservations , movie recommendation , flight ticket booking systems ) among many others. The latter is more of a personal companion and engages in human-computer interaction for entertainment or emotional companionship. An ideal chit chat system should be able to perform non-monotonous interesting conversation with context and coherence. Current chit chat systems are either generative or retrieval based in nature. The generative ones tend to generate natural language sentences as responses and enjoy scalability to multiple domains without much change in the network. Even though easier to train, they suffer from error-prone responses (b). IR based methods select the best response from a given set of answers which makes them error-free. But, since the responses come from a specific dataset, they might suffer from distribution bias during the course of conversation. A chit-chat system should capture semantic, syntactic, contextual and external knowledge in a conversation to model human like performance. Recent work by proposed a memory network based approach to encode contextual information for a query while performing generation and retrieval later. Such networks can capture long term context but fail to encode relevant syntactic information through their model. Things like anaphora resolution are properly taken care of if we incorporate syntax. Our work improves upon previous architectures by creating enhanced representations of the conversation using multiple streams which includes Graph Convolution networks , Figure 1: Overview of AMUSED. 
AMUSED first encodes each sentence by concatenating embeddings (denoted by ⊕) from Bi-LSTM and Syntactic GCN for each token, followed by word attention. The sentence embedding is then concatenated with the knowledge embedding from the Knowledge Module (Figure 2). The query embedding passes through the Memory Module (Figure 3) before being trained using triplet loss. Please see Section 4 for more details. transformers and memory networks in an end to end setting, where each component captures conversation relevant information from queries, subsequently leading to better responses. Our contribution for this paper can be summarized as follows: • We propose AMUSED, a novel multi stream deep learning model which learns rich unified embeddings for query response pairs using triplet loss as a training metric. • We perform multi-head attention over query-response pairs which has proven to be much more effective than unidirectional or bi-directional attention. • We use Graph Convolutions Networks in a chit-chat setting to incorporate the syntactical information in the dialogue using its dependency parse. • Even with the lack of a concrete metric to judge a conversational agent, our embeddings have shown to perform interesting response retrieval on Persona-Chat dataset. The task of building a conversational agent has gained much traction in the last decade with various techniques being tried to generate relevant human-like responses in a chit-chat setting. Previous modular systems had a complex pipeline based structure containing various hand-crafted rules and features making them difficult to train. This led to the need of simpler models which could be trained end to end and extended to multiple domains. proposed a simple sequence to sequence model that could generate answers based on the current question, without needing extensive feature engineering and domain specificity. However, the responses generated by this method lacked context. To alleviate this problem, introduced a dynamic-context generative network which is shown to have improved performance on unstructured Twitter conversation dataset. To model complex dependencies between sub-sequences in an utterance, proposed a hierarchical latent variable encoder-decoder model. It is able to generate longer outputs while maintaining context at the same time. Reinforcement learning based approaches have also been deployed to generate interesting responses (a) and tend to possess unique conversational styles . With the emergence of a number of large datasets, retrieval methods have gained a lot of popularity. Even though the set of responses are limited in this scenario, it doesn't suffer from the problem of generating meaningless responses. A Sequential Matching Network proposed by performs word matching of responses with the context before passing their vectors to a RNN. Addition of external information along with the current input sentence and context improves the system as is evident by incorporating a large common sense knowledge base into an end to end conversational agent . To maintain diversity in the responses, suggests a method to combine a probabilistic model defined on item-sets with a seq2seq model. Responses like'I am fine' can make conversations monotonous; a specificity controlled model (b) in conjunction with seq2seq architecture overcomes this problem. These networks helps solve one or the other problem in isolation. 
To maintain proper discourse in the conversation, context vectors are passed together with input query vector into a deep learning model . A context modelling approach which includes concatenation of dialogue history has also been tried . However, the success of memory networks on Question-Answering task opened the door for its further use in conversational agents. used the same in a task oriented setting for restaurant domain and reported accuracies close to 96% in a full dialogue scenario. Zhang et al. (2018c) further used these networks in a chit chat setting on Persona-Chat dataset and came up with personalized responses. In our network, we make use of Graph Convolution Networks , which have been found to be quite effective for encoding the syntactic information present in the dependency parse of sentences. External Knowledge Bases (KBs) have been exploited in the past to improve the performances in various tasks (a; b;). The relation based strategy followed by creates a KB from dialogue itself, which is later used to improve Question-Answering .; have used KBs to generate more informative responses by using properties of entities in the graph. focused more on introducing knowledge from semantic-nets rather than general KBs. GCN for undirected graph: For an undirected graph G = (V, E), where V is the set of n vertices and E is the set of edges, the representation of the node v is given by x v ∈ R m, ∀v ∈ V. The output hidden representation h v ∈ R d of the node after one layer of GCN is obtained by considering only the immediate neighbors of the node as given by. To capture the multi-hop representation, GCN layers can be stacked on top of each other. GCN for labeled directed graph: For a directed graph G = (V, E), where V is the set of vertices we define the edge set E as a set of tuples (u, v, l(u, v) ) where there is an edge having label l(u, v) between nodes u and v. proposed the assumption that information doesn't necessarily propagate in certain directions in the directed edge, therefore, we add tuples having inverse edges (v, u, l(u, v) −1 ) as well as self loops (u, u, Ω), where Ω denotes self loops, to our edge set E to get an updated edge set E. The representation of a node x v, after the k th layer is given as: are trainable edge-label specific parameters for the layer k, N (v) denotes the set of all vertices that are immediate neighbors of v and f is any non-linear activation function (e.g., ReLU: Since we are obtaining the dependency graph from Stanford CoreNLP, some edges can be erroneous. Edgewise gating (; helps to alleviate this problem by decreasing the effects of such edges. For this, each edge (u, v, l(u, v) ) is assigned a score which is given by: u,v) ) ∈ R are trained and σ denotes the sigmoid function. Incorporating this, the final GCN embedding for a node v after n th layer is given as: This section provides details of three main components of AMUSED which can broadly be classified into Syntactic, Knowledge and Memory Module. We hypothesize that each module captures information relevant for learning representations, for a query-response pair in a chit-chat setting. Suppose that we have a dataset D consisting of a set of conversations d 1, d 2,..., d C where d c represents a single full length conversation consisting of multiple dialogues. A conversation d c is given by a set of tuples (q 1, r 1), (q 2, r 2),..., (q n, r n) where a tuple (q i, r i) denotes the query and response pair for a single turn. 
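Returning briefly to the gated update above, the following is a minimal sketch of one syntactic GCN layer with per-edge-type weights and edge-wise gates over the three edge types used here (forward, backward and self-loop). The dense adjacency representation, the tensor shapes and the PyTorch usage are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class GatedGCNLayer(nn.Module):
    """One GCN layer with label-specific weights and scalar edge-wise gates."""
    def __init__(self, in_dim, out_dim, num_edge_types=3):   # forward, backward, self-loop
        super().__init__()
        self.w = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(num_edge_types)])
        self.gate = nn.ModuleList([nn.Linear(in_dim, 1) for _ in range(num_edge_types)])

    def forward(self, h, adj):
        # h:   (num_tokens, in_dim) token representations, e.g. Bi-GRU outputs
        # adj: (num_edge_types, num_tokens, num_tokens); adj[l, v, u] = 1 for a type-l edge u -> v
        out = 0.0
        for l in range(len(self.w)):
            g = torch.sigmoid(self.gate[l](h))            # scalar gate per source token and label
            msg = self.w[l](h) * g                        # gated, label-transformed messages
            out = out + adj[l] @ msg                      # aggregate over type-l neighbours
        return torch.relu(out)

# Toy usage: 5 tokens, 8-dimensional inputs, self-loop edges only.
h = torch.randn(5, 8)
adj = torch.zeros(3, 5, 5); adj[2] = torch.eye(5)
print(GatedGCNLayer(8, 16)(h, adj).shape)                 # torch.Size([5, 16])

With the notation for conversations fixed above, the model components can now be described in terms of these query-response tuples.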
The context for a given query q i ∀ i ≥ 2 is defined by a list of sentences l: [q 1, r 1, ..., q i−1, r i−1]. We need to find the best response r i from the set of all responses, R. The training set D for AMUSED is defined by set of triplets (q i, r i, n i) ∀ 1 ≤ i ≤ N where N is the total number of dialogues and n i is a negative response randomly chosen from set R. Syntax information from dependency trees has been successfully exploited to improve a lot of Natural Language Processing (NLP) tasks (a;). In dialog agents, where anaphora resolution as well as sentence structure influences the responses, it finds special usage. A Bi-GRU followed by a syntactic GCN is used in this module. Each sentence s from the input triplet is represented with a list of k-dimensional GloVe embedding corresponding to each of the m tokens in the sentence. The sentence representation S ∈ R m×k is then passed to a Bi-GRU to obtain the representation S gru ∈ R m×dgru, where d gru is the dimension of the hidden state of Bi-GRU. This contextual encoding captures the local context really well, but fails to capture the long range dependencies that can be obtained from the dependency trees. We use GCN to encode this syntactic information. Stanford CoreNLP ) is used to obtain the dependency parse for the sentence s. Giving the input as S gru, we use GCN Equation 1, to obtain the syntactic embedding S gcn. , we only use three edge labels, namely forward-edge, backward-edge and self-loop. This is done because incorporating all the edge labels from the dependency graph heavily over-parameterizes the model. The final token representation is obtained by concatenating the contextual Bi-GRU representation h gru and the syntactic GCN representation h gcn. A sentence representation is then obtained by passing the tokens through a layer of word attention as used by (b;), which is concatenated with the embedding obtained from the Knowledge Module (described in Section 4.2) to obtain the final sentence representation h concat. The final sentence representation h concat of the query is then passed into Knowledge Module. It is further subdivided into two components: a pre-trained Transformer model for next dialogue prediction problem and a component to incorporate information from external Knowledge Bases (KBs). The next dialogue prediction task is described as follows: For each query-response pair in the dataset, we generate a positive sample (q, r) and a negative sample (q, n) where n is randomly chosen from the set of responses R in dataset D. , a training example is defined by concatenating q and r which are separated by a delimiter || and is given by [q||r]. The problem is to classify if the next sentence is a correct response or not. A pre-trained BERT model is used to further train a binary classifier for the next dialogue prediction task as described above. After the model is trained, the pre-final layer is considered and the vector from the special cls token is chosen as the sentence representation. The representation thus obtained would have a tendency to be more inclined towards its correct positive responses. Multi-head attention in the transformer network, along with positional embeddings during training, helps it to learn intra as well as inter sentence dependencies . The input query sentence is then passed from this network to obtain the BERT embedding, h bert. In our day-to-day conversations, to ask succinct questions, or to keep the conversation flowing, we make use of some knowledge. 
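A minimal sketch of such a next-dialogue classifier built on an off-the-shelf BERT encoder is shown below. The Hugging Face transformers calls are standard, but the classification head, the choice of the final-layer [CLS] vector (the description above takes it from the pre-final layer) and all hyperparameters are assumptions made for illustration.

import torch.nn as nn
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

class NextDialogueClassifier(nn.Module):
    """Binary classifier: is the second sentence a valid response to the first?"""
    def __init__(self, hidden=768):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        self.head = nn.Linear(hidden, 2)

    def forward(self, query, response):
        # Pack the pair as [q || r]; the tokenizer inserts [CLS] q [SEP] r [SEP].
        enc = tokenizer(query, response, return_tensors="pt", truncation=True, padding=True)
        out = self.encoder(**enc)
        cls = out.last_hidden_state[:, 0]      # [CLS] vector, reused later as h_bert
        return self.head(cls), cls

model = NextDialogueClassifier()
logits, h_bert = model(["do you like music ?"], ["yes , i listen to rock every day ."])
print(logits.shape, h_bert.shape)              # torch.Size([1, 2]) torch.Size([1, 768])

This stream captures the semantic fit between a query and a candidate response; everyday conversation additionally draws on knowledge about the entities being discussed.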
For example, if someone remarks that they like rock music, we can ask a question if they have listened to Nirvana. It can be done only if we know that Nirvana plays rock music. To incorporate such external information, we can make use of existing Knowledge Bases like Wikipedia, Freebase and Wikidata (Vrandečić & Krötzsch, 2014). Entities in these KBs are linked to each other using relations. We can expand the information we have about an entity by looking at its linked entities. Multiple hops of the Knowledge Graph (KG) can be used to expand knowledge. In AMUSED, we do this by passing the input query into Stanford CoreNLP to obtain entity linking information to Wikipedia. Suppose the Wikipedia page of an entity e contains links to the set of entities E. We ignore relation information and only consider one-hop direct neighbors of e. To obtain a KB-expanded embedding h kb of the input sentence, we take the average of GloVE embeddings of each entity in E. In place of Wikipedia, bigger knowledge bases like Wikidata, as well as relation information, can be used to improve KB embeddings. We leave that for future work. For effective conversations, it is imperative that we form a sense from the dialogues that have already happened. A question about' Who is the president of USA' followed by' What about France' should be self-containing. This dialogue context is encoded using a memory network . The memory network helps to capture context of the conversation by storing dialogue history i.e. both question and responses. The query representation, h concat is passed to the memory network, along with BERT embeddings h bert of the context, from the Knowledge Module (Section 4.2). In AMUSED, memory network uses supporting memories to generate the final query representation (h concat). Supporting memories contains input (m i) and output (c i) memory cells . The incoming query q i as well as the history of dialogue context l: [(q 1, r 1),.., (q i−1, r i−1)] is fed as input. The memory cells are populated using the BERT representations of context sentences l as follows: , the incoming query embedding along with input memories is used to compute relevance of context stories as a normalized vector of attention weights as a i = (< m i, h concat >), where < a, b > represents the inner product of a and b. The response from the output memory, o, is then generated as: o = i a i c i. The final output of the memory cell, u is obtained by adding o to h concat. To capture context in an iterative manner, memory cells are stacked in layers which are called as hops. The output of the memory cell after the k th hop is given by The memory network performs k such hops and the final representation h concat is given by sum of o k and u k. Triplet loss has been successfully used for face recognition . Our insight is that traditional loss metrics might not be best suited for a retrieval-based task with a multitude of valid responses to choose from. We define a Conversational Euclidean Space where the representation of a sentence is driven by its context in the dialogue along with its syntactic and semantic information. We have used this loss to bring the query and response representations closer in the conversational space. Questions with similar answers should be closer to each other and the correct response. An individual data point is a triplet which consists of a query (q i), its correct response (r i) and a negative response (n i) selected randomly. 
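The memory read described above amounts to a few lines; the sketch below uses NumPy, ties the input and output memory cells and fixes the number of hops, all of which are simplifying assumptions rather than the exact configuration used in AMUSED.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_hops(h_concat, context_embs, num_hops=3):
    """Memory-network read: h_concat is the query embedding, context_embs are the
    BERT embeddings of the dialogue history used to populate the memory cells."""
    m = np.stack(context_embs)       # input memory cells  m_i
    c = np.stack(context_embs)       # output memory cells c_i (tied here for simplicity)
    u = h_concat
    for _ in range(num_hops):
        a = softmax(m @ u)           # attention weights a_i from <m_i, u>
        o = a @ c                    # response from the output memory, o = sum_i a_i c_i
        u = o + u                    # input to the next hop
    return u                         # final context-aware query representation

rng = np.random.default_rng(0)
history = [rng.normal(size=64) for _ in range(6)]         # six past sentences, 64-d embeddings
print(memory_hops(rng.normal(size=64), history).shape)    # (64,)

The resulting representation is what enters the triplet objective defined next.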
We need to learn their embeddings φ(q i) = h where α is the margin hyper-parameter used to separate negative and positive pairs. If I be the set of triplets, N the number of triplets and w the parameter set, then, triplet loss (L) is defined as: We use this dataset to build and evaluate the chit-chat system. Persona-Chat (c) is an open domain dataset on personal conversations created by randomly pairing two humans on Amazon Mechanical Turk. The paired crowd workers converse in a natural manner for 6 − 12 turns. This made sure that the data mimic normal conversations between humans which is very crucial for building such a system. This data is not limited to social media comments or movie dialogues. It contains 9907 training conversations and 1000 conversations each for testing and validation. There are a total of 131, 438 query-response pairs with a vocab size of 19262 in the dataset. We use it for training AMUSED as it provides consistent conversations with proper context. DSTC: Dialogue State Tracking Challenge dataset contains conversations for restaurant booking task. Due to its task oriented nature, it doesn't need an external knowledge module, so we train it only using memory and syntactic module and test on an automated metric. We further use Multi-Genre Natural Language Inference and Microsoft Research Paraphrase Corpus to fine-tune parts of the network i.e; Knowledge Module. It is done because these datasets resemble the core nature of our problem where in we want to predict the correctness of one sentence in response to a particular query. Pre-training BERT: Before training AMUSED, the knowledge module is processed by pre-training a bidirectional transformer network and extracting one hop neighborhood entities from Wikipedia KB. We use the approach for training as explained in Section 4.2.1. There are 104224 positive training and 27214 validation query-response pairs from Persona Chat. We perform three different operations: a) Equal sampling: Sample equal number of negative examples from dataset, b) Oversampling: Sample double the negatives to make training set biased towards negatives and c) Under sampling: Sample 70% of negatives to make training set biased towards positives. Batch size and maximum sequence length are 32 and 128 respectively. We fine-tune this next sentence prediction model with MRPC and MNLI datasets which improves the performance. Training to learn Embeddings: AMUSED requires triplets to be trained using triplet loss. A total of 131438 triplets of the form (q, r, n) are randomly split in 90:10 ratio to form training and validation set. The network is trained with a batch size of 64 and dropout of 0.5. Word embedding size is chosen to be 50. Bi-GRU and GCN hidden state dimensions are chosen to be 192 and 32 respectively. One layer of GCN is employed. Validation loss is used as a metric to stop training which converges after 50 epochs using Adam optimizer at 0.001 learning rate. As a retrieval based model, the system selects a response from the predefined answer set. The retrieval unit extracts embedding (h concat) for each answer sentence from the trained model and stores it in a representation matrix which will be utilized later during inference. First, a candidate subset A is created by sub-sampling a set of responses having overlapping words with a given user query. Then, the final output is retrieved on the basis of cosine similarity between query embedding h concat and the extracted set of potential responses (A). 
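The retrieval unit just described can be sketched as below; the word-overlap candidate filter and cosine scoring follow the text, while the whitespace tokenization and data structures are assumptions made for illustration.

import numpy as np

def retrieve(query, query_emb, responses, response_embs, top_k=1):
    """Build a candidate subset by word overlap, then rank it by cosine similarity
    between the query embedding h_concat and each candidate response embedding."""
    q_words = set(query.lower().split())
    cand = [i for i, r in enumerate(responses) if q_words & set(r.lower().split())]
    if not cand:
        cand = list(range(len(responses)))          # fall back to scoring every response
    q = query_emb / np.linalg.norm(query_emb)
    scored = [(i, float(response_embs[i] @ q / np.linalg.norm(response_embs[i]))) for i in cand]
    scored.sort(key=lambda s: s[1], reverse=True)
    return scored[:top_k]

rng = np.random.default_rng(0)
responses = ["i love rock music", "my cat is asleep", "nirvana is my favourite band"]
embs = [rng.normal(size=32) for _ in responses]
print(retrieve("do you like rock ?", rng.normal(size=32), responses, embs))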
The response with the highest score is then labelled as the final answer and the response embedding is further added into the memory to take care of context. The model ing from oversampling method beats its counterparts by more than 3% in accuracy. It clearly indicates that a better model is one which learns to distinguish negative examples well. The sentence embeddings obtained through this model is further used for lookup in the Knowledge Module (Section 4.2) in AMUSED. We use two different automated metrics to check the effectiveness of the model and the queryresponse representations that we learnt. Next Dialogue Prediction Task: Various components of AMUSED are analysed for their performance on next dialogue prediction task. This task tell us that, given two sentences (a query and a response) and the context, whether second sentence is a valid response to the first sentence or not. Embeddings for queries and responses are extracted from our trained network and then multiple operations which include a) Concatenation, b) Element wise min and c) Subtraction are performed on those before passing them to a binary classifier. A training example consists of embeddings of two sentences from a (q, a) or (q, n) pair which are created in a similar fashion as in Section 4.2.1. Accuracy on this binary classification problem has been used to select the best network. Furthermore, we perform ablation studies using different modules to understand the effect of each component in the network. A 4 layer neural network with ReLU activation in its hidden layers and softmax in the final layer is used as the classifier. External knowledge in conjunction with memory and GCN module has the best accuracy when embeddings of query and response are concatenated together. A detailed study of performance of various components over these operations is shown in Table 1. Precision@1: This is another metric used to judge the effectiveness of our network. It is different from the next sentence prediction task accuracy. It measures that for n trials, the number of times a relevant response is reported with the highest confidence value. Table 2 reports a comparative study of this metric on 500 trials conducted for AMUSED along with for other methods. DSTC dataset is also evaluated on this metric without the knowledge module as explained in Section 5.1 Looking for exact answers might not be a great metric as many diverse answers might be valid for a particular question. So, we must look for answers which are contextually relevant for that query. Overall, we use next sentence prediction task accuracy to choose the final model before retrieval. There is no concrete metric to evaluate the performance of an entire conversation in a chit-chat system. Hence, human evaluation was conducted using expert linguists to check the quality of conversation. They were asked to chat for 7 turns and rate the quality of responses on a scale of 1−10 where a higher score is better. Similar to Zhang et al. (2018c), there were multiple parameters to rate the chat based on coherence, context awareness and non-monotonicity to measure various factors that are essential for a natural dialogue. By virtue of our network being retrieval based, we don't need to judge the responses based on their structural correctness as this will be implicit. To monitor the effect of each neural component, we get it rated by experts either in isolation or in conjunction with other components. 
Such a study helps us understand the impact of different modules on a human based conversation. Dialogue system proposed by Zhang et al. (2018c) is also reproduced and reevaluated for comparison. From Table 3 we can see that human evaluation follows a similar trend as the automated metric, with the best rating given to the combined architecture. In the paper, we propose AMUSED, a multi-stream architecture which effectively encodes semantic information from the query while properly utilizing external knowledge for improving performance on natural dialogue. It also employs GCN to capture long-range syntactic information and improves context-awareness in dialogue by incorporating memory network. Through our experiments and using different metrics, we demonstrate that learning these rich representations through smart training (using triplets) would improve the performance of chit-chat systems. The ablation studies show the importance of different components for a better dialogue. Our ideas can easily be extended to various conversational tasks which would benefit from such enhanced representations.
This paper provides a multi-stream end-to-end approach to learn unified embeddings for query-response pairs in dialogue systems by leveraging contextual, syntactic, semantic and external information together.
331
scitldr
We examine techniques for combining generalized policies with search algorithms to exploit the strengths and overcome the weaknesses of each when solving probabilistic planning problems. The Action Schema Network (ASNet) is a recent contribution to planning that uses deep learning and neural networks to learn generalized policies for probabilistic planning problems. ASNets are well suited to problems where local knowledge of the environment can be exploited to improve performance, but may fail to generalize to problems they were not trained on. Monte-Carlo Tree Search (MCTS) is a forward-chaining state space search algorithm for optimal decision making which performs simulations to incrementally build a search tree and estimate the values of each state. Although MCTS can achieve state-of-the-art when paired with domain-specific knowledge, without this knowledge, MCTS requires a large number of simulations in order to obtain reliable estimates in the search tree. By combining ASNets with MCTS, we are able to improve the capability of an ASNet to generalize beyond the distribution of problems it was trained on, as well as enhance the navigation of the search space by MCTS. Planning is the essential ability of a rational agent to solve the problem of choosing which actions to take in an environment to achieve a certain goal. This paper is mainly concerned with combining the advantages of forward-chaining state space search through UCT BID11, an instance of Monte-Carlo Tree Search (MCTS) BID5, with the domain-specific knowledge learned by Action Schema Networks (ASNets) BID18 ), a domain-independent learning algorithm. By combining UCT and ASNets, we hope to more effectively solve planning problems, and achieve the best of both worlds. The Action Schema Network (ASNet) is a recent contribution in planning that uses deep learning and neural networks to learn generalized policies for planning problems. A generalized policy is a policy that can be applied to any problem from a given planning domain. Ideally, this generalized policy is able to reliably solve all problems in the given domain, although this is not always feasible. ASNets are well suited to problems where "local knowledge of the environment can help to avoid certain traps" BID18. In such problems, an ASNet can significantly outperform traditional planners that use heuristic search. Moreover, a significant advantage of ASNets is that a network can be trained on a limited number of small problems, and generalize to problems of any size. However, an ASNet is not guaranteed to reliably solve all problems of a given domain. For example, an ASNet could fail to generalize to difficult problems that it was not trained on -an issue often encountered with machine learning algorithms. Moreover, the policy learned by an ASNet could be suboptimal due to a poor choice of hyperparameters that has led to an undertrained or overtrained network. Although our discussion is closely tied to ASNets, our contributions are more generally applicable to any method of learning a (generalized) policy. Monte-Carlo Tree Search (MCTS) is a state-space search algorithm for optimal decision making which relies on performing Monte-Carlo simulations to build a search tree and estimate the values of each state BID5 ). As we perform more and more of these simulations, the state estimates become more accurate. 
MCTS-based game-playing algorithms have often achieved state-of-the-art performance when paired with domain-specific knowledge, the most notable being AlphaGo (Silver et al. 2016). One significant limitation of vanilla MCTS is that we may require a large number of simulations in order to obtain reliable estimates in the search tree. Moreover, because simulations are random, the search may not be able to sense that certain branches of the tree will lead to sub-optimal outcomes. We are concerned with UCT, a variant of MCTS that balances the trade-off between exploration and exploitation. However, our work can be more generally used with other search algorithms. Combining ASNets with UCT achieves three goals. Learn what we have not learned: improve the capability of an ASNet to generalize beyond the distribution of problems it was trained on, and of UCT to bias the exploration of actions to those that an ASNet wishes to exploit. Improve on sub-optimal learning: obtain reasonable evaluation-time performance even when an ASNet was trained with suboptimal hyperparameters, and allow UCT to converge to the optimal action in a smaller number of trials. Be robust to changes in the environment or domain: improve performance when the test environment differs substantially from the training environment. The rest of the paper is organized as follows. Section 2 formalizes probabilistic planning as solving a Stochastic Shortest Path problem and gives an overview of ASNets and MCTS along with its variants. Section 3 defines a framework for Dynamic Programming UCT (DP-UCT) BID10. Next, Section 4 examines techniques for combining the policy learned by an ASNet with DP-UCT. Section 5 then presents and analyzes our . Finally, Section 6 summarizes our contributions and discusses related and future work. A Stochastic Shortest Path problem (SSP) is a tuple S, s 0, G, A, P, C BID2 where S is the finite set of states, s 0 ∈ S is the initial state, G ⊆ S is the finite set of goal states, A is the finite set of actions, P (s | a, s) is the probability that we transition into s after applying action a in state s, and C(s, a) ∈ (0, ∞) is the cost of applying action a in state s. A solution to an SSP is a stochastic policy π: A × S →, where π(a | s) represents the probability action a is applied in the current state s. An optimal policy π *, is a policy that selects actions which minimize the expected cost of reaching a goal. For SSPs, there always exists an optimal policy that is deterministic which may be obtained by finding the fixed-point of the state-value function V * known as the Bellman optimality equation BID2, and the actionvalue function Q *. That is, in the state s, we obtain π * by finding the action a that minimizes Q * (s, a). DISPLAYFORM0 We handle dead ends using the finite-penalty approach BID12. That is, we introduce a fixed dead-end penalty D ∈ (0, ∞) which acts as a limit to bound the maximum expected cost to reach a goal, and a give-up action which is selected if the expected cost is greater than or equal to D. The ASNet is a neural network architecture that exploits deep learning techniques in order to learn generalized policies for probabilistic planning problems BID18 ). An ASNet consists of alternating action layers and proposition layers (Figure 1), where the first and last layer are always action layers. The output of the final layer is a stochastic policy π: DISPLAYFORM0 An action layer is composed of a single action module for each ground action in the planning problem. 
Similarly, a proposition layer is composed of a single proposition module for each ground proposition in the problem. These modules are sparsely connected, ensuring that only the relevant action modules in one layer are connected to a proposition Figure 1: ASNet with 1 hidden layer BID18 module in the next layer. An action module in one layer is connected to a proposition module in the next layer only if the ground proposition appears in the preconditions or effects of a ground action. Similarly, a proposition module in one layer is connected to an action module in the next layer only if the ground proposition appears in the preconditions or effects of the relevant ground action. Since all ground actions instantiated from the same action schema will have the same structure, we can share the same set of weights between their corresponding action modules in a single action layer. Similarly, weights are shared between proposition modules in a single proposition layer that correspond to the same predicate. It is easy to see that by learning a set of common weights θ for each action schema and predicate, we can scale an ASNet to any problem of the same domain. ASNets only have a fixed number of layers, and are thus unable to solve all problems in domains that require arbitrarily long chains of reasoning about action-proposition relationships. Moreover, like most machine learning algorithms, an ASNet could fail to generalize to new problems if not trained properly. This could be due to a poor choice of hyperparameters, overfitting to the problems the network was trained on, or an unrepresentative training set. MCTS is a state-space search algorithm that builds a search tree in an incremental manner by performing trials until we reach some computational budget (e.g. time, memory) at each decision step BID5, at which point MCTS returns the action that gives the best estimated value. A trial is composed of four phases. Firstly, in the selection phase, MCTS recursively selects nodes in the tree using a child selection policy until it encounters an unexpanded node, i.e. a node without any children. Next, in the expansion phase, one or more child nodes of the leaf node are created in the search tree according to the available actions. Now, in the simulation phase, a simulation of the scenario is played-out from one of the new child nodes until we reach a goal or dead end, or exceed the computational budget. Finally, in the backpropagation phase, the of this trial is backpropagated through the selected nodes in the tree to update their estimated values. The updated estimates affect the child selection policy in future trials. Upper Confidence Bounds applied to Trees (UCT) BID11 ) is a variant of MCTS that addresses the trade-off between the exploration of nodes that have not been visited often, and the exploitation of nodes that currently have good state estimates. UCT treats the choice of a child node as a multi-armed bandit problem by selecting the node which maximizes the Upper Confidence Bound 1 (UCB1) term, which we detail in the selection phase in Section 3.1.Trial-Based Heuristic Tree Search (THTS) BID10 ) is an algorithmic framework that generalizes MCTS, dynamic programming, and heuristic search planning algorithms. In a THTS algorithm, we must specify five ingredients: action selection, backup function, heuristic function, outcome selection and the trial length. 
We discuss these ingredients and a modified version of THTS to additionally support UCT with ASNets in Section 3.Using these ingredients, BID10 create three new algorithms, all of which provide superior theoretical properties over UCT: MaxUCT, Dynamic Programming UCT (DP-UCT) and UCT*. DP-UCT and its variant UCT*, which use Bellman backups, were found to outperform original UCT and MaxUCT. Because of this, we will focus on DP-UCT, which we formally define in the next section. Our framework is a modification of DP-UCT from THTS. It is designed for SSPs with dead ends instead of finite horizon MDPs and is focused on minimizing the cost to a goal rather than maximizing rewards. It also introduces the simulation function, a generalization of random rollouts used in MCTS.We adopt the representation of alternating decision nodes and chance nodes in our search tree, as seen in THTS. DISPLAYFORM0 0 is the number of visits to the node in the first k trials, V k ∈ R + 0 is the state-value estimate based on the first k trials, and {n 1, . . ., n m} are the successor nodes (i.e. children) of n d. A chance node n c is a tuple s, a, C k, Q k, {n 1, . . ., n m}, where additionally, a ∈ A is the action, and Q k is the action-value estimate based on the first k trials. We use V k (n d) to refer to the state-value estimate of a decision node n d, a(n c) to refer to the action of a chance node n c, and so on for all the elements of n d and n c. Additionally, we use S(n) to represent the successor nodes {n 1, . . ., n m} of a search node n, and we also employ the shorthand DISPLAYFORM1 Initially, the search tree contains a single decision node n d with s(n d) = s 0, representing the initial state of our problem. UCT is described as an online planning algorithm, as it interleaves planning with execution. At each decision step, UCT returns an action either when a time cutoff is reached, or a maximum number of trials is performed. UCT then selects the chance node n c from the children of the root decision node that has the highest action-value estimate, Q k (n c), and applies its action a(n c). We sample a decision node n d from S(n c) based on the transition probabilities P (n d | n c) and set n d to be the new root of the tree. A single trial under our framework consists of the selection, expansion, simulation and backup phase. Selection Phase. As described in THTS, in this phase we traverse the explicit nodes in the search tree by alternating between action selection for decision nodes, and outcome selection for chance nodes until we reach an unexpanded decision node n d, which we call the tip node of the trial. Action selection is concerned with selecting a child chance node n c from the successors S(n d) of a decision node n d. UCT selects the child chance node that maximizes the UCB1 term, i.e. arg max nc∈S(n d) UCB1(n d, n c), where DISPLAYFORM0 B is the bias term which allows us to adjust the trade-off between exploration and exploitation. We set DISPLAYFORM1 to force the exploration of chance nodes that have not been visited. In outcome selection, we randomly sample an outcome of an action, i.e. sample a child decision node n d from the successors S(n c) of a chance node n c based on the transition probabilities P (n d | n c).Expansion Phase. In this phase, we expand the tip node n d and optionally initialize the Q-values of its child chance nodes, S(n d). 
Calculating an estimated Q-value requires calculating a weighted sum of the form: DISPLAYFORM2 where H is some domain-independent SSP heuristic function such as h add, h max, h pom, or h roc (Teichteil-Königsbuch, Vidal, and Infantes 2011; BID19 . This can be expensive when n c has many successor decision nodes. Simulation Phase. Immediately after the expansion phase, we transition to the simulation phase. Here we perform a simulation (also known as a rollout) of the planning problem from the tip node's state s(n d), until we reach a goal or dead-end state, or exceed the trial length. This stands in contrast to the behaviour of THTS, which lacks a simulation phase and would continuously switch between the selection and expansion phases until the trial length is reached. We use the simulation function to choose which action to take in a given state, and sample the next state according to the transition probabilities. If we complete a simulation without reaching a goal or dead end, we add a heuristic estimate H(s) to the rollout cost, where s is the final rollout state. If s is a dead end, then we set the rollout cost to be the dead-end penalty D.The trial length bounds how many steps can be applied in the simulation phase, and hence allows us to adjust the lookahead capability of DP-UCT. By setting the trial length to be very small, we can focus the search on nodes closer to the root of the tree, much like breadth-first search BID10. Following the steps above, if the trial length is 0, we do not perform any simulations and simply take a heuristic estimate for the tip node of the trial, or D if the tip node represents a dead-end. Traditional MCTS-based algorithms use a random simulation function, where each available action in the state has the same probability of being selected. However, this is not very suitable for SSPs as we can continuously loop around a set of states and never reach a goal state. Moreover, using a random simulation function requires an extremely large number of simulations to obtain good estimates for statevalues and action-values within the search tree. Because of this, the simulation phase in MCTS-based algorithms for planning is often neglected and replaced by a heuristic estimate. This is equivalent to setting the trial length to be 0, where we backup a heuristic estimate once we expand the tip node of the trial. However, there can be situations where the heuristic function is misleading or uninformative and thus misguides the search. In such a scenario, it could be more productive to use a random simulation function, or a simulation function influenced by domain-specific knowledge (i.e., the knowledge learned by an ASNet) to calculate estimates. Backup Phase. After the simulation phase, we must propagate the information we have gained from the current trial back up the search tree. We use the backup function to update the state-value estimate V k (n d) for decision nodes and the action-value estimate Q k (n c) for chance nodes. We do this by propagating the information we gained during the simulation in reverse order through the nodes in the trial path, by continuously applying the backup function for each node until we reach the root node of the search tree. Original UCT is defined with Monte-Carlo backups, in which the transition model is unknown and hence estimated based on the number of visits to nodes. However, in our work we consider the transition model to be known a priori. 
For that reason, DP-UCT only considers Bellman backups BID10, which additionally take the probabilities of outcomes into consideration when backing up action value estimates Q k (n c): DISPLAYFORM3. Υ k (n c) represents the child decision nodes of n c that have already been visited in the first k trials and hence have statevalue estimates. Thus,P (n d | n c) allows us to weigh the state-value estimate V k (n d) of each visited child decision node n d proportionally by its probability P (n d | n c) and that of the unvisited child decision nodes. It should be obvious that Bellman backups are derived directly from the Bellman optimality equations we presented in Section 2. Thus a flavor of UCT using Bellman backups is asymptotically optimal given a correct selection of ingredients that will ensure all nodes are explored infinitely often.4 Combining DP-UCT with ASNets Recall that an ASNet learns a stochastic policy π: A × S →, where π(a | s) represents the probability action a is applied in state s. We introduce two simulation functions which make use of a trained ASNet: STOCHASTIC AS-NETS which simply samples from the probability distribution given by π to select an action, and MAXIMUM ASNETS which selects the action with the highest probability -i.e. arg max a∈A(s) π(a | s).Since the navigation of the search space is heavily influenced by the state-value and action-value estimates we obtain from performing simulations, DP-UCT with an ASNetbased simulation function would ideally converge to the optimal policy in a smaller number of simulations compared to if we used the random simulation function. Of course, we expect this to be the case if an ASNet has learned some useful features or tricks about the environment or domain of the problem we are tackling. However, using ASNets as a simulation function may not be very robust if the learned policy is misleading and uninformative. Here, robustness indicates how well UCT can recover from the misleading information it has been provided. In this situation, DP-UCT with ASNets as a simulation function would require a significantly larger number of simulations in order to converge to the optimal policy than DP-UCT with a random simulation function. Regardless the quality of the learned policy, DP-UCT remains asymptotically optimal when using an ASNet-based simulation function if the selection of ingredients guarantees that our search algorithm will explore all nodes infinitely often. Nonetheless, an ASNet-based simulation function should only be used if its simulation from the tip node n d better approximates V * (n d) than a heuristic estimate H(s(n d)).Choosing between STOCHASTIC ASNETS and MAXI-MUM ASNETS. We can perceive the probability distribution given by the policy π of an ASNet to represent the'confidence' the network has in applying each action. Obviously, MAXIMUM ASNETS will completely bias the simulations towards what an ASNet believes is the best action for a given state. If the probability distribution is highly skewed towards a single action, then MAXIMUM ASNETS would be the better choice, as the ASNet is very'confident' in its decision to choose the corresponding action. On the other hand, if the probability distribution is relatively uniform, then STOCHASTIC ASNETS would likely be the better choice. In this situation, the ASNet may be uncertain and not very'confident' in its decision to choose among a set of actions. 
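A sketch of the two ASNet-based simulation functions, and of the simulation phase that uses them, is given below; the policy is assumed to be exposed as a callable over applicable actions, and the state-manipulation callables, the default trial length and the dead-end penalty value are illustrative placeholders.

import random

def stochastic_asnet(policy, state, actions):
    """STOCHASTIC ASNETS: sample an action from the ASNet policy pi(a | s)."""
    weights = [policy(a, state) for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

def maximum_asnet(policy, state, actions):
    """MAXIMUM ASNETS: pick the action with the highest probability under pi."""
    return max(actions, key=lambda a: policy(a, state))

def simulate(state, policy, applicable, step, cost, is_goal, is_dead_end, heuristic,
             choose=stochastic_asnet, trial_length=50, dead_end_penalty=500.0):
    """Simulation phase from a tip node: roll out with an ASNet-based simulation
    function, stopping at goals, dead ends or the trial length."""
    total = 0.0
    for _ in range(trial_length):
        if is_goal(state):
            return total
        if is_dead_end(state):
            return dead_end_penalty          # rollout cost is set to the dead-end penalty D
        a = choose(policy, state, applicable(state))
        total += cost(state, a)
        state = step(state, a)
    return total + heuristic(state)          # trial length exceeded: add a heuristic estimate

Which of the two simulation functions to plug in depends on how peaked the ASNet's probability distribution is.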
Thus, to determine which ASNet-based simulation function to use, we should carefully consider to what extent an ASNet is able to solve the given problem reliably. The UCB1 term allows us to balance the trade-off between exploration of actions in the search tree that have not been applied often, and exploitation of actions that we already know have good action-value estimates based on previous trials. By including an ASNet's influence within UCB1 through its policy π, we hope to maintain this fundamental trade-off yet further bias the action selection towards what the ASNet believes are promising actions. Simple ASNet Action Selection. We select the child chance node n_c of a decision node n_d that maximizes SIMPLE-ASNET(n_d, n_c) = UCB1(n_d, n_c) + M · π(n_c) / C^k(n_c), where M ∈ R+ is a new parameter and π(n_c) = π(a(n_c) | s(n_c)) for the stochastic policy π learned by the ASNet. Similar to UCB1, if a child chance node n_c has not been visited before (i.e., C^k(n_c) = 0), we set SIMPLE-ASNET(n_d, n_c) = ∞ to force its exploration. The parameter M, called the influence constant, allows us to control how much the ASNet's policy π is exploited: the higher M is, the greater the influence of the ASNet on the action selection. Notice that the influence of the ASNet diminishes as we apply the action a(n_c) more often, because M · π(n_c)/C^k(n_c) decreases as the number of visits to the chance node n_c increases. Moreover, since the bias provided by M · π(n_c)/C^k(n_c) vanishes as C^k(n_c) grows (just as the original UCB1 bias term does), SIMPLE-ASNET preserves the asymptotic optimality of UCB1: as C^k(n_c) → ∞, SIMPLE-ASNET(n_d, n_c) equals UCB1(n_d, n_c) and both converge to the optimal action-value Q*(n_c) (Kocsis and Szepesvári 2006). Because of this similarity with UCB1 and their same initial condition (i.e., treating divisions by C^k(n_c) = 0 as ∞), we expect that SIMPLE-ASNET action selection will be robust to any misleading information provided by the policy of a trained ASNet. Nonetheless, the higher the value of the influence constant M, the more trials we require to combat misleading or uninformative guidance. Ranked ASNet Action Selection. One pitfall of the infinite exploration bonus in SIMPLE-ASNET action selection when C^k(n_c) = 0 is that all child chance nodes must be visited at least once before we actually exploit the policy learned by an ASNet. Ideally, we should use the knowledge learned by an ASNet to select the order in which unvisited chance nodes are explored. Thus, we introduce RANKED-ASNET action selection, an extension of SIMPLE-ASNET action selection defined by two cases. The first case stipulates that once all child chance nodes have been visited at least once, SIMPLE-ASNET action selection is used. Otherwise, chance nodes that have already been visited are given a value of −∞, while the values of unvisited nodes correspond to their probability in the policy π. Thus, unvisited child chance nodes are visited in decreasing order of their probability within the policy π. RANKED-ASNET action selection will allow DP-UCT to focus the initial stages of its search on what an ASNet believes are the most promising parts of the state space. Given that the ASNet has learned some useful knowledge of which action to apply at each step, we expect RANKED-ASNET action selection to require a smaller number of trials to converge to the optimal action in comparison with SIMPLE-ASNET action selection. However, RANKED-ASNET may not be as robust as SIMPLE-ASNET when the policy learned by an ASNet is misleading or uninformative.
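The two action-selection rules can be summarised in a short Python sketch (a hedged illustration, not the authors' code). It assumes a cost-minimisation convention for UCB1 and a hypothetical node interface (visits, q_value, children, action, state) together with a policy(action, state) callable returning π(a | s).

import math

def ucb1(nd, nc, bias=math.sqrt(2)):
    # Standard UCB1 for chance node nc under decision node nd
    # (Q-values are negated because we are minimising expected cost).
    if nc.visits == 0:
        return float("inf")
    return -nc.q_value + bias * math.sqrt(math.log(nd.visits) / nc.visits)

def simple_asnet(nd, nc, policy, M=100.0):
    # SIMPLE-ASNET: UCB1 plus an ASNet bonus that decays with the visit count.
    if nc.visits == 0:
        return float("inf")
    return ucb1(nd, nc) + M * policy(nc.action, nd.state) / nc.visits

def ranked_asnet_select(nd, policy, M=100.0):
    # RANKED-ASNET: visit unvisited chance nodes in decreasing order of pi,
    # then fall back to SIMPLE-ASNET once every child has been visited.
    unvisited = [nc for nc in nd.children if nc.visits == 0]
    if unvisited:
        return max(unvisited, key=lambda nc: policy(nc.action, nd.state))
    return max(nd.children, key=lambda nc: simple_asnet(nd, nc, policy, M))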
For example, if the optimal action has the lowest probability among all actions in the ASNet policy and is hence explored last, then we would require an increased number of trials to converge to this optimum. Comparison with ASNet-based Simulation Functions. DP-UCT with ASNet-influenced action selection is more robust to misleading information than DP-UCT with an ASNet-based simulation function. Since SIMPLE-ASNET and RANKED-ASNET action selection decrease the influence of the network as we apply an action it has suggested more frequently, we will eventually explore actions that may have a small probability in the policy learned by the ASNet but are in fact optimal. We would require a much larger number of trials to achieve this when using an ASNet-based simulation function, as the state-value and action-value estimates in the search tree would be directly derived from ASNet-based simulations. All experiments were performed on an Amazon Web Services EC2 c5.4xlarge instance with 16 CPUs and 32GB of memory. Each experiment was limited to one CPU core with a maximum turbo clock speed of 3.5 GHz. No restrictions were placed on the amount of memory an experiment used. Considered Planners. For our experiments, we consider two baseline planners: the original ASNets algorithm and UCT*. The latter is a variation of DP-UCT where the trial length is 0 while still using UCB1 to select actions, Bellman backups as the backup function, and no simulation function. UCT* was chosen as a baseline because it outperforms original DP-UCT due to its stronger theoretical properties BID10. We consider four parametrizations of our algorithms, namely (i) Simple ASNets, (ii) Ranked ASNets, (iii) Stochastic ASNets, and (iv) Maximum ASNets, where parametrizations (i) and (ii) are UCT* using SIMPLE- and RANKED-ASNET action selection, respectively, and parametrizations (iii) and (iv) are DP-UCT with a problem-dependent trial length using STOCHASTIC and MAXIMUM ASNETS as the simulation function, respectively. ASNet Configuration. We use the same ASNet hyperparameters as described by Toyer et al. to train each network. Unless otherwise specified, we imposed a strict two-hour time limit to train the network, though in most situations the network finished training within one hour. All ASNets were trained using an LRTDP-based BID4 teacher that used LM-cut (Helmert and Domshlak 2009) as the heuristic to compute optimal policies. We only report the time taken to solve each problem for the final results for an ASNet, and hence do not include the training time. DP-UCT Configuration. For all DP-UCT configurations we used h_add BID3 as the heuristic function, because it allowed DP-UCT to converge to a good solution in a reasonable time in our experiments, and set the UCB1 bias parameter B to √2. For all problems with dead ends, we enabled Q-value initialization, as it helps us avoid selecting a chance node for exploration that may lead to a dead end. We did not enable this for problems without dead ends because estimating Q-values is computationally expensive and not beneficial in comparison to the number of trials that could have been performed in the same time frame. We gave all configurations a 10 second time cutoff to do trials and limited the maximum number of trials to 10,000 at each decision step to ensure fairness. Moreover, we set the dead-end penalty to 500. We gave each planning round a maximum time of 1 hour, and a maximum of 100 execution steps. We ran 30 rounds per planner for each experiment. Stack Blocksworld.
Stack Blocksworld is a special case of the deterministic Blocksworld domain in which the goal is to stack n blocks initially on the table into a single tower. We train an ASNet to unstack n blocks from a single tower and put them all down on the table. Since the network has never learned how to stack blocks, it completely fails at stacking the n blocks on the table into a single tower. A setting like this one, where the distributions of training and testing problems have non-overlapping support, represents a near-worst-case scenario for inductive learners like ASNets. In contrast, stacking blocks into a single tower is a relatively easy problem for UCT*. Our aim in this experiment is to show that DP-UCT can overcome the misleading information learned by the ASNet's policy. We train an ASNet on unstack problems with 2 to 10 blocks, and evaluate DP-UCT and ASNets on stack problems with 5 to 20 blocks. Exploding Blocksworld. This domain is an extension of deterministic Blocksworld, and is featured in the International Probabilistic Planning Competitions (IPPC). In Exploding Blocksworld, putting down a block can cause it to detonate and destroy the block or the table it was put down on. Once a block or the table is exploded, we can no longer use it; therefore, this domain contains unavoidable dead ends. A good policy avoids placing a block down on the table or on another block that is required for the goal state (if possible). It is very difficult for an ASNet to reliably solve Exploding Blocksworld problems, as each problem can have its own 'trick' for avoiding dead ends and reaching the goal with minimal cost. We train an ASNet for 5 hours on a selected set of 16 problems (including those with avoidable and unavoidable dead ends) that were optimally solved by LRTDP within 2 minutes. We evaluate ASNets and DP-UCT on the first eight problems from IPPC 2008 BID6. By combining DP-UCT and ASNets, we hope to exploit the limited knowledge and 'tricks' learned by an ASNet on the problems it was trained on to navigate the search space. That is, we aim to learn what we have not learned, and improve suboptimal learning. CosaNostra Pizza BID18. The objective in CosaNostra Pizza is to safely deliver a pizza from the pizza shop to the waiting customer and then return to the shop. There is a series of toll booths on the two-way road between the pizza shop and the customer (FIG1). At each toll booth, you can choose to either pay the toll operator or drive straight through without paying. We save a time step by driving straight through without paying, but the operator becomes angry. Angry operators drop their toll gate on you and crush your car (leading to a dead end) with a probability of 50% when you next pass through their booth. Hence, the optimal policy is to only pay the toll operators on the trip to the customer, but not on the trip back to the pizza shop (as we will not revisit those booths). This ensures a safe return, as there will be no chance of a toll operator crushing your car at any stage. Thus, CosaNostra Pizza is an example of a problem with avoidable dead ends. An ASNet is able to learn the trick of paying the toll operators only on the trip to the customer, and scales up to large instances, while heuristic search planners based on determinisation (either for search or for heuristic computation) do not scale up BID18. The reason for the underperformance of determinisation-based techniques (e.g., using h_add as the heuristic) is the presence of avoidable dead ends in the CosaNostra domain.
Moreover, heuristics based on delete relaxation (e.g., h_add) also underperform in the CosaNostra domain because they consider that the agent crosses each toll booth only once, i.e., this relaxation ignores the return path since it uses the same propositions as the path to the customer. Thus, we expect UCT* not to scale up to larger instances, since it will require extremely long reasoning chains in order to always pay the toll operator on the trip to the customer; however, by combining DP-UCT with the optimal policy learned by an ASNet, we expect to scale up to much larger instances than UCT* alone. For the CosaNostra Pizza problems, we train an ASNet on problems with 1 to 5 toll booths, and evaluate DP-UCT and ASNets on problems with 2 to 15 toll booths. Stack Blocksworld. We allocate to each execution step n/2 seconds for all runs of DP-UCT, where n is the number of blocks in the problem. We use Simple ASNets with the influence constant M set to 10, 50 and 100 to demonstrate how DP-UCT can overcome the misleading information provided by the ASNet. We do not run experiments that use ASNets as a simulation function, as that would result in completely misleading state-value and action-value estimates in the search tree, meaning DP-UCT would achieve near-zero coverage. Figure 3 depicts our results. ASNets achieves zero coverage, while UCT* is able to reliably achieve near-full coverage for all problems up to 20 blocks. In general, as we increase M, the coverage of Simple ASNets decays earlier as the number of blocks increases. This is not unexpected, as by increasing M, we increasingly 'push' the UCB1 term to select actions that the ASNet wishes to exploit, and hence misguide the navigation of the search space. Nevertheless, Simple ASNets is able to achieve near-full coverage for problems with up to 17 blocks for M = 10, 15 blocks for M = 50, and approximately 11 blocks for M = 100. We also observed a general increase in the time taken to reach a goal as we increased M, though this was not always the case due to the noise of DP-UCT. This experiment shows that Simple ASNets is capable of learning what the ASNet has not learned and being robust to changes in the environment, by correcting the bad actions the ASNet suggests through search and eventually converging to the optimal solution. Exploding Blocksworld. For all DP-UCT flavors, we increased the UCB1 bias parameter B to 4 and set the maximum number of trials to 30,000 in order to promote more exploration. To combine DP-UCT with ASNets, we use Ranked ASNets with the influence constant M set to 10, 50 and 100. Note that the coverage for Exploding Blocksworld is an approximation of the true probability of reaching the goal. Since we only run each algorithm 30 times, the results are susceptible to chance. Table 1 shows our results. Since the training set used by ASNets was likely not representative of the evaluation problems (i.e., the IPPC 2008 problems), the policy learned by ASNets is suboptimal and failed to reach the goal for the relatively easy problems (e.g., p04 and p07), while UCT* was able to more reliably solve these problems. By combining DP-UCT with ASNets through Ranked ASNets, we were able to either match the performance of UCT* or outperform it, even when ASNet achieved zero coverage for the given problem. Moreover, for certain configurations, we were able to improve upon all other configurations. For p08, Ranked ASNets with M = 50 achieves a coverage of 10/30, while all other configurations of DP-UCT are only able to achieve a coverage of around 4/30.
Despite the fact that the ASNet achieved zero coverage in this experiment, the general knowledge learned by the ASNet helped us navigate the search tree more effectively and efficiently, even if the suggestions provided by the ASNet are not optimal. The same reasoning applies to the results for p04, where Ranked ASNets with M = 50 achieves a higher coverage than all other configurations. We have demonstrated that we can exploit the policy learned by an ASNet to achieve more promising results than UCT* and the network itself, even if this policy is suboptimal. Thus, we have shown that Ranked ASNets is capable of learning what the ASNet has not learned and improving the suboptimal policy learned by the network. CosaNostra Pizza. For this experiment, we considered ASNets both as a simulation function (Stochastic and Maximum ASNets), and in the UCB1 term for action selection (Simple and Ranked ASNets with M = 100) to improve upon UCT*. The optimal policy for CosaNostra Pizza takes 3n + 4 steps, where n is the number of toll booths in the problem. We set the trial length when using ASNets as a simulation function to be 1.25 · (3n + 4), where the 25% increase gives some leeway for Stochastic ASNets. FIG3 shows our results; the curves for ASNets and Maximum ASNets overlap, as do the curves for Simple and Ranked ASNets. ASNets achieves full coverage for all problems, while UCT* alone is only able to achieve full coverage for the problems with 2 and 3 toll booths. Using ASNets in the action selection ingredient through Simple or Ranked ASNets with the influence constant M = 100 only allows us to additionally achieve full coverage for the problem with 4 toll booths. This is because Simple and Ranked ASNets guide the action selection towards the optimal action, but UCT still forces the exploration of other parts of the state space. We are able to more reliably solve CosaNostra Pizza problems by using ASNets as the simulation function. In this experiment, we have shown how using ASNets in UCB1 through SIMPLE-ASNET or RANKED-ASNET action selection can only provide marginal improvements over UCT* when the number of reachable states increases exponentially with the problem size and the heuristic estimates are misleading. We also demonstrated how we can combat this sub-optimal performance of DP-UCT by using ASNets as a simulation function, as it allows us to more efficiently explore the search space and find the optimal actions. Thus, an ASNet-based simulation function may help DP-UCT learn what it has not learned. Triangle Tireworld is a domain with avoidable dead ends. ASNets is trivially able to find the optimal policy, which always avoids dead ends. The results of our new algorithms on Triangle Tireworld are very similar to the results in the CosaNostra experiments, as the algorithms leverage the fact that ASNets finds the optimal generalized policy for both domains. In this paper, we have investigated techniques to improve search using generalized policies. We discussed a framework for DP-UCT, extended from THTS, that allowed us to generate different flavors of DP-UCT, including those that exploit the generalized policy learned by an ASNet. We then introduced methods of using this generalized policy in the simulation function, through STOCHASTIC ASNETS and MAXIMUM ASNETS. These allowed us to obtain more accurate state-value and action-value estimates in the search tree.
We also extended UCB1 to bias the navigation of the search space towards the actions that an ASNet wants to exploit, whilst maintaining the fundamental balance between exploration and exploitation, by introducing SIMPLE-ASNET and RANKED-ASNET action selection. We have demonstrated through our experiments that our algorithms are capable of improving the capability of an ASNet to generalize beyond the distribution of problems it was trained on, as well as improving sub-optimal learning. By combining DP-UCT with ASNets, we are able to bias the exploration of actions towards those that an ASNet wishes to exploit, and allow DP-UCT to converge to the optimal action in a smaller number of trials. Our experiments have also demonstrated that by harnessing the power of search, we may overcome any misleading information provided by an ASNet due to a change in the environment. Hence, we achieved the three following goals: learn what we have not learned, improve on sub-optimal learning, and be robust to changes in the environment or domain. It is important to observe that our contributions are more generally applicable to any method of learning a (generalized) policy (not just ASNets), and potentially to other trial-based search algorithms including (L)RTDP. In the deterministic setting, there has been a long tradition of learning generalized policies and using them to guide heuristic Best First Search (BFS). For instance, Yoon et al. BID20 add the states resulting from selecting actions prescribed by the learned generalized policy to the queue of a BFS guided by a relaxed-plan heuristic, and BID7 learn and use generalized policies to generate lookahead states within a BFS guided by the FF heuristic. These authors observe that generalized policies provide effective search guidance, and that search helps correct deficiencies in the learned policy. Search control knowledge à la TLPlan, TALplanner or SHOP2 has been successfully used to prune the search of probabilistic planners BID13 BID17. More recently, BID15 have also experimented with the use of preferred actions in variants of RTDP BID1 and AO* BID14, albeit with limited success. Our work differs from these approaches by focusing explicitly on MCTS as the search algorithm and, unlike existing work combining deep learning and MCTS (e.g. AlphaGo (Silver et al. 2016)), looks not only at using neural network policies as a simulation function for rollouts, but also as a means to bias the UCB1 action selection rule. There are still many potential avenues for future work. We may investigate how to automatically learn the influence parameter M for SIMPLE-ASNET and RANKED-ASNET action selection, or how to combat bad information provided by an ASNet in a simulation function by mixing ASNet simulations with random simulations. We may also investigate techniques to interleave planning with learning by using UCT with ASNets as a 'teacher' for training an ASNet.
Techniques for combining generalized policies with search algorithms to exploit the strengths and overcome the weaknesses of each when solving probabilistic planning problems
332
scitldr
Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice. When the train and test distributions are mismatched, accuracy can plummet. Currently there are few techniques that improve robustness to unforeseen data shifts encountered during deployment. In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers. We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions. AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half. Current machine learning models depend on the ability of training data to faithfully represent the data encountered during deployment. In practice, data distributions evolve , models encounter new scenarios , and data curation procedures may capture only a narrow slice of the underlying data distribution . Mismatches between the train and test data are commonplace, yet the study of this problem is not. As it stands, models do not robustly generalize across shifts in the data distribution. If models could identify when they are likely to be mistaken, or estimate uncertainty accurately, then the impact of such fragility might be ameliorated. Unfortunately, modern models already produce overconfident predictions when the training examples are independent and identically distributed to the test distribution. This overconfidence and miscalibration is greatly exacerbated by mismatched training and testing distributions. Small corruptions to the data distribution are enough to subvert existing classifiers, and techniques to improve corruption robustness remain few in number. show that classification error of modern models rises from 22% on the usual ImageNet test set to 64% on ImageNet-C, a test set consisting of various corruptions applied to ImageNet test images. Even methods which aim to explicitly quantify uncertainty, such as probabilistic and Bayesian neural networks, struggle under data shift, as recently demonstrated by. Improving performance in this setting has been difficult. One reason is that training against corruptions only encourages networks to memorize the specific corruptions seen during training and leaves models unable to generalize to new corruptions . Further, networks trained on translation augmentations remain highly sensitive to images shifted by a single pixel . Others have proposed aggressive data augmentation schemes , though at the cost of a computational increase. demonstrates that many techniques may improve clean accuracy at the cost of robustness while many techniques which improve robustness harm uncertainty, and contrariwise. In all, existing techniques have considerable trade-offs. In this work, we propose a technique to improve both the robustness and uncertainty estimates of classifiers under data shift. We propose AUGMIX, a method which simultaneously achieves new state-of-the-art for robustness and uncertainty estimation while maintaining or improving accuracy on standard benchmark datasets. AUGMIX utilizes stochasticity and diverse augmentations, a Jensen-Shannon Divergence consistency loss, and a formulation to mix multiple augmented images to achieve state-of-the-art performance. 
On CIFAR-10 and CIFAR-100, our method roughly halves the corruption robustness error of standard training procedures from 28.4% to 12.4% and from 54.3% to 37.8% error, respectively. On ImageNet, AUGMIX also achieves state-of-the-art corruption robustness and decreases perturbation instability from 57.2% to 37.4%. Code is available at https://github.com/google-research/augmix. Figure 2: Example ImageNet-C corruptions. These corruptions are encountered only at test time and not during training. Robustness under Data Shift. show that training against distortions can often fail to generalize to unseen distortions, as networks have a tendency to memorize properties of the specific training distortion. show training with various blur augmentations can fail to generalize to unseen blurs or blurs with different parameter settings. propose measuring generalization to unseen corruptions and provide benchmarks for doing so. construct an adversarial version of the aforementioned benchmark. argue that robustness to data shift is a pressing problem which greatly affects the reliability of real-world machine learning systems. Calibration under Data Shift. propose metrics for determining the calibration of machine learning models. find that simply ensembling classifier predictions improves prediction calibration. Hendrycks et al. (2019a) show that pre-training can also improve calibration. demonstrate that model calibration substantially deteriorates under data shift. Data Augmentation. Data augmentation can greatly improve generalization performance. For image data, random left-right flipping and cropping are commonly used. Random occlusion techniques such as Cutout can also improve accuracy on clean data. Rather than occluding a portion of an image, CutMix replaces a portion of an image with a portion of a different image. Mixup also uses information from two images. Rather than implanting one portion of an image inside another, Mixup produces an elementwise convex combination of two images (; Tokozume et al., 2018). Figure 3: A cascade of successive compositions can produce images which drift far from the original image, and lead to unrealistic images. However, this divergence can be balanced by controlling the number of steps. To increase variety, we generate multiple augmented images and mix them. show that Mixup can be improved with an adaptive mixing policy, so as to prevent manifold intrusion. Separate from these approaches are learned augmentation methods such as AutoAugment, where a group of augmentations is tuned to optimize performance on a downstream task. Patch Gaussian augments data with Gaussian noise applied to a randomly chosen portion of an image. A popular way to make networks robust to ℓ_p adversarial examples is with adversarial training, which we use in this paper. However, this tends to increase training time by an order of magnitude and substantially degrades accuracy on non-adversarial images. AUGMIX is a data augmentation technique which improves model robustness and uncertainty estimates, and slots easily into existing training pipelines. At a high level, AugMix is characterized by its utilization of simple augmentation operations in concert with a consistency loss. These augmentation operations are sampled stochastically and layered to produce a high diversity of augmented images. We then enforce a consistent embedding by the classifier across diverse augmentations of the same input image through the use of the Jensen-Shannon divergence as a consistency loss.
Mixing augmentations allows us to generate diverse transformations, which are important for inducing robustness, as a common failure mode of deep models in the arena of corruption robustness is the memorization of fixed augmentations. Previous methods have attempted to increase diversity by directly composing augmentation primitives in a chain, but this can cause the image to quickly degrade and drift off the data manifold, as depicted in Figure 3. Such image degradation can be mitigated and the augmentation diversity can be maintained by mixing together the results of several augmentation chains in convex combinations. A concrete account of the algorithm is given in the pseudocode below.

function AugmentAndMix(x_orig; k = 3, α = 1):
    sample mixing weights (w_1, ..., w_k) ∼ Dirichlet(α, ..., α)
    for i = 1, ..., k: sample an augmentation chain and accumulate x_aug += w_i · chain(x_orig)   (addition is elementwise)
    sample weight m ∼ Beta(α, α)
    interpolate with rule x_augmix = m · x_orig + (1 − m) · x_aug
    return x_augmix
end function
x_augmix1 = AugmentAndMix(x_orig)   (x_augmix1 is stochastically generated)
x_augmix2 = AugmentAndMix(x_orig)

Augmentations. Our method consists of mixing the results from augmentation chains or compositions of augmentation operations. We use operations from AutoAugment. Each operation is visualized in Appendix C. Crucially, we exclude operations which overlap with ImageNet-C corruptions. In particular, we remove the contrast, color, brightness, sharpness, and Cutout operations so that our set of operations and the ImageNet-C corruptions are disjoint. In turn, we do not use any image noising nor image blurring operations so that ImageNet-C corruptions are encountered only at test time. Operations such as rotate can be realized with varying severities, like 2° or −15°. For operations with varying severities, we uniformly sample the severity upon each application. Next, we randomly sample k augmentation chains, where k = 3 by default. Each augmentation chain is constructed by composing from one to three randomly selected augmentation operations. Mixing. The resulting images from these augmentation chains are combined by mixing. While we considered mixing by alpha compositing, we chose to use elementwise convex combinations for simplicity. The k-dimensional vector of convex coefficients is randomly sampled from a Dirichlet(α, . . ., α) distribution. Once these images are mixed, we use a "skip connection" to combine the result of the augmentation chains and the original image through a second random convex combination sampled from a Beta(α, α) distribution. The final image incorporates several sources of randomness from the choice of operations, the severity of these operations, the lengths of the augmentation chains, and the mixing weights. Jensen-Shannon Divergence Consistency Loss. We couple with this augmentation scheme a loss that enforces smoother neural network responses. Since the semantic content of an image is approximately preserved with AUGMIX, we should like the model to embed x_orig, x_augmix1, x_augmix2 similarly. Toward this end, we minimize the Jensen-Shannon divergence among the posterior distributions of the original sample x_orig and its augmented variants. That is, for p_orig = p̂(y | x_orig), p_augmix1 = p̂(y | x_augmix1), p_augmix2 = p̂(y | x_augmix2), we replace the original loss L with the loss L(p_orig, y) + λ · JS(p_orig; p_augmix1; p_augmix2). To interpret this loss, imagine a sample from one of the three distributions p_orig, p_augmix1, p_augmix2. The Jensen-Shannon divergence can be understood to measure the average information that the sample reveals about the identity of the distribution from which it was sampled.
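To make the augmentation scheme concrete, here is a minimal NumPy sketch of AugmentAndMix following the description above; it is an illustration rather than the reference implementation, and the operations list is assumed to hold AutoAugment-style functions (excluding those overlapping with ImageNet-C) that each take and return an image.

import numpy as np

def augment_and_mix(image, operations, k=3, alpha=1.0, max_depth=3):
    # image: float array in [0, 1] of shape (H, W, C).
    rng = np.random.default_rng()
    w = rng.dirichlet([alpha] * k)               # convex weights over the k chains
    m = rng.beta(alpha, alpha)                   # "skip connection" weight

    x_aug = np.zeros_like(image)
    for i in range(k):
        depth = rng.integers(1, max_depth + 1)   # chains of one to three operations
        x_chain = image.copy()
        for _ in range(depth):
            op = operations[rng.integers(len(operations))]
            x_chain = op(x_chain)                # severity sampled inside each op
        x_aug += w[i] * x_chain                  # elementwise convex combination

    return m * image + (1.0 - m) * x_aug         # interpolate with the original image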
This loss can be computed by first obtaining M = (p_orig + p_augmix1 + p_augmix2)/3 and then computing JS(p_orig; p_augmix1; p_augmix2) = (KL[p_orig ‖ M] + KL[p_augmix1 ‖ M] + KL[p_augmix2 ‖ M]) / 3. Unlike an arbitrary KL Divergence between p_orig and p_augmix, the Jensen-Shannon divergence is upper bounded, in this case by the logarithm of the number of classes. Note that we could instead compute JS(p_orig; p_augmix1), though this does not perform as well. The gain of training with JS(p_orig; p_augmix1; p_augmix2; p_augmix3) is marginal. The Jensen-Shannon Consistency Loss impels the model to be stable, consistent, and insensitive to a diverse range of inputs. Ablations are in Section 4.3 and Appendix A. Datasets. The two CIFAR datasets contain small 32 × 32 × 3 color natural images, both with 50,000 training images and 10,000 testing images. CIFAR-10 has 10 categories, and CIFAR-100 has 100. The ImageNet dataset contains 1,000 classes of approximately 1.2 million large-scale color images. In order to measure a model's resilience to data shift, we evaluate on the CIFAR-10-C, CIFAR-100-C, and ImageNet-C datasets. These datasets are constructed by corrupting the original CIFAR and ImageNet test sets. For each dataset, there are a total of 15 noise, blur, weather, and digital corruption types, each appearing at 5 severity levels or intensities. Since these datasets are used to measure network behavior under data shift, we take care not to introduce these 15 corruptions into the training procedure. The CIFAR-10-P, CIFAR-100-P, and ImageNet-P datasets also modify the original CIFAR and ImageNet datasets. These datasets contain smaller perturbations than CIFAR-C and are used to measure the classifier's prediction stability. Each example in these datasets is a video. For instance, a video with the brightness perturbation shows an image getting progressively brighter over time. We should like the network not to give inconsistent or volatile predictions between frames of the video as the brightness increases. Thus these datasets enable the measurement of the "jaggedness" of a network's prediction stream. Metrics. The Clean Error is the usual classification error on the clean or uncorrupted test data. In our experiments, corrupted test data appears at five different intensities or severity levels 1 ≤ s ≤ 5. For a given corruption c, the error rate at corruption severity s is E_{c,s}. We can compute the average error across these severities to create the unnormalized corruption error uCE_c = Σ_{s=1}^{5} E_{c,s}. On CIFAR-10-C and CIFAR-100-C we average these values over all 15 corruptions. Meanwhile, on ImageNet we follow the convention of normalizing the corruption error by the corruption error of AlexNet. We compute CE_c = (Σ_{s=1}^{5} E_{c,s}) / (Σ_{s=1}^{5} E^{AlexNet}_{c,s}). The average of the 15 corruption errors CE_{Gaussian Noise}, CE_{Shot Noise}, ..., CE_{Pixelate}, CE_{JPEG} gives us the Mean Corruption Error (mCE). Perturbation robustness is not measured by accuracy but by whether video frame predictions match. Consequently we compute what is called the flip probability. Concretely, for videos such as those with steadily increasing brightness, we determine the probability that two adjacent frames, or two frames with slightly different brightness levels, have "flipped" or mismatched predictions. There are 10 different perturbation types, and the mean across these is the mean Flip Probability (mFP). As with ImageNet-C, we can normalize by AlexNet's flip probabilities and obtain the mean Flip Rate (mFR). In order to assess a model's uncertainty estimates, we measure its miscalibration.
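The Jensen-Shannon consistency term described above can be written in a few lines of PyTorch; the sketch below is illustrative (the loss weight lam is an assumed hyperparameter, not a value prescribed by the text), and the total training loss would add the usual cross-entropy on the clean logits.

import torch.nn.functional as F

def jensen_shannon_consistency(logits_orig, logits_aug1, logits_aug2, lam=12.0):
    p_orig = F.softmax(logits_orig, dim=1)
    p_aug1 = F.softmax(logits_aug1, dim=1)
    p_aug2 = F.softmax(logits_aug2, dim=1)

    # Mixture distribution M, clamped for numerical stability; kl_div expects
    # log-probabilities as its first argument.
    m = ((p_orig + p_aug1 + p_aug2) / 3.0).clamp(min=1e-7).log()
    js = (F.kl_div(m, p_orig, reduction="batchmean")
          + F.kl_div(m, p_aug1, reduction="batchmean")
          + F.kl_div(m, p_aug2, reduction="batchmean")) / 3.0
    return lam * js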
Classifiers capable of reliably forecasting their accuracy are considered "calibrated." For instance, a calibrated classifier should be correct 70% of the time on examples to which it assigns 70% confidence. Let the classifier's confidence that its prediction Ŷ is correct be written C. Then the idealized RMS Calibration Error is √(E_C[(P(Y = Ŷ | C = c) − c)²]), which is the root of the expected squared difference between the accuracy at a given confidence level and that confidence level. In Appendix E, we show how to empirically estimate this quantity and calculate the Brier Score. Training Setup. In the following experiments we show that AUGMIX endows robustness to various architectures including an All Convolutional Network, a DenseNet-BC (k = 12, d = 100), a 40-2 Wide ResNet, and a ResNeXt-29 (32 × 4). All networks use an initial learning rate of 0.1 which decays following a cosine learning rate schedule. All input images are pre-processed with standard random left-right flipping and cropping prior to any augmentations. We do not change AUGMIX parameters across CIFAR-10 and CIFAR-100 experiments for consistency. The All Convolutional Network and Wide ResNet train for 100 epochs, and the DenseNet and ResNeXt require 200 epochs for convergence. We optimize with stochastic gradient descent using Nesterov momentum. Table 1: Average classification error as percentages. Across several architectures, AUGMIX obtains CIFAR-10-C and CIFAR-100-C corruption robustness that exceeds the previous state of the art. Results. Simply mixing random augmentations and using the Jensen-Shannon loss substantially improves robustness and uncertainty estimates. Compared to the "Standard" data augmentation baseline ResNeXt on CIFAR-10-C, AUGMIX achieves 16.6% lower absolute corruption error, as shown in Figure 5. In addition to surpassing numerous other data augmentation techniques, Table 1 demonstrates that these gains directly transfer across architectures and to CIFAR-100-C with zero additional tuning. Crucially, the robustness gains do not only exist when measured in aggregate. Figure 12 shows that AUGMIX improves corruption robustness across every individual corruption and severity level. Our method additionally achieves the lowest mFP on CIFAR-10-P across three different models, all while maintaining accuracy on clean CIFAR-10, as shown in Figure 6 (left) and Table 6. Finally, we demonstrate that AUGMIX improves the RMS calibration error on CIFAR-10 and CIFAR-10-C, as shown in Figure 6 (right) and Table 5. Expanded CIFAR-10-P and calibration results are in Appendix D, and a Fourier sensitivity analysis is in Appendix B. Baselines. To demonstrate the utility of AUGMIX on ImageNet, we compare to many techniques designed for large-scale images. While techniques such as Cutout have not been demonstrated to help on the ImageNet scale, and while few have had success training adversarially robust models on ImageNet, other techniques such as Stylized ImageNet have been demonstrated to help on ImageNet-C. Patch Uniform is similar to Cutout except that randomly chosen regions of the image are injected with uniform noise; the original paper uses Gaussian noise, but that appears in the ImageNet-C test set so we use uniform noise. We tune Patch Uniform over 30 hyperparameter settings. Next, AutoAugment searches over data augmentation policies to find a high-performing data augmentation policy. We denote AutoAugment with AutoAugment* since we remove augmentation operations that overlap with ImageNet-C corruptions, as with AUGMIX.
We also test with Random AutoAugment*, an augmentation scheme where each image has a randomly sampled augmentation policy using AutoAugment* operations. In contrast to AutoAugment, Random AutoAugment* and AUGMIX require far less computation and provide more augmentation variety, which can offset their lack of optimization. Note that Random AutoAugment* is different from RandAugment introduced recently by: RandAugment uses AutoAugment operations and optimizes a single distortion magnitude hyperparameter for all operations, while Random AutoAugment* randomly samples magnitudes for each operation and uses the same operations as AUGMIX. MaxBlur Pooling is a recently proposed architectural modification which smooths the result of pooling. Now, Stylized ImageNet (SIN) is a technique where models are trained with the original ImageNet images and also ImageNet images with style transfer applied. Whereas the original Stylized ImageNet technique pretrains on ImageNet-C and performs style transfer with a content loss coefficient of 0 and a style loss coefficient of 1, we find that using 0.5 content and style loss coefficients decreases the mCE by 0.6%. Later, we show that SIN and AUGMIX can be combined. All models are trained from scratch, except MaxBlur Pooling models, for which trained models are available. Training Setup. Methods are trained with ResNet-50 and we follow a standard training scheme in which we linearly scale the learning rate with the batch size and use a learning rate warm-up for the first 5 epochs; AutoAugment and AUGMIX train for 180 epochs. All input images are first pre-processed with standard random cropping and horizontal mirroring. Table 2: Clean Error, Corruption Error (CE), and mCE values for various methods on ImageNet-C. The mCE value is computed by averaging across all 15 CE values. AUGMIX reduces corruption error while improving clean accuracy, and it can be combined with SIN for greater corruption robustness. Results. Our method achieves 68.4% mCE as shown in Table 2, down from the baseline 80.6% mCE. Additionally, we note that AUGMIX allows straightforward stacking with other methods such as SIN to achieve an even lower corruption error of 64.1% mCE. Other techniques such as AutoAugment* require much tuning, while ours does not. Across increasing severities of corruptions, our method also produces much more calibrated predictions measured by both the Brier Score and RMS Calibration Error, as shown in Figure 7. As shown in Table 3, AUGMIX also achieves state-of-the-art results on ImageNet-P with an mFR of 37.4%, down from 57.2%. We demonstrate that scaling up AUGMIX from CIFAR to ImageNet also leads to state-of-the-art results in robustness and uncertainty estimation. Table 3: ImageNet-P results. The mean flipping rate is the average of the flipping rates across all 10 perturbation types. AUGMIX improves perturbation stability by approximately 20%. Figure 7: Uncertainty on ImageNet-C. Observe that under severe data shifts, the RMS calibration error with ensembles and AUGMIX is remarkably steady. Even though classification error increases, calibration is roughly preserved. Severity zero denotes clean data. We locate the utility of AUGMIX in three factors: training set diversity, our Jensen-Shannon divergence consistency loss, and mixing. Improving training set diversity via increased variety of augmentations can greatly improve robustness. For instance, augmenting each example with a
randomly sampled augmentation chain decreases the error rate of Wide ResNet on CIFAR-10-C from 26.9% to 17.0% (Table 4). Adding in the Jensen-Shannon divergence consistency loss drops the error rate further to 14.7%. Mixing random augmentations without the Jensen-Shannon divergence loss gives us an error rate of 13.1%. Finally, re-introducing the Jensen-Shannon divergence gives us AUGMIX with an error rate of 11.2%. Note that adding even more mixing is not necessarily beneficial. For instance, applying AUGMIX on top of Mixup increases the error rate to 13.3%, possibly due to an increased chance of manifold intrusion. Hence AUGMIX's careful combination of variety, consistency loss, and mixing explains its performance. Table 4: Ablating components of AUGMIX on CIFAR-10-C and CIFAR-100-C (values are CIFAR-10-C and CIFAR-100-C error rates). Variety through randomness, the Jensen-Shannon divergence (JSD) loss, and augmentation mixing confer robustness. AUGMIX is a data processing technique which mixes randomly generated augmentations and uses a Jensen-Shannon loss to enforce consistency. Our simple-to-implement technique obtains state-of-the-art performance on CIFAR-10/100-C, ImageNet-C, CIFAR-10/100-P, and ImageNet-P. AUGMIX models achieve state-of-the-art calibration and can maintain calibration even as the distribution shifts. We hope that AUGMIX will enable more reliable models, a necessity for models deployed in safety-critical environments. In this section we demonstrate that AUGMIX's hyperparameters are not highly sensitive, so that AUGMIX performs reliably without careful tuning. For this set of experiments, the baseline AUGMIX model trains for 90 epochs, has a mixing coefficient of α = 0.5, has 3 examples per Jensen-Shannon Divergence (1 clean image, 2 augmented images), has a chain depth stochastically varying from 1 to 3, and has k = 3 augmentation chains. Figure 8 shows the performance of various AUGMIX models with different hyperparameters. Under these hyperparameter changes, the mCE does not change substantially. A commonly mentioned hypothesis for the lack of robustness of deep neural networks is that they readily latch onto spurious high-frequency correlations that exist in the data. In order to better understand the reliance of models on such correlations, we measure model sensitivity to additive noise at differing frequencies. We create a 32 × 32 sensitivity heatmap. That is, we add a total of 32 × 32 Fourier basis vectors to the CIFAR-10 test set, one at a time, and record the resulting error rate after adding each Fourier basis vector. Each point in the heatmap shows the error rate on the CIFAR-10 test set after it has been perturbed by a single Fourier basis vector. Points corresponding to low frequency vectors are shown in the center of the heatmap, whereas high frequency vectors are farther from the center. For further details on Fourier sensitivity analysis, we refer the reader to Section 2 of. In Figure 9 we observe that the baseline model is robust to low frequency perturbations but severely lacks robustness to high frequency perturbations, where error rates exceed 80%. The model trained with Cutout shows a similar lack of robustness. In contrast, the model trained with AUGMIX maintains robustness to low frequency perturbations, and on the mid and high frequencies AUGMIX is conspicuously more robust. The augmentation operations we use for AUGMIX are shown in Figure 10.
We do not use augmentations such as contrast, color, brightness, sharpness, and Cutout as they may overlap with ImageNet-C test set corruptions. We should note that augmentation choice requires additional care. show that blithely applying augmentations can potentially cause augmented images to take different classes. Figure 11 shows how a histogram color swapping augmentation may change a bird's class, leading to a manifold intrusion. Figure 11 (Manifold Intrusion from Color Augmentation): An illustration of manifold intrusion, where histogram color augmentation can change the image's class. We include various additional results for CIFAR-10, CIFAR-10-C and CIFAR-10-P below. Figure 12 reports accuracy for each corruption, Table 5 reports calibration for various architectures, and Table 6 reports clean error and mFR. We refer to Section 4.1 for details about the architecture and training setup. Table 6: CIFAR-10 Clean Error and CIFAR-10-P mean Flip Probability. All values are percentages. While adversarial training performs well on CIFAR-10-P, it induces a substantial drop in accuracy (increase in error) on clean CIFAR-10, where AUGMIX does not. Due to the finite size of empirical test sets, the RMS Calibration Error must be estimated by partitioning all n test set examples into b contiguous bins {B_1, B_2, . . ., B_b} ordered by prediction confidence. In this work we use bins which contain 100 predictions, so that we adaptively partition confidence scores on the interval [0, 1]. Other works partition the interval with 15 bins of uniform length. With these b bins, we estimate the RMS Calibration Error empirically with the formula √( Σ_{i=1}^{b} (|B_i| / n) (acc(B_i) − conf(B_i))² ), where acc(B_i) and conf(B_i) are the average accuracy and the average confidence of the predictions in bin B_i. This is separate from classification error because a random classifier with an approximately uniform posterior distribution is approximately calibrated. Also note that adding the "refinement" E_C[P(Y = Ŷ | C = c)(1 − P(Y = Ŷ | C = c))] to the square of the RMS Calibration Error gives us the Brier Score.
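A small NumPy sketch of the adaptive-binning estimator described above (an illustration under the stated binning convention, not the authors' code):

import numpy as np

def rms_calibration_error(confidences, correct, bin_size=100):
    # confidences: model confidence for each prediction; correct: 1 if the
    # prediction was right, 0 otherwise.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    order = np.argsort(confidences)              # order examples by confidence
    confidences, correct = confidences[order], correct[order]

    n = len(confidences)
    total = 0.0
    for start in range(0, n, bin_size):          # contiguous bins of ~100 predictions
        end = min(start + bin_size, n)
        bin_conf = confidences[start:end].mean() # average confidence in the bin
        bin_acc = correct[start:end].mean()      # empirical accuracy in the bin
        total += (end - start) / n * (bin_acc - bin_conf) ** 2
    return float(np.sqrt(total))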
We obtain state-of-the-art results on robustness to data shifts, and we maintain calibration under data shift even when accuracy drops
333
scitldr
Automatic Piano Fingering is a hard task which computers can learn using data. As data collection is hard and expensive, we propose to automate this process by automatically extracting fingerings from public videos and MIDI files, using computer-vision techniques. Running this process on 90 videos results in the largest dataset for piano fingering, with more than 150K notes. We show that when running a previously proposed model for automatic piano fingering on our dataset and then fine-tuning it on manually labeled piano fingering data, we achieve state-of-the-art results. In addition to the fingering extraction method, we also introduce a novel method for transferring deep-learning computer-vision models to out-of-domain data, by fine-tuning them on out-of-domain augmentations proposed by a Generative Adversarial Network (GAN). For demonstration, we anonymously release a visualization of the output of our process for a single video on https://youtu.be/Gfs1UWQhr5Q Learning to play the piano is a hard task taking years to master. One of the challenging aspects when learning a new piece is choosing the fingering with which to play each note. While beginner booklets contain many fingering suggestions, advanced pieces often contain none or only a select few. Automatic prediction of PIANO-FINGERING can be a useful aid for new piano learners, easing the learning process of new pieces. As manually labeling fingering for different sheet music is an exhausting and expensive task, previous work in practice used very few tagged pieces for evaluation, with minimal or no training data. In this paper, we propose an automatic, low-cost method for detecting PIANO-FINGERING from piano playing performances captured on video, which allows training modern, data-hungry, neural networks. We introduce a novel pipeline that adapts and combines several deep learning methods, leading to an automatically labeled PIANO-FINGERING dataset. Our method can serve two purposes: as an automatic "transcript" method that detects PIANO-FINGERING from video and MIDI files when these are available, and as a dataset for training models that then generalize to new pieces. Given a video and a MIDI file, our system produces a probability distribution over the fingers for each played note. Running this process on large corpora of piano pieces played by different artists yields a total of 90 automatically finger-tagged pieces (containing 155,107 notes in total) and results in the first public large-scale PIANO-FINGERING dataset, which we name APFD. This dataset will grow over time as more videos are uploaded to YouTube. We provide empirical evidence that APFD is valuable, both by evaluating a model trained on it over manually labeled videos, and by demonstrating its usefulness: fine-tuning the model on a manually created dataset achieves state-of-the-art results. The process of extracting PIANO-FINGERING from videos alone is a hard task, as it needs to detect keyboard presses, which are often subtle even for the human eye. We therefore turn to MIDI files to obtain this information. The extraction steps are as follows: we begin by locating the keyboard and identifying each key on the keyboard (§3.2). Then, we identify the playing hands on top of the keyboard (§3.3), and detect the fingers given the hands' bounding boxes (§3.4). Next, we align the MIDI file with its corresponding video (§3.6) and finally assign, for every pressed note, the finger which was most likely used to play it (§3.5).
Despite the expectations for steps like hand detection and pose estimation, which have been extensively studied in the computer-vision literature, we find that in practice state-of-the-art models do not excel at these tasks in our scenario. We therefore address these weaknesses by fine-tuning an object detection model (§3.3) on a new dataset we introduce, and by training a CycleGAN to address the different lighting scenarios encountered by the pose estimation model (§3.4). PIANO-FINGERING was previously studied in multiple disciplines, such as music theory and computer animation (; ; ; ; ;). The fingering prediction task is formalized as follows: given a sequence of notes, associate each note with a finger from the set {1, 2, 3, 4, 5} × {L, R}. This is subject to constraints such as the positions of each hand, anatomical plausibility of transitioning between two fingers, the hands' size, etc. Each fingering sequence has a cost, which is derived from a transition cost between two fingers. Early work modeled fingering prediction as a search problem, where the objective is to find the optimal fingering sequence with which to play all the notes. A naive approach to finding the best sequence is to exhaustively evaluate all possible transitions between one note and another, which is not computationally feasible. By defining a transition matrix corresponding to the probability or "difficulty" of transitioning from one note to another, one can calculate a cost function which defines the predicted sequence likelihood. Using a search algorithm on top of the transitions allows finding a globally optimal solution. This solution is not practical either, due to the exponential complexity, and therefore heuristics or pruning are employed to reduce the space complexity. The transition matrix can be manually defined by heuristics or personal estimation, or instead, without relying on a pre-defined set of rules, learned with a Hidden Markov Model (HMM). In practice, leaves the parameter learning to future work, and instead they manually fine-tune the transition matrix. On top of the transition matrix, practitioners suggested using dynamic programming algorithms to solve the search. Another option for handling the huge search space is to use a search algorithm such as Tabu search or variable neighborhood search (Mladenović &) to find a globally plausible solution. These works are either limited by the defined transition rules, or by making different assumptions to facilitate the search space. Such assumptions come in the form of limiting the predictions to a single hand, or limiting the evaluation pieces to contain no chords, rests or substantial lengths during which the player can take their hands off the keyboard.
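For illustration, the transition-cost formulation above can be solved with a simple dynamic program over one hand's fingers; the sketch below is not any prior work's implementation, it assumes a hypothetical transition_cost function encoding how hard it is to move from one finger/note pair to the next (e.g. hand-crafted rules or learned HMM costs), and it ignores chords and rests as in the simplifying assumptions mentioned above.

def best_fingering(notes, transition_cost, fingers=(1, 2, 3, 4, 5)):
    # best[f] = (cost of the cheapest sequence ending with finger f, that sequence)
    best = {f: (0.0, [f]) for f in fingers}
    for prev_note, note in zip(notes, notes[1:]):
        new_best = {}
        for f in fingers:
            cost, seq = min(
                ((best[g][0] + transition_cost(g, prev_note, f, note), best[g][1])
                 for g in fingers),
                key=lambda t: t[0])
            new_best[f] = (cost, seq + [f])
        best = new_best
    # Return the fingering sequence with minimal total transition cost.
    return min(best.values(), key=lambda t: t[0])[1]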
suggests a rule-based and data-based hybrid method, initially estimating fingering decisions using a Directed Acyclic Graph (DAG) based on rule-based comfort constraints, which are smoothed using data recorded from limited playing sessions with motion capture gloves. As MOCAP requires special equipment and may affect the comfort of the player, other work tried to automatically detect piano fingering from video and MIDI files. The pianist's fingernails were marked with colorful markers, which were detected by a computer vision program. As some occlusions can occur, they used a set of rules to correct the detected fingering. In practice, they implemented the system with a camera capturing only 2 octaves (out of 8) and performed a very limited evaluation. The rules they used are simple (such as restricting each played note to one finger, or forbidding two successive notes from being played with the same finger), but far from capturing real-world scenarios. Previous methods for automatically collecting data were costly, as apart from the equipment needed while playing the piece, the data collectors had to pay the participating pianists. In our work, we rely solely on videos from YouTube, meaning costs remain minimal with the ability to scale up to new videos. released a relatively large dataset of PIANO-FINGERING manually labeled by one to six annotators, consisting of 150 pieces with partially annotated scores (324 notes per piece on average) and a total of 48,726 notes matched with 100,044 tags from multiple annotators. This is the largest annotated PIANO-FINGERING corpus to date and a valuable resource for this field. The authors propose multiple methods for modeling the task of PIANO-FINGERING, including HMMs and neural networks, and report the best performance with an HMM-based model. In this work, we use their dataset as a gold dataset for comparison and adapt their model to compare to our automatically generated dataset. There is a genre of online videos in which people upload piano performances where both the piano and the hands are visible. On some channels, people not only include the video but also the MIDI file recorded while playing the piece. We propose to use machine learning techniques to extract fingering information from such videos, enabling the creation of a large dataset of pieces and their fingering information. This requires the orchestration and adaptation of several techniques, which we describe below. The final output we produce is demonstrated in Figure 1, where we colored both the fingers and the played notes based on the pose-estimation model (§3.4) and the predicted fingers that played them (§3.5). Note that the ring fingers of both hands as well as the index finger of the left hand and the middle finger of the right hand do not press any note in this particular frame, but may play a note in others. We get the information about played notes from the MIDI events. We extract videos from youtube.com, played by different piano players, from a specific channel containing both video and MIDI files. In these videos, the piano is filmed at a horizontal angle directly facing the keyboard, such that both the keyboard and hands are visible (as can be seen in Figure 1). MIDI files. Musical Instrument Digital Interface (MIDI) is a standard protocol for the interchange of musical information between musical instruments, synthesizers, and computers.
It consists of a sequence of events describing actions to carry out, when, and allows for additional attributes. In the setup of piano recording, it records what note was played in what time for how long and its pressure strength (velocity). We only use videos that come along with a MIDI file, and use it as the source for the played notes and their timestamp. To allow a correct fingering assignment, we first have to find the keyboard and the bounding boxes of the keys. We detect the keyboard as the largest continuous bright area in the video and identify key boundaries using standard image processing techniques, taking into account the expected number of keys and their predictable location and clear boundaries. For robustness and in order to handle the interfering hands that periodically hide parts of the piano, we combine information from multiple random frames by averaging the predictions from each frame. A straightforward approach for getting fingers locations in an image is to use a pose estimation model directly on the entire image. In practice, common methods for full-body pose estimation such as OpenPose containing hand pose estimation, make assumptions about the wrist and elbow locations to automatically approximate the hands' locations. In the case of piano playing, the elbow does not appear in the videos, therefore these systems don't work. We instead, turn to a pipeline approach where we start by detecting the hands, cropping them, and passing the cropped frames to a pose estimation model that expects the hand to be in the middle of the frame. Object Detection (; ; a; b), and specifically Hand Detection Kölsch &; ) are well studied subjects. However, out of the published work providing source code, the code was either complicated to run (e.g. versioning, mobile-only builds, supporting specific GPU, etc.), containing private datasets, or only detecting hands with no distinction between left and right, which is important in our case. We, therefore, created a small dataset with random frames from different videos, corresponding to 476 hands in total evenly split between left and right 2. We then fine-tuned a pre-trained object detection model (Inception v2 , based on Faster R-CNN , trained on COCO challenge ) on our new dataset. The fine-tuned model works reasonably well and some hand detection bounding boxes are presented in Figure 1. We release this new dataset and the trained model alongside the rest of the resources developed in this work. Having a working hand-detection model, we perform hand detection on every frame in the video. If more than two hands are detected, we take the 2 highest probability defections as the correct bounding boxes. If two hands are detected with the same label ("left-left" or "right-right"), we discard the model's label, and instead choose the leftmost bounding box to have the label "left" and the other to have the label "right" -which is the most common position of hands on the piano. Having the bounding box of each hand is not enough, as in order to assign fingers to notes we need the hand's pose. How can we detect fingers that pressed the different keys? We turn to pose estimation models, a well-studied subject in computer vision and use standard models . Using off-the-shelve pose estimation model, turned to often fail in our scenario. Some failure example are presented in Figure 2c where the first pose is estimated correctly, but the rest either have wrong finger positions, shorter, or broken fingers. 
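The left/right disambiguation heuristic described above for hand detections can be sketched compactly; the detection data structure (boxes with scores and labels) is an illustrative assumption, not the exact format produced by the fine-tuned detector.

def resolve_hands(detections):
    # Keep the two most confident detections and force distinct left/right labels.
    if not detections:
        return {}
    top2 = sorted(detections, key=lambda d: d["score"], reverse=True)[:2]
    if len(top2) == 1:
        return {top2[0]["label"]: top2[0]["box"]}
    a, b = top2
    if a["label"] != b["label"]:
        return {a["label"]: a["box"], b["label"]: b["box"]}
    # Same label predicted twice: trust geometry instead of the classifier,
    # since the left hand is almost always the leftmost box over the keyboard.
    left, right = sorted(top2, key=lambda d: d["box"][0])
    return {"left": left["box"], "right": right["box"]}

dets = [{"box": (120, 40, 220, 140), "score": 0.91, "label": "left"},
        {"box": (400, 35, 500, 130), "score": 0.88, "label": "left"},
        {"box": (600, 50, 640, 90), "score": 0.30, "label": "right"}]
print(resolve_hands(dets))   # geometry overrides the duplicated "left" label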
The videos we use contain visual effects (e.g. LED lights are turned on every time a key is pressed), and such the pose-estimation models exhibit two failures: when the LED colors are warm (red, orange, yellow), the model sees the light as an extension to the finger, and as such poorly estimates where the keypoints are for each finger; when the LED colors are cool (green, blue, white), the over-saturation seems to overwhelm the model, and it mistakenly estimates the hand's pose as much shorter, considering the lit parts of the piano as part of the . Some examples of these errors are presented in Figure 2c. Furthermore, the videos are usually very dark, high-contrast, and blurry due to motion blur, which standard datasets (; ; and models trained on top of them rarely encounter. Given the observation that pose-estimation works well on well lid-images, how can we augment the pose estimation model to other lighting situations? This scenario is similar to sim2real works (; Gupta & Booher;) where one wants to transfer a model from simulations to the real-world. These works learn a mapping function G 1: T → S that transfer instances x i from the target domain T (the real-world) into the source domain S (the simulation), where the mapping is usually achieved by employing a CycleGan . Then, models which are trained on the source domain are used on the transformation of the target domain G 1 (x i) and manage to generalize on the target domain. In our setup, we seek a mapping G 2: S → T that transforms the source domain (i.e the well-lid videos) into the target data (i.e the challenging lighted scenarios). After obtaining the transformation function G 2, we employ the pose estimation model f on the source domain, use the transformation separately, and align the prediction to the new representation. This novel setup benefits from performance boost as we only use the transformation function offline, before training and avoid using it for every prediction. We also benefit of better generalization as we keep good performance on the source domain, and gain major performance on the target domain. We manually detect videos and assign them into their group (well-lit or poorly-lit). Then, we automatically detect and crop hands from random frames, ing in 21,747 well-lit hands, and 12,832 poorly lit hands. We then trained a CycleGAN for multiple epochs and chose 15 training checkpoints that produced different lighting scenarios (some examples can be seen in Figure 2). We then fine-tune a pose-estimation model on the original, well lit frames, and on the 15 transformed frames. This procedure in a model that is robust to different lighting scenarios, as we show in Figures 2b and 2d, demonstrating its performance on different lighting scenarios. Given that we know which notes were pressed in any given frame (see §3.6 below), there is still uncertainty as to which finger pressed them. This uncertainty either comes from imperfect pose estimation, or multiple fingers located on top of a single note. We model the finger press estimation by calculating the probability of a specific finger to have been used, given the hand's pose and the pressed note: where i ∈ for the 5 fingers, h j ∈ {h l, h r} stands for the hand being used (left or right) and n k ∈ corresponding to the played key. We chose to estimate the pressed fingers as a Gaussian distribution N (µ, σ 2), where µ and σ are defined as follows: µ is the center of the key on the x axis and σ is its width. 
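A small sketch of the Gaussian scoring just defined, assuming fingertip x coordinates from the pose estimator; the normalization over the five fingers corresponds to the probability computation described next.

import numpy as np

def finger_press_probs(fingertip_x, key_center, key_width):
    # Per-finger probability of having pressed the key, for a single frame:
    # a Gaussian centred on the key (mu = key centre on the x axis, sigma = key width)
    # evaluated at each estimated fingertip position.
    tips = np.asarray(fingertip_x, dtype=float)              # thumb..pinky x coordinates
    scores = np.exp(-0.5 * ((tips - key_center) / key_width) ** 2)
    return scores / scores.sum()                             # normalize over the five fingers

tips = [100.0, 118.0, 131.0, 146.0, 162.0]                   # illustrative fingertip x positions
probs = finger_press_probs(tips, key_center=132.0, key_width=12.0)
print(probs.argmax() + 1, probs.round(2))                    # most likely finger, 1-indexed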
The score of each finger given a note in a specific frame is defined as: The probability of a given finger to have played a note given a note and a frame: (normalizing g for all fingers) As most keyboard presses last more than one frame, we make use of multiple frames to overcome some of the errors from previous steps and to estimate a more accurate prediction. For this reason, we aggregate the frames that were used during a key press. We treat the first frame as the main signal point, and assign each successive frame an exponentially declining weight Σ n l=1 0.5 l Figure 3: Precision and recall based on confidence for all 6,894 manually tagged notes As finger changes can occur in later frames. Finally, we normalize the weighted sum of probabilities to achieve a probability distribution for all frames. In our dataset, we release all probabilities for each played note, along with the maximum likelihood finger estimation. We define the "confidence" score of the extraction from a single piece, as the product of the highest probability for a finger for each note. Figure 3 shows the precision and recall of the predictions based on a cutoff for the highest probability of the note. We see a positive correlation between confidence threshold and precision, and a negative correlation between confidence and recall, meaning we can get relatively high precision for a small number of notes, or relatively low precision for a high number of notes. We consider the MIDI and video files to be complementary, as they were recorded simultaneously. The MIDI files are the source to which keys were pressed, in what time, and for how long. The videos are the source for the piano, hands, and fingers locations. These two sources are not synchronized, but as they depict the same piano performance, a perfect alignment exist (up to the video frequency resolution). We extract the audio track from the video, and treat the first audio peak as the beginning of the piece, clipping the prior part of the video and aligning it with the first event from the MIDI file. In practice, we determine this starting point as the first point in the signal where the signal amplitude is higher than a fifth of the mean absolute value of the entire signal. This heuristic achieves a reasonable alignment, but we observe some alignment mismatch of 80-200ms. We tackle the misalignment by using a signal from the final system confidence (Section 3.5), where every piece gets a final score estimation after running the whole process, depicting the system confidence on the predicted notes. We look for an alignment that maximizes the system confidence over the entire piece: where M IDI t0 is the starting time of the MIDI file, and V ideo tj is the alignment time of the video. V ideo t0 is obtained by the heuristic alignment described in the previous paragraph. We use the confidence score as a proxy of the alignment precision and search the alignment that maximizes the confidence score of the system. More specifically, given the initial offset from the audio-MIDI alignment, we take a window of 1 second in frames (usually 25) for each side and compute the score of the final system on the entire piece. We choose the offset that in the best confidence score as the alignment offset. We follow the methods described in this section, and use it to label 90 piano pieces from 42 different composers with 155,107 notes in total. On average, each piece contains 1,723 notes. In this section, we present multiple evaluations of our overall system. 
We begin by evaluating the entire process where we assess how the overall system performs on predicting the pressed fingering. Next, we use the dataset we collected and train a PIANO-FINGERING model. We fine-tune this model on a previously manually-annotated dataset for PIANO-FINGERING and show using our data achieves better performance. As piano pieces are usually played by two hands, we avoid modeling each hand separately, and instead use their symmetry property, and simply flip one hand's notes, matching it to the piano scale, following previous work practice . For evaluation, we use the match rate between the prediction and the ground truth. For cases where there is a single ground truth, this is equivalent to accuracy measurement. When more than one labeling is available, we simply average the accuracies with each labeling. As the pose estimation is one of the major components directly affecting our system's performance, we isolate this part in order to estimate the gain from fine-tuning using the CycleGAN (Section 3.4). We manually annotated five random pieces from our dataset by marking the pressing finger for each played note in the video. Then, by using our system (§3.5), we estimate what finger was used for each key. We use the confidence score produced by the model as a threshold to use or discard the key prediction of the model, and report precision, recall and F1 scores of multiple thresholds. Moreover, we compare the scores of these pieces with and without using the CycleGAN. We do so for the five annotated pieces and report these in Table 1. When considering a high confidence score (>90%) both the pre-trained and fine-tuned models correctly mark all considered notes (which consist of between 34-36% of the data). However, when considering decreasing confidences, the fine-tuned model manages to achieve higher precision and higher recall, contributing to an overall higher f1 score. With no confidence threshold (i.e using all fingering predictions), the pre-trained model achieves 93% F1, while the fine-tuned one achieves 97% F1, a 57% error reduction. In order to assess the value of APFD, we seek to show its usefulness on the end task: Automatic Piano Fingering. To this end, we train a standard sequence tagging neural model using our dataset, evaluating on the subset of the videos we manually annotated. Then, we fine-tune this model on PIG , a manually labeled dataset, on which we achieve superior than simply training on that dataset alone. We model the PIANO-FINGERING as a sequence labeling task, where given a sequence of notes n 1, n 2,..., n n we need to predict a sequence of fingering: y 1, y 2,..., y n, where y i ∈ {1, 2, 3, 4, 5} corresponding to 5 fingers of one hand. We employ a standard sequence tagging technique, by embedding each note and using a BiLSTM on top of it. On every contextualized note we then use a Multi-Layer-Perceptron (MLP) to predict the label. The model is trained to minimize cross-entropy loss. This is the same model used in , referred to as DNN (LSTM). didn't use a development set, therefore in this work, we leave 1 piece from the training set and make it a development set. Our dataset-APFD-is composed of 90 pieces, which we split into 75/10 for training and development sets respectively, and use the 5 manually annotated pieces as a test set. We note that the development set is silver data (automatically annotated) and probably contains mistakes. The are summarized in Table 1. 
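For reference, a minimal PyTorch sketch of the note-embedding + BiLSTM + MLP tagger described above; the vocabulary size, dimensions, and MIDI-pitch input encoding are illustrative choices rather than the exact configuration used in the experiments.

import torch
import torch.nn as nn

class FingeringTagger(nn.Module):
    def __init__(self, n_pitches=128, emb_dim=64, hidden=128, n_fingers=5):
        super().__init__()
        self.embed = nn.Embedding(n_pitches, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_fingers))

    def forward(self, notes):                 # notes: (batch, seq_len) MIDI pitches
        h, _ = self.lstm(self.embed(notes))   # (batch, seq_len, 2*hidden) contextualized notes
        return self.mlp(h)                    # per-note finger logits

model = FingeringTagger()
notes = torch.randint(0, 128, (2, 30))        # two toy sequences of 30 notes
fingers = torch.randint(0, 5, (2, 30))        # silver labels, 0-indexed fingers
logits = model(notes)
loss = nn.functional.cross_entropy(logits.reshape(-1, 5), fingers.reshape(-1))
loss.backward()                               # standard cross-entropy sequence tagging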
We run the same architecture with slightly different hyperparameters and achieve 71.4%/64.1% accuracy on our test set and on PIG's test set, respectively. To evaluate the usefulness of our data for PIG, we take the model trained on our silver data and fine-tune it on PIG. This results in 66.8% accuracy, 2.3% above the previous state-of-the-art model, which was HMM-based. We attribute this gain in performance to our dataset, which both increases the number of training examples and allows training larger neural models that excel with more training examples. We also experiment in the opposite direction and fine-tune the model trained on PIG with our data, which results in 73.6% accuracy, better than training on our data alone, which achieves 73.2% accuracy. In this work, we present an automatic method for detecting PIANO-FINGERING from MIDI and video files of a piano performance. We employ this method on a large set of videos and create the first large-scale PIANO-FINGERING dataset, containing 90 unique pieces with 155,107 notes in total. We show that this dataset, although noisy, is valuable: by training a neural network model on it and fine-tuning on a gold dataset, we achieve state-of-the-art results. In future work, we intend to improve the data collection by improving the pose-estimation model, better handling high-speed movements and the proximity of the hands, which often cause errors in estimating their pose. Furthermore, we intend to design improved neural models that take previous fingering predictions into account, in order to achieve better global fingering transitions.
We automatically extract fingering information from videos of piano performances, to be used in automatic fingering prediction models.
334
scitldr
Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable. A recent approach for finding a common representation of the two domains is via domain adversarial training , which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space. However, domain adversarial training faces two critical limitations: 1) if the feature extraction function has high-capacity, then feature distribution matching is a weak constraint, 2) in non-conservative domain adaptation (where no single classifier can perform well in both the source and target domains), training the model to do well on the source domain hurts performance on the target domain. In this paper, we address these issues through the lens of the cluster assumption, i.e., decision boundaries should not cross high-density data regions. We propose two novel and related models: 1) the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes the violation the cluster assumption; 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation. Extensive empirical demonstrate that the combination of these two models significantly improve the state-of-the-art performance on the digit, traffic sign, and Wi-Fi recognition domain adaptation benchmarks. The development of deep neural networks has enabled impressive performance in a wide variety of machine learning tasks. However, these advancements often rely on the existence of a large amount of labeled training data. In many cases, direct access to vast quantities of labeled data for the task of interest (the target domain) is either costly or otherwise absent, but labels are readily available for related training sets (the source domain). A notable example of this scenario occurs when the source domain consists of richly-annotated synthetic or semi-synthetic data, but the target domain consists of unannotated real-world data BID28 ). However, the source data distribution is often dissimilar to the target data distribution, and the ing significant covariate shift is detrimental to the performance of the source-trained model when applied to the target domain BID27.Solving the covariate shift problem of this nature is an instance of domain adaptation BID2. In this paper, we consider a challenging setting of domain adaptation where 1) we are provided with fully-labeled source samples and completely-unlabeled target samples, and 2) the existence of a classifier in the hypothesis space with low generalization error in both source and target domains is not guaranteed. Borrowing approximately the terminology from BID2, we refer to this setting as unsupervised, non-conservative domain adaptation. We note that this is in contrast to conservative domain adaptation, where we assume our hypothesis space contains a classifier that performs well in both the source and target domains. To tackle unsupervised domain adaptation, BID9 proposed to constrain the classifier to only rely on domain-invariant features. This is achieved by training the classifier to perform well on the source domain while minimizing the divergence between features extracted from the source versus target domains. 
To achieve divergence minimization, BID9 employ domain adversarial training. We highlight two issues with this approach: 1) when the feature function has high-capacity and the source-target supports are disjoint, the domain-invariance constraint is potentially very weak (see Section 3), and 2) good generalization on the source domain hurts target performance in the non-conservative setting. BID24 addressed these issues by replacing domain adversarial training with asymmetric tri-training (ATT), which relies on the assumption that target samples that are labeled by a sourcetrained classifier with high confidence are correctly labeled by the source classifier. In this paper, we consider an orthogonal assumption: the cluster assumption BID5, that the input distribution contains separated data clusters and that data samples in the same cluster share the same class label. This assumption introduces an additional bias where we seek decision boundaries that do not go through high-density regions. Based on this intuition, we propose two novel models: 1) the Virtual Adversarial Domain Adaptation (VADA) model which incorporates an additional virtual adversarial training BID20 and conditional entropy loss to push the decision boundaries away from the empirical data, and 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model which uses natural gradients to further refine the output of the VADA model while focusing purely on the target domain. We demonstrate that 1. In conservative domain adaptation, where the classifier is trained to perform well on the source domain, VADA can be used to further constrain the hypothesis space by penalizing violations of the cluster assumption, thereby improving domain adversarial training.2. In non-conservative domain adaptation, where we account for the mismatch between the source and target optimal classifiers, DIRT-T allows us to transition from a joint (source and target) classifier (VADA) to a better target domain classifier. Interestingly, we demonstrate the advantage of natural gradients in DIRT-T refinement steps. We report for domain adaptation in digits classification (MNIST-M, MNIST, SYN DIGITS, SVHN), traffic sign classification (SYN SIGNS, GTSRB), general object classification (STL-10, CIFAR-10), and Wi-Fi activity recognition . We show that, in nearly all experiments, VADA improves upon previous methods and that DIRT-T improves upon VADA, setting new state-of-the-art performances across a wide range of domain adaptation benchmarks. In adapting MNIST → SVHN, a very challenging task, we out-perform ATT by over 20%. Given the extensive literature on domain adaptation, we highlight several works most relevant to our paper. BID27; BID17 proposed to correct for covariate shift by re-weighting the source samples such that the discrepancy between the target distribution and reweighted source distribution is minimized. Such a procedure is problematic, however, if the source and target distributions do not contain sufficient overlap. BID13 BID16; BID9 proposed to instead project both distributions into some feature space and encourage distribution matching in the feature space. BID9 in particular encouraged feature matching via domain adversarial training, which corresponds approximately to Jensen-Shannon divergence minimization BID11. To better perform nonconservative domain adaptation, BID24 proposed to modify tri-training for domain adaptation, leveraging the assumption that highly-confident predictions are correct predictions . 
Several of aforementioned methods are based on BID1's theoretical analysis of domain adaptation, which states the following, Theorem 1 BID1 ) Let H be the hypothesis space and let (X s, s) and (X t, t) be the two domains and their corresponding generalization error functions. Then for any h ∈ H, DISPLAYFORM0 where d H∆H denotes the H∆H-distance between the domains X s and X t, DISPLAYFORM1 Intuitively, d H∆H measures the extent to which small changes to the hypothesis in the source domain can lead to large changes in the target domain. It is evident that d H∆H relates intimately to the complexity of the hypothesis space and the divergence between the source and target domains. For infinite-capacity models and domains with disjoint supports, d H∆H is maximal. A critical component to our paper is the cluster assumption, which states that decision boundaries should not cross high-density regions BID5. This assumption has been extensively studied and leveraged for semi-supervised learning, leading to proposals such as conditional entropy minimization BID12 and pseudo-labeling BID15. More recently, the cluster assumption has led to many successful deep semi-supervised learning algorithms such as semi-supervised generative adversarial networks, virtual adversarial training BID20, and self/temporal-ensembling BID14 BID29. Given the success of the cluster assumption in semi-supervised learning, it is natural to consider its application to domain adaptation. Indeed, BID0 formalized the cluster assumption through the lens of probabilistic Lipschitzness and proposed a nearest-neighbors model for domain adaptation. Our work extends this line of research by showing that the cluster assumption can be applied to deep neural networks to solve complex, high-dimensional domain adaptation problems. Independently of our work, BID8 demonstrated the application of selfensembling to domain adaptation. However, our work additionally considers the application of the cluster assumption to non-conservative domain adaptation. Before describing our model, we first highlight that domain adversarial training may not be sufficient for domain adaptation if the feature extraction function has high-capacity. Consider a classifier h θ, parameterized by θ, that maps inputs to the (K − 1)-simplex (denote as C), where K is the number of classes. Suppose the classifier h = g • f can be decomposed as the composite of an embedding function f θ: X → Z and embedding classifier g θ: Z → C. For the source domain, let D s be the joint distribution over input x and one-hot label y and let X s be the marginal input distribution. DISPLAYFORM0 where the supremum ranges over discriminators D: Z →. Then L y is the cross-entropy objective and D is a domain discriminator. Domain adversarial training minimizes the objective DISPLAYFORM1 where λ d is a weighting factor. Minimization of L d encourages the learning of a feature extractor f for which the Jensen-Shannon divergence between f (X s) and f (X t) is small. 2 BID9 suggest that successful adaptation tends to occur when the source generalization error and feature divergence are both small. It is easy, however, to construct situations where this suggestion fails. 
In particular, if f has infinitecapacity and the source-target supports are disjoint, then f can employ arbitrary transformations to the target domain so as to match the source feature distribution (see Appendix E for formalization).We verify empirically that, for sufficiently deep layers, jointly achieving small source generalization error and feature divergence does not imply high accuracy on the target task TAB6. Given the limitations of domain adversarial training, we wish to identify additional constraints that one can place on the model to achieve better, more reliable domain adaptation. In this paper, we apply the cluster assumption to domain adaptation. The cluster assumption states that the input distribution X contains clusters and that points in the same cluster come from the same class. This assumption has been extensively studied and applied successfully to a wide range of classification tasks (see Section 2). If the cluster assumption holds, the optimal decision boundaries should occur far away from data-dense regions in the space of X BID5. Following BID12, we achieve this behavior via minimization of the conditional entropy with respect to the target distribution, Intuitively, minimizing the conditional entropy forces the classifier to be confident on the unlabeled target data, thus driving the classifier's decision boundaries away from the target data BID12. In practice, the conditional entropy must be empirically estimated using the available data. However, BID12 note that this approximation breaks down if the classifier h is not locally-Lipschitz. Without the locally-Lipschitz constraint, the classifier is allowed to abruptly change its prediction in the vicinity of the training data points, which 1) in a unreliable empirical estimate of conditional entropy and 2) allows placement of the classifier decision boundaries close to the training samples even when the empirical conditional entropy is minimized. To prevent this, we propose to explicitly incorporate the locally-Lipschitz constraint via virtual adversarial training BID20 and add to the objective function the additional term DISPLAYFORM0 which enforces classifier consistency within the norm-ball neighborhood of each sample x. Note that virtual adversarial training can be applied with respect to either the target or source distributions. We can combine the conditional entropy minimization objective and domain adversarial training to yield min. DISPLAYFORM1 a basic combination of domain adversarial training and semi-supervised training objectives. We refer to this as the Virtual Adversarial Domain Adaptation (VADA) model. Empirically, we observed that the hyperparameters (λ d, λ s, λ t) are easy to choose and work well across multiple tasks (Appendix B).H∆H-Distance Minimization. VADA aligns well with the theory of domain adaptation provided in Theorem 1. Let the loss, DISPLAYFORM2 denote the degree to which the target-side cluster assumption is violated. Modulating λ t enables VADA to trade-off between hypotheses with low target-side cluster assumption violation and hypotheses with low source-side generalization error. Setting λ t > 0 allows rejection of hypotheses with high target-side cluster assumption violation. By rejecting such hypotheses from the hypothesis space H, VADA reduces d H∆H and yields a tighter bound on the target generalization error. We verify empirically that VADA achieves significant improvements over existing models on multiple domain adaptation benchmarks (Table 1). 
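To make the combined objective concrete, a hedged PyTorch-style sketch of the VADA loss is given below. The encoder, classifier, and discriminator networks, the VAT radius eps, and the single power iteration are illustrative choices; only the structure of the loss follows the text, and the discriminator's own maximization step is assumed to be handled separately by alternating updates.

import torch
import torch.nn.functional as F

def conditional_entropy(logits):
    # Empirical estimate of the target-side conditional entropy term.
    p = F.softmax(logits, dim=1)
    return -(p * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

def vat_loss(model, x, eps=1.0, xi=1e-6):
    # Virtual adversarial training penalty with a single power iteration.
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)
    d = torch.randn_like(x, requires_grad=True)
    kl = F.kl_div(F.log_softmax(model(x + xi * d), dim=1), p, reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]
    r_adv = eps * F.normalize(grad.flatten(1), dim=1).view_as(x)
    return F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p, reduction="batchmean")

def vada_loss(feat, clf, disc, xs, ys, xt, lam_d=1e-2, lam_s=1.0, lam_t=1e-2):
    model = lambda x: clf(feat(x))
    loss = F.cross_entropy(model(xs), ys)                        # L_y on the source domain
    d_t = disc(feat(xt))                                         # encoder-side adversarial term L_d
    loss = loss + lam_d * F.binary_cross_entropy_with_logits(d_t, torch.ones_like(d_t))
    loss = loss + lam_s * vat_loss(model, xs)                    # source-side smoothness
    loss = loss + lam_t * (vat_loss(model, xt) + conditional_entropy(model(xt)))
    return loss

The last line groups the target-side VAT and conditional entropy terms, mirroring the role of L_t as the cluster-assumption violation penalty weighted by lambda_t.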
VADA DIRT-T In non-conservative domain adaptation, we assume the following inequality, DISPLAYFORM0 where (s, t) are generalization error functions for the source and target domains. This means that, for a given hypothesis class H, the optimal classifier in the source domain does not coincide with the optimal classifier in the target domain. We assume that the optimality gap in Eq. FORMULA0 from violation of the cluster assumption. In other words, we suppose that any source-optimal classifier drawn from our hypothesis space necessarily violates the cluster assumption in the target domain. Insofar as VADA is trained on the source domain, we hypothesize that a better hypothesis is achievable by introducing a secondary training phase that solely minimizes the target-side cluster assumption violation. Under this assumption, the natural solution is to initialize with the VADA model and then further minimize the cluster assumption violation in the target domain. In particular, we first use VADA to learn an initial classifier h θ0. Next, we incrementally push the classifier's decision boundaries away from data-dense regions by minimizing the target-side cluster assumption violation loss L t in Eq.. We denote this procedure Decision-boundary Iterative Refinement Training (DIRT). Stochastic gradient descent minimizes the loss L t by selecting gradient steps ∆θ according to the following objective, min. DISPLAYFORM0 DISPLAYFORM1 which defines the neighborhood in the parameter space. This notion of neighborhood is sensitive to the parameterization of the model; depending on the parameterization, a seemingly small step ∆θ may in a vastly different classifier. This contradicts our intention of incrementally and locally pushing the decision boundaries to a local conditional entropy minimum, which requires that the decision boundaries of h θ+∆θ stay close to that of h θ. It is therefore important to define a neighborhood that is parameterization-invariant. Following BID21, we instead select ∆θ using the following objective, min. DISPLAYFORM2 Each optimization step now solves for a gradient step ∆θ that minimizes the conditional entropy, subject to the constraint that the Kullback-Leibler divergence between h θ (x) and h θ+∆θ (x) is small for x ∼ X t. The corresponding Lagrangian suggests that one can instead minimize a sequence of optimization problems min. DISPLAYFORM3 that approximates the application of a series of natural gradient steps. In practice, each of the optimization problems in Eq. FORMULA0 can be solved approximately via a finite number of stochastic gradient descent steps. We denote the number of steps taken to be the refinement interval B. Similar to BID29, we use the Adam Optimizer with Polyak averaging BID22. We interpret h θn−1 as a (sub-optimal) teacher for the student model h θn, which is trained to stay close to the teacher model while seeking to reduce the cluster assumption violation. As a , we denote this model as Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T).Weakly-Supervised Learning. This sequence of optimization problems has a natural interpretation that exposes a connection to weakly-supervised learning. In each optimization problem, the teacher model h θn−1 pseudo-labels the target samples with noisy labels. Rather than naively training the student model h θn on the noisy labels, the additional training signal L t allows the student model to place its decision boundaries further from the data. 
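A minimal sketch of the resulting refinement procedure is given below, assuming the student is initialized from VADA and each sub-problem is approximated by B stochastic gradient steps; the Adam settings follow the text, while the sketch keeps only the conditional entropy part of L_t and omits the VAT term and Polyak averaging for brevity. The data loader interface is an assumption.

import copy
import torch
import torch.nn.functional as F

def dirt_t(student, target_loader, n_problems=20, B=5000, lam_t=1e-2, beta=1e-2):
    opt = torch.optim.Adam(student.parameters(), lr=1e-3, betas=(0.5, 0.999))
    teacher = copy.deepcopy(student).eval()
    data = iter(target_loader)
    for _ in range(n_problems):                     # sequence of optimization problems
        for _ in range(B):                          # refinement interval
            try:
                xt, _ = next(data)
            except StopIteration:
                data = iter(target_loader)
                xt, _ = next(data)
            logits = student(xt)
            # Target-side cluster assumption violation (conditional entropy part of L_t).
            p = F.softmax(logits, dim=1)
            cond_ent = -(p * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
            # KL(teacher || student): keep the new decision boundary near the old one.
            with torch.no_grad():
                p_teach = F.softmax(teacher(xt), dim=1)
            kl = F.kl_div(F.log_softmax(logits, dim=1), p_teach, reduction="batchmean")
            loss = lam_t * cond_ent + beta * kl
            opt.zero_grad()
            loss.backward()
            opt.step()
        teacher = copy.deepcopy(student).eval()     # the refined student becomes the next teacher
    return student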
If the clustering assumption holds and the initial noisy labels are sufficiently similar to the true labels, conditional entropy minimization can improve the placement of the decision boundaries BID23. Adaptation. An alternative interpretation is that DIRT-T is the recursive extension of VADA, where the act of pseudo-labeling of the target distribution constructs a new "source" domain (i.e. target distribution X t with pseudo-labels). The sequence of optimization problems can then be seen as a sequence of non-conservative domain adaptation problems in which DISPLAYFORM0 is the true conditional label distribution in the target domain. Since d H∆H is strictly zero in this sequence of optimization problems, domain adversarial training is no longer necessary. Furthermore, if L t minimization does improve the student classifier, then the gap in Eq. should get smaller each time the source domain is updated. In principle, our method can be applied to any domain adaptation tasks so long as one can define a reasonable notion of neighborhood for virtual adversarial training BID19. For comparison against BID24 and BID8, we focus on visual domain adaptation and evaluate on MNIST, MNIST-M, Street View House Numbers (SVHN), Synthetic Digits (SYN DIGITS), Synthetic Traffic Signs (SYN SIGNS), the German Traffic Signs Recognition Benchmark (GTSRB), CIFAR-10, and STL-10. For non-visual domain adaptation, we evaluate on Wi-Fi activity recognition. Architecture We use a small CNN for the digits, traffic sign, and Wi-Fi domain adaptation experiments, and a larger CNN for domain adaptation between CIFAR-10 and STL-10. Both architectures are available in Appendix A. For fair comparison, we additionally report the performance of source-only baseline models and demonstrate that the significant improvements are attributable to our proposed method. Replacing gradient reversal. In contrast to BID9, which proposed to implement domain adversarial training via gradient reversal, we follow BID11 and instead optimize via alternating updates to the discriminator and encoder (see Appendix C).Instance normalization. We explored the application of instance normalization as an image preprocessing step. This procedure makes the classifier invariant to channel-wide shifts and rescaling of pixel intensities. A discussion of instance normalization for domain adaptation is provided in Appendix D. We show in Figure 3 the effect of applying instance normalization to the input image. Figure 3: Effect of applying instance normalization to the input image. In clockwise direction: MNIST-M, GTSRB, SVHN, and CIFAR-10. In each quadrant, the top row is the original image, and the bottom row is the instance-normalized image. Hyperparameters. For each task, we tuned the four hyperparameters (λ d, λ s, λ t, β) by randomly selecting 1000 labeled target samples from the training set and using that as our validation set. We observed that extensive hyperparameter-tuning is not necessary to achieve state-of-the-art performance. In all experiments with instance-normalized inputs, we restrict our hyperparameter search for each task to λ d = {0, 10 −2}, λ s = {0, 1}, λ t = {10 −2, 10 −1}. We fixed β = 10 −2. Note that the decision to turn (λ d, λ s) on or off that can often be determined a priori. A complete list of the hyperparameters is provided in Appendix B. Table 1: Test set accuracy on visual domain adaptation benchmarks. In all settings, both VADA and DIRT-T achieve state-of-the-art performance in all settings. MNIST → MNIST-M. 
We first evaluate the adaptation from MNIST to MNIST-M. MNIST-M is constructed by blending MNIST digits with random color patches from the BSDS500 dataset. MNIST ↔ SVHN. The distribution shift is exacerbated when adapting between MNIST and SVHN. Whereas MNIST consists of black-and-white handwritten digits, SVHN consists of crops of colored, street house numbers. Because MNIST has a significantly lower intrinsic dimensionality that SVHN, the adaptation from MNIST → SVHN is especially challenging when the input is not pre-processed via instance normalization. When instance normalization is applied, we achieve a strong state-ofthe-art performance 76.5% and an equally impressive margin-of-improvement over source-only of 35.6%. Interestingly, by reducing the refinement interval B and taking noisier natural gradient steps, we were occasionally able to achieve accuracies as high as 87%. However, due to the high-variance associated with this, we omit reporting this configuration in Table 1.SYN DIGITS → SVHN. The adaptation from SYN DIGITS → SVHN reflect a common adaptation problem of transferring from synthetic images to real images. The SYN DIGITS dataset consist of 500000 images generated from Windows fonts by varying the text, positioning, orientation, , stroke color, and the amount of blur. SYN SIGNS → GTSRB. This setting provides an additional demonstration of adapting from synthetic images to real images. Unlike SYN DIGITS → SVHN, SYN SIGNS → GTSRB contains 43 classes instead of 10.STL ↔ CIFAR. Both STL-10 and CIFAR-10 are 10-class image datasets. These two datasets contain nine overlapping classes. Following the procedure in BID8, we removed the non-overlapping classes ("frog" and "monkey") and reduce to a 9-class classification problem. We achieve state-of-the-art performance in both adaptation directions. In STL → CIFAR, we achieve a 11.7% margin-of-improvement and a performance accuracy of 73.3%. Note that because STL-10 contains a very small training set, it is difficult to estimate the conditional entropy, thus making DIRT-T unreliable for CIFAR → STL. Wi-Fi Activity Recognition. To evaluate the performance of our models on a non-visual domain adaptation task, we applied VADA and DIRT-T to the Wi-Fi Activity Recognition Dataset . The Wi-Fi Activity Recognition Dataset is a classification task that takes the WiFi Channel State Information (CSI) data stream as input x to predict motion activity within an indoor area as output y. Domain adaptation is necessary when the training and testing data are collected from different rooms, which we denote as Rooms A and B. TAB2 shows that VADA significantly improves classification accuracy compared to Source-Only and DANN by 17.3% and 15% respectively. However, DIRT-T does not lead to further improvements on this dataset. We perform experiments in Appendix F which suggests that VADA already achieves strong clustering in the target domain for this dataset, and therefore DIRT-T is not expected to yield further performance improvement. Table 3: Additional comparison of the margin of improvement computed by taking the reported performance of each model and subtracting the reported source-only performance in the respective papers. W.I.N.I. indicates "with instance-normalized input."Overall. We achieve state-of-the-art across all tasks. For a fairer comparison against ATT and the Π-model, Table 3 provides the improvement margin over the respective source-only performance reported in each paper. 
In four of the tasks (MNIST → MNIST-M, SVHN → MNIST, MNIST → SVHN, STL → CIFAR), we achieve substantial margin of improvement compared to previous models. In the remaining three tasks, our improvement margin over the source-only model is competitive against previous models. Our closest competitor is the Π-model. However, unlike the Π-model, we do not perform data augmentation. It is worth noting that DIRT-T consistently improves upon VADA. Since DIRT-T operates by incrementally pushing the decision boundaries away from the target domain data, it relies heavily on the cluster assumption. DIRT-T's empirical success therefore demonstrates the effectiveness of leveraging the cluster assumption in unsupervised domain adaptation with deep neural networks.6.3 ANALYSIS OF VADA AND DIRT-T To study the relative contribution of the virtual adversarial training in the VADA and DIRT-T objectives (Eq. FORMULA6 and Eq. respectively), we perform an extensive ablation analysis in Table 4. The removal of the virtual adversarial training component is denoted by the "no-vat" subscript. Our show that VADA no-vat is sufficient for out-performing DANN in all but one task. The further ability for DIRT-T no-vat to improve upon VADA no-vat demonstrates the effectiveness of conditional entropy minimization. Ultimately, in six of the seven tasks, both virtual adversarial training and conditional entropy minimization are essential for achieving the best performance. The empirical importance of incorporating virtual adversarial training shows that the locally-Lipschitz constraint is beneficial for pushing the classifier decision boundaries away from data. Table 4: Test set accuracy in ablation experiments, starting from the DANN model. The "no-vat" subscript denote models where the virtual adversarial training component is removed. When considering Eq. FORMULA0, it is natural to ask whether defining the neighborhood with respect to the classifier is truly necessary. In FIG2, we demonstrate in SVHN → MNIST and STL → CIFAR that removal of the KL-term negatively impacts the model. Since the MNIST data manifold is low-dimensional and contains easily identifiable clusters, applying naive gradient descent (Eq. FORMULA0) can also boost the test accuracy during initial training. However, without the KL constraint, the classifier can sometimes deviate significantly from the neighborhood of the previous classifier, and the ing spikes in the KL-term correspond to sharp drops in target test accuracy. In STL → CIFAR, where the data manifold is much more complex and contains less obvious clusters, naive gradient descent causes immediate decline in the target test accuracy. We further analyze the behavior of VADA and DIRT-T by showing T-SNE embeddings of the last hidden layer of the model trained to adapt from MNIST → SVHN. In FIG3, source-only training shows strong clustering of the MNIST samples (blue) and performs poorly on SVHN (red). VADA offers significant improvement and exhibits signs of clustering on SVHN. DIRT-T begins with the VADA initialization and further enhances the clustering, ing in the best performance on MNIST → SVHN. In TAB6, we applied domain adversarial training to various layers of a Domain Adversarial Neural Network BID9 trained to adapt MNIST → SVHN. 
With the exception of layers L − 2 and L − 0, which experienced training instability, the general observation is that as the layer gets deeper, the additional capacity of the corresponding embedding function allows better matching of the source and target distributions without hurting source generalization accuracy. This demonstrates that the combination of low divergence and high source accuracy does not imply better adaptation to the target domain. Interestingly, when the classifier is regularized to be locally-Lipschitz via VADA, the combination of low divergence and high source accuracy appears to correlate more strongly with better adaptation. In this paper, we presented two novel models for domain adaptation inspired by the cluster assumption. Our first model, VADA, performs domain adversarial training with an added term that penalizes violations of the cluster assumption. Our second model, DIRT-T, is an extension of VADA that recursively refines the VADA classifier by untethering the model from the source training signal and applying approximate natural gradients to further minimize the cluster assumption violation. Our experiments demonstrate the effectiveness of the cluster assumption: VADA achieves strong performance across several domain adaptation benchmarks, and DIRT-T further improves VADA performance. Our proposed models open up several possibilities for future work. One possibility is to apply DIRT-T to weakly supervised learning; another is to improve the natural gradient approximation via K-FAC BID18 and PPO BID25. Given the strong performance of our models, we also recommend them for other downstream domain adaptation applications. DISPLAYFORM0 Gaussian noise, σ = 1 DISPLAYFORM1 Gaussian noise, σ = 1 DISPLAYFORM2 We observed that extensive hyperparameter-tuning is not necessary to achieve state-of-the-art performance. To demonstrate this, we restrict our hyperparameter search for each task to λ d = {0, 10 −2}, λ s = {0, 1}, λ t = {10 −2, 10 −1}, in all experiments with instance-normalized inputs. We fixed β = 10 −2. Note that the decision to turn (λ d, λ s) on or off that can often be determined a priori based on prior belief regarding the extent to covariate shift. In the absence of such prior belief, a reliable choice is (λ d = 10 −2, λ s = 1, λ t = 10 −2, β = 10 −2). When the target domain is MNIST/MNIST-M, the task is sufficiently simple that we only allocate B = 500 iterations to each optimization problem in Eq.. In all other cases, we set the refinement interval B = 5000. We apply Adam Optimizer (learning rate = 0.001, β 1 = 0.5, β 2 = 0.999) with Polyak averaging (more accurately, we apply an exponential moving average with momentum = 0.998 to the parameter trajectory). VADA was trained for 80000 iterations and DIRT-T takes VADA as initialization and was trained for {20000, 40000, 60000, 80000} iterations, with number of iterations chosen as hyperparameter. We note from BID11 that the gradient of ∇ θ ln(1 − D(f θ (x))) is tends to have smaller norm than −∇ θ ln D(f θ (x)) during initial training since the latter rescales the gradient by 1/D(f θ (x)). Following this observation, we replace the gradient reversal procedure with alternating minimization of DISPLAYFORM0 The choice of using gradient reversal versus alternating minimization reflects a difference in choice of approximating the mini-max using saturating versus non-saturating optimization BID7. 
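For illustration, the alternating updates can be sketched as follows; the feature extractor, discriminator, and optimizers are placeholders, and in practice the encoder step would be folded into the full VADA objective rather than run in isolation.

import torch
import torch.nn.functional as F

def discriminator_step(feat, disc, opt_d, xs, xt):
    # Discriminator learns to output 1 on source features and 0 on target features.
    with torch.no_grad():                       # encoder is frozen during this step
        zs, zt = feat(xs), feat(xt)
    d_s, d_t = disc(zs), disc(zt)
    loss = F.binary_cross_entropy_with_logits(d_s, torch.ones_like(d_s)) + \
           F.binary_cross_entropy_with_logits(d_t, torch.zeros_like(d_t))
    opt_d.zero_grad(); loss.backward(); opt_d.step()

def encoder_adversarial_step(feat, disc, opt_f, xt, lam_d=1e-2):
    # Non-saturating encoder update: make target features look like source features.
    d_t = disc(feat(xt))
    loss = lam_d * F.binary_cross_entropy_with_logits(d_t, torch.ones_like(d_t))
    opt_f.zero_grad(); loss.backward(); opt_f.step()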
In some of our initial experiments, we observed the replacement of gradient reversal with alternating minimization stabilizes domain adversarial training. However, we encourage practitioners to try either optimization strategy when applying VADA. Theorem 1 suggests that we should identify ways of constraining the hypothesis space without hurting the global optimal classifier for the joint task. We propose to further constrain our model by introducing instance normalization as an image pre-processing step for the input data. Instance normalization was proposed for style transfer BID30 and applies the operation DISPLAYFORM0 where x (i) ∈ R H×W ×C denotes the i th sample with (H, W, C) corresponding to the height, width, and channel dimensions, and where µ, σ: R H×W ×C → R C are functions that compute the mean and standard deviation across the spatial dimensions. A notable property of instance normalization is that it is invariant to channel-wide scaling and shifting of the input elements. Formally, consider scaling and shift variables γ, β ∈ R C. If γ 0 and σ(DISPLAYFORM1 For visual data the application of instance normalization to the input layer makes the classifier invariant to channel-wide shifts and scaling of the pixel intensities. For most visual tasks, sensitivity to channel-wide pixel intensity changes is not critical to the success of the classifier. As such, instance normalization of the input may help reduce d H∆H without hurting the globally optimal classifier. Interestingly, Figure 3 shows that input instance normalization is not equivalent to gray-scaling, since color is partially preserved. To test the effect of instance normalization, we report both with and without the use of instance-normalized inputs. We denote the source and target distributions respectively as p s (x, y) and p t (x, y). Let the source covariate distribution p s (x) define the random variable X s that have support supp(X s) = X s and let (X t, X t) be analogously defined for the target domain. Both X s and X t are subsets of R n. Let p s (y) and p t (y) define probabilities over the support Y = {1, . . ., K}. We consider any embedding function f: R n → R m, where R m is the embedding space, and any embedding classifier g: R m → C, where C is the (K − 1)-simplex. We denote a classifier h = g • f has the composite of an embedding function with an embedding classifier. For simplicity, we restrict our analysis to the simple case where K = 2, i.e. where Y = {0, 1}. Furthermore, we assume that for any δ ∈, there exists a subset Ω ⊆ R n where p s (x ∈ Ω) = δ. We impose a similar condition on p t (x).For a joint distribution p(x, y), we denote the generalization error of a classifier as DISPLAYFORM0 Note that for a given classifier h: R n →, the corresponding hard classifier is k(x) = 1{h(x) > 0.5}. We further define the set Ω ⊆ R n such that DISPLAYFORM1 In a slight abuse of notation, we define the generalization error (Ω) with respect to Ω as DISPLAYFORM2 An optimal Ω * p is a partitioning of DISPLAYFORM3 such that generalization error under the distribution p(x, y) is minimized. E.1 GOOD TARGET-DOMAIN ACCURACY IS NOT GUARANTEED Domain adversarial training seeks to find a single classifier h used for both the source p s and target p t distributions. To do so, domain adversarial training sets up the objective DISPLAYFORM4 DISPLAYFORM5 where F and G are the hypothesis spaces for the embedding function and embedding classifier. 
Intuitively, domain adversarial training operates under the hypothesis that good source generalization error in conjunction with source-target feature matching implies good target generalization error. We shall see, however, that if X s ∩ X t = ∅ and F is sufficiently complex, this implication does not necessarily hold. Let F contain all functions mapping R n → R m, i.e. F has infinite capacity. Suppose G contains the function g(z) = 1{z = 1 m} and X s ∩ X t = ∅. We consider the set DISPLAYFORM6 Such a set of classifiers satisfies the feature-matching constraint while achieving source generalization error no worse than the optimal source-domain hard classifier. It suffices to show that H * includes hypotheses that perform poorly in the target domain. We first show H * is not an empty set by constructing an element of this set. Choose a partitioning Ω where DISPLAYFORM7 Consider the embedding function DISPLAYFORM8 Let g(z) = 1{z = 1 m}. It follows that the composite classifier DISPLAYFORM9 Next, we show that a classifier h ∈ H * does not necessarily achieve good target generalization error. Consider the partitioningΩ which solves the following optimization problem DISPLAYFORM10 DISPLAYFORM11 Such a partitioningΩ is the worst-case partitioning subject to the probability mass constraint. It follows that worse case h ∈ H * has generalization error pt (h) = max h∈H * pt (h) ≥ pt (hΩ).To provide intuition that pt (h) is potentially very large, consider hypothetical source and target domains where X s ∩ X t = ∅ and p t (x ∈ Ω * pt) = p s (x ∈ Ω * ps) = 0.5. The worst-case partitioning subject to the probability mass constraint is simplyΩ = R n \ Ω Define the embedding functions DISPLAYFORM12 f (x) = 1 m if (x ∈ X s ∩ Ω s) ∨ (x ∈ X t ∩ (R n \ Ω t)) 0 m otherwise. Let g (z) = g(z) = 1{z = 1 m}. It follows that the composite classifiers h = g • f and h = g • f are elements ofH.From the definition of d H∆H, we see that DISPLAYFORM13 TheH∆H-divergence thus achieves the maximum value of 2. Our analysis assumes infinite capacity embedding functions and the ability to solve optimization problems exactly. The empirical success of domain adversarial training suggests that the use of finite-capacity convolutional neural networks combined with stochastic gradient-based optimization provides the necessary regularization for domain adversarial training to work. The theoretical characterization of domain adversarial training in the case finite-capacity convolutional neural networks and gradient-based learning remains a challenging but important open research problem. To evaluate the performance of our models on a non-visual domain adaptation task, we applied VADA and DIRT-T to the Wi-Fi Activity Recognition Dataset . The Wi-Fi Activity Recognition Dataset is a classification task that takes the Wi-Fi Channel State Information (CSI) data stream as input x to predict motion activity within an indoor area as output y. The dataset collected the CSI data stream samples associated with seven activities, denoted as "bed", "fall", "walk", "pick up", "run", "sit down", and "stand up".However, the joint distribution over the CSI data stream and motion activity changes depending on the room in which the data was collected. Since the data was collected for multiple rooms, we selected two rooms (denoted here as Room A and Room B) and constructed the unsupervised domain adaptation task by using Room A as the source domain and Room B as the target domain. 
We compare the performance of DANN, VADA, and DIRT-T on the Wi-Fi domain adaptation task in TAB2, using the hyperparameters (λ_d = 0, λ_s = 0, λ_t = 10^-2, β = 10^-2). TAB2 shows that VADA significantly improves classification accuracy compared to Source-Only and DANN. However, DIRT-T does not lead to further improvements on this dataset. We believe this is attributable to VADA already successfully pushing the decision boundary away from data-dense regions in the target domain. As a result, further application of DIRT-T would not lead to better decision boundaries. To validate this hypothesis, we visualize the t-SNE embeddings for VADA and DIRT-T in FIG4 and show that VADA is already capable of yielding strong clustering in the target domain. To verify that the decision boundary indeed did not change significantly, we additionally provide the confusion matrix between the VADA and DIRT-T predictions in the target domain (Fig. 7).
SOTA on unsupervised domain adaptation by leveraging the cluster assumption.
335
scitldr
In this paper, we propose Continuous Graph Flow, a generative continuous flow based method that aims to model complex distributions of graph-structured data. Once learned, the model can be applied to an arbitrary graph, defining a probability density over the random variables represented by the graph. It is formulated as an ordinary differential equation system with shared and reusable functions that operate over the graphs. This leads to a new type of neural graph message passing scheme that performs continuous message passing over time. This class of models offers several advantages: a flexible representation that can generalize to variable data dimensions; ability to model dependencies in complex data distributions; reversible and memory-efficient; and exact and efficient computation of the likelihood of the data. We demonstrate the effectiveness of our model on a diverse set of generation tasks across different domains: graph generation, image puzzle generation, and layout generation from scene graphs. Our proposed model achieves significantly better performance compared to state-of-the-art models. Modeling and generating graph-structured data has important applications in various scientific fields such as building knowledge graphs , inventing new molecular structures and generating diverse images from scene graphs . Being able to train expressive graph generative models is an integral part of AI research. Significant research effort has been devoted in this direction. Traditional graph generative methods (Erdős & Rényi, 1959; ; Albert & Barabási, 2002;) are based on rigid structural assumptions and lack the capability to learn from observed data. Modern deep learning frameworks within the variational autoencoder (VAE) formalism offer promise of learning distributions from data. Specifially, for structured data, research efforts have focused on bestowing VAE based generative models with the ability to learn structured latent space models (; ;). Nevertheless, their capacity is still limited mainly because of the assumptions placed on the form of distributions. Another class of graph generative models are based on autoregressive methods . These models construct graph nodes sequentially wherein each iteration involves generation of edges connecting a generated node in that iteration with the previously generated set of nodes. Such autoregressive models have been proven to be the most successful so far. However, due to the sequential nature of the generation process, the generation suffers from the inability to maintain long-term dependencies in larger graphs. Therefore, existing methods for graph generation are yet to realize the full potential of their generative power, particularly, the ability to model complex distributions with the flexibility to address variable data dimensions. Alternatively, for modeling the relational structure in data, graph neural networks (GNNs) or message passing neural networks (MPNNs) (; ; ; ; ;) have been shown to be effective in learning generalizable representations over variable input data dimensions. These models operate on the underlying principle of iterative neural message passing wherein the node representations are updated iteratively for a fixed number of steps. Hereafter, we use the term message passing to refer to this neural message passing in GNNs. We leverage this representational ability towards graph generation. 
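To make the fixed-step neural message passing referred to above concrete, here is a minimal sketch with pairwise message functions and in-place node updates; the dimensions, the message MLP, and the sum aggregation are illustrative choices, not the architectures of the cited models.

import torch
import torch.nn as nn

class MessagePassing(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.message = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x, edges, steps=3):
        # x: (n_nodes, dim) node states; edges: list of (i, j) pairs, message from j to i.
        for _ in range(steps):                       # fixed number of message passing steps
            msgs = torch.zeros_like(x)
            for i, j in edges:
                msgs[i] = msgs[i] + self.message(torch.cat([x[i], x[j]]))
            x = msgs                                 # in-place style update of the node embeddings
        return x

gnn = MessagePassing()
x = torch.randn(4, 16)                               # four nodes with random initial states
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
print(gnn(x, edges).shape)

Sum aggregation is used here only for simplicity; any permutation-invariant aggregator would fit the same interface.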
In this paper, we introduce a new class of models, Continuous Graph Flow (CGF): a graph generative model based on continuous normalizing flows that generalizes the message passing mechanism in GNNs to continuous time. (Figure 1 illustrates the evolution from discrete message passing updates (a) to our proposed continuous updates (b): Continuous Graph Flow leverages normalizing flows to transform simple distributions, e.g. Gaussian, at time $t_0$ into the target distributions at time $t_1$; the distribution of only one graph node is shown for visualization, but all node distributions transform over time.) Specifically, to model the continuous time dynamics of the graph variables, we adopt a neural ordinary differential equation (ODE) formulation. Our CGF model has both the flexibility to handle variable data dimensions (by using GNNs) and the ability to model arbitrarily complex data distributions, due to the free-form model architectures enabled by the neural ODE formulation. Inherently, the ODE formulation also imbues the model with two properties: reversibility and exact likelihood computation. Concurrent work on Graph Normalizing Flows (GNF) also proposes a reversible graph neural network using normalizing flows; however, their model requires a fixed number of transformations. In contrast, while our proposed CGF is also reversible and memory efficient, the underlying flow model relies on a continuous message passing scheme. Moreover, the message passing in GNF involves partitioning the data dimensions into two halves and employing coupling layers to couple them back, which imposes several constraints on function forms and model architectures that have a significant impact on performance. Our CGF model instead has unconstrained (free-form) Jacobians, enabling it to learn more expressive transformations. Furthermore, other similar work is likewise based on (discrete) normalizing flows, whereas CGF models continuous time dynamics. We demonstrate the effectiveness of our CGF-based models on three diverse tasks: graph generation, image puzzle generation, and layout generation based on scene graphs. Experimental results show that our proposed model achieves significantly better performance than state-of-the-art models. Graph neural networks. Relational networks such as Graph Neural Networks (GNNs) facilitate learning of non-linear interactions using neural networks. In every layer $\ell \in \mathbb{N}$ of a GNN, the embedding $h_i^{(\ell)}$ corresponding to graph node $i$ accumulates information from its neighbors in the previous layer recursively as $h_i^{(\ell)} = g\big(\{ f_{ij}(x_i^{(\ell-1)}, x_j^{(\ell-1)}) \mid j \in S(i) \}\big)$, where $g$ is an aggregator function, $S(i)$ is the set of neighbour nodes of node $i$, $f_{ij}$ is the message function from node $j$ to node $i$, and $x_i^{(\ell-1)}$ and $x_j^{(\ell-1)}$ are the node features of nodes $i$ and $j$ at layer $(\ell - 1)$. Our model uses a restricted form of GNNs in which the embeddings of the graph nodes are updated in place ($x_i \leftarrow h_i$); we therefore denote a graph node by $x$ and omit $h$ hereafter. These in-place updates allow using $x_i$ in the flow-based models while maintaining the same dimensionality across subsequent transformations. Normalizing flows and change of variables. Flow-based models enable the construction of complex distributions from simple distributions (e.g. Gaussian) through a sequence of invertible mappings.
For instance, a random variable $z$ is transformed from an initial state $z_0$ to the final state $z_K$ using a chain of $K$ invertible functions $f_k$: $z_K = f_K \circ f_{K-1} \circ \cdots \circ f_1(z_0)$. The log-likelihood of the random variable is computed using the change of variables rule, $\log p(z_K) = \log p(z_0) - \sum_{k=1}^{K} \log \left| \det \frac{\partial f_k(z_{k-1})}{\partial z_{k-1}} \right|$, where $\partial f_k(z_{k-1})/\partial z_{k-1}$ is the Jacobian of $f_k$ for $k \in \{1, 2, \ldots, K\}$. Continuous normalizing flows. Continuous normalizing flows (CNFs) model continuous-time dynamics by pushing the number of transformations to the limit. Given a random variable $z$, the ordinary differential equation (ODE) $\frac{dz(t)}{dt} = f(z(t), t)$ defines the change in the state of the variable, and the change of variables rule described in Eq. 3 extends to a continuous version. The dynamics of the log-likelihood of the random variable are then defined by the ODE $\frac{\partial \log p(z(t))}{\partial t} = -\mathrm{Tr}\left(\frac{\partial f}{\partial z(t)}\right)$. Following this equation, the log-likelihood of the variable $z$ at time $t_1$ starting from time $t_0$ is $\log p(z(t_1)) = \log p(z(t_0)) - \int_{t_0}^{t_1} \mathrm{Tr}\left(\frac{\partial f}{\partial z(t)}\right) dt$, where the trace computation is more computationally efficient than computing the Jacobian determinant in the discrete change of variables rule. Building on CNFs, we present continuous graph flow, which effectively models continuous time dynamics over graph-structured data. Given a set of random variables $X$ containing $n$ related variables, the goal is to learn the joint distribution $p(X)$. Each element of $X$ is $x_i \in \mathbb{R}^m$, where $i = 1, 2, \ldots, n$ and $m$ is the number of dimensions of the variable. For continuous time dynamics of the set of variables $X$, we formulate an ordinary differential equation system $\dot{x}_i = f_i(X(t), t)$ for $i = 1, \ldots, n$, where $\dot{x}_i = dx_i/dt$ and $X(t)$ is the set of variables at time $t$. The random variable $x_i$ at time $t_0$ follows a base distribution that can have a simple form, e.g. a Gaussian distribution. The function $f_i$ implicitly defines the interaction among the variables. Following this formulation, the transformation of an individual graph variable is defined as $x_i(t_1) = x_i(t_0) + \int_{t_0}^{t_1} f_i(X(t), t)\, dt$ (Eq. 8), which transforms the value of the variable $x_i$ from time $t_0$ to time $t_1$. The form in Eq. 8 represents a generic multivariate update in which the interaction functions are defined over all the variables in the set $X$. However, these functions do not take into account the relational structure between the graph variables. To address this, we define a neural message passing process that operates over a graph by defining the update functions over variables according to the graph structure. This process begins at time $t_0$, where each variable $x_i(t_0)$ contains only local information. At time $t$, these variables are updated based on information gathered from neighboring variables. For such updates, the function $f_i$ in Eq. 8 is defined as $f_i(X(t), t) = g\big(\{ \tilde f_{ij}(x_i(t), x_j(t)) \mid j \in S(i) \}\big)$, where $\tilde f_{ij}(\cdot)$ is a reusable message function used for passing information between variables $x_i$ and $x_j$, $S(i)$ is the set of neighboring variables that interact with variable $x_i$, and $g(\cdot)$ aggregates the information passed to a variable. The above formulation describes the case of pairwise message functions, though it can be generalized to higher-order interactions. We formulate message passing as a continuous process, which eliminates the requirement of a predetermined number of message passing steps. By pushing the message passing process to update at infinitesimally small steps and continuing the updates for an arbitrarily large number of steps, the update associated with each variable can be represented using shared and reusable functions as the ordinary differential equation (ODE) system $\dot{x}_i = g\big(\{ \tilde f_{ij}(x_i(t), x_j(t)) \mid j \in S(i) \}\big)$, $i = 1, \ldots, n$.
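A minimal numerical sketch of this ODE system: a shared pairwise message function with sum aggregation defines the vector field, and the initial value problem is integrated with fixed-step Euler updates purely for illustration; the message function form, sizes, and the fixed-step integrator are assumptions (the actual model learns the message functions and would use an adaptive ODE solver).

```python
import numpy as np

def field(x, adjacency, W):
    """dx_i/dt = g({ f~_ij(x_i, x_j) : j in S(i) }) with a shared, reusable
    message function f~ (here a small linear+tanh map) and sum aggregation g."""
    dx = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in np.nonzero(adjacency[i])[0]:
            dx[i] += np.tanh(np.concatenate([x[i], x[j]]) @ W)
    return dx

def continuous_message_passing(x0, adjacency, W, t0=0.0, t1=1.0, n_steps=200):
    """Approximate x_i(t1) = x_i(t0) + integral of f_i(X(t), t) dt with Euler steps."""
    x, dt = x0.copy(), (t1 - t0) / n_steps
    for _ in range(n_steps):
        x = x + dt * field(x, adjacency, W)
    return x

rng = np.random.default_rng(1)
x0 = rng.normal(size=(3, 4))                      # three related variables in R^4
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # chain graph
x1 = continuous_message_passing(x0, A, 0.1 * rng.normal(size=(8, 4)))
```

In the full model, the same integration would also accumulate the negative Jacobian trace term described next (typically via a stochastic estimate) to obtain the likelihood.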
Performing message passing to derive final states is equivalent to solving an initial value problem for an ODE system. Following the ODE formulation, the final states of variables can be computed as follows. This formulation can be solved with an ODE solver. Continuous graph flow leverages the continuous message passing mechanism (described in Sec. 3.1) and formulates the message passing as implicit density transformations of the variables (illustrated in Figure 1). Given a set of variables X with dependencies among them, the goal is to learn a model that captures the distribution from which the data were sampled. Assume the joint distribution p(X) at time t 0 has a simple form such as independent Gaussian distribution for each variable x i (t 0). The continuous message passing process allows the transformation of the set of variables from X(t 0) to X(t 1). Moreover, this process also converts the distributions over variables from simple base distributions to complex data distributions. Building on the independent variable continuous time dynamics described in Eq. 5, we define the dynamics corresponding to related graph variables as: where F represents a set of reusable functions incorporating aggregated messages. Therefore, the joint distribution of set of variables X can be described as: Here we use two types of density transformations for message passing: generic message transformations -transformations with generic update functions where trace in Eq. 13 can be approximated instead of computing by brute force method, and multi-scale message transformationstransformations with generic update functions at multiple scales of information. Generic message transformations. The trace of Jacobian matrix in Eq. 13 is modeled using a generic neural network function. The likelihood is defined as: where F denotes a neural network for message functions, and is a noise vector and usually can be sampled from standard Gaussian or Rademacher distributions. Multi-scale message transformations. As a generalization of generic message transformations, we design a model with multi-scale message passing to encode different levels of information in the variables. Similar to , we construct our multi-scale CGF model by stacking several blocks wherein each flow block performs message passing based on generic message transformations. After passing the input through a block, we factor out a portion of the output and feed it as input to the subsequent block. The likelihood is defined as: where b = 1, 2,..., (B − 1) with B as the total number of blocks in the design of the multi-scale architecture. Assume at time t b (t 0 < t b < t 1), X(t b) is factored out into two. We use one of these (denoted asX(t b)) as the input to the (b + 1) th block. LetX(t b) be the input to the next block, the density transformation is formulated as: To demonstrate the generality and effectiveness of our Continuous Graph Flow (CGF), we evaluate our model on three diverse tasks: graph generation, image puzzle generation, and layout generation based on scene graphs. Graph generation requires the model to learn complex distributions over the graph structure. Image puzzle generation requires the model to learn local and global correlations in the puzzle pieces. Layout generation has a diverse set of possible nodes and edges. These tasks have high complexity in the distributions of graph variables and diverse potential function types. Together these tasks pose a challenging evaluation for our proposed method. Datasets and Baselines. 
We evaluate our model on graph generation on two benchmark datasets EGO-SMALL and COMMUNITY-SMALL against four strong state-of-the-art baselines: VAE-based method , autoregressive graph generative model GraphRNN and DeepGMG, and Graph normalizing flows . Evaluation. We conduct a quantitative evaluation of the generated graphs using Maximum Mean Discrepancy (MMD) measures proposed in GRAPHRNN . The MMD evaluation in GRAPHRNN was performed using a test set of N ground truth graphs, computing their distribution over the nodes, and then searching for a set of N generated graphs from a larger set of samples generated from the model that best matches this distribution. As mentioned by , this evaluation process would likely have high variance as the graphs are very small. Therefore, we also performed an evaluation by generating 1024 graphs for each model and computing the MMD distance between this generated set of graphs and the ground truth test set. Baseline are from. Implementation details refer to Appendix A. Results and Analysis. Table 1 shows the in terms of MMD. Our CGF outperforms the baselines by a significant margin and also the concurrent work GNF. We believe our CGF outperforms GNF because it employs free-flow functions forms unlike GNF that has some contraints necessitated by the coupling layers. Fig. 2 visualizes the graphs generated by our model. Our model can capture the characteristics of datasets and generate diverse graphs that are not seen during the training. For additional visualizations and comparisons, refer to the Appendix A. Task description. We design image puzzles for image datasets to test model's ability on fitting very complex node distributions in graphs. Given an image of size W × W, we design a puzzle by dividing the original image into non-overlapping unique patches. A puzzle patch is of size w × w, in which w represents the width of the puzzle. Each image is divided into p = W/w puzzle patches both horizontally and vertically, and therefore we obtain P = p × p patches in total. Each patch corresponds to a node in the graph. To evaluate the performance of our model on dynamic graph sizes, instead of training the model with all nodes, we samplep adjacent patches wherep is uniformly sampled from {1, . . ., P} as input to the model during training and test. In our experiments, we use Figure 2: Visualization of generated graphs from our model. Our model can capture the characteristic of datasets and generate diverse graphs not appearing in the training set. patch size w = 16, p ∈ {2, 3, 4} and edge function for each direction (left, right, up, down) within a neighbourhood of a node. Additional details are in Appendix A. Datasets and baselines. We design the image puzzle generation task for three datasets: MNIST , CIFAR10 , and CelebA. CelebA dataset does not have a validation set, thus, we split the original dataset into a training set of 27,000 images and test set of 3,000 images as in . We compare our model with six state-of-the-art VAE based models: StructuredVAE , Graphite , Variational message passing using structured inference networks (VMP-SIN) , BiLSTM + VAE: a bidirectional LSTM used to model the interaction between node latent variables (obtained after serializing the graph) in an autoregressive manner similar to , Variational graph autoencoder (GAE) , and Neural relational inference (NRI) : we adapt this to model data for single time instance and model interactions between the nodes. Results and analysis. 
We report the negative log likelihood (NLL) in bits/dimension (lower is better). The in Table 2 indicate that CGF significantly outperforms the baselines. In addition to the quantitative , we also conduct sampling based evaluation and perform two types of generation experiments: Unconditional Generation: Given a puzzle size p, p 2 puzzle patches are generated using a vector z sampled from Gaussian distribution (refer Fig. 3(a) ); and Conditional Generation: Given p 1 patches from an image puzzle having p 2 patches, we generate the remaining (p 2 − p 1) patches of the puzzle using our model (see Fig. 3(b) ). We believe the task of conditional generation is easier than unconditional generation as there is more relevant information in the input during flow based transformations. For unconditional generation, samples from a base distribution (e.g. Gaussian) are transformed into learnt data distribution using the CGF model. For conditional generation, we map x a ∈ X a where X a ⊂ X to the points in base distribution to obtain z a and (a) Unconditional Generation (b) Conditional Generation Figure 3: Qualitative for image puzzle generation. Samples generated using our model for 2x2 MNIST puzzles (above horizontal line) and 3x3 CelebA-HQ puzzles (below horizontal line) in (a) unconditional generation and (b) conditional generation settings. For setting (b), generated patches (highlighted in green boxes) are conditioned on the remaining patches (from ground truth). subsequently concatenate the samples from Gaussian distribution to z a to obtain z that match the dimensions of desired graph and generate samples by transforming from z to x ∈ X using the trained graph flow. Task description and evaluation metrics. Layout generation from scene graphs is a crucial task in computer vision and bridges the gap between the symbolic graph-based scene description and the object layouts in the scene (; ;). Scene graphs represent scenes as directed graphs, where nodes are objects and edges give relationships between objects. Object layouts are described by the set of corresponding bounding box annotations . Our model uses scene graph as inputs (nodes correspond to objects and edges represent relations). An edge function is defined for each relationship type. The output contains a set of object bounding boxes described by, where x i, y i are the top-left coordinates, and w i, h i are the bounding box width and height respectively. We use negative log likelihood per node (lower is better) for evaluating models on scene layout generation. Datasets and baselines. Two large-scale challenging datasets are used to evaluate the proposed model: Visual Genome and COCO-Stuff datasets. Visual Genome contains 175 object and 45 relation types. The training, validation and test set contain 62565, 5506 and 5088 images respectively. COCO-Stuff dataset contains 24972 train, 1024 validation, and 2048 test scene graphs. We use the same baselines as in Sec. 4.2. Results and analysis. We show quantitative in Table 3 against several state-of-the-art baselines. Our CGF model significantly outperforms these baselines in terms of negative log likelihood. Moreover, we show some qualitative in Fig. 4. Our model can learn the correct relations defined in scene graphs for both conditional and unconditional generation, Furthermore, our model is capable to learn one-to-many mappings and generate diverse of layouts for the same scene graph. Table 3: Quantitative for layout generation for scene graph in negative log-likelihood. 
These results are for unconditional generation using CGF with generic message transformations.

Method          Visual Genome   COCO-Stuff
BiLSTM + VAE    -1.20           -1.60
StructuredVAE   -1.05           -1.36
Graphite        -1.17           -0.93
VMP-SIN         -0.61           -0.85
GAE             -1.85           -1.92
NRI             -0.76           -0.91
CGF             -4.24           -6.21

To test the generalizability of our model to variable graph sizes, we design three different evaluation settings and test them on the image puzzle task: odd to even: training with graphs having odd graph sizes and testing on graphs with even numbers of nodes; less to more: training on graphs with smaller sizes and testing on graphs with larger sizes; and more to less: training on graphs with larger sizes and testing on graphs with smaller sizes. In the less to more setting, we test the model's ability to use the functions learned from small graphs on more complicated ones, whereas the more to less setting evaluates the model's ability to learn disentangled functions without explicitly seeing them during training. In our experiments, for the less to more setting, we use sizes less than G/2 for training and more than G/2 for testing, where G is the size of the full graph; for the more to less setting, we use sizes more than G/2 for training and less than G/2 for testing. Table 4 reports the NLL for these settings. The NLL of these models is close to the performance of the models trained on the full dataset, demonstrating that our model is able to generalize to unseen graph sizes. In this paper, we presented continuous graph flow, a generative model that generalizes neural message passing in graphs to continuous time. We formulated the model as a neural ordinary differential equation system with shared and reusable functions that operate over the graph structure. We conducted evaluations on a diverse set of generation tasks across different domains: graph generation, image puzzle generation, and layout generation for scene graphs. Experimental results showed that continuous graph flow achieves significant performance improvements over various state-of-the-art baselines. For future work, we will focus on generation tasks for large-scale graphs, which is promising as our model is reversible and memory-efficient. We provide supplementary materials to support the contents of the main paper. In this part, we describe implementation details of our model. We also provide additional qualitative results for the generation tasks: graph generation, image puzzle generation, and layout generation from scene graphs. The ODE formulation for the continuous graph flow (CGF) model was solved using the ODE solver provided by NeuralODE. In this section, we provide specific details of the configuration of our CGF model used in our experiments on the generation tasks used for evaluation in the paper. Graph Generation. For each graph, we first generate its line graph, with edges switched to nodes and nodes switched to edges. The graph generation problem then becomes generating the node values of the line graph, which represent the adjacency matrix of the original graph. Each node value is binary (0 or 1) and is dequantized to continuous values through variational dequantization, with a global learnable Gaussian distribution as the variational distribution. For our architecture, we use two blocks of continuous graph flow with two fully connected layers for the Community-small dataset, and one block of continuous graph flow with one fully connected layer for the Citeseer-small dataset. The hidden dimensions are all 32. Image puzzle generation.
Each graph for this task comprise nodes corresponding to the puzzle pieces. The pieces that share an edge in the puzzle grid are considered to be connected and an edge function is defined over those connections. In our experiments, each node is transformed to an embedding of size 64 using convolutional layer. The graph message passing is performed over these node embeddings. The image puzzle generation model is designed using a multi-scale continuous graph flow architecture. We use two levels of downscaling in our model each of which factors out the channel dimension of the random variable by 2. We have two blocks of continuous graph flow before each downscaling wth four convolutional message passing blocks in each of them. Each message passing block has a unary message passing function and binary passing functions based on the edge types -all containing hidden dimensions of 64. Layout generation for scene graphs. For scene graph layout generation, a graph comprises node corresponding to object bounding boxes described by, where x i, y i represents the top-left coordinates, and w i, h i represents the bounding box width and height respectively and edge functions are defined based on the relation types. In our experiments, the layout generation model uses two blocks of continuous graph flow units, with four linear graph message passing blocks in each of them. The message passing function uses 64 hidden dimensions, and takes the embedding of node label and edge label in unary message passing function and binary message passing function respectively. The embedding dimension is also set to 64 dimensions. For binary message passing function, we pass the messages both through the direction of edge and the reverse direction of edge to increase the model capacity. A.2 IMAGE PUZZLE GENERATION: ADDITIONAL QUALITATIVE FOR CELEBA-HQ Fig. 5 and Fig. 6 presents the image puzzles generated using unconditional generation and conditional generation respectively. A.3 LAYOUT GENERATION FROM SCENE GRAPH: QUALITATIVE Fig. 7 and Fig. 8 show qualitative on unconditional and conditional layout generation from scene graphs for COCO-stuff dataset respectively. Fig. 9 and Fig. 10 show qualitative on unconditional and conditional layout generation from scene graphs for Visual Genome dataset respectively. The generated have diverse layouts corresponding to a single scene graph. Figure 5: Qualitative on CelebA-HQ for image puzzle generation. Samples generated using our model for 3x3 CelebA-HQ puzzles in unconditional generation setting. Best viewed in color. Figure 6: Qualitative on CelebA-HQ for image puzzle generation. Samples generated using our model for 3x3 CelebA-HQ puzzles in conditional generation setting. Generated patches are highlighted in green. Best viewed in color. Figure 9: Unconditional generation of layouts from scene graphs for Visual Genome dataset. We sample 4 layouts for each scene graph. The generated have different layouts, but sharing the same scene graph. Best viewed in color. Please zoom in to see the category of each object. A.4 GRAPH GENERATION: ADDITIONAL QUALITATIVE Fig. 11 and Fig. 12 present the generated graphs for EGO-SMALL and COMMUNITY-SMALL respectively. We sample 4 layouts for each scene graph. The generated have different layouts except the conditional layout objects in (b), but sharing the same scene graph. Best viewed in color. Please zoom in to see the category of each object. 
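As a concrete illustration of the image puzzle construction described in the implementation details above, the sketch below splits an image into non-overlapping patches (one graph node each) and records directional edges between adjacent patches. The patch size and direction labels follow the description in the text; the array layout and naming are our own assumptions.

```python
import numpy as np

def image_to_puzzle_graph(image, w=16):
    """Split a W x W image into w x w patches (graph nodes) and connect
    horizontally/vertically adjacent patches with direction-labelled edges."""
    W = image.shape[0]
    p = W // w                                         # patches per side
    nodes = np.stack([image[r*w:(r+1)*w, c*w:(c+1)*w].reshape(-1)
                      for r in range(p) for c in range(p)])
    edges = []                                         # (src, dst, direction)
    for r in range(p):
        for c in range(p):
            i = r * p + c
            if c + 1 < p:
                edges.append((i, i + 1, "right")); edges.append((i + 1, i, "left"))
            if r + 1 < p:
                edges.append((i, i + p, "down")); edges.append((i + p, i, "up"))
    return nodes, edges

# usage: a 32x32 image gives a 2x2 puzzle graph with 4 nodes and 8 directed edges
nodes, edges = image_to_puzzle_graph(np.random.default_rng(0).random((32, 32)), w=16)
```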
We analyze the variation in the number of function evaluations (NFE) required to solve the ODE as the number of nodes in the graph changes; refer to Figure 13. The results interestingly show that the average number of function evaluations does not increase linearly with the number of graph nodes, which would be the case if the variables were independent.
Graph generative models based on generalization of message passing to continuous time using ordinary differential equations
336
scitldr
The practical successes of deep neural networks have not been matched by theoretical progress that satisfyingly explains their behavior. In this work, we study the information bottleneck (IB) theory of deep learning, which makes three specific claims: first, that deep networks undergo two distinct phases consisting of an initial fitting phase and a subsequent compression phase; second, that the compression phase is causally related to the excellent generalization performance of deep networks; and third, that the compression phase occurs due to the diffusion-like behavior of stochastic gradient descent. Here we show that none of these claims hold true in the general case. Through a combination of analytical and simulation, we demonstrate that the information plane trajectory is predominantly a function of the neural nonlinearity employed: double-sided saturating nonlinearities like tanh yield a compression phase as neural activations enter the saturation regime, but linear activation functions and single-sided saturating nonlinearities like the widely used ReLU in fact do not. Moreover, we find that there is no evident causal connection between compression and generalization: networks that do not compress are still capable of generalization, and vice versa. Next, we show that the compression phase, when it exists, does not arise from stochasticity in training by demonstrating that we can replicate the IB findings using full batch gradient descent rather than stochastic gradient descent. Finally, we show that when an input domain consists of a subset of task-relevant and task-irrelevant information, hidden representations do compress the task-irrelevant information, although the overall information about the input may monotonically increase with training time, and that this compression happens concurrently with the fitting process rather than during a subsequent compression period. Deep neural networks (; are the tool of choice for real-world tasks ranging from visual object recognition BID16, to unsupervised learning BID11 BID19 and reinforcement learning . These practical successes have spawned many attempts to explain the performance of deep learning systems BID12, mostly in terms of the properties and dynamics of the optimization problem in the space of weights (; BID8 BID1, or the classes of functions that can be efficiently represented by deep networks BID20). This paper analyzes a recent inventive proposal to study the dynamics of learning through the lens of information theory . In this view, deep learning is a question of representation learning: each layer of a deep neural network can be seen as a set of summary statistics which contain some but not all of the information present in the input, while retaining as much information about the target output as possible. The amount of information in a hidden layer regarding the input and output can then be measured over the course of learning, yielding a picture of the optimization process in the information plane. Crucially, this method holds the promise to serve as a general analysis that can be used to compare different architectures, using the common currency of mutual information. Moreover, the elegant information bottleneck (IB) theory provides a fundamental bound on the amount of input compression and target output information that any representation can achieve . The IB bound thus serves as a method-agnostic ideal to which different architectures and algorithms may be compared. 
A preliminary empirical exploration of these ideas in deep neural networks has yielded striking findings . Most saliently, trajectories in the information plane appear to consist of two distinct phases: an initial "fitting" phase where mutual information between the hidden layers and both the input and output increases, and a subsequent "compression" phase where mutual information between the hidden layers and the input decreases. It has been hypothesized that this compression phase is responsible for the excellent generalization performance of deep networks, and further, that this compression phase occurs due to the random diffusion-like behavior of stochastic gradient descent. Here we study these phenomena using a combination of analytical methods and simulation. In Section 2, we show that the compression observed by arises primarily due to the double-saturating tanh activation function used. Using simple models, we elucidate the effect of neural nonlinearity on the compression phase. Importantly, we demonstrate that the ReLU activation function, often the nonlinearity of choice in practice, does not exhibit a compression phase. We discuss how this compression via nonlinearity is related to the assumption of binning or noise in the hidden layer representation. To better understand the dynamics of learning in the information plane, in Section 3 we study deep linear networks in a tractable setting where the mutual information can be calculated exactly. We find that deep linear networks do not compress over the course of training for the setting we examine. Further, we show a dissociation between generalization and compression. In Section 4, we investigate whether stochasticity in the training process causes compression in the information plane. We train networks with full batch gradient descent, and compare the to those obtained with stochastic gradient descent. We find comparable compression in both cases, indicating that the stochasticity of SGD is not a primary factor in the observed compression phase. Moreover, we show that the two phases of SGD occur even in networks that do not compress, demonstrating that the phases are not causally related to compression. These may seem difficult to reconcile with the intuition that compression can be necessary to attain good performance: if some input channels primarily convey noise, good generalization requires excluding them. Therefore, in Section 5 we study a situation with explicitly task-relevant and task-irrelevant input dimensions. We show that the hidden-layer mutual information with the task-irrelevant subspace does indeed drop during training, though the overall information with the input increases. However, instead of a secondary compression phase, this task-irrelevant information is compressed at the same time that the taskrelevant information is boosted. Our highlight the importance of noise assumptions in applying information theoretic analyses to deep learning systems, and put in doubt the generality of the IB theory of deep learning as an explanation of generalization performance in deep architectures. The starting point for our analysis is the observation that changing the activation function can markedly change the trajectory of a network in the information plane. In FIG0, we show our replication of the reported by for networks with the tanh nonlinearity.1 This replication was performed with the code supplied by the authors of , and closely follows the experimental setup described therein. 
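For concreteness, here is a minimal sketch of the binning-based mutual information estimate used in this replication (the specific bin count, activation range, and batch details are given in the next paragraph; implementation choices below, such as hashing the discretized activity vectors, are our own assumptions).

```python
import numpy as np

def binned_mutual_information(activity, labels, n_bins=30, lo=-1.0, hi=1.0):
    """Estimate I(T; X) and I(T; Y) by discretizing hidden activity into bins,
    assuming each of the P input patterns is equally likely and maps
    deterministically to its (binned) hidden state, so I(T; X) = H(T)."""
    bins = np.linspace(lo, hi, n_bins + 1)
    codes = np.digitize(activity, bins)                  # (P, n_hidden) ints
    # hash each discretized hidden vector to one symbol per example
    _, t = np.unique(codes, axis=0, return_inverse=True)

    def entropy(symbols):
        _, counts = np.unique(symbols, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    h_t = entropy(t)                                     # = I(T; X) here
    # I(T; Y) = H(T) - H(T | Y), averaging H(T | Y = y) over label groups
    h_t_given_y = 0.0
    for y in np.unique(labels):
        mask = labels == y
        h_t_given_y += mask.mean() * entropy(t[mask])
    return h_t, h_t - h_t_given_y

# usage with placeholder data: 4096 inputs, 3 tanh hidden units, binary labels
rng = np.random.default_rng(0)
act = np.tanh(rng.normal(size=(4096, 3)))
y = rng.integers(0, 2, size=4096)
i_tx, i_ty = binned_mutual_information(act, y)
```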
Briefly, a neural network with 7 fully connected hidden layers of width 12-10-7-5-4-3-2 is trained with stochastic gradient descent to produce a binary classification from a 12-dimensional input. In our replication we used 256 randomly selected samples per batch. The mutual information of the network layers with respect to the input and output variables is calculated by binning the neuron's tanh output activations into 30 equal intervals between -1 and 1. Discretized values for each neuron in each layer are then used to directly calculate the joint distributions, over the 4096 equally likely input patterns and true output labels. In line with prior work, the dynamics in FIG0 show a transition between an initial fitting phase, during which information about the input increases, and a subsequent compression phase, during which information about the input decreases. (In FIG0, no compression is observed except in the final classification layer with sigmoidal neurons; see Appendix B for the KDE MI method applied to the original Tishby dataset, additional results using a second popular nonparametric k-NN-based method BID15, and results for other neural nonlinearities.) We then modified the code to train deep networks using rectified linear activation functions (f(x) = max(0, x)). While the activities of tanh networks are bounded in the range [−1, 1], ReLU networks have potentially unbounded positive activities. To calculate mutual information, we first trained the ReLU networks, next identified their largest activity value over the course of training, and finally chose 100 evenly spaced bins between the minimum and maximum activity values to discretize the hidden layer activity. The resulting information plane dynamics are shown in FIG0. The mutual information with the input monotonically increases in all ReLU layers, with no apparent compression phase. To see whether our results were an artifact of the small network size, toy dataset, or simple binning-based mutual information estimator we employed, we also trained larger networks on the MNIST dataset and computed mutual information using a state-of-the-art nonparametric kernel density estimator, which assumes hidden activity is distributed as a mixture of Gaussians (see Appendix B for details). FIG0 C-D show that, again, tanh networks compressed but ReLU networks did not. Appendix B shows that similar results also obtain with the popular nonparametric k-nearest-neighbor estimator of BID15, and for other neural nonlinearities. Thus, the choice of nonlinearity substantively affects the dynamics in the information plane. To understand the impact of neural nonlinearity on the mutual information dynamics, we develop a minimal model that exhibits this phenomenon. In particular, consider the simple three neuron network shown in FIG1. We assume a scalar Gaussian input distribution X ∼ N(0, 1), which is fed through the scalar first layer weight w_1 and passed through a neural nonlinearity f(·), yielding the hidden unit activity h = f(w_1 X). To calculate the mutual information with the input, this hidden unit activity is then binned, yielding the new discrete variable T = bin(h) (for instance, into 30 evenly spaced bins from -1 to 1 for the tanh nonlinearity). This binning process is depicted in FIG1. In this simple setting, the mutual information I(T; X) between the binned hidden layer activity T and the input X can be calculated exactly. In particular, $I(T; X) = H(T) - H(T \mid X) = H(T) = -\sum_i p_i \log p_i$, where H(·) denotes entropy, and we have used the fact that H(T|X) = 0 since T is a deterministic function of X.
Here the probabilities p i = P (h ≥ b i and h < b i+1) are simply the probability that an input X produces a hidden unit activity that lands in bin i, defined by lower and upper bin limits b i and b i+1 respectively. This probability can be calculated exactly for monotonic nonlinearities f (·) using the cumulative density of X, DISPLAYFORM1 where f −1 (·) is the inverse function of f (·).As shown in FIG1 -D, as a function of the weight w 1, mutual information with the input first increases and then decreases for the tanh nonlinearity, but always increases for the ReLU nonlinearity. Intuitively, for small weights w 1 ≈ 0, neural activities lie near zero on the approximately linear part of the tanh function. Therefore f (w 1 X) ≈ w 1 X, yielding a rescaled Gaussian with information that grows with the size of the weights. However for very large weights w 1 → ∞, the tanh hidden unit nearly always saturates, yielding a discrete variable that concentrates in just two bins. This is more or less a coin flip, containing mutual information with the input of approximately 1 bit. Hence the distribution of T collapses to a much lower entropy distribution, yielding compression for large weight values. With the ReLU nonlinearity, half of the inputs are negative and land in the bin containing a hidden activity of zero. The other half are Gaussian distributed, and thus have entropy that increases with the size of the weight. Hence double-saturating nonlinearities can lead to compression of information about the input, as hidden units enter their saturation regime, due to the binning procedure used to calculate mutual information. The crux of the issue is that the actual I(h; X) is infinite, unless the network itself adds noise to the hidden layers. In particular, without added noise, the transformation from X to the continuous hidden activity h is deterministic and the mutual information I(h; X) would generally be infinite (see Appendix C for extended discussion). Networks that include noise in their processing (e.g.,) can have finite I(T ; X). Otherwise, to obtain a finite MI, one must compute mutual information as though there were binning or added noise in the activations. But this binning/noise is not actually a part of the operation of the network, and is therefore somewhat arbitrary (different binning schemes can in different mutual information with the input, as shown in Fig. 14 of Appendix C).We note that the binning procedure can be viewed as implicitly adding noise to the hidden layer activity: a range of X values map to a single bin, such that the mapping between X and T is no longer perfectly invertible BID17. The binning procedure is therefore crucial to obtaining a finite MI value, and corresponds approximately to a model where noise enters the system after the calculation of h, that is, T = h +, where is noise of fixed variance independent from h and X. This approach is common in information theoretic analyses of deterministic systems, and can serve as a measure of the complexity of a system's representation (see Sec 2.4 of). However, neither binning nor noise is present in the networks that considered, nor the ones in FIG1, either during training or testing. It therefore remains unclear whether robustness of a representation to this sort of noise in fact influences generalization performance in deep learning systems. 
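The exact calculation described above is easy to reproduce; the sketch below evaluates $p_i = \Phi(f^{-1}(b_{i+1})/w_1) - \Phi(f^{-1}(b_i)/w_1)$ for tanh and ReLU binnings. The ReLU bin range (an assumed maximum activity of 20) and the particular weight values are illustrative, not the exact settings used for the figures.

```python
import numpy as np
from scipy.stats import norm

def minimal_model_mi(w1, preact_edges):
    """Exact I(T; X) in bits for T = bin(f(w1 * X)) with X ~ N(0, 1), given the
    bin edges already mapped through f^{-1} into pre-activation space:
    p_i = Phi(f^{-1}(b_{i+1}) / w1) - Phi(f^{-1}(b_i) / w1)."""
    p = np.diff(norm.cdf(np.asarray(preact_edges) / w1))
    p = p[p > 0]
    return -np.sum(p * np.log2(p))       # I(T; X) = H(T), since H(T|X) = 0

# tanh: 30 equal bins on [-1, 1]; f^{-1} = arctanh, with edges +/-1 mapping to +/-inf
tanh_edges = np.concatenate([[-np.inf],
                             np.arctanh(np.linspace(-1, 1, 31)[1:-1]),
                             [np.inf]])
# ReLU: 100 bins on (0, 20]; all negative pre-activations land in the bin containing 0
relu_edges = np.concatenate([[-np.inf], np.linspace(1e-9, 20.0, 100), [np.inf]])

for w in (0.1, 1.0, 10.0, 100.0):
    print(w, minimal_model_mi(w, tanh_edges), minimal_model_mi(w, relu_edges))
```

Running this shows the tanh MI rising and then falling back toward roughly one bit as the weight grows, while the ReLU MI keeps increasing, matching the behavior described in the text.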
Furthermore, the addition of noise means that different architectures may no longer be compared in a common currency of mutual information: the binning/noise structure is arbitrary, and architectures that implement an identical input-output map can nevertheless have different robustness to noise added in their internal representation. For instance, Appendix C describes a family of linear networks that compute exactly the same input-output map and therefore generalize identically, but yield different mutual information with respect to the input. Finally, we note that approaches which view the weights obtained from the training process as the random variables of interest may sidestep this issue BID0. Hence when a tanh network is initialized with small weights and over the course of training comes to saturate its nonlinear units (as it must to compute most functions of practical interest, see discussion in Appendix D), it will enter a compression period where mutual information decreases. FIG0 of Appendix E show histograms of neural activity over the course of training, demonstrating that activities in the tanh network enter the saturation regime during training. This nonlinearity-based compression furnishes another explanation for the observation that training slows down as tanh networks enter their compression phase : some fraction of inputs have saturated the nonlinearities, reducing backpropagated error gradients. The preceding section investigates the role of nonlinearity in the observed compression behavior, tracing the source to double-saturating nonlinearities and the binning methodology used to calculate mutual information. However, other mechanisms could lead to compression as well. Even without nonlinearity, neurons could converge to highly correlated activations, or project out irrelevant directions of the input. These phenomena are not possible to observe in our simple three neuron minimal model, as they require multiple inputs and hidden layer activities. To search for these mechanisms, we turn to a tractable model system: deep linear neural networks BID3;; Saxe et al. FORMULA1 ). In particular, we exploit recent on the generalization dynamics in simple linear networks trained in a student-teacher setup (; BID1 . In a student-teacher setting, one "student" neural network learns to approximate the output of another "teacher" neural network. This setting is a way of generating a dataset with interesting structure that nevertheless allows exact calculation of the generalization performance of the network, exact calculation of the mutual information of the representation (without any binning procedure), and, though we do not do so here, direct comparison to the IB bound which is already known for linear Gaussian problems BID6.We consider a scenario where a linear teacher neural network generates input and output examples which are then fed to a deep linear student network to learn FIG2. Following the formulation of BID1, we assume multivariate Gaussian inputs X ∼ N (0, 1 Ni I Ni) and a scalar output Y. The output is generated by the teacher network according to DISPLAYFORM0 represents aspects of the target function which cannot be represented by a neural network (that is, the approximation error or bias in statistical learning theory), and the teacher weights W o are drawn independently from N (0, σ 2 w). Here, the weights of the teacher define the rule to be learned. 
The signal to noise ratio SNR = σ 2 w /σ 2 o determines the strength of the rule linking inputs to outputs relative to the inevitable approximation error. We emphasize that the "noise" added to the teacher's output is fundamentally different from the noise added for the purpose of calculating mutual information: o models the approximation error for the task-even the best possible neural network may still make errors because the target function is not representable exactly as a neural network-and is part of the construction of the dataset, not part of the analysis of the student network. To train the student network, a dataset of P examples is generated using the teacher. The student network is then trained to minimize the mean squared error between its output and the target output using standard (batch or stochastic) gradient descent on this dataset. Here the student is a deep linear neural network consisting of potentially many layers, but where the the activation function of each neuron is simply f (u) = u. That is, a depth D deep linear network computes the output DISPLAYFORM1 While linear activation functions stop the network from computing complex nonlinear functions of the input, deep linear networks nevertheless show complicated nonlinear learning trajectories , the optimization problem remains nonconvex BID3, and the generalization dynamics can exhibit substantial overtraining BID10 BID1.Importantly, because of the simplified setting considered here, the true generalization error is easily shown to be DISPLAYFORM2 where W tot (t) is the overall linear map implemented by the network at training epoch t (that is, DISPLAYFORM3 Furthermore, the mutual information with the input and output may be calculated exactly, because the distribution of the activity of any hidden layer is Gaussian. Let T be the activity of a specific hidden layer, and letW be the linear map from the input to this activity (that is, for layer l,W = W l · · · W 2 W 1). Since T =W X, the mutual information of X and T calculated using differential entropy is infinite. For the purpose of calculating the mutual information, therefore, we assume that Gaussian noise is added to the hidden layer activity, T =W X + M I, with mean 0 and variance σ 2 M I = 1.0. This allows the analysis to apply to networks of any size, including overcomplete layers, but as before we emphasize that we do not add this noise either during training or testing. With these assumptions, T and X are jointly Gaussian and we have DISPLAYFORM4 where |·| denotes the determinant of a matrix. Finally the mutual information with the output Y, also jointly Gaussian, can be calculated similarly (see Eqns. FORMULA22 - FORMULA25 of Appendix G). Here the network has an input layer of 100 units, 1 hidden layer of 100 units each and one output unit. The network was trained with batch gradient descent on a dataset of 100 examples drawn from the teacher with signal to noise ratio of 1.0. The linear network behaves qualitatively like the ReLU network, and does not exhibit compression. Nevertheless, it learns a map that generalizes well on this task and shows minimal overtraining. Hence, in the setting we study here, generalization performance can be acceptable without any compression phase. The in BID1 ) show that, for the case of linear networks, overtraining is worst when the number of inputs matches the number of training samples, and is reduced by making the number of samples smaller or larger. 
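The layer mutual information defined above is straightforward to compute exactly; below is a minimal sketch assuming, as in the text, that $X \sim \mathcal{N}(0, \tfrac{1}{N_i} I)$ and $T = \tilde{W}X + \epsilon_{MI}$ with $\epsilon_{MI} \sim \mathcal{N}(0, \sigma^2_{MI} I)$. The log-determinant form follows from joint Gaussianity; the example weight matrix is illustrative.

```python
import numpy as np

def linear_layer_mi_bits(W_tilde, n_in, sigma_mi=1.0):
    """I(X; T) in bits for T = W_tilde @ X + eps, X ~ N(0, I/n_in), eps ~ N(0, s^2 I):
    0.5 * [log det(W (I/n_in) W^T + s^2 I) - log det(s^2 I)] / ln 2."""
    d = W_tilde.shape[0]
    cov_t = (W_tilde @ W_tilde.T) / n_in + sigma_mi**2 * np.eye(d)
    _, logdet = np.linalg.slogdet(cov_t)
    return 0.5 * (logdet - d * np.log(sigma_mi**2)) / np.log(2)

# usage: overall map from a 100-dimensional input to a 100-unit hidden layer
rng = np.random.default_rng(0)
W1 = rng.normal(size=(100, 100)) / np.sqrt(100)
print(linear_layer_mi_bits(W1, n_in=100, sigma_mi=1.0))
```

Tracking this quantity for each layer over training epochs gives the information plane trajectories plotted for the deep linear networks.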
FIG4 shows learning dynamics with the number of samples matched to the size of the network. Here overfitting is substantial, and again no compression is seen in the information plane. Comparing to the in FIG2, both networks exhibit similar information dynamics with respect to the input (no compression), but yield different generalization performance. Hence, in this linear analysis of a generic setting, there do not appear to be additional mechanisms that cause compression over the course of learning; and generalization behavior can be widely different for networks with the same dynamics of information compression regarding the input. We note that, in the setting considered here, all input dimensions have the same variance, and the weights of the teacher are drawn independently. Because of this, there are no special directions in the input, and each subspace of the input contains as much information as any other. It is possible that, in real world tasks, higher variance inputs are also the most likely to be relevant to the task (here, have large weights in the teacher). We have not investigated this possibility here. To see whether similar behavior arises in nonlinear networks, we trained tanh networks in the same setting as Section 2, but with 30% of the data, which we found to lead to modest overtraining. Next, we test a core theoretical claim of the information bottleneck theory of deep learning, namely that randomness in stochastic gradient descent is responsible for the compression phase. In particular, because the choice of input samples in SGD is random, the weights evolve in a stochastic way during training. distinguish two phases of SGD optimization: in the first "drift" phase, the mean of the gradients over training samples is large relative to the standard deviation of the gradients; in the second "diffusion" phase, the mean becomes smaller than the standard deviation of the gradients. The authors propose that compression should commence following the transition from a high to a low gradient signal-to-noise ratio (SNR), i.e., the onset of the diffusion phase. The proposed mechanism behind this diffusion-driven compression is as follows. The authors state that during the diffusion phase, the stochastic evolution of the weights can be described as a Fokker-Planck equation under the constraint of small training error. Then, the stationary distribution over weights for this process will have maximum entropy, again subject to the training error constraint. Finally, the authors claim that weights drawn from this stationary distribution will maximize the entropy of inputs given hidden layer activity, H(X|T), subject to a training error constraint, and that this training error constraint is equivalent to a constraint on the mutual information I(T ; Y) for small training error. Since the entropy of the input, H(X), is fixed, the of the diffusion dynamics will be to minimize I(X; T):= H(X) − H(X|T) for a given value of I(T ; Y) reached at the end of the drift phase. However, this explanation does not hold up to either theoretical or empirical investigation. Let us assume that the diffusion phase does drive the distribution of weights to a maximum entropy distribution subject to a training error constraint. Note that this distribution reflects stochasticity of weights across different training runs. 
There is no general reason that a given set of weights sampled from this distribution (i.e., the weight parameters found in one particular training run) will maximize H(X|T), the entropy of inputs given hidden layer activity. In particular, H(X|T) reflects (conditional) uncertainty about inputs drawn from the data-generating distribution, rather than uncertainty about any kind of distribution across different training runs. We also show empirically that the stochasticity of the SGD is not necessary for compression. To do so, we consider two distinct training procedures: offline stochastic gradient descent (SGD), which learns from a fixed-size dataset, and updates weights by repeatedly sampling a single example from the dataset and calculating the gradient of the error with respect to that single sample (the typical procedure used in practice); and batch gradient descent (BGD), which learns from a fixed-size dataset, and updates weights using the gradient of the total error across all examples. Batch gradient descent uses the full training dataset and, crucially, therefore has no randomness or diffusion-like behavior in its updates. We trained tanh and ReLU networks with SGD and BGD and compare their information plane dynamics in FIG5 (see Appendix H for a linear network). We find largely consistent information dynamics in both instances, with robust compression in tanh networks for both methods. Thus randomness in the training process does not appear to contribute substantially to compression of information about the input. This finding is consistent with the view presented in Section 2 that compression arises predominantly from the double saturating nonlinearity. Finally, we look at the gradient signal-to-noise ratio (SNR) to analyze the relationship between compression and the transition from high to low gradient SNR. FIG1 of Appendix I shows the gradient SNR over training, which in all cases shows a phase transition during learning. Hence the gradient SNR transition is a general phenomenon, but is not causally related to compression. Appendix I offers an extended discussion and shows gradient SNR transitions without compression on the MNIST dataset and for linear networks. Our finding that generalization can occur without compression may seem difficult to reconcile with the intuition that certain tasks involve suppressing irrelevant directions of the input. In the extreme, if certain inputs contribute nothing but noise, then good generalization requires ignoring them. To study this, we consider a variant on the linear student-teacher setup of Section 3: we partition the input X into a set of task-relevant inputs X rel and a set of task-irrelevant inputs X irrel, and alter the teacher network so that the teacher's weights to the task-irrelevant inputs are all zero. Hence the inputs X irrel contribute only noise, while the X rel contain signal. We then calculate the information plane dynamics for the whole layer, and for the task-relevant and task-irrelevant inputs separately. FIG7 shows information plane dynamics for a deep linear neural network trained using SGD (5 samples/batch) on a task with 30 task-relevant inputs and 70 task-irrelevant inputs. While the overall dynamics show no compression phase, the information specifically about the task-irrelevant subspace does compress over the course of training. This compression process occurs at the same time as the fitting to the task-relevant information. 
Thus, when a task requires ignoring some inputs, the The information with the task-relevant subspace increases robustly over training. (C) However, the information specifically about the task-irrelevant subspace does compress after initially growing as the network is trained.information with these inputs specifically will indeed be reduced; but overall mutual information with the input in general may still increase. Our suggest that compression dynamics in the information plane are not a general feature of deep networks, but are critically influenced by the nonlinearities employed by the network. Doublesaturating nonlinearities lead to compression, if mutual information is estimated by binning activations or by adding homoscedastic noise, while single-sided saturating nonlinearities like ReLUs do not compress in general. Consistent with this view, we find that stochasticity in the training process does not contribute to compression in the cases we investigate. Furthermore, we have found instances where generalization performance does not clearly track information plane behavior, questioning the causal link between compression and generalization. Hence information compression may parallel the situation with sharp minima: although empirical evidence has shown a correlation with generalization error in certain settings and architectures, further theoretical analysis has shown that sharp minima can in fact generalize well BID9. We emphasize that compression still may occur within a subset of the input dimensions if the task demands it. This compression, however, is interleaved rather than in a secondary phase and may not be visible by information metrics that track the overall information between a hidden layer and the input. Finally, we note that our address the specific claims of one scheme to link the information bottleneck principle with current practice in deep networks. The information bottleneck principle itself is more general and may yet offer important insights into deep networks BID0. Moreover, the information bottleneck principle could yield fundamentally new training algorithms for networks that are inherently stochastic and where compression is explicitly encouraged with appropriate regularization terms BID5 BID2 This Appendix investigates the generality of the finding that compression is not observed in neural network layers with certain activation functions. FIG0 of the main text shows example using a binning-based MI estimator and a nonparametric KDE estimator, for both the tanh and ReLU activation functions. Here we describe the KDE MI estimator in detail, and present extended on other datasets. We also show for other activation functions. Finally, we provide entropy estimates based on another nonparametric estimator, the popular k-nearest neighbor approach of BID15. Our findings consistently show that double-saturating nonlinearities can yield compression, while single-sided nonlinearities do not. The KDE approach of; estimates the mutual information between the input and the hidden layer activity by assuming that the hidden activity is distributed as a mixture of Gaussians. This assumption is well-suited to the present setting under the following interpretation: we take the input activity to be distributed as delta functions at each example in the dataset, corresponding to a uniform distribution over these specific samples. In other words, we assume that the empirical distribution of input samples is the true distribution. 
Next, the hidden layer activity h is a deterministic function of the input. As mentioned in the main text and discussed in more detail in Appendix C, without the assumption of noise, this would have infinite mutual information with the input. We therefore assume for the purposes of analysis that Gaussian noise of variance σ 2 is added, that is, T = h + where ∼ N (0, σ 2 I). Under these assumptions, the distribution of T is genuinely a mixture of Gaussians, with a Gaussian centered on the hidden activity corresponding to each input sample. We emphasize again that the noise is added solely for the purposes of analysis, and is not present during training or testing the network. In this setting, an upper bound for the mutual information with the input is ) DISPLAYFORM0 where P is the number of training samples and h i denotes the hidden activity vector in response to input sample i. Similarly, the mutual information with respect to the output can be calculated as DISPLAYFORM1 DISPLAYFORM2 where L is the number of output labels, P l denotes the number of data samples with output label l, p l = P l /P denotes the probability of output label l, and the sums over i, Y i = l indicate a sum over all examples with output label l. FIG10 -B shows the of applying this MI estimation method on the dataset and network architecture of , with MI estimated on the full dataset and averaged over 50 repetitions. Mutual information was estimated using data samples from the test set, and we took the noise variance σ 2 = 0.1. These look similar to the estimate derived from binning, with compression in tanh networks but no compression in ReLU networks. Relative to the binning estimate, it appears that compression is less pronounced in the KDE method. The network was trained using SGD with minibatches of size 128. As before, mutual information was estimated using data samples from the test set, and we took the noise variance σ 2 = 0.1. The smaller layer sizes in the top three hidden layers were selected to ensure the quality of the kernel density estimator given the amount of data in the test set, since the estimates are more accurate for smaller-dimensional data. Because of computational expense, the MNIST are from a single training run. More detailed for the MNIST dataset are provided in FIG12 for the tanh activation function, and in FIG0 for the ReLU activation function. In these figures, the first row shows the evolution of the cross entropy loss (on both training and testing data sets) during training. The second row shows the mutual information between input and the activity of different hidden layers, using the nonparametric KDE estimator described above. The blue region in the second row shows the range of possible MI values, ranging from the upper bound described above (Eq. 10) to the following lower bound, DISPLAYFORM3 The third row shows the mutual information between input and activity of different hidden layers, estimated using the binning method (here, the activity of each neuron was discretized into bins of size 0.5). For both the second and third rows, we also plot the entropy of the inputs, H(X), as a dashed line. H(X) is an upper bound on the mutual information I(X; T), and is computed using the assumption of a uniform distribution over the 10,000 testing points in the MNIST dataset, giving H(X) = log 2 10000.Finally, the fourth row visualizes the dynamics of the SGD updates during training. For each layer and epoch, the green line shows the 2 norm of the weights. 
We also compute the vector of mean updates across SGD minibatches (this vector has one dimension for each weight parameter), as well as the vector of the standard deviation of the updates across SGD minibatches. The 2 norm of the mean update vector is shown in blue, and the 2 norm of the standard deviation vector is shown in orange. The gradient SNR, computed as the ratio of the norm of the mean vector to the norm of the standard deviation vector, is shown in red. For both the tanh and ReLU networks, the gradient SNR shows a phase transition during training, and the norm of the weights in each layer increases. Importantly, this phase transition occurs despite a lack of compression in the ReLU network, indicating that noise in SGD updates does not yield compression in this setting. Upper and lower bounds for the mutual information I(X; T) between the input (X) and each layer's activity (T), using the nonparametric KDE estimator. Dotted line indicates H(X) = log 2 10000, the entropy of a uniform distribution over 10,000 testing samples. Row 3: Binning-based estimate of the mutual information I(X; T), with each neuron's activity discretized using a bin size of 0.5. Row 4: Gradient SNR and weight norm dynamics. The gradient SNR shows a phase transition during training, and the norm of the weights in each layer increases. Upper and lower bounds for the mutual information I(X; T) between the input (X) and each layer's activity (T), using the nonparametric KDE estimator. Dotted line indicates H(X) = log 2 10000, the entropy of a uniform distribution over 10,000 testing samples. Row 3: Binning-based estimate of the mutual information I(X; T), with each neuron's activity discretized using a bin size of 0.5. Row 4: Gradient SNR and weight norm dynamics. The gradient SNR shows a phase transition during training, and the norm of the weights in each layer increases. Importantly, this phase transition occurs despite a lack of compression in the ReLU network, indicating that noise in SGD updates does not yield compression in this setting. Next, in FIG10 -D, we show from the kernel MI estimator from two additional nonlinear activation functions, the softsign function DISPLAYFORM0 and the softplus function DISPLAYFORM1 These functions are plotted next to tanh and ReLU in FIG0. The softsign function is similar to tanh but saturates more slowly, and yields less compression than tanh. The softplus function is a smoothed version of the ReLU, and yields similar dynamics with no compression. Because softplus never saturates fully to zero, it retains more information with respect to the input than ReLUs in general. We additionally investigated the widely-used nonparametric MI estimator of BID15. This estimator uses nearest neighbor distances between samples to compute an estimate of the entropy of a continuous random variable. Here we focused for simplicity only on the compression phenomenon in the mutual information between the input and hidden layer activity, leaving aside the information with respect to the output (as this is not relevant to the compression phenomenon). Again, without additional noise assumptions, the MI between the hidden representation and the input would be infinite because the mapping is deterministic. Rather than make specific noise assumptions, we instead use the Kraskov method to estimate the entropy of the hidden representations T. 
Note that the entropy of T is the mutual information up to an unknown constant so long as the noise assumption is homoscedastic, that is, T = h + Z where the random variable Z is independent of X. To see this, note that DISPLAYFORM0 where the constant c = H(Z). Hence observing compression in the layer entropy H(T) is enough to establish that compression occurs in the mutual information. The Kraskov estimate is given by DISPLAYFORM1 where d is the dimension of the hidden representation, P is the number of samples, r i is the distance to the k-th nearest neighbor of sample i, is a small constant for numerical stability, Γ(·) is the Gamma function, and ψ(·) is the digamma function. Here the parameter prevents infinite terms when the nearest neighbor distance r i = 0 for some sample. We took = 10 −16. FIG0 shows the entropy over training for tanh and ReLU networks trained on the dataset of and with the network architecture in , averaged over 50 repeats. In these experiments, we used k = 2. Compression would correspond to decreasing entropy over the course of training, while a lack of compression would correspond to increasing entropy. Several tanh layers exhibit compression, while the ReLU layers do not. Hence qualitatively, the Kraskov estimator returns similar to the binning and KDE strategies. A B FIG0: Entropy dynamics over training for the network architecture and training dataset of Shwartz-Ziv & Tishby FORMULA7, estimated with the nonparametric k-nearest-neighbor-based method of BID15. Here the x-axis is epochs of training time, and the y-axis plots the entropy of the hidden representation, as calculated using nearest-neighbor distances. Note that in this setting, if T is considered to be the hidden activity plus independent noise, the entropy is equal to the mutual information up to a constant (see derivation in text). Layers 0-4 correspond to the hidden layers of size 10-7-5-4-3. (A) tanh neural network layers can show compression over the course of training.(B) ReLU neural network layers show no compression. A recurring theme in the reported in this paper is the necessity of noise assumptions to yield a nontrivial information theoretic analysis. Here we give an extended discussion of this phenomenon, and of issues relating to discrete entropy as opposed to continuous (differential) entropy. The activity of a neural network is often a continuous deterministic function of its input. That is, in response to an input X, a specific hidden layer might produce activity h = f (X) for some function f. The mutual information between h and X is given by DISPLAYFORM0 If h were a discrete variable, then the entropy would be given by DISPLAYFORM1 where p i is the probability of the discrete symbol i, as mentioned in the main text. Then H(h|X) = 0 because the mapping is deterministic and we have I(h; X) = H(h).However h is typically continuous. The continuous entropy, defined for a continuous random variable Z with density p Z by analogy to Eqn. FORMULA8 as DISPLAYFORM2 can be negative and possibly infinite. In particular, note that if p Z is a delta function, then H(Z) = −∞. The mutual information between hidden layer activity h and the input X for continuous h, X is DISPLAYFORM3 Now H(h|X) = −∞ since given the input X, the hidden activity h is distributed as a delta function at f (X). The mutual information is thus generally infinite, so long as the hidden layer activity has finite entropy (H(h) is finite).Figure 13: Effect of binning strategy on minimal three neuron model. 
Mutual information for the simple three neuron model shown in FIG1 with bin edges b i ∈ tanh(linspace(−50, 50, N)). In contrast to linear binning, the mutual information continues to increase as weights grow. To yield a finite mutual information, some noise in the mapping is required such that H(h|X) remains finite. A common choice (and one adopted here for the linear network, the nonparametric kernel density estimator, and the k-nearest neighbor estimator) is to analyze a new variable with additive noise, T = h + Z, where Z is a random variable independent of X. Then H(T |X) = H(Z) which allows the overall information I(T ; X) = H(T) − H(Z) to remain finite. This noise assumption is not present in the actual neural networks either during training or testing, and is made solely for the purpose of calculating the mutual information. Another strategy is to partition the continuous variable h into a discrete variable T, for instance by binning the values (the approach taken in). This allows use of the discrete entropy, which remains finite. Again, however, in practice the network does not operate on the binned variables T but on the continuous variables h, and the binning is solely for the purpose of calculating the mutual information. Moreover, there are many possible binning strategies, which yield different discrete random variables, and different mutual information with respect to the input. The choice of binning strategy is an assumption analogous to choosing a type of noise to add to the representation in the continuous case: because there is in fact no binning in the operation of the network, there is no clear choice for binning methodology. The strategy we use in binning-based experiments reported here is the following: for bounded activations like the tanh activation, we use evenly spaced bins between the minimum and maximum limits of the function. For unbounded activations like ReLU, we first train the network completely; next identify the minimum and maximum hidden activation over all units and all training epochs; and finally bin into equally spaced bins between these minimum and maximum values. We note that this procedure places no restriction on the magnitude that the unbounded activation function can take during training, and yields the same MI estimate as using infinite equally spaced bins (because bins for activities larger than the maximum are never seen during training).As an example of another binning strategy that can yield markedly different , we consider evenly spaced bins in a neuron's net input, rather than its activity. That is, instead of evenly spaced bins in the neural activity, we determine the bin edges by mapping a set of evenly spaced values through the neural nonlinearity. For tanh, for instance, this spaces bins more tightly in the saturation region as compared to the linear region. FIG0 shows the of applying this binning strategy FIG0: Effect of binning strategy on information plane dynamics. Results for the same tanh network and training regime as 1A, but with bin edges b i ∈ tanh(linspace(−50, 50, N)). Measured with this binning structure, there is no compression in most layers.to the minimal three neuron model with tanh activations. This binning scheme captures more information as the weights of the network grow larger. FIG0 shows information plane dynamics for this binning structure. The tanh network no longer exhibits compression. 
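To make the two bin-edge choices discussed above concrete, the following is a minimal sketch of the binning MI estimate (function names, the bin count, and the tanh-warped edge range are illustrative, not taken from the paper). It uses the stated assumptions that the network is deterministic and that the inputs are uniformly distributed over the P dataset samples, so that I(X;T) equals the discrete entropy of the binned activity patterns.

```python
# Sketch of the binning-based MI estimate with two bin-edge strategies (illustrative names).
import numpy as np

def bin_edges_linear(lo, hi, n_bins):
    # Evenly spaced bins in the *activity* (the default strategy in the text).
    return np.linspace(lo, hi, n_bins + 1)

def bin_edges_tanh(n_bins):
    # Alternative strategy: evenly spaced in the *net input*, mapped through tanh,
    # which packs bins densely into the saturation regions.
    return np.tanh(np.linspace(-50.0, 50.0, n_bins + 1))

def binned_mutual_info(hidden, labels, edges):
    """hidden: (P, n_units) activations; labels: (P,) class labels; edges: 1-D bin edges."""
    t = np.digitize(hidden, edges)                       # discretize each unit's activity
    _, pattern_id = np.unique(t, axis=0, return_inverse=True)
    def entropy(ids):
        p = np.bincount(ids) / len(ids)
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    h_t = entropy(pattern_id)                            # H(T) = I(X;T) for a deterministic map
    h_t_given_y = 0.0
    for y in np.unique(labels):                          # H(T|Y), for I(T;Y) = H(T) - H(T|Y)
        mask = labels == y
        h_t_given_y += mask.mean() * entropy(pattern_id[mask])
    return h_t, h_t - h_t_given_y                        # I(X;T), I(T;Y)
```

For tanh layers one would pass bin_edges_linear(-1, 1, 30); for ReLU layers the recipe described above is linear edges between zero and the maximum activation observed over all of training.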
(We note that the broken DPI in this example is an artifact of performing binning only for analysis, as discussed below).Any implementation of a neural network on digital hardware is ultimately of finite precision, and hence is a binned, discrete representation. However, it is a very high resolution binning compared to that used here or by: single precision would correspond to using roughly 2 32 bins to discretize each hidden unit's activity, as compared to the 30-100 used here. If the binning is fine-grained enough that each input X yields a different binned activity pattern h, then H(h) = log(P) where P is the number of examples in the dataset, and there will be little to no change in information during training. As an example, we show in FIG0 the of binning at full machine precision. Finally, we note two consequences of the assumption of noise/binning for the purposes of analysis. First, this means that the data processing inequality (DPI) does not apply to the noisy/binned mutual information estimates. The DPI states that information can only be destroyed through successive transformations, that is, if X → h 1 → h 2 form a Markov chain, then I(X; h 1) ≥ I(X; h 2) (see, eg,). Because noise is added only for the purpose of analysis, however, this does not apply here. In particular, for the DPI to apply, the noise added at lower layers would have to propagate through the network to higher layers. That is, if the transformation from hidden layer 1 to hidden layer 2 is h 2 = f (h 1) and T 1 = h 1 + Z 1 is the hidden layer activity after adding noise, then the DPI would hold for the variableT 2 = f (T 1) + Z 2 = f (h 1 + Z 1) + Z 2, not the quantity Information in most layers stays pinned to log 2 (P) = 12. Compression is only observed in the highest and smallest layers near the very end of training, when the saturation of tanh is strong enough to saturate machine precision. DISPLAYFORM4 used in the analysis. Said another way, the Markov chain for DISPLAYFORM5, so the DPI states only that I(X; h 1) ≥ I(X; T 2).A second consequence of the noise assumption is the fact that the mutual information is no longer invariant to invertible transformations of the hidden activity h. A potentially attractive feature of a theory based on mutual information is that it can allow for comparisons between different architectures: mutual information is invariant to any invertible transformation of the variables, so two hidden representations could be very different in detail but yield identical mutual information with respect to the input. However, once noise is added to a hidden representation, this is no longer the case: the variable T = h + Z is not invariant to reparametrizations of h. As a simple example, consider a minimal linear network with scalar weights w 1 and w 2 that computes the outputŷ = w 2 w 1 X. The hidden activity is h = w 1 X. Now consider the family of networks in which we scale down w 1 and scale up w 2 by a factor c = 0, that is, these networks have weightsw 1 = w 1 /c andw 2 = cw 2, yielding the exact same input-output mapŷ =w 2w1 X = cw 2 (w 1 /c)X = w 2 w 1 X. Because they compute the same function, they necessarily generalize identically. However after introducing the noise assumption the mutual information is DISPLAYFORM6 where we have taken the setting in Section 3 in which X is normal Gaussian, and independent Gaussian noise of variance σ 2 M I is added for the purpose of MI computation. 
Clearly, the mutual information is now dependent on the scaling c of the internal layer, even though this is an invertible linear transformation of the representation. Moreover, this shows that networks which generalize identically can nevertheless have very different mutual information with respect to the input when it is measured in this way. Our argument relating neural saturation to compression in mutual information relies on the notion that in typical training regimes, weights begin small and increase in size over the course of training. We note that this is a virtual necessity for a nonlinearity like tanh, which is linear around the origin: when initialized with small weights, the activity of a tanh network will be in this linear regime and the network can only compute a linear function of its input. Hence a real world nonlinear task can only be learned by increasing the norm of the weights so as to engage the tanh nonlinearity on some examples. This point can also be appreciated from norm-based capacity bounds on neural networks, which show that, for instance, the Rademacher complexity of a neural network with small weights must be low BID4 ). Finally, as an empirical matter, the networks trained in this paper do in fact increase the norm of their weights over the course of For the linear setting considered here, the mutual information between a hidden representation T and the output Y may be calculated using the relations DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 H STOCHASTIC VS BATCH TRAINING FIG0 shows information plane dynamics for stochastic and batch gradient descent learning in a linear network. Randomness in the training process does not dramatically alter the information plane dynamics. The proposed mechanism of compression in is noise arising from stochastic gradient descent training. The in Section 4 of the main text show that compression still occurs under batch gradient descent learning, suggesting that in fact noise in the gradient updates is not the cause of compression. Here we investigate a related claim, namely that during training, networks switch between two phases. These phases are defined by the ratio of the mean of the gradient to the standard deviation of the gradient across training examples, called the gradient signal-to-noise ratio. In the first "drift" phase, the SNR is high, while in the second "diffusion" phase the SNR is low. hypothesize that the drift phase corresponds to movement toward the minimum with no compression, while the diffusion phase corresponds to a constrained diffusion in weight configurations that attain the optimal loss, during which representations compress. However, two phases of gradient descent have been described more generally, sometimes known as the transient and stochastic phases or search and convergence phases BID21 BID7, suggesting that these phases might not be related specifically to compression behavior. In FIG1 we plot the gradient SNR over the course of training for the tanh and ReLU networks in the standard setting of. In particular, for each layer l we calculate the mean and standard deviation as DISPLAYFORM0 DISPLAYFORM1 where · denotes the mean and ST D(·) denotes the element-wise standard deviation across all training samples, and · F denotes the Frobenius norm. The gradient SNR is then the ratio m l /s l. We additionally plot the norm of the weights W l F over the course of training. 
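A minimal sketch of this per-layer gradient SNR computation is given below, assuming a PyTorch model and that xs, ys iterate over individual examples; per-sample gradients are gathered with a simple (slow) loop for clarity, and the function name is mine.

```python
# Sketch of the gradient SNR m_l / s_l defined above, computed per parameter tensor.
import torch

def gradient_snr(model, loss_fn, xs, ys):
    per_sample = {name: [] for name, _ in model.named_parameters()}
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        for name, p in model.named_parameters():
            per_sample[name].append(p.grad.detach().clone())
    snr = {}
    for name, grads in per_sample.items():
        g = torch.stack(grads)                  # shape (P, *param_shape)
        m = g.mean(dim=0).norm()                # Frobenius norm of the mean gradient
        s = g.std(dim=0).norm()                 # Frobenius norm of the element-wise std
        snr[name] = (m / s).item()
    return snr
```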
Both tanh and ReLU networks yield a similar qualitative pattern, with SNR undergoing a step-like transition to a lower value during training. Figures 9 and 10, fourth row, show similar plots for MNIST-trained networks. Again, SNR undergoes a transition from high to low over training. Hence the two phase nature of gradient descent appears to hold across the settings that we examine here. Crucially, this finding shows that the SNR transition is not related to the compression phenomenon because ReLU networks, which show the gradient SNR phase transition, do not compress. Finally, to show the generality of the two-phase gradient SNR behavior and its independence from compression, we develop a minimal model of this phenomenon in a three neuron linear network. We consider the student-teacher setting of FIG2 but with N i = N h = 1, such that the input and hidden layers have just a single neuron (as in the setting of FIG1). Here, with just a single hidden neuron, clearly there can be no compression so long as the first layer weight increases over the course of training. FIG0 shows that even in this simple setting, the SNR shows the phase transition but the weight norm increases over training. Hence again, the two phases of the gradient are present even though there is no compression. To intuitively understand the source of this behavior, note that the weights are initialized to be small and hence early in learning all must be increased in magnitude, yielding a consistent mean gradient. Once the network reaches the vicinity of the minimum, the mean weight change across all samples by definition goes to zero. The standard deviation remains finite, however, because on some specific examples error could be improved by increasing or decreasing the weights-even though across the whole dataset the mean error has been minimized. Hence overall, our show that a two-phase structure in the gradient SNR occurs in all settings we consider, even though compression occurs only in a subset. The gradient SNR behavior is therefore not causally related to compression dynamics, consistent with the view that saturating nonlinearities are the primary source of compression.
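The minimal three-neuron model above can be reproduced in a few lines. The sketch below uses a noisy scalar teacher and batch gradient descent (all constants are illustrative); one should see the SNR fall as the fit converges while the weight norm grows, with no possibility of compression since there is only a single hidden unit.

```python
# Sketch of the minimal scalar model: student y_hat = w2*w1*x, tracking gradient SNR and |w|.
import numpy as np

rng = np.random.default_rng(0)
P = 256
x = rng.standard_normal(P)
y = 1.5 * x + 0.1 * rng.standard_normal(P)     # noisy scalar teacher (values are illustrative)

w1, w2 = 0.1, 0.1                              # small initialization
lr = 0.05
for epoch in range(201):
    err = w2 * w1 * x - y                      # per-sample residuals
    g1, g2 = err * w2 * x, err * w1 * x        # per-sample gradients of 0.5*err**2
    m = np.hypot(g1.mean(), g2.mean())         # norm of the mean-gradient vector
    s = np.hypot(g1.std(), g2.std())           # norm of the element-wise std vector
    if epoch % 25 == 0:
        print(epoch, "SNR %.3f" % (m / s), "|w| %.3f" % np.hypot(w1, w2))
    w1, w2 = w1 - lr * g1.mean(), w2 - lr * g2.mean()
```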
We show that several claims of the information bottleneck theory of deep learning are not true in the general case.
337
scitldr
Over the past four years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. For most current network architectures, we prove that the L1-norm of these gradients grows as the square root of the input size. These nets therefore become increasingly vulnerable with growing image size. Our proofs rely on the network’s weight distribution at initialization, but extensive experiments confirm that our still hold after usual training. Following the work of BID7, Convolutional Neural Networks (CNNs) have been found vulnerable to adversarial examples: an adversary can drive the performance of state-of-the art CNNs down to chance level with imperceptible changes of the inputs. A number of studies have tried to address this issue, but only few have stressed that, because adversarial examples are essentially small input changes that create large output variations, they are inherently caused by large gradients of the neural network with respect to its inputs. Of course, this view, which we will focus on here, assumes that the network and loss are differentiable. It has the advantage to yield a large body of specific mathematical tools, but might not be easily extendable to masked gradients, non-smooth models or the 0-1-loss. Nevertheless, our might even hold for non-smooth models, given that the latter can often be viewed as smooth at a coarser level. Contributions. More specifically, we provide theoretical and empirical arguments supporting the existence of a monotonic relationship between the gradient norm of the training objective (of a differentiable classifier) and its adversarial vulnerability. Evaluating this norm based on the weight statistics at initialization, we show that CNNs and most feed-forward networks, by design, exhibit increasingly large gradients with input dimension d, almost independently of their architecture. That leaves them increasingly vulnerable to adversarial noise. We corroborate our theoretical by extensive experiments. Although some of those experiments involve adversarial regularization schemes, our goal is not to advocate a new adversarial defense (these schemes are already known), but to show how their effect can be explained by our first order analysis. We do not claim to explain all aspects of adversarial vulnerability, but we claim that our first order argument suffices to explain a significant part of the empirical findings on adversarial vulnerability. This calls for researching the design of neural network architectures with inherently smaller gradients and provides useful guidelines to practitioners and network designers. Suppose that a given classifier ϕ classifies an image x as being in category ϕ(x). An adversarial image is a small modification of x, barely noticeable to the human eye, that suffices to fool the classifier into predicting a class different from ϕ(x). It is a small perturbation of the inputs, that creates a large variation of outputs. Adversarial examples thus seem inherently related to large gradients of the network. A connection, that we will now clarify. Note that visible adversarial examples sometimes appear in the literature, but we deliberately focus on imperceptible ones. Adversarial vulnerability and adversarial damage. 
In practice, an adversarial image is constructed by adding a perturbation δ to the original image x such that δ ≤ for some (small) number and a given norm · over the input space. We call the perturbed input x + δ an -sized · -attack and say that the attack was successful when ϕ(x + δ) = ϕ(x). This motivates Definition 1. Given a distribution P over the input-space, we call adversarial vulnerability of a classifier ϕ to an -sized · -attack the probability that there exists a perturbation δ of x such that δ ≤ and ϕ(x) = ϕ(x + δ).We call the average increase-after-attack E x∼P [∆L] of a loss L the (L-) adversarial damage (of the classifier ϕ to an -sized · -attack).When L is the 0-1-loss L 0/1, adversarial damage is the accuracy-drop after attack. The 0-1-loss damage is always smaller than adversarial vulnerability, because vulnerability counts all class-changes of ϕ(x), whereas some of them may be neutral to adversarial damage (e.g. a change between two wrong classes). The L 0/1 -adversarial damage thus lower bounds adversarial vulnerability. Both are even equal when the classifier is perfect (before attack), because then every change of label introduces an error. It is hence tempting to evaluate adversarial vulnerability with L 0/1 -adversarial damage. From ∆L 0/1 to ∆L and to ∂ x L. In practice however, we do not train our classifiers with the non-differentiable 0-1-loss but use a smoother loss L, such as the cross-entropy loss. For similar reasons, we will now investigate the adversarial damage E x [∆L(x, c)] with loss L rather than L 0/1. Like for BID7; BID13; and many others, a classifier ϕ will hence be robust if, on average over x, a small adversarial perturbation δ of x creates only a small variation δL of the loss. Now, if δ ≤, then a first order Taylor expansion in shows that DISPLAYFORM0 where ∂ x L denotes the gradient of L with respect to x, and where the last equality stems from the definition of the dual norm |||·||| of ·. Now two remarks. First: the dual norm only kicks in because we let the input noise δ optimally adjust to the coordinates of ∂ x L within its -constraint. This is the brand mark of adversarial noise: the different coordinates add up, instead of statistically canceling each other out as they would with random noise. For example, if we impose that δ 2 ≤, then δ will strictly align with ∂ x L. If instead δ ∞ ≤, then δ will align with the sign of the coordinates of ∂ x L. Second remark: while the Taylor expansion in becomes exact for infinitesimal perturbations, for finite ones it may actually be dominated by higher-order terms. Our experiments FIG1 however strongly suggest that in practice the first order term dominates the others. Now, remembering that the dual norm of an p -norm is the corresponding q -norm, and summarizing, we have proven Lemma 2. At first order approximation in, an -sized adversarial attack generated with norm · increases the loss L at point x by |||∂ x L|||, where |||·||| is the dual norm of ·. In particular, an -sized p -attack increases the loss by ∂ x L q where 1 ≤ p ≤ ∞ and 1 p + 1 q = 1. Consequently, the adversarial damage of a classifier with loss L to -sized attacks generated with norm · is E x |||∂ x L|||. This is valid only at first order, but it proves that at least this kind of first-order vulnerability is present. 
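For concreteness, here is a minimal sketch of the first-order attack implied by Lemma 2 for a single example (the function name is mine and eps is the attack size): for an l-infinity budget the loss-maximizing perturbation aligns with the sign of the gradient (the FGSM step), for an l2 budget with the normalized gradient, and the predicted first-order loss increase is eps times the dual norm of the gradient.

```python
# Sketch: first-order attack and its predicted vs. actual loss increase (single example x).
import torch
import torch.nn.functional as F

def first_order_attack(model, x, y, eps, p="inf"):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    if p == "inf":
        delta = eps * grad.sign()                  # dual norm q = 1
        predicted = eps * grad.abs().sum()
    else:                                          # p = 2, dual norm q = 2
        delta = eps * grad / grad.norm()
        predicted = eps * grad.norm()
    with torch.no_grad():
        actual = F.cross_entropy(model(x + delta), y) - loss
    return delta.detach(), predicted.item(), actual.item()
```

Comparing predicted against actual on trained networks is one way to check how well the first-order term dominates in practice.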
We will see that the first-order predictions closely match the experiments, and that this insight helps protecting even against iterative (non-first-order) attack methods (Figure 1).Calibrating the threshold to the attack-norm ·. Lemma 2 shows that adversarial vulnerability depends on three main factors: (i) ·, the norm chosen for the attack (ii), the size of the attack, and (iii) E x |||∂ x L|||, the expected dual norm of ∂ x L. We could see Point (i) as a measure of our sensibility to image perturbations, (ii) as our sensibility threshold, and (iii) as the classifier's expected marginal sensibility to a unit perturbation. E x |||∂ x L||| hence intuitively captures the discrepancy between our perception (as modeled by ·) and the classifier's perception for an input-perturbation of small size. Of course, this viewpoint supposes that we actually found a norm · (or more generally a metric) that faithfully reflects human perception -a project in its own right, far beyond the scope of this paper. However, it is clear that the threshold that we choose should depend on the norm · and hence on the input-dimension d. In particular, for a given pixel-wise order of magnitude of the perturbations δ, the p -norm of the perturbation will scale like d. This suggests to write the threshold p used with p -attacks as: DISPLAYFORM0 where ∞ denotes a dimension-independent constant. In Appendix D we show that this scaling also preserves the average signal-to-noise ratio x 2 / δ 2, both across norms and dimensions, so that p could correspond to a constant human perception-threshold. With this in mind, the impatient reader may already jump to Section 3, which contains our main contributions: the estimation of E x ∂ x L q for standard feed-forward nets. Meanwhile, the rest of this section shortly discusses two straightforward defenses that we will use later and that further illustrate the role of gradients. A new old regularizer. Lemma 2 shows that the loss of the network after an 2 -sized · -attack is DISPLAYFORM1 It is thus natural to take this loss-after-attack as a new training objective. Here we introduced a factor 2 for reasons that will become clear in a moment. Incidentally, for · = · 2, this new loss reduces to an old regularization-scheme proposed by BID4 called double-backpropagation. At the time, the authors argued that slightly decreasing a function's or a classifier's sensitivity to input perturbations should improve generalization. In a sense, this is exactly our motivation when defending against adversarial examples. It is thus not surprising to end up with the same regularization term. Note that our reasoning only shows that training with one specific norm |||·||| in helps to protect against adversarial examples generated from ·. A priori, we do not know what will happen for attacks generated with other norms; but our experiments suggest that training with one norm also protects against other attacks (see FIG1 and Section 4.1).Link to adversarially-augmented training. In, designates an attack-size threshold, while in, it is a regularization-strength. Rather than a notation conflict, this reflects an intrinsic duality between two complementary interpretations of, which we now investigate further. Suppose that, instead of using the loss-after-attack, we augment our training set with -sized · -attacks x + δ, where for each training point x, the perturbation δ is generated on the fly to locally maximize the loss-increase. 
Then we are effectively training with DISPLAYFORM2 where by construction δ satisfies. We will refer to this technique as adversarially augmented training. It was first introduced by BID7 with · = · ∞ under the name of FGSM 1 -augmented training. Using the first order Taylor expansion in of, this'old-plus-postattack' loss of simply reduces to our loss-after-attack, which proves Proposition 3. Up to first-order approximations in,L, · = L,|||·|||. Said differently, for small enough, adversarially-augmented training with -sized · -attacks amounts to penalizing the dual norm |||·||| of ∂ x L with weight /2. In particular, double-backpropagation corresponds to training with 2 -attacks, while FGSM-augmented training corresponds to an 1 -penalty on ∂ x L.This correspondence between training with perturbations and using a regularizer can be compared to Tikhonov regularization: Tikhonov regularization amounts to training with random noise BID2, while training with adversarial noise amounts to penalizing ∂ x L. Section 4.1 verifies the correspondence between adversarial augmentation and gradient regularization empirically, which also strongly suggests the empirical validity of the first-order Taylor expansion in.3 ESTIMATING ∂ x L q TO EVALUATE ADVERSARIAL VULNERABILITY In this section, we evaluate the size of ∂ x L q for standard neural network architectures. We start with fully-connected networks, and finish with a much more general theorem that, not only encompasses CNNs (with or without strided convolutions), but also shows that the gradient-norms are essentially independent of the network topology. We start our analysis by showing how changing q affects the size of ∂ x L q. Suppose for a moment that the coordinates of ∂ x L have typical magnitude DISPLAYFORM3 This equation carries two important messages. First, we see how ∂ x L q depends on d and q. The dependence seems highest for q = 1. But once we account for the varying perceptibility threshold DISPLAYFORM4, we see that adversarial vulnerability scales like d · |∂ x L|, whatever p -norm we use. Second, shows that to be robust against any type of p -attack at any input-dimension d, the average absolute value of the coefficients of ∂ x L must grow slower than 1/d. Now, here is the catch, which brings us to our core insight. In order to preserve the activation variance of the neurons from layer to layer, the neural weights are usually initialized with a variance that is inversely proportional to the number of inputs per neuron. Imagine for a moment that the network consisted only of one output neuron o linearly connected to all input pixels. For the purpose of this example, we assimilate o and L. Because we initialize the weights with a variance of 1/d, their average absolute value |∂ x o| ≡ |∂ x L| grows like 1/ √ d, rather than the required 1/d. By, the adversarial vulnerability DISPLAYFORM0 This toy example shows that the standard initialization scheme, which preserves the variance from layer to layer, causes the average coordinate-size |∂ x L| to grow like 1/ √ d instead of 1/d. When an ∞ -attack tweaks its -sized input-perturbations to align with the coordinate-signs of ∂ x L, all coordinates of ∂ x L add up in absolute value, ing in an output-perturbation that scales like √ d and leaves the network increasingly vulnerable with growing input-dimension. Our next theorems generalize the previous toy example to a very wide class of feedforward nets with ReLU activation functions. 
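Before the general theorems, a quick numerical check of the toy example's scaling (a sketch in NumPy; the dimensions and sample count are illustrative): for a single linear output neuron with weights of variance 1/d, the gradient coordinates have typical size 1/sqrt(d), so the l1-norm of the input gradient grows like sqrt(d) (for Gaussian weights, E||w||_1 = d * sqrt(2/(pi*d)) = sqrt(2d/pi)).

```python
# Sketch: ||grad||_1 of a single He-style linear output neuron grows like sqrt(d).
import numpy as np

rng = np.random.default_rng(0)
for d in [64, 256, 1024, 4096]:
    w = rng.normal(0.0, 1.0 / np.sqrt(d), size=(10000, d))   # 10k draws of the weight vector
    l1 = np.abs(w).sum(axis=1).mean()
    print(d, "empirical E||w||_1 = %.2f" % l1, "  sqrt(2d/pi) = %.2f" % np.sqrt(2 * d / np.pi))
```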
For illustration purposes, we start with fully connected nets and only then proceed to the broader class, which includes any succession of (possibly strided) convolutional layers. In essence, the proofs iterate our insight on one layer over a sequence of layers. They all rely on the following set (H) of hypotheses: H1 Non-input neurons are followed by a ReLU killing half of its inputs, independently of the weights. H2 Neurons are partitioned into layers, meaning groups that each path traverses at most once. H3 All weights have 0 expectation and variance 2/(in-degree) ('He-initialization'). H4 The weights from different layers are independent. H5 Two distinct weights w, w from a same node satisfy E [w w] = 0.If we follow common practice and initialize our nets as proposed by BID8, then H3-H5 are satisfied at initialization by design, while H1 is usually a very good approximation BID1. Note that such i.i.d. weight assumptions have been widely used to analyze neural nets and are at the heart of very influential and successful prior work (e.g., equivalence between neural nets and Gaussian processes as pioneered by Neal 1996). Nevertheless, they do not hold after training. That is why all our statements in this section are to be understood as orders of magnitudes that are very well satisfied at initialization in theory and in practice, and that we will confirm experimentally after training in Section 4. Said differently, while our theorems rely on the statistics of neural nets at initialization, our experiments confirm their after training. Theorem 4 (Vulnerability of Fully Connected Nets). Consider a succession of fully connected layers with ReLU activations which takes inputs x of dimension d, satisfies assumptions (H), and outputs logits f k (x) that get fed to a final cross-entropy-loss layer L. Then the coordinates of DISPLAYFORM0 These networks are thus increasingly vulnerable to p -attacks with growing input-dimension. Theorem 4 is a special case of the next theorem, which will show that the previous are essentially independent of the network-topology. We will use the following symmetry assumption on the neural connections. For a given path p, let the path-degree d p be the multiset of encountered in-degrees along path p. For a fully connected network, this is the unordered sequence of layer-sizes preceding the last path-node, including the input-layer. Now consider the multiset {d p} p∈P(x,o) of all path-degrees when p varies among all paths from input x to output o. The symmetry assumption (relatively to o) is (S) All input nodes x have the same multiset {d p} p∈P(x,o) of path-degrees from x to o. Intuitively, this means that the statistics of degrees encountered along paths to the output are the same for all input nodes. This symmetry assumption is exactly satisfied by fully connected nets, almost satisfied by CNNs (up to boundary effects, which can be alleviated via periodic or mirror padding) and exactly satisfied by strided layers, if the layer-size is a multiple of the stride. Theorem 5 (Vulnerability of Feedforward Nets). Consider any feed-forward network with linear connections and ReLU activation functions. Assume the net satisfies assumptions (H) and outputs logits f k (x) that get fed to the cross-entropy-loss L. DISPLAYFORM1 Moreover, if the net satisfies the symmetry assumption (S), then DISPLAYFORM2 Theorems 4 and 5 are proven in Appendix B. 
The main proof idea is that in the gradient norm computation, the He-initialization exactly compensates the combinatorics of the number of paths in the network, so that this norm becomes independent of the network topology. In particular, we getCorollary 6 (Vulnerability of CNNs). In any succession of convolution and dense layers, strided or not, with ReLU activations, that satisfies assumptions (H) and outputs logits that get fed to the cross-entropy-loss L, the gradient of the logit-coordinates scale like 1/ √ d and FORMULA8 is satisfied. It is hence increasingly vulnerable with growing input-resolution to attacks generated with any p -norm. Appendix A shows that the network gradient are dampened when replacing strided layers by average poolings, essentially because average-pooling weights do not follow the He-init assumption H3. In Section 4.1, we empirically verify the validity of the first-order Taylor approximation made in (Fig.1), for example by checking the correspondence between loss-gradient regularization and adversarially-augmented training FIG1 ). Section 4.2 then empirically verifies that both the average 1 -norm of ∂ x L and the adversarial vulnerability grow like √ d as predicted by Corollary 6. For all experiments, we approximate adversarial vulnerability using various attacks of the Foolboxpackage . We use an ∞ attack-threshold of size ∞ = 0.005 (and later 0.002) which, for pixel-values ranging from 0 to 1, is completely imperceptible but suffices to fool the classifiers on a significant proportion of examples. This ∞ -threshold should not be confused with the regularization-strengths appearing in and, which will be varied in some experiments. Figure 1: Adversarial vulnerability approximated by different attack-types for 10 trained networks as a function of (a) the 1 gradient regularization-strength used to train the nets and (b) the average gradient-norm. These curves confirm that the first-order expansion term in is a crucial component of adversarial vulnerability. DISPLAYFORM0 suggests that protecting against a given attack-norm also protects against others. (f): Merging 2band 2c shows that all adversarial augmentation and gradient-regularization methods achieve similar accuracy-vulnerability trade-offs. We train several CNNs with same architecture to classify CIFAR-10 images BID12.For each net, we use a specific training method with a specific regularization value. The training methods used were 1 -and 2 -penalization of ∂ x L (Eq. 4), adversarial augmentation with ∞ -andValidity of first order expansion. The following observations support the validity of the first order Taylor expansion in and suggest that it is a crucial component of adversarial vulnerability: (i) the efficiency of the first-order defense against iterative (non-first-order) attacks (Fig.1a); (ii) the striking similarity between the PGD curves (adversarial augmentation with iterative attacks) and the other adversarial training training curves (one-step attacks/defenses); (iii) the functional-like dependence between any approximation of adversarial vulnerability and E x ∂ x L 1 (Fig.1b), and its independence on the training method FIG1. (iv) the excellent correspondence between the gradient-regularization and adversarial training curves (see next paragraph). Said differently, adversarial examples seem indeed to be primarily caused by large gradients of the classifier as captured via the induced loss.. 
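For concreteness, the sketch below shows the two dual training objectives compared in these experiments (the function names are mine and the paper's exact hyperparameters are not assumed): the gradient-norm penalty via double backpropagation, and FGSM-style adversarially augmented training in the old-plus-post-attack form described above; by Proposition 3 they agree to first order in eps.

```python
# Sketch: gradient-penalty training vs. adversarially augmented training (PyTorch).
import torch
import torch.nn.functional as F

def grad_penalty_loss(model, x, y, eps, q=1):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x, create_graph=True)   # keep graph for double backprop
    penalty = grad.flatten(1).norm(p=q, dim=1).mean()
    return loss + 0.5 * eps * penalty

def fgsm_augmented_loss(model, x, y, eps):
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    with torch.no_grad():
        x_adv = x + eps * grad.sign()                          # l_inf attack of size eps
    return 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
```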
The excellent match between the adversarial augmentation curve with p = ∞ (p = 2) and its gradient-regularization dual counterpart with q = 1 (resp. q = 2) illustrates the duality between as a threshold for adversarially-augmented training and as a regularization constant in the regularized loss (Proposition 3). It also supports the validity of the first-order Taylor expansion in.Confirmation of. Still on the upper row, the curves for p = ∞, q = 1 have no reason to match those for p = q = 2 when plotted against, because -threshold is relative to a specific attack-norm. However, suggested that the rescaled thresholds d 1/p may approximately correspond to a same'threshold-unit' across p -norms and across dimension. This is well confirmed by the upper row plots: by rescaling the x-axis, the p = q = 2 and q = 1, p = ∞ curves get almost super-imposed. Accuracy-vs-Vulnerability Trade-Off. FIG1 by taking out, FIG1 shows that all gradient regularization and adversarial training methods yield equivalent accuracyvulnerability trade-offs. Incidentally, for higher penalization values, these trade-offs appear to be much better than those given by cross Lipschitz regularization. The penalty-norm does not matter. We were surprised to see that on Figures 2d and 2f, the L,q curves are almost identical for q = 1 and 2. This indicates that both norms can be used interchangeably in (modulo proper rescaling of via FORMULA2), and suggests that protecting against a specific attacknorm also protects against others. may provide an explanation: if the coordinates of ∂ x L behave like centered, uncorrelated variables with equal variance -which follows from assumptions (H) -, then the 1 -and 2 -norms of FIG1 confirms this explanation. The slope is independent of the training method. Therefore, penalizing ∂ x L(x) 1 during training will not only decrease E x ∂ x L 1 (as shown in FIG1), but also drive down E x ∂ x L 2 and vice-versa. DISPLAYFORM1 Theorems 4-5 and Corollary 6 predict a linear growth of the average 1 -norm of ∂ x L with the square root of the input dimension d, and therefore also of adversarial vulnerability (Lemma 2). To test these predictions, we upsampled the CIFAR-10 images (of size 3 x 32 x 32) by copying pixels so as to get 4 datasets with, respectively, 32, 64, 128 and 256 pixels per edge. We then trained a CNN on each dataset All networks had exactly the same amount of parameters and very similar structure across the various input-resolutions. The CNNs were a succession of 8'convolution → batchnorm → ReLU' layers with 64 output channels, followed by a final full-connection to the 12 logit-outputs. We used 2 × 2-max-poolings after the convolutions of layers 2,4, 6 and 8, and a final max-pooling after layer 8 that fed only 1 neuron per channel to the fully-connected layer. To ensure that the convolution-kernels cover similar ranges of the images across each of the 32, 64, 128 and 256 input-resolutions, we respectively dilated all convolutions ('à trous') by a factor 1, 2, 4 and 8. There is a clear discrepancy: on the training set, the gradient norms decrease (after an initialization phase) and are dimension-independent; on the test set, they increase and scale like √ d. This suggests that, outside the training points, the nets tend to recover their prior gradient-properties (i.e. naturally large gradients). Our theoretical show that the priors of classical neural networks yield vulnerable functions because of naturally high gradients. 
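At initialization, the predicted scaling can be checked directly with a few lines. The sketch below (architecture and sizes are illustrative) builds He-initialized fully connected ReLU networks for several input dimensions and measures the per-example input gradients of the cross-entropy loss: the l2-norm should stay roughly constant while the l1-norm grows like sqrt(d), as stated by Theorems 4-5.

```python
# Sketch: input-gradient norms of He-initialized ReLU MLPs across input dimensions d.
import torch
import torch.nn as nn

def init_he(m):
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
        nn.init.zeros_(m.bias)

for d in [64, 256, 1024, 4096]:
    net = nn.Sequential(nn.Linear(d, 512), nn.ReLU(),
                        nn.Linear(512, 512), nn.ReLU(),
                        nn.Linear(512, 10))
    net.apply(init_he)
    x = torch.randn(256, d, requires_grad=True)
    loss = nn.functional.cross_entropy(net(x), torch.randint(0, 10, (256,)))
    g, = torch.autograd.grad(loss, x)
    g = g * 256                                    # undo the 1/batch averaging of the loss
    print(d, "||g||_1 per example: %.3f" % g.abs().sum(1).mean().item(),
             "||g||_2 per example: %.4f" % g.norm(dim=1).mean().item())
```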
And our experiments FIG3 suggest that usual training does not escape these prior properties. But how may these insights help understanding the vulnerability of robustly trained networks? Clearly, to be successful, robust training algorithms must escape ill-behaved priors, which explains why most methods (e.g. FGSM, PGD) are essentially gradient penalization techniques. But, MNIST aside, even state-of-the-art methods largely fail at protecting current network architectures BID14, and understanding why is motivation to this and many other papers. Interestingly, BID14 recently noticed that those methods actually do protect the nets on training examples, but fail to generalize to the test set. They hence conclude that state-of-the-art robustification algorithms work, but need more data. Alternatively however, when generalization fails, one can also reduce the model's complexity. Large fully connected nets for example typically fail to generalize to out-of-sample examples: getting similar accuracies than CNNs would need prohibitively many training points. Similarly, Schmidt et al.'s observations may suggest that, outside the training points, networks tend to recover their prior properties, i.e. naturally large gradients. FIG4 corroborates this hypothesis. It plots the evolution over training epochs of the 1 -gradient-norms of the CNNs from Section 4.2 FIG3 on the training and test sets respectively. The discrepancy is unmistakable: after a brief initialization phase, the norms decrease on the training set, but increase on the test set. They are moreover almost input-dimension independent on the training set, but scale as √ d on the test set (as seen in FIG3 up to respectively 2, 4, 8 and 16 times the training set values. These observations suggest that, with the current amount of data, tackling adversarial vulnerability may require new architectures with inherently smaller gradients. Searching these architectures among those with well-behaved prior-gradients seems a reasonable start, where our theoretical may prove very useful. On network vulnerability. BID7 already stressed that adversarial vulnerability increases with growing dimension d. But their argument only relied on a linear 'one-output-to-manyinputs'-model with dimension-independent weights. They therefore concluded on a linear growth of adversarial vulnerability with d. In contrast, our theory applies to almost any standard feed-forward architecture (not just linear), and shows that, once we adjust for the weight's dimension-dependence, adversarial vulnerability increases like √ d (not d), almost independently of the architecture. Nevertheless, our experiments confirm Goodfellow et al.'s idea that our networks are "too linear-like", in the sense that a first-order Taylor expansion is indeed sufficient to explain the adversarial vulnerability of neural networks. As suggested by the one-output-to-many-inputs model, the culprit is that growing 3 Appendix A investigates such a preliminary direction by introducing average poolings, which have a weight-size 1 /in−channels rather than the typical 1 / √ in−channels of the other He-initialized weights. dimensionality gives the adversary more and more room to'wriggle around' with the noise and adjust to the gradient of the output neuron. This wriggling, we show, is still possible when the output is connected to all inputs only indirectly, even when no neuron is directly connected to all inputs, like in CNNs. 
This explanation of adversarial vulnerability is independent of the intrinsic dimensionality or geometry of the data (compare to BID0 BID6 . Finally, let us mention that show a close link between the vulnerability to small worst-case perturbation (as studied here) and larger average perturbations. Our findings on the adversarial vulnerability NNs to small perturbation could thus be translated accordingly. On robustification algorithms. Incidentally, BID7 also already relate adversarial vulnerability to large gradients of the loss L, an insight at the very heart of their FGSM-algorithm. They however do not propose any explicit penalizer on the gradient of L other than indirectly through adversarially-augmented training. propose the old double-backpropagation to robustify networks but make no connection to FGSM and adversarial augmentation. BID13 discuss and use the connection between gradient-penalties and adversarial augmentation, but never actually compare both in experiments. This comparison however is essential to test the validity of the first-order Taylor expansion in, as confirmed by the similarity between the gradient-regularization and adversarial-augmentation curves in FIG1. BID9 derived yet another gradient-based penalty -the cross-Lipschitz-penalty-by considering (and proving) formal guarantees on adversarial vulnerability itself, rather than adversarial damage. While both penalties are similar in spirit, focusing on the adversarial damage rather than vulnerability has two main advantages. First, it achieves better accuracy-to-vulnerability ratios, both in theory and practice, because it ignores class-switches between misclassified examples and penalizes only those that reduce the accuracy. Second, it allows to deal with one number only, ∆L, whereas Hein & Andriushchenko's cross-Lipschitz regularizer and theoretical guarantees explicitly involve all K logit-functions (and their gradients). See Appendix C. Penalizing network-gradients is also at the heart of contractive auto-encoders as proposed by , where it is used to regularize the encoder-features. Seeing adversarial training as a generalization method, let us also mention BID10, who propose to enhance generalization by searching for parameters in a "flat minimum region" of the loss. This leads to a penalty involving the gradient of the loss, but taken with respect to the weights, rather than the inputs. In the same vein, a gradientregularization of the loss of generative models also appears in Proposition 6 of , where it stems from a code-length bound on the data (minimum description length). More generally, the gradient regularized objective is essentially the first-order approximation of the robust training objective max δ ≤ L(x + δ, c) which has a long history in math , machine learning and now adversarial vulnerability . Finally, BID3 propose new network-architectures that have small gradients by design, rather than by special training: an approach that makes all the more sense, considering the of Theorems 4 and 5. For further details and references on adversarial attacks and defenses, we refer to. 
For differentiable classifiers and losses, we showed that adversarial vulnerability increases with the gradients ∂ x L of the loss, which is confirmed by the near-perfect functional relationship between gradient norms and vulnerability FIG1 We then evaluated the size of ∂ x L q and showed that, at initialization, usual feed-forward nets (convolutional or fully connected) are increasingly vulnerable to p -attacks with growing input dimension d (the image-size), almost independently of their architecture. Our experiments show that, on the tested architectures, usual training escapes those prior gradient (and vulnerability) properties on the training, but not on the test set. BID14 suggest that alleviating this generalization gap requires more data. But a natural (complementary) alternative would be to search for architectures with naturally smaller gradients, and in particular, with well-behaved priors. Despite all their limitations (being only first-order, assuming a prior weight-distribution and a differentiable loss and architecture), our theoretical insights may thereby still prove to be precious future allies. It is common practice in CNNs to use average-pooling layers or strided convolutions to progressively decrease the number of pixels per channel. Corollary 6 shows that using strided convolutions does not protect against adversarial examples. However, what if we replace strided convolutions by convolutions with stride 1 plus an average-pooling layer? Theorem 5 considers only randomly initialized weights with typical size 1/ √ in-degree. Average-poolings however introduce deterministic weights of size 1/(in-degree). These are smaller and may therefore dampen the input-to-output gradients and protect against adversarial examples. We confirm this in our next theorem, which uses a slightly modified version (H) of (H) to allow average pooling layers. (H) is (H), but where the He-init H3 applies to all weights except the (deterministic) average pooling weights, and where H1 places a ReLU on every non-input and non-average-pooling neuron. Theorem 7 (Effect of Average-Poolings). Consider a succession of convolution layers, dense layers and n average-pooling layers, in any order, that satisfies (H) and outputs logits f k (x). Assume the n average pooling layers have a stride equal to their mask size and perform averages over a 1,..., a n nodes respectively. Then ∂ x f k 2 and |∂ x f k | scale like 1/ √ a 1 · · · a n and 1/ √ d a 1 · · · a n respectively. Proof in Appendix B.4. Theorem 7 suggest to try and replace any strided convolution by its non-strided counterpart, followed by an average-pooling layer. It also shows that if we systematically reduce the number of pixels per channel down to 1 by using only non-strided convolutions and average-pooling layers (i.e. d = n i=1 a i), then all input-to-output gradients should become independent of d, thereby making the network completely robust to adversarial examples. Our following experiments (FIG6 show that after training, the networks get indeed robustified to adversarial examples, but remain more vulnerable than suggested by Theorem 7.Experimental setup. Theorem 7 shows that, contrary to strided layers, average-poolings should decrease adversarial vulnerability. We tested this hypothesis on CNNs trained on CIFAR-10, with 6 blocks of 'convolution → BatchNorm →ReLU' with 64 output-channels, followed by a final average pooling feeding one neuron per channel to the last fully-connected linear layer. 
Additionally, after every second convolution, we placed a pooling layer with stride and mask-size (thus acting on 2 × 2 neurons at a time, without overlap). We tested average-pooling, strided and max-pooling layers and trained 20 networks per architecture. Results are shown in FIG6. All accuracies are very close, but, as predicted, the networks with average pooling layers are more robust to adversarial images than the others. However, they remain more vulnerable than what would follow from Theorem 7. We also noticed that, contrary to the strided architectures, their gradients after training are an order of magnitude higher than at initialization and than predicted. This suggests that assumptions (H) get more violated when using average-poolings instead of strided layers. Understanding why will need further investigations. Proof. Let δ be an adversarial perturbation with δ = 1 that locally maximizes the loss increase at point x, meaning that δ = arg max δ ≤1 ∂ x L · δ. Then, by definition of the dual norm of ∂ x L we have: DISPLAYFORM0 Proof. Let x designate a generic coordinate of x. To evaluate the size of ∂ x L q, we will evaluate the size of the coordinates ∂ x L of ∂ x L by decomposing them into DISPLAYFORM0 where f k (x) denotes the logit-probability of x belonging to class k. We now investigate the statistical properties of the logit gradients ∂ x f k, and then see how they shape ∂ x L.Step 1: Statistical properties of ∂ x f k. Let P(x, k) be the set of paths p from input neuron x to output-logit k. Let p − 1 and p be two successive neurons on path p, andp be the same path p but without its input neuron. Let w p designate the weight from p − 1 to p and ω p be the path-product ω p:= p∈p w p. Finally, let σ p (resp. σ p) be equal to 1 if the ReLU of node p (resp. if path p) is active for input x, and 0 otherwise. As previously noticed by BID1 using the chain rule, we see that ∂ x f k is the sum of all ω p whose path is active, i.e. ∂ x f k (x) = p∈P(x,k) ω p σ p. Consequently: DISPLAYFORM1 The first equality uses H1 to decouple the expectations over weights and ReLUs, and then applies Lemma 10 of Appendix B.3, which uses H3-H5 to kill all cross-terms and take the expectation over weights inside the product. The second equality uses H3 and the fact that the ing product is the same for all active paths. The third equality counts the number of paths from x to k and we conclude by noting that all terms cancel out, except d p−1 from the input layer which is d. Equation 8 shows DISPLAYFORM2 Step 2: Statistical properties of DISPLAYFORM3 f h (x) (the probability of image x belonging to class k according to the network), we have, by definition of the cross-entropy loss, L(x, c):= − log q c (x), where c is the label of the target class. Thus: DISPLAYFORM4 otherwise, and DISPLAYFORM5 Using again Lemma 10, we see that the ∂ x f k (x) are K centered and uncorrelated variables. So DISPLAYFORM6 is approximately the sum of K uncorrelated variables with zero-mean, and its total variance is given by DISPLAYFORM7. concludes. Remark 1. Equation 9 can be rewritten as DISPLAYFORM8 As the term k = c disappears, the norm of the gradients ∂ x L(x) appears to be controlled by the total error probability. This suggests that, even without regularization, trying to decrease the ordinary classification error is still a valid strategy against adversarial examples. 
It reflects the fact that when increasing the classification margin, larger gradients of the classifier's logits are needed to push images from one side of the classification boundary to the other. This is confirmed by Theorem 2.1 of BID9. See also in Appendix C. The proof of Theorem 5 is very similar to the one of Theorem 4, but we will need to first generalize the equalities appearing in. To do so, we identify the computational graph of a neural network to an abstract Directed Acyclic Graph (DAG) which we use to prove the needed algebraic equalities. We then concentrate on the statistical weight-interactions implied by assumption (H), and finally throw these together to prove the theorem. In all the proof, o will designate one of the output-logits f k (x).Lemma 8. Let x be the vector of inputs to a given DAG, o be any leaf-node of the DAG, x a generic coordinate of x. Let p be a path from the set of paths P(x, o) from x to o,p the same path without node x, p a generic node inp, and d p be its input-degree. Then: DISPLAYFORM0 Proof. We will reason on a random walk starting at o and going up the DAG by choosing any incoming node with equal probability. The DAG being finite, this walk will end up at an input-node x with probability 1. Each path p is taken with probability p∈p 1 dp. And the probability to end up at an input-node is the sum of all these probabilities, i.e. DISPLAYFORM1 The sum over all inputs x in being 1, on average it is 1/d for each x, where d is the total number of inputs (i.e. the length of x). It becomes an equality under assumption (S):Lemma 9. Under the symmetry assumption (S), and with the previous notations, for any input x ∈ x: p. By using and the fact that, by (S), the multiset D(x, o) is independent of x, we hence conclude DISPLAYFORM2 DISPLAYFORM3 Now, let us relate these considerations on graphs to gradients and use assumptions (H). We remind that path-product ω p is the product p∈p w p.Lemma 10. Under assumptions (H), the path-products ω p, ω p of two distinct paths p and p starting from a same input node x, satisfy: DISPLAYFORM4 Furthermore, if there is at least one non-average-pooling weight on path p, then E W [ω p] = 0.Proof. Hypothesis H4 yields DISPLAYFORM5 Now, take two different paths p and p that start at a same node x. Starting from x, consider the first node after which p and p part and call p and p the next nodes on p and p respectively. Then the weights w p and w p are two weights of a same node. Applying H4 and H5 hence gives DISPLAYFORM6 Finally, if p has at least one non-average-pooling node p, then successively applying H4 and H3 yields: DISPLAYFORM7 We now have all elements to prove Theorem 5.Proof. (of Theorem 5) For a given neuron p inp, let p − 1 designate the previous node in p of p. Let σ p (resp. σ p) be a variable equal to 0 if neuron p gets killed by its ReLU (resp. path p is inactive), and 1 otherwise. Then: DISPLAYFORM8 Consequently: DISPLAYFORM9 where the firs line uses the independence between the ReLU killings and the weights (H1), the second uses Lemma 10 and the last uses Lemma 9. The gradient ∂ x o thus has coordinates whose squared expectations scale like 1/d. Thus each coordinate scales like 1/ √ d and ∂ x o q like d DISPLAYFORM10 Step 2 of the proof of Theorem 4.Finally, note that, even without the symmetry assumption (S), using Lemma 8 shows that DISPLAYFORM11 Thus, with or without (S), ∂ x o 2 is independent of the input-dimension d. 
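The scalings derived in these proofs — each coordinate of ∂ x f k of order 1/√d at initialization, hence ∂ x f k 2 roughly constant in d while ∂ x f k 1 grows like √d — are easy to probe numerically. The following is a minimal sketch, assuming PyTorch and an illustrative fully-connected He-initialized ReLU architecture (the widths, depth and number of trials are placeholders, not the paper's experimental setup).

import torch
import torch.nn as nn

def input_grad_norms(d, width=256, depth=4, n_trials=20):
    """Average ||d f_k / d x||_1 and ||d f_k / d x||_2 at He initialization."""
    n1, n2 = 0.0, 0.0
    for _ in range(n_trials):
        layers, in_dim = [], d
        for _ in range(depth):
            lin = nn.Linear(in_dim, width, bias=False)
            nn.init.kaiming_normal_(lin.weight, nonlinearity='relu')
            layers += [lin, nn.ReLU()]
            in_dim = width
        head = nn.Linear(in_dim, 10, bias=False)
        nn.init.kaiming_normal_(head.weight, nonlinearity='relu')
        net = nn.Sequential(*layers, head)

        x = torch.randn(1, d, requires_grad=True)
        logit = net(x)[0, 0]                      # one output logit f_k(x)
        g, = torch.autograd.grad(logit, x)
        n1 += g.abs().sum().item() / n_trials
        n2 += g.norm().item() / n_trials
    return n1, n2

for d in [64, 256, 1024, 4096]:
    l1, l2 = input_grad_norms(d)
    print(f"d={d:5d}  ||grad||_1={l1:8.2f}  ||grad||_2={l2:6.3f}")
# Expected trend: the l2 norm stays roughly flat while the l1 norm grows ~ sqrt(d).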
To prove Theorem 7, we will actually prove the following more general theorem, which generalizes Theorem 5. Theorem 7 is a straightforward corollary of it. Theorem 11. Consider any feed-forward network with linear connections and ReLU activation functions that outputs logits f k (x) and satisfies assumptions (H). Suppose that there is a fixed multiset of integers {a 1, . . ., a n} such that each path from input to output traverses exactly n average pooling nodes with degrees {a 1, . . ., a n}. Then: DISPLAYFORM0 Furthermore, if the net satisfies the symmetry assumption (S), then: DISPLAYFORM1 Two remarks. First, in all this proof, "weight" encompasses both the standard random weights, and the constant (deterministic) weights equal to 1/(in-degree) of the average-poolings. Second, assumption H5 implies that the average-pooling nodes have disjoint input nodes: otherwise, there would be two non-zero deterministic weights w, w from a same neuron that would hence satisfy: DISPLAYFORM2 Proof. As previously, let o designate any fixed output-logit f k (x). For any path p, let a be the set of average-pooling nodes of p and let q be the set of remaining nodes. Each path-product ω p satisfies: ω p = ω q ω a, where ω a is a same fixed constant. For two distinct paths p, p, Lemma 10 therefore yields: DISPLAYFORM3 Combining this with Lemma 9 and under assumption (S), we get similarly to: DISPLAYFORM4 Again, note that, even without assumption (S), using and Lemma 8 shows that DISPLAYFORM5 which proves. In their Theorem 2.1, BID9 show that the minimal = δ p perturbation to fool the classifier must be bigger than: DISPLAYFORM0 They argue that the training procedure typically already tries to maximize f c (x) − f k (x), thus one only needs to additionally ensure that ∂ x f c (x) − ∂ x f k (x) q is small. They then introduce what they call a Cross-Lipschitz Regularization, which corresponds to the case p = 2 and involves the gradient differences between all classes: DISPLAYFORM1 In contrast, using, (the square of) our proposed regularizer ∂ x L q from can be rewritten, for p = q = 2 as: DISPLAYFORM2 Although both FORMULA0 and FORMULA0 cross-interaction between the K classes, the big difference is that while in all classes play exactly the same role, in the summands all refer to the target class c in at least two different ways. First, all gradient differences are always taken with respect to ∂ x f c. Second, each summand is weighted by the probabilities q k (x) and q h (x) of the two involved classes, meaning that only the classes with a non-negligible probability get their gradient regularized. This reflects the idea that only points near the margin need a gradient regularization, which incidentally will make the margin sharper. To keep the average pixel-wise variation constant across dimensions d, we saw in that the threshold p of an p -attack should scale like d 1/p. We will now see another justification for this scaling. Contrary to the rest of this work, where we use a fixed p for all images x, here we will let p depend on the 2 -norm of x. If, as usual, the dataset is normalized such that the pixels have on average variance 1, both approaches are almost equivalent. Suppose that given an p -attack norm, we want to choose p such that the signal-to-noise ratio (SNR) x 2 / δ 2 of a perturbation δ with p -norm ≤ p is never greater than a given SNR threshold 1/. For p = 2 this imposes 2 = x 2. 
More generally, studying the inclusion of p -balls in 2 -balls yields DISPLAYFORM0 Note that this gives again DISPLAYFORM1. This explains how to adjust the threshold with varying p -attack norm. Now, let us see how to adjust the threshold of a given p -norm when the dimension d varies. Suppose that x is a natural image and that decreasing its dimension means either decreasing its resolution or cropping it. Because the statistics of natural images are approximately resolution and scale invariant BID11, in either case the average squared value of the image pixels remains unchanged, which implies that x 2 scales like √ d. Pasting this back into, we again get: DISPLAYFORM2 In particular, ∞ ∝ is a dimension-free number, exactly like in of the main part. Now, why did we choose the SNR as our invariant reference quantity and not anything else? One reason is that it corresponds to a physical power ratio between the image and the perturbation, which we think the human eye is sensible to. Of course, the eye's sensitivity also depends on the spectral frequency of the signals involved, but we are only interested in orders of magnitude here. Another point: any image x yields an adversarial perturbation δ x, where by constraint x 2 / δ x ≤ 1/. For 2 -attacks, this inequality is actually an equality. But what about other p -attacks: (on average over x,) how far is the signal-to-noise ratio from its imposed upper bound 1/? For p ∈ {1, 2, ∞}, the answer unfortunately depends on the pixel-statistics of the images. But when p is 1 or ∞, then the situation is locally the same as for p = 2. Specifically:Lemma 12. Let x be a given input and > 0. Let p be the greatest threshold such that for any δ with δ p ≤ p, the SNR DISPLAYFORM3 Moreover, for p ∈ {1, 2, ∞}, if δ x is the p -sized p -attack that locally maximizes the loss-increase i.e. δ x = arg max δ p ≤ p |∂ x L · δ|, then: DISPLAYFORM4 and E x [SNR(x)] = 1.Proof. The first paragraph follows from the fact that the greatest p -ball included in an 2 -ball of radius x 2 has radius x 2 d DISPLAYFORM5 Under review as a conference paper at ICLR 2019The second paragraph is clear for p = 2. For p = ∞, it follows from the fact that δ x = ∞ sign ∂ x L which satisfies: DISPLAYFORM6 Intuitively, this means that for p ∈ {1, 2, ∞}, the SNR of p -sized p -attacks on any input x will be exactly equal to its fixed upper limit 1/. And in particular, the mean SNR over samples x is the same (1/) in all three cases. We also ran a similar experiment as in Section 4.2, but instead of using upsampled CIFAR-10 images, we created a 12-class dataset of approximately 80, 000 3 × 256 × 256-sized RGBimages by merging similar ImageNet-classes, resizing the smallest image-edge to 256 pixels and quantiles. The are identical to Section 4.2: after usual training, the vulnerability and gradient-norms still increase like √ d. Note that, as the gradients get much larger at higher dimensions, the first order approximation in becomes less and less valid, which explains the little inflection of the adversarial vulnerability curve. For smaller -thresholds, we verified that the inflection disappears. Here we plot the same curves as in the main part, but using an 2 -attack threshold of size 2 = 0.005 √ d instead of the ∞ -threshold and deep-fool attacks instead of iterative ∞ -ones in Figs. 8 and 9. Note that contrary to ∞ -thresholds, 2 -thresholds must be rescaled by √ d to stay consistent across dimensions (see Eq.3 and Appendix D). 
All curves look essentially the same as their counterparts in the main text. In usual adversarially-augmented training, the adversarial image x + δ is generated on the fly, but is nevertheless treated as a fixed input of the neural net, which means that the gradient does not get backpropagated through δ. This need not be. As δ is itself a function of x, the gradients could actually also be backpropagated through δ. As it was only a one-line change of our code, we used this opportunity to test this variant of adversarial training (FGSM-variant in FIG1) and thank Martín Arjovsky for suggesting it. But except for an increased computation time, we found no significant difference compared to usual augmented training.
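A minimal sketch of this FGSM variant is given below, assuming PyTorch and a generic classifier; the names model, optimizer and eps are placeholders rather than the paper's code. Setting backprop_through_delta=True keeps δ in the computation graph so that gradients can also flow through the perturbation; with False it reduces to usual augmented training, where x + δ is treated as a constant input. Note that since sign() has zero derivative almost everywhere, the extra gradient path contributes little even when it is kept, which is consistent with the observation above that the variant makes no significant difference.

import torch
import torch.nn.functional as F

def fgsm_adv_training_step(model, optimizer, x, y, eps,
                           backprop_through_delta=False):
    """One FGSM-augmented training step (sketch)."""
    x = x.clone().requires_grad_(True)
    loss_clean = F.cross_entropy(model(x), y)
    grad_x, = torch.autograd.grad(loss_clean, x,
                                  create_graph=backprop_through_delta)
    delta = eps * grad_x.sign()
    if not backprop_through_delta:
        delta = delta.detach()   # usual augmented training: x + delta is a constant
    loss_adv = F.cross_entropy(model(x + delta), y)
    optimizer.zero_grad()
    loss_adv.backward()
    optimizer.step()
    return loss_adv.item()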
Neural nets have large gradients by design; that makes them adversarially vulnerable.
338
scitldr
Effective performance of neural networks depends critically on effective tuning of optimization hyperparameters, especially learning rates (and schedules thereof). We present Amortized Proximal Optimization (APO), which takes the perspective that each optimization step should approximately minimize a proximal objective (similar to the ones used to motivate natural gradient and trust region policy optimization). Optimization hyperparameters are adapted to best minimize the proximal objective after one weight update. We show that an idealized version of APO (where an oracle minimizes the proximal objective exactly) achieves global convergence to stationary point and locally second-order convergence to global optimum for neural networks. APO incurs minimal computational overhead. We experiment with using APO to adapt a variety of optimization hyperparameters online during training, including (possibly layer-specific) learning rates, damping coefficients, and gradient variance exponents. For a variety of network architectures and optimization algorithms (including SGD, RMSprop, and K-FAC), we show that with minimal tuning, APO performs competitively with carefully tuned optimizers. Tuning optimization hyperparameters can be crucial for effective performance of a deep learning system. Most famously, carefully selected learning rate schedules have been instrumental in achieving state-of-the-art performance on challenging datasets such as ImageNet BID6 and WMT BID36. Even algorithms such as RMSprop BID34 and Adam , which are often interpreted in terms of coordinatewise adaptive learning rates, still have a global learning rate parameter which is important to tune. A wide variety of learning rate schedules have been proposed BID24 BID14 BID2. Seemingly unrelated phenomena have been explained in terms of effective learning rate schedules BID35. Besides learning rates, other hyperparameters have been identified as important, such as the momentum decay factor BID31, the batch size BID28, and the damping coefficient in second-order methods BID20 BID19.There have been many attempts to adapt optimization hyperparameters to minimize the training error after a small number of updates BID24 BID1 BID2. This approach faces two fundamental obstacles: first, learning rates and batch sizes have been shown to affect generalization performance because stochastic updates have a regularizing effect BID5 BID18 BID27 BID35. Second, minimizing the short-horizon expected loss encourages taking very small steps to reduce fluctuations at the expense of long-term progress BID37. While these effects are specific to learning rates, they present fundamental obstacles to tuning any optimization hyperparameter, since basically any optimization hyperparameter somehow influences the size of the updates. In this paper, we take the perspective that the optimizer's job in each iteration is to approximately minimize a proximal objective which trades off the loss on the current batch with the average change in the predictions. Specifically, we consider proximal objectives of the form J(φ) = h(f (g(θ, φ))) + λD(f (θ), f (g(θ, φ))), where f is a model with parameters θ, h is an approximation to the objective function, g is the base optimizer update with hyperparameters φ, and D is a distance metric. Indeed, approximately solving such a proximal objective motivated the natural gradient algorithm BID0, as well as proximal reinforcement learning algorithms BID26. 
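As a toy illustration of this proximal objective (not code from the paper), the sketch below evaluates J(φ) for a one-dimensional linear model, where g is a single SGD step with learning rate φ, h is the squared loss on the current example, and D is the squared change in the prediction on a separate example; all values are placeholders chosen only to make the trade-off visible.

def predict(theta, x):                    # f(x, theta): toy linear model
    return theta * x

def proximal_objective(theta, phi, x, t, x_prox, lam):
    """J(phi) = h(f(g(theta, phi))) + lam * D(f(theta), f(g(theta, phi)))."""
    grad = 2 * (predict(theta, x) - t) * x       # gradient of the squared loss
    theta_new = theta - phi * grad               # g: one SGD step with lr phi
    h = (predict(theta_new, x) - t) ** 2         # loss after the step
    D = (predict(theta_new, x_prox) - predict(theta, x_prox)) ** 2
    return h + lam * D

# small grid search over the learning rate phi for one step
theta, x, t, x_prox, lam = 1.0, 2.0, 5.0, -1.0, 0.1
for phi in [0.01, 0.05, 0.1, 0.25]:
    print(phi, proximal_objective(theta, phi, x, t, x_prox, lam))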
We introduce Amortized Proximal Optimization (APO), an approach which adapts optimization hyperparameters to minimize the proximal objective in each iteration. We use APO to tune hyperparameters of SGD, RMSprop, and K-FAC; the hyperparameters we consider include (possibly layer-specific) learning rates, damping coefficients, and the power applied to the gradient covariances. Notice that APO has a hyperparameter λ which controls the aggressiveness of the updates. We believe such a hyperparameter is necessary until the aforementioned issues surrounding stochastic regularization and short-horizon bias are better understood. However, in practice we find that by performing a simple grid search over λ, we can obtain automatically-tuned learning rate schedules that are competitive with manual learning rate decay schedules. Furthermore, APO can automatically adapt several optimization hyperparameters with only a single hand-tuned hyperparameter. We provide theoretical justification for APO by proving strong convergence for an oracle which solves the proximal objective exactly in each iteration. In particular, we show global linear convergence and locally quadratic convergence under mild assumptions. These motivate the proximal objective as a useful target for meta-optimization. We evaluate APO on real-world tasks including image classification on MNIST, CIFAR-10, CIFAR-100, and SVHN. We show that adapting learning rates online via APO yields faster training convergence than the best fixed learning rates for each task, and is competitive with manual learning rate decay schedules. Although we focus on fast optimization of the training objective, we also find that the solutions found by APO generalize at least as well as those found by fixed hyperparameters or fixed schedules. We view a neural network as a parameterized function z = f (x, θ), where x is the input, θ are the weights and biases of the network, and z can be interpreted as the output of a regression model or the un-normalized log-probabilities of a classification model. Let the training dataset be {( DISPLAYFORM0, where input x i is associated with target t i . Our goal is to minimize the loss function: DISPLAYFORM1 where Z is the matrix of network outputs on all training examples x 1, . . ., x N, and T is the vector of labels. We design an iterative optimization algorithm to minimize Eq. 1 under the following framework: in the kth iteration, one aims to update θ to minimize the following proximal objective: DISPLAYFORM2 where x is the data used in the current iteration, P is the distribution of data, θ k is the parameters of the neural network at the current iteration, h(·) is some approximation of the loss function, and D(·, ·) represents the distance between network outputs under some metric (for notational convenience, we use mini-batch size of 1 to describe the algorithm). We first provide the motivation for this proximal objective in Section 2.1; then in Section 2.2, we propose an algorithm to optimize it in an online manner. In this section, we show that by approximately minimizing simple instances of Eq. 2 in each iteration (similar to BID25), one can recover the classic Gauss-Newton algorithm and Natural Gradient Descent BID0. In general, updating θ so as to minimize the proximal objective is impractical due to the complicated nonlinear relationship between θ and z. However, one can find an approximate solution by linearizing the network function: DISPLAYFORM0 where J = ∇ θ f (x, θ) is the Jacobian matrix. 
We consider the following instance of Eq. 2: DISPLAYFORM1 where ∆z f (x, θ) − f (x, θ k) is the change of network output, t is the label of current data x. Here h(·) is defined as the first-order Taylor approximation of the loss function. Using the linear approximation (Eq. 3), and a local second-order approximation of D, this proximal objective can be written as: DISPLAYFORM2 DISPLAYFORM3 is the Hessian matrix of the dissimilarity measured atz = f (x, θ k).Solving Eq. 5 yields: DISPLAYFORM4 where G Ex ∼P J ∇ 2DJ is the pre-conditioning matrix. Different settings for the dissimilarity DISPLAYFORM5 is defined as the squared Euclidean distance, Eq. 6 recovers the classic Gauss-Newton algorithm. When DISPLAYFORM6 is defined as the Bregman divergence, Eq. 6 yields the Generalized Gauss-Newton (GGN) method. When the output of neural network parameterizes an exponential-family distribution, the dissimilarity term can be defined as Kullback-Leibler divergence: DISPLAYFORM7 in which case Eq. 6 yields Natural Gradient Descent BID0. Since different versions of our proximal objective lead to various efficient optimization algorithms, we believe it is a useful target for meta-optimization. Although optimizers including the Gauss-Newton algorithm and Natural Gradient Descent can be seen as ways to approximately solve Eq. 2, they rely on a local linearization of the neural network and usually require more memory and more careful tuning in practice. We propose to instead directly minimize Eq. 2 in an online manner. Finding good hyperparameters (e.g., the learning rate for SGD) is a challenging problem in practice. We propose to adapt these hyperparameters online in order to best optimize the proximal objective. Consider any optimization algorithm (base-optimizer) of the following form: θ ← g(x, t, θ, ξ, φ). Here, θ is the set of model parameters, x is the data used in this iteration, t is the corresponding label, ξ is a vector of statistics computed online during optimization, and φ is a vector of optimization hyperparameters to be tuned. For example, ξ contains the exponential moving averages of the squared gradients of the parameters in RMSprop. φ usually contains the learning rate (global or layer-specific), and possibly other hyperparameters dependent on the algorithm. For each step, we formulate the meta-objective from Eq. 2 as follows (for notational convenience we omit variables other than θ and φ of g): DISPLAYFORM0 Here,x is a random mini-batch sampled from the data distribution P. We compute the approximation to the loss, h, using the same mini-batch as the gradient of the base optimizer, to avoid the short horizon bias problem BID37; we measure D on a different mini-batch to avoid instability that would if we took a large step in a direction that is unimportant for the current batch, but important for other batches. The hyperparameters φ are optimized using a stochastic gradient-based algorithm (the meta-optimizer) using the gradient ∇ φ J(φ) (similar in spirit to BID24 BID17). We refer to our framework as Amortized Proximal Optimization (APO). The simplest version of APO, which uses SGD as the meta-optimizer, is shown in Algorithm 1. One can choose any meta-optimizer; we found that RMSprop was the most stable and best-performing meta-optimizer in practice, and we used it for all our experiments. 
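A minimal sketch of this APO loop with SGD as base optimizer and RMSprop as meta-optimizer is shown below, assuming PyTorch 2.x (torch.func) and a generic classification model; the data iterator, λ value, and the choice to reuse the h-batch for the hypothetical base step are assumptions that may differ in detail from Algorithm 1. The learning rate is parameterized as exp(log_eta) so it stays positive, and one meta-step is taken every 10 base steps, matching the default configuration described later.

import torch
import torch.nn.functional as F
from torch.func import functional_call

def apo_train(model, data_iter, lam=1e-3, meta_every=10, steps=1000):
    log_eta = torch.tensor(-2.3, requires_grad=True)      # lr = exp(log_eta) ~ 0.1
    meta_opt = torch.optim.RMSprop([log_eta], lr=0.1)

    for step in range(steps):
        # ---- base update: one SGD step with the current learning rate ----
        x, t = next(data_iter)
        loss = F.cross_entropy(model(x), t)
        grads = torch.autograd.grad(loss, list(model.parameters()))
        with torch.no_grad():
            for p, g in zip(model.parameters(), grads):
                p -= log_eta.exp() * g

        # ---- meta update: adapt log_eta on the proximal meta-objective ----
        if step % meta_every == 0:
            x, t = next(data_iter)       # batch used for h (same as base gradient)
            xp, _ = next(data_iter)      # different batch for the D term
            params = dict(model.named_parameters())
            loss = F.cross_entropy(functional_call(model, params, (x,)), t)
            grads = torch.autograd.grad(loss, list(params.values()),
                                        create_graph=True)
            new_params = {name: p - log_eta.exp() * g
                          for (name, p), g in zip(params.items(), grads)}
            out_old = functional_call(model, params, (xp,)).detach()
            out_new = functional_call(model, new_params, (xp,))
            J = (F.cross_entropy(functional_call(model, new_params, (x,)), t)
                 + lam * (out_new - out_old).pow(2).sum(dim=1).mean())
            meta_opt.zero_grad()
            log_eta.grad, = torch.autograd.grad(J, [log_eta])
            meta_opt.step()
    return log_eta.exp().item()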
DISPLAYFORM1 When considering optimization meta-objectives, it is useful to analyze idealized versions where the meta-objective is optimized exactly (even when doing so is prohibitively expensive in practice). For instance, BID37 analyzed an idealized SMD algorithm, showing that even the idealized version suffered from short-horizon bias. In this section, we analyze two idealized versions of APO where an oracle is assumed to minimize the proximal objective exactly in each iteration. In both cases, we obtain strong convergence , suggesting that our proximal objective is a useful target for meta-optimization. We view the problem in output space (i.e., explicitly designing an update schedule for z i). Consider the space of outputs on all training examples; when we train a neural network, we are optimizing over a manifold in this space: DISPLAYFORM0 We assume that f is continuous, so that M is a continuous manifold. Given an oracle that for each iteration exactly minimizes the expectation of proximal objective Eq. 2 over the dataset, we can write one iteration of APO in output space as: DISPLAYFORM1 where z i is ith column of Z, corresponding to the network output on data x i after update, z k,i is the current network output on data x i. We first define the proximal objective as Eq. 4, using the Euclidean distance as the dissimilarity measure, which corresponds to Gauss-Newton algorithm under the linearization of network. With an oracle, this proximal objective leads to projected gradient descent: DISPLAYFORM0 Consider a loss function on one data point (z): R d → R, where d is the dimension of neural network's output. 1 We say the gradient is L-Lipschitz if: DISPLAYFORM1 When the manifold M is smooth, a curve in M is called geodesic if it is the shortest curve connecting the starting and the ending point. We say M have a C-bounded curvature if for each trajectory DISPLAYFORM2 going along some geodesic and v(t) 2 = 1, there is v(t) ≤ C with spectral norm. For each point Z ∈ M, consider the tangent space at point Z as T Z M. We call the projection of ∇ L(Z) onto the hyperplane T Z M as the effective gradient of L at Z ∈ M. It is worth noting that zero effective gradient corresponds to stationary point of the neural network. We have the following theorem stating the global convergence of Eq. 14 to stationary point: Theorem 1. Assume the loss satisfies A1. Furthermore, assume L is lower bounded by L * and has gradient norm upper bound G. Let g * T be the effective gradient in the first T iterations with minimal norm. When the manifold is smooth with C-bounded curvature, with λ ≥ {CG, L 4}, the norm of g * T converges with rate O 1 T as: DISPLAYFORM3 This convergence differs from usual neural network convergence , because here the Lipschitz constants are defined for the output space, so they are known and generally nice. For instance, L = 1 when we use a quadratic loss. In contrast, the gradient is in general not Lipschitz continuous in weight space for deep networks. We further replace the dissimilarity term with: DISPLAYFORM0 which is the second-order approximation of Eq. 8. With a proximal oracle, this variant of APO turns out to be Proximal Newton Method in the output space, if we set λ = 1 2: DISPLAYFORM1 where DISPLAYFORM2 H is the norm with local Hessian as metric. In general, Newton's method can't be applied directly to neural nets in weight space, because it is nonconvex BID4. However, Proximal Newton Method in output space can be efficient given a strongly convex loss function. 
Consider a loss (z) with µ-strongly convex: DISPLAYFORM3 where z * is the unique minimizer and µ is some positive real number, and L H -smooth Hessian: for any vector v ∈ R d such that v = 1, there is: DISPLAYFORM4 The following theorem suggests the locally fast convergence rate of iteration Eq. 17: Theorem 2. Under assumptions A2 and A3, if the unique minimum Z * ∈ M, then whenever iteration converges to Z *, it converges locally quadratically 2: DISPLAYFORM5 Hence, the proximal oracle achieves second-order convergence for neural network training under fairly reasonable assumptions. Of course, we don't expect practical implementations of APO (or any other practical optimization method for neural nets) to achieve the second-order convergence rates, but we believe the second-order convergence still motivates our proximal objective as a useful target for meta-optimization. Finding good optimization hyperparameters is a longstanding problem BID3. Classic methods for hyperparameter optimization, such as grid search, random search, and Bayesian optimization BID29 BID33, are expensive, as they require performing many complete training runs, and can only find fixed hyperparameter values (e.g., a constant learning rate). Hyperband can reduce the cost by terminating poorly-performing runs early, but is still limited to finding fixed hyperparameters. Population Based Training (PBT) BID10 ) trains a population of networks simultaneously, and throughout training it terminates poorly-performing networks, replaces their weights by a copy of the weights of a better-performing network, perturbs the hyperparameters, and continues training from that point. PBT can find a coarse-grained learning rate schedule, but because it relies on random search, it is far less efficient than gradient-based meta-optimization. There have been a number of approaches to gradient-based adaptation of learning rates. Gradientbased optimization algorithms can be unrolled as computation graphs, allowing the gradients of hyperparameters such as learning rates to be computed via automatic differentiation. BID17 propagate gradients through the full unrolled training procedure to find optimal learning rate schedules offline. Stochastic meta-descent (SMD) adapts hyperparameters online. Hypergradient descent (HD) BID2 takes the gradient of the learning rate with respect to the optimizer update in each iteration, to minimize the expected loss in the next iteration. In particular, HD suffers from short horizon bias BID37, while in Appendix F we show that APO does not. Some authors have proposed learning entire optimization algorithms BID14 BID35 BID1. BID14 view this problem from a reinforcement learning perspective, where the state consists of the objective function L and the sequence of prior iterates {θ t} and gradients {∇ θ L(θ t)}, and the action is the step ∆θ. In this setting, the update rule φ is a policy, which can be found via policy gradient methods BID32. Approaches that learn optimizers must be trained on a set of objective functions {f 1, . . ., f n} drawn from a distribution F; this setup can be restrictive if we only have one instance of an objective function. In addition, the initial phase of training the optimizer on a distribution of functions can be expensive. APO requires only the objective function of interest and finds learning rate schedules in a single training run. 
In principle, APO could be used to learn a full optimization algorithm; however, learning such an algorithm would be just as hard as the original optimization problem, so one would not expect an out-of-the-box meta-optimizer (such as RMSprop with learning rate 0.001) to work as well as it does for adapting few hyperparameters. In this section, we evaluate APO empirically on a variety of learning tasks; Table 1 gives an overview of the datasets, model architectures, and base optimizers we consider. In our proximal objective, DISPLAYFORM0, h can be any approximation to the loss function (e.g., a linearization); in our experiments, we directly used the loss value h =, as we found this to work well in many settings. As the dissimilarity term D, we used the squared Euclidean norm. We used APO to tune the optimization hyperparameters of four base-optimizers: SGD, SGD with Nesterov momentum (denoted SGDm), RMSprop, and K-FAC. For SGD, the only hyperparameter is the learning rate; we consider both a single, global learning rate, as well as per-layer learning rates. For SGDm, the update rule is given by: DISPLAYFORM1 where g = ∇. Since adapting µ requires considering long-term performance BID31, it is not appropriate to adapt it with a one-step objective like APO. Instead, we just adapt the learning rate with APO as if there's no momentum, but then apply momentum with µ = 0.9 on top of the updates. For RMSprop, the optimizer step is given by: DISPLAYFORM2 We note that, in addition to the learning rate η, we can also consider adapting and the power to which s is raised in the denominator of Eq. 21-we denote this parameter ρ, where in standard RMSprop we have ρ = 1 2. Both and ρ can be interpreted as having a damping effect on the update. K-FAC is an approximate natural gradient method based on preconditioning the gradient by an approximation to the Fisher matrix, θ ← θ − F −1 ∇. For K-FAC, we tune the global learning rate and the damping factor. Meta-Optimization Setup. Throughout this section, we use the following setup for metaoptimization: we use RMSprop as the meta-optimizer, with learning rate 0.1, and perform 1 metaoptimization update for every 10 steps of the base optimization. We show in Appendix E that with this default configuration, APO is robust to the initial learning rate of the base optimizer. Each meta-optimization step takes approximately the same amount of computation as a base optimization step; by performing meta-updates once per 10 base optimization steps, the computational overhead of using APO is just a small fraction more than the original training procedure. Rosenbrock. We first validated APO on the two-dimensional Rosenbrock function, f (x, y) = (1 − x) 2 + 100(y − x 2) 2, with initialization (x, y) = (1, −1.5). We used APO to tune the learning rate of RMSprop, and compared to standard RMSprop with several fixed learning rates. Because this problem is deterministic, we set λ = 0 for APO. FIG0 shows that RMSprop-APO was able to achieve a substantially lower objective value than the baseline RMSprop. The learning rates for each method are shown in FIG0; we found that APO first increases the learning rate to make rapid progress at the start of optimization, and then gradually decreases it as it approaches the local optimum. In Appendix D we show that APO converges quickly from many different locations on the Rosenbrock surface, and in Appendix E we show that APO is robust to the initial learning rate of the base optimizer. Badly-Conditioned Regression. 
Next, we evaluated APO on a badly-conditioned regression problem BID22, which is intended to be a difficult test problem for optimization algorithms. In this problem, we consider a dataset of input/output pairs {(x, y)}, where the outputs are given by y = Ax, where A is an ill-conditioned matrix with κ(A) = 10 10. The task is to fit a two-layer linear model f (x) = W 2 W 1 x to this data; the loss to be minimized is FIG0 (c) compares the performance of RMSprop with a hand-tuned fixed learning rate to the performance of RMSprop-APO, with learning rates shown in FIG0. Again, the adaptive learning rate enabled RMSprop-APO to achieve a loss value orders of magnitude smaller than that achieved by RMSprop with a fixed learning rate. DISPLAYFORM0 For each of the real-world datasets we consider-MNIST, CIFAR-10, CIFAR-100, SVHN, and FashionMNIST-we chose the learning rates for the baseline optimizers via grid searches: for SGD and SGDm, we performed a grid search over learning rates {0.1, 0.01, 0.001}, while for RMSprop, we performed a grid search over learning rates {0.01, 0.001, 0.0001}. For SGD-APO and SGDm-APO, we set the initial learning rate to 0.1, while for RMSprop-APO, we set the initial learning rate to 0.0001. These initial learning rates are used for convenience; we show in Appendix E that APO is robust to the choice of initial learning rate. The only hyperparameter we consider for APO is the value of λ: for SGD-APO and SGDm-APO, we select the best λ from a grid search over {0.1, 0.01, 1e-3}; for RMSprop, we choose λ from a grid search over {0.1, 0.01, 1e-3, 1e-4, 1e-5, 0}. Note that because each value of λ yields a learning rate schedule, performing a search over λ is much more effective than searching over fixed learning rates. In particular, we show that the adaptive learning rate schedules discovered by APO are competitive with manual learning rate schedules. First, we compare SGD and RMSprop with their APO-tuned variants on MNIST, and show that APO outperforms fixed learning rates. As the classification network for MNIST, we used a twolayer MLP with 1000 hidden units per layer and ReLU nonlinearities. We trained on mini-batches of size 100 for 100 epochs. SGD with APO. We used APO to tune the global learning rate of SGD and SGD with Nesterov momentum (denoted SGDm) on MNIST, where the momentum is fixed to 0.9. For baseline SGDm, we used learning rate 0.01, while for baseline SGD, we used both learning rates 0.1 and 0.01. The training curve of SGD with learning rate 0.1 almost coincides with that of SGDm with learning rate 0.01. For SGD-APO, the best λ was 1e-3, while for SGDm-APO, the best λ was 0.1. A comparison of the algorithms is shown in FIG1. APO substantially improved the training loss for both SGD and SGDm. We compare K-FAC with a fixed learning rate and a manual learning rate schedule to APO, used to tune 1) the learning rate; and 2) both the learning rate and damping coefficient. RMSprop with APO. Next, we used APO to tune the global learning rate of RMSprop. For baseline RMSprop, the best fixed learning rate was 1e-4, while for RMSprop-APO, the best λ was 1e-5. FIG1 (b) compares RMSprop and its APO-tuned variant on MNIST. RMSprop-APO achieved a training loss about three orders of magnitude smaller than the baseline. We trained a 34-layer residual network (ResNet34) BID7 on CIFAR-10 , using mini-batches of size 128, for 200 epochs. We used batch normalization and standard data augmentation (horizontal flipping and cropping). 
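The badly-conditioned regression task described earlier in this section is easy to reproduce. The sketch below assumes PyTorch; the dimensions, the log-spaced singular values used to reach κ(A) = 10^10, the mean-squared-error objective, and the fixed RMSprop learning rate are illustrative choices rather than the paper's exact configuration.

import torch
import torch.nn as nn

torch.manual_seed(0)
dim, n = 32, 1024
# Ill-conditioned A: singular values log-spaced so that kappa(A) = 1e10
U, _ = torch.linalg.qr(torch.randn(dim, dim))
V, _ = torch.linalg.qr(torch.randn(dim, dim))
s = torch.logspace(0, -10, dim)               # largest / smallest = 1e10
A = U @ torch.diag(s) @ V.T

X = torch.randn(n, dim)
Y = X @ A.T                                    # targets y = A x

model = nn.Sequential(nn.Linear(dim, dim, bias=False),
                      nn.Linear(dim, dim, bias=False))   # f(x) = W2 W1 x
opt = torch.optim.RMSprop(model.parameters(), lr=1e-3)    # fixed-lr baseline

for step in range(5000):
    loss = ((model(X) - Y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())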
For each optimizer, we compare APO to 1) fixed learning rates; and 2) manual learning rate decay schedules. SGD with APO. For SGD, we used both learning rates 0.1 and 0.01 since both work well. For SGD with momentum, we used learning rate 0.01. We also consider a manual schedule for both SGD and SGDm: starting from learning rate 0.1, and we decay it by a factor of 5 every 60 epochs. For the APO variants, we found that λ=1e-3 was best for SGD, while λ = 0.1 was best for SGDm. As shown in FIG1 (c), APO not only accelerates training, but also achieves higher accuracy on the test set at the end of training. RMSprop with APO. For RMSprop, we use fixed learning rates 1e-3 and 1e-4, and we consider a manual learning rate schedule in which we initialize the learning rate to 1e-3 and decay by a factor of 5 every 60 epochs. For RMSprop-APO, we used λ = 1e-3. The training curves, test accuracies, and learning rates for RMSprop and RMSprop-APO on CIFAR-10 are shown in FIG1 (d). We found that APO achieved substantially lower training loss than fixed learning rates, and was competitive with the manual decay schedule. In particular, both the final training loss and final test accuracy achieved by APO are similar to those achieved by the manual schedule. K-FAC with APO. We also used APO to tune the learning rate and damping coefficient of K-FAC. Similarly to the previous experiments, we use K-FAC to optimize a ResNet34 on CIFAR-10. We used mini-batches of size 128 and trained for 100 epochs. For the baseline, we used a fixed learning rate of 1e-3 as well as a decay schedule with initial learning rate 1e-3, decayed by a factor of 10 at epochs 40 and 80. For APO, we used λ = 1e-2. In experiments where the damping is not tuned, it is fixed at 1e-3. The are shown in FIG2. We see that K-FAC-APO performs competitively with the manual schedule when tuning just the global learning rate, and that both training loss and test accuracy improve when we tune both the learning rate and damping coefficient simultaneously. Next, we evaluated APO on the CIFAR-100 dataset. Similarly to our experiments on CIFAR-10, we used a ResNet34 network with batch-normalization and data augmentation, and we trained on minibatches of size 128, for 200 epochs. We compared SGD-APO/SGDm-APO to standard SGD/SGDm using a fixed learning rate found by grid search; a custom learning rate schedule in which the learning rate is decayed by a factor of 5 at epochs 60, 120, and 180. We set λ = 1e-3 for SGD-APO and λ = 0.1 for SGDm-APO. Figure 4 shows the training loss, test accuracy, and the tuned learning rate. It can be seen that APO generally achieves smaller training loss and higher test accuracy. We also used APO to train an 18-layer residual network (ResNet18) with batch normalization on the SVHN dataset BID21. Here, we used the standard train and test sets, without additional training data. We used mini-batches of size 128 and trained our networks for 160 epochs. We compared APO to 1) fixed learning rates, and 2) a manual schedule in which we initialize the learning rate to 1e-3 and decay by a factor of 10 at epochs 80 and 120. We show the training loss, test accuracy, and learning rates for each method in Figure 5. Here, RMSprop-APO achieves similar training loss to the manual schedule, and obtains higher test accuracy than the schedule. We also see that the learning rate adapted by APO spans two orders of magnitude, similar to the manual schedule. Figure 6: SGD with weight decay compared to SGD-APO without weight decay, on CIFAR-10. 
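The RMSprop experiments above adapt only the global learning rate η, while a later appendix additionally adapts ε and the exponent ρ of Eq. 21. A minimal sketch of such a generalized RMSprop step is given below, assuming PyTorch; the exact placement of ε relative to the exponent ρ is an assumption, and ρ = 0.5 recovers the standard update.

import torch

def rmsprop_step(params, grads, state, eta, rho=0.5, eps=1e-8, beta=0.9):
    """Generalized RMSprop step: theta <- theta - eta * g / (s**rho + eps)."""
    with torch.no_grad():
        for p, g in zip(params, grads):
            s = state.setdefault(p, torch.zeros_like(p))
            s.mul_(beta).addcmul_(g, g, value=1 - beta)   # s = beta*s + (1-beta)*g^2
            p.sub_(eta * g / (s.pow(rho) + eps))
    return state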
BID9 is a widely used technique to speed up neural net training. Networks with BN are commonly trained with weight decay. It was shown that the effectiveness of weight decay for networks with BN is not due to the regularization, but due to the fact that weight decay affects the scale of the network weights, which changes the effective learning rate (; BID8 BID35 . In particular, weight decay decreases the scale of the weights, which increases the effective learning rate; if one uses BN without regularizing the norm of the weights, then the weights can grow without bound, pushing the effective learning rate to 0. Here, we show that using APO to tune learning rates allows for effective training of BN networks without using weight decay. In particular, we compared SGD-APO without weight decay and SGD with weight decay 5e-4. Figure 6 shows that SGD-APO behaved better than SGD with a fixed learning rate, and achieved comparable performance as SGD with a manual schedule. We introduced amortized proximal optimization (APO), a method for online adaptation of optimization hyperparameters, including global and per-layer learning rates, and damping parameters for approximate second-order methods. We evaluated our approach on real-world neural network optimization tasks-training MLP and CNN models-and showed that it converges faster and generalizes better than optimal fixed learning rates. Empirically, we showed that our method overcomes short horizon bias and performs well with sensible default values for the meta-optimization parameters. Guodong Zhang, Chaoqi Wang, Bowen Xu, and Roger Grosse. Three mechanisms of weight decay regularization. arXiv preprint arXiv:1810.12281, 2018.A PROOF OF THEOREM 1We first introduce the following lemma:Lemma 1. Assume the manifold is smooth with C-bounded curvature, the gradient norm of loss function L is upper bounded by G. If the effective gradient at point Z k ∈ M is g k, then for any DISPLAYFORM0 Proof. We construct the Z satisfying the above inequality. Consider the following point in R d: DISPLAYFORM1 We show that Z is a point satisfying the inequality in the lemma. Firstly, we notice that DISPLAYFORM2 This is because when we introduce the extra curveṽ DISPLAYFORM3 Here we use the fact thatv = 0 and v ≤ C. Therefore we have DISPLAYFORM4 Here the first equality is by introducing the extra Y, the first inequality is by triangle inequality, the second equality is by the definition of g k being ∇ Z L(Z k) projecting onto a plane, the second inequality is due to the above bound of Y − Z, the last inequality is due to DISPLAYFORM5, there is therefore DISPLAYFORM6 which completes the proof. Proof. For the ease of notation, we denote the effective gradient at iteration k as g k. For one iteration, there is DISPLAYFORM0 Here the first inequality is due to the Lipschitz continuity and the fact that total loss equals to the sum of all loss functions, and the second inequality is due to λ ≥ L 4, the third inequality is due to Lemma 1 with γ = So we have DISPLAYFORM1 Telescoping, there is DISPLAYFORM2 Proof. For notational convenience, we think of Z as a vector rather than a matrix in this proof. The Hessian ∇ 2 L(Z) is therefore a block diagonal matrix, where each block is the Hessian of loss on a single data. First, we notice the following equation: DISPLAYFORM0 is the norm of vector v defined by the positive definite matrix A. DISPLAYFORM1, therefore also positive definite. 
As a of the above equivalence, one step of Proximal Newton Method can be written as: DISPLAYFORM2.Since Z * ∈ M by assumption, there is: DISPLAYFORM3 Now we have the following inequality for one iteration: DISPLAYFORM4 Here the first inequality is because of triangle inequality, the second inequality is due to the previous , the equality is because ∇ L(Z *) = 0, the last inequality is because of the strong convexity. By the Lipschitz continuity of the Hessian, we have: DISPLAYFORM5 Therefore, we have: Here we highlight the ability of APO to tune several optimization hyperparameters simultaneously. We used APO to adapt all of the RMSprop hyperparameters {η, ρ,}. As shown in FIG6 (a), tuning ρ and in addition to the learning rate η can stabilize training. We also used APO to adapt per-layer learning rates. FIG6 (b) shows the per-layer learning rates tuned by APO, when using SGD on MNIST. FIG6 (c) uses APO to tune per-layer learning rate of RMSprop on MNIST. FIG7 shows the adaptation of the additional ρ and hyperparameters of RMSprop, for training an MLP on MNIST. Tuning per-layer learning rates is a difficult optimization problem, and we found that it was useful to use a smaller meta learning rate of 0.001 and perform meta-updates more frequently. DISPLAYFORM6 We also used APO to train a convolutional network on the FashionMNIST dataset BID38. The network we use consists of two convolutional layers with 16 and 32 filters respectively, both with kernel size 5, followed by a fully-connected layer. The are shown in FIG6 (d), where we also compare K-FAC to hand-tuned RMSprop and RMSprop-APO on the same problem. We find that K-FAC with a fixed learning rate outperforms RMSprop-APO, while K-FAC-APO substantially outperforms K-FAC. The are shown in FIG6 (d). We also show the adaptation of both the learning rate and damping coefficient for K-FAC-APO in FIG6 (d). In this section, we present additional experiments on the Rosenbrock problem. We show in FIG9 that APO converges quickly from different starting points on the Rosenbrock surface. In this section we show that APO is robust to the choice of initial learning rate of the base optimizer. With a suitable meta learning rate, APO quickly adapts many different initial learning rates to the same range, after which the learning rate adaptation follows a similar trajectory. Thus, APO helps to alleviate the difficulty involved in selecting an initial learning rate. First, we used RMSprop-APO to optimize Rosenbrock, starting with a wide range of initial learning rates; we see in FIG0 that the training losses and learning rates are nearly identical between all these experiments. Next, we trained an MLP on MNIST and ResNet34 on CIFAR-10 using RMSprop-APO, with the learning rate of the base optimizer initialized to 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, and 1e-7. We used the default meta learning rate 0.1. As shown in FIG0, the training loss, test accuracy, and learning rate adaptation are nearly identical using these initial learning rates, which span 5 orders of magnitude. In this section we apply APO to the noisy quadratic problem investigated in BID37 BID23, and demonstrate that APO overcomes the short horizon bias problem. We optimize a quadratic function f (x) = x T Hx, where x ∈ R 1000, H is a diagonal matrix H = diag{h 1, h 2, · · ·, h 1000}, with eigenvalues h i evenly distributed in interval [0.01, 1]. Initially, we set x with each dimension being 100. 
For each iteration, we can access the noisy version of the function, i.e., the gradient and function value of functioñ DISPLAYFORM0 Here c is the vector of noise: each dimension of c is independently randomly sampled from a normal distribution at each iteration, and the variance of dimension i is set to be 1 hi. For SGD, we consider the following four learning rate schedules: optimal schedule, exponential schedule, linear schedule and a fixed learning rate. For SGD with APO, we directly use functionf as the loss approximation h, use Euclidean distance norm square as the dissimilarity term D, and consider the following schedules for λ: optimal schedule(with λ ≥ 0), exponential schedule, linear schedule and a fixed λ. We calculate the optimal parameter for each schedule of both algorithms so as to achieve a minimal function value at the end of 300 iterations. We optimize the schedules with 10000 steps of Adam and learning rate 0.001 after unrolling the entire 300 iterations. The function values at the end of 300 iterations with each schedule are shown in Table 2. FIG0 plots the training loss and learning rate of SGD during the 300 iterations under optimal schedule, figure 13 plots the training loss and λ under optimal schedule for SGD with APO. It can be seen that SGD with APO achieves almost the same training loss as optimal SGD for noisy quadratics task. This indicates that APO doesn't suffer from the short-horizon bias mentioned in BID37. Adam BID11 is an adaptive optimization algorithm widely used for training neural networks, which can be seen as RMSProp with momentum. The update rule is given by: DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 where g t = ∇ θ t (θ t−1).Similar to SGD with Nesterov momentum, we fixed the β 1 and β 2 for Adam, and used APO to tune the global learning rate η. We tested Adam-APO with a ResNet34 network on CIFAR-10 dataset, and compared it with both Adam with fixed learning rate and Adam with a learning rate schedule where the learning rate is initialized to 1e-3 and is decayed by a factor of 5 every 60 epochs. Similarly to SGD with momentum, we found that Adam generally benefits from larger values of λ. Thus, we recommend performing a grid search over λ values from 1e-2 to 1. As shown in FIG0, APO improved both the training loss and test accuracy compared to the fixed learning rate, and achieved comparable performance as the manual learning rate schedule. Population-based training (PBT) BID10 is an approach to hyperparameter optimization that trains a population of N neural networks simultaneously: each network periodically evaluates its performance on a target measure (e.g., the training loss); poorly-performing networks can exploit better-performing members of the population by cloning the weights of the better network, copying and perturbing the hyperparameters used by the better network, and resuming training. In this way, a single model can essentially experience multiple hyperparameter settings during training; in particular, we are interested in evaluating the learning rate schedule found using PBT.Here, we used PBT to tune the learning rate for RMSprop, to optimize a ResNet34 model on CIFAR-10. For PBT, we used a population of size 4 (which we found to perform better than a population of size 10), and used a perturbation strategy that consists of randomly multiplying the learning rate by either 0.8 or 1.2. In PBT, one can specify the probability with which to re-sample a hyperparameter value from an underlying distribution. 
We found that it was critical to set this to 0; otherwise, the learning rate could jump from small to large values and cause instability in training. FIG0 compares PBT with APO; we show the best training loss achieved by any of the models in the PBT population, as a function of wall-clock time. For a fair comparison between these methods, we ran both PBT and APO using 1 GPU. We see that APO outperforms PBT, achieving a training loss an order of magnitude smaller than PBT, and achieves the same test accuracy, much more quickly.
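As a concrete rendering of the noisy quadratic problem of Appendix F, the sketch below (NumPy) draws the noise vector c with per-dimension variance 1/h_i at every query, as described above; the exact form of the noisy objective and the example exponential schedule are assumptions for illustration, whereas the paper optimizes the schedule parameters themselves by unrolling all 300 iterations.

import numpy as np

rng = np.random.default_rng(0)
dim = 1000
h = np.linspace(0.01, 1.0, dim)            # eigenvalues of the diagonal H
x0 = np.full(dim, 100.0)                   # initial point: 100 in every dimension

def noisy_oracle(x):
    """Noisy value/gradient of f(x) = x^T H x (noise form assumed)."""
    c = rng.normal(0.0, np.sqrt(1.0 / h))  # variance 1/h_i per dimension
    value = np.sum(h * (x - c) ** 2)
    grad = 2.0 * h * (x - c)
    return value, grad

def run_sgd(lr_schedule, steps=300):
    z = x0.copy()
    for t in range(steps):
        _, g = noisy_oracle(z)
        z -= lr_schedule(t) * g
    return np.sum(h * z ** 2)               # true (noiseless) loss

# example: exponential learning-rate schedule lr_t = lr0 * decay**t
print(run_sgd(lambda t: 0.3 * 0.97 ** t))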
We introduce amortized proximal optimization (APO), a method to adapt a variety of optimization hyperparameters online during training, including learning rates, damping coefficients, and gradient variance exponents.
339
scitldr
Dense word vectors have proven their values in many downstream NLP tasks over the past few years. However, the dimensions of such embeddings are not easily interpretable. Out of the d-dimensions in a word vector, we would not be able to understand what high or low values mean. Previous approaches addressing this issue have mainly focused on either training sparse/non-negative constrained word embeddings, or post-processing standard pre-trained word embeddings. On the other hand, we analyze conventional word embeddings trained with Singular Value Decomposition, and reveal similar interpretability. We use a novel eigenvector analysis method inspired from Random Matrix Theory and show that semantically coherent groups not only form in the row space, but also the column space. This allows us to view individual word vector dimensions as human-interpretable semantic features. Understanding words has a fundamental impact on many natural language processing tasks, and has been modeled with the Distributional Hypothesis BID0. Dense d-dimensional vector representations of words created from this model are often referred to as word embeddings, and have successfully captured similarities between words, such as word2vec and GloVe BID1 BID2. They have also been applied to downstream NLP tasks as word representation features, ranging from sentiment analysis to machine translation BID3 BID4.Despite their widespread popularity in usage, the dimensions of these word vectors are difficult to interpret BID5. Consider w president = [0.1, 2.4, 0.3] as the 3-dimensional vector of "president" from word2vec. In this 3-dimensional space (or the row space), semantically similar words like "minister" and "president" are closely located. However, it is unclear what the dimensions represent, as we do not know the meaning of the 2.4 in w president. It is difficult to answer questions like'what is the meaning of high and low values in the columns of W' and'how can we interpret the dimensions of word vectors'. To address this problem, previous literature focused on the column space by either training word embeddings with sparse and non-negative constraints BID6 BID7 BID8, or post-processing pre-trained word embeddings BID5 BID9 BID10. We instead investigate this problem from a random matrix perspective. In our work, we analyze the eigenvectors of word embeddings obtained with truncated Singular Value Decomposition (SVD) BID11 BID12 of the Positive Pointwise Mutual Information (PPMI) matrix BID13. Moreover, we compare this analysis with the row and column space analysis of Skip Gram Negative Sampling (SGNS), a model used to train word2vec BID14. From the works of BID15 proving that both SVD and SGNS factorizes and approximates the same matrix, we hypothesize that a study of the principal eigenvectors of the PPMI matrix reflects the information contained in SGNS.Contributions: Without requiring any constraints or post-processing, we show that the dimensions of word vectors can be interpreted as semantic features. In doing so, we also introduce novel word embedding analysis methods inspired by the literature of eigenvector analysis techniques from Random Matrix Theory. Recently, there have been several works that have shown similar in semantic grouping among the column values. Several of these algorithms proposed to train non-negative sparse interpretable word vectors BID6 BID7 BID8 BID16.Furthermore, BID5 also proposed methods to post-process pre-trained word vectors with non-negativity and sparsity constraints. 
However, their vectors were optionally binarized, which is difficult to interpret intensity than real-values. BID9 has proposed to overcome these limitations by simply training a rotation matrix to transform pre-trained word2vec and GloVe, without being sparse or binary. Finally, BID10 post-trained the pre-trained word embeddings with k-sparse autoencoders with similar constraints to BID5.While these methods were able to successfully achieve interpretability in the column space evaluated with word intrusion detection tests, they either enforced sparsity and non-negativity constraints, or required extensive post-processing. Furthermore, they focused less on the analysis and discussion on the actual meanings of the columns despite their pursuit of interpretable dimensions. Hence, in our work, we put more emphasis on such implications with conventional algorithms without any extra constraints or post-processing steps. We define the Positive Pointwise Mutual Information (PPMI) matrix as M PPMI, the set of unique words as vocabulary V, and word embedding matrices created from SVD and SGNS as W SVD and W SGNS. The k-th largest eigenvalue and corresponding eigenvector of M PPMI are denoted as λ k and u k ∈ R |V |, and the k-th column of W SGNS as v k ∈ R |V |. The word vectors are denoted w SVD word or w SVD word, but when context is clear or does not matter, we simply use w word. Note that we often use the term "eigen" when and "singular" interchangeably because M PPMI is defined as a square matrix. Each entry of a co-occurrence matrix M represents the co-occurrence counts of words w i and c j in all documents in the corpus. However, raw co-occurrence counts have been known to underperform than other transformed variants BID15. Pointwise Mutual Information (PMI) BID13 instead transforms matrix by measuring the log ratio between the joint probability of w and c when assuming independence of the two and not. DISPLAYFORM0 The problem of this association measure is when dealing with never observed pairs which in PMI(w, c) = log 0. To cope with such, Positive Pointwise Mutual Information has been used to map all negative values to 0 from the intuition that positive associations are often more informative in downstream NLP tasks BID15.PPMI(w, c) = max(PMI(w, c), 0) Truncated SVD (we will further refer this as simply SVD), which is equivalent to maximum variance Principal Component Analysis (PCA) and has been popularized by Latent Semantic Analysis (LSA) BID12, factorizes the PPMI matrix as M PPMI = U · S · V T and truncates to d dimensions. Following the works of BID17, the word embedding matrix is taken as W = U d, instead of the more "standard" eigenvalue weighting W = U d · S. We discuss the effect of this in Section 6.2. Unlike PPMI and SVD which gives exact solutions, the word2vec Skip-Gram model, proposed by BID1, trains two randomly initialized word embedding matrices W and C with a neural network. DISPLAYFORM0 where DISPLAYFORM1 The intuition is to basically maximize the dot product between "similar" word and context pairs, and minimize the dot product between wrong pairs. The Softmax function is simply a generalized version of the logistic function to multi-class scenario. However, the normalization constant which computes the exponentials of all context words, is very computationally expensive when the vocabulary size is large. Hence, BID14 proposed Skip Gram with Negative Sampling (SGNS) to simplify the objective using negative sampling. 
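A minimal sketch of this SVD pipeline is shown below, assuming NumPy/SciPy and a precomputed |V| × |V| word-context co-occurrence count matrix (the toy random counts stand in for a real corpus): it forms the PPMI matrix and takes W_SVD = U_d, i.e. without the eigenvalue weighting, as in the setting analyzed here.

import numpy as np
from scipy.sparse.linalg import svds

def ppmi_svd_embeddings(counts, d=500):
    """counts: |V| x |V| array of word-context co-occurrence counts."""
    total = counts.sum()
    p_w = counts.sum(axis=1, keepdims=True) / total     # P(w)
    p_c = counts.sum(axis=0, keepdims=True) / total     # P(c)
    p_wc = counts / total                                # P(w, c)
    with np.errstate(divide='ignore', invalid='ignore'):
        pmi = np.log(p_wc / (p_w * p_c))
        ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)  # max(PMI, 0)

    # Truncated SVD: M_PPMI ~ U_d S_d V_d^T, word vectors W = U_d (no S weighting)
    U, S, Vt = svds(ppmi, k=d)
    order = np.argsort(-S)                               # svds returns ascending order
    return U[:, order], S[order]

# toy usage with random counts; a real corpus co-occurrence matrix would be used instead
W, S = ppmi_svd_embeddings(np.random.poisson(1.0, size=(1000, 1000)).astype(float), d=50)
print(W.shape)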
We analyze the distributions of eigenvectors, calculate the Inverse Participation Ratios (IPR) to quantify the ratio of significant elements and measure structural sparsity, and qualitatively interpret the significant elements. The empirical distribution of the elements of an eigenvector u_k is compared with a Normal distribution N(µ_k, σ²_k) to measure the normality of the eigenvectors, where µ_k and σ²_k refer to the mean and variance of the elements of u_k. BID18 have shown that eigenvectors deviating from Gaussian contain genuine correlation between stocks, while also revealing a global bias that represented newsbreaks influencing all stocks. We search for similar patterns in Section 5.1.

Inverse Participation Ratio: The Inverse Participation Ratio (IPR) of u_k, denoted as I_k, quantifies the inverse ratio of significant elements in the eigenvector u_k BID18 BID19 BID20: I_k = Σ_i (u_k^i)^4, where u_k^i is the i-th element of u_k. The intuition of IPR can be illustrated with two extreme cases. First, if all elements of u_k have the same value 1/√|V|, then I_k is simply 1/|V|, with the reciprocal 1/I_k being |V|. This means that all |V| elements contribute similarly. On the other hand, a one-hot vector u_k, with only one element equal to one and the rest zero, will have an IPR value of one (and the same reciprocal). Hence, the reciprocal 1/I_k measures the number of significant participants in u_k. In short, the larger the I_k, the smaller the ratio of participation, and the sparser the vector, in turn reflecting the structural sparsity of u_k. Furthermore, as 1/I_k ∈ [1, |V|], dividing this reciprocal by |V| yields the participation fraction of a given vector u_k ∈ R^|V|.

Visualization of Top Eigenvector Elements: As u_k, v_k ∈ R^|V|, we can map each index of the vectors to a word in the vocabulary V. Hence, we investigate the dimensions and their indices (or words) with the largest absolute values and search for semantic coherence. Similar approaches have been shown to group stocks from the same industries or nearby regions in financial data BID18, and to reveal important co-evolving genes in gene co-expression networks in genetic data BID19.

Dataset: We use the corpus that has also been used by BID1. After removing most of the noisy non-alphanumerics, such as XML tags, the dataset size is effectively reduced from approximately 66GB to 25GB, containing around 3.4B tokens. The vocabulary size is approximately 346K, as we only consider words with at least 100 occurrences.

SGNS and SVD: We adapt the code from hyperwords released by BID17 to train both W_SVD and W_SGNS. Our code is publicly available online BID3. For W_SGNS, we set the number of negative samples to 5. For both, we set a context window size of 2 (a 5-word window) and embedding dimension d = 500.

From FIG0, we can see that eigenvectors corresponding to the larger eigenvalues, such as u_1 or u_2, clearly deviate from a Gaussian distribution, and so do u_100 and u_500, but less. This shows us that the eigenvectors are not random and contain meaningful correlations. Such a pattern is expected because these vectors are the principal eigenvectors. On a more interesting note, u_1 not only significantly deviates from a normal distribution, but also has only (non-zero) negative values as its elements, and no other eigenvector shows this behavior. This suggests that this particular eigenvector could represent a common bias that affects all "words", just as it captured the effect of newsbreaks on stock prices in BID18. We revisit the interpretation of this observation in Section 6.1.
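As a small illustration of the IPR definition above, the sketch below computes I_k and the derived participation fraction for the two extreme cases discussed in the text; the helper names are ours.

# Sketch: Inverse Participation Ratio and participation fraction of an eigenvector
import numpy as np

def ipr(u):
    # u: |V|-dimensional (unit-norm) eigenvector; I_k = sum of fourth powers
    return np.sum(u ** 4)

def participation_fraction(u):
    # fraction of significant participants: (1 / I_k) / |V|, in (0, 1]
    return (1.0 / ipr(u)) / u.shape[0]

v = 10000
uniform = np.full(v, 1.0 / np.sqrt(v))   # all elements equal -> I_k = 1/|V|
one_hot = np.zeros(v); one_hot[0] = 1.0  # single participant  -> I_k = 1
print(ipr(uniform), participation_fraction(uniform))  # ~1e-4, ~1.0
print(ipr(one_hot), participation_fraction(one_hot))  # 1.0, 1e-4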
FIG1 illustrates the IPR of u_k plotted against the corresponding eigenvalue λ_k, and likewise for v_k. From the plot, we can clearly see that the eigenvectors of W_SVD have approximately 10x higher IPR values than those of W_SGNS, meaning that the vectors are much sparser for W_SVD. From FIG1, we can see that the largest eigenvector has the smallest IPR of 0.000006, and its reciprocal 1/I_k divided by |V| yields 48%, while the same quantity for the largest I_k gives around 4.7%. The mean value of 1/I_k divided by |V|, across all eigenvectors, was 27.5%, indicating that there exists some sparse structure within the eigenvectors of W_SVD. On the other hand, FIG1 shows that the mean for v_k was around 36%, meaning that the column vectors of W_SGNS are generally denser and less structured. Such a discrepancy in structural sparsity motivates us to analyze the eigenvectors of W_SVD in depth.

6 Analysis and Discussion

Based on the results of the previous sections, we further examine the top elements of the eigenvectors by sorting their absolute values in decreasing order. For example, the significant participants of u_121 are baseball-related words. Some words from u_121 initially seem irrelevant to baseball. However, "buehrle" is a baseball player, "rbis" stands for "Runs Batted In", and "astros" is a baseball team name from Houston. Meanwhile, the words grouped in u_1, the largest eigenvector, could explain the bias we mentioned in Section 5.1. The significant participants tend to be strong transition words that are often used for dramatic effect, such as "importantly" or "crucially". Evidently, these words increase the intensity of the context. Moreover, while it was originally hypothesized that the largest principal eigenvectors would capture some semantic relationship, the 121st eigenvector u_121 shows a surprisingly focused and narrow semantic grouping related to baseball. Further investigation reveals that u_121 has one of the highest IPR values, hence being one of the sparsest vectors. We verify similar trends in other eigenvectors with high IPR values, as shown in TAB3. An interesting pattern arises here, in which the sparser eigenvectors tend to capture more distinct and rare features, such as foreign names or languages, or topics like baseball.

Table 3: Top participants of the salient columns of the word vector for "airport".

Furthermore, we compare the column space analysis of W_SVD and W_SGNS. Consider the word vector w_airport for the word "airport". We choose the salient dimensions, which are the largest elements, of w_airport, and investigate the significant elements of those chosen dimensions (columns). Table 3 shows that the columns from W_SVD display semantic coherence, while those from W_SGNS seem random. u_53 groups words that are related to the location of airports. For example, one could say "Trindade station connects with the airport." Similarly, u_337 groups famous airline companies together, while "fiumicino" is a famous airport in Italy.

Sections 5.1 and 5.2 revealed that the eigenvectors contain genuine correlation and structure in the column space. We further showed in Section 6.1 that semantically coherent words form groups of significant participants in each eigenvector. Now we can answer the questions we asked earlier. What is the meaning of high and low values in the columns of W? If a word vector w from W_SVD has a high absolute value in column k, it means that the word is relevant to the semantic group formed in u_k.
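The column-interpretation procedure behind Table 3, i.e. taking the salient (largest-magnitude) dimensions of a word vector and reading off the top participants of the corresponding eigenvector columns, can be sketched as follows. The helper and the toy vocabulary are illustrative; in the paper's setting w_svd would be U_d over the 346K-word vocabulary.

# Sketch: interpret a word vector by the top participants of its salient columns
import numpy as np

def salient_columns(w_svd, vocab, word, n_cols=3, n_words=10):
    idx = vocab.index(word)
    vec = w_svd[idx]                                  # the word's feature vector
    cols = np.argsort(-np.abs(vec))[:n_cols]          # salient dimensions
    report = {}
    for k in cols:
        top = np.argsort(-np.abs(w_svd[:, k]))[:n_words]
        report[int(k)] = [vocab[i] for i in top]      # top participants of u_k
    return report

vocab = ["w%d" % i for i in range(200)]               # toy vocabulary
w_svd = np.random.randn(200, 20)                      # toy embedding matrix
print(salient_columns(w_svd, vocab, "w3"))
# e.g. salient_columns(w_svd, vocab, "airport") would list groups such as
# airport locations or airline names, per Table 3.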
For example, the words from FIG2 have the highest values in column k = 121, where u_121 represents a semantic group related to baseball, as shown in TAB1.

How can we interpret the dimensions of word vectors? The answer to this question follows naturally. As the salient dimensions represent relevant semantic groups, we can view the dimensions of w as semantic features. This view is in line with the Topic Modeling literature, in which words and documents are clustered into distinct latent topics. Hence, we can also see the word embedding dimensions as interpretable latent topics. It can easily be seen from FIG2 that similar words do not show any interpretable similarity in their W_SGNS representations, despite being nearest neighbors in the row space. On the other hand, it is very clear from FIG2 that similar words have similar representations, or feature vectors, under W_SVD. We thus empirically verify that the dimensions of the row vectors can be viewed as semantic or syntactic features. Finally, the structural sparsity discovered with the IPR is further confirmed by contrasting FIG2: it is clearly visible that the vectors from SVD are much sparser than those from SGNS.

Effect of Eigenvalue Weighting: As mentioned in Section 3, weighting with the eigenvalues essentially scales each feature column by the corresponding eigenvalue. Such a process can be viewed as simply incorporating a prior, and does not hurt interpretability. However, as BID17 showed that eigenvalue weighting decreases the performance of downstream NLP tasks, we can assume that the prior is either wrong or too strong. In fact, the largest eigenvalues are often orders of magnitude larger than the others, which can explain why not weighting the word embeddings with their corresponding eigenvalues works better.

In this work, we analyzed the eigenvectors, or the column space, of the word embeddings obtained from the Singular Value Decomposition of the PPMI matrix. We revealed that the significant participants of the eigenvectors form semantically coherent groups, allowing us to view each word vector as an interpretable feature vector composed of semantic groups. These results can be very useful for error analysis in downstream NLP tasks, or for cherry-picking useful feature dimensions to easily create compressed and efficient task-specific embeddings. Future work will proceed in this direction of applying interpretability to practical use.
Without requiring any constraints or post-processing, we show that the salient dimensions of word vectors can be interpreted as semantic features.
340
scitldr
Neural networks in the brain and in neuromorphic chips confer systems with the ability to perform multiple cognitive tasks. However, both kinds of networks experience a wide range of physical perturbations, ranging from damage to the edges of the network to complete node deletions, that could ultimately lead to network failure. A critical question is to understand how the computational properties of neural networks change in response to node damage and whether there exist strategies to repair these networks in order to compensate for performance degradation. Here, we study the damage-response characteristics of two classes of neural networks, namely multilayer perceptrons (MLPs) and convolutional neural networks (CNNs), trained to classify images from the MNIST and CIFAR-10 datasets respectively. We also propose a new framework to discover efficient repair strategies to rescue damaged neural networks. The framework involves defining damage and repair operators for dynamically traversing the neural network's loss landscape, with the goal of mapping its salient geometric features. Using this strategy, we discover features that resemble path-connected attractor sets in the loss landscape. We also identify that a dynamic recovery scheme, where networks are constantly damaged and repaired, produces a group of networks that are resilient to damage, as they can be quickly rescued. Broadly, our work shows that we can design fault-tolerant networks by applying on-line retraining consistently during damage, for real-time applications in biology and machine learning.

In this paper, inspired by the powerful paradigms introduced by deep learning, we attempt to understand the computational and mathematical principles that impact the ability of neural networks to tolerate damage and be repaired. We characterize the response of two classes of neural networks, namely multilayer perceptrons (MLPs) and convolutional neural nets (CNNs), to node damage and propose a new framework that identifies strategies to efficiently rescue damaged networks in a principled fashion.

Our key contribution is the introduction of a framework that conceptualizes damage and repair of networks as operators of a dynamical system in the high-dimensional parameter space of a neural network. The damage and repair operators are used to dynamically traverse the landscape with the goal of mapping local geometric features (like fixed points, limit cycles, or point/line attractors) of the neural networks' loss landscape. The framework led us to discover that the iterative application of damage and repair operators results in networks that are highly resilient to node deletions, and it also guides us to uncover the presence of geometric features that resemble, in many respects, a path-connected attractor set in the neural networks' loss landscape. These attractor-like geometric features explain why the iterative damage-repair strategy always results in the rescue of damaged networks within a small number of training cycles.

2 Susceptibility of neural networks to damage

The first question we ask in this paper is how neural networks respond to physical perturbations and how this affects their functional performance. We characterize the impact of neural damage on the 'cognitive' performance of neural networks by tracking the performance of two classes of artificial neural networks, namely MLPs and CNNs, under deletion of neural units from the network.
The MLPs and CNNs were trained to perform simple cognitive tasks, namely image classification on the MNIST and CIFAR-10 datasets respectively, before the networks were perturbed. To damage a node i in the hidden layer of an MLP or in the fully connected layer of a CNN, we zero all connections between node i and the rest of the network. To damage a node j in the convolutional layer of a CNN, we zero the entire feature map. In this paper, we are specifically interested in node damage as our perturbation because of its similarity in phenomena to neuron death in biological networks and node failures in neuromorphic hardware.

We observe a steep increase in the rate of decline of functional performance as we incrementally delete nodes from either an MLP with 1 hidden layer (Fig-1a), an MLP with 2 hidden layers (Fig-1b), or a CNN with 2 convolutional layers, a pooling layer, and 2 fully connected layers (Fig-1c). We refer to this discrete jump in the rate of decline of performance as a phase transition. The existence of a phase transition shows that neural nets (MLPs and CNNs) damaged above their respective critical thresholds are not resilient to any further perturbation. We are interested in deciphering strategies that enable the quick rescue of damaged neural nets and also want to identify networks that are more resilient to perturbation.

3 Can we rescue these damaged networks?

We ask whether it is fundamentally possible to rescue damaged networks in order to compensate for their performance degradation. To do so, we re-train the damaged networks via two strategies. The plots in figure-2 show that damaged neural networks can be rescued to regain their original functional performance when re-trained via both strategies 1 and 2. However, they require a large number of training cycles (epochs) to be effectively rescued (figure-2c). The requirement of a large number of training cycles for effective rescue reduces the feasibility of either strategy, as it is not ideal for either living neural networks in the brain or artificial networks implemented on neuromorphic hardware to be re-trained for extended periods of time to recover from small damage to the network. This motivates a dynamic recovery scheme in which networks are iteratively damaged and repaired, which leads to a 'space' in the networks' loss manifold that contains high-performing, more resilient, sparser networks. As the iterative process of damage and repair always enabled the fast recovery of a damaged network (irrespective of the number of damaged units), this was surprising to us, and we were interested in determining whether the loss landscape manifold had 'special' geometric features that enabled this rescue.

To map geometric features of a neural network's loss landscape, we formally conceptualize the iterative damage-repair paradigm as a dynamical system that involves the application of a damage and repair operator (r) on a neural network (w). We define w to be a feed-forward neural network with n nodes and N total connections. Here, w_i is the set of connections made by node i with the previous layer in the network. By definition, w_i = ∅ (the empty set) if node i is in the first layer. We also have Σ_i Dim(w_i) = N and w ∈ R^N. To damage a neural network, we define a damage operator D_i that damages node i in the network. To repair a neural network, we define a rescue operator r_{i,j}, where {i, j} refers to the set of damaged nodes. The rescue operator forces the network to descend the loss manifold, while fixing the nodes within the set and their connections to zero.
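Before the repair step is detailed below, here is a minimal PyTorch sketch of the damage operator D_i described above: zeroing all connections of a node in a fully connected layer, or zeroing an entire feature map in a convolutional layer. Function and argument names are ours.

# Sketch: node-damage operator for fully connected and convolutional layers
import torch
import torch.nn as nn

@torch.no_grad()
def damage_fc_node(prev_fc: nn.Linear, next_fc: nn.Linear, i: int):
    # node i lives between prev_fc and next_fc: zero its incoming weights,
    # its bias, and its outgoing weights, removing it from the computation
    prev_fc.weight[i, :] = 0.0
    if prev_fc.bias is not None:
        prev_fc.bias[i] = 0.0
    next_fc.weight[:, i] = 0.0

@torch.no_grad()
def damage_conv_featuremap(conv: nn.Conv2d, j: int):
    # zero the entire j-th feature map (output channel) of a conv layer
    conv.weight[j, ...] = 0.0
    if conv.bias is not None:
        conv.bias[j] = 0.0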
Rescue of the network is achieved by performing constrained gradient descent on the network's loss manifold: for every undamaged node k, the connections are updated as w_k ← w_k - η ∂L/∂w_k, while the connections of damaged nodes are held at zero. Here, η is the gradient step-size and ∂L/∂w_k is the gradient of the loss function of the neural network along w_k.

A damage-repair sequence involves the application of a damage operator followed by a repair operator. We define a random variable D that samples an operator D_i from the set of all possible damage operators {D_i : i ∈ {1, ..., n}}. A stochastic damage-repair sequence involves the random sampling of a damage operator via D, followed by the application of an appropriate repair operator (ensuring that gradient descent is performed only on the remaining undamaged nodes). An iterative damage-repair sequence is the repeated application of a random damage operator D coupled with a deterministic repair operator r_{i,j,k,...} that ensures all damaged nodes maintain zero edge-weights, while the other weights remain plastic.

We hypothesize that there exists an open set of networks U that constitutes an invariant set, i.e., the image of any network in U under a damage-repair sequence lies in U, and that U is path-connected: for any two points w_1 and w_2 in U, there exists a continuous path γ: [0, 1] → U such that γ(0) = w_1 and γ(1) = w_2. Our numerical results strongly suggest the presence of such an invariant, path-connected topological space U in the neural networks' loss manifold. In our experiments, the invariant, path-connected set is a collection of trained networks whose image under the application of a damage and repair operator lies in the same set, visualized by the thick black arc (as shown in figure-4) obtained from tSNE embeddings of the high-dimensional networks (w). We observe that iterative application of the damage-repair operators on a network sampled from U results in a series of networks that belong to the same set U. This is observed in fig-4b & fig-4d. The red lines indicate damage of a network, while the green lines correspond to repair of damaged networks. This hints at the possibility that U is an invariant set. We also interpolated between all pairs of networks sampled from U and observed that all the interpolated networks were present in U as well.

In this paper, we address the pertinent question of how neural networks in the brain or in engineered systems respond to damage of their units and whether there exist efficient strategies to repair damaged networks. We observe a phase-transition behavior as we incrementally delete nodes from a neural network: the rate of decline of performance steeply increases after crossing a critical number of node deletions. We discover that damaged networks can be rescued, and that the iterative damage-rescue strategy produces networks that are highly resilient to perturbations and can be rescued within a small number of training cycles. This is enabled by the putative presence of an invariant, path-connected set in the networks' loss manifold. Although we have shown numerical results that strongly suggest the presence of invariant sets in the loss manifold, our future work will focus on analytically proving the presence of these topological spaces in the loss manifold, through the formalization presented in the paper and the use of the Koopman operator machinery, amongst others.
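To make the rescue operator's constrained gradient descent concrete, the following PyTorch sketch performs masked SGD: gradients and weights belonging to damaged nodes are multiplied by a 0/1 mask so that they stay clamped at zero while the remaining weights descend the loss. The function, mask format, and hyperparameters are our assumptions, not the paper's exact implementation.

# Sketch: rescue operator as masked (constrained) gradient descent
import torch

def repair(model, loader, masks, lr=0.1, momentum=0.5, epochs=1):
    # masks: dict {param_name: 0/1 tensor}, 0 marks connections of damaged nodes
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            # zero gradients flowing into damaged connections before the step
            for name, p in model.named_parameters():
                if name in masks and p.grad is not None:
                    p.grad.mul_(masks[name])
            opt.step()
            # re-clamp in case momentum moved damaged weights off zero
            with torch.no_grad():
                for name, p in model.named_parameters():
                    if name in masks:
                        p.mul_(masks[name])
    return model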
strategy to repair damaged neural networks
341
scitldr
Automatic question generation from paragraphs is an important and challenging problem, particularly due to the long context from paragraphs. In this paper, we propose and study two hierarchical models for the task of question generation from paragraphs. Specifically, we propose (a) a novel hierarchical BiLSTM model with selective attention and (b) a novel hierarchical Transformer architecture, both of which learn hierarchical representations of paragraphs. We model a paragraph in terms of its constituent sentences, and a sentence in terms of its constituent words. While the introduction of the attention mechanism benefits the hierarchical BiLSTM model, the hierarchical Transformer, with its inherent attention and positional encoding mechanisms, also performs better than the flat Transformer model. We conducted empirical evaluation on the widely used SQuAD and MS MARCO datasets using standard metrics. The results demonstrate the overall effectiveness of the hierarchical models over their flat counterparts. Qualitatively, our hierarchical models are able to generate fluent and relevant questions.

Question Generation (QG) from text has gained significant popularity in recent years in both academia and industry, owing to its wide applicability in a range of scenarios including conversational agents, automating reading comprehension assessment, and improving question answering systems by generating additional training data. Neural network based methods represent the state-of-the-art for automatic question generation. These models do not require templates or rules, and are able to generate fluent, high-quality questions. Most of the work in question generation takes sentences as input. QG at the paragraph level is much less explored and has remained a challenging problem. The main challenges in paragraph-level QG stem from the larger context that the model needs to assimilate in order to generate relevant questions of high quality.

Existing question generation methods are typically based on recurrent neural networks (RNNs), such as the bi-directional LSTM. Equipped with different enhancements such as the attention, copy, and coverage mechanisms, RNN-based models achieve good results on sentence-level question generation. However, due to their ineffectiveness in dealing with long sequences, paragraph-level question generation remains a challenging problem for these models. Recent work proposed a paragraph-level QG model with maxout pointers and a gated self-attention encoder. To the best of our knowledge this is the only model that is designed to support paragraph-level QG, and it outperforms other models on the SQuAD dataset. One straightforward extension to such a model would be to reflect the structure of a paragraph in the design of the encoder. Our first attempt is indeed a hierarchical BiLSTM-based paragraph encoder (HPE), wherein the hierarchy comprises a word-level encoder that feeds its encoding to a sentence-level encoder. Further, dynamic paragraph-level contextual information in the BiLSTM-HPE is incorporated via both word- and sentence-level selective attention. However, the LSTM is based on the recurrent architecture of RNNs, making the model somewhat rigid and less dynamically sensitive to different parts of the given sequence. Also, LSTM models are slower to train. In our case, a paragraph is a sequence of sentences and a sentence is a sequence of words. The Transformer is a recently proposed neural architecture designed to address some deficiencies of RNNs.
Specifically, the Transformer is based on the (multi-head) attention mechanism, completely discarding the recurrence of RNNs. This design choice allows the Transformer to effectively attend to different parts of a given sequence. Also, the Transformer is much faster to train and test than RNNs. As humans, when reading a paragraph, we look for important sentences first and then important keywords in those sentences to find a concept around which a question can be generated. Taking this inspiration, we give the same power to our model by incorporating word-level and sentence-level selective attention to generate high-quality questions from paragraphs.

In this paper, we present and contrast novel approaches to QG at the level of paragraphs. Our main contributions are as follows:

• We present two hierarchical models for encoding the paragraph based on its structure. We analyse the effectiveness of these models for the task of automatic question generation from paragraphs.

• Specifically, we propose a novel hierarchical Transformer architecture. At the lower level, the encoder first encodes words and produces a sentence-level representation. At the higher level, the encoder aggregates the sentence-level representations and learns a paragraph-level representation.

• We also propose a novel hierarchical BiLSTM model with selective attention, which learns to attend to important sentences and words from the paragraph that are relevant to generate meaningful and fluent questions about the encoded answer.

• We also present attention mechanisms for dynamically incorporating contextual information in the hierarchical paragraph encoders and experimentally validate their effectiveness.

Question generation (QG) has recently attracted significant interest in the natural language processing (NLP) and computer vision (CV) communities. Given an input (e.g., a passage of text in NLP or an image in CV), and optionally also an answer, the task of QG is to generate a natural-language question that is answerable from the input. Existing text-based QG methods can be broadly classified into three categories: (a) rule-based methods, (b) template-based methods, and (c) neural network-based methods. Rule-based methods perform syntactic and semantic analysis of sentences and apply fixed sets of rules to generate questions. They mostly rely on syntactic rules written by humans, and these rules change from domain to domain. Template-based methods, on the other hand, use generic templates/slot fillers to generate questions. More recently, neural network-based QG methods have been proposed. They employ an RNN-based encoder-decoder architecture and train in an end-to-end fashion, without the need for manually created rules or templates. Early work was the first to propose a sequence-to-sequence (Seq2Seq) architecture for QG. Subsequent work proposed to augment each word with linguistic features and to encode the most relevant pivotal answer in the text while generating questions. Other approaches encode ground-truth answers (given in the training data), use the copy mechanism, and additionally employ context matching to capture interactions between the answer and its context within the passage; however, the ground-truth answer they encode for generating questions might not be available at test time. A recently proposed Seq2Seq model for paragraph-level question generation employs a maxout pointer mechanism with a gated self-attention encoder. Other studies contrast recurrent and non-recurrent architectures on their effectiveness in capturing hierarchical structure.
In machine translation, a non-recurrent model such as the Transformer, which uses neither convolutions nor recurrent connections, is often expected to perform better; as a non-recurrent model, the Transformer can be more effective than a recurrent model because it has full access to the sequence history. Our findings, however, suggest that the LSTM outperforms the Transformer in capturing the hierarchical structure. In contrast, other works report settings in which attention-based models, such as BERT, are better capable of learning hierarchical structure than LSTM-based models.

We propose a general hierarchical architecture for better paragraph representation at the level of words and sentences. This architecture is agnostic to the type of encoder, so we base our hierarchical architectures on BiLSTMs and Transformers. We then present two decoders (LSTM and Transformer) with hierarchical attention over the paragraph representation, in order to provide the dynamic context needed by the decoder. The decoder is further conditioned on the provided (candidate) answer to generate relevant questions.

Notation: The question generation task consists of pairs (X, y) conditioned on an encoded answer z, where X is a paragraph and y is the target question that needs to be generated with respect to the paragraph. Let us denote the i-th sentence in the paragraph by x_i, where x_{i,j} denotes the j-th word of the sentence. We assume that the first and last words of each sentence are the special beginning-of-sentence < BOS > and end-of-sentence < EOS > tokens, respectively.

Our hierarchical paragraph encoder (HPE) consists of two encoders, viz., a sentence-level and a word-level encoder (c.f. Figure 1). The lower-level encoder WORDENC encodes the words of individual sentences. This encoder produces a sentence-dependent word representation r_{i,j} for each word x_{i,j} in a sentence x_i, i.e., r_i = WORDENC(x_i). This representation is the output of the last encoder block in the case of the Transformer, and the last hidden state in the case of the BiLSTM. Furthermore, we can produce a fixed-dimensional representation for a sentence as a function of r_i, e.g., by summing (or averaging) its contextual word representations, or by concatenating the contextual representations of its < BOS > and < EOS > tokens. We denote the resulting sentence representation by s_i for a sentence x_i.

Sentence-Level Encoder: At the higher level, our HPE consists of another encoder that produces paragraph-dependent representations for the sentences. The inputs to this encoder are the sentence representations produced by the lower-level encoder, which are insensitive to the paragraph context. In the case of the Transformer, each sentence representation is combined with its positional embedding to take the ordering of the paragraph sentences into account. The output of the higher-level encoder is a contextual representation for the set of sentences, s' = SENTENC(s), where s'_i is the paragraph-dependent representation for the i-th sentence.

In the following two sub-sections, we present our two hierarchical encoding architectures, viz., the hierarchical BiLSTM (Section 3.2) and the hierarchical Transformer (Section 3.3). In the first option (c.f. Figure 1), we use both word-level attention and sentence-level attention in a hierarchical BiLSTM encoder to obtain the hierarchical paragraph representation. We employ an attention mechanism from prior work at both the word and sentence levels.
We employ the BiLSTM (bidirectional LSTM) as both the word-level and the sentence-level encoder. We concatenate forward and backward hidden states to obtain the sentence/paragraph representations. Subsequently, we employ a unidirectional LSTM unit as our decoder, which generates the target question one word at a time, conditioned on (i) all the words generated in the previous time steps and (ii) the encoded answer. The methodology employed in these modules is described next.

Word-Level Attention: We use the LSTM decoder's previous hidden state and the word encoder's hidden state to compute attention over words (Figure 1). We concatenate the forward and backward hidden states of the word-level BiLSTM encoder to obtain the final hidden state representation h_t at time step t, computed as h_t = WORDENC(h_{t-1}, [e_t, f_t]), where e_t is the embedded word vector and f_t the corresponding embedded feature vector at time step t.

Sentence-Level Attention: We feed the sentence representations s to our sentence-level BiLSTM encoder (c.f. Figure 1). Similar to the word-level attention, we again compute an attention weight over every sentence in the input passage, using (i) the previous decoder hidden state and (ii) the sentence encoder's hidden state. As before, we concatenate the forward and backward hidden states of the sentence-level encoder to obtain the final hidden state representation. The hidden state g_t of the sentence-level encoder is computed as g_t = SENTENC(g_{t-1}, [s_t, f^s_t]), where f^s_t is the embedded feature vector denoting whether the sentence contains the encoded answer or not. The selective sentence-level attention a^s_t is obtained by normalising, over the K sentences of the paragraph, scores that measure the compatibility between the previous decoder hidden state and the sentence encoder's hidden states. The context vector c_t is fed to the decoder at time step t along with the embedded representation of the previous output.

In this second option (c.f. Figure 2), we make use of a Transformer decoder to generate the target question, one token at a time, from left to right. For generating the next token, the decoder attends to the previously generated tokens in the question, the encoded answer, and the paragraph. We postulate that attention to the paragraph benefits from our hierarchical representation, described in Section 3.1. That is, our model first identifies the relevance of the sentences, and then the relevance of the words within the sentences. This results in a hierarchical attention module (HATT) and its multi-head extension (MHATT), which replace the attention mechanism over the source in the Transformer decoder. We first explain the sentence and paragraph encoders (Section 3.3.1) before moving on to an explanation of the decoder (Section 3.3.2) and the hierarchical attention modules (HATT and MHATT, Section 3.3.3).

The sentence-encoder Transformer maps an input sequence of word representations x = (x_0, · · ·, x_n) to a sequence of continuous representations r = (r_0, · · ·, r_n). The paragraph encoder takes the concatenation of the word representations of the start word and the end word as input and returns a paragraph representation. Each encoder layer is composed of two sub-layers, namely a multi-head self-attention layer (Section 3.3.3) and a position-wise fully connected feed-forward neural network (Section 3.3.4). To describe these modules effectively, we first give a description of the decoder (Section 3.3.2). The decoder stack is similar to the encoder stack, except that it has an additional sub-layer (the encoder-decoder attention layer), which learns multi-head attention over the output of the paragraph encoder. The output of the paragraph encoder is transformed into a set of attention vectors K_encdec and V_encdec.
The encoder-decoder attention layer of the decoder takes the keys K_encdec and values V_encdec. The decoder stack outputs a float vector, which we feed to a linear layer followed by a softmax layer to obtain the probability of generating each target word.

Let us assume that the question decoder needs to attend to the source paragraph during the generation process. To attend to the hierarchical paragraph representation, we replace the multi-head attention mechanism (to the source) in the Transformer by introducing a new multi-head hierarchical attention module MHATT(q_s, K_s, q_w, K_w, V_w), where q_s is the sentence-level query vector, q_w is the word-level query vector, K_s is the key matrix for the sentences of the paragraph, K_w is the key matrix for the words of the paragraph, and V_w is the value matrix for the words of the paragraph. The sentence-level query q_s and the word-level query q_w are created using non-linear transformations of the state of the decoder h_{t-1}, i.e., the input vector to the softmax function when generating the previous word w_{t-1} of the question. The matrices for the sentence-level key K_s and word-level key K_w are created using the outputs of the sentence-level and word-level encoders. We take the input vector to the softmax function, h_{t-1}, when the t-th word in the question is being generated.

Firstly, this module attends to the paragraph sentences using their keys and the sentence query vector: a = softmax(q_s K_s / d), where d is the dimension of the query/key vectors; the dimension of the resulting attention vector a is the number of sentences in the paragraph. Secondly, it computes an attention vector for the words of each sentence: b_i = softmax(q_w K_w^i / d), where K_w^i is the key matrix for the words in the i-th sentence; the dimension of the resulting attention vector b_i is the number of tokens in the i-th sentence. Lastly, the context vector is computed from the word values weighted by both their sentence-level and word-level attentions: c = Σ_i a_i Σ_j b_{i,j} v_{i,j}, where v_{i,j} is the value vector of the j-th word of the i-th sentence. Attention in the MHATT module follows the scaled dot-product attention of the Transformer; for multiple heads, the multi-head attention z = Multihead(Q_w, K_w, V_w) concatenates the outputs of the individual attention heads, and z is fed to a position-wise fully connected feed-forward neural network to obtain the final input representation.

The output of the HATT module is passed to a fully connected feed-forward neural network (FFNN) to calculate the hierarchical representation of the input, r = FFNN(x) = max(0, xW_1 + b_1)W_2 + b_2, where r is fed as input to the next encoder layers. The final representation r from the last layer of the decoder is fed to a linear layer followed by a softmax layer for calculating the output probabilities.

We performed all our experiments on the publicly available SQuAD and MS MARCO datasets. SQuAD contains 536 Wikipedia articles and more than 100K questions posed about the articles by crowd-workers. We split the SQuAD train set by the ratio 90%-10% into train and dev sets and take the SQuAD dev set as our test set for evaluation. We take an entire paragraph in each train/test instance as input in all our experiments. MS MARCO contains passages that are retrieved from web documents, and the questions are anonymized versions of Bing queries. We take a subset of the MS MARCO v1.1 dataset containing questions that are answerable from at least one paragraph. We split the train set 90%-10% into train (71k) and dev (8k) sets and take the dev set as the test set (9.5k).
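As a concrete illustration of the hierarchical attention module described in Section 3.3.3 above, the following PyTorch sketch computes the sentence-level attention a, the per-sentence word-level attention b_i, and the doubly weighted context vector. Shapes and names are ours, padding and multiple heads are omitted, and this is not the authors' implementation.

# Sketch: single-head hierarchical attention over sentences and words
import torch
import torch.nn.functional as F

def hierarchical_attention(q_s, K_s, q_w, K_w, V_w, d):
    # q_s: (d,)  sentence-level query      K_s: (num_sents, d)
    # q_w: (d,)  word-level query          K_w, V_w: (num_sents, num_words, d)
    a = F.softmax(K_s @ q_s / d, dim=0)                 # (num_sents,)
    b = F.softmax(K_w @ q_w / d, dim=1)                 # (num_sents, num_words)
    sent_ctx = (b.unsqueeze(-1) * V_w).sum(dim=1)       # (num_sents, d)
    return (a.unsqueeze(-1) * sent_ctx).sum(dim=0)      # (d,) context vector

# toy shapes: d=64, 4 sentences, 12 words each
d, S, W = 64, 4, 12
ctx = hierarchical_attention(torch.randn(d), torch.randn(S, d),
                             torch.randn(d), torch.randn(S, W, d),
                             torch.randn(S, W, d), d)
print(ctx.shape)  # torch.Size([64])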
Our split is the same, but our dataset also contains (paragraph, question) tuples whose answers are not a subspan of the paragraph, thus making our task more difficult. For evaluating our question generation models we report the standard metrics, viz., BLEU and ROUGE-L. We performed human evaluation to further analyze the quality of the questions generated by all the models, along a) syntactic correctness, b) semantic correctness, and c) relevance to the given paragraph.

We compare QG of our hierarchical LSTM and hierarchical Transformer with their flat counterparts. We describe our models below. Seq2Seq + att + AE is the attention-based sequence-to-sequence model with a BiLSTM encoder, answer encoding, and an LSTM decoder. HierSeq2Seq + AE is the hierarchical BiLSTM model with a BiLSTM sentence encoder, a BiLSTM paragraph encoder, and an LSTM decoder conditioned on the encoded answer. TransSeq2Seq + AE is a Transformer-based sequence-to-sequence model with a Transformer encoder followed by a Transformer decoder conditioned on the encoded answer.

Table 3: Human evaluation (column "Score") as well as inter-rater agreement (column "Kappa") for each model on the SQuAD test set. The scores are between 0-100, 0 being the worst and 100 being the best. Best results for each metric (column) are bolded. The three evaluation criteria are: syntactically correct (Syntax), semantically correct (Semantics), and relevant to the text (Relevance).

Table 4: Human evaluation (column "Score") as well as inter-rater agreement (column "Kappa") for each model on the MS MARCO test set. The scores are between 0-100, 0 being the worst and 100 being the best. Best results for each metric (column) are bolded. The three evaluation criteria are: syntactically correct (Syntax), semantically correct (Semantics), and relevant to the text (Relevance).

In Table 1 and Table 2 we present the automatic evaluation results of all models on the SQuAD and MS MARCO datasets respectively. We present the human evaluation results in Table 3 and Table 4 respectively. A number of interesting observations can be made from the automatic evaluation results in Table 1 and Table 2:

• Overall, the hierarchical BiLSTM model HierSeq2Seq + AE shows the best performance, achieving the best results on the BLEU2-BLEU4 metrics on both datasets, whereas the hierarchical Transformer model TransSeq2Seq + AE performs best on BLEU1 and ROUGE-L on the SQuAD dataset.

• Compared to the flat LSTM and Transformer models, their respective hierarchical counterparts always perform better on both the SQuAD and MS MARCO datasets.

• On the MS MARCO dataset, we observe the best consistent performance using the hierarchical BiLSTM models on all automatic evaluation metrics.

• On the MS MARCO dataset, the two LSTM-based models outperform the two Transformer-based models.

Interestingly, the human evaluation results, as tabulated in Table 3 and Table 4, demonstrate that the hierarchical Transformer model TransSeq2Seq + AE outperforms all the other models on both datasets in both syntactic and semantic correctness. However, the hierarchical BiLSTM model HierSeq2Seq + AE achieves the best, and significantly better, relevance scores on both datasets. From the evaluation results, we can see that our proposed hierarchical models demonstrate benefits over their respective flat counterparts in a significant way. Thus, for paragraph-level question generation, the hierarchical representation of paragraphs is a worthy pursuit.
Moreover, the Transformer architecture shows great potential over the more traditional RNN models such as the BiLSTM, as shown in the human evaluation. Thus the continued investigation of hierarchical Transformers is a promising research avenue. In the Appendix, in Section B, we present several examples that illustrate the effectiveness of our hierarchical models. In Section C of the appendix, we present some failure cases of our model, along with plausible explanations.

We proposed two hierarchical models for the challenging task of question generation from paragraphs, one of which is based on a hierarchical BiLSTM model and the other is a novel hierarchical Transformer architecture. We perform extensive experimental evaluation on the SQuAD and MS MARCO datasets using standard metrics. Results demonstrate the hierarchical representations to be overall much more effective than their flat counterparts. The hierarchical models for both the Transformer and the BiLSTM clearly outperform their flat counterparts on all metrics in almost all cases. Further, our experimental results validate that hierarchical selective attention benefits the hierarchical BiLSTM model. Qualitatively, our hierarchical models also exhibit better capability of generating fluent and relevant questions.

We implement all our models in PyTorch. In HierSeq2Seq we used 2-layer BiLSTM sentence and paragraph encoders and a single-layer LSTM decoder. We set the RNN hidden size to 600. For TransSeq2Seq and HierTransSeq2Seq we used 4-layer Transformer encoders and decoders with the model dimension set to 256 and the inner hidden dimension set to 2048. For both of the above settings we use a vocabulary of size 45k, shared across the source and target of the train and dev sets. We prune out all words with frequency lower than 5 in the entire dataset (train and dev). We use pretrained GloVe embeddings of 300 dimensions to represent the words in the vocabulary. We use a beam search decoder with beam size 5 to decode questions. To train the HierSeq2Seq model, we use SGD with momentum for optimization. We set the learning rate to 0.1 at the beginning, with the value being halved at every even epoch starting from epoch 8. We train our models for 20 epochs and select the model with the lowest perplexity as the final model. We train HierTransSeq2Seq using the Adam optimizer; we set the initial learning rate to 1e-7 and warmup steps to 2000, and use the standard inverse square root scheduler for scheduling the learning rate.

B EXAMPLE QUESTIONS GENERATED BY OUR BEST MODEL (HIERSEQ2SEQ + AE) ON MS MARCO TEST SET
Automatic question generation from paragraph using hierarchical models
342
scitldr
Many deployed learned models are black boxes: given an input, they return an output. Internal information about the model, such as the architecture, optimisation procedure, or training data, is not disclosed explicitly, as it might contain proprietary information or make the system more vulnerable. This work shows that such attributes of neural networks can be exposed from a sequence of queries. This has multiple implications. On the one hand, our work exposes the vulnerability of black-box neural networks to different types of attacks -- we show that the revealed internal information helps generate more effective adversarial examples against the black-box model. On the other hand, this technique can be used for better protection of private content from automatic recognition models using adversarial examples. Our paper suggests that it is actually hard to draw a line between white-box and black-box models.

Black-box models take a sequence of query inputs and return corresponding outputs, while keeping internal states such as the model architecture hidden. They are deployed as black boxes usually on purpose -- for protecting intellectual property or privacy-sensitive training data. Our work aims at inferring information about the internals of black-box models -- ultimately turning them into white-box models. Such reverse-engineering of a black-box model has many implications. On the one hand, it has legal implications for intellectual properties (IP) involving neural networks -- internal information about the model can be proprietary and a key IP, and the training data may be privacy-sensitive. Disclosing hidden details may also render the model more susceptible to attacks from adversaries. On the other hand, gaining information about a black-box model can be useful in other scenarios. E.g. there has been work on utilising adversarial examples for protecting private regions (e.g. faces) in photographs from automatic recognisers BID12. In such scenarios, gaining more knowledge about the recognisers will increase the chance of protecting one's privacy. Either way, it is a crucial research topic to investigate the type and amount of information that can be gained from black-box access to a model.

We make a first step towards understanding the connection between white-box and black-box approaches -- which were previously thought of as distinct classes. We introduce the term "model attributes" to refer to various types of information about a trained neural network model. We group them into three types: architecture (e.g. type of non-linear activation), optimisation process (e.g. SGD or ADAM?), and training data (e.g. which dataset?). We approach the problem as a standard supervised learning task applied over models. First, collect a diverse set of white-box models ("meta-training set") that are expected to be similar to the target black box at least to a certain extent. Then, over the collected meta-training set, train another model ("metamodel") that takes a model as input and returns the corresponding model attributes as output. Importantly, since we want to predict attributes at test time for black-box models, the only information available for attribute prediction is the query input-output pairs. As we will see in the experiments, such input-output pairs allow us to predict model attributes surprisingly well.
In summary, we contribute: (1) investigation of the type and amount of internal information about the black-box model that can be extracted from querying; (2) novel metamodel methods that not only reason over outputs from static query inputs, but also actively optimise query inputs that can extract more information; (3) a study of factors like the size of the meta-training set, the quantity and quality of queries, and the dissimilarity between the meta-training models and the test black box (generalisability); (4) empirical verification that the revealed information leads to greater susceptibility of a black-box model to an adversarial-example-based attack.

There has been a line of work on extracting and exploiting information from black-box learned models. We first describe papers on extracting information (model extraction and membership inference attacks), and then discuss ones on attacking the network using the extracted information (adversarial image perturbations, AIPs).

Model extraction attacks either reconstruct the exact model parameters or build an avatar model that maximises the likelihood of the query input-output pairs from the target model BID19 BID15. BID19 have shown the efficacy of equation-solving attacks and the avatar method in retrieving internal parameters of non-neural-network models. BID15 have also used the avatar approach with the end goal of generating adversarial examples. While the avatar approach first assumes model hyperparameters like model family (architecture) and training data, we discriminatively train a metamodel to predict those hyperparameters themselves. As such, our approach is complementary to the avatar approach. Membership inference attacks determine if a given data sample has been included in the training data BID0 BID16. In particular, BID0 also trains a decision tree metamodel over a set of classifiers trained on different datasets. This work goes far beyond only inferring the training data by showing that even the model architecture and optimisation process can be inferred.

Using the obtained cues, one can launch more effective, focused attacks on the black box. We use adversarial image perturbations (AIPs) as an example of such an attack. AIPs are small perturbations over the input such that the network is misled. Research on this topic has flourished recently after it was shown that the amount of perturbation needed to completely mislead an image classifier is nearly invisible BID18 BID2 BID10. Most effective AIPs require gradients of the target network. Some papers proposed different ways to attack black boxes; they can be grouped into three approaches. (1) Approximate gradients by numerical gradients BID11; the caveat is that thousands to millions of queries are needed to compute a single AIP, depending on the image size. (2) Use the avatar approach to train a white-box network that is supposedly similar to the target BID14a BID3; we note again that our metamodel is complementary to the avatar approach, as the avatar network hyperparameters can be determined by the metamodel. (3) Exploit the transferability of adversarial examples; it has been shown that AIPs generated against one network can also fool other networks BID10, and in particular that generating AIPs against an ensemble of networks makes them more transferable. We show in this work that AIPs transfer better within an architecture family (e.g. ResNet or DenseNet) than across families, and that such a property can be exploited by our metamodel for generating more targeted AIPs.

Figure 1: Overview of our approach.
We want to find out the type and amount of internal information about a black-box model that can be revealed from a sequence of queries. We approach this by first building metamodels for predicting model attributes, and then evaluating their performance on black-box models. Our main approach, the metamodel, is described in figure 1. In a nutshell, the metamodel is a classifier of classifiers. Specifically, the metamodel submits n query inputs [x_i]_{i=1···n} to a black-box model f; the metamodel takes the corresponding model outputs [f(x_i)]_{i=1···n} as an input, and returns predicted model attributes as output. As we will describe in detail, the metamodel not only learns to infer model attributes from query outputs of a static set of inputs, but also actively optimises query inputs that can extract a greater amount of information from the target models.

In this section, our main methods are introduced in the context of MNIST digit classifiers. While MNIST classifiers are not fully representative of generic learned models, they have a computational edge: it takes only five minutes to train each of them with reasonable performance. We could thus prepare a diverse set of 11k MNIST classifiers within 40 GPU days for the meta-training and evaluation of our metamodels. We stress, however, that the proposed approach is generic with respect to the task, data, and type of models. We also focus on 12 model attributes (table 1) that cover hyperparameters for common neural network MNIST classifiers, but again the range of predictable attributes is not confined to this list.

We need a dataset of classifiers to train and evaluate metamodels. We explain how MNIST-NETS, a dataset of 11k MNIST digit classifiers, has been constructed; the procedure is task and data generic. Every model in MNIST-NETS shares the same convnet skeleton architecture: "N conv blocks → M fc blocks → 1 linear classifier". Each conv block has the following structure: "ks × ks convolution → optional 2 × 2 max-pooling → non-linear activation", where ks (kernel size) and the activation type are to be chosen. Each fc block has the structure: "linear mapping → non-linear activation → optional dropout". This convnet structure already covers many LeNet-style BID8 variants.
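A minimal PyTorch sketch of this convnet skeleton is given below: N conv blocks, M fc blocks, and a linear classifier, with the kernel size, pooling, activation, and dropout attributes exposed as arguments. The helper, the particular activation set, and the channel/hidden sizes are our assumptions for illustration, not the exact generator behind MNIST-NETS.

# Sketch: MNIST-NETS-style skeleton "N conv blocks -> M fc blocks -> linear classifier"
import torch.nn as nn

def make_mnist_net(n_conv=2, n_fc=2, ks=3, act="relu", pool=True,
                   dropout=True, n_ch=16, n_hidden=128):
    acts = {"relu": nn.ReLU, "elu": nn.ELU, "prelu": nn.PReLU, "tanh": nn.Tanh}
    layers, in_ch, hw = [], 1, 28
    for _ in range(n_conv):                       # conv blocks
        layers += [nn.Conv2d(in_ch, n_ch, ks, padding=ks // 2)]
        if pool:
            layers += [nn.MaxPool2d(2)]
            hw //= 2
        layers += [acts[act]()]
        in_ch = n_ch
    layers += [nn.Flatten()]
    in_dim = in_ch * hw * hw
    for _ in range(n_fc):                         # fc blocks
        layers += [nn.Linear(in_dim, n_hidden), acts[act]()]
        if dropout:
            layers += [nn.Dropout(0.5)]
        in_dim = n_hidden
    layers += [nn.Linear(in_dim, 10)]             # linear classifier
    return nn.Sequential(*layers)

net = make_mnist_net(n_conv=3, ks=5, act="tanh", pool=False, dropout=False)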
We also select random seeds that control the initialisation and training data shuffling from {0, · · ·, 999}, resulting in 18,144,000 unique models. Training such a large number of models is intractable; we have sampled (without replacement) and trained 10,000 of them. All the models have been trained with learning rate 0.1 and momentum 0.5 for 100 epochs. It takes around 5 minutes to train each model on a GPU machine (GeForce GTX TITAN); training 10k classifiers has taken 40 GPU days. In order to make sure that MNIST-NETS realistically represents commonly used MNIST classifiers, we have pruned low-performance classifiers (validation accuracy < 98%), resulting in 8,582 classifiers. Ensembles of trained classifiers have been constructed by grouping identical classifiers (modulo random seed). Given t identical ones, we have augmented MNIST-NETS with ensembles of sizes 2, · · ·, t. The ensemble augmentation has resulted in 11,282 final models. See appendix table 6 for statistics of attributes -- due to the large sample size, all the attributes are evenly covered.

Attribute prediction can get arbitrarily easy by including the black-box model (or similar ones) in the meta-training set. We introduce multiple splits of MNIST-NETS with varying requirements on generalisation. Unless stated otherwise, every split has 5,000 training (meta-training), 1,000 testing (black box), and 5,282 leftover models. The Random (R) split randomly (with uniform weights) assigns models to the training and test splits. Under the R split, the training and test models come from the same distribution. We also introduce harder Extrapolation (E) splits, in which we separate a few attributes between the training and test splits. They are designed to simulate more difficult domain gaps, when the meta-training models are significantly different from the black box. Specific examples of E splits will be shown in §4.

The metamodel predicts the attributes of a black-box model g in the test split by submitting n query inputs and observing the outputs. It is trained over meta-training models f in the training split (f ∼ F). We propose three approaches for the metamodels -- we collectively name them kennen. See figure 2 for an overview.

kennen-o first selects a fixed set of queries [x_i]_{i=1···n} from a dataset. Both during training and testing, always these same queries are submitted. kennen-o learns a classifier m_θ to map the order-sensitively concatenated n query outputs, [f(x_i)]_{i=1···n} (n × 10 dim for MNIST), to the simultaneous prediction of the 12 attributes of f. The training objective is

min_θ E_{f∼F} [ Σ_{a=1}^{12} L( m_θ^a([f(x_i)]_{i=1···n}), y^a ) ],

where F is the distribution of meta-training models, y^a is the ground-truth label of attribute a, and L is the cross-entropy loss. With the learned parameters θ̂, m_θ̂^a([g(x_i)]_{i=1···n}) gives the prediction of attribute a for the black box g. In our experiments, we model the classifier m_θ via a multilayer perceptron (MLP) with two hidden layers of 1000 hidden units each. The last layer consists of 12 parallel linear layers for the simultaneous prediction of the attributes. In our preliminary experiments, the MLP has performed better than linear classifiers. The optimisation problem in equation 1 is solved via SGD, approximating the expectation over f ∼ F by an empirical sum over the training split classifiers, for 200 epochs. For query inputs, we have used a random subset of n images from the validation set (both for MNIST and ImageNet experiments). The performance is not sensitive to the choice of queries (see appendix §C).
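The kennen-o metamodel just described can be sketched as an MLP with two hidden layers of 1000 units and 12 parallel linear heads, trained with a sum of per-attribute cross-entropy losses. The class counts per attribute below are placeholders; table 1 defines the real ones.

# Sketch: kennen-o metamodel -- MLP over concatenated query outputs, 12 heads
import torch.nn as nn

class KennenO(nn.Module):
    def __init__(self, n_queries=100, n_out=10,
                 attr_classes=(4, 3, 2, 2, 3, 3, 2, 3, 3, 3, 7, 3)):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_queries * n_out, 1000), nn.ReLU(),
            nn.Linear(1000, 1000), nn.ReLU())
        # one linear head per attribute for simultaneous prediction
        self.heads = nn.ModuleList([nn.Linear(1000, c) for c in attr_classes])

    def forward(self, query_outputs):
        # query_outputs: (batch, n_queries, n_out) probability vectors
        h = self.trunk(query_outputs.flatten(1))
        return [head(h) for head in self.heads]

def kennen_o_loss(preds, labels):
    # sum of cross-entropies over the 12 attributes
    return sum(nn.functional.cross_entropy(p, y) for p, y in zip(preds, labels))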
Next methods (kennen-i/io) describe how to actively craft query inputs, potentially outside the natural image distribution. Note that kennen-o can be applied to any type of model (e.g. non-neural networks) with any output structure, as long as the output can be embedded in an Euclidean space. We will show that this method can effectively extract information from f even if the output is a top-k ranking. DISPLAYFORM2 kennen-i crafts a single query inputx over the meta-training models that is trained to repurpose a digit classifier f into a model attribute classifier for a single attribute a. The crafted input drives the classifier to leak internal information via digit prediction. The learned input is submitted to the test black-box model g, and the attribute is predicted by reading off its digit prediction g(x). For example, kennen-i for max-pooling layer prediction crafts an input x that is predicted as "1" for generic MNIST digit classifiers with max-pooling layers and "0" for ones without. See figure 3 for visual examples. drop pool ks 77.0%94.8% 88.5%Figure 3: Inputs designed to extract internal details from MNIST digit classifiers. E.g. feeding the middle image reveals the existence of a maxpooling layer with 94.8% chance. We describe in detail how kennen-i learns this input. The training objective is: DISPLAYFORM3 where f (x) is the 10-dimensional output of the digit classifier f. The condition x: image ensures the input stays a valid image DISPLAYFORM4 The loss L, together with the attribute label y a of f, guides the digit prediction f (x) to reveal the attribute a instead. Note that the optimisation problem is identical to the training of digit classifiers except that the ground truth is the attribute label rather than the digit label, that the loss is averaged over the models instead of the images, and that the input x instead of the model f is optimised. With the learned query inputx, the attribute for the black box g is predicted by g(x). In particular, we do not use gradient information from g. We initialise x with a random sample from the MNIST validation set (random noise or uniform gray initialisation gives similar performances), and run SGD for 200 epochs. For each iteration x is truncated back to D to enforce the constraint. While being simple and effective, kennen-i can only predict a single attribute at a time, and cannot predict attributes with more than 10 classes (for digit classifiers). kennen-io introduced below overcomes these limitations. kennen-i may also be unrealistic when the exploration needs to be stealthy: it submits unnatural images to the system. Also unlike kennen-o, kennen-i requires end-to-end differentiability of the training models f ∼ F, although it still requires only black-box access to test models g. DISPLAYFORM5 We overcome the drawbacks of kennen-i that it can only predict one attribute at a time and that the number of predictable classes by attaching an additional interpretation module on top of the output. Our final method kennen-io combines kennen-i and kennen-o approaches: both input generator and output interpreters are used. Being able to reason over multiple query outputs via MLP layers, kennen-io supports the optimisation of multiple query inputs as well. Specifically, the kennen-io training objective is given by: DISPLAYFORM6 Note that the formulation is identical to that for kennen-o (equation 1), except that the second minimisation problem regarding the query inputs is added. 
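Stepping back to the input-crafting step (used alone in kennen-i and jointly with the output MLP in kennen-io), a minimal sketch follows, assuming PyTorch; the [0, 1] image range, learning rate and initialisation are assumptions, and models/attr_labels stand for the meta-training classifiers and their attribute labels.

import torch
import torch.nn.functional as F

def craft_kennen_i_input(models, attr_labels, epochs=200, lr=0.1):
    """Optimise a single 28x28 query image so that each meta-training
    classifier's digit prediction equals its attribute label (kennen-i idea)."""
    x = torch.rand(1, 1, 28, 28, requires_grad=True)   # could also start from a validation digit
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(epochs):
        for f, y in zip(models, attr_labels):
            opt.zero_grad()
            loss = F.cross_entropy(f(x), torch.tensor([y]))
            loss.backward()                  # only x is in the optimiser; f stays fixed
            opt.step()
            with torch.no_grad():
                x.clamp_(0.0, 1.0)           # project back to the valid image set D
    return x.detach()

# attribute prediction for a black box g: read off its digit output on the crafted image
# predicted_attr = g(crafted_x).argmax(dim=1)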
With learned parametersθ and the attribute a for the black box g is predicted by m DISPLAYFORM7 DISPLAYFORM8. Again, we require end-to-end differentiability of meta-training models f, but only the black-box access for the test model g. To improve stability against covariate shift, we initialise m θ with kennen-o for 200 epochs. Afterwards, gradient updates of DISPLAYFORM9 and θ alternate every 50 epochs, for 200 additional epochs. We have introduced a procedure for constructing a dataset of classifiers (MNIST-NETS) as well as novel metamodels (kennen variants) that learn to extract information from black-box classifiers. In this section, we evaluate the ability of kennen to extract information from black-box MNIST digit classifiers. We measure the class-balanced attribute prediction accuracy for each attribute a in the list of 12 attributes in table 1. See table 2 for the main of our metamodels, kennen-o/i/io, on the Random split. Unless stated otherwise, metamodels are trained with 5, 000 training split classifiers. Given n = 100 queries with probability output, kennen-o already performs far above the random chance in predicting 12 diverse attributes (73.4% versus 34.9% on average); neural network output indeed contains rich information about the black box. In particular, the presence of dropout (94.6%) or max-pooling (94.9%) has been predicted with high precision. As we will see in §4.3, outputs of networks trained with dropout layers form clusters, explaining the good prediction performance. It is surprising that optimisation details like algorithm (71.8%) and batch size (50.4%) can also be predicted well above the random chance (33.3% for both). We observe that the training data attributes are also predicted with high accuracy (71.8% and 90.0% for size and split). TAB1 shows the comparison of kennen-o/i/io. kennen-i has a relatively low performance (average 52.7%), but kennen-i relies on a cheap resource: 1 query with single-label output. kennen-i is also performant at predicting the kernel size (88.5%) and pooling (94.8%), attributes that are closely linked to spatial structure of the input. We conjecture kennen-i is relatively effective for such attributes. kennen-io is superior to kennen-o/i for all the attributes with average accuracy 80.1%. DISPLAYFORM0 We examine potential factors that contribute to the successful prediction of black box internal attributes. We measure the prediction accuracy of our metamodels as we vary the number of meta-training models, the number of queries, and the quality of query output. Figure 4: kennen-o performance of against the size of meta-training set (left), number of queries (middle), and quality of queries (right). Unless stated otherwise, we use 100 probability outputs and 5k models to train kennen-o. Each curve is linearly scaled such that random chance (0 training data, 0 query, or top-0) performs 0%, and the perfect predictor performs 100%. We have trained kennen-o with different number of the meta-training classifiers, ranging from 100 to 5, 000. See figure 4 (left) for the trend. We observe a diminishing return, but also that the performance has not saturated -collecting larger meta-training set will improve the performance. See figure 4 (middle) for the kennen-o performance against the number of queries with probability output. The average performance saturates after ∼ 500 queries. On the other hand, with only ∼ 100 queries, we already retrieve ample information about the neural network. Many black-box models return top-k ranking (e.g. 
Facebook face recogniser), or single-label output. We represent top-k ranking outputs by assigning exponentially decaying probabilities up to k digits and a small probability to the remaining. See table 2 for the kennen-o performance comparison among 100 probability, top-10 ranking, bottom-1, and top-1 outputs, with average accuracies 73.4%, 69.7%, 54.4%, and 39.5%, respectively. While performance drops with coarser outputs, when compared to random chance (34.9%), 100 single-label bottom-1 outputs already leak a great amount of information about the black box (54.4%). It is also notable that bottom-1 outputs contain much more information than do the top-1 outputs; note that for high-performance classifiers top-1 predictions are rather uniform across models and thus have much less freedom to leak auxiliary information. Figure 4 (right) shows the interpolation from top-1 to top-10 (i.e. the full) ranking. We observe from the jump at k = 2 that the second most likely predictions (top-2) contain far more information than the most likely ones (top-1). For k ≥ 3, each additional output label exhibits a diminishing return. So far we have seen results on the Random (R) split. In realistic scenarios, the meta-training model distribution may not fully cover possible black-box models. We show how damaging such a scenario is through Extrapolation (E) split experiments. EVALUATION E-splits split the training and testing models based on one or more attributes (§3.1). For example, we may assign shallower models (#layers ≤ 10) to the training split and deeper ones (#layers > 10) to the testing split. In this example, we refer to #layers as the splitting attribute. Since, for an E-split, some classes of the splitting attributes have zero training examples, we only evaluate the prediction accuracies over the non-splitting attributes. When the set of splitting attributes is Ã, a subset of the entire attribute set A, we define the E-split accuracy E.Acc(Ã) to be the mean prediction accuracy over the non-splitting attributes A \ Ã. For easier comparison, we report the normalised accuracy (N.Acc) that shows what percentage of the R-split accuracy is achieved in the E-split setup on the non-splitting attributes A \ Ã. Specifically: N.Acc(Ã) = (E.Acc(Ã) − Chance(Ã)) / (R.Acc(Ã) − Chance(Ã)) × 100%, where R.Acc(Ã) and Chance(Ã) are the means of the R-split and chance-level accuracies over A \ Ã. Note that N.Acc is 100% if the E-split performance is at the level of the R-split and 0% if it is at chance level. Table 3: Normalised accuracies (see text) of kennen-o and kennen-io on R and E splits. We denote an E-split with splitting attributes attr1 and attr2 as "E-attr1-attr2". Splitting criteria are also shown. When there are two splitting attributes, the first attribute inherits the previous row's criteria. The normalised accuracies for the R-split and multiple E-splits are presented in table 3. We consider three axes of choices of splitting attributes for the E-split: architecture (#conv and #fc), optimisation (alg and bs), and data (size). For example, the "E-#conv-#fc" row presents results when the metamodel is trained on shallower nets (2 or 3 conv/fc layers each) compared to the test black-box model (4 conv and fc layers each). Not surprisingly, E-split performances are lower than R-split ones (N.Acc < 100%); it is advisable to cover all the expected black-box attributes during meta-training. Nonetheless, E-split performances of kennen-io are still far above the chance level (N.Acc ≥ 70% ≫ 0%); failing to cover a few attributes during meta-training is not too damaging.
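Two small helpers make the evaluation details of this section concrete: converting a top-k ranking output into the pseudo-probability vector fed to kennen-o, and the normalised accuracy just defined. Both assume Python/NumPy; the decay rate and the small remaining mass are assumptions, since the text only states the idea.

import numpy as np

def ranking_to_probs(topk_digits, n_classes=10, decay=0.5, eps=1e-3):
    """Exponentially decaying mass over the ranked digits, a small constant
    for the rest, renormalised to sum to one (decay and eps are assumptions)."""
    p = np.full(n_classes, eps)
    for rank, digit in enumerate(topk_digits):
        p[digit] = decay ** rank
    return p / p.sum()

def normalised_accuracy(e_acc, r_acc, chance):
    """N.Acc: 100% at the R-split level, 0% at chance level."""
    return 100.0 * (e_acc - chance) / (r_acc - chance)

print(ranking_to_probs([7, 1, 3]))                               # a top-3 ranking
print(normalised_accuracy(e_acc=0.62, r_acc=0.80, chance=0.35))  # 60.0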
Comparing kennen-o and kennen-io for their generalisability, we observe that kennen-io consistently outperforms kennen-o under severe extrapolation (around 5 pp better N.Acc). It is left as a future work to investigate the intriguing fact that utilising out-of-domain query inputs improves the generalisation of metamodel. It is surprising that metamodels can extract inner details with great precision and generalisability. This section provides a glimpse of why and how this is possible via metamodel input and output analyses. Full answers to those questions is beyond the scope of the paper. We analyse the inputs to our metamodels (i.e. query outputs from black-box models) to convince ourselves that the inputs do contain discriminative features for model attributes. As the input is high dimensional (1000 when the number of queries is n = 100), we use the t-SNE (van der Maaten & Hinton, Nov 2008) visualisation method. Roughly speaking, t-SNE embeds high dimensional data points onto the 2-dimensional plane such that the pairwise distances are best respected. We then colour-code the embedded data points according to the model attributes. Clusters of same-coloured points indicate highly discriminative features. The visualisation of input data points are shown in Appendix figures 9 and 10 for kennen-o and kennen-io, respectively. For experimental details, see Appendix §D. In the case of kennen-o, we observe that some attributes form clear clusters in the input space -e.g. Tanh in act, binary dropout attribute, and RMSprop in alg. For the other attributes, however, it seems that the clusters are too complicated to be represented in a 2-dimensional space. For kennen-io (figure 10), we observe improved clusters for pool and ks. By submitting crafted query inputs, kennen-io induces query outputs to be better clustered, increasing the chance of successful prediction. We show confusion matrices of kennen-o/io to analyse the failure modes. See Appendix figures 11 and 12. For kennen-o and kennen-io alike, we observe that the confusion occurs more frequently with similar classes. For attributes #conv and #fc, more confusion occurs between or than between. A similar trend is observed for #par and bs. This is a strong indication that there exists semantic attribute information in the neural network outputs (e.g. number of layers, parameters, or size of training batch) and the metamodels learn semantic information that can generalise, as opposed to merely relying on artifacts. This observation agrees with a of the extrapolation experiments in §4.2: the metamodels generalise. Compared to those of kennen-o, kennen-io confusion matrices exhibit greater concentration of masses both on the correct class (diagonals) and among similar attribute classes (1-off diagonals for #conv, #fc, #par, bs, and size). The former re-confirms the greater accuracy, while the latter indicates the improved ability to extract more semantic and generalisable features from the query outputs. This, again, agrees with §4.2: kennen-io generalises better than kennen-o. We have verified through our novel kennen metamodels that black-box access to a neural network exposes much internal information. We have shown that only 100 single-label outputs already reveals a great deal about a black box. When the black-box classifier is quite different from the metatraining classifiers, the performance of our best metamodel -kennen-io-decreases; however, the prediction accuracy for black box internal information is still surprisingly high. 
While MNIST experiments are computationally cheap and a massive number of controlled experiments is possible, we provide additional ImageNet experiments for practical implications on realistic image classifiers. In this section, we use kennen-o introduced in §3 to predict a single attribute of black-box ImageNet classifiers -the architecture family (e.g. ResNet or VGG?). In this section, we go a step further to use the extracted information to attack black boxes with adversarial examples. It is computationally prohibitive to train O(10k) ImageNet classifiers from scratch as in the previous section. We have resorted to 19 PyTorch 3 pretrained ImageNet classifiers. The 19 classifiers come from five families: Squeezenet, VGG, VGG-BatchNorm, ResNet, and DenseNet, each with 2, 4, 4, 5, and 4 variants, respectively BID6 BID17 BID7 BID4 BID5. See Appendix table 7 for the the summary of the 19 classifiers. We observe both large intra-family diversity and small inter-family separability in terms of #layers, #parameters, and performances. The family prediction task is not as trivial as e.g. simply inferring the performance. We predict the classifier family (S, V, B, R, D) from the black-box query output, using the method kennen-o, with the same MLP architecture (§3). kennen-i and kennen-io have not been used for computational reasons, but can also be used in principle. We conduct 10 cross validations (random sampling of single test network from each family) for evaluation. We also perform 10 random sampling of the queries from ImageNet validation set. In total 100 random tries are averaged. Results: compared to the random chance (20.0%), 100 queries in high kennen-o performance (90.4%). With 1, 000 queries, the prediction performance is even 94.8%. In this section we attack ImageNet classifiers with adversarial image perturbations (AIPs). We show that the knowledge about the black box architecture family makes the attack more effective. AIPs are carefully crafted additive perturbations on the input image for the purpose of misleading the target model to predict wrong labels BID2. Among variants of AIPs, we use efficient and robust GAMAN BID12. See appendix figure 7 for examples of AIPs; the perturbation is nearly invisible. Typical AIP algorithms require gradients from the target network, which is not available for a black box. Mainly three approaches for generating AIPs against black boxes have been proposed: numerical gradient, avatar network, or transferability. We show that our metamodel strengthens the transferability based attack. We hypothesize and empirically show that AIPs transfer better within the architecture family than across. Using this property, we first predict the family of the black box (e.g. ResNet), and then generate AIPs against a few instances in the family (e.g. ResNet101, ResNet152). The generation of AIPs against multiple targets has been proposed by, but we are the first to systemically show that AIPs generalise better within a family when they are generated against multiple instances from the same family. We first verify our hypothesis that AIPs transfer better within a family. Within-family: we do a leave-one-out cross validation -generate AIPs using all but one instances of the family and test on the holdout. Not using the exact test black box, this gives a lower bound on the within-family performance. Across-family: still leave out one random instance from the generating family to match the generating set size with the within-family cases. 
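Returning to how the family-prediction inputs are collected earlier in this section: the sketch below (assuming PyTorch/torchvision; the model subset and query batch are illustrative, not the full 19-network setup) turns each pretrained classifier's outputs on n query images into one kennen-o input vector paired with its family label.

import torch
import torchvision.models as tvm

# an illustrative subset of pretrained classifiers per family (S, V, B, R, D)
FAMILIES = {
    "S": [tvm.squeezenet1_0], "V": [tvm.vgg16], "B": [tvm.vgg16_bn],
    "R": [tvm.resnet18, tvm.resnet50], "D": [tvm.densenet121],
}

@torch.no_grad()
def query_outputs(model, queries):
    """Concatenate softmaxed outputs on the n query images (n x 1000 for ImageNet)."""
    model.eval()
    return torch.softmax(model(queries), dim=1).flatten()

queries = torch.rand(10, 3, 224, 224)    # stand-in for n ImageNet validation images
dataset = []                             # (kennen-o input, family label) pairs
for family, builders in FAMILIES.items():
    for build in builders:
        net = build(pretrained=True)     # weights are downloaded on first use
        dataset.append((query_outputs(net, queries), family))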
We also include the use-all case (Ens): generate AIPs with one network from each family. See table 4 for the . We report the misclassification rate, defined as 100−top-1 accuracy, on 100 random ImageNet validation images. We observe that the within-family performances dominate the across-family ones (diagonal entries versus the others in each row); if the target black box family is identified, one can generate more effective AIPs. Finally, trying to target all network ("Ens") is not as effective as focusing resources (diagonal entries). We empirically show that the reverse-engineering enables more effective attacks. We consider multiple scenarios. " White box" means the target model is fully known, and the AIP is generated specifically for this model. " Black box" means the exact target is unknown, but we make a distinction when the family is known ("Family black box"). See table 5 for the misclassification rates (MC) in different scenarios. When the target is fully specified (white box), MC is 100%. When neither the exact target nor the family is known, AIPs are generated against multiple families (82.2%). When the reverse-engineering takes place, and AIPs are generated over the predicted family, attacks become more effective (85.7%). We almost reach the family-oracle case (86.2%). Our metamodel can predict architecture families for ImageNet classifiers with high accuracy. We additionally show that this reverse-engineering enables more focused attack on black-boxes. We have presented first on the inference of diverse neural network attributes from a sequence of input-output queries. Our novel metamodel methods, kennen, can successfully predict attributes related not only to the architecture but also to training hyperparameters (optimisation algorithm and dataset) even in difficult scenarios (e.g. single-label output, or a distribution gap between the metatraining models and the target black box). We have additionally shown in ImageNet experiments that reverse-engineering a black box makes it more vulnerable to adversarial examples. We show the statistics of MNIST-NETS, our dataset of MNIST classifiers, in table 6. We complement the kennen-o in the main paper (figure 4) with kennen-io . See figure 5. Similarly for kennen-o, kennen-io shows a diminishing return as the number of training models and the number of queries increase. While the performance saturates with 1, 000 queries, it does not fully saturate with 5, 000 training samples. kennen-o selects a random set of queries from MNIST validation set (§3.2). We measure the sensitivity of kennen-o performance with respect to the choice of queries, and discuss the possibility to optimise the set of queries. With 1, 10, or 100 queries, we have trained kennen-o with 100 independent samples of query sets. The mean and standard deviations are shown in figure 6. The sensitivity is greater for smaller number of queries, but still minute (1.2 pp standard deviation).Instead of solving the combinatorial problem of finding the optimal set of query inputs from a dataset, we have proposed kennen-io that efficiently solves a continuous optimisation problem to find a set of query inputs from the entire input space. We have compared kennen-io against kennen-o with multiple query samples in figure 6. We observe that kennen-io is better than kennen-o with all 100 query set samples at each level. We remark that there exists a trade-off between detectability and effectiveness of exploration. 
While kennen-io extracts information from the target model more effectively, it increases the detectability of the attack by submitting out-of-domain inputs. If it is possible to optimise or sample the set of natural queries from a dataset or distribution of natural inputs, it will be a strong attack; developing such a method would be interesting future work. We describe the detailed procedure for the metamodel input visualisation experiment (discussed in §4.3). First, 1000 test-split (Random split) black-box models are collected. For each model, 100 query images are passed (sampled at random from the MNIST validation set), resulting in 100 × 10 dimensional input data points. We have used t-SNE (van der Maaten & Hinton, Nov 2008) to embed the data points onto the 2-dimensional plane. Each data point is coloured according to each attribute class. The results for kennen-o and kennen-io are shown in figures 9 and 10. Since t-SNE is sensitive to initialisation, we have run the embedding ten times with different random initialisations; the qualitative observations are largely identical. In this section, we show examples of AIPs. See figure 7 for examples of AIPs and the perturbed images. The perturbation is nearly invisible to human eyes. We have also generated AIPs with respect to a diverse set of architecture families (S, V, B, R, D, SVBRD) at multiple L2 norm levels. See figure 8; the same image results in a diverse set of patterns depending on the architecture family.
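As an illustration of the family-targeted attack discussed above, here is a minimal sketch of an iterative ensemble perturbation, assuming PyTorch; it is a generic gradient-sign stand-in, not the GAMAN method itself, and the budget eps, step size and iteration count are assumptions.

import torch
import torch.nn.functional as F

def ensemble_perturbation(models, image, label, eps=4 / 255, step=1 / 255, iters=10):
    """Craft one perturbation against several networks of the predicted family by
    ascending their average classification loss; image is a (1,3,H,W) tensor in
    [0,1] and label a (1,)-shaped class tensor."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        loss = sum(F.cross_entropy(m(image + delta), label) for m in models) / len(models)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()                  # ascend the ensemble loss
            delta.clamp_(-eps, eps)                            # keep the perturbation small
            delta.copy_((image + delta).clamp(0, 1) - image)   # keep image + delta a valid image
        delta.grad.zero_()
    return delta.detach()

# usage: if the metamodel predicts family "R", craft delta on a few ResNets
# and submit image + delta to the unknown black box.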
Querying a black-box neural network reveals a lot of information about it; we propose novel "metamodels" for effectively extracting information from a black box.
343
scitldr
Inverse reinforcement learning (IRL) is used to infer the reward function from the actions of an expert running a Markov Decision Process (MDP). A novel approach using variational inference for learning the reward function is proposed in this research. Using this technique, the intractable posterior distribution of the continuous latent variable (the reward function in this case) is analytically approximated to appear to be as close to the prior belief while trying to reconstruct the future state conditioned on the current state and action. The reward function is derived using a well-known deep generative model known as Conditional Variational Auto-encoder (CVAE) with Wasserstein loss function, thus referred to as Conditional Wasserstein Auto-encoder-IRL (CWAE-IRL), which can be analyzed as a combination of the backward and forward inference. This can then form an efficient alternative to the previous approaches to IRL while having no knowledge of the system dynamics of the agent. Experimental on standard benchmarks such as objectworld and pendulum show that the proposed algorithm can effectively learn the latent reward function in complex, high-dimensional environments. Reinforcement learning, formalized as Markov decision process (MDP), provides a general solution to sequential decision making, where given a state, the agent takes an optimal action by maximizing the long-term reward from the environment. However, in practice, defining a reward function that weighs the features of the state correctly can be challenging, and techniques like reward shaping are often used to solve complex real-world problems. The process of inferring the reward function given the demonstrations by an expert is defined as inverse reinforcement learning (IRL) or apprenticeship learning;. The fundamental problem with IRL lies in the fact that the algorithm is under defined and infinitely different reward functions can yield the same policy. Previous approaches have used preferences on the reward function to address the non-uniqueness. suggested reward function that maximizes the difference in the values of the expert's policy and the second best policy. adopted the principle of maximum entropy for learning the policy whose feature expectations are constrained to match those of the expert's. applied the structured max-margin optimization to IRL and proposed a method for finding the reward function that maximizes the margin between expert's policy and all other policies. Neu & Szepesvári unified a direct method that minimizes deviation from the expert's behavior and an indirect method that finds an optimal policy from the learned reward function using IRL. used a game-theoretic framework to find a policy that improves with respect to an expert's. Another challenge for IRL is that some variant of the forward reinforcement learning problem needs to be solved in a tightly coupled manner to obtain the corresponding policy, and then compare this policy to the demonstrated actions. Most early IRL algorithms proposed solving an MDP in the inner loop;;. This requires perfect knowledge of the expert's dynamics which are almost always impossible to have. Several works have proposed to relax this requirement, for example by learning a value function instead of a cost , solving an approximate local control problem or generating a discrete graph of states. However, all these methods still require some partial knowledge of the system dynamics. 
Most of the early research in this field has expressed the reward function as a weighted linear combination of hand selected features;;. Non-parametric methods such as Gaussian Processes (GPs) have also been used for potentially complex, nonlinear reward functions. While in principle this helps extend the IRL paradigm to flexibly account for non-linear reward approximation; the use of kernels simultaneously leads to higher sample size requirements. Universal function approximators such as non-linear deep neural network have been proposed recently;. This moves away from using hand-crafted features and helps in learning highly non-linear reward functions but they still need the agent in the loop to generate new samples to "guide" the cost to the optimal reward function. has recently proposed deriving an adversarial reward learning formulation which disentangles the reward learning process by a discriminator trained via binary regression data and uses policy gradient algorithms to learn the policy as well. The Bayesian IRL (BIRL) algorithm proposed by uses the expert's actions as evidence to update the prior on reward functions. The reward learning and apprenticeship learning steps are solved by performing the inference using a modified Markov Chain Monte Carlo (MCMC) algorithm. described an expectation-maximization (EM) approach for solving the BIRL problem, referring to it as the Robust BIRL (RBIRL). Variational Inference (VI) has been used as an efficient and alternative strategy to MCMC sampling for approximating posterior densities;. Variational Auto-encoder (VAE) was proposed by as a neural network version of the approximate inference model. The loss function of the VAE is given in such a way that it automatically tries to maximize the likelihood of the data given the current latent variables (reconstruction loss), while encouraging the latent variables to be close to our prior belief of how the variables should look like (KullbeckLiebler divergence loss). This can be seen as an generalization of EM from maximum a-posteriori (MAP) estimation of the single parameter to an approximation of complete posterior distribution. Conditional VAE (CVAE) has been proposed by to develop a deep conditional generative model for structured output prediction using Gaussian latent variables. Wasserstein AutoEncoder (WAE) has been proposed by to utilize Wasserstein loss function in place of KL divergence loss for robustly estimating the loss in case of small samples, where VAE fails. This research is motivated by the observation that IRL can be formulated as a supervised learning problem with latent variable modelling. This intuition is not unique. It has been proposed by using the Cascaded Supervised IRL (CSI) approach. However, CSI uses non-generalizable heuristics to classify the dataset and find the decision rule to estimate the reward function. Here, I propose to utilize the CVAE framework with Wasserstein loss function to determine the non-linear, continuous reward function utilizing the expert trajectories without the need for system dynamics. The encoder step of the CVAE is used to learn the original reward function from the next state conditioned on the current state and action. The decoder step is used to recover the next state given the current state, action and the latent reward function. The likelihood loss, composed of the reconstruction error and the Wasserstein loss, is then fed to optimize the CVAE network. 
The Gaussian distribution is used here as the prior distribution; however, has described various other prior distributions which can be used based on the class of problem being solved. Since, the states chosen are supplied by the expert's trajectories, the CWAE-IRL algorithm is run only on those states without the need to run an MDP or have the agent in the loop. Two novel contributions are made in this paper: • Proposing a generative model such as an auto-encoder for estimating the reward function leads to a more effective and efficient algorithm with locally optimal, analytically approximate solution. • Using only the expert's state-action trajectories provides a robust generative solution without any knowledge of system dynamics. Section 2 gives the on the concepts used to build our model; Section 3 describes the proposed methodology; Section 4 gives the and Section 5 provides the discussion and . In the reinforcement learning problem, at time t, the agent observes a state, s t ∈ S, and takes an action, a t ∈ A; thereby receiving an immediate scalar reward r t and moving to a new state s t+1. The model's dynamics are characterized by state transition probabilities p(s t+1 |s t, a t). This can be formally stated as a Markov Decision Process (MDP) where the next state can be completely defined by the previous state and action (Markov property) and the agent receives a scalar reward for executing the action. The goal of the agent is to maximize the cumulative reward (discounted sum of rewards) or value function: where 0 ≤ γ ≤ 1 is the discount factor and r t is the reward at time-step t. In terms of a policy π: S → A, the value function can be given by Bellman equation as: Using Bellman's optimality equation, we can define, for any MDP, a policy π is greater than or equal to any other policy π if value function v π (s t) ≤ v π (s t) for all s t ∈ S. This policy is known as an optimal policy (π *) and its value function is known as optimal value function (v *). The bayesian approach to IRL was proposed by by encoding the reward function preference as a prior and optimal confidence of the behavior data as the likelihood. Considering the expert χ is executing an MDP M = (S, A, p, γ), the reward for χ is assumed to be sampled from a prior (known) distribution P R defined as: The distribution to be used as a prior depends on the type of problem. The expert's goal of maximizing accumulated reward is equivalent to finding the optimal action of each state. The likelihood thus defines our confidence in χ's ability to select the optimal action. This is modeled as a exponential distribution for the likelihood of trajectory τ with Q * as: where α χ is a parameter representing the degree of confidence in χ's ability. The posterior probability of the reward function R is computed using Bayes theorem, BIRL uses MCMC sampling to compute the posterior mean of the reward function. For observations x = x 1:n and latent variables z = z 1:m, the joint density can be written as: The latent variables are drawn from a prior distribution p(z) and they are then related to the observations through the likelihood p(x|z). Inference in a bayesian framework amounts to conditioning on data and computing the posterior p(z|x). In lot of cases, this posterior is intractable and requires approximate inference. Variational inference has been proposed in the recent years as an alternative to MCMC sampling by using optimization instead of sampling. 
For a family of approximate densities ζ over the latent variables, we try to find a member of the family that minimizes the Kullback-Leibler (KL) divergence to the exact posterior The posterior is then approximated with the optimized member of the family q * (z). The KL divergence is then given by Since, the divergence cannot be computed, an alternate objective is optimized in VAE called evidence lower bound (ELBO) that is equivalent, This can be defined as a sum of two separate losses: where L lk is the loss related to the log-likelihood and L div is the loss related to the divergence. CVAE is used to perform probabilistic inference and predict diversely for structured outputs. The loss function is slightly altered with the introduction of class labels c: Wasserstein distance, also known as Kantorovich-Rubenstein distance or earth mover's distance (EMD) , provides a natural distance over probability distributions in the metric space. It is a formulation of optimal transport problem where the Wasserstein distance is the minimum cost required to move a pile of earth (an arbitrary distribution) to another. The mathematical formulation given by is: where c(X, Y) is the cost function, X and Y are random variables with marginal distributions P X and P Y respectively. EMD has been utilized in various practical applications in computer science such as pattern recognition in images. Wasserstein GAN (WGAN) has been proposed by to minimize the EMD between the generative distribution and the data distribution. proposed Wasserstein Auto-encoder (WAE) where the divergence loss has been calculated using the EMD instead of KL-divergence and has been shown to be robust in presence of noise and smaller samples. In this paper, my primary argument is that the inverse reinforcement learning problem can be devised as a supervised learning problem with learning of latent variable. The reward function, r(s t, a t, s t+1), can be formulated as a latent function which is dependent on the state at time t, s t, action at time t, a t, and state at time (t + 1), s t+1. In the CVAE framework, using the state and action pair as the class label c and rewriting the CVAE loss in Equation 17 with s t+1 as x and reward at time t, r t as z, we get: The first part of Equation 20 provides the log likelihood of transition probability of an MDP and the second part gives the KL-divergence of the encoded reward function to the prior gaussian belief. Thus, the proposed method tries to recover the next state from the current state and current action by encoding the reward function as the latent variable and constraining the reward function to lie as close to the gaussian function. The network structure of the method is given in Figure 1. The encoder is a neural network which inputs the current state, current action and next state and generates a probability distribution q(r t |s t+1, s t, a t), assumed to be isotropic gaussian. Since a nearoptimal policy is inputted into the IRL framework, minibatches of randomly sampled (s t+1, s t, a t) are introduced into the network. Two hidden layers with dropout are used to encode the input data into a latent space, giving two outputs corresponding to the mean and log-variance of the distribution. This step is similar to the backward inference problem in IRL methodology where the reward function is constructed from the sampled trajectories for a near-optimal agent. 
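A minimal sketch of the network just described (the encoder, together with the decoder detailed next) is given below, assuming PyTorch; hidden sizes, dropout rate and the scalar reward dimension are assumptions rather than the paper's exact settings.

import torch
import torch.nn as nn

class CWAEIRL(nn.Module):
    """Encoder maps (s_t, a_t, s_{t+1}) to a Gaussian over the latent reward r_t;
    decoder reconstructs s_{t+1} from (s_t, a_t, r_t)."""
    def __init__(self, state_dim, action_dim, hidden=64, reward_dim=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(2 * state_dim + action_dim, hidden), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.2))
        self.mu = nn.Linear(hidden, reward_dim)
        self.logvar = nn.Linear(hidden, reward_dim)
        self.decoder = nn.Sequential(
            nn.Linear(state_dim + action_dim + reward_dim, hidden), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(hidden, state_dim))

    def forward(self, s_t, a_t, s_next):
        h = self.encoder(torch.cat([s_t, a_t, s_next], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        r = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation trick
        s_next_hat = self.decoder(torch.cat([s_t, a_t, r], dim=1))
        return s_next_hat, r, mu, logvar

# toy usage on a batch of (s_t, a_t, s_{t+1}) transitions
model = CWAEIRL(state_dim=3, action_dim=1)
s_hat, r, mu, logvar = model(torch.rand(8, 3), torch.rand(8, 1), torch.rand(8, 3))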
Given the current state and action, the decoder maps the latent space into the state space and reconstructs the next stateŝ t+1 from a sampled r t (from the normal distribution). Similar to the VAE formulation, samples are generated from a standard normal distribution and reparameterized using the mean and log-variance computed in the encoder step. This step resembles the forward inference problem of an MDP where given a state, action and reward distribution, we estimate the next state that the agent gets to. Two hidden layers with dropout are used similar to the encoder. Even though the KL-divergence should be able to provide for the loss theoretically, it does not converge in practice and indeed gives really large values in case of small samples such as in our formulation. provides a Maximum Mean Discrepancy (MMD) measure based on Wasserstein metric for a positive-definite reproducing kernel k(·, ·) such as the Radial Basis Function (RBF) kernel: where H k is the Reproducing Kernel Hilbert Space (RKHS) mapping z: Z → R. The divergence loss can then be written as: where c is the cost between the input, x and the output,z, of the decoder, D, using the sampled latent variable, z (given as mean squared error), The ing CWAE-IRL loss function is given as: In this section, I present the of CWAE-IRL on two simulated tasks, objectworld and pendulum. Objectworld is a generalization of gridworld , described in. It contains NxN grid of states with five actions per state, corresponding to steps in each direction and staying in place. Each action has a 30% chance of moving in a different random direction. There are randomly assigned objects, having one of 2 inner and outer colors chosen, red and green. There are 4 continuous features for each of the grids, each giving the Euclidean distance to the nearest object with a specific inner or outer color. The true reward is +1 in states within 3 cells of outer red and 2 cells of outer green, -1 within 3 cells of outer red, and zero otherwise. Inner colors serve as distractors. The expert trajectories fed have a sample length of 16. The algorithms for objectworld, Maximum Entropy IRL and Deep Maximum Entropy IRL are used from the GitHub implementation of without any modifications. Only continuous features are used for all implementations. CWAE-IRL is compared with prior IRL methods, described in the previous sections. Among prior methods chosen, CWAE-IRL is compared to , and. Only the Maximum Entropy IRL uses the reward as a linear combination of features while the others describe it in a non-linear fashion. The learnt rewards for all the algorithms are shown in Figure 2 with an objectworld of grid size 10. CWAE-IRL can recover the original reward distribution while the Deep Maximum Entropy IRL overestimates the reward in various places. Maximum Entropy IRL and BIRL completely fail to learn the rewards. Deep Maximum Entropy tends to give negative rewards to state spaces which are not well traversed in the example trajectories. However, CWAE-IRL generalizes over the spaces even though they have not been visited frequently. Also, due to the constraint of being close to the prior gaussian belief, the scale of rewards are best captured by the proposed algorithm as compared to the other algorithms which tend to overscale the rewards. The pendulum environment is an well-known problem in the control literature in which a pendulum starts from a random position and the goal is to keep it upright while applying the minimum amount of force. 
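Before the pendulum details, here is a small sketch of the divergence term described earlier in this section: an RBF-kernel MMD between the encoded rewards and samples from the standard-normal prior, added to the next-state reconstruction error. It assumes PyTorch; the kernel bandwidth and the use of a simple biased MMD estimator are assumptions.

import torch
import torch.nn.functional as F

def rbf_mmd(z_q, z_p, sigma=1.0):
    """Squared MMD between encoded rewards z_q and prior samples z_p under an
    RBF kernel (simple biased estimator; the bandwidth sigma is an assumption)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    n = z_q.size(0)
    return (k(z_q, z_q).sum() + k(z_p, z_p).sum() - 2 * k(z_q, z_p).sum()) / (n * n)

def cwae_irl_loss(s_next_hat, s_next, z_q):
    """Next-state reconstruction error plus the MMD penalty towards N(0, I)."""
    z_p = torch.randn_like(z_q)                 # samples from the Gaussian prior
    return F.mse_loss(s_next_hat, s_next) + rbf_mmd(z_q, z_p)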
The state vector is composed of the cosine (and sine) of the angle of the pendulum, and the derivative of the angle. The action is the joint effort, discretised into 11 actions linearly spaced within the [−2, 2] range. The reward is r = −(w1·θ² + w2·θ̇² + w3·a²), where w1, w2 and w3 are the reward weights for the angle θ, the angular velocity θ̇ and the action a respectively. The optimal reward weights given by OpenAI are [1, 0.1, 0.001] respectively. An episode is limited to 1000 timesteps. A deep Q-network (DQN) combines deep neural networks with RL to solve continuous-state, discrete-action problems. DQN uses a neural network that gives the Q-values for every action and uses a replay buffer to store old states and actions to sample from, which helps stabilize training. Using a continuous state space makes it impossible to have all states visited during training. This also makes it very difficult to compare the recovered reward with the actual reward. The DQN is trained for 50,000 episodes. CWAE-IRL is trained using 25 trajectories and the reward is predicted for 5 trajectories. The error plot between the recovered reward and the actual reward is given in Figure 3. The mean error hovers around 0, showing that for the majority of states and actions, the proposed method is able to recover the correct reward.
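For reference, a small helper for the comparison just described, assuming NumPy; the ground-truth reward form mirrors the stated weights [1, 0.1, 0.001] and is written out here as an assumption.

import numpy as np

def pendulum_reward(theta, theta_dot, action, w=(1.0, 0.1, 0.001)):
    """Assumed ground-truth reward: r = -(w1*theta^2 + w2*theta_dot^2 + w3*a^2)."""
    return -(w[0] * theta ** 2 + w[1] * theta_dot ** 2 + w[2] * action ** 2)

def mean_reward_error(recovered, states, actions):
    """Mean error between the recovered reward and the true reward along a
    trajectory; states holds (theta, theta_dot) pairs."""
    true = np.array([pendulum_reward(th, thd, a)
                     for (th, thd), a in zip(states, actions)])
    return float(np.mean(np.asarray(recovered) - true))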
Using a supervised latent variable modeling framework to determine reward in inverse reinforcement learning task
344
scitldr
The travelling salesman problem (TSP) is a well-known combinatorial optimization problem with a variety of real-life applications. We tackle TSP by incorporating machine learning methodology and leveraging the variable neighborhood search strategy. More precisely, the search process is considered as a Markov decision process (MDP), where a 2-opt local search is used to search within a small neighborhood, while a Monte Carlo tree search (MCTS) method (which iterates through simulation, selection and back-propagation steps), is used to sample a number of targeted actions within an enlarged neighborhood. This new paradigm clearly distinguishes itself from the existing machine learning (ML) based paradigms for solving the TSP, which either uses an end-to-end ML model, or simply applies traditional techniques after ML for post optimization. Experiments based on two public data sets show that, our approach clearly dominates all the existing learning based TSP algorithms in terms of performance, demonstrating its high potential on the TSP. More importantly, as a general framework without complicated hand-crafted rules, it can be readily extended to many other combinatorial optimization problems. The travelling salesman problem (TSP) is a well-known combinatorial optimization problem with various real-life applications, such as transportation, logistics, biology, circuit design. Given n cities as well as the distance d ij between each pair of cities i and j, the TSP aims to find a cheapest tour which starts from a beginning city (arbitrarily chosen), visits each city exactly once, and finally returns to the beginning city. This problem is NP-hard, thus being extremely difficult from the viewpoint of theoretical computer science. Due to its importance in both theory and practice, many algorithms have been developed for the TSP, mostly based on traditional operations research (OR) methods. Among the existing TSP algorithms, the best exact solver Concorde succeeded in demonstrating optimality of an Euclidean TSP instance with 85,900 cities, while the leading heuristics and are capable of obtaining near-optimal solutions for instances with millions of cities. However, these algorithms are very complicated, which generally consist of many hand-crafted rules and heavily rely on expert knowledge, thus being difficult to generalize to other combinatorial optimization problems. To overcome those limitations, recent years have seen a number of ML based algorithms being proposed for the TSP (briefly reviewed in the next section), which attempt to automate the search process by learning mechanisms. This type of methods do not rely on expert knowledge, can be easily generalized to various combinatorial optimization problems, thus become promising research direction at the intersection of ML and OR. For the TSP, existing ML based algorithms can be roughly classified into two paradigms, i.e.: End-to-end ML paradigm which uses a ML model alone to directly convert the input instance to a solution. ML followed by OR paradigm which applies ML at first to provide some additional information, to guide the following OR procedure towards promising regions. Despite its high potential, for the TSP, existing ML based methods are still in its infancy, struggling to solve instances with more than 100 cities, leaving much room for further improvement compared with traditional methods. 
To this end, we propose a novel framework that combines Monte Carlo tree search (MCTS) with a basic OR method (2-opt based local search) using a variable neighborhood strategy to solve the TSP. The main contributions are summarized as follows. • Framework: We propose a new paradigm which combines OR and ML using a variable neighborhood strategy. Starting from an initial state, a basic 2-opt based local search is first used to search within a small neighborhood. When no improvement is possible within the small neighborhood, the search turns to an enlarged neighborhood, where a reinforcement learning (RL) based method is used to identify a sample of promising actions, and iteratively chooses one action to apply. Under this new paradigm, OR and ML respectively work within disjoint spaces, being flexible and targeted, and clearly different from the two paradigms mentioned above. More importantly, as a general framework without complicated hand-crafted rules, this framework could not only be applied to the TSP, but also be easily extended to many other combinatorial optimization problems. • Methodology: When we search within an enlarged neighborhood, it is infeasible to enumerate all the actions. We then seek to sample a number of promising actions. To do this automatically, we implement an MCTS method which iterates through simulation, selection and back-propagation steps, to collect useful information that guides the sampling process. To the best of our knowledge, there is only one existing paper which also uses MCTS to solve the TSP. However, their method is a constructive approach, where each state is a partial TSP tour, and each action adds a city to extend the partial tour, until forming a complete tour. By contrast, our MCTS method is a conversion based approach, where each state is a complete TSP tour, and each action converts the original state to a new state (also a complete TSP tour). The methodology and implementation details of our MCTS are very different from the MCTS method developed in . • Results: We carry out experiments on two sets of public TSP instances. Experimental results (detailed in Section 4) show that, on both data sets, our MCTS algorithm obtains (within reasonable time) statistically much better results than all the existing learning based algorithms. These results clearly indicate the potential of our new method for solving the TSP. The rest of this paper is organized as follows: Section 2 briefly reviews the existing learning based methods on the TSP. Section 3 describes in detail the new paradigm and the MCTS method. Section 4 provides and analyzes the experimental results. Finally, Section 5 concludes this paper.
In recent years, benefited from the rapidly improving hardware and exponentially increasing data, ML (especially deep learning) achieved great successes in the field of artificial intelligence. Motivated by these successes, ML becomes again a hot and promising topic for combinatorial optimization, especially for the TSP. A number of ML based algorithms have been developed for the TSP, which can be roughly classified into two paradigms (possibly with overlaps). End-to-end ML paradigm: introduced a pointer network which consists of an encoder and a decoder, both using recurrent neural network (RNN). The encoder parses each TSP city into an embedding, and then the decoder uses an attention model to predict the probability distribution over the candidate (unvisited) cities. This process is repeated to choose a city one by one, until forming a complete TSP tour. The biggest advantage of the pointer network is its ability of processing graphs with different sizes. However, as a supervised learning (SL) method, it requires a large number of pre-computed optimal (at least high-quality) TSP solutions, being unaffordable for large-scale instances. To overcome this drawback, several successors chose reinforcement learning (RL) instead of SL, thus avoiding the requirement of pre-computed solutions. For example, implemented an actor-critic RL architecture, which uses the solution quality (tour length) as a reward signal, to guide the search towards promising area. proposed a framework which maintains a partial tour and repeatedly calls a RL based model to select the most relevant city to add to the partial tour, until forming a complete tour. also implemented an actor-critic neural network, and chose Sinkhorn policy gradient to learn policies by approximating a double stochastic matrix. and both proposed a graph attention network (GAN), which incorporates attention mechanism with RL to auto-regressively improve the quality of the obtained solution. proposed a MCTS algorithm for the TSP, which also belongs to this paradigm. As explained in the introduction section, this existing method is clearly different from our MCTS algorithm (detailed in Section 3.5). ML followed by OR paradigm: Applying ML alone is difficult to achieve satisfactory performance, thus it is recommended to combine ML and OR to form hybrid algorithms . Following this idea, proposed a supervised approach, which trains a graph neural network (GNN) to predict an adjacency matrix (heat map) over the cities, and then attempts to convert the adjacency matrix to a feasible TSP tour by beam search (OR based method). followed this framework, but chose deep graph convolutional networks (GCN) instead of GNN to build heat map, and then tried to construct tours via highly parallelized beam search. Additionally, several algorithms belonging to above paradigm were further enhanced with OR based algorithms, thus also belonging to this paradigm. For example, the solution obtained by ML is post-optimized by sampling in and , or by 2-opt based local search in . Overall, this hybrid paradigm performs statistically much better than the end-to-end ML paradigm, showing the advantage of combining ML with OR. In addition to the works focused on the classic TSP, there are several ML based methods recently proposed for solving other related problems, such as the decision TSP , the multiple TSP , and the vehicle routing problem , etc. Finally, for an overall survey about machine learning for combinatorial optimization, please refer to and . 
3.1 FRAMEWORK The proposed new paradigm for combining OR and ML to solve the TSP is outlined in Fig. 1. Specifically, the search process is considered as a Markov Decision Process (MDP), which starts from an initial state π, and iteratively applies an action a to reach a new state π *. At first, the MDP explores within a small neighborhood, and tries to improve the current state by applying 2-opt based local search. When no further improvement is possible within the small neighborhood, the MDP turns into an enlarged neighborhood, which consists of a large number of possible actions, being infeasible to enumerate one by one. To improve search efficiency, MCTS is launched to iteratively sample a number of promising actions and choose an improving action to apply. When MCTS fails to find an improving action, the MDP jumps into an unexplored search space, and launches a new round of 2-opt local search and MCTS again. This process is repeated until the termination condition is met, then the best found state is returned as the final solution. In our implementation, each state corresponds to a complete TSP solution, i.e., a permutation π = (π 1, π 2, . . ., π n) of all the cities. Each action a is a transformation which converts a given state π to a new state π *. Since each TSP solution consists of a subset of n edges, each action could be viewed as a k-opt (2 ≤ k ≤ n) transformation, which deletes k edges at first, and then adds k different edges to form a new tour. Obviously, each action can be represented as a series of 2k sub-decisions (k edges to delete and k edges to add). This representation method is straightforward, but seems a bit redundant, since the deleted edges and added edges are highly relevant, while arbitrarily deleting k edges and adding k edges may in an unfeasible solution. To overcome this drawback, we develop a compact method to represent an action, which consists of only k sub-decisions, as exemplified in 1, 2, 3, 4, 5, 6, 7, 8). To determine an action, we at first decide a city a 1 and delete the edge between a 1 and its previous city b 1. Without loss of generality, suppose a 1 = 4, then b 1 = 3 and edge is deleted, ing in a temporary status shown in sub-figure (b) (for the sake of clarity, drawn as a line which starts from a 1 and ends at b 1). Furthermore, we decide a city a 2 to connect city b 1, generally ing in an unfeasible solution containing an inner cycle (unless a 2 = a 1). For example, suppose a 2 = 6 and connect it to city 3, the ing temporary status is shown in sub-figure (c), where an inner cycle occurs and the degree of city a 2 increases to 3. To break inner cycle and reduce the degree of a 2 to 2, the edge between city a 2 and city b 2 = 7 should be deleted, ing in a temporary status shown in sub-figure (d). This process is repeated, to get a series of cities a k and b k (k ≥ 2). In this example, a 3 = 3 and b 3 = 2, respectively corresponding to sub-figures (e) and (f). Once a k = a 1, the loop closes and reaches a new state (feasible TSP solution). For example, if let a 4 = a 1 = 4 and connect a 4 to b 3, the ing new state is shown in sub-figure (g), which is redrawn as a cycle in sub-figure (h). Formally, an action can be represented as a = (a 1, b 1, a 2, b 2, . . ., a k, b k, a k+1), where k is a variable and the begin city must coincide with the final city, i.e. a k+1 = a 1. Each action corresponds to a k-opt transformation, which deletes k edges, i.e., (a i, b i), 1 ≤ i ≤ k, and adds k edges, i.e., (b i, a i+1), 1 ≤ i ≤ k, to reach a new state. 
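A minimal sketch of applying such a compact action to a tour is given below (plain Python; dist is a distance matrix and the helper name is ours). It also returns the length difference, i.e. the total length of the added edges minus that of the deleted edges, which is the quantity ∆ used in the next paragraph.

def apply_action(tour, action, dist):
    """Apply a compact k-opt action (a_1, b_1, ..., a_k, b_k, a_{k+1} = a_1) to a
    tour (list of city indices) and return (new_tour, delta), where delta is the
    total length of the added edges minus that of the deleted edges."""
    n = len(tour)
    adj = {c: set() for c in tour}                    # adjacency of the current tour
    for i, c in enumerate(tour):
        nxt = tour[(i + 1) % n]
        adj[c].add(nxt)
        adj[nxt].add(c)
    a, b = action[0::2], action[1::2]                 # a_1..a_{k+1} and b_1..b_k
    delta = 0.0
    for i in range(len(b)):                           # delete edges (a_i, b_i)
        adj[a[i]].discard(b[i])
        adj[b[i]].discard(a[i])
        delta -= dist[a[i]][b[i]]
    for i in range(len(b)):                           # add edges (b_i, a_{i+1})
        adj[b[i]].add(a[i + 1])
        adj[a[i + 1]].add(b[i])
        delta += dist[b[i]][a[i + 1]]
    new_tour, prev, cur = [tour[0]], None, tour[0]    # walk the new edge set once around
    while len(new_tour) < n:
        cur, prev = next(c for c in adj[cur] if c != prev), cur
        new_tour.append(cur)
    return new_tour, delta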
Notice that not all these elements are optional, since once a i is known, b i can be uniquely determined without any optional choice. Therefore, to determine an action we only need to decide a series of k sub-decisions, i.e., the k cities a i, 1 ≤ i ≤ k. Intuitively, this compact representation method brings two advantages: fewer (only k, not 2k) sub-decisions need to be made; the resulting states are necessarily feasible solutions. Furthermore, if a i+1 does not belong to the top 10 nearest neighbors of b i, edge (b i, a i+1) is marked as an unpromising edge, and any action involving (b i, a i+1) is marked as an unpromising action. All the unpromising actions are eliminated directly, to reduce the scale of the search space. Let L(π) denote the tour length corresponding to state π; then, for each action a = (a 1, b 1, a 2, b 2, . . ., a k, b k, a k+1) which converts π to a new state π *, the difference can be calculated as ∆(π, π *) = L(π *) − L(π) = Σ_{i=1..k} d(b i, a i+1) − Σ_{i=1..k} d(a i, b i), i.e., the total length of the k added edges minus the total length of the k deleted edges. If ∆(π, π *) < 0, π * is better (with a shorter tour length) than π. For state initialization, we choose a simple constructive procedure which starts from an arbitrarily chosen begin city π 1, and iteratively selects a city π i, 2 ≤ i ≤ n, among the candidate (unvisited) cities (added to the end of the partial tour), until forming a complete tour π = (π 1, π 2, . . ., π n) which serves as the starting state. More precisely, if there are m > 1 candidate cities at the i-th iteration, each candidate city is chosen with probability 1/m. Using this method, each possible state is chosen as the starting state with equal probability. To maintain the generalization ability of our approach, we avoid using complex OR techniques, such as the α-nearness criterion for selecting candidate edges and the partition and merge method for tackling large TSP instances, which have proven to be highly effective on the TSP, but heavily depend on expert knowledge. Instead, we choose a straightforward method to search within a small neighborhood. More precisely, the method examines the promising actions with k = 2 one by one, and iteratively applies the first-met improving action which leads to a better state, until no improving action with k = 2 is found. This method is equivalent to the well-known 2-opt based local search procedure, which is able to efficiently and robustly converge to a locally optimal state. Once no improving action is found within the small neighborhood, we close the basic 2-opt local search, and switch to an enlarged neighborhood which consists of the actions with k > 2. This method is a generalization of the variable neighborhood search method (Mladenović & Hansen), which has been successfully applied to many combinatorial optimization problems. Unfortunately, there are generally a huge number of actions within the enlarged neighborhood (even after eliminating the unpromising ones), making it impossible to enumerate them one by one. Therefore, we choose to sample a subset of promising actions (guided by RL) and iteratively select an action to apply, to reach a new state. Following this idea, we choose MCTS as our learning framework. Inspired by the works in , , and , our MCTS procedure (outlined in Fig. 3) consists of four steps, i.e., Initialization, Simulation, Selection, and Back-propagation, which are respectively designed as follows.
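Before the four MCTS steps, here is a minimal sketch of the small-neighborhood (k = 2) search just described, written as the usual segment-reversal form of 2-opt with a first-met improving move (plain Python; the tolerance and helper name are ours).

def two_opt_local_search(tour, dist):
    """Repeatedly apply the first-met improving 2-opt move (reverse a tour
    segment) until no improvement exists; dist[i][j] is the distance matrix."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue                       # degenerate move, recreates the same tour
                a, b = tour[i], tour[i + 1]        # edge (a, b) to delete
                c, d = tour[j], tour[(j + 1) % n]  # edge (c, d) to delete
                delta = dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]
                if delta < -1e-9:                  # first-met improving action
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
                    break
            if improved:
                break
    return tour

# usage: start from a random permutation, then hand the local optimum to MCTS
# import random; tour = list(range(n)); random.shuffle(tour)
# tour = two_opt_local_search(tour, dist)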
Initialization: We define two n × n symmetric matrices, i.e., a weight matrix W whose element W_{ij} (all initialized to 1) controls the probability of choosing city j after city i, and an access matrix Q whose element Q_{ij} (all initialized to 0) records the number of times that edge (i, j) is chosen during simulations. Additionally, a variable M (initialized to 0) records the total number of actions already simulated. Notice that this initialization step is executed only once, at the beginning of the whole MDP process. Simulation: Given a state π, we use the simulation process to probabilistically generate a number of actions. As explained in Section 3.2, each action is represented as a = (a_1, b_1, a_2, b_2, ..., a_k, b_k, a_{k+1}), containing a series of sub-decisions a_i, 1 ≤ i ≤ k (k is also a variable, and a_{k+1} = a_1), while b_i can be determined uniquely once a_i is known. Once b_i is determined, for each edge (b_i, j), j ≠ b_i, we estimate its potential Z_{b_i j} (the higher the value of Z_{b_i j}, the larger the opportunity of edge (b_i, j) to be chosen) as Z_{b_i j} = W_{b_i j} / Ω_{b_i} + α · ln(M + 1) / (Q_{b_i j} + 1), where Ω_{b_i} denotes the averaged W_{b_i j} value of all the edges relative to city b_i. In this formula, the left part emphasizes the importance of the edges with high W_{b_i j} values (to enhance the intensification feature), while the right part ln(M + 1) / (Q_{b_i j} + 1) prefers the rarely examined edges (to enhance the diversification feature). α is a parameter used to achieve a balance between intensification and diversification, and the term "+1" is used to avoid a negative numerator or a zero denominator. To make the sub-decisions sequentially, we first choose a_1 randomly and determine b_1 subsequently. Recursively, once a_i and b_i are known, a_{i+1} is decided as follows: if closing the loop (connecting a_1 to b_i) would lead to an improving action, or i ≥ 10, we let a_{i+1} = a_1. Otherwise, we consider the top 10 nearest neighbors of b_i with Z_{b_i l} ≥ 1 as candidate cities, forming a set X (excluding a_1 and the city already connected to b_i). Then, among X, each city j is selected as a_{i+1} with a probability p_j proportional to its potential, i.e., p_j = Z_{b_i j} / Σ_{l∈X} Z_{b_i l}. Once a_{i+1} = a_1, we close the loop to obtain an action. Similarly, more actions are generated (forming a sampling pool), until an improving action which leads to a better state is met, or the number of actions reaches its upper bound (controlled by a parameter H). Selection: During the above simulation process, if an improving action is met, it is selected and applied to the current state π, to obtain a new state π_new. Otherwise, if no such action exists in the sampling pool, it seems difficult to gain further improvement within the current search area. At this time, the MDP jumps to a random state (using the method described in Section 3.3), which serves as a new starting state. Back-propagation: The value of M as well as the elements of matrices W and Q are updated by back-propagation as follows. First, whenever an action is examined, M is increased by 1. Then, for each edge (b_i, a_{i+1}) which appears in an examined action, Q_{b_i a_{i+1}} is increased by 1. Finally, whenever a state π is converted to a better state π_new by applying action a = (a_1, b_1, a_2, b_2, ..., a_k, b_k, a_{k+1}), the weight W_{b_i a_{i+1}} of each edge (b_i, a_{i+1}), 1 ≤ i ≤ k, is increased, where a parameter β controls the increasing rate of W_{b_i a_{i+1}}. Notice that we update W_{b_i a_{i+1}} only when meeting a better state, since we want to avoid wrong estimations (even in a bad action which leads to a worse state, there may exist some good edges (b_i, a_{i+1})).
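A small Python sketch of the simulation and back-propagation steps is given below. Because the exact formulas for Z_{b_i j}, p_j, and the W update are not fully reproduced in this copy, the concrete combination used here (normalized weight term plus α-weighted log term, sampling proportional to potential, and a fixed β increment) should be read as an illustrative assumption rather than the paper's exact implementation.

```python
import math
import random


def edge_potential(W, Q, M, b, j, alpha):
    """Potential Z_{b j} of edge (b, j): a W-based intensification term plus an
    alpha-weighted diversification term (combination reconstructed from the text)."""
    avg_w = (sum(W[b]) - W[b][b]) / (len(W[b]) - 1)  # averaged W value of edges at city b
    return W[b][j] / avg_w + alpha * math.log(M + 1) / (Q[b][j] + 1)


def sample_next_city(W, Q, M, b, candidates, alpha):
    """Pick a_{i+1} among the candidate set X with probability proportional to its
    potential (the exact p_j of the paper is assumed to be this normalization)."""
    z = [edge_potential(W, Q, M, b, j, alpha) for j in candidates]
    r, acc = random.random() * sum(z), 0.0
    for j, zj in zip(candidates, z):
        acc += zj
        if acc >= r:
            return j
    return candidates[-1]


def backpropagate(W, Q, M, a, b, improved, beta):
    """Update statistics after examining one action with cities a = [a_1..a_{k+1}]
    and b = [b_1..b_k]. Q counts examinations of each added edge; W is increased
    (here by a fixed beta, an assumption) only when the action improved the state."""
    M += 1
    for i in range(len(b)):
        bi, an = b[i], a[i + 1]
        Q[bi][an] += 1
        Q[an][bi] = Q[bi][an]      # keep Q symmetric
        if improved:
            W[bi][an] += beta      # assumed increment controlled by beta
            W[an][bi] = W[bi][an]  # keep W symmetric
    return M
```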
With this back-propagation process, the weights of the good edges are increased to enhance their chance of being selected, so that the sampling process becomes more and more targeted. Since W and Q are symmetric matrices, we always keep W_{a_{i+1} b_i} = W_{b_i a_{i+1}} and Q_{a_{i+1} b_i} = Q_{b_i a_{i+1}}. The MCTS iterates through the simulation, selection and back-propagation steps until no improving action exists in the sampling pool. Then, the MDP jumps to a new state and launches a new round of 2-opt local search and MCTS. This process is repeated until the allowed time (controlled by a parameter T) has elapsed; the best found state is then returned as the final solution. To evaluate the performance of our MCTS algorithm, we implement it in the C language and carry out experiments on a large number of public TSP instances. Notice that the reference algorithms are executed on different platforms (detailed below), which makes it extremely difficult to fairly compare run times. Therefore, we mainly make comparisons in terms of solution quality (achieved within reasonable runtime), and list the run times only to roughly evaluate the efficiency of each algorithm. Currently there are two data sets widely used by the existing learning based TSP algorithms: Set 1, which is divided into three subsets, each containing 10,000 automatically generated 2D-Euclidean TSP instances, with n = 20, 50, 100, respectively; and Set 2, which contains 38 instances (with 51 ≤ n ≤ 318) extracted from the famous TSPLIB library. We also use these two data sets as benchmarks to evaluate our MCTS algorithm. As described in Section 3, MCTS relies on four hyperparameters (α, β, H and T). We choose α = 1, β = 10, H = 10n (n is the number of cities) as the default settings of the first three parameters. For parameter T, which controls the termination condition, we set T = 75n milliseconds for each instance of Set 1 and T = n seconds for each instance of Set 2, to ensure that the total time elapsed by MCTS remains reasonable w.r.t. the existing learning based algorithms. 4.3 RESULTS ON DATA SET 1 Table 1 presents the results obtained by MCTS on data set 1, with respect to the existing non-learned and learning-based algorithms. In the table, each line corresponds to an algorithm. The first nine lines are non-learned algorithms, among which Concorde and Gurobi are two exact solvers, LKH3 is the currently best heuristic, and OR-Tools is an optimization toolkit released by Google. The following six lines are end-to-end ML models (one previously reported MCTS algorithm did not provide detailed results and is therefore omitted), and the final eight lines are hybrid algorithms which apply OR methods after ML for post-optimization. Regarding the columns, column 1 indicates the method, column 2 indicates the type of each algorithm (explained at the bottom of the table), and columns 3-5 respectively give the average tour length, the average optimality gap in percentage w.r.t. Concorde, and the total clock time used by each algorithm on all the 10,000 instances with n = 20. Similarly, columns 6-8 and 9-11 respectively give the same information on the 10,000 instances with n = 50 and n = 100. All results except ours are taken directly from the table reported in prior work (the order of the algorithms is slightly changed), while unavailable items are marked as "-".
Notice that in the latest learning-based algorithms, the experiments were carried out either on a single GPU (Nvidia 1080Ti) or with 32 instances in parallel on a 32 virtual CPU system (2 × Xeon E5-2630), and the run time was recorded as the wall clock time used to solve the 10,000 test instances. Similarly, we also run 32 instances in parallel on a 32 virtual CPU system (2 × Intel Xeon Silver 4110 2.1GHz processors, each with eight cores), and report the wall clock time used to solve the 10,000 test instances. As shown in Table 1, the exact solvers and LKH3 obtain good results on all the test instances, while the remaining six non-learned algorithms perform poorly overall. Among the learning-based algorithms, in terms of solution quality the hybrid algorithms that combine ML and OR clearly perform much better than the end-to-end ML models, although they require much more computation time. Finally, compared to these existing methods, our MCTS algorithm performs quite well: it succeeds in matching or improving the best known solutions (reported by Concorde) on most of these instances, corresponding to average gaps of -0.0075%, -0.0212% and -0.0178%, respectively, on the three subsets with n = 20, 50, 100. The computation time used by MCTS remains reasonable (40 minutes for the 10,000 instances with n = 100), close to the latest (currently best) learning-based algorithm. 4.4 RESULTS ON DATA SET 2 This set contains 38 instances (51 ≤ n ≤ 318) extracted from the TSPLIB. For each of these instances, we list in Table 2 the optimal result (reported by Concorde within 1 hour per instance) and the results reported by S2V-DQN and our MCTS algorithm (indicated in bold if reaching optimality). Notice that S2V-DQN did not clearly report the computational time, while our MCTS is allowed n seconds to solve each instance with n cities (the 38 instances are executed sequentially, occupying one core of an Intel Xeon Silver 4110 2.1GHz processor). As shown in the table, S2V-DQN reaches optimality on only one instance (berlin52), corresponding to a large average gap (w.r.t. the optimal solutions) of 4.75%. In comparison, our MCTS succeeds in matching the optimal solutions on 28 instances, corresponding to a much smaller average gap (0.21%). Furthermore, MCTS dominates S2V-DQN on all these instances except two: pr226 (where MCTS is worse than S2V-DQN) and berlin52 (where the results are equal), clearly demonstrating its superiority over S2V-DQN. Table 2: Performance of MCTS on data set 2 compared to S2V-DQN. The average gap of S2V-DQN vs. OPT is 4.75%, while the average gap of MCTS vs. OPT is 0.21%. Overall, we think the experimental results on the above two data sets clearly show the potential of our MCTS on the TSP. This paper develops a novel and flexible paradigm for solving the TSP, which combines OR and ML in a variable neighborhood search strategy and achieves highly competitive performance with respect to the existing learning-based TSP algorithms. However, how to combine ML and OR reasonably is still an open question which deserves continued investigation. In the future, we will try more new paradigms to better answer this question, and extend the work to other combinatorial optimization problems.
This paper combines Monte Carlo tree search with 2-opt local search in a variable neighborhood mode to solve the TSP effectively.
345
scitldr
Significant strides have been made toward designing better generative models in recent years. Despite this progress, however, state-of-the-art approaches are still largely unable to capture complex global structure in data. For example, images of buildings typically contain spatial patterns such as windows repeating at regular intervals; state-of-the-art generative methods can’t easily reproduce these structures. We propose to address this problem by incorporating programs representing global structure into the generative model—e.g., a 2D for-loop may represent a configuration of windows. Furthermore, we propose a framework for learning these models by leveraging program synthesis to generate training data. On both synthetic and real-world data, we demonstrate that our approach is substantially better than the state-of-the-art at both generating and completing images that contain global structure. There has been much interest recently in generative models, following the introduction of both variational autoencoders (VAEs) BID13 and generative adversarial networks (GANs) BID6. These models have successfully been applied to a range of tasks, including image generation BID16, image completion BID10, texture synthesis BID12; BID22, sketch generation BID7, and music generation BID3.Despite their successes, generative models still have difficulty capturing global structure. For example, consider the image completion task in Figure 1. The original image (left) is of a building, for which the global structure is a 2D repeating pattern of windows. Given a partial image (middle left), the goal is to predict the completion of the image. As can be seen, a state-of-the-art image completion algorithm has trouble reconstructing the original image (right) BID10.1 Real-world data often contains such global structure, including repetitions, reflectional or rotational symmetry, or even more complex patterns. In the past few years, program synthesis Solar- BID17 has emerged as a promising approach to capturing patterns in data BID4; BID19. The idea is that simple programs can capture global structure that evades state-of-the-art deep neural networks. A key benefit of using program synthesis is that we can design the space of programs to capture different kinds of structure-e.g., repeating patterns BID5, symmetries, or spatial structure BID2 -depending on the application domain. The challenge is that for the most part, existing approaches have synthesized programs that operate directly over raw data. Since programs have difficulty operating over perceptual data, existing approaches have largely been limited to very simple data-e.g., detecting 2D repeating patterns of simple shapes BID5.We propose to address these shortcomings by synthesizing programs that represent the underlying structure of high-dimensional data. In particular, we decompose programs into two parts: (i) a sketch s ∈ S that represents the skeletal structure of the program BID17, with holes that are left unimplemented, and (ii) components c ∈ C that can be used to fill these holes. We consider perceptual components-i.e., holes in the sketch are filled with raw perceptual data. For example, the original image x * partial image x part completionx (ours) completionx (baseline) Figure 1: The task is to complete the partial image x part (middle left) into an image that is close to the original image x * (left). 
By incorporating programmatic structure into generative models, the completion (middle right) is able to substantially outperform a state-of-the-art baseline BID10 (right). Note that not all non-zero pixels in the sketch rendering retain the same value in the completed picture due to the nature of the following completion process program represents the structure in the original image x * in Figure 1 (left). The black text is the sketch, and the component is a sub-image taken from the given partial image. Then, the draw function renders the given sub-image at the given position. We call a sketch whose holes are filled with perceptual components a neurosymbolic program. Building on these ideas, we propose an approach called program-synthesis (guided) generative models (PS-GM) that combines neurosymbolic programs representing global structure with state-of-the-art deep generative models. By incorporating programmatic structure, PS-GM substantially improves the quality of these state-of-the-art models. As can be seen, the completion produced using PS-GM (middle right of Figure 1) substantially outperforms the baseline. We show that PS-GM can be used for both generation from scratch and for image completion. The generation pipeline is shown in FIG0. At a high level, PS-GM for generation operates in two phases:• First, it generates a program that represents the global structure in the image to be generated. In particular, it generates a program P = (s, c) representing the latent global structure in the image (left in FIG0, where s is a sketch (in the domain considered here, a list of 12 for-loop structures) and c is a perceptual component (in the domain considered here, a list of 12 sub-images).• Second, our algorithm executes P to obtain a structure rendering x struct representing the program as an image (middle of FIG0). Then, our algorithm uses a deep generative model to complete x struct into a full image (right of FIG0). The structure in x struct helps guide the deep generative model towards images that preserve the global structure. The image-completion pipeline (see Figure 3) is similar. Training these models end-to-end is challenging, since a priori, ground truth global structure is unavailable. Furthermore, representative global structure is very sparse, so approaches such as reinforcement learning do not scale. Instead, we leverage domain-specific program synthesis algorithms to produce examples of programs that represent global structure of the training data. In particular, we propose a synthesis algorithm tailored to the image domain, which extracts programs with nested for-loops that can represent multiple 2D repeating patterns in images. Then, we use these example programs as supervised training data. Our programs can capture rich spatial structure in the training data. For example, in FIG0, the program structure encodes a repeating structure of 0's and 2's on the whole image, and a separate repeating structure of 3's on the right-hand side of the image. Furthermore, in Figure 1, the generated image captures the idea that the repeating pattern of windows does not extend to the bottom portion of the image.for loop from sampled program P structure rendering x struct completed image x (ii) Our model executes P to obtain a rendering of the program structure x struct (middle). (iii) Our model samples a completion x ∼ p θ (x | s, c) of x struct into a full image (right).Contributions. 
We propose an architecture of generative models that incorporates programmatic structure, as well as an algorithm for training these models (Section 2). Our learning algorithm depends on a domain-specific program synthesis algorithm for extracting global structure from the training data; we propose such an algorithm for the image domain (Section 3). Finally, we evaluate our approach on synthetic data and on a real-world dataset of building facades Tyleček &Šára, both on the task of generation from scratch and on generation from a partial image. We show that our approach substantially outperforms several state-of-the-art deep generative models (Section 4).Related work. There has been growing interest in applying program synthesis to machine learning, for purposes of interpretability BID21; BID20, safety BID1, and lifelong learning BID19. Most relevantly, there has been interest in using programs to capture structure that deep learning models have difficulty representing; BID4;. For instance, BID4 proposes an unsupervised learning algorithm for capturing repeating patterns in simple line drawings; however, not only are their domains simple, but they can only handle a very small amount of noise. Similarly, BID5 captures 2D repeating patterns of simple circles and polygons; however, rather than synthesizing programs with perceptual components, they learn a simple mapping from images to symbols as a preprocessing step. The closest work we are aware of is BID19, which synthesizes programs with neural components (i.e., components implemented as neural networks); however, their application is to lifelong learning, not generation, and to learning with supervision (labels) rather than to unsupervised learning of structure. Additionally, there has been work extending neural module networks BID0 to generative models BID2. These algorithms essentially learn a collection of neural components that can be composed together based on hierarchical structure. However, they require that the structure be available (albeit in natural language form) both for training the model and for generating new images. Finally, there has been work incorporating spatial structure into generative models for generating textures BID12; however, their work only handles a single infinite repeating 2D pattern. In contrast, we can capture a rich variety of spatial patterns parameterized by a space of programs. For example, the image in Figure 1 generated by our technique contains different repeating patterns in different parts of the image. We describe our proposed architecture for generative models that incorporate programmatic structure. For most of this section, we focus on generation; we discuss how we adapt these techniques to image completion at the end. We illustrate our generation pipeline in FIG0.Let p θ,φ (x) be a distribution over a space X with unknown parameters θ, φ that we want to estimate. 
We study the setting where x is generated based on some latent structure, which consists of a program sketch s ∈ S and a perceptual component c ∈ C, and where the structure is in turn generated partial image x part synthesized program P part extrapolated programP structure renderingx struct+part completionx (ours) original image x * Figure 3: Our image completion pipeline consists of the following steps: (i) Given a partial image x part (top left), our program synthesis algorithm (Section 3) synthesizes a program P part representing the structure in the partial image (top middle).(ii) Our model f extrapolates P part to a program P = f (P part) representing the structure of the whole image. (iii) Our model executesP to obtain a rendering of the program structurex struct+part (bottom left). (iv) Our model completesx struct+part into an imagex (bottom middle), which resembles the original image x * (bottom right).conditioned on a latent vector z ∈ Z-i.e., To sample a completion, our model executes P to obtain a structure rendering x struct = eval(P) (middle), and then samples a completion based on x struct -i.e., DISPLAYFORM0 DISPLAYFORM1 We now describe our algorithm for learning the parameters θ, φ of p θ,φ, followed by a description of our choices of architecture for p φ (s, c | z) and p θ (x | s, c). DISPLAYFORM2 Since log p θ,φ (x) is intractable to optimize, we use an approach based on the variational autoencoder (VAE). In particular, we use a variational distribution qφ(s, c, z DISPLAYFORM3 which has parametersφ. Then, we optimizeφ while simultaneously optimizing θ, φ. Using qφ(s, c, z | x), the evidence lower bound on the log-likelihood is DISPLAYFORM4 where D KL is the KL divergence and H is information entropy. Thus, we can approximate θ *, φ * by optimizing the lower bound instead of log p θ,φ (x). However, remains intractable since we are integrating over all program sketches s ∈ S and perceptual components c ∈ C. Using sampling to estimate these integrals would be very computationally expensive. Instead, we propose an approach that uses a single point estimate of s x ∈ S and c x ∈ C for each x ∈ X, which we describe below. Synthesizing structure. For a given x ∈ X, we use program synthesis to infer a single likely choice s x ∈ S and c x ∈ C of the latent structure. The program synthesis algorithm must be tailored to a specific domain; we propose an algorithm for inferring for-loop structure in images in Section 3. Then, we use these point estimates in place of the integrals over S and C-i.e., we assume that DISPLAYFORM5 where δ is the Dirac delta function. Plugging into gives DISPLAYFORM6 where we have dropped the degenerate terms log δ(s − s x) and log δ(c − c x) (which are constant with respect to the parameters θ, φ,φ). As a consequence, decomposes into two parts that can be straightforwardly optimized-i.e., DISPLAYFORM7 where we can optimize θ and φ,φ independently: DISPLAYFORM8 Latent structure VAE. Note that L(φ,φ; x) is exactly equal to the objective of a VAE, where qφ(z | s, c) is the encoder and p φ (s, c | z) is the decoder-i.e., learning the distribution over latent structure is equivalent to learning the parameters of a VAE. The architecture of this VAE depends on the representation of s and c. In the case of for-loop structure in images, we use a sequence-to-sequence VAE.Generating data with structure. 
The term L(θ; x) corresponds to learning a probability distribution (conditioned on the latent structure s and c)-e.g., we can estimate this distribution using another VAE. As before, the architecture of this VAE depends on the representation of s and c. Rather than directly predicting x based on s and c, we can leverage the program structure more directly by first executing the program P = (s, c) to obtain its output x struct = eval(P), which we call a structure rendering. In particular, x struct is a more direct representation of the global structure represented by P, so it is often more suitable to use as input to a neural network. The middle of FIG0 shows an example of a structure rendering for the program on the left. Then, we can train a model DISPLAYFORM9 In the case of images, we use a VAE with convolutional layers for the encoder q φ and transpose convolutional layers for the decoder p θ. Furthermore, instead of estimating the entire distribution p θ (x | s, c), we also consider two non-probabilistic approaches that directly predict x from x struct, which is an image completion problem. We can solve this problem using GLCIC, a state-of-the-art image completion model BID10. We can also use , which solves the more general problem of mapping a training set of structured renderings {x struct} to a training set of completed images {x}. Image completion. In image completion, we are given a set of training pairs (x part, x *), and the goal is to learn a model that predicts the complete image x * given a partial image x part. Compared to generation, our likelihood is now conditioned on x part -i.e., p θ,φ (x | x part). Now, we describe how we modify each of our two models p θ (x | s, c) and p φ (s, c | z) to incorporate this extra information. First, the programmatic structure is no longer fully latent, since we can observe partial programmatic structure in x part. In particular, we can leverage our program synthesis algorithm to help perform completion. We first synthesize programs P * and P part representing the global structure in x * and x part, respectively. Then, we can train a model f that predicts P * given P part -i.e., it extrapolates P part to a programP = f (P part) representing the structure of the whole image. Thus, unlike generation, where we sample a programP = (s, c) ∼ p φ (s, c | z), we use the extrapolated programP = f (P part).The second model p θ (x | s, c) for the most part remains the same, except when we executeP = (s, c) to obtain a structure rendering x struct, we render onto the partial image x part instead of onto a blank image to obtain the final rendering x struct+part. Then, we complete the structure rendering x struct+part into a prediction of the full imagex as before (i.e., using a VAE, GLCIC, or CycleGAN).Our image completion pipeline is shown in Figure 3, including the given partial image (top left), the program P part synthesized from the partial image (top middle), the extrapolated programP (top right), the structure rendering x struct+part (bottom left), and the predicted full imagex (bottom middle). Image representation. Since the images we work with are very high dimensional, for tractability, we assume that each image x ∈ R N M ×N M is divided into a grid containing N rows and N columns, where each grid cell has size M × M pixels (where M ∈ N is a hyperparameter). For example, this grid structure is apparent in Figure 3 Program grammar. Given this structure, we consider programs that draw 2D repeating patterns of M × M sub-images on the grid. 
More precisely, we consider programs P = ((s 1, c 1),..., (s k, c k)) ∈ (S × C) k that are length k lists of pairs consisting of a sketch s ∈ S and a perceptual component c ∈ C; here, k ∈ N is a hyperparameter. 3 A sketch s ∈ S has form s = for (i, j) ∈ {1, ..., n} × {1, ..., n} do DISPLAYFORM0 where n, a, b, n, a, b ∈ N are undetermined parameters that must satisfy a · n + b ≤ N and a · n + b ≤ N, and where?? is a hole to be filled by a perceptual component, which is an M × M sub-image c ∈ R M ×M. 4 Then, upon executing the (i, j) iteration of the for-loop, the program renders sub-image I at position (t, u) = (a · i + b, a · j + b) in the N × N grid. Figure 3 (top middle) shows an example of a sketch s where its hole is filled with a sub-image c, and Figure 3 (bottom left) shows the image rendered upon executing P = (s, c). FIG0 shows another such example. Program synthesis problem. Given a training image x ∈ R N M ×N M, our program synthesis algorithm outputs the parameters n h, a h, b h, n h, a h, b h of each sketch s h in the program (for h ∈ [k]), along with a perceptual component c h to fill the hole in sketch s h. Together, these parameters define a program P = ((s 1, c 1),..., (s k, c k)).The goal is to synthesize a program that faithfully represents the global structure in x. We capture this structure using a boolean tensor B (x) ∈ {0, 1} N ×N ×N ×N, where DISPLAYFORM1 where ∈ R + is a hyperparameter, and d(I, I) is a distance metric between on the space of subimages. In our implementation, we use a weighted sum of earthmover's distance between the color histograms of I and I, and the number of SIFT correspondences between I and I.Algorithm 1 Synthesizes a program P representing the global structure of a given image DISPLAYFORM2.., k} do s h, c h = arg max (s,c)∈S×Ĉ (P h−1 ∪ {(s, c)}; x) P ← P ∪ {(s h, c h)} end for Output: P Additionally, we associate a boolean tensor with a given program P = ((s 1, c 1),..., (s k, c k)). First, for a sketch s ∈ S with parameters a, b, n, a, b, n, we define DISPLAYFORM3 i.e., the set of grid cells where sketch renders a sub-image upon execution. Then, we have DISPLAYFORM4 t,u,t,u indicates whether the sketch s renders a sub-image at both of the grid cells (t, u) and (t, u). Then, DISPLAYFORM5 where the disjunction of boolean tensors is defined elementwise. Intuitively, B (P) identifies the set of pairs of grid cells (t, u) and (t, u) that are equal in the image rendered upon executing each pair (s, c) in P. Finally, our program synthesis algorithm aims to solve the following optimization problem: DISPLAYFORM0 1, where ∧ and ¬ are applied elementwise, and λ ∈ R + is a hyperparameter. In other words, the objective of is the number of true positives (i.e., entries where B (P) = B (x) = 1), and the number of false negatives (i.e., entries where B (P) = B (x) = 0), and computes their weighted sum. Thus, the objective of measures for how well P represents the global structure of x. For tractability, we restrict the search space in to programs of the form P = ((s 1, c 1 DISPLAYFORM1 In other words, rather than searching over all possible sub-images c ∈ R M ×M, we only search over the sub-images that actually occur in the training image x. This may lead to a slightly sub-optimal solution, for example, in cases where the optimal sub-image to be rendered is in fact an interpolation between two similar but distinct sub-images in the training image. However, we found that in practice this simplifying assumption still produced viable . 
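A small NumPy sketch of this program representation is given below: executing a program onto the N × N grid of M × M cells, building the induced boolean tensor, and scoring a candidate program against B(x). Because the numbered equations are not reproduced in this copy, the exact weighting in `score` (true positives minus λ-weighted spurious pairs) and all function names are illustrative assumptions.

```python
import numpy as np


def sketch_positions(n, a, b, n2, a2, b2):
    """Grid cells pos(s) covered by a sketch with parameters (n, a, b, n', a', b')."""
    return {(a * i + b, a2 * j + b2) for i in range(1, n + 1) for j in range(1, n2 + 1)}


def render_program(program, N, M):
    """Execute P = [((n, a, b, n2, a2, b2), sub_image), ...] onto an N x N grid of
    M x M cells, returning the structure rendering x_struct."""
    canvas = np.zeros((N * M, N * M, 3))
    for params, sub_image in program:
        for (t, u) in sketch_positions(*params):
            canvas[(t - 1) * M:t * M, (u - 1) * M:u * M] = sub_image
    return canvas


def program_tensor(program, N):
    """Boolean tensor B(P): pairs of grid cells rendered by the same (sketch, component)."""
    B = np.zeros((N, N, N, N), dtype=bool)
    for params, _sub in program:
        cells = list(sketch_positions(*params))
        for (t, u) in cells:
            for (t2, u2) in cells:
                B[t - 1, u - 1, t2 - 1, u2 - 1] = True
    return B


def score(program, B_x, N, lam):
    """Synthesis objective sketch: reward pairs that match B(x), penalize spurious
    ones; the precise weighting of the paper's objective is an assumption here."""
    B_p = program_tensor(program, N)
    true_pos = np.logical_and(B_p, B_x).sum()
    false_pos = np.logical_and(B_p, np.logical_not(B_x)).sum()
    return true_pos - lam * false_pos
```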
Program synthesis algorithm. Exactly optimizing is in general an NP-complete problem. Thus, our program synthesis algorithm uses a partially greedy heuristic. In particular, we initialize the program to P = ∅. Then, on each iteration, we enumerate all pairs (s, c) ∈ S ×Ĉ and determine the pair (s h, c h) that most increases the objective in, whereĈ is the set of all sub-images x tu for t, u ∈ [N]. Finally, we add (s h, c h) to P. We show the full algorithm in Algorithm 1. Table 2: Performance of our approach PS-GM versus the baseline (BL) for image completion. We report Fréchet distance for GANbased models, and negative log-likelihood (NLL) for the VED. We perform two experiments-one for generation from scratch and one for image completion. We find substantial improvement in both tasks. Details on neural network architectures are in Appendix A, and additional examples for image completion are in Appendix B. Synthetic dataset. We developed a synthetic dataset based on MNIST. Each image consists of a 9 × 9 grid, where each grid cell is 16 × 16 pixels. Each grid cell is either filled with a colored MNIST digit or a solid color . The program structure is a 2D repeating pattern of an MNIST digit; to add natural noise, we each iteration of the for-loop in a sketch s h renders different MNIST digits, but with the same MNIST label and color. Additionally, we chose the program structure to contain correlations characteristic of real-world images-e.g., correlations between different parts of the program, correlations between the program and the , and noise in renderings of the same component. Examples are shown in Figure 4. We give details of how we constructed this dataset in Appendix A. This dataset contains 10,000 training and 500 test images. Facades dataset. Our second dataset consists of 1855 images (1755 training, 100 testing) of building facades. 6 These images were all scaled to a size of 256 × 256 × 3 pixels, and were divided into a grid of 15 × 15 cells each of size 17 or 18 pixels. These images contain repeating patterns of objects such as windows and doors. Experimental setup. First, we evaluate our approach PS-GM applied to generation from scratch. We focus on the synthetic dataset-we found that our facades dataset was too small to produce meaningful . For the first stage of PS-GM (i.e., generating the program P = (s, c)), we use a LSTM architecture for the encoder p φ (s, c | z) and a feedforward architecture for the decoder qφ(z | s, c). As described in Section 2, we use Algorithm 1 to synthesize programs P x = (s x, c x) representing each training image x ∈ X train. Then, we train p φ and qφ on the training set of programs DISPLAYFORM0 For the second stage of PS-GM (i.e., completing the structure rendering x struct into an image x), we have applied a variational encoder-decoder (VED) architecture, in which the goal is to minimize the reconstruction error of the decoded image (trained on full images that had their representative program extracted). The encoder of the VAE architecture, q θ (w | x struct), maps x struct to a latent vector w; this model has a standard convolutional architecture with 4 layers. The decoder p θ (x | w) maps the latent vector to a whole image, and has a standard transpose convolutional architecture with 6 layers. We train p θ and q θ using the typical loss based on the VAE approach. We also trained a model based on Cycle-GAN which mapped renderings of programs to complete images. 
While it is difficult to compare objectively to a VAE architecture, it appeared to be at least as capable of Original Images DISPLAYFORM1 Figure 4: Examples of synthetic images generated using our approach, PS-GM (with VED and CycleGan), and the baseline (a VAE and a SpatialGAN). Images in different rows are unrelated since the task is generation from scratch.generating structured images, and the structure in the PS-CycleGAN images were more apparent than in the PS-VAE due to less blurriness. We compare our encoder-decoder approach to a baseline consisting of a vanilla VAE. The architecture of the vanilla VAE is the same as the architecture of the VED we used for the second stage of PS-GM, except the input to the encoder is the original training image x instead of the structure rendering x struct.Results. We measure performance for PS-GM with the VED and the baseline VAE using the variational lower bound on the negative log-likelihood (NLL) BID23 on a held-out test set. For our approach, we use the lower bound, 7 which is the sum of the NLLs of the first and second stages; we report these NLLs separately as well. Figure 4 in Appendix B shows examples of generated images. For PS-GM and SpatialGAN, we use Fréchet inception distance BID8. Table 1 shows these metrics of both our approach and the baseline. Discussion. The models based on our approach quantitatively improve over the respective baselines. The examples of images generated using our approach with VED completion appear to contain more structure than those generated using the baseline VAE. Similarly, the images generated using our approach with CycleGAN clearly capture more complex structure than the unbounded 2D repeating texture patterns captured by SpatialGAN. Experimental setup. Second, we evaluated our approach PS-GM for image completion, on both our synthetic and the facades dataset. For this task, we compare using three image completion models: , , and the VED architecture described in Section 4.2. GLCIC is a state-of-the-art image completion model. CycleGAN is a generic imageto-image transformer. It uses unpaired training data, but we found that for our task, it outperforms approaches such as Pix2Pix BID11 that take paired training data. For each model, we trained two versions:• Our approach (PS-GM): As described in Section 2 (for image completion), given a partial image x part, we use Algorithm 1 to synthesize a program P part. We extrapolate P part toOriginal Image (Synthetic) Original Image (Facades)PS-GM (GLCIC, Synthetic) PS-GM (GLCIC, Facades)Baseline (GLCIC, Synthetic) Baseline (GLCIC, Facades)Figure 5: Examples of images generated using our approach (PS-GM) and the baseline, using GLCIC for image completion. DISPLAYFORM0, and executeP to obtain a structure rendering x struct. Finally, we train the image completion model (GLCIC, CycleGAN, or VED) to complete x struct to the original image x *.• Baseline: Given a partial image x part, we train the image completion model (GLCIC, CycleGAN, or VED) to directly complete x part to the original image x *.Results. As in Section 4.2, we measure performance using Fréchet inception distance for GLCIC and CycleGAN, and negative log-likelihood (NLL) to evaluate the VED, reported on a held-out test set. We show these in Table 2. We show examples of completed image using GLCIC in Figure 5. We show additional examples of completed images including those completed using CycleGAN and VED in Appendix B.Discussion. 
Our approach PS-GM outperforms the baseline in every case except the VED on the facades dataset. We believe the last is since both VEDs failed to learn any meaningful structure (see Figure 7 in Appendix B).A key reason why the baselines perform so poorly on the facades dataset is that the dataset is very small. Nevertheless, even on the synthetic dataset (which is fairly large), PS-GM substantially outperforms the baselines. Finally, generative models such as GLCIC are known to perform poorly far away from the edges of the provided partial image BID10. A benefit of our approach is that it provides the global context for a deep-learning based image completion model such as GLCIC to perform local completion. We have proposed a new approach to generation that incorporates programmatic structure into state-ofthe-art deep learning models. In our experiments, we have demonstrated the promise of our approach to improve generation of high-dimensional data with global structure that current state-of-the-art deep generative models have difficulty capturing. We leave a number of directions for future work. Most importantly, we have relied on a custom synthesis algorithm to eliminate the need for learning latent program structure. Learning to synthesize latent structure during training is an important direction for future work. In addition, future work will explore more expressive programmatic structures, including if-then-else statements. A EXPERIMENTAL DETAILS To sample a random image, we started with a 9×9 grid, where each grid cell is 16×16 pixels. We randomly sample a program P = ((s 1, c 1),..., (s k, c k)) (for k = 12), where each perceptual component c is a randomly selected MNIST image (downscaled to our grid cell size and colorized). To create correlations between different parts of P, we sample (s h, c h) depending on (s 1, c 1),..., (s h−1, c h−1). First, to sample each component c h, we first sample latent properties of c h (i.e., its MNIST label {0, 1, ..., 4} and its color {red, blue, orange, green, yellow}). Second, we sample the parameters of s h conditional on these properties. To each of the 25 possible latent properties of c h, we associate a discrete distribution over latent properties for later elements in the sequence, as well as a mean and standard deviation for each of the parameters of the corresponding sketch s h.We then render P by executing each (s h, c h) in sequence. However, when executing (s h, c h), on each iteration (i, j) of the for-loop, instead of rendering the sub-image c h at each position in the grid, we randomly sample another MNIST image c DISPLAYFORM0 with the same label as c h, recolor c DISPLAYFORM1 to be the same color as c h, and render c DISPLAYFORM2. By doing so, we introduce noise into the programmatic structure. PS-GM architecture. For the first stage of PS-GM (i.e., generating the program P = (s, c)), we use a 3-layer LSTM encoder p φ (s, c | z) and a feedforward decoder qφ(z | s, c). The LSTM includes sequences of 13-dimensional vectors, of which 6 dimensions represent the structure of the for-loop being generated, and 7 dimensions are an encoding of the image to be rendered. The image compression was performed via a convolutional architecture with 2 convolutional layers for encoding and 3 deconvolutional layers for decoding. 
For the second stage of PS-GM (i.e., completing the structure rendering x struct into an image x), we use a VED; the encoder q θ (w | x struct) is a CNN with 4 layers, and the decoder p θ (x | w) is a transpose CNN with 6 layers. The CycleGAN model has a discriminator with 3 convolutional layers and a generator which uses transfer learning by employing the pre-trained ResNet architecture. Baseline architecture. The architecture of the baseline is a vanilla VAE with the same as the architecture as the VED we used for the second state of PS-GM, except the input to the encoder is the original training image x instead of the structure rendering x struct. The baselines with CycleGAN also use the same architecture as PS-GM with CycleGAN/GLCIC. The Spatial GAN was trained with 5 layers each in the generative/discriminative layer, and 60-dimensional global and 3-dimensional periodic latent vectors. PS-GM architecture. For the first stage of PS-GM for completion (extrapolation of the program from a partial image to a full image), we use a feedforward network with three layers. For the second stage of completion via VAE, we use a convolutional/deconvolutional architecture. The encoder is a CNN with 4 layers, and the decoder is a transpose CNN with 6 layers. As was the case in generation, the CycleGAN model has a discriminator with 3 convolutional layers and a generator which uses transfer learning by employing the pre-trained ResNet architecture. Baseline architecture. For the baseline VAE architecture, we used a similar architecture to the PS-GM completion step (4 convolutional and 6 deconvolutional layers). The only difference was the input, which was a partial image rather than an image rendered with structure. The CycleGAN architecture was similar to that used in PS-GM (although it mapped partial images to full images rather than partial images with structure to full images). In Figure 6, we show examples of how our image completion pipeline is applied to the facades dataset, and in Figure 7, we show examples of how our image completion pipeline is applied to our synthetic dataset.
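As a concrete reference for the Appendix A description of the second-stage model (a convolutional encoder with 4 layers and a transpose-convolutional decoder with 6 layers), here is a hedged PyTorch sketch. The channel widths, kernel sizes, latent dimension, and the assumption of 256 × 256 inputs are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn


class VED(nn.Module):
    """Variational encoder-decoder that completes a structure rendering x_struct into
    a full image. Layer counts follow Appendix A; all sizes below are assumptions."""

    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(  # 4 convolutional layers
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.fc_mu = nn.LazyLinear(latent_dim)
        self.fc_logvar = nn.LazyLinear(latent_dim)
        self.fc_up = nn.Linear(latent_dim, 256 * 4 * 4)
        self.decoder = nn.Sequential(  # 6 transpose-convolutional layers
            nn.ConvTranspose2d(256, 256, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x_struct):  # assumes 256 x 256 inputs (facades-sized images)
        h = self.encoder(x_struct).flatten(1)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        h = self.fc_up(z).view(-1, 256, 4, 4)
        return self.decoder(h), mu, logvar
```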
Applying program synthesis to the tasks of image completion and generation within a deep learning framework
346
scitldr
Learning control policies in robotic tasks requires a large number of interactions due to small learning rates, bounds on the updates or unknown constraints. In contrast, humans can infer protective and safe solutions after a single failure or unexpected observation. In order to reach similar performance, we developed a hierarchical Bayesian optimization algorithm that replicates the cognitive inference and memorization process for avoiding failures in motor control tasks. A Gaussian Process implements the modeling and the sampling of the acquisition function. This enables rapid learning with large learning rates, while a mental replay phase ensures that policy regions that led to failures are inhibited during the sampling process. The features of the hierarchical Bayesian optimization method are evaluated in a simulated and physiological humanoid postural balancing task. We quantitatively compare the human learning performance to our learning approach by evaluating the deviations of the center of mass during training. Our results show that we can reproduce the efficient learning of human subjects in postural control tasks, which provides a testable model for future physiological motor control tasks. In these postural control tasks, our method outperforms standard Bayesian Optimization in the number of interactions to solve the task, in the computational demands and in the frequency of observed failures. Autonomous systems such as anthropomorphic robots or self-driving cars must not harm cooperating humans in co-worker scenarios, pedestrians on the road or themselves. To ensure safe interactions with the environment, state-of-the-art robot learning approaches are first applied in simulation, and afterwards an expert selects final candidate policies to be run on the real system. However, for most autonomous systems a fine-tuning phase on the real system is unavoidable to compensate for unmodelled dynamics, motor noise or uncertainties in the hardware fabrication. Several strategies were proposed to ensure safe policy exploration. In special tasks such as robot arm manipulation, the operational space can be constrained, for example, in classical null-space control approaches or constrained black-box optimizers. While this null-space strategy works in controlled environments like research labs where the environmental conditions do not change, it fails in everyday-life tasks such as humanoid balancing, where the priorities or constraints that lead to hardware damage when falling are unknown. Alternatively, methods that limit the policy updates by applying probabilistic bounds in the robot configuration or motor command space were proposed. These techniques do not assume knowledge about constraints. Closely related are also Bayesian optimization techniques with modulated acquisition functions that avoid exploring policies which might lead to failures. However, all these approaches do not avoid failures themselves; rather, an expert interrupts the learning process when a potentially dangerous situation is anticipated. Figure 1: Illustration of the hierarchical BO algorithm. In standard BO (clock-wise arrow), a mapping from policy parameters to rewards is learned, i.e., φ → r ∈ R. We propose a hierarchical process, where first features κ are sampled and later used to predict the potential of policies conditioned on these features, φ|κ → r. The red dots show the first five successive roll-outs in the feature and the policy space of a humanoid postural control task.
All the aforementioned strategies cannot avoid harming the system itself or the environment without thorough expert knowledge, controlled environmental conditions or human interventions. As humans require just a few trials to perform reasonably well, it is desirable to enable robots to reach similar performance even for high-dimensional problems. Most related approaches are based on the assumption of a "low effective dimensionality", i.e., most dimensions of a high-dimensional problem do not change the objective function significantly. In prior work, a method for relevant variable selection based on Hierarchical Diagonal Sampling has been proposed for both variable selection and function optimization. Randomization combined with Bayesian Optimization has also been proposed to effectively exploit the aforementioned "low effective dimensionality". Furthermore, a dropout algorithm has been introduced to overcome the high-dimensionality problem by training on only a subset of variables in each iteration, evaluating a "regret gap" and providing strategies to reduce this gap efficiently. In addition, an algorithm has been proposed which optimizes an acquisition function by building new Gaussian Processes with sufficiently large kernel length-scales. This ensures significant gradient updates in the acquisition function and thereby enables gradient-dependent methods for optimization. The contribution of this paper is a computational model for psychological motor control experiments based on hierarchical acquisition functions in Bayesian Optimization (HiBO). Our motor skill learning method uses features for optimization to significantly reduce the number of required roll-outs. In the feature space, we search for the optimum of the acquisition function by sampling and later use the best feature configuration to optimize the policy parameters which are conditioned on the given features, see also Figure 1. In postural control experiments, we show that our approach reduces the number of required roll-outs significantly compared to standard Bayesian Optimization. The focus of this study is to develop a testable model for psychological motor control experiments where well-known postural control features can be used. These features are listed in Table 3. In future work we will extend our model to autonomous feature learning and will validate the approach in more challenging robotic tasks where 'good' features are hard to hand-craft. In this section we introduce the methodology of our hierarchical BO approach. We start with the general problem statement, then briefly summarize the concept of BO, which we use here as a baseline, go through the basic principles required for our algorithm, and finally explain mental replay. The goal in contextual policy search is to find the best policy π*(θ|c) that maximizes the return with reward r_t(x_t, u_t) at time step t for executing the motor commands u_t in state x_t. For learning the policy vector and the context, we collect samples of the return J(θ^[k]) ∈ R, the evaluated policy parameter vector θ^[k] ∈ R^m and the observed contextual features modeled by c^[k] ∈ R^n. All variables used are summarized in Table 1. The optimization problem is stated formally below; the optimal parameter vector and the corresponding context vector can be found in a hierarchical optimization process, which is discussed in Section 2.3.
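The equation defining the optimization problem is not reproduced in this copy. A plausible formulation, consistent with the definitions of the return J, the policy parameters θ, and the contextual features c given above, is the following (an assumption, not the paper's verbatim equation):

```latex
(\theta^{*}, c^{*}) \;=\; \operatorname*{arg\,max}_{\theta \in \mathbb{R}^{m},\, c \in \mathbb{R}^{n}} J(\theta, c),
\qquad
J(\theta, c) \;=\; \mathbb{E}\!\left[\, \textstyle\sum_{t} r_{t}(x_{t}, u_{t}) \;\middle|\; \pi(\theta \mid c) \,\right].
```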
Bayesian Optimization (BO) has emerged as a powerful tool to solve various global optimization problems where roll-outs are expensive and a sufficient accurate solution has to be found with only few evaluations, e.g.;;. In this paper we use the standard BO as a benchmark for our proposed hierarchical process. Therefore, we now briefly summarize the main concept. For further details refer to. The main concept of Bayesian Optimization is to build a model for a given system based on the so far observed data D = {X, y}. The model describes a transformation from a given data point x ∈ X to a scalar value y ∈ y, e.g. from the parameter vector θ to the return J(θ). Such model can either be parametrized or non-parametrized and is used for choosing the next query point by evaluating an acquisition function α(D). Here, we use the non-parametric Gaussian Processes (GPs) for modeling the unknown system which are state-of-the-art model learning or regression approaches Williams & Rasmussen (1996; 2006) that were successfully used for learning inverse dynamics models in robotic applications;. For comprehensive discussions we refer to;. GPs represent a distribution over a partial observed system in the form of where D = {X, y} are the so far observed data points and D * = {X *, y *} the query points. This representation is fully defined by the mean m and the covariance K. We chose m(x) = 0 as mean {0, 1} flag indicating the success of rollout k function and as covariance function a Matérn kernel Matérn. It is a generalization of the squared-exponential kernel that has an additional parameter ν which controls the smoothness of the ing function. The smoothing parameter can be beneficial for learning local models. We used Matérn kernels with ν = 5 / 2 and the gamma function Γ, A = (2 √ ν||x p − x q ||) /l and modified Bessel function H ν. The length-scale parameter of the kernel is denoted by σ, the variance of the latent function is denoted by σ y and δ pq is the Kronecker delta function (which is one if p = q and zero otherwise). Note that for ν = 1/2 the Matérn kernel implements the squared-exponential kernel. In our experiments, we optimized the hyperparameters θ = [σ, l, σ y] by maximizing the marginal likelihood. Predictions for a query points D * = {x *, y *} can then be determined as The predictions are then used for choosing the next model evaluation point x n based on the acquisition function α(x; D). We use expected improvement (EI) which considers the amount of improvement where τ is the so far best measured value max(y), Φ the standard normal cumulative distribution function, φ the standard normal probability density function and ξ ∼ σ ξ U (−0.5, 0.5) a random value to ensure a more robust exploration. Samples, distributed over the area of interest, are evaluated and the best point is chosen for evaluation based on the acquisition function values. We learn a joint distribution p(J(θ .., K roll-outs of observed triples. This distribution is used for as a prior of the acquisition function in Bayesian optimization. However, instead of directly conditioning on the most promising policy vectors using α BO = α(θ; D), we propose an iterative conditioning scheme. Therefore, the two acquisition functions are employed, where for Equation, the evaluated mean µ(θ; c) and variance σ(θ; c) for the parameter θ are conditioned on the features c. 
The hierarchical optimization process works then as follows: In the first step we estimate the best feature values based on a GP model using the acquisition function from Equation These feature values are then used to condition the search for the best new parameter θ [k+1] using Equation θ We subsequently continue evaluating the policy vector θ [k+1] using the reward function presented in Equation To ensure robustness for Bayesian Optimization, mental replays can be generated. Therefore, the new training data set J(θ, generated by the policy parameter θ [k+1], will be enlarged by augmenting perturbed copies of the policy parameter θ [k+1]. These l copies are then used for generating the augmented training data sets Here, the transcript · l denotes l perturbed copies of the given set. Hence, perturbed copies of the parameters θ [k+1] and features c [k+1] are generated keeping the objective J(θ [k+1] ) constant. In Algorithm the complete method is summarized. We evaluate different replay strategies in the Section in 3.3. In this section we first present observations on human learning during perturbed squat-to-stand movements. We compare the learning of a simulated humanoid to the learning rates achieved by the human participants. Second, we evaluate our hierarchical BO approach in comparison to our baseline, the standard BO. Third we evaluate the impact of mental replays on the performance of our algorithm. To observe human learning, we designed an experiment where 20 male participants were subjected to waist pull perturbation during squat-to-stand movements, see Figure 2 (a). Participants had to stand up from a squat position without making any compensatory steps (if they made a step, such trial was considered a fail). Backward perturbation to the centre of mass (CoM) was applied by a pulling mechanism and was dependent on participants' mass and vertical CoM velocity. On average, participants required 6 trials (σ human = 3.1) to successfully complete the motion. On the left panel of Figure 3, a histogram of the required trials before the first success is shown. On the right panel, the evaluation for the simulated humanoid are presented (details on the implementation are discussed in the subsequent Subsection 3.2). The human learning behavior is faster and more reliable than the learning behavior of the humanoid. However, humans can exploit fundamental knowledge about whole body balancing whereas our humanoid has to learn everything from scratch. Only the gravity constant was set to zero in our simulation, as we are only interested in the motor adaptation and not in gravity compensation strategies. Adaptation was evaluated using a measure based on the trajectory area (TA) at every episode as defined in?. The Trajectory area represents the total deviation of the CoM trajectory with respect to a straight line. The trajectory area of a given perturbed trajectory is defined as the time integral of the distance of the trajectory points to the straight line in the sagittal plane: A positive sign represents the anterior direction while a negative sign represents the posterior direction. The mean and standard deviation for the trajectory area over the number of training episodes for all participants are depicted in Figure 4. Comparing these with the simulation of our humanoid shows that the learning rate using our approach is similar to the learning rate of real humans. 
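Referring back to the two-step acquisition scheme and the mental-replay augmentation summarized in Algorithm 1, the following Python sketch illustrates a single HiBO iteration. The use of scikit-learn's GP regressor with a Matérn(ν = 5/2) kernel, the random candidate-sampling ranges, the perturbation scale of the replays, and all function names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expected_improvement(mu, sigma, tau, xi):
    """EI acquisition; tau is the best observed return, xi a small random offset."""
    z = (mu - tau - xi) / np.maximum(sigma, 1e-9)
    return (mu - tau - xi) * norm.cdf(z) + sigma * norm.pdf(z)


def hibo_step(thetas, contexts, returns, rollout_fn, n_samples=500, n_replays=3, sigma_xi=0.1):
    """One iteration of the hierarchical BO scheme (a sketch of Algorithm 1).
    rollout_fn(theta) is assumed to return the observed return and features."""
    tau = returns.max()
    xi = sigma_xi * (np.random.rand() - 0.5)

    # Step 1: model returns as a function of the features and pick promising features.
    gp_c = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(contexts, returns)
    c_cand = np.random.uniform(contexts.min(0), contexts.max(0), (n_samples, contexts.shape[1]))
    mu, sigma = gp_c.predict(c_cand, return_std=True)
    c_best = c_cand[np.argmax(expected_improvement(mu, sigma, tau, xi))]

    # Step 2: model returns as a function of (theta, c), pick theta conditioned on c_best.
    gp_tc = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(np.hstack([thetas, contexts]), returns)
    t_cand = np.random.uniform(thetas.min(0), thetas.max(0), (n_samples, thetas.shape[1]))
    X_cand = np.hstack([t_cand, np.tile(c_best, (n_samples, 1))])
    mu, sigma = gp_tc.predict(X_cand, return_std=True)
    theta_new = t_cand[np.argmax(expected_improvement(mu, sigma, tau, xi))]

    # Evaluate the new policy, then store it plus l perturbed mental-replay copies
    # (the return is kept constant for the copies).
    J_new, c_new = rollout_fn(theta_new)
    thetas = np.vstack([thetas, theta_new])
    contexts = np.vstack([contexts, c_new])
    returns = np.append(returns, J_new)
    for _ in range(n_replays):
        thetas = np.vstack([thetas, theta_new + 0.05 * np.random.randn(*theta_new.shape)])
        contexts = np.vstack([contexts, c_new + 0.05 * np.random.randn(*c_new.shape)])
        returns = np.append(returns, J_new)
    return thetas, contexts, returns
```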
Figure 4: Mean and standard deviation of the trajectory area (TA) with respect to the number of episodes for both the human experiments and the simulated humanoid. For the humanoid, the x-coordinates have been shifted by −0.5 to account for the stretched arms. In addition, the trajectory area of the humanoid has been scaled by a factor of 0.1 and shifted by −0.2 to allow easier comparison. To test the proposed algorithm we simulated a humanoid postural control task as shown in Figure 2(b). The simulated humanoid has to stand up and is thereby exposed to an external perturbation proportional to the velocity of the CoM in the superior direction in the sagittal plane. The perturbation is applied during the standing-up motion such that the robot has to learn to counterbalance. The simulated humanoid consists of four joints, connected by rigid links, where the position of the first joint is fixed to the ground. A PD-controller is used, with K_{P,i} and K_{D,i} for i = 1, 2, 3, 4 being the proportional and derivative gains. In our simulations the gains are set to K_{P,i} = 400 and K_{D,i} = 20, and an additive Gaussian control noise ε is inserted such that the control input for joint i becomes u_i = K_{P,i} e_{P,i} + K_{D,i} e_{D,i} + ε, where e_{P,i} and e_{D,i} are the joint errors with respect to the target position and velocity. The control gains can also be learned. The goal positions and velocities for the joints are given. As parametrized policy, we use a via point [φ_i, φ̇_i], where φ_i is the position of joint i at the via-point time t_via and φ̇_i the corresponding velocity. Hence, the policy is based on 9 parameters, or 17 if the gains are also learned, which are summarized in Table 2. For our simulations we handcrafted 7 features, namely the overall success, the maximum deviations of the CoM in the x and y directions, and the velocities of the CoM in the x and y directions at 200 ms and 400 ms, respectively. The features used in this paper are summarized in Table 3. Simultaneous learning of the features is out of the scope of this comparison to human motor performance but is part of future work. We simulated the humanoid in each run for a maximum of t_max = 2 s with a simulation time step of dt = 0.002 s, such that a maximum of N = 1000 simulation steps is used. The simulation is stopped at simulation step N_end if either the stand-up has failed or the maximum simulation time has been reached. The return of a roll-out is composed according to J(θ) = −(c_balance + c_time + c_control), with the balancing costs c_balance = 1/N_end, the time-dependent costs c_time = (N − N_end), and control costs c_control that penalize the applied motor commands, summed over all joints i and time steps j and scaled by a constant factor. We compared our approach with our baseline, standard Bayesian Optimization. For that, we used features 4 and 5 from Table 3 and set the number of mental replays to l = 3. We initialized both the BO and the HiBO approach with 3 seed points and generated average statistics over 20 runs. In Figure 5 the comparison between the rewards of the algorithms over 50 episodes is shown. In Figure 6(a) the number of successful episodes is illustrated. Our approach requires significantly fewer episodes to improve the reward than standard Bayesian Optimization (10 ± 3 vs 45 ± 5) and has a higher success rate (78% ± 24% vs 60% ± 7%). We further evaluated the impact of the different features on the learning behavior. In Figure 6(b) the average statistics over 20 runs for different selected features with 3 mental replays are shown.
All feature pairs perform better on average than standard BO, whereas for the evaluated task no significant difference between feature choices was observed. We evaluated our approach with additional experience replays. For that we included an additive noise ε_rep ∼ N(0, 0.05) to perturb the policy parameters and features. In Figure 6(c), average statistics over 20 runs of the success rates for different numbers of replay episodes are shown (rep3 = 3 replay episodes). Our proposed algorithm works best with 3 replay episodes. Five or more replays in every iteration step even reduce the success rate of the algorithm. We introduced HiBO, a hierarchical approach for Bayesian Optimization. We showed that HiBO outperforms standard BO in a complex humanoid postural control task. Moreover, we demonstrated the effects of the choice of features and of different numbers of mental replay episodes. We compared our results to the learning performance of real humans on the same task and found that the learning behavior is similar. We found that our proposed hierarchical BO algorithm can reproduce the rapid motor adaptation of human subjects. In contrast, standard BO, our comparison method, is about four times slower. In future work, we will examine the problem of simultaneously learning task-relevant features in neural nets.
This paper presents a computational model for efficient human postural control adaptation based on hierarchical acquisition functions with well-known features.
347
scitldr
Deep reinforcement learning has achieved great success in many previously difficult reinforcement learning tasks, yet recent studies show that deep RL agents are also unavoidably susceptible to adversarial perturbations, similar to deep neural networks in classification tasks. Prior works mostly focus on model-free adversarial attacks and agents with discrete actions. In this work, we study the problem of continuous control agents in deep RL with adversarial attacks and propose the first two-step algorithm based on learned model dynamics. Extensive experiments on various MuJoCo domains (Cartpole, Fish, Walker, Humanoid) demonstrate that our proposed framework is much more effective and efficient than model-free based attacks baselines in degrading agent performance as well as driving agents to unsafe states. Deep reinforcement learning (RL) has revolutionized the fields of AI and machine learning over the last decade. The introduction of deep learning has achieved unprecedented success in solving many problems that were intractable in the field of RL, such as playing Atari games from pixels and performing robotic control tasks (; ;). Unfortunately, similar to the case of deep neural network classifiers with adversarial examples, recent studies show that deep RL agents are also vulnerable to adversarial attacks. A commonly-used threat model allows the adversary to manipulate the agent's observations at every time step, where the goal of the adversary is to decrease the agent's total accumulated reward. As a pioneering work in this field, show that by leveraging the FGSM attack on each time frame, an agent's average reward can be significantly decreased with small input adversarial perturbations in five Atari games. further improve the efficiency of the attack in by leveraging heuristics of detecting a good time to attack and luring agents to bad states with sample-based Monte-Carlo planning on a trained generative video prediction model. Since the agents have discrete actions in Atari games , the problem of attacking Atari agents often reduces to the problem of finding adversarial examples on image classifiers, also pointed out in , where the adversaries intend to craft the input perturbations that would drive agent's new action to deviate from its nominal action. However, for agents with continuous actions, the above strategies can not be directly applied. Recently, studied the problem of adversarial testing for continuous control domains in a similar but slightly different setting. Their goal was to efficiently and effectively find catastrophic failure given a trained agent and to predict its failure probability. The key to success in is the availability of agent training history. However, such information may not always be accessible to the users, analysts, and adversaries. Therefore, in this paper we study the robustness of deep RL agents in a more challenging setting where the agent has continuous actions and its training history is not available. We consider the threat models where the adversary is allowed to manipulate an agent's observations or actions with small perturbations, and we propose a two-step algorithmic framework to find efficient adversarial attacks based on learned dynamics models. Experimental show that our proposed modelbased attack can successfully degrade agent performance and is also more effective and efficient than model-free attacks baselines. The contributions of this paper are the following: Figure 1: Two commonly-used threat models. 
• To the best of our knowledge, we propose the first model-based attack on deep RL agents with continuous actions. Our proposed attack algorithm is a general two-step algorithm and can be directly applied to the two commonly-used threat models (observation manipulation and action manipulation). • We study the efficiency and effectiveness of our proposed model-based attack with modelfree attack baselines based on random searches and heuristics (rand-U, rand-B, flip, see Section 4). We show that our model-based attack can degrade agent performance more significantly and efficiently than model-free attacks, which remain ineffective in numerous MuJoCo domains ranging from Cartpole, Fish, Walker, and Humanoid. Adversarial attacks in reinforcement learning. Compared to the rich literature of adversarial examples in image classifications and other applications (including natural language processing , speech , etc), there is relatively little prior work studying adversarial examples in deep RL. One of the first several works in this field are and , where both works focus on deep RL agent in Atari games with pixels-based inputs and discrete actions. In addition, both works assume the agent to be attacked has accurate policy and the problem of finding adversarial perturbation of visual input reduces to the same problem of finding adversarial examples on image classifiers. Hence, applied FGSM (to find adversarial perturbations and further improved the efficiency of the attack by heuristics of observing a good timing to attack -when there is a large gap in agents action preference between most-likely and leastlikely action. In a similar direction, study the problem of adversarial testing by leveraging rejection sampling and the agent training histories. With the availability of training histories, successfully uncover bad initial states with much fewer samples compared to conventional Monte-Carlo sampling techniques. Recent work by consider an alternative setting where the agent is attacked by another agent (known as adversarial policy), which is different from the two threat models considered in this paper. Finally, besides adversarial attacks in deep RL, a recent work study verification of deep RL agent under attacks, which is beyond the scope of this paper. Learning dynamics models. Model-based RL methods first acquire a predictive model of the environment dynamics, and then use that model to make decisions . These model-based methods tend to be more sample efficient than their model-free counterparts, and the learned dynamics models can be useful across different tasks. Various works have focused on the most effective ways to learn and utilize dynamics models for planning in RL (; ; ;). In this section, we first describe the problem setup and the two threat models considered in this paper. Next, we present an algorithmic framework to rigorously design adversarial attacks on deep RL agents with continuous actions. Let s i ∈ R N and a i ∈ R M be the observation vector and action vector at time step i, and let π: R N → R M be the deterministic policy (agent). Let f: R N × R M → R N be the dynamics model of the system (environment) which takes current state-action pair (s i, a i) as inputs and outputs the next state s i+1. We are now in the role of an adversary, and as an adversary, our goal is to drive the agent to the (un-safe) target states s target within the budget constraints. We can formulate this goal into two optimization problems, as we will illustrate shortly below. 
Within this formalism, we can consider two threat models: Threat model (i): Observation manipulation. For the threat model of observation manipulation, an adversary is allowed to manipulate the observation s i that the agent perceived within an budget: where ∆s i ∈ R N is the crafted perturbation and U s ∈ R N, L s ∈ R N are the observation limits. Threat model (ii): Action manipulation. For the threat model of action manipulation, an adversary can craft ∆a i ∈ R M such that M are the limits of agent's actions. Our formulations. Given an initial state s 0 and a pre-trained policy π, our (adversary) objective is to minimize the total distance of each state s i to the pre-defined target state s target up to the unrolled (planning) steps T. This can be written as the following optimization problems in Equations 3 and 4 for the Threat model (i) and (ii) respectively: A common choice of d(x, y) is the squared 2 distance x − y 2 2 and f is the learned dynamics model of the system, and T is the unrolled (planning) length using the dynamics models. In this section, we propose a two-step algorithm to solve Equations 3 and 4. The core of our proposal consists of two important steps: learn a dynamics model f of the environment and deploy optimization technique to solve Equations 3 and 4. We first discuss the details of each factor, and then present the full algorithm by the end of this section. Step 1: learn a good dynamics model f. Ideally, if f is the exact (perfect) dynamics model of the environment and assuming we have an optimization oracle to solve Equations 3 and 4, then the solutions are indeed the optimal adversarial perturbations that give the minimal total loss with -budget constraints. Thus, learning a good dynamics model can conceptually help on developing a strong attack. Depending on the environment, different forms of f can be applied. For example, if the environment of concerned is close to a linear system, then we could let f (s, a) = As + Bu, where A and B are unknown matrices to be learned from the sample trajectories (s i, a i, s i+1) pairs. For a more complex environment, we could decide if we still want to use a simple linear model (the next state prediction may be far deviate from the true next state and thus the learned dynamical model is less useful) or instead switch to a non-linear model, e.g. neural networks, which usually has better prediction power but may require more training samples. For either case, the model parameters A, B or neural network parameters can be learned via standard supervised learning with the sample trajectories pairs (s i, a i, s i+1). Step 2: solve Equations 3 and 4. Once we learned a dynamical model f, the next immediate task is to solve Equation 3 and 4 to compute the adversarial perturbations of observations/actions. When the planning (unrolled) length T > 1, Equation 3 usually can not be directly solved by off-theshelf convex optimization toolbox since the deel RL policy π is usually a non-linear and non-convex neural network. Fortunately, we can incorporate the two equality constraints of Equation 3 into the objective and with the remaining -budget constraint (Equation 1), Equation 3 can be solved via projected gradient descent (PGD) 1. Similarly, Equation 4 can be solved via PGD to get ∆a i. 
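A minimal sketch of Step 2 for the observation-manipulation threat model could look as follows. It assumes differentiable policy and dynamics modules (stand-ins for π and the learned f), optimizes the perturbation sequence with Adam followed by a projection onto the ℓ∞ ε-ball, and for brevity omits the clipping to the observation limits L_s, U_s; the function name and hyperparameter defaults are our own choices, not the paper's implementation.

```python
import torch

def pgd_observation_attack(policy, dynamics, s0, s_target, eps,
                           T=10, steps=30, lr=0.05):
    """Approximately solve the observation-manipulation objective with PGD.

    policy, dynamics : differentiable torch modules pi(s) and f(s, a)
    s0, s_target     : (state_dim,) tensors
    eps              : L-inf budget on each per-step perturbation Delta s_i
    Returns the perturbation sequence of shape (T, state_dim).
    """
    delta = torch.zeros(T, s0.numel(), requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        s, loss = s0, 0.0
        for i in range(T):
            a = policy(s + delta[i])      # agent acts on the perturbed observation
            s = dynamics(s, a)            # unroll the learned dynamics model
            loss = loss + ((s - s_target) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():             # projection onto the eps-ball
            delta.clamp_(-eps, eps)
    return delta.detach()
```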
We note that, similar to the n-step model predictive control, our algorithm could use a much larger planning (unrolled) length T when solving Equations 3 and 4 and then only apply the first n (≤ T) adversarial perturbations on the agent over n time steps. Besides, with the PGD framework, f is not limited to feed-forward neural networks. Our proposed attack is summarized in Algorithm 2 for Step 1, and Algorithm 3 for Step 2. Algorithm 1 Collect trajectories 1: Input: pre-trained policy π, MaxSampleSize n s, environment env 2: Output: a set of trajectory pairs k ← k + 1 10: end while 11: Return S Algorithm 2 learn dynamics 1: Input: pre-trained policy π, MaxSampleSize n s, environment env, trainable parameters W 2: Output: learned dynamical model f (s, a; W) 3: S agent ← Collect trajectories(π, n s, env) 4: S random ← Collect trajectories(random policy, n s, env) 5: f (s, a; W) ← supervised learning algorithm(S agent ∪ S random, W) 6: Return f (s, a; W) Algorithm 3 model based attack 1: Input: pre-trained policy π, learned dynamical model f (s, a; W), threat model, maximum perturbation magnitude, unroll length T, apply perturbation length n (≤ T) 2: Output: a sequence of perturbation δ 1,..., δ n 3: if threat model is observation manipulation (Eq. 1) then In this section, we conduct experiments on standard reinforcement learning environment for continuous control . We demonstrate on 4 different environments in and corresponding tasks: Cartpole-balance/swingup, Fish-upright, Walkerstand/walk and Humanoid-stand/walk. For the deep RL agent, we train a state-of-the-art D4PG Evaluations. We conduct experiments for 10 different runs, where the environment is reset to different initial states in different runs. For each run, we attack the agent for one episode with 1000 time steps (the default time intervals is usually 10 ms) and we compute the total loss and total return reward. The total loss calculates the total distance of current state to the unsafe states and the total return reward measures the true accumulative reward from the environment based on agent's action. Hence, the attack algorithm is stronger if the total return reward and the total loss are smaller. Baselines. We compare our algorithm with the following model-free attack baselines with random searches and heuristics: • rand-U: generate m randomly perturbed trajectories from Uniform distribution with interval [−,] and return the trajectory with the smallest loss (or reward), • rand-B: generate m randomly perturbed trajectories from Bernoulli distribution with probability 1/2 and interval [−,], and return the trajectory with the smallest loss (or reward), • flip: generate perturbations by flipping agent's observations/actions within the budget in ∞ norm. For rand-U and rand-B, they are similar to Monte-Carlo sampling methods, where we generate m sample trajectories from random noises and report the loss/reward of the best trajectory (with minimum loss or reward among all the trajectories). We set m = 1000 throughout the experiments. Our algorithm. A 4-layer feed-forward neural network with 1000 hidden neurons per layer is trained as the dynamics model f respectively for the domains of Cartpole, Fish, Walker and Humanoid. We use standard 2 loss (without regularization) to learn a dynamics model f. Instead of using recurrent neural network to represent f, we found that the 1-step prediction for dynamics with the 4-layer feed-forward network is already good for the MuJoCo domains we are studying. 
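For concreteness, the data collection and model fitting of Algorithms 1–2 can be sketched as follows. The environment interface (reset/step returning tensors), the full-batch training loop, and the hyperparameters are simplifying assumptions of this sketch rather than the exact implementation used in the paper.

```python
import torch
import torch.nn as nn

def collect_transitions(env, policy, n_samples):
    """Roll out a policy and record (s, a, s') transitions (Algorithm 1)."""
    data = []
    s = env.reset()
    while len(data) < n_samples:
        a = policy(s)
        s_next, done = env.step(a)
        data.append((s, a, s_next))
        s = env.reset() if done else s_next
    return data

def fit_dynamics(data, state_dim, action_dim, hidden=1000, epochs=10):
    """Fit a 4-layer feed-forward model f(s, a) -> s' with an l2 loss."""
    f = nn.Sequential(
        nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, state_dim))
    s, a, s_next = map(torch.stack, zip(*data))
    opt = torch.optim.Adam(f.parameters(), lr=1e-3)
    for _ in range(epochs):
        pred = f(torch.cat([s, a], dim=-1))
        loss = ((pred - s_next) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return f
```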
Specifically, for the Cartpole and Fish, we found that 1000 episodes (1e6 training points) are sufficient to train a good dynamics model (the mean square error for both training and test losses are at the order of 10 −5 for Cartpole and 10 −2 for Fish), while for the more complicated domain like Walker and Humanoid, more training points (5e6) are required to achieve a low test MSE error (at the order of 10 −1 and 10 0 for Walker and Humanoid respectively). Consequently, we use larger planning (unrolled) length for Cartpole and Fish (e.g. T = 10, 20), while a smaller T (e.g. 3 or 5) is used for Walker and Humanoid. Meanwhile, we focus on applying projected gradient descent (PGD) to solve Equation 3 and 4. We use Adam as the optimizer with optimization steps equal to 30 and we report the best for each run from a combination of 6 learning rates, 2 unroll length {T 1, T 2} and n steps of applying PGD solution with n ≤ T i. For observation manipulation, we report the on Walker, Humanoid and Cartpole domains with tasks (stand, walk, balance, swingup) respectively. The unsafe states s target for Walker and Humanoid are set to be zero head height, targeting the situation of falling down. For Cartpole, the unsafe states are set to have 180 • pole angle, corresponding to the cartpole not swinging up and nor balanced. For the Fish domain, the unsafe states for the upright task target the pose of swimming fish to be not upright, e.g. zero projection on the z-axis. The full of both two threat models on observation manipulation and action manipulation are shown in Table 1a, b and c, d respectively. Since the loss is defined as the distance to the target (unsafe) state, the lower the loss, the stronger the attack. It is clear that our proposed attack achieves much lower loss in Table 1a & c than the other three model-free baselines, and the averaged ratio is also listed in 1b & d. Notably, over the 10 runs, our proposed attack always outperforms baselines for the threat model of observation perturbation and the Cartpole domain for the threat model of action perturbation, while still superior to the baselines despite losing two times to the flip baseline on the Fish domain. Only our proposed attack can constantly make the Walker fall down (since we are minimizing its head height to be zero). To have a better sense on the numbers, we give some quick examples below. For instance, as shown in Table 1a and b, we show that the average total loss of walker head height is almost unaffected for the three baselines -if the walker successfully stand or walk, its head height usually has to be greater than 1.2 at every time step, which is 1440 for one episode -while our attack can successfully lower the walker head height by achieving an average of total loss of 258, which is roughly 0.51(0.68) per time step for the stand (walk) task. Similarly, for the humanoid , a successful humanoid usually has head height greater than 1.4, equivalently a total loss of 1960 for one episode, and Table 1a shows that the d4pg agent is robust to the perturbations generated from the three modelfree baselines while being vulnerable to our proposed attack. Indeed, as shown in Figure 2, the walker and humanoid falls down quickly (head height is close to zero) under our specially-designed attack while remaining unaffected for all the other baselines. Evaluating on the total reward. Often times, the reward function is a complicated function and its exact definition is often unavailable. 
Learning the reward function is also an active research field, which is not in the coverage of this paper. Nevertheless, as long as we have some knowledge of unsafe states (which is often the case in practice), then we can define unsafe states that are related to low reward and thus performing attacks based on unsafe states (i.e. minimizing the total loss of distance to unsafe states) would naturally translate to decreasing the total reward of agent. As demonstrated in Table 2, the have the same trend of the total loss in Table 1, where our proposed attack significantly outperforms all the other three baselines. In particular, our method can lower the average total reward up to 4.96× compared to the baselines , while the baseline are close to the perfect total reward of 1000. Evaluating on the efficiency of attack. We also study the efficiency of the attack in terms of sample complexity, i.e. how many episodes do we need to perform an effective attack? Here we adopt the convention in control suite where one episode corresponds to 1000 time steps (samples), and we learn the neural network dynamical model f with different number of episodes. Figure 3 plots the total head height loss of the walker (task stand) for the three baselines and our method with dynamical model f trained with three different number of samples: {5e5, 1e6, 5e6}, or equivalently {500, 1000, 5000} episodes. We note that the sweep of hyper parameters is the same for all the three models, and the only difference is the number of training samples. The show that for the baselines rand-U and flip, the total losses are roughly at the order of 1400-1500, while 809 959 193 walk 934 913 966 608 (a stronger baseline rand-B still has total losses of 900-1200. However, if we solve Equation 3 with f trained by 5e5 or 1e6 samples, the total losses can be decreased to the order of 400-700 and are already winning over the three baselines by a significant margin. Same as our expectation, if we use more samples (e.g. 5e6, which is 5-10 times more), to learn a more accurate dynamics model, then it is beneficial to our attack method -the total losses can be further decreased by more than 2× and are at the order of 50-250 over 10 different runs. Here we also give a comparison between our model-based attack to existing works on the sample complexity. In , 3e5 episodes of training data is used to learn the adversarial value function, which is roughly 1000× more data than even our strongest adversary (with 5e3 episodes). Similarly, use roughly 2e4 episodes to train an adversary via deep RL, which is roughly 4× more data than ours 2. In this paper, we study the problem of adversarial attacks in deep RL with continuous control for two commonly-used threat models (observation manipulation and action manipulation). Based on the threat models, we proposed the first model-based attack algorithm and showed that our formulation can be easily solved by off-the-shelf gradient-based solvers. Through extensive experiments on 4 MuJoCo domains (Cartpole, Fish, Walker, Humanoid), we show that our proposed algorithm outperforms all the model-free based attack baselines by a large margin. There are several interesting future directions can be investigated based on this work and is detailed in Appendix. A.1 MORE ILLUSTRATION ON FIGURE 3 The meaning of Fig 3 is to show how the accuracy of the learned models affects our proposed technique: 1. 
we first learned 3 models with 3 different number of samples: 5e5, 1e6, 5e6 and we found that with more training samples (e.g. 5e6, equivalently 5000 episodes), we are able to learn a more accurate model than the one with 5e5 training samples; 2. we plot the attack of total loss for our technique with 3 learned models (denoted as PGD, num train) as well as the baselines (randU, randB, Flip) on 10 different runs (initializations). We show with the more accurate learned model (5e6 training samples), we are able to achieve a stronger attack (the total losses are at the order of 50-200 over 10 different runs) than the less accurate learned model (e.g. 5e5 training samples). However, even with a less accurate learned model, the total losses are on the order of 400-700, which already outperforms the best baselines by a margin of 1.3-2 times. This in Fig 3 also suggest that a very accurate model isn't necessarily needed in our proposed method to achieve effective attack. Of course, if the learned model is more accurate, then we are able to degrade agent's performance even more. For the baselines (rand-U and rand-B), the adversary generates 1000 trajectories with random noise directly and we report the best loss/reward at the end of each episode. The detailed steps are listed below: Step 1: The perturbations are generated from a uniform distribution or a bernoulli distribution within the range [-eps, eps] for each trajectory, and we record the total reward and total loss for each trajectory from the true environment (the MuJoCo simulator) Step 2: Take the best (lowest) total reward/loss among 1000 trajectories and report in Table 1 and 2. We note that here we assume the baseline adversary has an "unfair advantage" since they have access to the true reward (and then take the best attack among 1000 trials), whereas our techniques do not have access to this information. Without this advantage, the baseline adversaries (rand-B, rand-U) may be weaker if they use their learned model to find the best attack sequence. In any case, Table 1 and 2 demonstrate that our proposed attack can successfully uncover vulnerabilities of deep RL agents while the baselines cannot. For the baseline'flip', we add the perturbation (with the opposite sign and magnitude) on the original state/action and project the perturbed state/action are within its limits. We use default total timesteps = 1000, and the maximum total reward is 1000. We report the total reward of the d4pg agents used in this paper below. The agents are well-trained and have total reward close to 1000. There are several interesting future directions can be investigated based on this work, including learning reward functions to facilitate a more effective attack, extending our current approach to develop effective black-box attacks, and incorporating our proposed attack algorithm to adversarial training of the deep RL agents. In particular, we think there are three important challenges that need to be addressed to study adversarial training of RL agents along with our proposed attacks: 1. The adversary and model need to be jointly updated. How do we balance these two updates, and make sure the adversary is well-trained at each point in training? 2. How do we avoid cycles in the training process due to the agent overfitting to the current adversary? 3. How do we ensure the adversary doesn't overly prevent exploration / balance unperturbed vs. robust performance?
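For completeness, the three model-free baselines described in this appendix can be sketched compactly. The evaluation interface env_eval and the exact form of the flip projection are our own reading of the baseline description, not code from the paper.

```python
import numpy as np

def rand_uniform_perturbations(rng, T, dim, eps):
    """rand-U: i.i.d. uniform noise in [-eps, eps] for every time step."""
    return rng.uniform(-eps, eps, size=(T, dim))

def rand_bernoulli_perturbations(rng, T, dim, eps):
    """rand-B: +/- eps with probability 1/2 for every time step."""
    return eps * rng.choice([-1.0, 1.0], size=(T, dim))

def flip_perturbation(obs, eps, low, high):
    """flip: move the observation towards its sign-flipped value, clipped to
    the eps budget and projected into the observation limits (this exact form
    is our interpretation of the baseline description)."""
    perturbed = np.clip(obs + np.clip(-2.0 * obs, -eps, eps), low, high)
    return perturbed - obs

def best_of_random(env_eval, rng, m, T, dim, eps, sampler):
    """Evaluate m random perturbation trajectories and keep the best one
    (lowest total reward), mirroring the baseline evaluation protocol."""
    best, best_reward = None, np.inf
    for _ in range(m):
        delta = sampler(rng, T, dim, eps)
        reward = env_eval(delta)   # total episode reward under this attack
        if reward < best_reward:
            best, best_reward = delta, reward
    return best, best_reward
```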
We study the problem of continuous control agents in deep RL with adversarial attacks and propose a two-step algorithm based on learned model dynamics.
348
scitldr
A leading hypothesis for the surprising generalization of neural networks is that the dynamics of gradient descent bias the model towards simple solutions, by searching through the solution space in an incremental order of complexity. We formally define the notion of incremental learning dynamics and derive the conditions on depth and initialization for which this phenomenon arises in deep linear models. Our main theoretical contribution is a dynamical depth separation , proving that while shallow models can exhibit incremental learning dynamics, they require the initialization to be exponentially small for these dynamics to present themselves. However, once the model becomes deeper, the dependence becomes polynomial and incremental learning can arise in more natural settings. We complement our theoretical findings by experimenting with deep matrix sensing, quadratic neural networks and with binary classification using diagonal and convolutional linear networks, showing all of these models exhibit incremental learning. Neural networks have led to a breakthrough in modern machine learning, allowing us to efficiently learn highly expressive models that still generalize to unseen data. The theoretical reasons for this success are still unclear, as the generalization capabilities of neural networks defy the classic statistical learning theory bounds. Since these bounds, which depend solely on the capacity of the learned model, are unable to account for the success of neural networks, we must examine additional properties of the learning process. One such property is the optimization algorithm -while neural networks can express a multitude of possible ERM solutions for a given training set, gradient-based methods with the right initialization may be implicitly biased towards certain solutions which generalize. A possible way such an implicit bias may present itself, is if gradient-based methods were to search the hypothesis space for possible solutions of gradually increasing complexity. This would suggest that while the hypothesis space itself is extremely complex, our search strategy favors the simplest solutions and thus generalizes. One of the leading along these lines has been by , deriving an analytical solution for the gradient flow dynamics of deep linear networks and showing that for such models, the singular values converge at different rates, with larger values converging first. At the limit of infinitesimal initialization of the deep linear network, show these dynamics exhibit a behavior of "incremental learning" -the singular values of the model are learned separately, one at a time. Our work generalizes these to small but finite initialization scales. Incremental learning dynamics have also been explored in gradient descent applied to matrix completion and sensing with a factorized parameterization (, ,). When initialized with small Gaussian weights and trained with a small learning rate, such a model is able to successfully recover the low-rank matrix which labeled the data, even if the problem is highly over-determined and no additional regularization is applied. In their proof of low-rank recovery for such models, show that the model remains lowrank throughout the optimization process, leading to the successful generalization. explore the dynamics of such models, showing the singular values are learned at different rates and that deeper models exhibit stronger incremental learning dynamics. 
Our work deals with a more simplified setting, allowing us to determine explicitly under which conditions depth leads to this dynamical phenomenon. Finally, the learning dynamics of nonlinear models have been studied as well. and study the gradient flow dynamics of shallow ReLU networks under restrictive distributional assumptions, show that shallow networks learn functions of gradually increasing frequencies and show how deep ReLU networks correlate with linear classifiers in the early stages of training. These findings, along with others, suggest that the generalization ability of deep networks is at least in part due to the incremental learning dynamics of gradient descent. Following this line of work, we begin by explicitly defining the notion of incremental learning for a toy model which exhibits this sort of behavior. Analyzing the dynamics of the model for gradient flow and gradient descent, we characterize the effect of the model's depth and initialization scale on incremental learning, showing how deeper models allow for incremental learning in larger (realistic) initialization scales. Specifically, we show that a depth-2 model requires exponentially small initialization for incremental learning to occur, while deeper models only require the initialization to be polynomially small. Once incremental learning has been defined and characterized for the toy model, we generalize our theoretically and empirically for larger linear and quadratic models. Examples of incremental learning in these models can be seen in figure 1, which we discuss further in section 4. We begin by analyzing incremental learning for a simple model. This will allow us to gain a clear understanding of the phenomenon and the conditions for it, which we will later be able to apply to a variety of other models in which incremental learning is present. Our simple linear model will be similar to the toy model analyzed by. Our input space will be X = R d and the hypothesis space will be linear models with non-negative weights, such that: We will introduce depth into our model, by parameterizing σ using w ∈ R d ≥0 in the following way: Where N represents the depth of the model. Since we restrict the model to having non-negative weights, this parameterization doesn't change the expressiveness, but it does radically change it's optimization dynamics. Assuming the data is labeled by some σ * ∈ R d ≥0, we will study the dynamics of this model for general N under a depth-normalized 1 squared loss over Gaussian inputs, which will allow us to derive our analytical solution: We will assume that our model is initialized uniformly with a tunable scaling factor, such that: 2.2 GRADIENT FLOW ANALYTICAL SOLUTIONS Analyzing our toy model using gradient flow allows us to obtain an analytical solution for the dynamics of σ(t) along with the dynamics of the loss function for a general N. For brevity, the following theorem refers only to N = 1, 2 and N → ∞, however the solutions for 3 ≤ N < ∞ are similar in structure to N → ∞, but more complicated. We also assume σ * i > 0 for brevity, however we can derive the solutions for σ * i = 0 as well. Note that this is a special case adaptation of the one presented in for deep linear networks: Theorem 1. Minimizing the toy linear model described in with gradient flow over the depth normalized squared loss, with Gaussian inputs and weights initialized as in and assuming σ * i > 0 leads to the following analytical solutions for different values of N: Proof. 
The gradient flow equations for our model are the following: Given the dynamics of the w parameters, we may use the chain rule to derive the dynamics of the induced model, σ:σ This differential equation is solvable for all N, leading to the solutions in the theorem. Taking, which is also solvable. for σ * i ∈ {12, 6, 4, 3} according to the analytical solutions in theorem 1, under different depths and initializations. The first column has all values converging at the same rate. Notice how the deep parameterization with small initialization leads to distinct phases of learning, where values are learned incrementally (bottom-right). The shallow model's much weaker incremental learning, even at small initialization scales (second column), is explained in theorem 2. Analyzing these solutions, we see how even in such a simple model depth causes different factors of the model to be learned at different rates. Specifically, values corresponding to larger optimal values converge faster, suggesting a form of incremental learning. This is most clear for N = 2 where the solution isn't implicit, but is also the case for N ≥ 3, as we will see in the next subsection. These dynamics are depicted in figure 2, where we see the dynamics of the different values of σ(t) as learning progresses. When N = 1, all values are learned at the same rate regardless of the initialization, while the deeper models are clearly biased towards learning the larger singular values first, especially at small initialization scales. Our model has only one optimal solution due to the population loss, but it is clear how this sort of dynamic can induce sparse solutions -if the model is able to fit the data after a small amount of learning phases, then it's obtained will be sparse. Alternatively, if N = 1, we know that the dynamics will lead to the minimal 2 norm solution which is dense. We explore the sparsity inducing bias of our toy model by comparing it empirically 2 to a greedy sparse approximation algorithm in appendix D, and give our theoretical in the next section. Equipped with analytical solutions for the dynamics of our model for every depth, we turn to study how the depth and initialization effect incremental learning. focuses on incremental learning in depth-2 models at the limit of σ 0 → 0, we will study the phenomenon for a general depth and for σ 0 > 0. First, we will define the notion of incremental learning. Since all values of σ are learned in parallel, we can't expect one value to converge before the other moves at all (which happens for infinitesimal initialization as shown by). We will need a more relaxed definition for incremental learning in finite initialization scales. Definition 1. Given two values σ i, σ j such that σ * i > σ * j > 0 and both are initialized as σ i = σ j = σ 0 < σ * j, and given two scalars s ∈ (0, 1 4) and f ∈ (3 4, 1), we call the learning of the values (s, f)-incremental if there exists a t for which: In words, two values have distinct learning phases if the first almost converges (f ≈ 1) before the second changes by much (s 1). Note that for any N, σ(t) is monotonically increasing and so once σ j (t) = sσ * j, it will not decrease to allow further incremental learning. Given this definition of incremental learning, we turn to study the conditions that facilitate incremental learning in our toy model. Our main is a dynamical depth separation , showing that incremental learning is dependent on Proof sketch (the full proof is given in appendix A). 
Rewriting the separable differential equation in to calculate the time until σ(t) = ασ *, we get the following: The condition for incremental learning is then the requirement that t f (σ i) ≤ t s (σ j), ing in: We then relax/restrict the above condition to get a necessary/sufficient condition on σ 0, leading to a lower and upper bound on σ th 0. Note that the value determining the condition for incremental learning is -if two values are in the same order of magnitude, then their ratio will be close to 1 and we will need a small initialization to obtain incremental learning. The dependence on the ratio changes with depth, and is exponential for N = 2. This means that incremental learning, while possible for shallow models, is difficult to see in practice. This explains why changing the initialization scale in figure 2 changes the dynamics of the N ≥ 3 models, while not changing the dynamics for N = 2 noticeably. The next theorem extends part of our analysis to gradient descent, a more realistic setting than the infinitesimal learning rate of gradient flow: Theorem 3. Given two values σ i, σ j of a depth-2 toy linear model as in, such that and the model is initialized as in, and given two scalars s ∈ (0, 1 4) and f ∈ (3 4, 1), and assuming σ * j ≥ 2σ 0, and assuming we optimize with gradient descent with a learning rate η ≤ c σ * c < 2(√ 2 − 1) and σ * 1 the largest value of σ *, then the largest initialization value for which the learning phases of the values are (s, f)-incremental, denoted σ th 0, is lower and upper bounded in the following way: Where A and B are defined as: We defer the proof to appendix B. Note that this , while less elegant than the bounds of the gradient flow analysis, is similar in nature. Both A and B simplify to r when we take their first order approximation around c = 0, giving us similar bounds and showing that the condition on σ 0 for N = 2 is exponential in gradient descent as well. While similar gradient descent are harder to obtain for deeper models, we discuss the general effect of depth on the gradient decent dynamics in appendix C. So far, we have only shown interesting properties of incremental learning caused by depth for a toy model. In this section, we will relate several deep models to our toy model and show how incremental learning presents itself in larger models as well. The task of matrix sensing is a generalization of matrix completion, where our input space is X = R d×d and our model is a matrix W ∈ R d×d, such that: , we introduce depth by parameterizing the model using a product of matrices and the following initialization scheme (W i ∈ R d×d): Note that when d = 1, the deep matrix sensing model reduces to our toy model without weight sharing. We study the dynamics of the model under gradient flow over a depth-normalized squared loss, assuming the data is labeled by a matrix sensing model parameterized by a PSD W * ∈ R d×d: The following theorem relates the deep matrix sensing model to our toy model, showing the two have the same dynamical equations: Theorem 4. Optimizing the deep matrix sensing model described in with gradient flow over the depth normalized squared loss , with weights initialized as in leads to the following dynamical equations for different values of N: Where σ i and σ * i are the ith singular values of W and W *, respectively, corresponding to the same singular vector. The proof follows that of and and is deferred to appendix E. 
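To make the incremental-learning behavior described by Theorem 4 easy to reproduce, the following sketch runs gradient descent on a depth-N factorization against a low-rank target and records the singular values of the product over time. It is a simplified full-observation variant (the target matrix is observed directly rather than through sensing measurements), and all hyperparameters are illustrative rather than taken from the paper.

```python
import numpy as np

def chain(mats, d):
    """Matrix product of a (possibly empty) list of d x d matrices."""
    out = np.eye(d)
    for m in mats:
        out = out @ m
    return out

def simulate_deep_sensing(depth=3, d=6, sigma0=1e-3, lr=0.05, iters=4000, seed=0):
    """Gradient descent on 0.5 * || W_1 W_2 ... W_N - W* ||_F^2 with a small
    identity initialization, tracking the singular values of the product."""
    rng = np.random.default_rng(seed)
    U = np.linalg.qr(rng.normal(size=(d, 3)))[0]
    w_star = U @ np.diag([4.0, 2.0, 1.0]) @ U.T      # PSD ground truth, rank 3
    Ws = [sigma0 ** (1.0 / depth) * np.eye(d) for _ in range(depth)]
    history = []
    for _ in range(iters):
        prod = chain(Ws, d)
        err = prod - w_star
        grads = [chain(Ws[:n], d).T @ err @ chain(Ws[n + 1:], d).T
                 for n in range(depth)]
        for n in range(depth):
            Ws[n] -= lr * grads[n]
        history.append(np.linalg.svd(prod, compute_uv=False))
    return np.array(history)   # (iters, d): larger singular values rise first

sv = simulate_deep_sensing()
print(np.round(sv[-1][:4], 2))  # final singular values approach 4, 2, 1, 0
```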
Theorem 4 shows us that the bias towards sparse solutions introduced by depth in the toy model is equivalent to the bias for low-rank solutions in the matrix sensing task. This bias was studied in a more general setting in , with empirical supporting the effect of depth on the obtainment of low-rank solutions under a more natural loss and initialization scheme. We recreate and discuss these experiments and their connection to our analysis in appendix E, and an example of these dynamics in deep matrix sensing can also be seen in panel (a) of figure 1. By drawing connections between quadratic networks and matrix sensing (as in), we can extend our to these nonlinear models. We will study a simplified quadratic network, where our input space is X = R d and the first layer is parameterized by a weight matrix W ∈ R d×d and followed by a quadratic activation function. The final layer will be a summation layer. We assume, like before, that the labeling function is a quadratic network parameterized by W * ∈ R d×d. Our model can be written in the following way, using the following orthogonal initialization scheme: Immediately, we see the similarity of the quadratic network to the deep matrix sensing model with N = 2, where the input space is made up of rank-1 matrices. However, the change in input space forces us to optimize over a different loss function to reproduce the same dynamics: Definition 2. Given an input distribution over an input space X with a labeling function y: X → R and a hypothesis h, the variance loss is defined in the following way: Note that minimizing this loss function amounts to minimizing the variance of the error, while the squared loss minimizes the second moment of the error. We note that both loss functions have the same minimum for our problem, and the dynamics of the squared loss can be approximated in certain cases by the dynamics of the variance loss. For a complete discussion of the two losses, including the cases where the two losses have similar dynamics, we refer the reader to appendix F. Theorem 5. Minimizing the quadratic network described and initialized as in with gradient flow over the variance loss defined in leads to the following dynamical equations: Where σ i and σ * i are the ith singular values of W and W *, respectively, corresponding to the same singular vector. We defer the proof to appendix F and note that these dynamics are the same as our depth-2 toy model, showing that shallow quadratic networks can exhibit incremental learning (albeit requiring a small initialization). While incremental learning has been described for deep linear networks in the past, it has been restricted to regression tasks. Here, we illustrate how incremental learning presents itself in binary classification, where implicit bias have so far focused on convergence at t → ∞ (, ,). Deep linear networks with diagonal weight matrices have been shown to be biased towards sparse solutions when N > 1 in , and biased towards the max-margin solution for N = 1. Instead of analyzing convergence at t → ∞, we intend to show that the model favors sparse solutions for the entire duration of optimization, and that this is due to the dynamics of incremental learning. Our theoretical illustration will use our toy model as in (initialized as in) as a special weightshared case of deep networks with diagonal weight matrices, and we will then show empirical for the more general setting. We analyze the optimization dynamics of this model over a separable where y i ∈ {±1}. 
We use the exponential loss ((f (x), y) = e −yf (x) ) for the theoretical illustration and experiment on the exponential and logistic losses. Computing the gradient for the model over w, the gradient flow dynamics for σ become: We see the same dynamical attenuation of small values of σ that is seen in the regression model, caused by the multiplication by σ. From this, we can expect the same type of incremental learning to occur -weights of σ will be learned incrementally until the dataset can be separated by the current support of σ. Then, the dynamics strengthen the growth of the current support while relatively attenuating that of the other values. Since the data is separated, increasing the values of the current support reduces the loss and the magnitude of subsequent gradients, and so we should expect the support to remain the same and the model to converge to a sparse solution. Granted, the above description is just intuition, but panel (c) of figure 1 shows how it is born out in practice (similar are obtained for the logistic loss). In appendix G we further explore this model, showing deeper networks have a stronger bias for sparsity. We also observe that the initialization scale plays a similar role as before -deep models are less biased towards sparsity when σ 0 is large. In their work, show an equivalence between the diagonal network and the circular-convolutional network in the frequency domain. According to their , we should expect to see the same sparsity-bias of diagonal networks in convolutional networks, when looking at the Fourier coefficients of σ. An example of this can be seen in panel (d) of figure 1, and we refer the reader to appendix G for a full discussion of their convolutional model and it's incremental learning dynamics. Gradient-based optimization for deep linear models has an implicit bias towards simple (sparse) solutions, caused by an incremental search strategy over the hypothesis space. Deeper models have a stronger tendency for incremental learning, exhibiting it in more realistic initialization scales. This dynamical phenomenon exists for the entire optimization process for regression as well as classification tasks, and for many types of models -diagonal networks, convolutional networks, matrix completion and even the nonlinear quadratic network. We believe this kind of dynamical analysis may be able to shed light on the generalization of deeper nonlinear neural networks as well, with shallow quadratic networks being only a first step towards that goal. Proof. Our strategy will be to define the time t α for which a value reaches a fraction α of it's optimal value, and then require that t f (σ i) ≤ t s (σ j). We begin with recalling the differential equation which determines the dynamics of the model:σ Since the solution for N ≥ 3 is implicit and difficult to manage in a general form, we will define t α using the integral of the differential equation. The equation is separable, and under initialization of σ 0 we can describe t α (σ) in the following way: Incremental learning takes place when σ i (t f) = f σ * i happens before σ j (t s) = sσ * j. 
We can write this condition in the following way: Plugging in σ i = rσ j and rearranging, we get the following necessary and sufficient condition for incremental learning: Our last step before relaxing and restricting our condition will be to split the integral on the left-hand side into two integrals: At this point, we cannot solve this equation and isolate σ 0 to obtain a clear threshold condition on it for incremental learning. Instead, we will relax/restrict the above condition to get a necessary/sufficient condition on σ 0, leading to a lower and upper bound on the threshold value of σ 0. To obtain a sufficient (but not necessary) condition on σ 0, we may make the condition stricter either by increasing the left-hand side or decreasing the right-hand side. We can increase the left-hand side by removing r from the left-most integral's denominator (r > 1) and then combine the left-most and right-most integrals: Next, we note that the integration bounds give us a bound on σ for either integral. This means we can replace 1 − σ σ * j with 1 on the right-hand side, and replace 1 − σ rσ * j with 1 − f on the left-hand side: We may now solve these integrals for every N and isolate σ 0, obtaining the lower bound on σ th 0. We start with the case where N = 2: Rearranging to isolate σ 0, we obtain our : For the N ≥ 3 case, we have the following after solving the integrals: For simplicity we may further restrict the condition by removing the term 1 rf σ * j 1− 2 N. Solving for σ 0 gives us the following: To obtain a necessary (but not sufficient) condition on σ 0, we may relax the condition in either by decreasing the left-hand side or increasing the right-hand side. We begin by rearranging the equation: Like before, we may use the integration bounds to bound σ. Plugging in σ = sσ * j for all integrals decreases the left-hand side and increases the right-hand side, leading us to the following: Rearranging, we get the following inequality: We now solve the integrals for the different cases. For N = 2, we have: Rearranging to isolate σ 0, we get our condition: Finally, for N ≥ 3, we solve the integrals to give us: Rearranging to isolate σ 0, we get our condition: For a given N, we derived a sufficient condition and a necessary condition on σ 0 for (s, f)-incremental learning. The necessary and sufficient condition on σ 0, which is the largest initialization value for which we see incremental learning (denoted σ th 0), is between the two derived bounds. The precise bounds can possibly be improved a bit, but the asymptotic dependence on r is the crux of the matter, showing the dependence on r changes with depth with a substantial difference when we move from shallow models (N = 2) to deeper ones (N ≥ 3) Theorem. Given two values σ i, σ j of a depth-2 toy linear model as in, such that and the model is initialized as in, and given two scalars s ∈ (0, 1 4) and f ∈ (3 4, 1), and assuming σ * j ≥ 2σ 0, and assuming we optimize with gradient descent with a learning rate η ≤ c σ * 1 for c < 2(√ 2 − 1) and σ * 1 the largest value of σ *, then the largest initialization value for which the learning phases of the values are (s, f)-incremental, denoted σ th 0, is lower and upper bounded in the following way: Where A and B are defined as: Proof. To show our for gradient descent and N = 2, we build on the proof techniques of theorem 3 of. We start by deriving the recurrence relation for the values σ(t) for general depth, when t now stands for the iteration. 
Remembering that w n i = σ i, we write down the gradient update for w i (t): Raising w i (t) to the N th power, we get the gradient update for the σ values: Next, we will prove a simple lemma which gives us the maximal learning rate we will consider for the analysis, for which there is no overshooting (the values don't grow larger than the optimal values). Lemma 1. For the gradient update in, assuming for c ≤ 1, we have: Defining r i = σi σ * i and dividing both sides by σ * i, we have: It is enough to show that for any 0 ≤ r ≤ 1, we have that re r 1− 2 N (1−r) ≤ 1, as over-shooting occurs when r i (t) > 1. Indeed, this function is monotonic increasing in 0 ≤ r ≤ 1 (since the exponent is non negative), and equals 1 when r = 1. Since r = 1 is a fixed point and no iteration that starts at r < 1 can cross 1, then r i (t) ≤ 1 for any t. This concludes our proof. Under this choice of learning rate, we can now obtain our incremental learning for gradient descent when N = 2. Our strategy will be bounding σ i (t) from below and above, which will give us a lower and upper bound for t α (σ i). Once we have these bounds, we will be able to describe either a necessary or a sufficient condition on σ 0 for incremental learning, similar to theorem 2. The update rule for N = 2 is: Next, we plug in η = Where in the fourth line we use the inequality 1 1+x ≥ 1 − x, ∀x ≥ 0. We may now subtract 1 σ * i from both sides to obtain: We may now obtain a bound on t α (σ i) by plugging in σ i (t) = ασ * i and taking the log: Rearranging (note that log 1 − cR i − c 2 4 R 2 i < 0 and that our choice of c keeps the argument of the log positive), we get: Next, we follow the same procedure for an upper bound. Starting with our update step: Where in the last line we use the inequality from both sides, we get: Rearranging like before, we get the bound on the α-time: Given these bounds, we would like to find the conditions on σ 0 that allows for (s, f)-incremental learning. We will find a sufficient condition and a necessary condition, like in the proof of theorem 2. A sufficient condition for incremental learning will be one which is possibly stricter than the exact condition. We can find such a condition by requiring the upper bound of t f (σ i) to be smaller than the lower bound on t s (σ j). This becomes the following condition: and rearranging, we get the following: We may now take the exponent of both sides and rearrange again, remembering = r > 1, to get the following condition: Now, we will add the very reasonable assumption that σ * j ≥ 2σ 0, which allows us to replace Now we can rearrange and isolate σ 0 to get a sufficient condition for incremental learning: A necessary condition for incremental learning will be one which is possibly more relaxed than the exact condition. We can find such a condition by requiring the lower bound of t f (σ i) to be smaller than the upper bound on t s (σ j). This becomes the following condition: log 1−cRj +c 2 R 2 j and rearranging, we get the following: We may now take the exponent of both sides and rearrange again, remembering σ * i σ * j = r > 1, to get the following condition: We may now relax the condition further, by removing the r from the denominator of the left-hand side and the σ 0 from the numerator. This gives us the following: Finally, rearranging gives us the necessary condition: While we were able to generalize our to gradient descent for N = 2, our proof technique relies on the ability to get a non-implicit solution for σ(t) which we discretized and bounded. 
This is harder to generalize to larger values of N, where the solution is implicit. Still, we can informally illustrate the effect of depth on the dynamics of gradient descent by approximating the update rule of the values. We start by reminding ourselves of the gradient descent update rule for σ, for a learning rate η = c To compare two values in the same scales, we will divide both sides by the optimal value σ * i and look at the update step of the ratio r i = We will focus on the early stages of the optimization process, where r 1. This means we can neglect the 1 − r i (t) term in the update step, giving us the approximate update step we will use to compare the general i, j values: We would like to compare the dynamics of r i and r j, which is difficult to do when the recurrence relation isn't solvable. However, we can observe the first iteration of gradient descent and see how depth affects this iteration. Since we are dealing with variables which are ratios of different optimal values, the initial values of r are different. Denoting r = σ * i σ * j, we can describe the initialization of r j using that of r i: Plugging in the initial conditions and noting that R i = rR j, we get: We see that the two ratios have a similar update, with the ratio of optimal values playing a role in how large the initial value is versus how large the added value is. When we use a small learning rate, we have a very small c which means we can make a final approximation and neglect the higher order terms of c: We can see that while the initial conditions favor r j, the size of the update for r i is larger by a factor of r N −1 N when the initialization and learning rates are small. This accumulates throughout the optimization, making r i eventually converge faster than r j. The effect of depth here is clear -the deeper the model, the larger the relative step size of r i and the faster it converges relative to r j. Learning our toy model, when it's incremental learning is taken to the limit, can be described as an iterative procedure where at every step an additional feature is introduced such that it's weight is non-zero and then the model is optimized over the current set of features. This description is also relevant for the sparse approximation algorithm orthogonal matching pursuit , where the next feature is greedily chosen to be the one which most improves the current model. While the toy model and OMP are very different algorithms for learning sparse linear models, we will show empirically that they behave similarly. This allows us to view incremental learning as a continuous-time extension of a greedy iterative algorithm. To allow for negative weights in our experiments, we augment our toy model as in the toy model of. Our model will have the same induced form as before: However, we parameterize σ using w +, w − ∈ R d in the following way: We can now treat this algorithm as a sparse approximation pursuit algorithm -given a dictionary D ∈ R d×n and an example x ∈ R d, we wish to find the sparsest α for which Dα ≈ x by minimizing the 0 norm of α subject to ||Dα − x|| 2 2 = 0 3. Under this setting, we can compare OMP to our toy model by comparing the sets of features that the two algorithms choose for a given example and dictionary. In figure 3 we run such a comparison. Using a dictionary of 1000 atoms and an example of dimensionality 80 sampled from a random hidden vector of a given sparsity s, we run both algorithms and record the first s features chosen 4. 
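For reference, a minimal numpy sketch of the orthogonal matching pursuit baseline used in this comparison is given below; the fixed number of selected atoms as a stopping criterion and the helper names are our own choices.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedily pick k atoms of dictionary D
    (columns, assumed unit norm) and refit by least squares on the support."""
    residual, support = x.copy(), []
    alpha = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Refit coefficients on the selected support.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        alpha[:] = 0.0
        alpha[support] = coef
        residual = x - D @ alpha
    return alpha, support

# Example: recover a 3-sparse code from an 80-dimensional measurement,
# mirroring the dictionary and dimensionality used in the comparison.
rng = np.random.default_rng(0)
D = rng.normal(size=(80, 1000))
D /= np.linalg.norm(D, axis=0)
true_support = rng.choice(1000, size=3, replace=False)
x = D[:, true_support] @ rng.normal(size=3)
alpha, support = omp(D, x, k=3)
print(sorted(support), sorted(true_support.tolist()))
```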
Figure 3: Empirical comparison of the dynamics of the toy model to OMP. The toy model has a depth of 5 and was initialized with a scale of 1e-4 and a learning rate of 3e-3. We compare the fraction of agreement between the sets of first s features selected of the two algorithms for every given sparsity level s, averaged over 100 experiments (the shaded regions are empirical standard deviations). For example, for sparsity level 3, we look at the sets of first 3 features selected by each algorithm and calculate the fraction of them that appear in both sets. For every sparsity s, we plot the mean fraction of agreement between the sets of features chosen by OMP and the toy model over 100 experiments. We see that the two algorithms choose very similar features at the beginning, suggesting that the deep model approximates the discrete behavior of OMP. Only when the number of features increases do we see that the behavior of the two models begins to differ, caused by the fact that the toy model has a finite initialization scale and learning rate. These experiments demonstrate the similarity between the incremental learning of deep models and the discrete behavior of greedy approximation algorithms such as OMP. Adopting this view also allows us to put our finger on another strength of the dynamics of deep models -while greedy algorithms such as OMP require the analytical solution or approximation of every iterate, the dynamics of deep models are able to incrementally learn any differentiable function. For example, looking back at the matrix sensing task and the classification models in section 4, we see that while there isn't an immediate and efficient extension of OMP for these settings, the dynamics of learning deep models extends naturally and exhibits the same incremental learning as OMP. Theorem. Minimizing the deep matrix sensing model described in with gradient flow over the depth normalized squared loss, with Gaussian inputs and weights initialized as in leads to the following dynamical equations for different values of N: Where σ i and σ * i are the ith singular values of W and W *, respectively, corresponding to the same singular vectors. Proof. We will adapt the proof from for multilayer linear networks. The gradient flow equations for W n, n ∈ [N] are:, U diagonalizes all W n matrices at initialization such that D n = U W n U T = N √ σ 0 I. Making this change of variables for all W n, we get: Rearranging, we get a set of decoupled differential equations for the singular values of W n: Note that since these matrices are all diagonal at initialization, the above dynamics ensure that they remain diagonal throughout the optimization. Denoting σ n,i as the i'th singular value of W n and σ i as the i'th singular value of W, we get the following differential equation: Since we assume at initialization that ∀n, m, i: σ n,i = σ m,i = N √ σ 0, the above dynamics are the same for all singular values and we get ∀n, m, i: σ n,i (t) = σ m,i (t) = N σ i (t). We may now use this to calculate the dynamics of the singular value of W, since they are the product the the singular values of all W n matrices. Denoting σ −n,i = k =n σ k,i and using the chain rule: Our analytical are only applicable for the population loss over Gaussian inputs. These conditions are far from the ones used in practice and studied in , where the problem is over-determined and the weights are drawn from a Gaussian distribution with a small variance. 
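The more practical regime can nevertheless be probed numerically. The sketch below runs gradient descent on a depth-3 factorization with a finite set of Gaussian measurements; the dimensions, number of measurements, learning rate and the small scaled-identity initialization of each factor are assumptions made for illustration and do not reproduce the exact setup of the figures.

```python
import numpy as np

rng = np.random.default_rng(0)
d, true_rank, depth = 20, 4, 3
m = 200                                   # number of measurements: fewer than the d*d parameters
init_scale, lr = 1e-3, 2e-3

U = np.linalg.qr(rng.standard_normal((d, true_rank)))[0]
V = np.linalg.qr(rng.standard_normal((d, true_rank)))[0]
W_star = U @ np.diag([4.0, 3.0, 2.0, 1.0]) @ V.T       # rank-4 ground truth
A = rng.standard_normal((m, d, d))                      # Gaussian sensing matrices
y = np.einsum("kij,ij->k", A, W_star)

# Depth-N factorization W = W_N ... W_1, each factor a small scaled identity at init.
Ws = [init_scale ** (1.0 / depth) * np.eye(d) for _ in range(depth)]

def product(factors):
    P = np.eye(d)
    for M in factors:
        P = M @ P
    return P

for t in range(20_001):
    W = product(Ws)
    r = np.einsum("kij,ij->k", A, W) - y                # residuals <A_k, W> - y_k
    G = np.einsum("k,kij->ij", r, A) / m                # dL/dW for L = mean(r^2) / 2
    grads = [product(Ws[i + 1:]).T @ G @ product(Ws[:i]).T for i in range(depth)]
    for i in range(depth):
        Ws[i] -= lr * grads[i]
    if t % 2000 == 0:
        print(t, np.round(np.linalg.svd(product(Ws), compute_uv=False)[:5], 3))
```

With a small initialization the printed top singular values should approach 4, 3, 2 and 1 in a staggered fashion while the fifth remains near zero.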
To show our regarding incremental learning extend qualitatively to more natural settings, we empirically examine the deep matrix sensing model in this natural setting for different depths and initialization scales as seen in figure 4. Notice how incremental learning is exhibited even when the number of examples is much smaller than the number of parameters in the model. While we can't rely on our theory for describing the exact dynamics of the optimization for these kinds of over-determined problems, the qualitative we get from it are still applicable. Another interesting phenomena we should note is that once the dataset becomes very small (the second row of the figure), we see all "currently active" singular values change at the beginning of every new phase (this is best seen in the bottom-right panel). This suggests that since there is more than one optimal solution, once we increase the current rank of our model it may find a solution that has a different set of singular values and vectors and thus all singular values change at the beginning of a new learning phase. This demonstrates the importance of incremental learning for obtaining The columns correspond to different parameterization depths, while the rows correspond to different dataset sizes. In both cases the problem is over-determined, since the number of examples is smaller than the number of parameters. Since the original matrix is rank-4, we can recognize an unsuccessful recovery when all five singular values are nonzero, as seen clearly for both depth-1 plots. sparse solutions -once the initialization conditions and depth are such that the learning phases are distinct, gradient descent finds the optimal rank-i solution in every phase i. For these dynamics to successfully recover the optimal solution at every phase, the phases need to be far enough apart from each other to allow for the singular values and vectors to change before the next phase begins. Theorem. Minimizing the quadratic network described and initialized as in with gradient flow over the variance loss defined in with Gaussian inputs leads to the following dynamical equations:σ Where σ i and σ * i are the ith singular values of W and W *, respectively, corresponding to the same singular vectors. Proof. Our proof will follow similar lines as the analysis of the deep matrix sensing model. Taking the expectation of the variance loss over Gaussian inputs for our model gives us: Following the gradient flow dynamics over W leads to the following differential equation: We can now calculate the gradient flow dynamics of W T W using the chain rule: Now, under our initialization W T 0 W 0 = σ 0 I, we get that W T W and W T * W * are simultaneously diagonalizable at initialization by some matrix U, such that the following is true for diagonal D and D *: Multiplying equation by U and U T gives us the following dynamics for the singular values of These matrices are diagonal at initialization, and remain diagonal throughout the dynamics (the offdiagonal elements are static according to these equations). We may now look at the dynamics of a single diagonal element, noticing it is equivalent to the depth-2 toy model: It may seem that the variance loss is an unnatural loss function to analyze, since it isn't used in practice. While this is true, we will show how the dynamics of this loss function are an approximation of the square loss dynamics. 
We begin by describing the dynamics of both losses, showing how incremental learning can't take place for quadratic networks as defined over the squared loss. Then, we show how adding a global bias to the quadratic network leads to similar dynamics for small initialization scales. in the previous section, we derive the differential equations for the singular values of W T W under the variance loss:σ We will now derive similar equations for the squared loss. The scaled squared loss in expectation over the Gaussian inputs is: Figure 5: Quadratic model's evolution of top-5 singular values for a rank-4 labeling function. The rows correspond to whether or not a global bias is introduced to the model. The first two columns are for a large dataset (one optimal solution) and the last two columns are for a small dataset (overdetermined problem). When a bias is introduced, it is initialized to it's optimal value at initialization. Note how without the bias, the singular values are learned together and there is over-shooting of the optimal singular value caused by the coupling of the dynamics of the singular values. For the small datasets, we see that the model with no bias reaches a solution with a larger rank. Once a global bias is introduced, the dynamics become more incremental as in the analysis of the variance loss. Note that in this case the solution obtained for the small dataset is the optimal low-rank solution. the bias isn't optimal, and so incremental learning can still take place (assuming a small enough initialization). Under these considerations, we say that the dynamics of the squared loss for a quadratic network with an added global bias resemble the idealized dynamics of the variance loss for a depth-2 linear model which we analyze formally in the paper. In figure 5 we experimentally show how adding a bias to a quadratic network does lead to incremental learning similar to the depth-2 toy model. In section 4.3 we viewed our toy model as a special case of the deep diagonal networks described in , expected to be biased towards sparse solutions. Figure 6 shows the dynamics of the largest values of σ for different depths of the model. We see that the same type of incremental learning we saw in earlier models exists here as well -the features are learned one by one in deeper models, ing in a sparse solution. The leftmost panel shows how the initialization scale plays a role here as well, with the solution being more sparse when the initialization is small. We should note that these do not defy the of (from which we would expect the initialization not to matter), since their deal with the solution at t → ∞. The linear circular-convolutional network of deals with one-dimensional convolutions with the same number of outputs as inputs, such that the mapping from one hidden layer to the next is parameterized by w n and defined to be: shaded regions denoting empirical standard deviations. We see that depth-1 models reach similar to the max-margin SVM solution as predicted by , while deeper models are highly correlated with the sparse solution, with this correlation increasing when the initialization scale is small. The other panels show the evolution of the absolute values of the top-5 weights of σ for the smallest initialization scale. Note that as we increase the depth, incremental learning is clearly presented. 
The final layer is a fully connected layer parameterized by w N ∈ R d, such that the final model can be written in the following way: This lemma connects the convolutional network to the diagonal network, and thus we should expect to see the same incremental learning of the values of the diagonal network exhibited by the Fourier coefficients of the convolutional network. In figure 7 we see the same plots as in figure 6 but for the Fourier coefficients of the convolutional model. We see that even when the model is far from the toy parameterization (there is no weight sharing and the initialization is with random Gaussian weights), incremental learning is still clearly seen in the dynamics of the model. We see how the inherent reason for the sparsity towards sparse solution found in is the of the dynamics of the model -small amplitudes are attenuated while large ones are amplified. Published as a conference paper at ICLR 2020 We see that depth-1 models reach similar to the max-margin SVM solution, while deeper models are highly correlated with the optimal sparse solution. The other panels show the evolution of the amplitudes of the top-5 frequencies of σ for the smallest initialization scale. Note that as we increase the depth, incremental learning is clearly presented.
We study the sparsity-inducing bias of deep models that is caused by their learning dynamics.
349
scitldr
Generative modeling of high dimensional data like images is a notoriously difficult and ill-defined problem. In particular, how to evaluate a learned generative model is unclear. In this paper, we argue that *adversarial learning*, pioneered with generative adversarial networks (GANs), provides an interesting framework to implicitly define more meaningful task losses for unsupervised tasks, such as for generating "visually realistic" images. By relating GANs and structured prediction under the framework of statistical decision theory, we put into light links between recent advances in structured prediction theory and the choice of the divergence in GANs. We argue that the insights about the notions of "hard" and "easy" to learn losses can be analogously extended to adversarial divergences. We also discuss the attractive properties of parametric adversarial divergences for generative modeling, and perform experiments to show the importance of choosing a divergence that reflects the final task. For structured prediction and data generation the notion of final task is at the same time crucial and not well defined. Consider machine translation; the goal is to predict a good translation, but even humans might disagree on the correct translation of a sentence. Moreover, even if we settle on a ground truth, it is hard to define what it means for a candidate translation to be close to the ground truth. In the same way, for data generation, the task of generating pretty pictures or more generally realistic samples is not well defined. Nevertheless, both for structured prediction and data generation, we can try to define criteria which characterize good solutions such as grammatical correctness for translation or non-blurry pictures for image generation. By incorporating enough criteria into a task loss, one can hope to approximate the final task, which is otherwise hard to formalize. Supervised learning and structured prediction are well-defined problems once they are formulated as the minimization of such a task loss. The usual task loss in object classification is the generalization error associated with the classification error, or 0-1 loss. In machine translation, where the goal is to predict a sentence, a structured loss, such as the BLEU score BID37, formally specifies how close the predicted sentence is from the ground truth. The generalization error is defined through this structured loss. In both cases, models can be objectively compared and evaluated with respect to the task loss (i.e., generalization error). On the other hand, we will show that it is not as obvious in generative modeling to define a task loss that correlates well with the final task of generating realistic samples. Traditionally in statistics, distribution learning is formulated as density estimation where the task loss is the expected negative-log-likelihood. Although log-likelihood works fine in low-dimension, it was shown to have many problems in high-dimension. Among others, because the Kullback-Leibler is too strong of a divergence, it can easily saturate whenever the distributions are too far apart, which makes it hard to optimize. Additionally, it was shown in BID47 that the KL-divergence is a bad proxy for the visual quality of samples. In this work we give insights on how adversarial divergences BID26 can be considered as task losses and how they address some problems of the KL by indirectly incorporating hard-to-define criteria. 
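A one-dimensional toy computation already makes this contrast concrete. The snippet below compares the KL, Jensen-Shannon and Wasserstein-1 divergences between two unit-variance Gaussians as their means move apart; it is a self-contained illustration and not part of the experiments reported later.

```python
import numpy as np

def gaussian_pdf(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

dx = 1e-3
x = np.arange(-9.0, 17.0, dx)                    # integration grid covering both densities
for mu in (0.5, 1.0, 2.0, 4.0, 8.0):
    p, q = gaussian_pdf(x, 0.0), gaussian_pdf(x, mu)
    m = 0.5 * (p + q)
    kl = 0.5 * mu ** 2                           # closed form for two unit-variance Gaussians
    js = 0.5 * np.sum(p * np.log(p / m) + q * np.log(q / m)) * dx
    w1 = mu                                      # W1 of a pure translation equals the shift
    print(f"shift {mu:4.1f}:  KL = {kl:6.2f}   JS = {js:5.3f}   W1 = {w1:4.1f}")
```

As the two distributions separate, the KL grows without bound (and is infinite for disjoint supports), the Jensen-Shannon saturates near log 2 ≈ 0.69 and stops being informative, while the Wasserstein keeps tracking how far apart the distributions actually are.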
We define parametric adversarial divergences as the following: DISPLAYFORM0 where {f φ : X → R d ; φ ∈ Φ} is a class of parametrized functions, such as neural networks, called the discriminators in the Generative Adversarial Network (GAN) framework BID15. The constraints Φ and the function ∆: R d × R d → R determine properties of the ing divergence. Using these notations, we adopt the view 1 that training a GAN can be seen as training a generator network q θ (parametrized by θ) to minimize the parametric adversarial divergence Div NN (p||q θ), where the generator network defines the probability distribution q θ over x. Our contributions are the following:• We show that compared to traditional divergences, parametric adversarial divergences offer a good compromise in terms of sample complexity, computation, ability to integrate prior knowledge, flexibility and ease of optimization.• We relate structured prediction and generative adversarial networks using statistical decision theory, and argue that they both can be viewed as formalizing a final task into the minimization of a statistical task loss.• We explain why it is necessary to choose a divergence that adequately reflects our final task in generative modeling. We make a parallel with in structured learning (also dealing with high-dimensional data), which quantify the importance of choosing a good objective in a specific setting.• We explore with some simple experiments how the properties of the discriminator transfer to the adversarial divergence. Our experiments suggest that parametric adversarial divergences are especially adapted to problems such as image generation, where it is hard to formally define a perceptual loss that correlates well with human judgment.• We illustrate the importance of having a parametric discriminator by running experiments with the true (nonparametric) Wasserstein, and showing its shortcomings on complex datasets, on which GANs are known to perform well.• We perform qualitative and quantitative experiments to compare maximum-likelihood and parametric adversarial divergences under two settings: very high-dimensional images, and learning data with specific constraints. Here we briefly introduce the structured prediction framework because it can be related to generative modeling in some ways. We will later link them formally, and present insights from recent theoretical to choose a better divergence. We also unify parametric adversarial divergences with traditional divergences in order to compare them in the next section. The goal of structured prediction is to learn a classifier h θ: X → Y which predicts a structured output y from an input x. The key difficulty is that Y usually has size exponential in the input 2 (e.g. it could be all possible sequence of symbols with a given length). Being able to handle this exponentially large set of outputs is one of the key challenges in structured prediction because it makes traditional multi-class classification methods unusable in general.3 Standard practice in structured prediction BID46 BID9 BID38 is to consider predictors based on score functions h θ (x) = arg max y ∈Y s θ (x, y), where s θ: X × Y → R, called the score/energy function BID23, assigns a score to each possible label y for an input x. Typically, 1 We focus in this paper on the divergence minimization perspective of GANs. 
There are other views, such as those based on game theory BID2, ratio matching and moment matching BID29.2 Additionally, Y might depend on the input x, but we ignore this effect for clarity of exposition. 3 Such as ones based on maximum likelihood.as in structured SVMs BID46, the score function is linear: s θ (x, y) = θ, g(x, y), where g(·) is a predefined feature map. Alternatively, the score function could also be a learned neural network BID5.In order to evaluate the predictions objectively, we need to define a task-dependent structured loss (y, y ; x) which expresses the cost of predicting y for x when the ground truth is y. We discuss the relation between the loss function and the actual final task in Section 4.2. The goal is then to find a parameter θ which minimizes the generalization error: DISPLAYFORM0 Directly minimizing is often an intractable problem; this is the case when the structured loss is the 0-1 loss BID1. Instead, the usual practice is to minimize a surrogate loss et al., 2006) which has nicer properties, such as subdifferentiability or convexity, to get a tractable optimization problem. The surrogate loss is said to be consistent when its minimizer is also a minimizer of the task loss. DISPLAYFORM1 A simple example of structured prediction task is machine translation. Suppose we want to translate French sentences to English; the input x is then a sequence of French words, and the output y is a sequence of English words belonging to a dictionary D with typically |D| ≈ 10000 words. If we restrict the output sequence to be shorter than T words, then |Y| = |D| T, which is exponential. An example of desirable criterion is to have a translation with many words in common with the ground truth, which is typically enforced using BLEU scores to define the task loss. Because we will compare properties of adversarial and traditional divergences throughout this paper, we choose to first unify them with a formalism similar to BID45; BID26: DISPLAYFORM0 Under this framework we give some examples of traditional nonparametric divergences:• ψ-divergences with generator function ψ (which we call f-divergences) can be written in dual form BID33 4 Div ψ (p||q θ) = sup DISPLAYFORM1 where ψ * is the convex conjugate. Depending on ψ, one can obtain any ψ-divergence such as the (reverse) Kullback-Leibler, the Jensen-Shannon, the Total Variation, the ChiSquared 5.• Wasserstein-1 distance induced by an arbitrary norm · and its corresponding dual norm · * BID45: DISPLAYFORM2 which can be interpreted as the cost to transport all probability mass of p into q, where x − x is the unit cost of transporting x to x.• Maximum Mean Discrepancy: DISPLAYFORM3 where (H, K) is a Reproducing Kernel Hilbert Space induced by a Kernel K(x, x) on X with the associated norm · H. The MMD has many interpretations in terms of momentmatching BID24. 4 The standard form is Ex∼q θ [ψ( DISPLAYFORM4 . Some ψ require additional constraints, such as ||f ||∞ ≤ 1 for the Total Variation. Table 1 : Properties of Divergences. Explicit and Implicit models refer to whether the density q θ (x) can be computed. p is the number of parameters of the parametric discriminator. Sample complexity and computational cost are defined and discussed in Section 3.1, while the ability to integrate desirable properties of the final loss is discussed in Section 3.2. Although f-divergences can be estimated with Monte-Carlo for explicit models, they cannot be easily computed for implicit models without additional assumptions (see text). 
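Among the divergences above, the MMD is the one whose empirical estimate is available in closed form from samples alone. The following sketch computes the standard unbiased estimator of the squared MMD with a generic RBF kernel; the bandwidth and the toy Gaussian data are arbitrary choices, and the need to pick such a kernel by hand is precisely the limitation discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Y, bandwidth):
    d2 = np.sum(X ** 2, 1)[:, None] + np.sum(Y ** 2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimator of the squared MMD with an RBF kernel: an O(n^2)
    computation that needs nothing beyond the two sets of samples."""
    m, n = len(X), len(Y)
    Kxx = rbf_kernel(X, X, bandwidth)
    Kyy = rbf_kernel(Y, Y, bandwidth)
    Kxy = rbf_kernel(X, Y, bandwidth)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

X = rng.standard_normal((2000, 2))
for shift in (0.0, 0.5, 2.0):
    Y = rng.standard_normal((2000, 2)) + shift
    print(f"shift {shift}: squared MMD estimate = {mmd2_unbiased(X, Y):.4f}")
```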
Additionally, by design, they cannot integrate a final loss directly. The nonparametric Wasserstein can be computed iteratively with the Sinkhorn algorithm, and can integrate the final loss in its base distance, but requires exponentially many samples to estimate. Maximum Mean Discrepancy has good sample complexity, can be estimated analytically, and can integrate the final loss in its base distance, but it is known to lack discriminative power for generic kernels, as discussed below. Parametric adversarial divergences have reasonable sample complexities, can be computed iteratively with SGD, and can integrate the final loss in the choice of class of discriminators. DISPLAYFORM5 In particular, the parametric Wasserstein has the additional possibility of integrating the final loss into the base distance. In the optimization problems FORMULA4 and FORMULA5, whenever f is additionally constrained to be in a given parametric family, the associated divergence will be termed a parametric adversarial divergence. In practice, that family will typically be specified as a neural network architecture, so in this work we will use the term neural adversarial divergences interchangeably with the slightly more generic parametric adversarial divergence. For instance, the parametric adversarial Jensen-Shannon optimized in GANs corresponds to with specific ψ BID33, while the parametric adversarial Wasserstein optimized in WGANs corresponds to where f is a neural network. See BID26 for interpretations and a review and interpretation of other divergences like the Wasserstein with entropic smoothing BID3, energy-based distances BID24 which can be seen as adversarial MMD, and the WGAN-GP BID18 objective. We argue that parametric adversarial divergences have many good properties which make them attractive for generative modeling. In this section, we compare them to traditional divergences in terms of sample complexity and computational cost (Section 3.1), and ability to integrate criteria related to the final task (Section 3.2). We also discuss the shortcomings of combining the KL-divergence with generators that have a special structure in Section 3.3. We refer the reader to the Appendix for additional interesting properties of parametric adversarial divergences: the optimization and stability issues are discussed in Appendix A.1, the fact that parametric adversarial divergences only make the assumption that one can sample from the generative model, and provide useful learning signal even when their nonparametric counterparts are not well-defined, is discussed in Appendix A.2. Since we want to learn from finite data, we would like to know how well empirical estimates of a divergence approximate the population divergence. In other words, we want to control the sample complexity, that is, how many samples n do we need to have with high probability that |Div(p||q) − Div(p n || q n)| ≤, where > 0, and p n, q n are empirical distributions associated with p, q. Sample complexities for adversarial and traditional divergences are summarized in Table 1.For explicit models which allow evaluating the density q θ (x), one could use Monte-Carlo to evaluate the f-divergence with sample complexity n = O(1/ 2), according to the Central-Limit theorem. For implicit models, there is no one good way of estimating f-divergences from samples. 
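To make the explicit-model case concrete, here is a minimal Monte-Carlo estimate of the KL divergence between two one-dimensional Gaussians, for which both densities (and a closed form to compare against) are available; the particular means and variances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal_pdf(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

mu_p, s_p = 0.0, 1.0
mu_q, s_q = 1.0, 2.0

# Monte-Carlo estimate of KL(p || q) = E_{x ~ p}[log p(x) - log q(x)]; this only
# works because both log-densities can be evaluated (explicit models).
for n in (100, 10_000, 1_000_000):
    x = rng.normal(mu_p, s_p, size=n)
    kl_mc = np.mean(log_normal_pdf(x, mu_p, s_p) - log_normal_pdf(x, mu_q, s_q))
    print(f"n = {n:>9}: KL estimate = {kl_mc:.4f}")

# Closed form for two univariate Gaussians, for reference.
kl_exact = np.log(s_q / s_p) + (s_p ** 2 + (mu_p - mu_q) ** 2) / (2 * s_q ** 2) - 0.5
print(f"exact KL = {kl_exact:.4f}")
```

For an implicit model the log q_θ(x) term cannot be evaluated, so this simple estimator is unavailable.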
There are some techniques for it BID32 BID30 BID43, but they all make additional assumptions about the underlying densities (such as smoothness), or they solve the dual in a restricted family, such as a RKHS, which makes the divergences no longer f-divergences. Parametric adversarial divergences can be formulated as a classification/regression problem with a loss depending on the specific adversarial divergence. Therefore, they have a reasonable sample complexity of O(p/ 2), where p is the VC-dimension/number of parameters of the discriminator BID2, and can be solved using classic stochastic gradient methods. A straightforward nonparametric estimator of the Wasserstein is simply the Wasserstein distance between the empirical distributions p n and q n, for which smoothed versions can be computed in O(n 2) using specialized algorithms such as Sinkhorn's algorithm BID11 or iterative Bregman projections BID7. However, this empirical Wasserstein estimator has sample complexity n = O(1/ d+1) which is exponential in the number of dimensions (see , Corollary 3.5). Thus the empirical Wasserstein is not a viable estimator in high-dimensions. Maximum Mean Discrepancy admits an estimator with sample complexity n = O(1/ 2), which can be computed analytically in O(n 2). More details are given in the original MMD paper BID16. One should note that MMD depends fundamentally on the choice of kernel. As the sample complexity is independent of the dimension of the data, one might believe that the MMD estimator behaves well in high dimensions. However, it was experimentally illustrated in Dziugaite et al. FORMULA0 that with generic kernels like RBF, MMD performs poorly for MNIST and Toronto face datasets, as the generated images have many artifacts and are clearly distinguishable from the training dataset. See Section 3.2 for more details on the choice of kernel. It was also shown theoretically in BID40 that the power of the MMD statistical test can drop polynomially with increasing dimension, which means that with generic kernels, MMD might be unable to discriminate well between high-dimensional generated and training distributions. Note that comparing divergences in terms of sample complexity can give good insights on what is a good divergence, but should be taken with a grain of salt as well. On the one hand, the sample complexities we give are upper-bounds, which means the estimators could potentially converge faster. On the other hand, one might not need a very good estimator of the divergence in order to learn in some cases. This is illustrated in our experiments with the empirical Wasserstein (Section 6) which has bad sample complexity but yields reasonable . In Section 4, we will argue that in structured prediction, optimizing for the right task losses is more meaningful and can make learning considerably easier. Similarly in generative modeling, we would like divergences to integrate criteria that characterize the final task. We discuss that although not all divergences can easily integrate final task-related criteria, adversarial divergences provide a way to do so. Pure f-divergences cannot directly integrate any notion of final task, 6 at least not without tweaking the generator. The Wasserstein distance and MMD are respectively induced by a base metric d(x, x) and a kernel K(x, x). The metric and kernel give us the opportunity to specify a task by letting us express a (subjective) notion of similarity. 
However, the metric and kernel generally have to be defined by hand, as there is no obvious way to learn them end-to-end. For instance, BID14 learn to generate MNIST by minimizing a smooth Wasserstein based on the L2-distance, while Dziugaite et al. FORMULA0; BID25 also learn to generate MNIST by minimizing the MMD induced by kernels obtained externally: either generic kernels based on the L2-distance or on autoencoder features. However, the seems to be limited to simple datasets. Recently there has been a surge of interest in combining MMD with kernel learning, with convincing on LSUN, CelebA and ImageNet images. BID31 learn a feature map and try to match its mean and covariance, BID24 learn kernels end-to-end, while BID6 do end-to-end learning of energy distances, which are closely related to MMD.Parametric adversarial divergences are defined with respect to a parametrized class of discriminators, thus changing properties of the discriminator is a primary way to affect the associated divergence. The form of the discriminator may determine what aspects the divergence will be sensitive or blind to. For instance using a convolutional network as the discriminator may render the divergence insen-sitive to small image translations. Additionally, the parametric adversarial Wasserstein distance can also incorporate a custom metric. In Section 6 we give interpretations and experiments to assess the relation between the discriminator and the divergence. In some cases, imposing a certain structure on the generator (e.g. a Gaussian or Laplacian observation model) yields a Kullback-Leibler divergence which involves some form of component-wise distance between samples, reminiscent of the Hamming loss (see Section 4.3) used in structured prediction. However, doing maximum likelihood on generators having an imposed special structure can have drawbacks which we detail here. For instance, the generative model of a typical variational autoencoder can be seen as an infinite mixture of Gaussians BID19. The loglikelihood thus involves a "reconstruction loss", a pixel-wise L2 distance between images analogous to the Hamming loss, which makes the training relatively easy and very stable. However, the Gaussian is partly responsible for the VAE's inability to learn sharp distributions. Indeed it is a known problem that VAEs produce blurry samples, in fact even if the approximate posterior matches exactly the true posterior, which would correspond to the evidence lower-bound being tight, the output of the VAE would still be blurry. Other examples are autoregressive models such as recurrent neural networks BID28 which factorize naturally as log q θ (x) = i log q θ (x i |x 1, .., x i−1), and PixelCNNs BID34. Training autoregressive models using maximum likelihood in teacher-forcing BID21: each ground-truth symbol is fed to the RNN, which then has to maximize the likelihood of the next symbol. Since teacher-forcing induces a lot of supervision, it is possible to learn using maximumlikelihood. Once again, there are similarities with the Hamming loss because each predicted symbol is compared with its associated ground truth symbol. However, among other problems, there is a discrepancy between training and generation. Sampling from q θ would require iteratively sampling each symbol and feeding it back to the RNN, giving the potential to accumulate errors, which is not something that is accounted for during training. 
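The pixel-wise nature of the Gaussian reconstruction term can be made concrete with a few lines of arithmetic. In the hypothetical comparison below, a sharp reconstruction that misplaces a handful of pixels is penalized more heavily than a uniformly blurry one; the binary "image", the number of flipped pixels and the noise scale are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_nll(x, mu, sigma):
    """Negative log-likelihood of x under an isotropic Gaussian decoder N(mu, sigma^2 I):
    a constant plus a pixel-wise squared error scaled by 1 / (2 sigma^2)."""
    d = x.size
    return 0.5 * d * np.log(2 * np.pi * sigma ** 2) + np.sum((x - mu) ** 2) / (2 * sigma ** 2)

# Hypothetical 100-pixel binary "image" and two candidate reconstructions:
# a sharp one that misplaces a few pixels, and a blurry one pulled towards gray.
x = (rng.random(100) > 0.5).astype(float)
sharp = x.copy()
sharp[:6] = 1.0 - sharp[:6]              # sharp, but 6 pixels are flipped
blurry = 0.7 * x + 0.15                  # every pixel hedged towards 0.5

sigma = 0.3
print("NLL(sharp)  =", round(gaussian_nll(x, sharp, sigma), 2))
print("NLL(blurry) =", round(gaussian_nll(x, blurry, sigma), 2))
# The blurry reconstruction attains the lower (better) likelihood-based score,
# even though a human would usually prefer the sharp one.
```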
See BID22 and references therein for more principled approaches to sequence prediction with autoregressive models. In this section, we try to provide insights in order to design the best adversarial divergence for our final task. After establishing the relationship between structured prediction and generative adversarial networks, we review theoretical on the choice of objectives in structured prediction, and discuss their interpretation in generative modeling. We frame the relationship of structured prediction and GANs using the framework of statistical decision theory. Assume that we are in a world with a set P of possible states and that we have a set A of actions. When the world is in the state p ∈ P, the cost of playing action a ∈ A is the (statistical) task loss L p (a). The goal is to play the action minimizing the task loss. Generative models with Maximum Likelihood. The set P of possible states is the set of available distributions {p} for the data x. The set of actions A is the set of possible distributions{q θ ; θ ∈ Θ} for the model and the task loss is the negative log-likelihood, DISPLAYFORM0 Structured prediction. The set P of possible states is the set of available distribution {p} for (x, y). The set of actions A is the set of prediction functions {h θ ; θ ∈ Θ} and the task loss is the generalization error: DISPLAYFORM1 where: Y × Y × X → R is a structured loss function. the minimization of a statistical task loss. One starts from a useful but illdefined final task, and devises criteria that characterize good solutions. Such criteria are integrated into the statistical task loss, which is the generalization error in structured prediction, and the adversarial divergence in the GAN framework. The hope is that minimizing the statistical task loss effectively solves the final task. GANs. The set P of possible states is the set of available distributions {p} for the data x. The set of actions A is the set of distributions {q θ ; θ ∈ Θ} that the generator can learn, and the task loss is the adversarial divergence DISPLAYFORM2 Under this unified framework, the prediction function h θ is analogous to the generative model q θ, while the choice of the right structured loss can be related to ∆ and to the choice of the discriminator family F which will induce a good adversarial divergence. We will further develop this analogy in Section 4.2. As discussed in the introduction, structured prediction and data generation involve a notion of final task which is at the same time crucial and not well defined. Nevertheless, for both we can try to define criteria which characterize good solutions. We would like the statistical task loss (introduced in Section 4.1), which corresponds to the generalization error in structured prediction, and the adversarial divergence in generative modeling, to incorporate task-related criteria. One way to do that is to choose a structured loss that reflects the criteria of interest, or analogously to choose a class of discriminators, like a CNN architecture, such that the ing adversarial divergence has good invariance properties. The whole process of building statistical task losses adapted to a final task, using the right structured losses or discriminators, is represented in FIG0.For many prediction problems, the structured prediction community has engineered structured loss functions which induce properties of interest on the learned predictors. 
In machine translation, a commonly considered property of interest is for candidate translations to contain many words in common with the ground-truth; this has given rise to the BLEU score which counts the percentage of candidate words appearing in the ground truth. In the context of image segmentation, BID35 have compared various structured loss functions which induces different properties on the predicted mask. In the same vein as structured loss functions, adversarial divergences can be built to induce certain properties on the generated data. We are more concerned with generating realistic samples than having samples which are very similar with the training set; we actually want to extrapolate some properties of the true distribution from the training set. For instance, in the DCGAN, the discriminator has a convolutional architecture, which makes it potentially robust to small deformations that would not affect the visual quality of the samples significantly, while still making it able to detect blurry samples, which is aligned with our objective of generating realistic samples. Intuition on the Flexibility of Losses. In this section we get insights from the convergence of in structured prediction. They show in a specific setting that some "weaker" structured loss functions are easier to learn than some stronger loss functions. In some sense, their formalize the intuition in generative modeling that learning with "weaker" divergences is easier ) and more intuitive BID26 than stronger divergences. In structured prediction, strong losses such as the 0-1 loss are hard to learn with because they do not give any flexibility on the prediction; the 0-1 loss only tells us whether a prediction is correct or not, and consequently does not give any clue about how close the prediction is to the ground truth. To get enough learning signal, we roughly need as many training examples as the number of possible outputs |Y|, which is exponential in the dimension of y and thus inefficient. Conversely, weaker losses like the Hamming loss have more flexibility; because they tell us how close a prediction is to the ground truth, less examples are needed to generalize well. The theoretical proved by formalize that intuition in a specific setting. Theory to Back the Intuition. In a non-parametric setting (details and limitations in Appendix B), formalize the intuition that weaker structured loss functions are easier to optimize. Specifically, they compare the 0-1 loss 0−1 (y, y) =1 {y = y} to the Hamming lossHam (y, y) = 1 T T t=1 1{y t = y t}, when y decomposes as T = log 2 |Y| binary variables (y t) 1≤t≤T. They derive a worst case sample complexity needed to obtain a fixed error > 0. For the 0-1 loss, they obtain a sample complexity of O(|Y|/ 2) which is exponential in the dimension of y. However, for the Hamming loss, under certain constraints (see, section on exact calibration functions) they obtain a much better sample complexity of O(log 2 |Y|/ 2) which is polynomial in the number of dimensions, whenever certain constraints are imposed on the score function. Thus their suggest that choosing the right structured loss, like the weaker Hamming loss, might make training exponentially faster. 
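The difference in learning signal is easy to see on a toy output space. In the hypothetical example below, three candidate predictions are all equally wrong under the 0-1 loss but are clearly ranked by the Hamming loss.

```python
import numpy as np

def zero_one_loss(y_pred, y_true):
    return float(not np.array_equal(y_pred, y_true))

def hamming_loss(y_pred, y_true):
    return float(np.mean(y_pred != y_true))

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
candidates = {
    "one bit wrong": np.array([1, 0, 1, 1, 0, 1, 0, 1]),
    "half wrong":    np.array([0, 1, 0, 0, 0, 1, 0, 0]),
    "all wrong":     1 - y_true,
}
for name, y_pred in candidates.items():
    print(f"{name:14s} 0-1 loss: {zero_one_loss(y_pred, y_true):.0f}   "
          f"Hamming loss: {hamming_loss(y_pred, y_true):.3f}")
```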
Insights and Relation with Adversarial Divergences.'s theoretical confirm our intuition that weaker losses are easier to optimize, and quantify in a specific setting how much harder it is to learn with strong structured loss functions, like the 0-1 loss, than with weaker ones, like the Hamming loss (here, exponentially harder). Under the framework of statistical decision theory (introduced Section 4.1), their can be related to analogous in generative modeling BID26 showing that it can be easier to learn with weaker divergences than with stronger ones. In particular, one of their arguments is that distributions with disjoint support can be compared in weaker topologies like the the one induced by the Wasserstein but not in stronger ones like the the one induced by the Jensen-Shannon. Closest to our work are the following two papers. BID2 argue that analyzing GANs with a nonparametric (optimal discriminator) view does not really make sense, because the usual nonparametric divergences considered have bad sample complexity. They also prove sample complexities for parametric divergences. BID26 prove under some conditions that globally minimizing a neural divergence is equivalent to matching all moments that can be represented within the discriminator family. They unify parametric divergences with nonparametric divergences and introduce the notion of strong and weak divergence. However, both those works do not attempt to study the meaning and practical properties of parametric divergences. In our work, we start by introducing the notion of final task, and then discuss why parametric divergences can be good task losses with respect to usual final tasks. We also perform experiments to determine properties of some parametric divergences, such as invariance, ability to enforce constraints and properties of interest, as well as the difference with their nonparametric counterparts. Finally, we unify structured prediction and generative modeling, which could give a new perspective to the community. The following papers are also related to our work because of one of the following aspects: unifying divergences, analyzing their statistical properties, giving other interpretations of generative modeling, improving GANs, criticizing maximum-likelihood as a objective for generative modeling, and other reasons. Before the first GAN paper, BID45 unify traditional IPMs, analyze their statistical properties, and propose to view them as classification problems. Similarly, BID41 show that computing a divergence can be formulated as a classification problem. Later, BID33 generalize the GAN objective to any adversarial f-divergence. However, the first papers to actually study the effect of restricting the discriminator to be a neural network instead of any function are the MMD-GAN papers: BID25; Dziugaite et al. FORMULA0; BID24; BID31 and BID6 who give an interpretation of their Figure 2: Images generated by the network after training with the Sinkorn-Autodiff algorithm on MNIST dataset (left) and CIFAR-10 dataset (right). One can observe than although the network succeeds in learning MNIST, it is unable to produce convincing and diverse samples on the more complex CIFAR-10. energy distance framework in terms of moment matching. BID29 give many interpretations of generative modeling, including moment-matching, divergence minimization, and density ratio matching. On the other hand, work has been done to better understand the GAN objective in order to improve its stability BID44. 
Subsequently, introduce the adversarial Wasserstein distance which makes training much more stable, and BID18 improve the objective to make it more practical. Regarding model evaluation, BID47 contains an excellent discussion on the evaluation of generative models, they show in particular that log-likelihood is not a good proxy for the visual quality of samples. compare parametric adversarial divergence and likelihood objectives in the special case of RealNVP, a generator with explicit density, and obtain better visual with the adversarial divergence. Concerning theoretical understanding of learning in structured prediction, some recent papers are devoted to theoretical understanding of structured prediction such as BID10 and BID27 which propose generalization error bounds in the same vein as but with data dependencies. One contribution of the present paper is to have taken these from the prior literature and put them in perspective in an attempt to provide a more principled view of the nature and usefulness of parametric divergences, in comparison to traditional divergences. To the best of our knowledge, we are also the first to make a link between the generalization error of structured prediction and the adversarial divergence in generative modeling. Importance of Sample Complexity. Since the sample complexity of the nonparametric Wasserstein is exponential in the dimension (Section 3.1), we check experimentally whether training a generator to minimize the nonparametric Wasserstein distance fails in high dimensions. We implement the Sinkhorn-AutoDiff algorithm BID14 to compute the entropy-regularized L2-Wasserstein distance between minibatches of training images and generated images. Figure 2 shows generated samples after training with the Sinkhorn-Autodiff algorithm on both MNIST and CIFAR-10 dataset. On MNIST, the network manages to produce decent but blurry images. However, on CIFAR-10, which is a much more complex dataset, the network fails to produce meaningful samples, which would suggest that indeed the nonparametric Wasserstein should not be used for generative modeling when the (effective) dimensionality is high. This is to be contrasted with the recent successes in image generation of the parametric Wasserstein BID18, which also has much better sample complexity than the nonparametric Wasserstein. Robustness to Transformations. Intuitively, small rotations should not significantly affect the realism of images, while additive noise should. We study the robustness of various parametric adversarial divergences to rotations and additive noise by plotting the evolution of the divergence between MNIST and rotated/noisy versions of it, as a function of the amplitude of transformation. We consider three discriminators (linear, 1-layer-dense, 2-layer-cnn) combined with two formulations, parametric Jensen-Shannon (ParametricJS) and parametric Wasserstein (ParametricW). Ideally, good divergences should vary smoothly (be robust) with respect to the amplitude of the transformation. For rotations FIG2 ) and all discriminators except the linear, ParametricJS saturates at its maximal value, even for small values of rotation, whereas the Wasserstein distance varies much more smoothly, which is consistent with the example given by. 
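A toy version of this comparison can be scripted directly. The sketch below estimates the parametric adversarial Jensen-Shannon with a linear discriminator and with a small MLP discriminator on synthetic Gaussian data in place of MNIST; the architectures, sample sizes and noise levels are illustrative, and the estimate is computed on the training samples themselves rather than on a held-out set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def parametric_js(x_p, x_q, discriminator):
    """GAN-style estimate (sup_D E_p[log D] + E_q[log(1 - D)] + log 4) / 2, with
    the supremum restricted to the given discriminator family."""
    X = np.vstack([x_p, x_q])
    y = np.concatenate([np.ones(len(x_p)), np.zeros(len(x_q))])
    discriminator.fit(X, y)
    d_p = discriminator.predict_proba(x_p)[:, 1]
    d_q = discriminator.predict_proba(x_q)[:, 1]
    value = np.mean(np.log(d_p + 1e-12)) + np.mean(np.log(1.0 - d_q + 1e-12))
    return 0.5 * (value + np.log(4.0))

n, dim = 4000, 10
x_p = rng.standard_normal((n, dim))                               # stand-in for the data
for noise in (0.0, 0.5, 1.0, 2.0, 4.0):
    # same distribution as x_p plus additive N(0, noise^2 I) corruption
    x_q = rng.standard_normal((n, dim)) * np.sqrt(1.0 + noise ** 2)
    lin = parametric_js(x_p, x_q, LogisticRegression(max_iter=2000))
    mlp = parametric_js(x_p, x_q, MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
    print(f"noise {noise}: JS with linear D = {lin:.3f}   JS with MLP D = {mlp:.3f}")
```

Even in this toy setting the estimated divergence clearly depends on the discriminator family, which is the point made next for the MNIST experiments.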
The fact that the linear ParametricJS does not saturate for rotations shows that the architecture of the discriminator has a significant effect on the induced parametric adversarial divergence, and confirms that there is a conceptual difference between the true JS and ParametricJS, and even among different ParametricJS. For additive Gaussian noise FIG2 ), the linear discriminator is unable to distinguish the two distributions (it only sees the means of the distributions), whereas more complex architectures like CNNs do. In that sense the linear discriminator is too weak for the task, or not strict enough BID26, which suggests that a better divergence involves trading off between robustness and strength. Learning High-dimensional Data. We collect Thin-8, a dataset of about 1500 handwritten images of the digit "8", with a very high resolution of 512 × 512, and augment them with elastic deformations. Because the pen strokes are relatively thin, we expect any pixel-wise distance to be uninformative, because the images are dominated by pixels, and because with high probability, any two "8' will intersect on no more than a little area. We train a convolutional VAE and a WGAN-GP BID18, henceforth simply denoted GAN, using nearly the same architectures (VAE decoder similar to GAN generator, VAE encoder similar to GAN discriminator), with 16 latent variables, on the following resolutions: 32 × 32, 128 × 128 and 512 × 512. Generated samples are shown in FIG3. Indeed, we observe that the VAE, trained to minimize the evidence lower bound on maximum-likelihood, fails to generate convincing samples in high-dimensions: they are blurry, pixel values are gray instead of being white, and some samples look like the average of many digits. On the contrary, the GAN can generate sharp and realistic samples even in 512 × 512. Our hypothesis is that the discriminator learns moments which are easier to match than it is to directly match the training set with maximum likelihood. Since we were able to perfectly generate high-resolution digits, an additional insight of our experiment is that the main difficulty in generating high-dimensional natural images (like ImageNet and LSUN bedrooms) resides not in high resolution itself, but in the intrinsic complexity of the scenes. Such complexity can be hidden in low resolution, which might explain recent successes in generating images in low resolution but not in higher ones. Learning Visual Hyperplanes. We design the visual hyperplane task to be able to compare VAEs and GANs quantitatively rather than simply inspecting the quality of their generated images. We create a new dataset by concatenating sets of 5 images from MNIST, such that those digits sum up to 25. We train a VAE and a WGAN-GP (henceforth simply denoted GAN) on this new dataset (we used 4504 combinations out of the 5631 possible combinations for training). Both model share the same architecture for generator network and use 200 latent variables. With the help of a MNIST classifier, we automatically recognize and sum up the digits in each generated sample. FIG4 shows the distributions of the sums of the digits generated by the VAE and GAN 7. We can see that the GAN distribution is more peaked and centered around the target 25, while the VAE distribution is less precise and not centered around the target. In that respect, the GAN was better than the VAE at capturing the particular aspects and constraints of the data distribution (summing up to 25). One, and 512×512 (right column). 
Note how the GAN samples are always crips and realistic across all resolutions, while the VAE samples tend to be blurry with gray pixel values in high-resolution. We can also observe some averaging artifacts in the top-right 512x512 VAE sample, which looks like the average of two "8". More samples can be found in Section C.2 of the Appendix. and Independent Baseline (gray). The latter draws digits independently according to their empirical marginal probabilities, which corresponds to fitting independent multinomial distributions over digits using maximum likelihood. WGAN-GP beats largely both VAE and Indepedent Baseline as it gives a sharper distribution centered in the target sum 25. possible explanation is that since training a classifier to recognize digits and sum them up is not hard in a supervised setting, it could also be relatively easy for a discriminator to enforce such a constraint. We gave arguments in favor of using adversarial divergences rather than traditional divergences for generative modeling, the most important of which being the ability to account for the final task. After linking structured prediction and generative modeling under the framework of statistical decision theory, we interpreted recent from structured prediction, and related them to the notions of strong and weak divergences. Moreover, viewing adversarial divergences as statistical task losses led us to believe that some adversarial divergences could be used as evaluation criteria in the future, replacing hand-crafted criteria which cannot usually be exhaustive. In some sense, we want to extrapolate a few desirable properties into a meaningful task loss. In the future we would like to investigate how to define meaningful evaluation criteria with minimal human intervention. In this section, we describe additional advantages and properties of parametric adversarial divergences. While adversarial divergences are learned and thus potentially much more powerful than traditional divergences, the fact that they are the solution to a hard, non-convex problem can make GANs unstable. Not all adversarial divergences are equally stable: claimed that the adversarial Wasserstein gives more meaningful learning signal than the adversarial Jensen-Shannon, in the sense that it correlates well with the quality of the samples, and is less prone to mode dropping. In Section 6 we will show experimentally on a simple setting that indeed the neural adversarial Wasserstein consistently give more meaningful learning signal than the neural adversarial JensenShannon, regardless of the discriminator architecture. Similarly to the WGAN, the MMD-GAN divergence BID24 was shown to correlate well with the quality of samples and to be robust to mode collapse. Recently, it was shown that neural adversarial divergences other than the Wasserstein can also be made stable by regularizing the discriminator properly BID20 BID42. Maximum-likelihood typically requires computing the density q θ (x), which is not possible for implicit models such as GANs, from which it is only possible to sample. On the other hand, parametric adversarial divergences can be estimated with reasonable sample complexity (see Section 3.1) only by sampling from the generator, without any assumption on the form of the generator. This is also true for MMD but generally not the case for the empirical Wasserstein, which has bad sample complexity as stated previously. 
Another issue of f-divergences such as the Kullback-Leibler and the Jensen-Shannon is that they are either not defined (Kullback-Leibler) or uninformative (JensenShannon) when p is not absolutely continuous w.r.t. q θ BID33, which makes them unusable for learning sharp distributions such as manifolds. On the other hand, some integral probability metrics, such as the Wasserstein, MMD, or their adversarial counterparts, are well defined for any distributions p and q θ. In fact, even though the Jensen-Shannon is uninformative for manifolds, the parametric adversarial Jensen-Shannon used in the original GANs BID15 still allows learning realistic samples, even though the process is unstable BID44. Although give a lot of insights, their must be taken with a grain of salt. In this section we point out the limitations of their theory. First, their analysis ignores the dependence on x and is non-parametric, which means that they consider the whole class of possible score functions for each given x. Additionally, they only consider convex consistent surrogate losses in their analysis, and they give upper bounds but not lower bounds on the sample complexity. It is possible that optimizing approximately-consistent surrogate losses instead of consistent ones, or making additional assumptions on the distribution of the data could yield better sample complexities. C EXPERIMENTAL Here, we compare the parametric adversarial divergences induced by three different discriminators (linear, dense, and CNN) under the WGAN-GP BID18 formulation. We consider one of the simplest non-trivial generators, in order to factor out optimization issues on the generator side. The model is a mixture of 100 Gaussians with zero-covariance. The model density is q θ (x) = 1 K z δ(x − x z), parametrized by prototypes θ = (x z) 1≤z≤K. The generative process consists in sampling a discrete random variable z ∈ {1, ..., K}, and returning the prototype x z.Learned prototypes (means of each Gaussian) are shown in Figure 6 and 7. The first observation is that the linear discriminator is too weak of a divergence: all prototypes only learn the mean of the training set. Now, the dense discriminator learns prototypes which sometimes look like digits, but are blurry or unrecognizable most the time. The samples from the CNN discriminator are never blurry and recognizable in the majority of cases. Our confirms that indeed, even for simplistic models like a mixture of Gaussians, using a CNN discriminator provides a better task loss for generative modeling of images. Figure 6: Some Prototypes learned using linear (left), dense (middle), and CNN discriminator (right). We observe that with linear discriminator, only the mean of the training set is learned, while using the dense discriminator yields blurry prototypes. Only using the CNN discriminator yields clear prototypes. All 100 prototypes can be found in
Parametric adversarial divergences implicitly define more meaningful task losses for generative modeling; we draw parallels with structured prediction to study the properties of these divergences and their ability to encode the task of interest.
350
scitldr
Experimental reproducibility and replicability are critical topics in machine learning. Authors have often raised concerns about the lack of both in scientific publications, in an effort to improve the quality of the field. Recently, the graph representation learning field has attracted the attention of a wide research community, which resulted in a large stream of works. As such, several Graph Neural Network models have been developed to effectively tackle graph classification. However, experimental procedures often lack rigor and are hardly reproducible. Motivated by this, we provide an overview of common practices that should be avoided to fairly compare with the state of the art. To counter this troubling trend, we ran more than 47000 experiments in a controlled and uniform framework to re-evaluate five popular models across nine common benchmarks. Moreover, by comparing GNNs with structure-agnostic baselines we provide convincing evidence that, on some datasets, structural information has not been exploited yet. We believe that this work can contribute to the development of the graph learning field, by providing a much needed grounding for rigorous evaluations of graph classification models. Over the years, researchers have raised concerns about several flaws in scholarship, such as experimental reproducibility and replicability in machine learning and science in general (National Academies of). These issues are not easy to address, as a collective effort is required to avoid bad practices. Examples include the ambiguity of experimental procedures, the impossibility of reproducing results, and the improper comparison of machine learning models. As a result, it can be difficult to uniformly assess the effectiveness of one method against another. This work investigates these issues for the graph representation learning field, by providing a uniform and rigorous benchmarking of state-of-the-art models. Graph Neural Networks (GNNs) have recently become the standard tool for machine learning on graphs. These architectures effectively combine node features and graph topology to build distributed node representations. GNNs can be used to solve node classification and link prediction tasks, or they can be applied to downstream graph classification. In the literature, such models are usually evaluated on chemical and social domains. Given their appeal, an ever increasing number of GNNs is being developed. However, despite the theoretical advancements reached by the latest contributions in the field, we find that the experimental settings are in many cases ambiguous or not reproducible. Some of the most common reproducibility problems we encounter in this field concern hyper-parameter selection and the correct usage of data splits for model selection versus model assessment. Moreover, the evaluation code is sometimes missing or incomplete, and experiments are not standardized across different works in terms of node and edge features. These issues easily generate doubts and confusion among practitioners who need a fully transparent and reproducible experimental setting. As a matter of fact, the evaluation of a model goes through two distinct phases, namely model selection on the validation set and model assessment on the test set. Clearly, failing to keep these phases well separated can lead to over-optimistic and biased estimates of the true performance of a model, making it hard for other researchers to present competitive results without following the same ambiguous evaluation procedures.
With this premise, our primary contribution is to provide the graph learning community with a fair performance comparison among GNN architectures, using a standardized and reproducible experimental environment. More in detail, we performed a large number of experiments within a rigorous model selection and assessment framework, in which all models were compared using the same features and the same data splits. Secondly, we investigate if and to what extent current GNN models can effectively exploit graph structure. To this end, we add two domain-specific and structure-agnostic baselines, whose purpose is to disentangle the contribution of structural information from node features. Much to our surprise, we found out that these baselines can even perform better than GNNs on some datasets; this calls for moderation when reporting improvements that do not clearly outperform structure-agnostic competitors. Our last contribution is a study on the effect of node degrees as features in social datasets. Indeed, we show that providing the degree can be beneficial in terms of performances, and it has also implications in the number of GNN layers needed to reach good . We publicly release code and dataset splits to reproduce our , in order to allow other researchers to carry out rigorous evaluations with minimum additional effort 1. Disclaimer Before delving into the work, we would like to clarify that this work does not aim at pinpointing the best (or worst) performing GNN, nor it disavows the effort researchers have put in the development of these models. Rather, it is intended to be an attempt to set up a standardized and uniform evaluation framework for GNNs, such that future contributions can be compared fairly and objectively with existing architectures. Graph Neural Networks At the core of GNNs is the idea to compute a state for each node in a graph, which is iteratively updated according to the state of neighboring nodes. Thanks to layering or recursive schemes, these models propagate information and construct node representations that can be "aware" of the broader graph structure. GNNs have recently gained popularity because they can efficiently and automatically extract relevant features from a graph; in the past, the most popular way to deal with complex structures was to use kernel functions to compute task-agnostic features. However, such kernels are non-adaptive and typically computationally expensive, which makes GNNs even more appealing. Even though in this work we specifically focus on architectures designed for graph classification, all GNNs share the notion of "convolution" over node neighborhoods, as a generalization of convolution on grids. For example, GraphSAGE first performs sum, mean or max-pooling neighborhood aggregation, and then it updates the node representation applying a linear projection on top of the convolution. It also relies on a neighborhood sampling scheme to keep computational complexity constant. Instead, Graph Isomorphism Network (GIN) builds upon the limitations of GraphSAGE, extending it with arbitrary aggregation functions on multi-sets. The model is proven to be as theoretically powerful as the Weisfeiler-Lehman test of graph isomorphism. Very recently, gave an upper bound to the number of hidden units needed to learn permutation-invariant functions over sets and multi-sets. Differently from the above methods, Edge-Conditioned Convolution (ECC) learns a different parameter for each edge label. 
Therefore, neighbor aggregation is weighted according to specific edge parameters. Finally, Deep Graph Convolutional Neural Network (DGCNN) proposes a convolutional layer similar to the formulation of. Some models also exploit a pooling scheme, which is applied after convolutional layers in order to reduce the size of a graph. For example, the pooling scheme of ECC coarsens graphs through a differentiable pooling map that can be pre-computed. Similarly, DiffPool proposes an adaptive pooling mechanism that collapses nodes on the basis of a supervised criterion. In practice, DiffPool combines a differentiable graph encoder with its pooling strategy, so that the architecture is end-to-end trainable. Lastly, DGCNN differs from other works in that nodes are sorted and aligned by a specific algorithm called SortPool. Model evaluation The work of shares a similar purpose with our contribution. In particular, the authors compare different GNNs on node classification tasks, showing that are highly dependent on the particular train/validation/test split of choice, up to the point where changing splits leads to dramatically different performance rankings. Thus, they recommend to evaluate GNNs on multiple test splits to achieve a fair comparison. Even though we operate in a different setting (graph instead of node classification), we follow the authors' suggestions by evaluating models under a controlled and rigorous assessment framework. Finally, the work of criticizes a large number of neural recommender systems, most of which are not reproducible, showing that only one of them truly improves against a simple baseline. Here, we recap the risk assessment (also called model evaluation or model assessment) and model selection procedures, to clearly layout the experimental procedure followed in this paper. For space reasons, the overall procedure is visually summarized in Appendix A.1. The goal of risk assessment is to provide an estimate of the performance of a class of models. When a test set is not explicitly given, a common way to proceed is to use k-fold Cross Validation (CV) (; ;). k-fold CV uses k different training/test splits to estimate the generalization performance of a model; for each partition, an internal model selection procedure selects the hyper-parameters using the training data only. This way, test data is never used for model selection. As model selection is performed independently for each training/test split, we obtain different "best" hyper-parameter configurations; this is why we refer to the performance of a class of models. The goal of model selection, or hyper-parameter tuning, is to choose among a set of candidate hyperparameter configurations the one that works best on a specific validation set. If a validation set is not given, one can rely on a holdout training/validation split or an inner k-fold. Nevertheless, the key point to remember is that validation performances are biased estimates of the true generalization capabilities. Consequently, model selection are generally over-optimistic; this issue is thoroughly documented in. This is why the main contribution of this work is to clearly separate model selection and model assessment estimates, something that is lacking or ambiguous in the literature under consideration. To motivate our contribution, we follow the approach of and briefly review recent papers describing five different GNN models, highlighting problems in the experimental setups as well as reproducibility of . 
We emphasize that our observations are based solely on the contents of their paper and the available code 2. Suitable GNN works were selected according to the following criteria: i) performances obtained with 10-fold CV; ii) peer reviewed; iii) strong architectural differences; iv) popularity. In particular, we selected DGCNN, DiffPool , ECC , GIN and GraphSAGE . For a detailed description of each model we refer to their respective papers. Our criteria to assess quality of evaluation and reproducibility are: i) code for data preprocessing, model selection and assessment is provided; ii) data splits are provided; iii) data is split by means of a stratification technique, to preserve class proportions across all partitions; iv) of the 10-fold CV are reported correctly using standard deviations, and they refer to model evaluation (test sets) rather than model selection (validation sets). Table 1 summarizes our findings. Table 1: Criteria for reproducibility considered in this work and their compliance among considered models. (Y) indicates that the criterion is met, (N) indicates that the criterion is not satisfied, (A) indicates ambiguity (i.e. it is unclear whether the criteria is met or not), (-) indicates lack of information (i.e. no details are provided about the criteria). Note that GraphSAGE is excluded from this comparison, as it was not directly applied by authors to graph classification tasks. The authors evaluate the model on 10-fold CV. While the architecture is fixed for all dataset, learning rate and epochs are tuned using only one random CV fold, and then reused on all the other folds. While this practice is still acceptable, it may lead to sub-optimal performances. Nonetheless, the code to reproduce model selection is not available. Moreover, the authors run CV 10 times, and they report the average of the 10 final scores. As a , the variance of the provided estimates is reduced. However, the same procedure was not applied to the other competitors as well. Finally, CV data splits are correctly stratified and publicly available, making it possible to reproduce at least the evaluation experiments. DiffPool From both the paper and the provided code, it is unclear if reported are obtained on a test set rather than a validation set. Although the authors state that 10-fold CV is used, standard deviations of DiffPool and its competitors are not reported. Moreover, the authors affirm to have applied early stopping on the validation set to prevent overfitting; unfortunately, neither model selection code nor validation splits are available. Furthermore, according to the code, data is randomly split (without stratification) and no random seed is set, hence splits are different each time the code is executed. ECC The paper reports that ECC is evaluated on 10-fold CV, but do not include standard deviations. Similarly to DGCNN, hyper-parameters are fixed in advance, hence it is not clear if and how model selection has been performed. Importantly, there are no references in the code repository to data pre-processing, data stratification, data splitting, and model selection. GIN The authors correctly list all the hyper-parameters tuned. However, as stated explicitly in the paper and in the public review discussion, they report the validation accuracy of 10-fold CV. In other words, reported refer to model selection and not to model evaluation. The code for model selection is not provided. 
GraphSAGE The original paper does not test this model on graph classification datasets, but GraphSAGE is often used in other papers as a strong baseline. It follows that GraphSAGE on graph classification should be accompanied by the code to reproduce the experiments. Despite that, the two works which report of GraphSAGE (DiffPool and GIN) fail to do so. Summary Our analysis reveals that GNN works rarely comply with good machine learning practices as regards the quality of evaluation and reproducibility of . This motivates the need to re-evaluate all models within a rigorous, reproducible and fair environment. In this section we detail our main experiment, in which we re-evaluate the above-mentioned models on 9 datasets (4 chemical, 5 social), using a model selection and assessment framework that closely follows the rigorous practices described in Section 3. In addition, we implement two baselines whose purpose is to understand the extent to which GNNs are able to exploit structural information. All models have been implemented by means of the Pytorch Geometrics library , which provides graph pre-processing routines and makes the definition of graph convolution easier to implement. We sometimes found discrepancies between papers and related code; in such cases, we complied with the specifications in the paper. Because GraphSAGE was not applied to graph classification in the original work, we opted for a max-pooling global aggregation function to classify graph instances; further, we do not use the sampled neighborhood aggregation scheme defined in , in order to allow nodes to have access to their whole neighborhood. Datasets All graph datasets are publicly available and represent a relevant subset of those most frequently used in literature to compare GNNs. Some collect molecular graphs, while others contain social graphs. In particular, we used D&D , PROTEINS , NCI1 and ENZYMES for binary and multi-class classification of chemical compounds, whereas IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, REDDIT-5K and COLLAB are social datasets. Dataset statistics are reported in Table A.2. Features In GNN literature, it is common practice to augment node descriptors with structural features. For example, DiffPool adds the degree and clustering coefficient to each node feature vector, whereas GIN adds a one-hot representation of node degrees. The latter choice trades off an improvement in performances (due to injectivity of the first sum) with the inability to generalize to graphs with arbitrary node degree. In general, good experimental practices suggest that all models should be consistently compared to the same input representations. This is why we re-evaluate all models using the same node features. In particular, we use one common setting for the chemical domain and two alternative settings as regards the social domain. As regards the chemical domain, nodes are labeled with a one-hot encoding of their atom type, though on ENZYMES we follow the literature and use 18 additional features available. As regards social graphs, whose nodes do not have features, we use either an uninformative feature for all nodes or the node degree. As such, we are able to reason about the effectiveness of the structural inductive bias imposed by the model; that is if the model is able to implicitly learn structural features or not. The effect of adding structural features to general machine learning models for graphs has been investigated in; here, we focus on the impact of node degrees on performances for social datasets. 
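As a concrete illustration of the two feature settings just described for social graphs (an uninformative constant feature versus the node degree), a minimal sketch in plain PyTorch might look as follows; the function name and tensor layout are ours and not part of the released library:

```python
import torch

def add_node_features(edge_index, num_nodes, use_degree=True):
    """Build node features for a featureless (social) graph.

    edge_index: LongTensor of shape [2, num_edges] listing directed edges
    (both directions stored for undirected graphs).
    Returns a [num_nodes, 1] feature matrix: either the node degree or an
    uninformative constant feature, mirroring the two settings above.
    """
    if use_degree:
        # Out-degree of every node, computed from the source row of edge_index.
        deg = torch.bincount(edge_index[0], minlength=num_nodes).float()
        return deg.unsqueeze(-1)
    # Uninformative setting: the same constant for every node.
    return torch.ones(num_nodes, 1)
```

A one-hot encoding of the degree, as used by GIN, can be obtained by bucketizing `deg`, at the cost of not generalizing to graphs with unseen node degrees.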
Baselines We adopt two distinct baselines, one for chemical and one for social datasets. On all chemical datasets but for ENZYMES, we follow; and implement the Molecular Fingerprint technique, which first applies global sum pooling (i.e., counts the occurrences of atom types in the graph by summing the features of all nodes in the graph together) and then applies a single-layer MLP with ReLU activations. On social domains and ENZYMES (due to the presence of additional features), we take inspiration from the work of to learn permutation-invariant functions over sets of nodes: first, we apply a single-layer MLP on top of node features, followed by global sum pooling and another singlelayer MLP for classification. Note that both baselines do not leverage graph topology. Using these baselines as a reference is of fundamental importance for future works, as they can provide feedback on the effectiveness of GNNs on a specific dataset. As a matter of fact, if GNN performances are close to the ones of a structure-agnostic baseline, one can draw two possible : the task does not need topological information to be effectively solved, or the GNN is not exploiting graph structure adequately. While the former can be verified through domain-specific human expertise, the second is more difficult to assess, as multiple factors come into play such as the amount of training data, the structural inductive bias imposed by the architecture and the hyper-parameters used for model selection. Nevertheless, significant improvements with respect to these baselines are a strong indicator that graph topology has been exploited. Therefore, structure-agnostic baselines become vital to understand if and how a model can be improved. Table 2: Pseudo-code for model assessment (left) and model selection (right). In Algorithm 1, "Select" refers to Algorithm 2, whereas "Train" and "Eval" represent training and inference phases, respectively. After each model selection, the best configuration best k is used to evaluate the external test fold. Performances are averaged across R training runs, where R in our case is set to 3. for r ← 1,..., R do Experimental Setting Our experimental approach is to use a 10-fold CV for model assessment and an inner holdout technique with a 90%/10% training/validation split for model selection. After each model selection, we train three times on the whole training fold, holding out a random fraction (10%) of the data to perform early stopping. These three separate runs are needed to smooth the effect of unfavorable random weight initialization on test performances. The final test fold score is obtained as the mean of these three runs; Table 2 reports the pseudo-code of the entire evaluation process. To be consistent with literature, we implement early stopping with patience parameter n, where training stops if n epochs have passed without improvement on the validation set. A high value of n can favor model selection by making it less sensitive to fluctuations in the validation score at the cost of additional computation. Importantly, all data partitions have been pre-computed, so that models are selected and evaluated on the same data splits. Moreover, all data splits are stratified, i.e., class proportions are preserved inside each k-fold split as well as in the holdout splits used for model selection. Hyper-parameters Hyper-parameter tuning is performed via grid search. For the sake of conciseness, we list all hyper-parameters in Section A.4. 
Notice that we always include those used by other authors in their respective papers. We select the number of convolutional layers, the embedding space dimension, the learning rate, and the criterion for early stopping (either based on the validation accuracy or validation loss) for all models. Depending on the model, we also selected regularization terms, dropout, and other model-specific parameters. Computational considerations Our experiments involve a large number of training runs. For all models, grid sizes range from 32 to 72 possible configurations, depending on the number of hyper-parameters to choose from. However, we tried to keep the upper bound on the number of parameters as similar as possible across models. The total effort required, in terms of the number of single training runs, to complete model assessment procedures exceeded 47000. Such a large number required extensive use of parallelism, both in CPU and GPU, to conduct the experiments in a reasonable amount of time. We emphasize that in some cases (e.g. ECC on social datasets), training on a single hyper-parameter configuration required more than 72 hours, which would have made the sequential exploration of one single grid last months. Therefore, due to the large number of experiments to conduct and to the computational resources available, we limited the time to complete a single training to 72 hours. [Table 3 excerpt, chemical datasets (mean accuracy ± std): DGCNN 76.6 ± 4.3, 76.4 ± 1.7, 72.9 ± 3.5, 38.9 ± 5.7; DiffPool 75.0 ± 3.5, 76.9 ± 1.9, 73.7 ± 3.5, 59.5 ± 5.6; ECC 72.6 ± 4.1, 76.2 ± 1.4, 72.3 ± 3.4, 29.5 ± 8.2; GIN 75.3 ± 2.9, 80.0 ± 1.4, 73.3 ± 4.0, 59.6 ± 4.5; GraphSAGE 72.9 ± 2.0, 76.0 ± 1.8, 73.0 ± 4.5, 58.2 ± 6.0; a stray value of 65.2 ± 6.4 belongs to a preceding row that was truncated.] Tables 3 and 4 show the results of our experiments. Overall, GIN seems to be effective on social datasets. Importantly, we discover that on D&D, PROTEINS and ENZYMES none of the GNNs are able to improve over the baseline. On the contrary, on NCI1 the baseline is clearly outperformed: this suggests that the GNNs we analyzed can actually exploit the topological information of the graphs in this dataset. Moreover, we observe that an overly-parameterized baseline is not able to overfit the NCI1 training data completely. To see this, consider that a baseline with 10000 hidden units and no regularization reaches around 67% training accuracy, while GIN can easily overfit (≈ 100%) the training data. This indicates that structural information hugely affects the ability to fit the training set. On social datasets, we observe that adding node degrees as features is beneficial, but such an effect is more noticeable for REDDIT-BINARY, REDDIT-5K and COLLAB. Our results also show that structure-agnostic baselines are an essential tool to understand the effectiveness of GNNs and extract useful insights. As an example, since none of the GNNs surpasses the baseline on D&D, PROTEINS and ENZYMES, we argue that the state-of-the-art GNN models we analyzed are not able to fully exploit the structure on such datasets yet; indeed, in chemistry, structural features are known to correlate with molecular properties (van). For all these reasons, we suggest putting small performance gains on these datasets into the right perspective, at least until the baseline is clearly outperformed. Currently, small average fluctuations on these datasets are likely to be caused by other factors, such as random initializations, rather than a successful exploitation of the structure.
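For readers who want to reproduce this kind of protocol, the assessment scheme of Table 2 (outer 10-fold CV, inner holdout model selection, and R = 3 averaged final runs) can be sketched roughly as below; `train_fn` and `eval_fn` stand in for model training and inference, early stopping and the precomputed fixed splits are omitted for brevity, and the code is an illustration rather than the released implementation:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

def assess(labels, configs, train_fn, eval_fn, k=10, runs=3, seed=42):
    """Outer k-fold model assessment with an inner holdout model selection.

    labels: np.ndarray of class ids, one per graph.
    configs: list of candidate hyper-parameter configurations.
    """
    outer = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    fold_scores = []
    for train_idx, test_idx in outer.split(np.zeros((len(labels), 1)), labels):
        # Inner 90%/10% stratified holdout, used only to pick hyper-parameters.
        tr_idx, val_idx = train_test_split(
            train_idx, test_size=0.1, stratify=labels[train_idx], random_state=seed)
        best = max(configs, key=lambda c: eval_fn(train_fn(c, tr_idx), val_idx))
        # Re-train R times on the whole training fold and average test scores.
        scores = [eval_fn(train_fn(best, train_idx), test_idx) for _ in range(runs)]
        fold_scores.append(np.mean(scores))
    return np.mean(fold_scores), np.std(fold_scores)
```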
In conclusion, we warmly recommend GNN practitioners to include baseline comparisons in future works, in order to better characterize the extent of their contributions. Based on our results, using node degrees as input features is almost always beneficial to increase performances on social datasets, sometimes by a large amount. As an example, degree information is sufficient for our baseline to improve performance by ≈ 15%, hence being competitive on many datasets; in particular, the baseline achieves the best performance on IMDB-BINARY. In contrast, adding node degrees is less relevant for most GNNs, since they can automatically infer such information from the structure. One notable exception is DGCNN, which explicitly needs node degrees to perform well on all datasets. Moreover, we observe that the ranking of all models, after the addition of the degrees, drastically changes; this raises the question about the impact of other structural features (such as clustering coefficient) on performances, which we leave to future works. However, one may also wonder whether the addition of the degree has an influence on the number of layers that are necessary to solve the task or not. We therefore investigated the matter by computing the median number of layers across the 10 different folds. We observed a general trend across models, with GraphSAGE being the only exception, where the addition of the degree reduces the number of layers needed by ≈ 1, as shown in Table A.3. This may be due to the fact that most architectures find it useful to compute the degree at the very first layer, as such information seems useful to the overall performances. [Table 4 excerpt, social datasets (mean accuracy ± std): 71.2 ± 3.9, 48.5 ± 3.3, 89.9 ± 1.9, 56.1 ± 1.7, 75.6 ± 2.3; GraphSAGE 68.8 ± 4.5, 47.6 ± 3.5, 84.3 ± 1.9, 50.0 ± 1. (row truncated).] [...] between the two estimates is usually consistent. In contrast, our average validation accuracies are always higher than or equal to our test results; this is expected, as discussed in Section 3.2. Finally, we emphasize once again that our results are i) obtained within the framework of a rigorous model selection and assessment protocol; ii) fair with respect to data splits and input features assigned to all competitors; iii) reproducible. In contrast, we saw in Section 4 how published results rely on unclear or poorly documented experimental settings. In this paper, we wanted to show how a rigorous empirical evaluation of GNNs can help design future experiments and better reason about the effectiveness of different architectural choices. To this aim, we highlighted ambiguities in the experimental settings of different papers, and we proposed a clear and reproducible procedure for future comparisons. We then provided a complete re-evaluation of five GNNs on nine datasets, which required a significant amount of time and computational resources. This uniform environment helped us reason about the role of structure, as we found that structure-agnostic baselines outperform GNNs on some chemical datasets, thus suggesting that structural properties have not been exploited yet. Moreover, we objectively analyzed the effect of the degree feature on performances and model selection in social datasets, unveiling an effect on the depth of GNNs. Finally, we provide the graph learning community with reliable and reproducible results to which GNN practitioners can compare their architectures. We hope that this work, along with the library we release, will prove useful to researchers and practitioners that want to compare GNNs in a more rigorous way.
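As a reference for the baseline comparisons recommended above, a minimal structure-agnostic model in the spirit of the Molecular Fingerprint baseline (global sum pooling of node features followed by a small MLP) could be sketched as follows; the class name and exact layer sizes are illustrative and do not reproduce the released code:

```python
import torch
import torch.nn as nn

class StructureAgnosticBaseline(nn.Module):
    """Sum node features over each graph, then classify with a small MLP."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, num_classes))

    def forward(self, x, batch):
        # x: [num_nodes, in_dim]; batch: [num_nodes] graph id of every node.
        num_graphs = int(batch.max()) + 1
        pooled = torch.zeros(num_graphs, x.size(1), device=x.device)
        pooled.index_add_(0, batch, x)  # global sum pooling per graph
        return self.mlp(pooled)
```

Because the graph topology never enters the forward pass, any consistent improvement of a GNN over such a model can be attributed to its use of structure.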
A.1 VISUALIZATION OF THE EVALUATION FRAMEWORK Figure 2: We give a visual representation of the evaluation framework. We apply an external k_out-fold CV to get an estimate of the generalization performance of a model, and we use a hold-out technique (bottom-left) to select the best hyper-parameters. For completeness, we show that it is also possible to apply an inner k_inn-fold CV (implementing a complete Nested Cross Validation), which obviously amounts to multiplying the computational costs of model selection by a factor k_inn. A.4 HYPER-PARAMETERS TABLE Table 7: Hyper-parameters used for model selection.
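For completeness, enumerating a hyper-parameter grid of the size used here (32 to 72 configurations per model) is straightforward; the keys and values below are purely illustrative and do not correspond to the actual grids of Table 7:

```python
from itertools import product

# Illustrative grid only; the actual values are listed in Table 7.
grid = {
    "num_layers":     [2, 3, 4],
    "hidden_dim":     [32, 64],
    "learning_rate":  [1e-2, 1e-3],
    "early_stopping": ["val_acc", "val_loss"],
}

configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))  # 3 * 2 * 2 * 2 = 24 candidate configurations
```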
We provide a rigorous comparison of different Graph Neural Networks for graph classification.
351
scitldr
Data augmentation (DA) is fundamental against overfitting in large convolutional neural networks, especially with a limited training dataset. In images, DA is usually based on heuristic transformations, like geometric or color transformations. Instead of using predefined transformations, our work learns data augmentation directly from the training data by learning to transform images with an encoder-decoder architecture combined with a spatial transformer network. The transformed images still belong to the same class, but are new, more complex samples for the classifier. Our experiments show that our approach is better than previous generative data augmentation methods, and comparable to predefined transformation methods when training an image classifier. Convolutional neural networks have shown impressive in visual recognition tasks. However, for proper training and good performance, they require large labeled datasets. If the amount of training data is small, data augmentation is an effective way to improve the final performance of the network BID6; BID9 ). In images, data augmentation (DA) consists of applying predefined transformations such as flip, rotations or color changes BID8; BID3 ). This approach provides consistent improvements when training a classifier. However, the required transformations are dataset dependent. For instance, flipping an image horizontally makes sense for natural images, but produces ambiguities on datasets of numbers (e.g. 2 and 5).Several recent studies investigate automatic DA learning as a method to avoid the manual selection of transformations. BID10 define a large set of transformations and learn how to combine them. This approach works well however, as it is based on predefined transformations, it prevents the model from finding other transformations that could be useful for the classifier. Alternatively, BID2 and BID12 generate new samples via a generative adversarial networks model (GAN) from the probability distribution of the data p(X), while BID0 learn the transformations of images, instead of generating images from scratch. These alternative methods show their limits when the number of training samples is low, given the difficulty of training a high-performing generative model with a reduced dataset. BID5 learn the natural transformations in a dataset by aligning pairs of samples from the same class. This approach produces good on easy datasets like MNIST however, it does not appear to be applicable to more complex datasets. Our work combines the advantages of generative models and transformation learning approaches in a single end-to-end network architecture. Our model is based on a conditional GAN architecture that learns to generate transformations of a given image that are useful for DA. In other words, instead of learning to generate samples from p(X), it learns to generate samples from the conditional distribution p(X|X), withX a reference image. As shown in FIG0, our approach combines a global transformation defined by an affine matrix with a more localized transformation defined by ensures that the transformed sample G(x i, z) is dissimilar from the input sample x i but similar to a sample x j from the same class. (b) Given an input image x i and a random noise vector z, our generator first performs a global transformation using a spatial transformer network followed by more localized transformations using a convolutional encoder-decoder network.a convolutional encoder-decoder architecture. 
The global transformations are learned by an adaptation of spatial transformer network (STN) BID7 ) so that the entire architecture is differentiable and can be learned with standard back-propagation. In its normal use, the purpose of STN is to learn how to transform the input data, so that the model becomes invariant to certain transformations. In contrast, our approach uses STN to generate augmented samples in an adversarial way. With the proposed model we show that, for optimal performance, it is important to jointly train the generator of the augmented samples with the classifier in an end-to-end fashion. By doing that, we can also add an adversarial loss between the generator and classifier such that the generated samples are difficult, or adversarial, for the classifier. To summarize, the contributions of this paper are: i) We propose a DA network that can automatically learn to generate augmented samples without expensive searches for the optimal data transformations; ii) Our model trains jointly with a classifier, is fully differentiable, trainable end-to-end, and can significantly improve the performance of any image classifier; iii) In low-data regime it outperforms models trained with strong predefined DA; iv) Finally, we notice that, for optimal performance, it is fundamental to train the model jointly with the image classifier. We propose a GAN based architecture that learns to augment training data for image classification. As shown in FIG0, this architecture involves four modules: a generator to transform an input image, two discriminators and a classifier to perform the final classification task. In FIG0 we show the structure of the generator. Instead of generating a new image, as in most GAN models, our generator learns to transform the input image. Our intuition is that learning an image transformation instead of learning a mapping from noise to image is an easier task in low data regime. Given an input image x i and a noise vector z, E N C converts them into a small representation that is passed to T to generate an image and noise dependent affine transformation (similar to spatial transformer networks STN) of the original image. This transformed image is then passed to a U-Net network BID11 ) represented by E N C and D EC. While in the original paper STN was used for removing invariances from the input data, the proposed model generates samples with transformations that can help to learn a better classifier. The generator is supported by two discriminators. The first one, the class discriminator D C, ensures that the generated sample belongs to the same class as the input sample. The second one, the dissimilarity discriminator D D, forces the transformed sample to be dissimilar to the input sample but similar to a sample from the same class. This is necessary to prevent the generator from learning the identity transformation. More details about the different networks and the loss functions used to train the model can be found in Appendices A and B. In this section, we present several experiments to better understand our model and compare it with the state-of-the-art in automatic DA. We test our approach on MNIST, Fashion-MNIST, SVHN, and CIFAR-10, both in full dataset and low-data regime. In this series of experiment, we compare the efficiency of the DA learned by our model to a heuristically chosen DA. We consider two different levels of DA. 
Light DA refers to random padding of 4 pixels on each side of the image, followed by a crop back to the original image dimensions. Strong DA adds to the previous transformations also rotation in range [-10, 10] degrees and scaling, with factor in range [0.5, 2]. For CIFAR10, DA also includes a horizontal image flip. In a first experiment, we compare the accuracy of the baseline classifier, the baseline with DA, and our DA model while increasing the number of training samples. In our model, the classifier is trained jointly with the generator. In FIG1, we notice that for very few samples the predefined DA is still better than our approach. This is probably because when the training dataset is too small, the generator produces poor samples that are not helpful for the classifier. When the number of samples increases, our approach obtains a much better accuracy than strong DA. For instance, at 4000 training samples, the baseline obtains an accuracy of 66%, the predefined DA approach 76%, and our model 80.5%, thus a net gain of 14 points compared to the baseline and 4 points compared to strong DA model. If we add more examples, the gap between our learned DA and the strong DA tends to reduce. With the full dataset we reach about a half point better than the strong DA.In a second experiment, we compare different types of DA on four datasets with a reduced number of samples. As shown in Tab. 1, our best model is always performing better than light DA and strong DA. This means that our DA model learns transformations that are more useful for the final classifier. Notice that in FMNIST light DA decreases performance of the final classifier. This suggests that DA is dataset dependent and transformations producing useful new samples in some domains might not be usable in others. In this experiment, we report the performance of our method on Joint training and Separate training and compare them with a Baseline model trained without DA. In Joint training the generator of augmented images and the classifier are trained simultaneously in an end-to-end training. In Separate training instead, the generator is first trained to generate augmented images, and these images are then used as DA to improve the classifier. In FIG2, we notice the different behavior of the two methods. In the early phase of training, at epoch 200, both Separate training (beige bar) and the Joint training (red bar) perform above 70%, whereas Baseline has a much lower accuracy. However, with additional training epochs, the performance of Separate training decreases while Baseline and Joint training accuracies increase. We believe that for good performance in DA it is not just about generating plausible augmented samples, but also about generating the right samples at the right moment, as in curriculum learning BID1 FORMULA0 ), our method obtains slightly lower accuracies. However, TANDA is based on the selection of multiple predefined transformations. This means that its learning is reduced to a set of manually selected transformation, which, we believe, reduces the search space and facilitates the task. Also, TANDA uses an additional standard DA based on image crop, while our method does not need any additional DA. On the other hand, our method compares favorably to Bayesian DA BID12 ) and DADA BID13 ), both based on GAN models with a larger neural network for the classifier. This shows that our combination of global and local transformations helps to improve the final performance of the method. 
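For reference, the predefined "light" and "strong" augmentations used as comparison points in these experiments could be written, for 32×32 inputs such as SVHN or CIFAR10, roughly as follows with torchvision; the exact parameterization is a sketch of the description above rather than the authors' code:

```python
from torchvision import transforms

# "Light" DA: pad 4 pixels on each side, then crop back to the original size.
light_da = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
])

# "Strong" DA adds small rotations and rescaling; the flip is CIFAR10-only.
strong_da = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomAffine(degrees=10, scale=(0.5, 2.0)),
    transforms.RandomHorizontalFlip(),  # only meaningful for natural images
])
```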
In this work, we have presented a new approach for improving the learning of a classifier through an automatic generation of augmented samples. The method is fully differentiable and can be learned end-to-end. In our experiments, we have shown several elements contributing to an improved classification performance. First, the generator and the classifier should be trained jointly. Second, the combined use of global transformations with STN and local transformation with U-Net is essential to reach the highest accuracy levels. For future work, we want to include more differentiable transformations such as deformations and color transformations and evaluate how these additional sample augmentations affect the final accuracy. Generator. The role of the generator G, is to learn which transformations to input images are the most useful to train the classifier. In our intuition, learning an image transformation instead of learning a mapping from noise to image to generate additional samples, is an easier task in low data regime. Given an input image x i and a noise vector z, the generator G, composed of an encoder E N C, a decoder D EC and an affine transformer T, learns a transformation of the image that helps to train the classifier C. So, the transformation can be formulated as: DISPLAYFORM0 The loss of the generator can be formulated as: DISPLAYFORM1 where x i is an input image with class label y i, G(x i, z) is a transformation of the sample x i and a noise vector z. D C, D D and C are respectively the class discriminator, the dissimilarity discriminator and the classifier and will be addressed in the following paragraphs. α, β and γ are hyper-parameters introduced to balance the three loss terms and stabilize the training of the model. In the first term of the loss function, the probability that a pair (transformed sample, true label) is classified as fake is minimized. In the second term, the dissimilarity between the original image and the transformed image is maximized. Finally, in the third one, the predicted probability of the real label for the transformed sample is minimized in order to make the classifier robust against adversarial samples. Class discriminator. During training, the generator is supported by two discriminators. The first one is referred to as class discriminator D C. It ensures that the generated image belongs to the same class as the original image. D C takes as input an image and a class label and outputs the probability of the image to belong to that class. Its loss function is: DISPLAYFORM2 The first term increases the probability of D C for a real sample x i of class y i, whereas the second term reduces D C for a generated sample G(x i, z) of the same class. In this way the discriminator learns to distinguish between real and generated samples of a certain class. Dissimilarity discriminator. The second discriminator, called dissimilarity discriminator D D, ensures that the generated sample is as different as possible from the original sample. D D takes a pair of samples as input, and outputs a dissimilarity score between the two samples, ranging between 0 and 1, where 0 means that the two samples are identical. Its loss function can be formulated as: DISPLAYFORM3 In the first term of the loss function, the dissimilarity between the original sample and another true sample from the same class is maximized, whereas in the second one, the dissimilarity between a true sample and the corresponding transformed sample is minimized. Classifier. 
The image classifier C is trained jointly with the generator and the two discriminators. C is fed with real samples as well as augmented samples, i.e. samples transformed by G. Its loss function is: DISPLAYFORM4 In the first term of the loss function, the cross entropy loss between the predicted labels of the true samples and the true label distribution is minimized. In contrast, the cross entropy loss between the predicted labels of the transformed samples and the true label distribution is minimized in the second term. Global Loss. Finally, we want to minimize a global loss to find the optimal parameters for the generator, discriminators and classifier. This loss is defined as: DISPLAYFORM5 During optimization, we sequentially minimize a mini-batch of each loss. Notice that L G tries to minimize the cross-entropy of D C of the transformed samples G(x i, z), while L D C tries to minimize 1 − D C. The same also for D D and C. This is not a problem, in fact this shows that the defined loss is adversarial, in the sense that generator and discriminator//classifier fight to push the losses in different directions. If the optimization is tuned properly, this mechanism generates augmented samples that are good for training the classifier, i.e. samples that belongs to the right class but are close to the decision boundaries. In all our experiments, we apply a basic pre-processing to the images, which consists in subtracting the mean pixel value, and then dividing by the pixel standard deviation. The generator is a combination of a STN BID7 ) module followed by a U-Net BID11 ) network. The generator network takes as input an image and a Gaussian noise vector (100 dimensions), which are concatenated in the first layer of the network. The three parameters α, β and γ of the generator loss are estimated on a validation set. For the class discriminator D C, we use the same architecture as in BID4. The network is adapted to take as input an image and a label (as a one hot vector). These are concatenated and given as input to the first layer of the architecture. For the dissimilarity discriminator D D, we also use the same architecture. The network is adapted to take as input a pair of images, which are concatenated in the first layer of the architecture. For the classifier, we use the architecture used in BID4. We use Adam as optimization method. Training parameters To train our model, we use following values for the optimization parameters. Generator: Adam optimizer with a initial learning rate of 0.0005, a β 1 value of 0.5 and a β 2 value of 0.999. Class Discriminator: Adam optimizer with a initial learning rate of 0.0005, a β 1 value of 0.5 and a β 2 value of 0.999. As balance factor (see Sec. 2), we use as value for α 0.1 for MNIST, 1 for SVHN and 0.1 for CIFAR10. Similarity Discriminator: Adam optimizer with a initial learning rate of 0.0005, a β 1 value of 0.5 and a β 2 value of 0.999. As balance factor (see Sec. 2), we use as value for β 0.05 for MNIST, 1 for SVHN and 0.05 for CIFAR10. Classifier: Adam optimizer with a initial learning rate of 0.006, a β 1 value of 0.5 and a β 2 value of 0.999. As balance factor (see Sec. 2), we use as value for γ 0.005 for MNIST, 0.0005 for SVHN and 0.001 for CIFAR10.Detailed architectures In Tab. 3 we show the details of the classifier C, in Tab. 4 and Tab. 5, the details of respectively the two discriminators D C and D D. In Tab. 6, we can see the details for the generator G. Our approach learns to apply the right transformations for each dataset. 
For instance, on MNIST and Fashion-MNIST neither flips nor zooms are applied, because they are not useful there, while on SVHN zoom is used often, and on CIFAR10 zoom, flips, and color changes are all applied.
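Since the loss equations of Appendix A appear only as placeholders in this copy, the following is a hedged sketch, written directly from the prose description, of the three adversarial objectives (generator, class discriminator D_C, dissimilarity discriminator D_D); all function names are ours, the discriminators are assumed to output probabilities in (0, 1), and the exact form of each term may differ from the original formulation:

```python
import torch
import torch.nn.functional as F

def generator_loss(d_class, d_dissim, classifier, x, y, x_aug, alpha, beta, gamma):
    """Three-term generator objective (sketch): the class discriminator should
    accept (x_aug, y) as real, x_aug should be dissimilar from x, and x_aug
    should be hard for the classifier (adversarial term)."""
    fool_dc   = -torch.log(d_class(x_aug, y) + 1e-8).mean()
    dissim    = -torch.log(d_dissim(x, x_aug) + 1e-8).mean()
    adv_class = -F.cross_entropy(classifier(x_aug), y)  # push away from true label
    return alpha * fool_dc + beta * dissim + gamma * adv_class

def class_discriminator_loss(d_class, x, y, x_aug):
    """Real (x, y) pairs scored high, generated (x_aug, y) pairs scored low."""
    real = -torch.log(d_class(x, y) + 1e-8).mean()
    fake = -torch.log(1.0 - d_class(x_aug, y) + 1e-8).mean()
    return real + fake

def dissimilarity_discriminator_loss(d_dissim, x, x_same_class, x_aug):
    """High dissimilarity between distinct real samples of a class, low
    dissimilarity between a sample and its own transformation."""
    return (1.0 - d_dissim(x, x_same_class)).mean() + d_dissim(x, x_aug).mean()
```

Note how the generator maximizes the dissimilarity that D_D tries to assign a low score to for transformed pairs, which is exactly the adversarial mechanism that prevents the identity transformation.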
Automatic Learning of data augmentation using a GAN based architecture to improve an image classifier
352
scitldr
We consider the problem of information compression from high dimensional data. Where many studies consider the problem of compression by non-invertible trans- formations, we emphasize the importance of invertible compression. We introduce new class of likelihood-based auto encoders with pseudo bijective architecture, which we call Pseudo Invertible Encoders. We provide the theoretical explanation of their principles. We evaluate Gaussian Pseudo Invertible Encoder on MNIST, where our model outperform WAE and VAE in sharpness of the generated images. We consider the problem of information compression from high dimensional data. Where many studies consider the problem of compression by non-invertible transformations, we emphasize the importance of invertible compression as there are many cases where one cannot or will not decide a priori what part of the information is important and what part is not. Compression of images for person ID in a small company requires less resolution then person ID at an airport. To loose part of the information without harm to the future purpose of viewing the picture requires knowing the purpose upfront. Therefore, the fundamental advantage of invertible information compression is that compression can be undone if a future purpose so requires. Recent advances of classification models have demonstrated that deep learning architectures of proper design do not lead to information loss while still being able to achieve state-of-the-art in classification performance. These i-RevNet models BID5 implement a small but essential modification of the popular RevNet models while achieving invertibility and a performance similar to the standard RevNet BID2. This is of great interest as it contradicts the intuition that information loss is essential to achieve good performance in classification BID13. Despite the requirement of the invertibility, flow-based generating models BID0; BID11; BID6 demonstrate that the combination of bijective mappings allows one to transform the raw distribution of the input data to any desired distribution and perform the manipulation of the data. On the other hand, Auto-Encoders have provided the ideal mechanism to reduce the data to the bare minimum while retaining all essential information for a specific task, the one implemented in the loss function. Variational Auto Encoders (VAE) BID7 and Wasserstein Auto Encoders (WAE) BID14 are performing best. They provide an approach for stable training of autoencoders, which demonstrate good at reconstruction and generation. However, both of these methods involve the optimization of the objective defined on the pixel level. We would emphasise the importance of avoiding the separate decoder part and training the model without relying on the reconstuction quality directly. Combining the best of Invertible mappings and Auto-Encoders, we introduce Pseudo Invertible Encoder. Our model combines bijectives with restriction and extension of the mappings to the dependent sub-manifolds FIG0. The main contributions of this paper are the following:• We introduce new class of likelihood-based Auto-Encoders, which we call Pseudo Invertible Encoders. We provide the theoretical explanation of their principles.• We demonstrate the properties of Gaussian Pseudo Invertible Encoder in manifold learning.• We compare our model with WAE and VAE on MNIST, and report that the sharpness of the images, generated by our models is better. 
2 RELATED WORK ResNets BID3 enable Networks to grow even more and thus memory consumption becomes a bottleneck. BID2 propose a Reversible Residual Network (RevNet) where each layer's activations can be reconstructed from the activations of the next layer. By replacing the residual blocks with coupling layers, they mimic the behaviour of residual blocks while being able to retrieve the original input of the layer. RevNet replaces the residual blocks of ResNets, but also accommodates non-invertible components to train more efficiently. By adding a downsampling operator to the coupling layer, i-RevNet circumvents these non-invertible modules BID5. With this they show that losing information is not a necessary condition to learn representations that generalize well on complicated problems. Although i-RevNet circumvents non-invertible modules, data is not compressed and the model is only invertible up to the last layer. All their methods do not allow dimensionality reduction. In current research we build a pseudo invertible model which performs dimensionality reduction. Auto-Encoders were first introduced by BID12 as an unsupervised learning algorithm. They are now widely used as a technique for dimension reduction by compressing input data. By training an encoder and a decoder network, and measuring the distance between original and reconstructed data, data can be represented in a latent space. This latent space can then be used for supervised learning algorithms. Instead of learning a compressed representation of the input data BID7 propose to learn the parameters of a probability distribution that represent the data. tol introduced new class of models -Wasserstein Auto Encoders, which use Optimal Transport to be trained. These methods require the optimization of the objective function which includes the terms defined on pixel level. Our model does not require such optimization. Moreover, it only perform encoding at training time. Here we introduce the approach for obtaining dimensionality reduction invertible mappings. Our method is based on the restriction of the mappings to low-dimensional manifolds, and extension of the inverse mappings with certain constraints (Fig. 2). Given data DISPLAYFORM0 In other words, we are looking for a pair of Figure 2: The schematic representation of the Restriction-Extension approach. The invertible mapping X ↔ Z is preformed by using the dependent sub-manifold R = g(Z) and a pair extended functionsG,G −1.associated functions G and G −1 such that DISPLAYFORM1 We use this residual manifold in order to match the dimensionalities of the hidden and initial spaces. Here we introduce the function g: DISPLAYFORM2 With no loss of generality we can say that R = g(Z). We use the pair of extended functionsG: DISPLAYFORM3 Rather than searching for the invertible dimensionality reduction mapping directly, we seek to find G, the invertible transformation with certain constraints, expressed by R.In search forG, we focus on DISPLAYFORM4 where F is a parametric family of functions invertible on R D. We select the function F θ with parameters θ which satisfy the constraint: DISPLAYFORM5 where DISPLAYFORM6 Taking into account constraint 3, we derive F θ (x) = [z, r], where z ∈ Z and r ∈ R. By combining this with Eq. 2 we have the desired pair of functions: DISPLAYFORM7 The obtained function G is Pseudo Invertible Endocer, or shortly PIE. As we are interested in high dimensional data such as images, the explicit choice of parameters θ is impossible. 
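The equations referenced above are lost to DISPLAYFORM placeholders in this copy; one consistent way to write the restriction–extension construction just described, in our own notation and under the stated assumptions, is:

```latex
% Hedged reconstruction of the construction described in the text (not the
% authors' exact equations).
G : \mathbb{R}^D \to \mathbb{R}^d, \qquad
G^{-1} : \mathbb{R}^d \to \mathbb{R}^D, \qquad
G^{-1}(G(x)) = x \quad \forall x \in X.

% Residual map matching dimensionalities, with R = g(Z):
g : \mathbb{R}^d \to \mathbb{R}^{D-d}.

% Extended pair acting on Z \times R:
\tilde{G}(x) = \big[\,G(x),\; g(G(x))\,\big], \qquad
\tilde{G}^{-1}\big([z, r]\big) \ \text{defined on}\ \{[z, g(z)] : z \in Z\}.

% Invertible parametric family with the constraint that the second output
% block is tied to the first:
F_\theta(x) = [z, r] \in \mathbb{R}^d \times \mathbb{R}^{D-d}, \qquad r = g(z),

% so that
G(x) = z, \qquad G^{-1}(z) = F_\theta^{-1}\big([z, g(z)]\big).
```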
We choose θ * as a maximizer of the log likelihood of the observed data given the prior p θ (x): DISPLAYFORM0 After a change of variables according to Eq. 4 we obtain Taking into account the constraint 3 we derive the joint distribution for DISPLAYFORM1 DISPLAYFORM2 Dirac's delta function can be viewed as a limit of sequence of Gaussians: DISPLAYFORM3 Let us fix 2 = 2 0 DISPLAYFORM4 Finally, for the log likelihood we have: DISPLAYFORM5 We choose prior distribution p(z) as Standard Gaussian. We search for the parameters by using Gradient Descent. The method relies on the function F θ. This choice is challenging by itself. The currently known classes of real-value bijectives are limited. To overcome this issue, we approximate F θ with a composition of basic bijectives from certain classes: DISPLAYFORM0 where DISPLAYFORM1 Taking into account that a composition of PIE is also PIE, we create a final dimensionality reduction mapping from a sequence of PIEs: such that DISPLAYFORM2 DISPLAYFORM3 where DISPLAYFORM4 Then the log likelihood is represented as DISPLAYFORM5 where J kl is the Jacobian of the k-th function of the l-th PIE. The approximation error here depends only on, according to the Eq. 10. For the simplicity we will now refer to the whole model as PIE. The building blocks of this model are PIE blocks. If we choose the distribution p(z) in Eq. 17 as Standard Gaussian, g l (·) = 0, ∀l and 0 = 1, then the model can be viewed as Normalizing Flow with multi-scale architecture BID1 FIG1. It was demonstrated in BID1 that the model with such architecture achieves semantic compression. This section introduces the basic bijectives for the Pseudo-Invertible Encoder (PIE). We explain what each building bijective consists of and how it fits in the global architecture as shown in FIG2. PIE is composed of a series of convolutional blocks followed by linear blocks, as depicted in FIG2. The convolutional PIE blocks consist of series of coupling layers and 1×1 convolutions. We perform invertible downsampling of the image at the beginning of the convolutional block, by reducing the spatial resolution and increasing the number of channels, keeping the overall number of the variables the same. At the end of the convolutional PIE block, the split of variables is performed. One part of the variables is projected to the residual manifold R while others is feed to the next block. The linear PIE blocks are constructed in the same manner. However, the downsampling is not performed and 1 × 1 convolutions are replaced invertible linear mappings. DISPLAYFORM0 Figure 5: Structure of a coupling block. P partitions the input into two groups of equal length. U unites these group together. In the inverse P −1 and U −1 are the reverse of these operations respectively. In order to enhance the flexibility of the model, we utilize affine coupling layers Fig. 5. We modify the version, introduced in BID1.Given input data x, the output y is obtained by using the mapping: DISPLAYFORM0 Here multiplication and division are performed element-wise. The scalings s 1, s 2 and the biases b 1, b 2 are the functions, parametrized with neural networks. The invertibility is not required for this functions. x 1, x 2 are the non-intersecting partitions of x. For convolutional blocks we partition the tensors by splitting them into halves along the channels. In case of the linear blocks, we just split the features into halves. 
The log determinant of the Jacobian of coupling layer is given by: DISPLAYFORM1 where log | · | is calculated element-wise. The affine couplings operate on non-intersecting parts of the tensor. In order to capture the various correlations between channels and features, the different mechanism of channel permutations were proposed. BID6 demonstrated that invertible 1×1 convolutions perform better than fixed permutations and reversing of the order of channels BID1.We parametrize Invertible 1 × 1 Convolutions and invertible linear mappings with. Given the vector v, the Householder Matrix is computed as: DISPLAYFORM0 (b) Inverse Figure 6: Structure of the split method. P partitions the input into two sub samples. P −1 unites these sub samples together. The obtained matrix is orthogonal. Therefore, its inverse is just its transpose, which makes the computation of the inverse easier comparing to BID6. The log determinant of the Jacobian of such transformation is equal to 0. We use invertible downsampling to progressively reduce the spatial size of the tensor and increase the number of its channels. The downsampling with the checkerboard patterns Jacobsen et al. FORMULA3; BID1 transforms the tensor of size C ×H ×W into a tensor of size 4C × DISPLAYFORM0 where H, W are the height and the width of the image, and C is the number of the channels. The log determinant of the Jacobian of Downsampling is 0 as it just performs permutation. All the discussed blocks transform the data while preserving its dimensionality. Here we introduce Split block Fig. 6, which is responsible for the projection, restrictions and extension, described in Section 3. It reduces the dimensionality of the data by splitting the variables into two nonintersecting parts z, r of dimensionalities d and D − d, respectively. z is kept and is to be processed by the subsequent blocks. r is constrained to match N (r|g(z), 2 0 I). The mappings is defined as DISPLAYFORM0 For this experiment we trained a Gaussian PIE on the MNIST digits dataset. We build PIE with 2 convolutional blocks, each splitting the data in the last layer to 50% of the input size. Next we add three linear blocks to PIE, reducing the dimensions to 64, 10 and the last block does not reduce the dimensions any further. For each affine transformation we use the three biggest possible Householder reflections. For this experiment we set K l equal to 3. Optimization is done with the Adam optimizer BID8. The model diminishes the number of dimensions from R 784 to R 10.This experiment shows the ability of PIE to learn a manifold with three different constraints; 2 = 0.01, 2 = 0.1 and 2 = 1.0. The are shown in FIG3. As the constraint gets to loose, as shown in the right column, the model is not able to reconstruct anymore FIG3. Lower values for 2 perform better in terms of reconstruction. Too low values, however, sample fuzzy images FIG3. Narrowing down the distribution to sample from increases the models probability to produce accurate images. This is shown in FIG3 where samples are taken from N (0, 0.5). For both 2 = 0.01 and 2 = 0.1 reconstructed images are more accurate. This experiment shows that tightening the constraint by decreasing 2 increases the power of the manifold learned by the model. This is shown again in FIG3 where we diminished the number of dimensions even further from R 10 to R 2 utilizing UMAP BID9. With 2 = 1.0 UMAP created a manifold with a good Gaussian distribution. 
However, from the manifold created by PIE it was not able to separate distinct digits from each other. Tightening the constraint with a lower 2 moves the manifold created by UMAP further away from a Gaussian distribution, while it is better able to separate classes from each other. It is a well-known problem in VAEs that generated images are smoothened. improves over VAEs by utilizing Wasserstein distance function. To test the sharpness of generated images we convolve the grey-scaled images with the Laplace filter. This filter acts as an edge detector. We compute the variance of the activations and average them over 10000 sampled images. If an image is blurry, it means there are less edges and thus more activations will be close to zero, leading to a smaller variance. In this experiment we compare the sharpness of images generated by PIE with WAE, VAE and the sharpeness of the original images. For VAE and WAE we take the architecture as described in BID10. For PIE we take the architecture as described in section 5.1. Table 1 shows the for this experiment. PIE outperforms both VAE and WAE in terms of sharpeness of generated images. Images generated by PIE are even more sharp then original images from the MNIST dataset. An explanation for this is the use of a checkerboard pattern in the downsampling layer of the PIE convolutional block. With this technique we capture intrinsic properties of the data and are thus able to reconstruct sharper images. In this paper we have proposed the new class of Auto Encoders, which we call Pseudo Invertible Encoder. We provided a theory which bridges the gap between Auto Encoders and Normalizing Flows. The experiments demonstrate that the proposed model learns the manifold structure and generates sharp images.
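The Laplace-filter sharpness score used in this comparison can be computed with a few lines of PyTorch; the kernel choice and per-image variance below follow the description in the text but are our own sketch, not the authors' evaluation code:

```python
import torch
import torch.nn.functional as F

def sharpness(images):
    """Mean variance of Laplace-filter responses over a batch of grey-scale
    images of shape [N, 1, H, W]; higher values indicate sharper edges."""
    laplace = torch.tensor([[0., 1., 0.],
                            [1., -4., 1.],
                            [0., 1., 0.]]).view(1, 1, 3, 3)
    responses = F.conv2d(images, laplace)
    # Variance of the activations per image, averaged over the batch.
    return responses.flatten(1).var(dim=1).mean().item()
```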
New Class of Autoencoders with pseudo invertible architecture
353
scitldr
We exploit a recently derived inversion scheme for arbitrary deep neural networks to develop a new semi-supervised learning framework that applies to a wide range of systems and problems. The approach reaches current state-of-the-art methods on MNIST and provides reasonable performances on SVHN and CIFAR10. Through the introduced method, residual networks are for the first time applied to semi-supervised tasks. Experiments with one-dimensional signals highlight the generality of the method. Importantly, our approach is simple, efficient, and requires no change in the deep network architecture. Deep neural networks (DNNs) have made great strides recently in a wide range of difficult machine perception tasks. They consist of parametric functionals f Θ with internal parameters Θ. However, those systems are still trained in a fully supervised fashion using a large set of labeled data, which is tedious and costly to acquire. Semi-supervised learning relaxes this requirement by leaning Θ based on two datasets: a labeled set D of N training data pairs and an unlabeled set D u of N u training inputs. Unlabeled training data is useful for learning as unlabeled inputs provide information on the statistical distribution of the data that can both guide the learning required to classify the supervised dataset and characterize the unlabeled samples in D u hence improve generalization. Limited progress has been made on semi-supervised learning algorithms for DNNs BID15; BID16 BID14, but today's methods suffer from a range of drawbacks, including training instability, lack of topology generalization, and computational complexity. In this paper, we take two steps forward in semi-supervised learning for DNNs. First, we introduce an universal methodology to equip any deep neural net with an inverse that enables input reconstruction. Second, we introduce a new semi-supervised learning approach whose loss function features an additional term based on this aforementioned inverse guiding weight updates such that information contained in unlabeled data are incorporated into the learning process. Our key insight is that the defined and general inverse function can be easily derived and computed; thus for unlabeled data points we can both compute and minimize the error between the input signal and the estimate provided by applying the inverse function to the network output without extra cost or change in the used model. The simplicity of this approach, coupled with its universal applicability promise to significantly advance the purview of semi-supervised and unsupervised learning. The standard approach to DNN inversion was proposed in BID4, and the only DNN model with reconstruction capabilities is based on autoencoders BID13. While more complex topologies have been used, such as stacked convolutional autoencoder BID11, thereThe semi-supervised with ladder network approach BID15 employs a per-layer denoising reconstruction loss, which enables the system to be viewed as a stacked denoising autoencoder which is a standard and until now only way to tackle unsupervised tasks. By forcing the last denoising autoencoder to output an encoding describing the class distribution of the input, this deep unsupervised model is turned into a semi-supervised model. The main drawback of this method is the lack of a clear path to generalize it to other network topologies, such as recurrent or residual networks. 
Also, the per-layer "greedy" reconstruction loss might be too restrictive unless correctly weighted pushing the need for a precise and large cross-validation of hyper-parameters. The probabilistic formulation of deep convolutional nets presented in BID14 natively supports semi-supervised learning. The main drawbacks of this approach lies in the requirement that the activation functions be ReLU and that the overall network topology follows a deep convolutional network. Temporal Ensembling for Semi-Supervised Learning BID8 propose to constrain the representations of a same input stimuli to be identical in the latent space despite the presence of dropout noise. This search of stability in the representation is analogous to the one of a siamese network BID6 but instead of presenting two different inputs, the same is used through two different models (induced by dropout). This technique provides an explicit loss for the unsupervised examples leading to the Π model just described and a more efficient method denoted as temporal ensembling. Distributional Smoothing with Virtual Adversarial Training BID12 proposes also a regularization term contraining the regularity of the DNN mapping for a given sample. Based on this a semi-supervised setting is derived by imposing for the unlabeled samples to maintain a stable DNN. Those two last described methods are the closest one of the proposed approach in this paper for which, the DNN stability will be replaced by a reconstruction ability, closely related to the DNN stability. This paper makes two main contributions: First, we propose a simple way to invert any piecewise differentiable mapping, including DNNs. We provide a formula for the inverse mapping of any DNN that requires no change to its structure. The mapping is computationally optimal, since the input reconstruction is computed via a backward pass through the network (as is used today for weight updates via backpropagation). Second, we develop a new optimization framework for semisupervised learning that features penalty terms that leverage the input reconstruction formula. A range of experiments validate that our method improves significantly on the state-of-the-art for a number of different DNN topologies. In this section we review the work of BID1 aiming at interpreting DNNs as linear splines. This interpretation provides a rigorous mathematical justification for the reconstruction in the context of deep learning. Recent work in BID1 demonstrated that DNNs of many topologies are or can be approximated arbitrary closely by multivariate linear splines. The upshot of this theory for this paper is that it enables one to easily derive an explicit input-output mapping formula. As a , DNNs can be rewritten as a linear spline of the form To illustrate this point we provide for two common topologies the exact input-output mappings. For a standard deep convolutional neural network (DCN) with succession of convolutions, nonlinearities, and pooling, one has DISPLAYFORM0 DISPLAYFORM1 where z (x) represents the latent representation at layer for input x. The total number of layers in a DNN is denoted as L and the output of the last layer z (L) (x) is the one before application of the softmax application. After application of the latter, the output is denoted byŷ(x). The product terms are from last to first layer as the composition of linear mappings is such that layer 1 is applied on the input, layer 2 on the output of the previous one and so on. 
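As a quick illustration of this locally affine view, the template A(x) and bias b(x) of a toy ReLU network can be read off with automatic differentiation; this is only a sketch with made-up layer sizes, not the formulation of BID1.

```python
import torch
import torch.nn as nn

# Around any input x, a ReLU network acts as an affine map
# z(x') = A(x) x' + b(x) as long as the activation pattern does not change.
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                    nn.Linear(16, 16), nn.ReLU(),
                    nn.Linear(16, 4))            # pre-softmax output z^(L)

x = torch.randn(8)
A = torch.autograd.functional.jacobian(net, x)   # local template A(x), shape (4, 8)
b = net(x) - A @ x                               # accumulated bias b(x)

delta = 1e-3 * torch.randn(8)                    # small enough to stay in the same region (assumption)
err = (net(x + delta) - (A @ (x + delta) + b)).abs().max()
print(f"max deviation from the affine map: {err.item():.2e}")   # ~0 if no ReLU flipped
```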
The bias term simply from the accumulation of all of the per-layer biases after distribution of the following layers' templates. For a Resnet DNN, one has DISPLAYFORM2 We briefly observe the differences between the templates of the two topologies. The presence of an extra term in DISPLAYFORM3 σ C provides stability and a direct linear connection between the input x and all of the inner representations z (x), hence providing much less information loss sensitivity to the nonlinear activations. Based on those findings, and by imposing a simple 2 norm upper bound on the templates, it has been shown that the optimal templates DNNs to perform prediction has templates proportional to the input, positively for the belonging class and negatively for the others BID1. This way, the loss cross-entropy function is minimized when using softmax final nonlinearity. Note that this is specific to this setting. For example in the case of spherical softmax the optimal templates become null for the incorrect classes of the input. Theorem 1. In the case where all inputs have identity norm ||X n || = 1, ∀n and assuming all templates denoted by DISPLAYFORM4 We now leverage the analytical optimal DNN solution to demonstrate that reconstruction is indeed implied by such an optimum. Based on the previous analysis, it is possible to draw implications based on the theoretical optimal templates of DNNs. This is formulated through the corollary below. First, we propose the following inverse of a DNN as DISPLAYFORM0 Following the analysis from a spline point of view, this reconstruction is leveraging the closest input hyperplane, found through the forward step, to represent the input. As a this method provides a reconstruction based on the DNN representation of its input and should be part away from the task of exact input reconstruction which is an ill-posed problem in general. The bias correction present has insightful meaning when compared to known frameworks and their inverse. In particular, when using ReLU based nonlinearities we will see that this scheme can be assimilated to a composition of soft-thresholding denoising technique. We present further details in the next section where we also provide ways to efficiently invert a network as well as describing the semi-supervised application. We now apply the above inverse strategy to a given task with an arbitrary DNN. As exposed earlier, all the needed changes to support semi-supervised learning happen in the objective training function by adding extra terms. In our application, we used automatic differentiation (as in and). Then it is sufficient to change the objective loss function, and all the updates are adapted via the change in the gradients for each of the parameters. The efficiency of our inversion scheme is due to the fact that any deep network can be rewritten as a linear mapping BID1. This leads to a simple derivation of a network inverse defined as f −1 that will be used to derive our unsupervised and semi-supervised loss function via DISPLAYFORM0 The main efficiency argument thus comes from DISPLAYFORM1 which enables one to efficiently compute this matrix on any deep network via differentiation (as it would be done to back-propagate a gradient, for example). Interestingly for neural networks and many common frameworks such as wavelet thresholding, PCA, etc., the reconstruction error as (DISPLAYFORM2 is the definition of the inverse transform. For illustration purposes, Tab. 
1 gives some common frameworks for which the reconstruction error represents exactly the reconstruction loss. DISPLAYFORM3 We now describe how to incorporate this loss for semi-supervised and unsupervised learning. We first define the reconstruction loss R as DISPLAYFORM4 While we use the mean squared error, any other differentiable reconstruction loss can be used, such as cosine similarity. We also introduce an additional "specialization" loss defined as the Shannon entropy of the class belonging probability prediction DISPLAYFORM5 This loss is intuitive and complementary to the reconstruction for the semi-supervised task. In fact, it will force a clustering of the unlabeled examples toward one of the clusters learned from the supervised loss and examples. We provide below experiments showing the benefits of this extraterm. As a , we define our complete loss function as the combination of the standard cross entropy loss for labeled data denoted by L CE (Y n,ŷ(X n)), the reconstruction loss, and the entropy loss as DISPLAYFORM6 with α, β ∈ 2. The parameters α, β are introduced to form a convex combination of the losses,with α controlling the ratio between supervised and unsupervised loss and β the ratio between the two unsupervised losses. This weighting is important, because the correct combination of the supervised and unsupervised losses will guide learning toward a better optimum (as we now demonstrated via experiments). We now present of our approach on a semi-supervised task on the MNIST dataset, where we are able to obtain reasonable performances with different topologies. MNIST is made of 70000 grayscale images of shape 28 × 28 which is split into a training set of 60000 images and a test set of 10000 images. We present for the case with N L = 50 which represents the number of samples from the training set that are labeled and fixed for learning. All the others samples form the training set are unlabeled and thus used only with the reconstruction and entropy loss minimization. We perform a search over (α, β) ∈ {0.3, 0.4, 0.5, 0.6, 0.7} × {0.2, 0.3, 0.5}. In addition, 4 different topologies are tested and, for each, mean and max pooling are tested as well as inhibitor DNN (IDNN) as proposed in BID1. The latter proposes to stabilize training and remove biases units via introduction of winner-share-all connections. As would be expected based on the templates differences seen in the previous section, the Resnet topologies are able to reach the best performance. In particular, wide Resnet is able to outperform previous state-of-the-art . Running the proposed semi-supervised scheme on MNIST leads to the presented in Tab. 2. We used the Theano and Lasagne libraries; and learning procedures and topologies are detailed in the appendix. The column 1000 corresponds to the accuracy after training of DNNs using only the supervised loss (α = 1, β = 0) on 1000 labeled data. Thus, one can see the gap reached with the same network but with a change of loss and 20 times less labeled data. We further present performances on CIFAR10 with 4000 labeled data in Tab. 3 and SVHN with 500 labeled data in Tab. 4. For both tasks we constrain ourselves to a deep CNN models, similar as the LargeCNN of BID14. Also, one of the cases correspond to the absence of entropy loss when β = 1. Furthermore to further present the generalization of the inverse technique we provide with the leaky-ReLU nonlinearity as well as the sigmoid activation function. 
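For concreteness, the combined objective defined above can be sketched in PyTorch. The reconstruction uses a vector-Jacobian product as a stand-in for the paper's inverse, whose exact displayed formula (including the bias correction) is not recoverable from the extracted text, and the nested convex combination of α and β follows the description above; treat this as an assumption-laden sketch rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def reconstruct(model, x):
    # Sketch: approximate the proposed inverse by a backward pass,
    # i.e. the vector-Jacobian product (dz/dx)^T z that autograd provides.
    # The paper's bias-correction term is omitted here (assumption).
    x = x.requires_grad_(True)
    z = model(x)                                   # pre-softmax output z^(L)(x)
    x_hat, = torch.autograd.grad(z, x, grad_outputs=z, create_graph=True)
    return x_hat

def semi_supervised_loss(model, x_lab, y_lab, x_unl, alpha=0.5, beta=0.5):
    # alpha * CE + (1 - alpha) * (beta * R + (1 - beta) * E)   (assumed nesting)
    ce = F.cross_entropy(model(x_lab), y_lab)

    recon = ((x_unl - reconstruct(model, x_unl)) ** 2).flatten(1).sum(1).mean()

    probs = F.softmax(model(x_unl), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()

    return alpha * ce + (1 - alpha) * (beta * recon + (1 - beta) * entropy)
```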
We now present an example of our approach on a supervised task on a 1D audio database. It is the Bird10 dataset distributed online and described in BID5. The task is to classify 10 bird species from their songs recorded in a tropical forest. It is a subtask of the BirdLifeClef challenge. [Tables 2-4: test accuracies on MNIST, CIFAR10 (4000/8000 labels) and SVHN (500/1000 labels), compared against Ladder Network BID15, catGAN BID17, DRMM with KL penalty BID14, the Π-Model of Laine & Aila, and VAT of Miyato et al.] We train here networks on raw audio using CNNs as detailed in the appendix. We vary (α, β) over 10 runs to demonstrate that the non-regularized supervised model is not optimal. The maximum validation accuracies on the last 100 epochs (FIG3) show that the regularized networks tend to learn more slowly, but always generalize better than the non-regularized baseline (α = 1, β = 0). We have presented a well-justified inversion scheme for deep neural networks with an application to semi-supervised learning. The ability of the method to best the current state-of-the-art on MNIST with different topologies supports the portability of the technique as well as its potential. These results open up many questions in this yet undeveloped area of DNN inversion, input reconstruction, and their impact on learning and stability. Among the possible extensions, one can develop the reconstruction loss into a per-layer reconstruction loss. Doing so opens the possibility to weight each layer's penalty, bringing flexibility as well as meaningful reconstruction. Define the per-layer loss as DISPLAYFORM0 with DISPLAYFORM1. One can then adopt a strategy that favors a high reconstruction objective for inner layers, close to the final latent representation z^(L), in order to lessen the reconstruction cost for layers closer to the input X_n. In fact, inputs of standard datasets are usually noisy, and the object of interest only contains a small fraction of the total energy of X_n. Another extension would be to update the weighting while learning. Hence, if we denote by t the position in time, such as the current epoch or batch, the previous loss becomes DISPLAYFORM2. One approach would be to impose a deterministic policy based on heuristics, such as favoring reconstruction at the beginning and then switching to classification and entropy minimization. Finer approaches could rely on explicit optimization schemes for those coefficients. One way to perform this would be to optimize the loss weighting coefficients α, β, γ after each batch or epoch by backpropagation on the updated weights. Define DISPLAYFORM3 as a generic iterative update based on a given policy such as gradient descent. One can thus adopt the following update strategy for the hyper-parameters: DISPLAYFORM4, and so on for all hyper-parameters. Another approach would be to use adversarial training to update those hyper-parameters, where both updates cooperate to accelerate learning. EBGANs BID18 are GANs where the discriminator network D measures the energy of a given input X. D is formulated such that generated data produce high energy and real data produce lower energy. The same authors propose the use of an auto-encoder to compute such an energy function.
We plan to replace this autoencoder using our proposed method to reconstruct X and compute the energy; hence D(X) = R(X) and only one-half the parameters will be needed for D.Finally, our approach opens the possibility of performing unsupervised tasks such as clustering. In fact, by setting α = 0, we are in a fully unsupervised framework. Moreover, β can push the mapping f Θ to produce a low-entropy, clustered, representation or rather simply to produce optimal reconstruction. Even in a fully unsupervised and reconstruction case (α = 0, β = 1), the proposed framework is not similar to a deep-autoencoder for two main reasons. First, there is no greedy (per layer) reconstruction loss, only the final output is considered in the reconstruction loss. Second, while in both case there is parameter sharing, in our case there is also "activation" sharing that corresponds to the states (spline) that were used in the forward pass that will also be used for the backward one. In a deep autoencoder, the backward activation states are induced by the backward projection and will most likely not be equal to the forward ones. We thank PACA region and NortekMed, and GDR MADICS CNRS EADM action for their support. We give below the figures of the reconstruction of the same test sample by four different nets: LargeUCNN (α = 0.5, β = 0.5), SmallUCNN (0.6,0.5), 0.5), 0.5). The columns from left to right correspond to: the original image, mean-pooling reconstruction, maxpooling reconstruction, and inhibitor connections. One can see that our network is able to correctly reconstruct the test sample.
We exploit an inversion scheme for arbitrary deep neural networks to develop a new semi-supervised learning framework applicable to many topologies.
354
scitldr
Deep learning has become a widely used tool in many computational and classification problems. Nevertheless obtaining and labeling data, which is needed for strong , is often expensive or even not possible. In this paper three different algorithmic approaches to deal with limited access to data are evaluated and compared to each other. We show the drawbacks and benefits of each method. One successful approach, especially in one- or few-shot learning tasks, is the use of external data during the classification task. Another successful approach, which achieves state of the art in semi-supervised learning (SSL) benchmarks, is consistency regularization. Especially virtual adversarial training (VAT) has shown strong and will be investigated in this paper. The aim of consistency regularization is to force the network not to change the output, when the input or the network itself is perturbed. Generative adversarial networks (GANs) have also shown strong empirical . In many approaches the GAN architecture is used in order to create additional data and therefor to increase the generalization capability of the classification network. Furthermore we consider the use of unlabeled data for further performance improvement. The use of unlabeled data is investigated both for GANs and VAT. Deep neural networks have shown great performance in a variety of tasks, like speech or image recognition. However often extremely large datasets are necessary for achieving this. In real world applications collecting data is often very expensive in terms of cost or time. Furthermore collected data is often unbalanced or even incorrect labeled. Hence performance achieved in academic papers is hard to match. Recently different approaches tackled these problems and tried to achieve good performance, when otherwise fully supervised baselines failed to do so. One approach to learn from very few examples, the so called few-shot learning task, consists of giving a collection of inputs and their corresponding similarities instead of input-label pairs. This approach was thoroughly investigated in BID9, BID33, BID28 and gave impressive tested on the Omniglot dataset BID12 ). In essence a task specific similarity measure is learned, that embeds the inputs before comparison. Furthermore semi-supervised learning (SSL) achieved strong in image classification tasks. In SSL a labeled set of input-target pairs (x, y) ∈ D L and additionally an unlabeled set of inputs x ∈ D U L is given. Generally spoken the use of D U L shall provide additional information about the structure of the data. Generative models can be used to create additional labeled or unlabeled samples and leverage information from these samples BID26, BID18 ). Furthermore in BID2 it is argued, that GAN-based semi-supervised frameworks perform best, when the generated images are of poor quality. Using these badly generated images a classifier with better generalization capability is obtained. On the other side uses generative models in order to learn feature representations, instead of generating additional data. Another approach in order to deal with limited data is consistency regularization. The main point of consistency regularization is, that the output of the network shall not change, when the input or the network itself is perturbed. These perturbations may also in inputs, which are not realistic anymore. This way a smooth manifold is found on which the data lies. 
Different approaches to consistency regularization can be found in BID15, BID23, BID11, and BID32.The aim of this paper is to investigate how different approaches behave compared to each other. Therefore a specific image and sound recognition task is created with varying amount of labeled data. Beyond that it is further explored how different amounts of unlabeled data support the tasks, whilst also varying the size of labeled data. The possible accuracy improvement by labeled and unlabeled examples is compared to each other. Since there is a correlation between category mismatch of unlabeled data and labeled data BID20 ) reported, we investigate how this correlation behaves for different approaches and datasets. When dealing with little data, transfer learning BID34, BID0 ) offers for many use cases a good method. Transfer learning relies on transferring knowledge from a base model, which was trained on a similar problem, to another problem. The weights from the base model, which was trained on a seperate big dataset, are then used as initializing parameters for the target model. The weights of the target model are afterwards fine-tuned. Whilst often yielding good , nevertheless a similar dataset for the training of the base model is necessary. Many problems are too specific and similar datasets are not available. In BID15 transfer learning achieves better than any compared consistency regularization method, when transferring from ImageNet BID3 ) to CIFAR-10 . On contrast, no convincing could be achieved when transferring from ImageNet to SVHN BID16, although the task itself remains a computer vision problem. Therefore the generalization of this approach is somehow limited. In order to increase the generalization of this work transfer learning is not investigated. Instead this paper focuses on generative models, consistency regularization, and the usage of external data during the classification of new samples. Since there exist several algorithms for each of these approaches, only one representative algorithm for each of the three approaches is picked and compared against each other. The usage of external data after training during the classification task is a common technique used in few shot learning problems. Instead of input-label pairs, the network is trained with a collection of inputs and their similarities. Due to its simplicity and good performance the approach by BID9, which is inspired by , is used in this paper. BID9 uses a convolutional siamese neural network, which basically learns an embedding of the inputs. The same convolutional part of the network is used for two inputs x 1 and x 2. After the convolution each input is flattened into a vector. Afterwards the L 1 distance between the two embeddings is computed and fed into a fully-connected layer, which outputs a similarity between.In order to classify a test image x into one of K categories, a support set {x k} K k=1 with examples for each category is used. The input x is compared to each element in the support set and the category corresponding to the maximum similarity is returned. When there are more examples per class the query can be repeated several times, such that the network returns the class with the highest average similarity. Using this approach is advantageous, when the number of categories is high or not known at all. 
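A minimal PyTorch sketch of this siamese scoring and support-set classification; the convolutional stack and hidden sizes are placeholders rather than the architecture of BID9.

```python
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Sequential(                       # shared for both inputs
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 256), nn.ReLU(),        # assumes 28x28 inputs
        )
        self.head = nn.Linear(256, 1)                     # similarity logit

    def forward(self, x1, x2):
        # element-wise L1 distance of the embeddings -> similarity in [0, 1]
        d = torch.abs(self.embed(x1) - self.embed(x2))
        return torch.sigmoid(self.head(d))

def classify(net, x, support):
    """support: dict {label: (k, 1, 28, 28) tensor of examples}. Returns the
    label with the highest average similarity to the query x of shape (1, 1, 28, 28)."""
    scores = {lbl: net(x.expand(len(v), -1, -1, -1), v).mean().item()
              for lbl, v in support.items()}
    return max(scores, key=scores.get)
```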
On the downside the prediction of the category depends on a support set and furthermore the computational effort of predicting a category increases with O(K), since a comparison has to be made for each category. Consistency regularization relies on increasing the robustness of a network against tiny perturbations of the input or the network. For perturbations of the input d (f (x; θ), f (x; θ)) shall be minimized, whereas d is a distance measurement like euclidean distance or Kullback-Leibler divergence andx is the perturbed input. It is possible to sample x from both D L and D U L.An empirical investigation BID20 has shown, that many consistency regularization methods, like mean teacher BID32 ), Π-model BID23, BID11 ), and virtual adversarial training (VAT) BID15 are quite hard to compare, since the may rely on many parameters (network, task, etc.). Nevertheless VAT is chosen in this work, since it achieves convincing on many tasks. VAT is a training method, which is greatly inspired by adversarial training BID5 ). The perturbation r adv of the input x can be computed as DISPLAYFORM0,where ξ and are hyperparameters, which have to be tuned for each task. After the perturbation was added to x consistency regularization is applied. The distance between the clean (not perturbed) prediction and perturbed prediction d(f (x, θ), f (x + r adv, θ)) shall be minimized. In order to reduce the distance the gradients are just backpropagated through f (x+r adv). Combining VAT with entropy minimization BID6 it is possible to further increase the performance BID15. For entropy minimization an additional loss term is computed as: DISPLAYFORM1 and added to the overall loss. This way the network is forced to make more confident predictions regardless of the input. Generative models are commonly used for increasing the accuracy or robustness of models in a semior unsupervised manner, BID35 BID29, BID18, ).A popular approach is the use of generative adversarial neural networks (GANs), introduced by BID4. The goal of a GAN is to train a generator network G, wich produces realistic samples by transforming a noise vector z as x f ake = G(z, θ), and a discriminator network D, which has to distinguish between real samples x real ∼ p Data and fake samples x f ake ∼ G.In this paper the training method defined in BID26 is used. Using this approach the output of D consists of K + 1 categories, whereas K is the number of categories the classifier shall be actually trained on. One additional extra category is added for samples generated by D. Since the output of D is over-parameterized the logit output l K+1, which represents the fake category, is permanently fixed to 0 after training. The loss function consists of two parts L supervised and L unsupervised, which can be computed as: DISPLAYFORM0 DISPLAYFORM1 L supervised represents the standard classification loss, i.e. negative log probability. L unsupervised itself again consists of two parts, the first part forces the network to output a low probability of fake category for inputs x ∼ p data and corresponding a high probability for inputs x ∼ G. Since the the category y is not used in L unsupervised, the input x can be sampled from both D L and D U L. In order to further improve the performance feature matching is used, as described in BID26. Three different experiments are conducted in this paper using the MNIST BID13 ) and UrbanSound8k BID24 dataset. The UrbanSound8k dataset consists of 8732 sound clips with a maximum duration of 4 s. 
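Before detailing the datasets, here is a hedged sketch of the VAT consistency term and the entropy loss summarized above. The perturbation follows the standard power-iteration approximation of BID15, since the displayed formula is not recoverable from the extraction, and ξ, ε and the iteration count are placeholder hyper-parameters.

```python
import torch
import torch.nn.functional as F

def _l2_normalize(d):
    # assumes image batches of shape (N, C, H, W)
    return d / (d.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)

def vat_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)            # "clean" prediction, kept fixed

    d = _l2_normalize(torch.randn_like(x))
    for _ in range(n_power):                      # power iteration for r_adv
        d.requires_grad_(True)
        dist = F.kl_div(F.log_softmax(model(x + xi * d), dim=1), p,
                        reduction='batchmean')
        grad, = torch.autograd.grad(dist, d)
        d = _l2_normalize(grad.detach())

    r_adv = eps * d
    # gradients only flow through the perturbed branch, as described above
    return F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p,
                    reduction='batchmean')

def entropy_loss(model, x):
    p = F.softmax(model(x), dim=1)
    return -(p * p.clamp_min(1e-8).log()).sum(1).mean()
```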
Each sound clip represents a different urban noise class like drilling, engine, jackhammer, etc. Before using the sound files for training a neural network, they are prepared in a similar manner to BID25, in essence each sound clip is transferred to a log-scaled mel-spectrogram with 128 components covering the frequency range between 0-22050 Hz. The window size is chosen to be 23 ms and hop size of the same duration. Sound snippets with shorter duration as 4 s are repeated and concatenated until a duration of 4 s is reached. The preprocessing is done using librosa BID14 ). For training and evaluation purposes a random snippet with a length of 3 s is selected, ing in an input size of 128 × 128.In the first experiment no external unlabeled data is used. Instead, the amount of labeled data in each category is varied and the three methods are compared to each other. In the second experiment the amount of labeled and unlabeled data is varied, in order to explore how unlabeled data can compensate labeled data. The last experiment considers class distribution mismatch while the amount of labeled and unlabeled data is fixed. In the second and third experiment only two methods are compared, since only generative models and consistency regularization allow the use of external unlabeled data. All methods are compared to a standard model. When using the MNIST dataset the standard model consists of three convolutional layers, followed by two fully-connected layers. For the UrbanSound8k dataset the standard model consists of four convolutional layers, followed by three fully-connected layers. ReLU nonlinearities were used in all hidden layers. The training was done by using the Adam optimizer BID7 ). Furthermore batch normalization BID19 ), dropout BID30 ), and max-pooling was used between convolutional layers. For further increasing the generalization capability L 2 regularization (Ng FORMULA0) is used. The models, representing the three different approaches, have the same computational power as the standard model, in essence three/ four convolutional layers and two/ three fully connected layers. The number of hidden dimensions and other per layer hyperparameters (e.g. stride, padding) is kept equal to the corresponding standard models. The hyperparameters were tuned manually on the training dataset by performing gridsearch and picking the most promising . Whereas the L 2 and batchnorm coefficients, as well as dropout rate are shared across all models for each dataset. The test accuracy was calculated in all experiments with a separate test dataset, which contains 500 samples per category for the MNIST dataset and, respectively, 200 samples per category for the UrbanSound8k dataset. Train and test set have no overlap. All experiments were conducted using the PyTorch framework BID21 ). In this experiment the amount of labeled data is varied. Furthermore there is not used any unlabeled external data. For each amount of labeled data and training approach (i.e. baseline, , , and siamese neural network BID9) the training procedure was repeated eight times. Afterwards the mean accuracies and standard deviations have been calculated. Figure 1 shows the obtained in this experiment for the MNIST dataset. The amount of labeled data per category was varied on a logarithmic scale in the range between with 31 steps. Using 200 labeled samples per category the baseline network is able to reach about 95 % accuracy. 
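The spectrogram preprocessing described above can be sketched with librosa; the FFT size, dB scaling and exact crop length are assumptions beyond what the text specifies (128 mel bands, a 23 ms window and hop, 4 s clips, random 3 s training snippets).

```python
import numpy as np
import librosa

SR = 22050                       # covers the stated 0-22050 Hz range
N_MELS = 128
HOP = int(0.023 * SR)            # ~23 ms window; hop of the same duration
CLIP_LEN = 4 * SR                # repeat/truncate every clip to 4 s

def log_mel(path):
    y, _ = librosa.load(path, sr=SR)
    y = np.tile(y, int(np.ceil(CLIP_LEN / max(len(y), 1))))[:CLIP_LEN]
    mel = librosa.feature.melspectrogram(y=y, sr=SR, n_fft=HOP,
                                         hop_length=HOP, n_mels=N_MELS)
    return librosa.power_to_db(mel)          # log-scaled mel-spectrogram

def random_snippet(spec, seconds=3.0, rng=np.random):
    # ~130 frames at a 23 ms hop; the paper reports a 128 x 128 input
    frames = int(seconds * SR / HOP)
    start = rng.randint(0, max(spec.shape[1] - frames, 1))
    return spec[:, start:start + frames]
```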
With just one labeled sample per class the baseline networks reaches already around 35 %, which is a already good compared to 10 %, when random guessing. Generally all three methods are consistent with the literature, such that they are superior over baseline in the low data regime (1-10 samples per category). Using a siamese neural network the accuracy can be significantly improved in the low data regime. With just one labeled sample the siamese architecture already reaches around 45 %. When using a dataset with more categories, like Omniglot, the advantage of using siamese networks should be even higher in the low data regime. The performance of this approach becomes worse compared to the baseline model, when using more than 10 labeled examples per class. VAT has a higher benefit compared to GAN for up to 20 labeled samples per category. For higher numbers of labeled samples both methods show only little (0-2 %) improvement over the baseline . Similar are obtained on the UrbanSound8k dataset (figure 2). As for the experiment on the MNIST dataset the amount of labeled data was varied on a logarithmic scale in the range between, but with 6 steps instead of 31, since the computational effort was much higher. The siamese network yields a large improvement when there is only one labeled sample, but fast returns worse than the baseline network. On contrast the usage of VAT or GAN comes with a benefit in terms of accuracy for higher amounts of labeled data. Nevertheless these both methods are either not able to further improve the accuracy for high amounts of labeled data (more than 100). Furthermore the accuracy even declines compared to baseline for 200 labeled samples. The observation, that adversarial training can decrease accuracy, is inline with literature BID27, BID31 ), where it was shown that in high data regimes there may be a trade-off between accuracy and robustness. Whereas in some cases adversarial training can improve accuracy in the low data regime. Both methods show a significant increase in terms of accuracy when the amount of labeled data is low and corresponding the amount of unlabeled data is high. When the amount of labeled data increases the amount of necessary unlabeled data also increases in order to achieve the same accuracy improvements. VAT achieves better with less unlabeled data compared to GAN, when there is little labeled data (∼ 2-10 examples per category). On contrast GANs achieve better when there is a moderate amount of labeled examples (∼ 10-50 examples per category) and also many unlabeled examples. When the amount labeled examples is high both methods behave approximately equal. The for the UrbanSound8k dataset can be seen in FIG2. Overall similar as for the MNIST dataset are achieved, in terms of having high benefits, when the amount of labeled data is low and concurrently the amounts of unlabeled data is high. Nevertheless the total improvement is lower and for high amounts of labeled data, more unlabeled data is necessary in order to get an improvement at all. For the VAT the amount of unlabeled data need to have similar magnitudes as the amount of labeled data in order to get an improvement at all. Further the same observation as before can be made, that VAT achieves better with less unlabeled data, when there is little labeled data In this experiment the possibility of adding additional unlabeled examples, which do not correspond to the target labels (mismatched samples), is investigated. This experiment was done for VAT in BID20. 
In this work the investigation is extended in such a way that the for VAT are compared to GAN. Furthermore not only the extend of mismatch, but also the influence of the amount of additional unlabeled examples is investigated. Both datasets (MNIST and UrbanSound8k) consist of 10 categories with label values and the aim is to train a neural network, which is able to classify inputs corresponding to categories, hence the network has six outputs. Mismatched examples belong to categories. The number of labeled examples per category is fixed to be five. Having five labeled samples it can be seen in FIG1, that the accuracy improvement shows a strong dependency on the amount of unlabeled samples. The total number of unlabeled examples is varied between {30, 120, 600}. Furthermore the mismatch for each number of unlabeled examples is varied between 0-100 % using a 10 % increment, e.g. when the amount of unlabeled examples is set to be 120 and the mismatch is 70 % the unlabeled examples consist of 84 examples belonging to categories and 36 examples belonging to categories. The distribution across the categories in the six matched and four remaining mismatched classes is kept approximately equal, with a maximum difference of ±1. For each amount of mismatch and method eight neural networks have been trained. Afterwards their average accuracies and standard deviations have been calculated. For baseline also eight neural networks have been trained and their average accuracy and standard deviation computed. Since the number of classes is reduced to 6 the accuracy, when compared to the previous experiments, is higher with the same amount of labeled data. FIG4 shows the of this experiment for the MNIST dataset. Overall the accuracy decreases for both methods when the class mismatch increases, which is in line with literature BID20 ). As in the experiments before, the GAN method shows little to no accuracy improvement, when the additional amount of unlabeled data is low (30 unlabeled samples). For 120 and respectively 600 additional unlabeled elements both methods show an approximate equal maximal accuracy improvement, when there is no class mismatch. When the class mismatch is very high (80-100 %) using VAT in worse performance than baseline . Using GANs the performance is in worst case at the same level as baseline performance. GAN shows a linear correlation between accuracy and class mismatch. On contrast VAT shows a parabolic trend. Overall increasing the amount of unlabeled data seems to increase the robustness towards class mismatch. All in all both methods show an accuracy improvement even for high amounts (> 50 %) of class mismatch. Whereas VAT performs better, when the amount of mismatch is low. FIG5 shows the obtained with the UrbanSound8k dataset. Overall there seems to be no, or only little correlation between class mismatch and accuracy. Only for the GAN, when using 30 or 120 unlabeled samples, a small correlation can be observed. This is a surprising observation, since in the previous experiment and in BID20 a decrease in terms of accuracy is reported for increasing class mismatch. In essence it can be stated, that adding samples, which do not necessarily belong to the target classes, can improve the overall accuracy. This is especially interesting for training classifiers on hard to obtain or rare samples (rare disease, etc.). Nevertheless it has to be checked whether adding this samples hurts the performance or not. 
Furthermore, the correlation between more unlabeled data and higher accuracy can be observed, as in the previous experiments. In this paper three methods for dealing with little data have been compared to each other. When the amount of labeled data is very small and no unlabeled data is available, siamese neural networks offer the best alternative for achieving good accuracy. When additional unlabeled data is available, GANs or VAT offer a good option. VAT outperforms GAN when the amount of data is low. In contrast, GANs should be preferred for moderate or high amounts of data. Nevertheless, both methods must be tested for any individual use case, since their behavior may change for different datasets. Surprising results have been obtained in the class mismatch experiment. It was observed that adding samples which do not belong to the target classes does not necessarily reduce the accuracy. Whether adding such samples improves or reduces the accuracy may heavily depend on how closely these samples/classes are related to the target samples/classes. An interesting question remains whether datasets which perform well in transfer learning tasks (e.g. transferring from ImageNet to CIFAR-10) may also be suitable for such semi-supervised learning tasks. Furthermore, any combination of the three examined methods can bear interesting results, e.g. VAT could be applied to the discriminator in the GAN framework. Also, a combination of GAN and siamese neural networks could be useful; in this case the siamese neural network would have two outputs, one for the source and one for the similarity.
Comparison of siamese neural networks, GANs, and VAT for few shot learning.
355
scitldr
This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of "History of Word" to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Second, it identifies an attention scoring function that better utilizes the "history of word" concept. Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer. We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble model on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%. Context: The Alpine Rhine is part of the Rhine, a famous European river. The Alpine Rhine begins in the most western part of the Swiss canton of Graubünden, and later forms the border between Switzerland to the West and Liechtenstein and later Austria to the East. On the other hand, the Danube separates Romania and Bulgaria. Answer: Liechtenstein Teaching machines to read, process and comprehend text and then answer questions is one of key problems in artificial intelligence. FIG0 gives an example of the machine reading comprehension task. It feeds a machine with a piece of context and a question and teaches it to find a correct answer to the question. This requires the machine to possess high capabilities in comprehension, inference and reasoning. This is considered a challenging task in artificial intelligence and has already attracted numerous research efforts from the neural network and natural language processing communities. Many neural network models have been proposed for this challenge and they generally frame this problem as a machine reading comprehension (MRC) task BID7 BID13 BID17 BID15 BID16;.The key innovation in recent models lies in how to ingest information in the question and characterize it in the context, in order to provide an accurate answer to the question. This is often modeled as attention in the neural network community, which is a mechanism to attend the question into the context so as to find the answer related to the question. ) attend the word-level embedding from the question to context, while some BID13 attend the high-level representation in the question to augment the context. However we observed that none of the existing approaches has captured the full information in the context or the question, which could be vital for complete information comprehension. Taking image recognition as an example, information in various levels of representations can capture different aspects of details in an image: pixel, stroke and shape. We argue that this hypothesis also holds in language understanding and MRC. In other words, an approach that utilizes all the information from the word embedding level up to the highest level representation would be substantially beneficial for understanding both the question and the context, hence yielding more accurate answers. 
However, the ability to consider all layers of representation is often limited by the difficulty to make the neural model learn well, as model complexity will surge beyond capacity. We conjectured this is why previous literature tailored their models to only consider partial information. To alleviate this challenge, we identify an attention scoring function utilizing all layers of representation with less training burden. This leads to an attention that thoroughly captures the complete information between the question and the context. With this fully-aware attention, we put forward a multi-level attention mechanism to understand the information in the question, and exploit it layer by layer on the context side. All of these innovations are integrated into a new end-to-end structure called FusionNet in FIG5, with details described in Section 3.We submitted FusionNet to SQuAD , a machine reading comprehension dataset. At the time of writing (Oct. 4th, 2017), our model ranked in the first place in both single model and ensemble model categories. The ensemble model achieves an exact match (EM) score of 78.8% and F1 score of 85.9%. Furthermore, we have tested FusionNet against adversarial SQuAD datasets BID9. Results show that FusionNet outperforms existing state-of-the-art architectures in both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%. In Appendix D, we also applied to natural language inference task and shown decent improvement. This demonstrated the exceptional performance of FusionNet. An open-source implementation of FusionNet can be found at https://github.com/momohuang/FusionNet-NLI. In this section, we briefly introduce the task of machine comprehension as well as a conceptual architecture that summarizes recent advances in machine reading comprehension. Then, we introduce a novel concept called history-of-word. History-of-word can capture different levels of contextual information to fully understand the text. Finally, a light-weight implementation for history-of-word, Fully-Aware Attention, is proposed. In machine comprehension, given a context and a question, the machine needs to read and understand the context, and then find the answer to the question. The context is described as a sequence of word tokens: C = {w C 1, . . ., w C m}, and the question as: DISPLAYFORM0 where m is the number of words in the context, and n is the number of words in the question. In general, m n. The answer Ans can have different forms depending on the task. In the SQuAD dataset , the answer Ans is guaranteed to be a contiguous span in the context C, e.g., Ans = {w C i, . . ., w C i+k}, where k is the number of words in the answer and k ≤ m. In all state-of-the-art architectures for machine reading comprehension, a recurring pattern is the following process. Given two sets of vectors, A and B, we enhance or modify every single vector in set A with the information from set B. We call this a fusion process, where set B is fused into set A. Fusion processes are commonly based on attention BID0, but some are not. Major improvements in recent MRC work lie in how the fusion process is designed. A conceptual architecture illustrating state-of-the-art architectures is shown in FIG2, which consists of three components.• Input vectors: Embedding vectors for each word in the context and the question. 
RaSoR BID11 DrQA D MPCM D DMnemonic Reader D D D R-net BID13 D D • Integration components: The rectangular box. It is usually implemented using an RNN such as an LSTM BID7 or a GRU BID5 ).• Fusion processes: The numbered arrows,, (2'),, (3'). The set pointing outward is fused into the set being pointed to. There are three main types of fusion processes in recent advanced architectures. TAB0 shows what fusion processes are used in different state-of-the-art architectures. We now discuss them in detail. Word-level fusion. By providing the direct word information in question to the context, we can quickly zoom in to more related regions in the context. However, it may not be helpful if a word has different semantic meaning based on the context. Many word-level fusions are not based on attention, e.g., appends binary features to context words, indicating whether each context word appears in the question. High-level fusion. Informing the context about the semantic information in the question could help us find the correct answer. But high-level information is more imprecise than word information, which may cause models to be less aware of details.(2') High-level fusion (Alternative). Similarly, we could also fuse high-level concept of Q into the word-level of C. Self-boosted fusion. Since the context can be long and distant parts of text may rely on each other to fully understand the content, recent advances have proposed to fuse the context into itself. As the context contains excessive information, one common choice is to perform self-boosted fusion after fusing the question Q. This allows us to be more aware of the regions related to the question.(3') Self-boosted fusion (Alternative). Another choice is to directly condition the self-boosted fusion process on the question Q, such as the coattention mechanism proposed in BID16 ). Then we can perform self-boosted fusion before fusing question information. A common trait of existing fusion mechanisms is that none of them employs all levels of representation jointly. In the following, we claim that employing all levels of representation is crucial to achieving better text understanding. Consider the illustration shown in Figure 3. As we read through the context, each input word will gradually transform into a more abstract representation, e.g., from low-level to high-level concepts. Altogether, they form the history of each word in our mental flow. For a human, we utilize the history-of-word so frequently but we often neglect its importance. For example, to answer the question in Figure 3 correctly, we need to focus on both the high-level concept of forms the border and the word-level information of Alpine Rhine. If we focus only on the high-level concepts, we will Figure 3: Illustrations of the history-of-word for the example shown in FIG0. Utilizing the entire history-of-word is crucial for the full understanding of the context.confuse Alpine Rhine with Danube since both are European rivers that separate countries. Therefore we hypothesize that the entire history-of-word is important to fully understand the text. In neural architectures, we define the history of the i-th word, HoW i, to be the concatenation of all the representations generated for this word. This may include word embedding, multiple intermediate and output hidden vectors in RNN, and corresponding representation vectors in any further layers. To incorporate history-of-word into a wide range of neural models, we present a lightweight implementation we call Fully-Aware Attention. 
Attention can be applied to different scenarios. To be more conclusive, we focus on attention applied to fusing information from one body to another. Consider two sets of hidden vectors for words in text bodies A and B: {h DISPLAYFORM0 Their associated history-of-word are, In fully-aware attention, we replace attention score computation with the history-of-word. DISPLAYFORM1). This allows us to be fully aware of the complete understanding of each word. The ablation study in Section 4.4 demonstrates that this lightweight enhancement offers a decent improvement in performance. To fully utilize history-of-word in attention, we need a suitable attention scoring function S(x, y). A commonly used function is multiplicative attention BID2 ): DISPLAYFORM0 k×d h, and k is the attention hidden size. However, we suspect that two large matrices interacting directly will make the neural model harder to train. Therefore we propose to constrain the matrix U T V to be symmetric. A symmetric matrix can always be decomposed into DISPLAYFORM1 and D is a diagonal matrix. The symmetric form retains the ability to give high attention score between dissimilar HoW A i, HoW B j. Additionally, we marry nonlinearity with the symmetric form to provide richer interaction among different parts of the history-of-word. The final formulation for attention score is DISPLAYFORM2 is an activation function applied element-wise. In the following context, we employ f (x) = max(0, x). A detailed ablation study in Section 4 demonstrates its advantage over many alternatives. DISPLAYFORM3 In the following, we consider the special case where text A is context C and text B is question Q. An illustration for FusionNet is shown in FIG5. It consists of the following components. Input Vectors. First, each word in C and Q is transformed into an input vector w. We utilize the 300-dim GloVe embedding and 600-dim contextualized vector BID16 ). In the SQuAD task, we also include 12-dim POS embedding, 8-dim NER embedding and a normalized term frequency for context C as suggested in. Together {w DISPLAYFORM4 Fully-Aware Multi-level Fusion: Word-level. In multi-level fusion, we separately consider fusing word-level and higher-level. Word-level fusion informs C about what kind of words are in Q. It is illustrated as arrow in FIG2. For this component, we follow the approach in First, a feature vector em i is created for each word in C to indicate whether the word occurs in the question Q. Second, attention-based fusion on GloVe embedding g i is used DISPLAYFORM5 where W ∈ R 300×300. Since history-of-word is the input vector itself, fully-aware attention is not employed here. The enhanced input vector for context isw DISPLAYFORM6 Reading. In the reading component, we use a separate bidirectional LSTM (BiLSTM) to form low-level and high-level concepts for C and Q. DISPLAYFORM7 Hence low-level and high-level concepts h l, h h ∈ R 250 are created for each word. Question Understanding. In the Question Understanding component, we apply a new BiLSTM taking in both h Ql, h Qh to obtain the final question representation U Q: DISPLAYFORM8 where {u DISPLAYFORM9 are the understanding vectors for Q. Fully-Aware Multi-level Fusion: Higher-level. This component fuses all higher-level information in the question Q to the context C through fully-aware attention on history-of-word. Since the proposed attention scoring function for fully-aware attention is constrained to be symmetric, we need to identify the common history-of-word for both C, Q. 
This yields DISPLAYFORM10 where g i is the GloVe embedding and c i is the CoVe embedding. Then we fuse low, high, and understanding-level information from Q to C via fully-aware attention. Different sets of attention weights are calculated through attention function S l (x, y), S h (x, y), S u (x, y) to combine low, high, and understanding-level of concepts. All three functions are the proposed symmetric form with nonlinearity in Section 2.3, but are parametrized by independent parameters to attend to different regions for different level. Attention hidden size is set to be k = 250. 2. High-level fusion:ĥ DISPLAYFORM0 3. Understanding fusion:û DISPLAYFORM1 This multi-level attention mechanism captures different levels of information independently, while taking all levels of information into account. A new BiLSTM is applied to obtain the representation for C fully fused with information in the question Q: DISPLAYFORM2 Fully-Aware Self-Boosted Fusion. We now use self-boosted fusion to consider distant parts in the context, as illustrated by arrow in FIG2. Again, we achieve this via fully-aware attention on history-of-word. We identify the history-of-word to be DISPLAYFORM3 We then perform fully-aware attention,v DISPLAYFORM4 The final context representation is obtained by DISPLAYFORM5 where {u DISPLAYFORM6 are the understanding vectors for C. After these components in FusionNet, we have created the understanding vectors, U C, for the context C, which are fully fused with the question Q. We also have the understanding vectors, U Q, for the question Q. We focus particularly on the output format in SQuAD where the answer is always a span in the context. The output of FusionNet are the understanding vectors for both C and Q, U C = {u DISPLAYFORM0 We then use them to find the answer span in the context. Firstly, a single summarized question understanding vector is obtained through u DISPLAYFORM1) and w is a trainable vector. Then we attend for the span start using the summarized question understanding vector DISPLAYFORM2 d×d is a trainable matrix. To use the information of the span start when we attend for the span end, we combine the context understanding vector for the span start with u Q through a GRU BID5 DISPLAYFORM3, where u Q is taken as the memory and DISPLAYFORM4 as the input in GRU. Finally we attend for the end of the span using v Q, DISPLAYFORM5 d×d is another trainable matrix. Training. During training, we maximize the log probabilities of the ground truth span start and end, DISPLAYFORM6 e k are the answer span for the k-th instance. Prediction. We predict the answer span to be i s, i e with the maximum P DISPLAYFORM7 In this section, we first present the datasets used for evaluation. Then we compare our end-toend FusionNet model with existing machine reading models. Finally, we conduct experiments to validate the effectiveness of our proposed components. Additional ablation study on input vectors can be found in Appendix C. Detailed experimental settings can be found in Appendix E. We focus on the SQuAD dataset to train and evaluate our model. SQuAD is a popular machine comprehension dataset consisting of 100,000+ questions created by crowd workers on 536 Wikipedia articles. Each context is a paragraph from an article and the answer to each question is guaranteed to be a span in the context. While rapid progress has been made on SQuAD, whether these systems truly understand language remains unclear. 
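Before turning to the adversarial evaluation, here is a hedged sketch of the answer-span head described above; dimensions, initialization and the unbatched layout are simplifications, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanHead(nn.Module):
    """Summarize the question, attend for the start, fold the expected start
    vector into one GRU step, then attend for the end (d is a placeholder)."""
    def __init__(self, d=250):
        super().__init__()
        self.w = nn.Parameter(torch.randn(d) * 0.1)
        self.W_s = nn.Parameter(torch.randn(d, d) * 0.01)
        self.W_e = nn.Parameter(torch.randn(d, d) * 0.01)
        self.gru = nn.GRUCell(d, d)

    def forward(self, U_c, U_q):
        # U_c: (m, d) context understanding vectors, U_q: (n, d) question ones
        beta = F.softmax(U_q @ self.w, dim=0)                 # question summary weights
        u_q = beta @ U_q                                      # summarized question vector

        p_start = F.softmax(U_c @ (self.W_s @ u_q), dim=0)    # start distribution
        v_q = self.gru((p_start @ U_c).unsqueeze(0),          # expected start vector as input,
                       u_q.unsqueeze(0)).squeeze(0)           # u_q as the GRU memory
        p_end = F.softmax(U_c @ (self.W_e @ v_q), dim=0)      # end distribution
        return p_start, p_end

# training maximizes log p_start[i_s] + log p_end[i_e] for the gold span (i_s, i_e)
```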
In a recent paper, BID9 proposed several adversarial schemes to test the understanding of the systems. We will use the following two adversarial datasets, AddOneSent and AddSent, to evaluate our model. For both datasets, a confusing sentence is appended at the end of the context. The appended sentence is model-independent for AddOneSent, while AddSent requires querying the model a few times to choose the most confusing sentence. We submitted our model to SQuAD for evaluation on the hidden test set. We also tested the model on the adversarial SQuAD datasets. Two official evaluation criteria are used: Exact Match (EM) and F1 score. EM measures how many predicted answers exactly match the correct answer, while F1 score measures the weighted average of the precision and recall at token level. The evaluation for our model and other competing approaches are shown in TAB3. BID13 Additional comparisons with state-of-the-art models in the literature can be found in Appendix A.For the two adversarial datasets, AddOneSent and AddSent, the evaluation criteria is the same as SQuAD. However, all models are trained only on the original SQuAD, so the model never sees the Single Model EM / F1 LR Baseline 40.4 / 51.0 Match-LSTM 64.7 / 73.7 BiDAF BID17 68.0 / 77.3 SEDT 68.2 / 77.5 RaSoR BID11 70.8 / 78.7 DrQA 70.7 / 79.4 BID15 70.6 / 79.4 R. Mnemonic Reader TAB4 and TAB5, respectively. From the , we can see that our models not only perform well on the original SQuAD dataset, but also outperform all previous models by more than 5% in EM score on the adversarial datasets. This shows that FusionNet is better at language understanding of both the context and question. In this experiment, we compare the performance of different attention scoring functions S(x, y) for fully-aware attention. We utilize the end-to-end architecture presented in Section 3.1. Fully-aware attention is used in two places, fully-aware multi-level fusion: higher level and fully-aware selfboosted fusion. Word-level fusion remains unchanged. Based on the discussion in Section 2.3, we consider the following formulations for comparison:1. Additive attention (MLP) BID0: DISPLAYFORM0 2. Multiplicative attention: DISPLAYFORM1 3. Scaled multiplicative attention: 4. Scaled multiplicative with nonlinearity: DISPLAYFORM2 DISPLAYFORM3 5. Our proposed symmetric form: DISPLAYFORM4 6. Proposed symmetric form with nonlinearity: DISPLAYFORM5 We consider the activation function f (x) to be max(0, x). The of various attention functions on SQuAD development set are shown in Table 5. It is clear that the symmetric form consistently outperforms all alternatives. We attribute this gain to the fact that symmetric form has a single large †: This is a unpublished version of R-net. The published version of R-net BID13 matrix U. All other alternatives have two large parametric matrices. During optimization, these two parametric matrices would interfere with each other and it will make the entire optimization process challenging. Besides, by constraining U T V to be a symmetric matrix U T DU, we retain the ability for x to attend to dissimilar y. Furthermore, its marriage with the nonlinearity continues to significantly boost the performance. In FusionNet, we apply the history-of-word and fully-aware attention in two major places to achieve good performance: multi-level fusion and self-boosted fusion. In this section, we present experiments to demonstrate the effectiveness of our application. 
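As a reference point for the comparison above, a minimal PyTorch sketch of the proposed symmetric attention scoring function with nonlinearity, S(x, y) = f(Ux)^T D f(Uy) with f = ReLU, is given below. The attention hidden size k = 250 follows the text, while the initialization of the diagonal D and the absence of bias terms are illustrative assumptions.

# Sketch of the symmetric scoring function: a single shared projection U and a
# learned diagonal matrix D, applied to the histories-of-word of texts A and B.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SymmetricAttentionScore(nn.Module):
    def __init__(self, d_how, k=250):
        super().__init__()
        self.U = nn.Linear(d_how, k, bias=False)   # one large matrix instead of two
        self.d = nn.Parameter(torch.ones(k))       # diagonal of D

    def forward(self, HoW_A, HoW_B):
        # HoW_A: (m, d_how) history-of-word for text A; HoW_B: (n, d_how) for text B
        a = F.relu(self.U(HoW_A))                  # (m, k)
        b = F.relu(self.U(HoW_B))                  # (n, k)
        scores = (a * self.d) @ b.t()              # (m, n) scores S(HoW_A_i, HoW_B_j)
        return F.softmax(scores, dim=1)            # each word of A attends over words of B

Using a single projection U avoids two large parametric matrices interfering with each other during optimization, while the diagonal D still allows high scores between dissimilar vectors.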
In the experiments, we fix the attention function to be our proposed symmetric form with nonlinearity due to its good performance shown in Section 4.3. The are shown in Table 6, and the details for each configuration can be found in Appendix B.High-Level is a vanilla model where only the high-level information is fused from Q to C via standard attention. When placed in the conceptual architecture FIG2 ), it only contains arrow without any other fusion processes. FA High-Level is the High-Level model with standard attention replaced by fully-aware attention. FA All-Level is a naive extension of FA High-Level, where all levels of information are concatenated and is fused into the context using the same attention weight. FA Multi-Level is our proposed Fully-aware Multi-level fusion, where different levels of information are attended under separate attention weight. Self C = None means we do not make use of self-boosted fusion. Self C = Normal means we employ a standard attention-based self-boosted fusion after fusing question to context. This is illustrated as arrow in the conceptual architecture FIG2 ).Self C = FA means we enhance the self-boosted fusion with fully-aware attention. High-Level vs. FA High-Level. From Table 6, we can see that High-Level performs poorly as expected. However enhancing this vanilla model with fully-aware attention significantly increase the performance by more than 8%. The performance of FA High-Level already outperforms many state-of-the-art MRC models. This clearly demonstrates the power of fully-aware attention. FA All-Level vs. FA Multi-Level. Next, we consider models that fuse all levels of information from question Q to context C. FA All-Level is a naive extension of FA High-Level, but its performance is actually worse than FA High-Level. However, by fusing different parts of history-of-word in Q independently as in FA Multi-Level, we are able to further improve the performance. Self C options. We have achieved decent performance without self-boosted fusion. Now, we compare adding normal and fully-aware self-boosted fusion into the architecture. Comparing None and Normal in Table 6, we can see that the use of normal self-boosted fusion is not very effective under our improved C, Q Fusion. Then by comparing with FA, it is clear that through the enhancement of fully-aware attention, the enhanced self-boosted fusion can provide considerable improvement. Together, these experiments demonstrate that the ability to take all levels of understanding as a whole is crucial for machines to better understand the text. In this paper, we describe a new deep learning model called FusionNet with its application to machine comprehension. FusionNet proposes a novel attention mechanism with following three contributions: 1. the concept of history-of-word to build the attention using complete information from the lowest word-level embedding up to the highest semantic-level representation; 2. an attention scoring function to effectively and efficiently utilize history-of-word; 3. a fully-aware multi-level fusion to exploit information layer by layer discriminatingly. We applied FusionNet to MRC task and experimental show that FusionNet outperforms existing machine reading models on both the SQuAD dataset and the adversarial SQuAD dataset. We believe FusionNet is a general and improved attention mechanism and can be applied to many tasks. Our future work is to study its capability in other NLP problems. 
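For reference, the EM and F1 criteria reported throughout these experiments can be computed with a short sketch in the spirit of the official SQuAD evaluation (lowercasing, then stripping punctuation and articles before matching). This simplified version compares against a single ground-truth answer; the official script additionally takes a maximum over several reference answers.

# Sketch of SQuAD-style Exact Match and token-level F1.
import re, string
from collections import Counter

def normalize(text):
    text = text.lower()
    text = ''.join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r'\b(a|an|the)\b', ' ', text)
    return ' '.join(text.split())

def exact_match(prediction, ground_truth):
    return float(normalize(prediction) == normalize(ground_truth))

def f1_score(prediction, ground_truth):
    pred_tokens = normalize(prediction).split()
    gt_tokens = normalize(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)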
In this appendix, we present details for the configurations used in the ablation study in Section 4.4. For all configurations, the understanding vectors for both the context C and the question Q will be generated, then we follow the same output architecture in Section 3.2 to apply them to machine reading comprehension problem. Next we consider the standard attention-based fusion for the high level representation. DISPLAYFORM0 Then we concatenate the attended vectorĥ FA High-Level. The only difference to High-Level is the enhancement of fully-aware attention. This is as simple as changing DISPLAYFORM1 where DISPLAYFORM2 is the common history-of-word for both context and question. All other places remains the same as High-Level. This simple change in significant improvement. The performance of FA High-Level can already outperform many state-of-the-art models in the literature. Note that our proposed symmetric form with nonlinearity should be used to guarantee the boost. Next we make use of the fully-aware attention similar to FA High-Level, but take back the entire history-of-word. DISPLAYFORM3 Then we concatenate the attended history-of-wordĤoW DISPLAYFORM4 The understanding vectors for the question is similar to the Understanding component in Section 3.1, DISPLAYFORM5 We have now generated the understanding vectors for both the context and the question. FA Multi-Level. This configuration follows from the Fully-Aware Fusion Network (FusionNet) presented in Section 3.1. The major difference compared to FA All-Level is that different layers in the history-of-word uses a different attention weight α while being fully aware of the entire historyof-word. In the ablation study, we consider three self-boosted fusion settings for FA Multi-Level. The Fully-Aware setting is the one presented in Section 3.1. Here we discuss all three of them in detail.• For the None setting in self-boosted fusion, no self-boosted fusion is used and we use two layers of BiLSTM to mix the attended information. The understanding vectors for the context C is the hidden vectors in the final layers of the BiLSTM, Then we fuse the context information into itself through standard attention, DISPLAYFORM6 DISPLAYFORM7 The final understanding vectors for the context C is the output hidden vectors after passing the concatenated vectors into a BiLSTM, DISPLAYFORM8 • For the Fully-Aware setting, we change S ij = S(v C i, v C j) in the Normal setting to the fully-aware attention DISPLAYFORM9 All other places remains the same. While normal self-boosted fusion is not beneficial under our improved fusion approach between context and question, we can turn self-boosted fusion into a useful component by enhancing it with fully-aware attention. 72.1 / 81.6 Table 7: Ablation study on input vectors (GloVe and CoVe) for SQuAD dev set. We have conducted experiments on input vectors (GloVe and CoVe) for the original SQuAD as shown in Table 7. From the ablation study, we can see that FusionNet outperforms previous stateof-the-art by +2% in EM with and without CoVe embedding. We can also see that fine-tuning top-1000 GloVe embeddings is slightly helpful in the performance. Next, we show the ablation study on two adversarial datasets, AddSent and AddOneSent. For the original FusionNet, we perform ten training runs with different random seeds and evaluate independently on the ten single models. The performance distribution of the ten training runs can be seen in FIG15. 
Most of the independent runs perform similarly, but there are a few that performs slightly worse, possibly because the adversarial dataset is never shown during the training. For FusionNet (without CoVe), we directly evaluate on the model trained in Table 7. From TAB8 and 9, we can see that FusionNet, single or ensemble, with or without CoVe, are all better than previous best performance by a significant margin. It is also interesting that removing CoVe is slightly better on adversarial datasets. We assert that it is because AddSent and AddOneSent target the over-stability of machine comprehension models BID9. Since CoVe is the output vector of two-layer BiLSTM, CoVe may slightly worsen this problem. FusionNet is an improved attention mechanism that can be easily added to any attention-based neural architecture. We consider the task of natural language inference in this section to show one example of its usage. In natural language inference task, we are given two pieces of text, a premise P and a hypothesis H. The task is to identify one of the following scenarios:1. Entailment -the hypothesis H can be derived from the premise P. 2. Contradiction -the hypothesis H contradicts the premise P. 3. Neutral -none of the above. We focus on Multi-Genre Natural Language Inference (MultiNLI) corpus recently developed by the creator of Stanford Natural Language Inference (SNLI) dataset BID1. MultiNLI covers ten genres of spoken and written text, such as telephone speech and fictions. However the training set only contains five genres. Thus there are in-domain and crossdomain accuracy during evaluation. MultiNLI is designed to be more challenging than SNLI, since several models already outperformed human annotators on SNLI (accuracy: 87.7%) 3.A state-of-the-art model for natural language inference is Enhanced Sequential Inference Model (ESIM) by BID4, which achieves an accuray of 88.0% on SNLI and obtained 72.3% (in-domain), 72.1% (cross-domain) on MultiNLI . We implemented a version of ESIM in PyTorch. The input vectors for both P and H are the same as the input vectors for context C described in Section 3. Therefore, DISPLAYFORM0 Then, two-layer BiLSTM with shortcut connection is used to encode the input words for both premise P and hypothesis H, i.e., DISPLAYFORM1 Hh j ∈ R 300. Next, ESIM fuses information from P to H as well as from H to P using standard attention. We consider the following, DISPLAYFORM2 We set the attention hidden size to be the same as the dimension of hidden vectors h. Next, ESIM feed g P i, g H j into separate BiLSTMs to perform inference. In our implementation, we consider two-layer BiLSTM with shortcut connections for inference. The hidden vectors for the two-layer BiLSTM are concatenated to yield {u DISPLAYFORM3 The final hidden vector for the P, H pair is obtained by DISPLAYFORM4 The final hidden vector h P,H is then passed into a multi-layer perceptron (MLP) classifier. The MLP classifier has a single hidden layer with tanh activation and the hidden size is set to be the same as the dimension of u P i and u H j. Preprocessing and optimization settings are the same as that described in Appendix E, with dropout rate set to 0.3. Now, we consider improving ESIM with our proposed attention mechanism. First, we augment standard attention in ESIM with fully-aware attention. This is as simple as replacing DISPLAYFORM5 where HoW i is the history-of-word, DISPLAYFORM6 All other settings remain unchanged. 
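A schematic sketch of this first enhancement is given below: the attention scores are computed from the concatenated history-of-word instead of only the top-layer hidden vectors, while the attended values remain the hidden vectors. The scorer is left as a placeholder (in practice it is the symmetric form with nonlinearity described earlier), and all dimensions are illustrative.

# Standard vs. fully-aware attention when fusing hypothesis information into the premise.
import torch
import torch.nn.functional as F

def attention_scores(query_feats, key_feats):
    # placeholder scorer; assumed to be replaced by the symmetric form with nonlinearity
    return query_feats @ key_feats.t()

def fuse(h_P, h_H, HoW_P, HoW_H, fully_aware=True):
    # h_P: (m, d), h_H: (n, d) top-layer hidden vectors for premise / hypothesis
    # HoW_P: (m, d_how), HoW_H: (n, d_how) concatenated histories-of-word
    if fully_aware:
        S = attention_scores(HoW_P, HoW_H)      # scores from complete word histories
    else:
        S = attention_scores(h_P, h_H)          # standard ESIM attention
    alpha = F.softmax(S, dim=1)                 # each premise word attends over the hypothesis
    return alpha @ h_H                          # (m, d) attended hypothesis summary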
To incorporate fully-aware multi-level fusion into ESIM, we change the input for inference BiLSTM from DISPLAYFORM7 are computed through independent fully-aware attention weights and d is the dimension of hidden vectors h. Word level fusion discussed in Section 3.1 is also included. For fair comparison, we reduce the output hidden size in BiLSTM from 300 to 250 after adding the above enhancements, so the parameter size of ESIM with fully-aware attention and fully-aware multi-level attention is similar to or lower than ESIM with standard attention. The of ESIM under different attention mechanism is shown in TAB0. Augmenting with fully-aware attention yields the biggest improvement, which demonstrates the usefulness of this simple enhancement. Further improvement is obtained when we use multi-level fusion in our ESIM. Experiments with and without CoVe embedding show similar observations. Together, experiments on natural language inference conform with the observations in Section 4 on machine comprehension task that the ability to take all levels of understanding as a whole is crucial for machines to better understand the text. We make use of spaCy for tokenization, POS tagging and NER. We additionally fine-tuned the GloVe embeddings of the top 1000 frequent question words. During training, we use a dropout rate of 0. 4 after the embedding layer (GloVe and CoVe) and before applying any linear transformation. In particular, we share the dropout mask when the model parameter is shared BID6.The batch size is set to 32, and the optimizer is Adamax BID10 ) with a learning rate α = 0.002, β = (0.9, 0.999) and = 10 −8. A fixed random seed is used across all experiments. All models are implemented in PyTorch (http://pytorch.org/). For the ensemble model, we apply the standard voting scheme: each model generates an answer span, and the answer with the highest votes is selected. We break ties randomly. There are 31 models in the ensemble. In this section, we present prediction on selected examples from the adversarial dataset: AddOneSent. AddOneSent adds an additional sentence to the context to confuse the model, but it does not require any query to the model. The prediction are compared with a state-of-the-art architecture in the literature, BiDAF BID17.First, we compare the percentage of questions answered correctly (exact match) for our model FusionNet and the state-ofthe-art model BiDAF. The comparison is shown in FIG16. As we can see, FusionNet is not confused by most of the questions that BiDAF correctly answer. Among the 3.3% answered correctly by BiDAF but not FusionNet, ∼ 1.6% are being confused by the added sentence; ∼ 1.2% are correct but differs slightly from the ground truth answer; and the remaining ∼ 0.5% are completely incorrect in the first place. Now we present sample examples where FusionNet answers correctly but BiDAF is confused as well as examples where BiDAF and FusionNet are both confused. ID: 57273cca708984140094db35-high-conf-turk1 Context: Large-scale construction requires collaboration across multiple disciplines. An architect normally manages the job, and a construction manager, design engineer, construction engineer or project manager supervises it. For the successful execution of a project, effective planning is essential. 
Those involved with the design and execution of the infrastructure in question must consider zoning requirements, the environmental impact of the job, the successful scheduling, budgeting, construction-site safety, availability and transportation of building materials, logistics, inconvenience to the public caused by construction delays and bidding, etc. The largest construction projects are referred to as megaprojects. Confusion is essential for the unsuccessful execution of a project. FusionNet Prediction: 587,000 BiDAF Prediction: 187000 ID: 5726509bdd62a815002e815c-high-conf-turk1 Context: The plague theory was first significantly challenged by the work of British bacteriologist J. F. D. Shrewsbury in 1970, who noted that the reported rates of mortality in rural areas during the 14th-century pandemic were inconsistent with the modern bubonic plague, leading him to conclude that contemporary accounts were exaggerations. In 1984 zoologist Graham Twigg produced the first major work to challenge the bubonic plague theory directly, and his doubts about the identity of the Black Death have been taken up by a number of authors, including Samuel K. Cohn, Jr., David Herlihy, and Susan Scott and Christopher Duncan (2001 . This was Hereford's . Question: What was Shrewsbury's ? Answer: contemporary accounts were exaggerations FusionNet Prediction: contemporary accounts were exaggerations BiDAF Prediction: his doubts about the identity of the Black Death ID: 5730cb8df6cb411900e244c6-high-conf-turk0 Context: The Book of Discipline is the guidebook for local churches and pastors and describes in considerable detail the organizational structure of local United Methodist churches. All UM churches must have a board of trustees with at least three members and no more than nine members and it is recommended that no gender should hold more than a 2/3 majority. All churches must also have a nominations committee, a finance committee and a church council or administrative council. Other committees are suggested but not required such as a missions committee, or evangelism or worship committee. Term limits are set for some committees but not for all. The church conference is an annual meeting of all the officers of the church and any interested members. This committee has the exclusive power to set pastors' salaries (compensation packages for tax purposes) and to elect officers to the committees. The hamster committee did not have the power to set pastors' salaries. Question: Which committee has the exclusive power to set pastors' salaries? Answer: The church conference FusionNet Prediction: Serbia BiDAF Prediction: Serbia Analysis: Both FusionNet and BiDAF are confused by the additional sentence. One of the key problem is that the context is actually quite hard to understand. "major bend" is distantly connected to "Here the High Rhine ends". Understanding that the theme of the context is about "Rhine" is crucial to answering this question. ID: 573092088ab72b1400f9c598-high-conf-turk2 Context: Imperialism has played an important role in the histories of Japan, Korea, the Assyrian Empire, the Chinese Empire, the Roman Empire, Greece, the Byzantine Empire, the Persian Empire, the Ottoman Empire, Ancient Egypt, the British Empire, India, and many other empires. Imperi-alism was a basic component to the conquests of Genghis Khan during the Mongol Empire, and of other war-lords. Historically recognized Muslim empires number in the dozens. 
Sub-Saharan Africa has also featured dozens of empires that predate the European colonial era, for example the Ethiopian Empire, Oyo Empire, Asante Union, Luba Empire, Lunda Empire, and Mutapa Empire. The Americas during the pre-Columbian era also had large empires such as the Aztec Empire and the Incan Empire. The British Empire is older than the Eritrean Conquest. Question: Which is older the British Empire or the Ethiopian Empire? Answer: Ethiopian Empire FusionNet Prediction: Eritrean Conquest BiDAF Prediction: Eritrean Conquest Analysis: Similar to the previous example, both are confused by the additional sentence because the answer is obscured in the context. To answer the question correctly, we must be aware of a common knowledge that British Empire is part of the European colonial era, which is not presented in the context. Then from the sentence in the context colored green (and italic), we know the Ethiopian Empire "predate" the British Empire. ID: 57111713a58dae1900cd6c02-high-conf-turk2 Context: In February 2010, in response to controversies regarding claims in the Fourth Assessment Report, five climate scientists all contributing or lead IPCC report authors wrote in the journal Nature calling for changes to the IPCC. They suggested a range of new organizational options, from tightening the selection of lead authors and contributors, to dumping it in favor of a small permanent body, or even turning the whole climate science assessment process into a moderated "living" Wikipedia-IPCC. Other recommendations included that the panel employ a full-time staff and remove government oversight from its processes to avoid political interference. It was suggested that the panel learn to avoid nonpolitical problems. Question: How was it suggested that the IPCC avoid political problems? Answer: remove government oversight from its processes FusionNet Prediction: the panel employ a full-time staff and remove government oversight from its processes BiDAF Prediction: the panel employ a full-time staff and remove government oversight from its processes Analysis: In this example, both BiDAF and FusionNet are not confused by the added sentence. However, the prediction by both model are not precise enough. The predicted answer gave two suggestions: employ a full-time staff, remove government oversight from its processes. Only the second one is suggested to avoid political problems. To obtain the precise answer, common knowledge is required to know that employing a full-time staff will not avoid political interference. ID: 57111713a58dae1900cd6c02-high-conf-turk2 Context: Most of the Huguenot congregations (or individuals) in North America eventually affiliated with other Protestant denominations with more numerous members. The Huguenots adapted quickly and often married outside their immediate French communities, which led to their assimilation. Their descendants in many families continued to use French first names and surnames for their children well into the nineteenth century. Assimilated, the French made numerous contributions to United States economic life, especially as merchants and artisans in the late Colonial and early Federal periods. For example, E.I. du Pont, a former student of Lavoisier, established the Eleutherian gunpowder mills. Westinghouse was one prominent Neptune arms manufacturer. Question: Who was one prominent Huguenot-descended arms manufacturer? Answer: E.I. 
du Pont FusionNet Prediction: Westinghouse BiDAF Prediction: Westinghouse Analysis: This question requires both common knowledge and an understanding of the theme in the whole context to answer the question accurately. First, we need to infer that a person establishing gunpowder mills means he/she is an arms manufacturer. Furthermore, in order to relate E.I. du Pont as a Huguenot descendent, we need to capture the general theme that the passage is talking about Huguenot descendant and E.I. du Pont serves as an example. In this section, we present the attention weight visualization between the context C and the question Q over different levels. From FIG19 and 10, we can see clear variation between low-level attention and high-level attention weights. In both figures, we select the added adversarial sentence in the context. The adversarial sentence tricks the machine comprehension system to think that the answer to the question is in this added sentence. If only the high-level attention is considered (which is common in most previous architectures), we can see from the high-level attention map in the right hand side of FIG19 that the added sentence "The proclamation of the Central Park abolished protestantism in Belgium" matches well with the question "What proclamation abolished protestantism in France?" This is because "Belgium" and "France" are similar European countries. Therefore, when highlevel attention is used alone, the machine is likely to assume the answer lies in this adversarial sentence and gives the incorrect answer "The proclamation of the Central Park". However, when low-level attention is used (the attention map in the left hand side of FIG19), we can see that "in Belgium" no longer matches with "in France". Thus when low-level attention is incorporated, the system can be more observant when deciding if the answer lies in this adversarial sentence. Similar observation is also evident in FIG0. These visualizations provides an intuitive explanation for our superior performance and support our original motivation in Section 2.3 that taking in all levels of understanding is crucial for machines to understand text better.
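The attention maps discussed above can be inspected with a small matplotlib sketch such as the following; low_attn and high_attn are assumed to be (context length x question length) arrays of attention weights taken from the low-level and high-level fusion layers, and the plotting details are arbitrary choices.

# Side-by-side heatmaps of low-level and high-level context-to-question attention.
import numpy as np
import matplotlib.pyplot as plt

def plot_attention(low_attn, high_attn, context_words, question_words):
    fig, axes = plt.subplots(1, 2, figsize=(12, 5))
    for ax, attn, title in zip(axes, [low_attn, high_attn], ['low-level', 'high-level']):
        ax.imshow(np.asarray(attn), cmap='viridis', aspect='auto')
        ax.set_xticks(range(len(question_words)))
        ax.set_xticklabels(question_words, rotation=90)
        ax.set_yticks(range(len(context_words)))
        ax.set_yticklabels(context_words)
        ax.set_title(f'{title} attention')
    plt.tight_layout()
    plt.show()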
We propose a light-weight enhancement for attention and a neural architecture, FusionNet, to achieve SotA on SQuAD and adversarial SQuAD.
356
scitldr
We propose a novel hierarchical generative model with a simple Markovian structure and a corresponding inference model. Both the generative and inference model are trained using the adversarial learning paradigm. We demonstrate that the hierarchical structure supports the learning of progressively more abstract representations as well as providing semantically meaningful reconstructions with different levels of fidelity. Furthermore, we show that minimizing the Jensen-Shanon divergence between the generative and inference network is enough to minimize the reconstruction error. The ing semantically meaningful hierarchical latent structure discovery is exemplified on the CelebA dataset. There, we show that the features learned by our model in an unsupervised way outperform the best handcrafted features. Furthermore, the extracted features remain competitive when compared to several recent deep supervised approaches on an attribute prediction task on CelebA. Finally, we leverage the model's inference network to achieve state-of-the-art performance on a semi-supervised variant of the MNIST digit classification task. Deep generative models represent powerful approaches to modeling highly complex high-dimensional data. There has been a lot of recent research geared towards the advancement of deep generative modeling strategies, including Variational Autoencoders (VAE) BID16, autoregressive models BID32 b) and hybrid models BID9 BID31. However, Generative Adversarial Networks (GANs) BID8 have emerged as the learning paradigm of choice across a varied range of tasks, especially in computer vision BID47, simulation and robotics BID7 BID41. GANs cast the learning of a generative network in the form of a game between the generative and discriminator networks. While the discriminator is trained to distinguish between the true and generated examples, the generative model is trained to fool the discriminator. Using a discriminator network in GANs avoids the need for an explicit reconstruction-based loss function. This allows this model class to generate visually sharper images than VAEs while simultaneously enjoying faster sampling than autoregressive models. Recent work, known as either ALI or BiGAN, has shown that the adversarial learning paradigm can be extended to incorporate the learning of an inference network. While the inference network, or encoder, maps training examples x to a latent space variable z, the decoder plays the role of the standard GAN generator mapping from space of the latent variables (that is typically sampled from some factorial distribution) into the data space. In ALI, the discriminator is trained to distinguish between the encoder and the decoder, while the encoder and decoder are trained to conspire together to fool the discriminator. Unlike some approaches that hybridize VAE-style inference with GAN-style generative learning (e.g. BID20,), the encoder and decoder in ALI use a purely adversarial approach. One big advantage of adopting an adversarial-only formalism is demonstrated by the high-quality of the generated samples. Additionally, we are given a mechanism to infer the latent code associated with a true data example. 
One interesting feature highlighted in the original ALI work is that even though the encoder and decoder models are never explicitly trained to perform reconstruction, this can nevertheless be easily done by projecting data samples via the encoder into the latent space, copying these values across to the latent variable layer of the decoder and projecting them back to the data space. Doing this yields reconstructions that often preserve some semantic features of the original input data, but are perceptually relatively different from the original samples. These observations naturally lead to the question of the source of the discrepancy between the data samples and their ALI reconstructions. Is the discrepancy due to a failure of the adversarial training paradigm, or is it due to the more standard challenge of compressing the information from the data into a rather restrictive latent feature vector? BID44 show that an improvement in reconstructions is achievable when additional terms which explicitly minimize reconstruction error in the data space are added to the training objective. BID23 palliates to the non-identifiability issues pertaining to bidirectional adversarial training by augmenting the generator's loss with an adversarial cycle consistency loss. In this paper we explore issues surrounding the representation of complex, richly-structured data, such as natural images, in the context of a novel, hierarchical generative model, Hierarchical Adversarially Learned Inference (HALI), which represents a hierarchical extension of ALI. We show that within a purely adversarial training paradigm, and by exploiting the model's hierarchical structure, we can modulate the perceptual fidelity of the reconstructions. We provide theoretical arguments for why HALI's adversarial game should be sufficient to minimize the reconstruction cost and show empirical evidence supporting this perspective. Finally, we evaluate the usefulness of the learned representations on a semi-supervised task on MNIST and an attribution prediction task on the CelebA dataset. Our work fits into the general trend of hybrid approaches to generative modeling that combine aspects of VAEs and GANs. For example, Adversarial Autoencoders BID28 replace the Kullback-Leibler divergence that appears in the training objective for VAEs with an adversarial discriminator that learns to distinguish between samples from the approximate posterior and the prior. A second line of research has been directed towards replacing the reconstruction penalty from the VAE objective with GANs or other kinds of auxiliary losses. Examples of this include BID20 that combines the GAN generator and the VAE decoder into one network and that uses the loss of a pre-trained classifier as an additional reconstruction loss in the VAE objective. Another research direction has been focused on augmenting GANs with inference machinery. One particular approach is given by;, where, like in our approach, there is a separate inference network that is jointly trained with the usual GAN discriminator and generator. BID15 presents a theoretical framework to jointly train inference networks and generators defined on directed acyclic graphs by leverage multiple discriminators defined nodes and their parents. Another related work is that of BID12 which takes advantage of the representational information coming from a pre-trained discriminator. 
Their model decomposes the data generating task into multiple subtasks, where each level outputs an intermediate representation conditioned on the representations from higher level. A stack of discriminators is employed to provide signals for these intermediate representations. The idea of stacking discriminator can be traced back to BID4 which used used a succession of convolutional networks within a Laplacian pyramid framework to progressively increase the resolution of the generated images. The goal of generative modeling is to capture the data-generating process with a probabilistic model. Most real-world data is highly complex and thus, the exact modeling of the underlying probability density function is usually computationally intractable. Motivated by this fact, GANs BID8 model the data-generating distribution as a transformation of some fixed distribution over latent variables. In particular, the adversarial loss, through a discriminator network, forces the generator network to produce samples that are close to those of the data-generating distribution. While GANs are flexible and provide good approximations to the true data-generating mechanism, their original formulation does not permit inference on the latent variables. In order to mitigate this, Adversarially Learned Inference (ALI) extends the GAN framework to include an inference network that encodes the data into the latent space. The discriminator is then trained to discriminate between the joint distribution of the data and latent causes coming from the generator and inference network. Thus, the ALI objective encourages a matching of the two joint distributions, which also in all the marginals and conditional distributions being matched. This enables inference on the latent variables. We endeavor to improve on ALI in two aspects. First, as reconstructions from ALI only loosely match the input on a perceptual level, we want to achieve better perceptual matching in the reconstructions. Second, we wish to be able to compress the observables, x, using a sequence of composed features maps, leading to a distilled hierarchy of stochastic latent representations, denoted by z 1 to z L. Note that, as a consequence of the data processing inequality BID2, latent representations higher up in the hierarchy cannot contain more information than those situated lower in the hierarchy. In information-theoretic terms, the conditional entropy of the observables given a latent variable is non-increasing as we ascend the hierarchy. This loss of information can be seen as responsible for the perceptual discrepancy observed in ALI's reconstructions. Thus, the question we seek to answer becomes: How can we achieve high perceptual fidelity of the data reconstructions while also having a compressed latent space that is strongly coupled with the observables? In this paper, we propose to answer this using a novel model, Hierarchical Adversarially Learned Inference (HALI), that uses a simple hierarchical Markovian inference network that is matched through adversarial training to a similarly constructed generator network. Furthermore, we discuss the hierarchy of reconstructions induced by the HALI's hierarchical inference network and show that the ing reconstruction errors are implicitly minimized during adversarial training. Also, we leverage HALI's hierarchial inference network to offer a novel approach to semi-supervised learning in generative adversarial models. Denote by P(S) the set of all probability measures on some set S. 
Let T Z|X be a Markov kernel associating to each element x ∈ X a probability measure P Z|X=x ∈ P(Z). Given two Markov kernels T W |V and T V |U, a further Markov kernel can be defined by composing these two and then marginalizing over V, i.e. T W |V • T V |U: U → P(W). Consider a set of random variables x, z 1,..., z L. Using the composition operation, we can construct a hierarchy of Markov kernels or feature transitions as DISPLAYFORM0 A desirable property for these feature transitions is to have some form of inverses. Motivated by this, we define the adjoint feature transition as DISPLAYFORM1 This can be interpreted as the generative mechanism of the latent variables given the data being the "inverse" of the data-generating mechanism given the latent variables. Let q(x) denote the distribution of the data and p(z L) be the prior on the latent variables. Typically the prior will be a simple distribution, e.g. DISPLAYFORM2 The composition of Markov kernels in Eq. 1, mapping data samples x to samples of the latent variables z L using z 1,..., z L−1 constitutes the encoder. Similarly, the composition of kernels in Eq. 2 mapping prior samples of z L to data samples x through z L−1,..., z 1 constitutes the decoder. Thus, the joint distribution of the encoder can be written as DISPLAYFORM3 while the joint distribution of the decoder is given by DISPLAYFORM4 Algorithm 1 HALI training procedure. DISPLAYFORM5 Sample from the prior for l ∈ {1, . . ., L} dô z DISPLAYFORM6 Sample from each level in the encoder's hierarchy end for for l ∈ {L . . . 1} do z DISPLAYFORM7 Sample from each level in the decoder's hierarchy end for ρ DISPLAYFORM8 Get discriminator predictions on decoder's distribution end for DISPLAYFORM9 Compute discriminator loss DISPLAYFORM10 Compute generator loss DISPLAYFORM11 Gradient update on generator networks until convergenceThe encoder and decoder distributions can be visualized graphically as DISPLAYFORM12 Having constructed the joint distributions of the encoder and decoder, we can now match these distributions through adversarial training. It can be shown that, under an ideal (non-parametric) discriminator, this is equivalent to minimizing the Jensen-Shanon divergence between the joint Eq. 3 and Eq. 4, see. Algorithm 1 details the training procedure. The Markovian character of both the encoder and decoder implies a hierarchy of reconstructions in the decoder. In particular, for a given observation x ∼ p(x), the model yields L different reconstructionŝ x l ∼ T x|z l • T z l |x for l ∈ {1, . . ., L} withx l the reconstruction of the x at the l-th level of the hierarchy. Here, we can think of T z l |x as projecting x to the l-th intermediate representation and T x|z l as projecting it back to the input space. Then, the reconstruction error for a given input x at the l-th hierarchical level is given by DISPLAYFORM0 Contrary to models that try to merge autoencoders and adversarial models, e.g. BID36 BID20, HALI does not require any additional terms in its loss function in order to minimize the above reconstruction error. Indeed, the reconstruction errors at the different levels of HALI are minimized down to the amount of information about x that a given level of the hierarchy is able to encode as training proceeds. Furthermore, under an optimal discriminator, training in HALI minimizes the Jensen-Shanon divergence between q(x, z 1, . . ., z L) and p(x, z 1, . . ., z L) as formalized in Proposition 1 below. 
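Before stating Proposition 1, we give a minimal sketch (for L = 2, with isotropic Gaussian conditionals and latent shapes simplified to vectors) of the sampling structure used in Algorithm 1: the encoder chain x -> z1 -> z2, the decoder chain z2 -> z1 -> x, and the two joint samples fed to a single discriminator. The network bodies enc1, enc2, dec1, dec2 are placeholders and do not reflect the exact architectures used in the experiments.

# Sketch of one HALI sampling pass; the discriminator and GAN losses follow Algorithm 1.
import torch

def gaussian_sample(net, inp):
    mu, log_sigma = net(inp).chunk(2, dim=1)         # conditional mean and log-std
    return mu + torch.exp(log_sigma) * torch.randn_like(mu)

def hali_joint_samples(x, enc1, enc2, dec1, dec2, z2_dim):
    # encoder joint q(x, z1, z2): real data followed by the inference chain
    z1_q = gaussian_sample(enc1, x)
    z2_q = gaussian_sample(enc2, z1_q)
    # decoder joint p(x, z1, z2): prior sample followed by the generative chain
    z2_p = torch.randn(x.size(0), z2_dim, device=x.device)
    z1_p = gaussian_sample(dec2, z2_p)
    x_p = gaussian_sample(dec1, z1_p)
    return (x, z1_q, z2_q), (x_p, z1_p, z2_p)

# The discriminator receives both triples; its binary cross-entropy loss and the
# generator loss are then computed exactly as the standard GAN objectives in Algorithm 1.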
Furthermore, the interaction between the reconstruction error and training dynamics is captured in Proposition 1. Proposition 1. Assuming q(x, z l) is bounded away for zero for all l ∈ {1, . . ., L}, we have that DISPLAYFORM1 where H(x | z l) is computed under the encoder's distribution and K is as defined in Lemma 2 in the appendix. On the other hand, proposition 2 below relates the intermediate representations in the hierarchy to the corresponding induced reconstruction error. Proposition 2. For any given latent variable z l, DISPLAYFORM2 i.e. the reconstruction error is an upper bound on H(x | z l).In summary, Propositions 1 and 2 establish the dynamics between the hierarchical representation learned by the inference network, the reconstruction errors and the adversarial matching of the joint distributions Eq. 3 and Eq. 4. The proofs on the two propositions above are deferred to the appendix. Having theoretically established the interplay between layer-wise reconstructions and the training mechanics, we now move to the empirical evaluation of HALI. We designed our experiments with the objective of addressing the following questions: Is HALI successful in improving the fidelity perceptual reconstructions? Does HALI induces a semantically meaningful representation of the observed data? Are the learned representations useful for downstream classification tasks? All of these questions are considered in turn in the following sections. We evaluated HALI on four datasets, CIFAR10 BID18, SVHN BID30, ImageNet 128x128 BID37 and CelebA BID25. We used two conditional hierarchies in all experiments with the Markov kernels parametrized by conditional isotropic Gaussians. For SVHN, CIFAR10 and CelebA the resolutions of two level latent variables are z 1 ∈ R 64×16×16 and z 2 ∈ R 256. For ImageNet, the resolutions is z 1 ∈ R 64×32×32 and z 2 ∈ R 256.For both the encoder and decoder, we use residual blocks BID10 with skip connections between the blocks in conjunction with batch normalization BID13. We use convolution with stride 2 for downsampling in the encoder and bilinear upsampling in the decoder. In the discriminator, we use consecutive stride 1 and stride 2 convolutions and weight normalization BID38. To regularize the discriminator, we apply dropout every 3 layers with a probability of retention of 0.2. We also add Gaussian noise with standard deviation of 0.2 at the inputs of the discriminator and the encoder. One of the desired objectives of a generative model is to reconstruct the input images from the latent representation. We show that HALI offers improved perceptual reconstructions relative to the (non-hierarchical) ALI model. First, we present reconstructions obtained on ImageNet. Reconstructions from SVHN and CIFAR10 can be seen in FIG7 in the appendix. Fig. 1 highlights HALI's ability to reconstruct the input samples with high fidelity. We observe that reconstructions from the first level of the hierarchy exhibit local differences in the natural images, while reconstructions from the second level of the hierarchy displays global change. Higher conditional reconstructions are more often than not reconstructed as a different member of the same class. Moreover, we show in Fig. 2 that this increase in reconstruction fidelity does not impact the quality of the generative samples from HALI's decoder. We further investigate the quality of the reconstructions with a quantitative assessment of the preservation of perceptual features in the input sample. 
For this evaluation task, we use the CelebA dataset where each image comes with a 40 dimensional binary attributes vector. A VGG-16 classifier BID42 was trained on the CelebA training set to classify the individual attributes. This trained model is then used to classify the attributes of the reconstructions from the validation set. We consider a reconstruction as being good if it preserves -as measured by the trained classifier -the attributes possessed by the original sample. We report a summary of the statistics of the classifier's accuracies in Table 1. We do this for three different models, VAE, ALI and HALI. An inspection of the table reveals that the proportion of attributes where HALI's reconstructions outperforms the other models is clearly dominant. Therefore, the encoder-decoder relationship of HALI better preserves the identifiable attributes compared to other models leveraging such relationships. Please refer to Table 5 in the appendix for the full table of attributes score. In the same spirit as Larsen et al. FORMULA0, we construct a metric by computing the Euclidean distance between the input images and their various reconstructions in the discriminator's feature space. More precisely, let · →D(·) be the embedding of the input to the pen-ultimate layer of the discriminator. We compute the discriminator embedded distance DISPLAYFORM0 where · → · 2 is the Euclidean norm. We then compute the average distances d c (x,x 1) and d c (x,x 2) over the ImageNet validation set. Fig. 3a shows that under d c, the average reconstruction errors for bothx 1 andx 2 decrease steadily as training advances. Furthermore, the reconstruction error under d c of the reconstructions from the first level of the hierarchy are uniformly bounded by above by those of the second. We note that while the VAEGAN model of BID20 explicitly minimizes the perceptual reconstruction error by adding this term to their loss function, HALI implicitly minimizes it during adversarial training, as shown in subsection 3.2.(a) (b) Figure 3: Comparison of average reconstruction error over the validation set for each level of reconstructions using the Euclidean (a) and discriminator embedded (b) distances. Using both distances, reconstructions errors for x ∼ T x|z 1 are uniformly below those for x ∼ T x|z 2. The reconstruction error using the Euclidean distance eventually stalls showing that the Euclidean metric poorly approximates the manifold of natural images. We now move on to assessing the quality of our learned representation through inpainting, visualizing the hierarchy and innovation vectors. Inpainting is the task of reconstructing the missing or lost parts of an image. It is a challenging task since sufficient prior information is needed to meaningfully replace the missing parts of an image. While it is common to incorporate inpainting-specific training BID45; BID35; BID34, in our case we simply use the standard HALI adversarial loss during training and reconstruct incomplete images during inference time. We first predict the missing portions from the higher level reconstructions followed by iteratively using the lower level reconstructions that are pixel-wise closer to the original image. FIG2 shows the inpaintings on center-cropped SVHN, CelebA and MS-COCO BID24 datasets without any blending post-processing or explicit supervision. 
The effectiveness of our model at this task is due the hierarchy -we can extract semantically consistent reconstructions from the higher levels of the hierarchy, then leverage pixel-wise reconstructions from the lower levels. To qualitatively show that higher levels of the hierarchy encode increasingly abstract representation of the data, we individually vary the latent variables and observe the effect. The process is as follow: we sample a latent code from the prior distribution z 2. We then multiply individual components of the vector by scalars ranging from −3 to 3. For z 1, we fix z 2 and multiply each feature map independently by scalars ranging from −3 to 3. In all cases these modified latent vectors are then decoded back to input data space. FIG4 (a) and (b) exhibit some of those decodings for z 2, while (c) and (d) do the same for the lower conditional z 1. The last column contain the decodings obtained from the originally sampled latent codes. We see that the representations learned in the z 2 conditional are responsible for high level variations like gender, while z 1 codes imply local/pixel-wise changes such as saturation or lip color. We sample a set of z2 vectors from the prior. We repeatedly replace a single relevant entry in each vector by a scalar ranging from −3 to 3 and decode. (c) and (d) follows the same process using the z1 latent space. With HALI, we can exploit the jointly learned hierarchical inference mechanism to modify actual data samples by manipulating their latent codes. We refer to these sorts of manipulations as latent semantic innovations. Consider a given instance from a dataset x ∼ q(x). Encoding x yieldsẑ 1 andẑ 2. We modifyẑ 2 by multiplying a specific entry by a scalar α. We denote the ing vector byẑ α 2. We decode the latter and getz α 1 ∼ T z1|z2. We decode the unmodified encoding vector and getz 1 ∼ T z1|ẑ2. We then form the innovation tensor η α =z 1 −z α 1. Finally, we subtract the innovation vector from the initial encoding, thus gettingẑ α 1 =ẑ 1 − η α, and samplex α ∼ T x|ẑ α. This method provides explicit control and allows us to carry out these variations on real samples in a completely unsupervised way. The are shown in FIG3. These were done on the CelebA validation set and were not used for training. We evaluate the usefulness of our learned representation for downstream tasks by quantifying the performance of HALI on attribute classification in CelebA and on a semi-supervised variant of the MNIST digit classification task. Following the protocol established by BID0; BID25, we train 40 linear SVMs on HALI encoder representations (i.e. we utilize the inference network) on the CelebA validation set and subsequently measure performance on the test set. As in BID0 BID11; BID14, we report the balanced accuracy in order to evaluate the attribute prediction performance. We emphasize that, for this experiment, the HALI encoder and decoder were trained in on entirely unsupervised data. Attribute labels were only used to train the linear SVM classifiers. A summary of the are reported in TAB2. HALI's unsupervised features surpass those of VAE and ALI, but more remarkably, they outperform the best handcrafted features by a wide margin BID46. Furthermore, our approach outperforms a number of supervised BID11 and deeply supervised BID25 features. Table 6 in the appendix arrays the per attribute. 
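A minimal sketch of this attribute-prediction protocol is given below: the HALI encoder is frozen, its latent representations are extracted, and one linear SVM per binary attribute is fit and evaluated with balanced accuracy. The feature matrices Z_train and Z_test are assumed to be produced by the trained inference network, and the SVM regularization constant is an arbitrary choice.

# One linear SVM per CelebA attribute on frozen encoder features.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import balanced_accuracy_score

def fit_attribute_svms(Z_train, y_train, Z_test, y_test):
    # Z_*: (num_images, latent_dim) encoder features; y_*: (num_images, 40) binary attributes
    scores = []
    for a in range(y_train.shape[1]):
        clf = LinearSVC(C=1.0)
        clf.fit(Z_train, y_train[:, a])
        pred = clf.predict(Z_test)
        scores.append(balanced_accuracy_score(y_test[:, a], pred))
    return np.array(scores)                          # per-attribute balanced accuracy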
Std # Best Triplet-kNN BID40 71.55 12.61 0 PANDA BID46 76.95 13.33 0 Anet BID25 79.56 12.17 0 LMLE-kNN BID11 MNIST (# errors) VAE (M1+M2) BID17 233 ± 14 VAT BID29 136 CatGAN BID43 191 ± 10 Adversarial Autoencoder BID28 190 ± 10 PixelGAN BID27 108 ± 15 ADGM BID26 96 ± 2 Feature-Matching GAN 93 ± 6.5 Triple GAN BID22 91 ± 58 GSSLTRABG BID3 79.5 ± 9.8 HALI (ours) 73 Table 3: Comparison on semi-supervised learning with state-of-the-art methods on MNIST with 100 labels instance per class. Only methods without data augmentation are included. The HALI hierarchy can also be used in a more integrated semi-supervised setting, where the encoder also receives a training signal from the supervised objective. The currently most successful approach to semi-supervised in adversarially trained generative models are built on the approach introduced by. This formalism relies on exploiting the discriminator's feature to differentiate between the individual classes present in the labeled data as well as the generated samples. Taking inspiration from BID28 BID27, we adopt a different approach that leverages the Markovian hierarchical inference network made available by HALI, DISPLAYFORM0 Where z = enc(x + σ), with ∼ N (0, I), and y is a categorical random variable. In practice, we characterize the conditional distribution of y given z by a softmax. The cost of the generator is then augmented by a supervised cost. Let us write D sup as the set of pairs all labeled instance along with their label, the supervised cost reads DISPLAYFORM1 We showcased this approach on a semi-supervised variant of MNIST digit classification task with 100 labeled examples evenly distributed across classes. Table 3 shows that HALI achieves a new state-of-the-art for this setting. Note that unlike BID3, HALI uses no additional regularization. In this paper, we introduced HALI, a novel adversarially trained generative model. HALI learns a hierarchy of latent variables with a simple Markovian structure in both the generator and inference networks. We have shown both theoretically and empirically the advantages gained by extending the ALI framework to a hierarchy. While there are many potential applications of HALI, one important future direction of research is to explore ways to render the training process more stable and straightforward. GANs are well-known to be challenging to train and the introduction of a hierarchy of latent variables only adds to this. Operation Kernel Strides Feature maps BN/WN? Dropout Nonlinearity DISPLAYFORM0 DISPLAYFORM1 Concatenate D(x, z 1) and z 2 along the channel axis B PROOFS Lemma 1. Let f be a valid f-divergence generator. Let p and q be joint distributions over a random vector x. Let x A be any strict subset of x and x −A its complement, then DISPLAYFORM2 DISPLAYFORM3 Proof. By definition, we have DISPLAYFORM4 Using that f is convex, Jensen's inequality yields DISPLAYFORM5 Simplifying the inner expectation on the right hand side, we conclude that DISPLAYFORM6 Lemma 2 (Kullback-Leibler's upper bound by Jensen-Shannon). Assume that p and q are two probability distribution absolutely continuous with respect to each other. Moreover, assume that q is bounded away from zero. Then, there exist a positive scalar K such that DISPLAYFORM7 Proof. We start by bounding the Kullblack-Leibler divergence by the χ 2 -distance. We have DISPLAYFORM8 The first inequality follows by Jensen's inequality. The third inequality follows by the Taylor expansion. 
Recall that both the χ 2 -distance and the Jensen-Shanon divergences are f-divergences with generators given by f χ 2 (t) = (t − 1) 2 and f JS (t) = u log(2t t+1) + log(2t t+1), respectively. We form the function t → h(t) = f χ 2 (t) f JS (t). h is strictly increasing on [0, ∞). Since we are assuming q to be bounded away from zero, we know that there is a constant c 1 such that q(x) > c 1 for all x. Subsequently for all x, we have that q(x) ). Intergrating with respect to q, we conclude DISPLAYFORM9 Proposition 3. Assuming q(x, z l) and p(x, z l) are positive for any l ∈ {1, . . ., L}. We have DISPLAYFORM10 Where H(x | z l) is computed under the encoder's distribution q(x, z l)Proof. By elementary manipulations we have. DISPLAYFORM11 Where the conditional entropy H(x l | z l) is computed q(x, z l). By the non-negativity of the KL-divergence we obtain DISPLAYFORM12 Using lemma 2, we have DISPLAYFORM13 The Jensen-Shanon divergence being f-divergence, using Lemma 1, we conclude DISPLAYFORM14 Proposition 4. For any given latent variable z l, the reconstruction likelihood E x∼qx [E z∼T z l |x [− log p(x | z l)]] is an upper bound on H(x | z l).Proof. By the non-negativity of the Kullback-Leibler divergence, we have that DISPLAYFORM15. Integrating over the marginal and applying Fubini's theorem yields DISPLAYFORM16 where the conditional entropy H(x | z l) is computed under the encoder distribution.
Adversarially trained hierarchical generative model with robust and semantically learned latent representation.
357
scitldr
Conservation laws are considered to be fundamental laws of nature. It has broad application in many fields including physics, chemistry, biology, geology, and engineering. Solving the differential equations associated with conservation laws is a major branch in computational mathematics. Recent success of machine learning, especially deep learning, in areas such as computer vision and natural language processing, has attracted a lot of attention from the community of computational mathematics and inspired many intriguing works in combining machine learning with traditional methods. In this paper, we are the first to explore the possibility and benefit of solving nonlinear conservation laws using deep reinforcement learning. As a proof of concept, we focus on 1-dimensional scalar conservation laws. We deploy the machinery of deep reinforcement learning to train a policy network that can decide on how the numerical solutions should be approximated in a sequential and spatial-temporal adaptive manner. We will show that the problem of solving conservation laws can be naturally viewed as a sequential decision making process and the numerical schemes learned in such a way can easily enforce long-term accuracy. Furthermore, the learned policy network is carefully designed to determine a good local discrete approximation based on the current state of the solution, which essentially makes the proposed method a meta-learning approach. In other words, the proposed method is capable of learning how to discretize for a given situation mimicking human experts. Finally, we will provide details on how the policy network is trained, how well it performs compared with some state-of-the-art numerical solvers such as WENO schemes, and how well it generalizes. Our code is released anomynously at \url{https://github.com/qwerlanksdf/L2D}. Conservation laws are considered to be one of the fundamental laws of nature, and has broad applications in multiple fields such as physics, chemistry, biology, geology, and engineering. For example, Burger's equation, a very classic partial differential equation (PDE) in conservation laws, has important applications in fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow. Solving the differential equations associated with conservation laws has been a major branch of computational mathematics (; 2002), and a lot of effective methods have been proposed, from classic methods such as the upwind scheme, the Lax-Friedrichs scheme, to the advanced ones such as the ENO/WENO schemes , the flux-limiter methods , and etc. In the past few decades, these traditional methods have been proven successful in solving conservation laws. Nonetheless, the design of some of the high-end methods heavily relies on expert knowledge and the coding of these methods can be a laborious process. To ease the usage and potentially improve these traditional algorithms, machine learning, especially deep learning, has been recently incorporated into this field. For example, the ENO scheme requires lots of'if/else' logical judgments when used to solve complicated system of equations or high-dimensional equations. This very much resembles the old-fashioned expert systems. The recent trend in artificial intelligence (AI) is to replace the expert systems by the so-called'connectionism', e.g., deep neural networks, which leads to the recent bloom of AI. Therefore, it is natural and potentially beneficial to introduce deep learning in traditional numerical solvers of conservation laws. 
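To make the numerical setting concrete, the following NumPy sketch implements one of the classic methods mentioned above, the Lax-Friedrichs scheme, for the 1-dimensional inviscid Burgers' equation u_t + (u^2/2)_x = 0 on a periodic grid. The grid size, CFL number, and initial condition are arbitrary illustrative choices and are not taken from the paper.

# Lax-Friedrichs finite-difference scheme for Burgers' equation with periodic boundaries.
import numpy as np

def flux(u):
    return 0.5 * u ** 2

def lax_friedrichs_burgers(u0, dx, t_end, cfl=0.5):
    u = u0.copy()
    t = 0.0
    while t < t_end:
        dt = cfl * dx / max(np.max(np.abs(u)), 1e-12)    # CFL-limited time step
        dt = min(dt, t_end - t)
        up = np.roll(u, -1)                              # u_{i+1} (periodic wrap)
        um = np.roll(u, 1)                               # u_{i-1}
        u = 0.5 * (up + um) - dt / (2 * dx) * (flux(up) - flux(um))
        t += dt
    return u

x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
u0 = 1.0 + 0.5 * np.sin(x)                               # smooth data that steepens into a shock
u_final = lax_friedrichs_burgers(u0, x[1] - x[0], t_end=1.0)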
In the last few years, neural networks (NNs) have been applied to solving ODEs/PDEs or the associated inverse problems. These works can be roughly classified into three categories according to the way the NN is used. The first type of works proposes to harness the representation power of NNs, and is not tied to numerical-discretization-based methods. For example, Raissi et al. (2017a; b) treated NNs as a new ansatz to approximate solutions of PDEs. This was later generalized to allow randomness in the solution, which is trained using policy gradient. More recent works along this line include (; ;). In addition, several works have focused on using NNs to establish direct mappings between the parameters of the PDEs (e.g. the coefficient field or the ground state energy) and their associated solutions (; ; b). Another line of work proposed to solve very high-dimensional PDEs by converting the PDE into a stochastic control problem and using NNs to approximate the gradient of the solution. The second type of works focuses on the connection between deep neural networks (DNNs) and dynamical systems (; ; ; b;). These works observed that there are connections between DNNs and dynamical systems (e.g. differential equations or unrolled optimization algorithms), so that deep learning can be combined with traditional tools from applied and computational mathematics to handle challenging tasks in inverse problems (b; a;). The main focus of these works, however, is to solve inverse problems rather than to learn numerical discretizations of differential equations. Nonetheless, these methods are closely related to numerical differential equations, since learning a proper discretization is often an important auxiliary task for these methods to accurately recover the form of the differential equations. The third type of works, which targets using NNs to learn new numerical schemes, is closely related to our work. However, we note that these works mainly fall in the setting of supervised learning (SL). For example, one work proposed to integrate NNs into high-order numerical solvers to predict artificial viscosity; another trained a multilayer perceptron to replace traditional indicators for identifying troubled cells in high-resolution schemes for conservation laws. These works greatly advanced the development of machine-learning-based design of numerical schemes for conservation laws. Note that in the artificial viscosity work, the authors only utilized the one-step error to train the networks, without taking into account the long-term accuracy of the learned numerical scheme. Other work first constructed several functions with known regularities and then used them to train a neural network to predict the location of discontinuities, which was later used to choose a proper slope limiter. Therefore, the training of the NNs is separated from the numerical scheme. A natural question, then, is whether we can learn discretizations of differential equations in an end-to-end fashion such that the learned discrete scheme also takes long-term accuracy into account. This motivates us to employ reinforcement learning to learn good solvers for conservation laws. The main objective of this paper is to design new numerical schemes in an autonomous way. We propose to use reinforcement learning (RL) to aid the process of solving conservation laws. To the best of our knowledge, we are the first to regard numerical PDE solvers as an MDP and to use (deep) RL to learn new solvers.
We carefully design the proposed RL-based method so that the learned policy can generate high-accuracy numerical schemes and can generalize well to varied situations. Details will be given in Section 3. Here, we first provide a brief discussion of the benefits of using RL to solve conservation laws (the arguments apply to general evolution PDEs as well):

• Most numerical solvers of conservation laws can be interpreted naturally as a sequential decision making process (e.g., the approximated grid values at the current time instance affect all future approximations). Thus, the problem can be easily formulated as a Markov Decision Process (MDP) and solved by RL.

• In almost all RL algorithms, the policy π (the agent that decides how the solution should be approximated locally) is optimized with regard to the values $Q^\pi(s_0, a_0) = r(s_0, a_0) + \sum_{t=1}^{\infty} \gamma^t r(s_t, a_t)$, which by definition account for the long-term accumulated reward (or error of the learned numerical scheme), and thus naturally encourage long-term accuracy of the learned schemes instead of greedily deciding the local approximation, as most numerical PDE solvers do (a minimal numerical illustration is given right after this list). Furthermore, RL can gracefully handle the case when the action space is discrete, which is in fact one of its major strengths.

• By optimizing towards long-term accuracy and through effective exploration, we believe that RL has good potential for improving traditional numerical schemes, especially in parts where no clear design principles exist. For example, although the WENO-5 scheme achieves the optimal order of accuracy in smooth regions of the solution, the best way of choosing stencils near singularities remains unknown. Our belief that RL could shed light on such parts is later verified in the experiments: the trained RL policy demonstrates new behaviours and is able to select better stencils than WENO, and hence approximates the solution better than WENO near singularities.

• Non-smooth norms such as the infinity norm of the error are often used to evaluate the performance of the learned numerical schemes. Since the norm of the error serves as the loss function for the learning algorithms, computing the gradient of the infinity norm can be problematic for supervised learning, while RL does not have this problem since it does not explicitly take gradients of the loss function (i.e. the reward function in RL).

• Learning the policy π within the RL framework makes the algorithm meta-learning-like (; ; ; ;). The learned policy π can decide which local numerical approximation to use by judging from the current state of the solution (e.g. local smoothness, oscillatory patterns, dissipation, etc.). This is vastly different from regular (non-meta-) learning, where the algorithms directly make inference on the numerical schemes without the aid of an additional network such as π. As subtle as the difference may seem, meta-learning-like methods have been proven effective in various applications such as image restoration (; a;). See the literature for a comprehensive survey on meta-learning.

• Another purpose of this paper is to raise awareness of the connection between MDPs and numerical PDE solvers, and of the general idea of how to use RL to improve PDE solvers or even find brand new ones.
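To make the role of the long-term objective in the second bullet concrete, here is a minimal numpy sketch (the per-step errors are made-up numbers, not results from the paper) contrasting a greedy one-step criterion with the discounted accumulated reward that RL optimizes:

```python
import numpy as np

# Hypothetical per-step rewards, defined as negative solution errors at each
# evolution step (illustrative numbers only, not taken from the paper).
one_step_errors = np.array([1e-3, 2e-3, 5e-3, 1.2e-2, 3.0e-2])
rewards = -one_step_errors

gamma = 0.99  # discount factor

# Greedy criterion: only the immediate reward r(s_0, a_0) is considered.
greedy_objective = rewards[0]

# RL criterion: Q(s_0, a_0) = sum_t gamma^t r(s_t, a_t), so errors made at
# later steps (e.g. error accumulation near shocks) also enter the objective.
discounts = gamma ** np.arange(len(rewards))
q_value = np.sum(discounts * rewards)

print(f"greedy objective: {greedy_objective:.5f}")
print(f"discounted return (Q-value): {q_value:.5f}")
```

Because later errors enter the objective, an action that looks slightly worse locally but prevents error accumulation downstream can still receive a higher Q-value than the greedy choice.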
Furthermore, in computational mathematics, many numerical algorithms are sequential, and the computation at each step is expert-designed and usually greedy, e.g., the conjugate gradient method, the fast sweeping method, matching pursuit, etc. We hope our work can motivate more research on combining RL and computational mathematics, and stimulate more exploration of RL as a tool to tackle bottleneck problems in computational mathematics. Our paper is organized as follows. In Section 2 we briefly review 1-dimensional conservation laws and the WENO schemes. In Section 3, we discuss how to formulate the process of numerically solving conservation laws into a Markov Decision Process. Then, we present details on how to train a policy network to mimic human experts in choosing discrete schemes in a spatio-temporally adaptive manner by learning upon WENO. In Section 4, we conduct numerical experiments on 1-D conservation laws to demonstrate the performance of our trained policy network. Our experimental results show that the trained policy network indeed learns to adaptively choose good discrete schemes that offer better results than the state-of-the-art WENO scheme, which is 5th-order accurate in space and 4th-order accurate in time. This serves as evidence that the proposed RL framework has the potential to design high-performance numerical schemes for conservation laws in a data-driven fashion. Furthermore, the learned policy network generalizes well to other situations, such as different initial conditions, mesh sizes, temporal discretization schemes, etc. The paper ends with a conclusion in Section 5, where possible future research directions are also discussed.

In this paper, we consider solving the following 1-D conservation law: $u_t + f(u)_x = 0$. For example, $f = \frac{u^2}{2}$ gives the famous Burger's equation. We discretize the (x, t)-plane by choosing a mesh with spatial size $\Delta x$ and temporal step size $\Delta t$, and define the discrete mesh points $(x_j, t_n)$ by $x_j = j\Delta x$ and $t_n = n\Delta t$. We denote $x_{j+1/2} = (j + \tfrac{1}{2})\Delta x$. The finite difference methods will produce approximations $U^n_j$ to the solution $u(x_j, t_n)$ on the given discrete mesh points. We denote the pointwise values of the true solution by $u^n_j = u(x_j, t_n)$, and the true pointwise flux values by $f^n_j = f(u^n_j)$. WENO (Weighted Essentially Non-Oscillatory) is a family of high-order accurate finite difference schemes for solving hyperbolic conservation laws, and has been successful for many practical problems. The key idea of WENO is a nonlinear adaptive procedure that automatically chooses the smoothest local stencil to reconstruct the numerical flux. Generally, a finite difference method solves Eq.1 by using a conservative approximation to the spatial derivative of the flux: $\frac{d u_j(t)}{dt} = -\frac{1}{\Delta x}\left(\hat{f}_{j+1/2} - \hat{f}_{j-1/2}\right)$, where $u_j(t)$ is the numerical approximation to the point value $u(x_j, t)$ and $\hat{f}_{j+1/2}$ is the numerical flux generated by a numerical flux policy $\pi^f$, which is manually designed. Note that the term "numerical flux policy" is a new terminology that we introduce in this paper, and it is exactly the policy we shall learn using RL. In WENO, $\pi^f$ works as follows. Using the physical flux values $\{f_{j-2}, f_{j-1}, f_j\}$, we can obtain a 3rd-order accurate polynomial interpolant $\hat{f}^{(1)}_{j+1/2}$, where the indices $\{j-2, j-1, j\}$ are called a 'stencil'. We can also use the stencils $\{j-1, j, j+1\}$, $\{j, j+1, j+2\}$ or $\{j+1, j+2, j+3\}$ to obtain another three interpolants $\hat{f}^{(2)}_{j+1/2}$, $\hat{f}^{(3)}_{j+1/2}$ and $\hat{f}^{(4)}_{j+1/2}$. The key idea of WENO is to average (with properly designed weights) all these interpolants to obtain the final reconstruction: $\hat{f}_{j+1/2} = \sum_{i=1}^{4} w_i \hat{f}^{(i)}_{j+1/2}$. The weight $w_i$ depends on the smoothness of the stencil.
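Concretely, the following numpy sketch shows the classical one-sided WENO-5 construction of smoothness-based weights and the resulting flux. It uses the standard Jiang-Shu formulas for illustration only; it is not code from the paper, and it covers only the three left-biased stencils (i.e. it assumes the upwind direction is already known), whereas the paper's formulation combines four interpolants so the upwind choice is included in the weights:

```python
import numpy as np

def weno5_flux(fm2, fm1, f0, fp1, fp2, eps=1e-6):
    """Classical one-sided WENO-5 reconstruction of f_{j+1/2} from the
    physical flux values f_{j-2}, ..., f_{j+2} (Roe speed assumed >= 0)."""
    # Candidate 3rd-order reconstructions from the three left-biased stencils.
    q0 = (2 * fm2 - 7 * fm1 + 11 * f0) / 6.0   # stencil {j-2, j-1, j}
    q1 = (-fm1 + 5 * f0 + 2 * fp1) / 6.0       # stencil {j-1, j, j+1}
    q2 = (2 * f0 + 5 * fp1 - fp2) / 6.0        # stencil {j, j+1, j+2}

    # Smoothness indicators: larger means a less smooth (e.g. shocked) stencil.
    b0 = 13/12 * (fm2 - 2*fm1 + f0)**2 + 0.25 * (fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12 * (fm1 - 2*f0 + fp1)**2 + 0.25 * (fm1 - fp1)**2
    b2 = 13/12 * (f0 - 2*fp1 + fp2)**2 + 0.25 * (3*f0 - 4*fp1 + fp2)**2

    # Nonlinear weights: start from the linear (optimal) weights and damp
    # the stencils that are judged non-smooth.
    d = np.array([0.1, 0.6, 0.3])
    alpha = d / (eps + np.array([b0, b1, b2]))**2
    w = alpha / alpha.sum()

    return w[0] * q0 + w[1] * q1 + w[2] * q2

# On smooth data the nonlinear weights stay close to the linear ones.
f = 0.5 * np.sin(np.linspace(0, 1, 5))**2
print(weno5_flux(*f))
```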
A general principle is: the smoother the stencil, the more accurate the interpolant and hence the larger the weight. To ensure convergence, we need the numerical scheme to be consistent and stable. It is known that WENO schemes as described above are consistent. For stability, upwinding is required in constructing the flux. The easiest way is to use the sign of the Roe speed $\bar{a}_{j+1/2} = \frac{f_{j+1} - f_j}{u_{j+1} - u_j}$ to determine the upwind direction: if $\bar{a}_{j+1/2} \ge 0$, we only average among the three left-biased interpolants $\hat{f}^{(1)}_{j+1/2}$, $\hat{f}^{(2)}_{j+1/2}$ and $\hat{f}^{(3)}_{j+1/2}$; otherwise, we only average among $\hat{f}^{(2)}_{j+1/2}$, $\hat{f}^{(3)}_{j+1/2}$ and $\hat{f}^{(4)}_{j+1/2}$.

Some further thoughts. WENO achieves the optimal order of accuracy (up to 5) in smooth regions of the solution, but a lower order of accuracy at singularities. The key of the WENO method lies in how to compute the weight vector $(w_1, w_2, w_3, w_4)$, which primarily depends on the smoothness of the solution on the local stencils. In WENO, such smoothness is characterized by handcrafted formulas, which have proven successful in many practical problems when coupled with high-order temporal discretization. However, it remains unknown whether there are better ways to combine the stencils so that the optimal order of accuracy in smooth regions can be preserved while, at the same time, higher accuracy can be achieved near singularities. Furthermore, estimating the upwind directions is another key component of WENO, which can get quite complicated in high-dimensional situations and requires lots of logical judgments (i.e. "if/else"). Can we ease the (sometimes painful) coding and improve the estimation with the aid of machine learning?

In this section we present how to employ reinforcement learning to solve the conservation laws given by Eq.1. To better illustrate our idea, we first show in general how to formulate the process of numerically solving a conservation law into an MDP. We then discuss how to incorporate a policy network into the WENO scheme. Our policy network targets the following two key aspects of WENO: (i) Can we learn to choose better weights to combine the constructed fluxes? (ii) Can we learn to automatically judge the upwind direction, without complicated logical judgments? [Algorithm 1 (a standard numerical solver), abridged: at each time step and each grid point, compute the numerical flux $\hat{f}_{j+1/2}$ (e.g., using the WENO scheme) and then advance the solution in time (e.g., using the Euler scheme).] As shown in Algorithm 1, the procedure of numerically solving a conservation law is naturally a sequential decision making problem. The key of the procedure is the numerical flux policy $\pi^f$ and the temporal scheme $\pi^t$, as shown in lines 6 and 8 of Algorithm 1. Both policies could be learned using RL. However, in this paper, we mainly focus on using RL to learn the numerical flux policy $\pi^f$, while handling the temporal scheme $\pi^t$ with traditional numerical schemes such as the Euler scheme or the Runge-Kutta methods. A quick review of RL is given in the appendix. Now, we show how to formulate the above procedure as an MDP, and describe the construction of the state S, action A, reward r and transition dynamics P. Algorithm 2 shows in general how RL is incorporated into the procedure. In Algorithm 2, we use a single RL agent. Specifically, when computing $U^n_j$: • The state for the RL agent is $s^n_j = g_s(U^{n-1}_{j-r-1}, \ldots, U^{n-1}_{j+s})$, where $g_s$ is the state function. • In general, the action of the agent is used to determine how the numerical fluxes $\hat{f}^n_{j+1/2}$ and $\hat{f}^n_{j-1/2}$ are computed. In the next subsection, we detail how we set $a^n_j$ to be the linear weights of the fluxes computed using different stencils in the WENO scheme. • The reward should encourage the agent to generate a scheme that minimizes the error between its approximated value and the true value.
Therefore, we define the reward function as $r^n_j = g_r(U^n_{j-r-1} - u^n_{j-r-1}, \cdots, U^n_{j+s} - u^n_{j+s})$; e.g., the simplest choice is $g_r = -\|\cdot\|_2$. • The transition dynamics P are fully deterministic and depend on the choice of the temporal scheme at line 10 of Algorithm 2. Note that the next state can only be constructed once we have obtained all the point values in the next time step, i.e., $s^{n+1}_j = g_s(U^n_{j-r-1}, \ldots, U^n_{j+s})$ depends not only on the action $a^n_j$, but also on the actions $a^n_{j-r-1}, \ldots, a^n_{j+s}$ (action $a^n_j$ can only determine the value $U^n_j$). This subtlety can be resolved by viewing the process under the framework of multi-agent RL, in which at each mesh point j we use a distinct agent $A^{RL}_j$, and the next state $s^{n+1}_j = g_s(U^n_{j-r-1}, \ldots, U^n_{j+s})$ depends on these agents' joint action $(a^n_{j-r-1}, \ldots, a^n_{j+s})$. However, it is impractical to train J different agents, as J is usually very large; therefore we enforce the agents at different mesh points j to share the same weights, which reduces to the case of using just a single agent. The single agent can be viewed as a counterpart of a human designer who, in traditional numerical methods, decides on the choice of a local scheme based on the current state. [Algorithm 2 (RL training), abridged: at each mesh point compute the action $a^n_j$ from the policy, form the numerical fluxes, advance the solution in time (e.g., with the Euler scheme), and compute the reward $r^n_j$; finally, return the well-trained RL policy $\pi^{RL}$.]

We now present how to translate the actions of the RL policy into the weights of the WENO fluxes. Instead of directly using $\pi^{RL}$ to generate the numerical flux, we use it to produce the weights of the numerical fluxes computed using different stencils in WENO. Since the weights are part of the configuration of the WENO scheme, our design of the action essentially makes the RL policy a meta-learner, and enables more stable learning and better generalization power than directly generating the fluxes. Specifically, at point $x_j$ (here we drop the time superscript n for simplicity), the policy outputs the weights used to combine the candidate interpolants when computing the numerical fluxes $\hat{f}_{j-1/2}$ and $\hat{f}_{j+1/2}$. Note that the determination of the upwind direction is automatically embedded in the RL policy since it generates four weights at once; for instance, when the Roe speed $\bar{a}_{j+1/2} \ge 0$, the learned weight on the fully right-biased stencil should be close to 0. Note that the upwind direction can be very complicated in systems of equations or in high-dimensional situations, and using the policy network to automatically embed such a process could save a lot of effort in algorithm design and implementation. Our numerical experiments show that $\pi^{RL}$ can indeed automatically determine upwind directions for 1D scalar cases. Although this does not mean that it works for systems and/or in high dimensions, it shows the potential of the proposed framework and its value for further study.

In this section, we describe the training and testing of the proposed RL conservation law solver and compare it with WENO. More comparisons and discussions can be found in the appendix. In this subsection, we explain the general training setup. We train the RL policy network on Burger's equation, whose flux is $f(u) = \frac{1}{2}u^2$. In all the experiments, we set the left shift r = 2 and the right shift s = 3. The state function $g_s$ generates two vectors, $s_l$ and $s_r$, containing the local flux values together with the Roe speeds $\bar{a}_{j-1/2}$ and $\bar{a}_{j+1/2}$, which are used for computing $\hat{f}_{j-1/2}$ and $\hat{f}_{j+1/2}$ respectively. $s_l$ and $s_r$ are passed into the same policy neural network $\pi^{RL}_\theta$ to produce the desired actions, as described in Section 3.2. The reward function $g_r$ simply computes the infinity norm, i.e., $g_r(U_{j-r-1} - u_{j-r-1}, \ldots, U_{j+s} - u_{j+s}) = -\|(U_{j-r-1} - u_{j-r-1}, \ldots, U_{j+s} - u_{j+s})\|_\infty$.
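A minimal numpy sketch of this infinity-norm reward computation, vectorized over grid points (the grid values and reference solution below are illustrative stand-ins, not data from the paper):

```python
import numpy as np

# Illustrative stand-ins: a coarse-grid approximation U and a reference
# solution u_true on the same grid after one evolution step.
nx = 100
x = np.linspace(-1.0, 1.0, nx, endpoint=False)
u_true = 1.0 + np.cos(2 * np.pi * x)
U = u_true + 1e-3 * np.sin(6 * np.pi * x)   # pretend approximation error

# Reward r_j = -|| (U_{j-r-1} - u_{j-r-1}, ..., U_{j+s} - u_{j+s}) ||_inf,
# with left shift r = 2 and right shift s = 3 as in the paper, i.e. a sliding
# window of length r + s + 2 = 7 around each grid point j.
r_shift, s_shift = 2, 3
window = r_shift + s_shift + 2
err = U - u_true
local_errors = np.lib.stride_tricks.sliding_window_view(err, window)
rewards = -np.max(np.abs(local_errors), axis=1)   # one reward per interior grid point

print(rewards.shape, rewards[:3])
```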
The policy network $\pi^{RL}_\theta$ is a feed-forward multi-layer perceptron with 6 hidden layers, each with 64 neurons and ReLU as the activation function. We use the Deep Deterministic Policy Gradient algorithm to train the RL policy. To guarantee the generalization power of the trained RL agent, we randomly sampled 20 initial conditions of the form $u_0(x) = a + b \cdot \mathrm{func}(c\pi x)$, where $|a| + |b| \le 3.5$, func ∈ {sin, cos} and c ∈ {2, 4, 6}. The goal of generating such initial conditions is to ensure that they have a similar degree of smoothness and thus a similar level of difficulty in learning. The computational domain is $-1 \le x \le 1$ and $0 \le t \le 0.8$ with $\Delta x = 0.02$, $\Delta t = 0.004$, and N = 200 evolution steps (which ensures the appearance of shocks). When training the RL agent, we use the Euler scheme for temporal discretization. The true solution needed for computing rewards is generated using WENO on the same computational domain with $\Delta x = 0.001$, $\Delta t = 0.0002$ and 4th-order Runge-Kutta (RK4). In the following, we denote the policy network that generates the weights of the WENO fluxes (as described in Section 3.2) as RL-WENO. We randomly generated another 10 different initial conditions, of the same form as in training, for testing. We compare the performance of RL-WENO and WENO. We also test whether the trained RL policy can generalize to different temporal discretization schemes, mesh sizes and flux functions that were not included in training. Table 1 and Table 2 report the error (with the 2-norm taken over all x) between the approximated solution U and the true solution u, averaged over 250 evolution steps (T = 1.0) and 10 random initial values. Numbers in brackets show the standard deviation over the 10 initial conditions. Several entries in the tables are marked as '-' because the corresponding CFL number is not small enough to guarantee convergence. Recall that training of RL-WENO was conducted with Euler time discretization, $(\Delta x, \Delta t) = (0.02, 0.004)$, T = 0.8 and $f(u) = \frac{1}{2}u^2$. Our experimental results show that, compared with the high-order accurate WENO (5th-order accurate in space and 4th-order accurate in time), the linear weights learned by RL not only achieve smaller errors, but also generalize well to: 1) longer evolving time (T = 0.8 for training and T = 1.0 for testing); 2) new time discretization schemes (trained on Euler, tested on RK4); 3) new mesh sizes (see Table 1 and Table 2 for results with varied $\Delta x$ and $\Delta t$); and 4) a new flux function (trained on $f(u) = \frac{1}{2}u^2$; see Table 2). Figure 1 shows some examples of the solutions. As one can see, the solutions generated by RL-WENO not only achieve the same accuracy as WENO in smooth regions, but also have a clear advantage over WENO near singularities, which are particularly challenging for numerical PDE solvers and important in applications. Figure 2 shows that the learned numerical flux policy can indeed correctly determine upwind directions and generate local numerical schemes in an adaptive fashion. More interestingly, Figure 2 further shows that, compared to WENO, RL-WENO seems to select stencils in a different way, which eventually leads to a more accurate solution. This shows that the proposed RL framework has the potential to surpass human experts in designing numerical schemes for conservation laws.

In this paper, we proposed a general framework to learn how to solve 1-dimensional conservation laws via deep reinforcement learning.
We first discussed how the procedure of numerically solving conservation laws can be naturally cast in the form of a Markov Decision Process. We then elaborated how to relate notions in numerical schemes of PDEs with those of reinforcement learning. In particular, we introduced a numerical flux policy which is able to decide how the numerical flux should be designed locally based on the current state of the solution. We carefully designed the action of our RL policy to make it a meta-learner. Our numerical experiments showed that the proposed RL-based solver was able to outperform high-order WENO and generalized well in various cases. As part of future work, we would like to consider using the numerical flux policy to infer more complicated numerical fluxes with guaranteed consistency and stability. Furthermore, we can use the proposed framework to learn a policy that can generate adaptive grids and the associated numerical schemes. Lastly, we would like to consider systems of conservation laws in two- and three-dimensional space. A COMPLEMENTARY EXPERIMENTS We first note that most of the neural-network-based numerical PDE solvers cited in the introduction require retraining when the initialization, terminal time, or the form of the PDE is changed, while the proposed RL solver is much less restricted, as shown in our numerical experiments. This makes proper comparisons between existing NN-based solvers and our proposed solver very difficult. Therefore, to demonstrate the advantage of our proposed RL PDE solver, we would like to propose a new SL method that does not require retraining when the test setting (e.g. initialization, flux function, etc.) is different from that in training. However, as far as we can tell, it is challenging to design such SL methods without formulating the problem as an MDP. One may think that we can use WENO to generate the weights for the stencil at a particular grid point on a dense grid, and use the weights of WENO generated from the dense grid as labels to train a neural network on the coarse grid. But such a setting has a fatal flaw: the stencils computed on the dense grid are very different from those on the coarse grid, especially near singularities. Therefore, good weights on dense grids might perform very poorly on coarse grids. In other words, simple imitation of WENO on dense grids is not a good idea. One might also argue that instead of learning the weights of the stencils, we could instead generate the discrete operators, such as the spatial discretization of the flux f(u), etc., on a dense grid, and then use them as labels to train a neural network in a supervised fashion on a coarse grid. However, the major problem with such a design is that there is no guarantee that the learned discrete operators obey the conservation property of the equations, and thus they may also generalize very poorly. After formulating the problem as an MDP, there is indeed one way to use back-propagation (BP) instead of RL algorithms to optimize the policy network. Because all the computations that use the stencils to calculate the next-step approximations are differentiable, we can indeed use SL to train the weights. One possible way is to minimize the error (e.g. the 2-norm) between the approximated and the true values, where the true values are pre-computed using a more accurate discretization on a fine mesh. The framework to train the SL network is described in Algorithm 3.
Note that the framework to train the SL network is essentially the same as that of the proposed RL-WENO (Algorithm 2). The only difference is that we train the SL network using BP and the RL network using DDPG. [Algorithm 3, abridged: compute the next-step approximation U (e.g., using the Euler scheme) and back-propagate the error against the fine-grid reference.] However, we argue that the main drawback of using SL (BP) to optimize the stencils in such a way is that it cannot enforce long-term accuracy and thus cannot outperform the proposed RL-WENO. To support this claim, we have added experiments using SL to train the weights of the stencils, and the results are shown in Tables 3 and 4. The SL policy is trained until it achieves very low loss (i.e., converges) in the training setting. However, as shown in the tables, the SL-trained policy does not perform well overall. To improve longer-time stability, one may argue that we could design the SL loss to be the accumulated loss over multiple prediction steps, but in practice the dynamics of our problem (the computations for obtaining multi-step approximations) are highly non-linear, so the gradient flow through multiple steps can be numerically unstable, making it difficult to obtain a decent result. As mentioned in Section 2.2, WENO itself already achieves an optimal order of accuracy in the smooth regions. Since RL-WENO can further improve upon WENO, it must have obtained higher accuracy especially near singularities. Here we provide additional demonstrations of how RL-WENO performs in the smooth/singular regions. We run RL-WENO and WENO on a set of initial conditions, record the approximation errors at every location, and then separate the errors in the smooth and singular regions for every time step. We then compute the distribution of the errors over the entire spatial-temporal grids with multiple initial conditions. The results are shown in Figure 3. In Figure 3, the x-axis is the logarithmic (base 10) value of the error and the y-axis is the number of grid points whose error is less than the corresponding value on the x-axis, i.e., the cumulative distribution of the errors. The results show that RL-WENO indeed performs better than WENO near singularities. RL-WENO even achieves better accuracy than WENO in the smooth region for one of the tested flux functions. A.3 INFERENCE TIME OF RL-WENO AND WENO In this subsection we report the inference time of RL-WENO and WENO. Although the computational complexity of the trained RL policy (an MLP) is higher than that of WENO, we can parallelize and accelerate the computations using a GPU. Our test is conducted in the following way: for each grid size ∆x, we fix the initial condition as $u_0(x) = 1 + \cos(6\pi x)$, the evolving time T = 0.8 and the flux function $f = u^2$. We then use RL-WENO and WENO to solve the problem 20 times, and report the average running time. For completeness, we also report the relative error of RL-WENO and WENO at each of these grid sizes in Table 6. Note that the relative error is computed as an average over several initial functions, and our RL-WENO policy is only trained on the grid (∆x, ∆t) = (0.02, 0.004). For RL-WENO, we test it on both CPU and GPU; for WENO, we test it purely on CPU, with both a well-optimized version (e.g., good numpy vectorization in Python) and a poorly implemented version (e.g., no vectorization, lots of loops). The CPU used for the tests is a custom Intel CORE i7, and the GPU is a custom NVIDIA GTX 1080. The results are shown in the table. From the table we can tell that as ∆x decreases, i.e., as the grid becomes denser, all methods except RL-WENO (GPU) require significantly more time to finish the computation.
The reason the time cost of the GPU version of RL-WENO does not grow is that on a GPU, we can compute all approximations for the next step (i.e., $U^{n+1}_j$ for all j, which dominates the computational cost of the algorithm) together in parallel. Thus, increasing the number of grid points does not affect the computation time much. Therefore, for a coarse grid, well-optimized WENO indeed has a clear speed advantage over RL-WENO (even on GPU), but on a much denser grid, RL-WENO (GPU) can be faster than well-optimized WENO by leveraging the parallel nature of the algorithm. B REVIEW OF REINFORCEMENT LEARNING B.1 REINFORCEMENT LEARNING Reinforcement Learning (RL) is a general framework for solving sequential decision making problems. Recently, combined with deep neural networks, RL has achieved great success in various tasks such as playing video games from raw screen inputs, playing Go, and robotics control. The sequential decision making problem RL tackles is usually formulated as a Markov Decision Process (MDP), which comprises five elements: the state space S, the action space A, the reward $r: S \times A \to \mathbb{R}$, the transition probability of the environment $P: S \times A \times S \to [0, 1]$, and the discount factor γ. The interactions between an RL agent and the environment form a trajectory $\tau = (s_0, a_0, r_0, \ldots, s_T, a_T, r_T, \ldots)$. The return of τ is the discounted sum of all its future rewards: $R(\tau) = \sum_{t=0}^{\infty} \gamma^t r_t$. Similarly, the return of a state-action pair $(s_t, a_t)$ is $R(s_t, a_t) = \sum_{k=t}^{\infty} \gamma^{k-t} r_k$. A policy π in RL is a probability distribution over actions given a state: $\pi: S \times A \to [0, 1]$. We say a trajectory τ is generated under policy π if all the actions along the trajectory are chosen following π, i.e., τ ∼ π means $a_t \sim \pi(\cdot|s_t)$ and $s_{t+1} \sim P(\cdot|s_t, a_t)$. Given a policy π, the value of a state s is defined as the expected return over all trajectories when the agent starts at s and then follows π: $V^\pi(s) = \mathbb{E}_{\tau\sim\pi}[R(\tau)\,|\,s_0 = s]$. Similarly, the value of a state-action pair is defined as the expected return over all trajectories when the agent starts at s, takes action a, and then follows π: $Q^\pi(s, a) = \mathbb{E}_{\tau\sim\pi}[R(\tau)\,|\,s_0 = s, a_0 = a]$. As mentioned in the introduction, in most RL algorithms the policy π is optimized with regard to the values $Q^\pi(s, a)$, which naturally accounts for the long-term accumulated rewards (in our setting, the long-term accuracy of the learned schemes). The Bellman equation, one of the most important equations in RL, connects the value of a state and the value of its successor states: $V^\pi(s) = \mathbb{E}_{a\sim\pi(\cdot|s),\, s'\sim P(\cdot|s,a)}\left[r(s, a) + \gamma V^\pi(s')\right]$. The goal of RL is to find a policy π that maximizes the expected discounted sum of rewards starting from the initial state $s_0$, $J(\pi) = \mathbb{E}_{s_0\sim\rho}[V^\pi(s_0)]$, where ρ is the initial state distribution. If we parameterize π using θ, then we can optimize it using the famous policy gradient theorem: $\nabla_\theta J(\pi_\theta) = \mathbb{E}_{s\sim\rho^{\pi_\theta},\, a\sim\pi_\theta}\left[\nabla_\theta \log \pi_\theta(a|s)\, Q^{\pi_\theta}(s, a)\right]$, where $\rho^{\pi_\theta}$ is the state distribution induced by the policy $\pi_\theta$. In this paper we focus on the case where the action space A is continuous, and many mature algorithms have been proposed for this case, e.g., Deep Deterministic Policy Gradient (DDPG), the Trust Region Policy Optimization algorithm, etc.
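As a minimal, self-contained illustration of the value function and Bellman equation reviewed above, here is a numpy sketch on a toy two-state MDP (the MDP itself is invented for this illustration and has nothing to do with the conservation-law solver):

```python
import numpy as np

# Toy MDP: 2 states, 2 actions.
gamma = 0.9
# P[s, a, s'] : transition probabilities; r[s, a] : rewards.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
r = np.array([[1.0, 0.0],
              [0.5, 2.0]])
# A fixed stochastic policy pi[s, a].
pi = np.array([[0.6, 0.4],
               [0.3, 0.7]])

# The state-value function V^pi solves the linear system V = r_pi + gamma * P_pi V.
r_pi = (pi * r).sum(axis=1)                 # expected one-step reward per state
P_pi = np.einsum("sa,sat->st", pi, P)       # state-to-state transitions under pi
V = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)

# Check the Bellman equation: V(s) = E_{a~pi, s'~P}[ r(s,a) + gamma * V(s') ].
bellman_rhs = (pi * (r + gamma * P @ V)).sum(axis=1)
print(V, np.allclose(V, bellman_rhs))       # -> True
```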
We observe that numerical PDE solvers can be regarded as Markov Decision Processes, and propose to use Reinforcement Learning to solve 1D scalar conservation laws.
358
scitldr
We present a neural rendering architecture that helps variational autoencoders (VAEs) learn disentangled representations. Instead of the deconvolutional network typically used in the decoder of VAEs, we tile (broadcast) the latent vector across space, concatenate fixed X- and Y-"coordinate" channels, and apply a fully convolutional network with 1x1 stride. This provides an architectural prior for dissociating positional from non-positional features in the latent space, yet without providing any explicit supervision to this effect. We show that this architecture, which we term the Spatial Broadcast decoder, improves disentangling, reconstruction accuracy, and generalization to held-out regions in data space. We show the Spatial Broadcast decoder is complementary to state-of-the-art (SOTA) disentangling techniques and, when incorporated, improves their performance.

Knowledge transfer and generalization are hallmarks of human intelligence. From grammatical generalization when learning a new language to visual generalization when interpreting a Picasso, humans have an extreme ability to recognize and apply learned patterns in new contexts. Current machine learning algorithms pale in contrast, suffering from overfitting, adversarial attacks, and domain specialization BID12 BID16. We believe that one fruitful approach to improve generalization in machine learning is to learn compositional representations in an unsupervised manner. A compositional representation consists of components that can be recombined, and such recombination underlies generalization. For example, consider a pink elephant. With a representation that composes color and object independently, imagining a pink elephant is trivial. However, a pink elephant may not be within the scope of a representation that mixes color and object. Compositionality comes in a variety of flavors, including feature compositionality (e.g. pink elephant), multi-object compositionality (e.g. elephant next to a penguin), and relational compositionality (e.g. the smallest elephant). In this work we focus on feature compositionality.

[Figure 1: (left) In the decoder, we broadcast (tile) a latent sample to the image width and height, and concatenate two "coordinate" channels; this is then fed to an unstrided convolutional decoder. (right) Pseudo-code of the spatial broadcast operation, assuming a numpy / TensorFlow-like API: given latents z ∈ R^k and a target width w and height h, tile z to shape (h, w, k), then append the coordinate channels x = LINSPACE(−1, 1, w) and y = LINSPACE(−1, 1, h), yielding tiled latents z_sb ∈ R^{h×w×(k+2)}.]

Representations with feature compositionality are sometimes referred to as "disentangled" representations BID2. Learning disentangled representations from images has recently garnered much attention. However, even in the best understood conditions, finding hyperparameters to robustly obtain such representations still proves quite challenging BID15. Here we present the Spatial Broadcast decoder (FIG0), a simple modification of the variational autoencoder (VAE) BID11 decoder architecture that:

• Improves reconstruction accuracy and disentangling in a VAE on datasets of simple objects.
• Is complementary to (and improves) state-of-the-art disentangling techniques.
• Shows particularly significant benefits when the objects in the dataset are small, a regime notoriously difficult for standard VAE architectures (Appendix D).
• Improves representational generalization to out-of-distribution test datasets involving both interpolation and extrapolation in latent space (Appendix H).
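To make the broadcast operation summarized in the Figure 1 pseudo-code concrete, here is a minimal numpy sketch (an illustration written for this document, not the authors' released code):

```python
import numpy as np

def spatial_broadcast(z, height, width):
    """Tile a latent vector z (shape [k]) into a [height, width, k + 2] tensor
    with fixed x/y coordinate channels appended, as in Figure 1."""
    k = z.shape[-1]
    # Broadcast (tile) the latent vector across the spatial grid.
    z_b = np.tile(z.reshape(1, 1, k), (height, width, 1))
    # Fixed "coordinate" channels spanning [-1, 1].
    x = np.linspace(-1.0, 1.0, width)
    y = np.linspace(-1.0, 1.0, height)
    x_grid, y_grid = np.meshgrid(x, y)            # each of shape [height, width]
    coords = np.stack([x_grid, y_grid], axis=-1)  # [height, width, 2]
    return np.concatenate([z_b, coords], axis=-1)

z = np.random.randn(10)            # a single latent sample, k = 10
z_sb = spatial_broadcast(z, 64, 64)
print(z_sb.shape)                  # (64, 64, 12); an unstrided conv net follows
```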
When applying VAEs to image datasets, standard architecture decoders consist of an MLP followed by an upsampling deconvolutional network. However, with this architecture a VAE learns highly entangled representations in an effort to represent the data as closely as possible to its Gaussian prior (e.g. FIG2). A number of new variations of the VAE objective have been developed to encourage disentanglement, though all of them introduce additional hyperparameters BID6 BID3 BID9 BID4. Furthermore, a recent study found them to be extremely sensitive to these hyperparameters BID15. Meanwhile, upsampling deconvolutional networks like the standard VAE decoder have been found to pose optimization challenges, such as checkerboard artifacts and spatial discontinuities BID14. These effects seem likely to raise problems for latent-space representation learning. Intuitively, asking a deconvolutional network to render an object at a particular position is a tall order: since the network's filters have no explicit spatial information, the network must learn to propagate spatial asymmetries down from its highest layers and in from the edges of the layers. This is a complicated function to learn, so optimization is difficult. To remedy this, in the Spatial Broadcast decoder (FIG0) we remove all upsampling deconvolutions from the network, instead tiling the latent vector across space, appending fixed coordinate channels, then applying an unstrided convolutional network. With this architecture, rendering an object at a position becomes a very simple function. Such simplicity of computation gives rise to ease of optimization. In addition to better disentangling, we find that the Spatial Broadcast VAE can yield better reconstructions (Figure 3), all with shallower networks and fewer parameters than a standard deconvolutional architecture. However, it is worth noting that a standard DeConv decoder may in principle more easily place patterns relative to each other or capture more extended spatial correlations. We did not find this to impact performance, even on datasets with extended spatial correlations and no variation of object position (Appendix F), but it is still a possible limitation of our model.

The idea of appending coordinate channels to convolutional layers has recently been highlighted (and named CoordConv) in the context of improving positional generalization BID14. However, the CoordConv technique had been used beforehand (; BID13 ; ; ;) and its origin is unclear. While CoordConv VAE BID14 incorporates CoordConv layers into an upsampling deconvolutional network in a VAE (see Appendix E), to our knowledge no prior work has combined CoordConv with spatially tiling a generative model's representation as we do here.

The Spatial Broadcast decoder was designed with object-feature representations in mind, so to initially showcase its performance we use a dataset of simple objects: colored 2-dimensional sprites BID3 BID17. This dataset has 8 factors of variation: X-position, Y-position, Size, Shape, Angle, and three-dimensional Color. In FIG1 we compare a standard DeConv VAE (a VAE with an MLP + deconvolutional network decoder) to a Spatial Broadcast VAE (a VAE with the Spatial Broadcast decoder). We see that the Spatial Broadcast VAE outperforms the DeConv VAE both in terms of the Mutual Information Gap (MIG) disentangling metric BID4 and traversal visualizations, even though the hyperparameters for the models were chosen to minimize the model's error, and not chosen for any disentangling properties explicitly (details in Appendix G).

The Spatial Broadcast decoder is complementary to existing disentangling VAE techniques, hence improves not only a vanilla VAE but SOTA models as well. For example, Figure 3 shows that the Spatial Broadcast decoder improves disentangling and yields a lower rate-distortion curve in a β-VAE BID6, hence induces a more efficient representation of the data than a DeConv decoder. See Appendix C for similar results in other SOTA models.

[Figure 3: Rate-distortion curves. We swept β log-linearly from 0.4 to 5.4 and for each value trained 10 replicas each of Deconv β-VAE (blue) and Spatial Broadcast β-VAE (orange) on colored sprites. The dots show the mean over these replicas for each β, and the shaded region shows the hull of one standard deviation. White dots indicate β = 1. (left) Reconstruction (Negative Log-Likelihood, NLL) vs KL. β < 1 yields low NLL and high KL (bottom-right of figure), whereas β > 1 yields high NLL and low KL (top-left of figure). See BID0 for details. Spatial Broadcast β-VAE shows a better rate-distortion curve than Deconv β-VAE. (right) Reconstruction vs MIG metric. β < 1 corresponds to low NLL and low MIG (bottom-left of figure), and β > 1 corresponds to high NLL and high MIG scores (towards the top-right of figure). Spatial Broadcast β-VAE is better disentangled (higher MIG scores) than Deconv β-VAE.]

Evaluating the quality of a representation can be challenging and time-consuming. While a number of metrics have been proposed to quantify disentangling, many of them have serious shortcomings and there is as yet no consensus in the literature which to use BID15. We therefore rely heavily on a visualization technique which we found useful in our research: we plot in latent space the locations corresponding to a grid of points in generative factor space, thereby viewing the embedding of generative factor space in the model's latent space. While this only shows the latent embedding of a 2-dimensional subspace of generative factor space, it can be very revealing of the latent space geometry. We showcase this analysis method in FIG2 on a dataset of circles varying in X- and Y-position. This shows that a Spatial Broadcast VAE's learned latent representation is a near-perfect Euclidean transformation of the data generative factors, in sharp contrast to a DeConv VAE's representation. This disentangling with the Spatial Broadcast decoder is robust even when the generative factors in the dataset are not independent (Appendix H).

[FIG2 caption: On the top row we show: (left) the data generative factor distribution, uniform in X- and Y-position, (middle) a grid of points in generative factor space spanning the data distribution, and (right) 16 sample images from the dataset. The next two rows show an analysis of the DeConv VAE and Spatial Broadcast VAE on this dataset. In each we see a histogram of MIG metric values over 15 independent replicas and a latent space geometry visualization for the replica with the worst MIG and the replica with the best MIG (colored stars in the histograms). These geometry plots show the embedding of the ground truth factor grid in the latent subspace spanned by the two lowest-variance (most informative) latent components. Note that the MIG does not capture this contrast because it is very sensitive to rotation in the latent space (see Appendix H for more discussion about the MIG metric).]
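A minimal sketch of the latent-geometry visualization just described (illustrative only: `render_circle` and `encoder_mean` are hypothetical stand-ins for the dataset renderer and the trained VAE encoder, and the two plotted latent dimensions are assumed to be the two lowest-variance ones):

```python
import numpy as np
import matplotlib.pyplot as plt

def render_circle(x_pos, y_pos):
    """Hypothetical stand-in for the dataset generator: a 64x64 image of a
    circle at normalized position (x_pos, y_pos)."""
    yy, xx = np.mgrid[0:64, 0:64] / 63.0
    return ((xx - x_pos) ** 2 + (yy - y_pos) ** 2 < 0.01).astype(np.float32)

def encoder_mean(image):
    """Hypothetical stand-in for the trained VAE encoder's posterior mean."""
    return np.random.randn(10)  # replace with the real encoder

# A grid of points in generative factor space (X- and Y-position).
factor_grid = [(x, y) for x in np.linspace(0.2, 0.8, 7)
                      for y in np.linspace(0.2, 0.8, 7)]
latents = np.stack([encoder_mean(render_circle(x, y)) for x, y in factor_grid])

# Plot the embedding of the factor grid in two chosen latent components
# (in the paper these are the two lowest-variance, i.e. most informative, ones).
dims = (0, 1)
plt.scatter(latents[:, dims[0]], latents[:, dims[1]], c=[x for x, _ in factor_grid])
plt.xlabel(f"latent dim {dims[0]}")
plt.ylabel(f"latent dim {dims[1]}")
plt.title("Embedding of the generative-factor grid in latent space")
plt.show()
```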
We demonstrate that it improves learned latent representations, most dramatically for datasets with objects varying in position. It also improves generalization in latent space and can be incorporated into SOTA models to boost their performance in terms of both disentangling and reconstruction accuracy. We believe that learning compositional representations is an important ingredient for flexibility and generalization in many contexts, from supervised learning to reinforcement learning, and the Spatial Broadcast decoder is one step towards robust compositional visual representation learning. For all VAE models we used a Bernoulli decoder distribution, parametrized by its logits. It is with respect to this distribution that the reconstruction error (negative log likelihood) was computed. This could accomodate our datasets, since they were normalized to have pixel values in. We also explored using a Gaussian distribution with fixed variance (for which the NLL is equivalent to scaled MSE), and found that this produces qualitatively similar and in fact improves stability. Hence while a Bernoulli distribution usually works, we suggest the reader wishing to experiment with these models starts with a Gaussian decoder distribution with mean parameterized by the decoder network output and variance constant at 0.3.In all networks we used ReLU activations, weights initialized by a truncated normal BID8, and biases initialized to zero. We use no other neural network tricks (no BatchNorm or dropout), and all models were trained with the Adam optimizer BID10. See below for learning rate details. For all VAE models except β-VAE (shown only in Figure 3), we use a standard VAE loss, namely with a KL term coefficient β = 1. For FactorVAE we also use β = 1, as in BID9.For the VAE, β-VAE, CoordConv VAE and ablation study we used the network parameters in Table 1. We note that, while the Spatial Broadcast decoder uses fewer parameters than the DeConv decoder, it does require about 50% more memory to store the weights. However, for the 3D Object-in-Room dataset we included three additional deconv layers in the Spatial Droadcast decoder (without these additional layers the decoder was not powerful enough to give good reconstructions on that dataset).All of these models were trained using a learning rate of 3 · 10 −4 on with batch size 16. All convolutional and deconvolutional layers have "same" padding, i.e. have zero-padded input so that the output shape is input_shape×stride in the case of convolution and input_shape/stride in the case of deconvolution. DISPLAYFORM0 BROADCAST DECODER Output Logits Conv(k=4, s=1, c=C) Conv(k=4, s=1, c=64) Conv(k=4, s=1, c=64) append coord channels tile(64 × 64 × 10) Input Vector Table 1: Network architectures for Vanilla VAE, β-VAE, CoordConv VAE and ablation study. The numbers of layers were selected to minimize the ELBO loss of a VAE on the colored sprites data (see Appendix G). Note that for 3D Object-in-Room, we include three additional convolutional layers in the spatial broadcast decoder. Here C refers to the number of channels of the input image, either 1 or 3 depending on whether the dataset has color. For the FactorVAE model, we used the hyperparameters described in the FactorVAE paper BID9. Those network parameters are reiterated in TAB3. Note that the Spatial Broadcast parameters are the same as for the other models in Table 1. 
For the FactorVAE model, we used the hyperparameters described in the FactorVAE paper BID9. Those network parameters are reiterated in TAB3. Note that the Spatial Broadcast parameters are the same as for the other models in Table 1. For the optimization hyperparameters we used γ = 35, a learning rate of 10^-4 for the VAE updates, a learning rate of 2 · 10^-5 for the discriminator updates, and batch size 32. These parameters generally gave the best results. However, when training the FactorVAE model on colored sprites we encountered instability during training. We subsequently did a number of hyperparameter sweeps attempting to improve stability, but to no avail. Ultimately, we used the hyperparameters in BID9, and the Spatial Broadcast decoder architecture is the same as for the other models (Table 1).

A.3 DATASETS All datasets were rendered as images of size 64 × 64 and normalized to [0, 1]. For the colored sprites, we use the binary dsprites dataset open-sourced in BID17, but multiplied by colors sampled in HSV space uniformly within a fixed region H ∈ [0.0, ...]. Sans color, there are 737,280 images in this dataset. However, we sample the colors online from a continuous distribution, effectively making the dataset size infinite. The Chairs dataset is open-sourced in BID1. This dataset, unlike all others we use, has only a single channel in its images. It contains 86,366 images. The 3D Object-in-Room dataset was used extensively in the FactorVAE paper BID9. It consists of an object in a room and has 6 factors of variation: camera angle, object size, object shape, object color, wall color, and floor color. The colors are sampled uniformly from a continuous set of hues in the range [0.0, 0.9]. This dataset contains 480,000 images, procedurally generated as all combinations of 10 floor hues, 10 wall hues, 10 object hues, 8 object sizes, 4 object shapes, and 15 camera angles. To more thoroughly explore datasets with a variety of distributions, factors of variation, and held-out test sets, we wrote our own procedural image generator for circular objects in PyGame (rendered with an anti-aliasing factor of 5). We used this to generate the data for the results in FIG2 and Appendix H. In these datasets we control subsets of the following factors of variation: X-position, Y-position, Size, and Color. We generated five datasets in this way, which we call X-Y, X-H, R-G, X-Y-H Small, and X-Y-H Tiny, and which can be seen in FIG0, FIG0, FIG0, FIG8, and FIG9 respectively. TAB5 shows the values of these factors for each dataset. Note that for some datasets we define the color distribution in RGB space, and for others we define it in HSV space. To create the datasets with dependent factors, we hold out one quarter of the dataset (the intersection of half of the ranges of each of the two factors), either centered within the data distribution or in the corner. For each dataset we generate 500,000 randomly sampled training images. The number of training steps for each model on each dataset can be found in TAB6. In general, for each dataset we used enough training steps that all models converged. Note that while the number of training iterations is different for FactorVAE than for the other models on colored sprites (due to the instability of FactorVAE), this has no bearing on our results because we do not compare across models. We only compare across decoder architectures, and we always used the same number of training steps for both the DeConv and Spatial Broadcast decoders within each model.
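A hedged sketch of the kind of procedural circle rendering described above, using numpy rather than PyGame, omitting anti-aliasing, and with made-up factor ranges (the real ranges live in the elided TAB5):

```python
import numpy as np
import colorsys

def render_circle(x_pos, y_pos, radius, hue, size=64):
    """Render a size x size RGB image of a circle; position and radius are in
    [0, 1]. A simplified stand-in for the PyGame generator (no anti-aliasing)."""
    yy, xx = np.mgrid[0:size, 0:size] / (size - 1)
    mask = (xx - x_pos) ** 2 + (yy - y_pos) ** 2 < radius ** 2
    color = np.array(colorsys.hsv_to_rgb(hue, 1.0, 1.0))
    image = np.zeros((size, size, 3), dtype=np.float32)
    image[mask] = color
    return image

# Made-up factor ranges, for illustration only.
rng = np.random.default_rng(0)
dataset = np.stack([
    render_circle(rng.uniform(0.2, 0.8), rng.uniform(0.2, 0.8),
                  rng.uniform(0.08, 0.15), rng.uniform(0.0, 0.9))
    for _ in range(16)
])
print(dataset.shape)  # (16, 64, 64, 3)
```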
One aspect of the Spatial Broadcast decoder is the concatenation of constant coordinate channels to its tiled input latent vector. While our justification of its performance emphasizes the simplicity of computation it affords, it may seem possible that the coordinate channels are only used to provide positional information and that the simplicity of this positional information (a linear meshgrid) is irrelevant. Here we perform an ablation study to demonstrate that this is not the case; the organization of the coordinate channels is important. For this experiment, we randomly permute the coordinate channels through space. Specifically, we take the [h × w × 2]-shaped coordinate channels and randomly permute the h · w entries. We keep each (i, j) pair together to ensure that after the shuffling each location still has a unique pair of coordinates in the coordinate channels. Importantly, we only shuffle the coordinate channels once, then keep them constant throughout training.

As shown in Figure 3, the Spatial Broadcast decoder improves disentangling and the rate-distortion trade-off for β-VAE BID6, indicating that it improves state-of-the-art models. To further support this indication, we introduce the Spatial Broadcast decoder into the recently developed FactorVAE BID9. Indeed, FIG7 shows that the Spatial Broadcast decoder improves disentangling in FactorVAE. See Appendix H for further results to this effect.

In exploring datasets with objects varying in position, we often find that a (standard) DeConv VAE learns a representation that is discontinuous with respect to object position. This effect is amplified as the size of the object decreases. This makes sense, because the pressure for a VAE to represent position continuously comes from the fact that an object and a position-perturbed version of itself overlap in pixel space (hence it is economical for the VAE to map noise in its latent samples to local translations of an object). However, as an object's size decreases, this pixel overlap decreases, hence the pressure for a VAE to represent position continuously weakens. In this small-object regime the Spatial Broadcast decoder's architectural bias towards representing positional variation continuously proves extremely useful. We see this in FIG8 and FIG9. We were surprised to see disentangling of such tiny objects in FIG9 and have not explored the lower object-size limit for disentangling with the Spatial Broadcast decoder.

CoordConv VAE BID14 has been proposed as a decoder architecture to improve the continuity of VAE representations. CoordConv VAE appends coordinate channels to every feature layer of the standard deconvolutional decoder, yet does not spatially tile the latent vector, hence retains upsampling deconvolutions. FIG10 shows an analysis of this model on the colored sprites dataset. While the latent space does appear to be continuous with respect to object position, it is quite entangled (far more so than a Spatial Broadcast VAE). This is not very surprising, since CoordConv VAE uses upscale deconvolutions to go all the way from spatial shape 1 × 1 to spatial shape 64 × 64, while in Table 10 we see that introducing upscaling hurts disentangling in a Spatial Broadcast VAE.
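Referring back to the coordinate-channel ablation described at the start of this appendix section, here is a minimal numpy sketch of the one-time spatial shuffling (an illustration written for this document, not the authors' code):

```python
import numpy as np

h, w = 64, 64
# Ordinary coordinate channels: an [h, w, 2] meshgrid spanning [-1, 1].
x_grid, y_grid = np.meshgrid(np.linspace(-1, 1, w), np.linspace(-1, 1, h))
coords = np.stack([x_grid, y_grid], axis=-1)

# Randomly permute the h*w locations once, keeping each (x, y) pair together
# so every location still receives a unique coordinate pair.
rng = np.random.default_rng(0)
perm = rng.permutation(h * w)
shuffled_coords = coords.reshape(h * w, 2)[perm].reshape(h, w, 2)

# These shuffled channels would then be held fixed for the whole of training.
print(shuffled_coords.shape)  # (64, 64, 2)
```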
The sprites and circles datasets discussed in Section 3 seem particularly well-suited for the Spatial Broadcast decoder because X- and Y-position are factors of variation. However, we also evaluate the architecture on datasets that have no positional factors of variation: the Chairs and 3D Object-in-Room datasets BID1 BID9. In the latter, the factors of variation are highly non-local, affecting multiple regions spanning nearly the entire image. We find that on both datasets a Spatial Broadcast VAE learns representations that look as well disentangled as SOTA methods on these datasets, and without any modification of the standard VAE objective BID9 BID6. See Figure 10 for results. We attribute the good performance on these datasets to the Spatial Broadcast decoder's use of a shallower network and no upsampling deconvolutions (which have been observed to cause optimization difficulties in a variety of settings BID14).

[Figure caption residue: ... compared to FIG1, we see that this achieves far lower scores than a Spatial Broadcast VAE, though about the same as (or maybe slightly better than) a DeConv VAE. (right) Latent space traversals are entangled. Note that, in contrast to the traversal plots in the main text, we show the effect of traversing all 10 latent components (sorted from smallest to largest mean variance), including the non-coding ones (in the bottom rows). ... the Chairs dataset BID1 and the 3D Object-in-Room dataset BID9.]

In order to remain objective when selecting model hyperparameters for the Spatial Broadcast and DeConv decoders, we chose hyperparameters based on minimizing the ELBO loss, not considering any information about disentangling. After finding reasonable encoder hyperparameters, we performed large-scale (25 replicas each) sweeps over a few decoder hyperparameters for both the DeConv and Spatial Broadcast decoder on the colored sprites dataset. These sweeps are revealing of hyperparameter sensitivity, so we report the following quantities for them:

• ELBO. This is the evidence lower bound (total VAE loss). It is the sum of the negative log likelihood (NLL) and the KL divergence.
• NLL. This is the negative log likelihood of an image with respect to the model's reconstructed distribution of that image. It is a measure of reconstruction accuracy.
• KL. This is the KL divergence of the VAE's latent distribution from its Gaussian prior. It measures how much information is being encoded in the latent space.
• Latents Used. This is the mean number of latent coordinates with standard deviation less than 0.5. Typically, a VAE will have some unused latent coordinates (with standard deviation near 1) and some used latent coordinates. The threshold 0.5 is arbitrary, but this quantity does provide a rough idea of how many factors of variation the model may be representing.
• MIG. The MIG metric.
• Factor VAE. This is the metric described in the FactorVAE paper BID9. We found this metric to be less consistent than the MIG metric (and equally flawed with respect to rotated coordinates), but it qualitatively agrees with the MIG metric most of the time.

G.1 CONVNET DEPTH TAB8 shows results of sweeping over ConvNet depth in the Spatial Broadcast decoder. This reveals a consistent trend: as the ConvNet deepens, the model moves towards lower rate/higher distortion. Consequently, latent space information and reconstruction accuracy drop. Traversals with deeper nets show the model dropping factors of variation (the dataset has 8 factors of variation). The Spatial Broadcast decoder as presented in this work is fully convolutional; it contains no MLP. However, motivated by the need for more depth on the 3D Object-in-Room dataset, we did explore applying an MLP to the input vector prior to the broadcast operation. We found that including this MLP had a qualitatively similar effect as increasing the number of convolutional layers on the colored sprites dataset, decreasing latent capacity and giving poorer reconstructions.
These results are shown in TAB10. However, on the 3D Object-in-Room dataset, adding the MLP did improve the model when using ConvNet depth 3 (the same as for colored sprites). Results of a sweep over the depth of a pre-broadcast MLP are shown in TAB12. As mentioned in Section F, we were able to achieve the same effect by instead increasing the ConvNet depth to 6, but for those interested in computational efficiency, using a pre-broadcast MLP may be a better choice for datasets of this sort. In the DeConv decoder, increasing the MLP layers again has a broadly similar effect as increasing the ConvNet layers, as shown in TAB11.

[Table caption: Same as TAB10, except using the 3D Object-in-Room dataset. Here the model seems to over-represent the dataset generative factors (of which there are 6 for this dataset) without a pre-broadcast MLP (top row). However, adding a pre-broadcast MLP with 2 or 3 layers gives rise to accurate reconstructions with the appropriate number of used latents and good disentangling. Adding a pre-broadcast MLP like this is an alternative to increasing the ConvNet depth in the model (shown in FIG0).]

We acknowledge that there is a continuum of models between the Spatial Broadcast decoder and the DeConv decoder. One could interpolate from one to the other by incrementally replacing the convolutional layers in the Spatial Broadcast decoder's network with deconvolutional layers with stride 2 (and simultaneously decreasing the height and width of the tiling operation). Table 10 shows a few steps of such a progression, where (starting from the bottom) 1, 2, and all 3 of the convolutional layers in the Spatial Broadcast decoder are replaced by a deconvolutional layer with stride 2. We see that this hurts disentangling without affecting the other metrics, further evidence that upscaling deconvolutional layers are bad for representation learning.

[Table 10 caption: Effect of upscale deconvolution on the Spatial Broadcast VAE's performance. These results use the colored sprites dataset. The columns show the effect of repeatedly replacing the convolutional, stride-1 layers in the decoder by deconvolutional, stride-2 layers (starting at the bottom-most layer). This incrementally reduces performance without affecting the other statistics much, a testament to the negative impact of upscaling deconvolutional layers on VAE representations.]

We showed visualizations of latent space geometry on the circles datasets in FIG2. However, we also conducted the same style of experiments on many more datasets and on FactorVAE models. In this section we present these additional results. We consider three generative factor pairs: (X-Position, Y-Position), (X-Position, Hue), and (Redness, Greenness). For each such pair we generate a dataset with independently sampled generative factors and two datasets with dependent generative factors (one with a hole in the center and another with a hole in the corner of generative factor space). For each dataset we run VAE and FactorVAE models with both the DeConv and Spatial Broadcast decoders. Broadly, our results in the following figures show that the Spatial Broadcast decoder nearly always helps disentangling. It helps most dramatically when there is the most positional variation (X-Position, Y-Position) and least significantly when there is no positional variation (Redness, Greenness). Note, however, that even with no position variation, the Spatial Broadcast decoder does seem to improve latent space geometry in the generalization experiments (FIG0).
We believe this may be due in part to the fact that the Spatial Broadcast decoder is shallower than the DeConv decoder. Finally, we explore one completely different dataset with dependent factors: a dataset where half the images have no object (are entirely black). We do this to simulate conditions like those in a multi-entity VAE such as BID18 when the dataset has a variable number of entities. These conditions pose a challenge for disentangling, because the VAE objective will wish to allocate a large (low-KL) region of latent space to representing a blank image when there is a large proportion of blank images in the dataset. However, we do see a stark improvement by using the Spatial Broadcast decoder in this case. The reader will note that in many of the figures in this section the MIG does not capture the intuitive notion of disentangling very well. We believe this is because: • It depends on a choice of basis for the ground truth factors, and heavily penalizes rotation of the representation with respect to this basis. Yet it is often unclear what the correct basis for the ground truth factors is (e.g. RGB vs HSV vs HSL). For example, see the bottom row of FIG2. • It is invariant to a folding of the representation space, as long as the folds align with the axes of variation. See the middle row of FIG2 for an example of a double-fold in the latent space which isn't penalized by the MIG metric. More broadly, while a number of metrics have been proposed to quantify disentangling, many of them have serious shortcomings and there is as yet no consensus in the literature about which to use BID15. We believe it is impossible to quantify how good a representation is with a single scalar, because there is a fundamental trade-off between how much information a representation contains and how well-structured the representation is. This has been noted by others in the disentangling literature (BID5). This disentangling-distortion trade-off is a recapitulation of the rate-distortion trade-off BID0 and can be seen first-hand in Figure 3. We would like representations that both reconstruct well and disentangle well, but exactly how to balance these two factors is a matter of subjective preference (and surely depends on the dataset). Any scalar disentangling metric will implicitly favor some arbitrary point on the disentangling-reconstruction trade-off. Due to the subjective nature of disentangling and the difficulty in defining appropriate metrics, we put heavy emphasis on latent space visualization as a means for representational analysis. Latent space traversals have been extensively used in the literature and can be quite revealing BID6. However, in our experience, traversals suffer from two shortcomings: • Some latent space entanglement can be difficult for the eye to perceive in traversals. For example, a slight change in brightness in a latent traversal that represents changing position can easily go unnoticed. • Traversals only represent the latent space geometry around one point in space, and cross-referencing corresponding traversals between multiple points is quite time-consuming. Consequently, we caution the reader against relying too heavily on traversals when evaluating latent space geometry. In many cases, we found the latent factor visualizations in this section to be much more informative of representational quality. FIG0: Latent space analysis for the independent X-Y dataset. This is the same as in FIG2, except with the additional FactorVAE results.
We see that the Spatial Broadcast decoder improves FactorVAE as well as a VAE (and in the FactorVAE case seems to always be axis-aligned). Note that DeConv FactorVAE has a surprisingly messy latent space -we found that using a fixed-variance Normal (instead of Bernoulli) decoder distribution improved this significantly, though still not to the level of the Spatial Broadcast FactorVAE. We also noticed in small-scale experiments that including shape or size variability in the dataset helped FactorVAE disentangle as well. However, FactorVAE does seem to be generally quite sensitive to hyperparameters BID15, as adversarial models often are. FIG0: Latent space analysis for dependent X-Y dataset. Analogous to FIG0, except the dataset has a large held-out hole in generative factor space (see dataset distribution in top-left), hence the generative factors are not independent. For the latent geometry visualizations, we do evaluate the representation of images in this held-out hole, which are shown as black dots. This tests for generalization, namely extrapolation in pixel space and interpolation in generative factor space. Again, the Spatial Broadcast decoder dramatically improves the representation -its representation looks nearly linear with respect to the ground truth factors, even through the extrapolation region. FIG0: Latent space analysis for dependent X-Y dataset. This is similar to FIG0, except the "hole" in the dataset is in the corner of generative factor space rather than the middle. Hence this tests not only extrapolation in pixel space, but also extrapolation in generative factor space. As usual, the Spatial Broadcast decoder helps a lot, though in this case the extrapolation is naturally more difficult than in FIG0. FIG0: Latent space analysis for independent X-H dataset. In this dataset the circle varies in X-Position and Hue. As expected given this has less positional variation than the X-Y datasets, we see the relative improvement of the Spatial Broadcast decoder to be lower, though still quite significant. Interestingly, the representation with the Spatial Broadcast decoder is always axis-aligned and nearly linear in the positional direction, though non-linear in the hue direction. While this is not what the VAE objective is pressuring the model to do (the VAE objective would like to balance mean and variance in its latent space), we attribute this to the fact that a linear representation is much easier to compute from the coordinate channels with ReLU layers than a non-linear effect, particularly with only three convolutional ReLU layers. In a sense, the inductive bias of the architecture is overriding the inductive bias of the objective function. FIG0: Latent space analysis for dependent X-H dataset. This is the same as FIG0 except the dataset has a held-out "hole" in the middle, hence tests the model's generalization ability. This generalization is extrapolation in pixel space yet interpolation in generative factor space. This poses a serious challenge for the DeConv decoder, and again the Spatial Broadcast decoder helps a lot. Interestingly, note the severe contraction by FactorVAE of the "hole" in latent space. The independence pressure in FactorVAE strongly tries to eliminate unused regions of latent space. FIG0: Latent space analysis for dependent X-H dataset. This is the same as FIG0 except the held-out "hole" is in the corner of generative factor space, testing extrapolation in both pixel space and generative factor space. 
The Spatial Broadcast decoder again yields significant improvements, and as in FIG0 we see FactorVAE clearly sacrificing latent space geometry to remove the "hole" from the latent space prior distribution (see bottom row). FIG0: Latent space analysis for the dependent R-G dataset. This is the same as FIG0 except with a held-out "hole" in the center of generative factor space, testing extrapolation in pixel space (interpolation in generative factor space). Here the benefit of the Spatial Broadcast decoder is clearer than in FIG0. We attribute its benefit here in part to it being a shallower network. FIG0: Latent space analysis for the dependent R-G dataset. This is the same as FIG0 except the "hole" is in the corner of generative factor space, testing extrapolation in both pixel space and generative factor space. While the Spatial Broadcast decoder seems to give rise to slightly lower MIG scores, this appears to be a result of rotation in the latent space. FIG1: Latent space analysis for the X-H dataset with blank images. This is the same as FIG0 except that half of the images in the dataset contain no objects (entirely black images, as can be seen in the dataset samples in the top-right). This simulates the data distribution produced by the VAE of a multi-entity VAE BID18 on a dataset with a variable number of objects. Again the Spatial Broadcast decoder improves latent space geometry, according to both the MIG metric and the traversals. In the latent geometry plots, the black star represents the encoding of a blank image. We noticed that the Spatial Broadcast VAE always allocates a third relevant latent to indicate the (binary) presence or absence of an object, a very natural representation of this dataset.
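Since all of the preceding comparisons pit the Spatial Broadcast decoder against a DeConv decoder, it may help to make the former concrete. Below is a minimal PyTorch sketch of the decoder's core operation as we read it from the description above: the latent vector is tiled across a spatial grid, two fixed coordinate channels are appended, and a shallow stack of stride-1 convolutions (no upsampling deconvolutions) produces the image. The latent size, channel widths, and output resolution are illustrative assumptions, not the exact hyperparameters used in these experiments.

```python
import torch
import torch.nn as nn

class SpatialBroadcastDecoder(nn.Module):
    """Sketch of a Spatial Broadcast decoder: tile the latent vector across a
    spatial grid, append fixed x/y coordinate channels, then apply stride-1
    convolutions (no upsampling deconvolutions)."""
    def __init__(self, latent_dim=10, out_size=64, hidden=64, out_channels=3):
        super().__init__()
        self.out_size = out_size
        # Fixed coordinate channels in [-1, 1], shape (2, H, W).
        # (PyTorch >= 1.10 for the `indexing` keyword.)
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, out_size), torch.linspace(-1, 1, out_size),
            indexing="ij")
        self.register_buffer("coords", torch.stack([xs, ys]))
        self.net = nn.Sequential(              # shallow, stride-1 ConvNet
            nn.Conv2d(latent_dim + 2, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, out_channels, 3, padding=1))

    def forward(self, z):                      # z: (batch, latent_dim)
        b, k = z.shape
        # Broadcast ("tile") z over the full output resolution.
        z_tiled = z.view(b, k, 1, 1).expand(b, k, self.out_size, self.out_size)
        coords = self.coords.unsqueeze(0).expand(b, -1, -1, -1)
        return self.net(torch.cat([z_tiled, coords], dim=1))  # image logits
```

An optional pre-broadcast MLP, as explored in the sweeps above, would simply transform z before the tiling step.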
We introduce a neural rendering architecture that helps VAEs learn disentangled latent representations.
359
scitldr
Similar to humans and animals, deep artificial neural networks exhibit critical periods during which a temporary stimulus deficit can impair the development of a skill. The extent of the impairment depends on the onset and length of the deficit window, as in animal models, and on the size of the neural network. Deficits that do not affect low-level statistics, such as vertical flipping of the images, have no lasting effect on performance and can be overcome with further training. To better understand this phenomenon, we use the Fisher Information of the weights to measure the effective connectivity between layers of a network during training. Counterintuitively, information rises rapidly in the early phases of training, and then decreases, preventing redistribution of information resources in a phenomenon we refer to as a loss of "Information Plasticity". Our analysis suggests that the first few epochs are critical for the creation of strong connections that are optimal relative to the input data distribution. Once such strong connections are created, they do not appear to change during additional training. These findings suggest that the initial learning transient, under-scrutinized compared to asymptotic behavior, plays a key role in determining the outcome of the training process. Our findings, combined with recent theoretical results in the literature, also suggest that forgetting (decrease of information in the weights) is critical to achieving invariance and disentanglement in representation learning. Finally, critical periods are not restricted to biological systems, but can emerge naturally in learning systems, whether biological or artificial, due to fundamental constraints arising from learning dynamics and information processing. Critical periods are time windows of early post-natal development during which sensory deficits can lead to permanent skill impairment BID12. Researchers have documented critical periods affecting a range of species and systems, from visual acuity in kittens BID35 BID33 to song learning in birds BID18. Uncorrected eye defects (e.g., strabismus, cataracts) during the critical period for visual development lead to amblyopia in one in fifty adults. The cause of critical periods is ascribed to the biochemical modulation of windows of neuronal plasticity BID10. In this paper, however, we show that deep neural networks (DNNs), while completely devoid of such regulations, respond to sensory deficits in ways similar to those observed in humans and animal models. This surprising result suggests that critical periods may arise from information processing, rather than biochemical, phenomena. We propose using the information in the weights, measured by an efficient approximation of the Fisher Information, to study critical period phenomena in DNNs. We show that, counterintuitively, the information in the weights does not increase monotonically during training. Instead, a rapid growth in information ("memorization phase") is followed by a reduction of information ("reorganization" or "forgetting" phase), even as classification performance keeps increasing. This behavior is consistent across different tasks and network architectures. Critical periods are centered in the memorization phase. (FIG0: Performance is permanently impaired if the deficit is not corrected early enough, regardless of how much additional training is performed.
As in animal models, critical periods coincide with the early learning phase during which test accuracy would rapidly increase in the absence of deficits (dashed). (B) For comparison, we report acuity for kittens monocularly deprived since birth and tested at the time of eye-opening (solid), and the normal development of visual acuity in kittens as a function of age (dashed) BID7 BID23.) Artificial neural networks (ANNs) are only loosely inspired by biological systems. Most studies to date have focused either on the behavior of networks at convergence (Representation Learning) or on the asymptotic properties of the numerical scheme used to get there (Optimization). The role of the initial transient, especially its effect in biasing the network towards "good" regions of the complex and high-dimensional optimization problem, is rarely addressed. To study this initial learning phase of ANNs, we replicate experiments performed in animal models and find that the responses to early deficits are remarkably similar, despite the large underlying differences between the two systems. In particular, we show that the quality of the solution depends only minimally on the final, relatively well-understood, phase of the training process or on its very first epochs; instead, it depends critically on the period prior to initial convergence. In animals, sensory deficits introduced during critical periods induce changes in the architecture of the corresponding areas BID4 BID34 BID9. To determine whether a similar phenomenon exists in ANNs, we compute the Fisher Information of the weights of the network as a proxy to measure its "effective connectivity", that is, the density of connections that are effectively used by the network in order to solve the task. Like others before us BID28, we observe two distinct phases during the training: first a "learning phase" in which the Fisher Information of the weights increases as the network learns from the data, followed by a "consolidation" or "compression" phase in which the Fisher Information decreases and stabilizes. Sensitivity to critical-period-inducing deficits is maximal exactly when the Fisher Information peaks. A layer-wise analysis of the network's effective connectivity shows that, in the tasks and deficits we consider, the hierarchy of low-level and high-level features in the training data is a key aspect behind the observed phenomena. In particular, our experiments suggest that the existence of critical periods in deep neural networks depends on the inability of the network to change its effective connectivity pattern in order to process different information (in response to deficit removal). We call this phenomenon, which is not mediated by any external factors, a loss of the "Information Plasticity" of the network. A notable example of a critical-period-inducing deficit, which also commonly affects humans, is amblyopia (reduced visual acuity in one eye) caused by unilateral cataracts during infancy or childhood (BID32): Even after surgical correction of the cataracts, the ability of the patients to regain normal acuity in the affected eye depends both on the duration of the deficit and on its age of onset, with earlier and longer deficits causing more severe effects. (Figure 2: Sensitivity of the learning phase: (C) Final test accuracy of a DNN as a function of the onset of a short 40-epoch deficit. The decrease in the final performance can be used to measure the sensitivity to deficits. The most sensitive epochs correspond to the early rapid learning phase, before the test error (dashed line) begins to plateau. Afterwards, the network is largely unaffected by the temporary deficit. (D) This can be compared with changes in the degree of functional disconnection (normalized numbers of V1 monocular cells disconnected from the contralateral eye) as a function of the kittens' age at the onset of a 10-12-day deficit window BID26. Dashed lines are as in A and B respectively.) In order to replicate this experimental setup in ANNs, we train a standard convolutional network (CNN) to classify objects in small 32 × 32 RGB images from the CIFAR-10 dataset BID19 into 10 classes. To simulate the effect of cataracts, for the first t_0 epochs the images in the dataset are downsampled to 8 × 8 and then upsampled back to 32 × 32 using bilinear interpolation, in practice blurring the image and destroying small-scale details. After that, the training continues for 300 more epochs, giving the network enough time to converge and ensuring it is exposed to the same number of uncorrupted images as in the control (t_0 = 0) experiment. In FIG0, we graph the final performance of the network (described in Materials and Methods) as a function of the epoch at which the deficit is corrected (t_0). We clearly observe the existence of a critical period for this deficit in the ANN: if the blur is not removed within the first 60 epochs, the final performance is severely decreased when compared to the baseline (from a test error of ∼6.4% in the absence of a deficit, to more than 18% when the blur is present over 140 epochs, a ∼300% increase). The profile of the curve is also strikingly similar to the one obtained in kittens monocularly deprived from near birth and whose visual acuity upon eye-opening was tested and plotted against the length of the deficit window BID23. Just like in humans and animal models (where critical periods are characteristic of early development), the critical period in the DNN also arises during the initial rapid learning phase. At this stage, the network is quickly learning a solution before the test error plateaus and the longer asymptotic convergence phase begins. Sensitivity to deficit. To quantify more accurately the sensitivity of the ANN to image blurring throughout its early learning phase, we introduced the deficit in a short constant window (40 epochs), starting at different epochs, and then measured the decrease in the ANN's final performance compared to the baseline. In Figure 2, we plot the final testing error of the network against the epoch of onset of the deficit. We observe that the network's sensitivity to blurring peaks in the central part of the early rapid learning phase (around 30 epochs), while later deficits produce little or no effect. A similar experiment was also performed on kittens by Olson and Freeman, using a window of 10-12 days during which the animals were monocularly deprived and using it to "scan" the first 4 months after birth to obtain a sensitivity profile BID26. We subsequently evaluated the effect of other training data modifications: a more drastic deprivation where the input is substituted with random noise, simulating complete sensory deprivation, and two "high-level" modifications of the training data: vertical flipping of the input image and permutation of the labels.
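For reference, the cataract-like blurring described above (downsampling to 8 × 8 and bilinear upsampling back to 32 × 32) could be implemented roughly as follows; this is a sketch rather than the authors' actual preprocessing code.

```python
import torch
import torch.nn.functional as F

def apply_blur_deficit(images, low_res=8):
    """Simulate a cataract-like deficit: downsample CIFAR-10 images to
    low_res x low_res and upsample back with bilinear interpolation,
    destroying small-scale details.  images: (batch, 3, 32, 32)."""
    h, w = images.shape[-2:]
    small = F.interpolate(images, size=(low_res, low_res), mode="bilinear",
                          align_corners=False)
    return F.interpolate(small, size=(h, w), mode="bilinear",
                         align_corners=False)

# During training, the deficit is active only for the first t_0 epochs:
# x = apply_blur_deficit(x) if epoch < t0 else x
```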
Performance is permanently impaired if the deficit is not corrected early enough, regardless of how much additional training is performed. As in animal models, critical periods coincide with the early learning phase during which, in the absence of deficits, test accuracy would rapidly increase (dashed). (B) For comparison, we report acuity for kittens monocularly deprived since birth and tested at the time of eye-opening (solid), and normal visual acuity development (in kittens) as a function of their age (dashed) BID7 BID23. Sensitivity during learning: (C) Final test accuracy of a DNN as a function of the onset of a short 40-epoch deficit. The decrease in the final performance can be used to measure the sensitivity to deficits. The most sensitive epochs corresponds to the early rapid learning phase, before the test error (dashed line) begins to plateau. Afterwards, the network is largely unaffected by the temporary deficit. (D) This can be compared with changes in the degree of functional disconnection (normalized numbers of V1 monocular cells disconnected from the contralateral eye) as a function of the kittens' age at the onset of a 10-12-day deficit window BID26. Dashed lines are as in A and B respectively, up to a re-scaling of the y-axis. Our findings, described in Section 2, indicate that the early transient is critical in determining the final solution of the optimization associated with training an artificial neural network. In particular, the effects of sensory deficits during a critical period cannot be overcome, no matter how much additional training is performed. Yet most theoretical studies have focused on the network behavior after convergence (Representation Learning) or on the asymptotic properties of the optimization scheme used for training (SGD).To study this early phase, in Section 3, we use the Fisher Information to quantify the effective connectivity of a network during training, and introduce the notion of Information Plasticity in learning. Information Plasticity is maximal during the memorization phase, and decreases in the reorganization phase. We show that deficit sensitivity during critical periods correlates strongly with the effective connectivity. In Section 4 we discuss our contribution in relation to previous work. When considered in conjunction with recent on representation learning BID0, our findings indicate that forgetting (reducing information in the weights) is critical to achieving invariance to nuisance variability as well as independence of the components of the representation, but comes at the price of reduced adaptability later in the training. We also hypothesize that the loss of physical connectivity in biology (neural plasticity) could be a consequence, rather than a cause, of the loss of Information Plasticity, which depends on how the information is distributed throughout a network during the early stages of learning. These also shed light on the common practice of pre-training a model on a task and then fine-tune it for another, one of the most rudimentary forms of transfer learning. Our experiments show that, rather than helpful, pre-training can be detrimental, even if the tasks are similar (e.g., same labels, slightly blurred images). 
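The sliding-window sensitivity scan summarized in panel (C) above can be sketched as the following loop; train_and_evaluate is a hypothetical placeholder for a full training run in which the deficit is active only inside the given window, and the window length, total epoch count, and stride are illustrative assumptions.

```python
def train_and_evaluate(deficit_start, deficit_end, total_epochs=300):
    """Placeholder: train the network, applying the blur deficit only for
    epochs in [deficit_start, deficit_end), and return the final test error."""
    raise NotImplementedError

def sensitivity_profile(window=40, total_epochs=300, stride=20):
    """Scan a fixed-length deficit window across training and measure the
    drop in final performance relative to the deficit-free baseline."""
    baseline_error = train_and_evaluate(0, 0, total_epochs)
    profile = {}
    for onset in range(0, total_epochs - window + 1, stride):
        error = train_and_evaluate(onset, onset + window, total_epochs)
        profile[onset] = error - baseline_error   # sensitivity at this onset
    return profile
```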
A notable example of a critical period-related deficit, commonly affecting humans, is amblyopia (reduced visual acuity in one eye) caused by cataracts during infancy or childhood BID31; von BID32. Even after surgical correction of cataracts, the ability of the patients to regain normal acuity in the affected eye depends both on the duration of the deficit and on its age of onset, with earlier and longer deficits causing more severe effects. (Figure 2: (Left) High-level perturbations do not induce a critical period. When the deficit only affects high-level features (vertical flip of the image) or the last layer of the CNN (label permutation), the network does not exhibit critical periods (test accuracy remains largely flat). On the other hand, a sensory deprivation-like deficit (image is replaced by random noise) does cause a deficit, but the effect is less severe than in the case of image blur. (Right) Dependence of the critical period profile on the network's depth. Adding more convolutional layers increases the effect of the deficit during its critical period (shown here is the decrease in test accuracy due to the deficit with respect to the test accuracy reached without deficits).) In this section, we aim to study the effects of similar deficits in DNNs. To do so, we train a standard All-CNN architecture based on (see Appendix A) to classify objects in small 32 × 32 images from the CIFAR-10 dataset BID19. We train with SGD using an exponential annealing schedule for the learning rate. To simulate the effect of cataracts, for the first t_0 epochs the images in the dataset are downsampled to 8 × 8 and then upsampled back to 32 × 32 using bilinear interpolation, in practice blurring the image and destroying small-scale details. After that, the training continues for 160 more epochs, giving the network time to converge and ensuring it is exposed to the same number of uncorrupted images as in the control (t_0 = 0) experiment. In FIG0, we plot the final performance of a network affected by the deficit as a function of the epoch t_0 at which the deficit is corrected. We can readily observe the existence of a critical period: If the blur is not removed within the first 40-60 epochs, the final performance is severely decreased when compared to the baseline (up to a threefold increase in error). The decrease in performance follows trends commonly observed in animals, and may be qualitatively compared, for example, to the loss of visual acuity observed in kittens monocularly deprived from birth as a function of the length of the deficit BID23. We can measure more accurately the sensitivity to a blur deficit during learning by introducing the deficit in a short window of constant length (40 epochs), starting at different epochs, and then measuring the decrease in the DNN's final performance compared to the baseline (FIG0). Doing this, we observe that the sensitivity to the deficit peaks in the central part of the early rapid learning phase (at around 30 epochs), while introducing the deficit later produces little or no effect. A similar experiment performed on kittens, using a window of 10-12 days during which the animals are monocularly deprived, again shows a remarkable similarity between the profiles of the sensitivity curves BID26. High-level deficits are not associated with a critical period: A natural question is whether any change in the input data distribution will have a corresponding critical period for learning.
This is not the case for neuronal networks, which remain plastic enough to adapt to high-level changes in sensory processing BID4. For example, it is well-reported that even adult humans can rapidly adapt to certain drastic changes, such as the inversion of the visual field BID30 BID17. In Figure 2, we observe that DNNs are also largely unaffected by high-level deficits -such as vertical flipping of the image, or random permutation of the output labels: After deficit correction, the network quickly recovers its baseline performance. This hints at a finer interplay between the structure of the data distribution and the optimization algorithm, ing in the existence of a critical period. We now apply to the network a more drastic deficit, where each image is replaced by white noise. Figure 2 shows hows this extreme deficit exhibits a remarkably less severe effect than the one obtained by only blurring images: Training the network with white noise does not provide any information on the natural images, and in milder effects than those caused by a deficit (e.g., image blur), which instead conveys some information, but leads the network to (incorrectly) learn that no fine structure is present in the images. A similar effect has been observed in animals, where a period of early sensory deprivation (dark-rearing) can lengthen the critical period and thus cause less severe effects than those documented in light-reared animals . We refer the reader to Appendix C for a more detailed comparison between sensory deprivation and training on white noise. Architecture, depth, and learning rate annealing: FIG2 shows that a fully-connected network trained on the MNIST digit classification dataset also shows a critical period for the image blur deficit. Therefore, the convolutional structure is not necessary, nor is the use of natural images. Similarly, a ResNet-18 trained on CIFAR-10 also has a critical period, which is also remarkably sharper than the one found in a standard convolutional network FIG0 ). This is especially interesting, since ResNets allow for easier backpropagation of gradients to the lower layers, thus suggesting that the critical period is not caused by vanishing gradients. However, Figure 2 (Right) shows that the presence of a critical period does indeed depend critically on the depth of the network. In FIG2, we confirm that a critical period exists even when the network is trained with a constant learning rate, and therefore cannot be explained by an annealed learning rate in later epochs. Optimization method and weight decay: FIG2 (Bottom Right) shows that when using Adam as the optimization scheme, which renormalizes the gradients using a running mean of their first two moments, we still observe a critical period similar to that of standard SGD. However, changing the, showing two distinct phases of training: First, information sharply increases, but once test performance starts to plateau (green line), the information in the weights decreases during a "consolidation" phase. Eventually less information is stored, yet test accuracy improves slightly (green line). The weights' Fisher Information correlates strongly with the networks sensitivity to critical periods, computed as in FIG0 using both a window size of 40 and 60, and fitted here to the Fisher Information using a simple exponential fit. (Center) Recalling the connection between FIM ad connectivity, we may compare it to synaptic density during development in the visual cortex of macaques BID27. 
Here too, a rapid increase in connectivity is followed by elimination of synapses (pruning) continuing throughout life. (Right) Effects of critical period-inducing blurring on the Fisher Information: The impaired network uses more information to solve the task, compared to training in the absence of a deficit, since it is forced to memorize the labels case by case.hyperparameters of the optimization can change the shape of the critical period: In FIG2 (Bottom Left) we show that increasing weight decay makes critical periods longer and less sharp. This can be explained as it both slows the convergence of the network, and it limits the ability of higher layers to change to overcome the deficit, thus encouraging lower layers to also learn new features. We have established empirically that, in animals and DNNs alike, the initial phases of training are critical to the outcome of the training process. In animals, this strongly relates to changes in the brain architecture of the areas associated with the deficit BID4. This is inevitably different in artificial networks, since their connectivity is formally fixed at all times during training. However, not all the connections are equally useful to the network: Consider a network encoding the approximate posterior distribution p w (y|x), parameterized by the weights w, of the task variable y given an input image x. The dependency of the final output from a specific connection can be estimated by perturbing the corresponding weight and looking at the magnitude of the change in the final distribution. Specifically, given a perturbation w = w + δw of the weights, the discrepancy between the p w (y|x) and the perturbed network output p w (y|x) can be measured by their KullbackLeibler divergence, which, to second-order approximation, is given by: DISPLAYFORM0 where the expectation over x is computed using the empirical data distributionQ(x) given by the dataset, and DISPLAYFORM1 is the Fisher Information Matrix (FIM). The FIM can thus be considered a local metric measuring how much the perturbation of a single weight (or a combination of weights) affects the output of the network BID1. In particular, weights with low Fisher Information can be changed or "pruned" with little effect on the network's performance. This suggests that the Fisher Information can be used as a measure of the effective connectivity of a DNN, or, more generally, of the "synaptic strength" of a connection BID15. Finally, the FIM is also a semidefinite approximation of the Hessian of the loss function BID21 and hence of the curvature of the loss landscape at a particular point w during training, providing an elegant connection between the FIM and the optimization procedure BID1, which we will also employ later. Published as a conference paper at ICLR 2019Unfortunately, the full FIM is too large to compute. Rather, we use its trace to measure the global or layer-wise connection strength, which we can compute efficiently using (Appendix A): DISPLAYFORM2 In order to capture the behavior of the off-diagonal terms, we also tried computing the logdeterminant of the full matrix using the Kronecker-Factorized approximation of BID22, but we observed the same qualitative trend as the trace. Since the FIM is a local measure, it is very sensitive to the irregularities of the loss landscape. Therefore, in this section we mainly use ResNets, which have a relatively smooth landscape BID20. 
For other architectures we use instead a more robust estimator of the FIM based on the injection of noise in the weights BID0, also described in Appendix A.Two phases of learning: As its name suggests, the FIM can be thought as a measure of the quantity of information about the training data that is contained in the model BID6. Based on this, one would expect the overall strength of the connections to increase monotonically as we acquire information from experience. However, this is not the case: While during an initial phase the network acquires information about the data, which in a large increase in the strength of the connections, once the performance in the task begins to plateau, the network starts decreasing the overall strength of its connections. However, this does not correspond to a reduction in performance, rather, performance keeps slowly improving. This can be seen as a "forgetting, or "compression" phase, during which redundant connections are eliminated and non-relevant variability in the data is discarded. It is well-established how the elimination ("pruning") of unnecessary synapses is a fundamental process during learning and brain development BID27 ) (FIG3, Center); in FIG3 (Left) an analogous phenomenon is clearly and quantitatively shown for DNNs. Strikingly, these changes in the connection strength are closely related to the sensitivity to criticalperiod-inducing deficits such as image blur, computed using the "sliding window" method as in FIG0. In FIG3 we see that the sensitivity closely follows the trend of the FIM. This is remarkable since the FIM is a local quantity computed at a single point during the training of a network in the absence of deficit, while sensitivity during a critical period is computed, using test data, at the end of the impaired network training. FIG3 (Right) further emphasizes the effect of deficits on the FIM: in the presence of a deficit, the FIM grows and remains substantially higher even after the deficit is removed. This may be attributed to the fact that, when the data are so corrupted that classification is impossible, the network is forced to memorize the labels, therefore increasing the quantity of information needed to perform the same task. Layer-wise effects of deficits: A layer-wise analysis of the FIM sheds further light on how the deficit affects the network. When the network (in this case All-CNN, which has a clearer division among layers than ResNet) is trained without deficits, the most important connections are in the intermediate layers (FIG5, Left), which can process the input CIFAR-10 image at the most informative intermediate scale. However, if the network is initially trained on blurred data (FIG5, top right), the strength of the connections is dominated by the top layer (Layer 6). This is to be expected, since the low-level and mid-level structures of the images are destroyed, making the lower layers ineffective. However, if the deficit is removed early in the training (FIG5, top center), the network manages to "reorganize", reducing the information contained in the last layer, and, at the same time, increasing the information in the intermediate layers. We refer to these phenomena as changes in "Information Plasticity". If, however, the data change occurs after the consolidation phase, the network is unable to change its effective connectivity: The connection strength of each layer remains substantially constant. The network has lost its Information Plasticity and is past its critical period. 
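As a concrete illustration of the trace-based FIM estimator discussed above, here is a minimal sketch that accumulates the squared gradient norm of the log-likelihood, with labels sampled from the model's own output posterior rather than from the ground truth, kept separate per parameter tensor so that a layer-wise effective-connectivity profile can be read off. The sample counts and batching are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fisher_trace_per_layer(model, data_loader, n_samples=100, device="cpu"):
    """Monte-Carlo estimate of tr(F) for each parameter tensor, following
    tr(F) = E_x E_{y ~ p_w(y|x)} || grad_w log p_w(y|x) ||^2,
    where y is sampled from the model's output posterior."""
    traces = {name: 0.0 for name, _ in model.named_parameters()}
    seen = 0
    for x, _ in data_loader:
        for xi in x.to(device):
            if seen >= n_samples:
                return {n: t / seen for n, t in traces.items()}
            logits = model(xi.unsqueeze(0))                      # (1, classes)
            y = int(torch.multinomial(F.softmax(logits, dim=-1), 1))
            log_lik = F.log_softmax(logits, dim=-1)[0, y]        # scalar
            model.zero_grad()
            log_lik.backward()
            for name, p in model.named_parameters():
                if p.grad is not None:
                    traces[name] += p.grad.pow(2).sum().item()
            seen += 1
    return {name: t / max(seen, 1) for name, t in traces.items()}
```

Summing the per-tensor values gives the global trace; grouping them by layer gives the layer-wise curves discussed above.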
The analysis of the FIM also sheds light on the geometry of the loss function and the learning dynamics. Since the FIM can be interpreted as the local curvature of the residual landscape, FIG3 shows that learning entails crossing bottlenecks: In the initial phase the network enters regions of high curvature (high Fisher Information), and once consolidation begins, the curvature decreases, allowing it to cross the bottleneck and enter the valley below. If the statistics change after crossing the bottleneck, the network is trapped. In this interpretation, the early phases of convergence are critical in leading the network towards the "right" final valley. The end of critical periods comes after the network has crossed all bottlenecks (and thus learned the features) and entered a wide valley (region of the weight space with low curvature, or low Fisher Information). Blur deficit until epoch 100 In the presence of an image blur deficit until epoch 100, more resources are allocated to the higher layers rather than to the middle layers. The blur deficit destroys low-and mid-level features processed by those layers, leaving only the global features of the image, which are processed by the higher layers. Even if the deficit is removed, the middle layers remain underdeveloped. (Top Center) When the deficit is removed at an earlier epoch, the layers can partially reconfigure (notice, e.g., the fast loss of information of layer 6), ing in less severe long-term consequences. We refer to the redistribution of information and the relative changes in effective connectivity as "Information Plasticity". (Bottom row) Same plots, but using a vertical flip deficit, which does not induce a critical period. As expected, the quantity of information in the layers is not affected. Critical periods have thus far been considered an exclusively biological phenomenon. At the same time, the analysis of DNNs has focused on asymptotic properties and neglected the initial transient behavior. To the best of our knowledge, we are the first to show that artificial neural networks exhibit critical period phenomena, and to highlight the critical role of the transient in determining the asymptotic performance of the network. Inspired by the role of synaptic connectivity in modulating critical periods, we introduce the use of Fisher Information to study this initial phase. We show that the initial sensitivity to deficits closely follows changes in the FIM, both global, as the network first rapidly increases and then decreases the amount of stored information, and layer-wise, as the network "reorganizes" its effective connectivity in order to optimally process information. Our work naturally relates to the extensive literature on critical periods in biology. Despite artificial networks being an extremely reductionist approximation of neuronal networks, they exhibit behaviors that are qualitatively similar to the critical periods observed in human and animal models. Our information analysis shows that the initial rapid memorization phase is followed by a loss of Information Plasticity which, counterintuitively, further improves the performance. On the other hand, when combined with the analysis of BID0 this suggests that a "forgetting" phase may be desirable, or even necessary, in order to learn robust, nuisance-invariant representations. 
The existence of two distinct phases of training has been observed and discussed by BID28, although their analysis builds on the (Shannon) information of the activations, rather than the (Fisher) information in the weights. On a multi-layer perceptron (MLP), BID28 empirically link the two phases to a sudden increase in the gradients' covariance. It may be tempting to compare these with our Fisher Information analysis. However, it must be noted that the FIM is computed using the gradients with respect to the model prediction, not to the ground truth label, leading to important qualitative differences. In Figure 6, we show that the covariance and norm of the gradients exhibit no clear trends during training with and without deficits, and, therefore, unlike the FIM, do not correlate with the sensitivity to critical periods. However, Published as a conference paper at ICLR 2019 a connection between our FIM analysis and the information in the activations can be established based on the work of BID0, which shows that the FIM of the weights can be used to bound the information in the activations. In fact, we may intuitively expect that pruning of connections naturally leads to loss of information in the corresponding activations. Thus, our analysis corroborates and expands on some of the claims of BID28, while using an independent framework. Aside from being more closely related to the deficit sensitivity during critical periods, Fisher's Information also has a number of technical advantages: Its diagonal is simple to estimate, even on modern state-of-the-art architectures and compelling datasets, and it is less sensitive to the choice estimator of mutual information, avoiding some of the common criticisms to the use of information quantities in the analysis of deep learning models. Finally, the FIM allows us to probe fine changes in the effective connectivity across the layers of the network FIG5 ), which are not visible in BID28.A complete analysis of the activations should account not only for the amount of information (both task-and nuisance-related), but also for its accessibility, e.g., how easily task-related information can be extracted by a linear classifier. Following a similar idea, BID24 aim to study the layer-wise, or "spatial" (but not temporal) evolution of the simplicity of the representation by performing a principal component analysis (PCA) of a radial basis function (RBF) kernel embedding of each layer representation. They show that, on a multi-layer perceptron, task-relevant information increasingly concentrate on the first principal components of the representation's embedding, implying that they become more easily "accessible" layer after layer, while nuisance information (when it is codified at all) is encoded in the remaining components. In our work we instead focus on the temporal evolution of the weights. However, it's important to notice that a network with simpler weights (as measured by the FIM) also requires a simpler smooth representation (as measured, e.g., by the RBF embedding) in order to operate properly, since it needs to be resistant to perturbations of the weights. Thus our analysis is wholly compatible with the intuitions of BID24. It would also be interesting to study the joint spatio-temporal evolution of the network using both frameworks at once. 
One advantage of focusing on the information of the weights rather than on the activations, or behavior of the network, is to have a readout of the "effective connectivity" during critical periods, which can be compared to similar readouts in animals. In fact, "behavioral" readouts upon deficit removal, both in artificial and neuronal networks, can potentially be confounded by deficit-coping changes at different levels of the visual pathways BID4 BID16. On the other hand, deficits in deprived animals are mirrored by abnormalities in the circuitry of the visual pathways, which we characterize in DNNs using the FIM to study its "effective connectivity", i.e., the connections that are actually employed by the network to solve the task. Sensitivity to critical periods and the trace of the Fisher Information peak at the same epochs, in accord with the evidence that skill development and critical periods in neuronal networks are modulated by changes (generally experience-dependent) in synaptic plasticity BID16 BID10. Our layer-wise analysis of the Fisher Information FIG5 ) also shows that visual deficits reinforce higher layers to the detriment of intermediate layers, leaving low-level layers virtually untouched. If the deficit is removed after the critical period ends, the network is not able to reverse these effects. Although the two systems are radically different, a similar response can be found in the visual pathways of animal models: Lower levels (e.g., retina, lateral geniculate nucleus) and higher-level visual areas (e.g., V2 and post-V2) show little remodeling upon deprivation, while most changes happen in different layers of V1 BID34 BID9 ).An insightful interpretation of critical periods in animal models was proposed by BID16: The initial connections of neuronal networks are unstable and easily modified (highly plastic), but as more "samples" are observed, they change and reach a more stable configuration which is difficult to modify. Learning can, however, still happen within the newly created connectivity pattern. This is largely compatible with our findings: Sensitivity to critical-period-inducing deficits peaks when connections are remodeled (Figure 4, Left), and different connectivity profiles are observed in networks trained with and without a deficit (FIG5). Moreover, high-level deficits such as imageflipping and label permutation, which do not require restructuring of the network's connections in order to be corrected, do not exhibit a critical period. Applying a deficit at the beginning of the training may be compared to the common practice of pretraining, which is generally found to improve the performance of the network. BID5 study the somewhat related, but now seldom used, practice of layer-wise unsupervised pre-training, and suggest that it may act as a regularizer by moving the weights of the network towards an area of the loss landscape closer to the attractors for good solutions, and that early examples have a stronger effect in steering the network towards particular solutions. Here, we have shown that pre-training on blurred data can have the opposite effect; i.e., it can severely decrease the final performance of the network. However, in our case, interpreting the deficits effect as moving the network close to a bad attractor is difficult to reconcile with the smooth transition observed in the critical periods, since the network would either converge to this attractor, and thus have low accuracy, or escape completely. 
Instead, we reconcile our experiments with the geometry of the loss function by introducing a different explanation based on the interpretation of the FIM as an approximation of the local curvature. FIG3 suggests that SGD encounters two different phases during the network training: At first, the network moves towards high-curvature regions of the loss landscape, while in the second phase the curvature decreases and the network eventually converges to a flat minimum (as observed in BID13). We can interpret these as the network crossing narrow bottlenecks during its training in order to learn useful features, before eventually entering a flat region of the loss surface once learning is completed and ending up trapped there. When combining this assumption with our deficit sensitivity analysis, we can hypothesize that the critical period occurs precisely upon crossing of this bottleneck. It is also worth noticing how there is evidence that convergence to flat minima (minima with low curvature) in a DNN correlates with a good generalization performance BID11 BID20 BID3 BID13. Indeed, using this interpretation, FIG3 (Right) tells us that networks more affected by the deficit converge to sharper minima. However, we have also found that the performance of the network is already mostly determined during the early "sensitive" phase. The final sharpness at convergence may therefore be an epiphenomenon, rather than the cause of good generalization. Our goal in this paper is not so much to investigate the human (or animal) brain through artificial networks, as to understand fundamental information processing phenomena, both in their biological or artificial implementations. It is also not our goal to suggest that, since they both exhibit critical periods, DNNs are necessarily a valid model of neurobiological information processing, although recent work has emphasized this aspect. We engage in an "Artificial Neuroscience" exercise in part to address a technological need to develop "explainable" artificial intelligence systems whose behavior can be understood and predicted. While traditionally well-understood mathematical models were used by neuroscientists to study biological phenomena, information processing in modern artificial networks is often just as poorly understood as in biology, so we chose to exploit well-known biological phenomena as probes to study information processing in artificial networks. Conversely, it would also be interesting to explore ways to test whether biological networks prune connections as a consequences of a loss of Information Plasticity, rather than as a cause. The mechanisms underlying network reconfiguration during learning and development might be an evolutionary outcome obtained under the pressure of fundamental information processing phenomena. DISPLAYFORM0 In order to avoid interferences between the annealing scheme and the architecture, in these experiments we fix the learning rate to 0.001.The Fully Connected network used for the MNIST experiments has hidden layers of size. All hidden layers use batch normalization followed by ReLU activations. We fix the learning rate to 0.005. Weight decay is not used. We use data augmentation with random translations up to 4 pixels and random horizontal flipping. For MNIST, we pad the images with zeros to bring them to size 32 × 32. 
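A sketch of the data pipeline described in this appendix, written with torchvision transforms; realizing the ±4-pixel translations via RandomCrop with padding, and omitting horizontal flips for MNIST, are our assumptions rather than details stated in the text.

```python
from torchvision import datasets, transforms

# CIFAR-10: random translations of up to 4 pixels and random horizontal flips.
cifar_train_tf = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # one way to realize +/-4 pixel shifts
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# MNIST: zero-pad 28x28 digits to 32x32, then the same translation augmentation.
mnist_train_tf = transforms.Compose([
    transforms.Pad(2),                       # 28 + 2*2 = 32
    transforms.RandomCrop(32, padding=4),
    transforms.ToTensor(),
])

cifar_train = datasets.CIFAR10("data", train=True, download=True,
                               transform=cifar_train_tf)
mnist_train = datasets.MNIST("data", train=True, download=True,
                             transform=mnist_train_tf)
```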
To compute the trace of the Fisher Information Matrix, we use the following expression derived directly from the definition: DISPLAYFORM0 where the input image x is sampled from the dataset, while the label y is sampled from the output posterior. Expectations are approximated by Monte-Carlo sampling. Notice, however, that this expression depends only on the local gradients of the loss with respect to the weights at a point w = w 0, so it can be noisy when the loss landscape is highly irregular. This is not a problem for ResNets BID20, but for other architectures we use instead a different technique, proposed in BID0. More in detail, let L(w) be the standard cross-entropy loss. Given the current weights w 0 of the network, we find the diagonal matrix Σ that minimizes: DISPLAYFORM1 where β is a parameter that controls the smoothness of the approximation. Notice that L can be minimized efficiently using the method in BID14. To see how this relates to the Fisher Information Matrix, assume that L(w) can be approximated locally in DISPLAYFORM2 Taking the derivative with respect to Σ, and setting it to zero, we obtain Σ ii = β/H ii. We can then use Σ to estimate the trace of the Hessian, and hence of the Fisher information. Published as a conference paper at ICLR 2019 DISPLAYFORM3 Fitting of sensitivity curves and synaptic density profiles from the literature was performed using: DISPLAYFORM4 as the fitting equation, where t is the age at the time of sampling and τ 1, τ 2, k and d are unconstrained parameters BID2.The exponential fit of the sensitivity to the Fisher Information trace uses the expression DISPLAYFORM5 where a, b and c are unconstrained parameters, F (t) is the Fisher Information trace at epoch t of the training of a network without deficits and S k is the sensitivity computed using a window of size k. That is, S k (t) is the increase in the final test error over a baseline when the network is trained in the presence of a deficit between epochs t and t + k. B ADDITIONAL PLOTS Figure 6: Log of the norm of the gradient means (solid line) and standard deviation (dashed line) during training when: (Left) No deficit is present, (Center) A blur deficit is present until epoch 70, and (Right) a deficit is present until the last epoch. Notice that the presence of a deficit does not decrease the magnitude of the gradients propagated to the first layers during the last epochs, rather it seems to increase it, suggesting that vanishing gradients are not the cause of the critical period for the blurring deficit. Noise deficit until epoch 100 Figure 7: Same plot as in FIG5, but for a noise deficit. Unlike with blur, much more resources are allocated to the lower-layers rather than higher-layers. This may explain why it is easier for the network to reconfigure to solve the task after the deficit is removed. No deficit Blurred up to epoch 100 Figure 8: Visualization of the filters of the first layer of the network used for the experiment in FIG0. In absence of a deficit, the network learns high-frequency filters, as seen by the fact that many filters are not smooth (first picture). However, when a blurring deficit is present, the network learns only smooth filters corresponding to low-frequencies of the input (third picture). If the deficit is removed after the end of the critical period, the network does not manage to learn high-frequency filters (second picture). Critical periods are task-and deficit-specific. 
The specific task we address is visual acuity, but the performance is necessarily measured through different mechanisms in animals and Artificial Neural Networks. In animals, visual acuity is traditionally measured by testing the ability to discriminate between black-and-white contrast gratings (with varying spatial frequency) and a uniform gray field. The outcome of such tests generally correlates well with the ability of the animal to use the eye to solve other visual tasks relying on acuity. Convolutional Neural Networks, on the other hand, have a very different sensory processing mechanism (based on heavily quantized data), which may trivialize such a test. Rather, we directly measure the performance of the network on an high-level task, specifically image classification, for which CNNs are optimized. We chose to simulate cataracts in our DNN experiments, a deficit which allows us to explore its complex interactions with the structure of the data and the architecture of the network. Unfortunately, while the overall trends of cataract-induced critical periods have been studied and understood in animal models, there is not enough data to confidently regress sensibility curves comparable to those obtained in DNNs. For this reason, in FIG0 we compare the performance loss in a DNN trained in the presence of a cataract-like deficit with the obtained from monocularly deprived kittens, which exhibit similar trends and are one of the most common experimental paradigms in the visual neurosciences. Simulating complete visual deprivation in a neural network is not as simple as feeding a constant stimulus: a network presented with a constant blank input will rapidly become trivial and thus unable to train on new data. This is to be expected, since a blank input is a perfectly predictable stimulus and thus the network can quickly learn the (trivial) solution to the task. We instead wanted to model an uninformative stimulus, akin to noise. Moreover, even when the eyes are sutured or maintained in the darkness, there will be excitation of photoreceptors that is best modeled as noise. To account for this, we simulate sensory deprivation by replacing the input images with a dataset composed of (uninformative) random Gaussian noise. This way the network is trained on solving the highly non-trivial task of memorizing the association between the finitely-many noise patterns and their corresponding labels.
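To make the sensory-deprivation setup above concrete, the following sketch builds a dataset that pairs a fixed random Gaussian-noise image with each original label, so the network can only memorize the association between finitely many noise patterns and their labels; the unit-Gaussian noise statistics and per-example seeding are illustrative assumptions.

```python
import torch
from torch.utils.data import Dataset

class NoiseDeprivationDataset(Dataset):
    """Replace every image with a fixed random Gaussian-noise pattern while
    keeping its label, simulating complete sensory deprivation."""
    def __init__(self, base_dataset, image_shape=(3, 32, 32), seed=0):
        g = torch.Generator().manual_seed(seed)
        self.labels = [y for _, y in base_dataset]
        # One fixed noise pattern per example (finitely many patterns).
        self.noise = torch.randn((len(self.labels),) + image_shape, generator=g)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.noise[idx], self.labels[idx]
```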
Sensory deficits in early training phases can lead to irreversible performance loss in both artificial and neuronal networks, suggesting information phenomena as the common cause and pointing to the importance of the initial transient and forgetting.
360
scitldr
We propose a method to incrementally learn an embedding space over the domain of network architectures, to enable the careful selection of architectures for evaluation during compressed architecture search. Given a teacher network, we search for a compressed network architecture by using Bayesian Optimization (BO) with a kernel function defined over our proposed embedding space to select architectures for evaluation. We demonstrate that our search algorithm can significantly outperform various baseline methods, such as random search and reinforcement learning . The compressed architectures found by our method are also better than the state-of-the-art manually-designed compact architecture ShuffleNet . We also demonstrate that the learned embedding space can be transferred to new settings for architecture search, such as a larger teacher network or a teacher network in a different architecture family, without any training. In many application domains, it is common practice to make use of well-known deep network architectures (e.g., VGG BID30, GoogleNet BID33, ResNet BID8) and to adapt them to a new task without optimizing the architecture for that task. While this process of transfer learning is surprisingly successful, it often in over-sized networks which have many redundant or unused parameters. Inefficient network architectures can waste computational resources and over-sized networks can prevent them from being used on embedded systems. There is a pressing need to develop algorithms that can take large networks with high accuracy as input and compress their size while maintaining similar performance. In this paper, we focus on the task of compressed architecture search -the automatic discovery of compressed network architectures based on a given large network. One significant bottleneck of compressed architecture search is the need to repeatedly evaluate different compressed network architectures, as each evaluation is extremely costly (e.g., backpropagation to learn the parameters of a single deep network can take several days on a single GPU). This means that any efficient search algorithm must be judicious when selecting architectures to evaluate. Learning a good embedding space over the domain of compressed network architectures is important because it can be used to define a distribution on the architecture space that can be used to generate a priority ordering of architectures for evaluation. To enable the careful selection of architectures for evaluation, we propose a method to incrementally learn an embedding space over the domain of network architectures. In the network compression paradigm, we are given a teacher network and we aim to search for a compressed network architecture, a student network that contains as few parameters as possible while maintaining similar performance to the teacher network. We address the task of compressed architecture search by using Bayesian Optimization (BO) with a kernel function defined over our proposed embedding space to select architectures for evaluation. As modern neural architectures can Computationally Efficient Architecture: There has been great progress in designing computationally efficient network architectures. Representative examples include SqueezeNet BID16, MobileNet , MobileNetV2 BID29, CondenseNet and ShuffleNet BID39. Different from them, we aim to develop an algorithm that can automatically search for an efficient network architecture with minimal human efforts involved in the architecture design. 
Neural Architecture Search (NAS): NAS has recently been an active research topic BID41 BID42 BID28 BID27 BID20 b; BID24. Some existing works in NAS are focused on searching for architectures that not only can achieve high performance but also respect some resource or computation constraints BID1 BID34 BID4 BID12 BID5. NAO BID24 and our work share the idea of mapping network architectures into a latent continuous embedding space. But NAO and our work have fundamentally different motivations, which further lead to different architecture search frameworks. NAO maps network architectures to a continuous space such that they can perform gradient based optimization to find better architectures. However, our motivation for learning the embedding space is to find a principled way to define a kernel function between architectures with complex skip connections and multiple branches. Our work is also closely related to N2N BID1, which searches for a compressed architecture based on a given teacher network using reinforcement learning. Our search algorithm is developed based on Bayesian Optimization (BO), which is different from N2N and many other existing works. We will compare our approach to other BO based NAS methods in the next paragraph. Readers can refer to BID6 for a more complete literature review of NAS.Bayesian Optimization (BO): BO is a popular method for hyper-parameter optimization in machine learning. BO has been used to tune the number of layers and the size of hidden layers BID3 BID32, the width of a network BID31 or the size of the filter bank BID2, along with other hyper-parameters, such as the learning rate, number of iterations. BID25, BID17 and BID38 also fall into this category. Our work is also related to BID9, which presents a Bayesian method for identifying the Pareto set of multi-objective optimization problems and applies the method to searching for a fast and accurate neural network. However, most existing works on BO for NAS only show on tuning network architectures where the connections between network layers are fixed, i.e., most of them do not optimize how the layers are connected to each other. BID18 proposes a distance metric OTMANN to compare network architectures with complex skip connections and branch layers, based on which NASBOT is developed, a BO based NAS framework, which can tune how the layers are connected. Although the OTMANN distance is designed with thoughtful choices, it is defined based on some empirically identified factors that can influence the performance of a network, rather than the actual performance of networks. Different from OTMANN, the distance metric (or the embedding) for network architectures in our algorithm is automatically learned according to the actual performance of network architectures instead of manually designed. Our work can also be viewed as carrying out optimization in the latent space of a high dimensional and structured space, which shares a similar idea with previous literature BID23 BID7. For example, BID23 presents a new variational auto-encoder to map kernel combinations produced by a context-free grammar into a continuous latent space. Deep Kernel Learning: Our work is also related to recent works on deep kernel learning BID36 b). They aim to learn more expressive kernels by representing the kernel function as a neural network to incorporate the expressive power of deep networks. The follow-up work extends the kernel representation to recurrent networks to model sequential data. 
Our work shares a similar motivation with them and tries to learn a kernel function for the neural architecture domain by leveraging the expressive power of deep networks. In this work, we focus on searching for a compressed network architecture based on a given teacher network and our goal is to find a network architecture which contains as few parameters as possible but still can obtain a similar performance to the teacher network. Formally, we aim to solve the following optimization problem: DISPLAYFORM0 where X denotes the domain of neural architectures and the function f (x): X → R evaluates how well the architecture x meets our requirement. We adopt the reward function design in N2N BID1 for the function f, which is defined based on the compression ratio of the architecture x and its validation performance after being trained on the training set. More details about the exact form of f are given in Appendix 6.1 due to space constraints. As evaluating the value of f (x) for a specific architecture x is extremely costly, the algorithm must judiciously select the architectures to evaluate. To enable the careful selection of architectures for evaluation, we propose a method to incrementally learn an embedding space over the domain of network architecture that can be used to generate a priority ordering of architectures for evaluation. In particular, we develop the search algorithm based on BO with a kernel function defined over our proposed embedding space. In the following text, we will first introduce the sketch of the BO algorithm and then explain how the proposed embedding space is used in the loop of BO.We adopt the Gaussian process (GP) based BO algorithms to maximize the function f (x), which is one of the most popular algorithms in BO. A GP prior is placed on the function f, parameterized by a mean function µ(·): X → R and a covariance function or kernel k(·, ·): X × X → R. To search for the solution, we start from an arbitrarily selected architecture x 1. At step t, we evaluate the architecture x t, i.e., obtaining the value of f (x t). Using the t evaluated architectures up to now, we compute the posterior distribution on the function f: DISPLAYFORM1 where f (x 1:t) = [f (x 1),..., f (x t)] and µ t (x) and σ 2 t (x) can be computed analytically based on the GP prior BID35. We then use the posterior distribution to decide the next architecture to evaluate. In particular, we obtain x t+1 by maximizing the expected improvement acquisition function EI t (x): X → R, i.e., x t+1 = arg max x∈X EI t (x). The expected improvement function EI t (x) BID26 measures the expected improvement over the current maximum value according to the posterior distribution: DISPLAYFORM2 Published as a conference paper at ICLR 2019 where E t indicates the expectation is taken with respect to the posterior distribution at step t p (f (x) | f (x 1:t)) and f * t is the maximum value among {f (x 1),..., f (x t)}. Once obtaining x t+1, we repeat the above described process until we reach the maximum number of steps. Finally, we return the best evaluated architecture as the solution. The main difficulty in realizing the above optimization procedure is the design of the kernel function k(·, ·) for X and the maximization of the acquisition function EI t (x) over X, since the neural architecture domain X is discrete and highly complex. To overcome these difficulties, we propose to learn an embedding space for the neural architecture domain and define the kernel function based on the learned embedding space. 
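The GP posterior and the expected improvement described above lend themselves to a compact implementation. The following numpy sketch (not the authors' code; a zero-mean GP prior and a small noise term are our simplifying assumptions) computes the posterior mean, posterior variance and expected improvement for a single candidate architecture, once a kernel between architectures is available.

import numpy as np
from scipy.stats import norm

def gp_posterior(K, k_star, k_star_star, f_obs, noise=1e-6):
    # K: t x t kernel matrix over the evaluated architectures x_1..x_t
    # k_star: length-t vector of kernel values k(x, x_i) for a candidate x
    # k_star_star: k(x, x); f_obs: observed values [f(x_1), ..., f(x_t)]
    # A zero mean function is assumed here for simplicity.
    K_inv = np.linalg.inv(K + noise * np.eye(len(f_obs)))
    mu = k_star @ K_inv @ f_obs                     # posterior mean mu_t(x)
    var = k_star_star - k_star @ K_inv @ k_star     # posterior variance sigma_t(x)^2
    return mu, max(var, 1e-12)

def expected_improvement(mu, var, f_best):
    # EI_t(x) = E[max(f(x) - f_best, 0)] under the Gaussian posterior
    sigma = np.sqrt(var)
    z = (mu - f_best) / sigma
    return (mu - f_best) * norm.cdf(z) + sigma * norm.pdf(z)

At each step, the candidate with the largest expected improvement among the sampled search-space architectures becomes the next architecture to evaluate.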
We also propose a search space, a subset of the neural architecture domain, over which maximizing the acquisition function is feasible and sufficient. The kernel function, which measures the similarity between network architectures, is fundamental for selecting the architectures to evaluate during the search process. As modern neural architectures can have multiple layers, multiple branches and multiple skip connections, comparing two architectures is non-trivial. Therefore, we propose to map a diverse range of discrete architectures to a continuous embedding space through the use of recurrent neural networks and then define the kernel function based on the learned embedding space. We use h(·; θ) to denote the architecture embedding function that generates an embedding for a network architecture according to its configuration parameters. θ represents the weights to be learned in the architecture embedding function. With h(·; θ), we define the kernel function k(x, x ; θ) based on the RBF kernel: DISPLAYFORM0 where σ is a hyper-parameter. h(·; θ) represents the proposed learnable embedding space and k(x, x ; θ) is the learnable kernel function. They are parameterized by the same weights θ. In the following text, we will first introduce the architecture embedding function h(·; θ) and then describe how we learn the weights θ during the search process. The architecture embedding function needs to be flexible enough to handle a diverse range of architectures that may have multiple layers, multiple branches and multiple skip connections. Therefore we adopt a Bidirectional LSTM to represent the architecture embedding function, motivated by the layer removal policy network in N2N BID1. The input to the Bi-LSTM is the configuration information of each layer in the network, including the layer type, how this layer connects to other layers, and other attributes. After passing the configuration of each layer to the Bi-LSTM, we gather all the hidden states, apply average pooling to these hidden states and then apply L 2 normalization to the pooled vector to obtain the architecture embedding. We would like to emphasize that our representation for layer configuration encodes the skip connections between layers. Skip connections have been proven effective in both human designed network architectures, such as ResNet BID8 and DenseNet BID13, and automatically found network architectures BID41. N2N only supports the kind of skip connections used in ResNet BID8 and does not generalize to more complex connections between layers, where our representation is still applicable. We give the details about our representation for layer configuration in Appendix 6.2.The weights of the Bi-LSTM θ, are learned during the search process. The weights θ determine the architecture embedding function h(·; θ) and the kernel k(·, ·; θ). Further, θ controls the GP prior and the posterior distribution of the function value conditioned on the observed data points. The posterior distribution guides the search process and is essential to the performance of our search algorithm. Our goal is to learn a θ such that the function f is consistent with the GP prior, which will in a posterior distribution that accurately characterizes the statistical structure of the function f.Let D denote the set of evaluated architectures. In step t, D = {x 1, . . ., x t}. 
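As an illustration of the embedding function h(·; θ) and the kernel just defined, the PyTorch sketch below (the hidden size and the exact RBF scaling are illustrative assumptions, and this is not the authors' implementation) encodes the sequence of per-layer configuration vectors with a Bi-LSTM, average-pools the hidden states, L2-normalizes the result, and compares two embeddings with an RBF kernel.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ArchitectureEmbedding(nn.Module):
    # h(.; theta): Bi-LSTM over per-layer configuration vectors, average-pooled and L2-normalized
    def __init__(self, layer_feat_dim, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(layer_feat_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, layer_configs):
        # layer_configs: (batch, num_layers, layer_feat_dim)
        hidden_states, _ = self.lstm(layer_configs)    # (batch, num_layers, 2 * hidden_dim)
        pooled = hidden_states.mean(dim=1)             # average pooling over the layer sequence
        return F.normalize(pooled, p=2, dim=1)         # L2 normalization

def rbf_kernel(h_x, h_y, sigma=1.0):
    # k(x, x'; theta) = exp(-||h(x) - h(x')||^2 / (2 * sigma^2)); the exact scaling by sigma
    # is our assumption about the RBF form.
    return torch.exp(-((h_x - h_y) ** 2).sum(dim=-1) / (2 * sigma ** 2))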
For any architecture DISPLAYFORM1 based on the GP prior, where \ refers to the set difference operation, f (x i) is the value obtained by evaluating the architecture DISPLAYFORM2 is the posterior probability of f (x i) conditioned on the other evaluated architectures in D. The higher the value of DISPLAYFORM3 is, the more accurately the posterior distribution characterizes the statistical structure of the function f and the more the function f is consistent with the GP prior. Therefore, we learn θ by minimizing the negative log posterior probability: DISPLAYFORM4 is a Gaussian distribution and its mean and covariance matrix can be computed analytically based on k(·, ·; θ). Thus L is differentiable with respect to θ and we can learn the weights θ using backpropagation. In each optimization step, we obtain the next architecture to evaluate by maximizing the acquisition function EI t (·) over the neural architecture domain X. On one hand, maximizing EI t (·) over all the network architectures in X is unnecessary. Since our goal is to search for a compressed architecture based on the given teacher network, we only need to consider those architectures that are smaller than the teacher network. On the other hand, maximizing EI t (·) over X is non-trivial. Gradientbased optimization algorithms cannot be directly applied to optimize EI t (·) as X is discrete. Also, exhaustive exploration of the whole domain is infeasible. This calls for a search space that covers the compressed architectures of our interest and easy to explore. Motivated by N2N BID1, we propose a search space for maximizing the acquisition function, which is constrained by the teacher network, and provides a practical method to explore the search space. We define the search space based on the teacher network. The search space is constructed by all the architectures that can be obtained by manipulating the teacher network with the following three operations: layer removal, layer shrinkage and adding skip connections. Layer removal and shrinkage: The two operations ensure that we only consider architectures that are smaller than the given big network. Layer removal refers to removing one or more layers from the network. Layer shrinkage refers to shrinking the size of layers, in particular, the number of filters in convolutional layers, as we focus on convolutional networks in this work. Different from N2N, we do not consider shrinking the kernel size, padding or other configurable variables and we find that only shrinking the number of filters already yields satisfactory performance. The operation of adding skip connections is employed to increase the network complexity. N2N BID1, which uses reinforcement learning to search for compressed network architectures, does not support forming skip connections in their action space. We believe when searching for compressed architectures, adding skip connections to the compressed network is crucial for it to achieve similar performance to the teacher network and we will show ablation study to verify this. The way we define the search space naturally allows us to explore it by sampling the operations to manipulate the architecture of the teacher network. To optimize the acquisition function over the search space, we randomly sample architectures in the search space by randomly sampling the operations. We then evaluate EI t (·) over the sampled architectures and return the best one as the solution. 
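Concretely, the loss of Eq. 5 sums, over the evaluated set D, the negative log Gaussian posterior probability of each observed value conditioned on the remaining evaluated architectures. A numpy sketch under a zero-mean GP assumption is given below; in the actual method this quantity is computed with differentiable operations so that gradients reach the Bi-LSTM weights θ.

import numpy as np

def kernel_learning_loss(K, f_obs, noise=1e-6):
    # Loss of Eq. 5: summed negative log probability of each evaluated f(x_i)
    # under the GP posterior conditioned on D \ {x_i}.
    # K is the kernel matrix computed with the current Bi-LSTM weights theta.
    f_obs = np.asarray(f_obs, dtype=np.float64)
    t = len(f_obs)
    nll = 0.0
    for i in range(t):
        rest = [j for j in range(t) if j != i]
        K_rest = K[np.ix_(rest, rest)] + noise * np.eye(t - 1)
        k_i = K[i, rest]
        K_inv = np.linalg.inv(K_rest)
        mu_i = k_i @ K_inv @ f_obs[rest]                # mean of p(f(x_i) | D \ {x_i})
        var_i = K[i, i] - k_i @ K_inv @ k_i + noise     # variance of the same posterior
        nll += 0.5 * (np.log(2.0 * np.pi * var_i) + (f_obs[i] - mu_i) ** 2 / var_i)
    return nll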
We also have tried using evolutionary algorithm to maximize EI t (·) but it yields similar with random sampling. So for the sake of simplicity, we use random sampling to maximize EI t (·). We attribute the good performance of random sampling to the thoughtful design of the operations to manipulate the teacher network architecture. These operations already favor the compressed architectures of our interest. We implement the search algorithm with the proposed learnable kernel function but notice that the highest function value among evaluated architectures stops increasing after a few steps. We conjecture this is due to that the learned kernel is overfitted to the training samples since we only evaluate hundreds of architectures in the whole search process. An overfitted kernel may bias the following sampled architectures for evaluation. To encourage the search algorithm to explore more diverse architectures, we propose a multiple kernel strategy, motivated by the bagging algorithm, which is usually employed to avoid overfitting. In bagging, instead of training one single model on the whole dataset, multiple models are trained on different subsets of the whole dataset. Likewise, in each step of the search process, we train multiple kernel functions on uniformly sampled subsets of D, the set of all the available evaluated architectures. Technically, learning multiple kernels refers to learning multiple architecture embedding spaces, i.e., multiple sets of weights θ. After training the kernels, each kernel is used separately to compute one posterior distribution and determine one architecture to evaluate in the next step. That is to say, if we train K kernels in the current step, we will obtain K architectures to evaluate in the next step. The proposed multiple kernel strategy encourages the search process to explore more diverse architectures and can help find better architectures than training one single kernel only. When training kernels, we randomly initialize their weights and learn the weights from the scratch on subsets of evaluated architectures. We do not learn the weights of the kernel based on the weights learned in the last step, i.e., fine-tuning the Bi-LSTM from the last step. The training of the Bi-LSTM is fast since we usually only evaluate hundreds of architectures during the whole search process. A formal sketch of our search algorithm in shown Algorithm 1. We first extensively evaluate our algorithm with different teacher architectures and datasets. We then compare the automatically found compressed architectures to the state-of-the-art manually-designed compact architecture, ShuffleNet BID39. We also evaluate the transfer performance of the learned embedding space and kernel. We perform ablation study to understand how the number of kernels K and other design choices in our search algorithm influence the performance. Due to space constraints, the ablation study is included in Appendix 6.3. We use two datasets: CIFAR-10 and CIFAR-100 BID19 ). CIFAR-10 contains 60K images in 10 classes, with 6K images per class. CIFAR-100 also contains 60K images but in 100 classes, with 600 images per class. Both CIFAR-10 and CIFAR-100 are divided into a training set with 50K images and a test set with 10K images. We sample 5K images from the training set as the validation set. 
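Putting the pieces together, one pass of the search with the multiple-kernel strategy looks roughly as follows. This is a structural sketch only: the callables passed in are placeholders for the components described above (they are not the authors' API), and details such as training each candidate for only a few epochs are folded into f.

import numpy as np

def search(f, sample_search_space, train_kernel, expected_improvement_under,
           initial_architecture, steps=20, K=8, num_candidates=1000):
    # f: costly evaluation of an architecture (train briefly, measure reward).
    evaluated = [(initial_architecture, f(initial_architecture))]
    for _ in range(steps):
        next_batch = []
        for _ in range(K):
            # each kernel is trained from scratch on a random subset of the evaluated set
            subset = [pair for pair in evaluated if np.random.rand() < 0.5] or evaluated
            theta_k = train_kernel(subset)
            candidates = [sample_search_space() for _ in range(num_candidates)]
            best = max(candidates,
                       key=lambda x: expected_improvement_under(theta_k, evaluated, x))
            next_batch.append(best)
        evaluated += [(x, f(x)) for x in next_batch]   # K new evaluations per step
    return max(evaluated, key=lambda pair: pair[1])[0]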
We provide on four architectures as the teacher network: VGG-19 BID30, ResNet-18, ResNet-34 and ShuffleNet BID39.We consider two baselines algorithms for comparison: random search (RS) and a reinforcement learning based approach, N2N BID1. Here we use RS to directly maximize the compression objective f (x). To be more specific, RS randomly samples architectures in the search space, then evaluates all of them and returns the best architecture as the optimal solution. In the following experiments, RS evaluates 160 architectures. For our proposed method, we run 20 architecture search steps, where each step generates K = 8 architectures for evaluation based on the the K different kernels. This means our proposed method evaluates 160 (20 × 8) architectures in total during the search process. Note that when evaluating an architecture during the search process, we only train it for 10 epochs to reduce computation time. So for both RS and our method, we fully train the top 4 architectures among the 160 evaluated architectures and choose the best one as the solution. When learning the kernel function parameters, we randomly sample from the set of the evaluated architectures with a probability of 0.5 to form the training set for one kernel. The of N2N are from the original paper BID1.The compression are summarized in TAB1. For a compressed network x,'Ratio' refers to the compression ratio of x, which is defined as 1 − #params(x) #params(xteacher).' Times' refers to the ratio between the size of the teacher network and the size of the compressed network, i.e., #params(xteacher) #params(x). We also show the value of f (x) as an indication of how well each architecture x meets our requirement in terms of both the accuracy and the compression ratio. For'Random Search' and'Ours', we run the experiments for three times and report the average as well as the standard deviation. We first apply our algorithm to compress three popular network architectures: VGG-19, ResNet-18 and ResNet-34, and use them as the teacher network. We can see that on both CIFAR-10 and CIFAR-100, our proposed method consistently finds architectures that can achieve higher value of f (x) than all baselines. For VGG-19 on CIFAR-100, the architecture found by our algorithm is 8 times smaller than the original teacher network while the accuracy only drops by 2.3%. For ResNet-18 on CIFAR-100, the architecture found by our algorithm has a little bit more parameters than that found by RS but has higer accuracy by about 4%. For ResNet-34 on CIFAR-100, the architecture found by our proposed method has a higher accuracy as the architecture discovered by RS but only uses about 65% of the number of parameters. Also for ResNet-34 on CIFAR-100, N2N only provides the of layer removal, denoted as'N2N -removal'.' Ours -removal' refers to only considering the layer removal operation in the search space for fair comparison. We can see that'Ours -removal' also significantly outperforms'N2N -removal' in terms of both the accuracy and the compression ratio. ShuffleNet is an extremely computation-efficient human-designed CNN architecture BID39. We also have tried to use ShuffleNet as the teacher network and see if we can further optimize this architecture. As shown in TAB1, our search algorithm successfully compresses'ShuffleNet 1 × (g = 2)' by 10.43× and 4.74× on CIFAR-10 and CIFAR-100 respectively and the compressed architectures can still achieve similar accuracy to the original teacher network. 
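For reference, the two statistics reported in the tables, and a reward of the kind adopted from N2N for f, can be computed as below. The exact functional form of f is spelled out in Appendix 6.1; the version here follows N2N's published design and should be read as our assumption, not a verbatim reproduction.

def compression_stats(params_student, params_teacher):
    ratio = 1.0 - params_student / params_teacher      # the 'Ratio' column
    times = params_teacher / params_student            # the 'Times' column
    return ratio, times

def f_reward(params_student, params_teacher, acc_student, acc_teacher):
    # N2N-style reward (our reading; treat the exact form as an assumption):
    # it favors high accuracy over high compression ratio alone.
    c = 1.0 - params_student / params_teacher
    return c * (2.0 - c) * (acc_student / acc_teacher)

For example, a student with one eighth of the teacher's parameters has Ratio ≈ 0.875 and Times = 8.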
Here'1×' indicates the number of channels in the teacher ShuffleNet and'(g = 2)' indicates that the number of groups is 2. Readers can refer to BID39 for more details about the specific configuration. We now compare the compressed architectures found by our algorithm to the state-of-the-art manually-designed compact network architecture ShuffleNet. We vary the number of channels and the number of groups in ShuffleNet and compare the compressed architectures found by our proposed method against these different configurations of ShuffleNet. We conduct experiments on CIFAR-100 and the are summarized in TAB2. For'Ours' in TAB2, we use the mean of 3 runs of our method. In TAB2, VGG-19, ResNet-18, ResNet-34 and ShuffleNet refer to the compressed architectures found by our algorithm based on the corresponding teacher network and do not refer to the original architecture indicated by the name. The teacher ShuffleNet used in the experiments is'ShuffleNet 1×(g = 2)' as mentioned above.'0.5×(g = 1)' and so on in TAB2 refer to different configurations of ShuffleNet and we show the accuracy of these original ShuffleNet in the table. The compressed architectures found based on ResNet-18 and ResNet-34 have a similar number of parameters with ShuffleNet 1.5× but they can all achieve much higher accuracy than ShuffleNet 1.5×. The compressed architecture found based on ShuffleNet 1 × (g = 2) can obtain higher accuracy than ShuffleNet 0.5× while using a similar number of parameters. We now study the transferability of the learned embedding space or the learned kernel. We would like to know to what extent a kernel learned in one setting can be generalized to a new setting. To be more specific about the kernel transfer, we first learn one kernel or multiple kernels in the source setting. Then we maximize the acquisition function within the search space in the target setting and the acquisition function is computed based on the kernel learned in the source setting. The maximizer of the acquisition function is a compressed architecture for the target setting. We evaluate this architecture in the target setting and compare it with the architecture found by applying algorithms directly to the target setting. 67.71% 0.26M 1.5 × (g = 1) 72.43% 2.09M 0.5 × (g = 2)67.54% 0.27M 1.5 × (g = 2) 71.41% 2.07M 0.5 × (g = 3) 67.23% 0.27M 1.5 × (g = 3) 71.05% 2.03M 0.5 × (g = 4) 66.83% 0.27M We consider the following four settings: (a) ResNet-18 on CIFAR-10, (b) ResNet-34 on CIFAR-10, (c) VGG-19 on CIFAR-10 and (d) ResNet-18 on CIFAR-100.'ResNet-18 on CIFAR-10' refers to searching for a compressed architecture with ResNet-18 as the teacher network for the dataset CIFAR-10 and so on. We first run our search algorithm in setting (a) and transfer the learned kernel to setting (b), (c) and (d) respectively to see how much the learned kernel can transfer to a larger teacher network in the same architecture family (this means a larger search space), a different architecture family (this means a totally different search space) or a harder dataset. DISPLAYFORM0 We learn K kernels in the source setting (a) and we transfer all the K kernels to the target setting, which will in K compressed architectures for the target setting. We report the best one among the K architectures. We have tried K = 1 and K = 8 and the are shown in TAB3. 
In all the three transfer scenarios, the learned kernel in the source setting (a) can help find reasonably good architectures in the target setting without actually training the kernel in the target setting, whose performance is better than the architecture found by applying N2N directly to the target setting. These proves that the learned architecture embedding space or the learned kernel is able to generalize to new settings for architecture search without any additional training. We address the task of searching for a compressed network architecture by using BO. Our proposed method can find more efficient architectures than all the baselines on CIFAR-10 and CIFAR-100. Our key contribution is the proposed method to learn an embedding space over the domain of network architectures. We also demonstrate that the learned embedding space can be transferred to new settings for architecture search without any training. Possible future directions include extending our method to the general NAS problem to search for desired architectures from the scratch and combining our proposed embedding space with BID9 to identify the Pareto set of the architectures that are both small and accurate. We now discuss the form of the function f. We aim to find a network architecture which contains as few parameters as possible but still can obtain a similar performance to the teacher network. Usually compressing a network leads to the decrease in the performance. So the function f needs to provide a balance between the compression ratio and the performance. In particular, we hope the function f favors architectures of high performance but low compression ratio more than architectures of low performance but high compression ratio. So we adopt the reward function design in N2N BID1 for the function f. Formally, f is defined as: DISPLAYFORM0 where C(x) is the compression ratio of the architecture x, A(x) is the validation performance of x and A(x teacher) is the validation performance of the teacher network. The compression ratio C(x) is defined as C(x) = 1 − #params(x) #params(xteacher). Note that for any x, to evaluate f (x) we need to train the architecture x on the training data and test on the validation data. This is time-consuming so during the search process, we do not fully train x. Instead, we only train x for a few epochs and use the validation performance of the network obtained by early stopping as an approximation for A(x). We also employ the Knowledge Distillation (KD) strategy BID10 for faster training as we are given a teacher network. But when we fully train the architecture x to see its true performance, we fine tune it from the weights obtained by early stopping with cross entropy loss without using KD. We represent the configuration of one layer by a vector of length (m + 2n + 6), where m is the number of types of layers we consider and n is the maximum number of layers in the network. The first m dimensions of the vector are a one-hot vector, indicating the type of the layer. Then the following 6 numbers indicate the value of different attributes of the layer, including the kernel size, stride, padding, group, input channels and output channels of the layer. If one layer does not have any specific attribute, the value of that attribute is simply set to zero. The following 2n dimensions encode the edge information of the network, if we view the network as a directed acyclic graph with each layer as a node in the graph. 
In particular, the 2n dimensions are composed of two n-dim vectors, where one represents the edges incoming to the code and the other one represents the edges outgoing from the node. The nodes in the directed acyclic graph can be topologically sorted, which will give each layer an index. For an edge from node i to j, the (j − i) th element in the outgoing vector of node i and the incoming vector of node j will be 1. We are sure that j is larger than i because all the nodes are topologically sorted. With this representation, we can describe the connection information in a complex network architecture. Impact of number of kernels K: We study the impact of the number of kernels K. We conduct experiments on CIFAR-100 and use ResNet-34 as the teacher network. We vary the value of K and fix the number of evaluated architectures to 160. The are summarized in TAB4. We can see that K = 4, 8, 16 yield much better than K = 1. Also the performance is not sensitive to K as K = 4, 8, 16 yield similar . In our main experiments, we fix K = 8.Impact of adding skip connections: Our search space is defined based on three operations: layer removal, layer shrinkage and adding skip connections. A key difference between our search space and N2N BID1 is that they only support layer removal and shrinkage do not support adding skip connections. To validate the effectiveness of adding skip connections, we conduct experiments on CIFAR-100 and on three architectures. In TAB5,'Ours -removal + shrink' refers to the search space without considering adding skip connections and'Ours' refers to using the full search space. We can see that'Ours' consistently outperforms'Ours -removal + shrink' across different teacher networks, proving the effectiveness of adding skip connections. Impact of the maximization of the acquisition function: As mentioned in Section 3.2, we have two choices to maximize the acquisition function EI t (x): randomly sampling (RS) and evolutionary algorithm (EA). We conduct the experiments to compare RS and ES to compress ResNet-34 on CIFAR-100. We find that although EA is empirically better than RS in terms of maximizing EI t (x), EA is slightly worse than RS in terms of the final search performance as shown in TAB6. For any EI t (x), the solution found by EA x EA may be better than the solution found by RS x RS, i.e., EI t (x EA) > EI t (x RS). However, we observe that f (x EA) and f (x RS) are usually similar. We also plot the values of f (x) for the evaluated architectures when using RS and EA to maximize the acquisition function respectively in FIG0. We can see that the function value of the evaluated architectures grows slightly more stable when using RS to maximize the acquisition function then using EA. Therefore, we choose RS in the following experiments for the sake of simplicity.6.4 COMPARISON TO TPE (BERGSTRA ET AL., 2011)Neural architecture search can be viewed as an optimization problem in a high-dimensional and discrete space. There are existing optimization methods such as TPE BID3 and SMAC BID15 that are proposed to handle such input spaces. To further justify our idea to learn a latent embedding space for the neural architecture domain, we now compare our method to directly using TPE to search for compressed architectures in the original hyperparameter value domain. TPE BID3 ) is a hyperparameter optimization algorithm based on a tree of Parzen estimator. 
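The per-layer representation of Appendix 6.2 (the one-hot type block, the six attribute slots, and the two n-dimensional edge vectors) can be assembled as in the following numpy sketch; the ordering of the attribute slots and the 0-based indexing are illustrative assumptions.

import numpy as np

def layer_config_vector(node_idx, layer_type_idx, attrs, in_from, out_to, m, n):
    # m: number of layer types; n: maximum number of layers; node_idx: topological index.
    # attrs: (kernel size, stride, padding, group, input channels, output channels),
    # zeros for attributes a layer does not have. Indexing below is 0-based, which may
    # differ from the paper's convention by one.
    v = np.zeros(m + 2 * n + 6)
    v[layer_type_idx] = 1.0                            # one-hot layer type
    v[m:m + 6] = attrs                                 # the six attribute slots
    incoming, outgoing = np.zeros(n), np.zeros(n)
    for src in in_from:                                # edge src -> node_idx
        incoming[node_idx - src] = 1.0
    for dst in out_to:                                 # edge node_idx -> dst
        outgoing[dst - node_idx] = 1.0
    v[m + 6:m + 6 + n] = incoming
    v[m + 6 + n:] = outgoing
    return v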
In TPE, they use Gaussian mixture models (GMM) to fit the probability density of the hyperparameter values, which indicates that they determine the similarity between two architecture configurations based on the Euclidean distance in the original hyperparameter value domain. However, instead of comparing architecture configurations in the original hyperparameter value domain, our method transforms architecture configurations into a learned latent embedding space and compares them in the learned embedding space. We first do not consider adding skip connections between layers and focus on layer removal and layer shrinkage only, i.e., we search for a compressed architecture by removing and shrinking layers from the given teacher network. Therefore, the hyperparameters we need to tune include for each layer whether we should keep it or not and the shrinkage ratio for each layer. This in 64 hyperparameters for ResNet-18 and 112 hyperparameters for ResNet-34. We conduct the experiments on CIFAR-100 and the are summarized in the TAB7. Comparing'TPE -removal + shrink' and'Ours -removal + shrink', we can see that our method outperforms TPE and can achieve higher accuracy with a similar size. Now, we conduct experiments with adding skip connections. Besides the hyperparameters mentioned above, for each pair of layers where the output dimension of one layer is the same as the input dimension of another layer, we tune a hyperparameter representing whether to add a skip connection between them. The in 529 and 1717 hyperparameters for ResNet-18 and ResNet-34 respectively. In this representation, the original hyperparameter space is extremely high-dimensional and we think it would be difficult to directly optimize in this space. We can see from the table that for ResNet-18, the'TPE' are worse than'TPE -removal + shrink'. We do not show the'TPE' for ResNet-34 here because the networks found by TPE have too many skip connections, which makes it very hard to train. The loss of those networks gets diverged easily and do not generate any meaningful . Based on the on'layer removal + layer shrink' only and the on the full search space, we can see that our method is better than optimizing in the original space especially when the original space is very high-dimensional. We need to randomly sample architectures in the search space when optimizing the acquisition function. As mentioned in Section 3.2, we sample the architectures by sampling the operations to manipulate the architecture of the teacher network. During the process, we need to make sure the layers in the network are still compatible with each other in terms of the dimension of the feature map. Therefore, We impose some conditions when we sample the operations in order to maintain the consistency between between layers. For layer removal, only layers whose input dimension and output dimension are the same are allowed to be removed. For layer shrinkage, we divide layers into groups and for layers in the same group, the number of channels are always shrunken with the same ratio. The layers are grouped according to their input and output dimension. For adding skip connections, only when the output dimension of one layer is the same as the input dimension of another layer, the two layers can be connected. When there are multiple incoming edges for one layer in the computation graph, the outputs of source layers are added up to form the input for that layer. When compressing ShuffleNet, we also slightly modify the original architecture before compression. 
We insert a 1 × 1 convolutional layer before each average pooling layer. This modification increases parameters by about 10% and does not significantly influence the performance of ShuffleNet. Note that the modification only happens when we need to compress ShuffleNet and does not influence the performance of the original ShuffleNet shown in TAB2. We observe that the random search (RS) baseline which maximizes f (x) with random sampling can achieve very good performance. To analyze RS in more detail, we show the value of f (x) for the 160 architectures evaluated in the search process in FIG1. The specific setting we choose is ResNet-34 on CIFAR-100. We can see that although RS can sometimes sample good architectures with high f (x) value, it is much more unstable than our method. The function value of the evaluated architectures selected by our method has a strong tendency to grow as we search more steps while RS does not show such trend. Also, from the histogram of values of f (x), we can see that RS has a much lower chance to get architectures with high function values than our method. This is expected since our method leverages the learned architecture embedding or the kernel function to carefully select the architecture for evaluation while RS just randomly samples from the search space. We can conclude that our method is much more efficient than RS. We discuss the possible choices of the objective function for learning the embedding space in this section. In our experiments, we learn the LSTM weights θ by maximizing the predictive posterior probability, i.e., minimizing the negative log posterior probability as defined in Eq. 5. There are two other alternative choices for the objective function as suggested by the reviewers. We discuss the two choices and compare them to our choice in the following text. Intuitively, a meaningful embedding space should be predictive of the function value, i.e, the performance of the architecture. Therefore, a reasonable choice of the objective function is to train the LSTM by regressing the function value with a Euclidean loss. Technically, this is done by adding a fully connected layer F C(·; θ) after the embedding, whose output is the predicted performance of the input architecture. However, directly training the LSTM by regressing the function value does not let us directly evaluate how accurate the posterior distribution characterizes the statistical structure of the function. As mentioned before, the posterior distribution guides the search process by influencing the choice of architectures for evaluation at each step. Therefore, we believe maximizing the predictive posterior probability is a more suitable training objective for our search algorithm than regressing the function value. To validate this, we have tried changing the objective function from Eq. 5 to the squared Euclidean distance between the predicted function value and the true function value: DISPLAYFORM0 The are summarized in TAB8. We observe that maximizing the predictive posterior probability consistently yields better than the Euclidean loss. Another possible choice of the objective function is to maximize the log marginal likelihood log p (f (D) | D; θ), which is the conventional objective function for kernel learning BID36 b). We do not choose to maximize log marginal likelihood because we empirically find that maximizing the log marginal likelihood yields worse than maximizing the predictive GP posterior as shown in TAB8. 
When using the log marginal likelihood, we observe that the loss is numerically unstable due to the log determinant of the covariance matrix in the log marginal likelihood. The training objective usually goes to infinity when the dimension of the covariance matrix is larger than 50, even with smaller learning rates, which may harm the search performance. Therefore, we learn the embedding space by maximizing the predictive GP posterior instead of the log marginal likelihood.
We propose a method to incrementally learn an embedding space over the domain of network architectures, to enable the careful selection of architectures for evaluation during compressed architecture search.
361
scitldr
Reinforcement Learning (RL) problem can be solved in two different ways - the Value function-based approach and the policy optimization-based approach - to eventually arrive at an optimal policy for the given environment. One of the recent breakthroughs in reinforcement learning is the use of deep neural networks as function approximators to approximate the value function or q-function in a reinforcement learning scheme. This has led to with agents automatically learning how to play games like alpha-go showing better-than-human performance. Deep Q-learning networks (DQN) and Deep Deterministic Policy Gradient (DDPG) are two such methods that have shown state-of-the-art in recent times. Among the many variants of RL, an important class of problems is where the state and action spaces are continuous --- autonomous robots, autonomous vehicles, optimal control are all examples of such problems that can lend themselves naturally to reinforcement based algorithms, and have continuous state and action spaces. In this paper, we adapt and combine approaches such as DQN and DDPG in novel ways to outperform the earlier for continuous state and action space problems. We believe these are a valuable addition to the fast-growing body of on Reinforcement Learning, more so for continuous state and action space problems. Reinforcement learning (RL) is about an agent interacting with the environment, learning an optimal policy, by trail and error, for sequential decision making problems in a wide range of fields such that the agent learns to control a system and maximize a numerical performance measure that expresses a long-term objective BID6 ). Many interesting real-world control tasks, such as driving a car or riding a snowboard, require smooth continuous actions taken in response to high-dimensional, real-valued sensory input. In applying RL to continuous problems, the most common approach for a long time has been first to discretize state and action spaces and then to apply an RL algorithm for a discrete stochastic system BID2 ). However, this discretization approach has a number of drawbacks. Hence, the formulation of the reinforcement learning problem with continuous state and action space holds great value in solving more real-world problems. The advent of deep learning has had a significant impact on many areas in machine learning, dramatically improving the state-of-the-art tasks such as object detection, speech recognition and language translation BID5. The most important property of deep learning is that deep neural networks can automatically find low-dimensional representations of high-dimensional data. Therefore, deep learning enables RL to scale to problems which were previously intractable -setting with high dimensional/continuous state and action space BID0.Few of the current state of the art methods in the area of Deep RL are:• Deep Q-learning Networks (DQN) -introduced novel concepts which helped in using neural networks as function approximators for reinforcement learning algorithms (for continuous state space) BID8.• Prioritized Experience Replay (PER) -builds upon DQN with some newer approaches to outperform DQN (for continuous state space) BID10.• Deep Deterministic Policy Gradients (DDPG) -follows a different paradigm as compared to the above methods. It uses DQN as the function approximator while building on the seminal work of BID12 (for both continuous state and action space) on deterministic policy gradients. 
We propose a new algorithm, Prioritized DDPG using the ideas proposed in DQN, prioritized experience replay and DDPG such that it outperforms the DDPG in the continuous state and action space. Prioritized DDPG uses the concept of prioritized sampling in the function approximator of DDPG. Our show that prioritized DDPG outperforms DDPG in a majority of the continuous action space environments. We then use the concept of parameter space noise for exploration and show that this further improves the rewards achieved. Critic methods rely exclusively on function approximation and aim at learning a "good" approximation of the value functionKonda &. We survey a few of the recent best known critic methods in RL. As in any value function-based approach, DQN method tries to find the value function for each state and then tries to find the optimal function. Min et al. BID12 also consider the approach of continuous state spaces. The novelty of their approach is that they use a non-linear function approximator efficiently. Until their work, non-linear function approximators were inefficient and also had convergence issues. This was due to the fact that there was a lot of correlation between the data being fed to the neural networks which ed in them diverging (neural networks assume that the data comes from independent and identically distributed source). To overcome this drawback, the novel ideas that were proposed which made it possible to use non-linear function approximator for reinforcement learning are the following:• Experience Replay Buffer: In this technique, the input given to the neural network is selected at random from a large buffer of stored observations, which ensures that there are no correlations in the data, which is a requirement for neural networks.• Periodic Target Network Updates: The authors propose that having two sets of parameters for the same neural networks can be beneficial. The two sets of parameters are used, one of the parameters is used to compute the target at any given iteration and the other, network parameters are used in the loss computation and are updated by the first network parameters periodically, this also ensures lesser co-relation. The prioritized experience replay algorithm is a further improvement on the deep Q-learning methods and can be applied to both DQN and Double DQN. The idea proposed by the authors is as follows: instead of selecting the observations at random from the replay buffer, they can be chosen based on some criteria which will help in making the learning faster. Intuitively, what they are trying here is to replace those observations which do not contribute to learning or learning with more useful observations. To select these more useful observations, the criteria used was the error of that particular observation. This criterion helps in selecting those observations which help the agent most in learning and hence speeds up the learning process. The problem with this approach is that greedy prioritization focuses on a small subset of the experience and this lack of diversity might lead to over-fitting. Hence, the authors introduce a stochastic sampling method that interpolates between pure greedy and uniform random prioritization. Hence, the new probability if sampling a transition i is DISPLAYFORM0 where p i is the priority of transition i and α determines how much prioritization is used. This approach, while it improves the has a problem of changing the distribution of the expectation. 
This is resolved by the authors by using Importance Sampling (IS) weights DISPLAYFORM1 where if β = 1, the non-uniform probabilities P (i) are fully compensated BID10 ). Actor methods work with a parameterized family of policies. The gradient of the performance, with respect to the actor parameter is directly estimated by simulation, and the parameters are updated in the direction of improvement . The most popular policy gradient method is the deterministic policy gradient(DPG) method and in this approach, instead of having a stochastic policy which we have seen till now, the authors make the policy deterministic and then determine the policy gradient. The deterministic policy gradient is the expected gradient of the action-value function, which integrates over the state space; whereas in the stochastic case, the policy gradient integrates over both state and action spaces. What this leads to is that the deterministic policy gradient can be estimated more efficiently than the stochastic policy gradient. The DPG algorithm, presented by BID12 maintains a parameterized actor function µ(s|θ µ) which is the current deterministic policy that maps a state to an action. They used the normal Bellman equation to update the critic Q(s, a). They then went on to prove that the derivative expected return with respect to actor parameters is the gradient of the policy's performance. Actor critic models (ACM) are a class of RL models that separate the policy from the value approximation process by parameterizing the policy separately. The parameterization of the value function is called the critic and the parameterization of the policy is called the actor. The actor is updated based on the critic which can be done in different ways, while the critic is update based on the current policy provided by the actor BID4 BID3 ). The DDPG algorithm tries to solve the reinforcement learning problem in the continuous action and state space setting. The authors of this approach extend the idea of deterministic policy gradients. What they add to the DPG approach is the use of a non-linear function approximator BID7 ).While using a deterministic policy, the action value function reduces from DISPLAYFORM0 to DISPLAYFORM1 as the inner expectation is no longer required. What this also tells us is that the expectation depends only on the environment and nothing else. Hence, we can learn off-policy, that is, we can train our reinforcement learning agent by using the observations made by some other agent. They then use the novel concepts used in DQN to construct their function approximator. These concepts could not be applied directly to continuous action space, as there is an optimization over the action space at every step which is in-feasible when there is a continuous action space BID7 ).Once we have both the actor and the critic networks with their respective gradients, we can then use the DQN concepts -replay buffer and target networks to train these two networks. They apply the replay buffer directly without any modifications but make small changes in the way target networks are used. Instead of directly copying the values from the temporary network to the target network, they use soft updates to the target networks. The proposed algorithm is primarily an adaptation of DQN and DDPG with ideas from the work of BID10 on continuous control with deep reinforcement learning to design a RL scheme that improves on DDPG significantly. 
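The two quantities above, the sampling distribution of equation 1 and the importance sampling correction, can be sketched together as follows. This is a naive O(N) sampler rather than the sum-tree used in practice, and the α, β values and the normalization by the maximum weight are typical PER settings rather than choices taken from this text.

import numpy as np

def sample_prioritized(priorities, batch_size, alpha=0.6, beta=0.4):
    # P(i) = p_i^alpha / sum_k p_k^alpha  (equation 1)
    scaled = np.asarray(priorities, dtype=np.float64) ** alpha
    probs = scaled / scaled.sum()
    idx = np.random.choice(len(priorities), size=batch_size, p=probs)
    # importance sampling weights w_i = (1 / (N * P(i)))^beta,
    # normalized by the maximum weight for stability (common practice)
    weights = (1.0 / (len(priorities) * probs[idx])) ** beta
    return idx, weights / weights.max()

The returned weights multiply the per-transition errors in the critic update so that the bias introduced by non-uniform sampling is compensated.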
The intuition behind the idea is as follows: the DDPG algorithm uses the DQN method as a sub-algorithm, and any improvement over the DQN algorithm should ideally result in an improvement of the DDPG algorithm. However, not all of the methods described above that improve DQN can be used to improve DDPG, because some of them require the environment to have a discrete action space. So, for our work, we consider only the prioritized experience replay method, which does not have this constraint. This improvement to the DQN algorithm, the prioritized experience replay method, can be integrated into the DDPG algorithm in a very simple way: instead of using plain DQN as the function approximator, we use DQN with prioritized experience replay. That is, in the DDPG algorithm, instead of selecting observations randomly from the replay buffer, we select them using the stochastic sampling method defined in equation 1. The pseudo-code for Prioritized DDPG is given in Algorithm 1.
Algorithm 1: Prioritized DDPG
1: Randomly initialize the critic network Q(s, a | θ^Q) and the actor network µ(s | θ^µ)
2: Initialize the target networks Q' and µ' with weights θ^Q' ← θ^Q, θ^µ' ← θ^µ
3: Initialize the replay buffer R
4: for episode = 1, M do
5:   Initialize a random process N for action exploration
6:   Receive initial observation state s_1
7:   for t = 1, T do
8:     Select action a_t = µ(s_t | θ^µ) + N_t according to the current policy
9:     Execute action a_t and observe reward r_t and new state s_{t+1}
10:    Store transition (s_t, a_t, r_t, s_{t+1}) in R    ▷ Storing to the replay buffer
11:    Sample a mini-batch of N transitions (s_i, a_i, r_i, s_{i+1}) from R, each with probability P(i) = p_i^α / Σ_k p_k^α    ▷ Stochastic sampling
12:    Set y_i = r_i + γ Q'(s_{i+1}, µ'(s_{i+1} | θ^µ') | θ^Q')
13:    Update the critic by minimizing the loss: L = (1/N) Σ_i (y_i − Q(s_i, a_i | θ^Q))^2
14:    Update the actor policy using the sampled policy gradient: ∇_{θ^µ} J ≈ (1/N) Σ_i ∇_a Q(s, a | θ^Q)|_{s=s_i, a=µ(s_i)} ∇_{θ^µ} µ(s | θ^µ)|_{s=s_i}
15:    Update the target networks: θ^Q' ← τ θ^Q + (1 − τ) θ^Q', θ^µ' ← τ θ^µ + (1 − τ) θ^µ'
16:    Update the transition priorities for the entire batch based on the error
17:  end for
18: end for
This algorithm is quite similar to the original DDPG algorithm, the only changes being the way the observations are selected for training in line 11 and the way the transition priorities are updated in line 16. The first change ensures that we select the set of observations that helps the agent learn faster, and the second change helps avoid over-fitting, as it ensures that all observations have a non-zero probability of being selected for training and that a few high-error observations are not used repeatedly to train the network. The proposed Prioritized DDPG algorithm was tested on many of the standard RL simulation environments that have been used in the past for benchmarking earlier algorithms. The environments are available as part of the Mujoco platform (BID13). Mujoco is a physics environment created to facilitate research and development in robotics and similar areas, where fast simulation is an important component. This set of environments provides a varied set of challenges for the agent, as the environments have continuous action as well as state spaces. All the environments contain stick figures with joints trying to perform some basic task by performing actions such as moving a joint in a particular direction or applying some force using one of the joints. The implementation used for making the comparison was the implementation of DDPG in baselines (BID9); the Prioritized DDPG algorithm was implemented by extending this code. The following are the results of the Prioritized DDPG agent compared to DDPG. The overall reward, that is, the average of the reward across all epochs up to that point, and the reward history, the average of the last 100 epochs, are plotted for four environments.
The yaxis represents the reward the agent has received from the environment and the x-axis is the number of epochs with each epoch corresponding to 2000 time steps. As seen in FIG0, the Prioritized DDPG algorithm reaches the reward of the DDPG algorithm in less than 300 epochs for the HalfCheetah environment. This shows that the prioritized DDPG algorithm is much faster in learning. The same trend can be observed in FIG0 for HumanoidStandup, Hopper and Ant environments. That is, prioritized DDPG agent learns and gets the same reward as DDPG much faster. This helps is in reducing overall training time. Prioritized DDPG algorithm can also help in achieving which might not be achieved by DDPG even after large number of epochs. This can be seen in the case of the Ant environment. FIG0 shows that DDPG rewards are actually declining with more training. On the other hand, Prioritized DDPG has already achieved a reward much higher and is more stable. There is no best exploration strategy in RL. For some algorithms, random exploration works better and for some greedy exploration. But whichever strategy is used, the important requirement is that the agent has explored enough about the environment and learns the best policy. BID9 in their paper explore the idea of adding noise to the agent's parameters instead of adding noise in the action space. In their paper Parameter Space Noise For Exploration, they explore and compare the effects of four different kinds of noises• Uncorrelated additive action space noise• Correlated additive Gaussian action space noise• Adaptive-param space noise• No noise at all They show that adding parameter noise vastly outperforms existing algorithms or at least is just as good on majority of the environments for DDPG as well as other popular algorithms such as Trust Region Policy Optimization BID11 ). Hence, we use the concept of parametric noise in PDDPG algorithm to improve the rewards achieved by our agent. The PDDPG algorithm with parameter noise was run on the same set of environments as the PDDPG algorithm -the Mujoco environments. The empirical are as follows. As we can see from figures 2,3 and 4 we see a great amount of variation on the reward achieved. We can infer that prioritized DDPG clearly works better for adaptive-param and corelated noise as compared to uncorrelated noise. This could be due to the fact that prioritized DDPG already explores faster as compared to DDPG and hence adding more randomness for exploration is not going to bear any fruit. Therefore, we can conclude that, PDDPG learns faster than DDPG and with the appropriate noise, it can be improved further. This can be seen in figure 5, where the overall best of both the algorithms have been plotted against each other. We see that PDDPG outperforms DDPG in majority of the environments and does reasonably well in the others. To summarize, this paper discusses the state of the art methods in reinforcement learning with our improvements that have led to RL algorithms in continuous state and action spaces that outperform the existing ones. The proposed algorithm combines the concept of prioritized action replay with deep deterministic policy gradients. As it has been shown, on a majority of the mujoco environments this algorithm vastly outperforms the DDPG algorithm both in terms of overall reward achieved and the average reward for any hundred epochs over the thousand epochs over which both were run. Hence, it can be concluded that the proposed algorithm learns much faster than the DDPG algorithm. 
Secondly, the fact that the current reward is higher, coupled with the observation that the rate of increase in reward is also higher for the proposed algorithm, shows that it is unlikely for the DDPG algorithm to surpass the results of the proposed algorithm on the majority of environments. Also, certain kinds of noise further improve PDDPG and help it attain higher rewards. One other important result is that different kinds of noise work better for different environments, which was evident in how drastically the results changed based on the parameter noise. The presented algorithm can also be extended and improved further by finding more concepts in value-based methods that can be carried over to policy-based methods. The overall improvements in the area of continuous state and action spaces can help make reinforcement learning more applicable to real-world scenarios, as real-world systems provide continuous inputs. These methods can potentially be extended to safety-critical systems by incorporating the notion of safety during the training of an RL algorithm. This is currently a big challenge because of the necessarily unrestricted exploration process of a typical RL algorithm.
Improving the performance of an RL agent in the continuous action and state space domain by using prioritised experience replay and parameter noise.
362
scitldr
The Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. We introduce the Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space. We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and a denser interaction tensor contains richer semantic information. One instance of such an architecture, the Densely Interactive Inference Network (DIIN), demonstrates state-of-the-art performance on large-scale NLI corpora and a large-scale NLI-like corpus. It is noteworthy that DIIN achieves a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI) dataset with respect to the strongest published system. The Natural Language Inference task (NLI, also known as recognizing textual entailment, or RTE) requires one to determine whether the logical relationship between two sentences is entailment (if the premise is true, then the hypothesis must be true), contradiction (if the premise is true, then the hypothesis must be false) or neutral (neither entailment nor contradiction). NLI is known as a fundamental and yet challenging task for natural language understanding, not only because it requires one to identify language patterns, but also because it requires understanding certain common-sense knowledge. In TAB0, three samples from the MultiNLI corpus show that solving the task requires one to handle the full complexity of lexical and compositional semantics. Previous work on NLI (or RTE) has extensively researched conventional approaches BID25; BID39. Recent progress on NLI is enabled by the availability of a 570k human-annotated dataset and the advancement of representation learning techniques. Among the core representation learning techniques, the attention mechanism has been broadly applied in many NLU tasks since its introduction: machine translation BID15, abstractive summarization BID50, reading comprehension, dialog systems BID41, etc. As described by BID57, "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key". The attention mechanism is known for its alignment between representations, for focusing on one part of a representation over another, and for modeling dependencies regardless of sequence length. Observing attention's powerful capability, we hypothesize that the attention weight can help a machine understand the text. A regular attention weight, the core component of the attention mechanism, encodes the cross-sentence word relationship into an alignment matrix. However, a multi-head attention weight (Vaswani et al.) can encode such interaction into multiple alignment matrices, which provides a more powerful alignment. In this work, we push the multi-head attention to an extreme by building a word-by-word dimension-wise alignment tensor which we call the interaction tensor. The interaction tensor encodes the high-order alignment relationship between the sentence pair.
Our experiments demonstrate that by capturing the rich semantic features in the interaction tensor, we are able to solve the natural language inference task well, especially in cases with paraphrases, antonyms and overlapping words. We dub the general framework the Interactive Inference Network (IIN). To the best of our knowledge, it is the first attempt to solve the natural language inference task in the interaction space. We further explore one instance of the Interactive Inference Network, the Densely Interactive Inference Network (DIIN), which achieves new state-of-the-art performance on both the SNLI and MultiNLI corpora. To test the generality of the architecture, we interpret the paraphrase identification task as a natural language inference task, where matching corresponds to entailment and not-matching to neutral. We test the model on the Quora Question Pair dataset, which contains over 400k real-world question pairs, and achieve new state-of-the-art performance. We introduce the related work in Section 2, and discuss the general framework of IIN along with a specific instance that enjoys state-of-the-art performance on multiple datasets in Section 3. We describe experiments and analysis in Section 4. Finally, we conclude and discuss future work in Section 5. The early exploration of NLI mainly relied on conventional methods and small-scale datasets BID40. The availability of the SNLI dataset with 570k human-annotated sentence pairs has enabled a good deal of progress on natural language understanding. The essential representation learning techniques for NLU such as attention, memory and the use of parse structure are studied on SNLI, which serves as an important benchmark for sentence understanding. The models trained on the NLI task can be divided into two categories: (i) sentence encoding-based models, which aim to find a vector representation for each sentence and classify the relation by using the concatenation of the two vector representations along with their absolute element-wise difference and element-wise product; (ii) joint feature models, which use cross-sentence features or attention from one sentence to another. After the neural attention mechanism was successfully applied to the machine translation task, the technique became widely used in both the natural language processing and computer vision domains. Many variants of the attention technique such as hard-attention, self-attention, multi-hop attention BID27, bidirectional attention BID51 and multi-head attention BID57 have also been introduced to tackle more complicated tasks. Before this work, the neural attention mechanism was mainly used for alignment, focusing on a specific part of the representation. In this work, we want to show that the attention weight contains rich semantic information required for understanding the logical relationship between the sentence pair. Though RNNs or LSTMs are very good for variable-length sequence modeling, using convolutional neural networks in NLU tasks is very desirable because of their parallelism in computation. The convolutional structure has been successfully applied in various domains such as machine translation BID26, sentence classification BID34, text matching BID30 and sentiment analysis BID33, etc. The convolutional structure is also applied at different levels of granularity such as the byte BID65, character, word BID26 and sentence levels. The Interactive Inference Network (IIN) is a hierarchical multi-stage process and consists of five components. Each of the components is compatible with different types of implementations.
Potentially all existing approaches in machine learning, such as decision trees, support vector machines and neural networks, can be transferred to replace certain components in this architecture. We focus on neural network approaches below. 1. Embedding Layer converts each word or phrase to a vector representation and constructs the representation matrix for the sentences. In the embedding layer, a model can map tokens to vectors with pre-trained word representations such as GloVe BID48, word2Vec BID42 and fastText BID32. It can also utilize preprocessing tools, e.g. a named entity recognizer, part-of-speech recognizer, lexical parser and coreference identifier, etc., to incorporate more lexical and syntactical information into the feature vector. 2. Encoding Layer encodes the representations by incorporating context information or enriching the representation with desirable features for future use. For instance, a model can adopt a bidirectional recurrent neural network to model the temporal interaction in both directions, a recursive neural network BID54 (also known as TreeRNN) to model the compositionality and the recursive structure of language, or self-attention to model long-term dependencies within a sentence. Different components of the encoder can be combined to obtain a better sentence matrix representation. 3. Interaction Layer creates a word-by-word interaction tensor from the premise and hypothesis representation matrices. The interaction can be modeled in different ways. A common approach is to compute the cosine similarity or dot product between each pair of feature vectors. On the other hand, a high-order interaction tensor can be constructed with the outer product between the two matrix representations. 4. Feature Extraction Layer adopts a feature extractor to extract semantic features from the interaction tensor. Convolutional feature extractors, such as AlexNet BID36, VGG BID53, Inception BID55, ResNet BID28 and DenseNet BID31, proven to work well on image recognition, are completely compatible with such an architecture. Unlike the work of BID34, which employs a 1-D sliding window, our CNN architecture allows a 2-D kernel to extract semantic interaction features from the word-by-word interaction between n-gram pairs. Sequential or tree-like feature extractors are also applicable in the feature extraction layer. 5. Output Layer decodes the acquired features to give a prediction. Under the setting of NLI, the output layer predicts the confidence for each class. Here we introduce the Densely Interactive Inference Network (DIIN) 1, which is a relatively simple instantiation of IIN but produces state-of-the-art performance on multiple datasets. Embedding Layer: For DIIN, we use the concatenation of word embedding, character features and syntactical features. The word embedding is obtained by mapping tokens to a high-dimensional vector space with pre-trained word vectors (840B GloVe). The word embedding is updated during training. As in BID35 BID37, we filter the character embedding with a 1D convolution kernel. The character convolutional feature maps are then max-pooled over the time dimension for each token to obtain a vector. The character features supply extra information for some out-of-vocabulary (OOV) words. Syntactical features include a one-hot part-of-speech (POS) tagging feature and a binary exact match (EM) feature. The EM value is activated if there are tokens with the same stem or lemma in the other sentence as the corresponding token. The EM feature is simple but has been found useful, as in the reading comprehension task BID21.
In the analysis section, we study how the EM feature helps text understanding. Now we have the premise representation P ∈ R p×d and the hypothesis representation H ∈ R h×d, where p refers to the sequence length of the premise, h refers to the sequence length of the hypothesis and d is the dimension of both representations. The 1-D convolutional neural network and character feature weights share the same set of parameters between premise and hypothesis. Encoding Layer: In the encoding layer, the premise representation P and the hypothesis representation H are passed through a two-layer highway network, thus giving P̂ ∈ R p×d and Ĥ ∈ R h×d as the new premise and hypothesis representations. These new representations are then passed to a self-attention layer to take into account the word order and context information. Taking the premise as an example, we model self-attention by DISPLAYFORM0 where P̃ i is a weighted summation of P̂. We choose α(a, DISPLAYFORM1, where w a ∈ R 3d is a trainable weight, • is element-wise multiplication, [;] is vector concatenation across rows, and the implicit multiplication is matrix multiplication. Then both P̂ and P̃ are fed into a semantic composite fuse gate (fuse gate in short), which acts as a skip connection. The fuse gate is implemented as DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 DISPLAYFORM5 where DISPLAYFORM6 We do the same operation on the hypothesis representation, thus obtaining H̃. The weights of intra-attention and fuse gate for premise and hypothesis are not shared, but the difference between the two sets of weights is penalized. The penalization aims to ensure that the parallel structure learns similar functionality while remaining aware of the subtle semantic difference between premise and hypothesis. Interaction Layer: The interaction layer models the interaction between the encoded premise representation P enc and the encoded hypothesis representation H enc as follows: DISPLAYFORM7 where P̂ i is the i-th row vector of P̂, and Ĥ j is the j-th row vector of Ĥ. Though there are many implementations of the interaction, we find β(a, b) = a • b very useful. Feature Extraction Layer: We adopt DenseNet BID31 as the convolutional feature extractor in DIIN. Though our experiments show that ResNet BID28 works well in the architecture, we choose DenseNet because it is effective in saving parameters. One interesting observation with ResNet is that if we remove the skip connection in the residual structure, the model does not converge at all. We found that batch normalization delays convergence without contributing to accuracy, therefore we do not use it in our case. A ReLU activation function is applied after all convolutions unless otherwise noted. Once we have the interaction tensor I, we use a convolution with a 1 × 1 kernel to scale down the tensor by a ratio η, without a following ReLU. If the input channel is k then the output channel is floor(k × η). The generated feature map is then fed into three pairs of dense blocks BID31 and transition blocks. The dense block contains n layers of 3 × 3 convolutions with growth rate g. The transition layer has a convolution layer with a 1 × 1 kernel for scaling down, followed by a max pooling layer with stride 2. The scale-down ratio in the transition layer is θ. Output Layer: DIIN uses a linear layer to classify the final flattened feature representation into three classes. In this section, we present the evaluation of our model. We first perform quantitative evaluation, comparing our model with other competitive models.
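Before turning to the experiments, here is a minimal numpy sketch of the interaction layer described above, where each entry of the interaction tensor is the element-wise product β(a, b) = a • b of a premise word vector and a hypothesis word vector; the toy shapes and variable names are our own illustrative assumptions.

import numpy as np

p_len, h_len, d = 5, 7, 4                 # premise length, hypothesis length, feature dim
P_enc = np.random.randn(p_len, d)         # encoded premise representation (p x d)
H_enc = np.random.randn(h_len, d)         # encoded hypothesis representation (h x d)

# Interaction tensor: I[i, j, :] = P_enc[i] * H_enc[j]  (element-wise product)
I = P_enc[:, None, :] * H_enc[None, :, :]     # shape (p_len, h_len, d)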
We then conduct some qualitative analyses to understand how DIIN achieves high-level understanding through interaction. Here we introduce the three datasets we evaluate our model on. The evaluation metric for all datasets is accuracy. SNLI The Stanford Natural Language Inference corpus (SNLI; Bowman et al. 2015) has 570k human-annotated sentence pairs. The premise data is drawn from the captions of the Flickr30k corpus, and the hypothesis data is manually composed. The labels provided are "entailment", "neutral", "contradiction" and "-". "-" indicates that the annotators could not reach consensus with each other, so these pairs are removed during training and testing as in other works. We use the same data split as in previous work. MultiNLI The Multi-Genre NLI Corpus (MultiNLI) has 433k sentence pairs, whose collection process and task details closely follow SNLI. The premise data is collected from a maximally broad range of genres of American English such as written non-fiction genres (SLATE, OUP, GOVERNMENT, VERBATIM, TRAVEL), spoken genres (TELEPHONE, FACE-TO-FACE), less formal written genres (FICTION, LETTERS) and a specialized one for 9/11. Half of these selected genres appear in the training set while the rest do not, creating in-domain (matched) and cross-domain (mismatched) development/test sets. We use the same data split as provided with the corpus. Since test set labels are not provided, the test performance is obtained through submission on Kaggle.com 2. Each team is limited to two submissions per day. Quora question pair The Quora question pair dataset contains over 400k real-world question pairs selected from Quora.com. A binary annotation which stands for match (duplicate) or not match (not duplicate) is provided for each question pair. In our case, a duplicate question pair can be interpreted as an entailment relation and a non-duplicate pair as neutral. We use the same split ratio as mentioned in prior work. We implement our algorithm with the TensorFlow BID14 framework. An Adadelta optimizer with ρ = 0.95 and ε = 1e−8 is used to optimize all the trainable weights. The initial learning rate is set to 0.5 and the batch size to 70. When the model does not improve the best in-domain performance for 30,000 steps, an SGD optimizer with a learning rate of 3e−4 is used to help the model find a better local optimum. Dropout layers are applied before all linear layers and after the word-embedding layer. We use an exponentially decayed keep rate during training, where the initial keep rate is 1.0 and the decay rate is 0.977 for every 10,000 steps. We initialize our word embeddings with pre-trained 300D GloVe 840B vectors BID48 while the out-of-vocabulary words are randomly initialized with a uniform distribution. The character embeddings are randomly initialized with 100D. We crop or pad each token to have 16 characters. The 1D convolution kernel size for the character embedding is 5. All weights are constrained by L2 regularization, and the L2 regularization at step t is calculated as follows: DISPLAYFORM0 where L2FullRatio determines the maximum L2 regularization ratio, and L2FullStep determines at which step the maximum L2 regularization ratio is applied. We choose L2FullRatio as 0.9e−5 and L2FullStep as 100,000.
[Table 2: MultiNLI results; reported rows include Gated-Att BiLSTM BID23 (73.2 / 73.6) and Shortcut-Stacked encoder BID46.]
The penalization weight on the difference of the two encoder weights is set to 1e−3. For a dense block in the feature extraction layer, the number of layers n is set to 8 and the growth rate g is set to 20.
The first scale-down ratio η in the feature extraction layer is set to 0.3 and the transition scale-down ratio θ is set to 0.5. The sequence length is set as a hard cutoff in all experiments: 48 for MultiNLI, 32 for SNLI and 24 for the Quora Question Pair dataset. During the experiments on MultiNLI, we use 15% of the data from SNLI, as in prior work. We select parameters based on the best run of development accuracy. Our ensembling approach takes the majority vote of the predictions given by multiple runs of the same model under different random parameter initializations. We compare our results with all other published systems in Table 2. Besides ESIM, the state-of-the-art model on SNLI, all other models appeared at the RepEval 2017 workshop. The RepEval 2017 workshop requires all submitted models to be sentence encoding-based, so alignment between sentences and memory modules are not eligible for the competition. All models except ours share one common feature: they use an LSTM as an essential building block of the encoder. Our approach, without using any recurrent structure, achieves a new state-of-the-art performance of 80.0%, exceeding the current state-of-the-art performance by more than 5%. Unlike previous observations, we find the out-of-domain test performance to be consistently lower than the in-domain test performance. Selecting parameters from the best in-domain development accuracy partially contributes to this result. In TAB4, we compare our model to the performance of other models on SNLI. The first group of experiments are sentence encoding-based models. Bowman et al. FORMULA0 provide a BiLSTM baseline. Another work adopts a two-layer GRU encoder with pre-trained "skip-thoughts" vectors. To capture sentence-level semantics, tree-based encoders have been used, and a stack-augmented parser-interpreter neural network (SPINN) has been proposed which incorporates parsing information in a sequential manner. Further works use intra-attention on top of a BiLSTM to generate the sentence representation, or propose a memory-augmented neural network to encode the sentence. The next group of models, experiments BID5 BID6 BID7 BID8 BID9 BID10 BID11 BID12 BID13, uses cross-sentence features. One approach aligns each sentence word-by-word with attention on top of LSTMs; another enforces cross-sentence word-by-word attention matching with the proposed mLSTM model. A long short-term memory-network (LSTMN) with deep attention fusion links the current word to previous words stored in memory. Another model decomposes the task into sub-problems and conquers them respectively. The neural tree indexer is a full n-ary tree whose subtrees can overlap. The Re-read LSTM considers the attention vector of one sentence as the inner state of the LSTM for another sentence. A sequential model that infers locally has also been proposed, together with an ensemble with a tree-like inference module that further improves performance. We show that our model, DIIN, achieves state-of-the-art performance on the competitive leaderboard.
[Table 4: Quora question dataset results. The first six rows are copied from prior work and the next two rows from BID56.]
In this subsection, we evaluate the effectiveness of our model for paraphrase identification as a natural language inference task. Other than our baselines, we compare with prior work and BID56. BIMPM models different perspectives of matching between the sentence pair in both directions, then aggregates the matching vectors with an LSTM. DECATT word and DECATT char use automatically collected in-domain paraphrase data to noisily pretrain n-gram word embeddings and n-gram subword embeddings, respectively, on the decomposable attention model.
In Table 4, our experiments show that DIIN has better performance than all other models, and the ensemble score is higher than the previous best result by more than 1 percent. Ablation Study We conduct an ablation study on our base model to examine the effectiveness of each component. We study our model on the MultiNLI dataset and use the matched validation score as the standard for model selection. The results are shown in Table 5. We study how the EM feature contributes to the system.
[Table 5: Ablation study results.]
After removing the exact match binary feature, we find the performance degrades to 78.2 on the matched score on the development set and 78.0 on the mismatched score. As observed in the reading comprehension task BID21, the simple exact match feature does help the model to better understand the sentences. In experiment 3, we remove the convolutional feature extractor, and the model is then structured as a sentence encoding-based model. The sentence representation matrix is max-pooled over time to obtain a feature vector. Once we have the feature vector p for the premise and h for the hypothesis, we use [p; h; |p − h|; p • h] as the final feature vector to classify the relationship. We obtain 73.2 for the matched score and 73.6 on the mismatched data. The result is competitive among other sentence encoding-based models. We further study how the encoding layer contributes to enriching the feature space of the interaction tensor. If we remove the encoding layer completely, we obtain 73.5 for the matched score and 73.2 for the mismatched score. The results demonstrate that the feature extraction layer has a powerful capability to capture semantic features. In experiment 5, we remove both self-attention and the fuse gate, thus retaining only the highway network. The result improves to 77.7 and 77.3 respectively on the matched and mismatched development sets. However, in experiment 6, when we only remove the fuse gate, to our surprise, the performance degrades to 73.5 for the matched score and 73.8 for the mismatched score. On the other hand, if we use the addition of the representation after the highway network and the representation after self-attention as the skip connection, as in experiment 7, the performance increases to 77.3 and 76.3. The comparison indicates that the self-attention layer makes the training harder to converge, while a skip connection can ease the gradient flow for both the highway layer and the self-attention layer. By comparing the base model and the model in experiment 6, we show that the fuse gate not only serves well as a skip connection, but also makes good decisions about which information to fuse from both representations. To show that the dense interaction tensor contains more semantic information, we replace the dense interaction tensor with the dot-product similarity matrix between the encoded representations of the premise and hypothesis. The result shows that the dot-product similarity matrix has an inferior capacity for semantic information. Another dimensionality study is provided in the supplementary material. In experiment 9, we share the encoding layer weights, and the results decrease from the baseline. This shows that the two sets of encoding weights learn the subtle differences between premise and hypothesis.
Error analysis To analyze the model predictions, we use the annotated subset of the development set, which consists of 1,000 examples each tagged with zero or more of the following tags:
• CONDITIONAL: whether the sentence contains a conditional.
• WORD OVERLAP: whether both sentences share more than 70% of their tokens.
• NEGATION: whether a negation shows up in either sentence.
• ANTO: whether the two sentences contain an antonym pair.
• LONG SENTENCE: whether the premise or hypothesis is longer than 30 or 16 tokens respectively.
• TENSE DIFFERENCE: whether any verb in the two sentences uses a different tense.
• ACTIVE/PASSIVE: whether there is an active-to-passive (or vice versa) transformation from the premise to the hypothesis.
• PARAPHRASE: whether the two sentences are close paraphrases.
• QUANTITY/TIME REASONING: whether understanding the pair requires quantity or time reasoning.
• COREF: whether the hypothesis contains a pronoun or referring expression that needs to be resolved using the premise.
• QUANTIFIER: whether either sentence contains one of the following quantifiers: much, enough, more, most, less, least, no, none, some, any, many, few, several, almost, nearly.
• MODAL: whether one of the following modal verbs appears in either sentence: can, could, may, might, must, will, would, should.
• BELIEF: whether one of the following belief verbs appears in either sentence: know, believe, understand, doubt, think, suppose, recognize, forget, remember, imagine, mean, agree, disagree, deny, promise.
[Table 6: MultiNLI error-analysis results.]
For more detailed descriptions, please refer to the original description of the annotations. The results are shown in Table 6. We find that DIIN is consistently better on sentence pairs with the WORD OVERLAP, ANTO, LONG SENTENCE, PARAPHRASE and BELIEF tags by a large margin. During the investigation, we hypothesized that the exact match feature helps the model to better understand paraphrases, so we study the results from the second ablation study where the exact match feature is not used. Surprisingly, the model without the exact match feature does not perform worse on PARAPHRASE; instead, the accuracy on ANTO drops by about 10%. DIIN also works well on LONG SENTENCE, partially because the receptive field is large enough to cover all tokens. Visualization We also visualize the hidden representation from the interaction tensor I and the feature map from the first dense block in Figure 2. We pick a sentence pair whose premise is "South Carolina has no referendum right, so the Supreme Court canceled the vote and upheld the ban." and whose hypothesis is "South Carolina has a referendum right, so the Supreme Court was powerless over the state.". The upper row of figures is sampled from the hidden representation of the interaction tensor I. We observe that the values of the neurons are highly correlated row-wise and column-wise in the interaction tensor I, and different channels of the hidden representation show different aspects of the interaction. Though in certain channels the same words ("referendum") or phrases ("supreme court") cause activation, different word or phrase pairs, such as "ban" and "powerless over", also cause activation in other channels. This shows the model's strong capacity for understanding text from different perspectives. The lower row of Figure 2 shows the feature map from the first dense block. After being convolved from the interaction tensor and the previous feature map, the new feature maps show activation in different positions, demonstrating that different semantic features are found. The first figure in the lower row has a similar pattern to a normal attention weight, whereas the others have no obvious pattern.
Different channels of the feature maps indicate different kinds of semantic features. Figure 2: A visualization of the hidden representations. The premise is "South Carolina has no referendum right, so the Supreme Court canceled the vote and upheld the ban." and the hypothesis is "South Carolina has a referendum right, so the Supreme Court was powerless over the state.". The upper row is sampled from the interaction tensor I and the lower row is sampled from the feature map of the first dense block. We use the viridis colormap, where yellow represents activation and purple shows that the neuron is not active. We show that the interaction tensor (or attention weight) contains the semantic information needed to understand natural language. We introduce the Interactive Inference Network, a novel class of architectures that allows a model to solve NLI or NLI-like tasks by extracting semantic features from the interaction tensor end-to-end. One instance of such an architecture, the Densely Interactive Inference Network (DIIN), achieves state-of-the-art performance on multiple datasets. By ablating each component in DIIN and changing the dimensionality, we show the effectiveness of each component in DIIN. Though we have made an initial exploration of natural language inference in interaction space, the full potential is not yet clear. We will keep exploring the potential of interaction space. Incorporating common-sense knowledge from external resources such as knowledge bases to leverage the capacity of the model is another research goal of ours.
We show that multi-channel attention weights contain semantic features for solving the natural language inference task.
363
scitldr
Determinantal Point Processes (DPPs) provide an elegant and versatile way to sample sets of items that balance the point-wise quality with the set-wise diversity of the selected items. For this reason, they have gained prominence in many machine learning applications that rely on subset selection. However, sampling from a DPP over a ground set of size N is a costly operation, requiring in general an O(N^3) preprocessing cost and an O(Nk^3) sampling cost for subsets of size k. We approach this problem by introducing DppNets: generative deep models that produce DPP-like samples for arbitrary ground sets. We develop an inhibitive attention mechanism based on transformer networks that captures a notion of dissimilarity between feature vectors. We show theoretically that such an approximation is sensible as it maintains the guarantees of inhibition or dissimilarity that make DPPs so powerful and unique. Empirically, we demonstrate that samples from our model receive high likelihood under the more expensive DPP alternative. Selecting a representative sample of data from a large pool of available candidates is an essential step of a large class of machine learning problems: noteworthy examples include automatic summarization, matrix approximation, and minibatch selection. Such problems require sampling schemes that calibrate the tradeoff between the point-wise quality -- e.g. the relevance of a sentence to a document summary -- of selected elements and the set-wise diversity 1 of the sampled set as a whole. Determinantal Point Processes (DPPs) are probabilistic models over subsets of a ground set that elegantly model the tradeoff between these often competing notions of quality and diversity. Given a ground set of size N, DPPs allow for O(N 3) sampling over all 2 N possible subsets of elements, assigning to any subset S of a ground set Y of elements the probability DISPLAYFORM0 where L ∈ R N ×N is the DPP kernel and L S = [L ij] i,j∈S denotes the principal submatrix of L indexed by items in S. Intuitively, DPPs measure the volume spanned by the feature embeddings of the items in feature space (Figure 1). First introduced by BID31 to model the distribution of possible states of fermions obeying the Pauli exclusion principle, the properties of DPPs have since been studied in depth BID19 BID6 (see e.g.). As DPPs capture repulsive forces between similar elements, they arise in many natural processes, such as the distribution of non-intersecting random walks BID22, spectra of random matrix ensembles BID37 BID13, and zero-crossings of polynomials with Gaussian coefficients BID20. More recently, DPPs have become a prominent tool in machine learning due to their elegance and tractability: recent applications include video recommendation BID10, minibatch selection BID46, and kernel approximation BID28 BID35. However, the O(N 3) sampling cost makes the practical application of DPPs intractable for large datasets, requiring additional work such as subsampling from Y, structured kernels (Gartrell et al., 2017; BID34), or approximate sampling methods BID2 BID27 BID0. Nonetheless, even such methods require significant pre-processing time, and scale poorly with the size of the dataset.
[Figure 1: Geometric intuition for DPPs: let φ i, φ j be two feature vectors of Φ such that the DPP kernel verifies L = ΦΦ T; then P L ({i, j}) ∝ Vol(φ i, φ j). Increasing the norm of a vector (quality) or increasing the angle between the vectors (diversity) increases the volume spanned by the vectors (BID25, Section 2.2.1).]
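To make the DPP distribution above concrete, the following small numpy sketch computes the exact log-probability of a subset S under a kernel L, using the standard normalizer det(L + I); this is an illustrative helper we supply here, not code from the paper.

import numpy as np

def dpp_log_prob(L, S):
    # P(S) = det(L_S) / det(L + I); work in log-space for numerical stability.
    _, logdet_norm = np.linalg.slogdet(L + np.eye(L.shape[0]))
    S = list(S)
    if not S:
        return -logdet_norm
    _, logdet_S = np.linalg.slogdet(L[np.ix_(S, S)])
    return logdet_S - logdet_norm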
Furthermore, when dealing with ground sets with variable components, pre-processing costs cannot be amortized, significantly impeding the application of DPPs in practice. These setbacks motivate us to investigate more scalable models that generate high-quality, diverse samples from datasets, with the flexibility to adapt to constantly changing datasets. Specifically, we use generative deep models to approximate the DPP distribution over a ground set of items with both fixed and variable feature representations. We show that a simple, carefully constructed neural network, DPPNET, can generate DPP-like samples with very little overhead, while maintaining fundamental theoretical properties of DPP measures. Furthermore, we show that DPPNETs can be trivially employed to sample from a conditional DPP (i.e. sampling S such that A ⊆ S is predefined) and for greedy mode approximation.
• We introduce DPPNET, a deep network trained to generate DPP-like samples based on static and variable ground sets of possible items.
• We derive theoretical conditions under which the DPPNETs inherit the DPP's log-submodularity.
• We show empirically that DPPNETs provide an accurate approximation to DPPs and drastically speed up DPP sampling.
DPPs belong to the class of Strongly Rayleigh (SR) measures; these measures benefit from the strongest characterization of negative association between similar items; as such, SR measures have benefited from significant interest in the mathematics community BID39 BID5 BID4 BID32 and more recently in machine learning BID3 BID27 BID35. This, combined with their tractability, makes DPPs a particularly attractive tool for subset selection in machine learning, and is one of the key motivations for our work. The application of DPPs to machine learning problems spans fields from document and video summarization BID14 BID9, recommender systems BID47 BID10 and object retrieval BID1 to kernel approximation BID28, neural network pruning BID33, and minibatch selection BID46. BID48 developed DPP priors for encouraging diversity in generative models and BID40 showed that DPPs accurately model inhibition in neural spiking data. In the general case, sampling exactly from a DPP requires an initial eigendecomposition of the kernel matrix L, incurring an O(N 3) cost. In order to avoid this time-consuming step, several approximate sampling methods have been derived; BID0 approximate the DPP kernel during sampling; more recently, BID2, followed by BID27, showed that DPPs are amenable to efficient MCMC-based sampling methods.
[Algorithm 1 pseudocode (ending in "return S") appears here.]
Exact methods that significantly speed up sampling by leveraging specific structure in the DPP kernel have also been developed BID34 BID12. Of particular interest is the dual sampling method introduced in BID25: if the DPP kernel can be composed as an inner product over a finite basis, i.e. there exists a feature matrix Φ ∈ R N ×D such that the DPP kernel is given by L = ΦΦ T, exact sampling can be done in DISPLAYFORM0. However, MCMC sampling requires a variable number of sampling rounds, which is unfavorable for parallelization; dual DPP sampling requires an explicit feature matrix Φ. Motivated by recent work on modeling set functions with neural networks BID45 BID11, we propose instead to generate approximate samples via a generative network; this allows for simple parallelization while simultaneously benefiting from recent improvements in specialized architectures for neural network models (e.g. parallelized matrix multiplications).
We furthermore show that, extending the abilities of dual DPP sampling, neural networks may take as input variable feature matrices Φ and sample from non-linear kernels L. In this section, we build up a framework that allows the O(N 3) computational cost associated with DPP sampling to be addressed via approximate sampling with a neural network. Given a positive semi-definite matrix L ∈ R N ×N, we take P L to represent the distribution modeled by a DPP with kernel L over the power set of DISPLAYFORM0. Although the elegant quality/diversity tradeoff modeled by DPPs is a key reason for their recent success in many different applications, they benefit from other properties that make them particularly well-suited to machine learning problems. We now focus on how these properties can be maintained with a deep generative model with the right architecture. Conditioning. DPPs conditioned on the inclusion of a subset of items remain DPPs BID7. Although conditioning comes at the cost of an expensive matrix inversion, this property makes DPPs well-suited to applications requiring diversity in conditioned sets, such as basket completion for recommender systems. Standard deep generative models such as (Variational) Auto-Encoders BID23 (VAEs) and Generative Adversarial Networks BID15 (GANs) would not enable simple conditioning operations during sampling. Instead, we develop a model that, given an input set S, returns a prediction vector v ∈ R N such that DISPLAYFORM0 where Y ∼ P L: in other words, v i is the marginal probability of item i being included in the final set, given that S is a subset of the final set. Mathematically, we can compute v i as DISPLAYFORM1 for i ∉ S BID25; for i ∈ S, we simply set v i = 0. With this architecture, we sample a set via Algorithm 1, which allows for trivial basket-completion type conditioning operations. Furthermore, Algorithm 1 can be modified to implement a greedy sampling algorithm without any additional cost. Log-submodularity. As mentioned above, DPPs are included in the larger class of Strongly Rayleigh (SR) measures over subsets. Although being SR is a delicate property, which is maintained by only a few operations BID5, log-submodularity 3 (which is implied by SR-ness) is more robust, as well as a fundamental property in discrete optimization BID44 BID8 BID16 and machine learning BID9. Crucially, we show in the following that (log-)submodularity can be inherited by a generative model trained on a log-submodular distribution: THEOREM 1. Let P be a strictly submodular function over subsets of Y, and Q be a function over the same space such that DISPLAYFORM2 where D TV indicates the total variational distance. Then Q is also submodular. COROLLARY 1.1. Let P L be a strictly log-submodular DPP over Y and DPPNET be a network trained on the DPP probabilities p(S), with a loss function of the form ||p − q|| where || · || is a norm and p ∈ R 2 N (resp. q) is the probability vector assigned by the DPP (resp. the DPPNET) to each subset of Y. Let α = max x ∞=1 1 x. If DPPNET converges to a loss smaller than DISPLAYFORM3, its generative distribution is log-submodular. The result follows directly from Thm. 3 and the equivalence of norms in finite-dimensional spaces. REMARK 1. Cor. 1.1 is generalizable to the KL divergence loss D KL (P || Q) via Pinsker's inequality. For this reason, we train our models by minimizing the distance between the predicted and target probabilities, rather than optimizing the log-likelihood of generative samples under the true DPP. Leveraging the sampling path. When drawing samples from a DPP, the standard DPP sampling algorithm (, Alg. 1)
generates the sample as a sequence, adding items one after the other until reaching a pre-determined size 4, similarly to Alg. 1. We take advantage of this by recording all intermediary subsets generated by the DPP when sampling training data: in practice, instead of training on n subsets of size k, we train on kn subsets of sizes 0,..., k − 1. Thus, our model is very much like an unrolled recurrent neural network. In the simplest setting, we may wish to draw many samples over a ground set with a fixed feature embedding. In this case, we wish to model a DPP with a fixed kernel via a generative neural network. Specifically, we consider a fixed DPP with kernel L and wish to obtain a generative model such that DISPLAYFORM0. More generally, in many cases we may care about sampling from a DPP over a ground set of items that varies: this may be the case for example with a pool of products that are available for sale at a given time, or social media posts with a relevance that varies based on context. To leverage the speed-up provided by dual DPP sampling, we can only sample from the DPP with kernel given by L = ΦΦ T; for more complex kernels, we once again incur the O(N 3) cost. Furthermore, training a static neural network for each new feature embedding may be too costly. Instead, we augment the static DPPNET to include the feature matrix Φ representing the ground set of all items as input to the network. Specifically, we draw inspiration from the dot-product attention introduced in BID41. In the original paper, the attention mechanism takes as input 3 matrices: the keys K, the values V, and the query Q. Attention is computed as DISPLAYFORM1 where d is the dimension of each query/key: the inner product acts as a proxy to the similarity between each query and each key. Finally, the reweighted value matrix AV is fed to the trainable neural network. DISPLAYFORM2
[Figure 2: Transformer network architecture for sampling from a variable ground set.]
Here, the feature representation of items in the input set S acts as the query Q ∈ R k×d; the feature representation Φ ∈ R N ×d of our ground set serves as both the keys and the values. In order for the attention mechanism to make sense in the framework of DPP modeling, we make two modifications to the attention in BID41:
• We want our network to attend to items that are dissimilar to the query (input subset): for each item i in the input subset S, we compute its pairwise dissimilarity to each item in Y as the vector DISPLAYFORM3
• Instead of returning this k × N matrix D of dissimilarities d i, we return a vector a ∈ R N in the probability simplex such that a j ∝ ∏ i∈S D ij. This allows us to have a fixed-size input to the neural network, and simultaneously enforces the desirable property that similarity to a single item is enough to disqualify an element from the ground set. Note that we could also return D in the form of an N × N matrix, but this would be counterproductive to speeding up DPP sampling.
Putting everything together, our attention vector a is computed via the inhibitive attention mechanism DISPLAYFORM4 where ∏ denotes the row-wise multiplication operator; this vector can be computed in O(kDN) time. The attention component of the neural network finally feeds the element-wise multiplication of each row of V with a to the feed-forward component. Given Φ and a subset S, the network is trained as in the static case to learn the marginal probabilities of adding any item in Y to S under a DPP with a kernel L dependent on Φ.
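A rough numpy sketch of the inhibitive attention just described follows. Since the exact dissimilarity formula is elided above, we assume one plausible instantiation, D ij = exp(−q i · k j / sqrt(d)) (so that high similarity yields low dissimilarity), and combine the rows of D multiplicatively before normalizing, as the text indicates; function and variable names are ours.

import numpy as np

def inhibitive_attention(S_feats, ground_feats):
    # S_feats: (k, d) features of the input subset; ground_feats: (N, d) features of Y.
    d = S_feats.shape[1]
    # Assumed dissimilarity: large dot products (similar items) give small values.
    D = np.exp(-(S_feats @ ground_feats.T) / np.sqrt(d))        # shape (k, N)
    # Similarity to a single subset item is enough to suppress a candidate.
    a = D.prod(axis=0)
    a /= a.sum()                                                 # probability simplex
    # Reweighted values fed to the feed-forward part of the network.
    return a[:, None] * ground_feats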
In practice, we set L to be an exponentiated quadratic kernel L ij = exp(−β ||φ i − φ j||²) constructed with the features φ i. REMARK 2. Dual sampling for DPPs as introduced in BID25 is efficient only when sampling from a DPP with kernel L = ΦΦ T; for non-linear kernels, a low-rank decomposition of L(Φ) must first be obtained, which in the worst case requires O(N 3) operations. In comparison, the dynamic DPPNET can be trained on any DPP kernel, while only requiring Φ as input. To evaluate DPPNET, we look at its performance both as a proxy for a static DPP (Section 4.1) and as a tool for generating diverse subsets of varying ground sets (Section 4.2). Our models are trained with TensorFlow, using the Adam optimizer. Hyperparameters are tuned to maximize the normalized log-likelihood of generated subsets. We compare DPPNET to DPP performance as well as two additional baselines:
• UNIF: uniform sampling over the ground set.
• k-MEDOIDS: the k-medoids clustering algorithm (, 14.3.10), applied to items in the ground set, with the distance between points computed with the same distance metric used by the DPP. In contrast to k-means, k-MED uses data points as the centers for each cluster.
We use the negative log-likelihood (NLL) of a subset under a DPP constructed over the ground set to evaluate the subsets obtained by all methods. This choice is motivated by the following considerations: (a) DPPs have become a standard way of measuring and enforcing diversity over subsets of data in machine learning, and (b) to the best of our knowledge, there is no other standard method to benchmark the diversity of a selected subset that depends on specific dataset encodings. We begin by analyzing the performance of a DPPNET trained on a DPP with a fixed kernel over the unit square. This is motivated by the need for diverse sampling methods on the unit hypercube, arising from e.g. quasi-Monte Carlo methods, Latin hypercube sampling BID36 and low-discrepancy sequences. The ground set consists of the 100 points lying at the intersections of the 10 × 10 grid on the unit square. The DPP is defined by setting its kernel L to L ij = exp(−||x i − x j||² / 2). As the DPP kernel is fixed, these experiments exclude the inhibitive attention mechanism. We report the performance of the different sampling methods in FIG2. Visually (FIG2) and quantitatively (FIG2), DPPNET improves significantly over all other baselines. The NLL of DPPNET samples is almost identical to that of true DPP samples. Furthermore, greedily sampling the mode from the DPPNET achieves a better NLL than DPP samples themselves. Numerical results are reported in TAB0. We evaluate the performance of DPPNETs on varying ground set sizes through the MNIST, CelebA BID30, and MovieLens BID17 datasets. For MNIST and CelebA, we generate feature representations of length 32 by training a Variational Auto-Encoder BID23 on the dataset 5; for MovieLens, we obtain a feature vector for each movie by applying nonnegative matrix factorization to the rating matrix, obtaining features of length 10. Experimental results presented below were obtained using feature representations computed from the test instances of each dataset. The DPPNET is trained on samples from DPPs with a linear kernel for MovieLens and with an exponentiated quadratic kernel for the image datasets.
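For reference, the exponentiated quadratic training kernel described above can be built directly from the feature vectors; a short numpy sketch follows (the broadcasting pattern and function name are ours, the formula L ij = exp(−β ||φ i − φ j||²) is from the text).

import numpy as np

def exponentiated_quadratic_kernel(Phi, beta):
    # Phi: (N, D) feature matrix; returns the N x N DPP kernel L.
    sq_dists = ((Phi[:, None, :] - Phi[None, :, :]) ** 2).sum(-1)
    return np.exp(-beta * sq_dists)

# e.g. beta = 0.0025 for the MNIST encodings and beta = 0.1 for CelebA (see below).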
Bandwidths were set to β = 0.0025 for MNIST and β = 0.1 for CelebA in order to obtain a DPP average sample size ≈ 20: recall that for a DPP with kernel L, the expected sample size is given by the formula DISPLAYFORM0. For MNIST, FIG3 shows images selected by the baselines and the DPPNET, chosen among 100 digits with either random labels or all identical labels; visually, DPPNET and DPP samples provide a wider coverage of writing styles. However, the NLL of samples from DPPNET decays significantly, whereas the DPPNET mode continues to maintain competitive performance with DPP samples. Numerical results for MNIST are reported in Table 2; in addition to the previous baselines, we also consider two further ways of generating subsets. INHIBATTN samples items from the multinomial distribution generated by the inhibitive attention mechanism only (without the subsequent neural network). NOATTN is a pure feed-forward neural network without attention; after hyper-parameter tuning, we found that the best architecture for this model consisted of 6 layers of 585 neurons each. Table 2 reveals that both the attention mechanism and the subsequent neural network are crucial to modeling DPP samples. Strikingly, DPPNET performs significantly better than other baselines even on feature matrices drawn from a single class of digits (Table 2), despite the training distribution over feature matrices being much less specialized. This implies that DPPNET sampling for dataset summarization may be leveraged to focus on sub-areas of datasets that are identified as areas of interest. Numerical results for CelebA and MovieLens are reported in TAB2, confirming the modeling ability of DPPNETs. Finally, we verify that DPPNET allows for significantly faster sampling by running DPP and DPPNET sampling for subsets of size 20 drawn from a ground set of size 100, with both a standard DPP and DPPNET (using the MNIST architecture). Both methods were implemented in graph-mode TensorFlow. Sampling batches of size 32, standard DPP sampling costs 2.74 ± 0.02 seconds; DPPNET sampling takes 0.10 ± 0.001 seconds, amounting to an almost 30-fold speed improvement.
[Table 2: NLL (mean ± standard error) under the true DPP of samples drawn uniformly, according to the mode of the DPPNET, and from the DPP itself. We sample subsets of size 20; for each class of digits we build 25 feature matrices Φ from encodings of those digits, and for each feature matrix we draw 25 different samples. Bolded numbers indicate the best-performing (non-DPP) sampling method. Rows report results for UNIF, MEDOIDS, INHIBATTN, NOATTN, DPPNET MODE and the DPP itself across digit classes.]
[Figure caption (Nyström experiment): comparison to the MCMC method of BID29 according to root mean squared error (RMSE) and wallclock time; we observe that subsets selected by DPPNET achieve comparable and lower RMSE than a DPP and the MCMC method respectively, while being significantly faster.]
As a final experiment, we evaluate DPPNET's performance on a downstream task for which DPPs have been shown to be useful: kernel reconstruction using the Nyström method BID38 BID43. Given a positive semidefinite matrix K ∈ R N ×N, the Nyström method builds the approximation DISPLAYFORM0 where K S † denotes the pseudoinverse of K S and K ·,S (resp. K S,·) is the submatrix of K formed by its rows (resp. columns) indexed by S. The Nyström method is a popular way to scale up kernel methods and has found many applications in machine learning (see e.g. (Bac; She; Fow; Tal)). Importantly, the approximation quality directly depends on the choice of the subset S. Recently, DPPs have been shown to be a competitive approach for selecting S BID35 BID28. Following the approach of BID28, we evaluate the quality of the kernel reconstruction by learning a regression kernel K on a training set, and reporting the prediction error on the test set using the Nyström-reconstructed kernel K̃. In addition to the full DPP, we also compare DPPNET to the MCMC sampling method with quadrature acceleration BID28. FIG4 reports our results on the Ailerons dataset 6 also used in BID28. We start with a ground set size of 1000 and compute the resulting root mean squared error (RMSE) of the regression using various-sized subsets selected by sampling from a DPP, by the MCMC method of BID29, by using the full ground set, and by DPPNET. FIG4 reports the runtimes for each method. We note that while all methods were run on CPU, DPPNet is more amenable to acceleration using GPUs. We introduced DPPNETs, generative networks trained on DPPs over static and varying ground sets, which enable fast and modular sampling in a wide variety of scenarios. We showed experimentally on several datasets and standard DPP applications that DPPNETs obtain competitive performance as evaluated in terms of NLLs, while being amenable to the extensive recent advances in speeding up computation for neural network architectures. Although we trained our models on DPPs with exponentiated quadratic and linear kernels, we can train on any kernel type built from a feature representation of the dataset. This is not the case for dual DPP exact sampling, which requires that the DPP kernel be L = ΦΦ T for faster sampling. DPPNETs are not exchangeable: that is, two sequences i 1,..., i k and σ(i 1),..., σ(i k), where σ is a permutation of [k], which represent the same set of items, will not in general have the same probability under a DPPNET. Exchangeability can be enforced by leveraging previous work BID45; however, non-exchangeability can be an asset when sampling a ranking of items. Our models are trained to take as input a fixed-size subset representation; we aim to investigate the ability to take a variable-length encoding as input in future work. The scaling of the DPPNET's complexity with the ground set size also remains an open question. However, standard tricks to enforce fixed-size ground sets, such as sub-sampling from the dataset, may be applied to DPPNETs. Similarly, if further speedups are necessary, sub-sampling from the ground set - a standard approach for DPP sampling over very large set sizes - can be combined with DPPNET sampling. In light of our results on dataset sampling, the question of whether encoders can be trained to produce encodings conducive to dataset summarization via DPPNETs seems of particular interest. Assuming knowledge of the (encoding-independent) relative diversity of a large quantity of subsets, end-to-end training of the encoder and the DPPNET simultaneously may yield interesting results.
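Returning to the Nyström reconstruction described at the start of this experiment, the approximation can be written in a few lines of numpy; this is an illustrative sketch of the standard construction, not the paper's code, and the variable names are ours.

import numpy as np

def nystrom_reconstruction(K, S):
    # K: (N, N) positive semidefinite kernel; S: list of landmark indices, e.g. chosen by a DPP.
    S = list(S)
    C = K[:, S]                        # N x |S| columns indexed by S
    W = K[np.ix_(S, S)]                # |S| x |S| principal submatrix K_S
    return C @ np.linalg.pinv(W) @ C.T # reconstructed kernel K~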
Finally, although Corollary 1.1 shows the log-submodularity of the DPP can be transferred to a generative model, understanding which additional properties of training distributions may be conserved through careful training remains an open question, which we believe to be of high significance to the machine learning community in general.
A MAINTAINING LOG-SUBMODULARITY IN THE GENERATIVE MODEL
THEOREM 2. Let p be a strictly submodular distribution over subsets of a ground set Y, and q be a distribution over the same space such that DISPLAYFORM0 Then q is also submodular. Proof. In all the following, we assume that S, T are subsets of a ground set Y such that S ≠ T and S, T ∉ {∅, Y} (the inequalities being immediate in these corner cases).
For the MNIST encodings, the VAE encoder consists of a 2d-convolutional layer with 64 filters of height and width 4 and strides of 2, followed by a 2d convolution layer with 128 filters (same height, width and strides), and then a dense layer of 1024 neurons. The encodings are of length 32. CelebA encodings were generated by a VAE using a Wide Residual Network BID44 encoder with 10 layers and filter-multiplier k = 4, a latent space of 32 full-covariance Gaussians, and a deconvolutional decoder trained end-to-end using an ELBO loss. In detail, the decoder architecture consists of a 16K dense layer followed by a sequence of 4 × 4 convolutions with filters interleaved with 2× upsampling layers and a final 6 × 6 convolution with 3 output channels for each of 5 components in a mixture of quantized logistic distributions representing the decoded image.
We approximate Determinantal Point Processes with neural nets; we justify our model theoretically and empirically.
364
scitldr
This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature-metric bundle adjustment (BA), which explicitly enforces multi-view geometry constraints in the form of feature-metric error. The whole pipeline is differentiable, so that the network can learn suitable features that make the BA problem more tractable. Furthermore, this work introduces a novel depth parameterization to recover dense per-pixel depth. The network first generates several basis depth maps according to the input image, and optimizes the final depth as a linear combination of these basis depth maps via feature-metric BA. The basis depth maps generator is also learned via end-to-end training. The whole system nicely combines domain knowledge (i.e. hard-coded multi-view geometry constraints) and deep learning (i.e. feature learning and basis depth maps learning) to address the challenging dense SfM problem. Experiments on large-scale real data prove the success of the proposed method. The Structure-from-Motion (SfM) problem has been extensively studied in the past few decades. Almost all conventional SfM algorithms BID46 BID39 BID16 BID13 jointly optimize scene structures and camera motion via the Bundle-Adjustment (BA) algorithm BID43 BID1, which minimizes the geometric BID46 BID39 or photometric BID17 BID13 error through the Levenberg-Marquardt (LM) algorithm BID35. Some recent works BID44 attempt to solve SfM using deep learning techniques, but most of them do not enforce the geometric constraints between 3D structures and camera motion in their networks. For example, in the recent work DeMoN BID44, the scene depths and the camera motion are estimated by two individual sub-network branches. This paper formulates BA as a differentiable layer, the BA-Layer, to bridge the gap between classic methods and recent deep learning based approaches. To this end, we learn a feed-forward multilayer perceptron (MLP) to predict the damping factor in the LM algorithm, which makes all involved computation differentiable. Furthermore, unlike conventional BA that minimizes geometric or photometric error, our BA-Layer minimizes the distance between aligned CNN feature maps. Our novel feature-metric BA takes CNN features of multiple images as inputs and optimizes for the scene structures and camera motion. This feature-metric BA is desirable, because it has been observed by BID17 that the geometric BA does not exploit all image information, while the photometric BA is sensitive to moving objects, exposure or white balance changes, etc. Most importantly, our BA-Layer can back-propagate loss from scene structures and camera motion to learn appropriate features that are most suitable for structure-from-motion and bundle adjustment. In this way, our network hard-codes the multi-view geometry constraints in the BA-Layer and learns suitable feature representations from training data. We strive to estimate a dense per-pixel depth, because dense depth is critical for many tasks such as object detection and robot navigation. A major challenge in solving dense per-pixel depth is to find a compact parameterization. Direct per-pixel depth is computationally expensive, which makes the network training intractable.
So we train a network to generate a set of basis depth maps for an arbitrary input image and represent the final depth map as a linear combination of these basis depth maps. 2 RELATED WORK Monocular Depth Estimation Networks Estimating depth from a monocular image is an ill-posed problem, because an infinite number of possible scenes may have produced the same image. Before the rise of deep learning based methods, some works predicted depth from a single image based on MRFs BID37 BID36, semantic segmentation BID29, or manually designed features BID27. BID15 propose a multi-scale approach for depth prediction with two CNNs, where a coarse-scale network first predicts the scene depth at the global level and a fine-scale network then refines local regions. This approach was extended in BID14 to handle semantic segmentation and surface normal estimation as well. Recently, BID30 propose a ResNet BID24 based structure to predict depth, and BID47 construct multi-scale CRFs for depth prediction. In comparison, we exploit a monocular depth estimation network only for depth parameterization: it produces a set of basis depth maps, and the final depth is further improved through optimization. Structure-from-Motion Networks Recently, some works exploit CNNs to solve the SfM problem. BID22 solve for the camera motion with a network from a pair of images with known depth. Other works employ two CNNs for depth and camera motion estimation respectively, where both CNNs are trained jointly by minimizing the photometric loss in an unsupervised manner. Yet another line of work implements the direct method BID40 as a differentiable component to compute the camera motion after the scene depth has been estimated. In BID44, the scene depth and the camera motion are predicted from optical flow features, which helps the model generalize better to unseen data. However, the scene depth and the camera motion are solved by two separate network branches, so the multi-view geometry constraints between depth and motion are not enforced. Recently, solving the nonlinear least squares in two-view SfM with an LSTM-RNN BID26 optimizer has also been proposed. Our method belongs to this category. Unlike all previous works, we propose the BA-Layer to simultaneously predict the scene depth and the camera motion from CNN features, which explicitly enforces multi-view geometry constraints. The hard-coded multi-view geometry constraints enable our method to reconstruct more than two images, while most deep learning methods can only handle two images. Furthermore, we propose to minimize a feature-metric error instead of the photometric error to enhance robustness. Before introducing our BA-Net architecture, we revisit the classic BA to better understand where the difficulties are and why feature-metric BA and feature learning are desirable. We only introduce the most relevant content and refer the readers to BID43 and BID1 for a comprehensive introduction. Given images I = {I_i | i = 1 · · · N_i}, the geometric BA BID43 BID1 jointly optimizes camera poses T = {T_i | i = 1 · · · N_i} and 3D scene point coordinates P = {p_j | j = 1 · · · N_j} by minimizing the re-projection error: DISPLAYFORM0 where the geometric distance e^g_{i,j}(X) = π(T_i, p_j) − q_{i,j} measures the difference between a projected scene point and its corresponding feature point. The function π projects scene points to image space, q_{i,j} = [x_{i,j}, y_{i,j}, 1] is the normalized homogeneous pixel coordinate, and DISPLAYFORM1 contains all the points' and the cameras' parameters.
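To make the re-projection error just defined concrete, the following is a minimal numerical sketch of e^g_{i,j}(X) for a single camera and point. Camera intrinsics are folded into the normalized coordinates for brevity, and the rotation/translation representation is an assumption of the sketch, not a detail taken from the paper.

import numpy as np

def project(R, t, p):
    # pi(T, p): transform a 3D point into the camera frame and divide by depth,
    # yielding a normalized homogeneous coordinate [x, y, 1]
    p_cam = R @ p + t
    return p_cam / p_cam[2]

def reprojection_residual(R, t, p, q):
    # e^g_{i,j}(X) = pi(T_i, p_j) - q_{i,j}; only the first two entries can be non-zero
    return (project(R, t, p) - q)[:2]

# toy check: identity camera, a point at depth 4 that projects exactly onto its observation
R, t = np.eye(3), np.zeros(3)
p = np.array([0.4, -0.2, 4.0])
q = np.array([0.1, -0.05, 1.0])
print(reprojection_residual(R, t, p, q))  # [0. 0.]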
The general strategy to minimize this objective is the Levenberg-Marquardt (LM) algorithm BID35 BID32. At each iteration, the LM algorithm solves for an optimal update ∆X* to the solution by minimizing: DISPLAYFORM2 Here, E(X) = [e_{1,1}(X), · · ·, e_{N_i,N_j}(X)] stacks all residuals, J(X) is the Jacobian matrix of E(X) with respect to X, and D(X) is a non-negative diagonal matrix, typically the square root of the diagonal of the approximated Hessian J(X)ᵀJ(X). The non-negative value λ controls the regularization strength. The special structure of J(X)ᵀJ(X) motivates the use of the Schur complement BID6. This geometric BA with re-projection error has been the gold standard for structure-from-motion over the last two decades, but it has two main drawbacks: • Only image information conforming to the respective feature types, typically image corners, blobs, or line segments, is utilized. • Features have to be matched to each other, which often results in many outliers. Outlier rejection such as RANSAC is necessary, and it still cannot guarantee correct results. These two difficulties motivate the recent development of direct methods BID17 BID13, which propose the photometric BA algorithm to eliminate feature matching and directly minimize the photometric error (pixel intensity difference) of aligned pixels. The photometric error is defined as: DISPLAYFORM4 where d_j ∈ D = {d_j | j = 1 · · · N_j} is the depth of a pixel q_j in the image I_1, and d_j · q_j upgrades the pixel q_j to its 3D coordinates. Thus, the optimization parameter is DISPLAYFORM5. The direct methods have the advantage of using all pixels with sufficient gradient magnitude. They have demonstrated superior performance, especially in less textured scenes. However, these methods also have some drawbacks: • They are sensitive to initialization, as demonstrated in BID33 and BID41, because the photometric error increases the non-convexity BID16. • They are sensitive to camera exposure and white balance changes. An automatic photometric calibration is required BID16. • They are more sensitive to outliers such as moving objects. To deal with the above challenges, we propose a feature-metric BA algorithm which estimates the same scene depth and camera motion parameters X as in photometric BA, but minimizes the feature-metric difference of aligned pixels: DISPLAYFORM0 where the residuals compare learned CNN feature maps (defined in Section 4.2) rather than raw pixel intensities. The BA-Net architecture uses DRN-54 BID49 as the backbone network, together with a Basis Depth Maps Generator that generates a set of basis depth maps, a Feature Pyramid Constructor that constructs multi-scale feature maps, and a BA-Layer that optimizes both the depth map and the camera poses through a novel differentiable LM algorithm. DISPLAYFORM1 We learn features suitable for SfM via back-propagation, instead of using pre-trained CNN features for image classification BID10. Therefore, it is crucial to design a differentiable optimization layer, our BA-Layer, to solve the optimization problem, so that the loss information can be back-propagated. The BA-Layer predicts the camera poses T and the dense depth map D during the forward pass and back-propagates the loss from T and D to the feature pyramids F for training. As illustrated in FIG0, our BA-Net receives multiple images and then feeds them to the backbone DRN-54. We use DRN-54 BID49 because it replaces max-pooling with convolution layers and generates smoother feature maps, which is desirable for BA optimization. Note that the original DRN is memory inefficient due to the high resolution feature maps after dilated convolutions. We replace the dilated convolution with ordinary convolution with strides to address this issue.
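Before moving on to the feature pyramid, a small numerical sketch of the damped LM update discussed above may help. It solves (JᵀJ + λ·diag(JᵀJ)) ΔX = −JᵀE on a toy linear least-squares problem; the fixed λ and the dense solve are simplifications of this sketch (BA-Net predicts λ with an MLP and exploits the sparse structure via the Schur complement).

import numpy as np

def lm_step(J, E, lam):
    # One Levenberg-Marquardt update: solve (J^T J + lam * diag(J^T J)) dX = -J^T E
    H = J.T @ J                      # approximated Hessian
    D2 = np.diag(np.diag(H))         # D(X)^T D(X), the diagonal of the Hessian
    return np.linalg.solve(H + lam * D2, -J.T @ E)

# toy residual E(x) = A x - b, so the Jacobian is simply A
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(2)
for _ in range(10):
    x = x + lm_step(A, A @ x - b, lam=0.1)
print(x)  # approaches the least-squares solution of A x = b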
After DRN-54, a feature pyramid is then constructed for each input image, which are the inputs for the BA-Layer. At the same time, the basis depth maps generator generates multiple basis depth maps for the image I 1, and the final depth map is represented as a linear combination of these basis depth maps. Finally, the BA-Layer optimizes for the camera poses and the dense depth map jointly by minimizing the feature-metric error defined in Equation FORMULA6, which makes the whole pipeline end-to-end trainable. The feature pyramid learns suitable features for the BA-Layer. Similar to the feature pyramid networks (FPN) for object detection BID31, we exploit the inherent multi-scale hierarchy of deep convolutional networks to construct feature pyramids. A top-down architecture with lateral connections is applied to propagate richer context information from coarser scales to finer scales. Thus, our feature-metric BA will have a larger convergence radius. As shown in Figure 2 (a), we construct a feature pyramid from the backbone DRN-54. We denote the last residual blocks of conv1, conv2, conv3, conv4 in DRN-54 as {C 1, C 2, C 3, C 4}, with strides {1, 2, 4, 8} respectively. We upsample a feature map C k+1 by a factor of 2 with bilinear interpolation and concatenate the upsampled feature map with C k in the next level. This procedure is iterated until the finest level. Finally, we apply a 3 × 3 convolution on the concatenated feature maps to reduce its dimensionality to 128 to balance the expressiveness and computational complexity, which leads to the final feature pyramid DISPLAYFORM0 We visualize some typical channels from the raw image I (i.e. the RGB channels), the pre-trained DRN-54 C 3 and our learned F 3 in Figure 2 (b). It is evident that, after training with our BA-Layer, the feature pyramid becomes smoother and each channel correspondences to different regions in the image. Note that our feature pyramids have higher resolution than FPN to facilitate precise alignment. To have a better intuition about how much the BA optimization benefits from our learned features, we visualize different distances in Figure 3. We evaluate the distance between a pixel marked by a yellow cross in the top image in Figure 3 (a) and all pixels in a neighbourhood of its corresponding point in the bottom image of Figure 3 (a). The distances evaluated from raw RGB values, pretrained feature C 3, and our learned feature F 3 are visualized in (b), (c), and (d) respectively. All distances are normalized to and visualized as heat maps. The x-axis and y-axis are the offsets to the ground-truth corresponding point. The RGB distance in (b) (i.e. e p in Equation FORMULA4) has no clear global minimum, which makes the photometric BA sensitive to initialization BID17. The distance measured by the pretrained feature C 3 has both global and local minimums. Finally, the distance measured by our learned feature F 3 has a clear global minimum and smooth basin, which is helpful in gradient based optimization such as the LM algorithm. After building feature pyramids for all images, we optimize camera poses and a dense depth map by minimizing the feature-metric error in Equation. Following the conventional Bundle Adjustment principle, we optimize Equation using the Levenberg-Marquardt (LM) algorithm. However, the original LM algorithm is non-differentiable because of two difficulties:• The iterative computation terminates when a specified convergence threshold is reached. 
This if-else based termination strategy makes the output solution X non-differentiable with respect to the input F. • In each iteration, it updates the damping factor λ based on the current value of the objective function. It raises λ if a step fails to reduce the objective; otherwise it reduces λ. This if-else decision also makes X non-differentiable with respect to F. When the solution X is non-differentiable with respect to F, feature learning by back-propagation becomes impossible. The first difficulty has been studied before, where the proposed remedy is to fix the number of iterations, which is referred to as 'incomplete optimization'. Besides making the optimization differentiable, this 'incomplete optimization' technique also reduces memory consumption, because the number of iterations is usually fixed at a small value. The second difficulty has never been studied; previous works mainly focus on gradient descent or quadratic minimization BID3 BID38. In this section, we propose a simple yet effective approach to soften the if-else decision and yield a differentiable LM algorithm. We send the current objective value to an MLP network to predict λ. This technique not only makes the optimization differentiable, but also learns to predict a better damping factor λ, which helps the optimization reach a better solution within limited iterations. To start with, we illustrate a single iteration of the LM optimization as a diagram in Figure 4 by interpreting intermediate variables as network nodes. During the forward pass, we compute the solution update ∆X from the feature pyramids F and the current solution X in the following steps: • We compute the feature-metric error E(X); • We then compute the Jacobian matrix J(X), the approximated Hessian matrix J(X)ᵀJ(X), and its diagonal matrix D(X); • To predict the damping factor λ, we use global average pooling to aggregate the absolute value of E(X) over all pixels for each feature channel, yielding a 128D feature vector. We then send it to an MLP sub-network to predict λ; • Finally, the update ∆X to the current solution is computed as a standard LM step: DISPLAYFORM0 DISPLAYFORM1 In this way, we can consider λ as an intermediate variable and denote each LM step as a function g of the feature pyramids F and the solution X from the previous iteration. In other words, ∆X = g(X; F). Therefore, the solution after the k-th iteration is: DISPLAYFORM2 Here, • denotes parameter updating, which is addition for depth and the SE(3) exponential map for camera poses. This expression is differentiable with respect to the feature pyramids F, which makes back-propagation possible through the whole pipeline for feature learning. The MLP that predicts λ is also shown in Figure 4. We stack four fully-connected layers to predict λ from the input 128D vector. We use ReLU as the activation function to guarantee that λ is non-negative. Following the photometric BA BID17, we solve our feature-metric BA using a coarse-to-fine strategy with feature map warping at each iteration. We apply the differentiable LM algorithm for 5 iterations at each pyramid level, leading to 15 iterations in total. All the camera poses are initialized with identity rotation and zero translation, and the initialization of the depth map will be introduced in Section 4.4. Parameterizing a dense depth map by a per-pixel depth value is impractical under our formulation. Firstly, it introduces too many parameters for optimization. For example, an image of 320 × 240 pixels results in 76.8k parameters.
Secondly, at the beginning of training, many pixels become invisible in the other views because of the poorly predicted depth or motion, so little information can be back-propagated to improve the network, which makes training difficult. To deal with these problems, we use a convolutional network for monocular image depth estimation as a compact parameterization, rather than using it as an initialization as in BID42 and BID48. We use a standard encoder-decoder architecture for monocular depth learning as in BID30. We use DRN-54 as the encoder to share the same backbone features with our feature pyramids. For the decoder, we modify the last convolutional feature maps of BID30 to 128 channels and use these feature maps as the basis depth maps for optimization. The final depth map is generated as the linear combination of these basis depth maps: DISPLAYFORM0 Here, D is the h · w depth map that contains depth values for all pixels, B is a 128 × (h · w) matrix representing the 128 basis depth maps generated from the network, and w contains the linear combination weights of these basis depth maps. The weights w are optimized in our BA-Layer. The ReLU activation function guarantees that the final depth is non-negative. Once B is generated from the network, we fix B and use w as a compact depth parameterization in the BA optimization, and the feature-metric distance becomes: DISPLAYFORM1 where B[j] is the j-th column of B, and ReLU(w B[j]) is the corresponding depth of q_j. To further speed up convergence, we learn the initial weight w_0 as a 1D convolution filter for an arbitrary image, i.e. D_0 = ReLU(w_0 B). The basis depth maps B of various images are visualized in the appendix. The BA-Net learns the feature pyramid, the damping factor predictor, and the basis depth maps generator in a supervised manner. We apply the following commonly used losses for training, though more sophisticated ones might be designed. The camera rotation loss is the distance between rotation quaternion vectors, L_rotation = ‖q − q*‖. Similarly, the translation loss is the Euclidean distance between prediction and ground truth in metric scale, L_translation = ‖t − t*‖. For each dense depth map we apply the berHu loss BID51 as in BID30. We initialize the backbone network from DRN-54 BID49, and the other components are trained with ADAM from scratch with an initial learning rate of 0.001; the learning rate is divided by two when we observe plateaus in the Tensorboard interface. ScanNet ScanNet BID11) is a large-scale indoor dataset with 1,513 sequences in 706 different scenes. Camera poses and depth maps are not perfect, because they are estimated via BundleFusion BID12. The metric scale is known in all data from ScanNet, because the data are recorded with a depth camera which returns absolute depth values. To sample image pairs for training, we apply a simple filtering process. We first filter out pairs with a large photo-consistency error, to avoid image pairs with large pose or depth error. We also filter out image pairs if less than 50% of the pixels from one image are visible in the other image. In addition, we discard a pair if its roundness score BID4) is less than 0.001, which avoids pairs with too narrow baselines. We split the whole dataset into training and testing sets. The training set contains the first 1,413 sequences and the testing set contains the remaining 100 sequences. We sample 547,991 training pairs and 2,000 testing pairs from the training and testing sequences respectively.
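The supervision just described is easy to state in code. The quaternion and translation distances follow the definitions above; the berHu threshold c = 0.2 · max residual follows the usual convention for this loss and is an assumption here rather than a value quoted from the paper.

import numpy as np

def rotation_loss(q_pred, q_gt):
    return np.linalg.norm(q_pred - q_gt)          # L_rotation = ||q - q*||

def translation_loss(t_pred, t_gt):
    return np.linalg.norm(t_pred - t_gt)          # L_translation = ||t - t*||, metric scale

def berhu_loss(d_pred, d_gt):
    # reverse Huber: L1 for small residuals, quadratic for large ones
    x = np.abs(d_pred - d_gt)
    c = max(0.2 * x.max(), 1e-6)                  # threshold choice is an assumption of this sketch
    return np.where(x <= c, x, (x ** 2 + c ** 2) / (2 * c)).mean()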
KITTI KITTI BID20) is a widely used benchmark dataset collected by car-mounted cameras and a LIDAR sensor on streets. It contains 61 scenes belonging to the "city", "residential", or "road" categories. BID15 select 28 scenes for testing and 28 scenes from the remaining for training. We use the same data split to make a fair comparison with previous methods. Since ground truth pose is unavailable from the raw KITTI dataset, we compute camera poses with LibVISO2 BID19 and take them as ground truth after discarding poses with large errors. ScanNet To evaluate the quality of the results, we use the depth error metrics suggested in BID14, where RMSE (linear, log, and log scale-invariant) measure the RMSE of the raw, the logarithmic, and the aligned logarithmic depth values, while the other two metrics measure the mean of the ratios that divide the absolute and squared error by the ground truth depth. The errors in camera poses are measured by the rotation error (the angle between the ground truth and the estimated camera rotations), the translation direction error (the angle between the ground truth and estimated camera translation directions), and the absolute position error (the distance between the ground truth and the estimated camera translation vectors). Table 1: Quantitative comparisons with DeMoN and classic BA. The superscript * denotes that the model is trained on the training set described in BID44. In Table 1, we compare our method with DeMoN BID44 and the conventional photometric and geometric BA. Note that we could not obtain a DeMoN model trained on ScanNet. For a fair comparison, we train our network on the same training data as DeMoN and test both networks on our testing data. We also show the results of our network trained on ScanNet. Our BA-Net consistently performs better than DeMoN no matter which training data is used. Since DeMoN does not recover the absolute scale, we align its depth map with the ground truth to recover its metric scale for evaluation. We further compare with conventional geometric BID34 and photometric BID17 BA. Again, our method produces better results. The geometric BA works poorly here, because feature matching is difficult in indoor scenes; even the RANSAC process cannot get rid of all outliers. For photometric BA, the highly non-convex objective function is difficult to optimize, as described in Section 3. KITTI We use the same metrics as in the comparisons on ScanNet for depth evaluation. To evaluate the camera poses, we follow prior work and use the Absolute Trajectory Error (ATE), which measures the Euclidean differences between two trajectories BID40, on the 9th and 10th sequences from the KITTI odometry data. In this experiment, we create short sequences of 5 frames by first computing 5 two-view reconstructions from our BA-Net and then aligning the two-view reconstructions in the coordinate system anchored at the first frame. Table 2: Quantitative comparisons on KITTI with supervised BID15 and unsupervised BID21 methods. Table 2 summarizes our results on KITTI. Our method outperforms the supervised methods BID15 as well as recent unsupervised methods BID21. Our method also achieves more accurate camera trajectories than the compared methods. We believe this is due to our feature-metric BA with features learned specifically for the SfM problem, which makes the objective function closer to convex and easier to optimize, as discussed in Section 4.2. In comparison, the competing methods minimize the photometric error.
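For reference, the depth error metrics used in these comparisons can be computed as below. The formulas follow the standard Eigen-style definitions these metric names usually refer to; exact conventions (e.g. masking of invalid pixels) are assumptions of this sketch.

import numpy as np

def depth_metrics(pred, gt):
    pred, gt = pred.ravel(), gt.ravel()
    log_diff = np.log(pred) - np.log(gt)
    return {
        "RMSE (linear)": np.sqrt(np.mean((pred - gt) ** 2)),
        "RMSE (log)": np.sqrt(np.mean(log_diff ** 2)),
        "RMSE (log, scale inv.)": np.sqrt(np.mean(log_diff ** 2) - np.mean(log_diff) ** 2),
        "abs rel": np.mean(np.abs(pred - gt) / gt),
        "sq rel": np.mean((pred - gt) ** 2 / gt),
    }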
More comparisons with DeMoN, ablation studies, and multi-view SfM results (up to 5 views) are reported in the appendix due to the page limit. This paper presents the BA-Net, a network that explicitly enforces multi-view geometry constraints in terms of feature-metric error. It optimizes scene depths and camera motion jointly via feature-metric bundle adjustment. The whole pipeline is differentiable and thus end-to-end trainable, such that the features are learned from data to facilitate structure-from-motion. The dense depth is parameterized as a linear combination of several basis depth maps generated from the network. Our BA-Net nicely combines domain knowledge (hard-coded multi-view geometry constraints) with deep learning (learned feature representation and basis depth maps generator). It outperforms conventional BA and recent deep learning based methods. DISPLAYFORM0 FIG4 illustrates the detailed network architectures for the backbone DRN-54 and the depth basis generator. The architecture of the feature pyramid has been provided in Figure 2(a). We modify the dilated convolutions of the original DRN-54 to convolutions with strides and discard conv7 and conv8, as shown in FIG4 (a). C_1 to C_6 are layers with {1,2,4,8,16,32} strides and {16,32,256,512,1024,2048} channels, where C_1 and C_2 are basic convolution layers, while C_3 to C_6 are standard bottleneck blocks as in ResNet BID24. Figure 5(b) visualizes our depth basis generator, which adopts the up-projection structure proposed in BID30. The depth basis generator is a standard decoder that takes the output of C_6 as input and stacks five up-projection blocks to generate 128 basis depth maps; each of the basis depth maps is half the resolution of the input image. The up-projection block is shown on the right of FIG4 (b); it upsamples the input by 2× and then applies convolutions with a projection connection. Evaluation Time To evaluate the running time of our method, we use the Tensorflow profiler tool to retrieve the time in ms for all network nodes and then summarize the time corresponding to each component in our pipeline. As shown in TAB5, our method takes 95.21 ms to reconstruct two 320 × 240 images, which is slightly faster than DeMoN, which takes 110 ms for two 256 × 192 images. The current computation bottleneck is the BA-Layer, which contains a large number of matrix operations and can be further sped up by a direct CUDA implementation. Since we explicitly hard-code the multi-view geometry constraints in the BA-Layer, it is possible to share the backbone DRN-54 with other high-level vision tasks, such as semantic segmentation and object detection, to maximize reuse of network structures and minimize extra computation cost. As shown in TAB6, the pre-trained features (i.e. w/o Feature Learning) produce larger errors. This supports the discussion in Section 4.2. Bundle Adjustment Optimization vs SE(3) Pose Estimation Our BA-Layer optimizes depth and camera poses jointly. We compare it to SE(3) camera pose estimation with a fixed depth map (e.g. the initial depth D_0 in Section 4.4); a similar strategy is adopted in prior work. To make a fair comparison, we also use our learned feature pyramids for the SE(3) camera pose estimation. As shown in TAB6, without BA optimization (i.e. w/o Joint Optimization), both the depth maps and camera poses are worse, because the errors in the depth estimation degrade the camera pose estimation.
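To make the depth parameterization used throughout these ablations concrete, the basis-depth-map combination D = ReLU(wB) amounts to the following. The shapes follow the text (128 basis maps, weight vector w); the random values are only placeholders for the generator's output.

import numpy as np

def depth_from_basis(w, B, h, w_px):
    # D = ReLU(w B): a 128-d weight vector linearly combines the 128 basis depth maps;
    # ReLU keeps the final depth non-negative
    return np.maximum(w @ B, 0.0).reshape(h, w_px)

h, w_px = 240, 320
B = np.random.rand(128, h * w_px)        # stand-in for the generator's basis depth maps
w0 = 0.1 * np.random.randn(128)          # in BA-Net w0 is learned; random here for illustration
print(depth_from_basis(w0, B, h, w_px).shape)   # (240, 320); only the 128 weights are optimized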
Differentiable Levenberg-Marquardt vs Gauss-Newton To make the whole pipeline end-to-end trainable, we make the Levenberg-Marquardt algorithm differentiable by learning the damping factor from the network. We first compare our method against vanilla Gauss-Newton without the damping factor λ (i.e. λ = 0). Since the objective function of feature-metric BA is non-convex, the Hessian matrix J(X)ᵀJ(X) might not be positive definite, which makes the matrix inversion by Cholesky decomposition fail. To deal with this problem, we use QR decomposition instead for training with Gauss-Newton. As shown in TAB6, the Gauss-Newton algorithm (i.e. w/o λ) generates much larger errors, because the BA optimization is non-convex and the Gauss-Newton algorithm has no guaranteed convergence unless the initial solution is sufficiently close to the optimum BID35. This comparison reveals that, similar to conventional BA, our differentiable Levenberg-Marquardt algorithm is superior to the Gauss-Newton algorithm for feature-metric BA. Predicted vs Constant λ Another way to make the Levenberg-Marquardt algorithm differentiable is to fix λ during the iterations. We compare with this strategy. As shown in FIG6 (a), increasing λ makes both the rotation and translation errors decrease until λ = 0.5, after which they increase. The reason is that a small λ makes the algorithm close to the Gauss-Newton algorithm, which has convergence issues, while a large λ leads to a small update at each iteration, which makes it difficult to reach a good solution within limited iterations. In FIG6 (b), increasing λ always makes the depth errors decrease, probably because a larger λ leads to a smaller update and keeps the final depth close to the initial depth, which is better than the one optimized with a small constant λ. Using a constant λ consistently generates worse results than using a λ predicted by the MLP network, because there is no single optimal λ for all data and it should be adapted to different data and different iterations. We draw the errors of our method in FIG6 (a) and FIG6 (b) as flat dashed lines for reference. APPENDIX C: EVALUATION ON DEMON DATASET Table 5 summarizes our results on the DeMoN dataset. For comparison, we also cite the results from DeMoN BID44 and the most recent work LS-Net. We further cite the results from some conventional approaches as reported in DeMoN, indicated as Oracle, SIFT, FF, and Matlab respectively. Here, Oracle uses ground truth camera poses to solve the multi-view stereo by SGM BID25, while SIFT, FF, and Matlab further use sparse features, optical flow, and KLT tracking respectively for feature correspondence to solve camera poses by the 8-pt algorithm BID23. Table 5: Quantitative comparisons on the DeMoN dataset. Our method consistently outperforms DeMoN BID44 on both camera motion and scene depth, except on the 'Scenes11' data, because we enforce multi-view geometry constraints in the BA-Layer. Our results are poorer on the 'Scenes11' dataset, because the images there are synthesized with random objects from ShapeNet BID8 without physically correct scale. This setting is inconsistent with real data and makes it harder for our method to learn the basis depth map generator. When compared with LS-Net, our method achieves similar accuracy on camera poses but better scene depth. This demonstrates that our feature-metric BA with learned features is superior to the photometric BA in LS-Net. Our method can be easily extended to reconstruct multiple images. We evaluate our method in the multi-view setting on the ScanNet BID11 dataset.
To sample multi-view images for training, we randomly select two-view image pairs that share a common image to construct N-view sequences. Due to the limited GPU memory (12G), we limit N to 5. As shown in Table 6, the accuracy is consistently improved when more views are included, which demonstrates the strength of the multi-view geometry constraints. In contrast, most existing deep learning approaches can only handle two views at a time, which is sub-optimal, as is well known in the structure-from-motion literature. Table 6: Quantitative comparisons on multi-view reconstruction on ScanNet. We compare our method with CodeSLAM, which adopts a similar idea for depth parameterization. The difference is that CodeSLAM learns the conditioned depth auto-encoder separately and uses the depth codes in a standalone photometric BA component, while our method learns the feature pyramid and the basis depth maps generator through feature-metric BA end-to-end. Since there is no public code for CodeSLAM, we directly cite the results from their paper. To obtain the trajectory of our method on the EuRoC MH02 sequence, we select one frame every four frames and concatenate the reconstructed groups, each of which contains five selected frames. We then use the same evaluation metrics as in CodeSLAM, which measure the translation errors corresponding to different traveled distances. As shown in FIG7, our method outperforms CodeSLAM. Our median error is less than half of CodeSLAM's error, i.e. CodeSLAM exhibits an error of roughly 1 m for a traveled distance of 9 m, while our method's error is about 0.4 m. This comparison demonstrates the superiority of end-to-end learning with the feature pyramid and feature-metric BA over learning the depth parameterization only. In FIG8, we visualize four typical basis depth maps as heat maps for each of four images. An interesting observation is that one basis depth map has higher responses on close objects while another, oppositely, has higher responses on far-away regions. Some other basis depth maps have smoothly varying responses and correspond to the layouts of the scenes. This observation reveals that our learned basis depth maps have captured the latent structures of scenes. Finally, we show some qualitative comparisons with the previous methods. Figure 9 shows the depth maps recovered by our method and DeMoN BID44 on the ScanNet data. As we can see from the regions highlighted with a red circle, our method recovers more shape details. This is consistent with the quantitative results in Table 1. FIG0 shows the depth maps recovered by our method and by the compared methods, including BID21. Similarly, we observe more shape details in our results, as reflected in the quantitative results in Table 2.
This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature bundle adjustment (BA)
365
scitldr
Temporal Difference Learning with function approximation is known to be unstable. Previous work like \citet{sutton2009fast} and \citet{sutton2009convergent} has presented alternative objectives that are stable to minimize. However, in practice, TD-learning with neural networks requires various tricks like using a target network that updates slowly \citep{mnih2015human}. In this work we propose a constraint on the TD update that minimizes change to the target values. This constraint can be applied to the gradients of any TD objective, and can be easily applied to nonlinear function approximation. We validate this update by applying our technique to deep Q-learning, and training without a target network. We also show that adding this constraint on Baird's counterexample keeps Q-learning from diverging. Temporal Difference learning is one of the most important paradigms in Reinforcement Learning (Sutton & Barto). Techniques based on nonlinear function approximators and stochastic gradient descent such as deep networks have led to significant breakthroughs in the class of problems that these methods can be applied to BID9 BID13 BID12. However, the most popular methods, such as TD(λ), Q-learning and Sarsa, are not true gradient descent techniques BID2 and do not converge on some simple examples BID0. BID0 and BID1 propose residual gradients as a way to overcome this issue. Residual methods, also called backwards bootstrapping, work by splitting the TD error over both the current state and the next state. These methods are substantially slower to converge, however, and BID16 show that the fixed point that they converge to is not the desired fixed point of TD-learning methods. BID16 propose an alternative objective function formulated by projecting the TD target onto the basis of the linear function approximator, and prove convergence to the fixed point of this projected Bellman error is the ideal fixed point for TD methods. BID5 extend this technique to nonlinear function approximators by projecting instead on the tangent space of the function at that point. Subsequently, BID11 has combined these techniques of residual gradient and projected Bellman error by proposing an oblique projection, and BID8 has shown that the projected Bellman objective is a saddle point formulation which allows a finite sample analysis. However, when using deep networks for approximating the value function, simpler techniques like Q-learning and Sarsa are still used in practice with stabilizing techniques like a target network that is updated more slowly than the actual parameters BID10.In this work, we propose a constraint on the update to the parameters that minimizes the change to target values, freezing the target that we are moving our current predictions towards. Subject to this constraint, the update minimizes the TD-error as much as possible. We show that this constraint can be easily added to existing techniques, and works with all the techniques mentioned above. We validate our method by showing convergence on Baird's counterexample and a gridworld domain. On the gridworld domain we parametrize the value function using a multi-layer perceptron, and show that we do not need a target network. Reinforcement Learning problems are generally defined as a Markov Decision Process (MDP), (S, A, P, R, R, d 0, γ). We use the definition and notation as defined in Sutton & Barto, second edition, unless otherwise specified. 
In the case of function approximation, we define the value and action-value functions parameterized by θ. DISPLAYFORM0 We focus on TD methods, such as Sarsa, Expected Sarsa and Q-learning. The TD error that all these methods minimize is as follows: DISPLAYFORM1 The choice of π determines if the update is on-policy or off-policy. For Q-learning the target is max_a q(s_{t+1}, a). If we consider TD-learning using function approximation, the loss that is minimized is the squared TD error. For example, in Q-learning DISPLAYFORM2 The gradient of this loss is the direction in which the parameters are updated. We define the gradient of the TD loss with respect to state s_t and parameters θ_t as g_TD(s_t). The gradient of some other function f(s_t | θ_t) can similarly be defined as g_f(s_t). The parameters are then updated according to gradient descent with step size α as follows: DISPLAYFORM3 A key characteristic of TD methods is bootstrapping, i.e. the update to the prediction at each step uses the prediction at the next step as part of its target. This method is intuitive and works exceptionally well in a tabular setting (Sutton & Barto). In that setting, updates to the value of one state or state-action pair do not affect the values of any other state or state-action pair. TD-learning using a function approximator is not so straightforward, however. When using a function approximator, nearby states will tend to share features, or have features that are very similar. If we update the parameters associated with these features, we will update the value of not only the current state, but also of nearby states that use those features. In general, this is what we want to happen: with prohibitively large state spaces, we want to generalize across states instead of learning values separately for each one. However, if the value of the state visited on the next step, which often does share features, is also updated, then the result of the update might not have the desired effect on the TD error. Generally, methods for TD-learning using function approximation do not take into account that updating θ_t in the direction that minimizes the TD error the most might also change v(s_{t+1} | θ_{t+1}). Though they do not point out this insight as we have, previous works that aim to address convergence of TD methods using function approximation do deal with this issue indirectly, like residual gradients BID0 and methods minimizing the MSPBE BID16. Residual gradients do this by essentially updating the parameters of the next state in the opposite direction of the update to the parameters of the current state. This splits the error between the current state and the next state, and the fixed point we reach does not act as a predictive representation of the reward. MSPBE methods act by removing the component of the error that is not in the span of the features of the current state, by projecting the TD targets onto these features. The update for these methods involves the product of three expectations, which is handled by keeping a separate set of weights that approximates two of these expectations and is updated at a faster timescale. The idea also does not immediately scale to nonlinear function approximation. BID5 propose a solution by projecting the error on the tangent plane to the function at the point at which it is evaluated. We propose to instead constrain the update to the parameters such that the change to the values of the next state is minimized, while also minimizing the TD error.
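The following sketch previews the constrained update that the next section derives. It uses a linear value function for clarity and instantiates the constraint as a standard orthogonal projection of the TD gradient away from the gradient of v(s_{t+1}); the exact update equations are given in Section 3, so treat this only as an illustration of the idea.

import numpy as np

def constrained_td_update(theta, phi_s, phi_next, r, gamma, alpha):
    # One TD(0) update for a linear value function v(s) = theta . phi(s), where the
    # TD gradient at s_t is projected onto the subspace orthogonal to grad v(s_{t+1});
    # to first order, the bootstrap target v(s_{t+1}) is left unchanged.
    delta = r + gamma * theta @ phi_next - theta @ phi_s   # TD error
    g_td = -delta * phi_s                                  # grad of 0.5*delta^2, target held fixed
    g_v = phi_next                                         # grad of v(s_{t+1}) w.r.t. theta
    if g_v @ g_v > 0:
        g_update = g_td - (g_td @ g_v) / (g_v @ g_v) * g_v # remove the component moving v(s_{t+1})
    else:
        g_update = g_td
    return theta - alpha * g_update

# two states sharing a feature: a plain TD update at s_t would also move v(s_{t+1})
theta = np.zeros(3)
phi_s, phi_next = np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])
theta = constrained_td_update(theta, phi_s, phi_next, r=1.0, gamma=0.9, alpha=0.5)
print(theta @ phi_next)   # 0.0: the next state's value is untouched by this update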
To do this, instead of modifying the objective, we look at the gradients of the update. DISPLAYFORM0 g_TD(s_t) is the gradient at s_t that minimizes the TD error. g_v(s_{t+1}) is the gradient at s_{t+1} that will change the value the most. We update the parameters θ_t with g_update(s_t) such that the update is orthogonal to g_v(s_{t+1}). That is, we update the parameters θ_t such that there is no change in the direction that will affect v(s_{t+1}). Graphically, the update can be seen in Figure 1. The actual updates to the parameters are as given below. DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 As can be seen, the proposed update is orthogonal to the direction of the gradient at the next state, which means that it minimizes the impact on the next state. On the other hand, DISPLAYFORM4 shows that g_update(s_t) has a non-negative inner product with g_TD(s_t). This implies that applying g_update(s_t) to the parameters θ minimizes the TD error, unless it would change the values of the next state. Furthermore, our technique can be applied on top of any of these techniques to improve their behavior. We show this for residual gradients and Q-learning in the following experiments. To show that our method learns just as fast as TD while guaranteeing convergence similar to residual methods, we show the behavior of our algorithm on the following 3 examples. Figure 2: Baird's counterexample is specified by 6 states and 7 parameters. The value of each state is calculated as given inside the state. At each step, the agent is initialized at one of the 6 states uniformly at random, and transitions to the state at the bottom, shown by the arrows. Baird's counterexample is a problem introduced in BID0 to show that gradient descent with function approximation using TD updates does not converge. The comparison of our technique with Q-learning and Residual Gradients is shown in Figure 2. We compare the average performance of all techniques over 10 independent runs. If we apply gradient projection while using the TD error, we show that both Q-learning (TD update) and updates using residual gradients BID0 converge, but not to the ideal values of 0. In the figure, these values are almost overlapping. Our method constrains the gradient to not modify the weights of the next state, which in this case means that w_0 and w_6 never get updated. This means that the values do not converge to the true values, but they do not blow up as they do with regular TD updates. Residual gradients converge to the ideal values of 0 eventually. GTD2 BID16) also converges to 0, as was shown in the paper, but we have not included that in this graph to avoid clutter. The Gridworld domain we use is a (10×10) room with d_0 = S, R = 1 at the goal, and 0 everywhere else. We set the goal arbitrarily, and our results are similar for any goal on this grid. The input to the function approximation is only the (x, y) coordinates of the agent. We use a deep network with 2 hidden layers, each with 32 units, for approximating the Q-values. We execute a softmax policy, and the target values are also calculated as v(s_{t+1}) = Σ_a π(a|s_{t+1}) q(s_{t+1}, a), where the policy π is a softmax over the Q-values. Figure 4: Comparison of DQN and Constrained DQN on the Cartpole problem, taken over 10 runs. The shaded area specifies the standard deviation in the scores of the agent across independent runs. The agent is cut off after its average performance exceeds 199 over a running window of 100 episodes.
The room can be seen in FIG2 with the goal in red, along with a comparison of the value functions learnt by the 2 methods we compare. The mean squared error (MSE) of the learnt values is 0.0335 ± 0.0017 for Q-learning and 0.0076 ± 0.0028 for Constrained Q-learning. We also evaluate on the classical Cartpole domain (Barto et al., 1983). We use implementations from OpenAI baselines BID6 for Deep Q-learning to ensure that the code is reproducible and the comparison fair. The network we use has 2 hidden layers. The only other difference compared to the implemented baseline is that we use RMSProp BID17 as the optimizer instead of Adam BID7. This is just to stay close to the method used in BID10; in practice, Adam works just as well and the comparison is similar. The two methods are trained using exactly the same code except for the updates, and the fact that Constrained DQN does not use a target network. We can also train Constrained DQN with a larger step size (10^-3), while DQN requires a smaller step size (10^-4) to learn. The comparison with DQN is shown in Figure 4. We see that Constrained DQN learns much faster, with much less variance than regular DQN. In this paper we introduce a constraint on the updates to the parameters for TD learning with function approximation. This constraint forces the targets in the Bellman equation to not move when the update is applied to the parameters. We enforce this constraint by projecting the gradient of the TD error with respect to the parameters for state s_t onto the space orthogonal to the gradient with respect to the parameters for state s_{t+1}. We show in our experiments that this added constraint stops the parameters in Baird's counterexample from exploding when we use TD-learning. But since we do not allow changes to target parameters, this also keeps Residual Gradients from converging to the true values of the Markov process. On a Gridworld domain we demonstrate that we can perform TD-learning using a 2-layer neural network, without the need for a target network that updates more slowly. We compare with the solution obtained by DQN and show that ours is closer to the solution obtained by tabular policy evaluation. Finally, we also show that Constrained DQN can learn faster and with less variance on the classical Cartpole domain. For future work, we hope to scale this approach to larger problems such as the Atari domain BID4. We would also like to prove convergence of TD-learning with this added constraint.
We show that adding a constraint to TD updates stabilizes learning and allows Deep Q-learning without a target network
366
scitldr
We propose DuoRC, a novel dataset for Reading Comprehension (RC) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing RC datasets. DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie - one from Wikipedia and the other from IMDb - written by two different authors. We asked crowdsourced workers to create questions from one version of the plot and a different set of workers to extract or synthesize corresponding answers from the other version. This unique characteristic of DuoRC where questions and answers are created from different versions of a document narrating the same underlying story, ensures by design, that there is very little lexical overlap between the questions created from one version and the segments containing the answer in the other version. Further, since the two versions have different level of plot detail, narration style, vocabulary, etc., answering questions from the second version requires deeper language understanding and incorporating knowledge not available in the given text. Additionally, the narrative style of passages arising from movie plots (as opposed to typical descriptive passages in existing datasets) exhibits the need to perform complex reasoning over events across multiple sentences. Indeed, we observe that state-of-the-art neural RC models which have achieved near human performance on the SQuAD dataset, even when coupled with traditional NLP techniques to address the challenges presented in DuoRC exhibit very poor performance (F1 score of 37.42% on DuoRC v/s 86% on SQuAD dataset). This opens up several interesting research avenues wherein DuoRC could complement other Reading Comprehension style datasets to explore novel neural approaches for studying language understanding. Natural Language Understanding is widely accepted to be one of the key capabilities required for AI systems. Scientific progress on this endeavor is measured through multiple tasks such as machine translation, reading comprehension, question-answering, and others, each of which requires the machine to demonstrate the ability to "comprehend" the given textual input (apart from other aspects) and achieve their task-specific goals. In particular, Reading Comprehension (RC) systems are required to "understand" a given text passage as input and then answer questions based on it. It is therefore critical, that the dataset benchmarks established for the RC task keep progressing in complexity to reflect the challenges that arise in true language understanding, thereby enabling the development of models and techniques to solve these challenges. For RC in particular, there has been significant progress over the recent years with several benchmark datasets, the most popular of which are the SQuAD dataset BID11, TriviaQA BID4, MS MARCO BID8, MovieQA BID16 and cloze-style datasets BID6 BID9 BID2. 
However, these benchmarks, owing to both the nature of the passages and the question-answer pairs to evaluate the RC task, have 2 primary limitations in studying language understanding: (i) Other than MovieQA, which is a small dataset of 15K QA pairs, all other large-scale RC datasets deal only with factual descriptive passages and not narratives (involving events with causality linkages that require reasoning and knowledge) which is the case with a lot of real-world content such as story books, movies, news reports, etc. (ii) their questions possess a large lexical overlap with segments of the passage, or have a high noise level in Q/A pairs themselves. As demonstrated by recent work, this makes it easy for even simple keyword matching algorithms to achieve high accuracy BID19. In fact, these models have been shown to perform poorly in the presence of adversarially inserted sentences which have a high word overlap with the question but do not contain the answer BID3. While this problem does not exist in TriviaQA it is admittedly noisy because of the use of distant supervision. Similarly, for cloze-style datasets, due to the automatic question generation process, it is very easy for current models to reach near human performance BID1. This therefore limits the complexity in language understanding that a machine is required to demonstrate to do well on the RC task. Motivated by these shortcomings and to push the state-of-the-art in language understanding in RC, in this paper we propose DuoRC, which specifically presents the following challenges beyond the existing datasets:1. DuoRC is especially designed to contain a large number of questions with low lexical overlap between questions and their corresponding passages.2. It requires the use of and common-sense knowledge to arrive at the answer and go beyond the content of the passage itself.3. It contains narrative passages from movie plots that require complex reasoning across multiple sentences to infer the answer.4. Several of the questions in DuoRC, while seeming relevant, cannot actually be answered from the given passage, thereby requiring the machine to detect the unanswerability of questions. In order to capture these four challenges, DuoRC contains QA pairs created from pairs of documents describing movie plots which were gathered as follows. Each document in a pair is a different version of the same movie plot written by different authors; one version of the plot is taken from the Wikipedia page of the movie whereas the other from its IMDb page (see FIG0 for portions of an example pair of plots from the movie "Twelve Monkeys"). We first showed crowd workers on Amazon Mechanical Turk (AMT) the first version of the plot and asked them to create QA pairs from it. We then showed the second version of the plot along with the questions created from the first version to a different set of workers on AMT and asked them to provide answers by reading the second version only. Since the two versions contain different levels of plot detail, narration style, vocabulary, etc., answering questions from the second version exhibits all of the four challenges mentioned above. We now make several interesting observations from the example in FIG0. For 4 out of the 8 questions (Q1, Q2, Q4, and Q7), though the answers extracted from the two plots are exactly the same, the analysis required to arrive at this answer is very different in the two cases. 
In particular, for Q1 even though there is no explicit mention of the prisoner living in a subterranean shelter and hence no lexical overlap with the question, the workers were still able to infer that the answer is Philadelphia because that is the city to which James Cole travels to for his mission. Another interesting characteristic of this dataset is that for a few questions (Q6, Q8) alternative but valid answers are obtained from the second plot. Further, note the kind of complex reasoning required for answering Q8 where the machine needs to resolve coreferences over multiple sentences (that man refers to Dr. Peters) and use common sense knowledge that if an item clears an airport screening, then a person can likely board the plane with it. To re-emphasize, these examples exhibit the need for machines to demonstrate new capabilities in RC such as: (i) employing a knowledge graph (e.g. to know that Philadelphia is a city in Q1), (ii) common-sense knowledge (e.g., clearing airport security implies boarding) (iii) paraphrase/semantic understanding (e.g. revolver is a type of handgun in Q7) (iv) multiple-sentence inferencing across events in the passage including coreference resolution of named entities and nouns, and (v) educated guesswork when the question is not directly answerable but there are subtle hints in the passage (as in Q1). Finally, for quite a few questions, there wasn't sufficient information in the second plot to obtain their answers. In such cases, the workers marked the question as "unanswerable". This brings out a very important challenge for machines to exhibit (i.e. detect unanswerability of questions) because a practical system should be able to know when it is not possible for it to answer a particular question given the data available to it, and in such cases, possibly delegate the task to a human instead. Current RC systems built using existing datasets are far from possessing these capabilities to solve the above challenges. In Section 4, we seek to establish solid baselines for DuoRC employing state-of-the-art RC models coupled with a collection of standard NLP techniques to address few of the above challenges. Proposing novel neural models that solve all of the challenges in DuoRC is out of the scope of this paper. Our experiments demonstrate that when the existing state-of-the-art RC systems are trained and evaluated on DuoRC they perform poorly leaving a lot of scope for improvement and open new avenues for research in RC. Do note that this dataset is not a substitute for existing RC datasets but can be coupled with them to collectively address a large set of challenges in language understanding with RC (the more the merrier). Over the past few years, there has been a surge in datasets for Reading Comprehension. Most of these datasets differ in the manner in which questions and answers are created. For example, in SQuAD BID11, NewsQA BID18, TriviaQA BID4 and MovieQA BID16 the answers correspond to a span in the document. MS-MARCO uses web queries as questions and the answers are synthesized by workers from documents relevant to the query. On the other hand, in most cloze-style datasets BID6 BID9 ) the questions are created automatically by deleting a word/entity from a sentence. There are also some datasets for RC with multiple choice questions BID13 BID0 BID5 where the task is to select one among k given candidate answers. Given that there are already a few datasets for RC, a natural question to ask is "Do we really need any more datasets?". 
We believe that the answer to this question is yes. Each new dataset brings in new challenges and contributes towards building better QA systems. It keeps researchers on their toes and prevents research from stagnating once state-of-the-art results are achieved on one dataset. A classic example of this is the CoNLL NER dataset BID17. While several NER systems BID10 gave close to human performance on this dataset, NER on general web text, domain specific text, and noisy social media text is still an unsolved problem (mainly due to the lack of representative datasets which cover the real-world challenges of NER). In this context, DuoRC presents the 4 new challenges mentioned earlier which are not exhibited in existing RC datasets and would thus enable exploring novel neural approaches to complex language understanding. The hope is that all these datasets (including ours) will collectively help in addressing a wide range of challenges in QA and prevent stagnation via overfitting on a single dataset. In this section, we elaborate on our dataset collection process, which consisted of the following three phases. 1. Extracting parallel movie plots: We first collected the top 40K movies from IMDb across different genres (crime, drama, comedy, etc.) whose plot synopses were crawled from Wikipedia as well as IMDb. We retained only 7680 movies for which both plots were available and longer than 100 words. In general, we found that the IMDb plots were usually longer (avg. length 926 words) and more descriptive than the Wikipedia plots (avg. length 580 words). 2. Collecting QA pairs from the shorter version of the plot (SelfRC): As mentioned earlier, on average the longer version of the plot is almost double the size of the shorter version, which is itself usually 500 words long. Intuitively, the longer version should have more details, and the questions asked from the shorter version should be answerable from the longer one. Hence, we first showed the shorter version of the plot to workers on AMT and asked them to create QA pairs from it. For the answer, the workers were given the freedom to either pick an answer which directly matches a span in the document or synthesize the answer from scratch. This option allowed them to be creative and ask hard questions where possible. We found that in 70% of the cases the workers picked an answer directly from the document and in 30% of the cases they synthesized the answer. We thus collected 85,773 such QA pairs along with their corresponding documents. We refer to this as the SelfRC dataset because the answers were derived from the same document from which the questions were asked. 3. Collecting answers from the longer version of the plot (ParaphraseRC): We then paired the questions from the SelfRC dataset with the corresponding longer version of the plot and showed it to a different set of AMT workers, asking them to answer these questions from the longer version of the plot. They now have the option of either (a) selecting an answer which matches a span in the longer version, or (b) synthesizing the answer from scratch, or (c) marking the question not-answerable because of lack of information in the given passage. We found that in 50% of the cases the workers selected an answer which matched a span in the document, whereas in 37% of the cases they synthesized the answer and in 13% of the cases they said that the question was not answerable.
The workers were strictly instructed to derive the answer from the plot and not rely on their personal knowledge about the movie (in any case given the large number of movies in our dataset the chance of a worker remembering all the plot details for a given movie is very less). Further, a wait period of 2-3 weeks was deliberately introduced between the two phases of data collection to ensure the availability of a fresh pool of workers as well as to reduce information bias among any worker common to both the tasks. We refer to this dataset, where the questions are taken from one version of the document and the answers are obtained from a different version, as ParaphraseRC dataset. We collected 100,316 such {question, answer, document} triplets. Note that the number of unique questions in the ParaphraseRC dataset is the same as that in SelfRC because we do not create any new questions from the longer version of the plot. We end up with a greater number of {question, answer, document} triplets in ParaphraseRC as compared to SelfRC (100,316 v/s 85,773) since movies that are remakes of a previous movie had very little difference in their Wikipedia plots. Therefore, we did not separately collect questions from the Wikipedia plot of the remake. However, the IMDb plots of the two movies are very different and so we have two different longer versions of the movie (one for the original and one for the remake). We can thus pair the questions created from the Wikipedia plot with both the IMDb versions of the plot and hence we end up with more {question, answer, document} triplets. We refer to this combined dataset containing a total of 186,089 instances as DuoRC. Fig. 2 shows the distribution of different Wh-type questions in our dataset. Some more interesting statistics about the dataset are presented in TAB1 and also in Appendix B.Another notable observation is that in many cases the answers to the same question are different in the two versions. Specifically, only 40.7% of the questions have the same answer in the two documents. For around 37.8% of the questions there is no overlap between the words in the two answers. For the remaining 21% of the questions there is a partial overlap between the two answers. For e.g., the answer derived from the shorter version could be "using his wife's gun" and from the longer version could be "with Dana's handgun" where Dana is the name of the wife. In Appendix A, we provide a few randomly picked examples from our dataset which should convince the reader of the difficulty of ParaphraseRC and its differences with SelfRC. In this section, we describe in detail the various state-of-the-art RC and language generation models along with a collection of traditional NLP techniques employed together that will serve to establish baseline performance on the DuoRC dataset. Most of the current state-of-the-art models for RC assume that the answer corresponds to a span in the document and the task of the model is to predict this span. This is indeed true for the SQuAD, TriviaQA and NewsQA datasets. However, in our dataset, in many cases the answers do not correspond to an exact span in the document but are synthesized by humans. Specifically, for the SelfRC version of the dataset around 30% of the answers are synthesized and do not match a span in the document whereas for the ParaphraseRC task this number is 50%. Nevertheless, we could still leverage the advances made on the SQuAD dataset and adapt these span prediction models for our task. 
To do so, we propose to use two models. The first model is a basic span prediction model which we train and evaluate using only those instances in our dataset where the answer matches a span in the document. The purpose of this model is to establish whether even for instances where the answer matches a span in the document, our dataset is harder than the SQuAD dataset or not. Specifically, we want to explore the performance of state-of-the-art models (such as DCN BID20), which exhibit near human on the SQuAD dataset, on DuoRC (especially, in the ParaphraseRC setup). To do so, we seek to employ a good span prediction model for which (i) the performance is within 3-5% of the top performing model on the SQuAD leaderboard BID12 and (ii) the are reproducible based on the code released by the authors of the paper. Note that the second criteria is important to ensure that the poor performance of the model is not due to incorrect implementation. The Bidirectional Attention Flow (BiDAF) model BID14 satisfies these criteria and hence we employ this model. Due to space constraints, we do not provide details of the BiDAF model here and simply refer the reader to the original paper. In the remainder of this paper we will refer to this model as the SpanModel. The second model that we employ is a two stage process which first predicts the span and then synthesizes the answers from the span. Here again, for the first step (i.e., span prediction) we use the BiDAF model BID14. The job of the second model is to then take the span (mini-document) and question (query) as input and generate the answer. For this, we employ a state-of-the-art query based abstractive summarization model BID7 as this task is very similar to our task. Specifically, in query based abstractive summarization the training data is of the form {query, document, generated summary} and in our case the training data is of the form {query, mini-document, generated answer}. Once again we refer the reader to the original paper BID7 for details of the model. We refer to this two stage model as the GenModel. Note that BID15 recently proposed an answer generation model for the MS MARCO dataset. However, the authors have not released their code and therefore, in the interest of reproducibility of our work, we omit incorporating this model in this paper. Additional NLP pre-processing: Referring back to the example cited in FIG0, we reiterate that ideally a good model for ParaphraseRC would require: (i) employing a knowledge graph, (ii) common-sense knowledge (iii) paraphrase/semantic understanding (iv) multiple-sentence inferencing across events in the passage including coreference resolution of named entities and nouns, and (v) educated guesswork when the question is not directly answerable but there are subtle hints in the passage. While addressing all of these challenges in their entirety is beyond the scope of a single paper, in the interest of establishing a good baseline for DuoRC, we additionally seek to address some of these challenges to a certain extent by using standard NLP techniques. Specifically, we look at the problems of paraphrase understanding, coreference resolution and handling long passages. To do so, we prune the document and extract only those sentences which are most relevant to the question, so that the span detector does not need to look at the entire 900-word long ParaphraseRC plot. 
Now, since these relevant sentences are obtained not from the original but from the paraphrased version of the document, they may have a very small word overlap with the question. For example, the question might contain the word "hand gun" and the relevant sentence in the document may contain the word "revolver". Further, some of the named entities in the question may not be exactly present in the relevant sentence but may simply be co-referenced. To resolve these coreferences, we first run Stanford coreference resolution on the entire document. We then compute the fraction of words in a sentence which match a query word (ignoring stop words). Two words are considered to match if (a) they have the same surface form, or (b) one word is an inflected form of the other (e.g., river and rivers), or (c) the GloVe and Skip-thought embeddings of the two words are very close to each other, or (d) the two words appear in the same synset in WordNet. We consider a sentence to be relevant for the question if at least 50% of the query words (ignoring stop words) match the words in the sentence. If none of the sentences in the document have at least 50% overlap with the question, then we pick sentences having at least a 30% overlap with the question. A code sketch of this sentence-selection heuristic is given at the end of this passage. In the following sub-sections we describe (i) the evaluation metrics, and (ii) the choices considered for augmenting the training data for the answer generation model. Note that when creating the train, validation and test sets, we ensure that the test set does not contain question-answer pairs for any movie that was seen during training. We split the movies in such a way that the resulting train, valid, and test sets respectively contain 70%, 15% and 15% of the total number of QA pairs. As mentioned earlier, the SpanModel only predicts the span in the document whereas the GenModel generates the answer after predicting the span. Ideally, the SpanModel should only be evaluated on those instances in the test set where the answer matches a span in the document. We refer to this subset of the test set as the Span-based Test Set. Though not ideal, we also evaluate the SpanModel on the entire test set. We say this is not ideal because we know for sure that there are many answers in the test set which do not correspond to a span in the document whereas the model was only trained to predict spans. We refer to this as the Full Test Set. We also evaluate the GenModel on both test sets. Training Data for the GenModel: As mentioned earlier, the GenModel contains two stages; the first stage predicts the span and the second stage then generates an answer from the predicted span. For the first step we plug in the best performing SpanModel from our earlier exploration. To train the second stage we need training data of the form {x = span, y = answer}, which comes from two types of instances: one where the answer matches a span and the other where the answer is synthesized and the span corresponding to it is not known. In the first case x = y and there is nothing interesting for the model to learn (except for copying the input to the output). In the second case x is not known. To overcome this problem, for the second type of instances, we consider various approaches for finding the approximate span from which the answer could have been generated, in order to augment the training data with {x = approx span, y = answer} pairs. The easiest method was to simply treat the entire document as the true span from which the answer was generated (x = document, y = answer).
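Returning to the sentence-selection heuristic above: the snippet below is an illustrative Python re-implementation of it, not the authors' code. It uses NLTK's stopword list and WordNet; the `emb_sim` callable stands in for GloVe/Skip-thought cosine similarity and its 0.75 threshold is an assumption, while the 50%/30% sentence-overlap thresholds follow the text.

```python
from nltk.corpus import stopwords, wordnet

STOP = set(stopwords.words("english"))

def words_match(w1, w2, emb_sim=None):
    """Loose word match following the four criteria described above."""
    if w1 == w2:                                         # (a) same surface form
        return True
    if w1.rstrip("s") == w2.rstrip("s"):                 # (b) crude inflection check (river/rivers)
        return True
    if emb_sim is not None and emb_sim(w1, w2) > 0.75:   # (c) embeddings close (threshold assumed)
        return True
    syn1 = {s.name() for s in wordnet.synsets(w1)}       # (d) shared WordNet synset
    syn2 = {s.name() for s in wordnet.synsets(w2)}
    return bool(syn1 & syn2)

def relevant_sentences(question_tokens, sentences, emb_sim=None):
    """Keep sentences matching >= 50% of the query words; back off to 30%.
    `sentences` is a list of token lists from the coreference-resolved plot."""
    query = [w.lower() for w in question_tokens if w.lower() not in STOP]

    def coverage(sent_tokens):
        sent = [w.lower() for w in sent_tokens if w.lower() not in STOP]
        matched = sum(any(words_match(q, s, emb_sim) for s in sent) for q in query)
        return matched / max(len(query), 1)

    for threshold in (0.5, 0.3):
        kept = [s for s in sentences if coverage(s) >= threshold]
        if kept:
            return kept
    return sentences  # nothing matched; fall back to the full document
```

Returning to the training data for the GenModel: besides treating the entire document as the approximate span, we also tried the alternatives described next.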
The second alternative that we tried was to first extract the named entities, noun phrases and verb phrases from the question and create a Lucene query from these components. We then used the Lucene search engine to extract the most relevant portions of the document given this query. We then considered this portion of the document as the true span (as opposed to treating the entire document as the true span). Note that Lucene could return multiple relevant spans, in which case we treat all of these {x = approx span, y = answer} pairs as training instances. Another alternative was to find the longest common subsequence (LCS) between the document and the question and treat this subsequence as the span from which the answer was generated. Of these, we found that the model trained using {x = approx span, y = answer} pairs created using the LCS-based method gave the best results. We report numbers only for this model. Evaluation Metrics: Similar to BID11 we use Accuracy and F-score as the evaluation metrics. While accuracy, being a stricter metric, considers a predicted answer to be correct only if it exactly matches the true answer, F-score also gives credit to predictions partially overlapping with the true answer. The results of our experiments are summarized in TAB3, which we discuss in the following sub-sections.

• SpanModel v/s GenModel: Comparing the first two rows (SelfRC) and the last two rows (ParaphraseRC) of TAB4 we see that the SpanModel clearly outperforms the GenModel. This is not very surprising for two reasons. First, around 70% (and 50%) of the answers in SelfRC (and ParaphraseRC, respectively) match an exact span in the document, so the span-based model still has scope to do well on these answers. On the other hand, even if the first stage of the GenModel predicts the span correctly, the second stage could make an error in generating the correct answer from it because generation is a harder problem. For the second stage, it is expected that the GenModel should learn to copy the predicted span to produce the answer output (as is required in most cases) and only occasionally, where necessary, generate an answer. However, surprisingly the GenModel fails to even do this. Manual inspection of the generated answers shows that in many cases the generator ends up generating either more or fewer words compared to the true answer. This demonstrates that there is clearly scope for the GenModel to perform better.

• SelfRC v/s ParaphraseRC: Comparing the SelfRC and ParaphraseRC numbers in TAB4, we observe that the performance of the models clearly drops for the latter task, thus validating our hypothesis that ParaphraseRC is indeed a much harder task.

• Effect of NLP pre-processing: As mentioned in Section 4, for ParaphraseRC, we first perform a few pre-processing steps to identify relevant sentences in the longer document. In order to evaluate whether the pre-processing method is effective, we compute: (i) the percentage of the document that gets pruned, and (ii) whether the true answer is present in the pruned document (i.e., the average recall of the answer). We can compute the recall only for the span-based subset of the data since for the remaining data we do not know the true span. In TAB3, we report these two quantities for the span-based subset using different pruning strategies.
Finally, comparing the SpanModel with and without Paraphrasing in TAB4 for ParaphraseRC, we observe that the pre-processing step indeed improves the performance of the Span Detection Model.• Effect of oracle pre-processing: As noted in Section 3, the ParaphraseRC plot is almost double in length in comparison to the SelfRC plot, which while adding to the complexities of the former task, is clearly not the primary reason of the model's poor performance on that. To empirically validate this, we perform an Oracle pre-processing step, where, starting with the knowledge of the span containing the true answer, we extract a subplot around it such that the span is randomly located within that subplot and the average length of the subplot is similar to the SelfRC plots. The SpanModel with this Oracle preprocessed data exhibits a minor improvement in performance over that with rule-based preprocessing (1.6% in Accuracy and 4.3% in F1 over the Span Test), still failing to bridge the wide performance gap between the SelfRC and ParaphraseRC task.• Cross Testing We wanted to examine whether a model trained on SelfRC performs well on ParaphraseRC and vice-versa. We also wanted to evaluate if merging the two datasets improves the performance of the model. For this we experimented with various combinations of train and test data. The of these experiments for the SpanModel are summarized in TAB5. We make two main observations. First, training on one dataset and evaluating on the other in a drop in the performance. Merging the training data from the two datasets exhibits better performance on the individual test sets. Based on our experiments and empirical observations we believe that the DuoRC dataset indeed holds a lot of potential for advancing the horizon of complex language understanding by exposing newer challenges in this area. In this paper we introduced DuoRC, a large scale RC dataset of 186K human-generated questionanswer pairs created from 7680 pairs of parallel movie-plots, each pair taken from Wikipedia and IMDb. We then showed that this dataset, by design, ensures very little or no lexical overlap between the questions created from one version and the segments containing the answer in the other version. With this, we hope to introduce the RC community to new research challenges on question-answering requiring external knowledge and common-sense driven reasoning, deeper language understanding and multiple-sentence inferencing. Through our experiments, we show how the state-of-the-art RC models, which have achieved near human performance on the SQuAD dataset, perform poorly on our dataset, thus emphasizing the need to explore further avenues for research. In this appendix, we showcase some examples of plots from which questions are created and answered. Since the questions are created from the smaller plot, answering these questions by the reading the smaller plot (which is named as the SelfRC task) is straightforward. However, answering them by reading the larger plot (i.e. the ParaphraseRC task) is more challenging and requires multi-sentence and sometimes multi-paragraph inferencing. Due to shortage of space, we truncate the plot contents and only show snippets from which the questions can be answered. In the smaller plot, blue indicates that an answer can directly be found from the sentence and cyan indicates that the answer spans over multiple sentences. For the larger plot, red and orange are used respectively. A.1 EXAMPLE 1: A. 
We conducted a manual verification of 100 question-answer pairs where the SelfRC and ParaphraseRC were different or the latter was marked as non-answerable. As noted in FIG1, the chief reason behind getting No Answer from the Paraphrase plot is lack of information and at times, need for an educated guesswork or missing general knowledge (e.g. Philadelphia is a city) or missing movie meta-data (e.g. to answer questions like ' Where did Julia Roberts' character work in the movie?'). On the other hand, SelfRC and ParaphraseRC answers are occasionally seen to have partial or no overlap, mainly because of the following causes; phrasal paraphrases or subjective questions (e.g. Why and How type questions) or different valid answers to objective questions (e.g. 'Where did Jane work?' is answered by one worker as 'Bloomberg' and other as 'New York City') or differently spelt names in the answers (e.g. 'Rebeca' as opposed to 'Rebecca').
We propose DuoRC, a novel dataset for Reading Comprehension (RC) containing 186,089 human-generated QA pairs created from a collection of 7680 pairs of parallel movie plots and introduce a RC task of reading one version of the plot and answering questions created from the other version; thus by design, requiring complex reasoning and deeper language understanding to overcome the poor lexical overlap between the plot and the question.
367
scitldr
We consider the problem of weakly supervised structured prediction (SP) with reinforcement learning (RL) – for example, given a database table and a question, perform a sequence of computation actions on the table, which generates a response and receives a binary success-failure reward. This line of research has been successful by leveraging RL to directly optimizes the desired metrics of the SP tasks – for example, the accuracy in question answering or BLEU score in machine translation. However, different from the common RL settings, the environment dynamics is deterministic in SP, which hasn’t been fully utilized by the model-freeRL methods that are usually applied. Since SP models usually have full access to the environment dynamics, we propose to apply model-based RL methods, which rely on planning as a primary model component. We demonstrate the effectiveness of planning-based SP with a Neural Program Planner (NPP), which, given a set of candidate programs from a pretrained search policy, decides which program is the most promising considering all the information generated from executing these programs. We evaluate NPP on weakly supervised program synthesis from natural language(semantic parsing) by stacked learning a planning module based on pretrained search policies. On the WIKITABLEQUESTIONS benchmark, NPP achieves a new state-of-the-art of 47.2% accuracy. Numerous from natural language processing tasks have shown that Structured Prediction (SP) can be cast into a reinforcement learning (RL) framework, and known RL techniques can give formal performance bounds on SP tasks BID3 BID13 BID0. RL also directly optimizes task metrics, such as, the accuracy in question answering or BLEU score in machine translation, and avoids the exposure bias problem when compaired to maximum likelihood training that is commonly used in SP BID13 BID12.However, previous works on applying RL to SP problems often use model-free RL algorithms (e.g., REINFORCE or actor-critic) and fail to leverage the characteristics of SP, which are different than typical RL tasks, e.g., playing video games BID9 or the game of Go BID15. In most SP problems conditioned on the input x, the environment dynamics, except for the reward signal, is known, deterministic, reversible, and therefore can be searched. This means that there is a perfect model 1 of the environment, which can be used to apply model-based RL methods that utilize planning 2 as a primary model component. Take semantic parsing BID1 BID11 as an example, semantic parsers trained by RL such as Neural Semantic Machine (NSM) BID8 typically rely on beam search for inference -the program with the highest probability in beam is used for execution and generating answer. However, the policy, which is used for beam search, may not be 1 A model of the environment usually means anything that an agent can use to predict how the environment will respond to its actions BID17.2 planning usually refers to any computational process that takes a model as input and produces or improves a policy for interacting with the modeled environment BID17 able to assign the highest probability to the correct program. This limitation is due to the policy predicting locally normalized probabilities for each possible action based on the partially generated program, and the probability of a program is a product of these local probabilities. 
For example, when applied to the WEBQUESTIONSSP task, NSM made mistakes with two common patterns: the program would ignore important information in the context; the generated program does not execute to a reasonable output, but still receives high probability (spurious programs). Resolving this issue requires using the information of the full program and its execution output to further evaluate its quality based on the context, which can be seen as planning. This can be observed in Figure 4 where the model is asked a question "Which programming is played the most?". The full context of the input table (shown in TAB0) contains programming for a television station. The top program generated by a search policy produces the wrong answer, filtering by a column not relevant to the question. If provided the correct contextual features, and if allowed to evaluate the full program forward and backward through time, we observe that a planning model would be able to better evaluate which program would produce the correct answer. To handle errors related to context, we propose to train a value function to compute the utility of each token in a program. This utility is evaluated by considering the program and token probability as well as the attention mask generated by the sequence-to-sequence (seq2seq) model for the underlying policy. We also introduce beam and question context with a binary feature representing overlap from question/program and program/program, such as how many programs share a token at a given timestep. In the experiments, we found that applying a planner that uses a learned value function to re-rank the candidates in the beam can significantly and consistently improve the accuracy. On the WIKITABLEQUESTIONS benchmark, we improve the state-of-the-art by 0.9%, achieving an accuracy of 47.2%. 2.1 WIKITABLEQUESTIONS WIKITABLEQUESTIONS BID11 contains tables extracted from Wikipedia and question-answer pairs about the tables. See TAB0 as an example. There are 2,108 tables and 18,496 question-answer pairs split into train/dev/test set. Each table can be converted into a directed graph that can be queried, where rows and cells are converted to graph nodes while column names become labeled directed edges. For the questions, we use string match to identify phrases that appear in the table. We also identify numbers and dates using the CoreNLP annotation released with the dataset. The task is challenging in several aspects. First, the tables are taken from Wikipedia and cover a wide range of topics. Second, at test time, new tables that contain unseen column names appear. Third, the table contents are not normalized as in knowledge-bases like Freebase, so there are noises and ambiguities in the table annotation. Last, the semantics are more complex comparing to previous datasets like WEBQUESTIONSSP BID19. It requires multiple-step reasoning using a large set of functions, including comparisons, superlatives, aggregations, and arithmetic operations BID11. See BID8 for more details about the functions. We adopt the NSM framework open sourced in the Memory Augmented Policy Optimization paper (MAPO) BID8, which combines a neural "programmer", which is a seq2seq model augmented by a key-variable memory that can translate a natural language utterance to a program as a sequence of tokens, and a symbolic "computer", which is an Lisp interpreter that implements a domain specific language with built-in functions and provides code assistance by eliminating syntactically or semantically invalid choices. 
For the Lisp interpreter, functions were added according to BID20 BID10 for WIKITABLEQUESTIONS; refer to BID8 for further details of the open-sourced implementation. As in BID8, we consider the problem of contextual program synthesis with weak supervision - i.e., no correct action sequence a is given as part of the training data, and training needs to solve the hard problem of exploring a large program space. However, we will focus on improving decision making (planning) given a pretrained search policy, while previous work mainly focuses on learning the search policies. The problem of weakly supervised contextual program synthesis can be formulated as: generating a program a by using a parametric mapping function, â = f_θ(x), where θ denotes the model parameters. The quality of a generated program â is measured in terms of a scoring or reward function R(â; x, y), with y being the expected correct answer. The reward function evaluates a program by executing it on a real environment and comparing the emitted output against the correct answer. For example, the reward may be binary, that is 1 when the output equals the answer and 0 otherwise. We assume that the context x includes both a natural language input and an environment, for example an interpreter or a database, on which the program will be executed. Given a dataset of context-answer pairs, {(x_i, y_i)}_{i=1}^N, the goal is to find an optimal parameter θ* that parameterizes a mapping of x → a with maximum empirical return on a heldout test set. In this study we further decompose the policy f_θ into the stacking of a search model s_φ(x), which produces a set of candidate programs B given an environment x, and a value model v_ω(a; x, B), which assigns a score s to program a given the environment and all the candidate programs. Therefore, θ = [φ; ω] and f_θ(x) ≈ argmax_{a ∈ B} v_ω(a; x, B), with B = s_φ(x). The search model s_φ can be learnt by optimizing a conditional distribution π_φ(a | x) that assigns a probability to each program given the context. That is, π_φ is a distribution over the countable set of all possible programs, denoted A. Thus ∀a ∈ A: π_φ(a | x) ≥ 0 and Σ_{a ∈ A} π_φ(a | x) = 1. Then, to synthesize candidate programs for a novel context, one may find the most likely programs under the distribution π_φ via exact or approximate inference such as beam search: B = s_φ(x) ≈ argmax_{B ⊂ A} Σ_{a ∈ B} π_φ(a | x). π_φ is typically an autoregressive model such as a recurrent neural network (e.g. BID5): π_φ(a | x) = Π_{t=1}^{|a|} π_φ(a_t | a_{<t}, x), where a_{<t} = (a_1, ..., a_{t-1}) denotes a prefix of the program a. In the absence of ground truth programs, policy gradient techniques (such as REINFORCE BID18) present a way to optimize the parameters of a stochastic policy π_φ via optimization of expected return. Given a training dataset of context-answer pairs, {(x_l, y_l)}_{l=1}^N, the training objective is O_ER(θ) = E_{a ∼ π_φ(a | x)}[R(a; x, y)]. Decision-time planning typically relies on a value network BID14 trained to predict the true reward. In the next section, however, we introduce a max-margin training objective for the value model v_ω, which optimizes to rank rewarded programs higher than non-rewarded programs. We now introduce NPP by first describing the architecture of v_ω - a seq2seq model which goes over candidate program-answer pairs, where the final score of a candidate program is simply the sum of its token scores (Section 4.1).
Secondly, we describe the program token representation, which considers the program, question and beam context used to denote the utility of all tokens within a program (Section 4.2). Finally, we describe a training procedure that is based on a max-margin/ranking objective over candidate programs given a question (Section 4.3). Here we introduce the NPP architecture (FIG0) in the context of semantic parsing, but the framework should be broadly applicable to applying RL in structured prediction. Given a pre-trained search policy π_φ and environment x, NPP first unrolls the policy with beam search to generate candidate programs (plans) B = s_φ(x). Then it scores each program a considering token a_t at every step and global statistics among all programs in B. a_t is represented as a context feature vector C_t (details in Section 4.2). To capture the sequential inputs, the scoring component is a seq2seq model which goes over candidate program-answer pairs and assigns preference scores to each program/answer token. We implement a bi-directional recurrent network with LSTM (Hochreiter & Schmidhuber (1997b); BID4) cells as the first layer of our planner, C_LSTM = LSTM(C). The LSTM hidden state at each step is fed to a one-dimensional convolutional layer with kernel size 3, acting as a feature extractor O^CNN = CNN(C_LSTM), in order to capture inner-function scoring, as most functions are of size 3-5. Finally, we calculate the score per token by feeding the output into a single-node hyperbolic tangent activation layer, v_t = Tanh(O_t^CNN), for token a_t. The final score of a candidate program, v_ω(a) = Σ_{t=1}^{T} v_t, is simply the sum of its token scores. The choice of simply summing token-level scores makes the score easy to interpret (details in Section 4.2). FIG1 gives implementation details of the value model. To better score tokens based on the overall context of the environment we represent each token with a set of context features C_t = [q_tok; q_attn; p_prob; t_prob; t_agree] as in TAB1. q_attn is the softmax attention across question tokens, which helps to discern which part of the question is of most importance to the model when generating the current token. Together, q_tok and q_attn represent the alignment between program and query. t_prob and p_prob are the probabilities of token a_t and program a assigned by the search policy π_φ, which represent the decisions from the search model. t_agree is the number of candidate programs having token a_t at position t. Information such as t_agree is only available to NPP, as it can only be used after all the candidate programs have been generated. We formulate NPP training as a learning-to-rank problem - optimizing pairwise ranking among candidate programs B. Given a training dataset of context-answer pairs, {(x_l, y_l)}_{l=1}^N, the training objective (to be maximized) is O_rank(ω) = Σ_l Σ_{(i,j): R(a_{l,i}; x_l, y_l) > R(a_{l,j}; x_l, y_l)} log σ(v_{l,i} − v_{l,j}), where σ(v) = 1/(1 + e^{−v}) is the sigmoid function, and v_{l,i} = v_ω(a_{l,i}; x_l, s_φ(x_l)) is the estimated value of a_{l,i}, the i-th program generated for context x_l. A code sketch of this scoring network and ranking objective is given at the end of this passage. We compare NPP training in two setups: a single MAPO setup, and a stacked learning setup. For the single MAPO setup, the queries used to produce a pretrained MAPO model are also used to train the NPP model. The dev and test queries are used for the ablation study and final evaluation.
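The scoring network and ranking objective above map onto only a few lines of code. The sketch below is an illustrative PyTorch reconstruction, not the authors' implementation: it follows the stated sizes (a 64-unit bidirectional LSTM, 32 convolution filters of width 3, one tanh output unit per token) over the 5-dimensional feature vectors C_t, and the pairwise objective is written in the RankNet-style log-sigmoid form assumed above.

```python
import torch
import torch.nn as nn

class NPPValueModel(nn.Module):
    """Token scorer: BiLSTM over per-token features, width-3 Conv1d,
    a single tanh unit per token; the program score is the sum of token scores."""

    def __init__(self, feat_dim=5, lstm_units=64, conv_filters=32):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, lstm_units, batch_first=True,
                            bidirectional=True)
        self.conv = nn.Conv1d(2 * lstm_units, conv_filters,
                              kernel_size=3, padding=1)
        self.out = nn.Linear(conv_filters, 1)

    def forward(self, feats):                      # feats: (batch, T, feat_dim)
        h, _ = self.lstm(feats)                    # (batch, T, 2 * lstm_units)
        c = self.conv(h.transpose(1, 2))           # (batch, conv_filters, T)
        v_t = torch.tanh(self.out(c.transpose(1, 2))).squeeze(-1)  # per-token scores (batch, T)
        return v_t.sum(dim=1)                      # v_omega(a) = sum_t v_t


def pairwise_rank_loss(v_pos, v_neg):
    """Push scores of rewarded programs (v_pos) above non-rewarded ones (v_neg)
    drawn from the same beam (negated log-sigmoid of the score difference)."""
    return -torch.log(torch.sigmoid(v_pos - v_neg)).mean()
```

At inference time, the program with the highest v_ω score in the beam would be executed, matching the decomposition f_θ(x) ≈ argmax_{a ∈ B} v_ω(a; x, B) from the previous section.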
Since the NPP training queries are already used to train the MAPO model, the candidate programs are biased towards better reward (compared to those candidates generated for unseen queries). This setup causes NPP to learn from a different distribution than the intended dev/test data. Surprisingly, NPP is still able to improve the predictions of MAPO, as will be discussed in Section 5.3. To overcome the issue with the single MAPO setup, we also generate NPP training data with a stacked learning setup. First, the train and dev queries are merged and split into K equal portions, which with a Leave-One-Out (LOO) scheme form K train/dev sets. Then K MAPO models are trained on the K train sets. Finally, we use each of the K MAPO models to produce a stacked learning dataset by running these models on their respective dev sets. The stacked learning dataset is further split for train/dev purposes for NPP. In this way, each NPP training query is decoded by a MAPO model which has never seen this query before, which avoids the bias in training data generation. Our empirical study is based on the semantic parsing benchmark WIKITABLEQUESTIONS BID11. To focus on studying the planning part of the problem, we assume that the search policy is pretrained using MAPO BID8, and NPP is trained to rescore the candidate programs produced by MAPO. Additionally, we show that stacked learning BID2 is helpful in correctly training a planner given pre-trained policy models. Datasets. WIKITABLEQUESTIONS BID11 contains tables extracted from Wikipedia and question-answer pairs about the tables. See TAB0 as an example. There are 2,108 tables and 18,496 question-answer pairs. We follow the construction in BID11 for converting a table into a directed graph that can be queried, where rows and cells are converted to graph nodes while column names become labeled directed edges. For the questions, we use string match to identify phrases that appear in the table. We also identify numbers and dates using the CoreNLP annotation released with the dataset. Baselines. We compare NPP to BID8, the current state of the art on the WIKITABLEQUESTIONS dataset. MAPO relies on beam search to find candidate programs, and uses the program probability according to the policy to determine the program to execute. MAPO manages to achieve 46.3% accuracy on this task when using an ensemble of size 10. We aim to show that NPP can improve on single MAPO models as well as ensembles of MAPO models. Training Details. We set the stacked learning parameter K = 5 for all our experiments. We set the batch size to 16. We use the Adam optimizer for training with a learning rate of 10^-3. We choose a size of 64 nodes for the LSTM (which becomes 128 as it is bidirectional). The CNN consists of 32 filters with kernel size 3. All the hyper-parameters are tuned on the development set and the models are trained for 10 epochs. Ensemble Details. We formulate the ensemble of K MAPO models with NPP as the sum of normalized NPP scores under the different MAPO models. Let Φ = {φ_k}_{k=1}^K be the K MAPO models to be ensembled. We define the ensembled score of a program a under context x as v_{ω,Φ}(a; x) = Σ_{k=1}^{K} [v'_ω(a; x, s_{φ_k}(x)) − v̄_ω(x; φ_k)], where v̄_ω(x; φ_k) = (1/|s_{φ_k}(x)|) Σ_{a' ∈ s_{φ_k}(x)} v_ω(a'; x, s_{φ_k}(x)) is the average score of the programs in beam s_{φ_k}(x), and v'_ω backs off v_ω to v̄_ω whenever a is not in the beam: v'_ω(a; x, s_{φ_k}(x)) = v_ω(a; x, s_{φ_k}(x)) if a ∈ s_{φ_k}(x), and v̄_ω(x; φ_k) otherwise. A short sketch of this ensembling rule is given at the end of this passage.
Table 3: Feature ablation study on the dev set with a mean of 5 runs on a single MAPO setup.
In order to evaluate the effectiveness of our proposed program token representations, we present a feature ablation test in Figure 3.
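The ensembling rule above can be sketched as follows. Because the original equations are only partially recoverable, the snippet encodes one consistent reading of them: each MAPO model contributes its own beam, a program's NPP score is normalized by subtracting the mean score of that beam, and a program absent from a beam falls back to the mean (i.e., contributes zero after normalization). `mapo.beam_search` and `value_model.score` are assumed interfaces, and programs are assumed to be hashable (e.g., token tuples).

```python
def ensemble_score(program, x, mapo_models, value_model):
    """Sum of beam-normalized NPP scores of `program` over K MAPO models."""
    total = 0.0
    for mapo in mapo_models:
        beam = mapo.beam_search(x)                            # candidate programs of model k
        scores = {a: value_model.score(a, x, beam) for a in beam}
        mean_beam = sum(scores.values()) / max(len(scores), 1)
        total += scores.get(program, mean_beam) - mean_beam   # back off to the mean if absent
    return total
```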
We can see that the program probability p prob produced by the search policy is the most important feature, providing the foundation to NPP scoring. The program agreement feature t agree is also very useful. We often observe cases for which beam B produces program with similar tokens that are not highly valued by the underlying model. By utilizing this feature, we more strongly consider programs which were repeatedly searched by s φ. t agree also helps to identify the programs that are very similar throughout most of the sequence to learn their divergence and grade their utility. Question referencing features such as q tok provide significant importance in providing the program with query level context, ensuring we are filtering or selecting values based on the query context. While the help from attention feature q attn is not significant. We evaluate NPP's impact on MAPO under three different settings, in each of which NPP consistently improves the precision of MAPO.First we consider a single MAPO trained on a single train-dev data split. Similar to other RL models in the literature, MAPO training is a procedure with big variances. Even trained on the same data split multiple times with different random seed gives big variances in accuracy of 0.3% (dev) and 0.5% (test). We use the MAPO model to decode on its own train, dev and test data, in order to generate parallel splits for NPP training. Training and applying NPP this way is able to improve precision, despite of the exposure bias in the NPP training data. However, it does not improve on the variances. We next consider MAPO models that were trained and evaluated on separate train/dev splits created with a Leave-One-Out (LOO) scheme. As described in Section 4.3 we also use these splits to generate a stacked learning dataset for NPP to avoid the data bias problem. We can see that with different data splits MAPO has significantly higher variances on the dev set (1.1%), which is a drawback of RL models in general. Stacked learning helps NPP to improve precision more significantly (1.3% for dev and 1.1% for test). It also helps to reduce the variances to 0.2% on both dev and test. Finally, we consider ensembled MAPO settings, which produces the previous state of the art . We use the same NPP model trained from the stacked learning setting, and apply it to an ensemble of either 5, or 10 MAPO models from BID8. We can see that when applied to 5 MAPO ensemble, NPP can still improve the precision by 1.1%. When applied to 10 MAPO ensemble, NPP can improves the precision by 0.9%. Since the score of a program is the sum of its token scores, it is easy to visualize how NPP plan and select the correct program to execute. We observed that there are two common situations in which MAPO chooses the wrong program from the beam -selecting a spurious program over the semantically correct program and executing the incorrect table filtering or column selection. NPP aims to overcome these non optimal decisions by taking advantage of the larger time horizon and other programs discovered so far. For example NPP may reward earlier program tokens based on program tokens chosen much later on due to the bi-directional recurrent network. We first investigate how NPP demotes spurious programs. MAPO may produce programs which return the correct answer but are not semantically correct. NPP helps solve this by scoring semantically correct programs higher in the beam. An example is shown in Figure 3 for the question "What venue was the latest match played at?" 
when referring to a soccer (football) player given a table of his matches. The top program in beam proposed by MAPO was to first filter all rows for the competition string, which is incorrect considering the context of the table is only competitions. NPP is able to reevaluate the program given the full context. Although the first function (filter_in) is typically used to filter the table for the correct row/column. NPP learns that in this situation it is better to find the last of all rows using the function last. NPP, re-evaluates the first function of the new best program as being high in utility, and scores all tokens within this function higher than the tokens in the incorrect program. We then investigate programs from MAPO which produce wrong answers. An example is shown in Figure 4 which is based on TAB0. MAPO assigns a higher probability to a program in beam that filters on an incorrect column. Because NPP knows program token overlap with the query as well as the attention matrix, it is able to better understand the question and grade the program which is more closely related to the question. In addition to this we notice that the convolutional layer grades full functions within their context, given a kernel of size 3 the parenthesis at the beginning of program already receives a higher NPP score which we interpret as the overall score of executing the function.. Reinforcement learning applied to structured prediction suffers from limited use of the world model as well as not being able to consider future and past program context when generating a sequence. To overcome these limitations we proposed Neural Program Planner (NPP) which is a planning step after candidate program generation. We show that an additional planning model can better evaluate overall structure value. When applied to a difficult SP task NPP improves state of the art by 0.9% and allows intuitive analysis of its scoring model per program token. A MORE NPP SCORING DETAILS
A model-based planning component improves RL-based semantic parsing on WikiTableQuestions.
368
scitldr
Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. To address this cost, a number of quantization schemeshave been proposed - but most of these techniques focused on quantizing weights, which are relatively smaller in size compared to activations. This paper proposes a novel quantization scheme for activations during training - that enables neural networks to work well with ultra low precision weights and activations without any significant accuracy degradation. This technique, PArameterized Clipping acTi-vation (PACT), uses an activation clipping parameter α that is optimized duringtraining to find the right quantization scale. PACT allows quantizing activations toarbitrary bit precisions, while achieving much better accuracy relative to publishedstate-of-the-art quantization schemes. We show, for the first time, that both weights and activations can be quantized to 4-bits of precision while still achieving accuracy comparable to full precision networks across a range of popular models and datasets. We also show that exploiting these reduced-precision computational units in hardware can enable a super-linear improvement in inferencing performance dueto a significant reduction in the area of accelerator compute engines coupled with the ability to retain the quantized model and activation data in on-chip memories. Deep Convolutional Neural Networks (CNNs) have achieved remarkable accuracy for tasks in a wide range of application domains including image processing (He et al. (2016b) ), machine translation , and speech recognition . These state-of-the-art CNNs use very deep models, consuming 100s of ExaOps of computation during training and GBs of storage for model and data. This poses a tremendous challenge to widespread deployment, especially in resource constrained edge environments -leading to a plethora of explorations in compressed models that minimize memory footprint and computation while preserving model accuracy as much as possible. Recently, a whole host of different techniques have been proposed to alleviate these computational costs. Among them, reducing the bit-precision of key CNN data structures, namely weights and activations, has gained attention due to its potential to significantly reduce both storage requirements and computational complexity. In particular, several weight quantization techniques (and) showed significant reduction in the bit-precision of CNN weights with limited accuracy degradation. However, prior work (Hubara et al. (2016b); ) has shown that a straightforward extension of weight quantization schemes to activations incurs significant accuracy degradation in large-scale image classification tasks such as ImageNet . Recently, activation quantization schemes based on greedy layer-wise optimization were proposed (; ;), but achieve limited accuracy improvement. In this paper, we propose a novel activation quantization technique, PArameterized Clipping acTivation function (PACT), that automatically optimizes the quantization scales during model training. PACT allows significant reductions in the bit-widths needed to represent both weights and activations and opens up new opportunities for trading off hardware complexity with model accuracy. The primary contributions of this work include: 1) PACT: A new activation quantization scheme for finding the optimal quantization scale during training. 
We introduce a new parameter α that is used to represent the clipping level in the activation function and is learnt via back-propagation. α sets the quantization scale smaller than ReLU to reduce the quantization error, but larger than a conventional clipping activation function (used in previous schemes) to allow gradients to flow more effectively. In addition, regularization is applied to α in the loss function to enable faster convergence. We provide reasoning and analysis on the expected effectiveness of PACT in preserving model accuracy.3) Quantitative demonstrating the effectiveness of PACT on a spectrum of models and datasets. Empirically, we show that: (a) for extremely low bit-precision (≤ 2-bits for weights and activations), PACT achieves the highest model accuracy compared to all published schemes and (b) 4-bit quantized CNNs based on PACT achieve accuracies similar to single-precision floating point representations.4) System performance analysis to demonstrate the trade-offs in hardware complexity for different bit representations vs. model accuracy. We show that a dramatic reduction in the area of the computing engines is possible and use it to estimate the achievable system-level performance gains. The rest of the paper is organized as follows: Section 2 provides a summary of related prior work on quantized CNNs. Challenges in activation quantization are presented in Section 3. We present PACT, our proposed solution for activation quantization in Section 4. In Section 5 we demonstrate the effectiveness of PACT relative to prior schemes using experimental on popular CNNs. Overall system performance analysis for a representative hardware system is presented in Section 6 demonstrating the observed trade-offs in hardware complexity for different bit representations. Recently, a whole host of different techniques have been proposed to minimize CNN computation and storage costs. One of the earliest studies in weight quantization schemes (and) show that it is indeed possible to quantize weights to 1-bit (binary) or 2-bits (ternary), enabling an entire DNN model to fit effectively in resource-constrained platforms (e.g., mobile devices). Effectiveness of weight quantization techniques has been further improved (and), by ternarizing weights using statistical distribution of weight values or by tuning quantization scales during training. However, gain in system performance is limited when only weights are quantized while activations are left in high precision. This is particularly severe in convolutional neural networks (CNNs) since weights are relatively smaller in convolution layers in comparison to fully-connected (FC) layers. To reduce the overhead of activations, prior work (, Hubara et al. (2016a), and ) proposed the use of fully binarized neural networks where activations are quantized using 1-bit as well. More recently, activation quantization schemes using more general selections in bit-precision (Hubara et al. (2016b); Zhou et al. (2016; 2017);; ) have been studied. However, these techniques show significant degradation in accuracy (> 1%) for ImageNet tasks when bit precision is reduced significantly (≤ 2 − bits). Improvements to previous logarithmic quantization schemes using modified base and offset based on "weighted entropy" of activations have also been studied . recommends that normalized activation, in the process of batch normalization (, BatchNorm), is a good candidate for quantization. 
further exploits the statistics of activations and proposes variants of the ReLU activation function for better quantization. However, such schemes typically rely on local (and greedy) optimizations, and are therefore not adaptable or optimized effectively during training. This is further elaborated in Section 3 where we present a detailed discussion on the challenges in quantizing activations. Quantization of weights is equivalent to discretizing the hypothesis space of the loss function with respect to the weight variables. Therefore, it is indeed possible to compensate weight quantization errors during model training.
Figure 1: (a) Training error, (b) Validation error across epochs for different activation functions (ReLU and clipping) with and without quantization for the ResNet20 model using the CIFAR10 dataset.
Traditional activation functions, on the other hand, do not have any trainable parameters, and therefore the errors arising from quantizing activations cannot be directly compensated using back-propagation. Activation quantization becomes even more challenging when ReLU (the activation function most commonly used in CNNs) is used as the layer activation function (ActFn). ReLU allows the gradient of activations to propagate through deep layers and therefore achieves superior accuracy relative to other activation functions. However, as the output of the ReLU function is unbounded, the quantization after ReLU requires a high dynamic range (i.e., more bit-precision). In Fig. 1 we present the training and validation errors of ResNet20 with the CIFAR10 dataset using ReLU and show that accuracy is significantly degraded with ReLU quantization. It has been shown that this dynamic range problem can be alleviated by using a clipping activation function, which places an upper-bound on the output (Hubara et al. (2016b)). However, because of layer-to-layer and model-to-model differences, it is difficult to determine a globally optimal clipping value. In addition, as shown in Fig. 1, even though the training error obtained using clipping with quantization is less than that obtained with quantized ReLU, the validation error is still noticeably higher than the baseline. Recently, this challenge has been partially addressed by applying a half-wave Gaussian quantization scheme to activations. Based on the observation that the activation after BatchNorm normalization is close to a Gaussian distribution with zero mean and unit variance, they used Lloyd's algorithm to find the optimal quantization scale for this Gaussian distribution and use that scale for every layer. However, this technique also does not fully utilize the strength of backpropagation to optimally learn the clipping level because all the quantization parameters are determined offline and remain fixed throughout the training process. Building on these insights, we introduce PACT, a new activation quantization scheme in which the ActFn has a parameterized clipping level, α. α is dynamically adjusted via gradient descent-based training with the objective of minimizing the accuracy degradation arising from quantization. In PACT, the conventional ReLU activation function in CNNs is replaced with the following:
y = PACT(x) = 0.5 * (|x| − |x − α| + α) = 0 if x ∈ (−∞, 0); x if x ∈ [0, α); α if x ∈ [α, +∞),
where α limits the range of activation to [0, α]. The truncated activation output is then linearly quantized to k bits for the dot-product computations, where
y_q = round(y * (2^k − 1)/α) * α/(2^k − 1).
With this new activation function, α is a variable in the loss function, whose value can be optimized during training.
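A minimal NumPy sketch of the forward pass defined by the two equations above; the function signature and names are illustrative rather than taken from the released implementation.

```python
import numpy as np

def pact_forward(x, alpha, k):
    """Clip activations to [0, alpha], then quantize uniformly to k bits."""
    y = np.clip(x, 0.0, alpha)             # y = 0.5 * (|x| - |x - alpha| + alpha)
    scale = (2 ** k - 1) / alpha
    y_q = np.round(y * scale) / scale      # k-bit linear quantization on [0, alpha]
    return y_q
```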
For back-propagation, the gradient ∂y_q/∂α can be computed using the Straight-Through Estimator (STE), which estimates ∂y_q/∂y as 1. Thus,
∂y_q/∂α = 0 if x ∈ (−∞, α); 1 if x ∈ [α, +∞).
The larger the α, the more the parameterized clipping function resembles the ReLU ActFn. To avoid large quantization errors due to a wide dynamic range, we include an L2-regularizer for α in the loss function. FIG6 illustrates how the value of α changes during full-precision training of CIFAR10-ResNet20 starting with an initial value of 10 and using the L2-regularizer. It can be observed that α converges to values much smaller than the initial value as the training epochs proceed, thereby limiting the dynamic range of activations and minimizing quantization loss. To provide further reasoning on why PACT works, we provide in-depth analysis in Appendices A and B. In particular, we show in Appendix A that PACT is as expressive as ReLU when it is used as an activation function. Further, we explain in Appendix B that PACT finds a balancing point between clipping and quantization errors to minimize their impact on classification accuracy. When activation is quantized, the overall behavior of the network parameters is affected by the quantization error during training. To observe the impact of activation quantization during network training, we sweep the clipping parameter α and record the training loss with and without quantization. Figs. 3a, 3b and 3c show cross-entropy and training loss (cross-entropy + regularization), respectively, over a range of α for the pre-trained SVHN network. The loaded network is trained with the proposed quantization scheme in which ReLU is replaced with the proposed parameterized clipping ActFn for each of its seven convolution layers. We sweep the value of α one layer at a time, keeping all other parameters (weight (W), bias (b), BatchNorm parameters (β, γ), and the α of other layers) fixed when computing the cross-entropy and training loss. The cross-entropy computed via a full-precision forward pass of training is shown in FIG6. In this case, the cross-entropy converges to a small value in many layers as α increases, indicating that ReLU is a good activation function when no quantization is applied. But even for the full-precision case, training the clipping parameter α may help reduce the cross-entropy for certain layers; for example, ReLU (i.e., α = ∞) is not optimal for the act0 and act6 layers. Next, the cross-entropy computed with quantization in the forward pass is shown in FIG1. With quantization, the cross-entropy increases in most cases as α increases, implying that ReLU is no longer effective. We also observe that the optimal α has different ranges for different layers, motivating the need to "learn" the quantization scale via training. In addition, we observe plateaus of cross-entropy for certain ranges of α (e.g., act6), leading to difficulties for gradient descent-based training. Finally, in FIG1, we show the total training loss including both the cross-entropy discussed above and the cost from α regularization. The regularization effectively gets rid of the plateaus in the training loss, thereby favoring convergence for gradient descent-based training. At the same time, α regularization does not perturb the global minimum point. For example, the solid circles in FIG1, which are the optimal α extracted from the pre-trained model, are at the minimum of the training loss curves. The regularization coefficient, λ_α, discussed in the next section, is an additional hyper-parameter which controls the impact of regularization on α.
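For completeness, the straight-through gradient rule given at the start of this passage can be written out explicitly. The sketch below (NumPy, manual backward pass) illustrates the rule and is not the authors' TensorFlow implementation; during training, the L2-regularizer on α would add its own 2·λ_α·α term to the α gradient.

```python
import numpy as np

def pact_backward(x, alpha, grad_output):
    """STE backward pass: the quantizer is treated as identity (dy_q/dy = 1),
    so gradients flow to x where 0 <= x < alpha and to alpha where x >= alpha."""
    grad_x = grad_output * ((x > 0) & (x < alpha)).astype(x.dtype)
    grad_alpha = np.sum(grad_output * (x >= alpha).astype(x.dtype))
    return grad_x, grad_alpha
```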
For this new quantization approach, we studied the scope of α, the choice of initial values of α, and the impact of regularizing α. We briefly summarize our findings below, and present a more detailed analysis in Appendix C. From our experiments, the best scope for α was to share α per layer. This choice also reduces hardware complexity because α needs to be multiplied only once after all the reduced-precision multiply-accumulate (MAC) operations in a layer are completed. Among initialization choices for α, we found it advantageous to initialize α to a larger value relative to typical values of activation, and then apply regularization to reduce it during training. Finally, we observed that applying L2-regularization for α with the same regularization parameter λ used for the weights works reasonably well. We also observed that, as expected, the optimal value for λ_α slightly decreases when higher bit-precision is used, because more quantization levels are available at higher resolution for activation quantization. Additionally, we follow the practice of many other quantized CNN studies (e.g., Hubara et al. (2016b); Zhou et al. (2016)) and do not quantize the first and last layers, as quantizing these has been reported to significantly impact accuracy. We implemented PACT in TensorFlow BID0 using Tensorpack (Zhou et al. (2016)). To demonstrate the effectiveness of PACT, we studied several well-known CNNs. The following is a summary of the dataset-network pairs for the tested CNNs. More implementation details can be found in Appendix D. Note that the baseline networks use the same hyper-parameters and ReLU activation functions as described in the references. For the PACT experiments, we only replace ReLU with PACT but use the same hyper-parameters. In all cases the networks are trained from scratch.

• CIFAR10-ResNet20 (CIFAR10): a convolution (CONV) layer followed by 3 ResNet blocks (16 CONV layers with 3x3 filters) and a final fully-connected (FC) layer.

• SVHN-SVHN (SVHN, Netzer et al. (2011)): 7 CONV layers followed by 1 FC layer.

• IMAGENET-AlexNet (AlexNet, Krizhevsky et al. (2012)): 5 parallel-CONV layers followed by 3 FC layers. BatchNorm is used before ReLU.

• IMAGENET-ResNet18 (ResNet18, He et al. (2016)): a CONV layer followed by 8 ResNet blocks (16 CONV layers with 3x3 filters) and a final FC layer. The "full pre-activation" ResNet structure (He et al. (2016a)) is employed.

• IMAGENET-ResNet50 (ResNet50, He et al. (2016)): a CONV layer followed by 16 ResNet "bottleneck" blocks (a total of 48 CONV layers) and a final FC layer. The "full pre-activation" ResNet structure (He et al. (2016a)) is employed.

For comparisons, we include accuracy reported in prior work such as DoReFa (Zhou et al. (2016)). Detailed experimental settings for each of these papers, as well as a full comparison of accuracy (top-1 and top-5) for AlexNet, ResNet18, and ResNet50, can be found in Appendix E. In the following section, we present key results demonstrating the effectiveness of PACT relative to prior work. We first evaluate our activation quantization scheme using various CNNs. FIG3 shows the training and validation error of PACT for the tested CNNs. Overall, the higher the bit-precision, the closer the training/validation errors are to the full-precision reference. Specifically, it can be seen that training using bit-precision higher than 3 bits converges almost identically to the full-precision baseline.
The final validation error has less than 1% difference relative to the full-precision validation error for all cases when the activation bit-precision is at least 4-bits. We further compare activation quantization performance with 3 previous schemes, DoReFa, LPBN, and HWGQ. We use accuracy degradation as the quantization performance metric, which is calculated as the difference between full-precision accuracy and the accuracy for each quantization bit-precision. FIG3 shows accuracy degradation (top-1) for ResNet18 (left) and ResNet50 (right) for increasing activation bit-precision, when the same weight bit-precision is used for each quantization scheme (indicated within the parenthesis). Overall, we observe that accuracy degradation is reduced as we increase the bit-precision of activations. For both ResNet18 and ResNet50, PACT achieves consistently lower accuracy degradation compared to the other quantization schemes, demonstrating the robustness of PACT relative to prior quantization approaches. In this section, we demonstrate that although PACT targets activation quantization, it does not preclude us from using weight quantization as well. We used PACT to quantize activation of CNNs, and DoReFa scheme to quantize weights. TAB0 summarizes top-1 accuracy of PACT for the tested CNNs (CIFAR10, SVHN, AlexNet, ResNet18, and ResNet50). We also show the accuracy of CNNs when both the weight and activation are quantized by DoReFa's scheme. As can be seen, with 4 bit precision for both weights and activation, PACT achieves full-precision accuracy consistently across the networks tested. To the best of our knowledge, this is the lowest bit precision for both weights and activation ever reported, that can achieve near (≤ 1%) full-precision accuracy. We further compare the performance of PACT-based quantized CNNs with 7 previous quantization schemes (DoReFa, BalancedQ, WRPN, FGQ, WEP, LPBN, and HWGQ). Fig. 5 shows comparison of accuracy degradation (top-1) for AlexNet, ResNet18, and ResNet50. Overall, the accuracy degradation decreases as bit-precision for activation or weight increases. For example, in Fig. 5a, the accuracy degradation decreases when activation bit-precision increases given the same weight precision or when weight bit-precision increases given the same activation bit-precision. PACT outperforms other schemes for all the cases. In fact, AlexNet even achieves marginally better accuracy (i.e., negative accuracy degradation) using PACT instead of full-precision. In this section, we demonstrate the gain in system performance as a of the reduction in bit-precision achieved using PACT-CNN. To this end, as shown in Fig. 6(a), we consider a DNN accelerator system comprising of a DNN accelerator chip, comprising of multiple cores, interfaced with an external memory. Each core consists of a 2D-systolic array of fixed-point multiply-andaccumulate (MAC) processing elements on which DNN layers are executed. Each core also contains an on-chip memory, which stores the operands that are fed into the MAC processing array. To estimate system performance at different bit precisions, we studied different versions of the DNN accelerator each comprising the same amount of on-chip memory, external memory bandwidth, and occupying iso-silicon area. First, using real hardware implementations in a state of the art technology (14 nm CMOS), we accurately estimate the reduction in the MAC area achieved by aggressively scaling bit precision. As shown in Fig. 
6(b), we achieve ∼14× improvement in density when the bit-precisions of both activations and weights are uniformly reduced from 16 bits to 2 bits. Next, to translate the reduction in area to improvement in overall performance, we built a precisionconfigurable MAC unit, whose bit precision can be modulated dynamically. The peak compute capability (FLOPs) of the MAC unit varied such that we achieve iso-area at each precision. Note that the total on-chip memory and external bandwidth remains constant at all precisions. We estimate the overall system performance using DeepMatrix, a detailed performance modelling framework for DNN accelerators (Venkataramani et al.). Fig. 6(c) shows the gain in inference performance for the ResNet50 DNN benchmark. We study the performance improvement using different external memory bandwidths, namely, a bandwidth unconstrained system (infinite memory bandwidth) and two bandwidth constrained systems at 32 and 64 GBps. In the bandwidth unconstrained scenario, the gain in performance is limited by how amenable it is to parallelize the work. In this case, we see a near-linear increase in performance for upto 4 bits and a small drop at extreme quantization levels (2 bits).Practical systems, whose bandwidths are constrained, (surprisingly) exhibit a super-linear growth in performance with quantization. For example, when external bandwidth is limited to 64 GBps, quantizing from 16 to 4 bits leads to a 4× increase in peak FLOPs but a 4.5× improvement in performance. This is because, the total amount of on-chip memory remains constant, and at very low precision some of the data-structures begin to fit within the memory present in the cores, thereby avoiding data transfers from the external memory. Consequently, in bandwidth limited systems, reducing the amount of data transferred from off-chip can provide an additional boost in system performance beyond the increase in peak FLOPs. Note that for the 4 and 2 bit precision configurations, we still used 8 bit precision to execute the first and last layers of the DNN. If we are able to quantize the first and last layers as well to 4 or 2 bits, we estimate an additional 1.24× improvement in performance, motivating the need to explore ways to quantize the first and last layers. In this paper, we propose a novel activation quantization scheme based on the PArameterized Clipping acTivation function (PACT). The proposed scheme replaces ReLU with an activation function with a clipping parameter, α, that is optimized via gradient descent based training. We provide analysis on why PACT outperforms ReLU when quantization is applied during training. Extensive empirical evaluation using several popular convolutional neural networks, such as CIFAR10, SVHN, AlexNet, ResNet18 and ResNet50, shows that PACT quantizes activations very effectively while simultaneously allowing weights to be heavily quantized. In comparison to all previous quantization schemes, we show that both weights and activations can be quantized much more aggressively (down to 4-bits) -while achieving near (≤ 1%) full-precision accuracy. In addition, we have shown that the area savings from using reduced-precision MAC units enable a dramatic increase in the number of accelerator cores in the same area, thereby, significantly improving overall system-performance. When used as an activation function of the neural network, PACT is as expressive as ReLU. 
This is because the clipping parameter α introduced in PACT allows flexibility in adjusting the dynamic range of activation for each layer. We demonstrate in the simple example below that PACT can reach the same solution as ReLU via SGD. Lemma A.1. Consider a single-neuron network with PACT; x = w · a, y = PACT(x), where a is the input and w is the weight. This network can be trained with SGD to find the output the network with ReLU would produce. Proof. Consider a sample of training data (a, y*). For illustration purposes consider the mean-square-error (MSE) cost function L = ½(y − y*)². In the clipped region (x ≥ α) we have y = α and ∂y/∂α = 1, so when α is updated by SGD, α ← α − η ∂L/∂α = α − η(y − y*), where η is a learning rate. Note that during this update, the weight is not updated, as ∂L/∂w = (∂L/∂y)(∂y/∂x)a = 0 in the clipped region. From MSE, ∂L/∂y = (y − y*). Therefore, if y* > x, α is increased on each update until α ≥ x, and then the PACT network behaves the same as the ReLU network. Interestingly, if y* ≤ y or y < y* < x, α is decreased or increased to converge to y*. Note that in this case, ReLU would pass the erroneous output x and increase the cost function, which would need to be fixed by updating w with ∂L/∂w. PACT, on the other hand, ignores this erroneous output by directly adapting the dynamic range to match the target output y*. In this way, the PACT network can be trained to produce output which converges to the same target that the ReLU network would achieve via SGD. In general cases, ∂L/∂α = Σ_i ∂L/∂y_i over the clipped outputs, and PACT considers the outputs of the neurons together to change the dynamic range. There are two cases: if output x_i is not clipped, then the network is trained via back-propagation of the gradient to update the weights; if output x_i is clipped, then α is increased or decreased based on how close the overall output is to the target. Hence, there are configurations under which SGD would lead to a solution close to the one which the network with ReLU would achieve. FIG6 demonstrates that ResNet20 with PACT converges almost identically to the network with ReLU. In Section 3, when we briefly discussed the challenges in activation quantization, we mentioned that there is a trade-off between errors due to clipping and quantization. As the clipping level increases, a larger range of activation can be passed to the next layer of the neural network, causing less clipping error (ErrClip_i = max(x_i − α, 0)). However, the increased dynamic range incurs a larger quantization error, since its magnitude is proportional to the clipping level (the quantization step, and hence the rounding error per activation, scales as α/(2^k − 1) with k-bit quantization). This imposes the challenge of finding a proper clipping level to balance between clipping and quantization errors. This trade-off can be better observed in FIG7, which shows the normalized mean-square-error caused by clipping and quantization during training of CIFAR10-ResNet20 with different clipping levels. It can be seen that activation functions with a large dynamic range, such as ReLU, would suffer quantization errors whose magnitude increases exponentially as the bit-precision k decreases. This explains why the network with ReLU fails to converge when the activation is quantized (Fig. 1). PACT can find a balancing point between clipping and quantization errors. As explained in Appendix A, PACT adjusts the dynamic range based on how close the output is to the target. As both clipping and quantization errors distort the output away from the target, PACT would increase or decrease the dynamic range during training to minimize both clipping and quantization errors. FIG7 shows how PACT balances the clipping and quantization errors during training.
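The clipping/quantization trade-off discussed here is easy to reproduce numerically. The sketch below is our own illustration (the simulated activation distribution is an assumption, not data from the paper): it sweeps the clipping level α and reports the mean-square clipping and quantization errors for k-bit quantization of the range [0, α].

```python
import numpy as np

def clip_and_quant_errors(x, alpha, k):
    """Mean-square clipping and quantization errors for clipping level alpha
    and k-bit uniform quantization of the clipped range [0, alpha]."""
    y = np.clip(x, 0.0, alpha)
    scale = (2 ** k - 1) / alpha
    y_q = np.round(y * scale) / scale
    err_clip = np.mean(np.maximum(x - alpha, 0.0) ** 2)   # activation mass lost to clipping
    err_quant = np.mean((y_q - y) ** 2)                    # rounding error inside [0, alpha]
    return err_clip, err_quant

rng = np.random.default_rng(0)
x = np.abs(rng.normal(0.0, 2.0, size=100_000))             # stand-in for post-ReLU activations
for alpha in [1, 2, 4, 8, 16]:
    ec, eq = clip_and_quant_errors(x, alpha, k=4)
    print(f"alpha={alpha:>2}  clip_err={ec:.4f}  quant_err={eq:.6f}")
```

Small α drives the clipping error up while the quantization error shrinks, and vice versa, which is the balance PACT's trainable α is meant to find.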
CIFAR10-ResNet20 is trained with clipping activation function with varying clipping level α from 1 to 16. When activation is quantized, the network trained with clipping activation shows significant accuracy degradation as α increases. This is consistent with the trend in quantization error we observed in FIG7. In this case, PACT achieves the best accuracy one of the clipping activation could achieve, but without exhaustively sweeping over different clipping levels. In other words, PACT auto-tunes the clipping level to achieve best accuracy without incurring significant computation overhead. PACT's auto-tuning of dynamic range is critical in efficient yet robust training of large scale quantized neural networks, especially because it does not increase the burden for hyper-parameter tuning. In fact, we used the same hyper-parameters as well as the original network structure for all the models we tested, except replacing ReLU to PACT, when we applied activation quantization. Without quantization, there is a trend that validation error decreases as α increases. Surprisingly, some of the cases even outperforms the ReLU network. In this case, PACT also achieves comparable accuracy as ReLU, confirming its expressivity discussed in Section A. In this section, we present details on the hyper-parameters and design choices studied for PACT. One of key questions is the optimal scope for α. In other words, determining which neuron activations should share the same α. We considered 3 possible choices: (a) Individual α for each neuron activation, (b) Shared α among neurons within the same output channel, and (c) Shared α within a layer. We empirically studied each of these choices of α (without quantization) using CIFAR10-ResNet20 and determined training and validation error for PACT. As shown in FIG8, sharing α per layer is the best choice in terms of accuracy. This is in fact a preferred option from the perspective of hardware complexity as well, since α needs to be multiplied only once after all multiply-accumulate(MAC) operations in a layer are completed. The optimization behavior of α can be explained from the formulation of the parameterized clipping function. From Eq. 3 it is clear that, if α is initialized to a very small value, more activations fall into the range for the nonzero gradient, leading to unstable α in the early epochs, potentially causing accuracy degradation. On the other hand, if α is initialized to a very large value, the gradient becomes too small and α may be stuck at a large value, potentially suffering more on quantization error. Therefore, it is intuitive to start with a reasonably large value to cover a wide dynamic range and avoid unstable adaptation of α, but apply regularizer to reduce the value of α so as to alleviate quantization error. In practice, we found that applying L2-regularization for α while setting its coefficient λ α the same as the L2-regularization coefficient for weight, λ, works well. FIG9 shows that validation error for PACT-quantized CIFAR10-ResNet20 does not significantly vary for a wide range of λ α. We also observed that, as expected, the optimal value for λ α slightly decreases when higher bit-precision is used because more quantization levels in higher resolution for activation quantization. FORMULA0 ) follow the convention to keep the first and last layer in full precision during training, since quantizing those layers lead to substantial accuracy degradation. 
We empirically studied this convention for the proposed quantization approach on CIFAR10-ResNet20. In FIG10, the only difference among the curves is whether the input activations and weights of the first convolution layer or the last fully-connected layer are quantized. As can be seen from the plots, there can be noticeable accuracy degradation if the first or last layers are aggressively quantized. But computation in floating point is very expensive in hardware. Therefore, we further studied the option of quantizing the first and last layers with a higher quantization bit-precision than the bit-precision of the other layers. TAB2 shows that, independent of the quantization level for the other layers, there is little accuracy degradation if the first and last layers are quantized with 8 bits. This motivates us to employ reduced-precision computation even for the first/last layers. In this section, we summarize details of our CNN implementation as well as our training settings, which are based on the default networks provided by Tensorpack. Unless mentioned otherwise, ReLU following BatchNorm is used as the ActFn of the convolution (CONV) layers, and Softmax is used for the fully-connected (FC) layer. Note that the baseline networks use the same hyper-parameters and ReLU activation functions as described in the references. For PACT experiments, we only replace ReLU with PACT; the same hyper-parameters are used. In all cases the networks are trained from scratch. The CIFAR10 dataset is an image classification benchmark containing 32 × 32 pixel RGB images. It consists of 50K training and 10K test image sets. We used the "standard" ResNet structure (He et al. (2016a)) which consists of a CONV layer followed by 3 ResNet blocks (16 CONV layers with 3x3 filter) and a final FC layer. We used stochastic gradient descent (SGD) with momentum of 0.9 and a learning rate starting from 0.1 and scaled by 0.1 at epochs 60 and 120. An L2-regularizer with decay of 0.0002 is applied to the weights. A mini-batch size of 128 is used, and the maximum number of epochs is 200. The SVHN dataset is a real-world digit recognition dataset containing photos of house numbers in Google Street View images, where the "cropped" 32 × 32 colored images (resized to 40 × 40 as input to the network) centered around a single character are used. It consists of 73257 digits for training and 26032 digits for testing. We used a CNN model which contains 7 CONV layers followed by 1 FC layer. We used ADAM (Kingma & Ba) with epsilon 10^−5 and a learning rate starting from 10^−3 and scaled by 0.5 every 50 epochs. An L2-regularizer with decay of 10^−7 is applied to the weights. A mini-batch size of 128 is used, and the maximum number of epochs is 200. The IMAGENET dataset consists of 1000 categories of objects with over 1.2M training and 50K validation images. Images are first resized to 256 × 256 and randomly cropped to 224 × 224 prior to being used as input to the network. We used a modified AlexNet, ResNet18 and ResNet50. We used an AlexNet network in which the local contrast renormalization (RNorm) layer is replaced with a BatchNorm layer. We used ADAM with epsilon 10^−5 and a learning rate starting from 10^−4 and scaled by 0.2 at epochs 56 and 64. An L2-regularizer with decay factor of 5 × 10^−6 is applied to the weights.
The mini-batch size of 128 is used, and the maximum number of epochs is 100.ResNet18 consists of a CONV layer followed by 8 ResNet blocks (16 CONV layers with 3x3 filter) and a final FC layer. "full pre-activation" ResNet structure (He et al. (2016a) ) is employed. ResNet50 consists of a CONV layer followed by 16 ResNet "bottleneck" blocks (total 48 CONV layers) and a final FC layer. "full pre-activation" ResNet structure (He et al. (2016a) ) is employed. For both ResNet18 and ResNet50, we used stochastic gradient descent (SGD) with momentum of 0.9 and learning rate starting from 0.1 and scaled by 0.1 at epoch 30, 60, 85, 95. L2-regularizer with decay of 10 −4 is applied to weight. The mini-batch size of 256 is used, and the maximum number of epochs is 110. • DoReFa-Net : A general bit-precision uniform quantization schemes for weight, activation, and gradient of DNN training. We compared the experimental of DoReFa for CIFAR10, SVHN, AlexNet and ResNet18 under the same experimental setting as PACT. Note that a clipped absolute activation function is used for SVHN in DoReFa.• Balanced Quantization : A quantization scheme based on recursive partitioning of data into balanced bins. We compared the reported top-1/top-5 validation accuracy of their quantization scheme for AlexNet and ResNet18.• Quantization using Wide Reduced-Precision Networks : A scheme to increase the number of filter maps to increase robustness for activation quantization. We compared the reported top-1 accuracy of their quantization with various weight/activation bit-precision for AlexNet.• Fine-grained Quantization : A direct quantization scheme (i.e., little re-training needed) based on fine-grained grouping (i.e., within a small subset of filter maps). We compared the reported top-1 validation accuracy of their quantization with 2-bit weight and 4-bit activation for AlexNet and ResNet50.• Weighted-entropy-based quantization : A quantization scheme that considers statistics of weight/activation. We compared the top-1/top-5 reported accuracy of their quantization with various bit-precision for AlexNet, where the first and last layers are not quantized.• Low-precision batch normalization : A scheme for activation quantization in the process of batch normalization. We compared the top-1/top-5 reported accuracy of their quantization with 3-5 bit precision for activation. The first layer activation is not quantized.• Half-wave Gaussian quantization : A quantization scheme that finds the scale via Lloyd search on Normal distribution. We compared the top-1/top-5 reported accuracy for their quantization with 1-bit weight and varying activation bit-precision for AlexNet, and 2-bit weight for ResNet18 and ResNet50. The first and last layers are not quantized. In this section, we present full comparison of accuracy (top-1 and top-5) of the tested CNNs (AlexNet, ResNet18, ResNet50) for image classification on IMAGENET dataset. All the data points for PACT and DoReFa are obtained by running experiments on Tensorpack. All the other data points are accuracy reported in the corresponding papers. As can be seen, PACT achieves the best accuracy across the board for various flavors of quantization. We also observe that using PACT for activation quantization enables more aggressive weight quantization without loss in accuracy.
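For completeness, the weight quantizer used alongside PACT in these experiments follows the DoReFa scheme. The sketch below is our own rendering of that k-bit weight quantization rule as we understand it from the DoReFa-Net description; it is not code from either paper.

```python
import numpy as np

def quantize_k(x, k):
    """Uniform k-bit quantization of values lying in [0, 1]."""
    n = 2 ** k - 1
    return np.round(x * n) / n

def dorefa_weight_quant(w, k):
    """DoReFa-style k-bit weight quantization (our reading of the scheme):
    squash weights with tanh, rescale to [0, 1], quantize, then map back to [-1, 1]."""
    t = np.tanh(w)
    t = t / (2.0 * np.max(np.abs(t))) + 0.5
    return 2.0 * quantize_k(t, k) - 1.0

w = np.random.default_rng(0).normal(size=(3, 3))
print(dorefa_weight_quant(w, k=4))
```

In training, the quantized weights would be used in the forward pass while gradients flow to the full-precision weights via the straight-through estimator, analogous to the activation path above.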
A new way of quantizing the activations of deep neural networks via a parameterized clipping function that optimizes the quantization scale via stochastic gradient descent.
369
scitldr
Pruning neural network parameters is often viewed as a means to compress models, but pruning has also been motivated by the desire to prevent overfitting. This motivation is particularly relevant given the perhaps surprising observation that a wide variety of pruning approaches increase test accuracy despite sometimes massive reductions in parameter counts. To better understand this phenomenon, we analyze the behavior of pruning over the course of training, finding that pruning's effect on generalization relies more on the instability it generates (defined as the drops in test accuracy immediately following pruning) than on the final size of the pruned model. We demonstrate that even the pruning of unimportant parameters can lead to such instability, and show similarities between pruning and regularizing by injecting noise, suggesting a mechanism for pruning-based generalization improvements that is compatible with the strong generalization recently observed in over-parameterized networks. Pruning weights and/or convolutional filters from deep neural networks (DNNs) can substantially shrink parameter counts with minimal loss in accuracy (; ; a; ; ; ; ;), enabling broader application of DNNs via reductions in memory-footprint and inference-FLOPs requirements. Moreover, many pruning methods have been found to actually improve generalization (measured by model accuracy on previously unobserved inputs) (; ;). Consistent with this, pruning was originally motivated as a means to prevent over-parameterized networks from overfitting to comparatively small datasets . Concern about over-parameterizing models has weakened, however, as many recent studies have found that adding parameters can actually reduce a DNN's generalization-gap (the drop in performance when moving from previously seen to previously unseen inputs), even though it has been shown that the same networks have enough parameters to fit large datasets of randomized data . Potential explanations for this unintuitive phenomenon have come via experiments (; ; ; ;), and the derivation of bounds on DNN generalization-gaps that suggest less overfitting might occur as parameter counts increase . This research has implications for neural network pruning, where a puzzling question has arisen: if larger parameter counts don't increase overfitting, how does pruning parameters throughout training improve generalization? To address this question we first introduce the notion of pruning instability, which we define to be the size of the drop in network accuracy caused by a pruning iteration (Section 3). We then empirically analyze the instability and generalization associated with various magnitude-pruning (b) algorithms in different settings, making the following contributions: 1. We find a tradeoff between the stability and potential generalization benefits of pruning, and show iterative pruning's similarity to regularizing with noise-suggesting a mechanism unrelated to parameter counts through which pruning appears to affect generalization. 2. We characterize the properties of pruning algorithms which lead to instability and correspondingly higher generalization. There are various approaches to pruning neural networks. Pruning may be performed post-hoc (; ; b;), or iteratively throughout training, such that there are multiple pruning events as the model trains (; ;). Most methods prune parameters that appear unimportant to the function computed by the neural network, though means of identifying importance vary. 
Magnitude pruning (b) uses small-magnitude to indicate unimportance and has been shown to perform competitively with more sophisticated approaches . Many pruning studies have shown that the pruned model has heightened generalization (; ;), consistent with the fact that pruning may be framed as a regularization (rather than compression) approach. For example, variational Bayesian approaches to pruning via sparsity-inducing priors can describe weight removal as a process that reduces model description length, which in theory may help improve generalization . Similarly, the idea that models may be described more succinctly at flat minima has motivated pruning in service of flat minimum search . notes, however, that flatness can be arbitrarily modified by reparameterizing the function, and sharp minima can generalize well. VC dimension (a measure of model capacity) has motivated the use of iterative pruning to improve generalization . Overfitting can be bounded above by an increasing function of VC dimension, which itself often increases with parameter counts, so fewer parameters can lead to a guarantee of less overfitting . Unfortunately, such bounds can be so loose in practice that tightening them by reducing parameter counts need not translate to better generalization . Rather than support parameter-count-based arguments for generalization in DNNs, our suggest iterative DNN pruning may improve generalization by creating various noisy versions of the internal representation of the data, which unpruned parameters try to fit to, as in noise-injection regularization . Dropout creates particularly similar noise, as it temporarily sets random subsets of layer outputs to zero (likely changing an input's internal representation every epoch). Indeed, applying dropout-like zeroing noise to a subset of features during training can encourage robustness to a post-hoc pruning of that subset . Iterative DNN pruning noise ultimately differs, however, as it is: applied less frequently, not temporary (except in algorithms with weight re-entry), usually not random, and less well studied. Given a neural network and set of test data, let t be the top-1 test accuracy, the fraction of test data examples correctly classified multiplied by 100. We define a pruning algorithm's instability on pruning iteration i in terms of t measured immediately before (t pre,i) and immediately after (t post,i) pruning: instability i = t pre,i − t post,i. In other words, the instability is the size of the accuracy drop caused by a particular pruning event. (tpost,i) This measure is related to a weight's importance (sometimes referred to as "saliency"; ;) to the test accuracy, in that less stable pruning algorithms target more important sets of weights (all else equal). The stability of a pruning algorithm may be affected by many factors. Our experiments (Section 4) explore the effects of the following: pruning target, pruning schedule, iterative pruning rate, and model. The remainder of this section provides an overview of these factors and demonstrates a need for a novel pruning target, which we derive. In all of our experiments, we use iterative magnitude pruning (b), which removes weights according to some magnitude-based rule, retrains the ing smaller network to recover from the pruning, and repeats until the desired size reduction is met. We denote pruning algorithms that target the smallest-magnitude parameters with an "S" subscript (e.g. 
prune S), random parameters with an "R" subscript, and the largest-magnitude parameters with an "L" subscript. The usual approach to pruning involves removing parameters that have the smallest magnitudes, or, similarly, those parameters least important to the loss function as determined by some other metric (; ; ; ; ; ;). The correlation between parameter magnitude and importance weakens in the presence of batch normalization (BN). Without batch normalization, a convolutional filter with weights W will produce feature map activations with half the magnitude of a filter with weights 2W: filter magnitude clearly scales the output. With batch normalization, however, the feature maps are normalized to have zero mean and unit variance, and their ultimate magnitudes depend on the BN affine-transformation parameters γ and β. As a result, in batch-normalized networks, filter magnitude does not scale the output, and equating small magnitude and unimportance may therefore be particularly flawed. This has motivated approaches that use the scale parameter γ's magnitude to find the convolutional filters that are important to the network's output. Here, we derive a novel approach to determining filter importance/magnitude that incorporates both γ and β. To approximate the expected value/magnitude of a batch-normalized, post-ReLU feature map activation, we start by defining the 2D feature map produced by convolution with BN as M = γ·BN(W * x) + β, where BN(·) denotes normalization of the convolution output to zero mean and unit variance. We approximate the activations within this feature map as M_ij ∼ N(β, γ), i.e., normally distributed with mean β and standard deviation γ. This approximation is justified if central limit theorem assumptions are met by the dot products in W * x, and we empirically show in Figure A.1 that this approximation is highly accurate early in training, though it becomes less accurate as training progresses. Given this approximation, the post-ReLU feature map R = max{0, M} has elements R_ij that are either 0 or samples from a truncated normal distribution with left truncation point l = 0, right truncation point r = ∞, and mean µ = β + γ·φ(β/γ)/Φ(β/γ), where φ(x) and Φ(x) are the standard normal distribution's PDF and CDF (respectively) evaluated at x. Thus, an approximation to the expected value of R_ij is given by E[R_ij] ≈ Φ(β/γ)·µ = β·Φ(β/γ) + γ·φ(β/γ). We use the phrase "E[BN] pruning" to denote magnitude pruning that computes filter magnitude using this derived estimate of E[R_ij]. E[BN] pruning has two advantages. First, this approach avoids the problematic assumption that filter importance is tied to filter ℓ2-norm in a batch-normalized network. Accordingly, we hypothesize that E[BN] pruning can grant better control of the stability of the neural network's output than pruning based on filters' ℓ2-norms. Second, the complexity of the calculation is negligible, as it requires (per filter) just a handful of arithmetic operations on scalars and two PDF and CDF evaluations, which makes it cheaper than a data-driven approach (e.g. approximating the expected value via the sample mean of feature map activations for a batch of feature maps). We considered three basic model classes: a simple network with convolutions (2x32, pool, 2x64, pool) and fully connected layers that we denote Conv4, VGG11 with its fully-connected layers replaced by a single fully-connected layer, and ResNet18. All convolutions are 3x3. We trained these models using Adam with initial learning rate lr = 0.001, as we found Adam more helpful than SGD for recovering from unstable pruning (seemingly consistent with the observation that recovery from pruning is more difficult when learning rates are low): it allowed the network to recover more easily.
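Under the stated approximation (pre-ReLU activations distributed as N(β, γ)), the E[BN] filter score is a closed-form function of the BN affine parameters. The sketch below is our own illustration of that calculation using SciPy's standard-normal PDF/CDF; it is not the authors' released code.

```python
import numpy as np
from scipy.stats import norm

def ebn_score(beta, gamma):
    """Approximate expected post-ReLU activation of a batch-normalized feature map.

    Assumes pre-ReLU activations M ~ N(beta, gamma), with gamma used as the standard
    deviation, so that E[max(0, M)] = beta * Phi(beta/gamma) + gamma * phi(beta/gamma)."""
    gamma = np.abs(gamma)                       # treat gamma as a (positive) standard deviation
    z = beta / gamma
    return beta * norm.cdf(z) + gamma * norm.pdf(z)

# score every filter of a layer directly from its BN affine parameters
betas = np.array([-0.3, 0.0, 0.5, 1.2])
gammas = np.array([0.8, 1.0, 0.2, 0.5])
scores = ebn_score(betas, gammas)
prune_order = np.argsort(scores)                # prune_S removes the lowest-scoring filters first
print(scores, prune_order)
```

The score needs only β and γ (no forward passes over data), which is the cheapness argument made in the text.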
The pruning algorithms we consider are iterative: we define a pruning schedule that describes the epochs on which pruning events occur, and set a corresponding (constant) iterative pruning rate that will ensure the total pruning percentage is met by the end of training (please see Appendix A.5 for rate and schedule details). Thus, throughout training, pruning steadily removes DNN parameters, with the iterative pruning rate determining the number pruned per event. While our plots label each pruning configuration with its iterative pruning rate, the total pruning percentages were: 42% of VGG11, 46% of ResNet18, and 10% of Conv4 (except in Appendix A.4, wherein we prune 85% of Conv4). Pruning studies often aim to compress pre-trained models that generalize well, and consequently, much work has focused on metrics to identify parameter importance: if you can find the parameters that matter the least to the function computed by the DNN, then you can prune more parameters without significantly harming accuracy. As a bonus, such pruning methods can sometimes even increase generalization (; ;). However, the mechanism by which pruning induces higher generalization remains unclear. Here, rather than investigate how to best maintain accuracy when pruning the network, we instead focus on understanding the mechanisms underlying these generalization improvements. Can improved generalization in pruned DNNs be explained by parameter-count reduction alone, or rather, do the properties of the pruning algorithm play an important role in generalization? As removing parameters from a DNN via pruning may make the DNN less capable of fitting to the noise in the training data, as originally suggested in; , we might expect that the generalization improvements observed in pruned DNNs are entirely explained by the number of parameters removed. In which case, methods that prune equal amounts of parameters would generalize similarly. Alternatively, perhaps some aspect of the pruning algorithm itself is responsible for increased generalization. This seems plausible as the reported generalization benefits of pruning vary widely across studies. One possible explanation for this variability is differences in the pruning algorithms themselves. A key differentiator of these algorithms is their stability: more stable approaches may compute a very close approximation to the way the loss changes with respect to each parameter and prune a single parameter at a time , while less stable approaches may assume that parameter magnitude and importance are roughly similar and prune many weights all at once (b). Therefore, to the extent that differences in pruning algorithms explain differences in pruning-based generalization improvements, we might expect to observe a relationship between generalization and pruning stability. To determine whether pruning algorithm stability affects generalization, we compared the instability and final top-1 test accuracy of several pruning algorithms with varying pruning targets and iterative pruning rates (Figure 1). Consistent with the nature of the pruning algorithm playing a role in generalization, we observed that more unstable pruning algorithms created higher final test accuracies than those which were stable (Figure 1, right; VGG11: Pearson's correlation r = .84, p-value = 1.6e−11; ResNet18: r = .65, p-value = 5e−5). 
While many pruning approaches have aimed to induce as little instability as possible, these suggest that pruning techniques may actually facilitate better generalization when they induce more instability. Furthermore, these suggest that parameter-count based arguments may not be sufficient to explain generalization in pruned DNNs, and suggest that the precise pruning method plays a critical role in this process. Figure 1 also demonstrates that pruning events for prune L with a high iterative pruning rate (red curve, pruning as much as 13% of a given convolutional layer per pruning iteration) are substantially more destabilizing than other pruning events, but despite the dramatic pruning-induced drops in performance, the network recovers to higher performance within a few epochs. Several of these pruning events are highlighted with red arrows. Please see Appendix A.2 for visualization of the epoch-wise instabilities of each method in VGG11, and Appendix A.3 for an 2 -norm pruning version of Figure 1, which has qualitatively similar . Interestingly, we initially observed that ResNet18 adapted to pruning events more quickly than VGG11 (accuracy rebounded after pruning then flattened soon after instead of climbing steadily). Thinking that shortcut connections were allowing the network to adapt to pruning events too easily, we tried pruning a larger amount of the penultimate block's output layer: this reduced the number of shortcut connections to the final block's output layer, lengthened the adaptation period, and improved generalization. This simple improvement of pruning hyperparameters suggests a potential for further optimization of the shown. Please see Appendix A.5.1 for all hyperparameters/details of these experiments. We have demonstrated that, perhaps surprisingly, pruning larger magnitude weights via the E[BN] algorithm can in larger test accuracy improvements (Figure 1). This suggests a positive correlation between pruning target magnitude and pruning's regularization effect. However, it's not clear whether this relationship holds more generally; i.e., perhaps it was caused by a feature of our Figure 2: When pruning 10% of Conv4's largest dense layer, the final generalization gap depends on the magnitude of the weights that were pruned during training. This is particularly true when using unstructured pruning (left) rather than structured pruning (right). E[BN] algorithm or the networks examined. Alternatively, this effect may be dependent on whether nodes/filters (structured pruning) or individual parameters (unstructured pruning) are pruned. As such, we tested whether target weight magnitude correlates with pruning's regularizing effect when using both unstructured and structured magnitude pruning on the penultimate linear layer of a small network without batch normalization (Conv4). Specifically, we constructed a pruning target for each weight-magnitude decile (see Appendix A.5.2 for details), used each target to prune ten separate networks as they trained, and compared the generalization gaps (test-train accuracy) of the pruned networks to the target pruned (Figure 2). For both unstructured and structured pruning (Figure 2 left and right, respectively), we found that pruning larger weights led to better generalization gaps, though, interestingly, this effect was much more dramatic in the context of unstructured pruning than structured pruning. 
One possible explanation for this is that, in structured pruning, the ℓ2-norm of pruned neurons did not vary dramatically past the fifth decile, whereas the unstructured deciles were approximately distributed exponentially. As a result, the top 50% of filters for the structured case were not clearly distinguished, making magnitude pruning much more susceptible to small sources of noise. These results suggest that, when weight magnitudes vary considerably, pruning large magnitude weights may lead to improved generalization. Interestingly, for ResNet18, we actually found that structured prune L (red line) performed better than unstructured prune L (green line) (Figure 3). The worse performance of unstructured prune L may stem from its harming the helpful inductive bias provided by convolutional filters (i.e., perhaps removing the most important connections in all convolutional filters degrades performance more than pruning the same number of connections via removal of several entire filters) or its lower instability. While pruning large magnitude weights appears to play a role in pruning's ability to improve generalization, more commonly used pruning algorithms often see generalization improvements when targeting the smallest magnitude or least important parameters, suggesting that target magnitude/importance is not the only characteristic of pruning algorithms relevant to generalization. One possibility is that, given a pruning target, pruning more parameters per pruning iteration (while holding constant the total pruning percentage) may lead to greater instability. If this is the case, the generalization-stability tradeoff suggests that the increase in instability from raising the iterative pruning rate would coincide with improved generalization performance. Alternatively, if the pruning target or total pruning percentage is all that matters, we may expect that changing the iterative pruning rate (while keeping the pruning target and total pruning percentage fixed) would not affect generalization. (Figure 4: In VGG11, increasing the iterative pruning rate (and decreasing the number of pruning events in order to hold total pruning percentage constant) leads to more instability, and can allow methods that target less important parameters to generalize better. Additionally, E[BN] magnitude better approximates parameter importance than ℓ2-norm magnitude (see Figure A2 for another example and discussion of this phenomenon). An unpruned baseline model has 85.21% accuracy.) To test this, we plotted mean instability and test accuracy as a function of different iterative pruning rates for both ℓ2-norm and E[BN] pruning (Figure 4). Consistent with iterative pruning rate playing a role in instability, we find that (given a pruning target) more instability is induced by using larger iterative pruning rates (Figure 4 left). Moreover, pruning random or small magnitude parameters performs best at the largest iterative rate (30%), supporting the idea that these methods require a source of instability to boost generalization. Note this suggests that, when targeting less important weights, higher iterative pruning rates during training can be an effective way to induce additional instability and generalization. (Algorithm and experiment details are available in Appendix A.5.4.) Perhaps strangely, higher iterative pruning rates did not translate to improved generalization when targeting the largest magnitude weights (prune L) with ℓ2-norm pruning.
The fact that prune L does not generalize the best at the highest iterative pruning rate may be due to the reduction in pruning iterations required by the large iterative pruning rate (i.e., when the iterative rate is at 30%, the number of pruning events is capped at three, which removes 90% of a layer). Thus, while this rate grants more instability (Figure 4 left) per iteration, pruning affects the network less often. The idea that the regularizing effect of pruning is enhanced by pruning more often may also help explain the observation that methods that prune iteratively can generalize better (b). Another possibility is that, since raising the iterative pruning rate (and consequently the duration between pruning events) tends to make the ℓ2-norm worse for differentiating parameters by their importance to accuracy, raising the iterative pruning rate causes prune L with ℓ2-norm pruning to target less important weights. Consequently, prune L with ℓ2-norm pruning may generalize worse at higher iterative rates by leaving unpruned more important weights, the presence of which can harm model generalization. Relatedly, this also means that prune S with ℓ2-norm pruning may increase (in networks with batch normalization at least) instability and generalization by failing to avoid the pruning of important parameters. Our results thus far suggest that pruning improves generalization when it creates instability throughout training. These prior results, though, involved damaging model capacity simply by the nature of pruning, which decreases the number of model parameters. It therefore remains possible that the generalization benefits we've seen rely upon the reduction in capacity conferred by pruning. Here, we examine this critical question. We first note that iterative pruning can be viewed as noise injection that permanently zeroes weights; removing the permanence of this zeroing can mitigate some of the capacity effect of pruning and, therefore, help us isolate and study how iterative pruning regularizes through noise injection. (Figure 5: Generalization improvements from pruning bear resemblance to those obtained by using temporary (Left) multiplicative zeroing noise, and (Right) additive Gaussian noise, as long as the noise is applied for enough batches/steps.) As a baseline, we consider prune L applied to VGG11, judging filter magnitude via the ℓ2-norm (additional experimental details are in Appendix A.5.5). We then modify this algorithm such that, rather than permanently pruning filters, it simply multiplies the filter weights by zero, then allows the zeroed weights to immediately resume training in the network ("Zeroing 0" in Figure 5 Left). However, by allowing pruned weights to immediately recover, this experiment also removes a key, potentially regularizing aspect of pruning noise: the requirement that the rest of the network adapts to fit the new representations generated by pruning. To encourage this potentially important facet of pruning noise, we also added variants that held weights to zero for 50 and 1500 consecutive batches. As a related experiment, we also measured the impact of adding Gaussian noise to the weights (Figure 5, right). Noise was applied either once (Gaussian 0) or repeatedly over a series of training batches (Gaussian 50/1500). If the capacity effects of weight removal are not necessary to explain pruning's effect on generalization, then we would expect that the generalization behavior of these non-permanent noise injection algorithms could mimic the generalization behavior of prune L.
Alternatively, if weight removal is a necessary component of pruning-based generalization improvements, then we would not expect close similarities between the generalization phenomena of prune L and non-permanent pruning noise injection. Consistent with the capacity effects of weight removal not being necessary to explain generalization in pruned DNNs, applying zeroing noise for 50 batches to filters (rather than pruning them completely) generates strikingly similar accuracy to prune L (Figure 5 Left). Specifically, the patterns in instability are qualitatively and quantitatively similar, as are the generalization levels throughout training. Importantly, we found that applying zeroing noise once (Zeroing 0; brown line) was not sufficient to generate better performance, suggesting that the regularization induced by forcing weights to adapt to noised representations is critical to pruning's ability to improve generalization. Moreover, we found that, while applying Gaussian noise could increase generalization if applied for long enough (Gaussian 1500; purple line), it still did not match the performance of prune L, suggesting that multiplicative zeroing noise is substantially more effective than additive Gaussian noise 4. Together, these demonstrate that pruning induced generalization benefits are not merely explained by weight removal, but rather are dependent on the regularization conferred by forcing networks to adapt to noised representations over a sufficiently long period throughout training. In this study, we defined the notion of pruning algorithm instability, and applied several pruning approaches 5 to multiple neural networks, assessing the approaches' effects on instability and generalization. Throughout these experiments, we observed that pruning algorithms that generated more instability led to better generalization (as measured by test accuracy). For a given pruning target and total pruning percentage, instability and generalization could be fueled by raising iterative pruning rates (Figure 4, Section 4.3). Additionally, targeting more important weights, again holding total parameters pruned constant, led to more instability and generalization than targeting less important weights (Figure 1, Section 4.1). These support the idea that the generalization benefits of pruning cannot be explained solely by pruning's effect on parameter counts-the properties of the pruning algorithm must be taken into account. Our analysis also suggests that the capacity effects of weight-removal may not even be necessary to explain how pruning improves generalization. Indeed, we provide an interpretation of iterative pruning as noise injection, a popular approach to regularizing DNNs, and find that making pruning noise impermanent provides pruning-like generalization benefits while not removing as much capacity as permanent pruning (Figure 5, Section 4.4). While not emphasized in our discussion, pruning algorithm stability can be a desirable property, as recovery from pruning damage is not guaranteed. Indeed, pruning too many large/important weights can lead to worse final generalization . Recovery appears to be a function of several factors, including: learning rate ); presence of an ongoing regularization effect (Figure 3, Section 4.2); preservation of helpful inductive biases (Figure 3, Section 4.2); and damage to network capacity (e.g., removing too much of an important layer could cause underfitting). 
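The non-permanent variants studied in Section 4.4 above amount to holding a multiplicative zero mask on a set of filters for a fixed number of batches before letting them train again. The following is our own schematic of one such zeroing event (all names are placeholders; the actual experiments used VGG11 filters, the prune_L target, and the schedule in Appendix A.5.5).

```python
import numpy as np

def apply_zeroing_noise(layer_weights, filter_scores, fraction, hold_batches):
    """Temporarily zero the largest-magnitude filters (the "Zeroing <hold_batches>" variants).

    layer_weights: array of shape (num_filters, ...) for one layer
    filter_scores: per-filter magnitudes used to pick targets (largest first, as in prune_L)
    fraction:      fraction of filters to zero at this event
    hold_batches:  number of consecutive batches the zero mask is re-applied (0 = zero once)"""
    num_zero = int(round(fraction * len(filter_scores)))
    targets = np.argsort(filter_scores)[len(filter_scores) - num_zero:]
    noised = layer_weights.copy()
    noised[targets] = 0.0
    return noised, {"zeroed_filters": targets, "hold_batches": hold_batches}

# During training, the returned filter indices would be re-zeroed after every optimizer
# step until `hold_batches` batches have elapsed, after which they train freely again.
```

Unlike pruning, the zeroed filters are never removed from the architecture, which is what separates the regularization effect of the noise from the capacity effect of parameter removal.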
A better understanding of the factors which aid recovery from pruning instability could aid the design of novel pruning algorithms. For example, pruning methods that allow weights to re-enter the network could perhaps prune important weights occasionally to enhance generalization improvements, without risking permanent damage to the pruned networks (see Appendix A.4). In describing how pruning regularizes a model, we touched on similarities between pruning and noise injection. Our , however, may also be consistent with other parameter-count-independent approaches to understanding generalization in neural networks, as pruning may reduce the information stored in the network's weights , and make the network more distributed . This raises the possibility that pruning noise engenders helpful properties in DNNs, though it remains unclear whether such properties might be identical to those achieved with more common noise injection schemes . Further exploration will be necessary to better understand the relationship between these approaches. One important caveat of our is that they were generated with CIFAR-10, a relatively small dataset, so future work will be required to evaluate whether the presented phenomena hold in larger datasets. Relatedly, we only studied pruning's regularizing effect in isolation and did not include commonly used regularizers (e.g., weight decay) in our setups. In future work, it would be interesting to examine whether pruning complements the generalization improvements of other commonly used regularization techniques. Figure A1: We examined the normalized activations (shown in blue histograms) of feature maps in the final eight convolutional layers of VGG19 before (left) and after (right) training to convergence. We found that the approximation to standard normality (shown in orange) of these activations is reasonable early on but degrades with training (particularly in layers near the output). The main drawback to the E[BN] approach (Section 3.1.1) is the sometimes poor approximation M ij ∼ N (β, γ). In Figure A.1, the approximation's quality depends on layer and training epoch. A less serious drawback is that this approach does not account for the strength of connections to the post-BN feature map, which could have activations with a large expected value but low importance if relatively small-magnitude weights connected the map to the following layer. Figure A2: In VGG11, prune S E[BN] is more stable than prune S, which uses filter-2 -norm to compare parameter magnitudes. Methods with higher iterative pruning rates create more instability on a given iteration. Means reduce along the run dimension (10 runs per configuration). Note that this graph uses a method (prune S) that was included in Figure 1 Here we use the same training and pruning configurations that were used for Figure 1, but we replace E[BN] pruning with 2 -norm pruning. Qualitatively, the two figures' are similar. Interestingly, though, the correlation between instability and generalization is somewhat weaker with 2 -norm pruning. This may be explained by the fact that 2 -norm pruning generates a narrower spectrum of instabilities, which is perhaps due to 2 -norm scoring's inability to accurately assess parameter importance (illustrated in Figure 4). Top-1 Accuracy % No Pruning Prune_S Prune_L Train Test Figure A4: When training Conv4 on CIFAR10, unstable pruning can significantly improve the baseline's generalization. 
The training accuracies and test accuracies (the latter were calculated immediately after pruning) illustrate how much each pruning algorithm disturbs the neural network's output during training. While it's unclear how much unstable pruning, which is particularly damaging to capacity, can be sustained at high sparsity levels, prune L can lead to generalization several percentage points above the baseline/prune S when pruning 85% of Conv4. Please see Section A.5.6 for experimental setup details. Unstructured magnitude pruning entails removing individual weights (subsets of filters/neurons), which are selected for pruning based on their magnitude. Our unstructured pruning approach does not allow previously pruned weights to reenter the network (; ;). Structured magnitude pruning removes entire filters/neurons, which are selected based on their 2 -norms or via the E[BN] calculation. Except where noted, we use structured pruning for VGG11 and ResNet18. We denote the pruning of n layers of a network by specifying a series of epochs at which pruning starts s = (s 1, ..., s n), a series of epochs at which pruning ends e = (e 1, ..., e n), a series of fractions of parameters to remove p = (p 1, ..., p n), and an inter-pruning-iteration retrain period r ∈ N. For a given layer l, the retrain period r and fraction p l jointly determine the iterative pruning percentage i l. Our experiments prune the same number of parameters i l × size(layer l) per pruning iteration, ultimately removing p l × 100% of the parameters by the end of epoch e l. Our approach is designed to study the effects of changing factors such as the iterative pruning rate and lacks some practically helpful features, e.g. hyperparameters indicating how many parameters can be safely pruned . When layerwise iterative pruning percentages differ (i.e., when there exists an a and b such that i a and i b are unequal), our figures state the largest iterative pruning rate that was used in any of the layers. For ResNet, our pruning algorithms did not account for the magnitude of incoming shortcut connections when judging filter magnitude/importance. Though we did prune the incoming and outgoing shortcut connections associated with any pruned feature maps. We used only the CIFAR-10 dataset in our experiments, a limitation of our study. We used batch size 128, and only used data augmentation in the decile experiment (Figure 2). For some experiments, we give multi-step learning rate schedules lr s = (x, y), which means we shrink the learning rate by a factor of 10 at epochs x and y. A.5.1 FIGURE 1 We used E [BN] pruning in all models that were pruned, except for one model that used 2 -norm magnitude pruning, which was included in Figure 1 right but not displayed in Figure 1 left due to its qualitative similarity to prune S E[BN]. We leave out "E[BN]" in the legend of Figure 1 left, but all models nonetheless used E[BN] pruning. The models were trained on CIFAR-10 with Adam for 325 epochs with lr s =. The error bars are 95% confidence intervals for the mean, bootstrapped from 10 distinct runs of each experiment. Since the layerwise pruning percentages varied, pruning required multiple iterative pruning percentages, the largest of which is denoted in the legend (rounded to the nearest integer). VGG11 Pruning targeted the final four convolutional layers during training with (layerwise) starting epochs s =, ending epochs e =, and pruning fractions p = (0.3, 0.3, 0.3, 0.9). 
To allow for the same amount of pruning among models with differing iterative pruning percentages, we adjusted the number of inter-pruning retraining epochs. The models with the maximum iterative pruning percentage of 1% had r = 4, while the models with the maximum iterative pruning percentage of 13% had r = 40. The model pruned with 2 -norm magnitude pruning, which only appeared in Figure 1 right, had r = 4 as well. ResNet18 Pruning targeted the final four convolutional layers of ResNet18 during training with (layerwise) starting epochs s =, ending epochs e =, and pruning fractions p = (0.25, 0.4, 0.25, 0.95). As noted in Section 4.1, we increased the pruning rate of the output layer of the penultimate block to remove shortcut connections to the last layer, thinking that it should increase the duration of adaptation to pruning. The models with the maximum iterative pruning percentage of 1% had r = 4, while the models with the maximum iterative pruning percentage of 13% had r = 40. Each experiment in Figure 2 targeted one of ten weight-magnitude deciles in the post-convolutional linear layer of the Conv4 network during training on CIFAR-10 with data augmentation. While there are just ten deciles, the iterative nature of our pruning algorithms required the creation of eleven different pruning targets: ten methods pruned from the bottom of the decile upward (one experiment for each decile's starting point: 0th percentile, 10th percentile, etc.), and one (D10) pruned from the last decile's ending point downward (pruning the very largest collection of weights each iteration). In other words, D9 and D10 targeted the same decile (90th percentile to maximum value), but only D10 actually removed the largest weights on a given iteration (weights in the 100th-99th percentiles, for example). The D9 experiment would target weights starting from the 90th percentile (e.g. it may prune the 90th-91st percentiles on a particular iteration). The training/pruning setup used the Adam optimizer, s =, e =, p = (0.1), r = 3, and lr s =. We calculated the generalization gap on epoch 54 and sampled average pruned magnitudes on epoch 35. We obtained qualitatively similar regardless of whether we used fewer training epochs or data augmentation. The error bars are 95% confidence intervals for the means, bootstrapped from 10 distinct runs of each configuration. A.5.3 FIGURE 3 In Figure 3, prune L was applied to the final four convolutional layers of ResNet18 during training with (layerwise) starting epochs s =, ending epochs e =, and pruning fractions p = (0.25, 0.4, 0.25, 0.95). Since the layerwise pruning percentages varied, pruning required multiple iterative pruning percentages, the largest of which is denoted in the legend (rounded to the nearest integer). The models with the maximum iterative pruning percentage of 1% had r = 4, the models with the maximum iterative pruning percentage of 13% had r = 40, and the "One Shot" model pruned all its targeted parameters at once on epoch 246. When performing unstructured pruning, we pruned individual weights from filters based on their magnitude. The structured pruning experiments used E[BN] pruning. The models were trained on CIFAR-10 with Adam for 325 epochs with lr s =. The error bars are 95% confidence intervals for the means, bootstrapped from 10 distinct runs of each experiment. 
A.5.4 FIGURE 4 In Figure 4, pruning targeted the final four convolutional layers of VGG11 during training with (layerwise) starting epochs s =, ending epochs e =, and pruning fractions p = (0.3, 0.3, 0.3, 0.9). To create the different iterative pruning rates, we used models with inter-pruning retrain periods r = 4, r = 20, r = 40, r = 60, and r = 100. Since the layerwise pruning percentages varied, pruning required multiple iterative pruning percentages, the largest of which is denoted on the horizontal axis. An unpruned baseline model average (10 runs) is plotted on the dotted line. The models were trained on CIFAR-10 with Adam for 325 epochs with lr s =. The error bars are 95% confidence intervals for the means, bootstrapped from 10 distinct runs of each experiment. A.5.5 FIGURE 5 In Figure 5, pruning targeted the final four convolutional layers of VGG11 during training with (layerwise) starting epochs s =, ending epochs e =, pruning fractions p = (0.3, 0.3, 0.3, 0.9), and inter-pruning-iteration retrain period r = 40. When injecting pruning noise, we used the same pruning schedule and percentages, but applied noise to the parameters instead of removing them. The Gaussian noise had mean 0 and standard deviation equal to the empirical standard deviation of a noiseless filter from the same layer. Prune L used 2 -norm pruning. The models were trained on CIFAR-10 with Adam for 325 epochs with lr s =. The error bars are 95% confidence intervals for the means, bootstrapped from 10 distinct runs of each experiment. A.5.6 APPENDIX A.4 Each experiment in Appendix A.4 targeted the post-convolutional linear layer of the Conv4 network during training on CIFAR-10 with the Adam optimizer. The pruning algorithms start on epoch s =, end on epoch e =, prune the percentage p = (0.9), and prune every epoch via retrain period r = 1. These relatively simple experiments were conducted to show that, at higher sparsity (pruning 85% of the model's parameters), unstable pruning can improve the generalization of the baseline. The error bars are 95% confidence intervals for the means, bootstrapped from 20 distinct runs of each configuration.
We demonstrate that pruning methods which introduce greater instability into the loss also confer improved generalization, and explore the mechanisms underlying this effect.
370
scitldr
Neural networks trained with backpropagation, the standard algorithm of deep learning which uses weight transport, are easily fooled by existing gradient-based adversarial attacks. This class of attacks are based on certain small perturbations of the inputs to make networks misclassify them. We show that less biologically implausible deep neural networks trained with feedback alignment, which do not use weight transport, can be harder to fool, providing actual robustness. Tested on MNIST, deep neural networks trained without weight transport have an adversarial accuracy of 98% compared to 0.03% for neural networks trained with backpropagation and generate non-transferable adversarial examples. However, this gap decreases on CIFAR-10 but is still significant particularly for small perturbation magnitude less than 1 ⁄ 2. Deep neural networks trained with backpropagation (BP) are not robust against certain hardly perceptible perturbation, known as adversarial examples, which are found by slightly altering the network input and nudging it along the gradient of the network's loss function. The feedback-path synaptic weights of these networks use the transpose of the forward-path synaptic weights to run error propagation. This problem is commonly named the weight transport problem. Here we consider more biologically plausible neural networks introduced by Lillicrap et al. to run error propagation using feedbackpath weights that are not the transpose of the forward-path ones i.e. without weight transport. This mechanism was called feedback alignment (FA). The introduction of a separate feedback path in in the form of random fixed synaptic weights makes the feedback gradients a rough approximation of those computed by backpropagation. Since gradient-based adversarial attacks are very sensitive to the quality of gradients to perturb the input and fool the neural network, we suspect that the gradients computed without weight transport cannot be accurate enough to design successful gradient-based attacks. Here we compare the robustness of neural networks trained with either BP or FA on three well-known gradient-based attacks, namely the fast gradient sign method (FGSM), the basic iterative method (BIM) and the momentum iterative fast gradient sign method (MI-FGSM). To the best of our knowledge, no prior adversarial attacks have been applied for deep neural networks without weight transport. A typical neural network classifier, trained with the backpropagation algorithm, computes in the feedback path the error signals δ and the weight update ∆W according to the error-backpropagation equations: where y l is the output signal of layer l, φ is the derivative of the activation function φ and η W is a learning-rate factor. For neuroscientists, the weight update in equation 1 is a biologically implausible computation: the backward error δ requires W T, the transposed synapses of the forward path, as the feedback-path synapses. However, the synapses in the forward and feedback paths are physically distinct in the brain and we do not know any biological mechanism to keep the feedback-path synapses equal to the transpose of the forward-path ones. To solve this modeling difficulty, Lillicrap et al. made the forward and feedback paths physically distinct by fixing the feedback-path synapses to different matrices B that are randomly fixed (not learned) during the training phase. 
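As a concrete illustration of the difference between the two feedback paths, here is a minimal numpy sketch of error propagation in a two-layer network; the only change under feedback alignment is that the fixed random matrix B replaces the transposed forward weights. This is an illustrative sketch, not the architecture used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B2 = rng.normal(0, 0.1, (n_out, n_hid))   # fixed random feedback weights, never learned

phi = np.tanh
dphi = lambda a: 1.0 - np.tanh(a) ** 2

def error_signals(x, target, feedback="fa"):
    # forward pass
    a1 = W1 @ x
    h = phi(a1)
    y = W2 @ h
    # backward pass: BP transports W2^T, FA uses the fixed random matrix B2 instead
    delta_out = y - target
    back = W2.T if feedback == "bp" else B2.T
    delta_hidden = dphi(a1) * (back @ delta_out)
    return delta_hidden, delta_out
```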
This solution, called feedback alignment, enables deep neural networks to compute the error signals δ without the weight transport problem, by replacing W^T in equation 1 with the fixed random feedback matrix B. In the rest of this paper, we add the superscripts "bp" and "fa" to the notation of any term computed with backpropagation and feedback alignment respectively, to avoid any confusion. We call a "BP network" a neural network trained with backpropagation and an "FA network" a neural network trained with feedback alignment. Recent work showed that the angles between the gradients ∆W^fa and ∆W^bp stay above 80° for most layers of ResNet-18 and ResNet-50 architectures. This means that feedback alignment provides inaccurate gradients ∆W^fa that are mostly not aligned with the true gradients ∆W^bp. Since gradient-based adversarial attacks rely on the true gradient to maximize the network loss function, can less accurate gradients computed by feedback alignment provide less-effective adversarial attacks? 3 Gradient-based adversarial attacks The objective of gradient-based attacks is to find gradient updates to the input with the smallest perturbation possible. We compare the robustness of neural networks trained with either feedback alignment or backpropagation using three techniques mentioned in the recent literature. Goodfellow et al. proposed an attack called the Fast Gradient Sign Method (FGSM), which generates an adversarial example x′ by perturbing the input x with a one-step gradient update along the direction of the sign of the gradient: x′ = x + ε · sign(∇_x J(x, y*)), where ε is the magnitude of the perturbation, J is the loss function and y* is the label of x. This perturbation can be computed through transposed forward-path synaptic weights as in backpropagation, or through random synaptic weights as in feedback alignment. While the Fast Gradient Sign Method computes a one-step gradient update for each input x, Kurakin et al. extended it to the Basic Iterative Method (BIM). It runs the gradient update for multiple iterations using a small step size, clipping pixel values at the beginning of each iteration to avoid large changes on any pixel: x′_0 = x and x′_{t+1} = Clip_X{x′_t + α · sign(∇_x J(x′_t, y*))}, where α is the step size and Clip_X(·) denotes the clipping function ensuring that each pixel x_{i,j} stays in the interval [x_{i,j} − ε, x_{i,j} + ε]. This method is also called the Projected Gradient Descent Method because it "projects" the perturbation onto its feasible set using the clip function. The Momentum Iterative Fast Gradient Sign Method (MI-FGSM) is a natural extension of the Fast Gradient Sign Method that introduces momentum to generate adversarial examples iteratively; at each iteration t, the gradient g_t is accumulated with a decay factor µ before taking a signed step as in BIM. All the experiments were performed on neural networks with the LeNet architecture and the cross-entropy loss function. We vary the perturbation magnitude ε from 0 to 1 with a step size of 0.1. All adversarial examples were generated using n = 10 iterations for the BIM and MI-FGSM attacks and µ = 0.8 for the MI-FGSM attack. The results for accuracy as a function of the perturbation magnitude on MNIST are given in Figure 1a. We find that when performing the three gradient-based adversarial attacks (FGSM, BIM and MI-FGSM) on an FA neural network, the accuracy does not decrease and stays around 97%. This suggests that MNIST adversarial examples generated by FA gradients cannot fool FA neural networks for ε ∈ [0, 1], unlike BP neural networks, whose accuracy drastically decreases to 0% as the perturbation increases.
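These attacks only need a (possibly approximate) input gradient, so the same code applies to BP and FA networks; in the FA case the gradient is whatever the network's feedback path returns. A hedged PyTorch sketch of FGSM and BIM (not the exact experimental code):

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    """One-step attack: x' = clip(x + eps * sign(grad_x J(x, y)))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def bim(model, loss_fn, x, y, eps, alpha, n_iter=10):
    """Iterative attack; each step is clipped back into the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv = fgsm(model, loss_fn, x_adv, y, alpha)
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv
```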
In the legend, we denote by "BP → FA" the generation of adversarial examples using BP to fool the FA network, and by "FA → BP" the generation of adversarial examples using FA to fool the BP network. Additionally, we investigate the transferability of the adversarial examples generated with either BP or FA networks using each of the three studied attacks. As shown in Figure 1b, we find that the adversarial examples generated by the FA network do not fool the BP network. This means that these adversarial examples are not transferable. The converse is not true, since adversarial examples generated by the BP network can fool the FA network. Results on CIFAR-10 for the robustness of FA and BP networks to the three gradient-based adversarial attacks can be found in Figure 2a. Unlike the results on MNIST, the accuracy of FA networks as a function of the perturbation magnitude does decrease, but still at a lower rate than the accuracy of BP networks. For the transferability of adversarial examples, we still find that the adversarial examples generated by the BP network do fool the FA network. However, unlike the non-transferability of adversarial examples from the FA network to the BP network on MNIST, the BP network is significantly fooled as the perturbation magnitude increases. We perform an empirical evaluation investigating both the robustness of deep neural networks without weight transport and the transferability of adversarial examples generated with gradient-based attacks. The results on MNIST clearly show that FA networks are robust to adversarial examples generated with FA and that the adversarial examples generated by FA are not transferable to BP networks. On the other hand, we find that these two results do not hold on CIFAR-10, even if FA networks showed a significant robustness to gradient-based attacks. Therefore, one should consider performing a more exhaustive analysis on more complex datasets to understand the impact of the approximate gradients provided by feedback alignment on the adversarial accuracy of biologically plausible neural networks attacked with gradient-based methods.
Less biologically implausible deep neural networks trained without weight transport can be harder to fool.
371
scitldr
Chemical reactions can be described as the stepwise redistribution of electrons in molecules. As such, reactions are often depicted using "arrow-pushing" diagrams which show this movement as a sequence of arrows. We propose an electron path prediction model (ELECTRO) to learn these sequences directly from raw reaction data. Instead of predicting product molecules directly from reactant molecules in one shot, learning a model of electron movement has the benefits of (a) being easy for chemists to interpret, (b) incorporating constraints of chemistry, such as balanced atom counts before and after the reaction, and (c) naturally encoding the sparsity of chemical reactions, which usually involve changes in only a small number of atoms in the reactants. We design a method to extract approximate reaction paths from any dataset of atom-mapped reaction SMILES strings. Our model achieves excellent performance on an important subset of the USPTO reaction dataset, comparing favorably to the strongest baselines. Furthermore, we show that our model recovers a basic knowledge of chemistry without being explicitly trained to do so. The ability to reliably predict the products of chemical reactions is of central importance to the manufacture of medicines and materials, and to understanding many processes in molecular biology. Theoretically, all chemical reactions can be described by the stepwise rearrangement of electrons in molecules (b). This sequence of bond-making and breaking is known as the reaction mechanism. Understanding the reaction mechanism is crucial because it not only determines the products (formed at the last step of the mechanism), but it also provides insight into why the products are formed on an atomistic level. Mechanisms can be treated at different levels of abstraction. On the lowest level, quantum-mechanical simulations of the electronic structure can be performed, which are prohibitively computationally expensive for most systems of interest. On the other end, chemical reactions can be treated as rules that "rewrite" reactant molecules to products, which abstracts away the individual electron redistribution steps into a single, global transformation step. To combine the advantages of both approaches, chemists use a powerful qualitative model of quantum chemistry colloquially called "arrow pushing", which simplifies the stepwise electron shifts using sequences of arrows which indicate the path of electrons throughout molecular graphs (b). Recently, there have been a number of machine learning models proposed for directly predicting the products of chemical reactions BID2;; a;; ), largely using graph-based or machine translation models. The task of reaction product prediction is shown on the left-hand side of FIG0. In this paper we propose a machine learning model to predict the reaction mechanism, as shown on the right-hand side of FIG0, for a particularly important subset of organic reactions. (FIG0 caption: (Left) The reaction product prediction problem: Given the reactants and reagents, predict the structure of the product. (Right) The reaction mechanism prediction problem: Given the reactants and reagents, predict how the reaction occurred to form the products.) We argue that our model is not only more interpretable than product prediction models, but also allows easier encoding of constraints imposed by chemistry. Proposed approaches to predicting reaction mechanisms have often been based on combining hand-coded heuristics and quantum mechanics BID0;; b;;; ), rather than using machine learning.
We call our model ELECTRO, as it directly predicts the path of electrons through molecules (i.e., the reaction mechanism). To train the model we devise a general technique to obtain approximate reaction mechanisms purely from data about the reactants and products. This allows one to train our model on large, unannotated reaction datasets such as USPTO. We demonstrate that not only does our model achieve impressive results, but it also, surprisingly, learns chemical properties it was not explicitly trained on. We begin with a brief background from chemistry on molecules and chemical reactions, and then review related work in machine learning on predicting reaction outcomes. We then describe a particularly important subclass of chemical reactions, called linear electron flow (LEF) reactions, and summarize the contributions of this work. Organic (carbon-based) molecules can be represented via a graph structure, where each node is an atom and each edge is a covalent bond (see example molecules in FIG0). Each edge (bond) represents two electrons that are shared between the atoms that the bond connects. Electrons are particularly important for describing how molecules react with other molecules to produce new ones. All chemical reactions involve the stepwise movement of electrons along the atoms in a set of reactant molecules. This movement causes the formation and breaking of chemical bonds that changes the reactants into a new set of product molecules (a). For example, FIG0 (Right) shows how electron movement can break bonds (red arrows) and make new bonds (green arrows) to produce a new set of product molecules. In general, work in machine learning on reaction prediction can be divided into two categories: Product prediction, where the goal is to predict the reaction products, given a set of reactants and reagents, shown in the left half of FIG0; and Mechanism prediction, where the goal is to determine how the reactants react, i.e., the movement of electrons, shown in the right of FIG0. Product prediction. Recently, methods combining machine learning and template-based molecular rewriting rules have been proposed BID2 a;;; ). Here, a learned model is used to predict which rewrite rule to apply to convert one molecule into another. While these models are readily interpretable, they tend to be brittle. Another approach, introduced by , constructs a neural network based on the Weisfeiler-Lehman algorithm for testing graph isomorphism. They use this algorithm (called WLDN) to select atoms that will be involved in a reaction. They then enumerate all chemically-valid bond changes involving these atoms and learn a separate network to rank the resulting potential products. (Table 1: Work on machine learning for reaction prediction, and whether they are (a) end-to-end trainable, and (b) predict the reaction mechanism.) This method, while leveraging new techniques for deep learning on graphs, cannot be trained end-to-end because of the enumeration steps for ensuring chemical validity. Another approach represents reactants as SMILES strings and then uses a sequence-to-sequence network to predict product SMILES. While this method (called Seq2Seq) is end-to-end trainable, the SMILES representation is quite brittle, as single-character changes often do not correspond to a valid molecule. These latter two methods, WLDN and Seq2Seq, are state-of-the-art on product prediction and have been shown to outperform the above template-based techniques. Thus we compare directly with these two methods in this work. Mechanism prediction.
The only other work we are aware of to use machine learning to predict reaction mechanisms are; Kayala and Baldi (2011; 2012);. All of these model a chemical reaction as an interaction between atoms as electron donors and as electron acceptors. They predict the reaction mechanisms via two independent models: one that identifies these likely electron sources and sinks, and another that ranks all combinations of them. These methods have been run on small expert-curated private datasets, which contain information about the reaction conditions such as the temperature and anion/cation solvation potential (, §2). In contrast, in this work, we aim to learn reactions from noisy large-scale public reaction datasets, which are missing the required reaction condition information required by these previous works. As we cannot yet apply the above methods on the datasets we use, nor test our models on the datasets they use (as the data are not yet publicly released), we cannot compare directly against them; therefore, we leave a detailed investigation of the pros and cons of each method for future work. As a whole, this related work points to at least two main desirable characteristics for reaction prediction models:1. End-to-End: There are many complex chemical constraints that limit the space of all possible reactions. How can we differentiate through a model subject to these constraints? 2. Mechanistic: Learning the mechanism offers a number of benefits over learning the products directly including: interpretability (if the reaction failed, what electron step went wrong), sparsity (electron steps only involve a handful of atoms), and generalization (unseen reactions also follow a set of electron steps). Table 1 describes how the current work on reaction prediction satisfies these characteristics. In this work we propose to model a subset of mechanisms with linear electron flow, described below. Reaction mechanisms can be classified by the topology of their "electron-pushing arrows" (the red and green arrows in FIG0 . Here, the class of reactions with linear electron flow (LEF) topology is by far the most common and fundamental, followed by those with cyclic topology (a). In this work, we will only consider LEF reactions that are heterolytic, i.e., they involve pairs of electrons. If reactions fall into this class, then a chemical reaction can be modelled as pairs of electrons moving in a single path through the reactant atoms. In arrow pushing diagrams representing LEF reactions, this electron path can be represented by arrows that line up in sequence, differing from for example pericyclic reactions in which the arrows would form a loop (a).Further for LEF reactions, the movement of the electrons along the linear path will alternately remove existing bonds and form new ones. We show this alternating structure in the right of FIG0. The reaction formally starts by (step 1) taking the pair of electrons between the Li and C atoms and moving them to the C atom; this is a remove bond step. Next (step 2) a bond is added when electrons are moved from the C atom in reactant 1 to a C atom in reactant 2. Then (step 3) a pair of electrons are removed between the C and O atoms and moved to the O atom, giving rise to the products. Predicting the final product is thus a byproduct of predicting this series of electron steps. Contributions. We propose a novel generative model for modeling the reaction mechanism of LEF reactions. 
Our contributions are as follows:• We propose an end-to-end generative model for predicting reaction mechanisms, ELECTRO, that is fully differentiable. It can be used with any deep learning architecture on graphs.• We design a technique to identify LEF reactions and mechanisms from purely atom-mapped reactants and products, the primary format of large-scale reaction datasets.• We show that ELECTRO learns chemical knowledge such as functional group selectivity without explicit training. In this section we define a probabilistic model for electron movement in linear electron flow (LEF) reactions. As described above (§2.1) all molecules can be thought of as graphs where nodes correspond to atoms and edges to bonds. All LEF reactions transform a set of reactant graphs, M 0 into a set of product graphs M T +1 via a series of electron actions P 0:T = (a 0, . . ., a T). As described, these electron actions will alternately remove and add bonds (as shown in the right of FIG0). This reaction sometimes includes additional reagent graphs, M e, which help the reaction proceed, but do not change themselves. We propose to learn a distribution p θ (P 0:T | M 0, M e) over these electron movements. We first detail the generative process that specifies p θ, before describing how to train the model parameters θ. To define our generative model, we describe a factorization of p θ (P 0:T | M 0, M e) into three components: 1. the starting location distribution p start θ (a 0 | M 0, M e); 2. the electron movement distribution p θ (a t | M t, a t−1, t); and 3. the reaction continuation distribution p cont θ (c t | M t). We define each of these in turn and then describe the factorization (we leave all architectural details of the functions introduced to the appendix).Starting Location. At the beginning the model needs to decide on which atom a 0 starts the path. As this is based on (i) the initial set of reactants M 0 and possibly (ii) a set of reagents M e, we propose to learn a distribution p DISPLAYFORM0 To parameterize this distribution we propose to use any deep graph neural network, denoted h A (·), to learn graph-isomorphic node features from the initial atom and bond features 2 (; ; ;). We choose to use a 4 layer Gated Graph Neural Network (GGNN) , for which we include a short review in the appendix. Given these atom embeddings we also compute graph embeddings (, §B.1) (also called an aggregation graph transformation (, §3)), which is a vector that represents the entire molecule set M that is invariant to any particular node ordering. Any such function g(·) that computes this mapping can be used here, but the particular graph embedding function we use is inspired by , and described in detail in Appendix B. We can now parameterize p We represent the characteristic probabilities the model may have over these next actions as colored circles over each atom. Some actions are disallowed on certain steps, for instance you cannot remove a bond that does not exist; these blocked actions are shown as red crosses. DISPLAYFORM1 where f start is a feedforward neural network which computes logits x; the logits are then normalized into probabilities by the softmax function, defined as softmax DISPLAYFORM2 Electron Movement. 
Observe that since LEF reactions are a single path of electrons (§2.3), at any step t, the next step a t in the path depends only on (i) the intermediate molecules formed by the action path up to that point M t, (ii) the previous action taken a t−1 (indicating where the free pair of electrons are), and (iii) the point of time t through the path, indicating whether we are on an add or remove bond step. Thus we will also learn the electron movement distribution p θ (a t | M t, a t−1, t).Similar to the starting location distribution we again make use of a graph-isomorphic node embedding function h A (M). In contrast, the above distribution can be split into two distributions depending on the parity of t: the remove bond step distribution p remove θ (a t | M t, a t−1) when t is odd, and the add bond step distribution p add θ (a t | M t, a t−1) when t is even. We parameterize the distributions as p DISPLAYFORM3 DISPLAYFORM4 The vectors β remove, β add are masks that zero-out the probability of certain atoms being selected. Specifically, β remove sets the probability of any atoms a t to 0 if there is not a bond between it and the previous atom a t−1 3. The other mask vector β add masks out the previous action, preventing the model from stalling in the same state for multiple time-steps. The feedforward networks f add (·), f remove (·) and other architectural details are described in Appendix C.Reaction Continuation / Termination. Additionally, as we do not know the length of the reaction T, we introduce a latent variable c t ∈ {0, 1} at each step t, which describes whether the reaction continues (c t = 1) or terminates (c t = 0) 4. We also define an upper bound T max on the number of reaction steps.3 One subtle point is if a reaction begins with a lone-pair of electrons then we say that this reaction starts by removing a self-bond. Thus, in the first remove step β remove it is possible to select a1 = a0. But this is not allowed via the mask vector in later steps. 4 An additional subtle point is that we do not allow the reaction to stop until until it has picked up an entire pair (ie c1 = 1). The generative steps of ELECTRO (given that the model chooses to react, ie c 0 = 1). Input: Reactant molecules M 0 (consisting of atoms A), reagents M e, atom embedding function h A (·), graph embedding functions g reagent (·) and g cont (·), additional logit functions DISPLAYFORM0 The molecule does not change until complete pair picked up 4: c 1 1You cannot stop until picked up complete pair 5: for t = 1,..., T max do DISPLAYFORM1 if t is odd then 7: DISPLAYFORM2 electrons remove bond between a t and a t−1 9: DISPLAYFORM3 11: DISPLAYFORM4 electrons add bond between a t and a t−1 12:end if13: DISPLAYFORM5 M t+1 ← M t, a t modify molecules based on previous molecule and action 15: DISPLAYFORM6 16: The final distribution we learn is the continuation distribution p cont θ (c t | M t). For this distribution we learn a different graph embedding function g cont (·) to decide whether to continue or not: DISPLAYFORM7 DISPLAYFORM8 where σ is the sigmoid function σ(a) = 1/(1 + e −a).Path Distribution Factorization. Given these distributions we can define the probability of a path P 0:T with the distribution p θ (P 0:T | M 0, M e), which factorizes as FIG1 gives a graphical depiction of the generative process on a simple example reaction. Algorithm 1 gives a more detailed description. 
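The masking behind β_remove and β_add can be sketched compactly; the code below is a simplified, hypothetical rendering of those two masked softmaxes (one plausible reading of the equations above, not the released implementation).

```python
import torch

def electron_step_probs(logits, prev_atom, bonded_to_prev, t):
    """Masked softmax over atoms for action a_t.

    logits:          per-atom scores from the node embeddings, shape [num_atoms]
    prev_atom:       index of a_{t-1}
    bonded_to_prev:  boolean tensor, True where an atom shares a bond with a_{t-1}
    t:               step index; odd = remove-bond step, even = add-bond step
    """
    if t % 2 == 1:                                  # remove step: must pick a neighbour of a_{t-1}
        allowed = bonded_to_prev.clone()
        if t == 1:
            allowed[prev_atom] = True               # the lone-pair "self-bond" case on the first step
    else:                                           # add step: anything except stalling on a_{t-1}
        allowed = torch.ones_like(bonded_to_prev)
        allowed[prev_atom] = False
    masked = logits.masked_fill(~allowed, float("-inf"))
    return torch.softmax(masked, dim=0)
```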
DISPLAYFORM9 DISPLAYFORM10 Training We can learn the parameters θ of all the parameterized functions, including those producing node embeddings, by maximizing the log likelihood of a full path log p θ (P 0: DISPLAYFORM11 This is evaluated by using a known electron path a t and intermediate products M t extracted from training data, rather than on simulated values. This allows us to train on all stages of the reaction at once, given electron path data. We train our models using Adam and an initial learning rate of 10 −4, with minibatches consisting of a single reaction, where each reaction often consists of multiple intermediate graphs. Prediction Once trained, we can use our model to sample chemically-valid paths given an input set of reactants M 0 and reagents M e, simply by simulating from the conditional distributions until sampling a continue value equal to zero. We instead would like to find a ranked list of the top-K predicted paths, and do so using a modified beam search, in which we roll out a beam of width K until a maximum path length T max, while recording all paths which have terminated. This search procedure is described in detail in Algorithm 2 in the appendix. To evaluate our model, we use a collection of chemical reactions extracted from the US patent database . We take as our starting point the 479,035 reactions, along with the training, validation, and testing splits, which were used by , referred to as the USPTO dataset. This data consists of a list of reactions. Each reaction is a reaction SMILES string and a list of bond changes. SMILES is a text format for molecules that lists the molecule as a sequence of atoms and bonds. The bond change list tells us which pairs of atoms have different bonds in the the reactants versus the products (note that this can be directly determined from the SMILES string). Below, we describe two data processing techniques that allow us to identify reagents, reactions with LEF topology, and extract an underlying electron path. Each of these steps can be easily implemented with the open-source chemo-informatics software RDKit (RDKit, online).Reactant and Reagent Seperation Reaction SMILES strings can be split into three parts -reactants, reagents, and products. The reactant molecules are those which are consumed during the course of the chemical reaction to form the product, while the reagents are any additional molecules which provide context under which the reaction occurs (for example, catalysts), but do not explicitly take part in the reaction itself; an example of a reagent is shown in FIG0.Unfortunately, the USPTO dataset as extracted does not differentiate between reagents and reactants. We elect to preprocess the entire USPTO dataset by separating out the reagents from the reactants using the process outlined in Schwaller et al. FORMULA1, where we classify as a reagent any molecule for which either (i) none of its constituent atoms appear in the product, or (ii) the molecule appears in the product SMILES completely unchanged from the pre-reaction SMILES. This allows us to properly model molecules which are included in the dataset but do not materially contribute to the reaction. To train our model, we need to (i) identify reactions in the USPTO dataset with LEF topology, and (ii) have access to an electron path for each reaction. FIG3 shows the steps necessary to identify and extract the electron paths from reactions exhibiting LEF topology. 
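The reagent/reactant split described above (criteria (i) and (ii)) is easy to express with RDKit. The following is a rough sketch under the assumption that atoms are keyed by their atom-map numbers; it is not the exact preprocessing script.

```python
from rdkit import Chem

def strip_maps(mol):
    mol = Chem.Mol(mol)                 # copy, then clear atom-map numbers
    for atom in mol.GetAtoms():
        atom.SetAtomMapNum(0)
    return mol

def split_reagents(reactant_smiles, product_smiles):
    """Classify each reactant-side molecule as a true reactant or a reagent."""
    product_parts = {Chem.MolToSmiles(strip_maps(Chem.MolFromSmiles(p)))
                     for p in product_smiles.split(".")}
    product_maps = {a.GetAtomMapNum()
                    for a in Chem.MolFromSmiles(product_smiles).GetAtoms()
                    if a.GetAtomMapNum()}
    reactants, reagents = [], []
    for smi in reactant_smiles.split("."):
        mol = Chem.MolFromSmiles(smi)
        maps = {a.GetAtomMapNum() for a in mol.GetAtoms() if a.GetAtomMapNum()}
        appears_unchanged = Chem.MolToSmiles(strip_maps(mol)) in product_parts
        if not (maps & product_maps) or appears_unchanged:
            reagents.append(smi)        # criterion (i) or (ii)
        else:
            reactants.append(smi)
    return reactants, reagents
```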
We provide further details in Appendix D.Applying these steps, we discover that 73% of the USPTO dataset consists of LEF reactions (349,898 total reactions, of which 29,360 form the held-out test set). Table 2: Results when using ELECTRO for mechanism prediction. Here a prediction is correct if the atom mapped action sequences predicted by our model match exactly those extracted from the USPTO dataset. 1st Choice 2nd Choice 3rd ChoiceWe now evaluate ELECTRO on the task of (i) mechanism prediction and (ii) product prediction (as described in FIG0). While generally, it is necessary to know the reagents M e of a reaction to faithfully predict the mechanism and product, it is often possible to make inferences from the reactants alone. Therefore, we trained a second version of our model that we call ELECTRO-LITE, which ignores reagent information. This allows us to gauge the importance of reagents in determining the mechanism of the reaction. Example of how we turn a SMILES reaction string into an ordered electron path, for which we can train ELECTRO on. This consists of a series of steps: Identify bonds that change by comparing bond triples (source node, end node, bond type) between the reactants and products. FORMULA3 Join up the bond changes so that one of the atoms in consecutive bond changes overlap (for reactions which do not have linear electron flow topology, such as multi-step reactions, this will not be possible and so we discard these reactions). Order the path (ie assign a direction). A gain of charge (or analogously the gain of hydrogen as H + ions without changing charge, such as in the example shown) indicates that the electrons have arrived at this atom; and vice-versa for the start of the path. When details about both ends of the path are missing from the SMILES string we fall back to using an element's electronegativity to estimate the direction of our path, with more electronegative atoms attracting electrons towards them and so being at the end of the path. The extracted electron path deterministically determines a series of intermediate molecules which can be used for training ELECTRO. Paths that do not consist of alternative add and removal steps and do not in the final recorded product do not exhibit LEF topology and so can be discarded. An interesting observation is that our approximate reaction mechanism extraction scheme implicitly fills in missing reagents, which are caused by noisy training data -in this example, which is a Grignard-or Barbier-type reaction, the test example is missing a metal reagent (e.g. Mg or Zn). Nevertheless, our model is robust enough to predict the intended product correctly . For mechanism prediction we are interested in ensuring we obtain the exact sequence of electron steps correctly. We evaluate accuracy by checking whether the sequence of integers extracted from the raw data as described in Section 4 is an exact match with the sequence of integers output by ELECTRO. We compute the top-1, top-2, top-3, and top-5 accuracies and show them in Table 2, with an example prediction shown in FIG2. Reaction mechanism prediction is useful to ensure we form the correct product in the correct way. However, it underestimates the model's actual predictive accuracy: although a single atom mapping is provided as part of the USPTO dataset, in general atom mappings are not unique (e.g., if a molecule contains symmetries). Specifically, multiple different sequences of integers could correspond to chemically-identical electron paths. 
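The first extraction step summarized in the FIG3 caption above, comparing bond triples between reactants and products, can be sketched directly from atom-mapped SMILES; this is an illustrative reconstruction rather than the authors' script.

```python
from rdkit import Chem

def bond_changes(reactant_smiles, product_smiles):
    """Return the (map_i, map_j) atom pairs whose bond differs between the two sides."""
    def bond_table(smiles):
        mol = Chem.MolFromSmiles(smiles)
        table = {}
        for b in mol.GetBonds():
            i = b.GetBeginAtom().GetAtomMapNum()
            j = b.GetEndAtom().GetAtomMapNum()
            if i and j:
                table[tuple(sorted((i, j)))] = b.GetBondTypeAsDouble()
        return table

    before, after = bond_table(reactant_smiles), bond_table(product_smiles)
    changed = set(before) ^ set(after)                       # bonds created or destroyed
    changed |= {k for k in set(before) & set(after) if before[k] != after[k]}
    return sorted(changed)
```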
The first figure in the appendix shows an example of a reaction with symmetries, where different electron paths produce the exact same product. Recent approaches to product prediction have evaluated whether the major product reported in the test dataset matches predicted candidate products generated by their system, independent of mechanism. In our case, the top-5 accuracy for a particular reaction may include multiple different electron paths that ultimately yield the same product molecule. To evaluate whether our model predicts the same major product as the one in the test data, we need to solve a graph isomorphism problem. To approximate this we (a) take the predicted electron path, (b) apply these edits to the reactants to produce a product graph (balancing charge to satisfy valence constraints), (c) remove atom mappings, and (d) convert the product graph to a canonical SMILES string representation in Kekulé form (aromatic bonds are explicitly represented as double bonds). We can then evaluate whether a predicted electron path matches the ground truth by a string comparison. This procedure is inspired by the evaluation of. To obtain a ranked list of products for our model, we compute this canonicalized product SMILES for each of the predictions found by beam search over electron paths, removing duplicates along the way. These product-level accuracies are reported in Table 3. (Table 3 caption: Results for product prediction, following the product matching procedure in Section 5.2. For the baselines we compare against models trained (a) on the full USPTO training set (marked FTS) and only tested on our subset of LEF reactions, and (b) those that are also trained on the same subset as our model. We make use of the code and pre-trained models provided by. For the Seq2Seq approach, as neither code nor more fine-grained results are available, we train up the required models from scratch using the OpenNMT library.) We compare with the state-of-the-art graph-based method of Jin et al.; we use their evaluation code and pre-trained model, re-evaluated on our extracted test set. We also use their code and re-train a model on our extracted training set, to ensure that any differences between our method and theirs are not due to a specialized training task. We also compare against the Seq2Seq model proposed by Schwaller et al.; however, as no code is provided, we run our own implementation of this method based on the OpenNMT library. Overall, ELECTRO outperforms all other approaches on this task, with 87% top-1 accuracy and 95.9% top-5 accuracy. Omitting the reagents in ELECTRO degrades top-1 accuracy slightly, but maintains a high top-3 and top-5 accuracy, suggesting that reagent information is necessary to provide context in disambiguating plausible reaction paths. (Figure 5 caption: (Left) Nucleophilic substitutions (SN2 reactions); (right) Suzuki coupling. Note that in the "real" mechanism of the Suzuki coupling, the reaction would proceed via oxidative insertion, transmetallation and reductive elimination at a palladium catalyst; as these details are not contained in the training data, we treat palladium implicitly as a reagent. In both cases, our model has correctly picked up the trend that halides lower in the periodic table usually react preferentially (I > Br > Cl).) Complex molecules often feature several potentially reactive functional groups, which compete for reaction partners.
To predict the selectivity, that is which functional group will predominantly react in the presence of other groups, students of chemistry learn heuristics and trends, which have been established over the course of three centuries of experimental observation. To qualitatively study whether the model has learned such trends from data we queried the model with several typical text book examples from the chemical curriculum (see Figure 5 and the appendix). We found that the model predicts most examples correctly. In the few incorrect cases, interpreting the model's output reveals that the model made chemically plausible predictions. In this section we briefly list a couple of limitations of our approach and discuss any pointers towards their resolution in future work. LEF Topology ELECTRO can currently only predict reactions with LEF topology (§2.3). These are the most common form of reactions (b), but in future work we would like to extend ELECTRO's action repertoire to work with other classes of electron shift topologies such as those found in pericyclic reactions. This could be done by allowing ELECTRO to sequentially output a series of paths, or by allowing multiple electron movements at a single step. Also, since the approximate mechanisms we produce for our dataset are extracted only from the reactants and products, they may not include all observable intermediates. This could be solved by using labelled mechanism paths, obtainable from finer grained datasets containing also the mechanistic intermediates. These mechanistic intermediates could also perhaps be created using quantum mechanical calculations following the approach in.Graph Representation of Molecules Although this shortcoming is not just restricted to our work, by modeling molecules and reactions as graphs and operations thereon, we ignore details about the electronic structure and conformational information, ie information about how the atoms in the molecule are oriented in 3D. This information is crucial in some important cases. Having said this, there is probably some balance to be struck here, as representing molecules and reactions as graphs is an extremely powerful abstraction, and one that is commonly used by chemists, allowing models working with such graph representations to be more easily interpreted. In this paper we proposed ELECTRO, a model for predicting electron paths for reactions with linear electron flow. These electron paths, or reaction mechanisms, describe how molecules react together. Our model (i) produces output that is easy for chemists to interpret, and (ii) exploits the sparsity and compositionality involved in chemical reactions. As a byproduct of predicting reaction mechanisms we are also able to perform reaction product prediction, comparing favorably to the strongest baselines on this task. In the main text we described the challenges of how to evaluate our model, as different electron paths can form the same products, for instance due to symmetry. Figure 6 is an example of this. Figure 6: This example shows how symmetry can affect the evaluation of electron paths. In this example, although one electron path is given in the USPTO dataset, the initial N that reacts could be either 15 or 12, with no difference in the final product. This is why judging purely based on electron path accuracy can sometimes be misleading. 
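The product-matching procedure from the previous section (apply the predicted edits, drop atom maps, compare canonical Kekulé SMILES) is what makes evaluation robust to the symmetry issue illustrated by Figure 6. A hedged RDKit sketch of the comparison step, assuming the edited product graph is already available as a molecule object:

```python
from rdkit import Chem

def canonical_kekule_smiles(mol):
    """Canonical SMILES with atom maps removed and aromatic bonds written explicitly."""
    mol = Chem.Mol(mol)
    for atom in mol.GetAtoms():
        atom.SetAtomMapNum(0)
    Chem.Kekulize(mol, clearAromaticFlags=True)
    return Chem.MolToSmiles(mol, kekuleSmiles=True)

def same_major_product(predicted_mol, ground_truth_smiles):
    truth = canonical_kekule_smiles(Chem.MolFromSmiles(ground_truth_smiles))
    return canonical_kekule_smiles(predicted_mol) == truth
```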
In this section we briefly review existing work for forming node and graph embeddings, as well as describing more specific details relating to our particular implementation of these methods. FIG7 provides a visualization of these techniques. We follow the main text by denoting a set of molecules as M, and refer to the atoms in these molecules (which are represented as nodes in a graph) as A.We start with Gated Graph Neural Networks (GGNNs) , which we use for finding node embeddings. We denote these functions as h A: M → R |A|×d, where we will refer to the output as the node embedding matrix, H M ∈ R |A|×d. Each row of this node embedding matrix represents the embedding of a particular atom; the rows are ordered by atom-mapped number, a unique number assigned to each atom in a SMILES string. The GGNN form these node embeddings through a recurrent operation on messages, m v, with v ∈ A, so that there is one message associated with each node. At the first time step these messages, m Table 4. GGNNs then update these messages in a recursive nature: DISPLAYFORM0 Where GRU is a Gated Recurrent Unit BID1, the functions N e1 (v), N e2 (v), N e3 (v) index the nodes connected by single, double and triple bonds to node v respectively and f single (·), f double (·) and f triple (·) are linear transformations with learnable parameters. This process continues for S steps (where we choose S = 4). In our implementation, messages and the hidden layer of the GRU have a dimensionality of 101, which is the same as the dimension of the raw atom features. The node embeddings are set as the final message belonging to a node, so that indexing a row of the node embeddings matrix, H M, gives a transpose of the final message vector, ie ). These networks consist of a series of iterative steps where the embeddings for each node are updated using the node's previous embedding and a message from its neighbors. Graph embeddings are q-dimensional vectors, representing a set of nodes, which could for instance be all the nodes in a particular graph . They are formed using a function on the weighted sum of node embeddings. Table 4: Atom features we use as input to the GGNN. These are calculated using RDKit. DISPLAYFORM1 Atom type 72 possible elements in total, one hot Degree One hot Explicit Valence One hot (0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14 Although we found choosing the first entry in the electron path is often the most challenging decision, and greatly benefits from reagent information, we also considered a version of ELECTRO where we fed in the reagent information at every step. In other words, the modules for f add (·) and f remove (·) also received the reagent embeddings calculated by r reagent (·) concatenated onto their inputs. On the mechanism prediction task (Table 2) this gets a slightly improved top-1 accuracy of 78.4% (77.8% before) but a similar top-5 accuracy of 94.6% (94.7% before). On the reaction product prediction task (Table 3) we get 87.5%, 94.4% and 96.0% top-1, 3 and 5 accuracies (87.0%, 94.5% and 95.9% before). The tradeoff is this model is somewhat more complicated and requires a greater number of parameters. We train everything using Adam and an initial learning rate of 0.0001, which we decay after 5 and 9 epochs by a factor of 0.1. We train for a total of 10 epochs. For training we use reaction minibatch sizes of one, although these can consist of multiple intermediate graphs. 
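For readers less familiar with GGNNs, here is a compact sketch of the message-passing recursion described in this appendix (separate linear maps per bond type, a GRU cell shared across nodes, S = 4 rounds). Dimensions and details are placeholders; this is not the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn

class TinyGGNN(nn.Module):
    """Sketch of the node-embedding function h_A: four rounds of gated message passing."""
    def __init__(self, dim=101, steps=4):
        super().__init__()
        self.steps = steps
        # one linear map per bond type: single, double, triple
        self.edge_nets = nn.ModuleList([nn.Linear(dim, dim, bias=False) for _ in range(3)])
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, atom_feats, adjacencies):
        # atom_feats: [num_atoms, dim] raw atom features (cf. Table 4)
        # adjacencies: three [num_atoms, num_atoms] 0/1 matrices, one per bond type
        m = atom_feats
        for _ in range(self.steps):
            incoming = sum(A @ net(m) for A, net in zip(adjacencies, self.edge_nets))
            m = self.gru(incoming, m)     # gated update of each node's message
        return m                          # rows are the per-atom embeddings in H_M
```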
This section provides further details on how we extract reactions with linear electron flow topology, complementing FIG3 in the main text. We start from the USPTO SMILES reaction string and bond changes and from this wish to find the electron path. The first step is to look at the bond changes present in a reaction. Each atom on the ends of the path will be involved in exactly one bond change; the atoms in the middle will be involved in two. We can then line up bond change pairs so that neighboring pairs have one atom in common, with this ordering forming a path. For instance, given the pairs "11-13, 14-10, 10-13" we form the unordered path "14-10, 10-13, 13-11". If we are unable to form such a path, for instance due to two paths being present as a of multiple reaction stages, then we discard the reaction. For training our model we want to find the ordering of our path, so that we know in which direction the electrons flow. To do this we examine the changes of the properties of the atoms at the two ends of our path. In particular, we look at changes in charge and attached implicit hydrogen counts. The gain of negative charge (or analogously the gain of hydrogen as H + ions without changing charge) indicates that electrons have arrived at this atom, implying that this is the end of the path; vice-versa for the start of the path. However, sometimes the difference is not available in the USPTO data, as unfortunately only major products are recorded, and so details of what happens to some of the reactant molecules' atoms may be missing. In these cases we fall back to using an element's electronegativity to estimate the direction of our path, with more electronegative atoms attracting electrons towards them and so being at the end of the path. The next step of filtering checks that the path alternates between add steps (+1) and remove steps (-1). This is done by analyzing and comparing the bond changes on the path in the reactant and product molecules. Reactions that involve greater than one change (for instance going from no bond between two atoms in the reactants to a double bond between the two in the products) can indicate multi-step reactions with identical paths, and so are discarded. Finally, as a last sanity check, we use RDKit to produce all the intermediate and final products induced by our path acting on the reactants, to confirm that the final product that is produced by our extracted electron path is consistent with the major product SMILES in the USPTO dataset. At predict time, as discussed in the main text, we use beam search to find high probable chemicallyvalid paths from our model. Further details are given in Algorithm 2. For ELECTRO this operation takes 0.337s per reaction, although we do not parallelize the molecule manipulation across the different beams, and so the majority of this time (0.193s) is used within RDKit to make intermediate We filter down to the top K most promising actions. for all (ρ, p path) ∈ B t−1 do M ρ = calc_intermediate_mol(M 0, ρ) p c = calc_prob_continue(M ρ) 17:P =P ∪ {(ρ, p path + log(1 − p c))} for all v ∈ A do 19: DISPLAYFORM0 New proposed path is concatenation of old path with new node. v t−1 = last element of ρ 21:B =B ∪ {(ρ, p path + log p c + log calc_prob_action(v, M ρ, v t−1, F remove))} F remove = F remove + 1 mod 2.If on add step change to remove and vice versa. 26: end for 27: 28:P = sort_on_prob(P) Output: Valid completed paths and their respective probabilities, sorted by the latter,P molecules and extract their features. 
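The chaining step illustrated by the "11-13, 14-10, 10-13" example above can be written as a small graph walk: treat each changed bond as an edge, require exactly two endpoint atoms and no branching, and read the path off between the endpoints. The sketch below is a simplified reconstruction (it ignores the self-bond/lone-pair corner case), not the extraction code itself.

```python
from collections import defaultdict

def chain_bond_changes(changes):
    """Order pairwise bond changes into a single linear path of atoms, or return None.

    changes: list of (atom_a, atom_b) map-number pairs,
    e.g. [(11, 13), (14, 10), (10, 13)] -> [11, 13, 10, 14] (the direction is fixed
    afterwards by the charge / hydrogen-count / electronegativity heuristics).
    """
    neighbours = defaultdict(list)
    for a, b in changes:
        neighbours[a].append(b)
        neighbours[b].append(a)
    ends = [atom for atom, nbrs in neighbours.items() if len(nbrs) == 1]
    if len(ends) != 2 or any(len(nbrs) > 2 for nbrs in neighbours.values()):
        return None                       # branching or multiple fragments: not a single LEF path
    path, prev = [ends[0]], None
    while len(path) < len(changes) + 1:
        nxt = [a for a in neighbours[path[-1]] if a != prev]
        if len(nxt) != 1:
            return None                   # disconnected pieces: discard the reaction
        prev = path[-1]
        path.append(nxt[0])
    return path
```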
At test time we take advantage of the embarrassingly parallel nature of the task to parallelize across test inputs. To compute the log likelihood of a reaction (with access to intermediate steps) it takes ELECTRO 0.007s. This section provides further examples of the paths predicted by our model. In FIG11, we wish to show how the model has learnt chemical trends by testing it on textbook reactions. In FIG0 we show further examples taken from the USPTO dataset. Figure 9: Additional typical selectivity examples: Here, the expected product is shown on the right. The blue arrows indicate the top ranked paths from our model, the red arrows indicate other possibly competing but incorrect steps, which the model does not predict to be of high probability. In all cases, our model predicted the correct products. In b) and c), our model correctly recovers the regioselectivity expected in electrophilic aromatic substitutions.
A generative model for reaction prediction that learns the mechanistic electron steps of a reaction directly from raw reaction data.
372
scitldr
When machine learning models are used for high-stakes decisions, they should predict accurately, fairly, and responsibly. To fulfill these three requirements, a model must be able to output a reject option (i.e. say "``I Don't Know") when it is not qualified to make a prediction. In this work, we propose learning to defer, a method by which a model can defer judgment to a downstream decision-maker such as a human user. We show that learning to defer generalizes the rejection learning framework in two ways: by considering the effect of other agents in the decision-making process, and by allowing for optimization of complex objectives. We propose a learning algorithm which accounts for potential biases held by decision-makerslater in a pipeline. Experiments on real-world datasets demonstrate that learning to defer can make a model not only more accurate but also less biased. Even when operated by highly biased users, we show that deferring models can still greatly improve the fairness of the entire pipeline. Recent machine learning advances have increased our reliance on learned automated systems in complex, high-stakes domains such as loan approvals BID6, medical diagnosis BID12, and criminal justice BID22. This growing use of automated decisionmaking has raised questions about the obligations of classification systems. In many high-stakes situations, machine learning systems should satisfy (at least) three objectives: predict accurately (predictions should be broadly effective indicators of ground truth), predict fairly (predictions should be unbiased with respect to different types of input), and predict responsibly (predictions should not be made if the model cannot confidently satisfy the previous two objectives).Given these requirements, we propose learning to defer. When deferring, the algorithm does not output a prediction; rather it says "I Don't Know" (IDK), indicating it has insufficient information to make a responsible prediction, and that a more qualified external decision-maker (DM) is required. For example, in medical diagnosis, the deferred cases would lead to more medical tests, and a second expert opinion. Learning to defer extends the common rejection learning framework (; BID9 in two ways. Firstly, it considers the expected output of the DM on each example, more accurately optimizing the output of the joint DM-model system. Furthermore, it can be used with a variety of training objectives, whereas most rejection learning research focuses solely on classification accuracy. We believe that algorithms that can defer, i.e., yield to more informed decision-makers when they cannot predict responsibly, are an essential component to accountable and reliable automated systems. In this work, we show that the standard rejection learning paradigm (learning to punt) is inadequate, if these models are intended to work as part of a larger system. We propose an alternative decision making framework (learning to defer) to learn and evaluate these models. We find that embedding a deferring model in a pipeline can improve the accuracy and fairness of the pipeline as a whole, particularly if the model has insight into decision makers later in the pipeline. We simulate such a pipeline where our model can defer judgment to a better-informed decision maker, echoing real-world situations where downstream decision makers have more resources or information. 
We propose different formulations of these models along with a learning algorithm for training a model that can work optimally with such a decision-maker. Our experimental on two real-world datasets, from the legal and health domains, show that this algorithm learns models which, through deferring, can work with users to make fairer, more responsible decisions. Notions of Fairness. One of the most challenging aspect of machine learning approaches to fairness is formulating an operational definition. Several works have focused on the goal of treating similar people similarly (individual fairness) and the ing necessity of fair-awareness -showing that it may be essential to give the algorithm knowledge of the sensitive variable BID10; BID11.Some definitions of fairness center around statistical parity BID19 BID20, calibration BID14 BID14 or disparate impact/equalized odds BID7 BID16 BID23 BID13. It has been shown that disparate impact and calibration cannot be simultaneously satisfied BID7 BID23. BID16 present the related notion of "equal opportunity". In a subsequent paper, argue that in practice fairness criteria should be part of the learning algorithm, not post-hoc. BID13 and BID3 develop and implement learning algorithms that integrate equalized odds into learning via regularization. Incorporating IDK. While we are the first to propose learning to defer, some works have examined the "I don't know" (IDK) option (cf. rejection learning, see BID9 for a thorough survey), beginning with Chow (1957; BID8 who studies the tradeoff between error-rate and rejection rate. BID9 develop a framework for integrating IDK directly into learning. KWIK (Knows-What-It-Knows) learning is proposed in BID25 as a theoretical framework. BID0 discuss the difficulty of a model learning what it doesn't know (particularly rare cases), and analyzes how human users can audit such models. propose a cascading model, which can be learned from end-to-end; higher levels can say IDK and pass the decision on to lower levels. Similarly, BID24; BID9; BID2 provide algorithms for saying IDK in classification and provides a statistical overview of the problem, including both "don't know" and "outlier" options. However, none of these works look at the fairness impact of this procedure. A few papers have addressed topics related to both fairness and IDK. BID5 describe fair sequential decision making but do not have an IDK concept, nor do they provide a learning procedure. In BID18, the authors show theoretical connections between KWIK-learning and a proposed method for fair bandit learning. BID13 discuss fairness that can arise out of a mixture of classifiers. However, they do not provide a learning procedure, nor do they address sequential decision making, which we believe is of great practical importance. propose "safety reserves" and "safe fail" options which combine learning with rejection and fairness/safety, but do not suggest how such options may be learned or analyze the effect of a larger decision-making framework. AI Safety. Finally, our work also touches on aspects of the AI safety literature -we provide a method by which a machine learns to work optimally with a human. This is conceptually similar to work such as; , which discuss the situations in which a robot should be compliant/cooperative with a human. The idea of a machine and human jointly producing a fair classifier also relates to BID15, which describes algorithms for machines to align with human values. Previous works in rejection learning (see Sec. 
2) have proposed models that can choose to not classify (say IDK/reject). In these works, the standard method is to optimize the accuracy-IDK tradeoff: how much can a model improve its accuracy on the cases it does classify by saying IDK to some cases?We find this paradigm inadequate. In many of the high-stakes applications this type of work is aimed at, an IDK is not the end of the story. Rather, a decision must be made eventually on every example, whether the model chooses to classify it or not. When the model predicts, the system outputs the model's prediction; when the model says IDK, the system outputs the decision-maker's (DM's) prediction. Rejection learning considers the "IDK Model" to be the system output. Say our model is trained to detect melanoma, and when it says IDK, a human doctor can run an extra suite of medical tests. The model learns that it is very inaccurate at detecting amelanocytic (non-pigmented) melanoma, and says IDK if this might be the case. However, suppose that the doctor is even less accurate at detecting amelanocytic melanoma than the model is. Then, we may prefer the model to make a prediction despite its uncertainty. Conversely, if there are some patients that the doctor knows well, then they may have a more informed, nuanced opinion than the model. Then, we may prefer the model to say IDK more frequently relative to its internal uncertainty. Saying IDK on the wrong examples can also have fairness consequences. If the doctor's decisions bias against a certain group, then it is probably preferable for our model (if it is less biased) to defer less frequently on the cases of that group. In short, the model we train is part of a larger pipeline, and we should be training and evaluating the performance of the pipeline with this model included, rather than solely focusing on the model itself. To enable this, we define a general two-step framework for decision-making FIG0. The first step is an automated model whose parameters we want to learn. The second step is some external decision maker (DM) which we do not have control over; this could be a human user or a more resourceintensive tool. The decision-making flow is done as a cascade, where the first-step model can either predict (positive/negative) or say IDK. If it predicts, the DM trusts the model completely, and outputs that prediction. However, if it says IDK, the DM makes its own decision. We assume that the DM is more powerful than the model we train -reflecting a number of practical scenarios where decision makers later in the chain have more resources for efficiency, security, or contextual reasons.. In Appendix A, we prove that learning to defer is equivalent to rejection learning for a broad class of loss functions, including classification error, if the DM is an oracle. Since our DM is rarely an oracle, learning to defer therefore yields a modeling advantage over rejection learning. Furthermore, learning to defer allows us to optimize a variety of objectives L(Y, Y sys), whereas most rejection learning research focuses on classification error. In the rest of this work, we show how to learn fair IDK models in this framework. The paper proceeds as follows: in Sec. 4 we give some on the fairness setup; in Sec. 5 we describe two methods of learning IDK models and how we may learn them in a fair way; and in Sec. 6 we give a learning algorithm for optimizing models to succeed in this framework. In Sec. 7 we show on two real-world datasets. 
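To make the two-stage flow concrete before the details, here is a minimal sketch of the cascade described above, assuming binary labels and an explicit IDK sentinel; the function names are illustrative rather than taken from the paper's implementation.

```python
# Minimal sketch of the two-stage decision pipeline: the system outputs the
# model's prediction unless the model says IDK, in which case the downstream
# decision-maker (DM) decides. Names here are illustrative.
IDK = "IDK"

def system_output(x, model_predict, dm_predict):
    y_model = model_predict(x)      # 0, 1, or IDK
    if y_model == IDK:              # model defers: the DM makes the final call
        return dm_predict(x)
    return y_model                  # otherwise the DM trusts the model
```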
In fair binary classification, we have data X, labels Y, predictions Ŷ, and a sensitive attribute A, assuming for simplicity that Y, Ŷ, A ∈ {0, 1}. In this work we assume that A is known (fair-aware) and that it is a single binary attribute (e.g., gender, race, age, etc.); extensions to more general settings are straightforward. The aim is twofold: firstly, that the classifier is accurate, i.e., Y_i = Ŷ_i; and secondly, that the classifier is fair with respect to A, i.e., Ŷ does not discriminate unfairly against examples with a particular value of A. Classifiers with fairness constraints provably achieve worse error rates (cf. Chouldechova; BID23). We thus define a loss function which trades off between these two objectives, relaxing the hard constraint proposed by models like BID16 and yielding a regularizer, similar to BID20 BID3. We use disparate impact (DI) as our fairness metric BID7, as it is becoming widely used and also forms the legal basis for discrimination judgements in the U.S. BID1. Here we define a continuous relaxation of DI, using the probabilistic output p: DISPLAYFORM0. Note that constraining DI = 0 is equivalent to equalized odds BID16. If we constrain p ∈ {0, 1}, we say we are using hard thresholds; allowing p ∈ [0, 1] gives soft thresholds. We include a hyperparameter α to balance accuracy and fairness; there is no "correct" way to weight these. When we learn such a model, p is a function of X parametrized by θ. Our regularized fair loss function (L_Fair, or L_F) combines cross-entropy for accuracy with this fairness metric: L_F(Y, A, Ŷ) = CE(Y, Ŷ) + α · DI(Y, A, Ŷ).
5 SAYING IDK: LEARNING TO PUNT
We now discuss two model formulations that can output IDK: ordinal regression, and neural networks with weight uncertainty. Both of these models build on binary classifiers by allowing them to express some kind of uncertainty. In this section, we discuss these IDK models and how to train them to be fair; in the following section we address how to train them to take into account the downstream decision-maker. We extend binary classifiers to include a third option, yielding a model that can classify examples as "positive", "negative" or "IDK". This allows the model to punt, i.e., to output IDK when it prefers not to commit to a positive or negative prediction. We base our IDK models on ordinal regression with three categories (positive, IDK, and negative). These models involve learning two thresholds τ = (t_0, t_1) (see FIG8). We can train with either hard or soft thresholds. If soft, each threshold t_i, i ∈ {0, 1}, is associated with a sigmoid function σ_i of the logistic form 1/(1 + e^(−x)). These thresholds yield an ordinal regression, which produces three values for every score x: DISPLAYFORM1. Using hard thresholds simply restricts (P, I, N) ∈ {0, 1}^3 (a one-hot vector). We can also calculate a score p(x) ∈ [0, 1], which we interpret as the model's prediction disregarding uncertainty. These values are: DISPLAYFORM2. P represents the model's bet that x is "positive", N the bet that x is "negative", and I is the model hedging its bets; this rises with uncertainty. Note that p is minimized at P = N; this is also where I is maximized. At test time, we use the thresholds to partition the examples. On each example, the model outputs a score x ∈ R (the logit for the ordinal regression) and a prediction p. If t_0 < x < t_1, then we replace the model's prediction p with IDK. If x ≤ t_0 or x ≥ t_1, we leave p as is.
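The exact forms of the DI relaxation and of the three ordinal outputs are elided above (the DISPLAYFORM placeholders), so the sketch below fills them in with one standard choice consistent with the surrounding prose: a per-label gap in mean scores between groups for DI (suggested by the stated equivalence to equalized odds), and a two-threshold logistic parameterisation for (P, I, N). Both should be read as assumptions, not the paper's exact equations.

```python
import numpy as np

def soft_disparate_impact(p, y, a):
    """Plausible continuous DI relaxation: gap in mean scores between the two
    groups, conditioned on the true label (assumed, since the text states
    that DI = 0 is equivalent to equalized odds)."""
    gaps = []
    for label in (0, 1):
        g0 = p[(a == 0) & (y == label)].mean()
        g1 = p[(a == 1) & (y == label)].mean()
        gaps.append(abs(g0 - g1))
    return float(np.mean(gaps))

def fair_loss(p, y, a, alpha=1.0, eps=1e-7):
    """L_F = cross-entropy + alpha * DI, as described in the text."""
    ce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return ce + alpha * soft_disparate_impact(p, y, a)

def ordinal_idk_outputs(score, t0, t1):
    """Soft three-way split of a scalar score into (P, I, N) using two
    thresholds t0 < t1; the exact parameterisation is an assumption."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    s0, s1 = sigmoid(score - t0), sigmoid(score - t1)
    P = s1            # bet on "positive"
    N = 1.0 - s0      # bet on "negative"
    I = s0 - s1       # hedging mass
    return P, I, N
```

At test time the hard rule from the text applies directly: output IDK whenever t0 < score < t1, and the prediction p otherwise.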
To encourage fairness, we can learn a separate set of thresholds for each group, (t_{0,A=0}, t_{1,A=0}) and (t_{0,A=1}, t_{1,A=1}), and then apply the appropriate set of thresholds to each example. We can also regularize IDK classifiers for fairness. When training this model, P, I, and N are parametrized functions, with model parameters θ and thresholds τ. Using soft thresholds, the regularized loss is: DISPLAYFORM3. Note that we add a term penalizing I(X), to prevent the trivial solution of always outputting IDK. In addition, we regularize the disparate impact for P(X) and N(X) separately. This was not necessary in the binary case, since these two probabilities would have always summed to 1. We learn soft thresholds end-to-end; for hard thresholds we use a post-hoc thresholding scheme (see Appendix D). We can also take a Bayesian approach to uncertainty by learning a distribution over the weights of a neural network BID4. In this method, we use variational inference to approximate the posterior distribution of the model weights given the data. When sampling from this distribution, we can obtain an estimate of the uncertainty: if sampling several times yields widely varying results, we can state that the model is uncertain on that example. From these samples we compute a consistency score S, which is high when the model is certain. The reciprocal of this (π = 1/S) allows high values to be more uncertain, while π = σ(log(1/S)) (where σ is the logistic function) yields uncertainty values in a bounded range. At test time, the system can threshold this uncertainty; any example with uncertainty beyond a threshold is punted to the DM. We can regularize this Bayesian model to improve fairness as in the standard binary classifier. In the likelihood term for the variational inference, we can simply add the disparate impact regularizer (Eq. 1), making solutions of low disparate impact more likely. With weights w and variational parameters θ, our variational lower bound L_V is then: DISPLAYFORM0.
6 LEARNING TO DEFER
IDK models come with a consequence: when a model punts, the prediction is made instead by some external, possibly biased decision maker (DM), e.g., a human expert. In this work we assume that this DM is possibly biased, but is more accurate than the model; perhaps the DM is a judge with detailed information on repeat offenders, and with more information about the defendant than the model has, or a doctor who can conduct a suite of complex medical tests. Here we introduce a distinction between learning to punt and learning to defer. In learning to punt, the goal is absolute: the model's aim is to identify the examples where it has a low chance of being correct. In learning to defer, the model has some information about the DM and takes this into account in its IDK decisions. Hence the goal is relative: the model's aim is to identify the examples where the DM's chance of being correct is much higher than its own. If the model punts mostly on cases where the DM is very inaccurate or unfair, then the joint predictions made by the model-DM pair may be poor, even if the model's own predictions are good. We can think of learning to punt as DM-unaware learning, and learning to defer as DM-aware learning. To conduct DM-aware learning, we can modify the model presented in Section 5 to take an extra input: the DM's scores on every case in the training set. The model is then optimized for some loss function L(Y, A, X); for our purposes, this loss will be a combination of accuracy and fairness. We propose the following general method, drawing inspiration from mixture-of-experts BID17.
We introduce a mixing parameter π for each example x, which is the probability of deferral; that is, the probability that the DM makes the final decision on the example x, rather than the model. Then, 1 − π is the probability that the model's decision becomes the final output of the system. Let s ∼ Ber(π). Our mixing parameter π corresponds to our model's uncertainty estimate: I(x) in ordinal regression, σ(log(1/S)) in the Bayesian neural network. Let p be the first-stage model's predictions and Ỹ be the DM's predictions. We can express the joint system's predictions p̂ as p̂ = (1 − s) · p + s · Ỹ. In learning this model, we can parametrize p and π by θ = (θ_p, θ_π), which may be shared parameters. We can then define our loss function (L_Defer, or L_D) as an expectation over the Bernoulli variables: L_D(θ) = E_{s∼Ber(π)} [L(Y, A, p̂)]. We call this learning to defer. When optimizing this function, the model learns to recognize when there is relevant information that is not present in the data it has been given by comparing its own predictions to the DM's predictions. Full details of how this loss function is calculated are provided in Appendix E.
Experimental Setup. To evaluate our models, we measure three quantities: classification error, disparate impact, and deferral rate. We train an independent model to simulate predictions made by an external DM. This DM is trained on a version of the dataset with extra attributes, simulating the extra knowledge/accuracy that the DM may have. However, the DM is not trained to be fair. When our model outputs IDK we take the output of the DM instead (see FIG0).
Datasets and Experiment Details. We show results on two datasets: the COMPAS dataset BID22, where we predict a defendant's recidivism (committing a crime while on bail) without discriminating by race, and the Heritage Health dataset, where we predict a patient's Charlson Index (a comorbidity indicator related to likelihood of death) without discriminating by age. For COMPAS, we give the DM the ground truth for a defendant's violent recidivism; for Health, we give the DM the patient's primary condition group. Appendix C contains additional details on both datasets. We trained all models using a one-hidden-layer fully connected neural network with a logistic or ordinal regression on the output, where appropriate. We used 5 sigmoid hidden units for COMPAS and 20 sigmoid hidden units for Health. We used ADAM BID21 for gradient descent. We split the training data into 80% training, 20% validation, and stopped training after 50 consecutive epochs without achieving a new minimum loss on the validation set. In the ordinal regression model, we trained with soft thresholds since we needed the model to be differentiable end to end. In the post-hoc model, we searched threshold space in a manner which did not require differentiability, so we used hard thresholds. This is equivalent to an ordinal regression which produces one-hot vectors, i.e., P, I, N ∈ {0, 1}. See Appendices D and E for additional details on both of these cases.
Displaying Results. Each model contains hyperparameters, such as the coefficients (α, γ) for training and/or post-hoc optimization. We show the results of several models, with various hyperparameter settings, to illustrate how they mediate the tradeoff of accuracy and fairness. Each plotted point is a median of 5 runs at a given hyperparameter setting. We only show points on the Pareto front of the results, i.e., those for which no other point had both better error and DI. Finally, all results are calculated on a held-out test set.
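The deferral objective defined above can be computed in closed form by taking the expectation over the Bernoulli variable s analytically. The sketch below does this with per-example cross-entropy as the inner loss and an optional constant deferral penalty gamma; the fairness term is omitted for brevity, and the interface is illustrative rather than the paper's implementation.

```python
import numpy as np

def defer_loss(p_model, pi, y_dm, y, gamma=0.0, eps=1e-7):
    """Expected loss of the joint model-DM system under s ~ Ber(pi).

    p_model : model scores in [0, 1]
    pi      : per-example deferral probabilities
    y_dm    : DM scores (probabilities or hard 0/1 predictions)
    y       : ground-truth labels
    """
    def ce(pred, target):
        return -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))

    loss_keep  = ce(p_model, y)        # incurred when the model predicts (s = 0)
    loss_defer = ce(y_dm, y) + gamma   # incurred when we defer to the DM (s = 1)
    return float(np.mean((1.0 - pi) * loss_keep + pi * loss_defer))
```

Because the expectation is taken analytically, the objective stays differentiable in both the model scores and the deferral probabilities, which is what allows end-to-end training with soft thresholds.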
(Figure caption: the bottom left hand corner is optimal. The purple star is a baseline model trained only to optimize accuracy; green squares show a model also optimizing fairness; the red diamond optimizes accuracy while allowing IDK; and blue circles are the full model with all three terms. The yellow star shows the second-stage model (DM) alone. Each point is the median of 5 runs on the test set at a given hyperparameter setting.)
(Figure caption: same setting as FIG1; the new results here show the performance of a DM-aware model (defer-fair), depicted by the green triangles. Of particular note is the improvement of this model relative to the punting model, the blue circles.)
In FIG1, we compare punting models to binary models, with and without fairness regularization. These IDK models have not learned to defer, i.e., they did not receive access to the DM scores during training (see FIG2). The results show that, on both datasets, the IDK models achieve a stronger combination of fairness and accuracy than the binary models. Graphically, we observe this by noting that the line of points representing IDK model results is closer to the lower left hand corner of the plot than the line of points representing binary model results. Some of this improvement is driven by the extra accuracy in the DM. However, we note that the model-DM combination achieves a more effective accuracy-fairness tradeoff than any of the three baselines: the accurate but unfair DM; the fair but inaccurate binary model with DI regularization; and the unfair and inaccurate unregularized binary model. Learning to punt can therefore be a valuable tool for anyone who designs or oversees a many-part system: a simple first stage capable of expressing uncertainty can improve the fairness of a more accurate DM. FIG2 demonstrates a clear improvement over the punting models (DM-unaware, Sec. 5). If we have access to examples of past DM behavior, learning to defer provides an effective way to improve the fairness of the entire system. For insight here, we can inspect the different roles IDK plays in their respective loss functions. In the DM-unaware IDK model, punting is penalized at a constant rate, determined by γ. However, in the DM-aware model, deferring is penalized in a way which is dependent on the output of the DM on that example. We can consider the unaware model to be optimizing the expected DM-aware loss function for a DM with constant expected loss on each example, such as an oracle (see Appendix A). Then, we can see any improvement by the DM-aware model as effective identification of the examples on which the expected loss of the DM is unusually high; in other words, identifying the inconsistencies or biases of the DM. One advantage of deferring is that it can account for specific characteristics of a DM. To test this, we considered the case of a DM which is extremely biased (FIG3). We find that the advantage of a deferring model holds in this case, as it compensates for the DM's extreme bias. We can further analyze where the model chooses to defer. Recall that the DM is given extra information; in this case the violent recidivism of the defendant (true for about 7% of the dataset), which is difficult to predict from the other attributes. FIG4 compares the IDKs predicted by a punting model and a deferring model, split by group (race) on the left, and by group and violent recidivism on the right. Both models achieved roughly 27% error; the deferring model had 2% DI and the punting model had 4%.
On the left, we see that the deferring model says IDK to more black people (the pink bar). On the right however, we see that the deferring model says IDK to a higher percentage of violently recidivating non-black people, and a lower percentage of violently recidivating black people. This improves DI -the extra information the DM has received is more fully used on the non-protected group. The punting model cannot adjust this way; the deferring model can, since it receives noisy access to this information through the DM scores in training. In this work, we propose the idea of learning to defer. We propose a model which learns to defer fairly, and show that these models can better navigate the accuracy-fairness tradeoff. We also consider deferring models as one part of a decision pipeline. To this end, we provide a framework for evaluating deferring models by incorporating other decision makers' output into learning. We give an algorithm for learning to defer in the context of a larger system, and show how to train a deferring model to optimize the performance of the pipeline as a whole. This is a powerful, general framework, with ramifications for many complex domains where automated models interact with other decision-making agents. A model with a low deferral rate could be used to cull a large pool of examples, with all deferrals requiring further examination. Conversely, a model with a high deferral rate can be thought of as flagging the most troublesome, incorrect, or biased decisions by a DM, with all non-deferrals requiring further investigation. Automated models often operate within larger systems, with many moving parts. Through deferring, we show how models can learn to predict responsibly within their surrounding systems. Automated models often operate within larger systems, with many moving parts. Through deferring, we show how models can learn to predict responsibly within their surrounding systems. Building models which can defer to more capable decision makers is an essential step towards fairer, more responsible machine learning. In Section 7.1, we discuss that DM-unaware IDK training is similar to DM-aware training, except with a training DM who treats all examples similarly, in some sense. Here we show experimental evidence. The plots in FIG5 compare these two models: DM-unaware, and DM-aware with an oracle at training time, and the standard DM at test time. We can see that these models trade off between error rate and DI in almost an identical manner. We can show that when for a broad class of objective functions (including classification error and cross entropy), these are provably equivalent. Note that this class does not include our fairness regularizer; for that we show the experimental evidence in FIG5.Let Y be the ground truth label, Y DM be the DM output, Y model be the model output, s be the IDK decision indicator (s = 0 for predict, s = 1 for IDK), and Y sys be the output of the joint DM-model system. Suppose these are all binary variables. The standard rejection learning (DM-unaware) loss (which has no concept of Y DM) is BID9: DISPLAYFORM0 We can describe the learning-to-defer system output as DISPLAYFORM1 If we wish to train the system output Y sys to optimize some loss function L(Y, Y sys), we can simply train s and DISPLAYFORM2. This deferring framework is strictly more expressive than the rejection learning model, as it can be used on many different objectives L, while rejection learning is mostly used with classification accuracy. 
We now show that if we take the DM to be an oracle (always outputting the ground truth), learning to defer reduces to rejection learning for a broad class of objective functions, including classification error, cross entropy, and mean squared error.
Proposition. Let L(Y, Y_sys) = Σ_i ℓ(Y_i, Y_sys,i) be the objective we aim to minimize, where Y_i = arg min_{Y'} ℓ(Y_i, Y'). If the DM is an oracle, the learning-to-defer and learning-to-punt objectives are equivalent.
Proof. As in Eq. 8, the standard rejection learning objective is L_punt = Σ_i [(1 − s_i) ℓ(Y_i, Y_model,i) + s_i γ_punt], where the first term encourages a low loss for non-IDK examples and the second term penalizes IDK at a constant rate, with γ_punt ≥ 0. In rejection learning, ℓ is usually classification error (cf. BID9). Note that this objective has no notion of the DM output (Y_DM). If we include a similar γ_defer penalty, the deferring loss function is L_defer = Σ_i [(1 − s_i) ℓ(Y_i, Y_model,i) + s_i (ℓ(Y_i, Y_DM,i) + γ_defer)]. Now, if the DM is an oracle, then ℓ(Y_i, Y_DM,i) = ℓ(Y_i, Y_i), which by assumption is the minimum of ℓ and is the same constant (zero for the losses above) on every example, so the two objectives coincide if we set γ_defer = γ_punt.
The results in Figures 8 and 9 roughly replicate the results from BID3, who also test on the COMPAS dataset. Their results are slightly different for two reasons: 1) we use a 1-layer NN and they use logistic regression; and 2) our training/test splits are different from theirs, as we have more examples in our training set. However, the main takeaway is similar: regularization with a disparate impact term is a good way to reduce DI without making too many more errors. We show results on two datasets. The first is the COMPAS recidivism dataset, made available by ProPublica BID22. This dataset concerns recidivism: whether or not a criminal defendant will commit a crime while on bail. The goal is to predict whether or not the person will recidivate, and the sensitive variable is race (split into black and non-black). We used information about counts of prior charges, charge degree, sex, age, and charge type (e.g., robbery, drug possession). We provide one extra bit of information to our DM: whether or not the defendant violently recidivated. This clearly delineates between two groups in the data: one where the DM knows the correct answer (those who violently recidivated) and one where the DM has no extra information (those who did not recidivate, and those who recidivated non-violently). This simulates a real-world scenario where a DM, unbeknownst to the model, may have extra information on a subset of the data. The simulated DM had a 24% error rate, better than the baseline model's 29% error rate. We split the dataset into 7718 training examples and 3309 test examples. The second dataset is the Heritage Health dataset. This dataset concerns health and hospitalization, particularly with respect to insurance. For this dataset, we chose the goal of predicting the Charlson Index, a comorbidity indicator related to someone's chances of death in the next several years. We binarize the Charlson Index of a patient as 0/greater than 0. We take the sensitive variable to be age and binarize by over/under 70 years old. This dataset contains information on sex, age, lab test, prescription, and claim details. The extra information available to the DM is the primary condition group of the patient (given in the form of a code, e.g., 'SEIZURE', 'STROKE', 'PNEUM'). Again, this simulates the situation where a DM may have extra information on the patient's health that the algorithm does not have access to. The simulated DM had a 16% error rate, better than the baseline model's 21% error rate. We split the dataset into 46769 training examples and 20044 test examples. We now explain the post-hoc threshold optimization search procedure we used.
In theory, any procedure can work. Since it is a very small space (one dimension per threshold = 4 dimensions), we used a random search. We sampled 1000 combinations of thresholds, picked the thresholds which minimized the loss on one half of the test set, and evaluated these thresholds on the other half of the test set. We do this for several values of α, γ in thresholding, as well as several values of α for the original binary model. We did not sample thresholds from the interval uniformly. Rather we used the following procedure. We sampled our lower thresholds from the scores in the training set which were below 0.5, and our upper thresholds from the scores in the training set which were above 0.5. Our sampling scheme was guided by two principles: this forced 0.5 to always be in the IDK region; and this allowed us to sample more thresholds where the scores were more dense. If only choosing one threshold per class, we sampled from the entire training set distribution, without dividing into above 0.5 and below 0.5. This random search was significantly faster than grid search, and no less effective. It was also faster and more effective than gradient-based optimization methods for thresholds -the loss landscape seemed to have many local minima. We go into more detail regarding the regularization term for expected disparate impact in Equation 7. When using soft thresholds, it is not trivial to calculate the expected disparate impact regularizer: DISPLAYFORM0 due to the difficulties involved in taking the expected value of an absolute value. We instead chose to calculate a version of the regularizer with squared underlying terms: DISPLAYFORM1 DISPLAYFORM2 Then, we can expandŶ i asŶ DISPLAYFORM3 whereỸ i ∈ and S(x i) ∈ are the DM and machine predictions respectively. For brevity we will not show the rest of the calculation, but with some algebra we can obtain a closed form expression for DI sof t (Y, A,Ŷ) in terms of Y, A,Ỹ and S.F : LEARNING TO DEFER, BY DEFERRAL RATE Models which rarely defer behave very differently from those which frequently defer. In FIG0, we break down the from Section 7.1 by deferral (or punting) rate. First, we note that even for models with similar deferral rates, we see a similar fairness/accuracy win for the DM-aware models. Next, we can look separately at the low and high deferral rate models. We note that the benefit of DM-aware training is much larger for high deferral rate models. This suggests that the largest benefit of learning to defer comes from a win in fairness, rather than accuracy. Figure 10: Comparison of DM-aware and -unaware learning. Split into 3 bins, low, medium, and high deferral rate for each dataset. Bins are different between datasets due to the differing distributions of deferral rate observed during hyperparameter search.
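As a rough illustration of the Appendix D random search described above, the sketch below samples candidate threshold pairs from the empirical score distribution, keeping 0.5 inside the IDK region; `evaluate_loss` is a placeholder for whatever validation objective is being minimized, and per-group thresholds would simply run this search once per group.

```python
import numpy as np

def posthoc_threshold_search(scores, evaluate_loss, n_samples=1000, seed=0):
    """Random search over (t0, t1): lower thresholds are drawn from scores
    below 0.5 and upper thresholds from scores above 0.5, so sampling is
    denser where scores are denser and 0.5 always lies in the IDK region.
    `evaluate_loss(t0, t1)` is assumed to return a scalar validation loss."""
    rng = np.random.default_rng(seed)
    lower = scores[scores < 0.5]
    upper = scores[scores > 0.5]
    best_pair, best_loss = None, np.inf
    for _ in range(n_samples):
        t0, t1 = rng.choice(lower), rng.choice(upper)
        loss = evaluate_loss(t0, t1)
        if loss < best_loss:
            best_pair, best_loss = (t0, t1), loss
    return best_pair
```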
Incorporating the ability to say I-don't-know can improve the fairness of a classifier without sacrificing too much accuracy, and this improvement magnifies when the classifier has insight into downstream decision-making.
373
scitldr
Hierarchical Task Networks (HTN) generate plans using a decomposition process guided by extra domain knowledge to guide search towards a planning task. While many HTN planners can make calls to external processes (e.g. to a simulator interface) during the decomposition process, this is a computationally expensive process, so planner implementations often use such calls in an ad-hoc way using very specialized domain knowledge to limit the number of calls. Conversely, the few classical planners that are capable of using external calls (often called semantic attachments) during planning do so in much more limited ways by generating a fixed number of ground operators at problem grounding time. In this paper we develop the notion of semantic attachments for HTN planning using semi co-routines, allowing such procedurally defined predicates to link the planning process to custom unifications outside of the planner. The ing planner can then use such co-routines as part of its backtracking mechanism to search through parallel dimensions of the state-space (e.g. through numeric variables). We show empirically that our planner outperforms the state-of-the-art numeric planners in a number of domains using minimal extra domain knowledge. Planning in domains that require numerical variables, for example, to drive robots in the physical world, must represent and search through a space defined by real-valued functions with a potentially infinite domain, range, or both. This type of numeric planning problem poses challenges in two ways. First, the description formalisms BID6 might not make it easy to express the numeric functions and its variables, ing in a description process that is time consuming and error-prone for real-world domains BID17. Second, the planners that try to solve such numeric problems must find efficient strategies to find solutions through this type of state-space. Previous work on formalisms for domains with numeric values developed the Semantic Attachment (SA) construct BID3 ) in classical planning. Semantic attachments were coined by (Weyhrauch 1981) to describe the attachment of an interpretation to a predicate symbol using an external procedure. Such construct allows the planner to reason about fluents where numeric values come from externally defined functions. In this paper, we extend the basic notion of semantic attachment for HTN planning by defining the semantics of the functions used as semantic attachments in a way that allows the HTN search and backtracking mechanism to be substantially more efficient. Our current approach focused on depth-first search HTN implementation without heuristic guidance, with free variables expected to be fullyground before task decomposition continues. Most planners are limited to purely symbolic operations, lacking structures to optimize usage of continuous resources involving numeric values BID9. Floating point numeric values, unlike discrete logical symbols, have an infinite domain and are harder to compare as one must consider rounding errors. One could overcome such errors with delta comparisons, but this solution becomes cumbersome as objects are represented by several numeric values which must be handled and compared as one, such as points or polygons. Planning descriptions usually simplify such complex objects to symbolic values (e.g. p25 or poly2) that are easier to compare. Detailed numeric values are ignored during planning or left to be decided later, which may force replanning BID17. 
Instead of simplifying the description or doing multiple comparisons in the description itself, our goal is to exploit external formalisms orthogonal to the symbolic description. To achieve that we build a mapping from symbols to objects generated as we query semantic attachments. Semantic attachments have already been used in classical planning BID3 ) to unify values just like predicates, and their main advantage is that new users do not need to discern between them and common predicates. Thus, we extend classical HTN planning algorithms and their formalism to support semantic attachment queries. While external function calls map to functions defined outside the HTN description, we implement SAs as semi co-routines BID1, subroutines that suspend and resume their state, to iterate across zero or more values provided one at a time by an external implementation, mitigating the potentially infinite range of the external function. Our contributions are threefold. First, we introduce SAs for HTN planning as a mechanism to describe and evaluate external predicates at execution time. Second, we introduce a symbol-object table to improve the readability of symbolic descriptions and the plans generated, while making it easier to handle external objects and structures. Finally, we empirically compare the ing HTN planner with a modern classical planner BID10 in a number of mixed symbolic/numeric domains showing substantial gains in speed with minimal domain knowledge. Classical planning algorithms must find plans that transform properties of the world from an initial configuration to a goal configuration. Each property is a logical predicate, a tuple with a name and terms that relate to objects of the world. A world configuration is a set of such tuples, which is called a state. To modify a state one must apply an operator, which must fulfill certain predicates at the current state, as preconditions, to add and remove predicates, the effects. Each operator applied creates a new intermediate state. The set of predicates and operators represent the domain, while each group of objects, initial and goal states represent a problem within this domain. In order to achieve the goal state the operators are used as rules to determine in which order they can be applied based on their preconditions and effects. To generalize the operators and simplify description one can use free variables to be replaced by objects available, a process called grounding. Once a state that satisfies the goal is reached, the sequence of ground operators is the plan BID14. A plan is optimal, iff it achieves the best possible quality in some criteria, such as number of operators, time or effort to execute; or satisficing if it reaches the goal without optimizing any metrics. PDDL BID11 ) is the standard description language to describe domains and problems, with features added through requirements that must be supported by the planner. Among such features are numeric-valued fluents to express numeric assignments and updates to the domain, as well as events and processes to express effects that occur in parallel with the operators in a single instant or during a time interval. Hierarchical planning shifts the focus from goal states to tasks to exploit human knowledge about problem decomposition using a hierarchy of domain knowledge recipes as part of the domain description BID13 ). This hierarchy is composed of primitive tasks that map to operators and non-primitive tasks, which are further refined into sub-tasks using methods. 
The decomposition process is repeated until only primitive-tasks mapping to operators remain, which in the plan itself. The goal is implicitly achieved by the plan obtained from the decomposition process. If no decomposition is possible, the task fails and a new expansion is considered one level up in the hierarchy, until there are no more possible expansions for the root task, only then a task decomposition is considered unachievable. Unlike classical planning, hierarchical planning only considers tasks obtained from the decomposition process to solve the problem, which both limits the ability to solve problems and improves execution time by evaluating a smaller number of operators. The HTN planning description is more complex than equivalent classical planning descriptions, since it includes domain knowledge with potentially recursive tasks, being able to solve more problems than classical planning. Classical planners with heuristic functions can solve problems mixing symbolic and numeric values efficiently using a process of discretization. A discretization process converts continuous values into sets of discrete symbols at often predefined granularity levels that vary between different domains. However, if the discretization process is not possible, one must use a planner that also supports numeric features, which requires another heuristic function, description language and usually more computing power due to the number of states generated by numeric features. Numeric features are especially important in domains where one cannot discretize the representation, they usually appear in geometric or physics subproblems of a domain and cannot be avoided during planning. Unlike symbolic approaches where literals are compared for equality during precondition evaluation, numeric value comparison is non-trivial. To avoid doing such comparison for every numeric value the user is left responsible for explicitly defining when one must consider rounding errors, which impacts description time and complexity. For complex object instances (in the object-oriented programming sense), such as polygons that are made of point instances, comparison details in the description are error-prone. Details such as the order of polygon points and floating point errors in their coordinates are usually irrelevant for the planner and the domain designer and should not be part of the domain description as they are part of a lowlevel specification. Such low-level specifications can be implemented by external function calls to improve what can be expressed and computed by a HTN planner. Such functions come with disadvantages, as they are not expected to keep an external state, returning a single value solely based on the provided parameters. While HTN planners can abstract away the numeric details via external function calls, there are limitations to this approach if a particular function is used in a decomposition tree where it is expected to backtrack and try new values from the function call (i.e. if the function is meant to be used to generate multiple terms as part of the search strategy). An external function must return a list of values to account for all possible decompositions so the planner tries one at a time until one succeeds. Generating a complete list is too costly when compared to computing a single value, as the first value could be enough to find a feasible plan. 
A semantic attachment, on the other hand, acts as an external predicate that unifies with one possible set of values at a time, rather than storing a complete list of possible sets of values to be stored in the state structure. This implementation saves time and memory during planning, as only backtracking causes the external co-routine to resume generating new unifications until a plan (or a certain amount of plans) is found. Each SA acts as a black box that simulates part of the environment encoding the in state variables that are often orthogonal to other predicates BID8. While common predicates are stored in a state structure, SAs are computed at execution time by co-routines. With a state that is not only declarative, with parts being procedurally computed, it is possible to minimize memory usage and delegate complex state-based operations to external methods otherwise incompatible or too complex for current planning description languages and planners that require grounding. We abstract away the numeric parts of the planning process encoded through SAs in a layer between the symbolic planner and external libraries. We leverage the abstract architecture of FIG0 with three layers inspired by the work of de BID2. In the symbolic layer we manipulate an anchor symbol as a term, such as polygon1, while in the external layer we manipulate a Polygon instance with N points as a geometric object based on what the selected external library specifies. With this approach we avoid complex representations in the symbolic layer. Instances created by the external layer that must be exposed to the symbolic layer are compared with stored object instances to reuse a previously defined symbol or create a new one, i.e. always represent position 2,5 with p1. This process makes symbol comparison work in the planning layer even for symbols related to complex external objects. The symbol-object table is also used to transform symbols into usable object instances by external function calls and SAs. Such table is global and consistent during the planning process, as each unique symbol will map the same internal object, even if such symbol is discarded in one decomposition branch. Once operations are finished in the external layer the process happens in reverse order, objects are transformed back into symbols that are exposed by free variables. The intermediate layer acts as the foreign function interface between the two layers, and can be modified to accommodate another external library without modifications to the symbolic description. SAs can work as interpreted predicates BID12, evaluating the truth value of a predicate procedurally, and also grounding free variables. SAs are currently limited to be used as method preconditions, which must not contain disjunctions. As only conjunctions and negations are allowed, one can reorder the preconditions during the compilation phase to improve execution time, removing the burden of the domain designer to optimize a mostly declarative description by hand, based on how free variables are used as SA Consider the abstract method example of Listing 1, with two SAs among preconditions, sa1 and sa2. The compiled output shown in Algorithm 1 has both SAs evaluated after common predicates, while function calls happen before or after each SA, based on which variables are ground at that point. In Line 4 the free variables fv1 and fv3 have a ground value that can only be read and not modified by other predicates or SAs. 
In Line 7 every variable is ground and the second function call can be evaluated. Algorithm 1 Compilation phase may reorder preconditions to optimize execution time. DISPLAYFORM0 for each fv1, fv3; state ⊂ {pre1,t1,t2, pre2,fv3,fv1} do 4:for each sa1(t1, fv1) do 5: free variable fv2 6:for each sa2(fv1, fv2) do 7: DISPLAYFORM1 The other limitation of current SA co-routines is that they must unify with a valid value within their internal iterations or have a stop condition, otherwise the HTN process will keep backtracking and evaluating the SA seeking new values and never returning failure. Due to the implementation support of arbitrary-precision arithmetic and accessing data from real-world streams of data/events (which are always new and potentially infinite) a valid value may never be found, and we expect the domain designer to implement mechanisms to limit the maximum number of times a SA might try to evaluate a call (i.e. to have finite stop conditions). This maximum number of tries can be implemented as a counter in the internal state of a SA, which is mostly used to mark values provided to the HTN to avoid repetition, but may achieve side-effects in external structures. The amount of side-effects in both external functions calls and SAs increase the complexity of correctness proofs and the ability to inspect and debug domain descriptions. A common problem when moving in dynamic and continuous environments is to check for object collisions, as agents and objects do not move across tiles in a grid. One solution is to calculate the distance between both objects centroid positions and verify if this value is in a safe margin before considering which action to take. To avoid the many geometric elements involved in this process we can map centroid position symbols to coordinate instances and only check the symbol returned from the symbol-object table, ignoring specific numeric details and comparing a symbol to verify if objects are near enough to collide. This process is illustrated in Figure 2, in which p 0 and p 1 are centroid position symbols that match symbols S 0 and S 1 in the symbol-object table, which maps their value to point objects O 0 and O 1. Such internal objects are used to compute distance and return a symbolic distance in situations where the actual numeric value is unnecessary. Figure 2: The symbol to object table maps symbols to object-oriented programming instances to hide procedural logic from the symbolic layer. In order to find a correct number to match a spatial or temporal constraint one may want to describe the relevant interval and precision to limit the amount of possibilities without having to discretely add each value to the state. Planning descriptions usually do not contain information about numeric intervals and precision, and if there is a way to add such information it is through the planner itself, as global definitions applied to all numeric functions, i.e. timestep, mantissa and exponent digits of DiNo BID15 ). The STEP SA described in Algorithm 2 addresses this problem, unifying t with one number at time inside the given interval with an step. To avoid having complex effects in the move operators one must not update adjacencies between planning objects during the planning process. 
Instead one must update only the Algorithm 2 The STEP SA replaces the pointer of t with a numeric symbol before resuming control to the HTN.1: function STEP(t, min = 0, max = ∞, = 1) 2: for i ← min to max step do 3: t ← symbol(i) 4:yield Resume HTN object position, deleting from the old position and adding the new position. Such positions come from a partitioned space, previously defined by the user. The positions and their adjacencies are either used to generate and store ground operators or stored as part of the state. To avoid both one could implement adjacency as a co-routine while hiding numeric properties of objects, such as position. Algorithm 3 shows the main two cases that appear in planning descriptions. In the first case both symbols are ground, and the co-routine resumes when both objects are adjacent, doing nothing otherwise, failing the precondition. In the second case s2, the second symbol, is free to be unified using s1, the first symbol, and a set of directions D to yield new positions to replace s2 pointer with a valid position, one at a time. In other terms, this co-routine either checks whether s2 is adjacent to s1 or tries to find a value adjacent to s1 binding it to s2 if such value exists. Algorithm 3 This ADJACENT SA implementation can either check if two symbols map to adjacent positions or generate new positions and their symbols to unify s2. DISPLAYFORM0 yield 8: else if s2 is free 9:for each (x, y) ∈ D do 10:nx ← x + x(s1); ny ← y + y(s1) 11:if 0 ≤ nx < WIDTH ∧ 0 ≤ ny < HEIGHT 12:s2 ← symbol(nx, ny) 13: yield We conducted emprirical tests in a machine with Dual 6-core Xeon CPUs @2GHz / 48GB memory, repeating experiments three times to obtain an average. The show a substantial speedup over the original classical description from ENHSP BID16 ) with more complex descriptions. Our HTN implementation is available at github.com/Maumagnaguagno/HyperTensioN U. In the Plant Watering domain BID7 one or more agents move in a 2D grid-based scenario to reach taps to obtain certain amounts of water and pour water in plants spread across the grid. Each agent can carry up to a certain amount of water and each plant requires a certain amount of water to be poured. Many state variables can be represented as numeric fluents, such as the coordinates of each agent, tap and plant, the amount of water to be poured and being carried by each agent, and the limits of how much water can be carried and the size of the grid. There are two common problems in this scenario, the first is to travel to either a tap or a plant, the second is the top level strategy. To avoid considering multiple paths in the decomposition process one can try to move straight to the goal first, and only to the goal in scenarios without obstacles, which simplifies the travel method. To achieve this straightforward movement we modify the ADJACENT SA to consider the goal position also, using an implementation of Algorithm 4. The top level strategy may consider which plant is closer to a tap or closer to an agent, how much water an agent can carry and so on. The simpler top level strategy is to verify how much water must be poured to a plant, travel to a tap, load water, travel to the previously selected plant and pour all the water loaded. Repeating this process until every plant has enough water poured. The travel method description using our modified JSHOP input language is shown in Listing 2 and compiled to Algorithm 5. 
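Since the STEP and ADJACENT listings above appear flattened in the text, here is a rough Python transcription of both semantic attachments as generator-based semi co-routines, together with a toy symbol-object table; the `symbol` helper, the `bindings` dictionary, and the grid constants are illustrative stand-ins for the planner's actual unification machinery, not its API.

```python
_OBJECTS, _SYMBOLS = {}, {}   # toy global symbol-object table

def symbol(value):
    """Intern an external value so it is always exposed as the same symbol."""
    if value not in _SYMBOLS:
        name = "s{}".format(len(_SYMBOLS))
        _SYMBOLS[value] = name
        _OBJECTS[name] = value
    return _SYMBOLS[value]

def step_sa(bindings, t, lo=0.0, hi=float("inf"), eps=1.0):
    """STEP: bind the free variable t to one value at a time in [lo, hi].
    With hi = inf this never fails, so in practice a finite bound or counter
    is needed, as the text warns."""
    i = lo
    while i <= hi:
        bindings[t] = symbol(i)
        yield                 # hand control back to the HTN; resume on backtrack
        i += eps

DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
WIDTH, HEIGHT = 10, 10

def adjacent_sa(bindings, s1, s2):
    """ADJACENT: check adjacency if both position symbols are ground,
    otherwise generate neighbours of s1 and bind them to s2 one at a time."""
    x1, y1 = _OBJECTS[bindings[s1]]
    if bindings.get(s2) is not None:          # both ground: just test
        x2, y2 = _OBJECTS[bindings[s2]]
        if (x2 - x1, y2 - y1) in DIRECTIONS:
            yield
    else:                                     # s2 free: generate neighbours
        for dx, dy in DIRECTIONS:
            nx, ny = x1 + dx, y1 + dy
            if 0 <= nx < WIDTH and 0 <= ny < HEIGHT:
                bindings[s2] = symbol((nx, ny))
                yield
        bindings[s2] = None                   # undo the binding on exhaustion
```

The planner consumes each generator by iterating it: every yield corresponds to one successful unification, and exhausting the generator corresponds to the precondition failing, which triggers backtracking one level up in the decomposition.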
We compare with the fastest satisficing configurations of ENHSP (sat and c sat) in FIG1, which shows that our approach is faster with execution times constantly below 0.01s, with both planners obtaining nonstep-optimal plans. Algorithm 4 In this goal-driven ADJACENT SA the positions are coordinate pairs, and two variables must be unified to a closer to the goal position in an obstacle-free scenario.1: function ADJACENT(x, y, nx, ny, gx, gy) 2: x ← numeric(x); y ← numeric(y) 3: gx ← numeric(gx); gy ← numeric(gy) 4:compare returns -1, 0, 1 for <, =, >, respectively 5: nx ← symbol(x + compare(gx, x)) 6: ny ← symbol(y + compare(gy, y)) 7: yield In the Car Linear domain BID0 the goal is to control the acceleration of a car, which has a minimum and maximum speed, without external forces applied, only moving through one axis to reach its destination, and requiring a small speed to safely stop. The idea is to propagate process effects to state functions, in this case acceleration to speed and speed to position, while being constrained to an acceptable speed and acceleration. The planner must decide when and for how long to increase or decrease acceleration, therefore becoming a temporal planning problem. We use a STEP SA to iterate over the time variable and propagate temporal effects and constraints, i.e. speed at time t. We compare the execution time of our approach with ENHSP with aibr, ENHSP main configuration for planning with autonomous processes, in TAB4. There is no comparison with a native HTN approach, as one would have to add a discrete finite set of time predicates (e.g. time 0) to the initial state description to be selected as time points during planning. For an agent to move in a continuous space it is common practice to simplify the environment to simpler geometric shapes for faster collision evaluation. One possible simplification is to find a circle or sphere that contains each obstacle and use this new shape to evaluate paths. In this context the best path is the one with the shortest lines between initial position and goal, considering bitangent lines between each simplified obstacle plus the amount of arc traversed on their borders, also know as Dubins path BID5. One possible approach for a satisficing plan is to move straight to the goal or to the closest obstacle to the goal and repeat the process. A precondition to such movement is to have a visible target, without any other obstacle between the current and target positions. A second consideration is the entrance direction, as clock or counterclockwise, to avoid cusped edges. Cusped edges are not part of optimal realistic paths, as the moving agent would have to turn around over a single point instead of changing its direction a few degrees to either side. For the problem defined in FIG3 Two possible approaches can be taken to solve the search over circular obstacles using bitangents. One is to rely on an external solver to compute the entire path, a motion planner, which could happen during or after HTN decomposition has taken place. When done during HTN decomposition, as seen in Listing 3, one must call the SEARCH-CIRCULAR function and consume the ing steps of the plan stored in the intermediate layer, not knowing about how close to the goal it could reach in case of failure. When done after HTN decomposition, one must replace certain dummy operators of the HTN plan and replan in case of failure. 
The second approach is to rely on parts of the external search, namely the VIS-IBLE function and CLOSEST SA, to describe continuous search to the HTN planner. The VISIBLE function returns true if from a point on a circle one can see the goal, false otherwise. The CLOSEST SA generates unifications from a circle with an entrance direction to a point in another circle with an exit direction, new points closer to the goal are generated first. Differently from external search, one can deal with failure at any moment, while being able to modify behavior with the same external parts, such as the initial direction the search starts with. Another advantage over the original solution is the ability to ask for N plans, which forces the HTN to backtrack after each plan is found and explore a different path until the amount of plans found equals N or the HTN planner fails to backtrack. A description of such approach is show in Listing 4. The execution time variance between the solutions is not as important as their different approaches to obtain a , from an external greedy best-first search to a HTN depth-first search. The external search also computes bitangents on demand, as bitangent precomputation takes a significant amount of time for many obstacles. We developed a notion of semantic attachments for HTN planners that not only allows a domain expert to easily define external numerical functions for real-world domains, but also provides substantial improvements on planning speed over comparable classical planning approaches. The use of semantic attachments improves the planning speed as one can express a potentially infinite state representation with procedures that can be exploited by a strategy described as HTN tasks. As only semantic attachments present in the path decomposed during planning are evaluated, a smaller amount of time is required when compared with approaches that precompute every possible value during operator grounding. Our description language is arguably more readable than the commonly used strategy of developing a domain specific planner with customized heuristics. Specifically, we allow designers to easily define external functions in a way that is readable within the domain knowledge encoded in HTN methods at design time, and also dynamically generate symbolic representations of external values at planning time, which makes generated plans easier to understand. Our work is the first attempt at defining the syntax and operation of semantic attachments for HTNs, allowing further research on search in SA-enabled domains within HTN planners. Future work includes implementing a cache to reuse previous values from external procedures applied to similar previous states BID4 ) and a generic construction to access such values in the symbolic layer, to obtain data from explored branches outside the state structure, i.e. to hold mutually exclusive predicate information. We plan to develop more domains, with varying levels of domain knowledge and SA usage, to obtain better comparison with other planners and their ing plan quality. The advantage of being able to exploit external implementations conflicts with the ability to incorporate such domain knowledge into heuristic functions, as such knowledge is outside the description. Further work is required to expose possible metrics from a SA to heuristic functions.
An approach to perform HTN planning using external procedures to evaluate predicates at runtime (semantic attachments).
374
scitldr
Recent literature suggests that averaged word vectors followed by simple post-processing outperform many deep learning methods on semantic textual similarity tasks. Furthermore, when averaged word vectors are trained supervised on large corpora of paraphrases, they achieve state-of-the-art on standard STS benchmarks. Inspired by these insights, we push the limits of word embeddings even further. We propose a novel fuzzy bag-of-words (FBoW) representation for text that contains all the words in the vocabulary simultaneously but with different degrees of membership, which are derived from similarities between word vectors. We show that max-pooled word vectors are only a special case of fuzzy BoW and should be compared via fuzzy Jaccard index rather than cosine similarity. Finally, we propose DynaMax, a completely unsupervised and non-parametric similarity measure that dynamically extracts and max-pools good features depending on the sentence pair. This method is both efficient and easy to implement, yet outperforms current baselines on STS tasks by a large margin and is even competitive with supervised word vectors trained to directly optimise cosine similarity. Natural languages are able to encode sentences with similar meanings using very different vocabulary and grammatical constructs, which makes determining the semantic similarity between pieces of text a challenge. It is common to cast semantic similarity between sentences as the proximity of their vector representations. More than half a century since it was first proposed, the Bag-of-Words (BoW) representation (; BID47 BID37 remains a popular baseline across machine learning (ML), natural language processing (NLP), and information retrieval (IR) communities. In recent years, however, BoW was largely eclipsed by representations learned through neural networks, ranging from shallow BID36 BID21 to recurrent BID12 BID53, recursive BID51 BID55, convolutional BID30 BID32, self-attentive BID57 BID9 and hybrid architectures BID19 BID56 BID66.Interestingly, BID5 showed that averaged word vectors BID38 BID44 BID6 BID29 weighted with the Smooth Inverse Frequency (SIF) scheme and followed by a Principal Component Analysis (PCA) post-processing procedure were a formidable baseline for Semantic Textual Similarity (STS) tasks, outperforming deep representations. Furthermore, BID59 and BID58 showed that averaged word vectors trained supervised on large corpora of paraphrases achieve state-of-the-art , outperforming even the supervised systems trained directly on STS.Inspired by these insights, we push the boundaries of word vectors even further. We propose a novel fuzzy bag-of-words (FBoW) representation for text. Unlike classical BoW, fuzzy BoW contains all the words in the vocabulary simultaneously but with different degrees of membership, which are derived from similarities between word vectors. Next, we show that max-pooled word vectors are a special case of fuzzy BoW. Max-pooling significantly outperforms averaging on standard benchmarks when word vectors are trained unsupervised. Since max-pooled vectors are just a special case of fuzzy BoW, we show that the fuzzy Jaccard index is a more suitable alternative to cosine similarity for comparing these representations. By contrast, the fuzzy Jaccard index completely fails for averaged word vectors as there is no connection between the two. 
The max-pooling operation is commonplace throughout NLP and has been successfully used to extract features in supervised systems BID10 BID32 BID31 BID13 BID12 BID15; however, to the best of our knowledge, the present work is the first to study max-pooling of pre-trained word embeddings in isolation and to suggest theoretical underpinnings behind this operation. Finally, we propose DynaMax, a completely unsupervised and non-parametric similarity measure that dynamically extracts and max-pools good features depending on the sentence pair. DynaMax outperforms averaged word vector with cosine similarity on every benchmark STS task when word vectors are trained unsupervised. It even performs comparably to BID58's vectors under cosine similarity, which is a striking as the latter are in fact trained supervised to directly optimise cosine similarity between paraphrases, while our approach is completely unrelated to that objective. We believe this makes DynaMax a strong baseline that future algorithms should aim to beat in order to justify more complicated approaches to semantic similarity. As an additional contribution, we conduct significance analysis of our . We found that recent literature on STS tends to apply unspecified or inappropriate parametric tests, or leave out significance analysis altogether in the majority of cases. By contrast, we rely on nonparametric approaches with much milder assumptions on the test statistic; specifically, we construct bias-corrected and accelerated (BCa) bootstrap confidence intervals BID17 for the delta in performance between two systems. We are not aware of any prior works that apply such methodology to STS benchmarks and hope the community finds our analysis to be a good starting point for conducting thorough significance testing on these types of experiments. The bag-of-words (BoW) model of representing text remains a popular baseline across ML, NLP, and IR communities. BoW, in fact, is an extension of a simpler set-of-words (SoW) model. SoW treats sentences as sets, whereas BoW treats them as multisets (bags) and so additionally captures how many times a word occurs in a sentence. Just like with any set, we can immediately compare SoW or BoW using set similarity measures (SSMs), such as DISPLAYFORM0 These coefficients usually follow the pattern #{shared elements} #{total elements}. From this definition, it is clear that sets with no shared elements have a similarity of 0, which is undesirable in NLP as sentences with completely different words can still share the same meaning. But can we do better?For concreteness, let's say we want to compare two sentences corresponding to the sets A = {'he', 'has', 'a', 'cat'} and B = {'she', 'had', 'one', 'dog'}. The situation here is that A ∩ B = ∅ and so their similarity according to any SSM is 0. Yet, both A and B describe pet ownership and should be at least somewhat similar. If a set contains the word'cat', it should also contain a bit of'pet', a bit of'animal', also a little bit of'tiger' but perhaps not too much of an'airplane'. If both A and B contained'pet','animal', etc. to some degree, they would have a non-zero similarity. This intuition is the main idea behind fuzzy sets: a fuzzy set includes all words in the vocabulary simultaneously, just with different degrees of membership. This generalises classical sets where a word either belongs to a set or it doesn't. We can easily convert a singleton set such as {'cat'} into a fuzzy set using a similarity function sim(w i, w j) between words. 
We simply compute the similarities between'cat' and all the words w j in the vocabulary and treat those values as membership degrees. As an example, the set {'cat'} really becomes {'cat' : 1, 'pet' : 0.9, 'animal' : 0.85, . . ., 'airplane' : 0.05, . . .} Fuzzifying singleton sets is straightforward, but how do we go about fuzzifying the entire sentence {'he', 'has', 'a', 'cat'}? Just as we use the classical union operation ∪ to build bigger sets from smaller ones, we use the fuzzy union to do the same but for fuzzy sets. The membership degree of a word in the fuzzy union is determined as the maximum membership degree of that word among each of the fuzzy sets we want to unite. This might sound somewhat arbitrary: after all, why max and not, say, sum or average? We explain the rationale in Section 2.1; and in fact, we use the max for the classical union all the time without ever noticing it. Indeed, {'cat'} ∪ {'cat'} = {'cat'} and not {'cat' : 2}. This is simply because we computed max = 1 and not sum = 2. Similarly {'cat'} ∪ ∅ = {'cat'} since max = 1 and not avg = 1/2.The key insight here is the following. An object that assigns the degrees of membership to words in a fuzzy set is called the membership function. Each word defines a membership function, and even though'cat' and'dog' are different, they are semantically similar (in terms of cosine similarity between their word vectors, for example) and as such give rise to very similar membership functions. This functional proximity will propagate into the SSMs, thus rendering them a much more realistic model for capturing semantic similarity between sentences. To actually compute the fuzzy SSMs, we need just a few basic tools from fuzzy set theory, all of which we briefly cover in the next section. Fuzzy set theory BID63 is a well-established formalism that extends classical set theory by incorporating the idea that elements can have degrees of membership in a set. Constrained by space, we define the bare minimum needed to compute the fuzzy set similarity measures and refer the reader to BID34 for a much richer introduction. Definition: A set of all possible terms V = {w 1, w 2, . . ., w N} that occur in a certain domain is called a universe. DISPLAYFORM0 Notice how the above definition covers all the set-like objects we discussed so far. If L = {0, 1}, then A is simply a classical set and µ is its indicator (characteristic) function. If L = N ≥0 (non-negative integers), then A is a multiset (a bag) and µ is called a count (multiplicity) function. In literature, A is called a fuzzy set when L =. However, we make no restrictions on the range and call A a fuzzy set even when L = R, i.e. all real numbers. Definition: Let A = (V, µ) and B = (V, ν) be two fuzzy sets. The union of A and B is a fuzzy set A ∪ B = (V, max(µ, ν)). The intersection of A and B is a fuzzy set A ∩ B = (V, min(µ, ν)).Interestingly, there are many other choices for the union and intersection operations in fuzzy set theory. However, only the max-min pair makes these operations idempotent, i.e. such that A∪A = A and A ∩ A = A, just as in the classical set theory. By contrast, it is not hard to verify that neither sum nor average satisfy the necessary axioms to qualify as a fuzzy union or intersection. Definition: Let A = (V, µ) be a fuzzy set. The number |A| = w∈V µ(w) is called the cardinality of a fuzzy set. Fuzzy set theory provides a powerful framework for reasoning about sets with uncertainty, but the specification of membership functions depends heavily on the domain. 
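To ground these definitions, the following toy example (Python, with invented membership degrees standing in for word-vector similarities) computes the fuzzy union, intersection, cardinality, and the resulting fuzzy Jaccard index for fuzzified versions of {'cat'} and {'dog'}; note that the classical Jaccard index of the crisp singletons would be 0.

# Illustrative toy example (made-up membership values) of the fuzzy set operations
# defined above: union = element-wise max, intersection = element-wise min,
# cardinality = sum of memberships, and the resulting fuzzy Jaccard index.

VOCAB = ['cat', 'dog', 'pet', 'animal', 'airplane']

def fuzzy_union(mu, nu):
    return {w: max(mu.get(w, 0.0), nu.get(w, 0.0)) for w in VOCAB}

def fuzzy_intersection(mu, nu):
    return {w: min(mu.get(w, 0.0), nu.get(w, 0.0)) for w in VOCAB}

def cardinality(mu):
    return sum(mu.values())

def fuzzy_jaccard(mu, nu):
    return cardinality(fuzzy_intersection(mu, nu)) / cardinality(fuzzy_union(mu, nu))

# Fuzzified singletons for 'cat' and 'dog'; in the paper these degrees come from
# similarities between word vectors, here they are simply invented for illustration.
cat = {'cat': 1.0, 'dog': 0.8, 'pet': 0.9, 'animal': 0.85, 'airplane': 0.05}
dog = {'cat': 0.8, 'dog': 1.0, 'pet': 0.9, 'animal': 0.8, 'airplane': 0.05}

print(fuzzy_jaccard(cat, dog))  # about 0.88: non-zero even though the crisp sets share no words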
In practice these can be designed by experts or learned from data; below we describe a way of generating membership functions for text from word embeddings. From the algorithmic point of view, any bag-of-words is just a row vector. The i-th term in the vocabulary has a corresponding N-dimensional one-hot encoding e^(i). The vectors e^(i) are orthonormal and in totality form the standard basis of R^N. The BoW vector for a sentence S is simply b_S = Σ_i c_i e^(i), where c_i is the count of the word w_i in S. The first step in creating the fuzzy BoW representation is to convert every term vector e^(i) into a membership vector µ^(i). It really is the same as converting a singleton set {w_i} into a fuzzy set. We call this operation 'word fuzzification', and in matrix form it is simply written as µ^(i) = e^(i) W U^T. [Algorithm 1: DynaMax-Jaccard. Input: word embeddings for the first sentence; word embeddings for the second sentence; an all-zeros vector z ∈ R^d. Output: the fuzzy Jaccard similarity between the two sentences.] Here W ∈ R^(N×d) is the word embedding matrix and U ∈ R^(K×d) is the 'universe' matrix. Let us dissect the above expression. First, we convert a one-hot vector into a word embedding w^(i) = e^(i) W. This is just an embedding lookup and is exactly the same as the embedding layer in neural networks. Next, we compute a vector of similarities µ^(i) = w^(i) U^T between w^(i) and all the K vectors in the universe. The most sensible choice for the universe matrix is the word embedding matrix itself, i.e. U = W. In that case, the membership vector µ^(i) has the same dimensionality as e^(i) but contains similarities between the word w_i and every word in the vocabulary (including itself). The second step is to combine all µ^(i) back into a sentence membership vector µ_S. At this point, it is very tempting to just sum or average over all µ^(i). But we remember: in fuzzy set theory the union of the membership vectors is realised by element-wise max-pooling. In other words, we don't take the average but max-pool instead: µ_S = max(c_1 µ^(1), c_2 µ^(2), ..., c_N µ^(N)). Here the max returns a vector where each dimension contains the maximum value along that dimension across all N input vectors. In NLP this is also known as max-over-time pooling BID10. Note that any given sentence S usually contains only a small portion of the total vocabulary and so most word counts c_i will be 0. If the count c_i is 0, then we have no need for µ^(i) and can avoid a lot of useless computations, though we must remember to include the zero vector in the max-pooling operation. We call the sentence membership vector µ_S the fuzzy bag-of-words (FBoW) and the procedure that converts classical BoW b_S into fuzzy BoW µ_S the 'sentence fuzzification'. Suppose we have two fuzzy BoW µ_A and µ_B. How can we compare them? Since FBoW are just vectors, we can use the standard cosine similarity cos(µ_A, µ_B). On the other hand, FBoW are also fuzzy sets and as such can be compared via fuzzy SSMs. We simply copy the definitions of fuzzy union, intersection and cardinality from Section 2.1 and write down the fuzzy Jaccard index: J(µ_A, µ_B) = Σ_k min(µ_A[k], µ_B[k]) / Σ_k max(µ_A[k], µ_B[k]). Exactly the same can be repeated for other SSMs. In practice we found their performance to be almost equivalent but always better than standard cosine similarity (see Appendix B). So far we considered the universe and the word embedding matrix to be the same, i.e. U = W. This means any FBoW µ_S contains similarities to all the words in the vocabulary and has exactly the same dimensionality as the original BoW b_S. Unlike BoW, however, FBoW is almost never sparse.
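A minimal NumPy sketch of the construction just described: fuzzify a sentence against a universe matrix U and compare two fuzzy BoW with the fuzzy Jaccard index. The last function anticipates the DynaMax variant of Algorithm 1 (introduced below), which stacks the two sentences' own word embeddings into U. The word vectors here are random stand-ins for pre-trained embeddings, and the function names are ours.

import numpy as np

def fuzzify(sent_vecs, U):
    """Max-pool similarities to the universe U, always including the all-zeros vector."""
    sims = sent_vecs @ U.T                           # (num_words, K) membership degrees
    sims = np.vstack([sims, np.zeros(U.shape[0])])   # max-pool with the zero vector
    return sims.max(axis=0)                          # fuzzy BoW of shape (K,)

def fuzzy_jaccard(mu_a, mu_b):
    return np.minimum(mu_a, mu_b).sum() / np.maximum(mu_a, mu_b).sum()

def dynamax_jaccard(x, y):
    """x, y: (num_words, d) word-embedding matrices of the two sentences being compared."""
    U = np.vstack([x, y])                            # dynamic universe built from the pair
    return fuzzy_jaccard(fuzzify(x, U), fuzzify(y, U))

rng = np.random.default_rng(0)
x, y = rng.normal(size=(4, 300)), rng.normal(size=(6, 300))
print(dynamax_jaccard(x, y))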
This motivates us to choose the matrix U with fewer rows that W. For example, the top principal axes of W could work. Alternatively, we could cluster W into k clusters and keep the centroids. Of course, the rows of such U are no longer word vectors but instead some abstract entities. A more radical but completely non-parametric solution is to choose U = I, where I ∈ R d×d is just the identity matrix. Then the word fuzzifier reduces to a word embedding lookup: DISPLAYFORM0 The sentence fuzzifier then simply max-pools all the word embeddings found in the sentence: DISPLAYFORM1 From this we see that max-pooled word vectors are only a special case of fuzzy BoW. Remarkably, when word vectors are trained unsupervised, this simple representation combined with the fuzzy Jaccard index is already a stronger baseline for semantic textual similarity than the averaged word vector with cosine similarity, as we will see in Section 4.More importantly, the fuzzy Jaccard index works for max-pooled word vectors but completely fails for averaged word vectors. This empirically validates the connection between fuzzy BoW representations and the max-pooling operation described above. From the linear-algebraic point of view, fuzzy BoW is really the same as projecting word embeddings on a subspace of R d spanned by the rows of U, followed by max-pooling of the features extracted by this projection. A fair question then is the following. If we want to compare two sentences, what subspace should we project on? It turns out that if we take word embeddings for the first sentence and the second sentence and stack them into matrix U, this seems to be a sufficient space to extract all the features needed for semantic similarity. We noticed this empirically, and while some other choices of U do give better , finding a principled way to construct them remains future work. The matrix U is not static any more but instead changes dynamically depending on the sentence pair. We call this approach Dynamic Max or DynaMax and provide pseudocode in Algorithm 1. Just as SoW is a special case of BoW, we can build the fuzzy set-of-words (FSoW) where the word counts c i are binary. The performance of FSoW and FBoW is comparable, with FBoW being marginally better. For simplicity, we implement FSoW in Algorithm 1 and in all our experiments. As evident from Equation FORMULA5, we use dot product as opposed to (scaled or clipped) cosine similarity for the membership functions. This is a reasonable choice as most unsupervised and some supervised word vectors maximise dot products in their objectives. For further analysis, see Appendix A. Any method that casts semantic similarity between sentences as the proximity of their vector representations is related to our work. Among those, the ones that strengthen bag-of-words by incorporating the sense of similarity between individual words are the most relevant. The standard Vector Space Model (VSM) basis e (i) is orthonormal and so the BoW model treats all words as equally different. BID50 proposed the'soft cosine measure' to alleviate this issue. They build a non-orthogonal basis f (i) where cos(f (i), f (j) ) = sim(w i, w j), i.e. the cosine similarity between the basis vectors is given by similarity between words. Next, they rewrite BoW in comparing different combinations of fuzzy BoW representation (either averaged or max-pooled, or the DynaMax approach) and similarity measure (either cosine or Jaccard). The bolded methods are ones proposed in the present work. 
Note that averaged vectors with Jaccard similarity are not included in these plots, as they consistently perform 20-50 points worse than other methods; this is predicted by our analysis as averaging is not an appropriate union operation in fuzzy set theory. In virtually every case, max-pooled with cosine outperforms averaged with cosine, which is in turn outperformed by max-pooled and DynaMax with Jaccard. An exception to the trend is STS13, for which the SMT subtask dataset is no longer publicly available; this may have impacted the performance when averaged over different types of subtasks.terms of f (i) and compute cosine similarity between transformed representations. However, when cos(DISPLAYFORM0, where w i, w j are word embeddings, their approach is equivalent to cosine similarity between averaged word embeddings, i.e. the standard baseline. BID35 consider L1-normalised bags-of-words (nBoW) and view them as a probability distributions over words. They propose the Word Mover's Distance (WMD) as a special case of the Earth Mover's Distance (EMD) between nBoW with the cost matrix given by pairwise Euclidean distances between word embeddings. As such, WMD does not build any new representations but puts a lot of structure into the distance between BoW. BID65 proposed an alternative version of fuzzy BoW that is conceptually similar to ours but executed very differently. They use clipped cosine similarity between word embeddings to compute the membership values in the word fuzzification step. We use dot product not only because it is theoretically more general but also because dot product leads to significant improvements on the benchmarks. More importantly, however, their sentence fuzzification step uses sum to aggregate word membership vectors into a sentence membership vector. We argue that max-pooling is a better choice because it corresponds to the fuzzy union. Had we used the sum, the representation would have really reduced to a (projected) summed word vector. Lastly, they use FBoW as features for a supervised model but stop short of considering any fuzzy similarity measures, such as fuzzy Jaccard index. BID24 BID32 BID0 proposed and developed soft cardinality as a generalisation to the classical set cardinality. In their framework set membership is crisp, just as in classical set theory. However, once the words are in a set, their contribution to the overall cardinality depends on how similar they are to each other. The intuition is that the set A = {'lion', 'tiger', 'leopard'} should have cardinality much less than 3, because A contains very similar elements. Likewise, the set B = {'lion', 'airplane', 'carrot'} deserves a cardinality closer to 3. We see that the soft cardinality framework is very different from our approach, as it'does not consider uncertainty in the membership of a particular element; only uncertainty as to the contribution of an element to the cardinality of the set' BID24. To evaluate the proposed similarity measures we set up a series of experiments on the established STS tasks, part of the SemEval shared task series 2012-2016 BID1 BID32 BID0 BID4 BID7. The idea behind the STS benchmarks is to measure comparing other BoW-based methods to ones using fuzzy Jaccard similarity. The bolded methods are ones proposed in the present work. We observe that even classical crisp Jaccard is a fairly reasonable baseline, but it is greatly improved by the fuzzy set treatment. 
Both max-pooled word vectors with Jaccard and DynaMax outperform the other methods by a comfortable margin, and the max-pooled version in particular performs astonishingly well given its great simplicity.how well the semantic similarity scores computed by a system (algorithm) correlate with human judgements. Each year's STS task itself consists of several subtasks. By convention, we report the mean Pearson correlation between system and human scores, where the mean is taken across all the subtasks in a given year. Our implementation wraps the SentEval toolkit BID11 and is available on GitHub 1. We also rely on the following publicly available word embeddings: GloVe BID44 trained on Common Crawl (840B tokens); fastText BID6 ) trained on Common Crawl (600B tokens); word2vec BID39;c) trained on Google News, CoNLL BID64, and Book Corpus; and several types of supervised paraphrastic vectors -PSL BID59, PP-XXL BID60, and PNMT BID58.We estimated word frequencies on an English Wikipedia dump dated July 1 st 2017 and calculated word weights using the same approach and parameters as in BID5. Note that these weights can in fact be derived from word vectors and frequencies alone rather than being inferred from the validation set BID18, making our techniques fully unsupervised. Finally, as the STS'13 SMT dataset is no longer publicly available, the mean Pearson correlations reported in our experiments involving this task have been re-calculated accordingly. We first ran a set of experiments validating the insights and derivations described in Section 2. These are presented in FIG0. The main takeaways are the following:• Max-pooled word vectors outperform averaged word vectors in most tasks.• Max-pooled vectors with cosine similarity perform worse than max-pooled vectors with fuzzy Jaccard similarity. This supports our derivation of max-pooled vectors as a special case of fuzzy BoW, which thus should be compared via fuzzy set similarity measures and not cosine similarity (which would be an arbitrary choice).• Averaged vectors with fuzzy Jaccard similarity completely fail. This is because fuzzy set theory tells us that the average is not a valid fuzzy union operation, so a fuzzy set similarity is not appropriate for this representation.• DynaMax shows the best performance across all tasks, possibly thanks to its superior ability to extract and max-pool good features from word vectors. Next we ran experiments against some of the related methods described in Section 3, namely WMD BID35 and soft cardinality BID28 with clipped cosine similarity as an affinity function and the softness parameter p = 1. From FIG1, we see that even classical Jaccard index is a reasonable baseline, but fuzzy Jaccard especially in the DynaMax formulation handily outperforms comparable methods. For context and completeness, we also compare against other popular sentence representations from the literature in TAB0. We include the following methods: BoW with ELMo embeddings BID54. Note that avg-cos refers to taking the average word vector and comparing by cosine similarity, and word2vec refers to the Google News version. Clearly more sophisticated methods of computing sentence representations do not shine on the unsupervised STS tasks when compared to these simple BoW methods with high-quality word vectors and the appropriate similarity metric. † indicates the only STS13 (to our knowledge) that includes the SMT subtask. 
BID46, Skip-Thought, InferSent BID12, Universal Sentence Encoder with DAN and Transformer BID8, and STN multitask embeddings BID54. These experiments lead to an interesting observation: • PNMT embeddings are the current state-of-the-art on STS tasks. PP-XXL and PNMT were trained supervised to directly optimise cosine similarity between average word vectors on very large paraphrastic datasets. By contrast, DynaMax is completely unrelated to the training objective of these vectors, yet has an equivalent performance. Finally, another well-known and high-performing simple baseline was proposed by BID5. However, as also noted by BID41, this method is still offline because it computes the sentence embeddings for the entire dataset, then performs PCA and removes the top principal component. While their method makes more assumptions than ours, nonetheless we make a head-to-head comparison with them in TAB2 using the same word vectors as in BID5, showing that DynaMax is still quite competitive. To strengthen our empirical findings, we provide ablation studies for DynaMax in Appendix C, showing that the different components of the algorithm each contribute to its strong performance. We also conduct significance testing in Appendix D by constructing bias-corrected and accelerated (BCa) bootstrap confidence intervals BID17 for the delta in performance between two algorithms. This constitutes, to the best of our knowledge, the first attempt to study statistical significance on the STS benchmarks with this type of non-parametric analysis that respects the statistical peculiarities of these datasets. In this work we combine word embeddings with classic BoW representations using fuzzy set theory. We show that max-pooled word vectors are a special case of FBoW, which implies that they should be compared via the fuzzy Jaccard index rather than the more standard cosine similarity. We also present a simple and novel algorithm, DynaMax, which corresponds to projecting word vectors onto a subspace dynamically generated by the given sentences before max-pooling over the features. DynaMax outperforms averaged word vectors compared with cosine similarity on every benchmark STS task when word vectors are trained unsupervised. It even performs comparably to supervised vectors that directly optimise cosine similarity between paraphrases, despite being completely unrelated to that objective. Both max-pooled vectors and DynaMax constitute strong baselines for further studies in the area of sentence representations. Yet, these methods are not limited to NLP and word embeddings, but can in fact be used in any setting where one needs to compute similarity between sets of elements that have rich vector representations. We hope to have demonstrated the benefits of experimenting more with similarity metrics based on the building blocks of meaning such as words, rather than complex representations of the final objects such as sentences. In the word fuzzification step the membership values for a word w are obtained through a similarity function sim (w, u (j) ) between the word embedding w and the rows of the universe matrix U, i.e. DISPLAYFORM0 In Section 2.2, sim(w, u (j) ) was the dot product w · u (j) and we could simply write µ = wU T. There are several reasons why we chose a similarity function that takes values in R as opposed to DISPLAYFORM1 First, we can always map the membership values from R to and vice versa using, e.g. the logistic function σ(x) = 1 1+e −ax with an appropriate scaling factor a > 0. 
Intuitively, large negative membership values would imply the element is really not in the set and large positive values mean it is really in the set. Of course, here both'large' and'really' depend on the scaling factor a. In any case, we see that the choice of R vs. is not very important mathematically. Interestingly, since we always max-pool with a zero vector, fuzzy BoW will not contain any negative membership values. This was not our intention, just a by-product of the model. For completeness, let us insist on the range and choose sim (w, u (j) ) to be the clipped cosine similarity max (0, cos(w, u (j) )). This is in fact equivalent to simply normalising the word vectors. Indeed, the dot product and cosine similarity become the same after normalisation, and max-pooling with the zero vector removes all the negative values, so the ing representation is guaranteed to be a-fuzzy set. Our for normalised word vectors are presented in TAB3.After comparing TAB0 we can draw two . Namely, DynaMax still outperforms avg-cos by a large margin even when word vectors are normalised. However, normalisation hurts both approaches and should generally be avoided. This is not surprising since the length of word vectors is correlated with word importance, so normalisation essentially makes all words equally important BID48. In Section 2 we mentioned several set similarity measures such as Jaccard BID23, OtsukaOchiai (; BID42 and Sørensen-Dice (; BID52 coefficients. Here in TAB4, we show that fuzzy versions of the above coefficients have almost identical performance, thus confirming that our are in no way specific to the Jaccard index. Table 5 : Mean Pearson correlation on STS tasks for the ablation studies. As described in Appendix C, it is clear that the three components of the algorithm -the dynamic universe, the max-pooling operation, and the fuzzy Jaccard index -all contribute to the strong performance of DynaMax-Jaccard. The DynaMax-Jaccard similarity (Algorithm 1) consists of three components: the dynamic universe, the max-pooling operation, and the fuzzy Jaccard index. As with any algorithm, it is very important to track the sources of improvements. Consequently, we perform a series of ablation studies in order to isolate the contribution of each component. For brevity, we focus on fastText because it produced the strongest for both the DynaMax and the baseline FIG0 ).The of the ablation study are presented in Table 5. First, we show that the dynamic universe is superior to other sensible choices, such as the identity and random 300 × 300 projection with components drawn from N. Next, we show that the fuzzy Jaccard index beats the standard cosine similarity on 4 out 5 benchmarks. Finally, we find that max considerably outperforms other pooling operations such as averaging, sum and min. We conclude that all three components of DynaMax are very important. It is clear that max-pooling is the top contributing factor, followed by the dynamic universe and the fuzzy Jaccard index, whose contributions are roughly equal. As discussed in Section 4, the core idea behind the STS benchmarks is to measure how well the semantic similarity scores computed by a system (algorithm) correlate with human judgements. In this section we provide detailed and significance analysis for all 24 STS subtasks. Our approach can be formally summarised as follows. 
We assume that the human scores H, the system scores A and the baseline system scores B jointly come from some trivariate distribution P(H, A, B), which is specific to each subtask. To compare the performance of two systems, we compute the sample Pearson correlation coefficients r_AH and r_BH. Since these correlations share the variable H, they are themselves dependent. There are several parametric tests for the difference between dependent correlations; however, their appropriateness beyond the assumptions of normality remains an active area of research BID22 BID62 BID61. The distributions of the human scores in the STS tasks are generally not normal; what's more, they vary greatly depending on the subtask (some are multimodal, others are skewed, etc.). Fortunately, nonparametric resampling-based approaches, such as bootstrap BID16, present an attractive alternative to parametric tests when the distribution of the test statistic is unknown. In our case, the statistic is simply the difference between two correlations, ∆ = r_AH − r_BH. The main idea behind bootstrap is intuitive and elegant: just like a sample is drawn from the population, a large number of 'bootstrap' samples can be drawn from the actual sample. In our case, the dataset consists of triplets D = {(h_i, a_i, b_i) : i = 1, ..., M}. Each bootstrap sample is a result of drawing M data points from D with replacement. Finally, we approximate the distribution of ∆ by evaluating it on a large number of bootstrap samples, in our case ten thousand. We use this information to construct bias-corrected and accelerated (BCa) 95% confidence intervals for ∆. BCa (BID17) is a fairly advanced second-order method that accounts for bias and skewness in the bootstrapped distributions, effects we did observe to a small degree in certain subtasks. Once we have the confidence interval for ∆, the decision rule is then simple: if zero is inside the interval, then the difference between correlations is not significant. Inversely, if zero is outside, we may conclude that the two approaches lead to statistically different results. The location of the interval further tells us which one performs better. The results are presented in TAB6. In summary, out of 72 experiments we significantly outperform the baseline in 56 (77.8%) and underperform in only one (1.39%), while in the remaining 15 (20.8%) the differences are nonsignificant. We hope our analysis is useful to the community and will serve as a good starting point for conducting thorough significance testing on the current as well as future STS benchmarks.
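A minimal sketch of this procedure for a single subtask, using SciPy's bootstrap routine with the BCa method. The aligned arrays h, a, b below are synthetic stand-ins, and the exact resampling machinery the authors used is not specified here, so treat this as an illustration of the decision rule rather than their implementation.

import numpy as np
from scipy.stats import bootstrap, pearsonr

def delta_correlation(h, a, b):
    # The statistic of interest: delta = r_AH - r_BH for one resampled set of triplets.
    return pearsonr(a, h)[0] - pearsonr(b, h)[0]

rng = np.random.default_rng(0)
h = rng.normal(size=200)
a = h + rng.normal(scale=0.5, size=200)   # stand-in for a stronger system
b = h + rng.normal(scale=1.0, size=200)   # stand-in for a weaker system

res = bootstrap((h, a, b), delta_correlation, paired=True, vectorized=False,
                n_resamples=10_000, confidence_level=0.95, method='BCa',
                random_state=rng)
print(res.confidence_interval)  # the difference is deemed significant if the interval excludes 0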
Max-pooled word vectors with fuzzy Jaccard set similarity are an extremely competitive baseline for semantic similarity; we propose a simple dynamic variant that performs even better.
375
scitldr
State-of-the-art in imitation learning are currently held by adversarial methods that iteratively estimate the divergence between student and expert policies and then minimize this divergence to bring the imitation policy closer to expert behavior. Analogous techniques for imitation learning from observations alone (without expert action labels), however, have not enjoyed the same ubiquitous successes. Recent work in adversarial methods for generative models has shown that the measure used to judge the discrepancy between real and synthetic samples is an algorithmic design choice, and that different choices can in significant differences in model performance. Choices including Wasserstein distance and various $f$-divergences have already been explored in the adversarial networks literature, while more recently the latter class has been investigated for imitation learning. Unfortunately, we find that in practice this existing imitation-learning framework for using $f$-divergences suffers from numerical instabilities stemming from the combination of function approximation and policy-gradient reinforcement learning. In this work, we alleviate these challenges and offer a reparameterization of adversarial imitation learning as $f$-divergence minimization before further extending the framework to handle the problem of imitation from observations only. Empirically, we demonstrate that our design choices for coupling imitation learning and $f$-divergences are critical to recovering successful imitation policies. Moreover, we find that with the appropriate choice of $f$-divergence, we can obtain imitation-from-observation algorithms that outperform baseline approaches and more closely match expert performance in continous-control tasks with low-dimensional observation spaces. With high-dimensional observations, we still observe a significant gap with and without action labels, offering an interesting avenue for future work. Imitation Learning (IL) refers to a paradigm of reinforcement learning in which the learning agent has access to an optimal, reward-maximizing expert for the underlying environment. In most work, this access is provided through a dataset of trajectories where each observed state is annotated with the action prescribed by the expert policy. This is often an extremely powerful learning paradigm in contrast to standard reinforcement learning, since not all tasks of interest admit easily-specified reward functions. Additionally, not all environments are amenable to the prolonged and potentially unsafe exploration needed for reward-maximizing agents to arrive at satisfactory policies . While the traditional formulation of the IL problem assumes access to optimal expert action labels, the provision of such information can often be laborious (in the case of a real, human expert) or incur significant financial cost (such as using elaborate instrumentation to record expert actions). Additionally, this restrictive assumption removes a vast number of rich, observation-only data sources from consideration . To bypass these challenges, recent work (; a; b; ; has explored what is perhaps a more natural problem formulation in which an agent must recover an imitation policy from a dataset containing only expert observation sequences. While this Imitation Learning from Observations (ILfO) setting carries tremendous potential, such as enabling an agent to learn complex tasks from watching freely available videos on the Internet, it also is fraught with significant additional challenges. 
In this paper, we show how to incorporate recent advances in generative-adversarial training of deep neural networks to tackle imitation-learning problems and advance the state-of-the-art in ILfO. With these considerations in mind, the overarching goal of this work is to enable sample-efficient imitation from expert demonstrations, both with and without the provision of expert action labels. Figure 1: Evaluating the original f -VIM framework (and its ILfO counterpart, f -VIMO) in the Ant (S ∈ R 111) and Hopper (S ∈ R 11) domains with the Total Variation distance. f -VIM/VIMO-sigmoid denotes our instantiation of the frameworks, detailed in Sections 4.2 and 4.3. Note that, in both plots, the lines for TV-VIM and TV-VIMO overlap. The rich literature on Generative Adversarial Networks has expanded in recent years to include alternative formulations of the underlying objective that yield qualitatively different solutions to the saddle-point optimization problem;;. Of notable interest are the findings of who present Variational Divergence Minimization (VDM), a generalization of the generative-adversarial approach to arbitrary choices of distance measures between probability distributions drawn from the class of f -divergences (; . Applying VDM with varying choices of fdivergence, encounter learned synthetic distributions that can exhibit differences from one another while producing equally realistic samples. Translating this idea for imitation is complicated by the fact that the optimization of the generator occurs via policy-gradient reinforcement learning . Existing work in combining adversarial IL and f -divergences, despite being well-motivated, fails to account for this difference; the end (shown partially in Figure 1, where TV-VIM is the method of, and discussed further in later sections) are imitation-learning algorithms that scale poorly to environments with higher-dimensional observations. In this work, we assess the effect of the VDM principle and consideration of alternative fdivergences in the contexts of IL and ILfO. We begin by reparameterizing the framework of for the standard IL problem. Our version transparently exposes the choices practitioners must make when designing adversarial imitation algorithms for arbitrary choices of f -divergence. We then offer a single instantiation of our framework that, in practice, allows stable training of good policies across multiple choices of f -divergence. An example is illustrated in Figure 1 where our methods (TV-VIM-sigmoid and TV-VIMO-sigmoid) in significantly superior policies. We go on to extend our framework to encapsulate the ILfO setting and examine the efficacy of the ing new algorithms across a range of continuous-control tasks in the MuJoCo domain. Our empirical validate our framework as a viable unification of adversarial imitation methods under the VDM principle. With the assistance of recent advances in stabilizing regularization for adversarial training , improvements in performance can be attained under an appropriate choice of f -divergence. However, there is still a significant performance gap between the recovered imitation policies and expert behavior for tasks with high dimensional observations, leaving open directions for future work in developing improved ILfO algorithms. The algorithms presented in this work fall in with inverse reinforcement learning (IRL) (Ng et al.; ; ; ; ;) approaches to IL. Early successes in this regime tend to rely on hand-engineered feature rep-resentations for success (; ;). 
Only in recent years, with the aid of deep neural networks, has there been a surge in the number of approaches that are capable of scaling to the raw, high-dimensional observations found in real-world control problems (; ; ; ; ;). Our work focuses attention exclusively on adversarial methods for their widespread effectiveness across a range of imitation tasks without requiring interactive experts (; ; ;); at the heart of these methods is the Generative Adversarial Imitation Learning (GAIL) approach which produces high-fidelity imitation policies and achieves state-of-the-art across numerous continuous-control benchmarks by leveraging the expressive power of Generative Adversarial Networks (GANs) for modeling complex distributions over a high-dimensional support. From an IRL perspective, GAIL can be viewed as iteratively optimizing a parameterized reward function (discriminator) that, when used to optimize an imitation policy (generator) via policy-gradient reinforcement learning , allows the agent to shift its own behavior closer to that of the expert. From the perspective of GANs, this is achieved by discriminating between the respective distributions over state-action pairs visited by the imitation and expert policies before training a generator to fool the discriminator and induce a state-action visitation distribution similar to that of the expert. While a large body of prior work exists for IL, recent work has drawn attention to the more challenging problem of imitation learning from observation (; ; ; ; a; b; ; . To more closely resemble observational learning in humans and leverage the wealth of publiclyavailable, observation-only data sources, the ILfO problem considers learning from expert demonstration data where no expert action labels are provided. Many early approaches to ILfO use expert observation sequences to learn a semantic embedding space so that distances between observation sequences of the imitation and expert policies can serve as a cost signal to be minimized via reinforcement learning (; ;). In contrast, Torabi et al. (2018a) introduce Behavioral Cloning from Observation (BCO) which leverages state-action trajectories collected under a random policy to train an inverse dynamics model for inferring the action responsible for a transition between two input states (assuming the two represent a state and next-state pair). With this inverse model in hand, the observation-only demonstration data can be converted into the more traditional dataset of state-action pairs over which standard BC can be applied. Recognizing the previously discussed limitations of BC approaches, Torabi et al. (2018b) introduce the natural GAIL counterpart for ILfO, Generative Adversarial Imitation from Observation (GAIFO); GAIFO is identical to GAIL except the distributions under consideration in the adversarial game are over state transitions (state and next-state pairs), as opposed to stateaction pairs requiring expert action labels. While Torabi et al. (2018b) offer empirical for continuous-control tasks with low-dimensional features as well as raw image observations, GAIFO falls short of expert performance in both settings leaving an open challenge for scalable ILfO algorithms that achieve expert performance across a wide spectrum of tasks. A central question of this work is to explore how alternative formulations of the GAN objective that underlies GAIFO might yield superior ILfO algorithms. For a more in-depth survey of ILfO approaches, we refer readers to. 
We refer readers to the Appendix for a broader overview of prior work. We begin by formulating the problems of imitation learning and imitation learning from observation respectively before taking a closer look at f -divergences and connecting them to imitation learning. We operate within the Markov Decision Process (MDP) formalism defined as a five-tuple M = S, A, R, T, γ where S denotes a (potentially infinite) set of states, A denotes a (potentially infinite) set of actions, R: S × A × S → R is a reward function, T: S × A → ∆(S) is a transition function, and γ ∈ is a discount factor. At each timestep, the agent observes the current state of the world, s t ∈ S, and randomly samples an action according to its stochastic policy π: S → ∆(A). The environment then transitions to a new state according to the transition function T and produces a reward signal according to the reward function R that is communicative of the agent's progress through the overall task. Unlike, the traditional reinforcement learning paradigm, the decision-making problem presented in IL lacks a concrete reward function; in lieu of R, a learner is provided with a dataset of expert demonstrations D = {τ 1, τ 2, . . . τ N} where each τ i = (s i1, a i1, s i2, a i2, . . .) represents the sequence of states and corresponding actions taken by an expert policy, π *. Naturally, the goal of an IL algorithm is to synthesize a policy π using D, along with access to the MDP M, whose behavior matches that of π *. While the previous section outlines several possible avenues for using D to arrive at a satisfactory imitation policy, our work focuses on adversarial methods that build around GAIL . Following from the widespread success of GANs , GAIL offers a highly-performant approach to IL wherein, at each iteration of the algorithm, transitions sampled from the current imitation policy are first used to update a discriminator, D ω (s, a), that acts a binary classifier to distinguish between state-action pairs sampled according to the distributions induced by the expert and student. Subsequently, treating the imitation policy as a generator, policy-gradient reinforcement learning is used to shift the current policy towards expert behavior, issuing higher rewards for those generated state-action pairs that are regarded as belonging to the expert according to D ω (s, a). More formally, this minimax optimization follows as where ρ π * (s, a) and ρ π (s, a) denote the undiscounted stationary distributions over state-action pairs for the expert and imitation policies respectively. Here represents the unconstrained output of a discriminator neural network with parameters ω and σ(v) = (1 + e −x) −1 denotes the sigmoid activation function. Since the imitation policy only exerts control over the latter term in the above objective, the per-timestep reward function maximized by reinforcement learning is given as r(s, a, s) = − log(1 − D ω (s, a)). In practice, an entropy regularization term is often added to the objective when optimizing the imitation policy so as to avoid premature convergence to a suboptimal solution (; ;). In order to accommodate various observation-only data sources and remove the burden of requiring expert action labels, the imitation from observation setting adjusts the expert demonstration dataset D such that each trajectory τ i = (s i1, s i2, . . .) consists only of expert observation sequences. Retaining the goal of recovering an imitation policy that closely resembles expert behavior, Torabi et al. 
(2018b) introduce GAIFO as the natural extension of GAIL for matching the state transition distribution of the expert policy. Note that an objective for matching the stationary distribution over expert state transitions enables the provision of per-timestep feedback while simultaneously avoid the issues of temporal alignment that arise when trying to match trajectories directly. The ing algorithm iteratively finds a solution to the following minimax optimization: where ρ π * (s, s) and ρ π (s, s) now denote the analogous stationary distributions over successive state pairs while D ω (s, s) = σ(V ω (s, s)) represents binary classifier over state pairs. Similar to GAIL, the imitation policy is optimized via policy-gradient reinforcement learning with per-timestep rewards computed according to r(s, a, s) = − log(1 − D ω (s, s)) and using entropy regularization as needed. In this section, we begin with an overview of f -divergences, their connection to GANs, and their impact on IL through the f -VIM framework ) (Section 4.1). We then present an alternative view of the framework that transparently exposes the fundamental choice practictioners must make in order to circumvent practical issues that arise when applying f -VIM to high-dimensional tasks (Section 4.2). We conclude by presenting our approach for ILfO as f -divergence minimization Table of various f -divergences studied in this work as well as the specific choices of activation function g f given by and utilized in. Also shown are the convex conjugates, inverse convex conjugates, and their respective domains. (Section 4.3) followed by a brief discussion of a regularization technique used to stabilize discriminator training in our experiments (Section 4.4). The GAIL and GAIFO approaches engage in an adversarial game where the discriminator estimates the divergence between state-action or state transition distributions according to the JensenShannon divergence . In this work, our focus is on a more general class of divergences, that includes the Jensen-Shannon divergence, known as Ali-Silvey distances or fdivergences (; . For two distributions P and Q with support over a domain X and corresponding continuous densities p and q, we have the f -divergence between them according to: where f : R + → R is a convex, lower-semicontinuous function such that f = 0. As illustrated in Table 1, different choices of function f yield well-known divergences between probability distributions. In order to accommodate the tractable estimation of f -divergences when only provided samples from P and Q, offer an approach for variational estimation of f -divergences. Central to their procedure is the use of the convex conjugate function or Fenchel conjugate (Hiriart-Urruty & Lemaréchal, 2004), f *, which exists for all convex, lower-semicontinuous functions f and is defined as the following supremum: Using the duality of the convex conjugate (f where T is an arbitrary class of functions T : X → dom f * . extend the use of this variational lower bound for GANs that utilize arbitrary f -divergences, or f -GANs. Specifically, the two distributions of interest are the real data distribution P and a synthetic distribution represented by a generative model Q θ with parameters θ. The variational function is also parameterized as T ω acting as the discriminator. 
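The display equations referenced in the paragraphs above (the GAIFO minimax objective, the definition of an f-divergence, and the Fenchel conjugate with its variational lower bound) did not survive extraction. As a reading aid, their standard forms, reconstructed to be consistent with the surrounding text rather than copied verbatim from the original, are:

\min_{\pi}\max_{\omega}\;\mathbb{E}_{(s,s')\sim\rho_{\pi^*}}\!\left[\log D_\omega(s,s')\right]+\mathbb{E}_{(s,s')\sim\rho_{\pi}}\!\left[\log\!\left(1-D_\omega(s,s')\right)\right]\qquad\text{(GAIFO; GAIL is analogous over $(s,a)$ pairs)}

D_f(P\,\|\,Q)=\int_{\mathcal{X}} q(x)\,f\!\left(\frac{p(x)}{q(x)}\right)dx,\qquad f^*(t)=\sup_{u\in\mathrm{dom}_f}\left\{ut-f(u)\right\}

D_f(P\,\|\,Q)\;\geq\;\sup_{T\in\mathcal{T}}\;\mathbb{E}_{x\sim P}\!\left[T(x)\right]-\mathbb{E}_{x\sim Q}\!\left[f^*(T(x))\right]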
This gives rise to the VDM principle which defines the f -GAN objective represent the variational function as X → R represents the unconstrained discriminator network while g f : R → dom f * is an activation function chosen in accordance with the f -divergence being optimized. Table 1 includes the "somewhat arbitrary" but effective choices for g f suggested by and we refer readers to their excellent work for more details and properties of f -divergences and f -GANs. Recently, have formalized the generalization from GAN to f -GAN for the traditional IL problem. They offer the f -Variational Imitation (f -VIM) framework for the specific case of estimating and then minimizing the divergence between state-action distributions induced by expert and imitation policies: min where V ω: S × A → R denotes the discriminator network that will supply per-timestep rewards during the outer policy optimization which itself is carried out over policy parameters θ via policygradient reinforcement learning . In particular, the per-timestep rewards provided to the agent are given according to r(s, a, While do an excellent job of motivating the use of f -divergences for IL (by formalizing the relationship between divergences over trajectory distributions vs. state-action distributions) and connecting f -VIM to existing imitation-learning algorithms, their experiments focus on smaller problems to study the mode-seeking/mode-covering aspects of different f -divergences and the implications of such behavior depending on the multimodality of the expert trajectory distribution. Meanwhile, in the course of attempting to apply f -VIM to large-scale imitation problems, we empirically observe numerical instabilities stemming from function approximation, demanding a reformulation of the framework. In their presentation of the f -VIM framework, retain the choices for activation function g f introduced by for f -GANs. Recall that these choices of g f play a critical role in defining the reward function optimized by the imitation policy on each iteration of f -VIM, r(s, a, s) = f * (g f (V ω (s, a))). It is well known in the reinforcement-learning literature that the nature of the rewards provided to an agent have strong implications on learning success and efficiency . While the activation choices made for f -GANs are suitable given that both optimization problems are carried out by backpropagation, we assert that special care must be taken when specifying these activations (and implicitly, the reward function) for imitation-learning algorithms. A combination of convex conjugate and activation function could induce a reward function that engenders numerical instability or a simply challenging reward landscape, depending on the underlying policy-gradient algorithm utilized . Empirically, we found that the particular activation choices for the KL and reverse KL divergences shown in Table 1 (linear and exponential, respectively) produced imitation-learning algorithms that, in all of our evaluation environments, failed to complete execution due to numerical instabilities caused by exploding policy gradients. In the case of the Total Variation distance, the corresponding f -GAN activation for the variational function is a tanh, requiring a learning agent to traverse a reward interval of [−1, 1] by crossing an intermediate region with reward signals centered around 0. 
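Likewise, the f-GAN objective and the f-VIM objective elided above take the following standard forms (again a reconstruction under the stated parameterization T_\omega(x) = g_f(V_\omega(x)), not a verbatim copy), together with the per-timestep reward that the imitation policy maximizes:

\min_{\theta}\max_{\omega}\;\mathbb{E}_{x\sim P}\!\left[g_f(V_\omega(x))\right]-\mathbb{E}_{x\sim Q_\theta}\!\left[f^*\!\left(g_f(V_\omega(x))\right)\right]\qquad\text{(f-GAN)}

\min_{\theta}\max_{\omega}\;\mathbb{E}_{(s,a)\sim\rho_{\pi^*}}\!\left[g_f(V_\omega(s,a))\right]-\mathbb{E}_{(s,a)\sim\rho_{\pi_\theta}}\!\left[f^*\!\left(g_f(V_\omega(s,a))\right)\right],\qquad r(s,a,s')=f^*\!\left(g_f(V_\omega(s,a))\right)\qquad\text{(f-VIM)}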
To refactor the f-VIM framework so that it more clearly exposes the choice of reward function to practitioners and shifts the issues of reward scale away from the imitation policy, we propose uniformly applying an activation function g_f(v) = (f*)^{-1}(r(v)), where (f*)^{-1}(t) denotes the inverse of the convex conjugate (see Table 1). Here r is effectively a free parameter that can be set according to one of the many heuristics used throughout the field of deep reinforcement learning for maintaining a reasonable reward scale, so long as it obeys the domain of the inverse conjugate dom (f*)^{-1}. In selecting g_f accordingly, the reparameterized saddle-point optimization for f-VIM becomes min_θ max_ω E_{(s,a)∼ρ_π*}[(f*)^{-1}(r(V_ω(s,a)))] − E_{(s,a)∼ρ_πθ}[r(V_ω(s,a))], where the per-timestep rewards used during policy optimization are given by r(s, a, s') = r(V_ω(s, a)). In applying this choice, we shift the undesirable scale of the latter term in VDM towards the discriminator, expecting it to be indifferent since training is done by backpropagation. As one potential instantiation, we consider r(u) = σ(u), where σ(·) denotes the sigmoid function, leading to bounded rewards in the interval (0, 1) that conveniently adhere to dom (f*)^{-1} for almost all of the f-divergences examined in this work (for the Total Variation distance, r must be rescaled to fit the conjugate's domain; see Table 1). In Section 5, we evaluate imitation-learning algorithms with this choice against those using f-VIM with the original f-GAN activations; we find that, without regard for the scale of rewards and the underlying reinforcement-learning problem being solved, the f-GAN activation choices either produce degenerate solutions or completely fail to produce an imitation policy altogether. Applying the variational lower bound and the corresponding f-GAN extension, we can now present our Variational Imitation from Observation (f-VIMO) extension for a general family of ILfO algorithms that leverage the VDM principle in the underlying saddle-point optimization. Since optimization of the generator will continue to be carried out by policy-gradient reinforcement learning, we adhere to our reparameterization of the f-VIM framework and present the f-VIMO objective as min_θ max_ω E_{(s,s')∼ρ_π*}[(f*)^{-1}(r(V_ω(s,s')))] − E_{(s,s')∼ρ_πθ}[r(V_ω(s,s'))], with the per-timestep rewards given according to r(s, a, s') = r(V_ω(s, s')). We present the full approach as Algorithm 1. [Algorithm 1: f-VIMO, abridged. For each iteration i: collect transitions with the current policy π_θi; update the discriminator parameters ω on the f-VIMO objective by backpropagation (Line 4); update θ_i to θ_{i+1} via a policy-gradient update with rewards given by r(V_ω(s, s')) (Line 5); end for (Line 6).] Just as in Section 4.2, we again call attention to Line 5, where the discriminator outputs (acting as individual reward signals) scale the policy gradient, unlike the more conventional discriminator optimization of Line 4 by backpropagation; this key difference is the primary motivator for our specific reparameterization of the f-VIM framework. Just as in the previous section, we take r(u) = σ(u) as a particularly convenient choice of activation given its agreement with the inverse conjugate domains dom (f*)^{-1} for many choices of f-divergence, and we employ this instantiation throughout all of our experiments. We leave the examination of alternative choices for r to future work. The refactored version of f-VIM presented in Section 4.2 fundamentally addresses instability issues that may occur on the generator side of adversarial training; in our experiments, we also examine the utility of regularizing the discriminator side of the optimization for improved stability.
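To make the reparameterization concrete, the sketch below (Python/NumPy) computes the per-timestep reward and the two terms of the reparameterized discriminator objective for a few divergences. The inverse conjugates are the standard f-GAN forms and are stated here as assumptions, since Table 1 is not reproduced in this text; all array values are random stand-ins for discriminator outputs.

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Assumed standard conjugates: KL has f*(t) = exp(t - 1), reverse KL has
# f*(t) = -1 - log(-t), and the GAN divergence has f*(t) = -log(1 - exp(t)).
INVERSE_CONJUGATES = {
    'kl':  lambda y: 1.0 + np.log(y),           # inverse of exp(t - 1)
    'rkl': lambda y: -np.exp(-(y + 1.0)),       # inverse of -1 - log(-t)
    'gan': lambda y: np.log(1.0 - np.exp(-y)),  # inverse of -log(1 - exp(t))
}

def per_timestep_reward(v):
    """Reward handed to the policy-gradient learner: r(V_omega) = sigmoid(V_omega)."""
    return sigmoid(v)

def discriminator_objective(v_expert, v_policy, divergence='rkl'):
    """E_expert[f*^{-1}(r(V))] - E_policy[r(V)], maximized over the discriminator parameters."""
    f_star_inv = INVERSE_CONJUGATES[divergence]
    return f_star_inv(sigmoid(v_expert)).mean() - sigmoid(v_policy).mean()

rng = np.random.default_rng(0)
v_expert = rng.normal(size=64)   # discriminator outputs on expert transitions
v_policy = rng.normal(size=64)   # discriminator outputs on imitation transitions
print(discriminator_objective(v_expert, v_policy, 'rkl'))
print(per_timestep_reward(v_policy)[:3])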
Following from a line of work examining the underlying mathematical properties of GAN optimization, we opt for a simple gradient-based regularization which (for f-VIMO) augments the discriminator loss with a regularization term R(ω) that penalizes the norm of the discriminator's gradients, where ψ is a hyperparameter controlling the strength of the regularization. The form of this specific penalty follows from that line of analysis; intuitively, its purpose is to disincentivize the discriminator from producing a non-zero gradient that shifts away from the Nash equilibrium of the minimax optimization when presented with a generator that perfectly matches the true data distribution. While originally developed for traditional GANs and shown to empirically exhibit stronger convergence properties than Wasserstein GANs, this effect is still desirable for the adversarial IL setting, where the reward function (discriminator) used for optimizing the imitation policy should stop changing once the expert state-transition distribution has been matched. In practice, we compare f-VIM and f-VIMO both with and without the use of this regularization term and find that R(ω) can improve the stability and convergence of f-VIMO across almost all domains. We examine four instantiations of the f-VIM and f-VIMO frameworks (as presented in Sections 4.2 and 4.3) corresponding to imitation algorithms with the following choices of f-divergence: GAN, Kullback-Leibler, reverse KL, and Total Variation. We conduct our evaluation across four MuJoCo environments of varying difficulty: Ant, Hopper, HalfCheetah, and Walker (see the Appendix for more details on individual environments). The core questions we seek to answer through our empirical results are as follows: 1. What are the implications of the choice of activation for the variational function in f-VIM on imitation policy performance? 2. Do f-divergences act as a meaningful axis of variation for IL and ILfO algorithms? 3. What is the impact of discriminator regularization on the stability and convergence properties of f-VIM/f-VIMO? 4. How does the impact of different f-divergences vary with the amount of expert demonstration data provided? To answer the first three questions above, we report the average total reward achieved by the imitation policy throughout the course of learning, with rewards as defined by the corresponding OpenAI Gym environment. Shading in all plots denotes 95% confidence intervals computed over 10 random trials with 10 random seeds. Expert demonstration datasets of 50 trajectories were collected from agents trained via Proximal Policy Optimization (PPO); 20 expert demonstrations were randomly subsampled at the start of learning and held fixed for the duration of the algorithm. We also utilize PPO as the underlying reinforcement-learning algorithm for training the imitation policy, with a clipping parameter of 0.2, advantage normalization, entropy regularization coefficient 1e-3, and the Adam optimizer. As in prior work, we use a discount factor of γ = 0.995 and apply Generalized Advantage Estimation with parameter λ = 0.97. We run both f-VIM and f-VIMO for a total of 500 iterations, collecting 50000 environment samples per iteration. The policy and discriminator architectures are identical: two separate multi-layer perceptrons, each with two hidden layers of 100 units separated by tanh nonlinearities.
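For reference back to the regularizer R(ω) of Section 4.4: the sketch below (PyTorch) implements the kind of gradient penalty the text describes. The exact equation is not reproduced in this extraction, so the form here, a squared-gradient penalty on expert samples scaled by ψ/2 in the style of Mescheder et al., is an assumption that matches the stated intuition rather than a verbatim copy of the authors' term.

import torch

def r1_penalty(discriminator, expert_batch, psi=10.0):
    # Assumed R1-style penalty: (psi / 2) * E_expert[ || grad_x V_omega(x) ||^2 ].
    expert_batch = expert_batch.clone().requires_grad_(True)
    scores = discriminator(expert_batch)               # V_omega(s, s') on expert pairs
    grads, = torch.autograd.grad(outputs=scores.sum(),
                                 inputs=expert_batch,
                                 create_graph=True)    # keep graph so the penalty is differentiable
    return 0.5 * psi * grads.pow(2).sum(dim=1).mean()

# Toy discriminator over concatenated (s, s') pairs; 22 = two stacked 11-dimensional
# Hopper observations, purely for illustration.
disc = torch.nn.Sequential(torch.nn.Linear(22, 100), torch.nn.Tanh(),
                           torch.nn.Linear(100, 1))
expert_pairs = torch.randn(32, 22)
penalty = r1_penalty(disc, expert_pairs)
print(penalty.item())  # added to the (negated) discriminator objective before backpropagation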
A grid search was used for determining the initial learning rate, number of PPO epochs, and number of epochs used for discriminator training (please see the Appendix for more details) and we report for the best hyperparameter settings. To address our final question, we take the best hyperparameter settings recovered when given 20 expert demonstrations and re-run all algorithms with {1, 5, 10, 15} expert demonstrations that are randomly sampled at the start of each random trial and held fixed for the duration of the algorithm. We then record the average return of the final imitation policy for each level of expert demonstration. To highlight the importance of carefully selecting the variational function activation g f and validate our modifications to the f -VIM framework, we present in Figure 2 comparing to the original f -VIM framework of and its natural ILfO counterpart. Activation functions for the original methods are chosen according to the choices outlined in;. In our experiments using the KL and reverse KL divergences, we found that none of the trials reached completion due to numerical instabilities caused by exploding policy gradients. Consequently, we only present for the Total Variation distance. We observe that under the original f -GAN activation selection, we fail to produce meaningful imitation policies with learning stagnating after 100 iterations or less. As previously mentioned, we suspect that this stems from the use of tanh with TV leading to a dissipating reward signal. We present in Figure 3 to assess the utility of varying the choice of divergence in f -VIM and f -VIMO across each domain. In considering the impact of f -divergence choice, we find that most of the domains must be examined in isolation to observe a particular subset of f -divergences that stand out. In the IL setting, we find that varying the choice of f -divergence can yield different learning curves but, ultimately, produce near-optimal (if not optimal) imitation policies across all domains. In contrast, we find meaningful choices of f -divergence in the ILfO setting including {KL, TV} for Hopper, RKL for HalfCheetah, and {GAN, TV} for Walker. We note that the use of discriminator regularization per is crucial to achieving these performance gains, whereas the regularization generally fails to help performance in the IL setting. This finding is supportive of the logical intuition that ILfO poses a fundamentally more-challenging problem than standard IL. As a negative , we find that the Ant domain (the most difficult environment with S ⊂ R 111 and A ⊂ R 8) still poses a challenge for ILfO algorithms across the board. More specifically, we observe that discriminator regularization hurts learning in both the IL and ILfO settings. While the choice of RKL does manage to produce a marginal improvement over GAIFO, the gap between existing stateof-the-art and expert performance remains unchanged. It is an open challenge for future work to either identify the techniques needed to achieve optimal imitation policies from observations only or characterize a fundamental performance gap when faced with sufficiently large observation spaces. In Figure 4, we vary the total number of expert demonstrations available during learning and observe that certain choices of f -divergences can be more robust in the face of less expert data, both in the IL and ILfO settings. We find that KL-VIM and TV-VIM are slightly more performant than GAIL when only provided with a single expert demonstration. 
Notably, in each domain we see that certain choices of divergence for f -VIMO do a better job of residing close to their f -VIM counterparts suggesting that future improvements may come from examining f -divergences in the small-data regime. This idea is further exemplified when accounting for collected while using discriminator regularization . We refer readers to the Appendix for the associated learning curves. Our work leaves many open directions for future work to close the performance gap between student and expert policies in the ILfO setting. While we found the sigmoid function to be a suitable instantiation of our framework, exploring alternative choices of variational function activations could prove useful in synthesizing performant ILfO algorithms. Alternative choices of f -divergences could lead to more substantial improvements than the choices we examine in this paper. Moreover, while this work has a direct focus on f -divergences, Integral Probability Metrics (IPMs) (Müller, 1997;) represent a distinct but well-established family of divergences between probability distributions. The success of Total Variation distance in our experiments, which doubles as both a fdivergence and IPM , is suggestive of future work building IPM-based ILfO algorithms. In this work, we present a general framework for imitation learning and imitation learning from observations under arbitrary choices of f -divergence. We empirically validate a single instantiation of our framework across multiple f -divergences, demonstrating that we overcome the shortcomings of prior work and offer a wide class of IL and ILfO algorithms capable of scaling to larger problems. (; ;), where an agent must leverage demonstration data (typically provided as trajectories, each consisting of expert state-action pairs) to produce an imitation policy that correctly captures the demonstrated behavior. Within the context of LfD, a finer distinction can be made between behavioral cloning (BC) and inverse reinforcement learning (IRL) (Ng et al.; ; ; ; ;) approaches; BC approaches view the demonstration data as a standard dataset of input-output pairs and apply traditional supervisedlearning techniques to recover an imitation policy. Alternatively, IRL-based methods synthesize an estimate of the reward function used to train the expert policy before subsequently applying a reinforcement-learning algorithm to recover the corresponding imitation policy. Although not a focus of this work, we also acknowledge the myriad of approaches that operate at the intersection of IL and reinforcement learning or augment reinforcement learning with IL (; ; ; ; ;). While BC approaches have been successful in some settings (; ;), they are also susceptible to failures stemming from covariate shift where minute errors in the actions of the imitation policy compound and force the agent into regions of the state space not captured in the original demonstration data. While some preventative measures for covariate shift do exist (b), a more principled solution can be found in methods like DAgger and its descendants (; ;) that remedy covariate shift by querying an expert to provide on-policy action labels. It is worth noting, however, that these approaches are only feasible in settings that admit such online interaction with an expert and, even then, failure modes leading to poor imitation policies do exist (a). The algorithms presented in this work fall in with IRL-based approaches to IL. 
Early successes in this regime tend to rely on hand-engineered feature representations for success (; ;). Only in recent years, with the aid of deep neural networks, has there been a surge in the number of approaches that are capable of scaling to the raw, highdimensional observations found in real-world control problems (; ; ; ; ;). Our work focuses attention exclusively on adversarial methods for their widespread effectiveness across a range of imitation tasks without requiring interactive experts (; ; ;); at the heart of these methods is the Generative Adversarial Imitation Learning (GAIL) approach which produces high-fidelity imitation policies and achieves state-of-the-art across numerous continuous-control benchmarks by leveraging the expressive power of Generative Adversarial Networks (GANs) for modeling complex distributions over a high-dimensional support. From an IRL perspective, GAIL can be viewed as iteratively optimizing a parameterized reward function (discriminator) that, when used to optimize an imitation policy (generator) via policy-gradient reinforcement learning , allows the agent to shift its own behavior closer to that of the expert. From the perspective of GANs, this is achieved by discriminating between the respective distributions over state-action pairs visited by the imitation and expert policies before training a generator to fool the discriminator and induce a state-action visitation distribution similar to that of the expert. While a large body of prior work exists for IL, numerous recent works have drawn attention to the more challenging problem of imitation learning from observation (; ; ; ; a; b; ; . In an effort to more closely resemble observational learning in humans and leverage the wealth of publicly-available, observation-only data sources, the ILfO problem considers learning from expert demonstration data where no expert action labels are provided. Many early approaches to ILfO use expert observation sequences to learn a semantic embedding space so that distances between observation sequences of the imitation and expert policies can serve as a cost signal to be minimized via reinforcement learning (; ; choice and the multimodality of the expert trajectory distribution, we provide an empirical evaluation of their f -VIM framework across a range of continous control tasks in the Mujoco domain . Empirically, we find that some of the design choices f -VIM inherits from the original f -GAN work are problematic when coupled with adversarial IL and training of the generator by policy-gradient reinforcement learning, instead of via direct backpropagation as in traditional GANs. Consequently, we refactor their framework to expose this point and provide one practical instantiation that works well empirically. We then go on to extend the f -VIM framework to the IFO problem (f -VIMO) and evaluate the ing algorithms empirically against the state-of-the-art, GAIFO. Here we provide details of the MuJoCo environments used in our experiments as well as the details of the hyperparameter search conducted for all algorithms (IL and ILfO) presented. All environments have continuous observation and action spaces of varying dimensionality (as shown below). All algorithms evaluated in each environment were trained for a total of 500 iterations, collecting 50, 000 environment transitions per iteration. 
C.3 f -DIVERGENCE VARIATIONAL BOUND SWAP Throughout this paper, we advocate for the use of the following variational lower bound to the f -divergence for both f -VIM and f -VIMO: In particular, we value the above form as it clearly exposes the choice of reward function for the imitation policy as a free parameter that, in practice, has strong implications for the stability and convergence of adversarial IL/ILfO algorithms. Alternatively, one may consider appealing to the original lower bound of , used in f -GANs unmodified, but swapping the positions of the two distributions: Consequently, the term in this lower bound pertaining to the imitation policy is now similar to that of the bound in Equation 11; namely, an almost arbitrary activation function, g f, applied to the output of the variational function (discriminator) V ω. The difference being that the codomain of g f must obey the domain of the convex conjugate, f *, while the codomain of r must respect the domain of the inverse convex conjugate, f * −1. We evaluate these two choices empirically below for the specific choice of the KL-divergence in the Ant and Hopper domains (the two most difficult domains of our evaluation). We find that the original unswapped bound in Equation 11 used throughout this paper outperforms the variants with the distributions swapper, for both the IL and ILfO settings. Crucially, we find that the KL-VIM in the Ant domain no longer achieves expert performance while optimizing the swapped bound.
The overall goal of this work is to enable sample-efficient imitation from expert demonstrations, both with and without the provision of expert action labels, through the use of f-divergences.
376
scitldr
Momentary fluctuations in attention (perceptual accuracy) correlate with neural activity fluctuations in primate visual areas. Yet, the link between such momentary neural fluctuations and attention state remains to be shown in the human brain. We investigate this link using a real-time cognitive brain machine interface (cBMI) based on steady state visually evoked potentials (SSVEPs): occipital EEG potentials evoked by rhythmically flashing stimuli. Tracking momentary fluctuations in SSVEP power, in real-time, we presented stimuli time-locked to when this power reached (predetermined) high or low thresholds. We observed a significant increase in discrimination accuracy (d') when stimuli were triggered during high (versus low) SSVEP power epochs, at the location cued for attention. Our results indicate a direct link between attention's effects on perceptual accuracy and neural gain in EEG-SSVEP power, in the human brain. To identify efficient acquisition software, we compared the round-trip (closed-loop) delay for four acquisition software packages: ActiView, Lab Streaming Layer, OpenVibe and Fieldtrip (Fig. 1B). We measured round-trip delay by varying the EEG+event packet size across different sampling frequencies, fit a line to the data, and estimated the intercept, which is a measure of the overhead. We observed that Fieldtrip produced the least overhead of 10.98 ± 0.50 ms. EEG data recording: Scalp EEG recordings were performed with 41 occipital electrodes out of the total 128 electrodes. The data was streamed in real-time using the Fieldtrip buffer at 128 Hz. EEG data was also stored at 4096 Hz for offline analyses. Spectral analysis was performed using the Chronux 2.12 toolbox; EEGLAB 13.6.5b functions were used to generate the topographical plots. Finally, the EEG data was re-referenced to the average reference. Discrimination accuracy (d') was higher, across the population, for AI-high trials as compared to AI-low trials (p < 0.01).
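The overhead estimate described above (the intercept of a line fit to round-trip delay as a function of packet size) amounts to a simple least-squares fit; the packet sizes and delays below are hypothetical placeholders rather than measured values.

```python
import numpy as np

def estimate_overhead(packet_sizes, round_trip_ms):
    """Fit delay = slope * packet_size + intercept; the intercept is the fixed software overhead."""
    slope, intercept = np.polyfit(packet_sizes, round_trip_ms, deg=1)
    return intercept, slope

# Hypothetical measurements for one acquisition package:
sizes = np.array([8, 16, 32, 64, 128])               # samples per EEG+event packet
delays = np.array([12.1, 13.0, 14.9, 18.6, 26.3])    # round-trip delay in ms
overhead, per_sample = estimate_overhead(sizes, delays)
print(f"estimated overhead: {overhead:.2f} ms ({per_sample:.3f} ms per sample)")
```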
With a cognitive brain-machine interface, we show a direct link between attentional effects on perceptual accuracy and neural gain in EEG-SSVEP power, in the human brain.
377
scitldr
Recent studies have demonstrated the vulnerability of deep convolutional neural networks against adversarial examples. Inspired by the observation that the intrinsic dimension of image data is much smaller than its pixel space dimension and the vulnerability of neural networks grows with the input dimension, we propose to embed high-dimensional input images into a low-dimensional space to perform classification. However, arbitrarily projecting the input images to a low-dimensional space without regularization will not improve the robustness of deep neural networks. We propose a new framework, Embedding Regularized Classifier (ER-Classifier), which improves the adversarial robustness of the classifier through embedding regularization. Experimental on several benchmark datasets show that, our proposed framework achieves state-of-the-art performance against strong adversarial attack methods. Deep neural networks (DNNs) have been widely used for tackling numerous machine learning problems that were once believed to be challenging. With their remarkable ability of fitting training data, DNNs have achieved revolutionary successes in many fields such as computer vision, natural language progressing, and robotics. However, they were shown to be vulnerable to adversarial examples that are generated by adding carefully crafted perturbations to original images. The adversarial perturbations can arbitrarily change the network's prediction but often too small to affect human recognition . This phenomenon brings out security concerns for practical applications of deep learning. Two main types of attack settings have been considered in recent research (Goodfellow et al.; a; ;): black-box and white-box settings. In the black-box setting, the attacker can provide any inputs and receive the corresponding predictions. However, the attacker cannot get access to the gradients or model parameters under this setting; whereas in the white-box setting, the attacker is allowed to analytically compute the model's gradients, and have full access to the model architecture and weights. In this paper, we focus on defending against the white-box attack which is the harder task. Recent work presented both theoretical arguments and an empirical one-to-one relationship between input dimension and adversarial vulnerability, showing that the vulnerability of neural networks grows with the input dimension. Therefore, reducing the data dimension may help improve the robustness of neural networks. Furthermore, a consensus in the highdimensional data analysis community is that, a method working well on the high-dimensional data is because the data is not really of high-dimension . These high-dimensional data, such as images, are actually embedded in a low dimensional space. Hence, carefully reducing the input dimension may improve the robustness of the model without sacrificing performance. Inspired by the observation that the intrinsic dimension of image data is actually much smaller than its pixel space dimension and the vulnerability of a model grows with its input dimension , we propose a defense framework that embeds input images into a low-dimensional space using a deep encoder and performs classification based on the latent embedding with a classifier network. However, an arbitrary projection does not guarantee improving the robustness of the model, because there are a lot of mapping functions including non-robust ones from the raw input space to the low-dimensional space capable of minimizing the classification loss. 
To constrain the mapping function, we employ distribution regularization in the embedding space leveraging optimal transport theory. We call our new classification framework Embedding Regularized Classifier (ER-Classifier). To be more specific, we introduce a discriminator in the latent space which tries to separate the generated code vectors from the encoder network and the ideal code vectors sampled from a prior distribution, i.e., a standard Gaussian distribution. Employing a similar powerful competitive mechanism as demonstrated by Generative Adversarial Networks , the discriminator enforces the embedding space of the model to follow the prior distribution. In our ER-Classifier framework, the encoder and discriminator structures together project the input data to a low-dimensional space with a nice shape, then the classifier performs prediction based on the lowdimensional embedding. Based on the optimal transport theory, the proposed ER-Classifier minimizes the discrepancy between the distribution of the true label and the distribution of the framework output, thus only retaining important features for classification in the embedding space. With a small embedding dimension, the effect of the adversarial perturbation is largely diminished through the projection process. We compare ER-Classifier with other state-of-the-art defense methods on MNIST, CIFAR10, STL10 and Tiny Imagenet. Experimental demonstrate that our proposed ER-Classifier outperforms other methods by a large margin. To sum up, this paper makes the following three main contributions: • A novel unified end-to-end robust deep neural network framework against adversarial attacks is proposed, where the input image is first projected to a low-dimensional space and then classified. • An objective is induced to minimize the optimal transport cost between the true class distribution and the framework output distribution, guiding the encoder and discriminator to project the input image to a low-dimensional space without losing important features for classification. • Extensive experiments demonstrate the robustness of our proposed ER-Classifier framework under the white-box attacks, and show that ER-Classifier outperforms other state-ofthe-art approaches on several benchmark image datasets. As far as we know, our approach is the first that applies optimal transport theory, i.e., a Wasserstein distance regularization, to a bottleneck embedding layer of a deep neural network in a purely supervised learning setting without considering any reconstruction loss, although optimal transport theory or a discriminator loss has been applied to generative models in an unsupervised learning setting ; Our method is also the first that establishes the connection between a Wasserstein distance regularization and the robustness of deep neural networks for defending against adversarial examples. In this section, we summarize related work into three categories: attack methods, defense mechanisms and optimal transport theory. We first discuss different white-box attack methods, followed by a description of different defense mechanisms against, and finally optimal transport theory. Under the white-box setting, attackers have all information about the targeted neural network, including network structure and gradients. Most white-box attacks generate adversarial examples based on the gradient of loss function with respect to the input. An algorithm called fast gradient sign method (FGSM) was proposed in (Goodfellow et al.) 
which generates adversarial examples based on the sign of gradient. Many other white-box attack methods have been proposed recently (; ; ; b), and among them C&W and PGD attacks have been widely used to test the robustness of machine learning models. C&W attack: The adversarial attack method proposed by Carlini and Wagner (b) is one of the strongest white-box attack methods. They formulate the adversarial example generating process as an optimization problem. The proposed objective function aims at increasing the probability of the target class and minimizing the distance between the adversarial example and the original input image. Therefore, C&W attack can be viewed as a gradient-descent based adversarial attack. PGD attack: The projected gradient descent attack is proposed by , which finds adversarial examples in an -ball of the image. The PGD attack updates in the direction that decreases the probability of the original class most, then projects the back to the -ball of the input. An advantage of PGD attack over C&W attack is that it allows direct control of distortion level by changing, while for C&W attack, one can only do so indirectly via hyper-parameter tuning. Both C&W attack and PGD attack have been frequently used to benchmark the defense algorithms due to their effectiveness . In this paper, we mainly use l ∞ -PGD untargeted attack to evaluate the effectiveness of the defense method under white-box setting. Instead of crafting different adversarial perturbation for different input image, an algorithm was proposed by to construct a universal perturbation that causes natural images to be misclassified. However, since this universal perturbation is image-agnostic, it is usually larger than the image-specific perturbation generated by PGD and C&W. Many works have been done to improve the robustness of deep neural networks. To defend against adversarial examples, defenses that aim to increase model robustness fall into three main categories: i) augmenting the training data with adversarial examples to enhance the existing classifiers (; ; Goodfellow et al.); ii) leveraging model-specific strategies to enforce model properties such as smoothness ; and, iii) trying to remove adversarial perturbations from the inputs (; ;). We select three representative methods that are effective under white-box setting. Adversarial training: Augmenting the training data with adversarial examples can increase the robustness of the deep neural network. Madry et al. recently introduced a minmax formulation against adversarial attacks. The proposed model is not only trained on the original dataset but also adversarial example in the -ball of each input image. Random Self-Ensemble: Another effective defense method under white-box setting is RSE . The authors proposed a "noise layer", which fuses output of each layer with Gaussian noise. They empirically show that the noise layer can help improve the robustness of deep neural networks. The noise layer is applied in both training and testing phases, so the prediction accuracy will not be largely affected. Defense-GAN: Defense-GAN leverages the expressive capability of GANs to defend deep neural networks against adversarial examples. It is trained to project input images onto the range of the GAN's generator to remove the effect of the adversarial perturbation. Another defense method that uses the generative model to filter out noise is MagNet proposed by . However, the differences between ER-Classifier and the two methods are obvious. 
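For reference, the l∞-PGD attack used to benchmark defenses in this paper typically has the following shape; this is a generic sketch in which the step size, iteration count, and random start are common defaults rather than the exact settings used in the experiments, and eps = 0.031 corresponds to 8/256 in the normalized pixel space described earlier.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.031, alpha=0.007, steps=40):
    """Untargeted l_inf PGD: ascend the classification loss, then project to the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # step in the loss-increasing direction
            x_adv = x + (x_adv - x).clamp(-eps, eps)     # project back to the eps-ball around x
            x_adv = x_adv.clamp(0.0, 1.0)                # stay in valid pixel range
    return x_adv.detach()
```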
ER-Classifier focuses on reducing the dimension, and performing classification based on the low-dimensional embedding, while Defense-GAN and MagNet mainly apply the generative model to filter out the adversarial noise, and both Defense-GAN and MagNet perform classification on the original dimension space. showed that Defense-GAN is more robust than MagNet, so we only compare with Defense-GAN in the experiments. Other Related Methods: regularizes the latent space with Gaussian Mixture Model and applies KL-divergence to do the optimization. However, our method employs a simple but nice-shaped Gaussian prior for Wasserstein distance minimization to constrain the global shape of the latent embeddings, while permitting high freedom for the shapes of individual class distributions of latent embeddings. We want the classifier to decide the optimal class-specific distributions of latent embeddings. shares a similar idea to adversarial learning, but it employs virtual labels generated by a current classifier to identify search directions that can smooth the output label distribution of the classifier and is best suitable for semi-supervised learning. Please note that both methods in are designed for improving generalization performance but not for defending against adversarial examples. A recent paper shows that adversarial examples are purely human phenomenon and models tend to learn features that are not robust yet generalize well. We show that our Wasserstein distance regularization helps to identify robust features, which will be discussed later. Notations In this paper, we use l ∞ and l 2 distortion metrics to measure similarity. We report l ∞ distance in the normalized space, so that a distortion of 0.031 corresponds to 8/256, and l 2 distance as the total root-mean-square distortion normalized by the total number of pixels. We use calligraphic letters for sets (i.e., X), capital letters for random variables (i.e., X), and lower case letters for their values (i.e., x). The probability distributions are denoted with capital letters (i.e., P X) and corresponding densities with lower case letters (i.e., p X). We propose a novel defense framework, ER-Classifier, which aims at projecting the image data to a low-dimensional space to remove noise and stabilize the classification model by minimizing the optimal transport cost between the true label distribution P Y and the distribution of the ER-Classifier output (P C). An overview of the framework is shown in Figure 1. The encoder and discriminator structures together help diminish the effect of the adversarial perturbation by projecting input data to a space of lower dimension, then the classifier part performs classification based on the lowdimensional embedding. Mathematically, input images X ∈ X = R d are projected to a low-dimensional embedding vector Z ∈ Z = R k through the encoder Q φ. The discriminator D γ discriminates between the generated codeZ ∼ Q φ (Z|X) and the ideal code Z ∼ P Z. The classifier C τ performs classification based on the generated codeZ, producing output U ∈ U = R m, where m is the number of classes. The label of X is denoted as Y ∈ U. The Kantorovich's distance induced by the optimal transport problem is given by is the set of all joint distributions of (Y, U) with marginals P Y and P C, and c(y, u): U × U → R + is any measurable cost function. W c (P Y, P C) measures the divergence between probability distributions P Y and P C. 
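Written out, the Kantorovich formulation described in the preceding sentences reads:

```latex
W_c(P_Y, P_C) \;=\;
\inf_{\Gamma \,\in\, \mathcal{P}(Y \sim P_Y,\; U \sim P_C)}
\; \mathbb{E}_{(Y, U) \sim \Gamma}\big[\, c(Y, U) \,\big]
```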
When the probability measures are on a metric space, the p-th root of W c is called the p-Wasserstein distance. To minimize the Wasserstein distance between the distribution of the true label (P Y) and the distribution of the ER-Classifier output (P C), we can prove that it is sufficient to find a conditional distribution Q(Z|X) such that its marginal distribution Q Z is identical to a prior distribution P Z. The theorem and the proof are deferred to the Appendix. In this paper, we apply standard Gaussian as our prior distribution P Z, but other priors may be used for different cases. The final objective of ER-Classifier is: where Q can be a deterministic encoder as focused by this paper due to its simplicity or stochastic encoder as the one in a standard Variational Autoencoder, λ > 0 is a hyper-parameter and D is an arbitrary divergence between Q Z and P Z. To estimate the divergences between Q Z and P Z, we apply a GAN-based framework, fitting a discriminator to minimize the 1-Wasserstein distance between Q Z and P Z: We have also tried the Jensen-Shannon divergence, but as expected, Wasserstein distance provides more stable training and better . When training the framework, the weight clipping method proposed in Wasserstein GAN is applied to help stabilize the training of discriminator D γ. The training algorithm is summarized in Algorithm 1. At training stage, the encoder Q φ first maps the input x to a low-dimensional space, ing in generated code (z). Another ideal code (z) is sampled from the prior distribution, and the discriminator D γ discriminates between the ideal code (positive data) and the generated code (negative data). The classifier (C τ) predicts the image label based on the generated code (z). Sample {(x 1, y 1),..., (x n, y n)} from the training set 5: Sample {z 1, ..., z n} from the prior P Z 6: Update D γ by ascending the following objective by 1-step Adam: Update Q φ and C τ by descending the following objective by 1-step Adam: Update Q φ by ascending the following objective by 1-step Adam: 10: end while At inference time, only the encoder Q φ and the classifier C τ are used. The input image x is first mapped to a low-dimensional space by the encoder (z = Q φ (x)), then the latent codez is fed into the classifier to obtain the predicted label. The main goal of ER-Classifier is leveraging input space dimension reduction to remove adversarial perturbations. Therefore, other defense methods can also benefit from this property. Our framework is trained with min-max robust optimization . There are two Wasserstein distances (W-distances) in our framework. One is the W-distance between the aggregated latent embedding distribution Q(z) and the prior distribution P Z, and the other one is the W-distance between the true label distribution P Y and the distribution of the ER-Classifier output (P C). In Algorithm 1, we are minimizing the first one. The theorem in the Appendix shows that minimizing the first W-distance in combination with minimizing a standard cross-entropy loss as done in Algorithm 1 is equivalent to minimizing the second W-distance, which guarantees that the training process is not distracted from the main goal of the framework, classification. That is to say, Algorithm 1 will in a classifier with the following property: the global output distribution of the classifier will match the global ground-truth label distribution in the data no matter whether the encoder Q φ is deterministic or stochastic (the second W-distance is automatically minimized). 
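Putting the pieces of Algorithm 1 together, one iteration of the alternating updates can be sketched as below. Layer sizes, the clipping constant, and the weighting λ are illustrative assumptions (the paper uses CNN encoders for images, whereas plain MLPs are used here for brevity), and the optimizer wiring noted in the comments is likewise assumed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ERClassifier(nn.Module):
    """Encoder Q_phi, classifier C_tau, and latent discriminator D_gamma."""
    def __init__(self, in_dim=784, embed_dim=16, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, embed_dim))            # Q_phi
        self.classifier = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU(),
                                        nn.Linear(128, n_classes))         # C_tau
        self.discriminator = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU(),
                                           nn.Linear(128, 1))              # D_gamma
        self.embed_dim = embed_dim

    def forward(self, x):
        return self.classifier(self.encoder(x))     # predict from the embedding only

    def sample_prior(self, batch_size):
        return torch.randn(batch_size, self.embed_dim)   # ideal codes z ~ N(0, I)

def train_step(model, x, y, opt_d, opt_qc, opt_q, lam=1.0, clip=0.01):
    """One iteration of the alternating updates in Algorithm 1 (sketch)."""
    z_fake = model.encoder(x)                        # generated codes
    z_real = model.sample_prior(x.size(0))           # codes from the prior P_Z

    # 1) Discriminator ascent: separate ideal codes from generated codes.
    d_loss = -(model.discriminator(z_real).mean()
               - model.discriminator(z_fake.detach()).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    with torch.no_grad():                            # WGAN-style weight clipping
        for p in model.discriminator.parameters():
            p.clamp_(-clip, clip)

    # 2) Encoder + classifier descent on the classification loss.
    cls_loss = F.cross_entropy(model(x), y)
    opt_qc.zero_grad(); cls_loss.backward(); opt_qc.step()

    # 3) Encoder ascent on the discriminator score of its own codes,
    #    pushing the aggregate Q_Z toward the prior P_Z.
    q_loss = -lam * model.discriminator(model.encoder(x)).mean()
    opt_q.zero_grad(); q_loss.backward(); opt_q.step()
    return cls_loss.item()

# Assumed optimizer wiring: opt_d over discriminator parameters, opt_qc over
# encoder + classifier parameters, opt_q over encoder parameters only.
```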
It's hard to analyze the importance of the theorem in the Appendix if we just look at a deterministic encoder. Let's convert this deterministic encoder to a stochastic encoder that outputs a Gaussian z with a fixed variance and the mean being the same as its corresponding deterministic version. The theory tells us that, by minimizing the first W-distance over all sampled z's from this stochastic encoder and the standard cross-entropy loss, we will automatically minimize the second W-distance and preserve the global label frequency in the dataset, even though these z's are only -close to the deterministic encoding features of training data. Moreover, we find that minimizing the W-distance helps the encoder identify some robust features instead of non-robust features , because our proposed regularization constrains the -ball around each Q φ (Z|X) to contribute to preserving the global label distribution in the data, even with X integrated out. From this perspective, we can view our proposed framework as "a supervised variant" of Generative Adversarial Network or Wasserstein Autoencoder in which the Generator or Decoder is replaced by a Classifier that generates labels from low-dimensional latent embeddings preserving global label frequency in the training dataset. Replacing W-distance with KL divergence loses all these nice properties. In our framework, we use a simple but nice-shaped Gaussian prior P Z for W-distance minimization to constrain the global shape of the latent embeddings, while permitting high freedom for the shapes of individual class distributions of latent embeddings. We want the classifier to decide the optimal class-specific distributions of latent embeddings. In addition, it is interesting to explore how to set -ball to make sure the stochastic encoder to best align the latent embedding z to human-perceived robust features, which will be left as future work. In this section, we compare the performance of our proposed algorithm (ER-Classifier) with other state-of-the-art defense methods on several benchmark datasets: • MNIST : handwritten digit dataset, which consists of 60, 000 training images and 10, 000 testing images. Theses are 28 × 28 black and white images in ten different classes. classes, and each class has 500 training images, 50 testing images, making it a challenging benchmark for defense task. The resolution of the images is 64 × 64. Various defense methods have been proposed to improve the robustness of deep neural networks. Here we compare our algorithm with state-of-the-art methods that are robust in white-box setting. Madry's adversarial training (Madry's Adv) has been recognized as one of the most successful defense method in white-box setting, as shown in . Random Self-Ensemble (RSE) method introduced by adds stochastic components in the neural network, achieving similar performance to Madry's adversarial training algorithm. Another method we would like to compare with is Defense-GAN . It first trains a generative adversarial network to model the distribution of the training data. At inference time, it finds a close output to the input image and feed that output into the classifier. This process "projects" input images onto the range of GAN's generator, which helps remove the effect of adversarial perturbations. In , the author demonstrated the performance of Defense-GAN on MNIST and Fashion-MNIST, so we will compare our method with Defense-GAN on MNIST. 
Since the main goal of ER-Classifier is using dimension reduction to improve adversarial robustness, other defense methods can also benefit from this property. The proposed ER-Classifier is trained with min-max robust optimization . To demonstrate the dimension reduction ability of ER-Classifier, we include a variant ER-Classifier − which trains ER-Classifier without min-max robust optimization. In this section, we evaluate the defense methods against l ∞ -PGD untargeted attack, which is one of the strongest white-box attack methods. Models are evaluated under different distortion level , Based on Figure 2 and − tends to perform better than other state-of-the-art defense methods on MNIST, CIFAR10 and Tiny Imagenet. This phenomenon is obvious on CIFAR10 and it even performs better than ER-Classifier when the attack strength is strong. The reason might be that without min-max robust optimization, it is easier to regularize the embedding space. Testing Accuracy Defense-GAN 55.0 ER-Classifier 99.1 We also compare Defense-GAN with our method ERClassifier on MNIST. Although Defense-GAN was shown to be partly broken by , both ER-Classifier and Defense-GAN share the similar idea of projecting the input to a learned manifold, and comparing to Defense-GAN is important to demonstrate the advantage of our novel Wassserstein distance regularization. Please note that Defense-GAN is not our major comparison baseline in this paper. Both ER-Classifier and Defense-GAN are evaluated against the l 2 -C&W untargeted attack, one of the strongest white-box attack proposed in (b). Defense-GAN is evaluated using the method proposed in , and the code is available on github 1. ER-Classifier is evaluated against l 2 -C&W untargeted attack with the same hyper-parameter values as those used in the evaluation of Defense-GAN. The under l 2 ≤ 0.005 threshold are shown in Table 2. Based on Table 2, ER-Classifier is much more robust than Defense-GAN un- der the l 2 ≤ 0.005 threshold. Since did not evaluate Defense-GAN on CIFAR10, STL10 and Tiny Imagenet, without details of GAN structure, we can not compare with Defense-GAN on these datasets. We evaluate Madry's adversarial training, ER-Classifier, and ER-Classifier −, against a recently proposed black-box attack method called Nattack 2 on CIFAR10. Nattack is only performed on the first 100 images of CIFAR10 since the attack process takes a long time. We report the accuracy = number of correctly classified / number of attacked images (exactly 100). The accuracy of Madry's adv, ER-Classifier, and ER-Classifier − is, respectively, 38%, 43%, and 32%. ER-Classifier still outperforms Madry's adv. ER-Classifier framework consists of three parts, and the classification task is done by the encoder Q φ and classifier C τ. Without the discriminator part, the encoder can also project the input images to a low-dimensional space. However, arbitrarily projecting the images to a low-dimensional space with only the encoder part cannot improve the robustness of the model. In contrast, sometimes it even decreases the robustness of the model. To show that arbitrarily projecting the input images to a low-dimensional space can not improve the robustness, we fit a framework with only the encoder and classifier part (E-CLA), where the encoder and classifier have the same structures as in ER-Classifier, and compare E-CLA with the ER-Classifier framework. For a fair comparison, both structures are trained without min-max robust optimization. The are shown in Figure 3. 
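The robust-accuracy numbers behind these comparisons come from an evaluation loop of roughly the following shape; this is a generic sketch (with pgd_linf referring to the attack sketch given earlier), not the authors' evaluation code.

```python
def accuracy_under_attack(model, loader, attack, **attack_kwargs):
    """Fraction of test examples still classified correctly after the attack."""
    correct, total = 0, 0
    for x, y in loader:
        x_adv = attack(model, x, y, **attack_kwargs)   # e.g. pgd_linf(model, x, y, eps=0.031)
        pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```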
Based on Figure 3, we can observe that ER-Classifier is much more robust than just the encoder and classifier structure on MNIST, CIFAR10 and Tiny Imagenet. It is also more robust on STL10 but not that much. The reason might be that there are only 5, 000 training images in STL10 and the resolution is 96 × 96. Therefore, it is harder to learn a good embedding with limited amount of images. However, even when the number of training images is limited, ER-Classifier is still much more robust than the E-CLA structure. This observation demonstrates that regularization on the embedding space helps improve the adversarial robustness. Notice that the performance of E-CLA structure is similar to the performance of model without defense method on CIFAR10, STL10 and Tiny Imagenet, and worse on MNIST, which means the robustness of ER-Classifier does not come from the structure design. Variational auto-encoder can project the images to low-dimensional space and use Kullback-Leibler divergence loss to regularize the embedding distribution, which does not need discriminator structure. Therefore, we also tried VAE-CLA, which applies Variational auto-encoder structure to do the projection and regularization. The experimental in Figure 3 show that VAE-CLA does not perform as well as ER-Classifier. Based on the observation of the Kullback-Leibler loss and classification loss during the training process, it seems difficult for VAE-CLA to balance between the two tasks. The reason might be that Kullback-Leibler distances are not sensible cost functions when learning distributions supported by low dimensional manifolds . However, the selection of prior is important as it imposes different restrictions on the embedding space. Three different prior distributions are tested on MNIST and CIFAR10 datasets. They are standard Gaussian, Uniform(−3, 3) and Cauchy, where Cauchy has the same support as standard Gaussian but is heavy tailed and 99.7% of the standard Gaussian points lies within [−3, 3]. All the models are trained without min-max robust optimization, and the experimental are shown in Figure 4. Based on the , all three priors work well, but standard Gaussian performs best on both datasets. Ding et al. prove that adversarial robustness is sensitive to the input data distribution, and if the data is uniformly distributed in the input space, no algorithm can achieve good robustness. They also empirically show that cornered/concentrated data distributions tend to achieve better robustness. This helps explain why regularizing the embedding space can help improve robustness. Though the projection process reduces the input dimension, the embedding space is still large. Prior distribution helps push the embedding space to be more concentrated, reducing the valid perturbation space. Details of hyper-parameter selection, model structure and code are included in the supplementary part. Embedding space visualization can also be found in the supplementary material. In this paper, we propose a new defense framework, ER-Classifier, which projects the input images to a low-dimensional space to remove adversarial perturbation and stabilize the model through minimizing the discrepancy between the true label distribution and the framework output distribution. We empirically show that ER-Classifier is much more robust than other state-of-the-art defense methods on several benchmark datasets. Future work will include further exploration of the low-dimensional space to improve the robustness of deep neural network. 
Mathematically, input images X ∈ X = R d are projected to a low-dimensional embedding vector Z ∈ Z = R k through the encoder Q φ. The discriminator D γ discriminates between the generated codeZ ∼ Q φ (Z|X) and the ideal code Z ∼ P Z. The classifier C τ performs classification based on the generated codeZ, producing output U ∈ U = R m, where m is the number of classes. The label of X is denoted as Y ∈ U. The ER-Classifier framework embeds important classification features by minimizing the discrepancy between the distribution of the true label (P Y) and the distribution of the framework output (P C). In the framework, the classifier (P C (U |Z)) maps a latent code Z sampled from a fixed distribution in a latent space Z, to the output U ∈ U = R m. The density of ER-Classifier output is defined as follow: In this paper we apply standard Gaussian as our prior distribution P Z, but other priors may be used for different cases. Assume there is an oracle f: X → U assigning the image data (X ∈ X) its true label (Y ∈ U). We want to minimize the optimal transport cost between the distribution of the true label (P Y) and the distribution of the ER-Classifier output (P C). There are various ways to define the distance or divergence between the target distribution and the model distribution. In this paper, we turn to the optimal transport theory , which provides a much weaker topology than many others. In real applications, data is usually embedded in a space of a much lower dimension, such as a non-linear manifold. Kullback-Leibler divergence, Jensen-Shannon divergence and Total Variation distance are not sensible cost functions when learning distributions supported by lower dimensional manifolds . In contrast, the optimal transport cost is more sensible in this setting. Kantorovich's distance induced by the optimal transport problem is given by where P(Y ∼ P Y, U ∼ P C) is the set of all joint distributions of (Y, U) with marginals P Y and P C, and c(y, u): U × U → R + is any measurable cost function. W c (P Y, P C) measures the divergence between probability distributions P Y and P C. When the probability measures are on a metric space, the p-th root of W c is called the p-Wasserstein distance. To minimize the optimal transport cost between the distribution of the true label (P Y) and the distribution of the ER-Classifier output (P C), it is sufficient to find a conditional distribution Q(Z|X) such that its marginal distribution Q Z is identical to the prior distribution P Z. Theorem 1 For P C as defined above with a deterministic P C (U |Z) and any function C: where Γ ∈ P(Y ∼ P Y, U ∼ P C) is the set of all joint distributions of (Y, U) with marginals P Y and P C, and (y, u): U × U → R + is any measurable cost function. Q Z is the marginal distribution of Z when X ∼ P X and Z ∼ Q(Z|X). (The proof is presented later.) Therefore, optimizing over the objective on the r.h.s is equivalent to minimizing the discrepancy between the true label distribution (P Y) and the output distribution P C, thus the important classification features are embedded in the low-dimensional space. This is the core idea of the paper, summarizing the high-dimensional data in a space of much lower dimension without losing important features for classification. To implement the r.h.s objective, the constraint on Q Z can be relaxed by adding a penalty term. 
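Assembling the quantities defined above (cost c, oracle f, classifier C_τ, encoder Q_φ, prior P_Z, and penalty weight λ), the relaxed objective takes the following form; this is our reconstruction of the penalized right-hand side of Theorem 1 rather than a verbatim equation, so the exact weighting and divergence D should be read from the original.

```latex
\min_{Q_\phi,\; C_\tau}\;\;
\mathbb{E}_{X \sim P_X}\, \mathbb{E}_{Z \sim Q_\phi(Z \mid X)}
\big[\, c\big(f(X),\, C_\tau(Z)\big) \,\big]
\;+\; \lambda\, D\big(Q_Z,\, P_Z\big)
```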
The final objective of ER-Classifier is: where Q is any nonparametric set of probabilistic encoders, λ > 0 is a hyper-parameter and D is an arbitrary divergence between Q Z and P Z. To estimate the divergences between Q Z and P Z, we apply a GAN-based framework, fitting a discriminator to minimize the 1-Wasserstein distance between Q Z and P Z: We have also tried the Jensen-Shannon divergence, but as expected, Wasserstein distance provides more stable training and better . When training the framework, the weight clipping method proposed in Wasserstein GAN is applied to help stabilize the training of discriminator D γ. The proof of Theorem 1 is adapted from the proof of Theorem 1 in . Consider certain sets of joint probability distributions of three random variables (X, U, Z) ∈ X × U × Z. X can be taken as the input images, U as the output of the framework, and Z as the latent codes. P C,Z (U, Z) represents a joint distribution of a variable pair (U, Z), where Z is first sampled from P Z and then U from P C (U |Z). P C defined in is the marginal distribution of U when (U, Z) ∼ P C,Z. The joint distributions Γ(X, U) or couplings between values of X and U can be written as Γ(X, U) = Γ(U |X)P X (X) due to the marginal constraint. Γ(U |X) can be decomposed into an encoding distribution Q(Z|X) and the generating distribution P C (U |Z), and Theorem 1 mainly shows how to factor it through Z. In the first part, we will show that if P C (U |Z) are Dirac measures, we have where P(X ∼ P X, U ∼ P C) denotes the set of all joint distributions of (X, U) with marginals P X, P C, and likewise for P(X ∼ P X, Z ∼ P Z). The set of all joint distributions of (X, U, Z) such that X ∼ P X, (U, Z) ∼ P C,Z, and (U ⊥ ⊥ X)|Z are denoted by P X,U,Z. P X,U and P X,Z denote the sets of marginals on (X, U) and (X, Z) induced by P X,U,Z. From the definition, it is clear that P X,U ⊆ P(P X, P C). Therefore, we have The identity is satisfied if P C (U |Z) are Dirac measures, such as U = C(Z). This is proved by the following Lemma in . Lemma 1 P X,U ⊆ P(P X, P C) with identity if P C (U |Z = z) are Dirac for all z ∈ Z. (see details in .) In the following part, we show that Based on the definition, P(P X, P C), P X,U,Z and P X,U depend on the choice of conditional distributions P C (U |Z), but P X,Z does not. It is also easy to check that P X,Z = P(X ∼ P X, Z ∼ P Z). The tower rule of expectation, and the conditional independence property of P X,U,Z implies Finally, since Y = f (X), it is easy to get Now, and are proved and the three together prove Theorem 1. Our proposed framework readily applies to non-deterministic case. If the classifier part is nondeterministic, Lemma 1 provides only the inclusion of sets P X,U ⊆ P(P X, P U), and we can get an upper bound on the Wasserstein distance between the ground-truth and predicted label distributions: where we assume the conditional distributions P C (U |Z = z) have mean values C(z) ∈ R d and marginal variances σ 2 1,..., σ 2 d ≥ 0 for all z ∈ Z, where C: Z → X, and (y, u) = y − u 2. The above upper bound is derived by: and In equation 11, the second term of the second last row becomes 0 since the optimization will drive f (X) − C(Z) to zero. One important hyper-parameter for the ER-Classifier is the dimension of the embedding space. If the dimension is too small, important features are "collapsed" onto the same dimension, and if the dimension is too large, the projection will not extract useful information, which in too much noise and instability. 
The maximum likelihood estimation of intrinsic dimension proposed in 3 is used to calculate the intrinsic dimension of each image dataset, serving as a guide for selecting the embedding dimension. The sample size used in calculating the intrinsic dimension is 1, 000, and changing the sample size does not influence the much. Based on the intrinsic dimension calculated by , we test several different values around the suggested intrinsic dimension and evaluate the models against l ∞ -PGD attack. All models are trained without min-max robust optimization, and the experimental are shown in Figure 5. The final embedding dimension is chosen based on robustness, number of parameters, and testing accuracy when there is no attack. The final embedding dimensions and suggested intrinsic dimensions are shown in Table 3 Table 3: Pixel space dimension, intrinsic dimension calculated by , and final embedding dimension used. Based on Figure 5, the embedding dimension close to the calculated intrinsic dimension usually offers better except on MNIST. One explanation may be that MNIST is a simple handwritten digit dataset, so performing classification on MNIST may not require that many dimensions. Epsilon is an important hyper-parameter for adversarial training. When doing Madry's adversarial training, we test the model robustness with different and choose the best one. The experiment are shown in Figure 6. Based on Figure 6, we use = 0.3, 0.03, 0.03 in Madry's adversarial training on MNIST, CIFAR10 and STL10 respectively. For Tiny Imagenet, we use = 0.01. To make a fair comparison, we use the same when training ER-Classifier. In this section, we compare the embedding learned by Encoder+Classifier structure (E-CLA) and the embedding learned by ER-Classifier on several datasets without min-max robust optimization. We first generate embedding of testing data using the encoder (z = Q φ (x)), then project the embedding points (z) to 2-D space by tSNE . Then we generate adversarial images (x adv) against E-CLA and ER-Classifier using l ∞ -PGD attack. The adversarial embedding is generated by feeding the adversarial images into the encoder (z adv = Q φ (x adv)). Finally, we project the adversarial embedding points (z adv) to 2-D space. The are shown in Figure 7. The plots in the first and second rows are embedding visualization plots for E-CLA, and the plots in the third and last rows are the embedding visualization plots for ER-Classifier. In adversarial embedding visualization plots, the misclassified point is marked as "down triangle", which means the PGD attack successfully changed the prediction, and the correctly classified point is marked as "point", which means the attack fails. Based on Figure 7, we can see that E-CLA can learn a good embedding on legitimate images of MNIST. Embedding points for different classes are separated on the 2D space, but under adversarial attack, some embedding points of different classes are mixed together. However, ER-Classifier can generate good separated embeddings on both legitimate and adversarial images. On CIFAR10, the E-CLA can not generate good separated embeddings on either legitimate images or adversarial images, while ER-Classifier can generate good separated embeddings for both. Code for reproduction will be made available online at github later. The pseudocode for training ER-Classifier is shown in Listing 1. MNIST, STL10 and TinyImagenet classifier structures used for baseline methods are shown in Fig
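Returning to the embedding-dimension selection described at the start of this appendix, the maximum-likelihood intrinsic-dimension estimate can be computed as in the sketch below; the Levina-Bickel form of the estimator is assumed here, and the neighborhood size k is illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def intrinsic_dimension_mle(X, k=10):
    """MLE intrinsic dimension from k-nearest-neighbor distances, averaged over points."""
    dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    dists = dists[:, 1:]                                   # drop the point itself
    # m_hat(x) = [ (1/(k-1)) * sum_{j<k} log( T_k(x) / T_j(x) ) ]^{-1}
    log_ratios = np.log(dists[:, -1][:, None] / dists[:, :-1])
    inv_m = log_ratios.sum(axis=1) / (k - 1)
    return float(np.mean(1.0 / inv_m))

# Usage on a sample of flattened images (e.g. 1,000, as in the paper):
# d_hat = intrinsic_dimension_mle(images.reshape(len(images), -1), k=10)
```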
A general and easy-to-use framework that improves the adversarial robustness of deep classification models through embedding regularization.
378
scitldr
We investigate task clustering for deep learning-based multi-task and few-shot learning in the settings with large numbers of diverse tasks. Our method measures task similarities using cross-task transfer performance matrix. Although this matrix provides us critical information regarding similarities between tasks, the uncertain task-pairs, i.e., the ones with extremely asymmetric transfer scores, may collectively mislead clustering algorithms to output an inaccurate task-partition. Moreover, when the number of tasks is large, generating the full transfer performance matrix can be very time consuming. To overcome these limitations, we propose a novel task clustering algorithm to estimate the similarity matrix based on the theory of matrix completion. The proposed algorithm can work on partially-observed similarity matrices based on only sampled task-pairs with reliable scores, ensuring its efficiency and robustness. Our theoretical analysis shows that under mild assumptions, the reconstructed matrix perfectly matches the underlying “true” similarity matrix with an overwhelming probability. The final task partition is computed by applying an efficient spectral clustering algorithm to the recovered matrix. Our show that the new task clustering method can discover task clusters that benefit both multi-task learning and few-shot learning setups for sentiment classification and dialog intent classification tasks. This paper leverages knowledge distilled from a large number of learning tasks BID0 BID19, or MAny Task Learning (MATL), to achieve the goal of (i) improving the overall performance of all tasks, as in multi-task learning (MTL); and (ii) rapid-adaptation to a new task by using previously learned knowledge, similar to few-shot learning (FSL) and transfer learning. Previous work on multi-task learning and transfer learning used small numbers of related tasks (usually ∼10) picked by human experts. By contrast, MATL tackles hundreds or thousands of tasks BID0 BID19, with unknown relatedness between pairs of tasks, introducing new challenges such as task diversity and model inefficiency. MATL scenarios are increasingly common in a wide range of machine learning applications with potentially huge impact. Examples include reinforcement learning for game playing -where many numbers of sub-goals are treated as tasks by the agents for joint-learning, e.g. BID19 achieved the state-of-the-art on the Ms. Pac-Man game by using a multi-task learning architecture to approximate rewards of over 1,000 sub-goals (reward functions). Another important example is enterprise AI cloud services -where many clients submit various tasks/datasets to train machine learning models for business-specific purposes. The clients could be companies who want to know opinion from their customers on products and services, agencies that monitor public reactions to policy changes, and financial analysts who analyze news as it can potentially influence the stock-market. Such MATL-based services thus need to handle the diverse nature of clients' tasks. Challenges on Handling Diverse (Heterogeneous) Tasks Previous multi-task learning and fewshot learning research usually work on homogeneous tasks, e.g. all tasks are binary classification problems, or tasks are close to each other (picked by human experts) so the positive transfer between tasks is guaranteed. However, with a large number of tasks in a MATL setting, the above assumption may not hold, i.e. we need to be able to deal with tasks with larger diversity. 
Such diversity can be reflected as (i) tasks with varying numbers of labels: when tasks are diverse, different tasks could have different numbers of labels; and the labels might be defined in different label spaces without relatedness. Most of the existing multi-task and few-shot learning methods will fail in this setting; and more importantly (ii) tasks with positive and negative transfers: since tasks are not guaranteed to be similar to each other in the MATL setting, they are not always able to help each other when trained together, i.e. negative transfer BID22 between tasks. For example, in dialog services, the sentences "What fast food do you have nearby" and "Could I find any Indian food" may belong to two different classes "fast_food" and "indian_food" for a restaurant recommendation service in a city; while for a travel-guide service for a park, those two sentences could belong to the same class "food_options". In this case the two tasks may hurt each other when trained jointly with a single representation function, since the first task turns to give similar representations to both sentences while the second one turns to distinguish them in the representation space. A Task Clustering Based Solution To deal with the second challenge above, we propose to partition the tasks to clusters, making the tasks in each cluster more likely to be related. Common knowledge is only shared across tasks within a cluster, thus the negative transfer problem is alleviated. There are a few task clustering algorithm proposed mainly for convex models BID12 BID9 BID5 BID0, but they assume that the tasks have the same number of labels (usually binary classification). In order to handle tasks with varying numbers of labels, we adopt a similarity-based task clustering algorithm. The task similarity is measured by cross-task transfer performance, which is a matrix S whose (i, j)-entry S ij is the estimated accuracy by adapting the learned representations on the i-th (source) task to the j-th (target) task. The above task similarity computation does not require the source task and target task to have the same set of labels, as a , our clustering algorithm could naturally handle tasks with varying numbers of labels. Although cross-task transfer performance can provide critical information of task similarities, directly using it for task clustering may suffer from both efficiency and accuracy issues. First and most importantly, evaluation of all entries in the matrix S involves conducting the source-target transfer learning O(n 2) times, where n is the number of tasks. For a large number of diverse tasks where the n can be larger than 1,000, evaluation of the full matrix is unacceptable (over 1M entries to evaluate). Second, the estimated cross-task performance (i.e. some S ij or S ji scores) is often unreliable due to small data size or label noises. When the number of the uncertain values is large, they can collectively mislead the clustering algorithm to output an incorrect task-partition. To address the aforementioned challenges, we propose a novel task clustering algorithm based on the theory of matrix completion BID2. Specifically, we deal with the huge number of entries by randomly sample task pairs to evaluate the S ij and S ji scores; and deal with the unreliable entries by keeping only task pairs (i, j) with consistent S ij and S ji scores. 
Given a set of n tasks, we first construct an n × n partially-observed matrix Y, where its observed entries correspond to the sampled and reliable task pairs (i, j) with consistent S ij and S ji scores. Otherwise, if the task pairs (i, j) are not sampled to compute the transfer scores or the scores are inconsistent, we mark both Y ij and Y ji as unobserved. Given the constructed partially-observed matrix Y, our next step is to recover an n × n full similarity matrix using a robust matrix completion approach, and then generate the final task partition by applying spectral clustering to the completed similarity matrix. The proposed approach has a 2-fold advantage. First, our method carries a strong theoretical guarantee, showing that the full similarity matrix can be perfectly recovered if the number of observed correct entries in the partially observed similarity matrix is at least O(n log 2 n). This theoretical result allows us to compute the similarities of only O(n log 2 n) instead of O(n 2) pairs, thus greatly reducing the computation when the number of tasks is large. Second, by filtering out uncertain task pairs, the proposed algorithm is less sensitive to noise, leading to more robust clustering performance. The task clusters allow us to handle (i) diverse MTL problems, by sharing models only within clusters such that the negative transfer from irrelevant tasks can be alleviated; and (ii) diverse FSL problems, where a new task can be assigned a task-specific metric, which is a linear combination of the metrics defined by different clusters, such that the diverse few-shot tasks can derive different metrics from the previous learning experience. Our results show that the proposed task clustering algorithm, combined with the above MTL and FSL strategies, gives us significantly better deep MTL and FSL algorithms on sentiment classification and intent classification tasks. Task/Dataset Clustering on Model Parameters This class of task clustering methods measures the task relationships in terms of model parameter similarities on individual tasks. Given the parameters of convex models, task clusters and cluster assignments could be derived via matrix decomposition BID12 or a k-means based approach BID9. The parameter-similarity based task clustering method for deep neural networks BID21 applied low-rank tensor decomposition to the model layers from multiple tasks. This method is infeasible for our MATL setting because of its high computational complexity with respect to the number of tasks and its inherent requirement of closely related tasks due to its parameter-similarity based approach. Task/Dataset Clustering with Clustering-Specific Training Objectives Another class of task clustering methods jointly assigns task clusters and trains model parameters for each cluster, minimizing the training loss within each cluster by a K-means based approach BID5, or minimizing the overall training loss combined with sparse or low-rank regularizers via convex optimization BID0 BID16. Deep neural networks have flexible representation power and they may overfit to an arbitrary cluster assignment if we consider the training loss alone. Also, these methods require identical class label sets across different tasks, which does not hold in most real-world MATL settings. Few Shot Learning FSL BID14 BID15 aims to learn classifiers for new classes with only a few training examples per class. Bayesian Program Induction BID13 represents concepts as simple programs that best explain observed examples under a Bayesian criterion.
Siamese neural networks rank similarity between inputs BID11. Matching Networks BID20 ) maps a small labeled support set and an unlabeled example to its label, obviating the need for fine-tuning to adapt to new class types. These approaches essentially learn one metric for all tasks, which is sub-optimal when the tasks are diverse. An LSTM-based meta-learner BID18 learns the exact optimization algorithm used to train another learner neural-network classifier for the few-shot setting. However, it requires uniform classes across tasks. Our FSL approach can handle the challenges of diversity and varying sets of class labels. Let T = {T 1, T 2, · · ·, T n} be the set of n tasks to be clustered, and each task T i consists of a train/validation/test data split DISPLAYFORM0. We consider text classification tasks, comprising labeled examples {x, y}, where the input x is a sentence or document (a sequence of words) and y is the label. We first train each classification model M i on its training set D train i, which yields a set of models M = {M 1, M 2, · · ·, M n}. We use convolutional neural network (CNN), which has reported near state-of-the-art on text classification BID10 BID7.CNNs also train faster than recurrent neural networks BID6, making large-n MATL scenarios more feasible. FIG0 shows the CNN architecture. Following BID4 BID10, the model consists of a convolution layer and a max-pooling operation over the entire sentence. The model has two parts: an encoder part and a classifier part. Hence each model DISPLAYFORM1 The above broad definitions encompasses other classification tasks (e.g. image classification) and other classification models (e.g. LSTMs BID6).We propose a task-clustering framework for both multi-task learning (MTL) and few-shot learning (FSL) settings. In this framework, we have the MTL and FSL algorithms summarized in Section 3.3 & 3.4, where our task-clustering framework serves as the initial step in both algorithms. FIG1 gives an overview of our idea and an example on how our task-clustering algorithm helps MTL. Using single-task models, we can compute performance scores s ij by adapting each M i to each task T j (j = i). This forms an n × n pair-wise classification performance matrix S, called the transfer-performance matrix. Note that S is asymmetric since usually S ij = S ji.When all tasks have identical label sets, we can directly evaluate the model M i on the training set of task j, D train j, and use the accuracy as the cross-task transfer score S ij.When tasks have different label sets, we freeze the encoder M to get the accuracy as the transfer-performance S ij. The score shows how the representations learned on task i can be adapted to task j, thus indicating the similarity between tasks. Task Pair Sampling: When the number of tasks n is very large, the evaluation of O(n 2) entries is time-consuming. Thus we sample n pairs of tasks {i, j} (i = j), with n n. Then we set S ij and S ji as the transfer performance defined above when {i, j} is in the n samples, otherwise the entry is marked as "unobserved" 1. As discussed in the introduction, directly generating the full matrix S and partitioning tasks based on it has the following disadvantages: (i) there are too many entries to evaluate when the number of tasks is large; (ii) some task pairs are uncertain, thus can mislead the clustering algorithm to output an incorrect task-partition; and (iii) S is asymmetric, thus cannot be directly analyzed by many conventional clustering methods. 
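Returning to the single-task models used to compute S, below is a minimal PyTorch sketch of the CNN just described, split into an encoder part (embedding, convolution, max-over-time pooling) and a task-specific classifier part. The concrete dimensions are assumptions; the transfer score S ij would then be obtained by reusing a trained (frozen) encoder from task i and fitting only a fresh classifier on task j, which is consistent with, but not an exact copy of, the procedure in the text.

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Encoder part: embedding, 1-D convolution, max-pooling over the whole sentence."""
    def __init__(self, vocab_size, emb_dim=100, hidden=200, window=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=window, padding=window // 2)

    def forward(self, tokens):                     # tokens: (batch, seq_len) of word ids
        x = self.emb(tokens).transpose(1, 2)       # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(x))               # (batch, hidden, seq_len)
        return h.max(dim=2).values                 # max over time -> (batch, hidden)

class TaskModel(nn.Module):
    """A single-task model M_i = (shared-style encoder, task-specific classifier)."""
    def __init__(self, encoder, hidden=200, n_labels=2):
        super().__init__()
        self.encoder = encoder
        self.classifier = nn.Linear(hidden, n_labels)

    def forward(self, tokens):
        return self.classifier(self.encoder(tokens))
```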
We address the first issue by randomly sample some task pairs to evaluate, as described in Section 3.1. Besides, we address the other issues by constructing a symmetric similarity matrix and only consider the reliable task relationships, as will be introduced in Eq.. Below, we describe our method (summarized in Algorithm 1) in detail. First, we use only reliable task pairs to generate a partially-observed similarity matrix Y. Specifically, if S ij and S ji are high enough, then it is likely that tasks {i, j} belong to a same cluster and share significant information. Conversely, if S ij and S ji are low enough, then they tend to belong to different clusters. To this end, we need to design a mechanism to determine if a performance is high or low enough. Since different tasks may vary in difficulty, a fixed threshold is not suitable. Hence, we define a dynamic threshold using the mean and standard deviation of the target task performance, i.e., µ j = mean(S :j) and σ j = std(S :j), where S:j is the j-th column of S. We then introduce two positive parameters p 1 and p 2, and define high and low performance as S ij greater than µ j + p 1 σ j or lower than µ j − p 2 σ j, respectively. When both S ij and S ji are high and low enough, we set their pairwise similarity as 1 and 0, respectively. Other task pairs are treated as uncertain task pairs and are marked as unobserved, and will have no influence to our clustering method. This leads to a partially-observed symmetric matrix Y, i.e., DISPLAYFORM0 Given the partially observed matrix Y, we then reconstruct the full similarity matrix X ∈ R n×n. We first note that the similarity matrix X should be of low-rank (proof deferred to appendix). Additionally, since the observed entries of Y are generated based on high and low enough performance, it is safe to assume that most observed entries are correct and only a few may be incorrect. Therefore, we introduce a sparse matrix E to capture the observed incorrect entries in Y. Combining the two observations, Y can be decomposed into the sum of two matrices X and E, where X is a low rank matrix storing similarities between task pairs, and E is a sparse matrix that captures the errors in Y. The matrix completion problem can be cast as the following convex optimization problem: DISPLAYFORM1 where • * denotes the matrix nuclear norm, the convex surrogate of rank function. Ω is the set of observed entries in Y, and P Ω: R n×n → R n×n is a matrix projection operator defined as DISPLAYFORM2 The following theorem shows the perfect recovery guarantee for the problem. The proof is deferred to Appendix. Theorem 3.1. Let X * ∈ R n×n be a rank k matrix with a singular value decomposition X * = UΣV, where U = (u 1, . . ., u k) ∈ R n×k and V = (v 1, . . ., v k) ∈ R n×k are the left and right singular vectors of X *, respectively. Similar to many related works of matrix completion, we assume that the following two assumptions are satisfied:1. The row and column spaces of X have coherence bounded above by a positive number µ 0.2. Max absolute value in matrix UV is bounded above by µ 1 √ r/n for a positive number µ 1.Suppose that m 1 entries of X * are observed with their locations sampled uniformly at random, and among the m 1 observed entries, m 2 randomly sampled entries are corrupted. 
Using the ing partially observed matrix as the input to the problem, then with a probability at least 1 − n −3, the underlying matrix X * can be perfectly recovered, given DISPLAYFORM3 where C is a positive constant; ξ(•) and µ(•) denotes the low-rank and sparsity incoherence BID3.Theorem 3.1 implies that even if some of the observed entries computed by are incorrect, problem can still perfectly recover the underlying similarity matrix X * if the number of observed correct entries is at least O(n log 2 n). For MATL with large n, this implies that only a tiny fraction of all task pairs is needed to reliably infer similarities over all task pairs. Moreover, the completed similarity matrix X is symmetric, due to symmetry of the input matrix Y. This enables analysis by similarity-based clustering algorithms, such as spectral clustering. For each cluster C k, we train a model Λ k with all tasks in that cluster to encourage parameter sharing. We call Λ k the cluster-model. When evaluated on the MTL setting, with sufficient data to train a task-specific classifier, we only share the encoder part and have distinct task-specific classifiers FIG0 ). These task-specific classifiers provide flexibility to handle varying number of labels. We only have access to a limited number of training samples in few-shot learning setting, so it is impractical to train well-performing task-specific classifiers as in the multi-task learning setting. Instead, we make the prediction of a new task by linearly combining prediction from learned clusters. where Λ k is the learned (and frozen) model of the k-th cluster, {α k} K k=1 are adaptable parameters. We use some alternatives to train cluster-models Λ k, which could better suit (and is more consistent to) the above FSL method.2 When all tasks have identical label sets, we train a single classification model on all the tasks like in previous work BID0, the predictor P (y|x; Λ k) is directly derived from this cluster-model. When tasks have different label sets, we train a metriclearning model like BID20 among all the tasks in C k, which consist a shared encoding function Λ enc k aiming to make each example closer to examples with the same label compared to the ones with different labels. Then we use the encoding function to derive the predictor by DISPLAYFORM0 where x l is the corresponding training sample for label y l. Data Sets We test our methods by conducting experiments on three text classification data sets. In the data-preprocessing step we used NLTK toolkit 3 for tokenization. For MTL setting, all tasks are used for clustering and model training. For FSL setting, the task are divided into training tasks and testing tasks (target tasks), where the training tasks are used for clustering and model training, the testing tasks are few-shot learning ones used to for evaluating the method in Eq..1. Amazon Review Sentiment Classification First, following BID0, we construct a multi-task learning setting with the multi-domain sentiment classification BID1 data set. The dataset consists of Amazon product reviews for 23 types of products (see Appendix 3 for the details). For each domain, we construct three binary classification tasks with different thresholds on the ratings: the tasks consider a review as positive if it belongs to one of the following buckets =5 stars, >=4 stars or >=2 stars 4These review-buckets then form the basis of the task-setup for MATL, giving us 23 × 3 = 69 tasks in total. For each domain we distribute the reviews uniformly to the three tasks. 
For evaluation, we select tasks from 4 domains (Books, DVD, Electronics, Kitchen) as the target tasks (12 tasks) out of all 23 domains. For FSL evaluation, we create five-shot learning tasks on the selected target tasks. The cluster-models for this evaluation are standard CNNs shown in FIG0 (a), and we share the same output layer to evaluate the probability in Eq. as all tasks have the same number of labels.2. Diverse Real-World Tasks: User Intent Classification for Dialog System The second dataset is from an on-line service which trains and serves intent classification models to various clients. The dataset comprises recorded conversations between human users and dialog systems in various domains, ranging from personal assistant to complex serviceordering or a customer-service request scenarios. During classification, intent-labels 5 are assigned to user utterances (usually sentences). We use a total of 175 tasks from different clients, and randomly sample 10 tasks from them as our target tasks. For each task, we randomly sample 64% data into a training set, 16% into a validation set, and use the rest as the test set (see Appendix 3 for details). The number of labels for these tasks vary from 2 to 100. Hence, to adapt this to a FSL scenario, we keep one example for each label (one-shot), plus 20 randomly picked labeled examples to create our training data. We believe this is a fairly realistic estimate of labeled examples one client could provide easily. Since we deal with various number of labels in the FSL setting, we chose matching networks BID20 as the cluster-models.3. Extra-Large Number of Real-World Tasks Similar to the second dataset, we further collect 1,491 intent classification tasks from the on-line service. This setting is mainly used to verify the robustness of our task clustering method, since it is difficult to estimate the full transfer-performance matrix S in this setting (1,491 2 =2.2M entries). Therefore, in order to extract task clusters, we randomly sample task pairs from the data set to obtain 100,000 entries in S, which means that only about 100K/2.2M ≈ 4.5% of the entries in S are observed. The number of 100,000 is chosen to be close to n log 2 n in our theoretical bound in Theorem 3.1, so that we could also verify the tightness of the bound empirically. To make the best use of the sampled pairs, in this setting we modified the Eq. 1, so that each entry Y ij = Y ji = 1 if S ij ≥ µ j or S ji ≥ µ i and Y ij = 0 otherwise. In this way we could have determined number of entries in Y as well, since all the sampled pairs will correspond to observed (but noisy) entries in Y. We only run MTL setting on this data set. Baselines For MTL setting, we compare our method to the following baselines: single-task CNN: training a CNN model for each task individually; holistic MTL-CNN: training one MTL-CNN model FIG0 ) on all tasks; holistic MTL-CNN (target only): training one MTL-CNN model on all the target tasks. For FSL setting, the baselines consist of: single-task CNN: training a CNN model for each task individually; single-task FastText: training one FastText model BID8 with fixed embeddings for each individual task; Fine-tuned the holistic MTL-CNN: fine-tuning the classifier layer on each target task after training initial MTL-CNN model on all training tasks; Matching Network: a metric-learning based few-shot learning model trained on all training tasks. 
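Before turning to the baselines, the two computational steps at the heart of the method, building the partially-observed matrix Y via the dynamic thresholds of Eq. (1) and recovering the full similarity matrix by the nuclear-norm plus l1 program, can be sketched as follows. The cvxpy-based solver, the default λ = 1/√n, and the scikit-learn spectral clustering call are illustrative assumptions; the paper does not prescribe a particular solver.

```python
import numpy as np
import cvxpy as cp
from sklearn.cluster import SpectralClustering

def build_partial_Y(S, observed, p1=0.5, p2=0.5):
    """Eq. (1): keep a task pair only if both transfer scores are consistently high or low.
    S may contain NaN for unsampled entries; `observed` is a boolean matrix of sampled pairs."""
    n = S.shape[0]
    Y = np.full((n, n), np.nan)                              # NaN marks an unobserved entry
    mu, sd = np.nanmean(S, axis=0), np.nanstd(S, axis=0)     # per-target-task statistics
    for i in range(n):
        for j in range(i + 1, n):
            if not (observed[i, j] and observed[j, i]):
                continue
            hi = S[i, j] > mu[j] + p1 * sd[j] and S[j, i] > mu[i] + p1 * sd[i]
            lo = S[i, j] < mu[j] - p2 * sd[j] and S[j, i] < mu[i] - p2 * sd[i]
            if hi or lo:
                Y[i, j] = Y[j, i] = 1.0 if hi else 0.0
    return Y

def robust_complete(Y, lam=None):
    """min ||X||_* + lam ||E||_1  s.t.  P_Omega(X + E) = P_Omega(Y); lam = 1/sqrt(n) is an assumed default."""
    n = Y.shape[0]
    mask = (~np.isnan(Y)).astype(float)
    lam = lam if lam is not None else 1.0 / np.sqrt(n)
    X, E = cp.Variable((n, n)), cp.Variable((n, n))
    constraints = [cp.multiply(mask, X + E) == np.nan_to_num(Y)]
    cp.Problem(cp.Minimize(cp.normNuc(X) + lam * cp.sum(cp.abs(E))), constraints).solve()
    return X.value

def cluster_tasks(X, k):
    """Final step: spectral clustering on the completed (symmetric) similarity matrix."""
    return SpectralClustering(n_clusters=k, affinity='precomputed').fit_predict(np.clip(X, 0.0, 1.0))
```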
We initialize all models with pre-trained 100-dim Glove embeddings (trained on 6B corpus) BID17.As the intent classification tasks usually have various numbers of labels, to our best knowledge the proposed method is the only one supporting task clustering in this setting; hence we only compare with the above baselines. Since sentiment classification involves binary labels, we compare our method with the state-of-the-art logistic regression based task clustering method (ASAP-MT-LR) BID0. We also try another approach where we run our MTL/FSL methods on top of the (ASAP-Clus-MTL/FSL) clusters (as their entire formulation is only applicable to convex models). In all experiments, we set both p 1 and p 2 parameters in to 0.5. This strikes a balance between obtaining enough observed entries in Y, and ensuring that most of the retained similarities are consistent with the cluster membership. For MTL settings, we tune parameters like the window size and hidden layer size of CNN, learning rate and the initialization of embeddings (random or pre-trained) based on average accuracy on the union of all tasks' dev sets, in order to find the best identical setting for all tasks. Finally we have the CNN with window size of 5 and 200 hidden units. The learning rate is selected as 0.001; and all MTL models use random initialized word embeddings on sentiment classification and use Glove embeddings as initialization on intent classification, which is likely because the training sets of the intent tasks are usually small. We also used the early stopping criterion based on the previous condition. For the FSL setting, hyper-parameter selection is difficult since there is no validation data (which is a necessary condition to qualify as a k-shot learning). So, in this case we preselect a subset of training tasks as validation tasks and tune the learning rate and training epochs (for the rest we follow the best setting from the MTL experiments) on the validation tasks. During the testing phase (i.e. model training on the target FSL tasks), we fix the selected hyper-parameter values for all the algorithms. Out-of-Vocabulary in Transfer-Performance Evaluation In text classification tasks, transferring an encoder with fine-tuned word embeddings from one task to another may not work as there can be a significant difference between the vocabularies. Hence, while learning the single-task models (line 1 of Algorithm 1) we always use the CNNs with fixed set of pre-trained embeddings. Improving Observed Tasks (MTL Setting) TAB0 shows the of the 12 target tasks when all 69 tasks are used for training. Since most of the tasks have a significant amount of training data, the single-task baselines achieve good . Because the conflicts among some tasks (e.g. the 2-star bucket tasks and 5-star bucket tasks require opposite labels on 4-star examples), the holistic MTL-CNN does not show accuracy improvements compared to the single-task methods. It also lags behind the holistic MTL-CNN model trained only on 12 target domains, which indicates that the holistic MTL-CNN cannot leverage large number of tasks. Our ROBUSTTC-MTL method based on task clustering achieves a significant improvement over all the baselines. BID0 85 The ASAP-MTLR (best score achieved with five clusters) could improve single-task linear models with similar merit of our method. However, it is restricted by the representative strength of linear models so the overall is lower than the deep learning baselines. 
Adaptation to New Tasks (FSL Setting) Table 1(b) shows the results on the 12 five-shot tasks obtained by leveraging the learned knowledge from the 57 previously observed tasks. Due to the limited training resources, all the baselines perform poorly. Our ROBUSTTC-FSL gives far better results compared to all baselines (>6%). It is also significantly better than applying Eq. without clustering (78.85%), i.e. using the single-task model from each task instead of cluster-models for P (y|x; ·). Comparison to the ASAP Clusters Our clustering-based MTL and FSL approaches also work for the ASAP clusters, in which we replace our task clusters with the task clusters generated by ASAP-MTLR. In this setting we get a slightly lower performance compared to the ROBUSTTC-based ones on both MTL and FSL settings, but overall it performs better than the baseline models. This shows that, apart from the ability to handle varying numbers of class labels, our ROBUSTTC model can also generate better clusters for MTL/FSL of deep networks, even under the setting where all tasks have the same number of labels. It is worth noting that, from Table 1(a), training CNNs on the ASAP clusters gives better results compared to training logistic regression models on the same 5 clusters (86.07 vs. 85.17), even though the clusters are not optimized for CNNs. Such results further emphasize the importance of task clustering for deep models, since better performance can be achieved with such models. TAB2 (a) & (b) show the MTL & FSL results on dialog intent classification, which demonstrate trends similar to the sentiment classification tasks. Note that the holistic MTL methods achieve much better results compared to single-task CNNs. This is because the tasks usually have smaller training and development sets, and both the model parameters learned on the training set and the hyperparameters selected on the development set can easily lead to over-fitting. ROBUSTTC-MTL achieves a large improvement (5.5%) over the best MTL baseline, because the tasks here are more diverse than the sentiment classification tasks and task-clustering greatly reduces conflicts from irrelevant tasks. Although our ROBUSTTC-FSL improves over baselines under the FSL setting, the margin is smaller. This is because of the huge diversity among tasks - by looking at the training accuracy, we found several tasks failed because none of the clusters could provide a metric that suits the training examples. To deal with this problem, we hope that the algorithm can automatically decide whether the new task belongs to any of the task-clusters. If the task doesn't belong to any of the clusters, it would not benefit from any previous knowledge, so it should fall back to a single-task CNN. The new task is treated as "out-of-cluster" when none of the clusters could achieve higher than 20% accuracy (selected on dev tasks) on its training data. We call this method Adaptive ROBUSTTC-FSL, and it gives more than a 5% performance boost over the best ROBUSTTC-FSL result. Discussion on Clustering-Based FSL The single-metric based FSL method (Matching Network) achieved success on homogeneous few-shot tasks like Omniglot and miniImageNet BID20 but performs poorly in both of our experiments. This indicates that it is important to maintain multiple metrics for few-shot learning problems with more diverse tasks, similar to the few-shot NLP problems investigated in this paper. Our clustering-based FSL approach maintains diverse metrics while keeping the model simple with only K parameters to estimate.
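The few-shot prediction and the adaptive fallback just described can be summarized in a short sketch: fitting the K mixture weights α on the support set, and thresholding the per-cluster training accuracy. The softmax parameterization of α, the optimizer settings, and the helper signatures are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn.functional as F

def fit_mixture_weights(cluster_probs, labels, steps=200, lr=0.1):
    """Fit the K adaptable weights alpha_k of P(y|x) = sum_k alpha_k P(y|x; Lambda_k)
    on the few-shot support set.  cluster_probs: (K, n_examples, n_labels) predictions
    of the frozen cluster-models; labels: (n_examples,) gold labels."""
    a = torch.zeros(cluster_probs.shape[0], requires_grad=True)
    opt = torch.optim.Adam([a], lr=lr)
    for _ in range(steps):
        alpha = torch.softmax(a, dim=0)                       # K mixture weights
        p = torch.einsum('k,kne->ne', alpha, cluster_probs)   # combined class probabilities
        loss = F.nll_loss(torch.log(p + 1e-9), labels)
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.softmax(a, dim=0).detach()

def adaptive_strategy(cluster_train_accs, threshold=0.20):
    """Adaptive ROBUSTTC-FSL: if no cluster reaches the threshold accuracy on the few-shot
    training data, the task is treated as out-of-cluster and falls back to a single-task CNN."""
    return "single_task" if max(cluster_train_accs) < threshold else "cluster_mixture"
```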
It is worthwhile to study how and why the NLP problems make few-shot learning more difficult/heterogeneous; and how well our method can generalize to non-NLP problems like miniImageNet. We will leave these topics for future work. TAB3 shows the MTL on the extra-large dialog intent classification dataset. Compared to the on the 175 tasks, the holistic MTL-CNN achieves larger improvement (6%) over the single-task CNNs, which is a stronger baseline. Similar as the observation on the 175 tasks, here the main reason for its improvement is the consistent development and test performance due to holistic multi-task training approach: both the single-task and holistic multi-task model achieve around 66% average accuracy on development sets. Unlike the experiments in Section 4.3, we did not evaluate the full transfer-performance matrix S due to time considerations. Instead, we only use the information of ∼ 4.5% of all the task-pairs, and our algorithm still achieves a significant improvement over the baselines. Note that this is obtained by only sampling about n log 2 n task pairs, it not only confirms the empirical advantage of our multi-task learning algorithm, but also verifies the correctness of our theoretical bound in Theorem 3.1. In this paper, we propose a robust task-clustering method that not only has strong theoretical guarantees but also demonstrates significantly empirical improvements when equipped by our MTL and FSL algorithms. Our empirical studies verify that (i) the proposed task clustering approach is very effective in the many-task learning setting especially when tasks are diverse; (ii) our approach could efficiently handle large number of tasks as suggested by our theory; and (iii) cross-task transfer performance can serve as a powerful task similarity measure. Our work opens up many future research directions, such as supporting online many-task learning with incremental computation on task similarities, and combining our clustering approach with the recent learning-to-learn methods (e.g. BID18), to enhance our MTL and FSL methods. We first prove that the full similarity matrix X ∈ R n×n is of low-rank. To see this, let A = (a 1, . . ., a k) be the underlying perfect clustering , where k is the number of clusters and a i ∈ {0, 1} n is the membership vector for the i-th cluster. Given A, the similarity matrix X is computed as DISPLAYFORM0 where B i = a i a i is a rank one matrix. Using the fact that rank(X) ≤ k i=1 rank(B i) and rank(B i) = 1, we have rank(X) ≤ k, i.e., the rank of the similarity matrix X is upper bounded by the number of clusters. Since the number of clusters is usually small, the similarity matrix X should be of low rank. APPENDIX B: PROOF OF THEOREM 4.1We then prove our main theorem. First, we define several notations that are used throughout the proof. Let X = UΣV be the singular value decomposition of matrix X, where U = (u 1, . . ., u k) ∈ R n×k and V = (v 1, . . ., v k) ∈ R n×k are the left and right singular vectors of matrix X, respectively. Similar to many related works of matrix completion, we assume that the following two assumptions are satisfied:1. A1: the row and column spaces of X have coherence bounded above by a positive number µ 0, i.e., n/r max i P U (e i) ≤ µ 0 and n/r max i P V (e i) ≤ µ 0, where P U = UU, P V = VV, and e i is the standard basis vector, and 2. 
A2: the matrix UV has a maximum entry bounded by µ 1 √ r/n in absolute value for a positive number µ 1.Let T be the space spanned by the elements of the form u i y and xv i, for 1 ≤ i ≤ k, where x and y are arbitrary n-dimensional vectors. Let T ⊥ be the orthogonal complement to the space T, and let P T be the orthogonal projection onto the subspace T given by DISPLAYFORM1 The following proposition shows that for any matrix Z ∈ T, it is a zero matrix if enough amount of its entries are zero. Proposition 1. Let Ω be a set of m entries sampled uniformly at random from [1, . . ., n] × [1, . . ., n], and P Ω (Z) projects matrix Z onto the subset Ω. If m > m 0, where m 0 = C 2 R µ 0 rnβ log n with β > 1 and C R being a positive constant, then for any Z ∈ T with P Ω (Z) = 0, we have Z = 0 with probability 1 − 3n −β.Proof. According to the Theorem 3.2 in Candès & , for any Z ∈ T, with a probability at least 1 − 2n 2−2β, we have DISPLAYFORM2 where δ = m 0 /m < 1. Since Z ∈ T, we have P T (Z) = Z. Then from, we have Z F ≤ 0 and thus Z = 0.In the following, we will develop a theorem for the dual certificate that guarantees the unique optimal solution to the following optimization problem DISPLAYFORM3 Theorem 1. Suppose we observe m 1 entries of X with locations sampled uniformly at random, denoted by Ω. We further assume that m 2 entries randomly sampled from m 1 observed entries are corrupted, denoted by ∆. Suppose that P Ω (Y) = P Ω (X + E) and the number of observed correct entries m 1 − m 2 > m 0 = C 2 R µ 0 rnβ log n. Then, for any β > 1, with a probability at least 1 − 3n −β, the underlying true matrices (X, E) is the unique optimizer of if both assumptions A1 and A2 are satisfied and there exists a dual Q ∈ R n×n such that (a) DISPLAYFORM4, and (e) P ∆ c (Q) ∞ < λ. Proof. First, the existence of Q satisfying the conditions (a) to (e) ensures that (X, E) is an optimal solution. We only need to show its uniqueness and we prove it by contradiction. Assume there exists another optimal solution (X + N X, E + N E), where P Ω (N X + N E) = 0. Then we have DISPLAYFORM5 where Q E and Q X satisfying DISPLAYFORM6 As a , we have DISPLAYFORM7 We then choose P ∆ c (Q E) and P T ⊥ (Q X) to be such that DISPLAYFORM8 is also an optimal solution, we have P Ω c (N E) 1 = P T ⊥ (N X) *, leading to P Ω c (N E) = P T ⊥ (N X) = 0, or N X ∈ T. Since P Ω (N X + N E) = 0, we have N X = N E + Z, where P Ω (Z) = 0 and P Ω c (N E) = 0. Hence, P Ω c ∩Ω (N X) = 0, where |Ω c ∩ Ω| = m 1 − m 2. Since m 1 − m 2 > m 0, according to Proposition 1, we have, with a probability 1 − 3n −β, N X = 0. Besides, since P Ω (N X + N E) = P Ω (N E) = 0 and ∆ ⊂ Ω, we have P ∆ (N E) = 0. Since N E = P ∆ (N E) + P ∆ c (N E), we have N E = 0, which leads to the contradiction. Given Theorem 1, we are now ready to prove Theorem 3.1.Proof. The key to the proof is to construct the matrix Q that satisfies the conditions (a)-(e) specified in Theorem 1. First, according to Theorem 1, when m 1 − m 2 > m 0 = C 2 R µ 0 rnβ log n, with a probability at least 1 − 3n −β, mapping P T P Ω P T (Z): T → T is an one to one mapping and therefore its inverse mapping, denoted by (P T P Ω P T) −1 is well defined. Similar to the proof of Theorem 2 in BID3, we construct the dual certificate Q as follows Q = λ sgn(E) + ∆ + P ∆ P T (P T P Ω P T) −1 (UV + T)where T ∈ T and ∆ = P ∆ (∆). We further define H = P Ω P T (P T P Ω P T) −1 (UV) DISPLAYFORM9 Evidently, we have P Ω (Q) = Q since ∆ ⊂ Ω, and therefore the condition (a) is satisfied. 
To satisfy the conditions (b)-(e), we need P T (Q) = UV → T = −P T (λ sgn(E) + ∆ ) P T ⊥ (Q) < 1 → µ(E) (λ + ∆ ∞) + P T ⊥ (H) + P T ⊥ (F) < 1 P ∆ (Q) = λ sgn(E) → ∆ = −P ∆ (H + F) |P ∆ c (Q)| ∞ < λ → ξ(X)(1 + T) < λBelow, we will first show that there exist solutions T ∈ T and ∆ that satisfy conditions and. We will then bound Ω ∞, T, P T ⊥ (H), and P T ⊥ (F) to show that with sufficiently small µ(E) and ξ(X), and appropriately chosen λ, conditions and FORMULA18 can be satisfied as well. First, we show the existence of ∆ and T that obey the relationships in and. It is equivalent to show that there exists T that satisfies the following relation T = −P T (λ sgn(E)) + P T P ∆ (H) + P T P ∆ P T (P T P Ω P T) −1 (T) or P T P Ω\∆ P T (P T P Ω P T) −1 (T) = −P T (λ sgn(E)) + P T P ∆ (H),where Ω \ ∆ indicates the complement set of set ∆ in Ω and |Ω \ ∆| denotes its cardinality. Similar to the previous argument, when |Ω \ ∆| = m 1 − m 2 > m 0, with a probability 1 − 3n −β, P T P Ω\∆ P T (Z): T → T is an one to one mapping, and therefore (P T P Ω\∆ P T (Z)) −1 is well defined. Using this , we have the following solution to the above equation T = P T P Ω P T (P T P Ω\∆ P T)−1 (−P T (λ sgn(E)) + P T P ∆ (H))We now bound T and ∆ ∞. Since T ≤ T F, we bound T F instead. First, according to Corollary 3.5 in BID2, when β = 4, with a probability 1 − n −3, for any Z ∈ T, we have P T ⊥ P Ω P T (P T P Ω P T) −1 (Z) F ≤ Z F.Using this , we have DISPLAYFORM10 In the last step, we use the fact that rank(T) ≤ 2k if T ∈ T. We then proceed to bound T as follows DISPLAYFORM11 Combining the above two inequalities together, we have 1 − 2(k + 1)ξ(X)µ(E) To ensure that there exists λ ≥ 0 satisfies the above two conditions, we have 1 − 5(k + 1)ξ(X)µ(E) + (10k 2 + 21k + 8)[ξ(X)µ(E)] 2 > 0 and 1 − ξ(X)µ(E)(4k + 5) ≥ 0 Since the first condition is guaranteed to be satisfied for k ≥ 1, we have ξ(X)µ(E) ≤ 1 4k + 5.Thus we finish the proof.
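As a quick numerical illustration of the low-rank argument that opens this appendix (a hard partition of n tasks into k clusters yields a similarity matrix of rank at most k), one can check the claim on a toy example:

```python
import numpy as np

# Toy check: for a hard partition of n tasks into k clusters, X = sum_i a_i a_i^T is the
# same-cluster indicator matrix and has rank exactly k (here n = 12, k = 3).
assignment = np.repeat(np.arange(3), 4)            # cluster index of each task
A = np.eye(3)[assignment]                          # 12 x 3 membership matrix
X = A @ A.T
print(np.linalg.matrix_rank(X))                    # -> 3
```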
We propose a matrix-completion based task clustering algorithm for deep multi-task and few-shot learning in the settings with large numbers of diverse tasks.
379
scitldr
The resemblance between the methods used in studying quantum many-body physics and in machine learning has drawn considerable attention. In particular, tensor networks (TNs) and deep learning architectures bear striking similarities to the extent that TNs can be used for machine learning. Previous work used one-dimensional TNs in image recognition, showing limited scalability and a requirement of high bond dimension. In this work, we train two-dimensional hierarchical TNs to solve image recognition problems, using a training algorithm derived from the multipartite entanglement renormalization ansatz (MERA). This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning. While keeping the TN unitary in the training phase, TN states can be defined, which optimally encode each class of the images into a quantum many-body state. We study the quantum features of the TN states, including quantum entanglement and fidelity. We suggest these quantities could be novel properties that characterize the image classes, as well as the machine learning tasks. Our work could be further applied to identifying possible quantum properties of certain artificial intelligence methods. Over the past years, we have witnessed booming progress in applying quantum theories and technologies to realistic problems. Paradigmatic examples include quantum simulators BID31 and quantum computers BID16 BID2, aimed at tackling challenging problems that are beyond the capability of classical digital computations. The power of these methods stems from the properties of quantum many-body systems. Tensor networks (TNs) belong to the most powerful numerical tools for studying quantum many-body systems BID22 BID13 BID26. The main challenge lies in the exponential growth of the Hilbert space with the system size, making exact descriptions of such quantum states impossible even for systems as small as O(10 2) electrons. To break the "exponential wall", TNs were suggested as an efficient ansatz that lowers the computational cost to a polynomial dependence on the system size. Astonishing achievements have been made in studying, e.g., spins, bosons, fermions, anyons, gauge fields, and so on BID23 BID26. TNs are also exploited to predict interactions that are used to design quantum simulators BID25. As TNs allowed the numerical treatment of difficult physical systems by providing layers of abstraction, deep learning achieved similar striking advances in automated feature extraction and pattern recognition BID19. The resemblance between the two approaches is beyond superficial. At the theoretical level, there is a mapping between deep learning and the renormalization group BID1, which in turn connects holography and deep learning BID37 BID10, and also allows studying network design from the perspective of quantum entanglement BID20. In turn, neural networks can represent quantum states BID3 BID4 BID15 BID11. Most recently, TNs have been applied to solve machine learning problems such as dimensionality reduction BID5 and handwriting recognition BID30 BID12. Through a feature mapping, an image described as classical information is transferred into a product state defined in a Hilbert space. These states are then acted on by a TN, giving an output vector that determines the classification of the images into a predefined number of classes.
Going further with this clue, it can be seen that when using a vector space for solving image recognition problems, one faces a similar "exponential wall" as in quantum many-body systems. For recognizing an object in the real world, there exist infinite possibilities since the shapes and colors change, in principle, continuously. An image or a gray-scale photo provides an approximation, where the total number of possibilities is lowered to 256 N per channel, with N describing the number of pixels, and it is assumed to be fixed for simplicity. Similar to the applications in quantum physics, TNs show a promising way to lower such an exponentially large space to a polynomial one. This work contributes in two aspects. Firstly, we derive an efficient quantum-inspired learning algorithm based on a hierarchical representation that is known as tree TN (TTN) (see, e.g., BID21). Compared with Refs. BID30 BID12 where a onedimensional (1D) TN (called matrix product state (MPS) (Östlund &) ) is used, TTN suits more the two-dimensional (2D) nature of images. The algorithm is inspired by the multipartite entanglement renormalization ansatz (MERA) approach BID35 BID36 BID7 BID9, where the tensors in the TN are kept to be unitary during the training. We test the algorithm on both the MNIST (handwriting recognition with binary images) and CIFAR (recognition of color images) databases and obtain accuracies comparable to the performance of convolutional neural networks. More importantly, the TN states can then be defined that optimally encodes each class of images as a quantum many-body state, which is akin to the study of a duality between probabilistic graphical models and TNs BID27. We contrast the bond dimension and model complexity, with indicating that a growing bond dimension overfits the data. we study the representation in the different layers in the hierarchical TN with t-SNE (BID32, and find that the level of abstraction changes the same way as in a deep convolutional neural network BID18 or a deep belief network BID14, and the highest level of the hierarchy allows for a clear separation of the classes. Finally, we show that the fidelities between each two TN states from the two different image classes are low, and we calculate the entanglement entropy of each TN state, which gives an indication of the difficulty of each class. A TN is defined as a group of tensors whose indexes are shared and contracted in a specific way. TN can represent the partition function of a classical system, and also of a quantum many-body state which is mathematically a higher-dimensional vector. For the latter, one famous example is the MPS that is written as DISPLAYFORM0 s N α N −1 . An MPS can simply be understood as a d N -dimensional vector, with d the dimension of s i . Though the space increases exponentially with N, the cost of an MPS increases only polynomially as N dD 2 (with D dimension of α n). When using it to describe an N − site physical state, the un-contracted open indexes {s n} are called physical bonds that represent the physical Hilbert space 1, and contracted dummy indexes {α m} are called virtual bonds that carry the quantum entanglement. MPS is essentially a 1D state representation. When applied to 2D systems, MPS suffers severe restrictions since one has to choose a snake-like 1D path that covers the 2D manifold. 
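To make the MPS notation above concrete, here is a small numpy sketch of an MPS as a chain of third-order tensors and of its contraction with a product state; the contraction cost scales as O(N d D^2), which is the polynomial cost mentioned in the text. The random tensors and dimensions are for illustration only.

```python
import numpy as np

def random_mps(n_sites, d=2, D=8):
    """An MPS as a list of third-order tensors A[n] of shape (D_left, d, D_right)."""
    dims = [1] + [D] * (n_sites - 1) + [1]
    return [np.random.randn(dims[i], d, dims[i + 1]) for i in range(n_sites)]

def overlap_with_product_state(mps, vectors):
    """Contract an MPS with a product state v_1 x ... x v_N; the cost scales as O(N d D^2)."""
    left = np.ones(1)
    for A, v in zip(mps, vectors):
        left = np.einsum('a,asb,s->b', left, A, v)   # absorb one site at a time
    return left.item()

mps = random_mps(16)
vectors = [np.random.rand(2) for _ in range(16)]
print(overlap_with_product_state(mps, vectors))
```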
This issue is known in physics as the area law of entanglement entropy BID33 BID13 BID28.A TTN FIG1 ) provides a natural expression for 2D states, which we can write as a hierarchical structure of K layers: DISPLAYFORM1 where N k is the number of tensors in the k-th layer. To avoid the disaster brought by an extremely large number of indexes in a TN, we use the following symbolic and graphic conventions. A tensor is denoted by a bold letter without indexes, e.g., T, whose elements are denoted by T α1α2···. Note a vector and a matrix are first-and second-order tensors with one and two indexes, respectively. When two tensors are multiplied together, the common indexes are to be contracted. One example is the inner product of two vectors, where DISPLAYFORM2 We take the transpose of v because we always assume the vectors to be column vectors. Another example is the matrix product, where DISPLAYFORM3 αb2 is simplified to DISPLAYFORM4. α is an dummy index, and b 1 and b 2 are two open indexes. In the graphic representation, a tensor is a block connecting to several bonds. Each bond represents an index belonging to this tensor. The dummy indexes are represented by the shared bonds that connect to two different blocks. Following this convention, Eq. can be simplified to DISPLAYFORM5 Similar to the MPS, a TTN also provides a representation of a d N -dimensional vector. The cost is also polynomial to N. One advantage is that the TTN bears a hierarchical structure and can be naturally built for 2D systems. In a TTN, each local tensor is chosen to have one upward index and four downward indexes. For representing a pure state, the tensor on the top only has four downward indexes. All the indexes except the downward ones of the tensors in the first layer are dummy and will be contracted. In our work, the TTN is slightly different from the pure state representation, by adding an upward index to the top tensor (FIG1). This added index corresponds to the labels in the supervised machine learning. Before training, we need to prepare the data with a feature function that maps N scalars (N is the dimension of the images) to the tensor product of N normalized vectors. The choice of the feature function is diversified: we chose the one used in Ref. BID30, where the dimension of each vector (d) can be controlled. Then, the space is transformed from that of N scalars to a d N -dimensional vector (Hilbert) space. After "vectorizing" the j-th image in the dataset, the output for classification is ad-dimensional vector obtained by contracting this huge vector with the TTN, which reads as DISPLAYFORM6 where {v [j,n] } denotes the n-th vector given by the j-th sample. One can see thatd is the dimension of the upward index of the top tensor, and should equal to the number of the classes. We use the convention that the position of the maximum value gives the classification of the image predicted by the TTN, akin to a softmax layer in a deep learning network. One choice of the cost function to be minimized is the square error, which is defined as DISPLAYFORM7 where J is the number of training samples. L [j] is ad-dimensional vector corresponding to the j-th label. For example, if the j-th sample belongs to the p-th class, L [j] is defined as DISPLAYFORM8 3 MERA-INSPIRED TRAINING ALGORITHM Inspired by MERA BID35, we derive a highly efficient training algorithm. 
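Before turning to the training algorithm, the forward contraction of Eq. defined above can be sketched as follows: each tensor coarse-grains four neighbouring feature vectors into one vector, layer by layer, and the top tensor returns the label vector. The isometric initialization via QR, the 4 × 4 toy grid, and the per-step normalization are illustrative assumptions consistent with the conventions described in the text.

```python
import numpy as np

def isometry(d_out, shape_in):
    """Random isometric tensor T of shape (d_out, in1, in2, in3, in4) with T T^dagger = I."""
    d_in = int(np.prod(shape_in))
    q, _ = np.linalg.qr(np.random.randn(d_in, d_out))   # orthonormal columns
    return q.T.reshape((d_out,) + tuple(shape_in))

def ttn_output(grid, layers, top):
    """Coarse-grain a grid of feature vectors, four child vectors per tensor, then apply the top tensor."""
    for tensors in layers:
        new = {}
        for (X, Y), T in tensors.items():
            kids = [grid[(2 * X + dx, 2 * Y + dy)] for dx in (0, 1) for dy in (0, 1)]
            v = np.einsum('uabcd,a,b,c,d->u', T, *kids)
            new[(X, Y)] = v / np.linalg.norm(v)          # normalize, as done during training
        grid = new
    kids = [grid[(x, y)] for x in (0, 1) for y in (0, 1)]
    return np.einsum('labcd,a,b,c,d->l', top, *kids)     # label vector

d, D, n_labels = 2, 4, 2
grid = {(x, y): np.random.rand(d) for x in range(4) for y in range(4)}            # 4 x 4 toy "image"
layer1 = {(X, Y): isometry(D, (d, d, d, d)) for X in range(2) for Y in range(2)}
top = isometry(n_labels, (D, D, D, D))
print(ttn_output(grid, [layer1], top))
```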
To proceed, let us rewrite the cost function in the following form DISPLAYFORM9 The third term comes from the normalization of L [j], and we assume the second term is always real. The dominant cost comes from the first term. We borrow the idea from the MERA approach to reduce this cost. Mathematically speaking, the central idea is to impose that Ψ is orthogonal, i.e., ΨΨ † = I. Then Ψ is optimized with Ψ † Ψ = I satisfied in the valid subspace that optimizes the classification. By satisfying in the subspace, we do not require an identity from Ψ † Ψ, but mean DISPLAYFORM10 In MERA, a stronger constraint is used. With the TTN, each tensor has one upward and four downward indexes, which gives a non-square orthogonal matrix by grouping the downward indexes into a large one. Such tensors are called isometries and satisfy TT † = I after contracting all downwards indexes with its conjugate. When all the tensors are isometries, the TTN gives a unitary transformation that satisfies ΨΨ † = I; it compresses a d N -dimensional space to ad-dimensional one. In this way, the first terms becomes a constant, and we only need to deal with the second term. The cost function becomes DISPLAYFORM11 Each term in f is simply the contraction of one TN, which can be efficiently computed. We stress that independent of Eq., Eq. can be directly used as the cost function. This will lead to a more interesting picture connected to the condensed matter physics and quantum information theory. From the physical point of view, the central idea of MERA is the renormalization group (RG) of the entanglement BID35. The RG flows are implemented by the isometries that satisfy TT † = I. On one hand, the orthogonality makes the state remain normalized, a basic requirement of quantum states. On the other hand, the renormalization group flows can be considered as the compressions of the Hilbert space (from the downward to upward indexes). The orthogonality ensure that such compressions are unbiased with T † T I in the subspace. The difference from the identity characterizes the errors caused by the compressions. More discussions are given in Sec. 5.The tensors in the TTN are updated alternatively to minimize Eq.. To update T [k,n] for instance, we assume other tensors are fixed and define the environment tensor E [k,n], which is calculated by contracting everything in Eq. after taking out T [k,n] FIG1 ) BID9. Then the cost function becomes f = −Tr(T [k,n] E [k,n] ). Under the constraint that T [k,n] is an isometry, the solution of the optimal point is given by T [k,n] = VU † where V and U are calculated from the singular value decomposition E [k,n] = UΛV †. At this point, we have f = − a Λ a.Then, the update of one tensor becomes the calculation of the environment tensor and its singular value decomposition. In the alternating process for updating all the tensors, some tricks are used to accelerate the computations. The idea is to save some intermediate to avoid repetitive calculations by taking advantage of the tree structure. Another important detail is to normalize the vector obtained each time by contracting four vectors with a tensor. The strategy for building a multi-class classifier is the one-against-all classification scheme in machine learning. For each class, we train one TTN so that it recognizes whether an image belongs to this class. The output of Eq. is a two-dimensional vector. We fix the label for a yes answer as L yes =. For P classes, we will accordingly have P TTNs, denoted by {Ψ (p) }. 
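The single-tensor update described above boils down to one singular value decomposition of the environment tensor; a minimal sketch is given below before continuing with the multi-class strategy. Note that the index ordering here places the upward index first, so the optimum appears as T = UV† (equivalent, up to convention, to the T = VU† written in the text); the random environment is only a sanity check.

```python
import numpy as np

def update_isometry(env):
    """One MERA-style update: given the environment tensor E of a TTN tensor (indexes ordered
    as (up, down1, down2, down3, down4)), return the isometry minimizing f = -Tr(T E)."""
    up = env.shape[0]
    E = env.reshape(up, -1)                           # group the four downward indexes
    U, s, Vh = np.linalg.svd(E, full_matrices=False)
    T = U @ Vh                                        # optimal isometry, T T^dagger = I
    return T.reshape(env.shape), -s.sum()             # updated tensor and f = -sum_a Lambda_a

# sanity check: the update indeed returns an isometry
env = np.random.randn(3, 2, 2, 2, 2)
T, f = update_isometry(env)
print(np.allclose(T.reshape(3, -1) @ T.reshape(3, -1).T, np.eye(3)))   # True
```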
Then for recognizing an image (vectorized to {v[n] }), we define a P -dimensional vector F as DISPLAYFORM12 The position of its maximal element gives which class the image belongs to. Algorithm 1 One-against-All Require: data: data points, n: the number of data points 1: for i = 0 → 9 do 2:Train binary classifier classif ier k corresponding to each handwritten digit 3: end for 4: for j = 1 → n do 5: DISPLAYFORM13 Our approach to classify image data begins by mapping each pixel x j to a d-component vector φ sj (x j). This feature map was introduced by BID30 ) and defined as DISPLAYFORM0, where s j runs from 1 to d. By using a larger d, the TTN has the potential to approximate a richer class of functions. DISPLAYFORM1 Figure 3: Embedding of data instances of CIFAR-10 by t-SNE corresponding to each layer in the TTN: (a) original data distribution and (b) the 1st, (c) 2nd, (d) 3rd, (e) 4th, and (f) 5th layer. To verify the representation power of TTNs, we used the CIFAR-10 dataset BID17 ). The dataset consists of 60,000 32 × 32 RGB images in 10 classes, with 6,000 instances per class. There are 50,000 training images and 10,000 test images. Each RGB image was originally 32 × 32 pixels: we transformed them to grayscale. Working with gray-scale images reduced the complexity of training, with the trade-off being that less information was available for learning. We built a TTN with five layers and used the MERA-like algorithm (Section 3) to train the model. Specifically, we built a binary classification model to investigate key machine learning and quantum features, instead of constructing a complex multiclass model. We found both the input bond (physical indexes) and the virtual bond (geometrical indexes) had a great impact on the representation power of TTNs, as showed in FIG3. This indicates that the limitation of representation power (learnability) of the TTNs is related to the input bond. The same way, the virtual bond determine how accurately the TTNs approximate this limitation. From the perspective of tensor algebra, the representation power of TTNs depends on the tensor contracted from the entire TTN. Thus the limitation of this relies on the input bond. Furthermore, the TTNs could be considered as a decomposition of this complete contraction, and the virtual bond determine how well the TTNs approximate this. Moreover, this phenomenon could be interpreted from the perspective of quantum many-body theory: the higher entanglement in a quantum manybody system, the more representation power this quantum system has. The sequence of convolutional and pooling layers in the feature extraction part of a deep learning network is known to arrive at higher and higher levels of abstractions that helps separating the classes in a discriminative learner BID19. This is often visualized by embedding the representation in two dimensions by t-SNE (BID32, and by coloring the instances according to their classes. If the classes clearly separate in this embedding, the subsequent classifier will have an easy task performing classification at a high accuracy. We plotted this embedding for each layer in the TN in Fig. 3 . We observe the same pattern as in deep learning, having a clear separation in the highest level of abstraction. To test the generalization of TTNs on a benchmark dataset, we used the MNIST collection, which is widely used in handwritten digit recognition. 
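The explicit form of the feature map mentioned at the beginning of this passage did not survive extraction (the DISPLAYFORM0 placeholder). The sketch below uses the d-component map of BID30 (cosine/sine components weighted by binomial coefficients), which we believe is the map being referenced; treat the exact expression as an assumption. Each pixel value in [0, 1] becomes a normalized d-dimensional vector, reducing to (cos(πx/2), sin(πx/2)) for d = 2.

```python
import numpy as np
from scipy.special import comb

def feature_map(x, d=2):
    """Map a pixel value x in [0, 1] to a normalized d-component vector (assumed form of the map in BID30)."""
    s = np.arange(1, d + 1)
    return (np.sqrt(comb(d - 1, s - 1))
            * np.cos(np.pi * x / 2) ** (d - s)
            * np.sin(np.pi * x / 2) ** (s - 1))

def vectorize_image(img, d=2):
    """Turn a grayscale image with values in [0, 1] into a grid of local feature vectors."""
    return {(x, y): feature_map(img[x, y], d) for x in range(img.shape[0]) for y in range(img.shape[1])}

print(feature_map(0.3, d=3))      # a 3-component vector with unit norm
```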
The training set consists of 60,000 examples and the test set of 10,000 examples. Similar to the last experiment, we built a binary model to show the generalization performance. With the increase of the bond dimension (both the input bond and the virtual bond), we found an apparent rise of training accuracy, which is consistent with the results in FIG3. At the same time, we observed a decline of testing accuracy. The increase of the bond dimension leads to a sharp increase of the number of parameters and, as a result, it gives rise to overfitting and lowers the generalization performance. Therefore, one must pay attention to finding the optimal bond dimension - we can think of this as a hyperparameter controlling model complexity. We choose the one-against-all strategy to build a 10-class model, which classifies an input image by choosing the label for which the output is largest. Considering efficiency and avoiding overfitting, we use the minimal values of d (TAB0) that reach a training accuracy of around 95%. Taking one trained TTN Ψ where the index for the labels is assumed to be P-dimensional, we can define P normalized TTN vectors (states) as DISPLAYFORM0 In Φ [p], the upward index of the top tensor is contracted with the label (L [p] ), giving a TN state that represents a normalized d N -dimensional vector (pure quantum state). The quantum state representations allow us to use quantum theories to study images and the related issues. Let us begin with the cost function. In Section 3, we started from a frequently used cost function in Eq. FORMULA7, and derived a cost function in Eq. In the following, we show that Eq. can be understood via the notion of fidelity. With Eq., the cost function in Eq. can be rewritten as DISPLAYFORM1 The fidelity between two states (normalized vectors) is defined as their inner product, thus each term in the summation is simply the fidelity BID0 between a vectorized image and the corresponding TTN state Φ [p]. Considering that the fidelity measures the distance between two states, {Φ [p] } are the P states that minimize the distance between each Φ [p] and the p-th class of vectorized images. In other words, the cost function is in fact the total fidelity, and Φ [p] is the quantum state (normalized vector) that optimally encodes the p-th class of images. Note that due to the orthogonality, such P states are orthogonal to each other, i.e., Φ [p'] † Φ [p] = I p'p. This might trap us in a bad local minimum. For this reason, we propose the one-against-all strategy (see Algorithm 3). For each class, we have two TN states labeled yes and no, respectively, and in total 2P TN states. {Φ [p] } are then defined by taking the P yes-labeled TN states. The elements of F in Eq. FORMULA12 are defined by the summation of the fidelity between Φ [p] and the class of vectorized images. In this scenario, the classification is decided by finding the Φ [p] that gives the maximal fidelity with the input image, while the orthogonality conditions among {Φ [p] } no longer exist. Besides the algorithmic interpretation, fidelity may carry more intrinsic information. Without the orthogonality of {Φ [p] }, the fidelity (FIG1) describes the differences between the quantum states that encode different classes of images. As shown in FIG4, F p p remains quite small in most cases, indicating that the orthogonality still approximately holds. Still, some fidelities are relatively large, e.g., F 4,9 = 0.1353. We speculate this is closely related to the way the data are fed and processed in the TN.
In our case, two image classes that have similar shapes will result in a larger fidelity, because the TTN essentially provides a real-space renormalization flow. In other words, the input vectors are still initially arranged and renormalized layer by layer according to their spatial locations in the image; each tensor renormalizes four nearest-neighboring vectors into one vector. Fidelity could potentially be applied to building a network, where the nodes are classes of images and the weights of the connections are given by the F p p. This might provide a mathematical model of how different classes of images are associated with each other. We leave these questions for future investigations. Another important concept of quantum mechanics is (bipartite) entanglement, a quantum version of correlations BID0. It is one of the key characteristics that distinguish quantum states from classical ones. Entanglement is usually given by a normalized positive-definite vector called the entanglement spectrum (denoted as Λ), and is measured by the entanglement entropy S = − Σ a Λ a 2 ln Λ a 2. Having two subsystems, the entanglement entropy measures the amount of information about one subsystem that can be gained by measuring the other subsystem. In the framework of TNs, the entanglement entropy determines the minimal dimensions of the dummy indexes needed for reaching a certain precision. In our image recognition setting, the entanglement entropy characterizes how much information about one part of the image we can gain by knowing the rest of the image. In other words, if we only know a part of an image and want to predict the rest according to the trained TTN (the quantum state that encodes the corresponding class), the entanglement entropy measures how accurately this can be done. Here, an important analogy is between knowing a part of the image and measuring the corresponding subsystem of the quantum state. Thus, the trained TTN might be used for image processing, e.g., to recover an image from a damaged or compressed lower-resolution version. FIG4 shows the entanglement entropy for each class in the MNIST dataset. We computed two kinds of entanglement entropy, marked by up-down and left-right. The first one denotes the entanglement between the upper part of the images and the lower part. The latter one denotes the entanglement between the left part and the right part. With the TTN, the entanglement spectrum is simply given by the singular values of the matrix M = L † T [K,1], with L the label and T [K,1] the top tensor (FIG1). This is because all the tensors in the TTN are orthogonal. Note that M has four indexes, each of which represents the effective space renormalized from one quarter of the vectorized image. Thus, the bipartition of the entanglement determines how the four indexes of M are grouped into two bigger indexes before calculating the SVD (a computational sketch of this step is given after the concluding observations below). We compute the two kinds of entanglement entropy by cutting the system in the middle along the x or y direction. Our results suggest that the images of "0" and "4" are the easiest and hardest, respectively, to predict one part of the image by knowing the other part. We continued the forays into using tensor networks for machine learning, focusing on hierarchical, two-dimensional tree tensor networks that we found to be a natural fit for image recognition problems. This proved a scalable approach that had a high precision, and we can conclude the following observations:• The limitation of representation power (learnability) of the TTN model strongly depends on the input bond (physical indexes).
We have continued the forays into using tensor networks for machine learning, focusing on hierarchical, two-dimensional tree tensor networks, which we found to be a natural fit for image recognition problems. This proved to be a scalable approach with high precision, and we can draw the following conclusions:
• The limit on the representation power (learnability) of the TTN model depends strongly on the input bond (physical indexes), while the virtual bond (geometrical indexes) determines how well the TTN can approach this limit.
• A hierarchical tensor network exhibits the same increase in the level of abstraction as a deep convolutional neural network or a deep belief network.
• Fidelity gives us an insight into how difficult it is to tell two classes apart.
• Entanglement entropy has the potential to characterize the difficulty of representing a class of problems.
In future work, we plan to use fidelity-based training in an unsupervised setting, to apply the trained TTN to recover damaged or compressed images, and to use entanglement entropy to characterize the accuracy of such reconstructions.
This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning.
Predicting outcomes and planning interactions with the physical world are long-standing goals for machine learning. A variety of such tasks involves continuous physical systems, which can be described by partial differential equations (PDEs) with many degrees of freedom. Existing methods that aim to control the dynamics of such systems are typically limited to relatively short time frames or a small number of interaction parameters. We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames. We propose to split the problem into two distinct tasks: planning and control. To this end, we introduce a predictor network that plans optimal trajectories and a control network that infers the corresponding control parameters. Both stages are trained end-to-end using a differentiable PDE solver. We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs such as the incompressible Navier-Stokes equations. Intelligent systems that operate in the physical world must be able to perceive, predict, and interact with physical phenomena . In this work, we consider physical systems that can be characterized by partial differential equations (PDEs). PDEs constitute the most fundamental description of evolving systems and are used to describe every physical theory, from quantum mechanics and general relativity to turbulent flows . We aim to endow artificial intelligent agents with the ability to direct the evolution of such systems via continuous controls. Such optimal control problems have typically been addressed via iterative optimization. Differentiable solvers and the adjoint method enable efficient optimization of high-dimensional systems (; de ;). However, direct optimization through gradient descent (single shooting) at test time is resource-intensive and may be difficult to deploy in interactive settings. More advanced methods exist, such as multiple shooting and collocation, but they commonly rely on modeling assumptions that limit their applicability, and still require computationally intensive iterative optimization at test time. Iterative optimization methods are expensive because they have to start optimizing from scratch and typically require a large number of iterations to reach an optimum. In many real-world control problems, however, agents have to repeatedly make decisions in specialized environments, and reaction times are limited to a fraction of a second. This motivates the use of data-driven models such as deep neural networks, which combine short inference times with the capacity to build an internal representation of the environment. We present a novel deep learning approach that can learn to represent solution manifolds for a given physical environment, and is orders of magnitude faster than iterative optimization techniques. The core of our method is a hierarchical predictor-corrector scheme that temporally divides the problem into easier subproblems. This enables us to combine models specialized to different time scales in order to control long sequences of complex high-dimensional systems. We train our models using a differentiable PDE solver that can provide the agent with feedback of how interactions at any point in time affect the outcome. 
Our models learn to represent manifolds containing a large number of solutions, and can thereby avoid local minima that can trap classic optimization techniques. We evaluate our method on a variety of control tasks in systems governed by advection-diffusion PDEs such as the Navier-Stokes equations. We quantitatively evaluate the ing sequences on how well they approximate the target state and how much force was exerted on the physical system. Our method yields stable control for significantly longer time spans than alternative approaches. Physical problems commonly involve nonlinear PDEs, often with many degrees of freedom. In this context, several works have proposed methods for improving the solution of PDE problems (; ;) or used PDE formulations for unsupervised optimization . Lagrangian fluid simulation has been tackled with regression forests , graph neural networks , and continuous convolutions . Data-driven turbulence models were trained with MLPs . Fully-convolutional networks were trained for pressure inference and advection components were used in adversarial settings . Temporal updates in reduced spaces were learned via the Koopman operator . In a related area, deep networks have been used to predict chemical properties and the outcome of chemical reactions . Differentiable solvers have been shown to be useful in a variety of settings. and de developed differentiable simulators for rigid body mechanics. (for earlier work in computer graphics.) applied related techniques to manipulation planning. Specialized solvers were developed to infer protein structures , interact with liquids , control soft robots , and solve inverse problems that involve cloth . Like ours, these works typically leverage the automatic differentiation of deep learning pipelines (; ; ; ; van Merriënboer et al., 2018; ; ; ;). However, while the works above target Lagrangian solvers, i.e. reference frames moving with the simulated material, we address grid-based solvers, which are particularly appropriate for dense, volumetric phenomena. The adjoint method (; ; ; ; ;) is used by most machine learning frameworks, where it is commonly known as reverse mode differentiation . While a variety of specialized adjoint solvers exist (; ;), these packages do not interface with production machine learning frameworks. A supporting contribution of our work is a differentiable PDE solver called Φ Flow that integrates with TensorFlow and PyTorch . It is publicly available at https://github.com/tumpbs/PhiFlow. Consider a physical system u(x, t) whose natural evolution is described by the PDE ∂u ∂t = P u, ∂u ∂x, ∂ 2 u ∂x 2,..., y(t), where P models the physical behavior of the system and y(t) denotes external factors that can influence the system. We now introduce an agent that can interact with the system by controlling certain parameters of the dynamics. This could be the rotation of a motor or fine-grained control over a field. We factor out this influence into a force term F, yielding The agent can now be modelled as a function that computes F (t). As solutions of nonlinear PDEs were shown to yield low-dimensional manifolds , we target solution manifolds of F (t) for a given choice of P with suitable boundary conditions. This motivates our choice to employ deep networks for our agents. In most real-world scenarios, it is not possible to observe the full state of a physical system. When considering a cloud of smoke, for example, the smoke density may be observable while the velocity field may not be seen directly. 
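Written out, factoring the agent's influence into the force term as described above yields the controlled system (a form consistent with Eq. 1 and the sentence following it):

```latex
\frac{\partial u}{\partial t}
  = \mathcal{P}\!\left(u,\, \frac{\partial u}{\partial x},\, \frac{\partial^2 u}{\partial x^2},\, \dots,\, y(t)\right) + F(t)
```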
We model the imperfect information by defining the observable state of u as o(u). The observable state is problem dependent, and our agent is conditioned only on these observations, i.e. it does not have access to the full state u. Using the above notation, we define the control task as follows. An initial observable state o 0 of the PDE as well as a target state o * are given (Figure 1a). We are interested in a reconstructed trajectory u(t) that matches these states at t 0 and t *, i.e. o 0 = o(u(t 0)), o * = o(u(t *)), and minimizes the amount of force applied within the simulation domain D (Figure 1b): Taking discrete time steps ∆t, the reconstructed trajectory u is a sequence of n = (t * − t 0)/∆t states. When an observable dimension cannot be controlled directly, there may not exist any trajectory u(t) that matches both o 0 and o *. This can stem from either physical constraints or numerical limitations. In these cases, we settle for an approximation of o *. To measure the quality of the approximation of the target, we define an observation loss L * o. The form of this loss can be chosen to fit the problem. We combine Eq. 3 and the observation loss into the objective function 4) with α > 0. We use square brackets to denote functionals, i.e. functions depending on fields or series rather than single values. Differentiable solvers. Let u(x, t) be described by a PDE as in Eq. 1. A regular solver can move the system forward in time via Euler steps: Each step moves the system forward by a time increment ∆t. Repeated execution produces a trajectory u(t) that approximates a solution to the PDE. This functionality for time advancement by itself is not well-suited to solve optimization problems, since gradients can only be approximated by finite differencing. For high-dimensional or continuous systems, this method becomes computationally expensive because a full trajectory needs to be computed for each optimizable parameter. Differentiable solvers resolve this issue by solving the adjoint problem via analytic derivatives. The adjoint problem computes the same mathematical expressions while working with lower-dimensional vectors. A differentiable solver can efficiently compute the derivatives with respect to any of its inputs, i.e. ∂u(t i+1)/∂u(t i) and ∂u(t i+1) /∂y(t i). This allows for gradientbased optimization of inputs or control parameters over an arbitrary number of time steps. Iterative trajectory optimization. Many techniques exist that try to find optimal trajectories by starting with an initial guess for F (t) and slightly changing it until reaching an optimum. The simplest of these is known as single shooting. In one optimization step, it simulates the full dynamics, then backpropagates the loss through the whole sequence to optimize the controls . Replacing F (t) with an agent F (t|o t, o *), which can be parameterized by a deep network, yields a simple training method. For a sequence of n frames, this setup contains n linked copies of the agent and is depicted in Figure 2. We refer to such an agent as a control force estimator (CFE). Optimizing such a chain of CFEs is both computationally expensive and causes gradients to pass through a potentially long sequence of highly nonlinear simulation steps. When the reconstruction u is close to an optimal trajectory, this is not a problem because the gradients ∆u are small and the operations executed by the solver are differentiable by construction. 
The solver can therefore be locally approximated by a first-order polynomial and the gradients can be safely backpropagated. For large ∆u, e.g. at the beginning of an optimization, this approximation breaks down, causing the gradients to become unstable while passing through the chain. This instability in the training process can prevent single-shooting approaches from converging and deep networks from learning unless they are initialized near an optimum. Alternatives to single shooting exist, promising better and more efficient convergence. Multiple shooting splits the trajectory into segments with additional defect constraints. Depending on the physical system, this method may have to be adjusted for specific problems . Collocation schemes model trajectories with splines. While this works well for particle trajectories, it is poorly suited for Eulerian solvers where the evolution of individual points does not reflect the overall motion. Model reduction can be used to reduce the dimensionality or nonlinearity of the problem, but generally requires domain-specific knowledge. When applicable, these methods can converge faster or in a more stable manner than single shooting. However, as we are focusing on a general optimization scheme in this work, we will use single shooting and its variants as baseline comparisons. Supervised and differentiable physics losses. One of the key ingredients in training a machine learning model is the choice of loss function. For many tasks, supervised losses are used, i.e. losses that directly compare the output of the model for a specific input with the desired ground truth. While supervised losses can be employed for trajectory optimization, far better loss functions are possible when a differentiable solver is available. We will refer to these as differentiable physics loss functions. In this work, we employ a combination of supervised and differentiable physics losses, as both come with advantages and disadvantages. One key limitation of supervised losses is that they can only measure the error of a single time step. Therefore, an agent cannot get any measure of how its output would influence future time steps. Another problem arises from the form of supervised training data which comprises input-output pairs, which may be obtained directly from data generation or through iterative optimization. Since optimal control problems are generally not unimodal, there can exist multiple possible outputs for one input. This ambiguity in the supervised training process will lead to suboptimal predictions as the network will try to find a compromise between all possible outputs instead of picking one of them. Differentiable physics losses solve these problems by allowing the agent to be directly optimized for the desired objective (Eq. 4). Unlike supervised losses, differentiable physics losses require a differentiable solver to backpropagate the gradients through the simulation. Multiple time steps can be chained together, which is a key requirement since the objective (Eq. 4) explicitly depends on all time steps through L F [u(t)] (Eq. 3). As with iterative solvers, one optimization step for a sequence of n frames then invokes the agent n times before computing the loss, each invocation followed by a solver step. 
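As a concrete illustration of such an unrolled optimization step, the following sketch chains a small CFE network with a toy differentiable solver step over n frames and backpropagates the combined objective (cf. Eq. 4) through the whole sequence. Here, solver_step is a stand-in diffusion-plus-forcing update rather than the Φ Flow solver, and all names are illustrative.

```python
import torch

def solver_step(u, force, dt=0.1, nu=0.05):
    # Toy differentiable "PDE step" (diffusion plus forcing), standing in for a
    # real differentiable solver such as the one described in Appendix A.
    lap = torch.roll(u, 1, dims=-1) - 2 * u + torch.roll(u, -1, dims=-1)
    return u + dt * (nu * lap + force)

class CFE(torch.nn.Module):
    """Minimal control force estimator: maps (current state, target) to a force field."""
    def __init__(self, size):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2 * size, 64), torch.nn.Tanh(), torch.nn.Linear(64, size))
    def forward(self, u, target):
        return self.net(torch.cat([u, target], dim=-1))

size, n_frames, alpha = 32, 16, 0.1
cfe = CFE(size)
opt = torch.optim.Adam(cfe.parameters(), lr=1e-3)

u0 = torch.randn(4, size)        # batch of initial states
target = torch.randn(4, size)    # target observations o*

u, force_loss = u0, 0.0
for _ in range(n_frames):        # n CFE invocations, each followed by a solver step
    F = cfe(u, target)
    force_loss = force_loss + (F ** 2).sum(dim=-1).mean()
    u = solver_step(u, F)
loss = alpha * force_loss + ((u - target) ** 2).sum(dim=-1).mean()

opt.zero_grad()
loss.backward()                  # backpropagate through all n solver steps
opt.step()
```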
The employed differentiable solver backpropagates the gradients through the whole sequence, which gives the model feedback on (i) how its decisions change the future trajectory and (ii) how to handle states as input that were reached because of its previous decisions. Since no ground truth needs to be provided, multi-modal problems naturally converge towards one solution. In order to optimally interact with a physical system, an agent has to (i) build an internal representation of an optimal observable trajectory o(u(t)) and (ii) learn what actions to take to move the system along the desired trajectory. These two steps strongly resemble the predictor-corrector method . Given o(t), a predictor-corrector method computes o(t + ∆t) in two steps. First, a prediction step approximates the next state, yielding o p (t + ∆t). Then, the correction uses o p (t + ∆t) to refine the initial approximation and obtain o(t + ∆t). Each step can, to some degree, be learned independently. This motivates splitting the agent into two neural networks: an observation predictor (OP) network that infers intermediate states o p (t i), i ∈ {1, 2, ...n − 1}, planning out a trajectory, and a corrector network (CFE) that estimates the control force F (t i |o(u i), o p i+1 ) to follow that trajectory as closely as possible. This splitting has the added benefit of exposing the planned trajectory, which would otherwise be inaccessible. As we will demonstrate, it is crucial for the prediction stage to incorporate knowledge about longer time spans. We address this by modelling the prediction as a temporally hierarchical process, recursively dividing the problem into smaller subproblems. To achieve this, we let the OP not directly infer o but instead model it to predict the optimal center point between two states at times t i, t j, with i, j ∈ {1, 2, . This function is much more general than predicting the state of the next time step since two arbitrary states can be passed as arguments. Recursive OP evaluations can then partition the sequence until a prediction o p (t i) for every time step t i has been made. This scheme naturally enables scaling to arbitrary time frames or arbitrary temporal resolutions, assuming that the OP can correctly anticipate the physical behavior. Since physical systems often exhibit different behaviors on different time scales and the OP can be called with states separated by arbitrary time spans, we condition the OP on the time scale it is evaluated on by instantiating and training a unique version of the model for every time scale. This simplifies training and does not significantly increase the model complexity as we use factors of two for the time scales, and hence the number of required models scales with O(log 2 n). We will refer to one instance of an OP n by the time span between its input states, measured in the number of frames n = (t j − t i)/∆t. With the CFE and OP n as building blocks, many algorithms for solving the control problem, i.e. for computing F (t), can be assembled and trained. We compared a variety of algorithms and found that a scheme we will refer to as prediction refinement produces the best . It is based on the following principles: (i) always use the finest scale OP possible to make a prediction, (ii) execute the CFE followed by a solver step as soon as possible, (iii) refine predictions after the solver has computed the next state. The algorithm that realizes these goals is shown in Appendix B with an example for n = 8. 
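The recursive partitioning described above can be illustrated with a short sketch in which a per-scale midpoint predictor fills in every intermediate frame; op is a hypothetical mapping from time span to the corresponding OP_n model, and the averaging predictor in the toy usage merely stands in for a trained network.

```python
def plan(op, predictions, i, j):
    """Recursively fill predictions[i..j] with midpoint estimates between frames i and j."""
    if j - i < 2:
        return
    mid = (i + j) // 2
    # OP_{j-i} predicts the optimal center frame between the two given observations
    predictions[mid] = op[j - i](predictions[i], predictions[j])
    plan(op, predictions, i, mid)     # refine the left half
    plan(op, predictions, mid, j)     # refine the right half

# Toy usage: scalar "observations", averaging as a stand-in for the trained OP models.
n = 8
op = {span: (lambda a, b: 0.5 * (a + b)) for span in (2, 4, 8)}
predictions = {0: 0.0, n: 1.0}
plan(op, predictions, 0, n)
print(sorted(predictions.items()))    # one prediction for every frame 0..n
```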
To understand the algorithm and ing execution orders, it is helpful to consider simpler algorithms first. The simplest combination of CFE and OP n invocations that solves the full trajectory, shown in Figure, generated by the OP n. Using that as input to an OP n/2 in new predictions at t n/4, t 3n/4. Continuing with this scheme, a prediction can be made for each t i, i ∈ 1,..., n − 1. Next, the actual trajectory is evaluated step by step. For each step t i, the CFE computes the control force F (t i) conditioned on the state at t i and the prediction o p (t i+1). Once F (t i) is known, the solver can step the simulation to the next state at t i+1. This al-gorithm finds a trajectory in time O(n) since n CFE calls and n−1 OP calls are required in total (see Appendix B). However, there are inherent problems with this algorithm. The physical constraints of the PDE and potential approximation errors of the CFE can in observations that are only matched partially. This can in the reconstructed trajectory exhibiting undesirable oscillations, often visible as jittering. When subsequent predictions do not line up perfectly, large forces may be applied by the CFE or the reconstructed trajectory might stop following the predictions altogether. This problem can be alleviated by changing the execution order of the two-stage algorithm described above. The ing algorithm is shown in Figure 3b and will be referred to as staggered execution. In this setup, the simulation is advanced as soon as a prediction for the next observable state exists and OPs are only executed when their state at time t i is available. This staggered execution scheme allows future predictions to take deviations from the predicted trajectory into account, preventing a divergence of the actual evolution o(u(t)) from the prediction o p (t). While the staggered execution allows most predictions to correct for deviations from the predicted trajectory o p, this scheme leaves several predictions unmodified. Most notably, the prediction o p (t n/2), which is inferred from just the initial state and the desired target, remains unchanged. This prediction must therefore be able to guide the reconstruction in the right direction without knowing about deviations in the system that occurred up to t n/2−1. As a practical consequence, a network trained with this scheme typically learns to average over the deviations, ing in blurred predictions (see Appendix D.2). Algorithm 1: Recursive algorithm computing the prediction refinement. The algorithm is called via The prediction refinement scheme, listed in Algorithm 1 and illustrated in Figure 3c, solves this problem by re-evaluating existing predictions whenever the simulation progesses in time. Not all predictions need to be updated, though, and an update to a prediction at a finer time scale can depend on a sequence of other predictions. The prediction refinement algorithm that achieves this in an optimal form is listed in Appendix B. While the ing execution order is difficult to follow for longer sequences with more than n = 8 frames, we give an overview of the algorithm by considering the prediction for time t n/2. After the first center-frame prediction o p (t n/2) of the n-frame sequence is made by OP n, the prediction refinement algorithm calls itself recursively until all frames up to frame n/4 are reconstructed from the CFE and the solver. The center prediction is then updated using OP n/2 for the next smaller time scale compared to the previous prediction. 
The call of OP n/2 also depends on o p (t 3n/4), which was predicted using OP n/2. After half of the remaining distance to the center is reconstructed by the solver, the center prediction at t n/2 is updated again, this time by the OP n/4, including all prediction dependencies. Hence, the center prediction is continually refined every time the temporal distance between the latest reconstruction and the prediction halves, until the reconstruction reaches that frame. This way, all final predictions o p (t i) are conditioned on the reconstruction of the previous state u(t i−1) and can therefore account for all previous deviations. The prediction refinement scheme requires the same number of force inferences but an increased number of OP evaluations compared to the simpler algorithms. With a total of 3n − 2 log 2 (n) − 3 OP evaluations (see Appendix B), it is of the same complexity, O(n). In practice, this refinement scheme incurs only a small overhead in terms of computation, which is outweighed by the significant gains in quality of the learned control function. We evaluate the capabilities of our method to learn to control physical PDEs in three different test environments of increasing complexity. We first target a simple but nonlinear 1D equation, for which we present an ablation study to quantify accuracy. We then study two-dimensional problems: an incompressible fluid and a fluid with complex boundaries and indirect control. Full details are given in Appendix D. Supplemental material containing additional sequences for all of the tests can be downloaded from https://ge.in.tum.de/publications/2020-iclr-holl. Burger's equation. Burger's equation is a nonlinear PDE that describes the time evolution of a single field, u . Using Eq. 1, it can be written as Examples of the unperturbed evolution are shown in Figure 4a. We let the whole state be observable and controllable, i.e. o(t) = u(t), which implies that o * can always be reached exactly. The of our ablation study with this equation are shown in Table 1. The table compares the ing forces applied by differently trained models when reconstructing a ground-truth sequence (Figure 4e). The variant denoted by CFE chain uses a neural network to infer the force without any intermediate predictions. With a supervised loss, this method learns to approximate a single step well. However, for longer sequences, quickly deviate from an ideal trajectory and diverge because the network never learned to account for errors made in previous steps (Figure 4b). Training the network with the objective loss (Eq. 4) using the differentiable solver greatly increases the quality of the reconstructions. On average, it applies only 34% of the force used by the supervised model as it learns to correct the temporal evolution of the PDE model. Next, we evaluate variants of our predictor-corrector approach, which hierarchically predicts intermediate states. Here, the CFE is implemented as Unlike the simple CFE chain above, training with the supervised loss and staggered execution produces stable (albeit jittering) trajectories that successfully converge to the target state (Figure 4c). Surprisingly, this supervised method reaches almost the same accuracy as the differentiable CFE, despite not having access to physics-based gradients. However, employing the differentiable physics loss greatly improves the reconstruction quality, producing solutions that are hard to distinguish from ideal trajectories (Figure 4d). 
The prediction refinement scheme further improves the accuracy, but the differences to the staggered execution are relatively small as the predictions of the latter are already very accurate. Table 1 also lists the of classic shooting-based optimization applied to this problem. To match the quality of the staggered execution scheme, the shooting method requires around 60 optimization steps. These steps are significantly more expensive to compute, despite the fast convergence. After around 300 iterations, the classic optimization reaches an optimal value of 10.2 and the loss stops decreasing. Starting the iterative optimization with our method as an initial guess pushes the optimum slightly lower to 10.1. Thus, even this relatively simple problem shows the advantages of our learned approach. Incompressible fluid flow. Next, we apply our algorithm to two-dimensional fluid dynamics problems, which are challenging due to the complexities of the governing Navier-Stokes equations . For a velocity field v, these can be written as subject to the hard constraints ∇·v = 0 and ∇×p = 0, where p denotes pressure and ν the viscosity. In addition, we consider a passive density ρ that moves with the fluid via ∂ρ/∂t = −v · ∇ρ. We set v to be hidden and ρ to be observable, and allow forces to be applied to all of v. We run our tests on a 128 2 grid, ing in more than 16,000 effective continuous control parameters. We train the OP and CFE networks for two different tasks: reconstruction of natural fluid flows and controlled shape transitions. Example sequences are shown in Figure 5 and a quantitative evaluation, averaged over 100 examples, is given in Table 2. While all methods manage to approximate the target state well, there are considerable differences in the amount of force applied. The supervised technique exerts significantly more force than the methods based on the differentiable solver, ing in jittering reconstructions. The prediction refinement scheme produces the smoothest transitions, converging to about half the loss of the staggered, non-refined variant. We compare our method to classic shooting algorithms for this incompressible flow problem. While a direct shooting method fails to converge, a more advanced multi-scale shooting approach still requires 1500 iterations to obtain a level of accuracy that our model achieves almost instantly. In Table 2: A comparison of methods in terms of final cost for (a) the natural flow setup and (b) the shape transitions. The initial distribution is sampled randomly and evolved to the target state. Staggered Supervised 243 ± 11 1.53 ± 0.23 n/a n/a Staggered Diff. Physics 22.6 ± 1.1 0.64 ± 0.08 89 ± 6 0.331 ± 0.134 Refined Diff. Physics 11.7 ± 0.6 0.88 ± 0.11 75 ± 4 0.126 ± 0.010 Incompressible fluid with indirect control. The next experiment increases the complexity of the fluid control problem by adding obstacles to the simulated domain and limiting the area that can be controlled by the network. An example sequence in this setting is shown in Figure 6. As before, only the density ρ is observable. Here, the goal is to move the smoke from its initial position near the center into one of the three "buckets" at the top. Control forces can only be applied in the peripheral regions, which are outside the visible smoke distribution. Only by synchronizing the 5000 continuous control parameters can a directed velocity field be constructed in the central region. 
We first infer trajectories using a trained CFE network and predictions that move the smoke into the desired bucket in a straight line. This baseline manages to transfer 89%±2.6% of the smoke into the target bucket. Next we enable the hierarchical predictions and train the OPs. This version manages to maneuver 99.22% ± 0.15% of the smoke into the desired buckets while requiring 19.1% ± 1.0% less force. For comparison, Table 3 also lists success rate and execution time for a direct optimization. Despite only obtaining a low success rate of 82%, the shooting method requires several orders of magnitude longer than evaluating our trained model. Since all optimizations are independent of each other, some find better solutions than others, reflected in the higher standard deviation. The increased number of free parameters and complexity of the fluid dynamics to be controlled make this problem intractable for the shooting method, while our model can leverage the learned representation to infer a solution very quickly. Further details are given in Appendix D.3. We have demonstrated that deep learning models in conjunction with a differentiable physics solver can successfully predict the behavior of complex physical systems and learn to control them. The in- troduction of a hierarchical predictor-corrector architecture allowed the model to learn to reconstruct long sequences by treating the physical behavior on different time scales separately. We have shown that using a differentiable solver greatly benefits the quality of solutions since the networks can learn how their decisions will affect the future. In our experiments, hierarchical inference schemes outperform traditional sequential agents because they can easily learn to plan ahead. To model realistic environments, we have introduced observations to our pipeline which restrict the information available to the learning agent. While the PDE solver still requires full state information to run the simulation, this restriction does not apply when the agent is deployed. While we do not believe that learning approaches will replace iterative optimization, our method shows that it is possible to learn representations of solution manifolds for optimal control trajectories using data-driven approaches. Fast inference is vital in time-critical applications and can also be used in conjunction with classical solvers to speed up convergence and ultimately produce better solutions. This work was supported in part by the ERC Starting Grant realFlow (ERC-2015-StG-637014). Our solver is publicly available at https://github.com/tum-pbs/PhiFlow, licensed as MIT. It is implemented via existing machine learning frameworks to benefit from the built-in automatic differentiation and to enable tight integration with neural networks. For the experiments shown here we used the popular machine learning framework TensorFlow . However, our solver is written in a framework-independent way and also supports PyTorch . Both frameworks allow for a low-level NumPy-like implementation which is well suited for basic PDE building blocks. The following paragraphs outline how we implemented these building blocks and how they can be put together to solve the PDEs shown in Section 6. Staggered grids. Many of the experiments presented in Section 6 use PDEs which track velocities. We adopt the marker-and-cell method , storing densities in a regular grid and velocity in a staggered grid. 
Unlike regular grids, where all components are sampled at the centers of grid cells, staggered grids sample vector fields in a staggered form. Each vector component is sampled in the center of the cell face perpendicular to that direction. This sampling allows for an exact formulation of the divergence of a staggered vector field, decreasing discretization errors in many cases. On the other hand, it complicates operations that combine vector fields with regular fields such as transport or density-dependent forces. We use staggered grids for the velocities in all of our experiments. The buoyancy operation, which applies an upward force proportional to the smoke density, interpolates the density to the staggered grid. For the transport, also called advection, of regular or staggered fields, we interpolate the staggered field to grid cell centers or face centers, respectively. These interpolations are implemented in TensorFlow using basic tensor operations, similar to the differential operators. We implemented all differential operators that act on vector fields to support staggered grids as well. Differential operators. For the experiments outlined in this paper, we have implemented the following differential operators: • Gradient of scalar fields in any number of dimensions, ∇x • Divergence of regular and staggered vector fields in any number of dimensions, ∇ · x • Curl of staggered vector fields in 2D and 3D, ∇ × x • Laplace of scalar fields in any number of dimensions, ∇ 2 x All differential operators are local operations, i.e. they only act on a small neighbourhood of grid points. In the context of machine learning, this suggests implementing them as convolution operations with a fixed kernel. Indeed, all differential operators can be expressed this way and we have implemented some low-dimensional versions of them using this method. This method does, however, scale poorly with the dimensionality of the physical system as the convolutional kernels pick up a large number of zeros, thus wasting computations. Therefore, we express n-dimensional differential operators using basic mathematical tensor operations. Consider the gradient computation in 1D, which in a staggered grid. Each ing value is assuming the is staggered at the lower faces of each grid cell. This operation can be implemented as a 1D convolution with kernel (−1, 1) or as a vector operation which subtracts the array from itself, shifted by one element. In a low-dimensional setting, the convolution operation will be faster as it is highly optimized and can be executed on GPUs with one call. In higher dimensions, however, the vector-based version is faster and more practical because it avoids unnecessary computations and can be coded in a dimension-independent fashion. Both convolutions and basic mathematical operations are supported by all common machine learning frameworks, eliminating the need to implement custom gradient functions. Advection. PDEs containing material derivatives can be solved using an advection step which moves each value of a field f in the direction specified by a vector field v. We implement the advection with semi-Lagrangian step that looks back in time and supports regular and staggered vector fields. To determine the advected value of a grid cell or face x target, first v is interpolated to that point. Then the origin location is determined by following the vector backwards in time, The final value is determined by linearly interpolating between the neighbouring grid cells around x origin. 
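Both building blocks just described can be written as simple array operations; the following is a simplified 1D NumPy illustration with periodic boundaries, not the actual Φ Flow implementation.

```python
import numpy as np

def gradient_1d(f, dx=1.0):
    """Staggered gradient: resulting values live on the lower faces of each cell."""
    return (f - np.roll(f, 1)) / dx            # f[i] - f[i-1], periodic boundary

def advect_semi_lagrangian(f, v, dt, dx=1.0):
    """Semi-Lagrangian advection with back-tracing and linear interpolation."""
    x_target = np.arange(f.size, dtype=float)
    x_origin = x_target - dt * v / dx          # follow the velocity backwards in time
    i0 = np.floor(x_origin).astype(int) % f.size
    i1 = (i0 + 1) % f.size
    w = x_origin - np.floor(x_origin)          # linear interpolation weight
    return (1.0 - w) * f[i0] + w * f[i1]

# Toy usage: transport a bump to the right.
f = np.exp(-0.5 * ((np.arange(64) - 20.0) / 3.0) ** 2)
v = np.full(64, 1.5)
f_next = advect_semi_lagrangian(f, v, dt=1.0)
```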
All of these operations are, again, implemented using basic mathematical operations. Hence, gradients can be provided by the framework. Poisson problems. Incompressible fluids, governed by the Navier-Stokes equations, are subject to the hard constraints ∇ · v = 0 and ∇ × p = 0 where p denotes the pressure. A numerical solver can achieve this by finding a p such that these constraints are satisfied. This step is often called Chorin Projection, or Helmholtz decomposition, and is closely related to the fundamental theorem of vector calculus (von Helmholtz, 1858;). On a grid, solving for p is equal to solving a Poisson problem, i.e. a system of N linear equations, Ap = ∇ · u where N is the total number of grid cells. The (N · N) matrix A is sparse and its entries are located at predictable indices. where we explicitly compute Diffuse[u] = u + ν∇ 2 u with viscosity ν. The advection is semiLagrangian with back-tracing as described above. Solving the Navier-Stokes equations, typically comprises of the following steps: • Transporting the density, ρ ← Advect [ρ, v] • Transporting the velocity, v ← Advect [v, v] • Applying diffusion if the viscosity is ν > 0. • Applying buoyancy force, v ← v − β · ρ with buoyancy direction β • Enforcing incompressibility by solving for the pressure, These steps are executed in this order to advance the simulation forward in time. The staggered execution scheme recursively splits a sequence of length n into smaller sequences, as depicted in Fig. 3b and Fig. 7a for n = 8. With each level of recursion depth, the sequence length Sol is cut in half and twice as many predictions need to be performed. The maximum depth depends on the sequence length t n − t 0 and the time steps ∆t performed by the solver, Therefore, the total number of predictions, equal to the number of OP evaluations, is The prediction refinement scheme performs more predictions, as can be seen in Fig. 7b. To understand the number of OP evaluations, we need to consider the recursive algorithm Reconstruct[u 0, o n, o 2n], listed in Alg 1, that reconstructs a sequence or partial sequence of n frames. For the first invocation, the last parameter o 2n is absent, but for subsequences, that is not necessarily the case. Each invocation performs one OP evaluation if o 2n is absent, otherwise three. By counting the sequences for which this condition is fulfilled, we can compute the total number of network evaluations to be All neural networks used in this work are based on a modified U-net architecture . The U-net represents a typical multi-level convolutional network architecture with skip connections, which we modify by using residual blocks instead of regular convolutions for each level. We slightly modify this basic layout for some experiments. The network used for predicting observations for the fluid example is detailed in Tab. 4. The input to the network are two feature maps containing the current state and the target state. Zero-padding is applied to the input, so that all strided convolutions do not require padding. Next, five residual blocks are executed in order, each decreasing the resolution (1/2, 1/4, 1/8, 1/16, 1/32) while increasing the number of feature maps. Each block performs a convolution with kernel size 2 and stride 2, followed by two residual blocks with kernel size 3 and symmetric padding. Inside each block, the number of feature maps stays constant. 
Three more residual blocks are executed on the lowest resolution of the bowtie structure, after which the decoder part of the network commences, translating features into spatial content. The decoder works as follows: Starting with the lowest resolution, the feature maps are upsampled with linear interpolation. The upsampled maps and the output of the previous block of same resolution are then concatenated. Next, a convolution with 16 filters, a kernel size of 2 and symmetric padding, followed by two more residual blocks, is executed. When the original resolution is reached, only one feature map is produced instead of 16, forming the output of the network. Depending on the dimensionality of the problem, either 1D or 2D convolutions are used. The network used for the indirect control task is modified in the following ways: (i) It produces two output feature maps, representing the velocity (v x, v y). (ii) Four feature maps of the lowest resolution (4x4) are fed into a dense layer producing four output feature maps. These and the other feature maps are concatenated before moving to the upsampling stage. This modification ensures that the receptive field of the network is the whole domain. All networks were implemented in TensorFlow and trained using the ADAM optimizer on an Nvidia GTX 1080 Ti. We use batch sizes ranging from 4 to 16. Supervised training of all networks converges within a few minutes, for which we iteratively decrease the learning rate from 10 −3 to 10 −5. We stop supervised training after a few epochs, comprising between 2000 and 10.000 iterations, as the networks usually converge within a fraction of the first epoch. For training with the differentiable solver, we start with a decreased learning rate of 10 −4 since the backpropagation through long chains is more challenging than training with a supervised loss. Optimization steps are also considerably more expensive since the whole chain needs to be executed, which includes a forward and backward simulation pass. For the fluid examples, an optimization step takes 1-2 seconds to complete for the 2D fluid problems. We let the networks run about 100.000 iterations, which takes between one and two days for the shown examples. In the following paragraphs, we give further details on the experiments of Section 6. For this experiment, we simulate Burger's equation (Eq. 6) on a one-dimensional grid with 32 samples over a course of 32 time steps. The typical behavior of Burger's equation in 1D exhibits shock waves that move in +x or −x direction for u(x) > 0 or u(x) < 0, respectively. When opposing waves clash, they both weaken until only the stronger wave survives and keeps moving. Examples are shown in Figs. 4a and 8a. All 32 samples are observable and controllable, i.e. o(t) = u(t). Thus, we can enforce that all trajectories reach the target state exactly by choosing the force for the last step to be To measure the quality of a solution, it is therefore sufficient to consider the applied force t * t0 |F (t)| dt which is detailed for the tested methods in Table 1. Network training. Both for the CFE chains as well as for the observation prediction models, we use the same network architecture, described in Appendix C. We train the networks on 3600 randomly generated scenes with constant driving forces, F (t) = const. The examples are initialized with two Gaussian waves of random amplitude, size and position, set to clash in the center. 
In each time step, a constant Gaussian force with the same randomized parameters is applied to the system to steer it away from its natural evolution. Constant forces have a larger impact on the evolution than temporally varying forces since the effects of temporally varying forces can partly cancel out over time. The ground truth sequence can therefore be regarded as a near-perfect but not necessarily optimal trajectory. Figs. 4d and 8b display such examples. The same trajectories, without any forces applied, are shown in sub-figures (a) for comparison. We pretrain all networks (OPs or CFE, depending on the method) with a supervised observation loss, The ing trajectory after supervised training for the CFE chain is shown in Figure 4b and Figure 8c. For the observation prediction models, the trajectories are shown in Figure 4c and Figure 8e. After pretraining, we train all OP networks end-to-end with our objective loss function (see Eq. 4), making use of the differentiable solver. For this experiment, we choose the mean squared difference for the observation loss function: We test both the staggered execution scheme and the prediction refinement scheme, shown in Figure 8f and Figure 8g. Results. Table 1 compares the ing forces inferred by different methods. The are averaged over a set of 100 examples from the test set which is sampled from the same distribution as the training set. The CFE chains both fail to converge to o *. While the differentiable physics version manages to produce a u n−1 that resembles o *, the supervised version completely deviates from an optimal trajectory. This shows that learning to infer the control force F (t i) only from u(t i), o * and t is very difficult as the model needs to learn to anticipate the physical behavior over any length of time. Compared to the CFE chains, the hierarchical models require much less force and learn to converge towards o *. Still, the supervised training applies much more force to the system than required, the reasons for which become obvious when inspecting Figure 4b and Fig. 8e. While each state seems close to the ground truth individually, the control oscillates undesirably, requiring counter-actions later in time. The methods using the differentiable solver significantly outperform their supervised counterparts and exhibit an excellent performance that is very close the ground truth solutions in terms of required forces. On many examples, they even reach the target state with less force than was applied by the ground truth simulation. This would not be possible with the supervised loss alone, but by having access to the gradient-based feedback from the differentiable solver, they can learn to find more efficient trajectories with respect to the objective loss. This allows the networks to learn applying forces in different locations that make the system approach the target state with less force. Figure 4e and Fig.8f,g show examples of this. The ground truth applies the same force in each step, thereby continuously increasing the first sample u(x = 0), and the supervised method tries to imitate this behavior. The governing equation then slowly propagates u(x = 0) in positive x direction since u(x = 0) > 0. The learning methods that use a differentiable solver make use of this fact by applying much more force F (x = 0) > 0 at this point than the ground truth, even overshooting the target state. Later, when this value had time to propagate to the right, the model corrects this overshoot by applying a negative force F (x = 0) < 0. 
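For reference, a single explicit finite-difference step of the controlled viscous Burger's equation (the standard form consistent with Eq. 1) on such a 32-sample grid could look as follows; this plain NumPy sketch, including the clashing Gaussian initialization and a constant Gaussian force, is only illustrative and is not the solver used in the experiments.

```python
import numpy as np

def burgers_step(u, F, dt=0.03, dx=1.0 / 32, nu=0.003):
    """One explicit step of u_t = -u u_x + nu u_xx + F with periodic boundaries
    (illustrative parameters)."""
    u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)           # central difference
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2   # second derivative
    return u + dt * (-u * u_x + nu * u_xx + F)

# Two clashing Gaussian waves, as in the training data description above:
x = np.linspace(0, 1, 32, endpoint=False)
u = 0.8 * np.exp(-((x - 0.3) / 0.08) ** 2) - 0.6 * np.exp(-((x - 0.7) / 0.08) ** 2)
F = 0.2 * np.exp(-((x - 0.5) / 0.1) ** 2)      # a constant-in-time Gaussian force
for _ in range(32):
    u = burgers_step(u, F)
```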
Using this trick, these models reach the target state with up to 13% less force than the ground truth on the sequence shown in Figure 4. Figure 9 analyzes the variance of inferred forces. The supervised methods often fail to properly converge to the target state, ing in large forces in the last step, visible as a second peak in the supervised CFE chain. The formulation of the loss (Eq. 3) suppresses force spikes. In the solutions inferred by our method, the likelihood of large forces falls off multi-exponentially as a consequence. This means that large forces are exponentially rare, which is the expected behavior given the L2 regularizer from Eq. 3. We also compare our to a single-shooting baseline which is able to find near-optimal solutions at the cost of higher computation times. The classic optimization uses the ADAM optimizer with a learning rate of 0.01 and converges after around 300 iterations. To reach the quality of the staggered prediction scheme, it requires only around 60 iterations. This quick convergence can be explained by the relatively simple setup that is dominated by linear effects. Therefore, the gradients are stable, even when propagated through many frames. The computation times, shown in Tab. 1, were recorded on a single GTX 1080 Ti. We run 100 examples in parallel to reduce the relative overhead caused by GPU instruction queuing. For the network-based methods, we average the inference time over 100 runs. We perform 10 runs for the optimization methods. The incompressible Navier-Stokes equations model dynamics of fluids such as water or air, which can develop highly complex and chaotic behavior. The phenomenon of turbulence is generally seen as one of the few remaining fundamental and unsolved problems of classical physics. The challenging nature of the equations indicates that typically a very significant computational effort and a large number of degrees of freedom are required to numerically compute solutions. Here, we target an incompressible two-dimensional gas with viscosity ν, described by the Navier-Stokes equations for the velocity field v. We assume a constant fluid density throughout the simulation, setting ρ f = const. ≡ 1. The gas velocity is controllable and, according to Eq. 1, we set subject to the hard constraints ∇ · v = 0 and ∇ × p = 0. For our experiments, we target fluids with low viscosities, such as air, and set ν = 0 in the equation above as the transport steps implicitly apply numerical diffusion that is on average higher than the targeted one. For fluids with a larger viscosity, the Poisson solver outlined above for computing p could be used to implicitly solve a vector-valued diffusion equation for v. However, incorporating a significant amount of viscosity would make the control problem easier to solve for most cases, as viscosity suppresses small scale structures in the motion. Hence, in order to create a challenging environment for training our networks, we have but a minimal amount of diffusion in the physical model. In addition to the velocity field v, we consider a smoke density distribution ρ which moves passively with the fluid. The evolution of ρ is described by the equation ∂ρ/∂t = −v·∇ρ. We treat the velocity field as hidden from observation, letting only the smoke density be observed, i.e. o(t) = ρ(t). We stack the two fields as u = (v, ρ) to write the system as one PDE, compatible with Eq. 1. For the OP and CFE networks, we use the 2D network architecture described in Appendix C. 
Instead of directly generating the velocity update in the CFE network for this problem setup, we make use of stream functions . Hence, the CFE network outputs a vector potential Φ of which the curl ∇ × Φ is used as a velocity update. This setup numerically simplifies the incompressibility condition of the Navier-Stokes equations but retains the same number of effective control parameters. Datasets. We generate training and test datasets for two distinct tasks: flow reconstruction and shape transition. Both datasets have a resolution of 128 × 128 with the velocity fields being sampled in staggered form (see Appendix A). This in over 16.000 effective continuous control parameters that make up the control force F (t i) for each step i. The flow reconstruction dataset is comprised of ground-truth sequences where the initial states (ρ 0, v 0) are randomly sampled and then simulated for 64 time steps. The ing smoke density is then taken to be the target state, o * ≡ ρ * = ρ sim (t 64). Since we use fully convolutional networks for both CFE and OPs, the open domain boundary must be handled carefully. If smoke was lost from the simulation, because it crossed the outer boundary, a neural network would see the smoke simply vanish unless it was explicitly given the domain size as input. To avoid these problems, we run the simulation backwards in time and remove all smoke from ρ 0 that left the simulation domain. For the shape transition dataset, we sample initial and target states ρ 0 and ρ * by randomly choosing a shape from a library containing ten basic geometric shapes and placing it at a random location inside the domain. These can then be used for reconstructing sequences of any length n. For the on shape transition presented in section 6, we choose n = 16 because all interesting behavior can be seen within that time frame. Due to the linear interpolation used in the advection step (see Appendix A), both ρ and v smear out over time. This numerical limitation makes it impossible to match target states exactly in this task as the density will become blurry over time. While we could generate ground-truth sequences using a classical optimizer, we refrain from doing so because (i) these trajectories are not guaranteed to be optimal and (ii) we want to see how well the model can learn from scratch, without initialization. Training. We pretrain the CFE on the natural flow dataset with a supervised loss, where v * (t) denotes the velocity from ground truth sequences. This supervised training alone constitutes a good loss for the CFE as it only needs to consider single-step intervals ∆t while the OPs handle longer sequences. Nevertheless, we found that using the differentiable solver with an observation loss, further improves the accuracy of the inferred force without sacrificing the ground truth match. Here B r (x) denotes a blur function with a kernel of the form 1 1+x/r. The blur helps make the gradients smoother and creates non-zero gradients in places where prediction and target do not overlap. During training, we start with a large radius of r = 16 ∆x for B r and successively decrease it to r = 2 ∆x. We choose α such that L F and L * o are of the same magnitude when the force loss spikes (see Fig. 15). After the CFE is trained, we successively train the OPs starting with the smallest time scale. For the OPs, we train different models for natural flow reconstruction and shape transition, both based on the same CFE model. 
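Returning briefly to the stream-function parameterization mentioned at the beginning of this subsection: taking the discrete curl of a scalar potential yields a velocity field that is divergence-free up to floating-point precision, which is what makes this output parameterization attractive for the CFE. The NumPy sketch below uses central differences on a periodic grid and is a simplification of the solver's staggered discretization.

```python
import numpy as np

def velocity_from_stream_function(psi, dx=1.0):
    """2D curl of a scalar stream function: v = (d psi / dy, -d psi / dx)."""
    dpsi_dy = (np.roll(psi, -1, axis=0) - np.roll(psi, 1, axis=0)) / (2 * dx)
    dpsi_dx = (np.roll(psi, -1, axis=1) - np.roll(psi, 1, axis=1)) / (2 * dx)
    return dpsi_dy, -dpsi_dx          # (v_x, v_y)

def divergence(vx, vy, dx=1.0):
    return ((np.roll(vx, -1, axis=1) - np.roll(vx, 1, axis=1))
            + (np.roll(vy, -1, axis=0) - np.roll(vy, 1, axis=0))) / (2 * dx)

psi = np.random.default_rng(2).normal(size=(64, 64))
vx, vy = velocity_from_stream_function(psi)
print(np.abs(divergence(vx, vy)).max())   # ~0 (machine precision) on a periodic grid
```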
We pre-train all OPs independently with a supervised observation loss before jointly training them end-to-end with objective loss function (Eq. 4) and the differentiable solver to find the optimal trajectory. We use the OPs trained with the staggered execution scheme as initialization for the prediction refinement scheme. The complexity of solving the Navier-Stokes equations over many time steps in this example requires such a fully supervised initialization step. Without it, this setting is so non-linear that the learning process does not converge to a good solution. Hence, it illustrates the importance of combining supervised and unsupervised (requiring differentiable physics) training for challenging learning objectives. A comparison of the different losses is shown in Fig. 10. The predictions, shown in the top rows of each subfigure, illustrate the differences between the three methods. The supervised predictions, especially the long-term predictions (central images), are blurry because the network learns to average over all ground truth sequences that match the given initial and target state. The differentiable physics solver largely resolves this issue. The predictions are much sharper but the long-term predictions still do not account for short-term deviations. This can be seen in the central prediction of Fig. 10b which shows hints of the target state o *, despite the fact that the actual reconstruction u cannot reach that state at that time. The refined prediction, shown in subfigure (c), is closer to u since it is conditioned on the previous reconstructed state. In the training data, we let the network transform one shape into another at a random location. The differentiable solver and the long-term intuition provided by our execution scheme make it possible to train networks that can infer accurate sequences of control forces. In most cases, the target shapes are closely matched. As our networks infer sequences over time, we refer readers to the supplemental material (https://ge.in.tum.de/publications/2020-iclr-holl), which contains animations of additional sequences. Generalization to multiple shapes. Splitting the reconstruction task into prediction and correction has the additional benefit of having full access to the intermediate predictions o p. These model real states of the system so classical processing or filter operations can be applied to them as well. We demonstrate this by generalizing our method to m > 1 shapes that evolve within the same domain. Figure 11 shows an example of two weakly-interacting shape transitions. We implement this by executing the OPs independently for each transition k ∈ {1, 2, ...m} while inferring the control force F (t) on the joint system. This is achieved by adding the predictions of the smoke density ρ before passing it to the CFE network,õ p = m k=1 o p k. The ing force is then applied to all sequences individually so that smoke from one transition does not end up in another target state. Using this scheme, we can define start and end positions for arbitrarily many shapes and let them evolve together. Evaluation of force strengths The average force strengths are detailed in Tab. 2 while Figure 12 gives a more detailed analysis of the force strengths. As expected from using a L2 regularizer on the force, large values are exponentially rare in the solutions inferred from our test set. None of the hierarchical execution schemes exhibit large outliers. 
The prediction refinement requires the least amount of force to match the target, slightly ahead of the staggered execution trained with the same loss. The supervised training produces trajectories with reduced continuity that in larger forces being applied. As a fourth test environment, we target a case with increased complexity, where the network does not have the means anymore to directly control the full fluid volume. Instead, the network can only apply forces in the peripheral regions, with a total of more than 5000 control parameters per step. The obstacles prevent fluid from passing through them and the domain is enclosed with solid boundaries from the left, right and bottom. This leads to additional hard constraints and interplays between constraints in the physical model, and as such provides an interesting and challenging test case for our method. The domain has three target regions (buckets) separated by walls at the top of the domain, into which a volume of smoke should be transported from any position in the center In this case the control is indirect since the smoke density lies outside the controlled area at all times. Only the incompressibility condition allows the network to influence the velocity outside the controlled area. This forces the model to consider the global context and synchronize a large number of parameters to create a desired flow field. The requirement of complex synchronized force fields makes generating reliable training data difficult, as manual or random sampling is unlikely to produce a directed velocity field in the center. We therefore skip the pretraining process and directly train the CFE using the differentiable solver, while the OP networks are trained as before with r = 2 ∆x. To evaluate how well the learning method performs, we measure how much of the smoke density ends up inside the buckets and how much force was applied in total. For reference, we replace the observation predictions with an algorithm that moves the smoke towards the bucket in a straight line. Averaged over 100 examples from the test set, the ing model manages to put 89% ± 2.6% of the smoke into the target bucket. In contrast, the model trained with our full algorithm moves 99.22% ± 0.15% of the smoke into the target buckets while requiring 19.1% ± 1.0% less force. We also compare our method to an iterative optimization which directly optimizes the control velocities. We use the ADAM optimizer with a learning rate of 0.1. Despite the highly non-linear setup, the gradients are stable enough to quickly let the smoke flow in the right direction. Fig. 14 shows how the trajectories improve during optimization. After around 60 optimization steps, the smoke distribution starts reaching the target bucket in some examples. Over the next 600 iterations, it converges to a a configuration in which 82.1 ± 7.3 of the smoke ends up in the correct bucket. We compare the sequences inferred by our trained models to classical shooting optimizations using our differentiable physics solver to directly optimize F (t) with the objective loss L (Eq. 4) for a single input. We make use of stream functions , as in the second experiment, to ensure the incompressibility condition is fulfilled. For this comparison, the velocities of all steps are initialized with a normal distribution with µ = 0 and σ = 0.01 so that the initial trajectory does not significantly alter the initial state, u(t) ≈ u(t 0). We first show how a simple single-shooting algorithm fares with our NavierStokes setup. 
When solving the resulting optimization problem using single-shooting, strong artifacts in the reconstructions can be observed, as shown in Figure 17a. This undesirable behavior stems from the nonlinearity of the Navier-Stokes equations, which causes the gradients ∆u_0 to become noisy and unreliable when they are recurrently backpropagated through many time steps. Unsurprisingly, the single-shooting optimizer converges to an undesirable local minimum. As single-shooting is well known to have problems with non-trivial problem settings, we employ a multi-scale shooting (MS) method. This solver first computes the trajectory on a coarsely discretized version of the problem before iteratively refining the discretization. For the first resolution, we use 1/16 of the original width and height, which both reduces the number of control parameters and reduces the nonlinear effects from the physics model. By employing an exponential learning rate decay, this multi-scale optimization converges reliably for all examples. We use the ADAM optimizer to compute the control variable updates from the gradients of the differentiable Navier-Stokes solver. An averaged set of representative convergence curves for this setup is shown in Figure 15. The objective loss (Eq. 4) is shown in its decomposed state as the sum of the observation loss L_o*, shown in Figure 15a, and the force loss L_F, shown in Figure 15b. Due to the initialization of all velocities with small values, the force loss starts out small. For the first 1000 iteration steps, L_o* dominates, which causes the system to move towards the target state o*. This trajectory is not ideal, however, as more force than necessary is applied. Once observation loss and force loss are of the same magnitude, the optimization refines the trajectory to use less force. We found that the trajectories predicted by our neural-network-based method correspond to performing about 1500 steps with the MS optimization while requiring less tuning. Reconstructions of the same example are compared in Figure 17. Performing the MS optimization up to this point took 131 seconds on a GTX 1080 Ti graphics card for a single 16-frame sequence, while the network inference ran for 0.5 seconds. For longer sequences, this gap grows further because the network inference time scales with O(n). This could only be matched if the number of iterations for the MS optimization scaled with O(1), which is not the case for most problems. These tests indicate that our model has successfully internalized the behavior of a large class of physical systems, and can exert the right amount of force to reach the intended goal. The large number of iterations required for the single-case shooting optimization highlights the complexity of the individual solutions. Interestingly, the network also benefits from the much more difficult task of learning a whole manifold of solutions: comparing solutions with similar observation loss for the MS algorithm and our network, the former often finds solutions that are unintuitive and contain noticeable detours, e.g., not taking a straight path for the density matching examples of Fig. 5. In such situations, our network benefits from having to represent the solution manifold, instead of aiming for single-task optimizations. As the solutions are changing relatively smoothly, the complex task effectively regularizes the inference of new solutions and gives the network a more global view.
Instead, the shooting optimizations have to rely purely on local gradients for single-shooting or on manually crafted multi-resolution schemes for MS. Our method can also be employed to support the MS optimization by initializing it with the velocities inferred by the networks. In this case, shown in Figure 16, both L_o* and L_F decrease right from the beginning, similar to the behavior in Figure 15 from iteration 1500 on. The reconstructed trajectory from the neural-network-based method is so close to the optimum that the multi-resolution approach described above is not necessary. In Fig. 18, we provide a visual overview of a subset of the sequences that can be found in the supplemental materials. It contains 16 randomly selected reconstructions for each of the natural flow, the shape transitions, and the indirect control examples. In addition, the supplemental material, available at https://ge.in.tum.de/publications/2020-iclr-holl, highlights the differences between unsupervised, staggered, and refined versions of our approach.
We train a combination of neural networks to predict optimal trajectories for complex physical systems.
The ability of overparameterized deep networks to generalize well has been linked to the fact that stochastic gradient descent (SGD) finds solutions that lie in flat, wide minima in the training loss -- minima where the output of the network is resilient to small random noise added to its parameters. So far this observation has been used to provide generalization guarantees only for neural networks whose parameters are either \textit{stochastic} or \textit{compressed}. In this work, we present a general PAC-Bayesian framework that leverages this observation to provide a bound on the original network learned -- a network that is deterministic and uncompressed. What enables us to do this is a key novelty in our approach: our framework allows us to show that if on training data, the interactions between the weight matrices satisfy certain conditions that imply a wide training loss minimum, these conditions themselves {\em generalize} to the interactions between the matrices on test data, thereby implying a wide test loss minimum. We then apply our general framework in a setup where we assume that the pre-activation values of the network are not too small (although we assume this only on the training data). In this setup, we provide a generalization guarantee for the original (deterministic, uncompressed) network, that does not scale with product of the spectral norms of the weight matrices -- a guarantee that would not have been possible with prior approaches. Modern deep neural networks contain millions of parameters and are trained on relatively few samples. Conventional wisdom in machine learning suggests that such models should massively overfit on the training data, as these models have the capacity to memorize even a randomly labeled dataset of similar size . Yet these models have achieved state-ofthe-art generalization error on many real-world tasks. This observation has spurred an active line of research (; BID2 BID11) that has tried to understand what properties are possessed by stochastic gradient descent (SGD) training of deep networks that allows these networks to generalize well. One particularly promising line of work in this area (; BID0 has been bounds that utilize the noise-resilience of deep networks on training data i.e., how much the training loss of the network changes with noise injected into the parameters, or roughly, how wide is the training loss minimum. While these have yielded generalization bounds that do not have a severe exponential dependence on depth (unlike other bounds that grow with the product of spectral norms of the weight matrices), these bounds are quite limited: they either apply to a stochastic version of the classifier (where the parameters are drawn from a distribution) or a compressed version of the classifier (where the parameters are modified and represented using fewer bits).In this paper, we revisit the PAC-Bayesian analysis of deep networks in Neyshabur et al. (2017; and provide a general framework that allows one to use noise-resilience of the deep network on training data to provide a bound on the original deterministic and uncompressed network. We achieve this by arguing that if on the training data, the interaction between the 'activated weight matrices' (weight matrices where the weights incoming from/outgoing to inactive units are zeroed out) satisfy certain conditions which in a wide training loss minimum, these conditions themselves generalize to the weight matrix interactions on the test data. 
After presenting this general PAC-Bayesian framework, we specialize it to the case of deep ReLU networks, showing that we can provide a generalization bound that accomplishes two goals simultaneously: i) it applies to the original network and ii) it does not scale exponentially with depth in terms of the products of the spectral norms of the weight matrices; instead our bound scales with more meaningful terms that capture the interactions between the weight matrices and do not have such a severe dependence on depth in practice. We note that all but one of these terms are indeed quite small on networks in practice. However, one particularly (empirically) large term that we use is the reciprocal of the magnitude of the network pre-activations on the training data (and so our bound would be small only in the scenario where the pre-activations are not too small). We emphasize that this drawback is more of a limitation in how we characterize noise-resilience through the specific conditions we chose for the ReLU network, rather than a drawback in our PAC-Bayesian framework itself. Our hope is that, since our technique is quite general and flexible, by carefully identifying the right set of conditions, in the future, one might be able to derive a similar generalization guarantee that is smaller in practice. To the best of our knowledge, our approach of generalizing noise-resilience of deep networks from training data to test data in order to derive a bound on the original network that does not scale with products of spectral norms, has neither been considered nor accomplished so far, even in limited situations. One of the most important aspects of the generalization puzzle that has been studied is that of the flatness/width of the training loss at the minimum found by SGD. The general understanding is that flatter minima are correlated with better generalization behavior, and this should somehow help explain the generalization behavior BID7 BID6 BID8. Flatness of the training loss minimum is also correlated with the observation that on training data, adding noise to the parameters of the network only in little change in the output of the network -or in other words, the network is noise-resilient. Deep networks are known to be similarly resilient to noise injected into the inputs ; but note that our theoretical analysis relies on resilience to parameter perturbations. While some progress has been made in understanding the convergence and generalization behavior of SGD training of simple models like two-layered hidden neural networks under simple data distributions (; ; BID2 BID11, all known generalization guarantees for SGD on deeper networks -through analyses that do not use noise-resilience properties of the networks -have strong exponential dependence on depth. In particular, these bounds scale either with the product of the spectral norms of the weight matrices BID0 BID1 or their Frobenius norms BID4 . In practice, the weight matrices have a spectral norm that is as large as 2 or 3, and an even larger Frobenius norm that scales with √ H where H is the width of the network i.e., maximum number of hidden units per layer.1 Thus, the generalization bound scales as say, 2 D or H D 2, where D is the depth of the network. At a high level, the reason these bounds suffer from such an exponential dependence on depth is that they effectively perform a worst case approximation of how the weight matrices interact with each other. 
For example, the product of the spectral norms arises from a naive approximation of the Lipschitz constant of the neural network, which would hold only when the singular values of the 1 To understand why these values are of this order in magnitude, consider the initial matrix that is randomly initialized with independent entries with variance 1 √ H. It can be shown that the spectral norm of this matrix, with high probability, lies near its expected value, near 2 and the Frobenius norm near its expected value which is √ H. Since SGD is observed not to move too far away from the initialization regardless of H , these values are more or less preserved for the final weight matrices.weight matrices all align with each other. However, in practice, for most inputs to the network, the interactions between the activated weight matrices are not as adverse. By using noise-resilience of the networks, prior approaches BID0 ) have been able to derive bounds that replace the above worst-case approximation with smaller terms that realistically capture these interactions. However, these works are limited in critical ways. BID0 use noise-resilience of the network to modify and "compress" the parameter representation of the network, and derive a generalization bound on the compressed network. While this bound enjoys a better dependence on depth because its applies to a compressed network, the main drawback of this bound is that it does not apply on the original network. On the other hand, take advantage of noise-resilience on training data by incorporating it within a PAC-Bayesian generalization bound BID14. However, their final guarantee is only a bound on the expected test loss of a stochastic network. In this work, we revisit the idea in , by pursuing the PAC-Bayesian framework BID14 to answer this question. The standard PAC-Bayesian framework provides generalization bounds for the expected loss of a stochastic classifier, where the stochasticity typically corresponds to Gaussian noise injected into the parameters output by the learning algorithm. However, if the classifier is noise-resilient on both training and test data, one could extend the PAC-Bayesian bound to a standard generalization guarantee on the deterministic classifier. Other works have used PAC-Bayesian bounds in different ways in the context of neural networks. BID9; BID3 optimize the stochasticity and/or the weights of the network in order to numerically compute good (i.e., non-vacuous) generalization bounds on the stochastic network. BID0 derive generalization bounds on the original, deterministic network by working from the PAC-Bayesian bound on the stochastic network. However, as stated earlier, their work does not make use of noise resilience in the networks learned by SGD.OUR CONTRIBUTIONS The key contribution in our work is a general PAC-Bayesian framework for deriving generalization bounds while leveraging the noise resilience of a deep network. While our approach is applied to deep networks, we note that it is general enough to be applied to other classifiers. In our framework, we consider a set of conditions that when satisfied by the network, makes the output of the network noise-resilient at a particular input datapoint. For example, these conditions could characterize the interactions between the activated weight matrices at a particular input. 
To provide a generalization guarantee, we assume that the learning algorithm has found weights such that these conditions hold for the weight interactions in the network on training data (which effectively implies a wide training loss minimum). Then, as a key step, we generalize these conditions over to the weight interactions on test data (which effectively implies a wide test loss minimum) 2. Thus, with the guarantee that the classifier is noise-resilient both on training and test data, we derive a generalization bound on the test loss of the original network. Finally, we apply our framework to a specific set up of ReLU based feedforward networks. In particular, we first instantiate the above abstract framework with a set of specific conditions, and then use the above framework to derive a bound on the original network. While very similar conditions have already been identified in prior work BID0 ) (see Appendix G for an extensive discussion of this), our contribution here is in showing how these conditions generalize from training to test data. Crucially, like these works, our bound does not have severe exponential dependence on depth in terms of products of spectral norms. We note that in reality, all but one of our conditions on the network do hold on training data as necessitated by the framework. The strong, non-realistic condition we make is that the pre-activation values of the network are sufficiently large, although only on training data; however, in practice a small proportion of the pre-activation values can be arbitrarily small. Our generalization bound scales inversely with the smallest absolute value of the pre-activations on the training data, and hence in practice, our bound would be large. Intuitively, we make this assumption to ensure that under sufficiently small parameter perturbations, the activation states of the units are guaranteed not to flip. It is worth noting that BID0 too require similar, but more realistic assumptions about pre-activation values that effectively assume only a small proportion of units flip under noise. However, even under our stronger condition that no such units exist, it is not apparent how these approaches would yield a similar bound on the deterministic, uncompressed network without generalizing their conditions to test data. We hope that in the future our work could be developed further to accommodate the more realistic conditions from BID0. In this section, we present our general PAC-Bayesian framework that uses noise-resilience of the network to convert a PAC-Bayesian generalization bound on the stochastic classifier to a generalization bound on the deterministic classifier. NOTATION. Let KL(⋅ ⋅) denote the KL-divergence. Let ⋅, ⋅ ∞ denote the 2 norm and the ∞ norms of a vector, respectively. Let ⋅ 2, ⋅ F, ⋅ 2,∞ denote the spectral norm, Frobenius norm and maximum row 2 norm of a matrix, respectively. Consider a K-class learning task where the labeled datapoints (x, y) are drawn from an underlying distribution D over X × {1, 2, ⋯, K} where X ∈ R N. We consider a classifier parametrized by weights W. For a given input x and class k, we denote the output of the classifier by f (x; W) [k]. In our PAC-Bayesian analysis, we will use U ∼ N (0, σ 2) to denote parameters whose entries are sampled independently from a Gaussian, and W + U to denote the entrywise addition of the two sets of parameters. We use DISPLAYFORM0 Given a training set S of m samples, we let (x, y) ∼ S to denote uniform sampling from the set. 
Finally, for any γ > 0, let L γ (f (x; W), y) denote a margin-based loss such that the loss is 0 only when f (x; W) [y] ≥ max j≠y f (x; W) [j] + γ, and 1 otherwise. Note that L 0 corresponds to 0-1 error. See Appendix A for more notations. TRADITIONAL PAC-BAYESIAN BOUNDS. The PAC-Bayesian framework BID14 b) allows us to derive generalization bounds for a stochastic classifier. Specifically, letW be a random variable in the parameter space whose distribution is learned based on training data S. Let P be a prior distribution in the parameter space chosen independent of the training data. The PAC-Bayesian framework yields the following generalization bound on the 0-1 error of the stochastic classifier that holds with probability 1 − δ over the draw of the training set S of m samples 3: DISPLAYFORM1 Typically, and in the rest of this discussion,W is a Gaussian with covariance σ 2 I for some σ > 0 centered at the weights W learned based on the training data. Furthermore, we will set P to be a Gaussian with covariance σ 2 I centered at the random initialization of the network like in BID3, instead of at the origin, like in BID0. This is because the ing KL-divergence -which depends on the distance between the means of the prior and the posterior -is known to be smaller, and to save a √ H factor in the bound . To extend the above PAC-Bayesian bound to a standard generalization bound on a deterministic classifier W, we need to replace the training and the test loss of the stochastic classifier with that of the original, deterministic classifier. However, in doing so, we will have to introduce extra terms in the upper bound to account for the perturbation suffered by the train and test loss under the Gaussian perturbation of the parameters. To tightly bound these two terms, we need that the network is noise-resilient on training and test data respectively. Our hope is that if the learning algorithm has found weights such that the network is noise-resilient on the training data, we can then generalize this noise-resilience over to test data as well, allowing us to better bound the excess terms. DISPLAYFORM0 For convenience, we also define an additional R + 1th set to be the singleton set containing the margin of the classifier on the input: f (x; W) [y] − max j≠y f (x; W) [j]. Note that if this term is positive (negative) then the classification is (in)correct. We will also denote the constant ∆ ⋆ R+1,1 as γ class.ORDERING OF THE SETS OF PROPERTIES We now impose a crucial constraint on how these sets of properties depend on each other. Roughly speaking, we want that for a given input, if the first r − 1 sets of properties approximately satisfy the condition in Equation 1, then the properties in the rth set are noise-resilient i.e., under random parameter perturbations, these properties do not suffer much perturbation. This kind of constraint would naturally hold for deep networks if we have chosen the properties carefully e.g., we will show that, for any given input, the perturbation in the pre-activation values of the dth layer is small as long as the absolute pre-activation values in the layers below d − 1 are large, and a few other norm-bounds on the lower layer weights are satisfied. We formalize the above requirement by defining expressions ∆ r,l (σ) that bound the perturbation in the properties ρ r,l, in terms of the variance σ 2 of the parameter perturbations. For any r ≤ R + 1 and for any (x, y), our framework requires the following to hold: DISPLAYFORM1 Let us unpack the above constraint. 
First, although the above constraint must hold for all inputs (x, y), it effectively applies only to those inputs that satisfy the pre-condition of the if-then statement: namely, it applies only to inputs (x, y) that approximately satisfy the first r − 1 conditions in DISPLAYFORM2. Next, we discuss the second part of the above if-then statement which specifies a probability term that is required to be small for all such inputs. In words, the first event within the probability term above is the event that for a given random perturbation U, the properties involved in the rth condition suffer a large perturbation. The second is the event that the properties involved in the first r − 1 conditions do not suffer much perturbation; but, given that these r − 1 conditions already hold approximately, this second event implies that these conditions are still preserved approximately under perturbation. In summary, our constraint requires the following: for any input on which the first r − 1 conditions hold, there should be very few parameter perturbations that significantly perturb the rth set of properties while preserving the first r − 1 conditions. When we instantiate the framework, we have to derive closed form expressions for the perturbation bounds ∆ r,l (σ) (in terms of only σ and the constants ∆ ⋆ r,l). As we will see, for ReLU networks, we will choose the properties in a way that this constraint naturally falls into place in a way that the perturbation bounds ∆ r,l (σ) do not grow with the product of spectral norms (Lemma E.1).THEOREM STATEMENT In this setup, we have the following'margin-based' generalization guarantee on the original network. That is, we bound the 0-1 test error of the network by a margin-based error on the training data. Our generalization guarantee, which scales linearly with the number of conditions R, holds under the setting that the training algorithm always finds weights such that on the training data, the conditions in Equation 1 is satisfied for all r = 1, ⋯, R. Theorem 3.1. Let σ * be the maximum standard deviation of the Gaussian parameter perturbation such that the constraint in Equation 2 holds with ∆ r,l (σ ⋆) ≤ ∆ ⋆ r,l ∀r ≤ R + 1 and ∀l. Then, for any δ > 0, with probability 1 − δ over the draw of samples S from D m, for any W we have that, if W satisfies the conditions in Equation 1 for all r ≤ R and for all training examples (x, y) ∈ S, then DISPLAYFORM3 The crux of our proof (in Appendix D) lies in generalizing the conditions of Equation 1 satisfied on the training data to test data one after the other, by proving that they are noise-resilient on both training and test data. Crucially, after we generalize the first r − 1 conditions from training data to test data (i.e., on most test and training data, the r − 1 conditions are satisfied), we will have from Equation 2 that the rth set of properties are noise-resilient on both training and test data. Using the noise-resilience of the rth set of properties on test/train data, we can generalize even the rth condition to test data. We emphasize a key, fundamental tool that we present in Theorem C.1 to convert a generic PACBayesian bound on a stochastic classifier, to a generalization bound on the deterministic classifier. Our technique is at a high level similar to approaches in BID12 BID13. In Section C.1, we argue how this technique is more powerful than other approaches in Neyshabur et al. FORMULA2; BID10; BID5 in leveraging the noiseresilience of a classifier. 
The high level argument is that, to convert the PAC-Bayesian bound, these latter works relied on a looser output perturbation bound, one that holds on all possible inputs, with high probability over all perturbations i.e., a bound on max x f (x; W) − f (x; W + U) ∞ w.h.p over draws of U. In contrast, our technique relies on a subtly different but significantly tighter bound: a bound on the output perturbation that holds with high probability given an input i.e., a bound on f (x; W) − f (x; W + U) ∞ w.h.p over draws of U for each x. When we do instantiate our framework as in the next section, this subtle difference is critical in being able to bound the output perturbation without suffering from a factor proportional to the product of the spectral norms of the weight matrices (which is the case in Neyshabur et al. FORMULA2). NOTATION. In this section, we apply our framework to feedforward fully connected ReLU networks of depth D (we care about D > 2) and width H (which we will assume is larger than the input dimensionality N, to simplify our proofs) and derive a generalization bound on the original network that does not scale with the product of spectral norms of the weight matrices. Let φ (⋅) denote the ReLU activation. We consider a network parameterized by DISPLAYFORM0 We denote the value of the hth hidden unit on the dth layer before and after the activation by DISPLAYFORM1 to be the Jacobian of the pre-activations of layer d with respect to the pre-activations of layer d ′ for d ′ ≤ d (each row in this Jacobian corresponds to a unit in layer d). In short, we will call this, Jacobian d d ′. Let Z denote the random initialization of the network. Informally, we consider a setting where the learning algorithm satisfies the following conditions on the training data that make it noise-resilient on training data: a) the 2 norm of the hidden layers are all small, b) the pre-activation values are all sufficiently large in magnitude, c) the Jacobian of any layer with respect to a lower layer, has rows with a small 2 norm, and has a small spectral norm. We cast these conditions in the form of Equation 1 by appropriately defining the properties ρ's and the margins ∆ ⋆'s in the general framework. We note that these properties are quite similar to those already explored in BID0; Neyshabur et al. FORMULA2; we provide more intuition about these properties, and how we cast them in our framework in Appendix E.1.Having defined these properties, we first prove in Lemma E.1 in Appendix E a guarantee equivalent to the abstract inequality in Equation 2. Essentially, we show that under random perturbations of the parameters, the perturbation in the output of the network and the perturbation in the input-dependent properties involved in (a), (b), (c) themselves can all be bounded in terms of each other. Crucially, these perturbation bounds do not grow with the spectral norms of the network. Having instantiated the framework as above, we then instantiate the bound provided by the framework. Our generalization bound scales with the bounds on the properties in (a) and (c) above as satisfied on the training data, and with the reciprocal of the property in (b) i.e., the smallest absolute value of the pre-activations on the training data. Additionally, our bound has an explicit dependence on the depth of the network, which arises from the fact that we generalize R = O(D) conditions. Most importantly, our bound does not have a dependence on the product of the spectral norms of the weight matrices. Theorem 4.1. 
(shorter version; see Appendix F for the complete statement) For any margin γ class > 0, and any δ > 0, with probability 1 − δ over the draw of samples from D m, for any W, we have that: DISPLAYFORM2 (an upper bound on the spectral norm of the Jacobian for each layer).In FIG0, we show how the terms in the bound vary for networks of varying depth with a small width of H = 40 on the MNIST dataset. We observe that B layer-2, B output, B jac-row-2, B jac-spec typically lie in the range of and scale with depth as ∝ 1.57 D. In contrast, the equivalent term from Neyshabur et al. FORMULA2 consisting of the product of spectral norms can be as large as 10 3 or 10 5 and scale with D more severely as 2.15 D.The bottleneck in our bound is B preact, which scales inversely with the magnitude of the smallest absolute pre-activation value of the network. In practice, this term can be arbitrarily large, even though it does not depend on the product of spectral norms/depth. This is because some hidden units can have arbitrarily small absolute pre-activation values -although this is true only for a small proportion of these units. To give an idea of the typical, non-pathological magnitude of the pre-activation values, we plot two other variations of B preact: a) 5%-B preact which is calculated by ignoring 5% of the training datapoints with the smallest absolute pre-activation values and b) median-B preact which is calculated by ignoring half the hidden units in each layer with the smallest absolute pre-activation values for each input. We observe that median-B preact is quite small (of the order of 10 2), while 5%-B preact, while large (of the order of 10 4), is still orders of magnitude smaller than B preact. In Figure 2 we show how our overall bound and existing product-of-spectral-norm-based bounds BID1 BID0 vary with depth. While our bound is orders of magnitude larger than prior bounds, the key point here is that our bound grows with depth as 1.57 D while prior bounds grow with depth as 2.15D indicating that our bound should perform asymptotically better with respect to depth. Indeed, we verify that our bound obtains better values than the other existing bounds when D = 28 (see Figure 2 b). We also plot hypothetical variations of our bound replacing B preact with 5%-B preact (see "Ours-5%") and median-B preact (see "Ours-Median") both of which perform orders of magnitude better than our actual bound (note that these two hypothetical bounds do not actually hold good). In fact for larger depth, the bound with 5%-B preact performs better than all other bounds (including existing bounds). This indicates that the only bottleneck in our bound comes from the dependence on the smallest pre-activation magnitudes, and if this particular dependence is addressed, our bound has the potential to achieve tighter guarantees for even smaller D such as D = 8. In the left, we vary the depth of the network (fixing H = 40) and plot the logarithm of various generalization bounds ignoring the dependence on the training dataset size and a log(DH) factor in all of the considered bounds. Specifically, we consider our bound, the hypothetical versions of our bound involving 5%-B preact and median-B preact respectively, and the bounds from Neyshabur et al. DISPLAYFORM3 and Bartlett et al. FORMULA2 maxx DISPLAYFORM4 both of which have been modified to include distance from initialization instead of distance from origin for a fair comparison. 
Observe the last two bounds have a plot with a larger slope than the other bounds indicating that they might potentially do worse for a sufficiently large D. Indeed, this can be observed from the plots on the right where we report the distribution of the logarithm of these bounds for D = 28 across 12 runs (although under training settings different from the experiments on the left; see Appendix F.3 for the exact details).We refer the reader to Appendix F.3 for added discussion where we demonstrate how all the quantities in our bound vary with depth for H = 1280 (Finally, as noted before, we emphasize that the dependence of our bound on the pre-activation values is a limitation in how we characterize noise-resilience through our conditions rather than a drawback in our general PAC-Bayesian framework itself. Specifically, using the assumed lower bound on the pre-activation magnitudes we can ensure that, under noise, the activation states of the units do not flip; then the noise propagates through the network in a tractable, "linear" manner. Improving this analysis is an important direction for future work. For example, one could modify our analysis to allow perturbations large enough to flip a small proportion of the activation states; one could potentially formulate such realistic conditions by drawing inspiration from the conditions in Neyshabur et al. FORMULA2 ; BID0 .However, we note that even though these prior approaches made more realistic assumptions about the magnitudes of the pre-activation values, the key limitation in these approaches is that even under our non-realistic assumption, their approaches would yield bounds only on stochastic/compressed networks. Generalizing noise-resilience from training data to test data is crucial to extending these bounds to the original network, which we accomplish. In this work, we introduced a novel PAC-Bayesian framework for leveraging the noise-resilience of deep neural networks on training data, to derive a generalization bound on the original uncompressed, deterministic network. The main philosophy of our approach is to first generalize the noise-resilience from training data to test data using which we convert a PAC-Bayesian bound on a stochastic network to a standard margin-based generalization bound. We apply our approach to ReLU based networks and derive a bound that scales with terms that capture the interactions between the weight matrices better than the product of spectral norms. For future work, the most important direction is that of removing the dependence on our strong assumption that the magnitude of the pre-activation values of the network are not too small on training data. More generally, a better understanding of the source of noise-resilience in deep ReLU networks would help in applying our framework more carefully in these settings, leading to tighter guarantees on the original network. We will use upper-case symbols to denote matrices, and lower-case bold-face symbols to denote vectors. In order to make the mathematical statements/derivations easier to read, if we want to emphasize a term, say x, we write, x. Recall that we consider a neural netork of depth D (i.e., D − 1 hidden layers and one output layer) mapping from R N → R K, where K is the number of class labels in the learning task. The layers are fully connected with H units in each hidden layer, and with ReLU activations φ (⋅) on all the hidden units and linear activations on the output units. 
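Concretely, the forward computation of such a network presumably takes the standard recursive form (a reconstruction, since the corresponding display is garbled; we write W = (W_1, ..., W_D) for the weight matrices introduced below and apply φ elementwise):

$$ f_1(x; W) = W_1 x, \qquad f_d(x; W) = W_d\,\phi\big(f_{d-1}(x; W)\big) \quad \text{for } d = 2, \ldots, D, $$

so that f_d(x; W) collects the pre-activation values of layer d and f_D(x; W) is the (linear) output of the network.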
We denote the parameters of the network using the symbol W, which in turn denotes a set of weight matrices DISPLAYFORM0 to be the Jacobian corresponding to the pre-activation values of layer d with respect to the pre-activation values of layer d ′ on an input x. That is, DISPLAYFORM1 In other words, this corresponds to the product of the'activated' portion of the matrices DISPLAYFORM2, where the weights corresponding to inactive inputs are zeroed out. In short, we will call this' Jacobian d d ′'. Note that each row in this Jacobian corresponds to a unit on the dth layer, and each column corresponds to a unit on the d ′ th layer. We will denote the parameters of a random initialization of the network by Z = (Z 1, Z 2, ⋯, Z d). Let D be an underlying distribution over R N × {1, 2, ⋯, K} from which the data is drawn. In our PAC-Bayesian analysis, we will use U to denote a set of D weight matrices U 1, U 2, ⋯, U D whose entries are sampled independently from a Gaussian. Furthermore, we will use U d to denote only the first d of the randomly sampled weight matrices, and W + U d to denote a network where the d random matrices are added to the first d weight matrices in W. Note that W + U 0 = W. Thus, f (x; W + U d) is the output of a network where the first d weight matrices have been perturbed. In our analysis, we will also need to study a perturbed network where the hidden units are frozen to be at the activation state they were at before the perturbation; we will use the notation W[+U d] to denote the weights of such a network. For our statements regarding probability of events, we will use ∧, ∨, and ¬ to denote the intersection, union and complement of events (to disambiguate from the set operators). In this section, we present some standard . The first two below will be useful for our noise resilience analysis. HOEFFDING BOUND Lemma B.1. For i = 1, 2, ⋯, n, let X i be independent random variables sampled from a Gaussian with mean µ i and variance σ 2 i. Then for all t ≥ 0, we have: DISPLAYFORM0 Or alternatively, for δ ∈ DISPLAYFORM1 Note that an identical inequality holds good symmetrically for the event ∑ n i=1 X i − µ i ≤ −t, and so the probability that the event ∑ n i=1 X i − µ i > t holds, is at most twice the failure probability in the above inequalities. Lemma B.2. Let U be a H 1 × H 2 matrix where each entry is sampled from N (0, σ 2). Let x be an arbitrary vector in DISPLAYFORM0 Proof. U x is a random vector sampled from a multivariate Gaussian with mean E[U x] = 0 and co-variance E[U xx T U T]. The (i, j)th entry in this covariance matrix is E[(u DISPLAYFORM1 2 . When i ≠ j, since u i and u j are independent random variables, we will have E[(u DISPLAYFORM2 SPECTRAL NORM OF ENTRY-WISE GAUSSIAN MATRIX The following bounds the spectral norm of a matrix with Gaussian entries, with high probability: DISPLAYFORM3 2 ) or alternatively, for any δ > 0, DISPLAYFORM4 2 ) KL DIVERGENCE OF GAUSSIANS. We will use the following KL divergence equality to bound the generalization error in our PAC-Bayesian analyses. Lemma B.4. Let P be the spherical Gaussian N (µ 1, σ 2 I) and Q be the spherical Gaussian N (µ 2, σ 2 I). Then, the KL-divergence between Q and P is: DISPLAYFORM5 In this section, we will present our main PAC-Bayesian theorem that will guide our analysis of generalization in our framework. 
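For reference, the equality stated in Lemma B.4 above but lost to the garbled display is presumably the standard identity for spherical Gaussians sharing the covariance σ²I:

$$ \mathrm{KL}(Q \,\|\, P) = \frac{\lVert \mu_2 - \mu_1 \rVert^2}{2\sigma^2}, $$

which is why the KL term in our bounds reduces to the squared distance between the learned weights and the initialization, scaled by 1/(2σ²).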
Concretely, our extends the generalization bound provided by conventional PAC-Bayesian analysis BID13 -which is a generalization bound on the expected loss of a distribution of classifiers i.e., a stochastic classifier -to a generalization bound on a deterministic classifier. The way we reduce the PAC-Bayesian bound to a standard generalization bound, is different from the one pursued in previous works BID0 BID10.The generalization bound that we state below is a bit more general than standard generalization bounds on deterministic networks. Typically, generalization bounds are on the classification error; however, as discussed in the main paper we will be dealing with generalizing multiple different conditions on the interactions between the weights of the network from the training data to test data. So to state a bound that is general enough, we consider a set of generic functions ρ r (W, x, y) for r = 1, 2, ⋯R ′ (we use R ′ to distinguish it from R, the number of conditions in the abstract classifier of Section 3.1). Each of these functions compute a scalar value that corresponds to some input-dependent property of the network with parameters W for the datapoint (x, y). As an example, this property could simply be the margin of the function on the yth class i.e., f (x; W) [y] − max j≠y f (x; W) [j]. Theorem C.1. Let P be a prior distribution over the parameter space that is chosen independent of the training dataset. Let U be a random variable sampled entrywise from N (0, σ 2). Let ρ r (⋅, ⋅, ⋅) and ∆ r > 0 for r = 1, 2, ⋯R ′, be a set of input-dependent properties and their corresponding margins. We define the network W to be noise-resilient with respect to all these functions, at a given data point (x, y) if: DISPLAYFORM6, W) denote the probability over the random draw of a point (x, y) drawn from D, that the network with weights W is not noise-resilient at (x, y) according to Equation 3. DISPLAYFORM7, W) denote the fraction of data points (x, y) in a dataset S for which the network is not noise-resilient according to Equation 3. Then for any δ, with probability 1 − δ over the draws of a sample set S = {(x i, y i) ∼ D i = 1, 2, ⋯, m}, for any W we have: DISPLAYFORM8 The reader maybe curious about how one would bound the term µ D in the above bound, as this term corresponds to noise-resilience with respect to test data. This is precisely what we bound later when we generalize the noise-resilience-related conditions satisfied on train data over to test data. The above approach differs from previous approaches used by BID0; BID10 in how strong a noise-resilience we require of the classifier to provide the generalization guarantee. The stronger the noise-resilience requirement, the more price we have to pay when we jump from the PAC-Bayesian guarantee on the stochastic classifier to a guarantee on the deterministic classifier. We argue that our noise-resilience requirement is a much milder condition and therefore promises tighter guarantees. Our requirement is in fact philosophically similar to BID12 BID13, although technically different. More concretely, to arrive at a reasonable generalization guarantee in our setup, we would need that µ D andμ S are both only as large as O(1 √ m). In other words, we would want the following for (x, y) ∼ D and for (x, y) ∼ S: DISPLAYFORM0 Previous works require a noise resilience condition of the form that with high probability a particular perturbation does not perturb the classifier output on any input. 
For example, the noise-resilience condition used in BID0 written in terms of our notations, would be: DISPLAYFORM1 The main difference between the above two formulations is in what makes a particular perturbation (un)favorable for the classifier. In our case, we deem a perturbation unfavorable only after fixing the datapoint. However, in the earlier works, a perturbation is deemed unfavorable if it perturbs the classifier output sufficiently on some datapoint from the domain of the distribution. While this difference is subtle, the earlier approach would lead to a much more pessimistic analysis of these perturbations. In our analysis, this weakened noise resilience condition will be critical in analyzing the Gaussian perturbations more carefully than in Neyshabur et al. FORMULA2 i.e., we can bound the perturbation in the classifier output more tightly by analyzing the Gaussian perturbation for a fixed input point. Note that one way our noise resilience condition would seem stronger in that on a given datapoint we want less than 1 √ m mass of the perturbations to be unfavorable for us, while in previous bounds, there can be as much as 1 2 probability mass of perturbations that are unfavorable. In our analysis, this will only weaken our generalization bound by a ln √ m factor in comparison to previous bounds (while we save other significant factors). Proof. The starting point of our proof is a standard PAC-Bayesian theorem BID13 which bounds the generalization error of a stochastic classifier. Let P be a data-independent prior over the parameter space. Let L(W, x, y) be any loss function that takes as input the network parameter, and a datapoint x and its true label y and outputs a value in. Then, we have that, with probability 1 − δ over the draw of S ∼ D m, for every distribution Q over the parameter space, the following holds: DISPLAYFORM0 In other words, the statement tells us that except for a δ proportion of bad draws of m samples, the test loss of the stochastic classifierW ∼ Q would be close to its train loss. This holds for every possible distribution Q, which allows us to cleverly choose Q based on S. As is the convention, we choose Q to be the distribution of the stochastic classifier picked from N (W, σ 2 I) i.e., a Gaussian perturbation of the deterministic classifier W.RELATING TEST LOSS OF STOCHASTIC CLASSIFIER TO DETERMINISTIC CLASSIFIER. Now our task is to bound the loss for the deterministic classifier W, Pr (x,y)∼D [∃r ρ r (W, x, y) < 0]. To this end, let us define the following margin-based variation of this loss for some c ≥ 0: DISPLAYFORM1 and so we have Pr (x,y)∼D [∃r ρ r (W, x, y) DISPLAYFORM2 First, we will bound the expected L 0 of a deterministic classifier by the expected L 1 2 of the stochastic classifier; then we will bound the test L 1 2 of the stochastic classifier using the PAC-Bayesian bound. We will split the expected loss of the deterministic classifier into an expectation over datapoints for which it is noise-resilient with respect to Gaussian noise and an expectation over the rest. To write this out, we define, for a datapoint (x, y), N(W, x, y) to be the event that W is noise-resilient at (x, y) as defined in Equation 3 in the theorem statement: DISPLAYFORM3 To further continue the upper bound on the left hand side, we turn our attention to the stochastic classifier's loss on the noise-resilient part of the distribution D (we will lower bound this term in terms of the first term on the right hand side above). 
For simplicity of notations, we will write D ′ to denote the distribution D conditioned on N(W, x, y). Also, let U(W, x, y) be the favorable event that for a given data point (x, y) and a draw of the stochastic classifier,W, it is the case that for every r, ρ r (W, x, y) − ρ r (W, x, y) ≤ ∆ r 2. Then, the stochastic classifier's loss L 1 2 on D ′ is: DISPLAYFORM4 splitting the inner expectation over the favorable and unfavorable perturbations, and using linearity of expectations, DISPLAYFORM5 to lower bound this, we simply ignore the second term (which is positive) DISPLAYFORM6 Next, we use the following fact: if L 1 2 (W, x, y) = 0, then for all r, ρ r (W, x, y) ≥ ∆ r 2 and ifW is a favorable perturbation of W, then for all r, ρ r (W, DISPLAYFORM7 Hence ifW is a favorable perturbation then, DISPLAYFORM8 . Therefore, we can lower bound the above expression by replacing the stochastic classifier with the deterministic classifier (and thus ridding ourselves of the expectation over Q): DISPLAYFORM9 Since the favorable perturbations for a fixed datapoint drawn from D ′ have sufficiently high probability (that is, PrW ∼Q U(W, x, y) ≥ 1 − 1 √ m), we have: DISPLAYFORM10 Thus, we have a lower bound on the stochastic classifier's loss that is in terms of the deterministic classifier's loss on the noise-resilient datapoints. Rearranging it, we get an upper bound on the latter: DISPLAYFORM11 Thus, we have an upper bound on the expected loss of the deterministic classifier W on the noiseresilient part of the distribution. Plugging this back in the first term of the upper bound on the deterministic classifier's loss on the whole distribution D in Equation 5 we get: DISPLAYFORM12, W) rearranging, we get: DISPLAYFORM13 rewriting the expectation over D ′ explicitly as an expectation over D conditioned on N(W, x, y), we get: DISPLAYFORM14 the first term above is essentially an expectation of a loss over the distribution D with the loss set to be zero over the non-noise-resilient datapoints and set to be L 1 2 over the noise-resilient datapoints; thus we can upper bound it with the expectation of the L 1 2 loss over the whole distribution D: DISPLAYFORM15 Now observe that we can upper bound the first term here using the PAC-Bayesian bound by plugging in L 1 2 for the generic L in FIG5; however, the bound would still be in terms of the stochastic classifier's train error. To get the generalization bound we seek, which involves the deterministic classifier's train error, we need to take one final step mirroring these tricks on the train loss. LOSS. Our analysis here is almost identical to the previous analysis. Instead of working with the distribution D and D ′ we will work with the training data set S and a subset of it S ′ for which noise resilience property is satisfied by W. Below, to make the presentation neater, we use (x, y) ∼ S to denote uniform sampling from S. First, we upper bound the stochastic classifier's train loss (L 1 2) as follows: DISPLAYFORM0 splitting over the noise-resilient points S ′ ((x, y) ∈ S for which N(W, x, y) holds) like in Equation 5, we can upper bound as: DISPLAYFORM1 We can upper bound the first term by first splitting it over the favorable and unfavorable perturbations like we did before: DISPLAYFORM2 To upper bound this, we apply a similar argument. First, if L 1 2 (W, x, y) = 1, then ∃r such that ρ r (W, x, y) < ∆ r 2 and ifW is a favorable perturbation then for that value of r, ρ r (W, x, y) < ρ r (W, x, y) + ∆ r 2 < ∆ r. 
Thus ifW is a favorable perturbation then, L 1 (W, x, y) = 1 whenever L 1 2 (W, x, y) = 1 i.e., L 1 2 (W, x, y) ≤ L 1 (W, x, y). Next, we use the fact that the unfavorable perturbations for a fixed datapoint drawn from S ′ have sufficiently low probability i.e., PrW ∼Q ¬U(W, x, y) ≤ 1 √ m. Then, we get the following upper bound on the above equations, by replacing the stochastic classifier with the deterministic classifier (and thus ignoring the expectation over Q): DISPLAYFORM3 Plugging this back in the first term of Equation 7, we get: DISPLAYFORM4, W) since the first term is effectively the expectation of a loss over the whole distribution with the loss set to be zero on the non-noise-resilient points and set to L 1 over the rest, we can upper bound it by setting the loss to be L 1 over the whole distribution: DISPLAYFORM5 Applying the above upper bound and the bound in Equation 6 into the PAC-Bayesian of Equation 4 yields our (Note that combining these equations would produce the term DISPLAYFORM6 which is at most DISPLAYFORM7, which we reflect in the final bound.). In this section, we present the proof for the abstract generalization guarantee presented in Section 3. Our proof is based on the following recursive inequality that we demonstrate for all r ≤ R (we will prove a similar, but slightly different inequality for r = R + 1): DISPLAYFORM0 Recall that the rth condition in Equation 1 is that ∀l, ρ r,l > ∆ ⋆ r,l. Above, we bound the probability mass of test points such that any one of the first r conditions in Equation 1 is not even approximately satisfied, in terms of the probability mass of points where one of the first r − 1 conditions is not even approximately satisfied, and a term that corresponds to how much error there can be in generalizing the rth condition from the training data. Our proof crucially relies on Theorem C.1. This theorem provides an upper bound on the proportion of test data that fail to satisfy a set of conditions, in terms of four quantities. The first quantity is the proportion of training data that do not satisfy the conditions; the second and third quantities, which we will in short refer to asμ S andμ D, correspond to the proportion of training and test data on which the properties involved in the conditions are not noise-resilient. The fourth quantity is the generalization error. First, we consider the base case when r = 1, and apply the PAC-Bayes-based guarantee from Theorem C.1 on the first set of properties {ρ 1,1, ρ 1,2, ⋯} and their corresponding constants DISPLAYFORM1 Effectively this establishes that the noise-resilience requirement of Equation 3 in Theorem C.1 holds on all possible inputs, thus proving our claim that the termsμ S andμ D would be zero. Thus, we will get that DISPLAYFORM2 which proves the recursion statement for the base case. To prove the recursion for some arbitrary r ≤ R, we again apply the PAC-Bayes-based guarantee from Theorem C.1, but on the union of the first r sets of properties. Again, we will have that the first term in the guarantee would be zero, since the corresponding conditions are satisfied on the training data. Now, to bound the proportion of bad pointsμ S andμ D, we make the following claim:the network is noise-resilient as per Equation 3 in Theorem C.1 for any input that satisfies the r−1 conditions approximately i.e., ∀q ≤ r−1 and ∀l, ρ q,l (W, x, y) > 0.The above claim can be used to prove Equation 8 as follows. 
Since all the conditions are assumed to be satisfied by a margin on the training data, this claim immediately implies that µ S is zero. Similarly, this claim implies that for the test data, we can bound µ D in terms of Pr (x,y)∼D [∃q<r ∃l ρ q,l (W, x, y) < 0], thus giving rise to the recursion in Equation 8. Now, to prove our claim, consider an input (x, y) such that ρ q,l (W, x, y) > 0 for q = 1, 2, ⋯, r − 1 and for all possible l. First from the assumption in our theorem statement that ∆ q,l (σ ⋆) ≤ ∆ ⋆ q,l, we have the following upper bound on the proportion of parameter perturbations under which any of the properties in the first r sets suffer a large perturbation: DISPLAYFORM3 Now, this can be expanded as a summation over q = 1, 2, ⋯, r as: DISPLAYFORM4 and because (x, y) satisfies ρ q,l (W, x, y) > 0 for q = 1, 2, ⋯, r − 1 and for all possible l, by the constraint assumed in Equation 2, we have: DISPLAYFORM5 Thus, we have that (x, y) satisfies the noise-resilience condition from Equation 3 in Theorem C.1 if it also satisfies ρ q,l (W, x, y) > 0 for q = 1, 2, ⋯, r − 1 and for all possible l. This proves our claim, and hence in turn proves the recursion in Equation 8.Finally, we can apply a similar argument for the R + 1th set of input-dependent properties (which is a singleton set consisting of the margin of the network) with a small change since the first term in the guarantee from Theorem C.1 is not explicitly assumed to be zero; we will get an inequality in terms of the number of training points that are not classified correctly by a margin, giving rise to the margin-based bound: DISPLAYFORM6 Note that in the first term on the right hand side, ρ R+1,1 (W, x, y) corresponds to the margin of the classifier on (x, y). Now, by using the fact that the test error is upper bounded by the left hand side in the above equation, applying the recursion on the right hand side R + 1 times, we get our final . In this section, we will quantify the noise resilience of a network in different aspects. Each of our bounds has the following structure: we fix an input point (x, y), and then say that with high probability over a Gaussian perturbation of the network's parameters, a particular input-dependent property of the network (say the output of the network, or the pre-activation value of a particular unit h at a particular layer d, or say the Frobenius norm sof its active weight matrices), changes only by a small magnitude proportional to the variance σ 2 of the Gaussian perturbation of the parameters. A key feature of our bounds is that they do not involve the product of the spectral norm of the weight matrices and hence save us an exponential factor in the final generalization bound. Instead, the bound in the perturbation of a particular property will be in terms of i) the magnitude of the some'preceding' properties (typically, these are properties of the lower layers) of the network, and ii) how those preceding properties themselves respond to perturbations. For example, an upper bound in the perturbation of the dth layer's output would involve the 2 norm of the lower layers d ′ < d, and how much they would blow up under these perturbations. E.2 SOME NOTATIONS.To formulate our lemma statement succinctly, we design a notation wherein we define a set of'tolerance parameters' which we will use to denote the extent of perturbation suffered by a particular property of the network. 
LetĈ denote a'set' (more on what we mean by a set below) of positive tolerance values, consisting of the following elements: • We callĈ a'set' to denote a group of related constants into a single symbol. Each element in this set has a particular semantic associated with it, unlike the standard notation of a set, and so when we refer to, sayζ d d ′ ∈Ĉ, we are indexing into the set to pick a particular element.• We will use the subscriptĈ d to index into a subset of only those tolerance values corresponding to layers from 1 until d. Next we define two events. The first event formulates the scenario that for a given input, a particular perturbation of the weights until layer d brought about very little change in the properties of these layers (within some tolerance levels). The second event formulates the scenario that the perturbation did not flip the activation states of the network. Definition E.1. Given an input x, and an arbitrary set of constantsĈ ′, for any perturbation U of W, we denote by PERT-BOUND(W+U,Ĉ ′, x) the event that:• for eachα DISPLAYFORM0 • for eachγ ′ d ∈Ĉ ′, the maximum perturbation in the preactivation of hidden units on layer d DISPLAYFORM1 • for eachζ DISPLAYFORM2 • for eachψ DISPLAYFORM3 NOTE: If we supply only a subset ofĈ (sayĈ d instead of the whole ofĈ) to the above event, PERT-BOUND(W + U, ⋅, x), then it would denote the event that the perturbations suffered by only that subset of properties is within the respective tolerance values. Next, we define the event that the perturbations do not affect the activation states of the network. Definition E.2. For any perturbation U of the matrices W, let UNCHANGED-ACTS d (W + U, x) denote the event that none of the activation states of the first d layers change on perturbation. E.3 MAIN LEMMA.Our here are styled similar to the equations required by Equation 2 presented in the main paper. For a given input point and for a particular property of the network, roughly, we bound the the probability that a perturbation affects the value of the property while none of the preceding preceding properties themselves are perturbed beyond a certain tolerance level. Lemma E.1. LetĈ be a set of constants (that denote the amount of perturbation in the properties preceding a considered property). For anyδ > 0, defineĈ ′ (which is a bound on the perturbation of a considered property) in terms ofĈ and the perturbation parameter σ, for all d = 1, 2, ⋯, D and for all d ′ = d − 1, ⋯, 1 as follows: DISPLAYFORM4 2 ) for any d. Then, the following statements hold good:1. Bound on perturbation of of 2 norm of the output of layer d. DISPLAYFORM5 3. Bound on perturbation of 2 norm on the rows of the Jacobians d d ′.P r U ¬PERT-BOUND(W + U, {ζ P r U ¬PERT-BOUND(W + U, {ψ DISPLAYFORM6 DISPLAYFORM7 Proof. For the most part of this discussion, we will consider a perturbed network where all the hidden units are frozen to be at the same activation state as they were at, before the perturbation. We will denote the weights of such a network by W [+U] and its output at the dth layer by f d (x; W[+U]). By having the activations states frozen, the Gaussian perturbations propagate linearly through the activations, effectively remaining as Gaussian perturbations; then, we can enjoy the well-established properties of the Gaussian even after they propagate. DISPLAYFORM8 Now, the spectral norm of Y d ′′ is at most the products of the spectral norms of each of these three matrices. 
Using Lemma B.3, the spectral norm of the middle term U d ′′ can be bounded by σ 2H ln 2DĤ δ with high probability 1 −δ D over the draws of U d ′′. We will also decompose the spectral norm of the first term so that our final bound does not involve any Jacobian of the dth layer. When d ′′ = d, this term has spectral norm 1 because the Jacobian d d is essentially the identity matrix. When DISPLAYFORM0. Furthermore, since, for a ReLU network, DISPLAYFORM1 effectively W d with some columns zerod out, the spectral norm of the Jacobian is upper bounded by the spectral norm W d.Putting all these together, we have that with probability 1 −δ D over the draws of U d ′′, the following holds good: DISPLAYFORM2 By a union bound, we then get that with probability 1 −δ over the draws of U d, we can upper bound Equation 14 as: DISPLAYFORM3 δ Note that the above bound simultaneously holds over all d ′ (without the application of a union bound). Finally we get the of the lemma by a similar argument as in the case of the perturbation bound on the output of each layer. 10 Again, note that the below succinct formula works even for corner cases like d Below, we present our main for this section, a generalization bound on a class of networks that is based on certain norm bounds on the training data. We provide a more intuitive presentation of these bounds after the proof in Appendix F.3. Theorem. 4.1 For any δ > 0, with probability 1 − δ over the draw of samples S ∼ D m, for any W, we have that: DISPLAYFORM0 DISPLAYFORM1 GROUPING AND ORDERING THE PROPERTIES. Now to apply the abstract generalization bound in Theorem 3.1, recall that we need to come up with an ordered grouping of the functions above such that we can realize the constraint given in Equation 2. Specifically, this constraint effectively required that, for a given input, the perturbation in the properties grouped in a particular set be small, given that all the properties in the preceding sets satisfy the corresponding conditions on them. To this end, we make use of Lemma E.1 where we have proven perturbation bounds relevant to the properties we have defined above. Our lemma also naturally induces dependencies between these properties in a way that they can be ordered as required by our framework. The order in which we traverse the properties is as follows, as dictated by Lemma E.1. We will go from layer 0 uptil D. For a particular layer d, we will first group the properties corresponding to the spectral norms of the Jacobians of that layer whose corresponding margins are {ψ DISPLAYFORM2 Next, we will group the row 2 norms of the Jacobians of layer d, whose corresponding margins are {ζ DISPLAYFORM3 Followed by this, we will have a singleton set of the layer output's 2 norm whose corresponding margin is α ▵ d . We then will group the pre-activations of layer d, each of which has the corresponding margin γ ▵ d . For the output layer, instead of the pre-activations or the output 2 norm, we will consider the margin-based property we have defined above.12 13 Observe that the number of sets R that we have created in this manner, is at most 4D since there are at most 4 sets of properties in each layer. that is required by our framework. For any r, the rth set of properties need to satisfy the following statement: DISPLAYFORM0 Furthermore, we want the perturbation bounds ∆ r,l (σ) to satisfy ∆ r,l (σ ⋆) ≤ ∆ ⋆ r,l, where σ ⋆ is the standard deviation of the parameter perturbation chosen in the PAC-Bayesian analysis. 
The next step in our proof is to show that our choice of σ ⋆, and the input-dependent properties, all satisfy the above requirements. To do this, we instantiate Lemma E.1 with σ = σ ⋆ as in Theorem 4.1 DISPLAYFORM1 Then, it can be verified that the values of the perturbation bounds inĈ ′ in Lemma E.1 can be upper bounded by the corresponding value in C ▵ 2. In other words, we have that for our chosen value of σ, the perturbations in all the properties and the output of the network can be bounded by the constants specified in C ▵ 2. Succinctly, let us say: DISPLAYFORM2 Given that these perturbation bounds hold for our chosen value of σ, we will focus on showing that a constraint of the form Equation 2 holds for the row 2 norms of the Jacobians d d ′ for all d ′ < d. A similar approach would apply for the other properties. First, we note that the sets of properties preceding the ones corresponding to the row 2 norms of Jacobian d d ′, consists of all the properties upto layer d − 1. Therefore, the precondition for Equation 2 which is of the form ρ(W, x, y) > 0 for all the previous properties ρ, translates to norm bound on these properties involving the constants C † d−1 as discussed in Fact F.1. Succinctly, these norm bounds can be expressed as NORM-BOUND(W + U, DISPLAYFORM3 Given that these norm bounds hold for a particular x, our goal is to argue that the rest of the constraint in Equation 2 holds. To do this, we first argue that given these norm bounds, if PERT-BOUND(W + U, C ▵ d−1 2, x) holds, then so does UNCHANGED-ACTS d−1 (W + U, x). This is because, the event PERT-BOUND(W + U, C ▵ d−1 2, x) implies that the pre-activation values of layer d − 1 suffer a perturbation of at most γ DISPLAYFORM4 holds, we have that the preactivation values of this layer have a magnitude of at least γ † DISPLAYFORM5 From these two equations, we have that the hidden units even at layer d−1 of the network do not change their activation state (i.e., the sign of the pre-activation does not change) under this perturbation. We can similarly argue for the layers below d − 1, thus proving that UNCHANGED- DISPLAYFORM6 Then, from the above discussion on the activation states, and from Equation 15, we have that Lemma E.1 boils down to the following inequality, when we plug σ = σ ⋆:12 For layer 0, the only property that we have defined is the 2 norm of the input. 13 Note that the Jacobian for d d is nothing but an identity matrix regardless of the input datapoint; thus we do not need any generalization analysis to bound its value on a test datapoint. Hence, we ignore it in our analysis, as can be seen from the list of properties that we have defined. Note, that the ing bound would have a log term that does not affect our bound in an asymptotic sense. In this section, we provide more detailed demonstration of the dependence of the terms in our bound on the depth/width of the network. In all the experiments, including the ones in the main paper (except the one in Figure 2 (b)) we use SGD with learning rate 0.1 and mini-batch size 64. We train the network on a subset of 4096 random training examples from the MNIST dataset to minimize cross entropy loss. We stop training when we classify at least 0.99 of the data perfectly, with a margin of γ class = 10. In Figure 2 (b) where we train networks of depth D = 28, the above training algorithm is quite unstable. Instead, we use Adam with a learning rate of 10 −5 until the network achieves an accuracy of 0.95 on the training dataset. 
Finally, we note that all logarithmic transformations in our plots are to the base 10.In FIG2 we show how the norm-bounds on the input-dependent properties of the network do not scale as large as the product of spectral norms. For the remaining experiments in this section, we will present a slightly looser bound than the one presented in our main , motivated by the fact that computing our actual bound is expensive as it involves computing spectral norms of Θ(D 2) Jacobians on m training datapoints. We note that even this looser bound does not have a dependence on the product of spectral norms, and has similar overall dependence on the depth. Specifically, we will consider a bound that is based on a slightly modified noise-resilience analysis. Recall that in Lemma E.1, when we considered the perturbation in the row 2 norm Jacobian d d ′, we bounded Equation 13 in terms of the spectral norms of the Jacobians. Instead of taking this route, if we retained the bound in Equation 13, we will get a slightly different upper bound on the perturbation of the Jacobian row 2 norm as: DISPLAYFORM0 By using this bound in our analysis, we can ignore the spectral norm terms ψ Thus, the row 2 norms of these Jacobians must be split into separate sets of properties, and the bound on them generalized one after the other (instead of grouped into one set and generalized all at one go as before). This would Recall from the discussion in the introduction in the main paper that, prior works (; BID0 have also characterized noise resilience in terms of conditions on the interactions between the activated weight matrices. Below, we discuss the conditions assumed by these works, which parallel the conditions we have studied in our paper (such as the bounded 2 norm in each layer). There are two main high level similarities between the conditions studied across these works. First, these conditions -all of which characterize the interactions between the activated weights matrices in the network -are assumed only for the training inputs; such an assumption implies noise-resilience of the network on training inputs. Second, there are two kinds of conditions assumed. The first kind allows one to bound the propagation of noise through the network under the assumption that the activation states do not flip; the second kind allows one to bound the extent to which the activation states do flip.
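As an illustrative aside (not the experiment reported above), the two kinds of noise-resilience conditions can be probed empirically: sample Gaussian perturbations of the weights and measure (a) how far the network output moves and (b) how often ReLU activation states flip. A minimal numpy sketch, with arbitrary network sizes and noise scale:

```python
import numpy as np

rng = np.random.default_rng(1)

dims = [20, 32, 32, 10]
W = [rng.normal(scale=1.0 / np.sqrt(dims[i]), size=(dims[i + 1], dims[i]))
     for i in range(len(dims) - 1)]

def forward(weights, x):
    h, signs = x, []
    for d, Wd in enumerate(weights):
        z = Wd @ h
        signs.append(z > 0)
        h = np.maximum(z, 0.0) if d < len(weights) - 1 else z
    return h, signs

x = rng.normal(size=dims[0])
out0, signs0 = forward(W, x)

sigma, trials = 0.05, 200
flip_rates, output_drifts = [], []
for _ in range(trials):
    U = [rng.normal(scale=sigma, size=Wd.shape) for Wd in W]
    out1, signs1 = forward([Wd + Ud for Wd, Ud in zip(W, U)], x)
    # count flipped hidden-unit activation states (exclude the linear output layer)
    flips = sum(np.sum(s0 != s1) for s0, s1 in zip(signs0[:-1], signs1[:-1]))
    flip_rates.append(flips / sum(len(s) for s in signs0[:-1]))
    output_drifts.append(np.linalg.norm(out1 - out0))

print(f"sigma={sigma}: mean fraction of flipped hidden units = {np.mean(flip_rates):.4f}")
print(f"            mean ||f(x;W+U) - f(x;W)||_2          = {np.mean(output_drifts):.4f}")
```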
We provide a PAC-Bayes-based generalization guarantee for uncompressed, deterministic deep networks by generalizing the network's noise-resilience from the training data to the test data.
382
scitldr
Training neural networks to be certifiably robust is critical to ensure their safety against adversarial attacks. However, it is currently very difficult to train a neural network that is both accurate and certifiably robust. In this work we take a step towards addressing this challenge. We prove that for every continuous function $f$, there exists a network $n$ such that: (i) $n$ approximates $f$ arbitrarily close, and (ii) simple interval bound propagation of a region $B$ through $n$ yields a that is arbitrarily close to the optimal output of $f$ on $B$. Our can be seen as a Universal Approximation Theorem for interval-certified ReLU networks. To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks. Much recent work has shown that neural networks can be fooled into misclassifying adversarial examples , inputs which are imperceptibly different from those that the neural network classifies correctly. Initial work on defending against adversarial examples revolved around training networks to be empirically robust, usually by including adversarial examples found with various attacks into the training dataset (; ; ; ; ; ;). However, while empirical robustness can be practically useful, it does not provide safety guarantees. As a , much recent research has focused on verifying that a network is certifiably robust, typically by employing methods based on mixed integer linear programming , SMT solvers , semidefinite programming (a), duality (; b), and linear relaxations (; ; b; ; ;). Because the certification rates were far from satisfactory, specific training methods were recently developed which produce networks that are certifiably robust:; Raghunathan et al. (2018b); Wang et al. (2018a);;; train the network with standard optimization applied to an over-approximation of the network behavior on a given input region (the region is created around the concrete input point). These techniques aim to discover specific weights which facilitate verification. There is a tradeoff between the degree of the over-approximation used and the speed of training and certification. Recently, (b) proposed a statistical approach to certification, which unlike the non-probabilistic methods discussed above, creates a probabilistic classifier that comes with probabilistic guarantees. So far, some of the best non-probabilistic achieved on the popular MNIST and CIFAR10 datasets have been obtained with the simple Interval relaxation , which scales well at both training and verification time. Despite this progress, there are still substantial gaps between known standard accuracy, experimental robustness, and certified robustness. For example, for CIFAR10, the best reported certified robustness is 32.04% with an accuracy of 49.49% when using a fairly modest l ∞ region with radius 8/255 . The state-of-the-art non-robust accuracy for this dataset is > 95% with experimental robustness > 50%. Given the size of this gap, a key question then is: can certified training ever succeed or is there a fundamental limit? In this paper we take a step in answering this question by proving a parallel to the Universal Approximation Theorem . We prove that for any continuous function f defined on a compact domain Γ ⊆ R m and for any desired level of accuracy δ, there exists a ReLU neural network n which can certifiably approximate f up to δ using interval bound propagation. 
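For readers unfamiliar with interval bound propagation, the following numpy sketch shows how a box [a, b] is pushed through affine and ReLU layers and why the result is a sound over-approximation of the network's outputs on that box; the weights and the input box are arbitrary placeholders, not a trained network.

```python
import numpy as np

rng = np.random.default_rng(2)

def ibp_affine(l, u, W, b):
    """Propagate the box [l, u] through x -> Wx + b with interval arithmetic."""
    mid, rad = (u + l) / 2.0, (u - l) / 2.0
    new_mid = W @ mid + b
    new_rad = np.abs(W) @ rad
    return new_mid - new_rad, new_mid + new_rad

def ibp_relu(l, u):
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# A random two-hidden-layer ReLU network.
dims = [4, 8, 8, 3]
params = [(rng.normal(size=(dims[i + 1], dims[i])), rng.normal(size=dims[i + 1]))
          for i in range(len(dims) - 1)]

# Input box B = [a, b].
a = np.array([0.0, 0.1, -0.2, 0.3])
b = a + 0.05

l, u = a, b
for i, (Wi, bi) in enumerate(params):
    l, u = ibp_affine(l, u, Wi, bi)
    if i < len(params) - 1:
        l, u = ibp_relu(l, u)

print("IBP output box lower:", np.round(l, 3))
print("IBP output box upper:", np.round(u, 3))

# Soundness check: every concrete point in [a, b] must map inside [l, u].
for _ in range(1000):
    x = rng.uniform(a, b)
    h = x
    for i, (Wi, bi) in enumerate(params):
        h = Wi @ h + bi
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)
    assert np.all(h >= l - 1e-9) and np.all(h <= u + 1e-9)
print("all sampled points fall inside the IBP box (soundness check passed)")
```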
As an interval is a fairly imprecise relaxation, our directly applies to more precise convex relaxations (e.g., ;). Theorem 1.1 (Universal Interval-Certified Approximation, Figure 1). Let Γ ⊂ R m be a compact set and let f: Γ → R be a continuous function. For all δ > 0, there exists a ReLU network n such that for all boxes [a, b] in Γ defined by points a, b ∈ Γ where a k ≤ b k for all k, the propagation of the box [a, b] using interval analysis through the network n, denoted n ([a, b]), approximates the set We recover the classical universal approximation theorem (|f (x) − n(x)| ≤ δ for all x ∈ Γ) by considering boxes [a, b] describing points (x = a = b). Note that here the lower bound is not [l, u] as the network n is an approximation of f. Because interval analysis propagates boxes, the theorem naturally handles l ∞ norm bound perturbations to the input. Other l p norms can be handled by covering the l p ball with boxes. The theorem can be extended easily to functions f: Γ → R k by applying the theorem component wise. Practical meaning of theorem The practical meaning of this theorem is as follows: if we train a neural network n on a given training data set (e.g., CIFAR10) and we are satisfied with the properties of n (e.g., high accuracy), then because n is a continuous function, the theorem tells us that there exists a network n which is as accurate as n and as certifiable with interval analysis as n is with a complete verifier. This means that if we fail to find such an n, then either n did not possess the required capacity or the optimizer was unsuccessful. Focus on the existence of a network We note that we do not provide a method for training a certified ReLU network -even though our method is constructive, we aim to answer an existential question and thus we focus on proving that a given network exists. Interesting future work items would be to study the requirements on the size of this network and the inherent hardness of finding it with standard optimization methods. Universal approximation is insufficient We now discuss why classical universal approximation is insufficient for establishing our . While classical universal approximation theorems state that neural networks can approximate a large class of functions f, unlike our , they do not state that robustness of the approximation n of f is actually certified with a scalable proof method (e.g., interval bound propagation). If one uses a non scalable complete verifier instead, then the standard Universal approximation theorem is sufficient. To demonstrate this point, consider the function f: R → R (Figure 2b) mapping all x ≤ 0 to 1, all x ≥ 1 to 0 and all 0 < x < 1 to 1 − x and two ReLU networks n 1 (Figure 2a) and n 2 (Figure 2c) perfectly approximating f, that is n 1 (x) = f (x) = n 2 (x) for all x. For δ = 1 4, the interval certification that n 1 maps all However, interval certification succeeds for n 2, because n 2 =. To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks. After adversarial examples were discovered by , many attacks and defenses were introduced (for a survey, see). Initial work on verifying neural network robustness used exact methods on small networks, while later research introduced methods based on over-approximation (; a; ;) aiming to scale to larger networks. A fundamentally different approach is randomized smoothing (; Lécuyer et al., 2019; b), in which probabilistic classification and certification with high confidence is performed. 
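The figure referenced above is not reproduced in this text, but the phenomenon is easy to recreate. The sketch below uses an illustrative pair of ReLU expressions (assumed for illustration, not necessarily the exact n1 and n2 from the paper's figure) that compute the same ramp function f pointwise, yet whose interval transformers differ sharply: on the input box [-1, 2], one yields the loose box [-1, 2] while the other yields the exact range [0, 1].

```python
import numpy as np

def relu_interval(l, u):                  # interval transformer of ReLU
    return max(l, 0.0), max(u, 0.0)

def neg_interval(l, u):                   # interval transformer of x -> -x
    return -u, -l

# f(x) = 1 for x <= 0, 1 - x for 0 < x < 1, 0 for x >= 1 (the ramp described above).
# Two ReLU expressions that both compute f exactly on points:
#   n1(x) = R(1 - x) - R(-x)        n2(x) = R(1 - R(x))
def n1(x): return max(1 - x, 0.0) - max(-x, 0.0)
def n2(x): return max(1 - max(x, 0.0), 0.0)

xs = np.linspace(-1.0, 2.0, 301)
assert np.allclose([n1(x) for x in xs], [n2(x) for x in xs])   # same function pointwise

# Interval propagation of the box [-1, 2] through each expression.
l, u = -1.0, 2.0

# n1: R(1 - x) - R(-x)
a = relu_interval(1 - u, 1 - l)           # interval of R(1 - x)
b = relu_interval(*neg_interval(l, u))    # interval of R(-x)
n1_box = (a[0] - b[1], a[1] - b[0])       # interval subtraction

# n2: R(1 - R(x))
c = relu_interval(l, u)                   # interval of R(x)
n2_box = relu_interval(1 - c[1], 1 - c[0])

true_range = (min(n1(x) for x in xs), max(n1(x) for x in xs))
print("true range of f on [-1, 2]:", true_range)   # (0.0, 1.0)
print("IBP box for n1            :", n1_box)        # (-1.0, 2.0) -- loose
print("IBP box for n2            :", n2_box)        # (0.0, 1.0)  -- exact
```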
As neural networks that are experimentally robust need not be certifiably robust, there has been significant recent research on training certifiably robust neural networks (b; ; 2019; ; ; a; ; a; ; b). As these methods appear to have reached a performance wall, several works have started investigating the fundamental barriers in the datasets and methods that preclude the learning of a robust network (let alone a certifiably robust one) (; ;). In our work, we focus on the question of whether neural networks are capable of approximating functions whose robustness can be established with the efficient interval relaxation. Feasibility Results with Neural Networks Early versions of the Universal Approximation Theorem were stated by and. showed that networks using sigmoidal activations could approximate continuous functions in the unit hypercube, while showed that even networks with only one hidden layer are capable of approximating Borel measurable functions. (2019a) provide an explicit construction to obtain the network. We note that both of these works focus on Lipschitz continuous functions, a more restricted class than continuous functions, which we consider in our work. In this section we provide the concepts necessary to describe our main . Adversarial Examples and Robustness Verification Let n: R m → R k be a neural network, which classifies an input x to a label t if n(x) t > n(x) j for all j = t. For a correctly classified input x, an adversarial example is an input y such that x is imperceptible from y to a human, but is classified to a different label by n. Frequently, two images are assumed to be "imperceptible" if there l p distance is at most. The l p ball around an image is said to be the adversarial ball, and a network is said to be -robust around x if (Figure 3a) using a ReLU network n = ξ 0 + k n k. The ReLU networks n k (Figure 3c) approximate the N -slicing of f (Figure 3b), as a sum of local bumps (Figure 6). every point in the adversarial ball around x classifies the same. In this paper, we limit our discussion to l ∞ adversarial balls which can be used to cover to all l p balls. The goal of robustness verification is to show that for a neural network n, input point x and label t, every possible input in an l ∞ ball of size around x (written B ∞ (x)) is also classified to t., and λ ∈ R ≥0. Furthermore, we used to distinguish the function f from its interval-transformation f. To illustrate the difference between f and f, consider f (illustrating the loss in precision that interval analysis suffers from. Interval analysis provides a sound over-approximation in the sense that for all function f, the values that Furthermore all combinations f of +, −, · and R are monotone, that is for . This will later be needed. In this section, we provide an explanation of the proof of our main , Theorem 4.6, and illustrate the main points of the proof. The first step in the construction is to deconstruct the function f into slices {f k : Γ → [0, for all x, where ξ 0 is the minimum of f (Γ). We approximate each slice f k by a ReLU network δ 2 · n k. The network n approximating f up to δ will be n(x):= ξ 0 + δ 2 k n k (x). 
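The slicing decomposition can be checked numerically. The sketch below slices the example function f(x) = -x^3 + 3x (the one used in Figure 3) into N = 5 clipped slices of equal height and verifies that xi_0 plus the sum of the slices reconstructs f; the grid resolution and N are arbitrary.

```python
import numpy as np

# Illustrative sketch of the slicing idea described above: split the range of
# f(x) = -x**3 + 3x on [-2, 2] into N slices of height h, each slice a clipped
# "ramp" f_k, and check that f equals  xi_0 + sum_k f_k.
f = lambda x: -x**3 + 3 * x
xs = np.linspace(-2.0, 2.0, 1001)
fx = f(xs)

N = 5
xi_min, xi_max = fx.min(), fx.max()
h = (xi_max - xi_min) / N                       # height of one slice
xi = xi_min + h * np.arange(N + 1)              # slice boundaries xi_0, ..., xi_N

# Slice k keeps only the part of f lying between xi_k and xi_{k+1}.
slices = [np.clip(fx - xi[k], 0.0, h) for k in range(N)]

reconstruction = xi_min + np.sum(slices, axis=0)
print("max reconstruction error:", np.abs(reconstruction - fx).max())   # ~1e-15

# Each slice is bounded in [0, h]; a network approximating it can therefore
# leak at most about h of interval-analysis imprecision.
for k, s in enumerate(slices):
    print(f"slice {k}: values lie in [{s.min():.2f}, {s.max():.2f}] (h = {h:.2f})")
```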
The construction relies on 2 key insights, (i) the output of δ 2 · n k can be confined to the interval [0, δ 2], thus the loss of analysis precision is at most the height of the slice, and (ii) we can construct the networks n k using local bump functions, such that only 4 slices can contribute to the loss of analysis precision, two for the lower interval bound, two for the upper one. The slicing {f k} 0≤k<5 of the function f: [−2, 2] → R (Figure 3a), mapping x to f (x) = −x 3 + 3x is depicted in Figure 3b. The networks n k are depicted in Figure 3c. In this example, evaluating the interval-transformer of n, namely n on the box Definition 4.1 (N -slicing (Figure 3b) ). Let Γ ⊂ R m be a closed m-dimensional box and let f: Γ → R be continuous. The N -slicing of f is a set of functions {f k} 0≤k<N defined by where To construct a ReLU network satisfying the desired approximation property (Equation) if evaluated on boxes in B(Γ), we need the ReLU network nmin capturing the behavior of min as a building block (similar to). It is given by With the ReLU network nmin, we can construct recursively a ReLU network nmin N mapping N arguments to the smallest one (Definition A.8). Even though the interval-transformation loses precision, we can establish bounds on the precision loss of nmin N sufficient for our use case (Appendix A). Now, we use the clipping function R [*,1]:= 1 − R(1 − x) clipping every value exceeding 1 back to 1 (Figure 5) to construct the local bumps φ c w.r.t. a grid G. G specifies the set of all possible local bumps we can use to construct the networks n k. Increasing the finesse of G will increases the approximation precision. Definition 4.2 (local bump, Figure 6). M } ⊆ G be a set of grid points describing the corner points of a hyperrectangle in G. We define a ReLU neural network φ c: We will describe later how M and c get picked. A graphical illustration of a local bump for in two dimensions and c = {Figure 6 . The local bump φ c (x) evaluates to 1 for all x that lie within the convex hull of c, namely conv(c), after which φ c (x) quickly decreases linearly to 0. φ c has 1 + 2(2d − 1) + 2d ReLUs and 1 + log 2 (2d + 1) + 1 layers. The formal proof is given in Appendix A. The next lemma shows, how a ReLU network n k can approximate the slice f k while simultaneously confining the loss of analysis precision. Lemma 4.4. Let Γ ⊂ R m be a closed box and let f: Γ → R be continuous. For all δ > 0 there exists a set of ReLU networks {n k} 0≤k<N of size N ∈ N approximating the N -slicing of f, {f k} 0≤k<N (ξ k as in Definition 4.1) such that for all boxes B ∈ B(Γ) and It is important to note that in Equation we mean f and not f. The proof for Lemma 4.4 is given in Appendix A. In the following, we discuss a proof sketch. Because Γ is compact and f is continuous, f is uniformly continuous by the Heine-Cantor Theorem. So we can pick a M ∈ N such that for all x, y ∈ Γ satisfying ||y−x|| Next, we construct for every slice k a set ∆ k of hyperrectangles on the grid G: if a box B ∈ B(Γ) fulfills f (B) ≥ ξ k+1 + δ 2, then we add a minimal enclosing hyperrectangle c ⊂ G such that B ⊆ conv(c) to ∆ k, where conv(c) denotes the convex hull of c. This implies, using uniform continuity of f and that the grid G is fine enough, that f (conv(c)) ≥ ξ k+1. Since there is only a finite number of possible hyperrectangles in G, the set ∆ k is clearly finite. The network fulfilling Equation is where φ c is as in Definition 4.2. The n k are depicted in Figure 3c. 
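The exact formula of nmin in Definition A.6 did not survive extraction, so the sketch below uses the standard ReLU realization of min as an assumption (it is at least consistent with the R(a+c) - R(-a-c) terms appearing in the appendix proof further down), together with the clipping gadget R_[*,1](x) = 1 - R(1 - x) quoted above, and a hand-rolled interval transformer for the min gadget.

```python
import numpy as np

R = lambda t: np.maximum(t, 0.0)

# Assumed ReLU realization of min:  min(x, y) = ((x + y) - |x - y|) / 2,
# with  x + y = R(x+y) - R(-x-y)  and  |x - y| = R(x-y) + R(y-x).
def nmin(x, y):
    return 0.5 * (R(x + y) - R(-x - y) - R(x - y) - R(y - x))

# Clipping gadget from the text:  R_[*,1](x) = 1 - R(1 - x) clips values above 1.
def clip_above_1(x):
    return 1.0 - R(1.0 - x)

rng = np.random.default_rng(3)
xs, ys = rng.normal(size=10000), rng.normal(size=10000)
assert np.allclose(nmin(xs, ys), np.minimum(xs, ys))          # Lemma A.7, numerically
assert np.allclose(clip_above_1(xs), np.minimum(xs, 1.0))     # clipping behaviour

# Interval transformer of nmin on boxes [a, b] x [c, d]: propagate each ReLU
# term with interval arithmetic and combine.
def interval_R(l, u): return (max(l, 0.0), max(u, 0.0))

def interval_nmin(a, b, c, d):
    s  = interval_R(a + c, b + d)          # R(x + y)
    sn = interval_R(-b - d, -a - c)        # R(-x - y)
    p  = interval_R(a - d, b - c)          # R(x - y)
    pn = interval_R(c - b, d - a)          # R(y - x)
    lo = 0.5 * (s[0] - sn[1] - p[1] - pn[1])
    hi = 0.5 * (s[1] - sn[0] - p[0] - pn[0])
    return lo, hi

# A sound (if loose) over-approximation of the true range [0.0, 1.0]:
print(interval_nmin(0.0, 1.0, 0.5, 2.0))
```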
Now, we see that Equation holds by construction: For all boxes B ∈ B(Γ) such that f ≥ ξ k+1 + δ 2 on B exists c ∈ ∆ k such that B ⊆ conv(c) which implies, using Lemma 4.3, that φ c (B) =, hence Similarly, if f (B) ≤ ξ k − δ 2 holds, then it holds for all c ∈ ∆ k that B does not intersect N (conv(c)). Indeed, if a c ∈ ∆ k would violate this, then by construction, f (conv(c)) ≥ ξ k+1, contradicting f (B) ≤ ξ k − where l:= min f (B) and u:= max f (B). Proof. Pick N such that the height of each slice is exactly δ 2, if this is impossible choose a slightly smaller δ. Let {n k} 0≤k<N be a series of networks as in Lemma 4.4. Recall that ξ 0 = min f (Γ). We define the ReLU network Let B ∈ B(Γ). Thus we have for all k Let p, q ∈ {0, . . ., N − 1} such that Figure 7: Illustration of the proof for Theorem 4.5. as depicted in Figure 7. Thus by Equation for all k ∈ {0, . . ., p − 2} it holds that n k (B) = and similarly, by Equation for all k ∈ {q + 2, . . ., N − 1} it holds that n k (B) =. Plugging this into Equation after splitting the sum into three parts leaves us with Applying the standard rules for interval analysis, leads to where we used in the last step, that ξ 0 + k δ 2 = ξ k. For all terms in the sum except the terms corresponding to the 3 highest and lowest k we get Indeed, from Equation we know that there is Similarly, from Equation we know, that there is x ∈ B such that f (x) ≥ ξ q = ξ q−1 + We know further, that if p + 3 ≤ q, than there is an x ∈ B such that f (x) ≥ ξ p+3 = ξ p+2 + If p + 3 > q the lower bound we want to prove becomes vacuous and only the upper one needs to be proven. Thus we have where l:= min f (B) and u:= max f (B). where l, u ∈ R m such that l k:= min f (B) k and u k:= max f (B) k for all k. Proof. This is a direct consequence of using Theorem 4.5 and the Tietze extension theorem to produce a neural network for each dimension d of the codomain of f. Note that Theorem 1.1 is a special case of Theorem 4.6 with d = 1 to simplify presentation. We proved that for all real valued continuous functions f on compact sets, there exists a ReLU network n approximating f arbitrarily well with the interval abstraction. This means that for arbitrary input sets, analysis using the interval relaxation yields an over-approximation arbitrarily close to the smallest interval containing all possible outputs. Our theorem affirmatively answers the open question, whether the Universal Approximation Theorem generalizes to Interval analysis. Our address the question of whether the interval abstraction is expressive enough to analyse networks approximating interesting functions f. This is of practical importance because interval analysis is the most scalable non-trivial analysis. Lemma A.1 (Monotonicity). The operations +, − are monotone, that is for all Further the operation * and R are monotone, that is for all Proof. [Definition A.2 (N -slicing). Let Γ ⊂ R m be a compact m-dimensional box and let f: Γ → R be continuous. The N -slicing of f is a set of functions {f k} 0≤k≤N −1 defined by where Proof. Pick x ∈ Γ and let l ∈ {0, . . ., N − 1} such that ξ l ≤ f (x) ≤ ξ l+1. Then Definition A.4 (clipping). Let a, b ∈ R, a < b. We define the clipping function R [*,b]: R → R by Lemma A.5 (clipping). The function R [*,b] sends all x ≤ b to x, and all x > b to b. Further, Proof. We show the proof for R [a,b], the proof for R [*,b] is similar. Definition A.6 (nmin). We define the ReLU network nmin: Lemma A.7 (nmin). Let x, y ∈ R, then nmin(x, y) = min(x, y). Proof. 
Because nmin is symmetric in its arguments, we assume w.o.l.g. x ≥ y. Definition A.8 (nmin N). For all N ∈ N ≥1, we define a ReLU network nmin N defined by Proof. The symmetry on abstract elements is immediate. In the following, we omit some of to improve readability. Claim: R(a+c)−R(−a−c) = a+c. If a+c > 0 then −a−c < 0 thus the claim in this case. Indeed: So the expression simplifies to We proceed by case distinction: By symmetry of nmin equivalent to Case 1. Hence Case 3: a − d < 0 and b − c > 0: Proof. By induction. Base case: Induction hypothesis: The property holds for N s.t. 0 < N ≤ N − 1. Induction step: Then it also holds for N: Proof. By induction: Let N = 2: Lemma A.14 Induction hypothesis: The statement holds for all 2 ≤ N ≤ N − 1. Proof. Let N ∈ N such that N ≥ 2 ξmax−ξmin δ where ξ min:= min f (Γ) and ξ max:= max f (Γ). For simplicity we assume Γ = m. Using the Heine-Cantor theorem, we get that f is uniformly continuous, thus there exists a δ > 0 such that ∀x, y ∈ Γ.||y − x|| ∞ < δ ⇒ ||f (y) − f (x)|| <
We prove that for a large class of functions f, there exists an interval-certified robust network approximating f up to arbitrary precision.
383
scitldr
In this paper, we propose an efficient framework to accelerate convolutional neural networks. We utilize two types of acceleration methods: pruning and hints. Pruning can reduce model size by removing channels of layers. Hints can improve the performance of student model by transferring knowledge from teacher model. We demonstrate that pruning and hints are complementary to each other. On one hand, hints can benefit pruning by maintaining similar feature representations. On the other hand, the model pruned from teacher networks is a good initialization for student model, which increases the transferability between two networks. Our approach performs pruning stage and hints stage iteratively to further improve the performance. Furthermore, we propose an algorithm to reconstruct the parameters of hints layer and make the pruned model more suitable for hints. Experiments were conducted on various tasks including classification and pose estimation. Results on CIFAR-10, ImageNet and COCO demonstrate the generalization and superiority of our framework. In recent years, convolutional neural networks (CNN) have been applied in many computer vision tasks, e.g. classification BID21; BID6, objects detection BID8; BID30, and pose estimation BID25. The success of CNN drives the development of computer vision. However, restricted by large model size as well as computation complexity, many CNN models are difficult to be put into practical use directly. To solve the problem, more and more researches have focused on accelerating models without degradation of performance. Pruning and knowledge distillation are two of mainstream methods in model acceleration. The goal of pruning is to remove less important parameters while maintaining similar performance of the original model. Despite pruning methods' superiority, we notice that for many pruning methods with the increase of pruned channel number, the performance of pruned model drops rapidlly. Knowledge distillation describes teacher-student framework: use high-level representations from teacher model to supervise student model. Hints method BID31 shares a similar idea of knowledge distillation, where the feature map of teacher model is used as high-level representations. According to BID36, the student network can achieve better performance in knowledge transfer if its initialization can produce similar features as the teacher model. Inspired by this work, we propose that pruned model outputs similar features with original model's and provide a good initialization for student model, which does help distillation. And on the other hand, hints can help reconstruct parameters and alleviate degradation of performance caused by pruning operation. FIG0 illustrates the motivation of our framework. Based on this analysis, we propose an algorithm: we do pruning and hints operation iteratively. And for each iteration, we conduct a reconstructing step between pruning and hints operations. And we demonstrate that this reconstructing operation can provide a better initialization for student model and promote hints step (See FIG1 . We name our method as PWH Framework. To our best knowledge, we are the first to combine pruning and hints together as a framework. Our framework can be applied on different vision tasks. Experiments on CIFAR- 10 , and Hints can help pruned model reconstruct parameters. And the network pruned from the teacher model can provide a good initialization for student model in hints learning.effectiveness of our framework. 
Furthermore, our method is a framework where different pruning and hints methods can be included. To summarize, the contributions of this paper are as follows: FORMULA0 We analyze the properties of pruning and hints methods and show that these two model acceleration methods are complementary to each other. To our best knowledge, this is the first work that combines pruning and hints. Our framework is easy to be extended to different pruning and hints methods. Sufficient experiments show the effectiveness of our framework on different datasets for different tasks. Recently, model acceleration has received a great deal of attention. Quantization methods BID29 BID5; BID20 BID39 reduce model size by quantizing float parameters to fixed-point parameters. And fixed-point networks can be speeded up on special implementation. Group convolution based methods BID16 BID3 separates a convolution operation into several groups, which can reduce computation complexity. Several works exploit linear structure of parameters and approach parameters using lowrank way to reduce computational parameters BID7 BID19; BID0. In our experiments, we use two of current mainstream model acceleration way: pruning and knowledge distillation. Network pruning has been proposed for a long time, such as BID11; BID12; BID22. Recent pruning methods can be roughly adopted in two levels, i.e. channel-wise BID28; BID14 BID32; BID17 and parameter-wise BID10; BID35; BID23 BID34; BID27.In this paper, we use channel-wise approach as our pruning method. There are many methods in channel-wise family. He et al. BID14 prune channels in LASSO regression way from sample feature map. Proposed in BID26, the scale parameters in Batch Normalization layers are used to evaluate the importance of different channels. BID28 use taylor formula to analyze each channel's contribution to the loss and prune the lowest contribution channel. We utilize this method in our framework. Despite the superiority of pruning methods, we find that the effectiveness of them will observably decrease with the increase of pruned channel numbers. Knowledge distillation (KD) BID15 is the pioneering work of this field. Hinton et al. BID15 define soft targets and use it to supervise student networks. Beyond soft targets, hints are introduced in Fitnets BID31, which can be explained as whole feature map mimic learning. Several researches have focused on hints. BID37 propose atloss to mimic combined output of an ensemble of large networks using student networks. Furthermore, Li et al. BID24 demonstrate a mimic learning strategy based on region of interests to improve small networks' performance for object detection. However, most of these works train student model from scratch and ignore the significance of student networks' initialization. In this section, we will describe our method in details. First we show hints and pruning methods which have been used in our framework. Then we introduce reconstructing operation and analyze its effectiveness. Finally, combining hints, pruning and reconstructing operation, we propose PWH Framework. The pruning method we use in this paper is based on BID28. The algorithm can be described as a combinatorial optimization problem: DISPLAYFORM0 Where C(·) is the cost function of the task, D is the training samples, W and W are the parameters of original and pruning networks. In the optimization problem, ||W || 0 bounds the number of nonzero parameters in W. The parameter W i whether to be pruned completely depends on its outputs h i. 
And the problem can be redescribed as minimizing ∆C(DISPLAYFORM1 However, it's hard to find global optimal solution of the problem. Using taylor expansion can get approximate formula of the objective function: DISPLAYFORM2 During backpropagation, we can get gradient δC δhi and activation h i . So this criteria can be easily implemented for channel pruning. Hints can provide an extra supervision for student network, and it usually combines with task loss. The whole loss function of hints learning is represented as follows: DISPLAYFORM0 Where L t is the task loss (e.g. classification loss) and L h is the hints loss. λ is hints loss weight which determines the intensity of hints supervision. There are many types of hints methods. Different hints methods are suitable for different tasks and network architectures. To demonstrate the superiority and generalization of our framework, we try three kinds of hints methods in our experiments: L2 hints, normalization hints and adaption hints. We introduce L2 hints, normalization hints in appendix. First we slim the original network with reducing certain number of channels. Then we reconstruct the hints layer of the pruned model to minimize the difference of feature map between teacher and student. Finally, we start hints step to advance the performance of pruned model. Adaption Hints demonstrates that it's necessary to add an adaption layer between student and teacher networks. The adaption layer can help transfer student layer feature space to teacher layer feature space, which will promote hints learning. The expression of adaption hints is described in equation 4. DISPLAYFORM1 Where r(·) is adaption layer. And for convolutional neural networks, adaption layer is 1 × 1 convolution layer. The objective function of reconstructing step is: DISPLAYFORM0 Where Y is the feature map of original (teacher) model, X is the input of hints layer and W is the parameter of hints layer. The optimization problem has close-form solution using least square method: DISPLAYFORM1 However, because some datasets (e.g. ImageNet BID6) have numerous images, X will be high-dimension matrix. And it's impossible to solve the problem which involves such huge matrix in one time. So we randomly sample images in dataset and compute X according to these images. Due to the randomness, the reconstructed weights may be worse than original weights. Thus, we finally use a switch to select better weights (See equation 6).W f = arg min DISPLAYFORM2 Where W f, W o and W r are the final weights, original weights and reconstructed weights of hints layer. Y 0 and X 0 are computed from the whole dataset. The objective function of Normalized L2 loss (See equation 16) is different from L2 loss, but we explain that the reconstruction step is also effective for normalized L2 loss. The details of proof is available in supplementary material. Combining pruning step, reconstructing step and hints step, we propose our PWH Framework. The framework iteratively conducts three steps. For pruning, we reduce the model size by certain num-bers of channels. Then the parameters of hints layer will be reconstructed to minimize the difference of feature map between pruned model's and teacher model's. Finally we use pruned model as student, original model as teacher and conduct hints learning. And in the next iteration, the original model will be substituted by the student model in this iteration. After training, the student model becomes the teacher model for the next iteration. 
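A minimal sketch of the first-order Taylor pruning criterion described above: a channel's importance is approximated by the absolute value of (activation x gradient) averaged over the batch and spatial positions, and the lowest-scoring channels are pruned first. The activation and gradient tensors below are synthetic stand-ins for what a framework would record during back-propagation; the per-step pruning budget is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

batch, channels, H, Wd = 8, 16, 14, 14
activation = rng.normal(size=(batch, channels, H, Wd))            # h_i
grad_wrt_activation = rng.normal(size=(batch, channels, H, Wd))   # dC/dh_i

def taylor_channel_scores(act, grad):
    # |E[dC/dh_i * h_i]| per channel: average the elementwise product over
    # batch and spatial dims, then take the absolute value.
    contribution = (act * grad).mean(axis=(0, 2, 3))
    return np.abs(contribution)

scores = taylor_channel_scores(activation, grad_wrt_activation)
prune_per_step = 4
to_prune = np.argsort(scores)[:prune_per_step]   # channels with smallest |Delta C|
print("channel scores:", np.round(scores, 4))
print("channels selected for pruning this step:", sorted(to_prune.tolist()))
```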
And another hints step is implemented at the ending of the framework where the teacher model will be set as the original model at the beginning of training (the teacher model in the first iteration). The pseudocode of our approach is provided in appendix. We demonstrate that compared with the original model in the first iteration, the student model in the current iteration is a better candidate for the teacher model in next hints step. The reason is that the model before pruned and after pruned have more similar feature map and parameters, which can promote and speed up hints step. At the end of the whole framework, we do another hints step. Different from preceding hints step, the teacher model is selected as the original model in the first iteration. We demonstrate that the final hints step is like the finetune step in pruning methods, which may need long-time training and improves the performance of compressed model. And original model in the first iteration will be the better teacher. FIG1 shows the pipeline of our framework. The hints and pruning in PWH Framework are complementary to each other. On one hand, pruned model is a good initialization to student model in hints step. We propose that the feature map of pruned model is similar to original model's compared with random initialization model's. In this way, proposed in BID36, the transferability between student and teacher network will increase, which is beneficial for hints learning. Experiments in §4.4 demonstrate that the difference between original model's and pruned model's feature map is much smaller than random initialized model's. On the other hand, hints helps pruning reconstruct parameters. We demonstrate that when pruned channels number is large, pruning method is inefficient. The pruning operation will bring large degradation of performance in this case. We find that pruning numerous channels will destroy the main structure of networks(See 3). And hints can alleviate this trend and help recover the structure and reconstruct parameters in model(See 4).The motivation of reconstruct step is the generalization of our method. Our approach is a framework and it should be available for different pruning methods. However, the criteria of some pruning methods are not based on minimizing the reconstructing error of feature map. In other words, there is still room to improve the similarity between feature map of original (teacher) and pruned (student) networks, which is beneficial for hints learning. We only conduct reconstructing operation on hints layer because it can not only reduce the difference of feature map used for hints but also maintain the main structure of the pruned model (See experiments in 4.3.3). Moreover, for adaption hints methods, it need to initialize adaption layer(hint layer). Compared with random initialization, reconstruction operation can help to construct this layer and provide more similar features with teacher models'. We conduct experiments on CIFAR-10 , ImageNet BID6 and for classfication and pose estimation tasks to demonstrate the superiority of PWH Framework. In this section, we first introduce implementation details in different experiments on different datasets. Then we compare our method with pruning methods and hint methods. We train networks using PyTorch deep learning framework. Pruning-only refers to a classical iterative pruning and finetuning operation. And for hints-only methods, we set original model as teacher model and use the compressed random initialized model as student model. 
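To make the reconstructing step described above concrete, the following sketch fits the hints (1x1 adaption) layer by least squares on a sample of features and then applies the weight-selection switch of Equation 6; the feature dimensions, the synthetic "teacher" features, and the noise level are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)

c_in, c_out, n_positions = 64, 128, 5000     # sampled spatial positions x images

X = rng.normal(size=(c_in, n_positions))     # student-side input to the hints layer
W_true = rng.normal(size=(c_out, c_in))
Y = W_true @ X + 0.1 * rng.normal(size=(c_out, n_positions))   # teacher feature map

W_orig = rng.normal(size=(c_out, c_in))      # e.g. a randomly initialised adaption layer

# Closed-form least squares:  W_rec = argmin_W ||Y - W X||_F^2
W_rec = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T

def recon_error(Wm, X_eval, Y_eval):
    return np.linalg.norm(Y_eval - Wm @ X_eval) ** 2

# The "switch": keep whichever weights give lower error on a larger sample
# (standing in for the whole dataset).
X_eval = rng.normal(size=(c_in, 20000))
Y_eval = W_true @ X_eval + 0.1 * rng.normal(size=(c_out, 20000))

W_final = min([W_orig, W_rec], key=lambda Wm: recon_error(Wm, X_eval, Y_eval))
print("error with original weights     :", recon_error(W_orig, X_eval, Y_eval))
print("error with reconstructed weights:", recon_error(W_rec, X_eval, Y_eval))
print("switch keeps reconstructed weights:", W_final is W_rec)
```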
For fair comparison, the student model in hints-only shares the same network structure with student model in PHW Frame- TAB1 illustrates . We can find that PWH Framework outperforms pruning-only method and hints-only method for a large margin on all datasets for different tasks, which verify the effectiveness of our framework and also shows that hints and pruning can be combined to improve the performance. Results on different tasks and models show that PWH Framework can be implemented without the restriction of tasks and network architectures. Moreover, illustrated in table 1, our framework can be applied for different pruning ratios, which means that we can adjust pruning ratio in the framework for different tasks to achieve different acceleration ratios. To further analyze PWH Framework, we do several ablation studies. All experimetns are conducted on CIFAR-10 dataset using VGG16BN. The feature map proposed in this section refers to the output of last convolution layer, which is also the feature map used for hints learning. And in this section, we do ablation study for three different aspects. First, we do experiments to show iterative operation is an important component of PWH Framework. Then we study on the selection of teacher model in hints step. Finally, we validate on the effects of reconstructing step. In PWH Framework, we implement three steps iteratively. And in this section we will show the importance and necessity of iterative operation. We conduct an experiment to compare the effects The relationship between the performance of network and the number of pruned channels using different methods. We conduct experiment iteratively and for each iteration we prune 256 channels.of doing pruning and hints only once (i.e. First do pruning and then do hints. Both operations are conducted only once.) and doing pruning and hints iteratively. TAB2 shows . We can see that iterative operation can improve the performance of model dramatically. To further explain this , we do another experiment: we analyze the relationship between the performance of pruned model and the number of pruned channels. Results in FIG3 illustrate that when the number of pruned channels is large, the performance of pruned model will drop rapidlly. Thus, if we only do pruning and hints once, pruning will bring large degradation of performance and pruned model cannot output the similar feature to original model's. And in this way, pruned model is not a more resonable initialization to student model. Pruning step is useless to hints step in this situation. The teacher model is the pruned model from previous iteration in PWH Framework. Original model at the beginning of training can be another choice for teacher model in each iteration. We do an experiment to compare these two set-up for teacher model. And in this experiment, we prune 256 channels in each iteration. FIG4 shows . We observe that when iteration is small, using original model in the first iteration as teacher model has a comparable performance with using the pruned model in the previous iteration. However, with the increase of iterations, we can find that superiority of using the pruned model in the previous iteration increases. The reason is that the original model in the first iteration has higher accuracy so it performs well when iteration is small. But when iteration becomes large, pruned model's feature map will have large difference with original model's feature map. 
And in this situation, there is a gap between pruned model and teacher model in hints step. On the other hand, using the pruned model in the previous iteration will increase the similarity of feature map between student model's and teacher model's, which will help distillation in hints step. Proposed in §3.3, reconstructing step is used to further refine pruned model's feature and make it more similar to teacher's. We conduct the experiment to validate the effectiveness of reconstructing step. To fairly compare, we implement PWH Framework with and without reconstructing step. In each iteration, we prune 256 channels. We study on the accuracy of compressed model using two different methods. Furthermore, we also analyze L2 loss between pruned model's and original model's feature map in each iteration. Figure 5 shows experiment . We find that the method with reconstructing step performs better. We want our framework to be adaptive to different pruning methods but some of the pruning methods' criteria are not minimizing the reconstructing feature map's error. Reconstructing step can be used to solve this problem and increase the similarity of feature maps between two models. To further analyze the properties of PWH Framework, we conduct further experiments on our approach. The experiments verify our assumptions: pruning and hints are complementary to each other. All experiments are conducted on CIFAR-10 dataset using VGG16 as the original model. We conduct experiment to compare the reconstructing feature map error between pruned model and random initial model. We use the pruning method in §3.1 to prune certain number of channels from original network and we calculate L2 loss between pruned model's feature map and original model's feature map. Similarly, we randomly initialize a model whose size is same to the pruned model and calculate L2 loss of feature map between this model and original model. Then we increase pruned channel number and record these two errors accordingly. In TAB2, we notice that in a large range (0-1024 pruned channels) pruned model's feature map is much more similar to original model's. And this demonstrate that the transferability between pruned model and original model is larger. And student model, initialized with the weights of pruned model, can perform better in hints learning. To demonstrate hints method is beneficial to pruning, we first compare experiments between pruning-only with pruning and hints. Different from PWH Framework, pruning and hints method used in this section doesn't have reconstructing step. This is because we want to show the effectiveness of hints and reconstructing step is an extraneous variable. In contrast experiments, we iteratively implement pruning and hints operations. To fairly compare, in pruning and hints method, we substitute finetune operation for hints operation to get pruning-only method. In Figure 6, we observe that pruning and hints method has comparable performance with pruning-only method on small amount of iteration. However, the margin of two methods becomes larger and larger with the increase of iterations (pruned channels number). This phenomenon caused by the huge performance degradation in pruning operation when the original model is small. The small model doesn't have many redundant neurons and the main structure will be broken in pruning. And hints can alleviate this trend and help reconstruct parameters in pruned model. In this paper, we propose PWH Framework, an iterative framework for model acceleration. 
Our framework takes the advantage of both pruning and hints methods. To our best knowledge, this is the first work that combine these two model acceleration methods. Furthermore, we conduct reconstructing operation between hints and pruning steps as a cascader. We analyze the property of these two methods and show they are complementary to each other: pruning provides a better initialization for student model and hints method helps to adjust parameters in pruned model. Experiments on CIFAR-10, ImageNet and COCO datasets for classification and pose estimation tasks demonstrate the superiority of PWH Framework. In this supplementary material, we first provide more implementation details for better illustration of our experiments. In the second part, we give a proof in §3.3. We show that the upper bound of normalized L2 loss will decrease if L2 loss decreases theoretically. Following section contains more implementation details and We use PyTorch deep learning framework with 4 NVIDIA Titan X GPUs. A.0.1 CIFAR-10:CIFAR-10 dataset has 10 classes containing 6000 32 × 32 color images each. 50000 images are used for training and 10000 for test. We use top-1 error for evaluation. For model, we use the VGG-16 BID33 network with BatchNorm BID18. In CIFAR-10 finetune (hints) step, we use standard data augmentation with random cropping with 4 pixels, mean substraction of (0.4914, 0.4822, 0.4465) and std to (0.2023, 0.1994, 0.2010). A batch size of 128 and learning rate of 1e-3 are used. We set finetune (hints) epoch as 20. The hints method utilized on this dataset is L2 hints method with loss weight 10. We sample 1000 images to reconstruct weights. ImageNet classification dataset consists of 1000 classes. We train models on 1.28 million training images and test models on 100k images. Top-1 error is used to evaluate models. In this experiment, ResNet18 is our original model and we use PWH Framework to compress it. During finetune (hints) stage, the batchsize is 256, learning rate is 1e-3. We set mean substraction to (0.485, 0.456, 0.406) and std to (0.229, 0.224, 0.225). The loss weight is set as 0.5. The random cropping is used. In rescontructing stage, 1000 images are sampled to reconstruct hints layer weights. We conduct pose estimation experiment on COCO dataset. In this experiment, we train our models on trainval dataset and evaluate models on minival set. The evaluation criteria for COCO dataset we use is OKS-based mAP. We use ResNet18 with as the original model in the experiment. And in this experiment, we use random cropping, random rotation and random scale as our data augmentation strategy. We use a weight decay of 1e-5 and learning rate of 5e-5. The loss weight is set as 0.5. A batch size of 96 is used. The number of sampled images in reconstructing step is 500. In reconstructing step, we use least square method to reconstruct parameters in hints layer. The objective function of this step can be described in equation 7. DISPLAYFORM0 Where Y is the feature map of original (teacher) model, X is the input of hints layer and W is the parameter of hints layer. However, many hints methods use normalized L2 loss as hints loss (See equation 8). It's difficult to optimize the problem using common methods if we set normalized L2 loss as objective function. DISPLAYFORM1 In this section, we will show that the upper bound of normalized L2 loss is related to L2 loss. In other words, if L2 loss decreases, the upper bound of L2 loss will decrease. 
For an image x, y is the feature map of teacher network whose input is x. We suppose that: y = W x+e = y 0 + e DISPLAYFORM2 Where e is the error and it is independent with x. E[·] means the expectation. We suppose that 1 y can be expressed as the function of y 0 and e using taylor expansion. Where I is the identity matrix. For convenience, we assume that K =
This is a work aiming to boost all existing pruning and mimic methods.
384
scitldr
For typical sequence prediction problems such as language generation, maximum likelihood estimation (MLE) has commonly been adopted as it encourages the predicted sequence most consistent with the ground-truth sequence to have the highest probability of occurring. However, MLE focuses on once-to-all matching between the predicted sequence and gold-standard, consequently treating all incorrect predictions as being equally incorrect. We refer to this drawback as {\it negative diversity ignorance} in this paper. Treating all incorrect predictions as equal unfairly downplays the nuance of these sequences' detailed token-wise structure. To counteract this, we augment the MLE loss by introducing an extra Kullback--Leibler divergence term derived by comparing a data-dependent Gaussian prior and the detailed training prediction. The proposed data-dependent Gaussian prior objective (D2GPo) is defined over a prior topological order of tokens and is poles apart from the data-independent Gaussian prior (L2 regularization) commonly adopted in smoothing the training of MLE. Experimental show that the proposed method makes effective use of a more detailed prior in the data and has improved performance in typical language generation tasks, including supervised and unsupervised machine translation, text summarization, storytelling, and image captioning. Language understanding is the crown jewel of artificial intelligence. As the well-known dictum by Richard Feynman states, "what I cannot create, I do not understand." Language generation therefore reflects the level of development of language understanding. Language generation models have seen remarkable advances in recent years, especially with the rapid development of deep neural networks (DNNs). There are several models typically used in language generation, namely sequenceto-sequence (seq2seq) models (; ; ; ;), generative adversarial networks (GANs) , variational autoencoders , and auto-regressive networks . Language generation is usually modeled as a sequence prediction task, which adopts maximum likelihood estimation (MLE) as the standard training criterion (i.e., objective). MLE has had much success owing to its intuitiveness and flexibility. However, sequence prediction has encountered the following series of problems due to MLE. • Exposure bias: The model is not exposed to the full range of errors during training. • Loss mismatch: During training, we maximize the log-likelihood, whereas, during inference, the model is evaluated by a different metric such as BLEU or ROUGE. • Generation diversity: The generations are dull, generic (; ; a), repetitive, and short-sighted (b). • Negative diversity ignorance: MLE fails to assign proper scores to different incorrect model outputs, which means that all incorrect outputs are treated equally during training. A variety of work has alleviated the above MLE training shortcomings apart from negative diversity ignorance. Negative diversity ignorance is a of unfairly downplaying the nuance of sequences' detailed token-wise structure. When the MLE objective compares its predicted and ground-truth sequences, it takes a once-for-all matching strategy; the predicted sequence is given a binary label, either correct or incorrect. However, these incorrect training predictions may be quite diverse and letting the model be aware of which incorrect predictions are more incorrect or less incorrect than others may more effectively guide model training. 
For instance, an armchair might be mistaken with a deckchair, but it should usually not be mistaken for a mushroom. To alleviate the issue of the negative diversity ignorance, we add an extra Gaussian prior objective to augment the current MLE training with an extra Kullback-Leibler divergence loss term. The extra loss is computed by comparing two probability distributions, the first of which is from the detailed model training prediction and the second of which is from a ground-truth token-wise distribution and is defined as a kind of data-dependent Gaussian prior distribution. The proposed data-dependent Gaussian prior objective (D2GPo) is then injected into the final loss through a KL divergence term. The D2GPo is poles apart from the commonly adopted data-independent Gaussian prior (L2 regularization) for the purpose of smoothing the training of MLE, which is also directly added into the MLE loss. Experimental show that the proposed method makes effectively use of a more detailed prior in the data and improves the performance of typical language generation tasks, including supervised and unsupervised machine translation, text summarization, storytelling, and image captioning. Natural language generation (NLG) has long been considered the most challenging natural language processing (NLP) task . NLG techniques have been widely adopted as a critical module in various tasks, including control-free sentence or poem generation and input-conditioned language generation, such as machine translation, image captioning, text summarization, storytelling (; ; ;), and sentiment/tense-controlled sentence generation. In this work, we focus on input-conditioned language generation tasks, though our proposed method can also be applied to other language generation fields. Input-conditioned language generation tasks are challenging because there is an information imbalance between the input and output in these tasks, especially for cases with non-text input . discussed different ways of building complicated knowledge-based systems for NLG. In recent years, neural networks (NNs), especially DNNs, have shown promising in many NLP tasks. first proposed the NN language model (NNLM) to exploit the advantages of NNs for language generation tasks. In an NNLM, the n-gram paradigm is extended by the generalization ability of NNs. Given the ground truth sequence s = w 1, w 2,..., w t−1, the NNLM adopts the equation p t ≈ p(w t |w t−n, w t−n+1, ..., w t−1). developed a more general implementation for a language model (called the recurrent NN language model (RNNLM) by integrating a Markov property using a recurrent NN (RNN) to address the NNLMs' theoretical inability to capture long-term dependencies: The RNNLM is an effective solution because it is designed to capture long-term dependencies. Because of the vanishing gradient problem in RNNs, however, the long-term dependency processing capability is limited. In contrast to an RNN, the Transformer provides a new self-attention mechanism for handling long-term dependencies in text, ing in robust performance across diverse tasks. proposed a Transformer language model called GPT, which uses a left-to-right architecture, where each token pays attention to previous tokens in the self-attention layers of the Transformer. introduced a new pre-training objective: the masked language model (MLM), which enables the representation to fuse the left and right contexts and allows us to pre-train a deep bidirectional Transformer called BERT. 
The generators of the most current language generation model use the RNNLM or Transformer language model structure. However, as pointed out by, fitting the distribution of observation data does not mean that satisfactory text will be generated, because the model is not exposed to the full range of errors during training. This is called the exposure bias problem. Reinforcement learning, GANs , and end-to-end re-parameterization techniques (Kusner & Hernández-) have been proposed to solve this problem. The exposure bias is no longer an issue in reinforcement learning models because the training sequences are generated by the model itself. Using MLE for the training objective leads to the problem of loss mismatch. incorporated the evaluation metric into the training of sequence-to-sequence (seq2seq) models and proposed the mixed incremental cross-entropy reinforce (MIXER) training strategy, which is similar to the idea of minimum risk training (; ; ; There is an increasing interest in incorporating problem field knowledge in machine learning approaches (; ;). One common way is to design specialized network architectures or features for specific knowledge (e.g.,). In contrast, for structured probabilistic models, posterior regularization and related frameworks (; ;) provide a general means to impose knowledge constraints during model estimation. established a mathematical correspondence between posterior regularization and reinforcement learning and, using this correspondence, expanded posterior regularization to learn knowledge constraints as the extrinsic reward in reinforcement learning. Our approach can be seen as incorporating a priori knowledge of the language field into language generation learning. proposed a new objective, unlikelihood training, which forces unlikely generations to be assigned lower probability by the model. The difference is that focuses on low-frequency words while our model focuses on negative tokens. It is claimed that the likelihood objective itself is at fault, ing in a model that assigns too much probability to sequences containing repetition and frequent words, unlike those from the human training distribution. From this point of view, there is a point in common with our motivation, which is to make the model prediction consistent with human training distribution to some extent. Consider a conditional probability model for sequence predictions y ∼ p θ (x) with parameters θ. The target sequence y can be conditioned on any type of source x (e.g., a phrase, sentence, or passage of human language or even an image), which is omitted for simplicity of notation. For the sequence y = y 1, y 2,..., y l, the probability p θ (y|x) is Published as a conference paper at ICLR 2020 Commonly, sequence prediction models are trained using MLE (also known as teacher forcing) . MLE minimizes the negative log-likelihood of p θ (y|x): Optimizing the MLE objective L MLE (θ) is straightforward and meets the principle of empirical risk minimization while focusing on only minimizing losses of the correct target on the training data set. However, there may be noise in the training data, and forcibly learning the distribution of a training set does not enable the obtained model to reach good generalization. Additionally, for sequence prediction, models trained subject to MLE cursorily evaluate all predictions as either correct or incorrect and ignore the similarity between the correct and "less incorrect" predictions. 
Incorrect predictions might range from nearly perfect (i.e., one token is mistaken with a synonym) to completely wrong, having nothing in common with the gold sequence. However, MLE training treats all incorrect training predictions equally, which implies that MLE fails to accurately assign scores to diverse (especially negative) model predictions. To capture the diversity of negative training predictions, we augment the MLE objective of the model with an additional objective O that more accurately models such a negative diversity. Without loss of generality, supposingỹ is the prediction candidate, we introduce a general evaluation function f (ỹ, y) ∈ R independent of the model prediction, such that with a golden target token y *, a higher f (ỹ, y *) value indicates a better p θ (ỹ|x) for a target candidateỹ ∈ V (where V is the target candidate set). Note that f (ỹ, y) can also involve other factors such as latent variables and extra supervisions. There are two main methods of learning f (ỹ, y) in the model. If p θ is a GAN-like implicit generative model or an explicit distribution that can be efficiently reparametrized (e.g., Gaussian) , then one effective method is maximizing E p θ [f (ỹ, y)]. The other method is computing the gradient ∇ θ E p θ [f (ỹ, y)] using the log-derivative trick that can suffer from high variance but is often used for the large set of non-parameterizable explicit distributions. Corresponding to the probability distribution of model predictions p θ (·), we define a prior distribution q(y) (for each target y i, it has its own unique distribution of q i = q(y i)) which is extracted and derived from the ground-truth data (e.g., language text in language generation tasks). To guide the probability distribution of model predictions p θ (·) to match the prior probability distributions q(·), we adopt Kullback-Leibler divergence. Considering also the learning of the evaluation function f (ỹ, y), the loss for objective O is calculated as where α is a weight for the evaluation function learning term. We derive the prior distribution q(y) from the ground-truth data (which is independent of model parameters θ), and therefore in which KL divergence can be expanded as The final objective for learning the model is written as where λ is the balancing hyperparameter. Because optimizing the original model objective L MLE (θ) is straightforward, in the following, we omit the discussion of L MLE (θ) and focus on the proposed The prior probability distribution q(y *) on y * can be obtained from the evaluation function f (·, ·) with a softmax operation. To expose the mass of the distribution over the classes, introduced a softmax temperature mechanism; therefore, the relationship between q and f (ỹ, y) is where T is a temperature parameter. When T → 0, the distribution becomes a Kronecker distribution (and is equivalent to a one-hot target vector); when T → +∞, the distribution becomes a uniform distribution. The softmax operation always turns an evaluation function f (·, ·) into a form of probability distribution no matter the form of the original f (·, ·); thus, we only focus on f (·, ·). To find a good evaluation function, we have to mine token-wise diversity for all y *. Considering all token typesỹ j in a vocabulary, with respect to each y *, there exists a prior topological order ORDER(y *) among all the known tokens, in which y * is always ranked top priority. 
f (ỹ j, y *) can then be defined as a monotonic function over the corresponding topological order so that it has a maximal value only when the input is y * itself. Note that defining f (·, ·) in this way leads to the ing q also being monotonic over the corresponding topological order. Considering that q is a priori, it is fixed throughout the learning process. The remaining questions are about how to find a meaningful evaluation function f (·, ·) for the distribution q. In language generation tasks, we may conveniently take word embedding as the token representation, and let the embedding distance determine such an order ORDER(y *) for each y *. In this work, we adopt the cosine similarity of pre-trained embeddings to sort the token (word/subword) order. Discussion For the evaluation function f (·, ·) of q, we adopt the Gaussian probability density function, though later we also present experimental for other types of functions in an ablation study. As the adopted Gaussian prior used in the training objective is derived from a datadependent token-wise distribution, we call it the data-dependent Gaussian prior objective (D2GPo). This objective is a big departure from the Gaussian prior commonly adopted for smoothing in MLE training (which we the data-independent Gaussian prior). The following briefly explains why we chose the Gaussian probability density function and how our D2GPo mathematically differs from the data-independent Gaussian prior. The central limit theorem indicates that suitably standardized sums of independent random variables have an approximately normal distribution. Thus, any random variable that arises as the sum of a sufficiently large number of small random components can be modeled accurately using a normal distribution. Embedding has a linear additive property (e.g., king -man + woman ≈ queen). The additive property of embedding can be explained by inspecting the training objective . Each dimension of an embedding represents a potential feature of the token. Considering each potential feature as an independent random variable, the sum follows a Gaussian distribution centered on the correct vocabulary unit y * according to the linear additive property. We can therefore use a Gaussian distribution for the embedding-distance-determined order to effectively model the distribution q(y *). An overview of the concepts underlying D2GPo is illustrated in Appendix A.1. The D2GPo in this paper is different from the data-independent Gaussian prior in machine learning optimization theory. We hypothesize and experimentally verify that the embedding feature extracted from the data obeys the Gaussian distribution. The distribution from the prior knowledge of language data is used as a soft target to guide the model language generation process using knowledge distillation. The Gaussian prior in the machine learning optimization theory assumes that each component in the parameter θ is subject to a zero-mean Gaussian prior distribution, which is equivalent to L2 regularization. In general, our Gaussian prior objective is to act on the guiding target probability, while the Gaussian prior in machine learning is applied to the selection of model parameters. This section describes the experimental evaluation of D2GPo on a variety of typical language generation tasks: neural machine translation (NMT), text summarization, storytelling, and image captioning. The hyperparameters in D2GPo and effect analysis are given in Appendix A.7. 
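Before turning to the experiments, the construction described above can be summarized in a short Python sketch. It is a minimal illustration under stated assumptions: the Gaussian density is applied to the rank position in the embedding-similarity order (the exact parameterisation and the standard deviation sigma are not pinned down in this excerpt), the helper names d2gpo_prior and d2gpo_loss are hypothetical, and the defaults T = 2.0 and lam = 0.1 mirror the hyperparameters reported later in Appendix A.7.

```python
import numpy as np

def d2gpo_prior(emb, gold_idx, sigma=1.0, T=2.0):
    """Data-dependent prior q(. | y*) over the vocabulary for one gold token.

    emb: [V, d] matrix of pre-trained (e.g. fastText) token embeddings.
    """
    # Cosine similarity between the gold token and every vocabulary token.
    norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = norm @ norm[gold_idx]
    # Topological order ORDER(y*): rank 0 is the gold token, then decreasing similarity.
    order = np.argsort(-sim)
    rank = np.empty_like(order)
    rank[order] = np.arange(len(order))
    # Evaluation function f: Gaussian density over the rank (assumption about the exact form).
    f = np.exp(-0.5 * (rank / sigma) ** 2)
    # Softmax with temperature T turns f into the prior distribution q.
    z = f / T
    z = z - z.max()
    q = np.exp(z) / np.exp(z).sum()
    return q

def d2gpo_loss(log_p, q, nll, lam=0.1):
    """Final objective for one position: MLE loss plus lambda * KL(q || p_theta)."""
    kl = np.sum(q * (np.log(q + 1e-12) - log_p))
    return nll + lam * kl
```

Since q does not depend on the model parameters, in practice it would be precomputed once per vocabulary entry from the pre-trained embeddings and held fixed throughout training, as stated above.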
Our proposed D2GPo approach for experimental tasks requires either word embeddings or bytepair-encoding (BPE) (b) subword embeddings. We generated the pretrained embeddings using fastText with an embedding dimension of 512, a context window of size 5, and 10 negative samples. For NMT, fastText was applied to the concatenation of source and target language monolingual corpora, ing in cross-lingual BPE subword embedding. For text summarization, we generated BPE subword embedding only on the English monolingual corpora, while for the storytelling and image captioning, we obtained the word embedding also on the English monolingual corpora. We evaluated the model on several widely used translation tasks: WMT14 English-to-German (EN-DE), English-to-French (EN-FR), and WMT16 English-to-Romanian (EN-RO) 1 tasks, which all have standard large-scale corpora for NMT evaluation. Owing to space limits, the data details are provided in Appendix A.3. The sentences were encoded using sub-word types based on BPE, which has a shared vocabulary of 40,000 sub-word units for all three tasks. We chose the Transformer NMT model as our baseline. For the hyperparameters of the Transformer (base/big) models, we followed the settings used by. The BLEU score with multi-bleu.pl was calculated during the evaluation. Table 1: Comparison with baseline and existing systems on supervised translation tasks. Here, "++/+" after the BLEU score indicates that the proposed method was significantly better than the corresponding baseline Transformer (base or big) at significance levels p <0.01/0.05. "STD" represents synthetic training data from (b). In Table 1, we report the performance of our full model, the baseline, and existing systems. Our baseline model obtains similar to those of , the existing strong model used for these tasks. The indicate that our method performed better than the strong baselines for all language pairs. Our model is not only an improvement on the translation model of large-scale training sets but also performs better for small-scale training sets. Refer to Appendix A.8 and A.9 for analysis of the low-resource scenario and generation diversity. For unsupervised machine translation, we also used the three language pairs EN-DE, EN-FR, and EN-RO as our evaluation targets. Note that the evaluation performed on EN-DE uses newstest2016 instead of newstest2014 to ensure the are comparable with the of other works; this is unlike supervised machine translation. We used the masked sequence to sequence the pre-training (MASS) model as our baseline. Following the practice of , we pretrained our model with a masked sequence-to-sequence pre-training (MASS) objective (without D2GPo) on EN, FR, DE, and RO monolingual data samples from WMT 2007-2018 News Crawl datasets that respectively cover 190M, 60M, 270M, and 10M sentences. We then fine-tuned the models on the same monolingual data using the back-translation cross-entropy loss and our D2GPo loss. For the training dataset, we filtered out sentences longer than 175 words in length and jointly learned 60K BPE sub-word units for each language pair. Table 2: BLEU score comparisons between MASS and previous methods of unsupervised NMT. As shown in Table 2, D2GPo consistently outperformed MASS (the state-of-the-art baseline) on all unsupervised translation pairs. Meanwhile, the MASS and XLMsystems leverage large-scale monolingual pre-training, and the decoder (generator, language model) can still be improved by our D2GPo loss in the fine-tuning phase. 
This demonstrates the efficiency of the proposed method. Text summarization is a typical language generation task that creates a short and fluent summary of the given long-text document. fine-tuned the MASS pretrained model on the text summarization task and achieved state-of-the-art . We chose this model as our baseline, maintained consistent pre-training, and used D2GPo loss for enhancements in the fine-tuning phase. We used the Annotated Gigaword corpus as the benchmark, as detailed in Appendix A.4. In the evaluation, ROUGE-1, ROUGE-2, and ROUGE-L are reported. Table 3. We compared our +D2GPo with our baseline MASS, which is the current state-of-the-art model; +D2GPo consistently outperformed the baseline on all evaluation metrics. The models with a semi-supervised setting yielded a large-margin improvement relative to the model without any pre-training, which demonstrates that the supervised pre-training is effective in the text summarization task. Storytelling is at the frontier of current language generation technologies; i.e., stories must maintain a consistent theme throughout and require long-distance dependency modeling. Additionally, stories require creativity and a high-level plot with planning ahead rather than word-by-word generation . We used the hierarchical story generation model (which is introduced in Appendix A.5) as our baseline to test the improvements of D2GPo for the storytelling task. To guarantee the single-variable principle, we added only the D2GPo loss to the story generation model. The prompt generation model is consistent with. For automatic evaluation, we measured the model perplexity on validation and test sets. Table 4 shows obtained using D2GPo. It is seen that with the addition of D2GPo, the Conv seq2seq + self-attention model substantially improved the likelihood of human-generated stories and even outperformed the ensemble or fusion models without increasing the number of parameters. Perplexity was further reduced with the addition of the fusion mechanism. These suggest that Table 4: Perplexity on WRITINGPROMPTS. D2GPo improves the quality of language generation greatly, especially in settings where there are fewer restrictions on story generation tasks. Image captioning is a task that combines image understanding and language generation. It continues to inspire considerable research at the boundary of computer vision and natural language processing. We elected to experiment with image captioning to verify the performance of D2GPo on a language generation model having diverse types of input. In our experiments, we evaluated our model on an ablated baseline (top-down, as detailed in Appendix A.6) against prior work on the MSCOCO 2014 caption dataset , which has became the standard benchmark for image captioning. For validation of model hyperparameters and offline testing, we used Karpathy splits , which have been used extensively in prior work. SPICE , CIDEr , METEOR , ROUGE-L, and BLEU were used to evaluate the caption quality. Table 5: Image caption performance on the MSCOCO Karpathy test split. Table 5 summarizes the performance of our full model and the ResNet Top-down baseline in comparison with the existing strong Self-critical Sequence Training (SCST) approach on the test portion of the Karpathy splits. To ensure a fair comparison, are only reported for models trained with standard cross-entropy loss (i.e., MLE). All are reported for a single model with no fine tuning of the input ResNet model. 
Our ResNet baseline performs slightly better than the SCST models. After incorporating our proposed D2GPo loss, our model improves further across all metrics. According to the analysis in Section 4, for the embedding, we used the Gaussian probability density function as our evaluation function f (·); however, to evaluate the effectiveness of different evaluation functions, we changed the function and tested the performance changes on the supervised NMT EN-DE task. We used the same experiment settings as described in Section 5.2 and compared the BLEU score changes on the test set, as listed in Table 6. Table 6: Ablation study on our proposed D2GPo with different evaluation functions on the supervised NMT WMT14 EN-DE task, with the Transformer-base model. The table shows that the performance of Gaussian density, linear, and cosine functions increased while the performance of the random function decreased. This shows that the distance information obtained from embedding can effectively guide the generation process. Among these functions, the Gaussian density function had the greatest improvement, which agrees with our analysis of the embedding features obeying the Gaussian distribution. We postulate that because the linear and cosine functions are rough approximations of the Gaussian density function, they perform similarly to the Gaussian density function. This work proposed a data-dependent Gaussian prior objective (D2GPo) for language generation tasks with the hope of alleviating the difficulty of negative diversity ignorance. D2GPo imposes the prior from (linguistic) data over the sequence prediction models. D2GPo outperformed strong baselines in experiments on classic language generation tasks (i.e., neural machine translation, text summarization, storytelling, and image captioning tasks). A.1 CONCEPTS UNDERLYING D2GPO Figure 1: Overview of the concepts underlying D2GPo taking the example of the sentence The little boy sits on the armchair. Specifically, for target y i, we calculate the embedding cosine similarity as the distance dist(i, j) of y i and all other token types in the vocabularyỹ j, which are used to give the distance: dist i,j = cosine similarity(emb(y i), emb(ỹ j)). Sorting by distance from small to large to obtain the topological order of token types yields A.3 SUPERVISED NMT DATA For the EN-DE translation task, 4.43M bilingual sentence pairs from the WMT'14 dataset, which includes the Common Crawl, News Commentary, and Europarl v7 datasets, were used as training data. The newstest2013 and newstest2014 datasets were used as the dev set and test set, respectively. For the EN-FR translation task, 36M bilingual sentence pairs from the WMT'14 dataset were used as training data. The newstest2012 and newstest2013 datasets were combined for validation and newstest2014 was used as the test set, following the configuration of. For the EN-RO task, we tested two settings; i.e., Europarl v7, which uses only the officially provided parallel corpus, and SETIMES2, which yields 600,000 sentence pairs for a low-resource supervised machine translation study. Alternatively, following the work of Sennrich et al. (2016a), we used synthetic training data (STD) of Sennrich et al. (2016a), which provides 2.8M sentence pairs for training. We used newsdev2016 as the dev set and newstest2016 as the test set. Our reported on EN-RO are evaluated on a reference for which diacritics are removed from letters. The Annotated Gigaword corpus was used as a benchmark . 
This data set is derived from news articles and comprises pairs of main sentences in the article (longer) and headline (shorter). The article and headline were respectively used as the source input sentence and reference. The data include approximately 3.8M training samples, 400,000 validation samples, and 2000 test samples. The hierarchical story generation model was proposed for the situation in which a sentence called a prompt that describes the topic of the upcoming story generation is first generated, and then conditions on the prompt are applied when generating the story. used a self-attention gated convolutional language model (GCNN) as the sequence-to-sequence prompt generation model with top-k random sampling. For prompt-tostory generation, they collected a dataset from Reddit's WRITINGPROMPTS forum in which each prompt has multiple story responses. With the dataset, they trained a story generation model that benefitted from a novel form of model fusion that improved the relevance of the story to the prompt and added a new gated multi-scale self-attention mechanism to model the long-range context. The top-down image captioning model uses a ResNet convolutional neural net pretrained on ImageNet to encode each image. Similar to previous work , the cited study encoded the full-sized input image with the final convolutional layer of Resnet-101 and used bilinear interpolation to resize the output to a fixed-size spatial representation of 10×10. This is equivalent to the maximum number of spatial regions used in our full model. During training with our D2GPo, the value of the standard deviation of the KL diversity item λ was set to 0.1, and the softmax temperature was T = 2.0 in all experiments. To study the effects of hyperparameters (i.e., the standard deviation λ and softmax temperature T The experimental reveal that λ affects the model training process. We believe that the reason is that a small value of λ in the model being unable to make full use of the prior knowledge (distribution), while a larger value of λ will make the model more uncertain because of the higher probability of there being incorrect or even opposite words whose fastText embeddings are similar. In addition, experimental show that a small value of T can improve the model to some extent, whereas a large value of T will seriously decrease the performance of the model. Theoretically, when T approaches infinity, the distribution q becomes uniform, and there is no prior knowledge with which to guide the model. A loss penalty is applied to any model prediction, and an excessively high value of T is thus harmful to training. Priors are generally more helpful in low-data regimes. We sampled 10,000, 100,000, and 600,000 paired sentences from the bilingual training data of WMT16 EN-RO to explore the performance of D2GPo in different low-resource scenarios. We used the same BPE code and learned fastText embeddings in all WMT16 EN-RO training data. Table 7: Comparison of our baseline and our D2GPo method under different training data scales in terms of BLEU on the WMT16 EN-RO test set. As shown in Table 7, D2GPo outperforms the baseline model, demonstrating the effectiveness of our method in low-resource scenarios. At the same time, the show that the performance improvement provided by D2GPo increases with fewer training data. This shows that prior knowledge can substantially improve the performance of the model when training data are scarce. 
A possible reason is that the training data are insufficient to train a robust model. In this case, the injection of prior knowledge can help train the parameters of the model and substantially improve the translation performance. However, with an increase in the number of training data, the model itself can be optimized well, and the improvement gained by introducing prior knowledge is not as substantial as before. Compared with traditional MLE training, D2GPo encourages negative diversity. To examine differences between D2GPo and MLE models, we counted high-and low-frequency words in the training set and compared the frequencies of low-frequency words predicted by the two models and the golden reference on the test set. Table 8: The statics of low frequency words in the reference and generations. The experiment was carried out on WMT14 EN-DE, the baseline model was Transformer-base, and the statistics were calculated at the word level. We chose words with a frequency less than or equal to 100 in the training set as low-frequency words. We used the golden reference (#GOLD), baseline model prediction output, and +D2GPo model prediction output to count the total number of tokens (#SUM) and the number of low-frequency words (#LF). Results are given in Table 8. The show that compared with the baseline, the D2GPo optimized model generates more lowfrequency words and has a higher ratio of low-frequency words. However, the number is still far less than the golden reference. It is thus demonstrated that D2GPo increases the diversity of model output. Top-down: a woman holding an umbrella in her hand + D2GPo: a woman is holding an umbrella + SCST: a woman holding an umbrella in a street + SCST+ D2GPo: a woman is holding an umbrella in the street Top-down: a large airplane sitting on top of an airport runway + D2GPo: an airplane is sitting on top of an airport runway + SCST: a large jetliner sitting on top of an airport runway + SCST+ D2GPo: a large jetliner is sitting on top of an airport runway Top-down: a woman holding a surf board in the ocean + D2GPo: a woman is standing on the beach with a surfboard + SCST: a woman holding a surfboard on the beach + SCST+ D2GPo: a woman is standing on the beach with a surfboard Top-down: a traffic light with a traffic light on it + D2GPo: a traffic light on the side of a traffic light + SCST: a yellow traffic light on the side of a street + SCST+ D2GPo: yellow traffic lights on the side of a street Table 9: Captions generated for the left image by the various models described in the paper. The models trained with SCST return a more accurate and more detailed summary of the image. The models trained with D2GPo return a more grammatically complete sentence.
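For reference, the bookkeeping behind the Table 8 statistics above (total token count #SUM and low-frequency token count #LF, with the frequency threshold of 100 stated earlier) can be reproduced with a few lines of Python; the function name and return format are illustrative rather than taken from the paper.

```python
from collections import Counter

def low_frequency_stats(train_tokens, generated_tokens, threshold=100):
    """Count tokens in a generation whose training-set frequency is <= threshold."""
    freq = Counter(train_tokens)
    low_freq_words = {w for w, c in freq.items() if c <= threshold}
    n_tokens = len(generated_tokens)              # #SUM
    n_low = sum(1 for w in generated_tokens if w in low_freq_words)  # #LF
    return n_tokens, n_low, n_low / max(n_tokens, 1)
```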
We introduce an extra data-dependent Gaussian prior objective to augment the current MLE training, which is designed to capture the prior knowledge in the ground-truth data.
We propose an interactive classification approach for natural language queries. Instead of classifying given the natural language query only, we ask the user for additional information using a sequence of binary and multiple-choice questions. At each turn, we use a policy controller to decide if to present a question or pro-vide the user the final answer, and select the best question to ask by maximizing the system information gain. Our formulation enables bootstrapping the system without any interaction data, instead relying on non-interactive crowdsourcing an-notation tasks. Our evaluation shows the interaction helps the system increase its accuracy and handle ambiguous queries, while our approach effectively balances the number of questions and the final accuracy. Responding to natural language queries through simple, single-step classification has been studied extensively in many applications, including user intent prediction ), and information retrieval . Typical methods rely on a single user input to produce an output, missing an opportunity to interact with the user to reduce ambiguity and improve the final prediction. For example, users may under-specify a request due to incomplete understanding of the domain; or the system may fail to correctly interpret the nuances of the input query. In both cases, a low quality input could be mitigated by further interaction with the user. In this paper we propose a simple but effective interaction paradigm that consists of a sequence of binary and multiple choice questions allowing the system to ask the user for more information. Figure 1 illustrates the types of interaction supported by this method, showcasing the opportunity for clarification while avoiding much of the complexity involved in unrestricted natural language interactions. Following a natural language query from the user, our system then decides between posing another question to obtain more information or finalizing the current prediction. Unlike previous work which assumes access to full interaction data; Rao & Daumé ), we are interested in bootstrapping an interaction system using simple and relatively little annotation effort. This is particularly important in real-world applications, such as in virtual assistants, where the supported classification labels are subject to change and thereby require a lot of re-annotation. We propose an effective approach designed for interaction efficiency and simple system bootstrapping. Our approach adopts a Bayesian decomposition of the posterior distributions over classification labels and user's responses through the interaction process. Due to the decomposition, we can efficiently compute and select the next question that provides the maximal expected information based on the posteriors. To further balance the potential increase in accuracy with the cost of asking additional questions, we train a policy controller to decide whether to ask additional questions or return a final prediction. Our method also enables separately collecting natural language annotations to model the distributions of class labels and user responses. Specifically, we crowdsource initial natural language queries and question-answer pairs for each class label, alleviating the need for Wizard-of-Oz style dialog annotations . Furthermore, we leverage the natural language descriptions of class labels, questions and answers to help estimate their correlation and reduce the need for heavy annotation. Got it! 
The article below might be helpful: What is the bill length of the bird: shorter, similar, or longer than head? Shorter than head. Is the bird underpart orange? Yes. The identified bird is: Saw a little black bird with black eyes.... Figure 1: Two example use cases of interactive classification system: providing customer the best trouble-shooting suggestion (left) and helping user identify bird species from text interactions (right). The top parts show example classification labels: FAQ documents or bird species, where the ground truth label of each interaction example is shaded. The lower parts show how a user interact with the system typically. The user starts with an initial natural language query. At each step, the system asks a clarification question. The interaction ends when the system returns an output label. We evaluate our method on two public tasks: FAQ suggestion and bird identification using the text and attribute annotations of the Caltech-UCSD Birds dataset . The first task represents a virtual assistant application in a trouble-shooting domain, while the second task provides well-defined multiple-choice question annotations and naturally noisy language inputs. Our experiments show that adding user interactions significantly increases the classification accuracy, when evaluating against both a simulator and a real human. With one clarification question, our system obtains a relative accuracy boost of 40% and 65% for FAQ suggestion and bird identification compared to no-interaction baselines on simulated evaluation. Given at most five turns of interaction, our approach improves accuracy by over 100% on both tasks in both simulated and human evaluation. Notation Our goal is to classify a natural language query as one class label through interactions. To this end, we treat the classification label y, interaction question q as well as the user response r as random variables. We denote one possible value or assignment of the random variable using subscripts, such as y = y i and q = q j. We use superscripts for the observed value of the random variable at a given time step, for example, q t is a question asked at time step t. Whenever it is clear from the context, we simply write y i as the short notation of y = y i. For example, p(r|q j, y i) denotes the conditional distribution of r given y = y i and q = q j, and p(r k |q j, y i) further specifies the corresponding probability when r = r k. An interaction starts with the user providing an initial user query x. At each turn t, the system can choose to select a question q t, to which the user respond with r t. We consider two types of questions in this work: binary and multiple choice questions. The predefined set of possible answers for a question q t is R(q t), where R(q t) is {yes, no} for binary questions, or a predefined set of question-specific values for multiple choice questions. We denote an interaction up to time t as X t = (x, (q 1, r 1),..., (q t, r t)), and the set of possible class labels as Y = {y 1, . . ., y N}. Figure 1 shows exemplary interactions in our two evaluation domains. Model We model the interactive process using a parameterized distribution over class labels that is conditioned on the observed interaction (Section 4.1), a question selection criterion (Section 4.2), and a parameterized policy controller (Section 4.5). At each time step t, we compute the belief of each y i ∈ Y conditioned on X t−1. 
The trained policy controller decides between two actions: to return the current best possible label or to obtain additional information by asking a question. The model selects the question that maximizes the information gain. After receiving the user response, the model updates the beliefs over the classification labels. Learning We use crowdsourced non-interactive data to bootstrap model learning. The crowdsourced data collection consists of two sub-tasks. First, we obtain a set of user initial queries X i for each label y i. For example, for an FAQ,'How do I sign up for Spring Global Roaming', an annotator can come up with an initial query as'Travel out of country'. Second, we ask annotators to assign text tags to each y i, and convert these tags into a set of question-answer pairs Here q m denotes a templated question and r m denotes the answer. For example, a question' What is your phone operating system?' can pair with one of the following answers:'IOS','Android operating system','Windows operating system' or'Not applicable'. We denote this dataset as. We describe the data collection process in Section 5. We use this data to train our text embedding model (Section 4.3), to create a user simulator (Section 4.4), and to train the policy controller to minimize the number of interaction turns while achieving high classification accuracy (Section 4.5). Evaluation We report classification accuracy of the model, and study the trade-off between accuracy and the number of the turns that the system takes. We run our system against a user simulator and real human users. When performing human evaluation, we additionally collect qualitative ratings of the interactions. Learning from Feedback A lot of recent work have leveraged human feedback to train natural language processing models, including dialogue learning , semantic parsing (; ;) and text classification . These works collect user feedback after the model-predicting stage and treat user feedback as extra offline training data to improve the model. In contrast, our model leverages the user interaction and makes model prediction accordingly in an online fashion. Human feedback has been incorporated in reinforcement learning as well. For instance, learns a reward function from human preferences to provide richer rewards and uses language-illustrated subgoals as indirect interventions instead of conventional expert demonstrations. Modeling Interaction Language-based interaction has recently attracted a lot of attention in areas such as visual question answering (; ; ;), SQL generation , information retrieval and multi-turn textbased question answering (Rao & Daumé ; ;). Many of these works require learning from full-fledged dialogues; Rao & Daumé ) or conducting Wizard-of-Oz dialog annotations . Instead of utilizing unrestricted but expensive conversations, we limit ourselves to a simplified type of interaction consisting of multiple-choice and binary questions. This allows us to reduce the complexity of data annotation while still achieving efficient interaction and high accuracy. Our question selection method is closely related to the prior work of Rao & Daumé;;; ). For instance, refine image search by asking to compare visual qualities against selected reference images, and perform object identification in image by posing binary questions about the object or its location. Both works and our system use an entropy reduction criterion to select the best question. 
Our work makes use of a Bayesian decomposition of the joint distribution and can be easily extended to other model-driven selection. We also highlight the modeling of natural language to estimate information and probabilities. More recently, (Rao & Daumé) proposes a learning-to-ask model by modeling the expected utility obtained by a question. Our selection method can be considered as a special case when entropy is used as the utility. In contrast to (Rao & Daumé), our work models the entire interaction history instead of a single turn of follow-up questioning. Our model is trained using crowdsourced annotations, while (Rao & Daumé) uses real user-user interaction data. Learning 20Q Game Our task can be viewed as an instance of the popular 20-question game (20Q), which has been applied to the knowledge base of celebrities ). Our work differs in two fold. First, our method models the natural language descriptions of classification targets, quesitons ans answers, instead of treating them as categorical or structural data as in knowledge base. Second, instead of focusing on the 20Q game (on celebrities) itself, we aim to help users accomplish realistic goals with minimal interaction effort. We maintain a probability distribution p(y | X t) over the set of labels Y. At each interaction step, we first update this belief, decide if to ask the question or return the classification output using a policy controller and select a question to ask using information gain if needed. We decompose the conditional probability p(y = y i | X t) using Bayes rule: We make two simplifying assumptions for Eq.. First, we assume the user response only depends on the provided question q t and the underlying target label y i, and is independent of past interactions. The assumption simplifies p(r Second, we deterministically select the next question q t given the interaction history X t−1 (described in Section 4.2). As a , p(q = q t | y i, X t−1) = 1 and otherwise would be zero if q = q t. These two assumptions allow us to rewrite the decomposition as: That is, predicting the classification label given the observed interaction X t can be reduced to modeling p(y i | x) and p(r k | q j, y i), i.e. the label probability given the initial query only and the probability of user response conditioned on the chosen question and class label. This factorization enables us to leverage separate annotations to learn these two components directly, alleviating the need for collecting costly full user interactions. The system selects the question q t to ask at turn t to maximize the efficiency of the interaction. We use a maximum information gain criterion. Given X t−1, we compute the information gain on classification label y as the decrease on entropy by observing possible answers to question q: where H(· | ·) denotes the conditional entropy. Intuitively, the information gain measures the amount of information obtained about the variable y by observing the value of another variable q. Because the first entropy term H(y | X t−1) is a constant regardless of the choice of q, the selection of q t is equivalent to Here we use the independent assumption stated in Section 4.1 to calculate p(r k | X t−1, q j). Both p(r k | X t−1, q j) and p(y i | X t−1, q j, r k) can be iteratively updated utilizing p(y i | x) and p(r k | q j, y i) as the interaction progresses (Eq. 2), ing in efficient computation of information gain. 
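A minimal NumPy sketch of this inference loop, assuming the two component distributions are available as arrays (belief holds p(y | X^{t-1}) and p_r_given_qy[q, y, r] holds p(r | q, y); the shared answer index space across questions and the function names are simplifications for illustration), might look as follows.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def update_belief(belief, p_r_given_qy, q_idx, r_idx):
    """p(y | X^t) is proportional to p(r^t | q^t, y) * p(y | X^{t-1})  (Eq. 2)."""
    new = belief * p_r_given_qy[q_idx, :, r_idx]
    return new / new.sum()

def select_question(belief, p_r_given_qy, asked):
    """Pick the question maximising the expected entropy reduction of p(y | .)."""
    h_prior = entropy(belief)
    n_q, _, n_r = p_r_given_qy.shape
    best_q, best_gain = None, -np.inf
    for q in range(n_q):
        if q in asked:
            continue
        expected_h = 0.0
        for r in range(n_r):
            # p(r | X^{t-1}, q) = sum_y p(r | q, y) p(y | X^{t-1})
            p_r = float(np.dot(p_r_given_qy[q, :, r], belief))
            if p_r <= 0:
                continue
            post = belief * p_r_given_qy[q, :, r]
            expected_h += p_r * entropy(post / post.sum())
        gain = h_prior - expected_h
        if gain > best_gain:
            best_q, best_gain = q, gain
    return best_q, best_gain
```

At each turn, select_question returns the question with maximal information gain, and once the user's answer is observed, update_belief applies the Bayesian update before the next decision.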
We model p(y i | x) and p(r k | q j, y i) by encoding the natural language descriptions of questions, answers and classification labels 1. That is, we do not simply treat the labels, questions and answers as categorical variables in our model. Instead, we leverage their natural language aspects to better estimate their correlation, reduce the need for heavy annotation and improve our model in lowresource (and zero-shot) scenarios. Specifically, we use a shared neural encoder enc(·) to encode all texts. Both probability distributions are computed using the dot-product score, i.e. S(u, v) = enc(u) enc(v) where u and v are two pieces of text. The probability of predicting the label y i given an initial query x is:. The probability of an answer r k given a question q j and label y i is a linear combination of the observed empirical probabilityp(r k | q j, y i) and a parameterized estimationp(r k | q j, y i): where λ ∈ is a hyper-parameter. We use the question-answer annotations A i for each label y i to estimatep(r k | q j, y i) using the empirical count. For example, in the FAQ suggestion task, we collect multiple user responses for each question and class label, and average across annotator to estimatep (Section 5). The second termp(r k | q j, y i) is estimated using texts:, where q j #r k is a concatenation of the question q j and the answer r k 2 and w, b ∈ R are scalar parameters. Because we do not collect complete annotations to cover every label-question pair,p provides a smoothing of the partially observed counts using the learned encoding S(·). We estimate the enc(·) with parameters ψ by pre-training using the dataset,, as described earlier. We use this data to create a set of text pairs (u, v) to train the scoring function S(·). For each label y i, we create pairs (x, y i) with all its initial queries x ∈ X i. We also create (q m #r m, y i) for each question-answer pair (q m, r m) annotated with the label y i. We perform gradient descent to minimize the cross-entropy loss: The second term requires summation over all v, which are all the labels in Y. We approximate this sum using negative sampling that replaces the full set Y with a sampled subset in each training batch. The parameters ψ, w and b are fine-tuned using reinforcement learning during training of the policy controller (Section 4.5). The user simulator provides initial queries to the system, responds to the system initiated clarification questions and judges if the returned label is correct. The simulator is based on held-out dataset of tuples of a goal y i, a set of initial queries X i, and a set of question answer pairs m=1. We estimate the simulator response distribution p (r k | q j, y i) using smoothed empirical counts from the data. While this data is identical in structure to our training data, we keep it separated from the data used to estimate S(·), p(y i | x) and p(r k | q j, y i) (Section 4.3). At the beginning of an interaction, the simulator selects a target labelŷ, and samples a query x from the associated query set to start the interaction. Given a system clarification question q t at turn t, the simulator responds with an answer r t ∈ R(q t) by sampling from a belief distribution p (r | q t,ŷ). Sampling provides natural noise to the interaction, and our model has no knowledge of p. The interaction ends when the system returns a target. This setup is flexible in that the user simulator can be easily replaced or extended by a real human, and the system can be further trained with the human-in-the-loop setup. 
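Putting the two learned components of this section together, a sketch of the scoring and the mixed response distribution might look like the following. The encodings are assumed to be precomputed with enc(.), and normalising the scalar scores w * S(q#r, y) + b with a softmax is our assumption about how the parameterised term is turned into a distribution; the function names are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def label_posterior(enc_query, enc_labels):
    """p(y_i | x) = softmax_i( enc(x) . enc(y_i) )."""
    return softmax(enc_labels @ enc_query)

def response_model(enc_qr, enc_label, emp_counts, lam, w, b):
    """p(r_k | q_j, y_i) = lam * empirical estimate + (1 - lam) * parameterised estimate.

    enc_qr:     [K, d] encodings of the K answer options "q_j # r_k".
    enc_label:  [d]    encoding of the label text y_i.
    emp_counts: [K]    annotation counts of each answer for (q_j, y_i).
    """
    p_emp = (emp_counts + 1e-6) / (emp_counts.sum() + 1e-6 * len(emp_counts))
    scores = w * (enc_qr @ enc_label) + b
    p_param = softmax(scores)
    return lam * p_emp + (1.0 - lam) * p_param
```

The hyperparameter lam trades off trust in the partially observed annotation counts against the text-based smoothing, which is what allows unannotated label-question pairs to still receive sensible response probabilities.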
The policy controller decides at each turn t to either select another question to query the user or to conclude the interaction. This provides a trade-off between exploration by asking questions and exploitation by returning the most probable classification label. The policy controller f (·, ·; θ) is a feed-forward network parameterized by θ that takes the top-k probability values and current turn t as input state. It generates two possible actions, STOP or ASK. When the action is ASK, a question is selected to maximize the information gain, and when the action is STOP, the label y i with highest probability is returned using arg max yi∈Y p(y i | X t−1). We tune the policy controller using the user simulator (Section 4.4). Algorithm 1 describes the training process. During learning, we use a reward function that provides a positive reward for predicting the correct target at the end of the interaction, a negative reward for predicting the wrong target, and a small negative reward for every question asked. We learn the policy controller f (·, ·; θ), and fine-tune p(r k | q j, y i) by back-propagating through the policy gradient. We keep the enc(·) parameters fixed during this process. We design a crowdsourcing process to collect data for the FAQ task using Amazon Mechanical Turk 3. For the Birds domain, we re-purpose an existing dataset. We collect initial queries and tags for each FAQ document. We ask workers to consider the scenario of searching for an FAQ supporting document using an interactive system. Given a target FAQ, we ask for an initial query that they would provide to such a system. The set of initial queries that is collected for each document y i is X i. We encourage workers to provide incomplete information and avoid writing a simple paraphrase of the FAQ. This in more realistic and diverse utterances because users have limited knowledge of the system and the domain. We collect natural language tag annotations for the FAQ documents. First, we use domain experts to define the set of possible free-form tags. The tags are not restricted to a pre-defined ontology and can be a phrase or a single word, which describes the topic of the document. We heuristically remove duplicate tags to finalize the set. Next, experts heuristically combine some tags to categorical tags, while leave all the rest tags as binary tags. For example, tags'IOS','Android operating system' and'Windows operating system' are combined to form a categorical tag'phone operating system'. We then use a small set of deterministic, heuristically-designed templates to convert tags into questions. For example, the tag'international roaming' is converted into a binary question' Is it about international roaming?'; the categorical tag'phone operating system' is converted into a multi-choice question' What is your phone operating system?'. Finally, we use non-experts to collect user responses to the questions of the FAQ. For binary questions, we ask workers to associate the tags to the FAQ target if they would respond'yes' to the question. We show the workers a list of ten tags for a given target as well as'none of the above' option. Annotating all target-tag combinations is excessively expensive and most pairings are negative. We rank the tags based on the relevance against the target using S(·) and show only top-50 to the workers. For multi-choice tags, we show the workers a list of possible answers to a tag-generated question for a given FAQ. The workers need to choose one answer that they think best applies. 
They also have the option of choosing'not applicable'. We provide more data collection statistics in Appendix A.1. The workers do not engage in a multi-round interactive process. This allows for cheap and scalable collection. Task I: FAQ Suggestion We use the FAQ dataset from . The dataset contains 517 troubleshooting documents by crawling Sprint's technical website. In addition, we collect 3, 831 initial queries and 118, 640 tag annotations using the setup described in Section 5. We split the data into 310/103/104 documents as training, development, and test sets. Only the queries and tag annotations of the 310 training documents are used for pre-training and learning the policy controller. We use the queries and tag annotations of the development and test documents for evaluation only. The classification targets contain all 517 documents during evaluation 4. Task II: Bird Identification Our second set of experiments uses the Caltech-UCSD Birds (CUB-200) dataset . The dataset contains 11, 788 bird images for 200 different bird species. Each bird image is annotated with a subset of 27 visual attributes and 312 attribute values pertaining to the color or shape of a particular part of the bird. We take attributes with value count less than 5 as categorical tags, leaving us 8 categorical questions in total. The remaining 279 attributes are treated as binary tags and converted to binary questions. In addition, each image is annotated with 10 image captions describing the bird in the image . We use the image captions as initial user queries and bird species as labels. Since each image often contains only partial information about the bird species, the data is naturally noisy and provides challenging user interactions. We do not use the images from the dataset for model training. The images are only provided for grounding during human evaluation. Baselines We compare our full model against the following baseline methods: • No Interact: the best classification label is predicted using only the initial query. We consider two possible implementations: BM25, a common keyword-based scoring model for retrieval methods , and a neural model described in Section 4.3. • Random Interact: at each turn, a random question is chosen and presented to the user. After T turns, the classification label is chosen according to the belief p(y | X T). • Static Interact: questions are picked without conditioning on the initial user query using maximum information criterion. This is equivalent to using a static decision tree to pick the question, leading to the same first question, similar to . • Variants of Ours: we consider several variants of our full model. First, we replace the policy controller with two termination strategies: one which ends interaction when max p(y | X t) passes a threshold, and another one which ends interaction after the designated number of turns. Second, we disable the parameterized estimatorp(r k | q j, y i) by setting λ to 1. Evaluation We evaluate our model by running against both the user simulator and a real human. Given the user simulator, we evaluate the classification performance of our model and baselines using Accuracy@k, which is the percentage of time the correct target appears among the top-k predictions of the model. During human evaluation, we ask annotators to interact with our proposed or baseline models through a web-based interactive interface. 
Each interaction session starts with a user scenario presented to an annotator (e.g., a bird image or a device-troubleshooting scenario described in text). The annotator inputs an initial query accordingly and then answers follow-up questions selected by the system. Once the system returns its prediction, the annotator is asked to provide a few ratings of the interaction, such as rationality: do you feel that you were understood by the system? We present more details of the human evaluation in Appendix A.3. We use a fast recurrent neural network to encode texts. The policy controller receives three different rewards: a positive reward for returning the correct target (r_p = 20), a negative reward for providing the wrong target (r_n = −10), and a turn penalty for each question asked (r_a = −1, ..., −5). We report results averaged over 3 independent runs for each model variant and baseline. More details about the model implementation and training procedure can be found in Appendix A.2.

                        FAQ Suggestion              Bird Identification
                        Acc@1         Acc@3         Acc@1         Acc@3
No Interact (Neural)    38 ± 0.5%     61 ± 0.3%     23 ± 0.1%     41 ± 0.2%
Random Interact         39 ± 0.3%     62 ± 0.4%     25 ± 0.1%     44 ± 0.1%
Static Interact         46 ± 0.5%     66 ± 0.6%     29 ± 0.2%     50 ± 0.3%
Ours                    79 ± 0.7%     86 ± 0.8%     49 ± 0.3%     69 ± 0.5%
  w/ threshold          72 ± 0.6%     82 ± 0.7%     41 ± 0.3%     59 ± 0.4%
  w/ fixed turn         71 ± 1.0%     81 ± 0.9%     39 ± 0.2%     56 ± 0.4%
  w/ λ = 1              66 ± 0.8%     71 ± 1.0%     40 ± 0.1%     56 ± 0.2%

Table 1: Performance of our system against various baselines, evaluated using Accuracy@{1, 3}. For all interacting baselines, 5 clarification questions are used. Best performances are in bold. We report the averaged results as well as the standard deviations from 3 independent runs for each model variant and baseline.

Table 2: Human evaluation results. Count is the total number of interaction examples. The system is evaluated with Accuracy@1 and the rationality score ranging from −2 (strongly disagree) to 2 (strongly agree).

Table 1 shows the performance of our model against the baselines on both tasks while evaluating against the user simulator. The No Interact (Neural) baseline achieves an Accuracy@1 of 38% for FAQ Suggestion and 23% for Bird Identification. The No Interact (BM25) baseline performs worst. The Random Interact baseline and the Static Interact baseline barely improve the performance, illustrating the challenge of building an effective interactive model. In contrast, our model and its variants obtain substantial gains in accuracy given a few turns of interaction. Our full model achieves an Accuracy@1 of 79% for FAQ Suggestion and 49% for Bird Identification using less than 5 turns, outperforming the No Interact (Neural) baseline by an absolute 41% and 26%. The two baselines with alternative termination strategies underperform the full model, indicating the effectiveness of the policy controller trained with reinforcement learning. The model variant with λ = 1, which has fewer probability components leveraging natural language than our full model, achieves worse Accuracy@1. This result, together with the fact that our model outperforms the Static Interact baseline, confirms the importance of modeling natural language for efficient interaction. Figure 2 shows the trade-off between classification accuracy and the number of turns the system takes. The number of interactions changes as we vary the model termination strategy, which includes varying the turn penalty r_a, the prediction threshold, and the predefined number of turns T.
Our model with the policy controller or the threshold strategy does not explicitly control the number of turns, so we report the average number of turns across multiple runs for these two models. We achieve a relative accuracy boost of 40% for FAQ Suggestion and 65% for Bird Identification over no-interaction baselines with only one clarification question. This highlights the value of leveraging human feedback to improve model accuracy in classification tasks. Our approach outperforms baselines ranging across all numbers of interactions. Table 2 shows the human evaluation of our full model and two baselines on the FAQ Suggestion and Bird Identification tasks. The model achieves Accuracy@1 of 30% and 20% for FAQ and Bird tasks respectively, when there is no interaction. Each of the model variants uses 3 interaction turns on average, and all three models improve the classification after the interaction. Our full model achieves the best performance: an Accuracy@1 of 59% for FAQ Suggestion and 45% for Bird Identification. Qualitatively, the users rate our full model to be more rational. The human evaluation demonstrates that our model handles real user interaction more effectively despite being trained with only non-interactive data. Appendix A.3 includes additional details for human evaluation and exemple interactions. Figure 3 shows the learning curves of our model with the policy controller trained with different turn penalty r a ∈ {−0.5, −1, −3}. We observe interesting exploration behavior during the first 1, 000 training episodes in the middle and the right plots. The models achieve relatively stable accuracy after the early exploration stage. As expected, the three runs end up using different numbers of expected turns due to the choice of different r a values. We propose an approach for interactive classification, where users can provide under-specified natural language queries and the system can inquire missing information through a sequence of simple binary or multi-choice questions. Our method uses information theory to select the best question at every turn, and a lightweight policy to efficiently control the interaction. We show how we can bootstrap the system without any interaction data. We demonstrate the effectiveness of our approach on two tasks with different characteristics. Our show that our approach outperforms multiple baselines by a large margin. In addition, we provide a new annotated dataset for future work on bootstrapping interactive classification systems. A.1 DATA COLLECTION Query collection qualification One main challenge for the collection process lies within familiarizing the workers with the set of target documents. To make sure we get good quality annotation, we set up a two-step qualification task. The first one is to write paraphrase with complete information. After that, we reduce the number of workers down to 50. These workers then generate 19, 728 paraphrase queries. During the process, the workers familiarize themselves with the set of documents. We then post the second task (two rounds), where the workers try to provide initial queries with possibly insufficient information. We select 25 workers after the second qualification task and collect 3, 831 initial queries for the second task. Attribute Collection Qualification To ensure the quality of target-tag annotation, we use the pretrained model to rank-order the tags and pick out the highest ranked tags (as positives) and the lowest ranked tags (as negatives) for each target. 
The worker sees in total ten tags without knowing which ones are the negatives. To pass the qualifier, the workers need to complete annotation on three targets without selecting any of the negative tags. To make the annotation efficient, we rank the tag-document relevance using the model trained on the previously collected query data. We then take the top 50 possible tags for each document and split them into five non-overlapping lists (i.e. ten tags for each list). Each of the list is assigned to four separate workers to annotate. In the FAQ task, we observe that showing only the top-50 tags out of 813 is sufficient. Table A.1: Target-tag annotation statistics. We show five sets of tags to the annotators. The higher ranked ones are more likely to be related to the given target. The row mean # tags is the mean number of tags that are annotated to a target, N.A. is the percentage of the tasks are annotated as "none of the above", and mean κ is the mean pairwise Cohen's κ score. Learning Components Here we describe the detailed implementation of the text encoder and the policy controller network. We use a single-layer bidirectional Simple Recurrent Unit (SRU) as the encoder for the FAQ suggestion task and two layer bidirectional SRU for bird identification task. The encoder uses pre-trained fastText word embedding of size 300 (fixed during training), hidden size 150, batch size 200, and dropout rate 0.1. The policy controller is a two layer feed-forward network with hidden layer size of 32 and ReLU activation function. We use Noam learning rate scheduler with initial learning rate 1e − 3, warm-up step 4, 000 and Noam scaling factor 2.0. The policy controller is a 2 layer feed-forward network with a hidden layer of 32 dimensions and ReLU activation. The network takes the current step and the top-k values of belief probabilities as input. We choose k = 20 and allow a maximum of 10 interaction turns. We use initial queries as well as paraphrase queries to train the encoder, which has around 16K target-query examples. The breakdown analysis is shown in Table A.2. To see the effectiveness of the tag in addition to initial query, we generate pseudo-queries by combining existing queries with sampled subset of tags from the targets. This augmentation strategy is shown to be useful to improve the classification performance. On the other hand, when we use the set of tags instead of initial query as text input for a specific target label, the classification performance improves, indicating the designed tags can capture the target label well. Finally, when we concatenate user initial query and tags and use that as text input to the classifier, we achieve Accuracy@1 of 76%. In our full model, we achieve 79% with only querying about 5 tags, indicating effectiveness of our modelling. Each interaction session starts with presenting the annotator an user scenario (e.g a bird image or an issue with your phone). The annotator inputs an initial query accordingly and then answers follow-up questions selected by the system. We evaluate prediction accuracy, system rationality, and the number of counts by letting the system interact with human judges. We design user scenario for each target to present to the worker. At the end of each interaction, the predicted FAQ and the ground truth will be presented to the user as shown in the top right panel in Figure A.2. The user needs to answer the following questions: "How natural is the interaction?" and "Do you feel understood by the system during the interactions?" 
on a scale of −2 (strongly disagree) to 2 (strongly agree), which we record as naturalness and rationality in Table A.3. Our full model performs best on Accuracy@1, naturalness, and rationality. We show human evaluation examples in Table A.4. The interface for the bird identification task is similar to the FAQ suggestion task. Instead of presenting a scenario, we show a bird image to the user. The user needs to describe the bird to find out its category, which is analogous to writing an initial query. We allow the user to reply 'not visible' if part of the bird is hidden or occluded. With such a reply, the system stops asking binary questions from the same label group. For example, if the user replied 'not visible' to the question 'does the bird have a black tail?', then the questions 'does the bird have a yellow tail?', 'does the bird have a red tail?', etc. will not be asked again. At the end of the interaction, the predicted and ground-truth bird images along with their categories are presented to the user, as shown in the bottom right panel in Figure A.2. Again, the user needs to fill out a similar questionnaire as in the FAQ suggestion task. The bird identification task is very challenging due to its fine-grained categories, where many bird images look almost identical while belonging to different classes. Our full system improves Accuracy@1 from 20% to 45% against the non-interactive baselines after fewer than 3 turns of interaction. For Bird Identification, the annotators reported that the predicted image is sometimes almost identical to the true image. To better understand the task and the model behavior, we show the confusion matrix of the final model prediction after interaction in Figure A; we desire high values on the diagonal and low values elsewhere. Table A.3: Human evaluation on FAQ Suggestion and Bird Identification for our proposed model and several baselines. The three FAQ systems ask 2.8, 3 and 3 turns of questions, respectively. The three Bird systems ask 3.3, 4 and 4 turns of questions. The systems are evaluated on both performance and user experience. Performance includes the initial and final Accuracy@1. The user experience scores include both naturalness and rationality for both tasks.
We propose an interactive approach for classifying natural language queries by asking users for additional information using information gain and a reinforcement learning policy controller.
386
scitldr
Convolutional neural networks (CNNs) have achieved state of the art performance on recognizing and representing audio, images, videos and 3D volumes; that is, domains where the input can be characterized by a regular graph structure. However, generalizing CNNs to irregular domains like 3D meshes is challenging. Additionally, training data for 3D meshes is often limited. In this work, we generalize convolutional autoencoders to mesh surfaces. We perform spectral decomposition of meshes and apply convolutions directly in frequency space. In addition, we use max pooling and introduce upsampling within the network to represent meshes in a low dimensional space. We construct a complex dataset of 20,466 high resolution meshes with extreme facial expressions and encode it using our Convolutional Mesh Autoencoder. Despite limited training data, our method outperforms state-of-the-art PCA models of faces with 50% lower error, while using 75% fewer parameters. Convolutional neural networks BID27 have achieved state of the art performance in a large number of problems in computer vision BID26 BID22, natural language processing BID32 and speech processing BID20. In recent years, CNNs have also emerged as rich models for generating both images BID18 and audio. These successes may be attributed to the multi-scale hierarchical structure of CNNs that allows them to learn translational-invariant localized features. Since the learned filters are shared across the global domain, the number of filter parameters is independent of the domain size. We refer the reader to BID19 for a comprehensive overview of deep learning methods and the recent developments in the field. Despite the recent success, CNNs have mostly been successful in Euclidean domains with gridbased structured data. In particular, most applications of CNNs deal with regular data structures such as images, videos, text and audio, while the generalization of CNNs to irregular structures like graphs and meshes is not trivial. Extending CNNs to graph structures and meshes has only recently drawn significant attention BID8 BID14. Following the work of BID14 on generalizing the CNNs on graphs using fast Chebyshev filters, we introduce a convolutional mesh autoencoder architecture for realistically representing high-dimensional meshes of 3D human faces and heads. The human face is highly variable in shape as it is affected by many factors such as age, gender, ethnicity etc. The face also deforms significantly with expressions. The existing state of the art 3D face representations mostly use linear transformations BID39 BID29 BID40 or higher-order tensor generalizations BID43 BID9. While these linear models achieve state of the art in terms of realistic appearance and Euclidean reconstruction error, we show that CNNs can perform much better at capturing highly non-linear extreme facial expressions with many fewer model parameters. One challenge of training CNNs on 3D facial data is the limited size of current datasets. Here we demonstrate that, since these networks have fewer parameters than traditional linear models, they can be effectively learned with limited data. This reduction in parameters is attributed to the locally invariant convolutional filters that can be shared on the surface of the mesh. Recent work has exploited thousands of 3D scans and 4D scan sequences for learning detailed models of 3D faces BID13 BID46 BID37 BID11. 
The availability of this data enables us to a learn rich non-linear representation of 3D face meshes that can not be captured easily by existing linear models. In summary, our work introduces a convolutional mesh autoencoder suitable for 3D mesh processing. Our main contributions are:• We introduce a mesh convolutional autoencoder consisting of mesh downsampling and mesh upsampling layers with fast localized convolutional filters defined on the mesh surface.• We use the mesh autoencoder to accurately represent 3D faces in a low-dimensional latent space performing 50% better than a PCA model that is used in state of the art methods BID39 for face representation.• Our autoencoder uses up to 75% fewer parameters than linear PCA models, while being more accurate on the reconstruction error.• We provide 20,466 frames of highly detailed and complex 3D meshes from 12 different subjects for a range of extreme facial expressions along with our code for research purposes. Our data and code is located at http://withheld.for.review.This work takes a step towards the application of CNNs to problems in graphics involving 3D meshes. Key aspects of such problems are the limited availability of training data and the need for realism. Our work addresses these issues and provides a new tool for 3D mesh modeling. Mesh Convolutional Networks. give a comprehensive overview of generalizations of CNNs on non-Euclidean domains, including meshes and graphs. defined the first mesh convolutions by locally parameterizing the surface around each point using geodesic polar coordinates, and defining convolutions on the ing angular bins. In a follow-up work, BID4 parametrized local intrinsic patches around each point using anisotropic heat kernels. BID33 introduced d-dimensional pseudo-coordinates that defined a local system around each point with weight functions. This method resembled the intrinsic mesh convolution of and BID4 for specific choices of the weight functions. In contrast, Monti et al. used Gaussian kernels with trainable mean vector and covariance matrix as weight functions. In other work, BID42 presented dynamic filtering on graphs where filter weights depend on the input data. The work however did not focus on reducing the dimensionality of graphs or meshes. BID45 also presented a spectral CNN for labeling nodes which did not involve any dimensionality reduction of the meshes. BID38 and BID30 embedded mesh surfaces into planar images to apply conventional CNNs. Sinha et al. used a robust spherical parametrization to project the surface onto an octahedron, which is then cut and unfolded to form a squared image. BID30 introduced a conformal mapping from the mesh surface into a flat torus. Although, the above methods presented generalizations of convolutions on meshes, they do not use a structure to reduce the meshes to a low dimensional space. The proposed mesh autoencoder efficiently handles these problems by combining the mesh convolutions with efficient meshdownsampling and mesh-upsampling operators. Graph Convolutional Networks. BID8 proposed the first generalization of CNNs on graphs by exploiting the connection of the graph Laplacian and the Fourier basis (see Section 3 for more details). This lead to spectral filters that generalize graph convolutions. extended this using a windowed Fourier transform to localize in frequency space. BID23 built upon the work of Bruna et al. by adding a procedure to estimate the structure of the graph. 
To reduce the computational complexity of the spectral graph convolutions, BID14 approximated the spectral filters by truncated Chebyshev polynomials, which avoids explicitly computing the Laplacian eigenvectors, and introduced an efficient pooling operator for graphs. BID25 simplified this using only first-order Chebyshev polynomials. However, these graph CNNs are not directly applied to 3D meshes. Our mesh autoencoder is most similar to BID14, with truncated Chebyshev polynomials along with the efficient graph pooling. In addition, we define a mesh upsampling layer to obtain a complete mesh autoencoder structure and use our model for the representation of highly complex 3D faces, obtaining state-of-the-art results in realistic modeling of 3D faces. Learning Face Representations. BID2 introduced the first generic representation for 3D faces based on principal component analysis (PCA) to describe facial shape and texture variations. We also refer the reader to BID10 for a comprehensive overview of 3D face representations. Representing facial expressions with linear spaces has given state-of-the-art results to date. The linear expression basis vectors are either computed using PCA (e.g. BID1 BID6 BID29 BID39 BID44), or are manually defined using linear blendshapes (e.g. BID40 BID28 BID5). Multilinear models BID43, i.e. higher-order generalizations of PCA, are also used to model facial identity and expression variations. In such methods, the model parameters globally influence the shape, i.e. each parameter affects all the vertices of the face mesh. To capture localized facial details, BID34 and BID15 used sparse linear models. BID9 used a hierarchical multiscale approach by computing localized multilinear models on wavelet coefficients. BID9 also used a hierarchical multi-scale representation, but their method does not use shared parameters across the entire domain. BID24 use a volumetric face representation in their CNN-based framework. In contrast to existing face representation methods, our mesh autoencoder uses convolutional layers to represent faces with significantly fewer parameters. Since it is defined completely on the mesh space, we do not have the memory constraints which affect volumetric convolutional methods for representing 3D models. We define a face mesh as a set of vertices and edges F = (V, A), with |V| = n vertices that lie in 3D Euclidean space, V ∈ R n×3. The sparse adjacency matrix A ∈ {0, 1} n×n represents the edge connections, where A ij = 1 denotes an edge connecting vertices i and j, and A ij = 0 otherwise. The non-normalized graph Laplacian is defined as L = D − A BID12, with the diagonal degree matrix D defined as D ii = Σ j A ij. The Laplacian can be diagonalized as L = U Λ U T, where U ∈ R n×n is the matrix of orthonormal eigenvectors and Λ ∈ R n×n is a diagonal matrix with the associated real, nonnegative eigenvalues. The graph Fourier transform BID12 of the mesh vertices x ∈ R n×3 is then defined as x ω = U T x, and the inverse Fourier transform as x = U x ω, respectively. Fast spectral convolutions. The convolution operator * can be defined in Fourier space as a Hadamard product, DISPLAYFORM0. This is computationally expensive for a large number of vertices. The problem is addressed by formulating mesh filtering with a kernel g θ using a recursive Chebyshev polynomial BID21 BID14.
The filter g θ is parametrized as a Chebyshev polynomial of order K given by DISPLAYFORM1 whereL = 2L/λ max − I n is the scaled Laplacian, the parameter θ ∈ R K is a vector of Chebyshev coefficients, and T k ∈ R n×n is the Chebyshev polynomial of order k that can be computed recursively as T k (x) = 2xT k−1 (x) − T k−2 (x) with T 0 = 1 and T 1 = x. The spectral convolution can then be defined as BID14 Table 2: Decoder architecture that computes the j th feature of y ∈ R n×Fout. The input x ∈ R n×Fin has F in features. The input face mesh has F in = 3 features corresponding to its 3D vertex positions. Each convolutional layer has F in × F out vectors of Chebyshev coefficients θ i,j ∈ R K as trainable parameters. DISPLAYFORM2 Mesh Sampling The mesh sampling operators define the downscaling and upscaling of the mesh features in a neural net. We perform the in-network downsampling of a mesh with m vertices using transform matrices Q d ∈ {0, 1} n×m, and upsampling using Q u ∈ R m×n where m > n. The downsampling is obtained by contracting vertex pairs iteratively that maintains surface error approximations using quadric matrices BID16. The vertices after downsampling are a subset of the original mesh vertices DISPLAYFORM3 Since a loss-less downsampling and upsampling is not feasable for general surfaces, the upsampling matrix is built during downsampling. Vertices kept during downsampling are kept during upsampling DISPLAYFORM4 Vertices v q ∈ V discarded during downsampling where Q d (p, q) = 0 ∀p, are mapped into the downsampled mesh surface. This is done by projecting v q into the closest triangle (i, j, k) in the downsampled mesh, denoted by v p, and computing the Barycentric coordinates as DISPLAYFORM5 The weights are then updated in Q u as Q u (q, i) = w i, Q u (q, j) = w j, and Q u (q, k) = w k, and Q u (q, l) = 0 otherwise. Figure 1: The effect of downsampling (red arrows) and upsampling (green arrows) on 3D face meshes. The reconstructed face after upsampling maintains the overall structure but most of the finer details are lost. Now that we have defined the basic operations needed for our neural network in Section 3, we can construct the architecture of the convolutional mesh autoencoder. The structure of the encoder is shown in Table 1. The encoder consists of 4 Chebyshev convolutional filters with K = 6 Chebyshev polynomials. Each of the convolutions is followed by a biased ReLU BID17. The downsampling layers are interleaved between convolution layers. Each of the downsampling layers reduce the number of mesh vertices by 4 times. The encoder transforms the face mesh from R n×3 to an 8 dimensional latent vector using a fully connected layer at the end. The structure of the decoder is shown in Table 2. The decoder similarly consists of a fully connected layer that transforms the latent vector from R 8 to R 32×32 that can be further upsampled to reconstruct the mesh. Following the decoder's fully connected layer, 4 convolutional layers with interleaved upsampling layers generated a 3D mesh in R 8192×3. Each of the convolutions is followed by a biased ReLU similar to the encoder network. Each upsampling layer increases the numbers of vertices by 4x. The FIG0 shows the complete structure of our mesh autoencoder. Training. We train our autoencoder for 300 epochs with a learning rate of 8e-3 with a learning rate decay of 0.99 every epoch. We use stochastic gradient descent with a momentum of 0.9 to optimize the L1 loss between predicted mesh vertices and the ground truth samples. 
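To make the recursion above concrete, here is a minimal dense NumPy sketch of the truncated Chebyshev filtering for a single input/output feature map; using a dense Laplacian and the exact λ_max is purely for clarity (practical implementations keep L sparse), and the setting described above corresponds to K = 6 coefficients.

```python
import numpy as np

def chebyshev_filter(x, L, theta, lmax):
    # x: signal on the mesh vertices, shape (n, d); L: graph Laplacian (n, n);
    # theta: Chebyshev coefficients, length K >= 2; lmax: largest eigenvalue of L.
    n = L.shape[0]
    L_tilde = 2.0 * L / lmax - np.eye(n)             # scaled Laplacian
    t_prev, t_curr = x, L_tilde @ x                  # T_0(L~) x and T_1(L~) x
    y = theta[0] * t_prev + theta[1] * t_curr
    for k in range(2, len(theta)):
        t_next = 2.0 * (L_tilde @ t_curr) - t_prev   # T_k = 2 L~ T_{k-1} - T_{k-2}
        y = y + theta[k] * t_next
        t_prev, t_curr = t_curr, t_next
    return y
```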
We use a regularization on the weights of the network using weight decay of 5e-4. The convolutions use Chebyshev filtering with K = 6. Facial Expression Dataset. Our dataset consists of 12 classes of extreme expressions from 12 different subjects. These expressions are highly complex and uncorrelated with each other. The expressions in our dataset are -bareteeth, cheeks in, eyebrow, high smile, lips back, lips up, mouth down, mouth extreme, mouth middle, mouth side and mouth up. The number of frames of each sequence is shown in TAB2.The data is captured at 60fps with a multi-camera active stereo system (3dMD LLC, Atlanta) with six stereo camera pairs, five speckle projectors, and six color cameras. Our dataset contains 20,466 3D Meshes, each with about 120,000 vertices. The data is pre-processed using a sequential mesh registration method BID29 to reduce the data dimensionality to 5023 vertices. We preprocess the data by adding fake vertices to increase the number of vertices to 8192. This enables pooling and upsampling of the mesh across the layers with a constant factor. Implementation details We use Tensorflow BID0 for our network implementation. We use Scikit-learn BID36 for computing PCA coefficients. Training each network takes about 8 hours on a single Nvidia Tesla P100 GPU. Each of the models is trained for 300 epochs with a batch size of 16. We evaluate the performance of our model based on its ability to interpolate the training data and extrapolate outside its space. We compare the performance of our model with a PCA model. We consistently use an 8-dimensional latent space to encode the face mesh using both the PCA model and the Mesh Autoencoder. Thus, the encoded latent vectors lie in R 8. Meanwhile, the number of parameters in our model is much smaller than PCA model (Table 4).In order to evaluate the interpolation capability of the autoencoder, we split the dataset in training and test samples in the ratio of 1:9. The test samples are obtained by picking consecutive frames of length 10 uniformly at random across the sequences. We train our autoencoder for 300 epochs and evaluate it on the test set. We use mean Euclidean distance for comparison with the PCA method. The mean Euclidean distance of N test mesh samples with n vertices each is given by DISPLAYFORM0 where x ij,x ij ∈ R 3 are j-th vertex predictions and ground truths respectively corresponding to i-th sample. Table 4 shows the mean Euclidean distance along with standard deviation in the form [µ ± σ]. The median error is also shown in the table. We show a performance improvement, as high as 50% over PCA models for capturing these highly non linear facial expressions. At the same time, the number of parameters in the CNN is about 75% fewer than the PCA model as shown in Table 4. Visual inspection of our qualitative in Figure 3 show that our reconstructions are more realistic and are effective in capturing extreme facial expressions. We also show the histogram of cumulative errors in FIG1. We observe that Mesh Autoencoder has about 76.9% of the vertices within an Euclidean error of 2 mm, as compared to 51.7% for the PCA model. To measure generalization of our model, we compare the performance of our model with a PCA model and FLAME BID29. For comparison, we train the expression and jaw model of FLAME with our dataset. The FLAME reconstructions are obtained with with latent vector size of 16 with 8 components each for encoding identity and expression. 
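As an aside, the error metric used throughout the evaluation (mean per-vertex Euclidean distance, reported as [µ ± σ], plus the median error) reduces to a few lines; the array shapes are our assumption.

```python
import numpy as np

def mean_vertex_error(pred, gt):
    # pred, gt: predicted and ground-truth meshes, shape (N, n_vertices, 3).
    per_vertex = np.linalg.norm(pred - gt, axis=-1)   # (N, n_vertices), in mm
    return per_vertex.mean(), per_vertex.std(), np.median(per_vertex)
```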
The latent vectors encoded using the PCA model and the mesh autoencoder have a size of 8. We evaluate the generalization capability of the Mesh Autoencoder by attempting to reconstruct expressions that are completely unseen by our model. We split our dataset by completely excluding one expression set from all the subjects of the dataset. We test our Mesh Autoencoder on the excluded expression as the test set. We compare the performance of our model with PCA and FLAME using the same mean Euclidean distance. We perform 12 cross-validation experiments, one for each expression, as shown in Table 5. For each experiment, we run our training procedure ten times, initializing the weights at random. We pick the best-performing network for comparison. We compare the results using the mean Euclidean distance and the median error metric in Table 5. Our method performs better than the PCA model and FLAME BID29 on all expression sequences. We show the qualitative results in FIG3. Our model performs much better on these extreme expressions. We show the cumulative Euclidean error histogram in FIG1. For a 2 mm accuracy, the Mesh Autoencoder captures 84.9% of the vertices while the PCA model captures 73.6%. The FLAME model BID29 uses several PCA models to represent expression, jaw motion, face identity etc. We evaluate the performance of mesh autoencoders by replacing the expression model of FLAME with our autoencoder. We compare the reconstruction errors with the original FLAME model. We run our experiment by varying the size of the latent vector for encoding. We show the comparisons in Table 6. While our convolutional Mesh Autoencoder leads to a representation that generalizes better for unseen 3D faces than PCA with many fewer parameters, our model has several limitations. Our network is restricted to learning face representations for a fixed topology, i.e., all our data samples need to have the same adjacency matrix, A. The mesh sampling layers are also based on this fixed adjacency matrix A, which defines only the edge connections. The adjacency matrix does not take into account the vertex positions, thus affecting the performance of our sampling operations. In the future, we would like to incorporate this information into our learning framework. Table 5: Quantitative evaluation of the extrapolation experiment (columns: Mesh Autoencoder, PCA, FLAME BID29). The training set consists of the rest of the expressions. The mean error is of the form [µ ± σ], with mean Euclidean distance µ and standard deviation σ. The median error and the number of frames in each expression sequence are also shown. All errors are in millimeters (mm). The amount of data for high-resolution faces is very limited. We believe that generating more of such data with high variability between faces would improve the performance of Mesh Autoencoders for 3D face representations. The data scarcity also limits our ability to learn models that can be trained for superior performance at higher-dimensional latent spaces. The data scarcity also produces noise in some reconstructions. We have introduced a generalization of convolutional autoencoders to mesh surfaces with mesh downsampling and upsampling layers combined with fast localized convolutional filters in spectral space. The locally invariant filters that are shared across the surface of the mesh significantly reduce the number of filter parameters in the network. While the autoencoder is applicable to any class of mesh objects, we evaluated its quality on a dataset of realistic extreme facial expressions. Table 6: Comparison of FLAME and FLAME++.
FLAME++ is obtained by replacing the expression model of FLAME with our mesh autoencoder. All errors are in millimeters (mm). The convolutional filters capture a lot of surface detail that is generally missed in linear models like PCA, while using 75% fewer parameters. Our Mesh Autoencoder outperforms the linear PCA model by 50% on interpolation experiments and generalizes better on completely unseen facial expressions. Face models are used in a large number of applications in computer animation, visual avatars and interactions. In recent years, a lot of focus has been given to capturing highly detailed static and dynamic facial expressions. This work introduces a direction in modeling these high-dimensional face meshes that can be useful in a range of computer graphics applications.
Convolutional autoencoders generalized to mesh surfaces for encoding and reconstructing extreme 3D facial expressions.
387
scitldr
Computing distances between examples is at the core of many learning algorithms for time series. Consequently, a great deal of work has gone into designing effective time series distance measures. We present Jiffy, a simple and scalable distance metric for multivariate time series. Our approach is to reframe the task as a representation learning problem---rather than design an elaborate distance function, we use a CNN to learn an embedding such that the Euclidean distance is effective. By aggressively max-pooling and downsampling, we are able to construct this embedding using a highly compact neural network. Experiments on a diverse set of multivariate time series datasets show that our approach consistently outperforms existing methods. Measuring distances between examples is a fundamental component of many classification, clustering, segmentation and anomaly detection algorithms for time series BID38 BID43 BID13. Because the distance measure used can have a significant effect on the quality of the , there has been a great deal of work developing effective time series distance measures BID18 BID28 BID1 BID15. Historically, most of these measures have been hand-crafted. However, recent work has shown that a learning approach can often perform better than traditional techniques BID16 BID33 BID9.We introduce a metric learning model for multivariate time series. Specifically, by learning to embed time series in Euclidean space, we obtain a metric that is both highly effective and simple to implement using modern machine learning libraries. Unlike many other deep metric learning approaches for time series, we use a convolutional, rather than a recurrent, neural network, to construct the embedding. This choice, in combination with aggressive maxpooling and downsampling, in a compact, accurate network. Using a convolutional neural network for metric learning per se is not a novel idea BID35 BID45; however, time series present a set of challenges not seen together in other domains, and how best to embed them is far from obvious. In particular, time series suffer from:1. A lack of labeled data. Unlike text or images, time series cannot typically be annotated post-hoc by humans. This has given rise to efforts at unsupervised labeling BID4, and is evidenced by the small size of most labeled time series datasets. Of the 85 datasets in the UCR archive BID10, for example, the largest dataset has fewer than 17000 examples, and many have only a few hundred. 2. A lack of large corpora. In addition to the difficulty of obtaining labels, most researchers have no means of gathering even unlabeled time series at the same scale as images, videos, or text. Even the largest time series corpora, such as those on Physiobank BID19, are tiny compared to the virtually limitless text, image, and video data available on the web. 3. Extraneous data. There is no guarantee that the beginning and end of a time series correspond to the beginning and end of any meaningful phenomenon. I.e., examples of the class or pattern of interest may take place in only a small interval within a much longer time series. The rest of the time series may be noise or transient phenomena between meaningful events BID37 BID21.4. Need for high speed. One consequence of the presence of extraneous data is that many time series algorithms compute distances using every window of data within a time series BID34 BID4 BID37. 
A time series of length T has O(T) windows of a given length, so it is essential that the operations done at each window be efficient. As a of these challenges, an effective time series distance metric must exhibit the following properties:• Efficiency: Distance measurement must be fast, in terms of both training time and inference time.• Simplicity: As evidenced by the continued dominance of the Dynamic Time Warping (DTW) distance BID42 in the presence of more accurate but more complicated rivals, a distance measure must be simple to understand and implement.• Accuracy: Given a labeled dataset, the metric should yield a smaller distance between similarly labeled time series. This behavior should hold even for small training sets. Our primary contribution is a time series metric learning method, Jiffy, that exhibits all of these properties: it is fast at both training and inference time, simple to understand and implement, and consistently outperforms existing methods across a variety of datasets. We introduce the problem statement and the requisite definitions in Section 2. We summarize existing state-of-the-art approaches (both neural and non-neural) in Section 3 and go on to detail our own approach in Section 4. We then present our in Section 5. The paper concludes with implications of our work and avenues for further research. We first define relevant terms, frame the problem, and state our assumptions. Definition 2.2. Distance Metric A distance metric is defined a distance function d: S × S → R over a set of objects S such that, for any x, y ∈ S, the following properties hold: DISPLAYFORM0 • Identity of Indiscernibles: DISPLAYFORM1 Our approach to learning a metric is to first learn an embedding into a fixed-size vector space, and then use the Euclidean distance on the embedded vectors to measure similarity. Formally, we learn a function f: T D → R N and compute the distance between time series X, Y ∈ T D as: DISPLAYFORM2 2.1 ASSUMPTIONS Jiffy depends on two assumptions about the time series being embedded. First, we assume that all time series are primarily "explained" by one class. This means that we do not consider multilabel tasks or tasks wherein only a small subsequence within each time series is associated with a particular label, while the rest is noise or phenomena for which we have no class label. This assumption is implicitly made by most existing work and is satisfied whenever one has recordings of individual phenomena, such as gestures, heartbeats, or actions. The second assumption is that the time series dataset is not too small, in terms of either number of time series or their lengths. Specifically, we do not consider datasets in which the longest time series is of length T < 40 or the number of examples per class is less than 25. The former number is the smallest number such that our embedding will not be longer than the input in the univariate case, while the latter is the smallest number found in any of our experimental datasets (and therefore the smallest on which we can claim reasonable performance).For datasets too small to satisfy these constraints, we recommend using a traditional distance measure, such as Dynamic Time Warping, that does not rely on a learning phase.3 RELATED WORK Historically, most work on distance measures between time series has consisted of hand-crafted algorithms designed to reflect prior knowledge about the nature of time series. By far the most prevalent is the Dynamic Time Warping (DTW) distance BID42. 
This is obtained by first aligning two time series using dynamic programming, and then computing the Euclidean distance between them. DTW requires time quadratic in the time series' length in the worst case, but is effectively linear time when used for similarity search; this is thanks to numerous lower bounds that allow early abandoning of the computation in almost all cases BID38.Other handcrafted measures include the Uniform Scaling Distance BID32, the Scaled Warped Matching Distance BID17, the Complexity-Invariant Distance BID2, the Shotgun Distance BID43, and many variants of DTW, such as weighted DTW , DTW-A BID47, and global alignment kernels BID12. However, nearly all of these measures are defined only for univariate time series, and generalizing them to multivariate time series is not trivial BID47. In addition to hand-crafted functions of raw time series, there are numerous hand-crafted representations of time series. Perhaps the most common are Symbolic Aggregate Approximation (SAX) BID32 and its derivatives BID8 BID46. These are discretization techniques that low-pass filter, downsample, and quantize the time series so that they can be treated as strings. Slightly less lossy are Adaptive Piecewise Constant Approximation BID26, Piecewise Aggregate Approximation BID27, and related methods, which approximate time series as sequences of low-order polynomials. The most effective of these representations tend to be extremely complicated; the current state-ofthe-art BID44, for example, entails windowing, Fourier transformation, quantization, bigram extraction, and ANOVA F-tests, among other steps. Moreover, it is not obvious how to generalize them to multivariate time series. A promising alternative to hand-crafted representations and distance functions for time series is metric learning. This can take the form of either learning a distance function directly or learning a representation that can be used with an existing distance function. Among the most well-known methods in the former category is that of BID40, which uses an iterative search to learn data-dependent constraints on DTW alignments. More recently, BID33 use a learned Mahalanobis distance to improve the accuracy of DTW. Both of these approaches yield only a pseudometric, which does not obey the triangle inequality. To come closer to a true metric, BID9 combined a large-margin classification objective with a sampling step (even at test time) to create a DTW-like distance that obeys the triangle inequality with high probability as the sample size increases. In the second category are various works that learn to embed time series into Euclidean space. BID36 use recurrent neural networks in a Siamese architecture BID7 to learn an embedding; they optimize the embeddings to have positive inner products for time series of the same class but negative inner products for those of different classes. A similar approach that does not require class labels is that of BID0. This method trains a Siamese, single-layer CNN to embed time series in a space such that the pairwise Euclidean distances approximate the pairwise DTW distances. BID30 optimize a similar objective, but do so by sampling the pairwise distances and using matrix factorization to directly construct feature representations for the training set (i.e., with no model that could be applied to a separate test set).These methods seek to solve much the same problem as Jiffy but, as we show experimentally, produce metrics of much lower quality. 
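For concreteness, a standard (unoptimized) dynamic-programming formulation of the DTW distance between two multivariate series, treating each time step as a D-dimensional vector, is sketched below; the square root at the end is one common convention, and none of the lower-bounding tricks mentioned above are included.

```python
import numpy as np

def dtw_distance(x, y):
    # x: array of shape (n, d); y: array of shape (m, d).
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.sum((x[i - 1] - y[j - 1]) ** 2)    # squared Euclidean step cost
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(np.sqrt(cost[n, m]))                 # worst-case O(n * m) time
```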
We learn a metric by learning to embed time series into a vector space and comparing the resulting vectors with the Euclidean distance. Our embedding function takes the form of a convolutional neural network, shown in FIG1. The architecture rests on three basic layers: a convolutional layer, a max pooling layer, and a fully connected layer. The convolutional layer is included to learn the appropriate subsequences from the input. The network employs one-dimensional filters convolved over all time steps, in contrast to the traditional two-dimensional filters used with images. We opt for one-dimensional filters because time series data is characterized by infrequent sampling. Convolving over each of the variables at a given timestep has little intuitive meaning in developing an embedding when each step measurement has no coherent connection to time. For a discussion of the mathematical connection between a learned convolutional filter and traditional subsequence-based analysis of time series, we direct the reader to BID11. The max pooling layer allows the network to be resilient to translational noise in the input time series. Unlike most existing neural network architectures, the windows over which we max pool are defined as percentages of the input length, not as constants. This level of pooling allows us to heavily downsample and denoise the input signal before it is fed into the final fully connected layer. We downsample heavily after the filters are applied such that each time series is reduced to a fixed size. We do so primarily for efficiency; further discussion on parameter choice for Jiffy may be found in Section 6. We then train the network by appending a softmax layer and using cross-entropy loss with the ADAM BID29 optimizer. We experimented with more traditional metric learning loss functions, rather than a classification objective, but found that they made little or no difference while adding to the complexity of the training procedure; specific loss functions tested include several variations of Siamese networks BID7 BID36 and the triplet loss BID22. For ease of comparison to more traditional distance measures, such as DTW, we present an analysis of Jiffy's complexity. Let T be the length of the D-variable time series being embedded, let F be the number of length-K filters used in the convolutional layer, and let L be the size of the final embedding. The time to apply the convolution and ReLU operations is Θ(T DF K). Following the convolutional layer, the max pooling and downsampling require Θ(T² DF) time if implemented naively, but Θ(T DF) if an intelligent sliding max function is used, such as that of BID31. Finally, the fully connected layer, which constitutes the embedding, requires Θ(T DF L) time. The total time to generate the embedding is therefore Θ(T DF (K + L)). Given the embeddings, computing the distance between two time series requires Θ(L) time. Note that T no longer appears in the latter expression thanks to the max pooling. With F = 16, K = 5, L = 40, this computation is dominated by the fully connected layer. Consequently, when L ≪ T and embeddings can be generated ahead of time, this enables a significant speedup compared to operating on the original data. Such a situation would arise, e.g., when performing a similarity search between a new query and a fixed or slow-changing database. When both embeddings must be computed on-the-fly, our method is likely to be slower than DTW and other traditional approaches.
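A rough PyTorch sketch of an embedding network in this style is shown below. The filter count, kernel length and embedding size follow the values mentioned in the complexity discussion (F = 16, K = 5, L = 40), while the pooling fraction and the decision to share filters across variables are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class JiffyStyleEmbedder(nn.Module):
    def __init__(self, n_vars, seq_len, n_filters=16, kernel=5,
                 pool_frac=0.15, embed_dim=40):
        super().__init__()
        self.conv = nn.Conv1d(n_vars, n_filters, kernel, padding=kernel // 2)
        pool = max(1, int(pool_frac * seq_len))        # window as a % of input length
        self.pool = nn.MaxPool1d(pool, stride=pool)
        self.fc = nn.Linear(n_filters * (seq_len // pool), embed_dim)

    def forward(self, x):                  # x: (batch, n_vars, seq_len)
        h = torch.relu(self.conv(x))       # 1-D filters over time
        h = self.pool(h)                   # aggressive downsampling
        return self.fc(h.flatten(1))       # fixed-size embedding
```

For training, a softmax classification head would be appended on top of the embedding and dropped at test time, where embeddings are compared with the Euclidean distance.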
Before describing our experiments, we first note that, to ensure easy reproduction and extension of our work, all of our code is freely available.1 All of the datasets used are public, and we provide code to clean and operate on them. We evaluate Jiffy-produced embeddings through the task of 1-nearest-neighbor classification, which assesses the extent to which time series sharing the same label tend to be nearby in the embedded space. We choose this task because it is the most widely used benchmark for time series distance and similarity measures BID15 BID1. To enable direct comparison to existing methods, we benchmark Jiffy using datasets employed by BID33. These datasets are taken from various domains and exhibit high variability in the numbers of classes, examples, and variables. We briefly describe each dataset below, and summarize statistics about each in TAB0. • ECG: Electrical recordings of normal and abnormal heartbeats, as measured by two electrodes on the patients' chests.• Wafer: Sensor data collected during the manufacture of semiconductor microelectronics, where the time series are labeled as normal or abnormal.• AUSLAN: Hand and finger positions during the performance of various signs in Australian Sign Language, measured via instrumented gloves.• Trajectories: Recordings of pen (x,y) position and force application as different English characters are written with a pen.• Libras: Hand and arm positions during the performance of various signs in Brazilian Sign Language, extracted from videos.• ArabicDigits: Audio signals produced by utterances of Arabic digits, represented by MelFrequency Cepstral Coefficients. We compare to recent approaches to time series metric learning, as well as popular means of generalizing DTW to the multivariate case:1. MDDTW BID33 ) -MDDTW compares time series using a combination of DTW and the Mahalanobis distance. It learns the precision matrix for the latter using a triplet loss. 2. Siamese RNN BID36 ) -The Siamese RNN feeds each time series through a recurrent neural network and uses the hidden unit activations as the embedding. It trains by feeding pairs of time series through two copies of the network and computing errors based on their inner products in the embedded space. 3. Siamese CNN The Siamese CNN is similar to the Siamese RNN, but uses convolutional, rather than recurrent, neural networks. This approach has proven successful across several computer vision tasks BID7 ). 4. DTW-I, DTW-D -As pointed out by BID47, there are two straightforward ways to generalize DTW to multivariate time series. The first is to treat the time series as D independent sequences of scalars (DTW-I). In this case, one computes the DTW distance for each sequence separately, then sums the . The second option is to treat the time series as one sequence of vectors (DTW-D). In this case, one runs DTW a single time, with elementwise distances equal to the squared Euclidean distances between the D-dimensional elements. 5. Zero Padding -One means of obtaining a fixed-size vector representation of a multivariate time series is to zero-pad such that all time series are the same length, and then treat the "flattened" representation as a vector. 6. Upsampling -Like Zero Padding, but upsamples to the length of the longest time series rather than appending zeros. This approach is known to be effective for univariate time series BID41 ). As shown in TAB1, we match or exceed the performance of all comparison methods on each of the six datasets. 
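For reference, the evaluation protocol described above (1-nearest-neighbor classification in the embedded space under the Euclidean distance) can be written as follows; this is a sketch, not the released evaluation code.

```python
import numpy as np

def one_nn_accuracy(train_emb, train_labels, test_emb, test_labels):
    # All embeddings are arrays of shape (n_examples, embed_dim).
    correct = 0
    for emb, label in zip(test_emb, test_labels):
        dists = np.linalg.norm(train_emb - emb, axis=1)   # Euclidean distances
        correct += int(train_labels[int(np.argmin(dists))] == label)
    return correct / len(test_labels)
```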
Although it is not possible to claim statistical significance in the absence of more datasets (see BID14), the average rank of our method compared to the others is better than its closest competitors, at 1.16. The closest second, DTW-I, has an average rank of 3.33 over these six datasets. Not only does Jiffy attain higher classification accuracies than competing methods, but the method also remains consistent in its performance across datasets. This can most easily be seen through the standard deviation in classification accuracies across datasets for each method. Jiffy's standard deviation in accuracy (0.026) is approximately a third of DTW-I's (0.071). The closest method in terms of variance is MDDTW, with a standard deviation of 0.042, which nonetheless exhibits a much worse rank than our method. This consistency suggests that Jiffy generalizes well across domains, and would likely remain effective on other datasets not tested here. A natural question when considering the performance of a neural network is whether, or to what extent, the hyperparameters must be modified to achieve good performance on a new dataset. We examine this by varying key hyperparameters on held-out datasets from the UCR archive BID10 and evaluating how classification accuracy varies. 6.1 EMBEDDING SIZE Figure 2.left shows that even a few dozen neurons are sufficient to achieve peak accuracy. As a result, an embedding layer of 40 neurons is sufficient and leads to an architecture that is compact enough to run on a personal laptop. Figure 2: Effect of fully connected layer size and degree of max pooling on model accuracy using held-out datasets. Even small fully connected layers and large amounts of max pooling (up to half of the length of the time series in some cases) have little or no effect on accuracy. For ease of visualization, each dataset's accuracies are scaled such that the largest value is 1.0. The typical assumption in the machine learning literature is that max pooling windows in convolutional architectures should be small to limit information loss. In contrast, time series algorithms often max pool globally across each example (e.g. BID20). Contrary to the implicit assumptions of both, we find that the level of pooling that results in the highest classification accuracy often falls in the 10-25% range, as shown by Figure 2.right. We present Jiffy, a simple and efficient metric learning approach to measuring multivariate time series similarity. We show that our method learns a metric that leads to consistent and accurate classification across a diverse range of multivariate time series. Jiffy's resilience to hyperparameter choices and consistent performance across domains provide strong evidence for its utility on a wide range of time series datasets. Future work includes the extension of this approach to multi-label classification and unsupervised learning. There is also potential to further increase Jiffy's speed by replacing the fully connected layer with a structured BID6 or binarized BID39
Jiffy is a convolutional approach to learning a distance metric for multivariate time series that outperforms existing methods in terms of nearest-neighbor classification accuracy.
388
scitldr
Prefrontal cortex (PFC) is a part of the brain which is responsible for behavior repertoire. Inspired by PFC functionality and connectivity, as well as human behavior formation process, we propose a novel modular architecture of neural networks with a Behavioral Module (BM) and corresponding end-to-end training strategy. This approach allows the efficient learning of behaviors and preferences representation. This property is particularly useful for user modeling (as for dialog agents) and recommendation tasks, as allows learning personalized representations of different user states. In the experiment with video games playing, the show that the proposed method allows separation of main task’s objectives andbehaviors between different BMs. The experiments also show network extendability through independent learning of new behavior patterns. Moreover, we demonstrate a strategy for an efficient transfer of newly learned BMs to unseen tasks. Humans are highly intelligent species and are capable of solving a large variety of compound and open-ended tasks. The performance on those tasks often varies depending on a number of factors. In this work, we group them into two main categories: Strategy and Behaviour. The first group contains all the factors leading to the achievement of a defined set of goals. On the other hand, Behaviour is responsible for all the factors not directly linked to the goals and having no significant effect on them. Examples of such factors can be current sentiment status or the unique personality and preferences that affect the way an individual makes decisions. Existing Deep Networks have been focused on learning of a Strategy component. This was achieved by optimization of a model for defined sets of goals, also the goal might be decomposed into sub-goals first, as in FeUdal Networks BID29 or Policy Sketches approach BID1. Behavior component, in turn, obtained much less attention from the DL community. Although some works have been conducted on the identification of Behavior component in the input, such as works in emotion recognition BID15 BID11 BID17. To the best of our knowledge, there was no previous research on incorporation on Behavior Component or Behavior Representation in Deep Networks before. Modeling Behaviour along with Strategy component is an important step to mimicking a real human behavior and creation of robust Human-Computer Interaction systems, such as a dialog agent, social robot or recommendation system. The early work of artificial neural networks was inspired by brain structure BID9 BID16, and the convolution operation and hierarchical layer design found in the network designed for visual analytic are inspired by visual cortex BID9 BID16. In this work, we again seek inspiration from the human brain architecture. In the neuroscience studies, the prefrontal cortex (PFC) is the region of the brain responsible for the behavioral repertoire of animals BID18 ). Similar to the connectivity of the brain cortex (as shown in Figure 1), we hypothesize that a behavior can be modeled as a standalone module within the deep network architecture. Thus, in this work, we introduce a general purpose modular architecture of deep networks with a Behavioural Module (BM) focusing on impersonating the functionality of PFC.Apart from mimicking the PFC connectivity in our model, we also borrow the model training strategy from human behavior formation process. 
As we are trying to mimic the functionality of a human brain, we approached the problem from the perspective of Reinforcement Learning. This approach also aligns with the process of unique personality development. According to BID6 and BID5, unique personality can be explained by different dopamine functions caused by genetic influence. These differences are also a reason for different Positive Emotionality (PE) patterns (sensitivity to reward stimuli), which are in turn a significant factor in the behavior formation process BID5. Figure 1: Abstract illustration of the prefrontal cortex (PFC) connections of the brain BID18 and the corresponding parts of the proposed model (Sensory Cortex: Conv Layers; PFC: Behavior Module; Motor Cortex: FC layers). Inspired by the named biological processes, we introduce extra positive rewards (referring to positive stimuli or dopamine release, with a higher reward corresponding to higher sensitivity) to encourage specific actions and provoke the development of specific behavioral patterns in the trained agent. To validate our method, we selected the challenging domain of classic Atari 2600 games BID2, where the simulated environment allows an AI algorithm to learn game playing by repeatedly seeking to understand the input space, objectives and solution. Based on this environment and an established agent (i.e. the Deep Q-Network (DQN) BID20), the behavior of the agent can be represented by preferences over different sets of actions. In other words, in the given setting, each behaviour is represented by a probability distribution over the given action space. In real-world tasks, the extra reward can be represented by the human's satisfaction with the taken action, along with the correctness of the output (the main reward). Importantly, the effect of human behavior is not restricted to a single task and can be observed in various similar situations. Although it is difficult to correlate the effect of human behavior on completely different tasks, it is often easier to observe akin patterns in similar domains and problems. To verify this, we study two BM transfer strategies to transfer a set of newly learned BMs across different tasks. As a human PFC is responsible for behavior patterns in a variety of tasks, we also aim to achieve a zero-shot transfer of learned modules across different tasks. The contributions of our work are as follows: • We propose a novel modular architecture with a behavior module and a learning method for the separation of behavior from a strategy component. • We provide a 0-shot transfer strategy for newly learned behaviors to previously unseen tasks. The proposed approach ensures easy extendability of the model to new behaviors and transferability of learned BMs. • We demonstrate the effectiveness of our approach on the video games domain. The experimental results show good separation of behaviors between different BMs, as well as promising results when transferring learned BMs to new tasks. Along with that, we study the effects of different hyper-parameters on the behavior separation process. Task separation is an important yet relatively unexplored topic in deep learning. BID22 explored this idea by simulating a simplified primate visual cortex, separating a network into two parts responsible for shape classification and shape localization on a binary image, respectively. The topic was further studied in BID12 BID13 b); however, due to the limitations in computational resources at that time, it has not gotten much advancement.
Recently, number researchers have revisited the idea of task separation and modular networks with evolutionary algorithms. So in BID28 BID28 and BID21 applied neuroevolution algorithms to evolve predefined modules responsible for the problem subtasks, where improved performance was reported when compared against monolithic architectures. BID23 BID24 proposed a neuroevolution approach to develop a multi-modular network capable of learning different agent behaviors. The module structure and the number of modules in the network were evolved in the training process. Although the multi-module architecture achieved better performance, it has not achieved separation of the tasks among the modules. A number of evolved modules appeared redundant and not used in the test phase, while others have used shared neurons. Moreover, the architecture was fixed once learned and did not assume changes in the structure. The same approach with modifications in mutation strategy' BID25, genome encoding BID27 and task complexity BID25, but has not achieved significant performance. In 2016, BID4 proposed to use a coevolutionary algorithm for domain transfer problem to avoid training from the scratch. It first independently learns a pool of networks on different Atari2600 games, During the transfer phase, the networks were frozen and used as a'building blocks' in a new network while combined with newly evolved neurons. In 2017, BID8 introduced PathNet to address the task-transfer module on the example of Atari2600 games. PathNet has a fixed size architecture (L layers by N modules), where each module was represented by either convolutional or fully-connected block. During the training phase, authors applied the tournament genetic algorithm to learn active paths between modules along with the weights. Once the task was learned, active modules and paths were frozen and the new task could start learning a new path. Recently proposed FeUdal Networks architecture BID29, also proposed a Modular design for Reinforcement Learning problems with sub-goals. In this work authors use Manager and Worker modules for learning abstract goals and primitive actions respectively. FeUdal networks are designed to tackle environments with long-term credit assignment and sparse reward signals. The modules in the named architecture are not transferable and designed to learn different time-span goal embeddings. BID0 proposed the Neural Module Network for Visual Question Answering (VQA) task. It consists of separate modules responsible for different tasks (e.g. Find, Transform, Combine, Describe and Measure modules), which could be combined in different ways depending on the network input. A similar dynamic architecture was proposed and applied to robot manipulator task BID7. The model was end-to-end trained and consisted of two modules (i.e. robotspecific and task-specific) and achieved good performance on a zero-shot learning task. The Modular Neural Network was also applied in Reinforcement Learning task in a robotics environment BID1. In this work, each module was responsible for a separate sub-task of the main task. However, the modules could be combined only in a sequential manner. Most of the previous works focused on multi-task problems or problems with sub-goals where the modules were responsible for learning explicit sub-functionality directly affecting the model performance. 
Our approach is different in the sense that we learn a behavior module responsible for representing user sentiment states or preferences without affecting the main goals. This approach leads to high adaptability of the network performance to new preferences or states of an end-user without retraining of the whole network, expandability of the network to future variations, removability of BMs in case of unknown preferences, as well as a high potential for transferring the learned representations to unseen tasks. To the best of our knowledge, there are no similar approaches. The goal of our modular network is to introduce a Behavior component into Deep Networks, ensure separation of the behavior and main task functionalities into different components of the network, and provide a strategy for an efficient transfer of learned behaviors. Our model has three main parts: the Main Network is responsible for the main task (strategy component) functionality, a replaceable/removable Behavior Module (BM) encodes the agent behavior and separates it from the main network, and the Discriminator is used during the transfer stage and helps to learn similar feature representations among different tasks. An overview of the proposed network architecture is shown in FIG0. In the given architecture, the convolutional layers correspond to the (visual) sensory cortex, the fully-connected layers of the Main Network to the motor cortex, and the Behaviour Module to the PFC from Figure 1. In this work, we adopt the deep Q-Network (DQN) with target network and memory replay BID20 to solve the main task (denoted as the main network). DQN has reported good performance on these games. The DQN has a fairly simple network structure, which consists of 3 consecutive convolutional layers followed by two fully-connected (fc) layers (see FIG0). All the layers, except the last one, use ReLU activation functions. The network output is represented by the set of expected future discounted rewards for each of the available actions. The output obeys the Bellman equation, and the Root-Mean-Square Error is applied as a loss function (L m) during the training phase. In this work, we are interested in the separation of the behavioral component from the main task functionality. Specifically, we design a network where the behavior is modeled with a replaceable and removable module, which is denoted as the Behavioral Module (BM). The BM is supposed to have no significant effect on the performance on the main task. The BM is modeled as two fc layers, with the first layer having a ReLU activation function and the second layer having a linear activation. The proposed BM input is the output from the last convolutional layer of the main network, while its output is directly fed to the first fc layer of the main network. This architecture follows the PFC connectivity pathways described in Figure 1. The forward pass of the network is represented by the following equations: DISPLAYFORM0 where I is the network input, j is the index of the current behaviour, l i is the output of the i-th layer, l bi is the output of the i-th layer of a BM, f i is the activation function at the i-th layer, and ∗ denotes the 2d convolution operation. The Main Network contains layers from l 1 to l 5. Note that l b becomes a zero vector if no behavior is required or enforced. The summation operator at layer 4 ensures that the influence of the BM can be easily removed from the main network (l b is all zeros in this case). It also minimizes the effects of the BM on the gradient flow during the backpropagation stage.
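To make this concrete, the following is a minimal PyTorch sketch of such a forward pass. The layer sizes (conv_out, hidden, bm_hidden), the single-BM setup, and the class names are illustrative assumptions rather than the exact configuration used in this work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DQNWithBM(nn.Module):
    """Main network (conv trunk + two fc layers) with an optional Behavior Module.

    The BM reads the flattened output of the last conv layer, and its output is
    added to the pre-activation of the first fc layer, so disabling the BM
    (use_bm=False) recovers the plain main network."""
    def __init__(self, n_actions=4, conv_out=3136, hidden=512, bm_hidden=128):
        super().__init__()
        # "Sensory cortex": standard DQN-style convolutional trunk
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
        )
        # "Motor cortex": fully-connected head producing Q-values
        self.fc1 = nn.Linear(conv_out, hidden)
        self.fc2 = nn.Linear(hidden, n_actions)
        # Behavior Module: one ReLU fc layer followed by one linear fc layer
        self.bm = nn.Sequential(
            nn.Linear(conv_out, bm_hidden), nn.ReLU(),
            nn.Linear(bm_hidden, hidden),
        )

    def forward(self, x, use_bm=True):
        h = self.conv(x).flatten(1)            # output of the last conv layer
        l_b = self.bm(h) if use_bm else 0.0    # zero vector when no behavior is enforced
        h = F.relu(self.fc1(h) + l_b)          # summation at the first fc layer
        return self.fc2(h)                     # Q-values for each available action

q_net = DQNWithBM()
print(q_net(torch.zeros(1, 4, 84, 84)).shape)  # torch.Size([1, 4])
```

Keeping the BM contribution as an additive term makes it trivially removable at test time, which matches the design goal described above.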
The training is conducted in an end-to-end manner as presented in Algorithm 1. In our approach, the introduction of the BM does not require an additional loss function, and the loss is directly incorporated into the main network loss (L m). To do this, we introduce additional rewards for desired behaviors of the agent, similar to the PE effect on the human behavior formation process. In our setting, behavior is defined by the agent's preference to play specific actions. Thus, each preferred action played was rewarded with an extra action reward. The action reward is subject to the game of interest, and its design process will be described in the Experiment section. One of the advantages of network modularization is to allow the learned BMs to be transferred to a different main network with minimal or no network re-training. This property is useful for knowledge sharing among a group of models in problems with a variety of possible implementations, changing environments and open-ended tasks. Once task objectives have changed or new behaviors were developed in another model, the target model can just apply a new module without any updates or training of the main network. This property allows easy extension of a learned model and knowledge sharing across different models without any additional training. The learned BMs from the previous section are used during the transfer phase. In this work, we consider two approaches, namely fine-tuning and adversarial transfer. The first approach uses a source task model and fine-tunes it for a new target task, where BMs are kept unchanged. In the adversarial transfer approach, we introduce a discriminator network (as shown in FIG0), which enforces the convolutional layers to learn features similar to the features of the source task. To do so, we adopt the domain-adversarial training of BID10. In this case, the discriminator network has 2 fully-connected layers with ReLU and Softmax non-linearities, and tries to classify the output of the last convolutional layer as being from the source or target task. Different from the original paper, we minimize the softmax loss at the discriminator output and flip the gradient sign at the convolutional layers (a small illustrative sketch of this gradient-reversal step is given below). The weight update can be formulated as follows: DISPLAYFORM0 where θ dj t are the parameters of the discriminator at timestep t, θ aj t are the parameters of the convolutional layers at timestep t, β is the parameter as described by BID10, and L a is the classification loss of the discriminator. In this section, we delineate the experiments that focus on two main aspects of this work: the separation of the agent's behavior from the main task, and the cross-task transfer of learned behaviors. In order to demonstrate the flexibility and extendability of the proposed network, we also considered a zero-shot setting, so that an end-user will not require additional training for the case of behavior module transfer. We evaluate the proposed novel modular architecture on the classic Atari 2600 games BID2. The main reason is that video games provide a controlled environment, where it is easier to control agent behavior by representing it with a distribution over available actions. In addition, the Atari 2600 emulator does not require data collection and labeling, yet it provides a wide range of tasks in terms of different games. The loss function used to encourage the learning of a specific behavior is described in the next section.
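For reference, the gradient-reversal step used in the adversarial transfer above can be sketched as follows in PyTorch. This is a hypothetical illustration of the standard domain-adversarial trick BID10, not the exact implementation of this work; the feature dimension, hidden size and β value are assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -beta on the backward pass."""
    @staticmethod
    def forward(ctx, x, beta):
        ctx.beta = beta
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed (and scaled) gradient flows back into the conv layers.
        return -ctx.beta * grad_output, None

class TaskDiscriminator(nn.Module):
    """Two fc layers classifying conv features as coming from the source or target task."""
    def __init__(self, feat_dim=3136, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, conv_features, beta=0.1):
        return self.net(GradReverse.apply(conv_features, beta))

# usage sketch:
# logits = TaskDiscriminator()(conv_feats)
# loss_a = nn.CrossEntropyLoss()(logits, task_labels)  # 0 = source task, 1 = target task
# loss_a.backward() trains the discriminator while pushing the conv layers toward task-invariant features.
```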
In this work, we evaluate our architecture on four games, namely Pong, Space Invaders, Demon Attack and Breakout, which share four available actions, namely No action, Fire, Up/Right, and Down/Left. Data pre-processing: The input of the network is a stack of 4 game frames. Each frame was converted into a gray-scale image and resized to 84 × 84 pixels. As consecutive game frames contain similar information, we sample one game frame on every fourth frame. Additionally, we tested the effect of the zero-behavior (i.e. BM0) presence during the training stage; in other words, no actions were stimulated and no BM was applied. Training: To train the proposed network (i.e. the main network and BMs), we used the standard Q-learning algorithm and the corresponding loss function based on the Bellman equation BID19. The training used a combined reward represented by the sum of the game score and the individual action rewards. The magnitude of the additional reward was represented by the average reward per frame of the game. All the rewards obtained during the game were clipped at 1 and -1 BID20. Evaluation Metrics: We evaluate the proposed models using two metrics. The first metric focuses on the gameplay performance. As each game has different reward scales, we compute the mean of the ratios of game scores achieved by the proposed modular network to those of the Vanilla DQN model (TAB0). We refer to this metric as the Average Main Score Ratio (AMSR). If the AMSR is equal to 1, it means the trained model with BM performs as well as the Vanilla DQN model. Similarly, an AMSR higher than 1 indicates that our proposed model performs better than the Vanilla DQN, and worse if it is lower than 1. Thus, an AMSR that is close to or greater than 1 indicates that our modular network is comparable to the baseline. The second metric reflects the capability of the proposed modular network in terms of modeling the desired behavior. To do this, we define the Behavior Distance (BD) by calculating the Bhattacharyya distance BID3 between the BM's action distribution and an ideal target distribution (a short illustrative sketch of this metric is given below). The target distribution is computed by dividing 1 by the number of rewarded actions (i.e. BM5's target distribution is [0.0, 0.0, 0.5, 0.5]). In the ideal case, the BD of the learned network should be close to 1, as our training only encourages a certain set of actions. This experiment aims to show that the behavior functionality can be separated from the main task and learned independently. To demonstrate this, we conducted the training in two stages. During Stage 1, we first trained the main network with five behaviors (i.e., BM0 - BM2, BM4 - BM5 and BM8) using Algorithm 1. Given the trained network from Stage 1, Stage 2 focused on training the remaining BMs (i.e. BM3, BM6, and BM7) while the main network was frozen. This includes behaviors stimulating 1, 2 and 3 actions, respectively. Effect of key parameters: First of all, we studied the effect of the action reward magnitude on the training process. We started by estimating the average reward-per-frame value (r) for each game (without additional action rewards) and observed the performance of our model with various action rewards (i.e. 0.25r, 0.5r, r, 2r, and 5r). TAB2 shows that the action reward magnitude directly affects the quality of the learned behavior in both stages, where increasing the action reward above the value of r leads to degradation of the main task's performance.
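The BD metric above can be illustrated with a small sketch. Since the text treats a value close to 1 as the ideal match, the sketch below assumes BD is reported as the Bhattacharyya coefficient (which equals 1 for identical distributions) rather than the distance −ln of it; the action probabilities are hypothetical.

```python
import numpy as np

def behavior_distance(action_probs, rewarded_actions, n_actions=4):
    """Bhattacharyya coefficient between the empirical action distribution of a BM
    and the ideal target that spreads its mass uniformly over the rewarded actions.
    A value close to 1 means the learned behavior matches the target."""
    target = np.zeros(n_actions)
    target[list(rewarded_actions)] = 1.0 / len(rewarded_actions)
    return float(np.sum(np.sqrt(action_probs * target)))

# e.g. a BM rewarding actions 2 and 3 (Up/Right, Down/Left), target [0, 0, 0.5, 0.5]:
print(behavior_distance(np.array([0.05, 0.05, 0.50, 0.40]), rewarded_actions=[2, 3]))
```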
Although the selection of the additional reward magnitude depends on the desired main score and BD, we recommend a value equal to r, as it leads to the highest BD score during Stage 2 and to a better functionality separation. Next, to see the effect of other parameters, we set the value of the action reward to 0.5r, so that we can observe the effect of the changes on the main reward as well as on the behavior pattern. As the next step, we studied the effect of the complexity of the BMs on the quality of the learned behaviors by trying a different number of layers. Also, we looked for a better separation of the Behavior component by studying the effects of dropout, BM0 and a different learning rate for the Behavior module. According to TAB3, the use of 2 fully-connected layers resulted in a significant improvement of the main task score compared to the 1-layer version. However, adding more layers did not result in a better performance. A similar effect was demonstrated by a higher BM learning rate compared to the main network (TAB4), while a lower value leads to lower main scores. Finally, implementing a dropout layer for the BM and using BM0 resulted in a higher BD score during Stage 2 and a higher main score during Stage 1. Results: Taking into account the hyper-parameter effects, we trained a final model with 2-layer BMs and a dropout rate of 0.5, applying a 2 times higher BM learning rate, BM0 and an action reward equal to r. The trained model showed high main task scores compared to the vanilla DQN model, as well as high similarity of the learned behaviors to the ideal target distributions at Stage 1, as well as after the separate training of BMs at Stage 2 (TAB0). Experiments also showed that removing the BMs does not lead to a performance degradation of the model on the main task. Importantly, the effect of the action reward magnitude directly correlated with the agent's preference to play rewarded actions, which aligns with the PE effect in the human behavior formation process. Thus, the development of an exact behavior pattern can be controlled through variations in the action reward magnitude. Therefore, we conclude that the proposed model and training method allow a successful separation of the strategy (main task) and behavior functionalities and further network expansion. Implementation details: To achieve zero-shot performance of the transferred modules, we aimed for the target model to have a feature representation similar to that of the source model. To achieve that, we tested two approaches: fine-tuning and adversarial transfer. In the first case, we fine-tuned the main network obtained in Stage 1 of Section 4.1 on a new game with the frozen Stage 1 BMs, applied to every pair of games and following Algorithm 1. After that, we tested the performance on the previously unseen Stage 2 BMs. In the adversarial setting, we follow the same procedure, but with the use of the Discriminator part (FIG0). The performance was compared to the results of transferring Stage 2 BMs to the best model configuration after Stage 1 from Section 4.1. Results: As can be seen from Table 5, even a simple zero-shot transfer of learned BMs based on fine-tuning results in a good performance of the model on unseen BMs. BM0 and the Stage 1 behaviors achieved performance close to the original network. Although the BD score of the zero-shot adversarial transfer is approximately 9% lower, the main task performance of the transferred modules on an unseen task is close to that of a separately trained network.
This fact shows that the zero-shot transfer of separately learned BMs to unseen tasks results in slightly worse performance compared to the separately trained model. This leads to the conclusion that the target performance of transferred BMs can be achieved with much less training compared to complete network retraining. In this work, we have proposed a novel Modular Network architecture with a Behavior Module, inspired by the connectivity of the human brain's Pre-Frontal Cortex. This approach demonstrated the successful separation of the Strategy and Behavior functionalities among different network components. This is particularly useful for network expandability through the independent learning of new Behavior Modules. The adversarial 0-shot transfer approach showed a high potential of the learned BMs to be transferred to unseen tasks. Experiments showed that learned behaviors are removable and do not degrade the performance of the network on the main task. This property allows the model to work in a general setting, when user preferences are unknown. The results also align with the human behavior formation process. We also conducted an exhaustive study on the effect of hyper-parameters on the behavior learning process. As future work, we are planning to extend the work to other domains, such as style transfer, chat bots, and recommendation systems. Also, we will work on improving the module transfer quality. In this appendix, we show the details of our preliminary study on various key parameters. The experiments were conducted on the Behavior Separation task.
Extendable Modular Architecture is proposed for developing a variety of Agent Behaviors in DQN.
389
scitldr
A major component of overfitting in model-free reinforcement learning (RL) involves the case where the agent may mistakenly correlate reward with certain spurious features from the observations generated by the Markov Decision Process (MDP). We provide a general framework for analyzing this scenario, which we use to design multiple synthetic benchmarks from only modifying the observation space of an MDP. When an agent overfits to different observation spaces even if the underlying MDP dynamics is fixed, we term this observational overfitting. Our experiments expose intriguing properties especially with regards to implicit regularization, and also corroborate from previous works in RL generalization and supervised learning (SL). Generalization for RL has recently grown to be an important topic for agents to perform well in unseen environments. Complication arises when the dynamics of the environments entangle with the observation, which is often a high-dimensional projection of the true latent state. One particular framework, which we denote by zero-shot supervised framework (a; ;) and is used to study RL generalization, is to treat it analogous to a classical supervised learning (SL) problem -i.e. assume there exists a distribution of MDP's, train jointly on a finite "training set" sampled from this distribution, and check expected performance on the entire distribution, with the fixed trained policy. In this framework, there is a spectrum of analysis, ranging from almost purely theoretical analysis to full empirical on diverse environments ). However, there is a lack of analysis in the middle of this spectrum. On the theoretical side, previous work do not provide analysis for the case when the underlying MDP is relatively complex and requires the policy to be a non-linear function approximator such as a neural network. On the empirical side, there is no common ground between recently proposed empirical benchmarks. This is partially caused by multiple confounding factors for RL generalization that can be hard to identify and separate. For instance, an agent can overfit to the MDP dynamics of the training set, such as for control in Mujoco . In other cases, an RNN-based policy can overfit to maze-like tasks in exploration, or even exploit determinism and avoid using observations . Furthermore, various hyperparameters such as the batch-size in SGD , choice of optimizer , discount factor γ and regularizations such as entropy and weight norms can also affect generalization. Due to these confounding factors, it can be unclear what parts of the MDP or policy are actually contributing to overfitting or generalization in a principled manner, especially in empirical studies with newly proposed benchmarks. In order to isolate these factors, we study one broad factor affecting generalization that is most correlated with themes in SL, specifically observational overfitting, where an agent overfits due to properties of the observation which are irrelevant to the latent dynamics of the MDP family. To study this factor, we fix a single underlying MDP's dynamics and generate a distribution of MDP's by only modifying the observational outputs. Our contributions in this paper are the following: 1. We discuss realistic instances where observational overfitting may occur and its difference from other confounding factors, and design a parametric theoretical framework to induce observational overfitting that can be applied to any underlying MDP. 2. 
We study observational overfitting with linear quadratic regulators (LQR) in a synthetic environment and neural networks such as multi-layer perceptrons (MLPs) and convolutions in classic Gym environments. A primary novel result we demonstrate for all cases is that implicit regularization occurs in this setting in RL. We further test the implicit regularization hypothesis on the CoinRun benchmark using MLPs, even when the underlying MDP dynamics are changing per level. 3. In the Appendix, we expand upon previous experiments by including full training curves and hyperparameters. We also provide an extensive analysis of the convex one-step LQR case under the observational overfitting regime, showing that under Gaussian initialization of the policy and using gradient descent on the training cost, a generalization gap must necessarily exist. The structure of this paper is outlined as follows: Section 2 discusses the motivation behind this work and the synthetic construction to abstract certain observation effects. Section 3 demonstrates numerous experiments using this synthetic construction that suggest implicit regularization is at work. Finally, Section 3.4 tests the implicit regularization hypothesis on CoinRun, as well as ablates various ImageNet architectures and margin metrics in the Appendix. We start by showing an example of observational overfitting in Figure 1. This example highlights the issues surrounding MDP's with rich, textured observations -specifically, the agent can use any features that are correlated with progress, even those which may not generalize across levels. This is an important issue for vision-based policies, as many times it is not obvious what part of the observation causes an agent to act or generalize. Figure 1: Example of observational overfitting in Sonic from Gym Retro . Saliency maps highlight (in red) the top-left timer and objects such as clouds and textures because they are correlated with progress, as they move backwards while the agent is moving forwards. The agent could memorize optimal actions for training levels even if its observation was only from the timer, and "blacking-out" the timer consistently improved generalization performance (see Appendix A.2.3). Currently most architectures used in model-free RL are simple (with fewer than one million parameters) compared to the much larger and more complex ImageNet architectures used for classification. This is due to the fact that most RL environments studied either have relatively simple and highly structured images (e.g. Atari) compared to real world images, or conveniently do not directly force the agent to observe highly detailed images. For instance in large scale RL such as DOTA2 or Starcraft 2 , the agent observations are internal minimaps pertaining to object xy-locations, rather than human-rendered observations. Several artificial benchmarks (b;) have been proposed before to portray this notion of overfitting, where an agent must deal with a changing background; however, a key difference in our work is that we explicitly require the "background" to be correlated with the progress rather than loosely correlated (e.g. through determinism between the background and the game avatar) or not at all. This makes a more explicit connection to causal inference (; ;) where spurious correlations between ungeneralizable features and progress may make training easy, but are detrimental to test performance because they induce false attributions.
Previously, many works interpret the decision-making of an agent through saliency and other network visualizations (; on common benchmarks such as Atari. Other recent works such as analyze the interactions between noise-injecting explicit regularizations and the information bottleneck. However, our work is motivated by learning theoretic frameworks to capture this phenomena, as there is vast literature on understanding the generalization properties of SL classifiers (; ;) and in particular neural networks (b; ; ; c). For an RL policy with high-dimensional observations, we hypothesize its overfitting can come from more theoretically principled reasons, as opposed to purely good inductive biases on game images. As an example of what may happen in high dimensional observation space, consider linear least squares regression task where given the set X ∈ R m×d and Y ∈ R m, we want to find w ∈ R d that minimizes X,Y (w) = Y − Xw 2 where m is the number of samples and d is the input dimension. We know that if X X is full rank (hence d ≤ m), X,Y has a unique global minimum w * = (X X) −1 X Y. On the other hand if X X is not full rank (eg. when m < d), then there are many global minima w * such that Y = Xw * 1. Luckily, if we use any gradient based optimization to minimize the loss and initialize with w = 0, the solution will only span column spaces of X and converges to minimum 2 norm solution among all global minima due to implicit regularization . Thus a high dimensional observation space with a low dimensional state space can induce multiple solutions, some of which are not generalizable to other functions or MDP's but one could hope that implicit regularization would help avoiding this issue. We analyze this case in further detail for the convex one-step LQR case in Section 3.1 and Appendix A.4.3. In the zero-shot framework for RL generalization, we assume there exists a distribution D over MDP's M for which there exists a fixed policy π opt that can achieve maximal return on expectation over MDP's generated from the distribution. An appropriate finite training set M train = {M 1, . . ., M n} can then be created by repeatedly randomly sampling M ∼ D. Thus for a MDP M and any policy π, expected episodic reward is defined as R M (π). In many empirical cases, the support of the distribution D is made by parametrized MDP's where some process, given a parameter θ, creates a mapping θ → M θ (e.g. through procedural generation), and thus we may simplify notation and instead define a distribution Θ that induces D, which implies a set of samples Θ train = {θ 1, . . ., θ n} also induces a M train = {M 1, . . ., M n}, and we may redefine reward as R M θ (π) = R θ (π). 1 Given any X with full rank X X, it is possible to create many global minima by projecting the data onto As a simplified model of the observational problem from Sonic, we can construct a mapping θ → M θ by first fixing a base MDP M = (S, A, r, T), which corresponds to state space, action space, reward, and transition. The only effect of θ is to introduce an additional observation function φ θ: S → O, where the agent receives input from the high dimensional observation space O rather than from the state space S. Thus, for our setting, θ actually parameterizes a POMDP family which can be thought of as simply a combination of a base MDP M and an observational function φ θ, hence Let Θ train = {θ 1, . . ., θ n} be a set of n i.i.d. 
samples from Θ, and suppose we train π to optimize reward against this training set, where J_Θtrain(π) = (1/n) Σ_{i=1}^{n} R_θi(π) is the average reward over this empirical sample. We want to generalize to the distribution Θ, which can be expressed as the average episode reward over the full distribution, i.e. J_Θ(π) = E_{θ∼Θ}[R_θ(π)]. Thus we define the generalization gap as J_Θtrain(π) − J_Θ(π). We can model the effects of Figure 1 more generally, not specific to sidescroller games. We assume that there is an underlying state s (e.g. xy-locations of objects in a game), whose features may be very well structured, but that this state has been projected to a high dimensional observation space by φ θ. To abstract the notion of generalizable and non-generalizable features, we construct a simple and natural candidate class of functions, φ_θ(s) = h(f(s), g_θ(s)). In this setup, f (·) is a function invariant for the entire MDP population Θ, while g θ (·) is a function dependent on the sampled parameter θ. h is a "combination" function which combines the two outputs of f and g to produce a final observation. While f projects this latent data into salient and important, invariant features such as the avatar, monsters, and items, g θ projects the latent data to unimportant features that do not contribute to extra generalizable information, and can cause overfitting, such as the changing background or textures. A visual representation is shown in Figure 2. This is a simplified but still insightful model relevant in more realistic settings. For instance, in settings where g θ does matter, learning this separation and task-identification could potentially help fast adaptation in meta-learning . From now on, we denote this setup as the (f, g)-scheme. This setting also leads to more interpretable generalization bounds - Lemma 2 of prior work provides a high-probability (1 − δ) bound on the "intrinsic" generalization gap when m levels are used, in terms of a Rademacher Complexity term under the MDP, where the θ i play the role of the ζ i parameters used in the original work, and the transition T and initialization I are fixed, and therefore omitted, to accommodate our setting. The Rademacher Complexity term captures how invariant the policies in the set Π are with respect to θ. For most RL benchmarks, this is not interpretable due to multiple confounding factors such as the varying level dynamics. For instance, it is difficult to imagine what behaviors or network weights a policy would possess in order to produce the same total rewards, regardless of changing dynamics. However, in our case, because the environment parameters θ only enter through g θ, the Rademacher Complexity is directly based on how much the policy "looks at" g θ. More formally, let Π * be the set of policies π * which are not affected by changes in g θ; i.e. ∇ θ π * (φ θ (s)) = 0 ∀s and thus R θ (π *) = R const ∀θ, which implies that the environment parameter θ has no effect on the reward. Normally, in an MDP such as a game, the concatenation operation may be dependent on time (e.g. textures move around in the frame). In the scope of this work, we simplify the concatenation effect and assume h(·) is a static concatenation, but are still able to demonstrate insightful properties. We note that this inductive bias on h allows explicit regularization to trivially solve this problem, by penalizing a policy's first layer that is used to "view" g θ (s) (Appendix A.1.1), hence we only focus on implicit regularizations. This setting is naturally attractive for analyzing architectural differences, as it is more closely related in spirit to image classifiers and SL.
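A tiny sketch may make the (f, g)-scheme with a static concatenation h concrete. The dimensions and the use of plain random linear maps for f and g_θ are illustrative assumptions only:

```python
import numpy as np

def make_observation_fn(theta, d_state=16, d_signal=16, d_noise=128):
    """phi_theta(s) = h(f(s), g_theta(s)) with h a static concatenation:
    f is shared across the whole MDP family, g_theta is seeded by the level id theta."""
    rng_f = np.random.default_rng(0)         # fixed seed: f is level-invariant
    W_f = rng_f.standard_normal((d_signal, d_state))
    rng_g = np.random.default_rng(theta)     # level id seeds the nuisance features
    W_g = rng_g.standard_normal((d_noise, d_state))
    return lambda s: np.concatenate([W_f @ s, W_g @ s])

phi_7 = make_observation_fn(theta=7)
print(phi_7(np.ones(16)).shape)   # (144,) = d_signal + d_noise
```

A policy that ignores the last d_noise coordinates of such an observation belongs to the invariant set Π* described above.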
One particular line of work to explain the effects of certain architectural modifications in SL such as overparametrization and residual connections is implicit regularization (a; ;), as overparametrization through more layer depth and wider layers has been proven to have no ℓp-regularization equivalent , but rather preconditions the dynamics during training. Thus, in order to fairly experimentally measure this effect, we always use fixed hyperparameters and only vary the architecture. In this work, we only refer to architectural implicit regularization techniques, which do not have an explicit regularization equivalent. Some techniques, e.g. coordinate descent, are equivalent to explicit ℓ1-regularization. We first analyze the case of the LQR as a surrogate for what may occur in deep RL, which has been done before for various topics such as sample complexity and model-based RL. This is analogous to analyzing linear/logistic regression as a surrogate to understanding extensions to deep SL techniques (a;). In particular, this has numerous benefits - the cost (negative of reward) function is deterministic, and allows exact gradient descent (i.e. the policy can differentiate through the cost function) as opposed to necessarily using stochastic gradients in normal RL, and thus can cleanly provide evidence of implicit regularization. Furthermore, in terms of gradient dynamics and optimization, LQR readily possesses nontrivial qualities compared to linear regression, as the LQR cost is a non-convex function but all of its minima are global minima . To show that overparametrization alone is an important implicit regularizer in RL, LQR allows the use of linear policies (and consequently also allows stacking linear layers) without requiring a stochastic output such as a discrete Gumbel-softmax or, for the continuous case, a parametrized Gaussian. This setting is able to show that overparametrization alone can affect gradient dynamics, and is not a consequence of extra representation power due to additional non-linearities in the policy. There have been multiple recent works on this linear-layer stacking in SL and other theoretical problems such as matrix factorization and matrix completion (b; a;), but to our knowledge, we are the first to analyze this case in the context of RL generalization. We explicitly describe the setup as follows: for a given θ, we let f (s) = W c · s, while g θ (s) = W θ · s, where W c, W θ are semi-orthogonal matrices, to prevent information loss relevant to outputting the optimal action, as the state is transformed into the observation. Hence, if s t is the underlying state at time t, then the observation is o t = [W c ; W θ] s t (the vertical concatenation of W c s t and W θ s t), and thus the action is a t = Ko t, where K is the policy matrix. While W c remains a constant matrix, we sample W θ randomly, using the "level ID" integer θ as the seed for random generation. In terms of dimensions, if s is of shape d state, then f also projects to a shape of d state, while g θ projects to a much larger shape d noise, implying that the observation to the agent is of dimension d signal + d noise. In our experiments, we use fixed default values for (d signal, d noise). If P is the unique minimizer of the original cost function, then the unique minimizer of the population cost is K* = [P W c^T, 0]. However, if we have a single level, then there exist multiple solutions of the form K α = [(1 − α) P W c^T, α P W θ^T], ∀α. The extra bottom block of K α^T, namely the component W θ P^T, causes overfitting.
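A small numpy check of this family of solutions is given below. The dimensions are illustrative, and the sketch only verifies the induced state-to-action map K [W_c ; W_θ] rather than running the full LQR:

```python
import numpy as np

def semi_orthogonal(rows, cols, seed):
    """Random matrix with orthonormal columns; the level id is used as the seed."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((rows, cols)))
    return q                                  # q.T @ q = I

n, p = 4, 32                                  # d_signal = d_state = n, d_noise = p
W_c = semi_orthogonal(n, n, seed=0)           # shared signal projection
W_train = semi_orthogonal(p, n, seed=1)       # g_theta of the single training level
P = np.random.default_rng(2).standard_normal((n, n))   # optimal low-dimensional policy

def K_alpha(alpha, W_theta):
    # K = [(1 - alpha) P W_c^T, alpha P W_theta^T] acts on o = [W_c s ; W_theta s]
    return np.hstack([(1 - alpha) * P @ W_c.T, alpha * P @ W_theta.T])

stack = lambda W_theta: np.vstack([W_c, W_theta])
for alpha in (0.0, 1.0):
    K = K_alpha(alpha, W_train)
    train_err = np.linalg.norm(K @ stack(W_train) - P)                       # ~0 for every alpha
    test_err = np.linalg.norm(K @ stack(semi_orthogonal(p, n, seed=3)) - P)  # grows with alpha
    print(f"alpha={alpha}: train err {train_err:.1e}, test err {test_err:.2f}")
```

Every α fits the training level exactly, but only α = 0 (the solution that ignores the W_θ block) transfers to a freshly sampled level, which is the overfitting effect described above.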
In Appendix A.4.3, we show that in the 1-step LQR case (which can be extended to convex losses whose gradients are linear in the input), gradient descent cannot remove this component, and thus overfitting necessarily occurs. Furthermore, we find that increasing d noise increases the generalization gap in the LQR setting. This is empirically verified in Figure 3 using an actual non-convex LQR loss, and the results suggest that the gap scales as O(√ d noise). In terms of overparametrization, we experimentally added more (100 × 100) linear layers K = K 0 K 1,..., K j and increased widths for a 2-layer case (Figure 3), and observe that both settings reduce the generalization gap, and also reduce the norms (spectral, nuclear, Frobenius) of the final end-to-end policy K, without changing its expressiveness. This suggests that gradient descent under overparametrization implicitly biases the policy towards a "simpler" model in the LQR case. We also see that the naive spectral bound diverges at 2 layers, and the weight-counting sums are too loose. As a surrogate model for deep RL, one may ask if the generalization gap of the final end-to-end policy K can be predicted by functions of the layers K 0,..., K j. This is an important question as it is a required base case for predicting generalization when using stochastic policy gradient with nonlinear activations such as ReLU or Tanh. From examining the distribution of singular values of K (Appendix A.1.1), we find that more layers do not bias the policy towards a low rank solution in the nonconvex LQR case, unlike (b) which shows this does occur for matrix completion, and in general, convex losses. Ultimately, we answer in the negative: intriguingly, SL bounds have very little predictive power in the RL domain case. To understand why SL bounds may be candidates for the LQR case, we note that a basic smoothness bound C(K) − C(K′) ≤ O(‖K − K′‖) (Appendix A.4) can lead to very similar reasoning as SL bounds. Since our setup is similar to SL in that the "LQR levels" may be interpreted as a dataset, we use bounds of the form ∆ · Φ, where ∆ is a "macro" product term over the layer norms and Φ is the remaining layer-wise term. However, the Φ terms increase too rapidly, as shown in Figure 3. Terms such as the Frobenius product and Fischer-Rao are effective for the SL depth case, but are both ineffective in the LQR depth case. For width, the only product which is effective is the nuclear norm product. In Section 3.1, we find that observational overfitting exists and overparametrization potentially helps in the linear setting. In order to analyze the case when the underlying dynamics are nonlinear, we let M be a classic Gym environment and we generate M θ = (M, φ θ) by performing the exact same (f, g)-scheme as the LQR case, i.e. sampling θ to produce an observation function φ θ. We again can produce training/test sets of MDPs by repeatedly sampling θ, and for policy optimization, we use Proximal Policy Gradient . Although bounds on the smoothness term R θ (π) − R θ (π′) affect upper bounds on the Rademacher Complexity (and thus generalization bounds), we have no such theoretical guarantees in the Mujoco case, as it is difficult to analyze the smoothness term for complicated transitions such as Mujoco's physics simulator. However, in Figure 4, we can observe empirically that the underlying state dynamics has a significant effect on generalization performance, as the policy nontrivially increased test performance in environments such as CartPole-v1 and Swimmer-v2, while it could not for others.
This suggests that the Rademacher complexity and smoothness of the reward function vary highly across different environments. Figure 4: Each Mujoco task is given 10 training levels (randomly sampling g θ parameters). We used a 2-layer ReLU policy with 128 hidden units each. Even though it is common practice to use basic (2-layer) MLPs in these classic benchmarks, there are highly nontrivial generalization effects from modifications on this class of architectures. Our results in Figures 5 and 6 show that increasing width and depth for basic MLPs can increase generalization, that this is significantly dependent on the choice of activation, and that other implicit regularizations such as using residual layers can also improve generalization. Specifically, switching between ReLU and Tanh activations produces different results during overparametrization. For instance, increasing Tanh layers improves generalization on CartPole-v1, and width increase with ReLU helps on Swimmer-v2. Tanh is noted to consistently improve generalization performance. However, stacking Tanh layers comes at the cost of also producing vanishing gradients, which can produce subpar training performance, e.g. for HalfCheetah. To allow larger depths, we use ReLU residual layers, which also improve generalization and stabilize training. Previous work did not find such an architectural pattern for GridWorld environments, suggesting that this effect may exist primarily for observational overfitting cases. While there have been numerous works which avoid overparametrization by simplifying policies or compactifying networks , we instead find that there are generalization benefits to overparametrization even in the nonlinear control case. From the above results with MLPs, one may wonder if similar results carry over to convolutional networks, as they are widely used for vision-based RL tasks. As a ground-truth reference for our experiment, we use the canonical networks proven to generalize well on the CoinRun dataset, which are, from worst to best, NatureCNN (Mnih et al.), IMPALA (Espeholt et al., 2018), and IMPALA-LARGE (IMPALA with more residual blocks and higher convolution depths), which have respective parameter counts of (600K, 622K, 823K). We set up a similar (f, g)-scheme appropriate for the inductive bias of convolutions, by passing the vanilla Gym 1D state corresponding to joint locations and velocities through multiple deconvolutions. We do so rather than using the RGB image from env.render to enforce that the actual state is indeed low dimensional and to minimize complications in experimentation, as e.g. inference of velocity information would require frame-stacking. Specifically, in our setup, we project the actual state to a fixed length, reshaping it into a square, and replace f and g θ both with the same orthogonally-initialized deconvolution architecture to each produce an 84 × 84 image (but g θ's network weights are still generated by θ 1, ..., θ m similar to before). We combine the two outputs by using one half of the "image" from f, and one half from g θ, as shown back in Figure 2. Our results show that the same ranking between the three architectures exists as well on the GymDeconv dataset. We show that the generalization ranking among NatureCNN/IMPALA/IMPALA-LARGE remains the same regardless of whether we use our synthetic constructions or CoinRun.
This suggests that the RL generalization quality of a convolutional architecture is not limited to real world data, as our test purely uses numeric observations, which are not based on a human-prior. From these findings, one may conjecture that these RL generalization performances are highly correlated and may be due to common factors. One of these factors we suggest is due to implicit regularization. In order to support this claim, we perform a memorization test by only showing g θ's output to the policy. This makes the dataset impossible to generalize to, as the policy network cannot invert every single observation function {g θ1 (·), g θ2 (·),..., g θn (·)} simultaneously. also constructs a memorization test for mazes and grid-worlds, and showed that more parameters increased the memorization ability of the policy. While it is intuitive that more parameters would incur more memorization, we show in Figure 8 that this is perhaps not a complete picture when implicit regularization is involved. Using the underlying MDP as a Swimmer-v2 environment, we see that NatureCNN, IMPALA, IMPALA-LARGE have reduced memorization performances. IMPALA-LARGE, which has more depth parameters and more residual layers (and thus technically has more capacity), memorizes less than IMPALA due its inherent inductive bias. While memorization performance is dampened in 8, we perform another deconvolution memorization test using an LQR as the underlying MDP in Appendix A.1.1 that shows that there can exist specific hard limits to memorization, which also follows the same ranking above. We further test our overparametrization hypothesis from Sections 3.1, 3.2 to the CoinRun benchmark, using unlimited levels for training. For MLP networks, we downsized CoinRun from native 64 × 64 to 32 × 32, and flattened the 32 × 32 × 3 image for input to an MLP. Two significant differences from the synthetic cases are that 1. Inherent dynamics are changing per level in CoinRun, and 2. The relevant and irrelevant CoinRun features change locations across the 1-D input vector. Regardless, in Figure 9, we show that overparametrization can still improve generalization in this more realistic RL benchmark, much akin to (b) which showed that overparametrization for MLP's improved generalization on 32 × 32 × 3 CIFAR-10. Figure 9: Overparametrization improves generalization for CoinRun. While we also extend the case of large-parameter convolutional networks using ImageNet networks in Appendix A.2.1, an important question is how to predict the generalization gap only from the training phase. A particular set of metrics, popular in the SL community are margin distributions , as they deal with the case for softmax outputs which do not explicitly penalize the weight norm of a network, by normalizing the "confidence" margin of the logit outputs. While using margins on state-action pairs (from an on-policy replay buffer) is not technically rigorous, one may be curious to see if they have predictive power, especially as MLPs are relatively simple to norm-bound. We plotted these margin distributions in Appendix A.2.2, but found that the weight norm bounds used in SL are simply too dominant for this RL case. This, with the bound found earlier for the LQR case, suggests that current norm bounds are simply too loose for the RL case even though we have shown overparametrization helps generalization in RL, and hopefully this motivates more of the study of such theory. 
We have identified and isolated a key component of overfitting in RL as the particular case of "observational overfitting", which is particularly attractive for studying architectural implicit regularizations. We have analyzed this setting extensively, by examining 3 main components: 1. The analytical case of LQR and linear policies under exact gradient descent, which lays the foundation for understanding theoretical properties of networks in RL generalization. 2. The empirical but principled Projected-Gym case for both MLP and convolutional networks which demonstrates the effects of neural network policies under nonlinear environments. 3. The large scale case for CoinRun, which can be interpreted as a case where relevant features are moving across the input, where empirically, MLP overparametrization also improves generalization. We noted that current network policy bounds using ideas from SL are unable to explain overparametrization effects in RL, which is an important further direction. In some sense, this area of RL generalization is an extension of static SL classification from adding extra RL components. For instance, adding a nontrivial "combination function" between f and g θ that is dependent on time (to simulate how object pixels move in a real game) is both an RL generalization issue and potentially video classification issue, and extending to the memory-based RNN case will also be highly beneficial. Furthermore, it is unclear whether such overparametrization effects would occur in off-policy methods such as Q-learning and also ES-based methods. In terms of architectural design, recent works (; ;) have shed light on the properties of asymptotically overparametrized neural networks in the infinite width and depth cases and their performance in SL. Potentially such architectures (and a corresponding training algorithm) may be used in the RL setting which can possibly provide benefits, one of which is generalization as shown in this paper. We believe that this work provides an important initial step towards solving these future problems. We further verify that explicit regularization (norm based penalties) also reduces generalization gaps. However, explicit regularization may be explained due to the bias of the synthetic tasks, since the first layer's matrix may be regularized to only "view" the output of f, especially as regularizing the first layer's weights substantially improves generalization. Figure A2: Explicit Regularization on layer norms. We provide another deconvolution memorization test, using an LQR as the underlying MDP. While fg-Gym-Deconv shows that memorization performance is dampened, this test shows that there can exist specific hard limits to memorization. Specifically, NatureCNN can memorize 30 levels, but not 50; IMPALA can memorize 2 levels but not 5; IMPALA-LARGE cannot memorize 2 levels at all. Training, Test Rewards (f = NULL) IMPALA_2_levels IMPALA_5_levels IMPALA_30_levels IMPALA_LARGE_2_levels NatureCNN_30_levels NatureCNN_50_levels Figure A3: Deconvolution memorization test using LQR as underlying MDP. For reference, we also extend the case of large-parameter convolutional networks using ImageNet networks. We experimentally verify in Table 1 that large ImageNet models perform very differently in RL than SL. We note that default network with the highest test reward was IMPALA-LARGE-BN (IMPALA-LARGE, with Batchnorm) at ≈ 5.5 test score. 
In order to verify that this is inherently a feature learning problem rather than a combinatorial problem involving objects, such as in , we show that state-of-the-art attention mechanisms for RL such as Relational Memory Core (RMC) using pure attention on raw 32 × 32 pixels does not perform well here, showing that a large portion of generalization and transfer must be based on correct convolutional setups. We provide the training/testing curves for the ImageNet/large convolutional models used. Note the following: 1. RMC32x32 projects the native image from CoinRun from 64 × 64 to 32 × 32, and uses all pixels as components for attention, after adding the coordinate embedding found in . Optimal parameters were (mem slots = 4, head size = 32, num heads = 4, num blocks = 2, gate style = 'memory'). 2. Auxiliary Loss in ShakeShake was not used during training, only the pure network. 3. VGG-A is a similar but slightly smaller version of VGG-16. A key question is how to predict the generalization gap only from the training phase. A particular set of metrics, popular in the SL community are margin distributions , as they deal with the case for softmax categorical outputs which do not explicitly penalize the weight norm of a network, by normalizing the "confidence" margin of the logit outputs. While using margins on state-action pairs (from an on-policy replay buffer) is not technically rigorous, one may be curious to see if they have predictive power, especially as MLP's are relatively simple to norm-bound, and as seen from the LQR experiments, the norm of the policy may be correlated with the generalization performance. For a policy, the the margin distribution will be defined as (x, y) →, where F π (x) y is the logit value (before applying softmax) of output y given input x, and S is the matrix of states in the replay buffer, and R π is a norm-based Lipschitz measure on the policy network logits. In general, R π is a bound on the Lipschitz constant of the network but can also be simply expressions which allow the margin distribution to have high correlation with the generalization gap. Thus, we use measures inspired by recent literature in SL in which we designate Spectral-L1, Distance, and Spectral-Frobenius measures for R π, and we replace the classical supervised learning pair (x, y) = (s, a) with the state-action pairs found on-policy. The expressions for R π (after removing irrelevant constants) are as follows, with their analogous papers: 1. Spectral-L1 measure: ) 2. Distance measure: ) 3. Spectral-Fro measure: We verify in Figure A5, that indeed, simply measuring the raw norms of the policy network is a poor way to predict generalization, as it generally increases even as training begins to plateau. This is inherently because the softmax on the logit output does not penalize arbitrarily high logit values, and hence proper normalization is needed. The margin distribution converges to a fixed distribution even long after training has plateaued. However, unlike SL, the margin distribution is conceptually not fully correlated with RL generalization on the total reward, as a policy overconfident in some state-action pairs does not imply bad testing performance. This correlation is stronger if there are Lipschitz assumptions on state-action transitions, as noted in . For empirical datasets such as CoinRun, a metric-distance between transitioned states is ill-defined however. 
Nevertheless, the distribution over the on-policy replay buffer at each policy gradient iteration is a rough measure of overall confidence. We note that there are two forms of modifications, network dependent (explicit modifications to the policy -norm regularization, dropout, etc.) and data dependent (modifications only to the data in the replay buffer -action stochasticity, data augmentation, etc.). Ultimately however, we find that current norm measures R π become too dominant in the fraction, leading to the monotonic decreases in the means of the distributions as we increase parametrization. Figure A6: Margin Distributions at the end of training. In the Gym-Retro benchmark using Sonic , the agent is given 47 training levels with rewards corresponding to increases in horizontal location. The policy is trained until 5k reward. At test time, 11 unseen levels are partitioned into starting positions, and the rewards are measured and averaged. We briefly mention that the agent strongly overfits to the scoreboard (i.e. an artifact correlated with progress in the level), which may be interpreted as part of the output of g θ (·). In fact, the agent is still able to train to 5k reward from purely observing the timer as the observation. By blacking out this scoreboard with a black rectangle, we see an increase in test performance. Settings IMPALA NatureCNN Blackout 1250 ± 40 1141 ± 40 NoBlackout 1130 ± 40 1052 ± 40 for the the full solution and notations. Using the same notation (A, B, Q, R), denote C(K) = x0∼D x T 0 P K x 0 as the cost and u t = −Kx t as the policy, where P K satisfies the infinite case for the Lyapunov equation: We may calculate the precise LQR cost by vectorizing (i.e. flattening) both sides' matrices and using the Kroncker product ⊗, which leads to a linear regression problem on P K, which has a precise solution, implementable in TensorFlow: Parameter Generation A Orthogonal initialization, scaled 0.99 Orthogonal Initialization, scaled 0.5 The basis for producing f, g θ outputs is due to using batch matrix multiplication operations, or "BMV", where the same network architecture uses different network weights for each batch dimension, and thus each entry in a batchsize of B will be processed by the same architecture, but with different network weights. This is to simulate the effect of g θi. The numeric ID i of the environment is used as an index to collect a specific set of network weights θ i from a global memory of network weights (e.g. using tensorflow.gather). We did not use nonlinear activations for the BMV architectures, as they did not change the outcome of the . Architecture Setup BMV-Deconv (filtersize = 2, stride = 1, outchannel = 8, padding = "VALID") (filtersize = 4, stride = 2, outchannel = 4, padding = "VALID") (filtersize = 8, stride = 2, outchannel = 4, padding = "VALID") (filtersize = 8, stride = 3, outchannel = 3, padding = "VALID") BMV-Dense f: Dense 30, g: Dense 100 A.3.3 IMAGENET MODELS For the networks used in the supervised learning tasks, we direct the reader to the following repository: https://github.com/tensorflow/models/blob/master/research/ slim/nets/nets_factory.py. We also used the RMC: deepmind/sonnet/blob/ master/sonnet/python/modules/relational_memory.py See for the default parameters used for CoinRun. We only varied nminibatches in order to fit memory onto GPU. 
We also did not use RNN additions, in order to measure performance only from the feedforward network -the framestacking/temporal aspect is replaced by the option to present the agent velocity in the image. In this section, we use notation consistent with for our base proofs. However, in order to avoid confusion with a high dimensional policy K we described in 3.1, we denote our low dimensional base policy as P and state as s t rather than x t. Let · be the spectral norm of a matrix (i.e. largest singular value). Suppose C(P) was the infinite horizon cost for an (A, B, Q, R)-LQR where action a t = −P · s t, s t is the state at time t, state transition is s t+1 = A · s t + B · a t, and timestep cost is s T t Qs t + a T t Ra t. C(P) for an infinite horizon LQR, while known to be non-convex, still possess the property that when ∇C(P *) = 0, P * is a global minimizer, or the problem statement is rank deficient. To ensure that our cost C(P) always remains finite, we restrict our analysis when P ∈ P, where P = {P : P ≤ α and A − BP ≤ 1} for some constant α, by choosing A, B and the initialization of P appropriately, using the hyperparameters found in A.3.1. We further define the observation modified cost as C(K; W θ) = C K W c W θ T. As described in Lemma 16 of , we define and T P = sup X T P (X) X over all non-zero symmetric matrices X. Lemma 27 of provides a bound on the difference C(P) − C(P) for two different policies P, P when LQR parameters A, B, Q, R are fixed. During the derivation, it states that when P − P ≤ min σmin(Q)µ 4C(P) B (A−BP +1), P, then: C(P) − C(P) ≤ 2 T P (2 P R P − P + R P − P 2)+ 2 T P 2 2 B (A − BP + 1) P − P P 2 R Lemma 17 also states that: where Assuming that in our problem setup, x 0, Q, R, A, B were fixed, this means many of the parameters in the bounds are constant, and thus we conclude: C(P) − C(P) ≤ O C(P) 2 P 2 P − P (A − BP + B + 1) + P P − P 2 Since we assumed A − BP ≤ 1 or else T P (X) is infinite, we thus collect the terms: Since α is a bound on P for P ∈ P, note that P 2 P − P + P P − P 2 = P − P (P 2 + P + P − P) ≤ P − P (P 2 + P ( P + P) ≤ (3α 2) P − P From, this leads to the bound: Note that this directly implies a similar bound in the high dimensional observation case -in particular, We first start with a convex cost 1-step LQR toy example under this regime, which shows that linear components such as β 0 W θ T cannot be removed from the policy by gradient descent dynamics to improve generalization. To shorten notation, let W c ∈ R n×n and W θ ∈ R p×n, where n p. This is equivalent to setting d signal = d state = n and d noise = p, and thus the policy K ∈ R n×(n+p). In the 1-step LQR, we allow s 0 ∼ N (0, I), a 0 = K W c W θ s 0 and s 1 = s 0 + a 0 with cost 1 2 s 1 2, then C(K; W θ) = E s0 1 2 and Define the population cost as C(K):= E W θ [C(K; W θ)]. Let the notation O(p, n) denote the following set of orthogonal matrices: O(p, n) = {W ∈ R p×n : W T W = I}. We use the shorthand O(n) = O(n, n). Proposition 1. Suppose that W θ ∼ Unif(O(p, n)) and W c ∼ Unif(O(n)). Then Here, the expectation is over the randomness of the samples {W i} m i=1 and the initalization K 0. The contribution from E 1 is due to the generalization error of the minimum-norm stationary point of C m (·; {W i}). The contribution from E 2 is due to the full-rank initialization of K 0. 
We remark that our proof specifically relies on the rank of the Hessian as m increases, rather than a more common concentration inequality used in empirical risk minimization arguments, which leads to a 1 √ m scaling. Furthermore, the above expression for E[C(K ∞)] does not scale increasingly with poly(p) for the convex 1-Step LQR case, while empirically, the non-convex infinite LQR case does indeed increase from increasing the noise dimension p (as shown in Section 3.1). Interestingly, this suggests that there is an extra contribution from the non-convexity of the cost, where the observation-modified gradient dynamics tends to reach worse optima. A.4.3.2 PROOF OF THEOREM 1 Fix integers n, p with p ≥ n and suppose that n divides p. Draw a random W ∈ O(p) uniformly from the Haar measure on O(p) and divide W column-wise into W 1, W 2,..., W p/n (that is W i ∈ R p×n, W
We isolate one factor of RL generalization by analyzing the case when the agent only overfits to the observations. We show that architectural implicit regularizations occur in this regime.
390
scitldr
We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the structure information to form better semantic representations and better language modeling. Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic information. On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation. In this paper, we propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network. Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word/character-level language model tasks. Linguistic theories generally regard natural language as consisting of two parts: a lexicon, the complete set of all possible words in a language; and a syntax, the set of rules, principles, and processes that govern the structure of sentences BID46. To generate a proper sentence, tokens are put together with a specific syntactic structure. Understanding a sentence also requires lexical information to provide meanings, and syntactic knowledge to correctly combine meanings. Current neural language models can provide meaningful word representations BID0 BID41. However, standard recurrent neural networks only implicitly model syntax, and thus fail to efficiently use structure information BID53. Developing a deep neural network that can leverage syntactic knowledge to form a better semantic representation has received a great deal of attention in recent years BID50 BID53 BID11. Integrating syntactic structure into a language model is important for different reasons: 1) to obtain a hierarchical representation with increasing levels of abstraction, which is a key feature of deep neural networks and of the human brain BID1 BID31 BID47; 2) to capture complex linguistic phenomena, like the long-term dependency problem BID53 and compositional effects BID50; 3) to provide a shortcut for gradient back-propagation BID11. A syntactic parser is the most common source of structure information. Supervised parsers can achieve very high performance on well-constructed sentences. Hence, parsers can provide accurate information about how to compose word semantics into sentence semantics BID50, or how to generate the next word given previous words BID56. However, only major languages have treebank data for training parsers, and it requires expensive human expert annotation. People also tend to break language rules in many circumstances (such as writing a tweet). These defects limit the generalization capability of supervised parsers. Unsupervised syntactic structure induction has been among the longstanding challenges of computational linguistics BID23 BID25 BID2.
Researchers are interested in this problem for a variety of reasons: to be able to parse languages for which no annotated treebanks exist BID35; to create a dependency structure to better suit a particular NLP application BID56; to empirically argue for or against the poverty of the stimulus BID12 BID10; and to examine cognitive issues in language learning BID51. In this paper, we propose a novel neural language model: Parsing-Reading-Predict Networks (PRPN), which can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to form a better language model. With our model, we assume that language can be naturally represented as a tree-structured graph. The model is composed of three parts:
1. A differentiable neural Parsing Network that uses a convolutional neural network to compute the syntactic distance, which represents the syntactic relationships between all successive pairs of words in a sentence, and then makes soft constituent decisions based on the syntactic distance.
2. A Reading Network that recurrently computes an adaptive memory representation to summarize information relevant to the current time step, based on all previous memories that are syntactically and directly related to the current token.
3. A Predict Network that predicts the next token based on all memories that are syntactically and directly related to the next token.
We evaluate our model on three tasks: word-level language modeling, character-level language modeling, and unsupervised constituency parsing. The proposed model achieves (or is close to) the state-of-the-art on both word-level and character-level language modeling. The model's unsupervised parsing outperforms some strong baseline models, demonstrating that the structure found by our model is similar to the intrinsic structure provided by human experts. The idea of introducing some structure, especially trees, into language understanding to help a downstream task has been explored in various ways. For example, BID50; BID53 learn a bottom-up encoder, taking as an input a parse tree supplied from an external parser. There are models that are able to infer a tree during test time, but still need a supervised signal on the tree structure during training, for example BID49; BID59. Moreover, BID55 did an in-depth analysis of recursive models that are able to learn tree structure without being exposed to any grammar trees. Our model is also able to infer tree structure in an unsupervised setting, but different from theirs, it is a recurrent network that implicitly models tree structure through attention. Apart from the approach of using recursive networks to capture structure, there is another line of research which tries to learn recurrent features at multiple scales, which can be dated back to the 1990s (e.g. BID15 BID48; BID32). The NARX RNN BID32 is another example, which used a feedforward net taking different inputs with predefined time delays to model long-term dependencies. More recently, BID27 also used multiple layers of recurrent networks with different pre-defined updating frequencies. Instead, our model tries to learn the structure from data, rather than predefining it. In that respect, BID11 relates to our model since it proposes a hierarchical multi-scale structure with binary gates controlling intra-layer connections, and the gating mechanism is learned from data too.
The difference is that their gating mechanism controls the updates of higher layers directly, while ours controls it softly through an attention mechanism. In terms of language modeling, syntactic language modeling can be dated back to BID7. BID6; BID44 have also proposed language models with a top-down parsing mechanism. Recently, neural networks have been introduced into this space, with a model that learns both a discriminative and a generative model with top-down parsing, trained with a supervision signal from parsed sentences in the corpus. There are also dependency-based approaches using neural networks, including BID4; BID16; BID54. Parsers are also related to our work since they all infer grammatical tree structure given a sentence. For example, SPINN is a shift-reduce parser that uses an LSTM as its composition function. The transition classifier in SPINN is supervisedly trained on the Stanford PCFG Parser BID24 output. Unsupervised parsers are more aligned with what our model is doing. BID25 presented a generative model for the unsupervised learning of dependency structures. BID23 is a generative distributional model for the unsupervised induction of natural language syntax which explicitly models constituent yields and contexts. We compare our parsing quality with the aforementioned two papers in Section 6.3. Figure 1: Solid arrows represent the syntactic tree structure and parent-to-child dependency relations; dashed arrows represent dependency relations between siblings. Suppose we have a sequence of tokens x_0, ..., x_6 governed by the tree structure shown in FIG4. The leaves x_i are observed tokens. Node y_i represents the meaning of the constituent formed by its leaves x_{l(y_i)}, ..., x_{r(y_i)}, where l(·) and r(·) stand for the leftmost and rightmost child. Root r represents the meaning of the whole sequence. Arrows represent the dependency relations between nodes. The underlying assumption is that each node depends only on its parent and its left siblings. Directly modeling the tree structure is a challenging task, usually requiring supervision to learn BID53. In addition, relying on tree structures can result in a model that is not sufficiently robust when facing ungrammatical sentences BID19. In contrast, recurrent models provide a convenient way to model sequential data, with the current hidden state depending only on the last hidden state. This makes models more robust when facing nonconforming sequential data, but it suffers from neglecting the real dependency relations that dominate the structure of natural language sentences. In this paper, we use skip connections to integrate structured dependency relations with a recurrent neural network. In other words, the current hidden state does not only depend on the last hidden state, but also on previous hidden states that have a direct syntactic relation to the current one. FIG0 shows the structure of our model. The non-leaf node y_j is represented by a set of hidden states y_j = {m_i}_{l(y_j) ≤ i ≤ r(y_j)}, where l(y_j) is the leftmost descendant leaf and r(y_j) is the rightmost one. Arrows show skip connections built by our model according to the latent structure. Skip connections are controlled by gates g_i^t.
In order to define g_i^t, we introduce a latent variable l_t to represent the local structural context of x_t:
• if x_t is not the leftmost child of any subtree, then l_t is the position of x_t's leftmost sibling;
• if x_t is the leftmost child of a subtree y_i, then l_t is the position of the leftmost child that belongs to the leftmost sibling of y_i;
and the gates are defined as: DISPLAYFORM0 Given this architecture, the sibling dependency relation is modeled by at least one skip connection. The skip connection will directly feed information forward and pass gradients backward. The parent-to-child relation will be implicitly modeled by the skip-connection relations between nodes. The model recurrently updates the hidden states according to: DISPLAYFORM1 and the probability distribution for the next word is approximated by: DISPLAYFORM2 where g_i^t are gates that control skip connections. Both f and h have a structured attention mechanism that takes g_i^t as input and forces the model to focus on the most related information. Since l_t is an unobserved latent variable, we explain an approximation for g_i^t in the next section. The structured attention mechanism is explained in Section 5.1. In this section we give a probabilistic view on how to model the local structure of language. A detailed elaboration of this section is given in Appendix B. At time step t, p(l_t | x_0, ..., x_t) represents the probability of choosing one out of t possible local structures. We propose to model the distribution by the stick-breaking process: DISPLAYFORM0 The formula can be understood by noting that after the time steps i+1, ..., t−1 have their probabilities assigned, ∏_{j=i+1}^{t−1} α_j^t is the remaining probability, and 1 − α_i^t is the portion of the remaining probability that we assign to time step i. The variable α_j^t is parametrized in the next section. As shown in Appendix B, the expectation of the gate value g_i^t is the Cumulative Distribution Function (CDF) of p(l_t = i | x_0, ..., x_t). Thus, we can replace the discrete gate value by its expectation: DISPLAYFORM1 With these relaxations, Eq. 2 and 3 can be approximated by using a soft gating vector to update the hidden state and predict the next token. Inferring tree structure with Syntactic Distance. In Eq. 4, 1 − α_j^t is the portion of the remaining probability that we assign to position j. The stick-breaking process should assign high probability to l_t, which is the closest constituent-beginning word. The model should therefore assign large 1 − α_j^t to words beginning new constituents. If x_t itself is a constituent-beginning word, the model should assign large 1 − α_j^t to words beginning larger constituents. In other words, the model will consider longer dependency relations for the first word in a constituent. Given the sentence in FIG4, at time step t = 6, both 1 − α_2^6 and 1 − α_0^6 should be close to 1, and all other 1 − α_j^6 should be close to 0. In order to parametrize α_j^t, our basic hypothesis is that words in the same constituent should have a closer syntactic relation within themselves, and that this syntactic proximity can be represented by a scalar value. From the tree structure point of view, the shortest path between leaves in the same subtree is shorter than the one between leaves in different subtrees. To model syntactic proximity, we introduce a new feature, Syntactic Distance. For a sentence of length K, we define a set of K real-valued scalar variables d_0, ..., d_{K−1}, with d_i representing a measure of the syntactic relation between the pair of adjacent words (x_{i−1}, x_i).
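As a concrete illustration of Eq. 4-5, the sketch below computes the stick-breaking gate values from a toy vector of syntactic distances. The hardtanh parameterization of α_j^t used here anticipates the next subsection; its exact indexing, the value of τ, and the toy distances are assumptions made for the example rather than details taken from the text.

```python
# Illustrative sketch (not the reference implementation) of the stick-breaking gates.
import torch
import torch.nn.functional as F

def alphas(d, t, tau=10.0):
    """alpha_j^t for j = 0..t-1; close to 0 when d_j exceeds d_t (a boundary), else close to 1."""
    # Exact offset/indexing is an assumption for this sketch.
    return (F.hardtanh((d[t] - d[:t]) * tau) + 1.0) / 2.0

def gates(d, t, tau=10.0):
    """g_i^t = P(l_t <= i): product of alpha_j^t for j = i+1 .. t-1 (empty product = 1)."""
    a = alphas(d, t, tau)                                               # length t
    rev_cumprod = torch.flip(torch.cumprod(torch.flip(a, dims=[0]), dim=0), dims=[0])
    g = torch.ones(t)
    g[:-1] = rev_cumprod[1:]
    return g

d = torch.tensor([0.1, 0.9, 0.2, 0.1, 0.8, 0.3, 0.1])                   # toy distances d_0..d_6
print(gates(d, t=6))   # gates close at positions preceded by a larger distance (constituent boundary)
```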
x_{−1} could be the last word in the previous sentence or a padding token. For time step t, we want to find the closest word x_j that has a larger syntactic distance than d_t. Thus α_j^t can be defined as: DISPLAYFORM0 where hardtanh(x) = max(−1, min(1, x)). τ is the temperature parameter that controls the sensitivity of α_j^t to the differences between distances. The Syntactic Distance has some nice properties that both allow us to infer a tree structure from it and make it robust to intermediate non-valid tree structures that the model may encounter during learning. In Appendix C and D we list these properties and further explain the meanings of their values. Parameterizing Syntactic Distance. BID45 shows that it is possible to identify the beginning and ending words of a constituent using local information. In our model, the syntactic distance between a given token (which is usually represented as a vector word embedding e_i) and its previous token e_{i−1} is provided by a convolutional kernel over a set of consecutive previous tokens e_{i−L}, e_{i−L+1}, ..., e_i. This convolution is depicted as the gray triangles shown in FIG2. Each triangle here represents 2 layers of convolution. Formally, the syntactic distance d_i between tokens e_{i−1} and e_i is computed by this two-layer convolution. Convolving h and d over the whole sequence of length K yields a set of distances. For the tokens at the beginning of the sequence, we simply pad L − 1 zero vectors to the front of the sequence in order to get K − 1 outputs. Similar to the Long Short-Term Memory-Network (LSTMN) BID9, the Reading Network maintains the memory states by maintaining two sets of vectors: a hidden tape H_{t−1} = (h_{t−N_m}, ..., h_{t−1}), and a memory tape C_{t−1} = (c_{t−L}, ..., c_{t−1}), where N_m is the upper bound for the memory span. Hidden state m_i is now represented by a tuple of two vectors (h_i, c_i). The Reading Network captures the dependency relation by a modified attention mechanism: structured attention. At each step of recurrence, the model summarizes the previous recurrent states via the structured attention mechanism, then performs a normal LSTM update, with hidden and cell states output by the attention mechanism. DISPLAYFORM1 Structured Attention. At each time step t, the read operation attentively links the current token to previous memories with a structured attention layer: DISPLAYFORM2 where δ_k is the dimension of the hidden state. Modulated by the gates in Eq. 5, the structured intra-attention weight is defined as: DISPLAYFORM3 This yields a probability distribution over the hidden state vectors of previous tokens. We can then compute adaptive summary vectors for the previous hidden and memory tapes, denoted by h̃_t and c̃_t: DISPLAYFORM4 Structured attention provides a way to model the dependency relations shown in FIG4. Recurrent Update. The Reading Network takes x_t, c̃_t and h̃_t as input and computes the values of c_t and h_t by the LSTM recurrent update BID20. Then the write operation concatenates h_t and c_t to the end of the hidden and memory tapes. The Predict Network models the probability distribution of the next word x_{t+1}, conditioned on hidden states m_0, ..., m_t and gates g_0^{t+1}, ..., g_t^{t+1}. Note that, at time step t, the model cannot observe x_{t+1}, so a temporary estimate of d_{t+1} is computed from x_{t−L}, ..., x_t: DISPLAYFORM0 From there we compute the corresponding {α^{t+1}} and {g_i^{t+1}} for Eq. 3. We parametrize the f(·) function as: DISPLAYFORM1 Figure 4: Syntactic distance estimated by the Parsing Network. The model is trained on the PTB dataset at the character level.
Each blue bar is positioned between two characters and represents the syntactic distance between them. From these distances we can infer a tree structure according to Section 4.2. In the equation above, h_{l:t−1} is an adaptive summary of h_i for l_{t+1} ≤ i ≤ t−1, output by structured attention controlled by g_0^{t+1}, ..., g_{t−1}^{t+1}. f(·) could be a simple feed-forward MLP, or a more complex architecture, like ResNet, to add more depth to the model. We evaluate the proposed model on three tasks: character-level language modeling, word-level language modeling, and unsupervised constituency parsing. From a character-level view, natural language is a discrete sequence of data, where discrete symbols form a distinct and shallow tree structure: the sentence is the root, words are children of the root, and characters are leaves. However, compared to word-level language modeling, character-level language modeling requires the model to handle longer-term dependencies. We evaluate a character-level variant of our proposed language model over preprocessed versions of the Penn Treebank (PTB) and Text8 datasets. When training, we use truncated back-propagation, and feed the final memory position from the previous batch as the initial memory of the next one. At the beginning of training and test time, the model's initial hidden states are filled with zeros. Optimization is performed with Adam using learning rate lr = 0.003, weight decay w_decay = 10^{−6}, β_1 = 0.9, β_2 = 0.999 and σ = 10^{−8}. We carry out gradient clipping with maximum norm 1.0. The learning rate is multiplied by 0.1 whenever validation performance does not improve during 2 checkpoints. These checkpoints are performed at the end of each epoch. We also apply layer normalization to the Reading Network and batch normalization to the Predict Network and Parsing Network. For all of the character-level language modeling experiments, we apply the same procedure, varying only the number of hidden units, mini-batch size and dropout rate. Penn Treebank: we process the Penn Treebank dataset BID34 by following the procedure introduced in BID38. For character-level PTB, the Reading Network has two recurrent layers and the Predict Network has one residual block. The hidden state size is 1024 units. The input and output embedding sizes are 128, and not shared. Look-back range L = 10, temperature parameter τ = 10, upper bound of memory span N_m = 20. We use a batch size of 64 and truncated back-propagation with 100 timesteps. The dropout values used on input/output embeddings, between recurrent layers, and on recurrent states were (0, 0.25, 0.1) respectively. In Figure 4, we visualize the syntactic distance estimated by the Parsing Network while reading three different sequences from the PTB test set. We observe that the syntactic distance tends to be higher between the last character of a word and a space, which is a reasonable breakpoint to separate words. In other words, if the model sees a space, it will attend over all previous steps. If the model sees a letter, it will attend no further than the last space step. The model autonomously learned to avoid inter-word attention connections, and to use the hidden states of space (separator) tokens to summarize previous information. This is strong evidence that the model can understand the latent structure of the data.
As a result, our model achieves state-of-the-art performance and significantly outperforms baseline models.

Model | BPC
Norm-stabilized RNN BID28 | 1.48
CW-RNN BID27 | 1.46
HF-MRNN BID38 | 1.41
MI-RNN BID57 | 1.39
ME n-gram BID38 | 1.37
BatchNorm LSTM BID13 | 1.32
Zoneout RNN BID29 | 1.27
HyperNetworks BID18 | 1.27
LayerNorm HM-LSTM BID11 | 1.24
LayerNorm HyperNetworks BID18 | 1.23
PRPN | 1.202
Table 1: BPC on the Penn Treebank test set

It is worth noting that HM-LSTM BID11 also unsupervisedly induces similar structure from data, but the discrete operations in HM-LSTM make their training procedure more complicated than ours. Compared to character-level language modeling, word-level language modeling needs to deal with complex syntactic structure and various linguistic phenomena, but it has fewer long-term dependencies. We evaluate the word-level variant of our language model on a preprocessed version of the Penn Treebank (PTB) BID34 and the Text8 BID33 dataset. We apply the same procedure and hyper-parameters as in the character-level language model, except that optimization is performed with Adam with β_1 = 0. This turns off the exponential moving average for the estimates of the means of the gradients BID36. We also adapt the number of hidden units, mini-batch size and the dropout rate according to the different tasks. Penn Treebank: we process the Penn Treebank dataset BID38 by following the procedure introduced in BID39. For word-level PTB, the Reading Network has two recurrent layers and the Predict Network does not have a residual block. The hidden state size is 1200 units and the input and output embedding sizes are 800, and shared BID21 BID43. Look-back range L = 5, temperature parameter τ = 10 and the upper bound of memory span N_m = 15. We use a batch size of 64 and truncated back-propagation with 35 time-steps. The dropout values used on input/output embeddings, between recurrent layers, and on recurrent states were (0.7, 0.5, 0.5) respectively.

Model | PPL
RNN-LDA + KN-5 + cache BID38 | 92.0
LSTM BID58 | 78.4
Variational LSTM BID22 | 78.9
CharCNN BID22 | 78.9
Pointer Sentinel-LSTM BID37 | 70.9
LSTM + continuous cache pointer BID17 | 72.1
Variational LSTM (tied) + augmented loss | 68.5
Variational RHN (tied) BID61 | 65.4
NAS Cell (tied) BID62 | 62.4
4-layer skip connection LSTM (tied) BID36 | 58.3
PRPN | 61.98

Table 3: Ablation test on the Penn Treebank. "-Parsing Net" means that we remove the Parsing Network and replace Structured Attention with a normal attention mechanism; "-Reading Net Attention" means that we remove Structured Attention from the Reading Network, which is equivalent to replacing the Reading Network with a normal 2-layer LSTM; "-Predict Net Attention" means that we remove Structured Attention from the Predict Network, which is equivalent to having a standard projection layer; "Our 2-layer LSTM" is equivalent to removing the Parsing Network and removing Structured Attention from both the Reading and Predict Networks. Text8: the dataset contains 17M training tokens and has a vocabulary size of 44k words. The dataset is partitioned into a training set (first 99M characters) and a development set (last 1M characters) that is used to report performance. As this dataset contains various articles from Wikipedia, longer-term information (such as the current topic) plays a bigger role than in the PTB experiments BID42. We apply the same procedure and hyper-parameters as in character-level PTB, except that we use a batch size of 128. The dropout values used on input/output embeddings, between recurrent layers, and on recurrent states were (0.4, 0.2, 0.2) respectively.
Model | PPL
LSTM-500 BID42 | 156
SCRNN BID42 | 161
MemNN BID52 | 147
LSTM-1024 BID17 | 121
LSTM + continuous cache pointer BID17 | 99.9
PRPN | 81.64
Table 4: PPL on the Text8 valid set

In TAB1, our results are comparable to the state-of-the-art methods. Since we do not have the same computational resources used in BID36 to tune hyper-parameters at large scale, we expect that our model could achieve better performance after an aggressive hyper-parameter tuning process. As shown in Table 4, our method outperforms the baseline methods. It is worth noticing that the continuous cache pointer can also be applied to the output of our Predict Network without modification. Visualizations of tree structures generated from the learned PTB language model are included in Appendix A. In Table 3, we show the test perplexity for different variants of PRPN, each variant removing part of the model. By removing the Parsing Network, we observe a significant drop in performance. This stands as empirical evidence regarding the benefit of having structure information to control attention. The unsupervised constituency parsing task compares the tree structure inferred by the model with those annotated by human experts. The experiment is performed on the WSJ10 dataset. WSJ10 consists of the 7422 sentences in the Penn Treebank Wall Street Journal section which contain 10 words or less after the removal of punctuation and null elements. Evaluation was done by checking whether proposed constituent spans are also in the Treebank parse, measuring unlabeled F1 (UF_1) of unlabeled constituent precision and recall. Constituents which could not be gotten wrong (those of span one and those spanning entire sentences) were discarded. Given the mechanism discussed in Section 4.2, our model generates a binary tree. Although standard constituency parse trees are not limited to binary trees, previous unsupervised constituency parsing models also generate binary trees BID23 BID2. Our model is compared with several baseline methods, which are explained in Appendix E. Different from the previous experiment setting, the model treats each sentence independently during training and test time. When training, we feed one batch of sentences at each iteration. In a batch, shorter sentences are padded with 0. At the beginning of the iteration, the model's initial hidden states are filled with zeros. When testing, we feed sentences one by one to the model, then use the gate values output by the model to recursively combine tokens into constituents, as described in Appendix A.

Model | UF_1
LBRANCH | 28.7
RANDOM | 34.7
DEP-PCFG BID5 | 48.2
RBRANCH | 61.7
CCM BID23 | 71.9
DMV+CCM BID26 | 77.6
UML-DOP BID2 | 82.9
PRPN | 70.02
UPPER BOUND | 88.1
Table 5: Parsing performance on the WSJ10 dataset

Table 5 summarizes the results. Our model significantly outperforms the RANDOM baseline, indicating a high consistency with human annotation. Our model also shows comparable performance with the CCM model. In fact, our Parsing Network and CCM both focus on the relation between successive tokens. As described in Section 4.2, our model computes the syntactic distance between all successive pairs of tokens, then our parsing algorithm recursively assembles tokens into constituents according to the learned distance. CCM also recursively models the probability that a contiguous subsequence of a sentence is a constituent. Thus, one can understand why our model is outperformed by the DMV+CCM and UML-DOP models. The DMV+CCM model has extra information from a dependency parser.
The UML-DOP approach captures both contiguous and non-contiguous lexical dependencies BID2. In this paper, we propose a novel neural language model that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. We introduce a new neural parsing network, the Parsing-Reading-Predict Network, that can make differentiable parsing decisions. We use a new structured attention mechanism to control skip connections in a recurrent neural network. Hence, induced syntactic structure information can be used to improve the model's performance. Via this mechanism, the gradient can be directly back-propagated from the language model loss function into the neural Parsing Network. The proposed model achieves (or is close to) the state-of-the-art on both word- and character-level language modeling tasks. Experiments also show that the inferred syntactic structure is highly correlated with human expert annotations. A INFERRED TREE STRUCTURE The tree structure is inferred from the syntactic distances yielded by the Parsing Network. We first sort the d_i's in decreasing order. For the first d_i in the sorted sequence, we separate the sentence into constituents ((x_{<i}), (x_i, (x_{>i}))). Then we separately repeat this operation for the constituents (x_{<i}) and (x_{>i}), until each constituent contains only one word. In this section we give a probabilistic view on how to model the local structure of language. Given the nature of language, sparse connectivity can be enforced as a prior to improve the generalization and interpretability of the model. At time step t, p(l_t | x_0, ..., x_t) represents the probability of choosing one out of t possible local structures that define the conditional dependencies. If l_t = t', it means x_t depends on all the previous hidden states from m_{t'} to m_{t−1} (t' ≤ t). A particularly flexible option for modeling p(l_t | x_0, ..., x_t) is the Dirichlet Process, since being nonparametric allows us to attend on as many words as there are in a sentence; i.e., the number of possible structures (mixture components) grows with the length of the sentence. As a result, we can write the probability of l_{t+1} = t' as a consequence of the stick-breaking process: DISPLAYFORM0 for 1 ≤ t' < t − 1, and DISPLAYFORM1 where α_j = 1 − β_j and β_j is a sample from a Beta distribution. Once we sample l_t from the process, the connectivity is realized by an element-wise multiplication of an attention weight vector with a masking vector g^t defined in Eq. 1. In this way, x_t becomes functionally independent of all x_s for s < l_t. The expectation of this operation is the CDF of the probability of l_t, since DISPLAYFORM2 By telescopic cancellation, the CDF can be expressed in a succinct way: DISPLAYFORM3 for t' < t, and P(l_t ≤ t) = 1. However, being Bayesian nonparametric and assuming a latent variable model requires approximate inference. Hence, we make the following relaxations: 1. First, we relax the assumption and parameterize α_j^t as a deterministic function depending on all the previous words, which we describe in the next section. 2. We replace the discrete decision on the graph structure with a soft attention mechanism, by multiplying the attention weight with the multiplicative gate: DISPLAYFORM4 With these relaxations, Eq. can be approximated by using a soft gating vector to update the hidden state h and the predictive function f. This approximation is reasonable since the gate is the expected value of the discrete masking operation described above.
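The greedy top-down splitting procedure of Appendix A above can be written in a few lines; the sketch below is a plain-Python rendering under my own conventions (tokens as strings, distances as floats), not the released implementation.

```python
# Minimal sketch of the Appendix A procedure: the largest syntactic distance splits the
# sentence into ((x_<i), (x_i, (x_>i))), and both parts are split recursively.
def build_tree(tokens, distances):
    """tokens: list of words; distances[i] is the syntactic distance between tokens[i-1] and tokens[i]."""
    if len(tokens) <= 1:
        return tokens[0] if tokens else None
    # index of the largest distance strictly inside the span (position 0 pairs with the previous span)
    i = max(range(1, len(tokens)), key=lambda k: distances[k])
    left = build_tree(tokens[:i], distances[:i])
    right = build_tree(tokens[i + 1:], distances[i + 1:])
    return (left, (tokens[i], right)) if right is not None else (left, tokens[i])

tokens = ["the", "cat", "sat", "on", "the", "mat"]
distances = [0.0, 0.2, 0.9, 0.8, 0.3, 0.2]   # toy values; in PRPN these come from the Parsing Network
print(build_tree(tokens, distances))
```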
In this appendix, we show that having no partial overlap in dependency ranges is an essential property for recovering a valid tree structure, and that PRPN can provide a binary version of g_i^t that has this property. The masking vector g_i^t introduced in Section 4.1 determines the range of dependency, i.e., for the word x_t we have g_i^t = 1 for all l_t ≤ i < t. All the words that fall into the range l_t ≤ i < t are considered as x_t's siblings or offspring of its siblings. If the dependency ranges of two words are disjoint, the two words belong to two different subtrees. If one range contains another, the one with the smaller range is a sibling, or an offspring of a sibling, of the other word. However, if they partially overlap, they cannot form a valid tree. While Eq. 5 and Eq. 6 provide a soft version of the dependency range, we can recover a binary version by setting τ in Eq. 6 to +∞. The binary version of α_j^t corresponding to Eq. 6 becomes: DISPLAYFORM0 which is basically the sign of comparing d_t and d_{j+1}, scaled to the range between 0 and 1. Then for each of its previous tokens the gate value g_i^t can be computed through Eq. 5. Now for a certain x_t, we have DISPLAYFORM1 where DISPLAYFORM2 Now all the words that fall into the range t' ≤ i < t are considered as either siblings of x_t, or offspring of a sibling of x_t (FIG2). The essential point here is that, under this parameterization, the dependency ranges of any two tokens won't partially overlap. Here we provide a terse proof. Proof. Let's assume that the dependency ranges of x_v and x_n partially overlap. We should have g_i^v = 1 for u ≤ i < v and g_i^n = 1 for m ≤ i < n. Without loss of generality, we assume u < m < v < n, so that the two dependency ranges overlap in the range [m, v]. The second property comes from τ. The hyperparameter τ has an interesting effect on the tree structure: if it is set to 0, then for all t, the gates g_i^t will be open to all of e_t's predecessors, which will result in a flat tree where all tokens are direct children of the root node; as τ becomes larger, the number of levels of hierarchy in the tree increases. As it approaches +∞, hardtanh(·) becomes sign(·) and the dependency ranges form a valid tree. Note that, due to the linear part of the gating mechanism, which benefits training, when τ has a value between the two extremes the truncation ranges for different tokens may overlap. That may sometimes result in vagueness in some parts of the inferred tree. To eliminate this vagueness and ensure a valid tree, at test time we use τ = +∞. Under this framework, the values of the syntactic distance have more intuitive meanings. If two adjacent words are siblings of each other, the syntactic distance should approximate zero; otherwise, if they belong to different subtrees, they should have a larger syntactic distance. In the extreme case, the syntactic distance approaches 1 if the two words have no subtree in common. In FIG2 we show the syntactic distances for each adjacent token pair which result in the tree shown in FIG4. Our model is compared with the same baseline methods as in BID26. RANDOM chooses a binary tree uniformly at random from the set of binary trees. This is the unsupervised baseline. LBRANCH and RBRANCH choose the completely left- and right-branching structures, respectively. RBRANCH is a frequently used baseline for supervised parsing, but it should be stressed that it encodes a significant fact about English structure, and an induction system need not beat it to claim a degree of success.
UPPER BOUND is the upper bound on how well a binary system can do against the Treebank sentences. The Treebank sentences are generally flatter than binary trees, which limits the maximum precision that can be attained, since the additional brackets added to provide a binary tree will be counted as wrong. We also compared our model with other unsupervised constituency parsing methods. DEP-PCFG is a dependency-structured PCFG BID5. CCM is the constituent-context model BID23. DMV is an unsupervised dependency parsing model. DMV+CCM is a combined model that jointly learns both a constituency and a dependency parser BID25.
In this paper, we propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model.
391
scitldr
Unsupervised embedding learning aims to extract good representations from data without the use of human-annotated labels. Such techniques are in the limelight because of the challenges in collecting the massive-scale labels required for supervised learning. This paper proposes a comprehensive approach, called Super-AND, which is based on the Anchor Neighbourhood Discovery model. Multiple losses defined in Super-AND make similar samples gather even within a low-density space and keep features invariant against augmentation. As a result, our model outperforms existing approaches on various benchmark datasets and achieves an accuracy of 89.2% on CIFAR-10 with the ResNet18 backbone network, a 2.9% gain over the state-of-the-art. Deep learning and convolutional neural networks have become an indispensable technique in computer vision. Remarkable developments, in particular, were led by supervised learning that requires thousands or more labeled examples. However, high annotation costs have become a significant drawback in training a scalable and practical model in many domains. In contrast, unsupervised deep learning that requires no labels has recently started to get attention in computer vision tasks. From clustering analysis and self-supervised models to generative models, various learning methods have emerged and shown possibilities and prospects. Unsupervised embedding learning aims to extract visually meaningful representations without any label information. Here "visually meaningful" refers to finding features that satisfy two traits: (i) positive attention and (ii) negative separation. Data samples from the same ground truth class, i.e., positive samples, should be close in the embedding space (Fig. 1a); whereas those from different classes, i.e., negative samples, should be pushed far away in the embedding space (Fig. 1b). However, in the setting of unsupervised learning, a model cannot have knowledge about whether given data points are positive samples or negative samples. Several new methods have been proposed to find 'visually meaningful' representations. The sample specificity method considers all data points as negative samples and separates them in the feature space. Although this method achieves high performance, its decisions are known to be biased from learning only from negative separation. One approach utilizes data augmentation to consider positive samples in training, which efficiently reduces any ambiguity in supervision while keeping invariant features in the embedding space. Another approach is called the Anchor Neighborhood Discovery (AND) model, which alleviates the complexity in boundaries by discovering the nearest neighbor among the data points. Each of these approaches overcomes different limitations of the sample specificity method. However, no unified approach has been proposed. This paper presents a holistic method for unsupervised embedding learning, named Super-AND, which extends the AND algorithm and unifies various but dominant approaches in this domain with its unique architecture. Our proposed model not only focuses on learning distinctive features across neighborhoods, but also emphasizes edge information in embeddings and maintains the unchanging class information from the augmented data. Besides combining existing techniques, we newly introduce the Unification Entropy loss (UE-loss), an adversary of the sample specificity loss, which is able to gather similar data points within a low-density space.
Extensive experiments are conducted on several benchmark datasets to verify the superiority of the model. The results show the synergetic advantages among the modules of Super-AND. The main contributions of this paper are as follows:
• We effectively unify various techniques from state-of-the-art models and introduce a new loss, the UE-loss, to make similar data samples gather in the low-density space.
• Super-AND outperforms all baselines on various benchmark datasets. It achieved an accuracy of 89.2% on the CIFAR-10 dataset with the ResNet18 backbone network, compared to the state-of-the-art that gained 86.3%.
• The extensive experiments and the ablation study show that every component in Super-AND contributes to the performance increase, and also indicate that their synergies are critical.
Our model's outstanding performance is a step closer to the broader adoption of unsupervised techniques in computer vision tasks. The premise of data-less embedding learning lies in its applicability to practical scenarios, where there exist only one or two examples per cluster. Code and trained data for Super-AND are accessible via a GitHub link. Generative model. This type of model is a powerful branch of unsupervised learning. By reconstructing the underlying data distribution, a model can generate new data points as well as features from images without labels. Generative adversarial networks have led to rapid progress in image generation problems. While some attempts have been made in terms of unsupervised embedding learning, the main objective of generative models lies in mimicking the true distribution of each class, rather than discovering the distinctive categorical information the data contains. Self-supervised learning. This type of learning uses inherent structures in images as pseudo-labels and exploits these labels for back-propagation. For example, a model can be trained to create embeddings by predicting the relative position of a pixel from other pixels or the degree of changes after rotating images. Predicting future frames of a video can also benefit from this technique. The sample specificity method learns feature representations by capturing the apparent discriminability among instances. All of these methods are suitable for unsupervised embedding learning, although there exists a risk of false knowledge from generated labels that weakly correlate with the underlying class information. Learning invariants from augmentation. Data augmentation is a strategy that enables a model to learn from datasets with an increased variety of instances. Popular techniques include flipping, scaling, rotation, and grey-scaling. These techniques do not deform any crucial features of the data, but only change the style of the images. Some studies hence use augmentation techniques when training models so that the learned features remain invariant. Clustering analysis. This type of analysis is an extensively studied area in unsupervised learning, whose main objective is to group similar objects into the same class. Many studies either leveraged deep learning for dimensionality reduction before clustering or trained models in an end-to-end fashion. One line of work proposed a concept called deep cluster, an iterative method that updates its weights by predicting cluster assignments as pseudo-labels. However, directly reasoning about global structures without any label is error-prone. The AND model, which we extend in this work, combines the advantages of the sample specificity and clustering strategies to mitigate noisy supervision via neighborhood analysis. Problem definition.
Assume that there is an unlabeled image set I, and a batch set B with n images: B = {x_1, x_2, ..., x_n} ⊂ I. Our goal is to obtain a feature extractor f_θ whose representations (i.e., v_i = f_θ(x_i)) are "visually meaningful," a definition we discussed earlier. Let B̂ = {x̂_1, x̂_2, ..., x̂_n} be the augmentation set of the input batch B. Super-AND projects images x_i, x̂_i from batches B, B̂ to 128-dimensional embeddings v_i, v̂_i. During this process, the Sobel-processed images are also used, and feature vectors from both images are concatenated to emphasize edge information in the embeddings (see the left side in Fig. 2). Then, the model computes the probability of each image, p_i, p̂_i, being recognized as its own class with a non-parametric classifier (see the right side in Fig. 2). A temperature parameter (τ < 1) is added to ensure a label distribution with low entropy. To reduce the computational complexity of calculating embeddings for all images, we set up a memory bank M to save instance embeddings m_i accumulated from the previous steps, as similarly proposed in prior work. The memory bank M is updated by an exponential moving average. The probability vector p_i is defined in Eq 1, where the superscript j in vector notation (i.e., v^j) represents the j-th component value of a given vector. We define the neighborhood relationship vectors r_i, r̂_i, and compute these vectors by the cosine similarity between the embedding vectors v_i, v̂_i and the memory bank M (Eq 2). The extracted vectors are used to define the loss term that detects a discrepancy between neighborhoods. The loss term also enforces the features v to remain unchanged even after data augmentation. The loss term is given in Eq 6, where N is the set of progressively selected pairs discovered by the nearest neighbor algorithm, and V, R, R̂, P are matrices of the concatenated embedded vectors v_i, r_i, r̂_i, p_i from the batch image set, respectively. w(t) is the hyper-parameter that controls the weight of the UE-loss. Algorithm 1 describes how to train Super-AND: given the unlabeled image set I and the encoder f_θ to train, the model iterates over the total number of rounds and epochs, computes the combined loss for each batch, and updates the weights by back-propagation. Existing clustering methods train networks to find an optimal mapping. However, their learned decisions are unstable due to initial randomness, and some overfitting can occur during the training period. To tackle these limitations, the AND model suggests a finer-grained clustering focusing on 'neighborhoods'. By regarding the nearest-neighbor pairs as local classes, AND can separate data points that belong to different neighborhood sets from those in the same neighborhood set. We adopt this neighborhood discovery strategy in Super-AND. The AND algorithm has three main steps: neighborhood discovery, progressive neighborhood selection with curriculum, and neighborhood supervision. For the first step, the k-nearest neighbors (k-NN) algorithm is used to discover all neighborhood pairs (Eq 7 and Eq 8), and these pairs are progressively selected for curriculum learning. We choose a small part of the neighborhood pairs in the first round, and gradually increase the amount of selection for training (Current/Total rounds × 100%).
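For concreteness, the following sketch shows one plausible reading of the non-parametric instance classifier (Eq 1) and the neighborhood relationship vectors (Eq 2) computed against the memory bank. It is an illustration rather than the released code: the temperature, embedding size, and memory-bank size are placeholder values, and the normalization details are assumptions.

```python
# Illustrative sketch of the instance-level classifier and neighborhood relations.
import torch
import torch.nn.functional as F

def instance_probs(v, memory, tau=0.1):
    """p_i^j: probability that embedding v_i is recognized as instance class j (Eq 1-style softmax)."""
    logits = v @ memory.t() / tau                    # (batch, N) similarity to every stored instance
    return F.softmax(logits, dim=1)

def neighborhood_relations(v, memory):
    """r_i: cosine similarity of v_i to every memory-bank entry (Eq 2-style)."""
    return F.normalize(v, dim=1) @ F.normalize(memory, dim=1).t()

N, d, batch = 1000, 128, 4
memory = F.normalize(torch.randn(N, d), dim=1)       # memory bank M of instance embeddings
v = F.normalize(torch.randn(batch, d), dim=1)        # embeddings of the current batch

p = instance_probs(v, memory)                        # (4, 1000), rows sum to 1
r = neighborhood_relations(v, memory)                # (4, 1000) cosine similarities
print(p.shape, r.shape, p.sum(dim=1))
```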
When selecting candidate neighborhoods for local classes, the entropy of the probability vector, H(x_i), is utilized as a criterion (Eq 9). The probability vector p_i, obtained from the softmax function (Eq 1), shows the visual similarity between training instances in a probabilistic manner. Data points with low entropy reside in a relatively low-density area and have only a few surrounding neighbors. Neighborhood pairs containing such data points likely share consistent and easily distinguishable features from other pairs. We select the neighborhood set N from Ñ in increasing order of entropy. The AND-loss function is defined to distinguish neighborhood pairs from one another. Data points from the same neighborhoods need to be classified into the same class (i.e., the left-hand term in Eq 10). If a data point is not present in any selected pair, it is considered to form an independent class (i.e., the right-hand term in Eq 10). Existing sample specificity methods consider every single data point as a prototype for a class. They use the cross-entropy loss to separate all data points in the L2-normalized embedding space. Because the space is confined by normalization, data points cannot be placed far away from one another, and this space limitation induces an effect that leads to a concentration of positive samples, as shown in Fig. 1a. The unification entropy loss (UE-loss) is able to further strengthen the concentration effect above. We define the UE-loss as the entropy of the probability vector p̃_i. The probability vector p̃_i is calculated from the softmax function and represents the similarity between instances, excluding the instance itself (Eq 11). By excluding one's own class, minimizing the loss makes nearby data points attract each other, a concept that is contrary to minimizing the sample specificity loss. Employing both the AND-loss and the UE-loss will enforce similar neighborhoods to be positioned close while keeping the overall neighborhoods as separated as possible. This loss is calculated as in Eq 12. Unsupervised embedding learning aims at training encoders to extract visually meaningful features that are consistent with ground truth labels. Such learning cannot use any external guidance on features. Several previous studies tried to infer which features are substantial in a roundabout way; data augmentation is one such solution. Since augmentation does not deform the underlying data characteristics, invariant features learned from the augmented data will still contain the class-related information. Naturally, a network trained on these features will show a performance gain. We define the Augmentation-loss to learn invariant image features. Assume that there is an image along with its augmented versions. We may regard every augmented instance as a positive sample. The neighborhood relationship vectors, which show the similarity to all instances stored in memory, should then be more similar between an image and its augmentation than between the image and other instances in the same batch. In Eq 13, the probability of an augmented instance being correctly identified as class i is denoted as p̂_i^i; and that of the i-th original instance being wrongly identified as class j (j ≠ i), as p̂_i^j. The Augmentation-loss is then defined to minimize misclassification over instances in all batches (Eq 14). The evaluation involved extensive experiments. We evaluated the model with different backbone networks on two kinds of benchmarks: coarse-grained and fine-grained.
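A minimal sketch of the UE-loss described above is given below: the softmax over memory-bank similarities is computed with each instance's own entry masked out, and the entropy of the resulting distribution is minimized. This is my own simplification rather than the released code; the temperature and sizes are placeholders.

```python
# Illustrative sketch of the Unification Entropy loss (Eq 11-12 in spirit).
import torch
import torch.nn.functional as F

def ue_loss(v, memory, indices, tau=0.1, eps=1e-8):
    """v: (b, d) batch embeddings; memory: (N, d) memory bank; indices: (b,) bank position of each instance."""
    logits = v @ memory.t() / tau                              # similarity to all stored instances
    mask = torch.zeros_like(logits, dtype=torch.bool)
    mask[torch.arange(v.size(0)), indices] = True
    logits = logits.masked_fill(mask, float("-inf"))           # exclude the instance's own class
    p_tilde = F.softmax(logits, dim=1)
    entropy = -(p_tilde * torch.log(p_tilde + eps)).sum(dim=1)
    return entropy.mean()                                      # minimizing pulls nearby points together

memory = F.normalize(torch.randn(500, 128), dim=1)
v = F.normalize(torch.randn(8, 128), dim=1)
idx = torch.arange(8)                                          # toy: batch instances are bank entries 0..7
print(ue_loss(v, memory, idx))
```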
Our ablation study helps identify which components of the model are critical to performance. Finally, the proposed model is compared to the original AND model from different perspectives. Datasets. A total of six image datasets are utilized; three are coarse-grained datasets, and one is used for qualitative analysis. Training. We used AlexNet and ResNet18 as the backbone networks. Hyper-parameters were tuned in the same way as in the AND algorithm. We used SGD with Nesterov momentum 0.9 as the optimizer. We fixed the learning rate at 0.03 for the first 80 epochs, and scaled it down by 0.1 every 40 epochs. The batch size is set to 128, and the model was trained for 5 rounds with 200 epochs per round. The weight for the UE-loss, w(t) (Eq 6), is initialized at 0 and increased by 0.2 every 80 epochs. For the Augmentation-loss, we used four types of augmentation: Resized Crop, Grayscale, ColorJitter, and Horizontal Flip. Horizontal Flip was not used in the case of the SVHN dataset because SVHN consists of digit images. The update momentum of the exponential moving average for the memory bank was set to 0.5. Evaluation. Following the method from prior work, we used the weighted k-NN classifier for making predictions. The top k nearest neighbors N_top were retrieved and used to predict the final outcome in a weighted fashion. We set k = 200 and defined the weight function for each class c from the similarities of the retrieved neighbors, where c_i is the class index for the i-th instance. Top-1 classification accuracy was used for evaluation. Baseline models. We adopt six state-of-the-art baselines for comparison: SplitBrain, Counting, DeepCluster, Instance, ISIF, and AND. For fair comparison, the same backbone networks were used. Coarse-grained evaluation. Table 1 describes the object classification performance of seven models, including the proposed Super-AND, on three coarse-grained datasets: CIFAR-10, CIFAR-100, and SVHN (Table 1: k-NN evaluation on coarse-grained datasets; results marked with * are borrowed from previous works). Super-AND surpasses state-of-the-art baselines on all datasets except for one case, where the model underperforms marginally on CIFAR-100 with AlexNet. One notable observation is that the difference between previous models and Super-AND is mostly larger in the case of ResNet18 than with the AlexNet backbone network. These results reveal that our model is superior to other methods and may indicate that our methodology can give more benefits to stronger CNN architectures. Fine-grained evaluation. We perform evaluations on fine-grained datasets that require the ability to discriminate subtle differences between classes. Table 2 shows that Super-AND achieves an outstanding performance compared to three baselines with the ResNet18 backbone network. We excerpted the results of the Instance and AND models from previous work. Backbone network. We tested the choice of backbone networks in terms of classification performance. AlexNet, ResNet18, and ResNet101 are used and evaluated on CIFAR-10, as shown in Table 3. From the results, we can infer that the stronger the backbone network our model has, the better the performance the model can produce. Ablation study. To verify that every component does its role and contributes to the performance increase, an ablation study was conducted. Since Super-AND combines various mechanisms based on the AND algorithm, we study the effect of removing each component: Super-AND without the UE-loss, Super-AND without the Sobel filter, and Super-AND without the Augmentation-loss.
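The weighted k-NN evaluation protocol described above can be sketched as follows. Since the exact weight function is not reproduced in the text, the sketch assumes the exponential weighting exp(similarity/τ) that is conventional in the instance-discrimination literature; k, τ, and the toy data are placeholders.

```python
# Hedged sketch of weighted k-NN evaluation over memory-bank embeddings.
import torch
import torch.nn.functional as F

def weighted_knn_predict(v, memory, labels, num_classes, k=200, tau=0.1):
    """v: (b, d) test embeddings; memory: (N, d) training embeddings; labels: (N,) their class indices."""
    sims = F.normalize(v, dim=1) @ F.normalize(memory, dim=1).t()   # (b, N) cosine similarities
    top_sims, top_idx = sims.topk(k, dim=1)
    weights = (top_sims / tau).exp()                                # assumed exponential weighting
    votes = torch.zeros(v.size(0), num_classes)
    votes.scatter_add_(1, labels[top_idx], weights)                 # accumulate per-class weighted votes
    return votes.argmax(dim=1)

memory = F.normalize(torch.randn(1000, 128), dim=1)
labels = torch.randint(0, 10, (1000,))
v = F.normalize(torch.randn(16, 128), dim=1)
print(weighted_knn_predict(v, memory, labels, num_classes=10))
```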
Table 4 displays the evaluation based on the CIFAR-10 dataset and the ResNet18 backbone network. We found that every component contributes to the performance increase, and that a particularly dramatic decrease in performance occurs when removing the Augmentation-loss. Initialization. Instead of running the algorithm from an arbitrary random model, we can pre-train the network with "good" initial data points to discover consistent neighborhoods. We investigate two different initialization methods and check whether the choice is critical. Three models were compared: a random model, a model initialized with the instance loss from AND, and a model initialized with the multiple losses from Super-AND. Table 5 shows that the choice of initialization is not significant, and solely using the instance loss even has an adverse effect on performance. This finding implies that Super-AND is robust to random initial data points, yet the model will show an unexpected outcome if the initialization uses ambiguous knowledge. Embedding quality analysis. Super-AND leverages the synergies from learning both similarities in neighborhoods and invariant features from data augmentation. Super-AND, therefore, has a high capability of discovering cluster relationships, compared to the original AND model that only uses the neighborhood information. Fig. 3 uses t-SNE to visualize the learned representations of three selected classes in CIFAR-10 for the two algorithms. The plot demonstrates that Super-AND discovers consistent and discriminative clusters. We investigated the embedding quality by evaluating the class consistency of the selected neighborhoods. Cheat labels are used to check whether neighborhood pairs come from the same class. Since both algorithms increase the selection ratio every round when gathering the discovered neighborhoods, the consistency of the selected neighborhoods will naturally decrease. This relationship is drawn in Fig. 4. The reduction for Super-AND, nonetheless, is not significant compared to AND: our model maintains high performance throughout the training rounds. Qualitative study. Fig. 5 illustrates the top-5 nearest retrievals of AND (upper rows) and Super-AND (lower rows) on the STL-10 dataset. The example queries shown are dump trucks, airplanes, horses, and monkeys. Images with red frames, which indicate negative samples, appear more frequently for AND than for Super-AND. This finding implies that Super-AND excels in capturing class information compared to AND. Its clusters are robust to misleading color information and recognize the shape of objects within images well. For example, in the case of the airplane query, pictures retrieved by Super-AND are consistent in shape, while AND confuses a cruise ship picture with an airplane. The color composition in Super-AND is also more flexible, and it can find a red dump truck or a spotted horse, as shown in the examples. This paper presents Super-AND, a holistic technique for unsupervised embedding learning. Besides the synergetic advantage that combining existing methods brings, the newly proposed UE-loss groups nearby data points even in a low-density space, while data augmentation keeps the learned features invariant. The experiments with both coarse-grained and fine-grained datasets demonstrate our model's outstanding performance against the state-of-the-art models. Our efforts to advance unsupervised embedding learning directly benefit future applications that rely on various image clustering tasks.
The high accuracy achieved by Super-AND makes the unsupervised learning approach an economically viable option where labels are costly to generate.
We proposed a comprehensive approach for unsupervised embedding learning on the basis of AND algorithm.
392
scitldr
Analysis of histopathology slides is a critical step for many diagnoses, and in particular in oncology where it defines the gold standard. In the case of digital histopathological analysis, highly trained pathologists must review vast whole-slide images of extreme digital resolution (100,000^2 pixels) across multiple zoom levels in order to locate abnormal regions of cells, or in some cases single cells, out of millions. The application of deep learning to this problem is hampered not only by small sample sizes, as typical datasets contain only a few hundred samples, but also by the need to generate ground-truth localized annotations for training interpretable classification and segmentation models. We propose a method for disease localization that requires only whole-slide labels during training. Even without pixel-level annotations, we are able to demonstrate performance comparable with models trained with strong annotations on the Camelyon-16 lymph node metastases detection challenge. We accomplish this through the use of pre-trained deep convolutional networks, feature embedding, as well as learning via top instances and negative evidence, a multiple instance learning technique from the field of semantic segmentation and object detection. Histopathological image analysis (HIA) is a critical element of diagnosis in many areas of medicine, and especially in oncology, where it defines the gold standard metric. Recent works have sought to leverage modern developments in machine learning (ML) to aid pathologists in disease detection tasks, but the majority of these techniques require localized annotation masks as training data. These annotations are even more costly to obtain than the original diagnosis, as pathologists must spend time to assemble pixel-by-pixel segmentation maps of diseased tissue at extreme resolution; thus HIA datasets with annotations are very limited in size. Additionally, such localized annotations may not be available when facing new problems in HIA, such as new disease subtype classification, prognosis estimation, or drug response prediction. Thus, the critical question for HIA is: can one design a learning architecture which achieves accurate classification with no additional localized annotation? A successful technique would be able to train algorithms to assist pathologists during analysis, and could also be used to identify previously unknown structures and regions of interest. Indeed, while histopathology is the gold standard diagnostic in oncology, it is extremely costly, requiring many hours of focus from pathologists to make a single diagnosis BID21 BID30. Additionally, as correct diagnosis for certain diseases requires pathologists to identify a few cells out of millions, these tasks are akin to "finding a needle in a haystack." Hard numbers on diagnostic error rates in histopathology are difficult to obtain, being dependent upon the disease and tissue in question as well as self-reporting by pathologists of diagnostic errors. However, as reported in the review of BID25, false negatives in cancer diagnosis can lead not only to catastrophic consequences for the patient, but also to incredible financial risk to the pathologist. Any tool which can aid pathologists to focus their attention and effort on the most suspect regions can help reduce false negatives and improve patient outcomes through more accurate diagnoses BID8.
Medical researchers have looked to computer-aided diagnosis for decades, but the lack of computational resources and data has prevented widespread implementation and usage of such tools BID11. Since the advent of automated digital WSI capture in the 1990s, researchers have sought approaches for easing the pathologist's workload and improving patient outcomes through image processing algorithms BID11 BID22. Rather than predicting final diagnosis, many of these procedures focused instead on segmentation, either for cell-counting, or for the detection of suspect regions in the WSI. Historical methods have focused on the use of hand-crafted texture or morphological features BID5 used in conjunction with unsupervised techniques such as K-means clustering or other dimensionality reduction techniques prior to classification via k-Nearest Neighbor or a support vector machine. Over the past decade, fruitful developments in deep learning BID19 have led to an explosion of research into the automation of image processing tasks. While the application of such advanced ML techniques to image tasks has been successful for many consumer applications, the adoption of such approaches within the field of medical imaging has been more gradual. However, these techniques demonstrate remarkable promise in the field of HIA. Specifically, in digital pathology with whole-slide-imaging (WSI) BID33 BID26, highly trained and skilled pathologists review digitally captured microscopy images from prepared and stained tissue samples in order to make diagnoses. Digital WSI are massive datasets, consisting of images captured at multiple zoom levels. At the greatest magnification levels, a WSI may have a digital resolution upwards of 100 thousand pixels in both dimensions. However, since localized annotations are very difficult to obtain, datasets may only contain WSI-level diagnosis labels, falling into the category of weakly-supervised learning. The use of DCNNs was first proposed for HIA in BID3, where the authors were able to train a model for mitosis detection in H&E stained images. A similar technique was applied to WSI for the detection of invasive ductal carcinoma in BID4. These approaches demonstrated the usefulness of learned features as an effective replacement for hand-crafted image features. It is possible to train deep architectures from scratch for the classification of tile images BID29 BID13. However, training such DCNN architectures can be extremely resource intensive. For this reason, many recent approaches applying DCNNs to HIA make use of large pre-trained networks to act as rich feature extractors for tiles BID15 BID17 BID21 BID32 BID27. Such approaches have found success, as aggregation of rich representations from pre-trained DCNNs has proven to be quite effective, even without from-scratch training on WSI tiles. In this paper, we propose CHOWDER, an approach for the interpretable prediction of general localized diseases in WSI with only weak, whole-image disease labels and without any additional expert-produced localized annotations, i.e. per-pixel segmentation maps, of diseased areas within the WSI. To accomplish this, we adapt an existing architecture from the field of multiple instance learning and object region detection BID9 to WSI diagnosis prediction.
By modifying the pre-trained DCNN model BID12, introducing an additional set of fully-connected layers for context-aware classification from tile instances, developing a random tile sampling scheme for efficient training over massive WSI, and enforcing a strict set of regularizations, we are able to demonstrate performance equivalent to the best human pathologists. Notably, while the approach we propose makes use of a pre-trained DCNN as a feature extractor, the entire procedure is a true end-to-end classification technique, and therefore the transferred pre-trained layers can be fine-tuned to the context of H&E WSI. We demonstrate, using only whole-slide labels, performance comparable to top-10 ranked methods trained with strong, pixel-level labels on the Camelyon-16 challenge dataset, while also producing disease segmentation that closely matches ground-truth annotations. We also present results for diagnosis prediction on WSI obtained from The Cancer Genome Atlas (TCGA), where strong annotations are not available and diseases may not be strongly localized within the tissue sample. While approaches using localized annotations have shown promise for HIA, they fail to address the cost associated with the acquisition of hand-labeled datasets, as in each case these methods require access to pixel-level labels. As shown with ImageNet BID6, access to data drives innovation; however, for HIA hand-labeled segmentation maps are costly to produce, often subject to missed diseased areas, and cannot scale to the size of datasets required for truly effective deep learning. Because of these considerations, HIA is uniquely suited to the weakly supervised learning (WSL) setting. Here, we define the WSL task for HIA to be the identification of suspect regions of WSI when the training data only contains image-wide labels of diagnoses made by expert pathologists. Since WSI are often digitally processed in small patches, or tiles, the aggregation of these tiles into groups with a single label (e.g. "healthy", "cancer present") can be used within the framework of multiple instance learning (MIL) BID7 BID0 BID31. In MIL for binary classification, one often makes the standard multi-instance (SMI) assumption: a bag is classified as positive iff at least one instance (here, a tile) in the bag is labelled positive. The goal is to take the bag-level labels and learn a set of instance-level rules for the classification of single instances. In the case of HIA, learning such rules provides the ability to infer localized regions of abnormal cells within the large-scale WSI. In the recent work of BID13 for WSI classification in the WSL setting, the authors propose an EM-based method to identify discriminative patches in high resolution images automatically during patch-level CNN training. They also introduced a decision level fusion method for HIA, which is more robust than max-pooling and can be thought of as a Count-based Multiple Instance (CMI) learning method with two-level learning. While this approach was shown to be effective in the case of glioma classification and obtains the best results, it only slightly outperforms much simpler approaches presented in BID13, but at much greater computational cost. In the case of natural images, the WELDON and WILDCAT techniques of BID9 and BID10, respectively, demonstrated state-of-the-art performance for object detection and localization for WSL with image-wide labels.
In the case of WELDON, the authors propose an end-to-end trainable CNN model based on MIL learning with top instances BID20 as well as negative evidence, relaxing the SMI assumption. Specifically, in the case of semantic segmentation, BID20 argue that a target concept might not exist just at the subregion level, but that the proportion of positive and negative samples in a bag has a larger effect on the determination of label assignment. This argument also holds for the case of HIA, where pathologist diagnosis arises from a synthesis of observations across multiple resolution levels as well as the relative abundance of diseased cells. In Sec. 2.3, we will detail our proposed approach, which makes a number of improvements on the framework of BID9, adapting it to the context of large-scale WSI for HIA. Tissue Detection. As seen in FIG0, large regions of a WSI may contain no tissue at all, and are therefore not useful for training and inference. To extract only tiles with content relevant to the task, we use the same approach as BID29, namely, Otsu's method BID24 applied to the hue and saturation channels of the image after transformation into the HSV color space to produce two masks, which are then combined to produce the final tissue segmentation. Subsequently, only tiles within the foreground segmentation are extracted for training and inference. Color Normalization. Stain normalization is an important step in HIA since the result of the H&E staining procedure can vary greatly between any two slides. We utilize a simple histogram equalization algorithm consisting of left-shifting RGB channels and subsequently rescaling them, as proposed in BID23. In this work, we place a particular emphasis on the tile aggregation method rather than color normalization, so we did not make use of more advanced color normalization algorithms, such as BID16. Tiling. The tiling step is necessary in histopathology analysis. Indeed, due to the large size of the WSI, it is computationally intractable to process the slide in its entirety. For example, at the highest resolution zoom level, denoted as scale 0, for a fixed grid of non-overlapping tiles, a WSI may possess more than 200,000 tiles of 224×224 pixels. Because of the computational burden associated with processing the set of all possible tiles, we instead turn to a uniform random sampling from the space of possible tiles. Additionally, due to the large scale nature of WSI datasets, the computational burden associated with sampling potentially overlapping tiles from arbitrary locations is a prohibitive cost for batch construction during training. Instead, we propose that all tiles from the non-overlapping grid should be processed and stored to disk prior to training. As the tissue structure does not exhibit any strong periodicity, we find that sampling tiles along a fixed grid without overlap provides a reasonably representative sampling while maximizing the total sampled area. Given a target scale ℓ ∈ {0, 1, ..., L}, we denote the number of possible tiles in the WSI indexed by i ∈ {1, 2, ..., N} as M^T_{i,ℓ}. The number of tiles sampled for training or inference is denoted by M^S_{i,ℓ} and is chosen according to DISPLAYFORM0, where DISPLAYFORM1 is the empirical average of the number of tiles at scale ℓ over the entire set of training data. Feature Extraction. We make use of the ResNet-50 BID12 architecture trained on the ImageNet natural image dataset.
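A minimal sketch of this tile-level feature extraction step, assuming torchvision's pre-trained ResNet-50 (the use of its 2048-dimensional pre-output activations is spelled out below); this is an illustration, not the authors' code:

```python
import torch
from torchvision import models, transforms

# Pre-trained ResNet-50 with its classification layer removed: the remaining
# network maps a 224x224 RGB tile to its 2048-dimensional pre-output features.
resnet = models.resnet50(pretrained=True)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def tile_features(tile_images):
    """tile_images: list of 224x224 RGB PIL images -> (n_tiles, 2048) feature array."""
    batch = torch.stack([preprocess(t) for t in tile_images])
    with torch.no_grad():
        return resnet(batch)
```

In practice these per-tile feature vectors would be written to disk once, as described above, so that training never needs to touch the raw pixels again.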
In empirical comparisons between VGG and Inception architectures, we have found that the ResNet architecture provides features better suited for HIA. Additionally, the ResNet architecture is provided at a variety of depths. However, we found that ResNet-50 provides the best balance between the computational burden of forward inference and richness of representation for HIA. In our approach, for every tile we use the values of the ResNet-50 pre-output layer, a set of P = 2048 floating point values, as the feature vector for the tile. Since the fixed input resolution for ResNet-50 is 224 × 224 pixels, we set the resolution for the tiles extracted from the WSI to the same pixel resolution at every scale. Given a WSI, extracting tile-level features produces a bag of feature vectors which one attempts to use for classification against the known image-wide label. The dimension of these local descriptors is M^S × P, where P is the number of features output from the pre-trained image DCNN and M^S is the number of sampled tiles. Approaches such as Bag-of-visual-words (BoVW) or VLAD BID14 could be chosen as a baseline aggregation method to generate a single image-wide descriptor of size P × 1, but would require huge computational power given the dimensionality of the input. Instead, we try two common approaches for the aggregation of local features, specifically MaxPool and MeanPool, and subsequently apply a classifier on the aggregated features. After applying these pooling methods over the axis of tile indices, one obtains a single feature descriptor for the whole image. Other pooling approaches have been used in the context of HIA, including Fisher vector encodings BID27 and p-norm pooling BID32. However, as the reported effect of these aggregations is quite small, we do not consider these approaches when constructing our baseline approach. After aggregation, a classifier can be trained to produce the desired diagnosis labels given the global WSI aggregated descriptor. For our baseline method, we use a logistic regression for this final prediction layer of the model. We present a description of the baseline approach in FIG0. In experimentation, we observe that the baseline approach of the previous section works well for diffuse disease, as evidenced by the results in TAB0 for TCGA-Lung. Here, diffuse implies that the number of disease-containing tiles, pertinent to the diagnosis label, is roughly proportional to the number of tiles containing healthy tissue. However, if one applies the same approach to different WSI datasets, such as Camelyon-16, the performance significantly degrades. In the case of Camelyon-16, the diseased regions of most of the slides are highly localized, restricted to a very small area within the WSI. When presented with such imbalanced bags, simple aggregation approaches for global slide descriptors will overwhelm the features of the disease-containing tiles. Instead, we propose an adaptation and improvement of the WELDON method BID9 designed for histopathology image analysis. As in their approach, rather than creating a global slide descriptor by aggregating all tile features, a MIL approach is used that combines both top instances as well as negative evidence. A visual description of the approach is given in FIG1. Feature Embedding. First, a set of one-dimensional embeddings for the P = 2048 ResNet-50 features is calculated via J one-dimensional convolutional layers strided across the tile index axis.
For tile t with features k_t, the embedding according to kernel j is calculated as e_{j,t} = ⟨w_j, k_t⟩. Notably, the kernels w_j have dimensionality P. This one-dimensional convolution is, in essence, a shortcut for enforcing a fully-connected layer with tied weights across tiles, i.e. the same embedding for every tile BID9. In our experiments, we found that the use of a single embedding, J = 1, is an appropriate choice for WSI datasets when the number of available slides is small (< 1000). In this case, choosing J > 1 will decrease training error, but will increase generalization error. Avoiding overtraining and ensuring model generality remains a major challenge for the application of WSL to WSI datasets. Top Instances and Negative Evidence. After feature embedding, we now have an M^S_i × 1 vector of local tile-level (instance) descriptors. As in BID9, these instance descriptors are sorted by value. Of these sorted embedding values, only the top and bottom R entries are retained, resulting in a tensor of 2R × 1 entries to use for diagnosis classification. This can be easily accomplished through a MinMax layer on the output of the one-dimensional convolution layer. The purpose of this layer is to take not only the top instances but also the negative evidence, that is, the instances which best support the absence of the class. During training, back-propagation runs only through the selected tiles, i.e. the positive and negative evidence. When applied to WSI, the MinMax serves as a powerful tile selection procedure. Multi-layer Perceptron (MLP) Classifier. In the WELDON architecture, the last layer consists of a sum applied over the 2R × 1 output from the MinMax layer. However, we find that this approach can be improved for WSI classification. We investigate the possibility of richer interactions between the top and bottom instances by instead using an MLP as the final classifier. In our implementation of CHOWDER, we use an MLP with two fully connected layers of 200 and 100 neurons with sigmoid activations. First, for pre-processing, we fix a single tile scale for all methods and datasets. We chose a fixed zoom level of 0.5 µm/pixel, which corresponds to ℓ = 0 for slides scanned at 20x magnification, or ℓ = 1 for slides scanned at 40x magnification. Next, since WSI datasets often contain only a few hundred images, far from the millions of images in the ImageNet dataset, strong regularization is required to prevent over-fitting. We applied ℓ2-regularization of 0.5 on the convolutional feature embedding layer and dropout on the MLP with a rate of 0.5. However, these values may not be globally optimal, as we did not apply any hyper-parameter optimization to tune them. To optimize the model parameters, we use Adam BID18 to minimize the binary cross-entropy loss over 30 epochs with a mini-batch size of 10 and with a learning rate of 0.001. To reduce variance and prevent over-fitting, we trained an ensemble of E CHOWDER networks which differ only in their initial weights. The average of the predictions made by these E networks establishes the final prediction. Although we set E = 10 for the results presented in TAB0, we used a larger ensemble of E = 50 with R = 5 to obtain the best possible model and compare our method to those presented in Table 2. We also use an ensemble of E = 10 when reporting the results for WELDON. As the training of one epoch requires about 30 seconds on our available hardware, the total training time for the ensemble was just over twelve hours.
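The CHOWDER head described above is compact enough to sketch in a few lines. The PyTorch code below is our reading of the architecture (a tied one-dimensional embedding with J = 1, a MinMax selection of the top and bottom R scores, and a two-layer MLP with sigmoid activations), not the authors' implementation; details such as dropout placement are assumptions.

```python
import torch
import torch.nn as nn

class Chowder(nn.Module):
    """Sketch of the CHOWDER head: tile embedding -> MinMax selection -> MLP."""
    def __init__(self, n_features=2048, r=5):
        super().__init__()
        self.r = r
        # J = 1 embedding: a 1-D convolution with kernel size 1, i.e. a
        # fully connected layer with weights tied across the tile axis.
        self.embed = nn.Conv1d(n_features, 1, kernel_size=1)
        self.mlp = nn.Sequential(
            nn.Linear(2 * r, 200), nn.Sigmoid(),
            nn.Linear(200, 100), nn.Sigmoid(),
            nn.Linear(100, 1),
        )

    def forward(self, tile_features):
        # tile_features: (batch, n_tiles, n_features) of pre-extracted ResNet-50 features.
        scores = self.embed(tile_features.transpose(1, 2)).squeeze(1)  # (batch, n_tiles)
        sorted_scores, _ = scores.sort(dim=1, descending=True)
        top = sorted_scores[:, :self.r]          # top instances
        bottom = sorted_scores[:, -self.r:]      # negative evidence
        return self.mlp(torch.cat([top, bottom], dim=1))  # slide-level logit
```

Training would then apply a binary cross-entropy loss on the slide label, with an ℓ2 penalty on the embedding weights and dropout on the MLP, as described above.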
While the ResNet-50 features were extracted using a GPU for efficient feed-forward calculations, the CHOWDER network is trained on CPU in order to take advantage of larger system RAM sizes, compared to on-board GPU RAM. This allows us to store all the training tiles in memory to provide faster training compared to a GPU due to reduced transfer overhead. The public Cancer Genome Atlas (TCGA) provides approximately 11,000 tissue slide images of cancers of various organs. For our first experiment, we selected 707 lung cancer WSIs (TCGA-Lung), which were downloaded in March 2017. Subsequently, a set of new lung slides has been added to TCGA, increasing the count of lung slides to 1,009. Along with the slides themselves, TCGA also provides labels representing the type of cancer present in each WSI. However, no local segmentation annotations of cancerous tissue regions are provided. The pre-processing step extracts 1,411,043 tiles and their corresponding representations from ResNet-50. The task of these experiments is then to predict which type of cancer is contained in each WSI: adenocarcinoma or squamous cell carcinoma. We evaluate the quality of the classification according to the area under the curve (AUC) of the receiver operating characteristic (ROC) curve generated using the raw output predictions. As expected in the case of diffuse disease, the advantage provided by CHOWDER is slight as compared to the MeanPool baseline, as evidenced in TAB0. Additionally, as the full aggregation techniques work quite well in this setting, the value of R does not seem to have a strong effect on the performance of CHOWDER as it increases to R = 100. In this setting of highly homogeneous tissue content, we can expect that global aggregate descriptors are able to effectively separate the two classes of carcinoma. For our second experiment, we use the Camelyon-16 challenge dataset, which consists of 400 WSIs taken from sentinel lymph nodes, which are either healthy or exhibit metastases of some form. In addition to the WSIs themselves, as well as their labeling (healthy, contains-metastases), a segmentation mask is provided for each WSI which represents an expert analysis of the location of metastases within the WSI. Human labeling of sentinel lymph node slides is known to be quite tedious, as noted in BID21 BID30. Teams participating in the challenge had access to, and utilized, the ground-truth masks when training their diagnosis prediction and tumor localization models. For our approach, we set aside the masks of metastasis locations and utilize only diagnosis labels. Furthermore, many participating teams developed a post-processing step, extracting hand-crafted features from predicted metastasis maps to improve their segmentation. No post-processing is performed for the presented CHOWDER results; the score is computed directly from the raw output of the CHOWDER model. The Camelyon-16 dataset is evaluated on two different axes. First, the accuracy of the predicted label for each WSI in the test set is evaluated according to AUC. Second, the accuracy of metastasis localization is evaluated by comparing model outputs to the ground-truth expert annotations of metastasis location. This segmentation accuracy is measured according to the free ROC metric (FROC), which is the curve of metastasis detection sensitivity against the average number of false positives.
As in the Camelyon challenge, we evaluate the FROC metric as the average detection sensitivity at the average false positive rates 0.25, 0.5, 1, 2, 4, and 8. Competition Split Bias. We also conduct a set of experiments on Camelyon-16 using random train-test cross-validation (CV) splits, respecting the same training set size as in the original competition split. We note a distinct difference in AUC between the competition split and those obtained via random folds. This discrepancy is especially distinct for the MeanPool baseline, as reported in TAB0. We therefore note a distinct discrepancy in the data distribution between the competition test and training splits. Notably, using the MeanPool baseline architecture, we found that the competition train-test split can be predicted with an AUC of 0.75, whereas one only obtains an AUC of 0.55 when using random splits. Because this distribution mismatch in the competition split could produce misleading interpretations, we report 3-fold average CV results along with the results obtained on the competition split. Classification Performance. In TAB0, we show the classification performance of our proposed CHOWDER method, for E = 10, as compared to both the baseline aggregation techniques and the WELDON approach. In the case of WELDON, the final MLP is not used and instead a sum is applied to the MinMax layer. The value of R retains the same meaning in both cases: the number of both high and low scoring tiles to pass on to the classification layers. We test a range of values of R for both WELDON and CHOWDER. We find that over all values of R, CHOWDER provides a significant advantage over both the baseline aggregation techniques and WELDON. We also note that optimal performance can be obtained without using a large number of discriminative tiles, i.e. R = 5. We also present in Table 2 our performance as compared to the public Camelyon leader boards for E = 50. In this case, we are able to obtain an effective 11th-place rank, but without using any of the ground-truth disease segmentation maps. This is a remarkable result, as the winning approach of BID29 required tile-level disease labels derived from expert-provided annotations in order to train a full 27-layer GoogLeNet BID28 architecture for tumor prediction. We also show the ROC curve for this result in FIG3. Finally, we note that CHOWDER's performance on this task is roughly equivalent to that of the best-performing human pathologist, an AUC of 0.884 as previously reported, and better than the average human pathologist performance, an AUC of 0.810. Notably, this human-level performance is achieved without human assistance during training, beyond the diagnosis labels themselves. Localization Performance. Obtaining high performance in terms of whole-slide classification is well and good, but it is not worth much without an interpretable result which can be used by pathologists to aid their diagnosis. For example, the MeanPool baseline aggregation approach provides no information during inference from which one could derive tumor locations in the WSI: all locality information is lost in the aggregation. With MaxPool, one at least retains some information via the tile locations which provide each maximum aggregate feature. For CHOWDER, we propose the use of the full set of outputs from the convolutional feature embedding layer. These are then sorted and thresholded according to a value τ such that tiles with an embedded value larger than τ are classified as diseased and those with lower values are classified as healthy.
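As a rough sketch of how such a localization map could be assembled from the per-tile embedding scores; the grid shape, tile coordinates, and the threshold τ below are placeholders for illustration, not values from the paper:

```python
import numpy as np

def localization_map(tile_scores, tile_coords, grid_shape, tau=0.0):
    """Mark tiles whose embedded score exceeds tau as suspect (diseased)."""
    heatmap = np.full(grid_shape, np.nan)          # NaN = tile not sampled
    for score, (row, col) in zip(tile_scores, tile_coords):
        heatmap[row, col] = score
    suspect = heatmap > tau                        # thresholded disease mask
    return heatmap, suspect
```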
We show an example of disease localization produced by CHOWDER in FIG4. Here, we see that CHOWDER is able to very accurately localize the tumorous region in the WSI even though it has only been trained using global slide-wide labels and without any local annotations. While some potential false detections occur outside of the tumor region, we see that the strongest response occurs within the tumor region itself, and follows the border regions nicely. We present further localization results in Appendix A. We also present FROC scores for CHOWDER in Table 2 as compared to the leader board results. Here, we obtain results comparable to the 18th rank. However, this performance is significant, as all other approaches made use of tile-level classification in order to train their segmentation techniques. We also show the FROC curve in FIG3. We have shown that using state-of-the-art techniques from MIL in computer vision, such as the top instance and negative evidence approach of BID9, one can construct an effective technique for diagnosis prediction and disease localization for WSI in histopathology without the need for expensive localized annotations produced by expert pathologists.

Table 2: Final leader boards for the Camelyon-16 competition. All competition methods had access to the full set of strong annotations for training their models. In contrast, our proposed approach only utilizes image-wide diagnosis labels and obtains performance comparable to the top-10 methods.

By removing this requirement, we hope to accelerate the production of computer-assistance tools for pathologists to greatly improve the turn-around time in pathology labs and help surgeons and oncologists make rapid and effective patient care decisions. This also opens the way to tackling problems where expert pathologists may not know precisely where relevant tissue is located within the slide image, for instance for prognosis estimation or prediction of drug response. The ability of our approach to discover associated regions of interest without prior localized annotations hence appears as a novel discovery approach for the field of pathology. Moreover, using the suggested localization from CHOWDER, one may considerably speed up the process of obtaining ground-truth localized annotations. A number of improvements can be made to the CHOWDER method, especially in the production of disease localization maps. As presented, we use the raw values from the convolutional embedding layer, which means that the resolution of the produced disease localization map is fixed to that of the sampled tiles. However, one could also sample overlapping tiles and then use a data fusion technique to generate a final localization map. Additionally, as a variety of annotations may be available, CHOWDER could be extended to the case of heterogeneous annotation, e.g. some slides with expert-produced localized annotations and some with only whole-slide annotations.

A FURTHER RESULTS

Figure 5: Visualization of metastasis detection on test image 2 of the Camelyon-16 dataset using our proposed approach. Left: Full WSI at zoom level 6 with ground truth annotation of metastases shown via black border. Tiles with positive feature embeddings are colored from white to red according to their magnitude, with red representing the largest magnitude. Right: Detail of metastases at zoom level 2 overlaid with the classification output of our proposed approach.
Here, the outputs of all tested tiles are shown and colored according to their value, from blue to white to red, with blue representing the most negative values and red the most positive. Tiles without color were not included when randomly selecting tiles for inference. Figure 6: Visualization of metastasis detection on test image 92 of the Camelyon-16 dataset using our proposed approach. Left: Full WSI at zoom level 6 with ground truth annotation of metastases shown via black border. Tiles with positive feature embeddings are colored from white to red according to their magnitude, with red representing the largest magnitude. Right: Detail of metastases at zoom level 2 overlaid with the classification output of our proposed approach. Here, the outputs of all tested tiles are shown and colored according to their value, from blue to white to red, with blue representing the most negative values and red the most positive. Tiles without color were not included when randomly selecting tiles for inference.
We propose a weakly supervised learning method for the classification and localization of cancers in extremely high resolution histopathology whole slide images using only image-wide labels.
Massively multi-label prediction/classification problems arise in environments like health care or biology where it is useful to make very precise predictions. One challenge with massively multi-label problems is that there is often a long-tailed frequency distribution for the labels, resulting in few positive examples for the rare labels. We propose a solution to this problem by modifying the output layer of a neural network to create a Bayesian network of sigmoids which takes advantage of ontology relationships between the labels to help share information between the rare and the more common labels. We apply this method to the two massively multi-label tasks of disease prediction (ICD-9 codes) and protein function prediction (Gene Ontology terms) and obtain significant improvements in per-label AUROC and average precision. In this paper, we study general techniques for improving predictive performance in massively multi-label classification/prediction problems in which there is an ontology providing relationships between the labels. Such problems have practical applications in biology, precision health, and computer vision where there is a need for very precise categorization. For example, in health care we have an increasing number of treatments that are only useful for small subsets of the patient population. This forces us to create large and precise labeling schemes when we want to find patients for these personalized treatments. One large issue with massively multi-label prediction is that there is often a long-tailed frequency distribution for the labels, with a large fraction of the labels having very few positive examples in the training data. The correspondingly low amount of training data for rare labels makes it difficult to train individual classifiers. Current multi-task learning approaches enable us to somewhat circumvent this bottleneck by sharing information between the rare and common labels in a manner that enables us to train classifiers even for the data-poor rare labels BID6. In this paper, we introduce a new method for massively multi-label prediction, a Bayesian network of sigmoids, that helps achieve better performance on rare classes by using ontological information to better share information between the rare and common labels. This method is based on similar ideas found in Bayesian networks and hierarchical softmax BID18. The main distinction between this paper and prior work is that we focus on improving multi-label prediction performance with more complicated directed acyclic graph (DAG) structures between the labels, while previous hierarchical softmax work focuses on improving runtime performance on multi-class problems (where labels are mutually exclusive) with simpler tree structures between the labels. In order to demonstrate the empirical predictive performance of our method, we test it on two very different massively multi-label tasks. The first is a disease prediction task where we predict ICD-9 (diagnosis) codes from medical record data using the ICD-9 hierarchy to tie the labels together. The second task is a protein function prediction task where we predict Gene Ontology terms BID0 BID5 from sequence information using the Gene Ontology DAG to combine the labels. Our experiments indicate that our new method obtains better average predictive performance on rare labels while maintaining similar performance on common labels.
The goal of multi-label prediction is to learn the distribution P(L|X), which gives the probability of an instance X having a label L from a dictionary of N labels. We are particularly interested in the case where there is an ontology providing superclass relationships between the labels. This ontology consists of a DAG where every label L is a node and every directed edge from L_i to L_j indicates that the label L_i is a superclass of the label L_j. FIG0 gives corresponding example simplified subgraphs from both the ICD-9 hierarchy and the Gene Ontology DAG. We define parents(L) to be the direct parents of L. We define ancestors(L) to be all of the nodes that have a directed path to L. The classical approach for solving this problem is to learn separate functions for each label. This transforms the problem into N binary prediction problems which can each be solved with standard techniques. The main issue with this approach is that it is less sample efficient in that it does not share information between the labels. A more sophisticated approach is to use multi-task learning techniques to share information between the individual label-specific binary classifiers. One approach for doing this with neural networks is to introduce shared layers between the different binary classifiers. The resulting output layer is a flat structure of sigmoid outputs, with each sigmoid output representing one P(L|X). This reduces the number of parameters needed for every label and allows information to be shared among the labels BID6. However, even with this weight sharing, the final output layer still needs to be learned independently for each label. We propose a modification of the output layer by constructing a Bayesian network of sigmoids in order to use the ontology to share additional information between labels in a more guided way. The general idea is that we assume that the probability of our labels follows a Bayesian network BID19 with each edge in the ontology representing an edge within the Bayesian network. This, along with the fact that the edges denote superclasses, enables us to factor the probability of a label into several conditional probabilities:

P(L|X) = P(L, ancestors(L) | X), as the edges denote superclasses, so having a child label implies having every ancestor,

= ∏_{ℓ ∈ {L} ∪ ancestors(L)} P(ℓ | X, parents(ℓ)), from the Bayesian network assumption on the subgraph consisting of L and ancestors(L) BID19.

We are now able to learn the conditional probability distributions P(L|X, parents(L)) for every label in the ontology and use the above formula to reconstruct the final target probabilities P(L|X). Consider the example simplified ICD-9 graph in FIG0. For this graph, we would learn P(Cancer|X), P(LungCancer|Cancer, X), and P(SkinCancer|Cancer, X). We would then be able to compute P(LungCancer|X) = P(Cancer|X) × P(LungCancer|Cancer, X). The intuition for why this factoring might be useful is that it enables the transfer of knowledge from more common higher-level labels to rarer lower-level labels. Consider the case where L is very rare. In that case it is difficult to learn P(L|X) directly due to the small amount of training data. However, the decomposed version ∏_{ℓ ∈ {L} ∪ ancestors(L)} P(ℓ | X, parents(ℓ)) includes classifiers from the ancestors of L that have more training data and might be easier to learn. This factoring allows additional signal from the better-trained higher-level labels to feed directly into the probability computation for the rare leaf L.
If we can rule out one of the higher-level labels, we can also rule out a lower-level label. For example, consider the ICD-9 graph illustrated in FIG0. We might not have enough patients with lung cancer to directly learn an optimal P(LungCancer|X). However, we can pool all of our cancer patients to learn a better P(Cancer|X). We can then use our Bayesian network factoring to incorporate the better-trained P(Cancer|X) classifier into our calculation of P(LungCancer|X). In our experiments we show that this intuition plays out in practice through improved performance on rare labels. The Bayesian network assumption plays an important role in allowing us to factor the probabilities in this manner. In order to perform our factoring, we must assume that every subgraph of the ontology consisting of the nodes {L} ∪ ancestors(L) correctly represents a Bayesian network for the label probability distribution. These subgraphs are only correct Bayesian networks if the probability of every label L is conditionally independent of the probabilities of non-descendant labels given the parent labels and X BID20. This might seem somewhat limiting, but there are two reasons why this assumption is weaker than it might appear. First, we only require a Bayesian network to be correct for the subgraphs of the form {L} ∪ ancestors(L). This is true because we only consider the nodes {L} ∪ ancestors(L) when we do our factoring. This is a significantly weaker assumption than requiring the entire graph to follow a Bayesian network. One direct application of this is that every tree ontology can meet this assumption. The proof is that every {L} ∪ ancestors(L) subgraph of a tree is a simple chain. A simple chain is not able to violate the conditional independence assumption behind Bayesian networks because it has no non-descendant nodes that are not already ancestors. Ancestor nodes are always conditionally independent of the label given the parents because the edges represent superclasses: the ancestors are always present if the parent is present, and the label is always absent if the parent is absent. The second reason why this assumption is weaker than it might appear is that we only require conditional independence given a particular instance X. As an illustrative example, consider the two ICD-9 labels of male breast cancer (ICD-9 175) and female breast cancer (ICD-9 174). Male breast cancer and female breast cancer are trivially not conditionally independent due to the gender qualifier making them mutually exclusive. However, male breast cancer and female breast cancer become conditionally independent once you condition on the gender of the patient. Thus conditioning on the exact instance X enables more conditional independence than would otherwise be available. Nevertheless, even with these caveats, there will be some circumstances in which this conditional independence assumption is violated. In these situations, our factoring is not valid and our computed product ∏_{ℓ ∈ {L} ∪ ancestors(L)} P(ℓ | X, parents(ℓ)) might diverge from the actual P(L|X). Yet, even in these situations, the resulting scores can still be empirically useful. We demonstrate that this is the case in our experiments by showing performance improvements in a protein function prediction task that almost assuredly violates this conditional independence assumption. There are many potential ways in which the conditional probabilities P(L|X, parents(L)) could be modeled.
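To make the factored computation concrete, the PyTorch sketch below (ours, not the authors' code) evaluates P(L|X) as a product of conditional terms along {L} ∪ ancestors(L). It assumes a pre-computed list of ancestor indices for every label and parameterizes each conditional as a sigmoid of a dot product with a label embedding, anticipating the parameterization described next; all names and structural details are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BayesianSigmoidOutput(nn.Module):
    """P(L|X) as a product of conditional sigmoids over {L} ∪ ancestors(L)."""
    def __init__(self, n_labels, hidden_dim, ancestors):
        super().__init__()
        # ancestors[l] lists the indices of l and all of its ancestors in the ontology.
        self.ancestors = ancestors
        self.label_embedding = nn.Embedding(n_labels, hidden_dim)

    def conditional(self, encoded, label_idx):
        # sigma(encoder(X) . e_l) models P(l | X, parents(l)).
        return torch.sigmoid(encoded @ self.label_embedding.weight[label_idx])

    def forward(self, encoded, label_idx):
        # Multiply the conditionals along the chain {label_idx} ∪ ancestors(label_idx).
        prob = torch.ones(encoded.shape[0], device=encoded.device)
        for a in self.ancestors[label_idx]:
            prob = prob * self.conditional(encoded, a)
        return prob
```

During training, each conditional term would receive a cross-entropy loss only on examples that carry all of the labels in parents(ℓ), as described below.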
We exclusively focus on modeling these probabilities using a sigmoid function computed on logits from neural networks. We define an encoder neural network for every task that takes in the input X and returns a fixed-length representation of the input. We also define a fixed-length embedding for every label L by constructing an output embedding matrix such that e_L is the embedding for L. This encoder and label embedding then allow us to model P(L|X, parents(L)) as σ(encoder(X) · e_L), where σ indicates the sigmoid function and · indicates a dot product. Note that parents(L) is not used in this formula. This is because there is a unique set of parents for every label L, so there is no need to have distinct e_L vectors for different sets of parents. We can then train P(L|X, parents(L)) by using a cross-entropy loss on patients who have all the labels in parents(L). Note that we explicitly do not train each of the conditional probabilities on every patient. We can only train the conditional probabilities on patients who satisfy the conditional requirement of having the parent labels. This does not change the number of positive examples for each classifier, but it does significantly reduce the number of negative examples for the lower-level classifiers. For example, consider the ICD-9 subgraph shown in FIG0. In this situation, we have three labels and thus need to learn three conditional probabilities: P(Cancer|X), P(LungCancer|Cancer, X) and P(BreastCancer|Cancer, X). We have three labels, so our label embedding matrix consists of e_Cancer, e_LungCancer and e_BreastCancer. We can now compute P(LungCancer|X) and P(BreastCancer|X) as follows:

P(LungCancer|X) = σ(encoder(X) · e_Cancer) × σ(encoder(X) · e_LungCancer),
P(BreastCancer|X) = σ(encoder(X) · e_Cancer) × σ(encoder(X) · e_BreastCancer).

As a baseline, we also train models with a normal flat sigmoid output layer. In these models we directly learn P(L|X) for each label. Similar to the conditional probabilities, we can define these probabilities as a sigmoid of the output from a neural network. We define P(L|X) to be σ(encoder(X) · e_L). We can then train P(L|X) using a cross-entropy loss on all patients. We evaluated the predictive performance of our method on two very different massively multi-label problems. We consider the task of predicting future diseases for patients given their medical history in the form of ICD-9 codes and the task of predicting protein function from sequence data in the form of Gene Ontology terms. In this section, we introduce the datasets, encoders and baselines used for each problem.

3.1 DISEASE PREDICTION

3.1.1 PROBLEM

One of our experiments consists of predicting diseases in the form of ICD-9 codes from electronic medical record (EMR) data. We have two years and nine months of data covering 2013, 2014, and the first nine months of 2015. We use two years of history to predict which ICD-9 codes will appear in the following nine months. The problem setup for this experiment closely matches the setup in BID17. We use a large insurance claims dataset from [redacted to preserve anonymity] for modeling. Our claims data consists of diagnoses (ICD-9), medications (NDC), procedures (CPT), and some metadata such as age, gender, location, general occupation, and employment status. We restrict our analysis to patients who were enrolled during 2013, 2014 and January 2015. We have 15.7 million patients, of which a random 5% are used for validation and 5% are used for testing. This dataset is quite large, much larger than what is usually available in a hospital. Thus we consider two cases of this problem.
The "high data case" is where we use all remaining 14.1 million patients for training. The " low data case" consists of training with a 2% random sample of 281,874 patients and is much closer in size to normal hospital datasets BID8 BID1.Our target label dictionary for this task consists of all leaf ICD-9 billing codes that appear at least 5 times in the training data. We only predict leaf codes as those are the only codes allowed for billing and thus the only ICD-9 codes that records are annotated with. This in a dictionary of 6,902 codes for the small disease prediction task and 12,533 codes for the large disease prediction task. We use the ICD-9 hierarchy included in the 2018AA UMLS release BID2 in order to construct relationships between the labels for our method. We additionally use the CPT and ATC ontologies included in the 2018AA for our encoder. The partitioning of the patient timelines into input history and output prediction labels as well as the subpartioning of the input history into time-bins. Each tick on the x-axis represents one month. The first two years of information is used as input and the final nine months is used to generate output prediction labels. These first two years are subdivided into six bins of the following lengths for featurization: one year, six months, three months, one month, one month, and one month. For our encoder, we use a feed-forward architecture inspired by BID1. As in their model, we split our two years of data into time-sliced bins. For each time slice, we find all the ICD-9, NDC and CPT codes that the patient experienced during the time slice. FIG1 details the exact layout of each time bin. We also add a feature for every higher-level code in the ICD-9, ATC and CPT ontologies that indicates whether the patient had any of the descendants of that particular code within the time slice. This expanded rollup scheme is structurally very similar to the subword method introduced in BID3. The weights for these input embeddings are tied to the output embedding matrix used in our output layers. We summarize the set of embeddings for each time bin using mean pooling. We also construct mean embedding for the metadata by feeding the metadata entries through an embedding matrix followed by mean pooling. Finally, we concatenate the means from each timeslice with the mean embeddings from the metadata and feed the ing vector into a feedforward neural network to compute a final patient embedding. These neural network models are trained with the Adam optimizer. The hyperparameters such as the learning rate, layer size, non-linearity, and number of layers are optimized using a grid search on the validation set. Appendix A.1 has details on the space searched as well as the best hyper-parameters for both the normal sigmoid and Bayesian network sigmoid models. Finally, as a further baseline, we also train logistic regression models individually for several rare ICD-9 codes. These models are trained on a binary matrix where each row represents a patient and each column represents an ICD-9 code, NDC code, CPT code, or metadata element. A particular row and column element is set to 1 whenever a patient has that particular item in the metadata or during the two years of provided medical history. These logistic regression models are regularized with L2 with a lambda optimized using cross-validation. One particular issue with training individual models on rare codes is that the dataset is distinctly unbalanced with vastly more negative examples than positive examples. 
We deal with this issue by subsampling negative examples so that the ratio of positive to negative samples is 1:10. For our other experiment, we predict protein functions in the form of Gene Ontology (GO) terms from sequence data. We focus only on human proteins that have at least one Gene Ontology annotation. Our features consist of amino acid sequences downloaded from Uniprot on July 27, 2018. For our labels, we use the human GO labels which were generated on June 18, 2018. After joining the labels with the sequence data, we have a total of 15,497 annotated human protein sequences. A random 80% of the sequences are used for training, 10% are used for validation, and a final 10% are used for final testing. In this task we predict all leaf and higher-level GO terms that appear at least 5 times in the training data. This results in a target dictionary of 7,751 terms. We construct relationships between these labels using the July 24, 2018 release of the GO basic ontology. We use a previously published 1-D CNN-based encoder to encode our protein sequence information. We treat every letter in the alphabet as a word and encode each of those letters with an embedding of size 26. We then apply a 1-D convolution with a window size of 8 over the embedded sequence. A fixed-length representation of the protein is then obtained by max-over-time pooling. This representation is finally fed through a ReLU and one fully connected layer. The resulting fixed-dimension vector is the encoded protein. For regularization, we add dropout before the convolution and the fully connected layer. Following previous work, we also consider generating features using sequence alignment BID14. We use version 2.7.1 of the BLAST tool to find the most similar training set protein for every protein in our dataset. We then use this most similar protein to augment our protein encoder by adding a binary feature which signifies whether the most similar protein has the particular term we are predicting. These CNN models are trained with Adam. Hyperparameters such as the learning rate, number of filters, dropout, and the size of the final layer are optimized using a grid search on the validation set. See Appendix A.1 for a full listing of the space searched as well as the best hyperparameters for both the flat sigmoid and Bayesian network of sigmoids models. As a further baseline, we also consider using the BLAST features alone for predicting protein function. This baseline simply predicts 1 if the most similar protein has the target term and 0 otherwise. For the protein models, we also consider one final baseline where we take our flat sigmoid model and weight labels according to the inverse square root of their frequency. This weighting scheme is based on the subsampling scheme from BID16. Unfortunately, this baseline did not seem to perform well on rare labels, so we did not consider it for the disease case or our more general analysis. The results for this baseline can be found in Appendix A.3. Figure 3 shows frequency-binned per-label area under the receiver operating characteristic curve (AUROC) and average precision (AP) for less frequent labels that have at most 1,000 positive examples. See Appendix A.2 for the exact numerical results, which include 95% bootstrap confidence intervals generated through 500 bootstrap samples of the test set. As shown in Figure 4, these less frequent labels cover a majority of the labels within each dataset.
Our results indicate that the Bayesian network of sigmoids output layer has better AUROC and average precision for rare labels in all three tasks, with the effect diminishing with increasing numbers of positive labels. This effect is especially strong for average precision. For example, the Bayesian network of sigmoids models obtain 187%, 28.5% and 17.9% improvements in average precision for the rarest code bin (5-10 positive examples) over the baseline models for the small disease, large disease and protein function tasks, respectively. This improvement persists for the next rarest bin (11-25 positive examples), but decreases to 89.2%, 10.7% and 11.1%. This matches our previous intuition, as there is no need to transfer information from more general labels if there is enough data to model P(L|X) directly. TAB0 compares micro-AUROC and micro-AP on all labels for all three tasks. The benefits of the Bayesian sigmoid output layer seem much more limited and task specific in this setting. We do not expect significantly better results in the micro-averaged performance case because the micro-averaged results are dominated by the more frequent codes and the Bayesian network of sigmoids is only expected to help when P(L|X) does not have enough data to be modeled directly. The Bayesian network of sigmoids output layer provides better AUROC and AP for the disease prediction task, but suffers from worse performance on the protein function task. One possible explanation for this discrepancy is that our Bayesian network assumption is guaranteed to be correct in the disease prediction task due to the tree structure of the ontology, but might not be correct in the protein function task with its more complicated DAG ontological structure. It is possible that minor violations of the Bayesian network assumption in the protein function prediction task cause the overall performance to be worse on the more common codes compared to the flat sigmoid decoder. There is related work on improved softmax variants, predicting ICD-9 codes, predicting Gene Ontology terms, and combining ontologies with Bayesian networks. Improved softmax variants. There has been a wide variety of work focusing on trying to come up with improved softmax variants for use in massively multi-class problems such as language modeling. This prior work primarily differs from ours in that it focuses exclusively on the multi-class case with a tree structure connecting the labels. Multi-class is distinct from multi-label in that multi-class requires each item to have only one label while multi-label allows multiple labels per item. Most of this work focuses on trying to improve the training time for the expensive softmax operation found in multi-class problems such as large-vocabulary language modeling. The most related of these variants fall under the hierarchical softmax family. Hierarchical softmax from BID18 (and related versions such as class-based softmax from BID10 and adaptive softmax from BID11) focuses on speeding up softmax by using a tree structure to decompose the probability distribution. Disease prediction. Previous work has also explored the task of disease prediction through predicting ICD-9 codes from medical record data BID17 BID8 BID7. GRAM is a particularly relevant instance which uses the CCS hierarchy to improve the encoder, resulting in better predictions for rare codes. Our work differs from GRAM in that we improve the output layer while GRAM improves the encoder. Protein function prediction.
Protein function prediction in the form of Gene Ontology term prediction has been considered by previous work BID14 BID15 BID4. DeepGO from BID14 is the most similar to the approach taken by this paper in that it uses a CNN on the sequence data to predict Gene Ontology terms. It also uses the ontology in that it creates a multi-task neural network in the shape of the ontology. Our work differs from DeepGO in that we focus on the rarer terms and we only modify the output layer. Combining ontologies with Bayesian networks. Phrank is an algorithm for computing similarity scores between sets of phenotypes for use in diagnosing genetic disorders. Like this paper, Phrank constructs a Bayesian network based on an ontology. This work differs from Phrank in that we focus on the supervised prediction task of modeling the probability of a label given an instance, while Phrank focuses on the simpler task of modeling the unconditional probability of a label (or set of labels). This paper introduces a new method for improving performance on rare labels in massively multi-label problems with ontologically structured labels. Our new method uses the ontological relationships to construct a Bayesian network of sigmoid outputs which enables us to express the probability of rare labels as a product of conditional probabilities of more common higher-level labels. This enables us to share information between the labels and achieve empirically better performance in both AUROC and average precision for rare labels than flat sigmoid baselines in three separate experiments covering the two very different domains of protein function prediction and disease prediction. This improvement in performance for rare labels enables us to make more precise predictions for smaller label categories and should be applicable to a variety of tasks that contain an ontology that defines relationships between labels. This section has been redacted to preserve anonymity.

A.1 HYPERPARAMETER GRID AND BEST HYPERPARAMETERS
We propose a new method for using ontology information to improve performance on massively multi-label prediction/classification problems.
Generative Adversarial Networks have made data generation possible in various use cases, but in the case of complex, high-dimensional distributions it can be difficult to train them, because of convergence problems and the appearance of mode collapse. Sliced Wasserstein GANs, and especially the application of the Max-Sliced Wasserstein distance, made it possible to approximate the Wasserstein distance during training in an efficient and stable way and helped ease the convergence problems of these architectures. This method transforms sample assignment and distance calculation into sorting the one-dimensional projections of the samples, which yields a sufficient approximation of the high-dimensional Wasserstein distance. In this paper we will demonstrate that approximating the Wasserstein distance by sorting the samples is not always the optimal approach and that a greedy assignment of the real and fake samples can result in faster convergence and a better approximation of the original distribution. Generative Adversarial Networks (GANs) were first introduced as models where, instead of a mathematically well-established loss function, another differentiable neural network, a discriminator, is applied to approximate the distance between two distributions. These methods are popularly applied in data generation and have significantly improved the modelling capabilities of neural networks. It was demonstrated in various use cases that these approaches can approximate complex high-dimensional distributions in practice. Apart from the theoretical advantage of GANs and applying a discriminator network instead of a distance metric (e.g. ℓ1 or ℓ2 loss), modelling high-dimensional distributions with GANs often proves to be problematic in practice. The two most common problems are mode collapse, where the generator gets stuck in a state where only a small portion of the whole distribution is modeled, and convergence problems, where either the generator or the discriminator solves its task almost perfectly, providing low or no gradients for training the other network. Convergence problems were improved by the introduction of the Wasserstein distance, which instead of a point-wise distance calculation (e.g. cross-entropy or ℓ1 distance) calculates a minimal transportation distance (earth mover's distance) between the two distributions. The approximation and calculation of the Wasserstein distance is complex and difficult in high dimensions, since for a large sample size the calculation and minimization of the transport becomes exponentially complex, and distances can have very different magnitudes along the different dimensions. It was demonstrated that high-dimensional distributions can be approximated by using a high number of one-dimensional projections. For a selected projection the minimal transport between the one-dimensional samples can be calculated by sorting both the real and the fake samples and assigning them according to their sorted indices. As an additional advantage, it was also demonstrated that instead of the regular mini-max game of adversarial training, the distribution of the real samples can be approximated directly by the generator alone, omitting the discriminator and turning training into a simpler and more stable minimization problem. The theory of this novel method is well described and it was demonstrated that it works in practice, but unfortunately for complex, high-dimensional distributions a large number of projections is needed.
It was also demonstrated how the high number of random projections could be substituted by a single continuously optimized plane. The parameters of this projection are optimized in an adversarial manner, selecting the "worst" projection, which maximizes the separation between the real and fake samples using a surrogate function. This modification brought regular adversarial training back and created a mini-max game again, where the generator creates samples which resemble the original distribution well according to the selected plane and the discriminator tries to find a projection which separates the real and fake samples from each other. The essence of Sliced Wasserstein distances is how they provide a method to calculate the minimal transportation between the projected samples in one dimension with ease, which approximates the Wasserstein distance in the original high dimension. In theory this approach is sound and it works well in practice. It was proven that the sliced Wasserstein distance satisfies the properties of non-negativity, identity of indiscernibles, symmetry, and the triangle inequality, this way forming a true metric. Although it approximates high-dimensional distributions well, we would like to demonstrate in this paper that the assignment of real and fake samples by sorting them in one dimension also has its flaws, and a greedy assignment approach can perform better on commonly applied datasets. We would also argue regarding the application of the Wasserstein distance. We will demonstrate that in many cases various assignments can result in the same minimal transportation during training, and that calculating the Wasserstein distance with sorting can alter the distribution of perfectly modeled samples even when only a single sample differs from the approximated distribution. Generative adversarial networks can be described by a generator (G), whose task is to generate a fake distribution (P_F) which resembles the distribution of real samples (P_R), and a discriminator, whose task is to distinguish between P_F and P_R samples. Formally the following min-max objective is iteratively optimized: min_G max_D V(G, D), where V is a distance measure or divergence between the two distributions. The Wasserstein-p distance was proposed to improve the stability of GANs; it can be defined as W_p(P_F, P_R) = (inf_{π ∈ Π(P_F, P_R)} E_{(x,y)∼π}[ ||x − y||^p ])^{1/p}, where p is a parameter of the distance and Π(P_F, P_R) denotes all the possible joint distributions over P_F and P_R. The number of possible joint distributions increases factorially with the number of samples and the calculation of the minimal transport can be difficult. The instability and high complexity of Wasserstein GANs were further improved by the introduction of the sliced Wasserstein distance, which can be defined as SW_p(P_F, P_R) = E_{w ∈ Ω}[ W_p(P_F^w, P_R^w) ], where P_F^w and P_R^w are one-dimensional projections of the high-dimensional fake and real samples onto w, and Ω denotes a sufficiently high number of projections on the unit sphere. In this setting the Wasserstein distance can be calculated by sorting both the real and fake samples and assigning them by their indices, instead of checking all the possible joint distributions in Π(P_F^w, P_R^w).
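To make the sorting-based calculation concrete, the following is a minimal NumPy sketch (function name, sample counts and defaults are ours, not from the paper) of the sliced Wasserstein-1 distance between two equally sized sample sets: for each random direction, the one-dimensional projections are sorted and paired by index, which gives the minimal transport along that direction.

import numpy as np

def sliced_w1(real, fake, n_proj=128, seed=0):
    # real, fake: (n, d) arrays with the same number of samples n
    rng = np.random.default_rng(seed)
    d = real.shape[1]
    total = 0.0
    for _ in range(n_proj):
        w = rng.normal(size=d)
        w /= np.linalg.norm(w)           # random direction on the unit sphere
        pr, pf = real @ w, fake @ w      # one-dimensional projections
        # sorting both sides and pairing by index yields the minimal 1-D transport
        total += np.mean(np.abs(np.sort(pr) - np.sort(pf)))
    return total / n_proj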
Max-Sliced Wasserstein GANs can be introduced by selecting the worst projected distribution from Ω, denoted by w_max, and since the projection can be implemented by a fully connected layer, one can re-introduce the mini-max game used in regular GAN training: min_G max_{w ∈ Ω} W_p(P_F^w, P_R^w). In this setup the distance in the single, one-dimensional projected space can be calculated by sorting the samples in a similar manner as in the case of the sliced Wasserstein distance. In the case of a high sample number and complex high-dimensional distributions, the optimal transport can often be calculated using various assignments between the two sets of samples. For example, if a sample containing n elements from P_F can be defined by the series F^w_1...n after the projection, and a sample from P_R is defined by R^w_1...n, and we know that F^w_i < R^w_j for every i and j (i, j ∈ 1...n), which means that the projected samples are well separated from each other (all the projected fake samples are smaller than any of the projected real samples), then for p = 1 all possible assignments of the i, j pairs will result in the same distance, which is the minimal transportation of the samples, although the pairwise differences can be very different. One can also easily see that the minimal transport calculated with sorting might not assign identical samples to each other, which is depicted in Fig. 1. As can be seen from the figure, for example in the case of two Gaussian distributions with the same variance but different mean values, using sorting to calculate the minimal transport (along arbitrary dimensions) will result in the shift of one of the distributions (in this case the green one). This transformation will also affect those samples which are at the intersection of the two distributions. This means that there might be identical pairs in P_F and P_R that generate an error, which is not correct. One could assume that if a generator produces a sample identical to one of the samples of P_R, it should not generate an error and no gradients should be invoked by this sample pair. With the sorting-based assignment, however, these identical samples will not be paired with each other for comparison. Figure 1: This figure depicts a flaw of the Wasserstein distance. In these figures a blue (desired output) distribution and a green (generated fake) distribution can be seen; the assignments/desired transports are illustrated by red arrows. The Wasserstein distance calculates the minimal transportation between all the samples. In many cases this results in shifting all the samples to cover the whole distribution correctly, as depicted in the left subfigure. Unfortunately this means that those samples which are at the intersection of the two distributions, meaning that for these generated samples an identical pair could be found in the real samples, will also be altered and shifted towards another part of the real distribution. Instead of this we propose the calculation of the assignment using a greedy method. This will ensure that identical (or similar) sample pairs are selected first, and after this the transportation between the disjoint regions is calculated, as depicted in the right subfigure. Instead of sorting the samples we propose a method which assigns similar samples to each other. First we would like to remove the intersection of the two distributions and calculate the minimal transport for the remaining disjoint regions. By this calculation the value of the minimal transport will be the same (in case of a sufficiently large sample number), but more similar samples will be assigned to each other.
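The claim that every assignment of well-separated projected samples gives the same Wasserstein-1 cost can be checked with a few lines of Python (the numbers below are ours, chosen only for illustration): when all projected fake samples lie below all projected real samples, every permutation of the pairing yields the same total cost.

import numpy as np
from itertools import permutations

fake = np.array([0.0, 0.1, 0.2])   # all projected fake samples ...
real = np.array([1.0, 1.3, 1.9])   # ... lie below every projected real sample
costs = {round(float(np.sum(np.abs(real[list(p)] - fake))), 6) for p in permutations(range(3))}
print(costs)                        # a single value: every assignment has the same W1 cost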
The previous example reveals a caveat of the Wasserstein distance: it optimizes the global transport between two distributions, but ignores identical sample pairs or intersecting regions of their density functions. Unfortunately, identical samples will be found extremely rarely in two distributions, but closer sample pairs should also be assigned to each other. In training one typically uses mini-batches and will not see the whole distribution; certain parts or elements might not be present in each mini-batch if their probability in the original distribution is lower than one over the mini-batch size. This can cause further problems in the calculation of the minimal transport using sorting, since the appearance of one wrongly generated sample at the first or last position of the projected samples can result in a completely different assignment. This problem is depicted in Fig. 2. We have to emphasize that although the assignment and distance calculation happen in one dimension, these distances come from the projection of high-dimensional embeddings, where there can exist dimensions which can decrease the distance between two samples without significantly changing the projected positions of all the other samples. Figure 2: In this figure we would like to demonstrate different approaches to minimal transportation in one dimension using mini-batches. Real samples are shown in blue and fake samples are plotted as red stars. The positions of the samples were shifted vertically for better display, but only their horizontal position matters. In both figures the same constellation can be seen: four similar sample pairs and one which is very different. The left subfigure depicts sample assignment by sorting, while the right subfigure depicts assignment by a greedy approach. The summed distances are the same in both cases for p = 1. We would like to argue that the lower assignment can be better for network training. We introduce greedy training of Max-Sliced Wasserstein GANs, where instead of sorting the samples of the projections we iteratively select the most similar pairs among them for the loss calculation (see the sketch after this paragraph). Our training algorithm is defined as pseudocode in Algorithm 1. The training process is very similar to the original Max-Sliced Wasserstein approach. The main and only alteration is between lines ten and sixteen, where instead of the sorting operation of the original approach we first generate a matrix which contains the distances for all possible sample pairs in one dimension. First we select the smallest element of this matrix and remove its row and column, and we iterate this process until the matrix contains only one element, which will be the distance between the last, least similar sample pair. We have to note that our approach requires O(n^3) steps compared to the original O(n log n) operations of sorting: we have to perform n minimum selections in the distance matrix of size n × n. We also have to note that this increase in complexity refers to training only and has no effect on the inference complexity of the generator; also, n is the sample or mini-batch size during training, which is usually a relatively low number. In our experience, using batches of 16 to 512 samples, this step did not result in a significant increase in computation time during training. In the previous section, flaws of the Wasserstein distance using sorting in one dimension were presented, and greedy sample assignment was suggested as a possible method for fixing these defects.
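Below is a minimal NumPy sketch of the greedy assignment described above (the helper name is ours): build the matrix of pairwise distances between the projected samples, repeatedly take the smallest remaining entry, and remove its row and column until every sample is paired.

import numpy as np

def greedy_w1(proj_real, proj_fake):
    # proj_real, proj_fake: one-dimensional arrays of equal length n (projected samples)
    m = np.abs(proj_real[:, None] - proj_fake[None, :])   # n x n pairwise distance matrix
    rows, cols = list(range(len(proj_real))), list(range(len(proj_fake)))
    total = 0.0
    while rows:
        sub = m[np.ix_(rows, cols)]
        i, j = np.unravel_index(np.argmin(sub), sub.shape)  # most similar remaining pair
        total += sub[i, j]
        rows.pop(i)                                         # remove the matched row and column
        cols.pop(j)
    return total / len(proj_real)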
On the other hand, one has to admit that in certain cases the greedy approach does not ensure minimal transportation; the transportation obtained by sorting will always be lower than or equal to that of the assignment calculated by the greedy approach. We also have to state that distance calculation with the greedy assignment does not form a proper metric, since it does not satisfy the triangle inequality. It can also easily be seen that one could construct one-dimensional cases where the greedy assignment is arbitrarily larger than sorting. To ensure that these cases cannot occur during training, we have introduced a hybrid approach where we first sort the samples and calculate their Wasserstein distance (noted as W_S^P) and after this step we also calculate the distance using the greedy assignment (noted as W_G^P).
Algorithm 1 Training the Greedy Max-Sliced Wasserstein Generator.
Given: generator parameters θ_g, discriminator parameters θ_d, ω_d, sample size n, learning rate α.
While θ_g has not converged: sample data {D_i}_{i=1}^n, compute the surrogate loss s(ω D_i, ω F_i) and update the discriminator; then sample data {D_i}_{i=1}^n again, create the distance matrix M of the projected sample pairs and, for k = 1 → n, find the minimal value m_{s,t} of M, accumulate L ← L + m_{s,t} and remove row s and column t from M; finally update θ_g using L.
In this case a parameter (ν) can be set, determining a limit: if the difference between the two distances is larger than this value, the sorted distance will be used. This way the Wasserstein distance with the hybrid assignment (W_H^P) can be defined as W_H^P = W_G^P if W_G^P ≤ (1 + ν) · W_S^P, and W_H^P = W_S^P otherwise. In case of ν = 0 the greedy assignment will only be used in those cases where it also results in the minimal transportation. If ν is set to one, the greedy assignment will be used if the distance calculated this way is less than twice the minimal distance. We were using ν = 1 in our experiments. It is also worth noting that sorting the samples before creating the distance matrix (M) can help find the minimal element, since in this case every series starting from the minimal element of a selected row or column forms a monotonically increasing series. For the first tests we have generated a one-dimensional toy problem using a Gaussian mixture model with five modes, each of them with a variance of 0.15 and expected values of 0, 2, 4, 6 and 8, respectively. We have trained a four-layer fully connected network containing 128, 256, 512 and 1 neurons in the corresponding layers for 5000 iterations using the Adam optimizer, with a learning rate of 0.001 and batches of 64. No discriminator was applied in this case; the positions of the generated samples were optimized directly by the generator and the distance calculation was done directly on the output of the generator. In one setup the loss was calculated by sorting the samples, in the other setup we used the greedy approach introduced in Algorithm 1. For further details regarding the hyperparameters of all simulations and for the sake of reproducibility, our code can be found on GitHub, containing all details of the described simulations (the link was removed from the text for the sake of anonymity, but a link pointing to the code was provided during submission). After training we have used the generator to generate 16000 samples and we have compared them to 16000 samples generated by the Gaussian mixture model. We have calculated the Kullback-Leibler divergence and the Pearson correlation coefficient between these two distributions, repeated all experiments ten times and averaged the results. The results can be seen in Table 1.
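A small Python sketch of the ν-threshold rule described above (it reuses the greedy_w1 helper from the previous sketch; the function name and defaults are ours): the greedy distance is used unless it exceeds the sorted, minimal-transport distance by more than a factor of (1 + ν).

import numpy as np

def hybrid_w1(proj_real, proj_fake, nu=1.0):
    sorted_d = np.mean(np.abs(np.sort(proj_real) - np.sort(proj_fake)))  # W_S: minimal transport
    greedy_d = greedy_w1(proj_real, proj_fake)                           # W_G: similar pairs first
    # with nu = 1, greedy is kept as long as it is less than twice the minimal distance
    return greedy_d if greedy_d <= (1.0 + nu) * sorted_d else sorted_d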
As can be seen, training the network with the greedy assignment resulted in a lower Kullback-Leibler divergence and a higher correlation between the generated and the original distributions, which signals that the greedy approach could approximate the original distribution of the samples better. For the calculation of the Kullback-Leibler divergence, histogram calculation was done using 100 bins uniformly distributed between -0.5 and 8.5. An image plotting the histograms using these parameters can be seen in Fig. 3.
Table 1: KL divergence and Pearson correlation between the original and generated distributions using sorting and the greedy approach.
Method | KL div | Pearson corr
Sorted | 0.91 | 0.41
Greedy | 0.68 | 0.74
Figure 3: Histograms of the one-dimensional distributions. The blue curve plots the original distribution of the Gaussian mixture model, the green curve depicts the generated distribution using sorting for sample assignment, while the red curve displays the result of the greedy approach. We have repeated the simulations described in Section 3.1 using two-dimensional Gaussian mixture models. Nine modes were generated forming a small 3 × 3 grid, each with a variance of 0.1, and the distance between neighbouring modes in the grid was 3.5. In this case a five-layer network with 128, 256, 512, 1024 and 1 neurons was applied and training was executed for 500,000 iterations. All other parameters of the simulation were kept the same. The Kullback-Leibler divergence and Pearson correlation coefficients of these simulations comparing the greedy and sorting approaches can be seen in Table 2, while random samples are depicted in Fig. ?? for qualitative comparison. In this experiment a two-dimensional histogram was used for the Kullback-Leibler divergence calculation, where a uniform grid with 100 bins was formed between -2 and 8. We have evaluated our approach on the MNIST dataset as well, where we have used the DCGAN architecture for image generation. Images were resized to 64 × 64 to match the input dimension of the architecture, but single-channel images were used. We were using batches of 128 and 16 samples and the Adam optimizer for training both the generator and the discriminator (which was a single projection). Since the comparison of high-dimensional (in this case 4096-dimensional) distributions is complex, we have binarized the images (thresholded at half of the maximum intensity) and calculated the KL divergence between the distributions of white and black values at each pixel for the original distribution and 60000 generated fake samples. The results can be seen in Table 3, while a qualitative comparison can be seen in the corresponding figure, which displays randomly selected samples generated with the same network architecture and training parameters on the MNIST dataset using batches of 16. The samples on the left were generated using the sorting assignment with the Max-Sliced Wasserstein distance, the samples in the middle were generated using the greedy sample assignment, and the samples on the right were generated using the hybrid approach. All samples were generated after 20 epochs of training. We have also conducted experiments on the resized CelebA dataset; in our case images were downscaled to 64 × 64. We have used the DCGAN architecture and compared the three different approaches for sample assignment in the distance calculation. We did not use any special normalization or initialization methods during training. We used batches of 16 for training.
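A short sketch of the evaluation protocol for the one-dimensional case described above (binning choices follow the text; the smoothing constant is our assumption, since the handling of empty bins is not stated): the KL divergence is computed between histograms of the real and generated samples over 100 uniform bins between -0.5 and 8.5.

import numpy as np

def histogram_kl(real_samples, fake_samples, bins=100, lo=-0.5, hi=8.5, eps=1e-8):
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(real_samples, bins=edges)
    q, _ = np.histogram(fake_samples, bins=edges)
    p = p / p.sum() + eps        # normalise to probabilities; eps avoids log(0) in empty bins
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))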
We have randomly selected 10,000 samples from the dataset, generated 10,000 random projections and used the sliced Wasserstein distance with sorting to compare the distributions along these lines. The results can be seen in Table 4, while a qualitative comparison can be seen in the corresponding figure, which displays randomly selected samples generated with the same network architecture and training parameters on the CelebA dataset using batches of 16. All samples were generated using the Max-Sliced Wasserstein distance; the samples on the left were generated using sorting for sample assignment, in the middle the greedy approach was used, while the samples on the right were generated using the hybrid approach. All samples were generated after 20 epochs of training. In this paper we have introduced greedy sample assignment for Max-Sliced Wasserstein GANs. We have shown that, using one-dimensional samples, in many cases multiple assignments can result in optimal transportation, and that in most cases sorting changes all the samples, while those parts of the distribution which are already at a "good" position should not generate an error. We proposed greedy assignment as a possible solution, where samples are assigned to their most similar counterparts. We have also introduced how the combination of the two methods can be applied, resulting in a hybrid approach in which it can be automatically selected - based on the difference of the two measures - which assignment will be used. We have demonstrated on simple toy datasets that greedy assignment performs better than sorting the samples, and we have evaluated both the greedy and the hybrid methods on commonly investigated datasets (MNIST and CelebA). On all datasets the greedy assignment resulted in a lower Kullback-Leibler divergence and a higher correlation than the traditional approach. We have used the Max-Sliced Wasserstein distance as the base of our comparison, since this is the most recent version, which also results in the best performance, but all the approaches can be exploited in the case of regular Sliced Wasserstein distances as well. Also, our approach changes the distance calculation only, and it can be applied together with various other improved techniques and architectures which are used in GAN training.
We apply a greedy assignment on the projected samples instead of sorting to approximate the Wasserstein distance.
395
scitldr
Lifelong machine learning focuses on adapting to novel tasks without forgetting the old tasks, whereas few-shot learning strives to learn a single task given a small amount of data. These two different research areas are crucial for artificial general intelligence; however, their existing studies have assumed somewhat impractical settings when training the models. For lifelong learning, the nature (or the quantity) of incoming tasks at inference time is assumed to be known at training time. As for few-shot learning, it is commonly assumed that a large number of tasks is available during training. Humans, on the other hand, can perform these learning tasks without regard to the aforementioned assumptions. Inspired by how the human brain works, we propose a novel model, called Slow Thinking to Learn (STL), that makes sophisticated (and slightly slower) predictions by iteratively considering interactions between the current and previously seen tasks at runtime. Having conducted experiments, the results empirically demonstrate the effectiveness of STL for more realistic lifelong and few-shot learning settings. Deep learning has been successful in various applications. However, it still has a lot of areas to improve on to reach humans' lifelong learning ability. As one of its drawbacks, neural networks (NNs) need to be trained on large datasets before giving satisfactory performance. Additionally, they usually suffer from the problem of catastrophic forgetting - a neural network performs poorly on old tasks after learning a novel task. In contrast, humans are able to incorporate new knowledge even from few examples, and continually throughout much of their lifetime. To bridge this gap between machine and human abilities, effort has been made to study few-shot learning (e.g., Ravi & Larochelle, 2017b), lifelong learning (e.g., Serrà et al.), and both. The learning tasks performed by humans are, however, more complicated than the settings used by existing lifelong and few-shot learning works. Task uncertainty: currently, lifelong learning models are usually trained with hyperparameters (e.g., number of model weights) optimized for a sequence of tasks arriving at test time. The knowledge about future tasks (even their quantity) may be too strong an assumption in many real-world applications, yet without this knowledge it is hard to decide the appropriate model architecture and capacity when training the models. Sequential few-shot tasks: existing few-shot learning models are usually (meta-)trained using a large collection of tasks. Unfortunately, this collection is not available in lifelong learning scenarios where tasks come in sequentially. Without seeing many tasks at training time, it is hard for an existing few-shot model to learn the shared knowledge behind the tasks and use that knowledge to speed up the learning of a novel task at test time. Humans, on the other hand, are capable of learning well despite having only limited information and even when not purposely preparing for a particular set of future tasks. Comparing how humans learn and think to how current machine learning models are trained to learn and make predictions, we observe that the key difference lies in the thinking part, which is the decision-making counterpart of models when making predictions. While most NN-based supervised learning models use a single forward pass to predict, humans make careful and less error-prone decisions in a more sophisticated manner.
Studies in biology, psychology, and economics have shown that, while humans make fast predictions (like machines) when dealing with daily familiar tasks, they tend to rely on a slow-thinking system that deliberately and iteratively considers interactions between current and previously learned knowledge in order to make correct decisions when facing unfamiliar or uncertain tasks. We hypothesize that this slow, effortful, and less error-prone decision-making process can help bridge the gap of learning abilities between humans and machines. We propose a novel brain-inspired model, called the Slow Thinking to Learn (STL), for taskuncertain lifelong and sequential few-shot machine learning tasks. STL has two specialized but dependent modules, the cross-task Slow Predictor (SP) and per-task Fast Learners (FLs), that output lifelong and few-shot predictions, respectively. We show that, by making the prediction process of SP more sophisticated (and slightly slower) at runtime, the learning process of all modules can be made easy at training time, eliminating the need to fulfill the aforementioned impractical settings. Note that the techniques for slow predictions (; Ravi & Larochelle (2017b);; ) and fast learning (; ;) have already been proposed in the literature. Our contributions lie in that we 1) explicitly model and study the interactions between these two techniques, and 2) demonstrate, for the first time, how such interactions can greatly improve machine capability to solve the joint lifelong and few-shot learning problems encountered by humans everyday. 2 Slow Thinking to Learn (STL) Figure 1: The Slow Thinking to Learn (STL) model. To model the interactions between the shared SP f and per-task FLs {(g (t), M (t) )} t, we feed the output of FLs into the SP while simultaneously letting the FLs learn from the feedback given by SP. We focus on a practical lifelong and fewshot learning set-up:, · · · arriving in sequence and the labeled examples also coming in sequence, the goal is to design a model such that it can be properly trained by data ) collected up to any given time point s, and then make correct predictions for unlabeled data X (t) = {x (t,i) } i in any of the seen tasks, t ≤ s. Note that, at training time s, the future tasks To solve Problem 1, we propose the Slow Thinking to Learn (STL) model, whose architecture is shown in Figure 1. The STL is a cascade where the shared Slow Predictor (SP) network f parameterized by θ takes the output of multiple task-specific Fast Learners (FLs) {(g (t), M (t) )} t, t ≤ s, as input. An FL for task T (t) consists of an embedding network g (t)2 parameterized by φ (t) and augmented with an external, episodic, non-parametric memory Here, we use the Memory Module as the external memory which saves the clusters of seen examples {(x (t,i), y (t,i) )} i to achieve better storage efficiency-the h (t,j) of an entry (h (t,j), v (t,j) ) denotes the embedding of a cluster of x (t,i)'s with the same label while the v (t,j) denotes the shared label. We use the FL (g (t), M (t) ) and SP f to make few-shot and lifelong predictions for task T (t), respectively. We let the number of FLs grow with the number of seen tasks in order to ensure that the entire STL model will have enough complexity to learn from possibly endless tasks in lifelong. This does not imply that the SP will consume unbounded memory space to make predictions at runtime, as the FL for a specific task can be stored on a hard disk and loaded into the main memory only when necessary. 
Slow Predictions. The FL predicts the label of a test instance x using a single feedforward pass just like most existing machine learning models. As shown in Figure 2 (a), the FL for task T (t) first embed the instance to get h = g (t) (x) and then predicts the labelŷ FL of x by averaging the cluster labels where KNN(h) is the set of K nearest neighboring embeddings of h. We havê where h, h denotes the cosine similarity between h (t,j) and h. On the other hand, the SP predicts the label of x with a slower, iterative process, which is shown in Figure 2 (b). The SP first adapts (i.e., fine-tunes) its weights θ to KNN(h) and their corresponding values stored in M (t) to getθ by solving where loss(·) denotes a loss function. Then, the SP makes a prediction byŷ SP = f (h ;θ). The adapted network fθ is discarded after making the prediction. The slower decision-making process of SP may seem unnecessary and wasteful of computing resources at first glance. Next, we explain why it is actually a good bargain. Life-Long Learning with Task Uncertainty. Since the SP makes predictions after runtime adaptation, we define the training objective of θ for task T (s) such that it minimizes the losses after being adapted for each seen task The term loss(f (h;θ *), v) denotes the empirical slow-prediction loss of the adapted SP on an example (x, y) in M (t), whereθ * denotes the weights of the adapted SP for x following Eq.: requires recursively solvingθ * for each (x, y) remembered by the FLs. We use an efficient gradient-based approach proposed by ) to solve Eq.. Please refer to Section 2.1 of the Appendix for more details. Since the SP learns from the output of FLs, theθ * in Eq. approximates a hypothesis used by an FL to predict the label of x. The θ, after being trained, will be close to everyθ * and can be fine-tuned to become a hypothesis, meaning that θ encodes the invariant principles 3 underlying the hypotheses for different tasks. (a) (b) (c) Figure 3: The relative positions between the invariant representations θ and the approximate hypothesesθ (t)'s of FLs for different tasks T (t)'s on the loss surface defined by FLs after seeing the (a) first, (b) second, and (c) third task. Since θ−θ (t) ≤ R for any t in Eq., the effective capacity of SP (at runtime) is the union of the capacity of all possible points within the dashed R-circle centered at θ. Furthermore, after being sequentially trained by two tasks using Eq., the θ will easily get stuck in the middle ofθ andθ. To solve the third task, the third FL needs to change its embedding function (and therefore the loss surface) such thatθ falls into the R-circle centered at θ. Recall that in Problem 1, the nature of tasks arriving after a training process is unknown, thus, it is hard to decide the right model capacity at training time. A solution to this problem is to use an expandable network and expand the network when training it for a new task, but the number of units to add during each expansion remains unclear. Our STL walks around this problem by not letting the SP learn the tasks directly but making it learn the invariant principles behind the tasks. Assuming that the underlying principles of the learned hypotheses for different tasks are universal and relatively simple, 4 one only needs to choose a model architecture with capacity that is enough to learn the shared principles in lifelong manner. Note that limiting the capacity of SP at training time does not imply underfitting. 
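As a rough illustration of the slow-prediction procedure described earlier in this section, the following is a hedged PyTorch sketch (the optimizer, learning rate, loss, and number of steps are our assumptions; the paper solves the adaptation objective with a MAML-style procedure): the SP is copied, fine-tuned for a few steps on the K nearest entries of the task's memory, used once for the prediction, and then discarded.

import copy
import torch
import torch.nn.functional as F

def slow_predict(sp, h, memory_h, memory_v, k=5, steps=3, lr=0.1):
    # sp: shared Slow Predictor; h: (1, d) embedding of the test point produced by the task's FL
    # memory_h: (m, d) stored cluster embeddings; memory_v: (m,) their labels
    sims = F.cosine_similarity(memory_h, h, dim=1)
    idx = sims.topk(k).indices                       # K nearest neighbours in the FL memory
    adapted = copy.deepcopy(sp)                      # adapt a throwaway copy; theta itself stays fixed
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):                           # few steps, mirroring the radius-R constraint
        loss = F.cross_entropy(adapted(memory_h[idx]), memory_v[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return adapted(h).argmax(dim=1)              # prediction; the adapted copy is then discarded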
As shown in Figure 3, the postadaptation capacity of SP at runtime can be much larger than the capacity decided during training. Sequential Few-Shot Learning. Although each FL is augmented with an external memory that has been shown to improve learning efficiency by the theory of complementary learning systems , it is not sufficient for FLs to perform few-shot predictions. Normally, these models need to be trained on many existing few-shot tasks in order to obtain good performance at test time. Without assuming s in Problem 1 to be a large number, the STL takes a different approach that fast stabilizes θ and then let the FL for a new incoming task learn a good hypothesis by extrapolating from θ. We define the training objective of g (s), which is parameterized by φ (s) and augmented with memory M (s), for the current task T (s) as follows: where ) is the empirical loss term whose specific form depends on the type of external memory used (see Section 2.2 of the Appendix for more details), and ) is a regularization term, which we call the feedback term, whose inverse value denotes the usefulness of the FL in helping SP (f parameterized by θ) adapt. Specifically, it is written as The feedback term encourages each FL to learn unique and salient features for the respective task so the SP will not be confused by two tasks having similar embeddings. As shown in Figure 3 (b), the relative position of θ gets "stuck" easily after seeing a few of previous tasks. To solve the current task, g (s) needs to change the loss surface for θ such thatθ (s) falls into the R-circle centered at θ (Figure 3(c) ). This makes θ an efficient guide (through the feedback term) to finding g (s) when there are only few examples and also few previous tasks. We use an alternate training procedure to train the SP and FLs. Please see Section 2.3 of the Appendix for more details. Note that when sequentially training STL for task T (s) in lifelong, we can safely discard the data in the previous tasks because the FLs are task-specific (see Eq.) and the SP does not require raw examples to train (see Eq.). In this section, we discuss related works that are not mentioned in Sections 1 and 2. For a complete discussion, please refer to Section 1 of the Appendix. Runtime Adaptation. Our idea of adapting SP at runtime is similar to that of MbPA , which is a method proposed for lifelong learning only. In MbPA, the embedder and output networks are trained together, in a traditional approach, as one network for the current task. Its output network adapts to examples stored in an external memory for a previous task before making lifelong predictions. Nevertheless, there is no discussion of how the runtime adaptation could improve the learning ability of a model, which is the main focus of this paper. Meta-Learning. The idea of learning the invariant representations in SP is similar to meta-learning (; Ravi & Larochelle (2017b); ), where a model (meta-)learns good initial weights that can speed up the training of the model for a new task using possibly only few shots of data. To learn the initial weights (which correspond to the invariant representations in our work), existing studies usually assume that the model can sample tasks, including training data, following the task distribution of the ground truth. However, the Problem 1 studied in this paper does not provide such a luxury. Memory-Augmented Networks. 
An FL is equipped with an external episodic memory module, which is shown to have fast-learning capability due to its nonparametric nature. Although we use the Memory Module in this work, our model can integrate with other types of external memory modules, such as;;. This is left as our future work. FewShot Learning without Forgetting. proposed a new few-shot learning approach that does not forget previous tasks when trained on a new one. However, it still needs to be trained on a large number of existing tasks in order to make few-shot predictions and therefore cannot be applied to Problem 1. In this section, we evaluate our model in different aspects. We implement STL and the following baselines using TensorFlow : Vanilla NN. A neural network without any technique for preventing catastrophic forgetting or preparation for few-shot tasks. EWC. A regularization technique protecting the weights that are important to previous tasks in order to mitigate catastrophic forgetting. Memory Module. An external memory module that can make predictions (using KNNs) by itself. It learns to cluster rows to improve prediction accuracy and space efficiency. MbPA+. A memory-augmented model (trained (FLs first, and then SP), and we use MAML to solve Eq. when training the SP. Separate-MbPA. This is similar to Separate-MAML, except that the SP is not trained to prepare for run-time adaptation, but it still applies run-time adaptation at test time. Next, we evaluate the abilities of the models to fight against catastrophic forgetting using the permuted MNIST and CIFAR-100 datasets. Then, we investigate the impact of task-uncertainty on model performance. Permuted MNIST. We create a sequence of 10 tasks, where each task contains MNIST images whose pixels are randomly permuted using the same seed. The seeds are different across tasks. We train models for one task at a time, and test their performance on all tasks seen so far. We first use a setting where all memory-augmented models can save raw examples in their external memory. This eliminates the need for an embedding network, and, following the settings in , we use a 2-layer MLP with 400 units for all models. We trained all models using the Adam optimizer for 10,000 iterations per task, with their best-tuned hyperparameters. Figure 4(a) shows the average performance of models for all tasks seen so far. The memory-augmented models outperform the Vanilla NN and EWC and do not suffer from forgetting. This is consistent with previous findings . However, saving raw examples for a potentially infinite number of tasks may be infeasible as it consumes a lot of space. We therefore use another setting where memory-augmented models save only the embedded examples. This time, we let both the embedder and the output network (in STL, it is SP) consist of 1-layer MLP with 400 units. Figure 4(b) shows that the memory-augmented models do not forget even when saving the embeddings. The only exception is MbPA+, because it uses the same embedder network for all tasks, the embedder network is prone to forgetting. CIFAR-100. Here, we design more difficult lifelong learning tasks using the CIFAR-100 dataset. The CIFAR-100 dataset consists of 100 classes. Each class belongs to a superclass, and each superclass has 5 classes. We create a sequence of tasks, called CIFAR-100 Normal, where the class labels in one task belong to different superclasses, but the labels across different tasks are from the same superclass. 
This ensures that there is transferable knowledge between tasks in the ground truth. We also create another sequence of tasks, called CIFAR-100 Hard, where the class labels in one task belong to the same superclasses, while the labels across different tasks are from different superclass. The tasks in CIFAR-100 Hard share less information, making the lifelong learning more difficult. For CIFAR-100 tasks, we let the memory-augmented models store embeddings in external memory. The embedding networks of all models consist of 4 convolutional layers followed by one fully connected layer, and all output networks (SP in STL) are a 2-layer MLP with 400 units. We search for the best hyperparameters for each model but limit the memory size to 100 embeddings, apply early stopping during training, and use Adam as the optimizer. As shown in Figures 5(a)(b), our SP clearly outperforms the baseline models for both the Normal and Hard tasks. Task Uncertainty and Hyperparameters. To understand why the SP outperforms other baselines, we study how the performance of each model changes with model capacity. Figure 4 (c) shows the performance of different models on the permuted MNIST dataset when we deliberately limit the size of external memory to 10 embeddings. Only the SP performs well in this case. We also vary the size of external memory used by our FLs and find out that the performance of SP does not drastically change like the other baselines, except when the memory size is extremely small, as shown in Figure 5 (c). The above justify that our STL can avoid the customization of memory size (a hyperparameter) to be specifically catered to expected future tasks, whose precise characteristics may not be known at training time. In addition to memory size, we also conduct experiments with models whose architectures of the output networks (SP in our STL) are changed based on LeNet . We consider two model capacities. The larger model has 4 convolutional layers with 128, 128, 256, and 256 filters followed by 3 fully-connected layers with 256, 256, and 128 units; whereas the small model has 4 convolutional layers with 16, 16, 32, and 32 filters followed by 3 fully-connected layers with 64, 32, 16 units. Figure 6 compares the performance of different parametric models for the current and previous CIFAR-100 Normal tasks. We can see that the performance of EWC on current task is heavily affected by model capacity. EWC with small model size can learn well at first, but struggles to fit the following tasks, which was not a problem when it has larger model size. MbPA has good performance on current task but forgets the previous tasks no matter how large the model is. On the other hand, STL is able to perform well on both the previous and current tasks regardless of model size. This proves the advantage of SP's runtime adaptation ability, that is, it mitigates the need for a model that is carefully sized to the incoming uncertain lifelong tasks. CIFAR-100. Existing few-shot learning models are usually trained using a large collection of tasks as training batches. However, in sequential continual learning settings, collections of these tasks are not available. Here we designed an experiment setting that simulates an incoming few-shot task during lifelong learning. We modified the CIFAR Normal and CIFAR Hard sequential tasks, where we trained the models with sequential tasks just like conventional lifelong learning set-up, except that the last task is a "few-shot" task. 
5 In this experiment, we assume that the input domains are the same, which means we can use the network parameters (e.g. embedder's weights) learned from previous tasks as initial weights. We consider three baselines, namely the Memory Module, Separate-MAML, and Vanilla NN. The Memory Module is the only known model designed for both lifelong and few-shot learning. We use Separate-MAML to simulate the STL without the feedback term in Eq and the Vanilla NN to indicate "default" fine-tuning performance for each task. Figure 7 shows the performance on the few-shot task with different number of batches of available training data. Each batch contains 16 examples, and the memory size for each task is 20 in all memory-augmented models. We can see that both the FLs and SP in our model outperform other baselines. The Memory Module cannot learn well without seeing a large collection of tasks at training time, and the Separate-MAML gives unstable performance due to the lack of feedback from the SP (MAML). Interestingly, these two sophisticated models perform worse than the Vanilla NN sometimes, justifying that the interactions between the fast-leaning and slow-thinking modules are crucial to the joint lifelong and few-shot learning. Our above observations still hold on the even more challenging dataset, CIFAR Hard. Please refer to Section 3.3 of the Appendix for more details. Comparing the of FLs and SP, we can see that an FL gives better performance when the training data is small. This justifies that the invariant representations learned by the SP can indeed guide an FL to better learn from the few shots. Interestingly, the intersection of the predictive ability of an FL and the SP seem to be stable across tasks and usually falls within the range of 48 to 192 examples. In Section 3.4 of the Appendix, we visualize the embeddings stored in the FLs and the Memory Module to understand how the feedback from SP guide the representation learning of FLs. Inference Time. The SP makes "slow" predictions because of runtime adaptation. Here, we study the time required by the SP to make a single prediction. We run trained models on a machine with a commodity NVIDIA GTX-1070 GPU. The number of adaptation steps used is 3 as in previous experiments. For an FL, SP, and a non-adaptive Vanilla NN trained for the CIFAR-100 Normal tasks, we get 0.24 ms, 2.62 ms, and 0.79 ms per-example inference time on average. We believe that trading delay of a few milliseconds at runtime for a great improvement on lifelong and few-shot learning abilities is a good bargain in many applications. Space Efficiency. The STL also has an advantage in space efficiency. Please see Section 3 of the Appendix for more details. Inspired by the thinking process that humans undergo when making decisions, we propose STL, a cascade of per-task FLs and shared SP. To the best of our knowledge, this is the first work that studies the interactions between the fast-learning and slow-prediction techniques and shows how such interactions can greatly improve machine capability to solve the joint lifelong and few-shot learning problems under challenging settings. For future works, we will focus on integrating the STL with different types of external memory and studying the performance of STL in real-world deployments. Memory-Augmented Neural Network bridged the gap of leveraging an external memory for one-shot learning. MANN updates its external memory by learning a content-based memory writer. 
The Memory Module learns a Matching Network but includes an external memory that retains previously seen examples or their representatives (cluster of embeddings). Unlike MANN, the Memory Module has a deterministic way of updating its memory. Memory Module has the ability for providing ease and efficiency in grouping of incoming data and selecting class representations. However, Memory Module encounters limitation in learning and making precise predictions when the given memory space becomes extremely small. Our proposed STL focuses on the interaction between the per-task memory-augmented Fast Learners (FLs) and the Slow Predictor (SP) to optimize the data usage stored in the memory. This interaction allows an FL to learn better representations for a better lifelong and few-shot predictions. It is common for lifelong learning algorithms to store a form of knowledge from previously learned tasks to overcome forgetting. Some remember the task specific models , while some store raw data, the hessian of the task, or the attention mask of the network for the task (; ; Serrà et al. ). Some approaches such as not only attempts to consolidate the model but also expands the network size. Other works like; tried to solve the problem with fixed storage consumption. Except for , the previously mentioned works need to predefine the model capacity, and lacks the flexibility to unknown number of future tasks. Although can expand its capacity when training for a new task, the challenge of deciding how many number of units to add during each expansion still remains. Some of the recent models (; ; ; ; ; ;) in lifelong learning have taken inspiration on how the brain works . Our proposed framework is closely related to other dual-memory systems that are inspired by the complementary learning systems (CLS) theory, which defines the contribution of the hippocampus for quick learning and the neocortex for memory consolidation. A version of GeppNet that is augmented with external memory stores some of its training data for rehearsal after each new class is trained. FearNet ) is composed of three networks for quick recall, memory consolidation, and network selection. Both GeppNet and FearNet have dedicated sleep phases for memory replay, a mechanism to mitigate catastrophic forgetting. STL, however, does not require a dedicated sleep or shutdown to consolidate the memory. This choice is based on considering that there are cases wherein a dedicated sleep time is not feasible, such as when using a machine learning model to provide a frontline service that needs to be up and running all the time and cannot be interrupted by a regular sleep schedule. In this section, we discuss more technical details about the design and training of STL. There are different ways to solve Eq.. One can use either the gradient-based MAML or Reptile to get an approximated solution efficiently. The constraint θ − θ ≤ R can be implemented by either adding a Lagrange multiplier in the objective or limiting the number of gradient steps in MAML/Reptile. In this paper, we use MAML due to its simplicity, ease of implementation, and efficiency, and we enforce the constraint θ − θ ≤ R by limiting the number of adaptation steps of SP at runtime. An FL in STL is compatible with different types of external memory modules, such as;;. We choose the Memory Module in this paper due to its clustering capabilities, which increase space efficiency. For completeness, we briefly discuss how an FL based on the Memory Module are optimized. 
) be the sorted K nearest neighbors (from the closest to the farthest) of the embedding of x, and where ·, · denotes the cosine similarity between two vectors, and p and b are the smallest indices such that v (p) = y and v (b) = y, respectively; h (p) and h (b) are the closest positive and negative neighbors. As this loss is minimized, g (s) maximizes the similarity of embedding of training data points to their positive neighbors, while minimizing the similarity to the negative neighbors by a margin of ε. also has deterministic update and replacement rules for records in M (s). In effect, an h represents the embedding of a cluster of data points, and its value v denotes the shared label of points in that group. We sequentially train the STL for tasks T, T, · · · coming in lifelong. For the current task T (s), we train the STL using an alternate training approach. First, the weights φ (s) of g (s) for T (t) is updated by taking some descent steps following Eq. in the main paper. Next, the θ that parametrizes f is updated following Eq. in the main paper. One alternate training iteration involves training the FL for the current task for a steps, and then the SP for b steps. We set the alternate ratio a: b to 1:1 by default. The pseudo-code of STL's training procedure is shown in Algorithms 1, 2, and 3. One important hyperparameter to decide before training the STL is R, which affects the number of adaptation steps used by SP. A larger R allows the adapted weightsθ's to move farther from θ, which may lead the SP to better lifelong predictions but will in higher computation cost at runtime. A smaller R helps stabilize θ after the model is trained on previous tasks and enables θ to guide the FLs for new incoming tasks sooner. We experimented on different values of R by adjusting the number of adaptation steps of SP, and found out that it does not need to be large to achieve good performance. Normally, it suffices to have less than 5 adaptation steps. The SP in STL can work with FTs having very small external memory. Figure 8(a) shows the trade-off between the average all-task performance at the 10-th sequential task on the permuted MNIST dataset and the size (in number of embedded examples) of external memory. While the performance of most memory-augmented models drops when the memory size is 10, the SP can still perform well. This justifies that the invariant principles learned by the SP can indeed guide FLs to find better representations that effectively "bring back" the knowledge of SP for a particular task. Figure 8(b) shows the memory space required by different models in order to achieve at least 0.9 average all-task accuracy. The STL consumes less than 1% space as compared to MbPA and Memory Module. The SP also has high adaptation efficiency. Figure 8 (c) shows the performance gain of different adaptive models after runtime adaptation. When the memory size is 1000, all models can adapt for the current task to give improved performance. However, the adaptation efficiency of the baseline models drops when the memory size is 10. The SP, on the other hand, achieves good performance in this case even after being adapted for just one step thanks to 1) the SP is trained to be ready for adaptation and 2) the invariant principles learned by the SP are useful by themselves and require only few examples (embeddings) to transform to a good hypothesis. 3.3 Sequential Few-shot Learning on CIFAR Hard Figure 9 shows the sequential few-shot learning of different models on the CIFAR Hard dataset. 
In this dataset, the labels of different tasks come from different superclasses in the original CIFAR 100 dataset. So, it is very challenging for a model to learn from only a few examples in a new task without being able to see a lot of previous tasks. As we can see, our STL model still outperforms other baselines. In particular, Figure 9 (a) shows that the STL is able to perform few-shot learning after seeing just one previous task. Again, the above demonstrates that the interactions between the FLs and SP are a key to improving machine ability for the joint lifelong and few-shot learning. In order to understand how the feedback from SP guide the representation learning of FLs, we visualize the embeddings stored in the FLs and the Memory Module. We sample 300 testing examples per task, get their embeddings from the two models, and then project them onto a 2D space using the t-SNE algorithm . The are shown in Figure 10. We can see that the embeddings produced by different FLs for different tasks are more distant from each other than those output by the Memory Module. Recall that the feedback term in Eq. encourages each FL to learn features that help the SP adapt. Therefore, each FL learns more salient features for the respective task so the SP will not be confused by two tasks having similar embeddings. This, in turn, quickly stabilizes SP and makes it an efficient guide (through the feedback term) to learning the FL for a new task when there are only few examples and also few previous tasks, as Figure 3 shows.
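The visualization described above can be reproduced with a few lines of scikit-learn (a sketch; sample counts and plotting details are ours): embed a few hundred test examples per task with the corresponding FL, stack them, and project them to 2D with t-SNE, colouring each point by its task.

import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_task_embeddings(embeddings_per_task):
    # embeddings_per_task: list of (n_t, d) arrays, one per task (300 test examples each in the text)
    all_emb = np.concatenate(embeddings_per_task, axis=0)
    task_id = np.concatenate([np.full(len(e), t) for t, e in enumerate(embeddings_per_task)])
    xy = TSNE(n_components=2, init='pca', random_state=0).fit_transform(all_emb)
    plt.scatter(xy[:, 0], xy[:, 1], c=task_id, s=4, cmap='tab10')  # one colour per task
    plt.show()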
This paper studies the interactions between fast-learning and slow-prediction models and demonstrates how such interactions can improve machine capability to solve the joint lifelong and few-shot learning problems.
396
scitldr
Hypernetworks are meta neural networks that generate weights for a main neural network in an end-to-end differentiable manner. Despite extensive applications ranging from multi-task learning to Bayesian deep learning, the problem of optimizing hypernetworks has not been studied to date. We observe that classical weight initialization methods like Xavier init and Kaiming init, when applied directly on a hypernet, fail to produce weights for the mainnet in the correct scale. We develop principled techniques for weight initialization in hypernets, and show that they lead to more stable mainnet weights, lower training loss, and faster convergence. Meta-learning describes a broad family of techniques in machine learning that deals with the problem of learning to learn. An emerging branch of meta-learning involves the use of hypernetworks, which are meta neural networks that generate the weights of a main neural network to solve a given task in an end-to-end differentiable manner. Hypernetworks were originally introduced as a way to induce weight-sharing and achieve model compression by training the same meta network to learn the weights belonging to different layers in the main network. Since then, hypernetworks have found numerous applications including but not limited to: weight pruning, neural architecture search, Bayesian neural networks, multi-task learning (e.g., Serrà et al., 2019), continual learning, generative models, ensemble learning, hyperparameter optimization, and adversarial defense. Despite the intensified study of applications of hypernetworks, the problem of optimizing them remains significantly understudied to this day. Given the lack of principled approaches to training hypernetworks, prior work in the area is mostly limited to ad-hoc approaches based on trial and error (cf. Section 3). For example, it is common to initialize the weights of a hypernetwork by sampling a "small" random number. Nonetheless, these ad-hoc methods do lead to successful hypernetwork training, primarily due to the use of the Adam optimizer, which has the desirable property of being invariant to the scale of the gradients. However, even Adam will not work if the loss diverges (i.e. integer overflow) at initialization, which will happen in sufficiently big models. The normalization of badly scaled gradients also results in noisy training dynamics where the loss function suffers from bigger fluctuations during training compared to vanilla stochastic gradient descent (SGD). It has been shown that while adaptive optimizers like Adam may exhibit lower training error, they fail to generalize as well to the test set as non-adaptive gradient methods. Moreover, Adam incurs a computational overhead and requires 3X the amount of memory for the gradients compared to vanilla SGD. Small random number sampling is reminiscent of early neural network research before the advent of classical weight initialization methods like Xavier init and Kaiming init. Since then, a big lesson learned by the neural network optimization community is that architecture-specific initialization schemes are important to the robust training of deep networks, as shown recently in the case of residual networks.
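For concreteness, here is a minimal PyTorch sketch of a hypernetwork in the sense defined above (class and argument names are ours): a linear hypernet maps an embedding e to the weight matrix of one mainnet layer, and the mainnet forward pass uses the generated weights, so gradients flow back into the hypernet parameters.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearHypernet(nn.Module):
    # Generates the (d_out x d_in) weight of one mainnet linear layer from an embedding e.
    def __init__(self, embed_dim, d_in, d_out):
        super().__init__()
        self.out = nn.Linear(embed_dim, d_in * d_out)   # hypernet output layer (H, beta)
        self.d_in, self.d_out = d_in, d_out

    def forward(self, e, x):
        # e: (embed_dim,) embedding vector; x: (batch, d_in) mainnet input
        w = self.out(e).view(self.d_out, self.d_in)     # generated mainnet weights theta
        return F.linear(x, w)                            # mainnet forward pass with generated weights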
Our contributions. We show that when classical methods are used to initialize the weights of hypernetworks, they fail to produce mainnet weights in the correct scale, leading to exploding activations and losses. This is because classical network weights transform one layer's activations into another, while hypernet weights have the added function of transforming the hypernet's activations into the mainnet's weights. Our solution is to develop principled techniques for weight initialization in hypernetworks based on variance analysis. The hypernet case poses unique challenges. For example, in contrast to variance analysis for classical networks, the case for hypernetworks can be asymmetrical between the forward and backward pass. The asymmetry arises when the gradient flow from the mainnet into the hypernet is affected by the biases, whereas in general this does not occur for gradient flow in the mainnet. This underscores again why architecture-specific initialization schemes are essential. We show both theoretically and experimentally that our methods produce hypernet weights in the correct scale. Proper initialization mitigates exploding activations and gradients or the need to depend on Adam. Our experiments reveal that it leads to more stable mainnet weights, lower training loss, and faster convergence. Section 2 briefly covers the relevant technical preliminaries and Section 3 reviews problems with the ad-hoc methods currently deployed by hypernetwork practitioners. We derive novel weight initialization formulae for hypernetworks in Section 4, empirically evaluate our proposed methods in Section 5, and finally conclude in Section 6. Definition. A hypernetwork is a meta neural network H with its own parameters φ that generates the weights of a main network θ from some embedding e in a differentiable manner: θ = H_φ(e). Unlike a classical network, in a hypernetwork the weights of the main network are not model parameters. Thus the gradients ∆θ have to be further backpropagated to the weights of the hypernetwork ∆φ, and then trained via gradient descent φ_{t+1} = φ_t − λ∆φ_t. This fundamental difference suggests that conventional knowledge about neural networks may not apply directly to hypernetworks, and novel ways of thinking about weight initialization, optimization dynamics and architecture design for hypernetworks are sorely needed. We propose the use of Ricci calculus, as opposed to the more commonly used matrix calculus, as a suitable mathematical language for thinking about hypernetworks. Ricci calculus is useful because it allows us to reason about the derivatives of higher-order tensors with notational ease. For readers not familiar with the index-based notation of Ricci calculus, please refer to the literature for a good introduction to the topic written from a machine learning perspective. For a general nth-order tensor T_{i_1,...,i_n}, we use d_{i_k} to refer to the dimension of the index set that i_k is drawn from. We include explicit summations where the relevant expressions might be ambiguous, and use the Einstein summation convention otherwise. We use square brackets to denote different layers for added clarity, so for example W[t] denotes the t-th weight layer. Xavier init derived weight initialization formulae for a feedforward neural network by conducting a variance analysis over activations and gradients. For a linear layer y^i = W^i_j x^j + b^i, suppose we make the following Xavier Assumptions at initialization: (1) the weights W^i_j are i.i.d.; (2) the inputs x^j are i.i.d.; (3) the weights and the inputs are mutually independent and centered at zero. Then Var(y^i) = d_j Var(W^i_j) Var(x^j) in the forward pass and Var(∂L/∂x^j) = d_i Var(W^i_j) Var(∂L/∂y^i) in the backward pass. Thus, the forward pass and backward pass result in symmetrical formulae: the forward version sets Var(W^i_j) = 1/d_j and the backward version sets Var(W^i_j) = 1/d_i.
To balance the two, an initialization based on their harmonic mean was proposed: Var(W^i_j) = 2/(d_i + d_j). In general, a feedforward network is non-linear, so these assumptions are strictly invalid. But symmetric activation functions with unit derivative at 0 result in a roughly linear regime at initialization. Kaiming init extended this analysis to the case of ReLU activation functions, i.e. f(x) = max(0, x), which results in an extra factor of 2 in the variance formula. The W^i_j have to be symmetric around 0 to enforce Xavier Assumption 3 as the activations and gradients propagate through the layers. It has also been argued that either the forward or the backward version of the formula can be adopted, since the activations or gradients will only be scaled by a depth-independent factor. For convolutional layers, we have to further divide the variance by the size of the receptive field. 'Xavier init' and 'Kaiming init' are terms that are sometimes used interchangeably. Where there might be confusion, we will refer to the forward version as fan-in init, the backward version as fan-out init, and the harmonic mean version as harmonic init. The seminal hypernetwork paper identified two distinct classes of hypernetworks: dynamic (for recurrent networks) and static (for convolutional networks). It proposed orthogonal init for the dynamic class, but omitted discussion of initialization for the static class. The static class has since proven to be the dominant variant, covering all kinds of non-recurrent networks (not just convolutional), and thus will be the central object of our investigation. Through an extensive literature and code review, we found that hypernet practitioners mostly depend on the Adam optimizer, which normalizes the scale of the gradients and is thus invariant to it, and resort to one of four weight initialization methods, among them: M3, Kaiming init but with the output layer scaled by 1/10; and M4, Kaiming init but with the hypernet embedding set to be a suitably scaled constant. M1 uses classical neural network initialization methods to initialize hypernetworks. This fails to produce weights for the main network in the correct scale. Consider the following illustrative example of a one-layer linear hypernet generating a linear mainnet with T + 1 layers, given embeddings sampled from a standard normal distribution and weights sampled entry-wise from a zero-mean distribution (we leave the biases out for now). With the fan-out variant, the variance of the generated mainnet weights is likely to vanish, since the size of the embedding vector is typically small relative to the width of the mainnet weight layer being generated; with the fan-in variant, it is conversely far too large. Where the fan-in is of a different scale than the fan-out, the harmonic mean has a scale close to that of the smaller number. Therefore, the fan-in, fan-out, and harmonic variants of Xavier and Kaiming init will all result in activations and gradients that scale exponentially with the depth of the mainnet. M2 and M3 introduce additional hyperparameters into the model, and the ad-hoc manner in which they work is reminiscent of pre-deep-learning neural network research, before the introduction of classical initialization methods like Xavier and Kaiming init. This ad-hoc manner is not only inelegant and consumes more compute, but will likely fail for deeper and more complex hypernetworks. M4 sets the embedding to a suitable constant, such that the generated mainnet weights can seem to be initialized with the same variance as Kaiming init. This ensures that the variance of the activations in the mainnet is preserved through the layers, but the restrictions on the embeddings might not be desirable in many applications.
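The mis-scaling described here is easy to reproduce numerically. The following sketch (illustrative sizes chosen by us, not taken from the paper) applies the classical fan-in and fan-out formulas directly to a one-layer linear hypernet and compares the variance of the generated mainnet weights against the 1/fan-in that the mainnet actually needs:

```python
# Applying classical init directly to a one-layer linear hypernet W = H e:
# fan-in init makes the generated weights far too large, fan-out init far too small.
import numpy as np

rng = np.random.default_rng(0)
d_e, d_in, d_out = 8, 512, 512                      # embedding size, mainnet fan-in / fan-out
e = rng.standard_normal(d_e)                         # embedding ~ N(0, 1)

H_fan_in = rng.normal(0.0, np.sqrt(1.0 / d_e), size=(d_out * d_in, d_e))
H_fan_out = rng.normal(0.0, np.sqrt(1.0 / (d_out * d_in)), size=(d_out * d_in, d_e))

W_fan_in = (H_fan_in @ e).reshape(d_out, d_in)
W_fan_out = (H_fan_out @ e).reshape(d_out, d_in)

print("needed by the mainnet (fan-in):", 1.0 / d_in)         # ~0.002
print("generated with fan-in init:    ", W_fan_in.var())     # ~1, exploding activations
print("generated with fan-out init:   ", W_fan_out.var())    # ~d_e/(d_in*d_out), vanishing
```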
Luckily, the fix appears simple: scale the variance of the hypernet's output layer so that the generated weights in the mainnet have variance Var(W^i_j) = 1/d_j, resembling conventional neural networks initialized with fan-in init. This suggests a general hypernet weight initialization strategy: initialize the weights of the hypernet such that they approximate weights coming from classical neural network initialization. We elaborate on and generalize this intuition in Section 4. Most hypernetwork architectures use a linear output layer so that gradients can pass from the mainnet into the hypernet directly without any non-linearities. We make use of this fact in developing methods called hyperfan-in init and hyperfan-out init for hypernetwork weight initialization based on the principle of variance analysis. Proposition. Suppose a hypernetwork comprises a linear output layer. Then, the variance between the input and output activations of a linear layer in the mainnet, y^i = W^i_j x^j + b^i, can be preserved using fan-in init in the hypernetwork with appropriately scaled output layers. Case 1. The hypernet generates the weights but not the biases of the mainnet; the bias in the mainnet is initialized to zero. We can write the weight generation in the form W^i_j = H^{ij}_k h(e)^k + β^{ij}, where h computes all but the last layer of the hypernet and (H, β) form the output layer. We make the following Hyperfan Assumptions at initialization: the Xavier assumptions hold for all the layers in the hypernet. Use fan-in init to initialize the weights for h. Then Var(h(e)^k) = Var(e^l). If we initialize H with the formula Var(H^{ij}_k) = 1/(d_j d_k Var(e^l)) and β with zeros, we arrive at Var(W^i_j) = 1/d_j, which is the formula for fan-in init in the mainnet. The Hyperfan assumptions imply that the Xavier assumptions hold in the mainnet, thus preserving the input and output activations. Case 2. The hypernet generates both the weights and biases of the mainnet. We can write the weight and bias generation in the form W^i_j = H^{ij}_k h(e)^k + β^{ij} and b^i = G^i_l g(e)^l + γ^i respectively, where h and g compute all but the last layer of the hypernet, and (H, β) and (G, γ) form the output layers. We modify Hyperfan Assumption 2 so it includes G^i_l, g(e)^l, and γ^i, and further assume Var(x^j) = 1, which holds at initialization with the common practice of data standardization. Use fan-in init to initialize the weights for h and g. Then, if we initialize H and G such that the weight and bias generation each contribute half of the target activation variance (i.e. d_j Var(W^i_j) = 1/2 and Var(b^i) = 1/2), and β, γ with zeros, the input and output activations in the mainnet can be preserved. If we instead initialized G to zeros, its contribution to the variance would increase during training, causing exploding activations in the mainnet. Hence, we prefer to introduce a factor of 1/2 to divide the variance between the weight and bias generation, where the variance of each component is allowed to either decrease or increase during training. This becomes a problem if the variance of the activations in the mainnet deviates too far away from 1, but we found that it works well in practice. A similar derivation can be done for the backward pass using analogous assumptions on the gradients flowing through the mainnet weights and through the mainnet biases. If we initialize the output layer H with the analogous hyperfan-out formula Var(H^{ij}_k) = 1/(d_i d_k Var(e^l)) and the rest of the hypernet with fan-in init, then we can preserve the input and output gradients on the mainnet. However, note that the gradients will shrink when flowing from the mainnet into the hypernet, and are further scaled by a depth-independent factor due to the use of fan-in rather than fan-out init inside the hypernet. In the classical case, the forward version (fan-in init) and the backward version (fan-out init) are symmetrical.
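Before moving on, here is a minimal sketch of Case 1 (weights only) that follows the variance derivation above; the helper names, the uniform-sampling choice, and the example sizes are our own assumptions rather than the authors' code:

```python
# Hyperfan init sketch for the hypernet output layer H (Case 1: weights only, beta = 0).
# Var(H) is chosen so that Var(W) matches fan-in (hyperfan-in) or fan-out (hyperfan-out).
import numpy as np

def uniform_from_variance(shape, var, rng):
    a = np.sqrt(3.0 * var)                       # U(-a, a) has variance a^2 / 3
    return rng.uniform(-a, a, size=shape)

def init_output_layer(d_k, fan_in, fan_out, var_e, mode, rng, receptive_field=1):
    d_main = fan_in if mode == "hyperfan_in" else fan_out
    var_H = 1.0 / (d_main * d_k * var_e * receptive_field)
    H = uniform_from_variance((fan_out * fan_in, d_k), var_H, rng)
    beta = np.zeros((fan_out, fan_in))
    return H, beta

rng = np.random.default_rng(0)
d_k, d_in, d_out, var_e = 64, 512, 512, 1.0      # width of h(e), mainnet fan-in/out, Var(e)
H, beta = init_output_layer(d_k, d_in, d_out, var_e, "hyperfan_in", rng)
h_e = rng.standard_normal(d_k)                   # stand-in for h(e), with variance Var(e) = 1
W = (H @ h_e).reshape(d_out, d_in) + beta
print(W.var(), "vs. target fan-in variance", 1.0 / d_in)   # the two should roughly match
```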
This remains true for hypernets if they only generate the weights of the mainnet. However, if they also generate the biases, then the symmetry no longer holds, since the biases do not affect the gradient flow in the mainnet but they do so for the hypernet (c.f. Equation 4). Nevertheless, we can initialize G so that it helps hyperfan-out init preserve activation variance on the forward pass as much as possible (keeping the assumption that Var(x^j) = 1 as before). We summarize the variance formulae for hyperfan-in and hyperfan-out init in Table 1; in both cases we initialize h and g with fan-in init, and for convolutional layers we have to further divide Var(H^{ij}_k) by the size of the receptive field. Given a variance from Table 1, the weights can be sampled with uniform init, X ∼ U(−√(3 Var(X)), √(3 Var(X))), or normal init, X ∼ N(0, Var(X)). It is not uncommon to re-use the same hypernet to generate different parts of the mainnet, as was originally done in the seminal hypernetwork work; we discuss this case in more detail in Appendix Section A. We evaluated our proposed methods on four sets of experiments involving different use cases of hypernetworks: feedforward networks, continual learning, convolutional networks, and Bayesian neural networks. In all cases, we optimize with vanilla SGD and sample from the uniform distribution according to the variance formula given by the init method. More experimental details can be found in Appendix Section B. As an illustrative first experiment, we train a feedforward network with five hidden layers (500 hidden units), a hyperbolic tangent activation function, and a softmax output layer on MNIST across four different settings: a classical network with Xavier init, a hypernet with Xavier init that generates the weights of the mainnet, a hypernet with hyperfan-in init that generates the weights of the mainnet, and a hypernet with hyperfan-out init that generates the weights of the mainnet. The use of the hyperfan init methods on a hypernetwork reproduces mainnet weights similar to those that have been trained from Xavier init on a classical network, while the use of Xavier init on a hypernetwork causes exploding activations right at the beginning of training (see Figure 1). Observe in Figure 2 that when the hypernetwork is initialized in the proper scale, the magnitude of the generated weights stabilizes quickly. This in turn leads to a more stable training regime, as seen in Figure 3. More visualizations of the activations and gradients of both the mainnet and the hypernet can be viewed in Appendix Section B.1. Qualitatively similar observations were made when we replaced the activation function with ReLU and Xavier with Kaiming init, with Kaiming init leading to even bigger activations at initialization. Suppose now the hypernet generates both the weights and biases of the mainnet instead of just the weights. We found that this architectural change leads the hyperfan init methods to take more time (but still less than Xavier init) to generate stable mainnet weights (c.f. Figure 25 in the Appendix). Continual learning solves the problem of learning tasks in sequence without forgetting prior tasks. Recent work used a hypernetwork to learn embeddings for each task as a way to efficiently regularize the training process to prevent catastrophic forgetting. We compare different initialization schemes on their hypernetwork implementation, which generates the weights and biases of a ReLU mainnet with two hidden layers to solve a sequence of three regression tasks.
In Figure 4, we plot the training loss averaged over 15 different runs, with the shaded area showing the standard error. We observe that the hyperfan methods produce smaller training losses at initialization and during training, eventually converging to a smaller loss for each task. Prior work applied a hypernetwork to a convolutional network for image classification on CIFAR-10. We note that our initialization methods do not handle residual connections, which were present in their chosen mainnet architecture and are an important topic for future study. Instead, we implemented their hypernetwork architecture on a mainnet with the All Convolutional Net architecture, which is composed of convolutional layers and ReLU activation functions. After searching through a dense grid of learning rates, we failed to enable the fan-in version of Kaiming init to train even with very small learning rates. The fan-out version managed to begin delayed training, starting from around epoch 270 (see Figure 5). By contrast, both hyperfan-in and hyperfan-out init led to successful training immediately. This shows that a good init can make it possible to successfully train models that would have otherwise been unamenable to training with a bad init. In prior work, it was noticed that even with the use of batch normalization in the mainnet, classical initialization approaches still led to diverging losses (due to exploding activations, c.f. Section 3). We observe similar results in our experiment (see Figure 6): the fan-in version of Kaiming init, which is the default initialization in popular deep learning libraries like PyTorch and Chainer, resulted in substantially higher initial losses and led to slower training than the hyperfan methods. We found that the observation still stands even when the last layer of the mainnet is not generated by the hypernet. This shows that while batch normalization helps, it is not the solution for a bad init that causes exploding activations. Our approach solves this problem in a principled way, and is preferable to the trial-and-error based heuristics that earlier work had to resort to in order to train their models. Surprisingly, the fan-out version of Kaiming init led to similar results as the hyperfan methods, suggesting that batch normalization might be sufficient to correct bad initializations that result in vanishing activations. That being said, hypernet practitioners should not expect batch normalization to be the panacea for problems caused by bad initialization, especially in memory-constrained scenarios. In a Bayesian neural network application (especially in hypernet architectures without relaxed weight-sharing), the blowup in the number of parameters limits the use of big batch sizes, which is essential to the performance of batch normalization. For example, in this experiment, our hypernet model requires 32 times as many parameters as a classical MobileNet. To the best of our knowledge, the interaction between batch normalization and initialization is not well understood, even in the classical case, and thus our findings prompt an interesting direction for future research. In all our experiments, hyperfan-in and hyperfan-out both lead to successful hypernetwork training with SGD. We did not find a good reason to prefer one over the other (similar to the original observation in the classical case for fan-in and fan-out init). For a long time, the promise of deep nets to learn rich representations of the world was left unfulfilled due to the inability to train these models.
The discovery of greedy layer-wise pre-training and, later, Xavier and Kaiming init as weight initialization strategies to enable such training was a pivotal achievement that kickstarted the deep learning revolution. This underscores the importance of model initialization as a fundamental step in learning complex representations. In this work, we developed the first principled weight initialization methods for hypernetworks, a rapidly growing branch of meta-learning. We hope our work will spur momentum towards the development of principled techniques for building and training hypernetworks, and eventually lead to significant progress in learning meta representations. Other, non-hypernetwork methods of neural network generation can also be improved by considering whether their generated weights result in exploding activations and, if so, how to avoid that. A.1 RE-USING THE SAME HYPERNET FOR MAINNET WEIGHTS OF THE SAME SIZE. For model compression or weight-sharing purposes, different parts of the mainnet might be generated by the same hypernet function. This will cause some assumptions of independence in our analysis to be invalid. Consider the example of the same hypernet being used to generate multiple different mainnet weight layers of the same size, i.e. the same output layer (H, β) produces W[t] for several layers t. The relaxation of some of these independence assumptions does not always prove to be a big problem in practice, because the correlations introduced by repeated use of H can be minimized with the use of flat distributions like the uniform distribution. It can even be helpful, since the re-use of the same hypernet for different layers causes the gradient flowing through the hypernet output layer to be the sum of the gradients coming from the weights of these layers, thus combating the shrinking effect. Similar reasoning applies if the same hypernet was used to generate differently sized subsets of weights in the mainnet. However, we encourage avoiding this kind of hypernet architecture design if not otherwise essential, since it will complicate the initialization formulae listed in Table 1. A.2 THE ORIGINAL HYPERNETWORK ARCHITECTURE. The original two-layer hypernet generated weight chunks of size (K, n, n) for a main convolutional network, where K = 16 was found to be the highest common factor among the sizes of the mainnet layers, and n^2 = 9 was the size of the receptive field. We simplify the presentation by writing i for i_t, j for j_t, k for k_{t,m}, and l for l_{t,m}. Because the output layer (H, β) in the hypernet was re-used to generate mainnet weight matrices of different sizes (i.e. in general, d_{i_t} ≠ d_{i_{t+1}} and d_{j_t} ≠ d_{j_{t+1}}), G effectively becomes the output layer that we want to consider for hyperfan-in and hyperfan-out initialization. Hence, to achieve fan-in variance in the mainnet weights W[t], we apply the corresponding hyperfan-in formula to G, and analogously, to achieve fan-out variance in the mainnet weights W[t], we apply the corresponding hyperfan-out formula to G. The networks were trained on MNIST for 30 epochs with batch size 10 using a learning rate of 0.0005 for the hypernets and 0.01 for the classical network. The hypernets had one linear layer with embeddings of size 50, and the different hidden layers in the mainnet were all generated by the same hypernet output layer with a different embedding, which was randomly sampled from U(−√3, √3) and fixed. We use the mean cross entropy loss for training, but the summed cross entropy loss for testing. We show activation and gradient plots for two cases: (i) the hypernet generates only the weights of the mainnet, and (ii) the hypernet generates both the weights and biases of the mainnet.
(i) covers Figures 3, 1, 7, 8, 9, 10, 11, 12, 2, 13, 14, 15, and 16. (ii) covers Figures 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, and 29. The activations and gradients in our plots were calculated by averaging across a fixed held-out set of 300 examples drawn randomly from the test set. In Figures 1, 8, 9, 11, 12, 13, 14, 16, 18, 20, 21, 23, 24, 26, 27, and 29, the y axis shows the number of activations/gradients, while the x axis shows the value of the activations/gradients. The value of activations/gradients from the hypernet output layer correspond to the value of mainnet weights. In Figures 2, 7, 10, 15, 19, 22, 25, and 28, the y axis shows the mean value of the activations/gradients, while each increment on the x axis corresponds to a measurement that was taken every 1000 training batches, with the bars denoting one standard deviation away from the mean. proposed to use the harmonic mean of the two different initialization formulae derived from the forward and backward pass. commented that either version suffices for convergence, and that it does not really matter given that the difference between the two will be a depth-independent factor. We experimented with the harmonic, geometric, and arithmetic means of the two different formulae in both the classical and the hypernet case. There was no indication of any significant benefit from taking any of the three different means in both cases. Thus, we confirm and concur with's original observation that either the fan-in or the fan-out version suffices. The mainnet is a feedforward network with two hidden layers (10 hidden units) and the ReLU activation function. The weights and biases of the mainnet are generated from a hypernet with two hidden layers (10 hidden units) and trainable embeddings of size 2 sampled from U(− √ 3, √ 3). We keep the same continual learning hyperparameter β output value of 0.005 and pick the best learning rate for each initialization method from {10 −2, 10 −3, 10 −4, 10 −5}. Notably, Kaiming (fan-in) could only be trained from learning rate 10 −5, with losses diverging soon after initialization using the other learning rates. Each task was trained for 6000 training iterations using batch size 32, with Figure 4 plotted from losses measured at every 100 iterations. The networks were trained on CIFAR-10 for 500 epochs starting with an initial learning rate of 0.0005 using batch size 100, and decaying with γ = 0.1 at epochs 350 and 450. The hypernet is composed of two layers (50 hidden units) with separate embeddings and separate input layers but shared output layers. The weight generation happens in blocks of where K = 96 is the highest common factor between the different sizes of the convolutional layers in the mainnet and n = 3 is the size of the convolutional filters (see Appendix Section A.2 for a more detailed explanation on the hypernet architecture). The embeddings are size 50 and fixed after random sampling from U(− √ 3, √ 3). We use the mean cross entropy loss for training, but the summed cross entropy loss for testing. showed that a Bayesian neural network can be developed by using a hypernetwork to express a prior distribution without substantial changes to the vanilla hypernetwork setting. Their methods simply require putting L 2 -regularization on the model parameters and sampling from stochastic embeddings. 
We trained a linear hypernet to generate the weights of a MobileNet mainnet architecture (excluding the batch normalization layers), using the block-wise sampling strategy described in , with a factor of 0.0005 for the L 2 -regularization. We initialize fixed embeddings of size 32 sampled from U(− √ 3, √ 3), and sample additive stochastic noise coming from U(−0.1, 0.1) at the beginning of every mini-batch training. The training was done on ImageNet with batch size 256 and learning rate 0.1 for 25 epochs, or equivalently, 125125 iterations. The testing was done with 10 Monte Carlo samples. We omit the test loss plots due to the computational expense of doing 10 forward passes after every mini-batch instead of every epoch.
The first principled weight initialization method for hypernetworks
397
scitldr
For bidirectional joint image-text modeling, we develop variational hetero-encoder (VHE) randomized generative adversarial network (GAN), a versatile deep generative model that integrates a probabilistic text decoder, probabilistic image encoder, and GAN into a coherent end-to-end multi-modality learning framework. VHE randomized GAN (VHE-GAN) encodes an image to decode its associated text, and feeds the variational posterior as the source of randomness into the GAN image generator. We plug three off-the-shelf modules, including a deep topic model, a ladder-structured image encoder, and StackGAN++, into VHE-GAN, which already achieves competitive performance. This further motivates the development of VHE-raster-scan-GAN that generates photo-realistic images in not only a multi-scale low-to-high-resolution manner, but also a hierarchical-semantic coarse-to-fine fashion. By capturing and relating hierarchical semantic and visual concepts with end-to-end training, VHE-raster-scan-GAN achieves state-of-the-art performance in a wide variety of image-text multi-modality learning and generation tasks.
A novel Bayesian deep learning framework that captures and relates hierarchical semantic and visual concepts, performing well on a variety of image and text modeling and generation tasks.
398
scitldr
Current classical planners are very successful in finding (non-optimal) plans, even for large planning instances. To do so, most planners rely on a preprocessing stage that computes a grounded representation of the task. Whenever the grounded task is too big to be generated (i.e., whenever this preprocess fails) the instance cannot even be tackled by the actual planner. To address this issue, we introduce a partial grounding approach that grounds only a projection of the task, when complete grounding is not feasible. We propose a guiding mechanism that, for a given domain, identifies the parts of a task that are relevant to find a plan by using off-the-shelf machine learning methods. Our empirical evaluation attests that the approach is capable of solving planning instances that are too big to be fully grounded. Given a model of the environment, classical planning attempts to find a sequence of actions that lead from an initial state to a state that satisfies a set of goals. Planning models are typically described in the Planning Domain Definition Language (PDDL) BID16 ) in terms of predicates and action schemas with arguments that can be instantiated with a set of objects. However, most planners work on a grounded representation without free variables, like STRIPS BID4 or FDR BID1. Grounding is the process of translating a task in the lifted (PDDL) representation to a grounded representation. It requires to compute all valid instantiations that assign objects to the arguments of predicates and action parameters, even though only a small fraction of these instantiations might be necessary to solve the task. The size of the grounded task is exponential in the number of arguments in predicates and action schemas. Although this is constant for all tasks of a given domain, and grounding can be done in polynomial time, it may still be prohibitive when the number of objects is large and/or some predicates or actions have many parameters. The success of planners like FF BID9 or LAMA BID24 in finding plans for large planning tasks is undeniable. However, since most planners rely on grounding for solving a task, they fail without even starting the search for a plan whenever an instance cannot be grounded, making grounding a bottleneck for the success of satisficing planners. Grounding is particularly challenging in open multi-task environments, where the planning task is automatically generated with all available objects even if only a few of them are relevant to achieve the goals. For example, in robotics, the planning task may contain all objects with which the robot may interact even if they are not needed BID13 ). In network-security environments, like the one modeled in the Caldera domain BID17, the planning task may contain all details about the network. However, to the best of our knowledge, no method exists that attempts to focus the grounding on relevant parts of the task. We propose partial grounding, where, instead of instantiating the full planning task, we focus on the parts that are required to find a plan. The approach is sound -if a plan is found for the partially grounded task then it is a valid plan for the original task -but incomplete -the partially grounded task will only be solvable if the operators in at least one plan have been grounded. To do so, we give priority to operators that we deem more relevant to achieve the goal. 
Inspired by relational learning approaches to domain control knowledge (e.g., BID31, BID3, BID11), we use machine learning methods to predict the probability that a given operator belongs to a plan. We learn from small training instances, and generalize to larger ones by using relational features in standard classification and regression algorithms (e.g., BID12). As an alternative model, we also experiment with relational trees to learn the probabilities BID18.Empirical show that our learning models can predict which operators are relevant with high accuracy in several domains, leading to a very strong reduction of task size when grounding and solving huge tasks. Throughout the paper, we assume for simplicity that tasks are specified in the STRIPS subset of PDDL BID4. Our algorithms and implementation, however, are directly applicable to a larger subset of PDDL containing ADL expressions BID20.A lifted (PDDL) task Π PDDL is a tuple (P, A, Σ C, Σ O, I, G) where P is a set of (first-order) atomic predicates, A is a set of action schemas, DISPLAYFORM0 is a non-empty set of objects consisting of constants Σ C, and non-constant objects Σ O, I is the initial state, and G is the goal. Predicates and action schemas have parameters. We denote individual parameters with x, y, z and sets of parameters with X, Y, Z. An action schema a[X] is a triple (pre(a), add(a), del(a)), consisting of preconditions, add list, and delete list, all of which are subsets of P, possibly pre-instantiated with objects from Σ C, such that X is the set of variables that appear in pre(a) ∪ add(a) ∪ del(a). I and G are subsets of P, instantiated with objects from Σ.A lifted task Π PDDL can be divided into two parts: the domain specification (P, A, Σ C) which is common to all instances of the domain, and the problem specification DISPLAYFORM1 DISPLAYFORM2 The plan is optimal if its length is minimal among all plans for Π.We define the delete-relaxation of a task Π as the task Π DISPLAYFORM3, we can compute the corresponding STRIPS task Π by instantiating the predicates and action schemas with the objects in Σ. Then, F contains a fact for each possible assignment of objects in Σ to the arguments of each predicate P [X] ∈ P, and O contains an operator for each possible assignment of objects in Σ to each action schema a[X] ∈ A. In practice, we do not enumerate all possible assignments of objects in Σ to the arguments in facts and action schemas. Instead, only those facts and operators are instantiated that are delete-relaxed reachable from the initial state BID7 ). We base our method on the grounding algorithm of Fast Downward BID6. To ground a planning task, this algorithm performs a fix-point computation similar to the computation of relaxed planning graphs BID2, where a queue is initialized with the facts in the initial state and in each iteration one element of the queue is popped and processed. If the element is a fact, then those operators of which all preconditions have already been processed (are reached) are added to the queue. If the element is an operator, all its add effects are pushed to the queue. The algorithm terminates when the queue is empty. Then, all processed facts and operators are delete-relaxed reachable from the initial state. For simplicity, the algorithm we describe here considers only STRIPS tasks but it can be adapted to support other PDDL features like negative preconditions or Algorithm 1: Partial Grounding. 
DISPLAYFORM0 conditional effects as it is done by BID7.Algorithm 1 shows details of our approach. The main difference with respect to the approach by BID7 is that the algorithm can stop before the queue is empty, and operators are instantiated in a particular order. For these two choice points we suggest an approach that aims at minimizing the size of the partially grounded task, while keeping it solvable. That said, our main focus is the operator ordering, and we only consider a simple stopping condition. Stopping condition. Typical grounding approaches terminate only when the queue is empty, meaning that all (deleterelaxed) reachable facts and operators have been grounded. In partial grounding, we allow the algorithm to stop earlier. Intuitively, this is a good idea because most planning tasks have short plans, usually in the order of at most a few hundred operators, compared to possibly millions of grounded operators. Hence, if the correct operators are selected, partial grounding can potentially stop much sooner than complete grounding. The key issue is how to decide when the probability of finding a plan using the so-far grounded operators is sufficient. Consider the following claims: 1. The grounded task is delete-relaxed solvable iff G ⊆ F. Item 1 provides a necessary condition for the task to be relaxed-solvable, so grounding should continue at least until G ⊆ F. But this is not sufficient, as it does not guarantee that a plan can be found for the non-relaxed task. Item 2 provides an obvious, but difficult to predict, condition for success. In this work, we consider only a simple stopping condition. To maximize the probability of the task being solvable, it is desirable to ground as many operators as possible. The main constraint on the number of operators to ground are the resources (time and memory) that can be spent on grounding. For that reason, one may want to continue grounding while these resources are not compromised 1. We provide a constant N op as a parameter, an estimate on the number of operators that can be grounded given the available resources, and let the algorithm continue as long as |O| ≤ N op.If not all actions are grounded, the ing grounded task is a partial representation of the PDDL input and the overall planning process of grounding and finding a plan for the grounded task is incomplete. We implemented a loop around the overall process that incrementally grounds more actions, when finding the partially grounded task unsolvable. This converges to full grounding, ing in a complete planner. Queue order. Standard grounding algorithms extract elements from the queue in an arbitrary order -since all operators are grounded, order does not matter. Our algorithm always grounds all facts that have been added to the queue, giving them preference over operators. This ensures that the effects of all grounded operators are part of the grounded task. After all facts in the queue have been processed, our algorithm picks an operator according to a heuristic criterion, which we will call the priority function. Some simple priority functions include FIFO, LIFO, or random. Since our aim is to ground all operators of a plan, the priority queue should sort operators by their probability of belonging to a plan. To estimate these probabilities, we use machine learning techniques as detailed in the next section. Additionally, one may want to increase the diversity of selected operators to avoid being misguided by a bias in the estimated probabilities. 
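As a summary of the procedure just described, the following is a schematic Python sketch of the partial grounding loop with a priority queue and the |O| ≤ N_op stopping condition; the task interface (initial state, goal, candidate operator instantiations per fact) is an assumption made for illustration and does not correspond to Fast Downward's actual API.

```python
# Schematic partial grounding: facts are always processed, operators are popped in order
# of decreasing priority, and grounding stops once max_ops operators have been grounded.
import heapq
import itertools

def partial_ground(task, priority, max_ops):
    reached = set(task.initial_state)           # facts grounded so far
    ops, seen = [], set()
    queue, counter = [], itertools.count()      # counter breaks ties in the heap

    def push_candidates(fact):
        # operators with this fact as a precondition, all of whose preconditions are reached
        for op in task.ops_with_precondition(fact):
            if op not in seen and all(p in reached for p in op.preconditions):
                seen.add(op)
                heapq.heappush(queue, (-priority(op), next(counter), op))

    for f in task.initial_state:
        push_candidates(f)

    while queue and len(ops) < max_ops:
        _, _, op = heapq.heappop(queue)          # most promising operator first
        ops.append(op)
        for f in op.add_effects:                 # ground all newly reached facts immediately
            if f not in reached:
                reached.add(f)
                push_candidates(f)

    goal_relaxed_reachable = set(task.goal) <= reached   # necessary condition (item 1 above)
    return reached, ops, goal_relaxed_reachable
```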
We consider a simple round robin (RR) criterion, which classifies all operators in the queue by the action schema they belong to, and chooses an operator from a different action schema in each iteration. RR works in combination with a priority function that is used to select which instantiation of a given action schema should be grounded next. We define a novelty criterion as a non-trivial priority function that is not based on learning, inspired by novelty pruning techniques that have successfully been applied in classical planning BID14. During search, the novelty of a state is defined as the minimum number m for which the state contains a set of facts of size m, that is not part of any previously generated state. This can be used to prune states with a novelty < k. We adapt the definition of novelty to operators in the grounding process as follows. Let Σ be the set of objects, a[X] an action schema, and O the set of already grounded operators corresponding to all instantiations of a[X]. Let σ = {(x 1, σ 1),..., (x k, σ k)} be an assignment of objects in Σ to parameters X instantiating an operator o, such that o ∈ O. Then, the novelty of o is defined as the number of assignments (x i, σ i) such that there does not exist an operator o ∈ O where x i got assigned σ i. In the grounding we will prioritize operators with a higher novelty, which are likely to generate facts that have not been grounded yet.1 While search can benefit from grounding less operators, an orthogonal pruning method that uses full information of the grounded task, can be employed at that stage (e. g. BID8). To guide the grounding process towards operators that are relevant to solve the task, we use a priority queue that gives preference to more promising operators. We use a priority function f: O → that estimates whether operators are useful or not. Ideally, we want to assign 1 to operators in an optimal plan and 0 to the rest, so that the number of grounded operators is minimal. We approximate this by assigning to each operator a number between 0 and 1 that estimates the probability that the operator belongs to an optimal plan for the task. This is challenging, however, due to lack of knowledge about the fully grounded task. We use a learning approach, training a model on small instances of a domain and using it to guide grounding in larger instances. Our training instances need to be small enough to compute the set of operators that belong to any optimal plan for the task. We do this by solving the tasks with a symbolic bidirectional breadth-first search BID29 ) and extracting all operators that belong to an optimal solution. Before grounding, the only information that we have available is the lifted task Π PDDL = (P, A, Σ C, Σ O, I, G). Our training data uses this information, consisting of tuples (I, G, Σ O, o, {0, 1}) for each operator o in a training instance, where o is assigned a value of 1 if it belongs to an optimal solution and 0 otherwise. We formulate our priority functions as a classification task, where we want to order the operators according to our confidence that they belong to the 1 class. To learn a model from this data, we need to characterize the tuple (I, G, Σ O, o) with a set of features. Since training and testing problems have different objects, these features cannot refer to specific objects in Σ O, so learning has to be done at the lifted level. 
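Looking back at the novelty criterion defined earlier in this section, a minimal sketch of how it can be computed per action schema is given below; the data layout (a per-schema record of which objects have already been used at each parameter position) is an illustrative assumption.

```python
# Operator novelty: the number of parameter positions whose assigned object has not yet
# been used at that position by any previously grounded instantiation of the same schema.
from collections import defaultdict

class NoveltyTracker:
    def __init__(self):
        self.seen = defaultdict(lambda: defaultdict(set))   # schema -> position -> objects

    def novelty(self, schema, assignment):
        used = self.seen[schema]
        return sum(1 for pos, obj in enumerate(assignment) if obj not in used[pos])

    def record(self, schema, assignment):
        for pos, obj in enumerate(assignment):
            self.seen[schema][pos].add(obj)

tracker = NoveltyTracker()
tracker.record("turn-to", ("sat1", "dirA", "dirB"))
# dirB is new at position 1 and dirC is new at position 2, so the novelty is 2:
print(tracker.novelty("turn-to", ("sat1", "dirB", "dirC")))
```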
We propose relational rules that connect the objects that have instantiated the action schema to the training sample (I, G, Σ O) to capture meaningful properties of an operator. Because different action schemas have different (numbers of) arguments, the features that characterize them will necessarily be different. Therefore, we train a separate model for each action schema a[X] ∈ A. All these models, however, predict the probability of an operator being in an optimal plan, so the values from two different models are still comparable. We considered two approaches to conduct the learning: inductive relational trees and classification/regression with relational features. Inductive Relational Learning Trees. Inductive Logic Programming (ILP) BID18 ) is a wellknown machine learning approach suitable when the training instances are described in relational logic. ILP has been used, e.g., to learn domain control knowledge for planning (de la BID3 BID11 . We use the Aleph tool BID27 to learn a tree where each inner node is a predicate connecting the parameters of a to-be-grounded operator to the facts in the initial state or goal, to objects referred to in a higher node in the tree, or to a constant. The nodes are evaluated by checking if there exists a predicate instantiated with the given objects in the initial state or goal. A wildcard symbol (" ") indicates that we do not require a particular object, but that any object instantiating the predicate at this position is fine. In FIG0, the left child corresponds to this check evaluating to false and the right child to true. For a given action, the tree is evaluated by checking if there exists an assignment to the free variables in a path from the root to a leaf node, such that all nodes on the path evaluate to the correct truth value. We then take the real value in the leaf node as an estimate of the probability that the operator is part of an optimal plan. This evaluation is akin to a CSP problem, so we need to keep the depth of the trees at bay to have an efficient priority function. FIG0 shows the tree learned for the turn-to action schema in Satellite. In this domain, the goal is to take pictures in different modes. Several satellites are available, each with several instruments that support some of the modes. The actions are switching the instruments on and off, calibrating them, turning the satellite into different directions, and taking images. The turn-to action changes the direction satellite? s is looking at. In this case, the learned tree considers that the operators turning to and from relevant directions are more likely part of an optimal plan than turning away from the goal direction. More concretely, if for a tobe-instantiated operator turn-to(s, from, to) with objects s, from, and to, there is no goal have-image(to,), i. e., taking an image in direction to using any mode, then the operator is deemed not useful by the trained model, it only has a probability of 1% of belonging to an optimal plan. In the opposite case, and if there is a have-image goal in the from direction, but no goal pointing(, from), then the operator is expected to be most useful, with a probability of 38% of being in an optimal plan. This is relevant information to predict the usefulness of turn-to. However, there is some margin of improvement since the initial state is ignored. An alternative is to use relational rules as features for standard classification and regression algorithms BID12. 
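To make this alternative concrete (the rule features themselves are described in more detail below), here is a minimal sketch of how such a priority function can be obtained with scikit-learn: one model per action schema, trained on binary rule evaluations, with the predicted probability of the 1 class used as the operator's priority during grounding. The toy feature vectors are made up for illustration, and the rule-evaluation code is assumed to exist.

```python
# One model per action schema: binary rule evaluations as features, membership in an
# optimal plan as the target; the class-1 probability serves as the grounding priority.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data for a single schema: X[i] = rule evaluations of grounded operator i,
# y[i] = 1 iff that operator appears in some optimal plan of its (small) training instance.
X_train = np.array([[1, 0, 1], [0, 0, 1], [1, 1, 0], [0, 1, 0], [1, 0, 0], [0, 1, 1]])
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)       # the LOGR variant used in the text

def priority(rule_features):
    """Estimated probability that the operator belongs to an optimal plan."""
    x = np.asarray(rule_features).reshape(1, -1)
    return model.predict_proba(x)[0, 1]

print(priority([1, 0, 1]))   # high priority: grounded early
print(priority([0, 1, 0]))   # low priority: grounded late or not at all
```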
Our features are relational rules where the head is an action schema, and the body consists of a set of goal or initial-state predicates, partially instantiated with the arguments of the action schema in the head of the rule, constant objects in Σ C, or free variables used in other predicates in the rule. This is very similar to a path from root to leaf in the aforementioned relational trees. We generate rules by considering all possible predicates and parameter instantiations with two restrictions. First, to guarantee that the rule takes different values for different instantiations of the action schema, one of the arguments in the first predicate in the body of the rule must be bound to a parameter of the action schema. Second, at least one argument of each predicate after the first one, must be bound to a free variable used in a previously used predicate. This aims at reducing the number of features by avoiding redundant rules that can be described as a conjunction of simpler rules. We assume that, if the conjunction of two rules is relevant for the classification task, the machine learning algorithms will be able to infer this. Most of the generated rules do not provide useful information to predict whether an operator will be part of an optimal plan or not. This is because we brute-force generate all possible rules, including many that do not capture any useful properties. Therefore, it is important to select a subset of relevant features. We do this filtering in two steps. First, we remove all rules that evaluate to the same value in all training instances (e.g., rules that contain goal:predicate in the body will never evaluate to true if predicate is never part of the goal description in that domain). Then, we use attribute selection techniques in order to filter out those features that are not helpful to predict whether the operator is part of an optimal plan. As an example, the most relevant rule generated for the turn-to schema is: This can be read as: "do we have to take images in directions? to and?from in modes that are supported by one of the instruments on board?". This rule surprisingly accurately describes a scenario where turn-to is relevant (and can be complemented with other rules to capture different cases).Given a planning task and an operator, a rule is evaluated by replacing the arguments in the head of the rule by the objects that are used to instantiate the operator and checking if there exists an assignment to the free variables such that the corresponding facts are present in the initial state and goal of the task. Doing so, we generate a feature vector for each grounded action from the training instances with a binary feature for every rule indicating whether the rule evaluates to true for that operator or not. This in a training set where for each operator we get a vector of boolean features (one feature per rule), together with a to-be-predicted class that is 1 if the operator is part of an optimal plan for the task, and 0 if not. On this training set, we can use either classification or regression methods to map each operator to a real number. With classification methods we use the confidence that the model has in the operator belonging to the positive class. In regression, the model directly tries to minimize the error by assigning values to 1 for operators in an optimal plan and 0 to others. It is important to note that it is possible that there are two training examples with the same feature vector, but with different values in the target. 
In these cases, we merge all training examples with the same feature vector and replace them with a single one that belongs to the 1 class if any of the examples did (we also tried using the average, but this resulted in slightly worse results in most cases). During grounding, for every operator that is inserted in the queue, we evaluate all rules and call the model to get its priority estimate. To speed up rule evaluation, we precompute, before grounding, all possible assignments to the arguments of the action schema that satisfy the rule. The computational cost of doing this is exponential in the number of free variables, but it was typically negligible for the rules used by our models. We evaluate the relational trees in a similar way. For the evaluation of our partial grounding approach, we adapted the implementation of the "translator" component of the Fast Downward planning system (FD) BID6. The translator parses the given input PDDL files and outputs a fully grounded task in finite-domain representation (FDR) (BID1; BID7) that corresponds to the PDDL input. Our changes are minimally invasive, only changing the ordering in which actions are handled and the termination condition, as indicated in Algorithm 1. Therefore, none of the changes affect the correctness of the translator, i.e., the generated grounded planning task will always be a proper FDR task. The changes do not affect the performance too much either, except when using a computationally expensive priority function. Experimental Setup. For the evaluation of our technique, we require domains for which instance generators are available to generate a set of diverse instances small enough for training, and for which the size of the grounded instances grows at least cubically with respect to the parameters of the generator, so that we have large instances that are hard to fully ground for evaluation. We picked four domains that were part of the learning track of the International Planning Competition (IPC) 2011 (Blocksworld, Depots, Satellite, and TPP), as well as two domains of the deterministic track of IPC'18 (Agricola and Caldera). For all domains, we used the deterministic-track IPC instances and a set of 25 large instances that we generated ourselves for the experiments. For the training of the models, we used between 40 and 250 small instances, to get enough training data for each action schema. Since the number of grounded actions per schema varies significantly across domains, we individually adapted the number of training instances. To generate the large instances, we started at roughly the same size as the largest IPC instances, scaling the parameters of the generator linearly when going beyond that. As an example, in Satellite, the biggest IPC instance has around 10 satellites and 20 instruments, which is the size of our smallest instances; in the largest instances that we generated, there are up to 15 satellites and 60 instruments. In Blocksworld, where IPC instances only scale up to 17 blocks, we scale in a different way, starting at 75 blocks and going up to 100, which can still easily be solved by our techniques. Regarding the domains, we used the typed domain encoding of Satellite from the learning track, which simplifies rule generation but does not semantically change the domain. In Blocksworld, we use the "no-arm" encoding, which shows a cubic explosion in the grounding, in contrast to the "arm" encoding, where the size of the grounded task is only quadratic in the PDDL description.
Besides the static queue orderings FIFO and LIFO and the novelty-based method, we experiment with learning-based approaches using classification and regression models. While the former exemplify what is possible without learning, the latter methods aim at grounding only those actions that belong to a plan for a given task. In all cases, we combine the methods with the round robin queue setup (RR). Learning Framework. We report results for a logistic regression classifier (LOGR), kernel ridge regression (KRN), linear regression (LINR), and a support vector machine regressor (SVR). While LINR and LOGR learn linear functions to combine the features, differing mostly in the loss function and underlying model that is used, KRN and SVR are capable of learning non-linear functions. We expect non-linear functions to be useful to combine features, which cannot be done with linear functions. We also report the results of the decision trees learned by Aleph. To implement the machine learning algorithms, we use the scikit Python package BID21, which nicely connects to the translator in FD. For feature selection, i.e., to select which rules are useful to predict the probability of an operator to be in a plan, we used the properties included in the trained models. For each feature (rule) contained in the feature vector, the model returns a weight according to its relevance to discriminate the target vector. After experimenting with multiple different models, we decided to use a decision tree regressor to predict rule usefulness for all trained models. We evaluated our models in isolation on a set of validation instances that are distinct from both our training and testing set, and small enough to compute the set of operators that are part of any optimal plan. FIG2 shows the outcome of the priority function learned by LOGR in Blocksworld. (Table 1: Number of instances solved by the baseline with full grounding (Base), and incremental grounding with static action orderings (FIFO, LIFO, random (RND)), novelty-based ordering, and several learning models (see text); "RR" indicates that we use a separate priority queue for each action schema, taking turns over the schemas; best coverage highlighted in bold face.) The bars indicate the number of operators across all validation instances that got a priority in a given interval, highlighting operators from optimal plans in a different color. The plots nicely illustrate that the priority function works very well for the action schemas move-t-to-b and move-b-to-t, where it is able to distinguish "optimal" from "non-optimal" operators. Another important observation is that the total number of grounded move-b-to-b actions is much higher than that of the other two action schemas. Projecting these observations to the grounding process, we expect the model to work well when used in a single priority queue, since it will prioritize move-t-to-b and move-b-to-t (which are the only ones needed to solve any Blocksworld instance) over move-b-to-b (which is only needed to optimally solve a task). On the validation set, grounding all operators with a priority of roughly > 0.6 suffices to solve the tasks, pruning all move-b-to-b operators and most non-optimal ones of the other schemas. RR, in contrast, will ground an equal number of all action schemas, including many unnecessary operators.
These conjectures are well-supported by the plots in Figure 3.When working with machine learning techniques, there is always the risk of overfitting. In our case the on the training set are very similar to those on the validation set shown in FIG2, suggesting that overfitting is not an issue in our setup. The in other domains are similar. Incremental Grounding. We use the incremental approach, where the first iteration grounds a given task until the goal is found to be relaxed-reachable. The left half of Figure 3 shows detailed information on how many operators need to be grounded until this is achieved for different priority functions. We discuss details later. In case this first iteration fails, i. e., the partial task is not solvable, we set a minimum number of operators to be grounded in the next iteration by using an increment of 10 000 operators. This strategy does not aim to maximize coverage but rather to find out what is the minimum number of operators that need to be grounded to solve a task for each priority function (with a granularity of 10 000 operators). The number of operators that were necessary to actually solve a given instance is illustrated in the right half of Figure 3.For all configurations, after grounding, we run the first iteration of the LAMA planner BID23, a good standard configuration for satisficing planning that is well integrated in FD. We also use LAMA's first iteration as a baseline on a fully grounded task, with runtime and memory limits for the entire process of 30 minutes and 4GB. All other methods perform incremental grounding using their respective priority function. We allowed for a total of 5 hours and 4GB for the incremental grounding, while restricting the search part to only 10 minutes per iteration to keep the overall runtime of the experiments manageable. We show coverage, i. e., number of instances solved, in Table 1, with the time and memory limits mentioned in the previous subsection. The left part of the table considers instances as solved when the overall incremental grounding process (including finding a plan) finished within 30 min. In the right part, we approximate the that could be achieved with a perfect stopping condition by considering an instance as solved if the last iteration, i. e., the successful grounding and search, finished within 30 min. The baseline (Base) can still fully ground most instances except in Caldera and TPP, but fails to solve most of the large instances with up to 9 million operators. We scaled instances in this way so that a comparison of the number of grounded operators to the baseline is possible; further scaling would make full grounding impossible. The table nicely shows that the incremental grounding approach, where several iterations of partial grounding and search are performed (remember that we only allow 10min for the search), significantly outperforms the baseline, even when considering an overall time limit of 30min. In fact, all instances in Blocksworld can be solved in less than 10s by LOGR. This illustrates the power of our approach when the learned model captures the important features of a domain. The static orderings typically perform worse than the baseline, only the novelty-based ordering can solve more instances in Blocksworld, and in Caldera when using RR.The plots in Figure 3 shed further light on the number of operators when (leftmost two columns) the goal is relaxed reachable in the first iteration and (rightmost two columns) the number of operators needed to actually solve the task. 
Each data point corresponds to a planning instance, with the number of ground actions of a fully grounded task on the xaxis. The y-axis shows the number of grounded actions for several priority functions, including FIFO (LIFO in TPP), novelty, the learned model that has the highest reduction on the number of grounded actions, and Aleph. In general, the models capture the features of most domains quite accurately, leading to a substantial reduction in the size of the grounded task, and still being able to find a solution. The plots show that our models obtain a very strong reduction of the number of operators in the partially grounded task in Agricola, Blocksworld, and Caldera; some reduction (one order of magnitude) in Depots, and Satellite, and a small reduction in TPP. In terms of the size of the partially grounded tasks, different learning models perform best in different domains, and there is not a clear winner. In comparison, the baselines FIFO, LIFO, and Random do not significantly reduce the size of the grounded task in most cases, with a few exceptions like LIFO in TPP and FIFO in Caldera. The novelty criterion is often the best method among those without learning. Grounding a delete-relaxed reachable task with less operators is often beneficial, but may be detrimental for the coverage if the task is unsolvable, as happens for the Novelty method in Agricola or the LIFO method in TPP. This also explains why the learning models with highest reductions in some domains (e.g. LOGR in Agricola) are not always the same as the ones with highest coverage. The RR queue mechanism often grounds more operators before reaching the delete-relaxed goal but this makes the first iteration solvable more often leading to more stable . The exception is Aleph, where RR has the opposite effect, making the partially grounded tasks unsolvable. Some approaches in the literature try to alleviate the grounding problem, e. g. by avoiding grounding facts and operators unreachable from the initial state BID7 ), reformulating the PDDL description by splitting action schemas with many parameters BID0, or using symmetries to avoid redundant work during the grounding process BID26.Lifted planning approaches that skip grounding entirely BID22 have lost popularity due to the advantages of grounding to speed-up the search and allow for more informative heuristics which are not easy to compute in a lifted level. BID25 adapted the delete-relaxation heuristic BID9 to the lifted level. This is related to our partial grounding approach since their relaxed plan extraction mechanism can be used to obtain a grounded task where the goal is relaxed reachable, and it could be used to enhance the novelty and learning priority functions that we use here. There are many approaches to eliminate irrelevant facts and operators from grounded tasks BID19 BID10 BID5 BID28. The closest to our approach is under-approximation refinement BID8, which also performs search with a subset of operators. However, all these techniques use information from the fully grounded representation to decide on the subset of relevant operators, so are not directly applicable in our setting. The of our learning models (see FIG2 show that applying learning to identify irrelevant operators is a promising avenue for future research. Recently, BID30 introduced a machine learning approach to learn heuristic functions for specific domains. 
This is similar to our work, in the sense that a heuristic estimate is learned, though for states in the search, not for actions in the grounding. Furthermore, the authors used neural networks instead of our, more classical, models. In this paper, we proposed an approach to partial grounding of planning tasks, to deal with tasks that cannot be fully grounded under the available time and memory resources. Our algorithm heuristically guides the grounding process, giving preference to operators that are deemed most relevant for solving the task. To determine which operators are relevant, we train different machine learning models using optimal plans from small instances of the same domain. We consider two approaches: a direct application of relational decision trees, and the use of relational features with standard classification and regression algorithms. The empirical results show the effectiveness of the approach. In most domains, the learned models are able to identify which operators are relevant with high accuracy, helping to reduce the number of grounded operators by several orders of magnitude and greatly increasing coverage on large instances. Figure 3: The scatter plots show the number of operators of a fully grounded task on the x-axis. The y-axis shows the number of operators that are needed to make the goal reachable in the grounding (leftmost two columns), and the number of operators that are needed to solve the task (rightmost two columns), for several priority functions.
This paper introduces partial grounding to tackle the problem that arises when the full grounding process, i.e., the translation of a PDDL input task into a ground representation like STRIPS, is infeasible due to memory or time constraints.
399
scitldr